
Keyword Search Result

[Keyword] CRI (505 hits)

301-320 hits (of 505)

  • Background-Adjusted Weber-Fechner Fraction Considering Crispening Effect

    Dong-Ha LEE  Chan-Ho HAN  Kyu-Ik SOHNG

    LETTER, Vol. E88-A, No. 6, pp. 1529-1532

    The recognition limit of luminance difference in the human visual system (HVS) has not been studied systematically. In this paper, the surround-adapted Weber-Fechner fraction is calculated based on the crispening effect. It is found that the surround-adapted fractions are reduced to about 1/3 of the traditional Weber-Fechner fractions, which agrees reasonably with Breitmeyer's experiments. The result can serve as a guide for digital display design when the bit depth of the digital signal must be chosen with the limit of distinguishable brightness levels in mind, and as an inspection tool in display manufacturing for brightness smear, defects, and so on.
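
    The fraction itself is simple to state. Below is a minimal illustrative sketch of the Weber-Fechner fraction (just-noticeable luminance difference divided by background luminance) and of the roughly 1/3 reduction reported above; the numerical values are hypothetical examples, not data from the paper, and the crispening-based adaptation itself is not modelled.

```python
# Illustrative sketch only: the Weber-Fechner fraction is the just-noticeable
# luminance difference divided by the background luminance. The factor 1/3
# merely mirrors the reduction reported in the abstract; the paper's
# crispening-based surround adaptation is not modelled here.

def weber_fechner_fraction(delta_l: float, background_l: float) -> float:
    """Return the Weber-Fechner fraction dL/L."""
    return delta_l / background_l

# Hypothetical luminance values in cd/m^2 (not taken from the paper).
traditional = weber_fechner_fraction(delta_l=2.0, background_l=100.0)
surround_adapted = traditional / 3.0

print(f"traditional fraction:      {traditional:.4f}")
print(f"surround-adapted fraction: {surround_adapted:.4f}")
```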

  • A Protocol for Peer-to-Peer Multi-Player Networked Virtual Ball Game

    Tatsuhiro YONEKURA  Yoshihiro KAWANO

    PAPER, Vol. E88-D, No. 5, pp. 926-937

    This paper reports our study of how to maintain consistency of states in a ball-game-type Distributed Virtual Environment (DVE) with lag, under a peer-to-peer (P2P) architecture. That is, we study how to reduce, in real time, the differences in state between participating terminals in a virtual ball game caused by transmission lag or update intervals, and how to control shared objects in real time in a server-less network architecture. Specifically, a priority field called Allocated Topographical Zone (AtoZ) is used for P2P DVEs. With this function, each terminal can determine which avatar holds the ownership of a shared object by mutually calculating the state of the local avatar as predicted by the remote terminals. The ownership region determined by AtoZ allows an avatar to access and control an object dominantly, and its geometry changes dynamically depending on the relative arrangement of the object and the avatars. Moreover, considering the critical case, defined as inconsistent phenomena between peers caused by network latency, a stricter ownership determination algorithm called the dead zone is introduced. Combining these protocols yields a robust and effective scheme for a virtual ball game. As an example application, a real-time networked doubles air-hockey game is implemented to evaluate the influence of these protocols on interactivity and consistency.
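
    As a rough illustration of geometry-based ownership determination with a hysteresis margin, the toy sketch below assigns ownership of the shared puck to the nearest predicted avatar and only transfers it when another avatar is closer by a margin. This is a deliberately simplified stand-in, not the paper's AtoZ or dead-zone algorithm; all names and the margin value are hypothetical.

```python
# Toy stand-in for geometry-based ownership determination (NOT the paper's AtoZ
# or dead-zone algorithm): the avatar closest to the shared object owns it, and
# ownership only transfers when a challenger is closer by a hysteresis margin,
# so small prediction differences between peers do not cause rapid flip-flops.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_owner(object_pos, predicted_avatars, current_owner, margin=0.5):
    """predicted_avatars: dict of avatar id -> predicted (x, y) position."""
    challenger = min(predicted_avatars,
                     key=lambda k: distance(predicted_avatars[k], object_pos))
    if current_owner is None:
        return challenger
    gap = distance(predicted_avatars[current_owner], object_pos) - \
          distance(predicted_avatars[challenger], object_pos)
    return challenger if gap > margin else current_owner

avatars = {"local": (1.0, 2.0), "remote": (4.0, 1.0)}
owner = update_owner(object_pos=(2.0, 2.0), predicted_avatars=avatars, current_owner=None)
print("owner:", owner)
```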

  • Efficient Wavelet-Based Image Retrieval Using Coarse Segmentation and Fine Region Feature Extraction

    Yongqing SUN  Shinji OZAWA

    PAPER-Image Processing and Video Processing, Vol. E88-D, No. 5, pp. 1021-1030

    Semantic image segmentation and appropriate region content description are crucial issues for region-based image retrieval (RBIR). In this paper, a novel region-based image retrieval method is proposed which performs fast coarse image segmentation and fine region feature extraction using the decomposition property of the image wavelet transform. First, coarse image segmentation is conducted efficiently in the low-low (LL) frequency subband of the wavelet transform. Second, the feature vector of each segmented region is extracted hierarchically from all the wavelet frequency subbands, capturing the distinctive features (e.g., semantic texture) inside each region finely. Experimental results show the efficiency and effectiveness of the proposed method for region-based image retrieval.
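
    A minimal sketch of the first step only, assuming the PyWavelets package is available: a one-level 2-D wavelet transform yields the low-low (LL) subband, a quarter-size approximation image on which coarse segmentation can be run cheaply. The segmentation and region feature extraction themselves are not reproduced here.

```python
# Minimal sketch of obtaining the LL subband used for coarse segmentation
# (assumes PyWavelets). Segmentation and region feature extraction are not shown.
import numpy as np
import pywt

image = np.random.rand(256, 256)               # stand-in for a grayscale image

# One-level 2-D DWT: LL is the coarse approximation; LH, HL, HH hold the
# horizontal, vertical and diagonal detail subbands.
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")

print(LL.shape)                                 # (128, 128): quarter-size image
```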

  • An Energy-Efficient Clustered Superscalar Processor

    Toshinori SATO  Akihiro CHIYONOBU

    PAPER-Digital, Vol. E88-C, No. 4, pp. 544-551

    Power consumption is a major concern in embedded microprocessor design, and reducing power has also become a critical design goal for general-purpose microprocessors. Since these processors require high performance as well as low power, power reduction at the cost of performance is not acceptable. Many device-level techniques reduce power while maintaining performance: they select non-critical paths as candidates for low-power design, reserving performance-oriented design for speed-critical paths. The same philosophy can be applied at the architectural level. We evaluate a technique that exploits dynamic information on instruction criticality in order to reduce power. Specifically, we evaluate an instruction steering policy for a clustered microarchitecture that is based on instruction criticality, and find that it is substantially more energy-efficient, although it suffers some performance degradation.

  • Design and Evaluation of a Weighted Sacrificing Fair Queueing Algorithm for Wireless Packet Networks

    Sheng-Tzong CHENG  Ming-Hung TAO

    PAPER-Wireless Communication Technologies, Vol. E88-B, No. 4, pp. 1568-1576

    Fair scheduling algorithms have been proposed to tackle the problem of bursty and location-dependent errors in wireless packet networks. Most of these algorithms ensure the fairness property and guarantee the QoS of all sessions in a large-scale cellular network such as GSM or GPRS. In this paper, we propose the Weighted Sacrificing Fair Queueing (WSFQ) scheduling algorithm for small-area, device-limited wireless networks. WSFQ slows down the growth of queue length in limited-buffer devices while still maintaining the fairness properties and guaranteeing system throughput, and it easily adapts to various kinds of traffic load. We also implement a packet-based scheduling algorithm, Packetized Weighted Sacrificing Fair Queueing (PWSFQ), to approximate WSFQ. Both WSFQ and PWSFQ are evaluated against other algorithms through mathematical analysis and simulation.
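
    For readers unfamiliar with the fair-queueing baseline, the sketch below shows generic packetized weighted fair queueing (virtual finish times, smallest finish time served first). It is not the proposed WSFQ/PWSFQ: the sacrificing mechanism and the limited-buffer handling described above are not modelled, and the virtual-time bookkeeping is simplified.

```python
# Generic packetized weighted fair queueing sketch (NOT the proposed WSFQ/PWSFQ):
# each session keeps a virtual finish time F = max(F, now) + length / weight,
# and the backlogged packet with the smallest finish time is served first.
import heapq

class WFQScheduler:
    def __init__(self, weights):
        self.weights = weights                  # session id -> weight
        self.finish = {s: 0.0 for s in weights}
        self.queue = []                         # heap of (finish_time, session, length)
        self.clock = 0.0                        # simplified virtual time

    def enqueue(self, session, length):
        start = max(self.finish[session], self.clock)
        self.finish[session] = start + length / self.weights[session]
        heapq.heappush(self.queue, (self.finish[session], session, length))

    def dequeue(self):
        finish_time, session, length = heapq.heappop(self.queue)
        self.clock = finish_time
        return session, length

sched = WFQScheduler({"A": 2.0, "B": 1.0})
for pkt in [("A", 100), ("B", 100), ("A", 100)]:
    sched.enqueue(*pkt)
while sched.queue:
    print(sched.dequeue())
```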

  • Adaptive Decomposition of Dynamic Scene into Object-Based Distribution Components Based on Mixture Model Framework

    Mutsumi WATANABE

    PAPER-Image Recognition, Computer Vision, Vol. E88-D, No. 4, pp. 758-766

    This paper proposes a method to automatically decompose real scene images into multiple object-oriented component regions. First, histogram patterns of a specific image feature, such as intensity or hue, are estimated from an image sequence and accumulated. Next, the Gaussian distribution parameters corresponding to the object components in the scene are estimated by applying the EM algorithm to the accumulated histogram, and the number of components is estimated simultaneously by minimizing the Bayesian Information Criterion (BIC). The method can be applied to a variety of computer vision problems, for example color image segmentation and recognition of scene situation transitions. Experimental results for indoor and outdoor scenes show the effectiveness of the proposed method.
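
    A minimal sketch of the component-number selection step, using scikit-learn as an assumed stand-in: Gaussian mixtures with an increasing number of components are fitted with EM, and the model with the lowest BIC is kept, as described above. The synthetic 1-D samples stand in for the accumulated feature histogram.

```python
# Minimal sketch (scikit-learn assumed): fit Gaussian mixtures by EM for several
# candidate component counts and keep the one that minimizes BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.2, 0.05, 500),      # stand-in for accumulated
                          rng.normal(0.7, 0.10, 500)])     # feature values (e.g. hue)
samples = samples.reshape(-1, 1)

best_model, best_bic = None, np.inf
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(samples)
    bic = gmm.bic(samples)
    if bic < best_bic:
        best_model, best_bic = gmm, bic

print("selected number of components:", best_model.n_components)
```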

  • A Kernel-Based Fisher Discriminant Analysis for Face Detection

    Takio KURITA  Toshiharu TAGUCHI

    PAPER-Pattern Recognition, Vol. E88-D, No. 3, pp. 628-635

    This paper presents a modification of kernel-based Fisher discriminant analysis (FDA) for designing a one-class classifier for face detection. In face detection, it is reasonable to assume that "face" images cluster in a certain way, but "non face" images usually do not, since they include many different kinds of images. It is therefore difficult to model "non face" images as a single distribution in the discriminant space constructed by the usual two-class FDA. Moreover, the dimension of that discriminant space is bounded by 1, so a higher-dimensional discriminant space cannot be obtained. To overcome these drawbacks, the discriminant criterion of FDA is modified so that the trace of the covariance matrix of the "face" class is minimized and the sum of squared errors between the average vector of the "face" class and the feature vectors of "non face" images is maximized. With this modification a higher-dimensional discriminant space can be obtained. Experiments on "face" versus "non face" classification, using face images gathered from available face databases and many face images on the Web, show that the proposed method can outperform the support vector machine (SVM). A close relationship between the proposed kernel-based FDA and kernel-based Principal Component Analysis (PCA) is also discussed.
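
    The modified criterion can be illustrated in a linear (non-kernel) form: find directions that maximize the scatter of "non face" features around the "face" mean while minimizing the "face" within-class covariance, which leads to a generalized eigenproblem whose several leading eigenvectors span a discriminant space of dimension greater than one. The sketch below is a hedged linear illustration with synthetic data, not the kernelized method of the paper.

```python
# Hedged linear (non-kernel) illustration of the modified discriminant criterion:
# maximize the scatter of "non face" samples around the "face" mean while
# minimizing the "face" covariance. The paper's kernelized version is not shown.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
faces = rng.normal(0.0, 0.5, size=(300, 20))       # synthetic "face" features
nonfaces = rng.normal(1.0, 2.0, size=(300, 20))    # synthetic "non face" features

m_face = faces.mean(axis=0)
S_face = np.cov(faces, rowvar=False)               # "face" covariance (to minimize)
D = nonfaces - m_face
B = D.T @ D / len(nonfaces)                        # scatter of non-faces around the face mean

# Generalized eigenproblem B w = lambda * S_face w: unlike two-class FDA, several
# leading eigenvectors can be kept, giving a discriminant space of dimension > 1.
vals, vecs = eigh(B, S_face + 1e-6 * np.eye(S_face.shape[0]))
W = vecs[:, ::-1][:, :5]                           # 5-dimensional discriminant space
print(W.shape)
```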

  • Verification of Multi-Class Recognition Decision: A Classification Approach

    Tomoko MATSUI  Frank K. SOONG  Biing-Hwang JUANG

    PAPER-Spoken Language Systems, Vol. E88-D, No. 3, pp. 455-462

    We investigate strategies to improve utterance verification performance using a two-class pattern classification approach, including utilizing N-best candidate scores, modifying segmentation boundaries, applying background and out-of-vocabulary filler models, incorporating contexts, and minimizing verification errors via discriminative training. A connected-digit database recorded in a noisy, moving car with a hands-free microphone mounted on the sun visor is used to evaluate the verification performance, with the equal error rate (EER) of word verification as the sole performance measure. All factors and their effects on verification performance are presented in detail. The EER is reduced from 29%, using the standard likelihood ratio test, down to 21.4% when all features are properly integrated.
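
    Since the EER is the sole performance measure, a small self-contained sketch of how an equal error rate can be computed from verification scores may help; the scores and labels below are synthetic placeholders, not the paper's data.

```python
# Minimal sketch of the equal error rate (EER): the operating point where the
# false-acceptance rate and the false-rejection rate are (approximately) equal.
import numpy as np

def equal_error_rate(scores, labels):
    """labels: 1 for utterances that should be accepted, 0 for those to reject."""
    best_gap, eer = np.inf, None
    for t in np.sort(np.unique(scores)):
        far = np.mean(scores[labels == 0] >= t)    # false acceptance rate
        frr = np.mean(scores[labels == 1] < t)     # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 0.5, 200), rng.normal(-1.0, 0.5, 200)])
labels = np.concatenate([np.ones(200, dtype=int), np.zeros(200, dtype=int)])
print(f"EER = {equal_error_rate(scores, labels):.3f}")
```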

  • Low-Complexity Estimation Method of Cyclic-Prefix Length for DMT VDSL System

    Hui-Chul WON  Gi-Hong IM

    LETTER-Transmission Systems and Transmission Equipment for Communications, Vol. E88-B, No. 2, pp. 758-761

    In this letter, we propose a low-complexity method for estimating the cyclic-prefix (CP) length for a discrete multitone (DMT) very high-speed digital subscriber line (VDSL) system. Using the sign bits of the received DMT VDSL signals, the proposed method provides a good estimate of the CP length that is suitable for various channel characteristics. This simple estimation method is consistent with the initialization procedure of the T1E1.4 multi-carrier modulation (MCM)-based VDSL standard. Finally, simulation results on VDSL test loops are presented.

  • Selection of Shared-State Hidden Markov Model Structure Using Bayesian Criterion

    Shinji WATANABE  Yasuhiro MINAMI  Atsushi NAKAMURA  Naonori UEDA

    PAPER, Vol. E88-D, No. 1, pp. 1-9

    A Shared-State Hidden Markov Model (SS-HMM) is widely used as an acoustic model in speech recognition. In this paper, we propose a method for constructing SS-HMMs within a practical Bayesian framework. Our method derives a Bayesian model selection criterion for the SS-HMM based on the variational Bayesian approach, and the appropriate phonetic decision tree structure of the SS-HMM is found using this criterion. Unlike conventional asymptotic criteria, the criterion is applicable even when the amount of training data is insufficient. Experimental results on isolated word recognition demonstrate that the proposed method does not require a tuning parameter that must be adjusted according to the amount of training data, and that it is useful for selecting an appropriate SS-HMM structure for practical use.

  • A Low-Complexity Stopping Criterion for Iterative Turbo Decoding

    Dong-Soo LEE  In-Cheol PARK

    LETTER-Wireless Communication Technologies, Vol. E88-B, No. 1, pp. 399-401

    This letter proposes an efficient and simple stopping criterion for turbo decoding, derived by observing the behavior of log-likelihood ratio (LLR) values. Based on this behavior, the proposed criterion counts the number of absolute LLR values below a threshold and the number of hard-decision 1's in order to decide when to terminate the iterative decoding procedure. Simulation results show that the proposed approach reduces the number of iterations while maintaining BER/FER performance similar to previous criteria.
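
    The counting idea lends itself to a very small check per iteration. The sketch below is an illustrative version only, with an assumed threshold and an assumed stop rule (no weak LLRs and an unchanged count of hard-decision 1's); it does not reproduce the letter's exact criterion.

```python
# Illustrative stopping check for iterative turbo decoding (assumed threshold and
# stop rule, not the letter's exact criterion): count weak LLRs and hard-decision
# 1's after each iteration and stop when both indicators settle.
import numpy as np

def stop_iteration(llr, prev_ones, threshold=1.0):
    """Return (stop, ones): whether to stop and the current count of hard 1's."""
    weak = int(np.sum(np.abs(llr) < threshold))   # unreliable bit decisions
    ones = int(np.sum(llr < 0))                   # hard decision '1' (LLR = log P(0)/P(1))
    return (weak == 0 and ones == prev_ones), ones

# Usage inside a decoder loop (llr would come from the constituent decoder):
prev_ones = -1
for iteration in range(8):
    llr = np.random.default_rng(iteration).normal(6.0, 1.0, size=1024)  # placeholder LLRs
    stop, prev_ones = stop_iteration(llr, prev_ones)
    if stop:
        print("stopped after iteration", iteration + 1)
        break
```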

  • Modelling and Stability Analysis of Binary ABR Flow Control in ATM Network

    Fengyuan REN  Chuang LIN  Bo WEI

    PAPER-Network, Vol. E88-B, No. 1, pp. 210-218

    Available Bit Rate (ABR) flow control is an effective measure for congestion control in ATM networks. In large-scale, high-speed networks, the simplicity of the algorithm is crucial for optimizing switch performance. Although binary flow control is very simple, the queue length and allowed cell rate (ACR) controlled by the standard EFCI algorithm oscillate with large amplitude, which degrades performance; its applicability was therefore doubted, and explicit rate feedback mechanisms were introduced and explored instead. In this study, a model of binary flow control is built based on fluid flow theory, and its correctness is validated by simulation experiments. A linear model describing how the source end system regulates the cell rate is obtained through local linearization. We then evaluate and analyze the standard EFCI algorithm using the describing function approach, which is well developed in nonlinear control theory. The conclusion is that the queue and ACR oscillations are caused by an inappropriate nonlinear control rule originating from intuition, not by an intrinsic attribute of the binary flow control mechanism. Simulation experiments validate this analysis. Finally, a new parameter-setting scheme is put forward to remedy the weakness of standard EFCI switches without any change to the hardware architecture. Numerical results demonstrate that the new scheme is effective.
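
    To make the oscillatory behaviour concrete, the toy simulation below implements a generic binary-feedback (EFCI-style) loop: the switch marks congestion when its queue exceeds a threshold, and the source reacts with additive increase and multiplicative decrease after a feedback delay. All parameters and the update rule are illustrative assumptions, not the paper's fluid-flow model or the standard's exact rate rules.

```python
# Toy discrete-time sketch of binary (EFCI-style) feedback, only to visualize the
# rate/queue oscillation the abstract analyzes. Parameters are illustrative.
link_capacity, queue_threshold = 100.0, 50.0
acr, queue, rtt_delay = 10.0, 0.0, 5          # cells/step, cells, feedback delay in steps
marks = [False] * rtt_delay

for step in range(200):
    queue = max(0.0, queue + acr - link_capacity)
    marks.append(queue > queue_threshold)     # congestion indication set by the switch
    congested = marks.pop(0)                  # feedback arrives one delay later
    if congested:
        acr *= 0.95                           # multiplicative decrease
    else:
        acr += 2.0                            # additive increase
    if step % 20 == 0:
        print(f"step {step:3d}  ACR {acr:7.1f}  queue {queue:7.1f}")
```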

  • Support Vector Domain Classifier Based on Multiplicative Updates

    Congde LU  Taiyi ZHANG  Wei ZHANG

    LETTER-Image/Visual Signal Processing, Vol. E87-A, No. 8, pp. 2051-2053

    This paper proposes a learning classifier based on Support Vector Domain Description (SVDD) for the two-class problem. First, from the description of the training samples of one class, a sphere boundary containing these samples is obtained; this boundary is then used to classify the test samples. In addition, instead of traditional quadratic programming, multiplicative updates are used to solve for the Lagrange multipliers when optimizing the sphere boundary. Experiments on the CBCL face database illustrate the effectiveness of this learning algorithm in comparison with the Support Vector Machine (SVM) and Sequential Minimal Optimization (SMO).
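
    As a rough, substitutable illustration of classifying with a one-class boundary, the sketch below uses scikit-learn's OneClassSVM with an RBF kernel, which is closely related to SVDD; the paper's multiplicative-update solver and its specific two-class usage are not reproduced, and the data are synthetic.

```python
# Stand-in sketch: describe one class with a kernelized boundary and classify test
# points by it. OneClassSVM (RBF kernel) is closely related to SVDD; the paper's
# multiplicative-update solver is not reproduced here.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_one_class = rng.normal(0.0, 1.0, size=(200, 10))   # samples of the described class
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 10)),    # likely inside the boundary
                  rng.normal(5.0, 1.0, size=(5, 10))])   # likely outside

boundary = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(train_one_class)
print(boundary.predict(test))                            # +1 inside, -1 outside
```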

  • On Formulations and Solutions in Linear Image Restoration Problems

    Akira TANAKA  Hideyuki IMAI  Masaaki MIYAKOSHI

    PAPER-Image, Vol. E87-A, No. 8, pp. 2144-2151

    In terms of the formulation of optimality, image restoration filters can be divided into two streams: one is formulated as an optimization problem in which the fidelity of the restored image is evaluated indirectly, and the other as an optimization problem based on a direct evaluation. In principle, the formulation of optimality and the solutions derived from it should correspond to each other. However, in many studies adopting the former stream, an arbitrary choice of a solution without a mathematical ground passes unremarked. In this paper, we discuss the relation between the formulation of optimality and the solution derived from it from a mathematical point of view, and investigate the relation between the direct and indirect formulations. Through these analyses, we show that both formulations yield the identical filter in practical situations.

  • Complexity Metrics for Software Architectures

    Jianjun ZHAO

    LETTER-Software Engineering, Vol. E87-D, No. 8, pp. 2152-2156

    A large body of research has been conducted on measuring software complexity at the code level, but little effort has been made to measure the architectural-level complexity of a software system. In this paper, we propose architectural-level metrics that are appropriate for evaluating the architectural attributes of a software system. The main feature of our approach is that it assesses the architectural-level complexity of a software system by analyzing its formal architectural specification, so the process of metric computation can be completely automated.

  • A 300-mW Programmable QAM Transceiver for VDSL Applications

    Hyoungsik NAM  Tae Hun KIM  Yongchul SONG  Jae Hoon SHIM  Beomsup KIM  Yong Hoon LEE

    PAPER-Microwaves, Millimeter-Waves, Vol. E87-C, No. 8, pp. 1367-1375

    This paper describes the design of a programmable QAM transceiver for VDSL applications. A 12-b DAC with 64-dB spurious-free dynamic range (SFDR) at 75 MS/s and an 11-b ADC with 72.3-dB SFDR at 70 MS/s are integrated in this complete physical-layer IC. A digital IIR notch filter is included in order not to interfere with existing amateur radio bands. The proposed dual-loop AGC adjusts the gain of a variable gain amplifier (VGA) to obtain maximum SNR while avoiding saturation. Using several low-power techniques, the total power consumption is reduced to 300 mW with 1.8-V core and 3.3-V I/O supplies. The transceiver is fabricated in a 0.18-µm CMOS process and the chip size is 5 mm × 5 mm. This VDSL transceiver supports a 13-Mbps data rate over a 9000-ft channel with a BER below 10^-7.

  • Automatic Generation of Non-uniform HMM Topologies Based on the MDL Criterion

    Takatoshi JITSUHIRO  Tomoko MATSUI  Satoshi NAKAMURA

    PAPER-Speech and Hearing, Vol. E87-D, No. 8, pp. 2121-2129

    We propose a new method that introduces the Minimum Description Length (MDL) criterion into the automatic generation of non-uniform, context-dependent HMM topologies. Phonetic decision tree clustering, based on the Maximum Likelihood (ML) criterion, is widely used but creates only contextual variations; moreover, the ML criterion requires control parameters, such as the total number of states, to be predetermined empirically for use as stop criteria. Information criteria have been applied to solve this problem for decision tree clustering, but decision tree clustering cannot automatically create topologies with varying state lengths. We therefore propose a method that applies the MDL criterion as the split and stop criteria of the Successive State Splitting (SSS) algorithm, generating both contextual and temporal variations. The proposed method, the MDL-SSS algorithm, can automatically create adequate topologies without such predetermined parameters. Experimental results on travel arrangement dialogs and lecture speech show that MDL-SSS can automatically stop splitting and obtains more appropriate HMM topologies than the original algorithm.
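
    The generic two-part description length that such a criterion is based on can be written as -log-likelihood + (number of free parameters / 2) * log(number of samples); a candidate state split is accepted only if it lowers this score. The values below are placeholders, and the exact description-length terms of the MDL-SSS algorithm may differ.

```python
# Sketch of a generic two-part MDL score used as a split/stop test. The
# log-likelihoods and parameter counts are placeholders; the exact terms used by
# the MDL-SSS algorithm may differ.
import math

def mdl(log_likelihood: float, num_params: int, num_samples: int) -> float:
    return -log_likelihood + 0.5 * num_params * math.log(num_samples)

before_split = mdl(log_likelihood=-10500.0, num_params=120, num_samples=8000)
after_split  = mdl(log_likelihood=-10350.0, num_params=150, num_samples=8000)

print("accept split:", after_split < before_split)   # split accepted if MDL decreases
```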

  • Multi-Dipole Sources Identification from EEG Topography Using System Identification Method

    Xiaoxiao BAI  Qinyu ZHANG  Yohsuke KINOUCHI  Tadayoshi MINATO

    PAPER-Biological Engineering, Vol. E87-D, No. 6, pp. 1566-1574

    The goal of source localization in the brain is to estimate a set of parameters representing source characteristics; one of these parameters is the number of sources. We propose a method that combines the Powell algorithm with an information criterion to determine the optimal dipole number. Potential errors are calculated by the Powell algorithm with a concentric 4-sphere head model and 32 electrodes, and the number of dipoles is then determined by the information criterion using these potential errors. The method has the advantages of high accuracy in identifying the dipole number and a small amount of required EEG data, because (1) only one EEG topography is used in the computation, (2) 32 electrodes are used to obtain the EEG data, and (3) the optimal dipole number is obtained automatically. To show that the method is efficient, precise, and robust to noise, it is tested theoretically with 10% white noise added. The investigations presented here show that the method is an effective approach to determining the optimal dipole number.
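
    The selection loop can be sketched generically: for each candidate dipole count, fit the source parameters by Powell's method against the measured topography and score the residual with an information criterion, keeping the count with the best score. The linear "forward models", the 6-parameters-per-dipole assumption, and the AIC-style score below are all illustrative placeholders, not the paper's 4-sphere model or its criterion.

```python
# Hedged sketch of choosing the dipole number: fit each candidate model by
# Powell's method and keep the count with the best information-criterion score.
# The random linear "forward models" and the AIC-style score are placeholders.
import numpy as np
from scipy.optimize import minimize

N_ELECTRODES = 32
rng = np.random.default_rng(0)
measured = rng.normal(size=N_ELECTRODES)                       # stand-in EEG topography
forward = {k: rng.normal(size=(N_ELECTRODES, 6 * k)) for k in (1, 2, 3)}  # toy models

def potential_error(params, k):
    return float(np.sum((measured - forward[k] @ params) ** 2))

best_score, best_k = np.inf, None
for k in (1, 2, 3):
    x0 = np.zeros(6 * k)                                       # 6 parameters per dipole
    res = minimize(potential_error, x0, args=(k,), method="Powell")
    score = N_ELECTRODES * np.log(res.fun / N_ELECTRODES) + 2 * len(x0)  # AIC-like
    if score < best_score:
        best_score, best_k = score, k

print("selected number of dipoles:", best_k)
```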

  • Bit Error Correctable Multiple Description Coding

    Kwang-Pyo CHOI  Chang-su HAN  Keun-Young LEE

    PAPER, Vol. E87-A, No. 6, pp. 1433-1440

    This paper proposes a new method, EC-MDC, that can detect and correct bit errors in the bitstream generated by multiple description coding. The proposed method generates two sub-bitstreams with a small amount of redundancy, comparable to conventional multiple description coding. If the sub-bitstream on one side contains bit errors, they can be corrected by using the sub-bitstream of the other side. In BER-SNR experiments, the reconstruction quality of the proposed method is about 11 dB higher than that of conventional MDC at a BER below 10^-3 when one sub-bitstream is corrupted.

  • Some Relations between Watson-Crick Finite Automata and Chomsky Hierarchy

    Sadaki HIROSE  Kunifumi TSUDA  Yasuhiro OGOSHI  Haruhiko KIMURA

    LETTER-Automata and Formal Language Theory, Vol. E87-D, No. 5, pp. 1261-1264

    Watson-Crick automata, recently introduced in the DNA computing framework, are new types of automata working on tapes that are double-stranded sequences of symbols related by a complementarity relation, similar to a DNA molecule. The automata scan each of the two strands separately in a correlated manner. Several restricted variants have also been introduced, and the relationships between the families of languages they recognize have been investigated. In this paper, we clarify some relations between the families of languages recognized by the restricted variants of Watson-Crick finite automata and the families in the Chomsky hierarchy.

301-320 hits (of 505)