
Keyword Search Result

[Keyword] SI (16,314 hits)

Results 7981-8000 of 16,314

  • An Integrated Dynamic Online Management Framework for QoS-Sensitive Multimedia Overlay Networks

    Sungwook KIM  Myungwhan CHOI  Sungchun KIM  

     
    LETTER-Network

      Vol:
    E91-B No:3
      Page(s):
    910-914

New multimedia services over cellular/WLAN overlay networks require different Quality of Service (QoS) levels. Therefore, an efficient network management system is necessary to realize QoS-sensitive multimedia services while enhancing network performance. In this paper, we propose a new online network management framework for overlay networks. Our online approach to network management exhibits dynamic adaptability, flexibility, and responsiveness to the traffic conditions in multimedia networks. Simulation results indicate that our proposed framework strikes an appropriate balance between performance criteria under widely varying traffic loads.

  • Feature Compensation Employing Multiple Environmental Models for Robust In-Vehicle Speech Recognition

    Wooil KIM  John H.L. HANSEN  

     
    PAPER-Noisy Speech Recognition

      Vol:
    E91-D No:3
      Page(s):
    430-438

An effective feature compensation method is developed for reliable speech recognition in real-life in-vehicle environments. The CU-Move corpus, used for evaluation, contains a range of speech and noise signals collected from a number of speakers under actual driving conditions. The PCGMM-based feature compensation considered in this paper utilizes parallel model combination to generate a noise-corrupted speech model by combining the clean speech model and the noise model. To address unknown, time-varying background noise, an interpolation method over multiple environmental models is employed. To alleviate the computational expense of maintaining multiple models, an Environment Transition Model is employed, motivated by the Noise Language Model used in Environmental Sniffing. An environment-dependent mixture-sharing scheme is proposed and shown to be more effective in reducing the computational complexity: a smaller environmental model set is determined by the environment transition model for mixture sharing. The proposed scheme is evaluated on the connected single-digits portion of the CU-Move database using the Aurora2 evaluation toolkit. Experimental results indicate that our feature compensation method is effective for improving speech recognition in real-life in-vehicle conditions. A 73.10% reduction in computational requirements was obtained by employing the environment-dependent mixture-sharing scheme with only a slight change in recognition performance. This demonstrates that the proposed method maintains the distinctive characteristics among the different environmental models, even when selecting a large number of Gaussian components for mixture sharing.

  • A Robust and Non-invasive Fetal Electrocardiogram Extraction Algorithm in a Semi-Blind Way

    Yalan YE  Zhi-Lin ZHANG  Jia CHEN  

     
    LETTER-Neural Networks and Bioengineering

      Vol:
    E91-A No:3
      Page(s):
    916-920

Fetal electrocardiogram (FECG) extraction is of vital importance in biomedical signal processing. A promising approach is blind source extraction (BSE), which emerged from the neural network field and is generally implemented in a semi-blind way. In this paper, we propose a robust extraction algorithm that can extract a clear FECG as the first extracted signal. The algorithm exploits the fact that the kurtosis of the FECG signal lies in a specific range, while the kurtosis values of other, unwanted signals fall outside this range. Moreover, the algorithm is very robust to outliers; its robustness is analyzed theoretically and confirmed by simulation. In addition, the algorithm works well in adverse situations where the kurtosis values of some source signals are very close to each other. These properties make the algorithm an appealing method for obtaining an accurate and reliable FECG.
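The kurtosis-range criterion described in this abstract can be sketched as follows. The abstract does not give the specific range or the BSE update rule, so the range bounds and test signals below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def kurtosis(x):
    """Sample excess kurtosis: E[(x - mu)^4] / sigma^4 - 3."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return np.mean((x - mu) ** 4) / x.var() ** 2 - 3.0

def in_fecg_range(signal, lo=3.0, hi=120.0):
    """Accept a candidate source only if its kurtosis lies in the range
    assumed for a spiky FECG waveform (lo/hi are illustrative values,
    not the range used in the paper)."""
    return lo < kurtosis(signal) < hi

# A sparse pulse train (QRS-like peaks) is heavy-tailed: high kurtosis.
# Gaussian noise has excess kurtosis near zero and is rejected.
rng = np.random.default_rng(0)
pulses = np.zeros(1000)
pulses[::100] = 5.0
noise = rng.standard_normal(1000)
```

In a full BSE algorithm, this check would act as the constraint that steers the extraction toward the FECG component first.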

  • A Method of Locating Open Faults on Incompletely Identified Pass/Fail Information

    Koji YAMAZAKI  Yuzo TAKAMATSU  

     
    PAPER-Fault Diagnosis

      Vol:
    E91-D No:3
      Page(s):
    661-666

In order to reduce the test cost, built-in self test (BIST) is widely used. One of the serious problems of BIST is that the compacted signature carries very little information for fault diagnosis. In particular, it is difficult to determine which tests detect a fault. Therefore, it is important to develop an efficient fault diagnosis method that uses incompletely identified pass/fail information. Here, incompletely identified pass/fail information means that a failing test block consists of at least one failing test and some passing tests, while all of the tests in a passing test block are passing tests. In this paper, we propose a method to locate open faults using incompletely identified pass/fail information. Experimental results for ISCAS'85 and ITC'99 benchmark circuits show that the number of candidate faults becomes less than 5 in many cases.

  • Post-BIST Fault Diagnosis for Multiple Faults

    Hiroshi TAKAHASHI  Yoshinobu HIGAMI  Shuhei KADOYAMA  Yuzo TAKAMATSU  Koji YAMAZAKI  Takashi AIKYO  Yasuo SATO  

     
    LETTER

      Vol:
    E91-D No:3
      Page(s):
    771-775

With the increasing complexity of LSI, built-in self test (BIST) is a promising technique for production testing. We herein propose a method for diagnosing multiple stuck-at faults based on the compressed responses from BIST. We refer to fault diagnosis based on the ambiguous test pattern set obtained from the compressed responses of BIST as post-BIST fault diagnosis [1]. In the present paper, we propose an effective method for performing post-BIST fault diagnosis for multiple stuck-at faults. The success ratio of the method and the feasibility of diagnosing large circuits are discussed.

  • MIMO Systems in the Presence of Feedback Delay

    Kenichi KOBAYASHI  Tomoaki OHTSUKI  Toshinobu KANEKO  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E91-B No:3
      Page(s):
    829-836

Multiple-Input Multiple-Output (MIMO) systems can achieve high-data-rate, high-capacity transmission. In MIMO systems, eigen-beam space division multiplexing (E-SDM), which achieves much higher capacity by weighting at the transmitter based on fed-back channel state information (CSI), has been studied. Early studies of E-SDM assumed perfect CSI at the transmitter. In practice, however, the CSI fed back from the receiver to the transmitter becomes outdated due to the time-varying nature of the channels and the feedback delay, so E-SDM with outdated CSI cannot achieve its full potential performance. In this paper, we evaluate the performance of E-SDM combined with methods for reducing the performance degradation due to feedback delay. We use three methods: 1) a method that predicts the CSI at the future time when it will be used and feeds the predicted CSI back to the transmitter (denoted hereafter as channel prediction); 2), 3) methods that use receive weights based on the zero-forcing (ZF) or minimum mean square error (MMSE) criterion instead of weights based on the singular value decomposition (SVD) criterion (denoted hereafter as ZF- or MMSE-based receive weights). We also propose methods that combine channel prediction with the ZF- or MMSE-based receive weights. Simulation results show that the bit error rate (BER) degradation of E-SDM in the presence of feedback delay is reduced by these methods. We also show that the methods combining channel prediction with ZF- or MMSE-based receive weights achieve good BER even when the feedback delay is large.
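The SVD-criterion weighting that E-SDM relies on under perfect CSI can be illustrated in a few lines. The channel realization below is a hypothetical 2x2 example for demonstration only; with ideal, undelayed feedback the cascade becomes diagonal, which is exactly the property that outdated CSI destroys:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2x2 flat Rayleigh-fading MIMO channel, perfect CSI assumed.
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# E-SDM weighting via SVD: transmit along the right singular vectors,
# receive with the conjugate-transposed left singular vectors.
U, sv, Vh = np.linalg.svd(H)
W_tx = Vh.conj().T            # transmit eigen-beam weights
W_rx = U.conj().T             # SVD-criterion receive weights

# With ideal (undelayed) feedback the cascade is diagonal: independent
# eigen-channels whose gains are the singular values of H.
H_eff = W_rx @ H @ W_tx
```

When the fed-back CSI is outdated, W_tx no longer matches the current H, H_eff acquires off-diagonal interference terms, and ZF/MMSE receive weights or channel prediction are needed to compensate.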

  • A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns

    Kazuya HARAGUCHI  Mutsunori YAGIURA  Endre BOROS  Toshihide IBARAKI  

     
    PAPER-Algorithm Theory

      Vol:
    E91-D No:3
      Page(s):
    781-788

    We consider a data set in which each example is an n-dimensional Boolean vector labeled as true or false. A pattern is a co-occurrence of a particular value combination of a given subset of the variables. If a pattern appears frequently in the true examples and infrequently in the false examples, we consider it a good pattern. In this paper, we discuss the problem of determining the data size needed for removing "deceptive" good patterns; in a data set of a small size, many good patterns may appear superficially, simply by chance, independently of the underlying structure. Our hypothesis is that, in order to remove such deceptive good patterns, the data set should contain a greater number of examples than that at which a random data set contains few good patterns. We justify this hypothesis by computational studies. We also derive a theoretical upper bound on the needed data size in view of our hypothesis.
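The notion of a "good pattern" used in this abstract can be made concrete with a small counting sketch. The coverage threshold and the restriction to patterns on at most two variables are illustrative assumptions, not the paper's definitions:

```python
import itertools
import random

def count_good_patterns(data, labels, theta=0.8):
    """Count value combinations on <= 2 variables that cover at least
    theta of the true examples and at most 1 - theta of the false ones.
    The notion of 'good' follows the abstract; the threshold theta is
    an illustrative choice, not the paper's."""
    n = len(data[0])
    true_ex = [x for x, y in zip(data, labels) if y]
    false_ex = [x for x, y in zip(data, labels) if not y]
    good = 0
    for k in (1, 2):
        for idx in itertools.combinations(range(n), k):
            for vals in itertools.product([0, 1], repeat=k):
                covers = lambda x: all(x[i] == v for i, v in zip(idx, vals))
                t = sum(map(covers, true_ex)) / len(true_ex)
                f = sum(map(covers, false_ex)) / len(false_ex)
                if t >= theta and f <= 1 - theta:
                    good += 1
    return good

def random_dataset(m, n, seed=0):
    """m random n-bit examples with random labels: any 'good' pattern
    found here is deceptive, since the labels carry no structure."""
    rnd = random.Random(seed)
    data = [[rnd.randint(0, 1) for _ in range(n)] for _ in range(m)]
    labels = [rnd.random() < 0.5 for _ in range(m)]
    return data, labels
```

Counting good patterns in `random_dataset` output for increasing m illustrates the hypothesis: deceptive patterns survive at small m and die out as m grows.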

  • Advances in High-Tc Single Flux Quantum Device Technologies

    Keiichi TANABE  Hironori WAKANA  Koji TSUBONE  Yoshinobu TARUTANI  Seiji ADACHI  Yoshihiro ISHIMARU  Michitaka MARUYAMA  Tsunehiro HATO  Akira YOSHIDA  Hideo SUZUKI  

     
    INVITED PAPER

      Vol:
    E91-C No:3
      Page(s):
    280-292

We have developed the fabrication process, circuit design technology, and cryopackaging technology for high-Tc single flux quantum (SFQ) devices, with the aim of application to an analog-to-digital (A/D) converter circuit for future wireless communication and to a sampler system for high-speed measurements. The reproducibility of fabricating ramp-edge Josephson junctions with IcRn products above 1 mV at 40 K and small Ic spreads on a superconducting groundplane was much improved by employing smooth multilayer structures and optimizing the junction fabrication process. The separated base-electrode layout (SBL) method, which suppresses the Jc spread for interface-modified junctions in circuits, was developed. This method enabled low-frequency logic operation of various elementary SFQ circuits with relatively wide bias current margins, and operation of a toggle flip-flop (T-FF) above 200 GHz at 40 K. Operation of a 1:2 demultiplexer, one of the main elements of a hybrid-type Σ-Δ A/D converter circuit, was also demonstrated. We developed a sampler system in which a sampler circuit with a potential bandwidth over 100 GHz was cooled by a compact Stirling cooler, and waveform observation experiments confirmed an actual system bandwidth well over 50 GHz.

  • Robust Noise Suppression Algorithm with the Kalman Filter Theory for White and Colored Disturbance

    Nari TANABE  Toshihiro FURUKAWA  Shigeo TSUJII  

     
    PAPER-Digital Signal Processing

      Vol:
    E91-A No:3
      Page(s):
    818-829

We propose a noise suppression algorithm based on Kalman filter theory. The algorithm aims to achieve robust noise suppression for additive white and colored disturbances using canonical state-space models with (i) a state equation composed of the speech signal and (ii) an observation equation composed of the speech signal and the additive noise. The remarkable features of the proposed algorithm are: (1) it handles both additive white and colored noise, where babble noise is used as the colored noise; (2) it realizes high-performance noise suppression without sacrificing the quality of the speech signal despite using only the Kalman filter algorithm, whereas many conventional methods based on Kalman filter theory perform noise suppression by combining a parameter estimation algorithm for an AR (auto-regressive) system with the Kalman filter algorithm. We show the effectiveness of the proposed method, which applies Kalman filter theory to the proposed canonical state-space model with a colored driving source, using numerical results and subjective evaluation results.
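The state-equation/observation-equation structure described above can be sketched with a textbook scalar Kalman filter. This is a generic illustration of the filter recursion, not the paper's canonical model; the AR(1) "speech" signal and all parameter values are assumptions:

```python
import numpy as np

def kalman_denoise(y, a=0.95, q=0.1, r=1.0):
    """Scalar Kalman filter for the textbook state-space model
        s[k] = a * s[k-1] + w[k],  w ~ N(0, q)   (state equation)
        y[k] = s[k] + v[k],        v ~ N(0, r)   (observation equation)
    A generic sketch of the structure, not the paper's canonical model."""
    s_hat, p = 0.0, 1.0
    est = []
    for yk in y:
        s_pred = a * s_hat                  # time update (predict)
        p_pred = a * a * p + q
        g = p_pred / (p_pred + r)           # Kalman gain
        s_hat = s_pred + g * (yk - s_pred)  # measurement update
        p = (1.0 - g) * p_pred
        est.append(s_hat)
    return np.array(est)

# Toy demonstration: an AR(1) "speech" signal buried in white noise.
rng = np.random.default_rng(2)
n = 2000
s = np.zeros(n)
for k in range(1, n):
    s[k] = 0.95 * s[k - 1] + np.sqrt(0.1) * rng.standard_normal()
y = s + rng.standard_normal(n)              # observation noise, r = 1.0
est = kalman_denoise(y)
```

The filtered estimate has a markedly lower mean-squared error than the raw observation, which is the basic effect the paper builds on for both white and colored disturbances.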

  • Likelihood Estimation for Reduced-Complexity ML Detectors in a MIMO Spatial-Multiplexing System

    Masatsugu HIGASHINAKA  Katsuyuki MOTOYOSHI  Akihiro OKAZAKI  Takayuki NAGAYASU  Hiroshi KUBO  Akihiro SHIBUYA  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E91-B No:3
      Page(s):
    837-847

This paper proposes a likelihood estimation method for reduced-complexity maximum-likelihood (ML) detectors in a multiple-input multiple-output (MIMO) spatial-multiplexing (SM) system. Reduced-complexity ML detectors, e.g., the Sphere Decoder (SD) and the QR decomposition (QRD)-M algorithm, are very promising as MIMO detectors because they can estimate the ML or a quasi-ML symbol with very low computational complexity. However, they may lose likelihood information about signal vectors having the opposite bit to the hard decision, so the bit error rate (BER) performance of reduced-complexity ML detectors is inferior to that of the ML detector when soft-decision decoding is employed. This paper proposes a simple method for estimating the lost likelihood information that is suitable for reduced-complexity ML detectors. The proposed likelihood estimation method is applicable to any reduced-complexity ML detector and produces accurate soft-decision bits. Computer simulation confirms that the proposed method provides excellent decoding performance while keeping the low computational cost of the reduced-complexity ML detectors.

  • WDM-PON Based on Wavelength Locked Fabry-Pérot Laser Diodes and Multi-Branch Optical Distribution Network

    Tae-Won OH  Hak-Kyu LEE  Chang-Hee LEE  

     
    LETTER-Fiber-Optic Transmission for Communications

      Vol:
    E91-B No:2
      Page(s):
    579-580

We demonstrate a wavelength division multiplexing passive optical network (WDM-PON) based on wavelength-locked Fabry-Pérot laser diodes and thin-film filters. Twelve Fast Ethernet signals are bi-directionally transmitted over a multi-branch optical distribution network (ODN). The ODN has distributed branch nodes and bus networks.

  • An XQDD-Based Verification Method for Quantum Circuits

    Shiou-An WANG  Chin-Yung LU  I-Ming TSAI  Sy-Yen KUO  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E91-A No:2
      Page(s):
    584-594

    Synthesis of quantum circuits is essential for building quantum computers. It is important to verify that the circuits designed perform the correct functions. In this paper, we propose an algorithm which can be used to verify the quantum circuits synthesized by any method. The proposed algorithm is based on BDD (Binary Decision Diagram) and is called X-decomposition Quantum Decision Diagram (XQDD). In this method, quantum operations are modeled using a graphic method and the verification process is based on comparing these graphic diagrams. We also develop an algorithm to verify reversible circuits even if they have a different number of garbage qubits. In most cases, the number of nodes used in XQDD is less than that in other representations. In general, the proposed method is more efficient in terms of space and time and can be used to verify many quantum circuits in polynomial time.

  • Enhancement of Sound Sources Located within a Particular Area Using a Pair of Small Microphone Arrays

    Yusuke HIOKA  Kazunori KOBAYASHI  Ken'ichi FURUYA  Akitoshi KATAOKA  

     
    PAPER-Engineering Acoustics

      Vol:
    E91-A No:2
      Page(s):
    561-574

    A method for extracting a sound signal from a particular area that is surrounded by multiple ambient noise sources is proposed. This method performs several fixed beamformings on a pair of small microphone arrays separated from each other to estimate the signal and noise power spectra. Noise suppression is achieved by applying spectrum emphasis to the output of fixed beamforming in the frequency domain, which is derived from the estimated power spectra. In experiments performed in a room with reverberation, this method succeeded in suppressing the ambient noise, giving an SNR improvement of more than 10 dB, which is better than the performance of the conventional fixed and adaptive beamforming methods using a large-aperture microphone array. We also confirmed that this method keeps its performance even if the noise source location changes continuously or abruptly.

  • Accelerating Web Content Filtering by the Early Decision Algorithm

    Po-Ching LIN  Ming-Dao LIU  Ying-Dar LIN  Yuan-Cheng LAI  

     
    PAPER-Contents Technology and Web Information Systems

      Vol:
    E91-D No:2
      Page(s):
    251-257

Real-time content analysis is typically a bottleneck in Web filtering. To accelerate the filtering process, this work presents a simple but effective early decision algorithm that analyzes only part of the Web content. The algorithm makes the filtering decision, either to block or to pass the Web content, as soon as it is confident, with high probability, that the content belongs to a banned or an allowed category. Experiments show that the algorithm needs to examine only around one-fourth of the Web content on average, while the accuracy remains fairly good: 89% for banned content and 93% for allowed content. The algorithm can complement other Web filtering approaches, such as URL blocking, to filter Web content with high accuracy and efficiency. Text classification algorithms in other applications can also follow the early decision principle to accelerate their operation.
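The early-decision idea of stopping as soon as the evidence is conclusive can be sketched as a sequential threshold test. The scoring rule, weights, and threshold below are illustrative assumptions; the paper's actual classifier is not reproduced here:

```python
def early_decision(tokens, weights, tau=3.0):
    """Scan the page left to right, accumulating a banned-vs-allowed
    score (e.g. a log-likelihood ratio). Block or pass as soon as the
    score confidently crosses +/- tau, so most of the page is never
    examined. Weights and threshold are illustrative, not the paper's."""
    score = 0.0
    for i, tok in enumerate(tokens, 1):
        score += weights.get(tok, 0.0)
        if score >= tau:
            return "block", i       # decided after examining i tokens
        if score <= -tau:
            return "pass", i
    return "undecided", len(tokens)

# Hypothetical per-token evidence weights.
weights = {"casino": 2.0, "poker": 2.0, "news": -1.5, "weather": -1.5}
```

For example, a page beginning "casino poker ..." is blocked after two tokens, without reading the rest, which is the mechanism behind the reported one-fourth average examination.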

  • Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    Fan CHEN  Kazunori KOTANI  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E91-D No:2
      Page(s):
    341-350

Permutation ambiguity in classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. In particular, when only a small subset of components is derived from the data, these components may not be the most distinctive for classification, because ICA is an unsupervised method. We incorporate a selective prior on the de-mixing coefficients into classical ICA to alleviate this problem. Since the prior is constructed from the classification information in the training data, we refer to the proposed ICA model with a selective prior as supervised ICA (sICA). We formulate the learning rule for sICA using a Maximum a Posteriori (MAP) scheme and derive a fixed-point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition in terms of both recognition accuracy and robustness, even with few independent components.

  • Color Constancy Based on Image Similarity

    Bing LI  De XU  Jin-Hua WANG  Rui LU  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E91-D No:2
      Page(s):
    375-378

Computational color constancy is a classical problem in computer vision. It is an under-constrained problem that can only be solved by imposing additional constraints. Existing algorithms can be divided into two groups: physics-based algorithms and statistics-based approaches. In this paper, we propose a new hypothesis: images generated under the same illumination share certain similar features. Based on this hypothesis, a novel statistics-based color constancy algorithm is given, and a new similarity function between images is defined. The experimental results show that our algorithm is effective; more importantly, the dimension of the features in our algorithm is much lower than in many previous statistics-based algorithms.

  • Image Restoration for Quantifying TFT-LCD Defect Levels

    Kyu Nam CHOI  No Kap PARK  Suk In YOO  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E91-D No:2
      Page(s):
    322-329

Though machine vision systems for automatically detecting visual defects, called mura, have been developed for thin-film transistor liquid crystal display (TFT-LCD) panels, they have not yet reached a level of reliability that can replace human inspectors. To establish an objective criterion for identifying real defects, index functions for quantifying defect levels based on human perception have recently been researched. However, while these functions have been verified in the laboratory, further consideration is needed in order to apply them to real systems in the field. First, the distortion introduced when capturing the panels must be corrected, since distortion can cause the defect level in the observed image to differ from that in the panel. There are several known methods for restoring the observed image in general vision systems. However, TFT-LCD panel images exhibit a unique background degradation, composed of background non-uniformity and a vignetting effect, which cannot easily be restored by traditional methods. Therefore, in this paper we present a new method for correcting the background degradation of TFT-LCD panel images using principal component analysis (PCA). Experimental results show that our method properly restores the observed images, and that the transformed shape of the muras closely approaches the original undistorted shape.

  • CombNET-III with Nonlinear Gating Network and Its Application in Large-Scale Classification Problems

    Mauricio KUGLER  Susumu KUROYANAGI  Anto Satriyo NUGROHO  Akira IWATA  

     
    PAPER-Pattern Recognition

      Vol:
    E91-D No:2
      Page(s):
    286-295

Modern applications of pattern recognition generate very large amounts of data, which require large computational effort to process. However, the majority of methods intended for large-scale problems merely adapt standard classification methods without considering whether those algorithms are appropriate for large-scale problems. CombNET-II was one of the first methods specifically proposed for such a task. Recently, an extension of this model, named CombNET-III, was proposed. The main modifications over the previous model were the substitution of the expert networks by Support Vector Machines (SVM) and the development of a general probabilistic framework. Although the previous model's performance and flexibility were improved, the low accuracy of the gating network still compromised CombNET-III's classification results. In addition, due to the use of SVM-based experts, the computational complexity is higher than that of CombNET-II. This paper proposes a new two-layered gating network structure that reduces the compromise between the number of clusters and accuracy, increasing the model's performance with only a small complexity increase. This high-accuracy gating network also enables the removal of low-confidence expert networks from the decoding procedure. This, together with a new, faster strategy for calculating multiclass SVM outputs, significantly reduces the computational complexity. Experimental results on problems with a large number of categories show that the proposed model outperforms the original CombNET-III while presenting a computational complexity more than one order of magnitude smaller. Moreover, when applied to a database with a large number of samples, it outperformed all compared methods, confirming the proposed model's flexibility.

  • An Adaptive RFID Anti-Collision Algorithm Based on Dynamic Framed ALOHA

    ChangWoo LEE  Hyeonwoo CHO  Sang Woo KIM  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E91-B No:2
      Page(s):
    641-645

The collision of ID signals from a large number of co-located passive RFID tags is a serious problem; to realize a practical RFID system, we need an effective anti-collision algorithm. This letter presents an adaptive algorithm that minimizes the total number of time slots and the number of rounds required for identifying the tags within the RFID reader's interrogation zone. The proposed algorithm is based on the framed ALOHA protocol, and the frame size is adaptively updated in each round. Simulation results show that our proposed algorithm is more efficient than conventional algorithms based on framed ALOHA.
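A round-by-round frame-size update of the kind this letter describes can be sketched as follows. The backlog estimator (at least two tags per collided slot, a common lower bound) and the size limits are illustrative assumptions, not necessarily the letter's rule:

```python
import random

def next_frame_size(success, collision, min_size=4, max_size=256):
    """Choose the next frame size from the outcome of the last frame,
    using the common lower-bound backlog estimate of two tags per
    collided slot. An illustrative update rule, not necessarily the
    one proposed in the letter."""
    backlog = 2 * collision
    return max(min_size, min(max_size, backlog))

def simulate_frame(num_tags, frame_size, rng):
    """Each unread tag picks a slot uniformly at random; return the
    number of singleton (success) and collided slots."""
    slots = [0] * frame_size
    for _ in range(num_tags):
        slots[rng.randrange(frame_size)] += 1
    success = sum(1 for c in slots if c == 1)
    collision = sum(1 for c in slots if c > 1)
    return success, collision

# Read all tags: identified tags leave the population, and the frame
# size tracks the estimated remaining backlog round by round.
rng = random.Random(0)
remaining, frame, rounds = 50, 16, 0
while remaining > 0 and rounds < 1000:
    s, c = simulate_frame(remaining, frame, rng)
    remaining -= s
    frame = next_frame_size(s, c)
    rounds += 1
```

Keeping the frame size close to the tag backlog is what maximizes slot efficiency in framed ALOHA; a fixed frame size wastes slots when the backlog is small and collides heavily when it is large.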

  • Energy-Efficient Transmission Scheme for WPANs with a TDMA-Based Contention-Free Access Protocol

    Yang-Ick JOO  Yeonwoo LEE  

     
    LETTER-Network

      Vol:
    E91-B No:2
      Page(s):
    609-612

An energy-efficient transmission scheme is essential for Wireless Personal Area Networks (WPANs), both to maximize the lifetime of energy-constrained wireless devices and to assure the required QoS in the actual physical transmission at each allocated TDMA time slot. We therefore propose a minimum-energy (ME) criterion-based adaptive transmission scheme that determines the optimum combination of transmit power, physical data rate, and fragment size required to simultaneously minimize energy consumption and satisfy the required QoS in each assigned time duration. The improved performance offered by the proposed algorithm is demonstrated via computer simulation in terms of throughput and energy consumption.
