
Keyword Search Result

[Keyword] OMP (3945 hits)

Results 1941-1960 of 3945

  • Recursion Theoretic Operators for Function Complexity Classes

    Kenya UENO  

     
    PAPER-Computation and Computational Models
    Vol: E91-D No:4  Page(s): 990-995

    We characterize the gap between the time and space complexity of functions through operators and completeness. First, we introduce a new notion of operators for function complexity classes based on recursive function theory and construct an operator that generates FPSPACE from FP. Then, we introduce new function classes composed of functions whose output lengths are bounded by the input length plus some constant. We characterize FP and FPSPACE using these classes and operators. Finally, we define a new notion of completeness for FPSPACE and exhibit an FPSPACE-complete function.

  • Joint Receive Antenna Selection for Multi-User MIMO Systems with Vector Precoding

    Wei MIAO  Yunzhou LI  Shidong ZHOU  Jing WANG  Xibin XU  

     
    LETTER-Wireless Communication Technologies
    Vol: E91-B No:4  Page(s): 1176-1179

    Vector precoding is a nonlinear broadcast precoding scheme for the downlink of multi-user MIMO systems that outperforms linear precoding and Tomlinson-Harashima precoding (THP). This letter discusses the problem of joint receive antenna selection in the multi-user MIMO downlink with vector precoding. Based on random matrix analysis, we derive a simple heuristic selection criterion using the singular value decomposition (SVD) and carry out an exhaustive search to determine which receive antenna each user should use. Simulation results reveal that receive antenna selection using the proposed criterion obtains the same diversity order as the optimal selection criterion.

  • Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Kazunori SHIMIZU  Nozomu TOGAWA  Takeshi IKENAGA  Satoshi GOTO  

     
    PAPER
    Vol: E91-A No:4  Page(s): 1054-1061

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression reduces the required memory capacity and the write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory decompresses the compressed messages in a single clock cycle while reducing the read power dissipation. Combining the two techniques reduces the decoder's power dissipation without sacrificing decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% over decoders based on the overlapped schedule and the rapid-convergence schedule, respectively, without the proposed techniques.

  • A High-Speed Pipelined Degree-Computationless Modified Euclidean Algorithm Architecture for Reed-Solomon Decoders

    Seungbeom LEE  Hanho LEE  

     
    PAPER-VLSI Design Technology and CAD
    Vol: E91-A No:3  Page(s): 830-835

    This paper presents a novel high-speed, low-complexity pipelined degree-computationless modified Euclidean (pDCME) algorithm architecture for high-speed Reed-Solomon (RS) decoders. The pDCME algorithm eliminates the degree computation, reducing hardware complexity and enabling high-speed processing. A high-speed RS decoder based on the pDCME algorithm has been designed and implemented in 0.13-µm CMOS standard cell technology at a supply voltage of 1.1 V. The proposed RS decoder operates at a clock frequency of 660 MHz and achieves a throughput of 5.3 Gb/s. The proposed architecture requires approximately 15% fewer gates and simpler control logic than architectures based on the popular modified Euclidean algorithm.

  • 6-bit 1.6-GS/s 85-mW Flash Analog to Digital Converter Using Symmetric Three-Input Comparator

    Yun-Jeong KIM  Jong-Ho LEE  Ja-Hyun KOO  Kwang-Hyun BAEK  Suki KIM  

     
    LETTER-Electronic Circuits
    Vol: E91-C No:3  Page(s): 392-395

    In this paper, we describe a 6-bit 1.6-GS/s flash analog-to-digital converter (ADC). To reduce the power consumption and active area, we propose a new interpolation architecture using a symmetric three-input comparator. This ADC achieves 5.56 effective bits for input frequencies up to 220 MHz at 1.6 GS/s, and almost five effective bits for a 660 MHz input at 1.6 GS/s. Peak INL and DNL are less than 0.5 LSB and 0.45 LSB, respectively. The ADC consumes 85 mW from a 1.8 V supply at 1.6 GS/s, occupies an active area of 0.27 mm², and is fabricated in 0.18-µm CMOS.

  • Likelihood Estimation for Reduced-Complexity ML Detectors in a MIMO Spatial-Multiplexing System

    Masatsugu HIGASHINAKA  Katsuyuki MOTOYOSHI  Akihiro OKAZAKI  Takayuki NAGAYASU  Hiroshi KUBO  Akihiro SHIBUYA  

     
    PAPER-Wireless Communication Technologies
    Vol: E91-B No:3  Page(s): 837-847

    This paper proposes a likelihood estimation method for reduced-complexity maximum-likelihood (ML) detectors in a multiple-input multiple-output (MIMO) spatial-multiplexing (SM) system. Reduced-complexity ML detectors, e.g., the Sphere Decoder (SD) and the QR-decomposition (QRD)-M algorithm, are very promising as MIMO detectors because they can estimate the ML or a quasi-ML symbol with very low computational complexity. However, they may lose likelihood information about signal vectors having the opposite bit to the hard decision, and the bit error rate performance of reduced-complexity ML detectors is therefore inferior to that of the ML detector when soft-decision decoding is employed. This paper proposes a simple method for estimating the lost likelihood information, suitable for reduced-complexity ML detectors. The proposed likelihood estimation method is applicable to any reduced-complexity ML detector and produces accurate soft-decision bits. Computer simulations confirm that the proposed method provides excellent decoding performance while keeping the low computational cost of reduced-complexity ML detectors.

  • Test Data Compression for Scan-Based BIST Aiming at 100x Compression Rate

    Masayuki ARAI  Satoshi FUKUMOTO  Kazuhiko IWASAKI  Tatsuru MATSUO  Takahisa HIRAIDE  Hideaki KONISHI  Michiaki EMORI  Takashi AIKYO  

     
    PAPER-Test Compression
    Vol: E91-D No:3  Page(s): 726-735

    We developed a test data compression scheme for scan-based BIST, aiming to compress test stimuli and responses by more than 100 times. As the scan-BIST architecture, we adopt BIST-Aided Scan Test (BAST) and combine four techniques: the invert-and-shift operation, run-length compression, scan address partitioning, and LFSR pre-shifting. Our scheme achieves a 100x compression rate, without reducing the fault coverage of the original ATPG vectors, in environments where Xs do not occur. Furthermore, we enhanced the masking logic to reduce the data needed for X-masking, so that test data is still compressed to 1/100 in practical environments where Xs occur. We applied the scheme to five real VLSI chips, and it compressed the test data by 100x for scan-based BIST.
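    One of the four techniques combined above, run-length compression, can be sketched generically as follows. This is an illustrative encoder/decoder pair for a bit stream, not the actual BAST bitstream format:

```python
def rle_encode(bits):
    """Run-length encode a bit string as (bit, run-length) pairs.
    Scan stimuli tend to contain long constant runs, which is what
    makes run-length coding effective in this setting."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]

def rle_decode(runs):
    """Invert rle_encode: expand each (bit, count) pair."""
    return "".join(b * n for b, n in runs)

stim = "0000000011110000000000111"
print(rle_encode(stim))  # → [('0', 8), ('1', 4), ('0', 10), ('1', 3)]
```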

  • Proposal of a Desk-Side Supercomputer with Reconfigurable Data-Paths Using Rapid Single-Flux-Quantum Circuits

    Naofumi TAKAGI  Kazuaki MURAKAMI  Akira FUJIMAKI  Nobuyuki YOSHIKAWA  Koji INOUE  Hiroaki HONDA  

     
    INVITED PAPER
    Vol: E91-C No:3  Page(s): 350-355

    We propose a desk-side supercomputer with large-scale reconfigurable data-paths (LSRDPs) using superconducting rapid single-flux-quantum (RSFQ) circuits. It has several computing units, each consisting of a general-purpose microprocessor, an LSRDP, and a memory. An LSRDP consists of a large number (e.g., a few thousand) of floating-point units (FPUs) and operand routing networks (ORNs) that connect the FPUs. We reconfigure the LSRDP to fit a computation, i.e., a group of floating-point operations appearing in a 'for' loop of a numerical program, by setting the routes in the ORNs before the loop executes. We propose to implement the LSRDPs with RSFQ circuits; the processors and memories can be implemented in semiconductor technology. We expect that a 10 TFLOPS supercomputer, including its refrigerating engine, can be housed in a desk-side rack using a near-future RSFQ process technology, such as a 0.35 µm process.

  • A Companding Technique for PAPR Reduction of OFDM Systems

    Miin-Jong HAO  Chung-Ping LIAW  

     
    LETTER-Wireless Communication Technologies
    Vol: E91-B No:3  Page(s): 935-938

    A companding technique using the hyperbolic tangent transform is proposed for reducing the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals. The technique is practical and can be implemented easily in integrated circuit design. The PAPR value of an OFDM system and the optimal companding coefficient attaining the minimum quantization error are derived. The error probability performance of the system after companding is evaluated. Simulation results show that the proposed scheme performs nearly the same as systems with the µ-law or A-law companding techniques.
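    The hyperbolic-tangent companding idea can be sketched as follows. This is a generic illustration on a toy OFDM symbol; the companding coefficient k is a free parameter here, not the optimal value derived in the letter:

```python
import cmath
import math
import random

def idft(X):
    """Naive inverse DFT: frequency-domain symbols -> time-domain signal."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(p) / (sum(p) / len(p)))

def tanh_compand(x, k):
    """Compress the envelope with tanh(k*|x|) while keeping the phase:
    large peaks saturate, so the peak-to-average ratio shrinks."""
    return [math.tanh(k * abs(v)) * cmath.exp(1j * cmath.phase(v)) for v in x]

rng = random.Random(0)
qpsk = [rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(64)]
x = idft(qpsk)  # toy 64-subcarrier OFDM symbol
print(papr_db(tanh_compand(x, 8.0)) < papr_db(x))  # → True
```

    Because tanh(k·a)/a is decreasing in a, the largest samples are compressed the most, which is why the PAPR can only decrease under this transform.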

  • An Architecture of Embedded Decompressor with Reconfigurability for Test Compression

    Hideyuki ICHIHARA  Tomoyuki SAIKI  Tomoo INOUE  

     
    PAPER-Test Compression
    Vol: E91-D No:3  Page(s): 713-719

    Test compression/decompression schemes for reducing the test application time and memory requirements of an LSI tester have been proposed. In such schemes, the coding algorithm is tailored to the given test data so that it can compress that data highly. However, these methods have drawbacks; e.g., the tailored coding algorithm is ineffective for test data other than the given set. In this paper, we introduce an embedded decompressor that is reconfigurable according to the coding algorithm and the given test data. Its reconfigurability overcomes the drawbacks of conventional decompressors while keeping a high compression ratio. Moreover, we propose an architecture of reconfigurable decompressors for four variable-length codings. In the proposed architecture, the functions common to the four codings are implemented as fixed (non-reconfigurable) components so as to reduce the configuration data, which is stored on an ATE and sent to a CUT. Experimental results show that (1) the configuration data size becomes reasonably small by reducing the configurable part of the decompressor, (2) the reconfigurable decompressor is effective for SoC testing in terms of test data size, and (3) it can achieve optimal compression of test data by Huffman coding.

  • Noise Suppression Based on Multi-Model Compositions Using Multi-Pass Search with Multi-Label N-gram Models

    Takatoshi JITSUHIRO  Tomoji TORIYAMA  Kiyoshi KOGURE  

     
    PAPER-Noisy Speech Recognition
    Vol: E91-D No:3  Page(s): 402-410

    We propose a noise suppression method based on multi-model compositions and multi-pass search. In real environments, input speech for recognition includes many kinds of noise signals. To obtain good recognition candidates, it is important to suppress many kinds of noise signals at once and to find the target speech. Before noise suppression, to find speech and noise label sequences, we introduce multi-pass search with acoustic models that include many kinds of noise models and their compositions, their n-gram models, and their lexicon. Noise suppression is then performed frame-synchronously using the multiple models selected by the recognized label sequences with time alignments. We evaluated this method on the E-Nightingale task, which contains voice memoranda spoken by nurses during actual work at hospitals. The proposed method obtained higher performance than the conventional method.

  • A Robust and Non-invasive Fetal Electrocardiogram Extraction Algorithm in a Semi-Blind Way

    Yalan YE  Zhi-Lin ZHANG  Jia CHEN  

     
    LETTER-Neural Networks and Bioengineering
    Vol: E91-A No:3  Page(s): 916-920

    Fetal electrocardiogram (FECG) extraction is of vital importance in biomedical signal processing. A promising approach is blind source extraction (BSE), emerging from the neural network field, which is generally implemented in a semi-blind way. In this paper, we propose a robust extraction algorithm that can extract the clear FECG as the first extracted signal. The algorithm exploits the fact that the kurtosis value of the FECG signal lies in a specific range, while the kurtosis values of other, unwanted signals do not. Moreover, the algorithm is very robust to outliers; its robustness is analyzed theoretically and confirmed by simulation. In addition, the algorithm works well even in adverse situations where the kurtosis values of some source signals are very close to each other. These properties make the algorithm an appealing method for obtaining an accurate and reliable FECG.
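    The kurtosis-range idea can be illustrated with a toy gate: spike-like, super-Gaussian signals (such as an ECG) have large positive excess kurtosis, while Gaussian noise sits near zero. The bounds below are illustrative placeholders, not the range used in the letter:

```python
import random
import statistics

def excess_kurtosis(x):
    """Sample excess kurtosis E[(x-mu)^4]/var^2 - 3 (zero for a Gaussian)."""
    mu = statistics.fmean(x)
    var = statistics.fmean([(v - mu) ** 2 for v in x])
    m4 = statistics.fmean([(v - mu) ** 4 for v in x])
    return m4 / var ** 2 - 3

def in_fecg_range(x, lo=2.0, hi=30.0):
    """Gate a candidate source by its kurtosis; lo/hi are illustrative
    bounds, not the specific range derived in the letter."""
    return lo < excess_kurtosis(x) < hi

rng = random.Random(0)
gauss = [rng.gauss(0.0, 1.0) for _ in range(5000)]   # noise-like source
spiky = [5.0 if rng.random() < 0.05 else rng.gauss(0.0, 0.1)
         for _ in range(5000)]                        # ECG-like spike train
print(in_fecg_range(gauss), in_fecg_range(spiky))  # → False True
```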

  • Robust F0 Estimation Using ELS-Based Robust Complex Speech Analysis

    Keiichi FUNAKI  Tatsuhiko KINJO  

     
    LETTER-Digital Signal Processing
    Vol: E91-A No:3  Page(s): 868-871

    Complex speech analysis of an analytic speech signal can estimate the spectrum at low frequencies accurately, since the analytic signal provides a spectrum only over positive frequencies. This remarkable feature makes more accurate F0 estimation possible using the complex residual signal extracted by complex-valued speech analysis. We have already proposed F0 estimation using the complex LPC residual, in which the autocorrelation function weighted by the AMDF was adopted as the criterion. That method adopted MMSE-based complex LPC analysis, and it has been reported to estimate F0 more accurately for IRS-filtered speech corrupted by white Gaussian noise, although it does not perform as well for IRS-filtered speech corrupted by pink noise. In this paper, robust complex speech analysis based on the ELS (Extended Least Square) method is introduced to overcome this drawback. Experimental results for additive white Gaussian and pink noise demonstrate that the proposed algorithm, based on robust ELS-based complex AR analysis, performs better than the other methods.
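    The AMDF-weighted autocorrelation criterion can be sketched on a real-valued signal as follows. The letter applies it to a complex LPC residual, which is omitted here; the search band and weighting constant are illustrative:

```python
import math

def f0_weighted_autocorr(x, fs, fmin, fmax, eps=1e-3):
    """Pick the pitch lag maximizing the autocorrelation weighted by the
    inverse AMDF (average magnitude difference function): a truly
    periodic lag has high autocorrelation AND a near-zero AMDF, so the
    ratio peaks sharply at the period."""
    best_lag, best_score = None, -float("inf")
    for lag in range(int(fs / fmax), int(fs / fmin) + 1):
        n = len(x) - lag
        ac = sum(x[i] * x[i + lag] for i in range(n)) / n
        amdf = sum(abs(x[i] - x[i + lag]) for i in range(n)) / n
        score = ac / (amdf + eps)
        if score > best_score:
            best_lag, best_score = lag, score
    return fs / best_lag

fs = 8000
tone = [math.sin(2 * math.pi * 200 * t / fs) for t in range(800)]
# Search a 120-350 Hz band (a typical speech F0 range) for a 200 Hz tone.
print(f0_weighted_autocorr(tone, fs, 120.0, 350.0))  # → 200.0
```

    Restricting the search band also avoids the usual octave error, where an integer multiple of the true period scores just as well.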

  • Experimental Evaluation of the Super Sweep Spectrum Analyzer

    Masao NAGANO  Toshio ONODERA  Mototaka SONE  

     
    PAPER-Digital Signal Processing
    Vol: E91-A No:3  Page(s): 782-790

    The sweep spectrum analyzer has been improved over the years, but its fundamental method did not change until the 'Super Sweep' method appeared. The 'Super Sweep' method has been expected to break the limitation of the conventional sweep spectrum analyzer: a maximum sweep rate inversely proportional to the square of the frequency resolution. The superior performance of the 'Super Sweep' method, however, had not been experimentally proved. This paper gives an experimental evaluation of the 'Super Sweep' spectrum analyzer, whose theoretical concepts have already been presented by the authors. Before giving the experimental results, we present a complete analysis of the sweep spectrum analyzer and express the principle of super-sweep operation with a complete set of equations. We developed an experimental system whose components operate under optimum conditions for the spectrum analyzer, and investigated its properties, namely the peak level reduction and the broadening of the frequency resolution of the measured spectrum, as the sweep rate is varied. We confirmed that the experimental system detected the spectrum at least 30 times faster than the conventional method, and that its sweep rate is proportional to the bandwidth of the baseband signal to be analyzed. We thus proved that the 'Super Sweep' method breaks the sweep-rate restriction imposed on a conventional sweep spectrum analyzer.
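    The inverse-square limit mentioned above can be made concrete: if the maximum sweep rate scales with RBW², a 10x finer resolution bandwidth costs a 100x longer sweep. A toy calculation under an assumed proportionality constant k, which is not a value from the paper:

```python
def min_sweep_time(span_hz, rbw_hz, k=2.0):
    """Minimum sweep time of a conventional swept analyzer, assuming the
    maximum sweep rate is k * RBW^2 in Hz/s (k is a generic constant,
    chosen here only for illustration)."""
    return span_hz / (k * rbw_hz ** 2)

# 1 MHz span: narrowing the RBW from 1 kHz to 100 Hz multiplies the
# minimum sweep time by 100.
slow = min_sweep_time(1e6, 100.0)    # 100 Hz RBW
fast = min_sweep_time(1e6, 1000.0)   # 1 kHz RBW
print(slow / fast)  # → 100.0
```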

  • Robust Speech Recognition by Model Adaptation and Normalization Using Pre-Observed Noise

    Satoshi KOBASHIKAWA  Satoshi TAKAHASHI  

     
    PAPER-Noisy Speech Recognition
    Vol: E91-D No:3  Page(s): 422-429

    Users require speech recognition systems that offer rapid response and high accuracy concurrently. Speech recognition accuracy is degraded by additive noise, imposed by ambient noise, and by convolutional noise, created by spatial transfer characteristics, especially in distant-talking situations. Against each type of noise, existing model adaptation techniques achieve robustness by using HMM composition and CMN (cepstral mean normalization). Since they need an additive noise sample as well as a user speech sample to generate the required models, they cannot achieve rapid response, though it is possible to capture the additive noise alone in a preceding step. In that step, the technique proposed herein uses just the additive noise to generate a model adapted and normalized against both types of noise. When the user's speech sample is captured, only online CMN need be performed to start the recognition processing, so the technique offers rapid response. In addition, to cover the unpredictable S/N values possible in real applications, the technique creates several S/N HMMs. Simulations using artificial speech data show that the proposed technique increases the character correct rate by 11.62% compared to CMN.
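    Cepstral mean normalization, used above against convolutional noise, works because a stationary channel appears as an additive constant in the cepstral domain, so subtracting the per-dimension mean removes it. A minimal sketch on toy feature vectors:

```python
def cmn(frames):
    """Cepstral mean normalization: subtract the per-dimension mean over
    all frames, removing a stationary convolutional (channel) offset."""
    dims = len(frames[0])
    means = [sum(f[d] for f in frames) / len(frames) for d in range(dims)]
    return [[f[d] - means[d] for d in range(dims)] for f in frames]

# A constant channel offset added to every frame vanishes after CMN.
clean = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
bias = [10.0, -7.0]
noisy = [[c + b for c, b in zip(f, bias)] for f in clean]
print(cmn(noisy) == cmn(clean))  # → True
```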

  • Multichannel Linear Prediction Method Compliant with the MPEG-4 ALS

    Yutaka KAMAMOTO  Noboru HARADA  Takehiro MORIYA  

     
    PAPER-Audio Coding
    Vol: E91-A No:3  Page(s): 756-762

    A new linear prediction analysis method for multichannel signals was devised, with the goal of enhancing the compression performance of MPEG-4 Audio Lossless Coding (ALS) compliant encoders and decoders. The multichannel coding tool of this standard carries out an adaptively weighted subtraction of the residual signals of the coding channel from those of the reference channel, both produced by independent linear prediction. Our linear prediction method tries to directly minimize the amplitude of the predicted residual signal after subtraction of the signals of the coding channel, and it has been implemented in the MPEG-4 ALS codec software. A comprehensive evaluation shows that this method reduces the size of a compressed file: the maximum improvement of the compression ratio is 14.6%, achieved at the cost of a small increase in computational complexity at the encoder and no increase in decoding time. The method is practical because the compressed bitstream remains compliant with the MPEG-4 ALS standard.
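    The adaptively weighted subtraction at the heart of the multichannel tool can be sketched with a least-squares weight for subtracting one channel's residual from the other's. This is a generic illustration; the standard quantizes the weights and residuals, which is omitted here:

```python
def weighted_subtract(target, other):
    """Remove inter-channel redundancy: subtract the other channel's
    residual, scaled by the least-squares weight
    gamma = <target, other> / <other, other>, from the target residual."""
    num = sum(t * o for t, o in zip(target, other))
    den = sum(o * o for o in other) or 1.0
    gamma = num / den
    return gamma, [t - gamma * o for t, o in zip(target, other)]

# Two strongly correlated channel residuals: after weighted subtraction
# the difference signal has far less energy, so it codes more compactly.
ref = [1.0, -2.0, 3.0, -1.0, 2.0]
cod = [0.5 * r + 0.01 * (-1) ** i for i, r in enumerate(ref)]
gamma, diff = weighted_subtract(cod, ref)
```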

  • Feature Compensation Employing Multiple Environmental Models for Robust In-Vehicle Speech Recognition

    Wooil KIM  John H.L. HANSEN  

     
    PAPER-Noisy Speech Recognition
    Vol: E91-D No:3  Page(s): 430-438

    An effective feature compensation method is developed for reliable speech recognition in real-life in-vehicle environments. The CU-Move corpus, used for evaluation, contains a range of speech and noise signals collected from a number of speakers under actual driving conditions. The PCGMM-based feature compensation considered in this paper utilizes parallel model combination to generate a noise-corrupted speech model by combining the clean speech and noise models. To address unknown, time-varying background noise, an interpolation method over multiple environmental models is employed. To alleviate the computational expense of multiple models, an Environment Transition Model is employed, motivated by the Noise Language Model used in Environmental Sniffing. An environment-dependent mixture sharing scheme is proposed and shown to be more effective in reducing the computational complexity; a smaller environmental model set is determined by the environment transition model for mixture sharing. The proposed scheme is evaluated on the connected single-digits portion of the CU-Move database using the Aurora2 evaluation toolkit. Experimental results indicate that our feature compensation method is effective for improving speech recognition in real-life in-vehicle conditions. A 73.10% reduction in computational requirements was obtained by employing the environment-dependent mixture sharing scheme with only a slight change in recognition performance, demonstrating that the proposed method maintains the distinctive characteristics among the different environmental models even when a large number of Gaussian components is selected for mixture sharing.

  • Near-Optimal Block Alignments

    Kuo-Tsung TSENG  Chang-Biau YANG  Kuo-Si HUANG  Yung-Hsing PENG  

     
    PAPER-Algorithm Theory
    Vol: E91-D No:3  Page(s): 789-795

    The optimal alignment of two given biosequences is mathematically optimal, but it may not be biologically optimal. To investigate more possible alignments with biological meaning, one can relax the scoring functions to obtain near-optimal alignments. Though near-optimal alignments increase the possibility of finding the correct alignment, they may confuse biologists because the number of candidates is large. In this paper, we present a filtering scheme for near-optimal alignments: an easy method for tracing the near-optimal alignments and an algorithm for filtering them. The time complexity of our algorithm is O(dmn) in the worst case, where d is the maximum distance between the near-optimal alignments and the optimal alignment, and m and n are the lengths of the input sequences.
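    The notion of near-optimal alignments can be sketched with a standard forward/backward dynamic-programming trick: F[i][j] + B[i][j] is the best score of any global alignment forced through cell (i, j), so a cell lies on some alignment within d of the optimum exactly when that sum is at least opt - d. The scoring values below are illustrative, not the paper's:

```python
def near_optimal_cells(a, b, d, match=1, mismatch=-1, gap=-2):
    """Cells (i, j) lying on at least one global alignment whose score
    is within d of the optimum (generic sketch, illustrative scoring)."""
    n, m = len(a), len(b)

    def dp(x, y):
        # Needleman-Wunsch forward matrix for x vs y.
        h = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
        for i in range(len(x) + 1):
            h[i][0] = gap * i
        for j in range(len(y) + 1):
            h[0][j] = gap * j
        for i in range(1, len(x) + 1):
            for j in range(1, len(y) + 1):
                s = match if x[i - 1] == y[j - 1] else mismatch
                h[i][j] = max(h[i - 1][j - 1] + s,
                              h[i - 1][j] + gap,
                              h[i][j - 1] + gap)
        return h

    F = dp(a, b)              # best score of prefixes a[:i] vs b[:j]
    R = dp(a[::-1], b[::-1])  # suffix scores, computed on reversed strings
    B = [[R[n - i][m - j] for j in range(m + 1)] for i in range(n + 1)]
    opt = F[n][m]
    return {(i, j) for i in range(n + 1) for j in range(m + 1)
            if F[i][j] + B[i][j] >= opt - d}

exact = near_optimal_cells("GATT", "GCTT", 0)    # optimal paths only
relaxed = near_optimal_cells("GATT", "GCTT", 3)  # admits near-optimal paths
print(exact < relaxed)  # → True
```

    Increasing d only lowers the threshold, so the cell set grows monotonically, which is the basis for filtering the candidate alignments.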

  • Study on Expansion of Convolutional Compactors over Galois Field

    Masayuki ARAI  Satoshi FUKUMOTO  Kazuhiko IWASAKI  

     
    PAPER-Test Compression
    Vol: E91-D No:3  Page(s): 706-712

    Convolutional compactors offer a promising technique for compacting test responses. In this study we expand the convolutional compactor architecture over a Galois field in order to improve the compaction ratio and reduce the X-masking probability, i.e., the probability that an error is masked by unknown values. While each scan chain is independently connected through EOR gates in the conventional arrangement, the proposed scheme treats q signals as an element of GF(2^q), and the connections are configured over the same field. We show the arrangement of the proposed compactors and the equivalent expression over GF(2). We then evaluate the effectiveness of the proposed expansion in terms of X-masking probability by simulations with uniformly distributed X-values, as well as the reduction of hardware overheads. Furthermore, we evaluate a multi-weight arrangement of the proposed compactors for non-uniform X distributions.
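    The move from GF(2) to GF(2^q) can be illustrated with the smallest nontrivial case, GF(4) = GF(2)[x]/(x^2 + x + 1): addition of q-bit symbols is still bitwise XOR (which is why an EOR-gate equivalent over GF(2) exists), while multiplication needs a polynomial reduction. A minimal sketch, not the compactor circuit itself:

```python
def gf4_mul(a, b):
    """Multiply two GF(4) elements encoded as 2-bit integers:
    carry-less (XOR) multiplication, then reduction mod x^2 + x + 1."""
    r = 0
    for i in range(2):          # carry-less multiply
        if (b >> i) & 1:
            r ^= a << i
    if r & 0b100:               # reduce the degree-2 term: x^2 = x + 1
        r ^= 0b111
    return r

def gf4_add(a, b):
    """Addition in any GF(2^q) is plain bitwise XOR over the q bits."""
    return a ^ b

# x * x = x + 1, and (x + 1) * (x + 1) = x, in GF(4).
print(gf4_mul(0b10, 0b10), gf4_mul(0b11, 0b11))  # → 3 2
```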

  • Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    Fan CHEN  Kazunori KOTANI  

     
    PAPER-Image Recognition, Computer Vision
    Vol: E91-D No:2  Page(s): 341-350

    Permutation ambiguity in classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. Especially when only a small subset of components is derived from the data, these components may not be the most distinctive for classification, because ICA is an unsupervised method. We incorporate a selective prior on the de-mixing coefficients into classical ICA to alleviate this problem. Since the prior is constructed from the classification information in the training data, we refer to the proposed ICA model with a selective prior as supervised ICA (sICA). We formulate the learning rule for sICA under a Maximum a Posteriori (MAP) scheme and derive a fixed-point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition in terms of both recognition rate and robustness, even with few independent components.
