The search functionality is under construction.

Keyword Search Result

[Keyword] error(1060hit)

181-200hit(1060hit)

  • Iterative Optimal Design for Fast Filter Bank with Low Complexity

    Jinguang HAO  Wenjiang PEI  Kai WANG  Yili XIA  Cunlai PU  

     
    LETTER-Digital Signal Processing

      Vol:
    E99-A No:2
      Page(s):
    639-642

    In this paper, an iterative optimal method is proposed to design the prototype filters for a fast filter bank (FFB) with low complexity, aiming to control the optimum ripple magnitude tolerance of each filter according to the overall specifications. The problem is formulated as an optimization problem in which the total number of multiplications is minimized subject to constrained ripple in the passband and stopband. An iterative solution is then proposed to solve this optimization problem, obtaining low-complexity impulse response coefficients at each stage. Simulations verify the performance of the proposed scheme and show that, compared with the original method, it reduces the number of multiplications by about 24.24%. In addition, the proposed scheme and the original method provide similar mean square error (MSE) and mean absolute error (MAE) of the frequency response.
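    As a minimal illustration of checking ripple constraints against a specification (this is not the paper's FFB design method; the windowed-sinc prototype filter and band edges below are assumed for illustration):

    ```python
    import numpy as np

    def ripple(h, pass_edge, stop_edge, nfft=4096):
        """Measure passband and stopband ripple of an FIR filter h.

        Band edges are fractions of the Nyquist frequency.
        Returns (passband_ripple, stopband_ripple) as linear magnitudes.
        """
        H = np.abs(np.fft.rfft(h, nfft))
        f = np.linspace(0.0, 1.0, len(H))        # normalized to Nyquist
        dp = np.max(np.abs(H[f <= pass_edge] - 1.0))
        ds = np.max(H[f >= stop_edge])
        return dp, ds

    # Simple windowed-sinc lowpass as a stand-in prototype filter
    N, fc = 101, 0.25                             # taps, cutoff (fraction of Nyquist)
    n = np.arange(N) - (N - 1) / 2
    h = fc * np.sinc(fc * n) * np.hamming(N)

    dp, ds = ripple(h, pass_edge=0.20, stop_edge=0.30)
    print(dp, ds)   # passband and stopband ripple magnitudes
    ```

    A constrained design loop of the kind described above would compare `dp` and `ds` against the per-stage ripple tolerances and adjust the filter accordingly.
    
    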

  • Asymptotic Error Probability Analysis of DQPSK/DDQPSK over Nakagami-m Fading Channels

    Hoojin LEE  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E99-B No:1
      Page(s):
    152-156

    In this paper, we derive two simple asymptotic closed-form formulas for the average bit error probability (BEP) of differential quaternary phase shift keying (DQPSK) with Gray encoding and a simple asymptotic approximation for the average symbol error probability (SEP) of doubly-differential quaternary phase shift keying (DDQPSK) in Nakagami-m fading channels. Compared with the existing BEP/SEP expressions, the derived concise formulas are much more effective in evaluating the asymptotic properties of DQPSK/DDQPSK with various Nakagami fading parameters, the accuracy of which is verified by extensive numerical results.

  • A Precise Model for Cross-Point Memory Array

    Yoshiaki ASAO  Fumio HORIGUCHI  

     
    PAPER-Integrated Electronics

      Vol:
    E99-C No:1
      Page(s):
    119-128

    A simplified circuit has been utilized for fast computation of the current flowing in a cross-point memory array. However, that circuit assumes the selected cell is located farthest from the current drivers, so as to estimate the worst-case current degradation caused by metal wire resistance; this is because the length of the current path along the metal wire varies with the selected address in the cross-point memory array. In this paper, a new simplified circuit is proposed for calculating the current at every address while taking the metal wire resistance into account. By employing Monte Carlo simulation to solve the proposed simplified circuit, the current distribution across the array is obtained, so that the failure rates for read disturbance and write error are estimated precisely. Comparing the conventional and proposed simplified circuits shows that the conventional circuit estimates optimistic failure rates for read disturbance and write error when the wire resistance is a significant parasitic resistance.

  • Stochastic Resonance of Signal Detection in Mono-Threshold System Using Additive and Multiplicative Noises

    Jian LIU  Youguo WANG  Qiqing ZHAI  

     
    PAPER-Noise and Vibration

      Vol:
    E99-A No:1
      Page(s):
    323-329

    The phenomenon of stochastic resonance (SR) in a mono-threshold-system-based detector (MTD) with additive background noise and multiplicative external noise is investigated. On the basis of the maximum a posteriori probability (MAP) criterion, we deal with binary signal transmission in four scenarios. The performance of the MTD is characterized by the probability of error detection, and the effects of the system threshold and noise intensity on detectability are discussed. Similar to prior studies that focus on additive noise, we observe a non-monotone dependence of the error probability on noise intensity, here for multiplicative noise as well. However, unlike the additive case, the optimal multiplicative noise intensities all tend toward infinity for fixed additive noise intensities. The results of our model are potentially useful for the design of sensor networks and can help in understanding the biological mechanism of synaptic transmission.
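    The threshold-detector setup can be sketched with a small Monte Carlo experiment (a hedged illustration of the SR effect for additive noise only; the signal amplitude, threshold, and Gaussian noise model are assumed, and the paper's MAP analysis and multiplicative-noise scenarios are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def error_rate(sigma, amp=1.0, theta=1.5, n=200_000):
        """Empirical detection error of a single-threshold detector.

        Equiprobable binary source s in {0, amp}; the detector outputs 1
        when s plus additive Gaussian noise exceeds the threshold theta.
        """
        s = rng.integers(0, 2, n)                    # binary source
        x = amp * s + rng.normal(0.0, sigma, n)      # additive noise
        y = (x > theta).astype(int)                  # threshold detector
        return np.mean(y != s)

    # Sub-threshold signal (amp < theta): a moderate amount of noise helps
    errs = {sig: error_rate(sig) for sig in (0.01, 1.0, 20.0)}
    print(errs)
    ```

    With almost no noise the sub-threshold signal never crosses the threshold (error near 0.5); a moderate noise level lowers the error, and very strong noise raises it again, which is the non-monotone SR behavior described above.
    
    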

  • Distance Estimation Based on Statistical Models of Received Signal Strength

    Masahiro FUJII  Yuma HIROTA  Hiroyuki HATANO  Atsushi ITO  Yu WATANABE  

     
    LETTER

      Vol:
    E99-A No:1
      Page(s):
    199-203

    In this letter, we propose a new distance estimation method based on statistical models of the Received Signal Strength (RSS) at the receiver. The conventional estimator infers the distance between the transmitter and the receiver from the statistical average of the RSS, given an instantaneous RSS measurement and an estimate of the hyperparameters, which consist of the path loss exponent and related quantities. However, it is well known that the instantaneous RSS does not always correspond to the average RSS, because the RSS varies in accordance with a statistical model. Although this statistical model has been used for hyperparameter estimation and in localization systems, the conventional distance estimator has not yet exploited it. We incorporate the statistical model into a distance estimator whose expected value corresponds to the true distance. Our theoretical analysis establishes that the proposed estimator is preferable to the conventional one in terms of the accuracy of the expected value of the distance estimate. Moreover, we evaluate the Mean Square Error (MSE) between the true distance and the estimate, and provide evidence that the MSE is always proportional to the square of the distance if the hyperparameter estimate is ideally obtained.
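    A minimal sketch of the bias the letter addresses, assuming the standard log-distance path-loss model with log-normal shadowing (the hyperparameter values below are illustrative, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed model: RSS = p0 - 10*n*log10(d) + X, with shadowing
    # X ~ N(0, sigma^2) in dB. Hyperparameters are assumed known.
    p0, n_exp, sigma, d_true = -40.0, 3.0, 4.0, 10.0

    rss = p0 - 10 * n_exp * np.log10(d_true) + rng.normal(0, sigma, 100_000)

    # Naive estimator: invert the mean path-loss curve sample by sample
    d_naive = 10 ** ((p0 - rss) / (10 * n_exp))

    # The log-normal term makes E[d_naive] = d * exp(a^2 / 2) with
    # a = sigma * ln(10) / (10 * n); dividing by that factor removes the bias.
    a = sigma * np.log(10) / (10 * n_exp)
    d_unbiased = d_naive / np.exp(a**2 / 2)

    print(d_naive.mean(), d_unbiased.mean())
    ```

    Inverting the mean path-loss curve sample by sample overestimates the distance on average; the correction factor follows from the mean of a log-normal random variable, consistent with the letter's point that the estimator should account for the RSS statistics.
    
    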

  • The Optimal MMSE-Based OSIC Detector for MIMO System

    Yunchao SONG  Chen LIU  Feng LU  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E99-B No:1
      Page(s):
    232-239

    The ordered successive interference cancellation (OSIC) detector based on the minimum mean square error (MMSE) criterion has been proved to be a low-complexity detector with efficient bit error rate (BER) performance. The best-known MMSE-based OSIC detector, the MMSE-based vertical Bell Laboratories Layered Space-Time (VBLAST) detector, has cubic computational complexity but cannot attain the minimum BER performance. Several approaches to reducing the BER of the MMSE-based VBLAST detector have been proposed; however, these improvements incur large computational complexity. In this paper, a low-complexity MMSE-based OSIC detector called MMSE-OBEP (ordering based on error probability) is proposed to improve the BER performance of previous MMSE-based OSIC detectors while retaining cubic complexity. The proposed detector derives the near-exact error probability of the symbols in the MMSE-based OSIC detector; giving priority to detecting the symbol with the smallest error probability minimizes error propagation and enhances BER performance. We show that, although the computational complexity of the proposed detector is cubic, it provides better BER performance than previous MMSE-based OSIC detectors.
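    A sketch of the conventional MMSE-based OSIC detector that the paper improves on (real-valued BPSK and error-covariance ordering are assumed for simplicity; the proposed error-probability ordering is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def mmse_osic(H, y, sigma2):
        """Conventional MMSE-based OSIC (VBLAST-style ordering) for real
        BPSK symbols: at each stage, detect the stream with the smallest
        MMSE error variance, then cancel its contribution from y."""
        H = H.copy(); y = y.copy()
        idx = list(range(H.shape[1]))
        x_hat = np.zeros(len(idx))
        while idx:
            # MMSE error covariance; its smallest diagonal entry marks
            # the most reliable remaining stream
            P = np.linalg.inv(H.T @ H + sigma2 * np.eye(H.shape[1]))
            k = int(np.argmin(np.diag(P)))
            w = (P @ H.T)[k]                     # MMSE filter for stream k
            s = 1.0 if w @ y >= 0 else -1.0      # BPSK slicing
            x_hat[idx[k]] = s
            y -= H[:, k] * s                     # interference cancellation
            H = np.delete(H, k, axis=1)
            idx.pop(k)
        return x_hat

    sigma = 1e-3                                  # high SNR for illustration
    H = rng.normal(size=(4, 4))                   # 4x4 real MIMO channel
    x = rng.choice([-1.0, 1.0], 4)                # transmitted BPSK vector
    y = H @ x + rng.normal(0, sigma, 4)
    x_hat = mmse_osic(H, y, sigma**2)
    print(x_hat, x)
    ```

    The per-stage matrix inversion over shrinking channels is what gives OSIC its cubic overall complexity; the paper's contribution is a better ordering rule within this same structure.
    
    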

  • Digital Halftoning through Approximate Optimization of Scale-Related Perceived Error Metric

    Zifen HE  Yinhui ZHANG  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2015/10/20
      Vol:
    E99-D No:1
      Page(s):
    305-308

    This work presents an approximate global optimization method for image halftoning that fuses multi-scale information in a tree model. We employ a Gaussian mixture model and a hidden Markov tree to characterize the intra-scale clustering and inter-scale persistence properties of the detail coefficients, respectively. The multiscale perceived error metric and the theory of scale-related perceived error metrics are used to fuse the statistical distributions of the error metric across intra-scale clustering and inter-scale persistence, and an energy function is then generated. Through energy minimization via graph cuts, we obtain the halftone image. Experiments demonstrate the superior performance of the new algorithm compared with several existing algorithms under quantitative evaluation.

  • A Decoding Algorithm for Cyclic Codes over Symbol-Pair Read Channels

    Makoto TAKITA  Masanori HIROTOMO  Masakatu MORII  

     
    PAPER-Coding Theory

      Vol:
    E98-A No:12
      Page(s):
    2415-2422

    Cassuto and Blaum presented a new coding framework for channels whose outputs are overlapping pairs of symbols in storage applications. Such channels are called symbol-pair read channels, in which the pair distance and pair errors are used. Yaakobi et al. proved a lower bound on the minimum pair distance of cyclic codes. Furthermore, they provided a decoding algorithm for correcting pair errors using a decoder for cyclic codes, and showed the number of pair errors that can be corrected by their algorithm. However, their algorithm cannot correct all pair error vectors within half of the minimum pair distance. In this paper, we propose an efficient decoding algorithm for cyclic codes over symbol-pair read channels, based on the relationship between pair errors and syndromes. In addition, we show that the proposed algorithm can correct more pair errors than the algorithm of Yaakobi et al.
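    The pair-distance notion underlying this line of work can be illustrated directly (cyclic pair reads are assumed here, as is common in this literature):

    ```python
    import numpy as np

    def pair_vector(x):
        """Symbol-pair read vector: each position is read together with
        its (cyclic) successor, modeling the symbol-pair read channel."""
        return list(zip(x, np.roll(x, -1)))

    def pair_distance(x, y):
        """Pair distance: Hamming distance between the pair vectors."""
        return sum(a != b for a, b in zip(pair_vector(x), pair_vector(y)))

    x = [0, 0, 0, 0, 0]
    y = [0, 0, 1, 0, 0]          # a single symbol error ...
    print(pair_distance(x, y))   # ... corrupts two overlapping pairs
    ```

    A single symbol error corrupts two overlapping pairs, which is why the pair distance d_p satisfies d_H <= d_p <= 2*d_H; correcting up to half the minimum pair distance is therefore stronger than ordinary Hamming decoding.
    
    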

  • A Self-Recoverable, Frequency-Aware and Cost-Effective Robust Latch Design for Nanoscale CMOS Technology

    Aibin YAN  Huaguo LIANG  Zhengfeng HUANG  Cuiyun JIANG  Maoxiang YI  

     
    PAPER-Electronic Circuits

      Vol:
    E98-C No:12
      Page(s):
    1171-1178

    In this paper, a self-recoverable, frequency-aware and cost-effective robust latch (referred to as RFC) is proposed in a 45 nm CMOS technology. By means of three mutually fed-back Muller C-elements, the internal nodes and output node of the latch are self-recoverable from single event upset (SEU), i.e., particle-strike-induced logic upset, regardless of the energy of the striking particle. The proposed robust latch offers a much wider range of working clock frequencies on account of its smaller delay and insensitivity to the high-impedance state. It also incurs lower power and area costs than most of the compared latches. SPICE simulation results demonstrate that the area-power-delay product is reduced by 73.74% on average compared with previous radiation-hardened latches.
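    The masking behavior of a Muller C-element, which the RFC latch builds on, can be sketched in a few lines (a behavioral model only, not the latch circuit itself):

    ```python
    def c_element(a, b, prev):
        """Muller C-element: the output follows the inputs when they agree
        and holds its previous state when they disagree."""
        return a if a == b else prev

    # A transient upset on a single node makes the inputs disagree,
    # so the element holds its state and the upset is masked:
    out = 0
    out = c_element(1, 1, out)    # inputs agree high -> output goes 1
    print(out)                    # -> 1
    out = c_element(0, 1, out)    # single-event upset: inputs disagree
    print(out)                    # -> 1 (held; the upset does not propagate)
    ```

    Cross-coupling three such elements, as the paper does, lets every upset node be restored by its two unaffected neighbors.
    
    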

  • ECC-Based Bit-Write Reduction Code Generation for Non-Volatile Memory

    Masashi TAWADA  Shinji KIMURA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E98-A No:12
      Page(s):
    2494-2504

    Non-volatile memory has many advantages, such as high density and low leakage power, but it consumes more writing energy than SRAM, so reducing writing energy is essential in non-volatile memory design. In this paper, we propose write-reduction codes based on error-correcting codes that reduce writing energy in non-volatile memory by decreasing the number of written bits. When data are written into memory cells, they are not written directly but first encoded into a codeword. In our write-reduction codes, every data word corresponds to an information vector in an error-correcting code, and an information vector corresponds not to a single codeword but to a set of write-reduction codewords. Given the data to be written and the current memory bits, we can deterministically select the particular write-reduction codeword for which the maximum number of flipped bits is theoretically minimized, so the number of bits written to the memory cells is also minimized. Experimental results demonstrate that we achieve a 51% reduction in written bits and a 33% reduction in energy on average compared to non-encoded memory.
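    A much simpler classic scheme in the same spirit, bus-invert coding, illustrates the idea of mapping one data word to a set of codewords and picking the member closest to the current memory contents (this is not the paper's ECC-based construction):

    ```python
    def write_reduced(data, current):
        """Choose between a word and its complement (plus a one-bit flag)
        so that fewer memory bits flip relative to the current contents.

        One data word maps to a *set* of two codewords; we store the
        member with the smaller Hamming distance to what is in memory.
        """
        n = len(data)
        cand0 = data + [0]                      # stored as-is, flag = 0
        cand1 = [1 - b for b in data] + [1]     # stored inverted, flag = 1
        flips = lambda c: sum(a != b for a, b in zip(c, current))
        return min(cand0, cand1, key=flips)

    current = [1, 1, 1, 1, 0]                   # current cells (incl. flag bit)
    data    = [0, 0, 0, 1]                      # new word to store
    word = write_reduced(data, current)
    print(word)                                 # inverted form flips fewer bits
    ```

    Decoding reads the flag bit and re-inverts if needed; the paper's codes generalize this idea so that the codeword set comes from an error-correcting code and the worst-case flip count is provably minimized.
    
    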

  • Syndrome Decoding of Symbol-Pair Codes

    Makoto TAKITA  Masanori HIROTOMO  Masakatu MORII  

     
    PAPER-Coding Theory

      Vol:
    E98-A No:12
      Page(s):
    2423-2428

    Cassuto and Blaum proposed new error-correcting codes called symbol-pair codes, together with a coding framework for channels whose outputs are overlapping pairs of symbols in storage applications. Such channels are called symbol-pair read channels, in which the pair distance and pair errors are used. Cassuto et al. and Yaakobi et al. presented decoding algorithms for symbol-pair codes. However, their decoding algorithms cannot always correct error patterns whose number of pair errors is no more than half the minimum pair distance. In this paper, we propose a new decoding algorithm using syndromes of symbol-pair codes. In addition, we show that the proposed algorithm can correct all pair errors within the pair error correcting capability.

  • The Error Exponent of Zero-Rate Multiterminal Hypothesis Testing for Sources with Common Information

    Makoto UEDA  Shigeaki KUZUOKA  

     
    PAPER-Shannon Theory

      Vol:
    E98-A No:12
      Page(s):
    2384-2392

    The multiterminal hypothesis testing problem with a zero-rate constraint is considered. For this problem, an upper bound on the optimal error exponent was given by Shalaby and Papamarcou, provided that the positivity condition holds. Our contribution is to prove that Shalaby and Papamarcou's upper bound is valid under a weaker condition: (i) two remote observations have a common random variable in the sense of Gács and Körner, and (ii) when the value of the common random variable is fixed, the conditional distribution of the remaining random variables satisfies the positivity condition. Moreover, a generalization of the main result is also given.

  • A Fast Settling All Digital PLL Using Temperature Compensated Oscillator Tuning Word Estimation Algorithm

    Keisuke OKUNO  Shintaro IZUMI  Kana MASAKI  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO  

     
    PAPER-Circuit Design

      Vol:
    E98-A No:12
      Page(s):
    2592-2599

    This report describes an all-digital phase-locked loop (ADPLL) using a temperature-compensated settling-time reduction technique. The novelty of this work is autonomous oscillation control word estimation without a look-up table or memory circuits. The proposed ADPLL employs a multi-phase digitally controlled oscillator (DCO). In the proposed estimation method, the optimum oscillator tuning word (OTW) is estimated from the DCO frequency characteristic in the setup phase of the ADPLL. The proposed ADPLL, which occupies 0.27×0.36 mm², is fabricated in a 65 nm CMOS process. The temperature compensation PLL controller (TCPC) is implemented using an FPGA. Although the proposed method has 20% area overhead, measurement results show that the settling time is reduced by 47%. The average settling time at 25°C is 3 µs, and the average energy reduction is at least 42% from 0°C to 100°C.
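    The idea of homing in on a tuning word from a monotone DCO characteristic can be sketched as a binary search (the linear DCO model and gain figures below are hypothetical, not measured values from the paper, whose estimation method differs in detail):

    ```python
    def estimate_otw(f_target, dco_freq, otw_bits=10):
        """Binary-search a monotone DCO characteristic for the smallest
        tuning word whose output frequency reaches f_target."""
        lo, hi = 0, (1 << otw_bits) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if dco_freq(mid) < f_target:
                lo = mid + 1
            else:
                hi = mid
        return lo

    # Hypothetical linear DCO: 2.0 GHz free-running + 1 MHz per LSB
    dco = lambda otw: 2.0e9 + 1.0e6 * otw
    otw = estimate_otw(2.4e9, dco)
    print(otw)   # -> 400
    ```

    Starting the loop near the estimated OTW, rather than sweeping from a cold start, is what shortens the settling time; the paper additionally compensates the estimate for temperature.
    
    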

  • Code Generation Limiting Maximum and Minimum Hamming Distances for Non-Volatile Memories

    Tatsuro KOJO  Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E98-A No:12
      Page(s):
    2484-2493

    Data stored in non-volatile memories may be corrupted by crosstalk and radiation, but the data can be restored using error-correcting codes. However, non-volatile memories consume a large amount of energy in writing, so reducing the maximum number of written bits even when using error-correcting codes is one of the challenges in non-volatile memory design. In this paper, we first propose Doughnut code, which is based on state encoding that limits the maximum and minimum Hamming distances. We then propose a code expansion method that improves the maximum and minimum Hamming distances. Applying the code expansion method to Doughnut code yields a code that reduces the maximum number of flipped bits and has error-correcting ability equal to that of a Hamming code. Experimental results show that the proposed code efficiently reduces the maximum number of written bits.

  • A Fundamental Inequality for Lower-Bounding the Error Probability for Classical and Classical-Quantum Multiple Access Channels and Its Applications

    Takuya KUBO  Hiroshi NAGAOKA  

     
    PAPER-Shannon Theory

      Vol:
    E98-A No:12
      Page(s):
    2376-2383

    In the study of the capacity problem for multiple access channels (MACs), a lower bound on the error probability obtained by Han plays a crucial role in the converse parts of several kinds of channel coding theorems in the information-spectrum framework. Recently, Yagi and Oohama showed a tighter bound than the Han bound by means of Polyanskiy's converse. In this paper, we give a new bound which generalizes and strengthens the Yagi-Oohama bound, and demonstrate that it plays a fundamental role in deriving extensions of several known bounds. In particular, the Yagi-Oohama bound is generalized in two different directions: to general input distributions and to general encoders. In addition, we extend these bounds to quantum MACs and apply them to the converse problems for several information-spectrum settings.

  • Target Source Separation Based on Discriminative Nonnegative Matrix Factorization Incorporating Cross-Reconstruction Error

    Kisoo KWON  Jong Won SHIN  Nam Soo KIM  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/08/19
      Vol:
    E98-D No:11
      Page(s):
    2017-2020

    Nonnegative matrix factorization (NMF) is an unsupervised technique to represent nonnegative data as linear combinations of nonnegative bases, which has shown impressive performance for source separation. However, its source separation performance degrades when one signal can also be described well with the bases for the interfering source signals. In this paper, we propose a discriminative NMF (DNMF) algorithm which exploits the reconstruction error for the interfering signals as well as the target signal based on target bases. The objective function for training the bases is constructed so as to yield high reconstruction error for the interfering source signals while guaranteeing low reconstruction error for the target source signals. Experiments show that the proposed method outperformed the standard NMF and another DNMF method in terms of both the perceptual evaluation of speech quality score and signal-to-distortion ratio in various noisy environments.
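    For reference, the standard NMF baseline that the discriminative variant extends can be sketched with multiplicative updates (the cross-reconstruction penalty for interfering sources is not reproduced here; the dimensions and rank are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def nmf(X, r, iters=200, eps=1e-9):
        """Standard NMF via multiplicative updates: X ~= W @ H with all
        factors nonnegative. This is the baseline the discriminative
        variant builds on by adding a term that *raises* the
        reconstruction error for interfering sources."""
        m, n = X.shape
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)   # update activations
            W *= (X @ H.T) / (W @ H @ H.T + eps)   # update bases
        return W, H

    X = rng.random((20, 30))       # stand-in nonnegative spectrogram
    W, H = nmf(X, r=5)
    err = np.linalg.norm(X - W @ H)
    print(err)                     # reconstruction error after training
    ```

    In the separation setting, `W` would hold the target-source bases; the discriminative objective trains them so that this error stays low for the target but high for interference.
    
    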

  • Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution

    Shuqiong WU  Hiroshi NAGAHASHI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2015/08/13
      Vol:
    E98-D No:11
      Page(s):
    1906-1915

    Gentle AdaBoost is widely used in object detection and pattern recognition due to its efficiency and stability. To focus on instances with small margins, Gentle AdaBoost assigns larger weights to these instances during training. However, misclassification of small-margin instances can still occur, causing the weights of these instances to grow larger and larger. Eventually, several large-weight instances might dominate the whole data distribution, encouraging Gentle AdaBoost to choose weak hypotheses that fit only these instances in the late training phase. This phenomenon, known as “classifier distortion”, degrades generalization performance and can easily lead to overfitting, since the deviation of all selected weak hypotheses is increased by the late-selected ones. To solve this problem, we propose a new variant which we call “Penalized AdaBoost”. In each iteration, our approach not only penalizes the misclassification of instances with small margins but also restrains the weight increase for instances with minimal margins. Our method avoids “classifier distortion” effectively and thus performs better than Gentle AdaBoost. Experiments show that our method achieves far lower generalization errors with a similar training speed compared with Gentle AdaBoost.
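    The weight-domination phenomenon described above can be sketched schematically (an exponential-style weight update is assumed for illustration; Gentle AdaBoost's actual update fits regression functions, but the weight dynamics are analogous):

    ```python
    import numpy as np

    # Boosting-style weight update: misclassified instances are up-weighted.
    # One (noisy) instance is misclassified every round; the rest are not.
    n, rounds, alpha = 100, 30, 0.5
    w = np.full(n, 1.0 / n)
    for _ in range(rounds):
        margins = np.ones(n)        # correctly classified: margin +1
        margins[0] = -1.0           # the persistently noisy instance
        w *= np.exp(-alpha * margins)
        w /= w.sum()                # renormalize the distribution

    print(w[0])   # the single noisy instance now dominates the distribution
    ```

    After a few dozen rounds the noisy instance carries nearly all the weight, so late weak hypotheses fit essentially that one point; Penalized AdaBoost restrains exactly this growth.
    
    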

  • Ensemble and Multiple Kernel Regressors: Which Is Better?

    Akira TANAKA  Hirofumi TAKEBAYASHI  Ichigaku TAKIGAWA  Hideyuki IMAI  Mineichi KUDO  

     
    PAPER-Neural Networks and Bioengineering

      Vol:
    E98-A No:11
      Page(s):
    2315-2324

    For the last few decades, learning with multiple kernels, represented by the ensemble kernel regressor and the multiple kernel regressor, has attracted much attention in the field of kernel-based machine learning. Although their efficacy has been investigated numerically in many works, their theoretical grounds have not been investigated sufficiently, since a theoretical framework for evaluating them has been lacking. In this paper, we introduce a unified framework for evaluating kernel regressors with multiple kernels. On the basis of this framework, we analyze the generalization errors of the ensemble kernel regressor and the multiple kernel regressor, and give a sufficient condition for the ensemble kernel regressor to outperform the multiple kernel regressor in terms of the generalization error in the noise-free case. We also show by examples that, when the sufficient condition does not hold, either kernel regressor can outperform the other, which underscores the importance of the sufficient condition.
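    The two regressors being compared can be sketched with kernel ridge regression (Gaussian kernels, a toy 1-D dataset, and a uniform kernel combination are assumed; this illustrates the definitions, not the paper's analysis):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def gauss_kernel(a, b, width):
        return np.exp(-(a[:, None] - b[None, :])**2 / (2 * width**2))

    def kernel_ridge(K, y, lam=1e-2):
        """Closed-form kernel ridge coefficients: (K + lam*I)^-1 y."""
        return np.linalg.solve(K + lam * np.eye(len(y)), y)

    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=40)
    Ks = [gauss_kernel(x, x, w) for w in (0.05, 0.3)]

    # Ensemble kernel regressor: average the per-kernel predictions
    f_ens = np.mean([K @ kernel_ridge(K, y) for K in Ks], axis=0)

    # Multiple kernel regressor: one regressor on the combined kernel
    K_sum = np.mean(Ks, axis=0)
    f_mkl = K_sum @ kernel_ridge(K_sum, y)

    target = np.sin(2 * np.pi * x)
    mse_ens = np.mean((f_ens - target)**2)
    mse_mkl = np.mean((f_mkl - target)**2)
    print(mse_ens, mse_mkl)
    ```

    The ensemble averages independently trained regressors, while the multiple kernel regressor trains once on the combined kernel; which is better depends on the data, which is precisely the question the paper's sufficient condition answers.
    
    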

  • Error Correction Using Long Context Match for Smartphone Speech Recognition

    Yuan LIANG  Koji IWANO  Koichi SHINODA  

     
    PAPER-Speech and Hearing

      Publicized:
    2015/07/31
      Vol:
    E98-D No:11
      Page(s):
    1932-1942

    Most error correction interfaces for speech recognition applications on smartphones require the user to first mark an error region and then choose the correct word from a candidate list. We propose a simple multimodal interface to make this process more efficient. We develop Long Context Match (LCM) to obtain candidates that complement the conventional word confusion network (WCN). Assuming that not only the preceding words but also the succeeding words of the error region have been validated by the user, we use these contexts to search higher-order n-gram corpora for matching word sequences, also utilizing Web text data for this purpose. Furthermore, we propose a combination of LCM and WCN (“LCM + WCN”) to provide users with candidate lists that are more relevant than those yielded by WCN alone. We compare our interface with the WCN-based interface on the Corpus of Spontaneous Japanese (CSJ). Our proposed “LCM + WCN” method improved the 1-best accuracy by 23% and the Mean Reciprocal Rank (MRR) by 28%, and our interface reduced the user's load by 12%.
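    The LCM lookup can be illustrated with a toy trigram corpus (the corpus below is invented for illustration; the actual system searches higher-order n-grams over Web text):

    ```python
    from collections import Counter

    # Toy n-gram "corpus": trigrams with counts (stand-in for Web text data)
    trigrams = Counter({
        ("recognize", "speech", "today"): 12,
        ("recognize", "speech", "errors"): 7,
        ("wreck", "a", "nice"): 3,
    })

    def lcm_candidates(left, right):
        """Long-context match: given a user-validated preceding word and
        succeeding word, return corpus words seen between them,
        best-supported first."""
        hits = Counter()
        for (a, b, c), cnt in trigrams.items():
            if a == left and c == right:
                hits[b] += cnt
        return [w for w, _ in hits.most_common()]

    print(lcm_candidates("recognize", "today"))   # -> ['speech']
    ```

    Candidates retrieved this way complement the WCN list, which only contains words the recognizer itself hypothesized for the error region.
    
    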

  • Posteriori Restoration of Turn-Taking and ASR Results for Incorrectly Segmented Utterances

    Kazunori KOMATANI  Naoki HOTTA  Satoshi SATO  Mikio NAKANO  

     
    PAPER-Speech and Hearing

      Publicized:
    2015/07/24
      Vol:
    E98-D No:11
      Page(s):
    1923-1931

    Appropriate turn-taking is as important in spoken dialogue systems as generating correct responses. In particular, when the dialogue features quick responses, a user utterance is often segmented incorrectly by voice activity detection (VAD) because of short pauses within it. Incorrectly segmented utterances cause problems both in the automatic speech recognition (ASR) results and in turn-taking: an incorrect VAD result leads to ASR errors and causes the system to start responding while the user is still speaking. We develop a method that performs a posteriori restoration of incorrectly segmented utterances and implement it as a plug-in for the MMDAgent open-source software. A crucial part of the method is classifying whether restoration is required or not; we cast this as a binary classification problem of detecting originally single utterances from pairs of utterance fragments. Various features representing timing, prosody, and ASR result information are used. Experiments show that the proposed method outperformed a baseline with manually selected features by 4.8% and 3.9% in cross-domain evaluations with two domains. More detailed analysis revealed that the dominant, domain-independent features were utterance intervals and results from the Gaussian mixture model (GMM).
