The search functionality is under construction.

Keyword Search Result

[Keyword] error (1060 hits)

641-660 hits (of 1060)

  • A Variable-Length Encoding Method to Prevent the Error Propagation Effect in Video Communication

    Linhua MA  Yilin CHANG  Jun LIU  Xinmin DU  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E89-D No:4
      Page(s):
    1592-1595

A novel variable-length code (VLC), called the alternate VLC (AVLC), is proposed; it employs two types of VLC to encode source symbols alternately. Its advantage is that it not only stops symbol error propagation but also corrects symbol insertion and deletion errors, which is very important in video communication.
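The alternation idea above can be sketched in a few lines: symbols at even and odd positions are encoded with two different prefix-free tables, so the decoder always knows which table the next codeword must come from. The two tiny code tables below are illustrative assumptions, not the paper's actual codes.

```python
# Sketch of alternating variable-length coding (AVLC): two prefix-free
# tables are used in turn, one for even-indexed and one for odd-indexed
# symbols. Tables are hypothetical, chosen only to demonstrate the idea.
VLC_A = {"a": "0", "b": "10", "c": "11"}   # table for even-indexed symbols
VLC_B = {"a": "1", "b": "01", "c": "00"}   # table for odd-indexed symbols

def avlc_encode(symbols):
    """Encode symbols, alternating between the two code tables."""
    bits = []
    for i, s in enumerate(symbols):
        table = VLC_A if i % 2 == 0 else VLC_B
        bits.append(table[s])
    return "".join(bits)

def avlc_decode(bits):
    """Decode by matching against the table for the current symbol parity."""
    inv_a = {v: k for k, v in VLC_A.items()}
    inv_b = {v: k for k, v in VLC_B.items()}
    out, buf, i = [], "", 0
    for bit in bits:
        buf += bit
        inv = inv_a if i % 2 == 0 else inv_b
        if buf in inv:
            out.append(inv[buf])
            buf, i = "", i + 1
    return out

msg = ["a", "b", "c", "a"]
assert avlc_decode(avlc_encode(msg)) == msg
```

Because the decoder expects codewords from the two tables in strict alternation, a desynchronization caused by a corrupted codeword cannot propagate indefinitely, which is the resynchronization property the abstract points to.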

  • A Novel Wavelet-Based Notch Filter with Controlled Null Width

    Yung-Yi WANG  Ying LU  Liang-Cheng LEE  

     
    PAPER-Digital Signal Processing

      Vol:
    E89-A No:4
      Page(s):
    1069-1075

This paper presents a wavelet-based approach for the design of finite impulse response (FIR) notch filters with controlled null width. The M-band P-regular wavelet filters are employed to constitute the null space of the derivative constraint matrix. Taking advantage of the vanishing moment property of the wavelet filters, the proposed method controls the null width of the notch filter by adjusting the regularity of the employed wavelet filters. Moreover, selecting a large number of bands for the wavelet filters can effectively reduce the minimum mean square error and thus improve the performance of the notch filter. Computer simulations show that, in addition to having lower computational complexity, the proposed reduced-rank method has a frequency response similar to those of the full-rank-based techniques.

  • Study of Turbo Codes and Decoding in Binary Erasure Channel Based on Stopping Set Analysis

    Jeong Woo LEE  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E89-B No:4
      Page(s):
    1178-1186

In this paper, we define a stopping set of turbo codes with iterative decoding in the binary erasure channel. Based on the stopping set analysis, we study the block and bit erasure probabilities of turbo codes and the performance degradation of iterative decoding relative to maximum-likelihood decoding. The error floor performance of turbo codes under iterative decoding is dominated by small stopping sets. The performance degradation of iterative decoding is negligible in the error floor region, so the error floor performance is asymptotically dominated by the low-weight codewords.
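The stopping-set notion can be made concrete with a minimal peeling decoder: on the binary erasure channel, iterative decoding resolves any check equation that contains exactly one erased bit, and the erasure positions that survive this process form a stopping set. The parity-check matrix below is a toy example for illustration only, not a turbo code.

```python
def peel(H, erased):
    """Iterative (peeling) erasure decoding; returns the residual erasure set."""
    erased = set(erased)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [j for j, h in enumerate(row) if h and j in erased]
            if len(unknown) == 1:        # exactly one erased bit: solve the check
                erased.discard(unknown[0])
                progress = True
    return erased                        # a nonempty residual is a stopping set

# Toy parity-check matrix (illustrative, not from the paper).
H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1],
     [1, 0, 1, 0, 0]]

assert peel(H, {3}) == set()             # a lone erasure is peeled off
assert peel(H, {0, 1, 2}) == {0, 1, 2}   # every check meets {0,1,2} 0 or >= 2 times
```

The second assertion exhibits the defining property: since every check touches the set {0, 1, 2} in at least two erased positions, the decoder can make no progress, which is exactly why small stopping sets dominate the error floor under iterative decoding.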

  • Error Identification in At-Speed Scan BIST Environment in the Presence of Circuit and Tester Speed Mismatch

    Yoshiyuki NAKAMURA  Thomas CLOUQUEUR  Kewal K. SALUJA  Hideo FUJIWARA  

     
    PAPER-Dependable Computing

      Vol:
    E89-D No:3
      Page(s):
    1165-1172

In this paper, we provide a practical formulation of the problem of identifying all error occurrences and all failed scan cells in an at-speed scan-based BIST environment. We propose a method that can be used to identify every error when the circuit test frequency is higher than the tester frequency. Our approach requires very little extra hardware for diagnosis, and the test application time required to identify errors is a linear function of the frequency ratio between the CUT and the tester.

  • Quaternary Signal Sets for Digital Communications with Nonuniform Sources

    Ha H. NGUYEN  Tyler NECHIPORENKO  

     
    LETTER-Communication Theory and Signals

      Vol:
    E89-A No:3
      Page(s):
    832-835

This letter considers signal design problems for quaternary digital communications with nonuniform sources. The designs are considered for both average and equal energy constraints and for a two-dimensional signal space. A tight upper bound on the bit error probability (BEP) is employed as the design criterion. The optimal quaternary signal sets are presented, and their BEP performance is compared with that of standard QPSK and the binary signal set previously designed for nonuniform sources. Results show that considerable savings in transmitted power can be achieved by the proposed average-energy signal set for a highly nonuniform source.
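A union-bound-style upper bound on the BEP of the kind used as a design criterion can be sketched as follows; the Gray-labelled QPSK constellation, the source probabilities, and the noise level below are illustrative assumptions, not values from the letter.

```python
# Union upper bound on bit error probability for a 2-D constellation with a
# nonuniform source: P_b <= sum_i p_i * sum_{j != i} (b_ij / log2 M) *
# Q(d_ij / (2 * sigma)), where b_ij is the Hamming distance between the bit
# labels of symbols i and j. All numbers below are illustrative.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bep_union_bound(points, labels, probs, sigma):
    m = len(points)
    bits = math.log2(m)
    total = 0.0
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            d = math.dist(points[i], points[j])          # Euclidean distance
            b = bin(labels[i] ^ labels[j]).count("1")    # differing bits
            total += probs[i] * (b / bits) * q_func(d / (2 * sigma))
    return total

# Standard Gray-labelled QPSK with an assumed nonuniform source.
qpsk = [(1, 0), (0, 1), (-1, 0), (0, -1)]
labels = [0b00, 0b01, 0b11, 0b10]
probs = [0.7, 0.1, 0.1, 0.1]
bound = bep_union_bound(qpsk, labels, probs, sigma=0.3)
assert 0.0 < bound < 1.0
```

Minimizing such a bound over the signal point positions, subject to an average or equal energy constraint, is the kind of optimization the letter's designs perform.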

  • Soft Error Hardened Latch Scheme with Forward Body Bias in a 90-nm Technology and Beyond

    Yoshihide KOMATSU  Yukio ARIMA  Koichiro ISHIBASHI  

     
    PAPER-Soft Error

      Vol:
    E89-C No:3
      Page(s):
    384-391

This paper describes a soft error hardened latch (SEH-Latch) scheme that has an error correction function in fine process technologies. The storage node of the latch is separated into three electrodes, and a soft error on one node is corrected by the other two nodes despite the large amount and long duration of radiation-induced charge influx. To achieve this, we designed two types of SEH-Latch circuits and a standard latch circuit using 130-nm 2-well, 3-well, and 90-nm 2-well CMOS processes. The proposed circuit demonstrated immunity two orders of magnitude higher in an alpha-particle irradiation test, and one order of magnitude higher under neutron irradiation. We also demonstrated forward body bias control, which improves alpha-ray immunity by 26% for a standard latch and achieves a 44-fold improvement in the proposed latch.

  • On the Number of Integrators Needed for Dynamic Observer Error Linearization via Integrators

    Kyungtak YU  Nam-Hoon JO  Jin Heon SEO  

     
    LETTER-Systems and Control

      Vol:
    E89-A No:3
      Page(s):
    817-821

In this letter, an illustrative example is given which shows that the number of integrators needed for dynamic observer error linearization using integrators cannot be bounded by a function of the dimension of the system and the number of outputs, in contrast to dynamic feedback linearization results.

  • Forward Error Correction for Visual Communication Systems Using VBR Codec

    Konomi MOCHIZUKI  Yasuhiko YOSHIMURA  Yoshihiko UEMATSU  Ryoichi SUZUKI  

     
    PAPER

      Vol:
    E89-B No:2
      Page(s):
    334-341

Packet loss and delay cause degradation in the quality of real-time, interactive applications such as video conferencing. Forward error correction (FEC) schemes have been proposed to make such applications more resilient to packet loss, because the time required to recover lost packets is shorter than that required to retransmit them. On the other hand, codecs generally used in real-time applications, such as MPEG-4, have the feature that the sending bit rate and the packet size vary significantly according to the motion of objects in the video. If traditional FEC coding, which is calculated on the basis of a fixed-size block, is applied to such applications, bandwidth is wasted and delay variation is introduced, degrading quality. In this paper, we propose FEC schemes suitable for visual communication systems using a variable bit-rate (VBR) codec and evaluate their effectiveness using our prototype implementation and an experimental network.
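The basic FEC mechanism the abstract builds on can be sketched with a single XOR parity packet per block: one lost packet can then be rebuilt from the survivors without retransmission. This toy fixes the block and pads packets to the longest length; the paper's contribution is adapting such coding to the varying packet sizes of a VBR codec, which this sketch does not reproduce.

```python
# One XOR parity packet over a block of variable-size packets; any single
# lost packet in the block can be rebuilt from the parity and the survivors.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """XOR of all packets, each padded to the longest packet's length."""
    n = max(len(p) for p in packets)
    return reduce(xor_bytes, (p.ljust(n, b"\0") for p in packets))

def recover(survivors, parity, lost_len):
    """Rebuild the single lost packet from the survivors and the parity."""
    n = len(parity)
    rebuilt = reduce(xor_bytes, (p.ljust(n, b"\0") for p in survivors), parity)
    return rebuilt[:lost_len]

block = [b"frame-1", b"fr-2", b"frame-003"]   # variable-size video packets
parity = make_parity(block)
assert recover([block[0], block[2]], parity, len(block[1])) == block[1]
```

The padding to the longest packet is exactly the bandwidth waste the abstract attributes to fixed-block FEC when packet sizes vary widely, which motivates the VBR-aware schemes proposed in the paper.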

  • Foveation Based Error Resilience Optimization for H.264 Intra Coded Frame in Wireless Communication

    Yu CHEN  XuDong ZHANG  DeSheng WANG  

     
LETTER-Multimedia Systems for Communications

      Vol:
    E89-B No:2
      Page(s):
    633-636

Based on the observation that foveation analysis can locate the content most critical to human visual perception in video and images, an effective error resilience method is proposed for robust transmission of H.264 intra-coded frames over wireless channels. The method first exploits foveation analysis to find the foveated area of the picture, then uses a pre-error-concealment effect analysis to locate the center of the foveation macroblocks (MBs) in the foveated area, and finally introduces a new error-resilient alignment order and coding order of MBs for the video encoder. Extensive experimental results on different portrait video sequences over a random-bit-error wireless channel demonstrate that the proposed method achieves better subjective and objective results than the original JM 8.2 H.264 video codec, with little effect on coding rate and image quality.

  • An Error Detection Method Based on Coded Block Pattern Information Verification for Wireless Video Communication

    Yu CHEN  XuDong ZHANG  DeSheng WANG  

     
LETTER-Multimedia Systems for Communications

      Vol:
    E89-B No:2
      Page(s):
    629-632

A novel error detection method based on coded block pattern (CBP) information verification is proposed for error concealment of inter-coded video frames transmitted over wireless channels. The method first modifies the original video stream structure by aggregating certain important information, and then inserts error verification bits into the video stream for each encoded macroblock (MB); these bits serve as reference information to determine whether each encoded MB is corrupted. Experimental results on an additive white Gaussian noise simulated wireless channel with an H.263+ baseline codec show that the proposed method outperforms other reference approaches in error detection performance. In addition, it preserves the original video quality with only a small increase in coding overhead.
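The verification-bit idea can be sketched simply: a short checksum derived from each macroblock is inserted into the stream, and the receiver recomputes it to decide whether that macroblock is corrupted. The one-byte XOR check below is a stand-in for the paper's CBP-derived verification bits.

```python
# Per-macroblock verification bits, sketched with a one-byte XOR checksum
# (a hypothetical stand-in for the CBP-based bits described in the letter).
def verify_byte(mb: bytes) -> int:
    v = 0
    for b in mb:
        v ^= b
    return v

def protect(mb: bytes) -> bytes:
    """Sender side: append the verification byte to the macroblock data."""
    return mb + bytes([verify_byte(mb)])

def is_corrupted(unit: bytes) -> bool:
    """Receiver side: recompute the check and compare with the stored byte."""
    return verify_byte(unit[:-1]) != unit[-1]

mb = b"\x12\x34\x56"
unit = protect(mb)
assert not is_corrupted(unit)
corrupted = bytes([unit[0] ^ 0xFF]) + unit[1:]
assert is_corrupted(corrupted)
```

A macroblock flagged this way is then handed to error concealment rather than decoded as-is, which is the use the letter makes of its verification bits.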

  • Concurrent Error Detection in Montgomery Multiplication over GF(2^m)

    Che-Wun CHIOU  Chiou-Yng LEE  An-Wen DENG  Jim-Min LIN  

     
    PAPER-Information Security

      Vol:
    E89-A No:2
      Page(s):
    566-574

Because fault-based attacks on cryptosystems have been proven effective, fault diagnosis and tolerance in cryptography have started a new surge of research and development activity in the field of applied cryptography. Requiring no magnitude comparisons, the Montgomery multiplication algorithm is very attractive and popular for elliptic curve cryptosystems. This paper designs a bit-parallel Montgomery multiplier array in GF(2^m) with concurrent error detection capability to protect it against fault-based attacks. The robust Montgomery multiplier array with concurrent error detection requires only about 0.2% extra space overhead (for m=512) and four extra clock cycles compared to the original Montgomery multiplier array without concurrent error detection.
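The Montgomery multiplication the abstract refers to can be sketched at the bit level. The toy field GF(2^4) and its irreducible polynomial below are illustrative stand-ins (the paper works with sizes such as m = 512), and the parity-based concurrent error detection itself is not reproduced; a plain multiplier is included only to check the result.

```python
# Bit-serial Montgomery multiplication over GF(2^m): computes A*B*x^(-M)
# mod f(x) using only XORs and shifts, i.e. no magnitude comparisons.
M = 4                   # toy field GF(2^4); the paper targets sizes such as m = 512
F = 0b10011             # irreducible f(x) = x^4 + x + 1

def gf_mul(a, b):
    """Ordinary polynomial-basis multiplication modulo f(x), for checking."""
    c = 0
    for i in range(M):
        if (b >> i) & 1:
            c ^= a << i
    for i in range(2 * M - 2, M - 1, -1):   # reduce degrees 2M-2 .. M
        if (c >> i) & 1:
            c ^= F << (i - M)
    return c

def mont_mul(a, b):
    """Montgomery product A*B*x^(-M) mod f(x), one coefficient of A per step."""
    c = 0
    for i in range(M):
        if (a >> i) & 1:    # absorb coefficient a_i of A
            c ^= b
        if c & 1:           # make c divisible by x ...
            c ^= F
        c >>= 1             # ... then divide by x
    return c

# Check against plain multiplication: mont_mul(a, b) * x^M == a * b in the field.
R = (1 << M) ^ F            # x^M mod f(x)
assert all(gf_mul(mont_mul(a, b), R) == gf_mul(a, b)
           for a in range(16) for b in range(16))
```

The absence of magnitude comparisons in `mont_mul` is what makes the algorithm attractive for hardware multiplier arrays; the paper's contribution is adding a low-overhead concurrent check on top of such an array.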

  • Decision Aided Hybrid MMSE/SIC Multiuser Detection: Structure and AME Performance Analysis

    Hoang-Yang LU  Wen-Hsien FANG  

     
    PAPER-Spread Spectrum Technologies and Applications

      Vol:
    E89-A No:2
      Page(s):
    600-610

This paper presents a simple yet effective hybrid of minimum mean square error (MMSE) multiuser detection (MUD) and successive interference cancellation (SIC) for direct-sequence code division multiple access (DS-CDMA) systems. The proposed hybrid MUD first divides the users into groups, each consisting of users with close power levels. SIC is then used to distinguish users among different groups, while the MMSE MUD detects signals within each group. To further reduce the performance loss caused by propagation errors, a decision-aided information reuse scheme is also addressed, which can be used in conjunction with the hybrid MMSE/SIC MUD to cancel the multiple access interference (MAI) more thoroughly and so attain more accurate detection. Furthermore, an analysis of the asymptotic multiuser efficiency (AME), a measure of near-far resistance, is conducted to provide further insight into the new detectors. Simulations in both additive white Gaussian noise (AWGN) channels and slow flat Rayleigh fading channels show that the proposed hybrid MMSE/SIC detectors, with or without the decision-aided scheme, are superior to the SIC; in particular, the decision-aided detector performs close to the MMSE MUD with substantially lower computational complexity.

  • An Anomaly Intrusion Detection System Based on Vector Quantization

    Jun ZHENG  Mingzeng HU  

     
    PAPER-Intrusion Detection

      Vol:
    E89-D No:1
      Page(s):
    201-210

Machine learning and data mining algorithms are increasingly used in intrusion detection systems (IDS), but their performance often lags in practice, especially in network-based intrusion detection, where the heavy load of network traffic monitoring demands more efficient algorithms. In this paper, we propose and design an anomaly intrusion detection (AID) system based on vector quantization (VQ), which is widely used for data compression and high-dimensional multimedia data indexing. The design procedure optimizes detection performance by jointly accounting for accurate usage-profile modeling with the VQ codebook and fast similarity measures between feature vectors to reduce computational cost. The former is key to a high detection rate, and the latter underpins efficient, real-time detection. Experimental comparisons with related work show that intrusion detection performance is greatly improved.
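The core detection rule of a VQ-based anomaly detector can be sketched compactly: a codebook summarises normal feature vectors, and a new vector is flagged anomalous when its distortion (distance to the nearest codeword) exceeds a threshold. The "codebook" below is just a few assumed normal vectors; a real system would train it with a clustering algorithm such as LBG/k-means, as VQ systems typically do.

```python
# VQ-style anomaly detection sketch: distance to the nearest codeword is
# the distortion; large distortion relative to the normal profile means
# the feature vector does not fit the learned usage model.
import math

def nearest_distortion(codebook, x):
    """Distortion of x with respect to the codebook (nearest-codeword distance)."""
    return min(math.dist(c, x) for c in codebook)

def is_anomalous(codebook, x, threshold):
    return nearest_distortion(codebook, x) > threshold

# Assumed "normal" traffic feature vectors standing in for a trained codebook.
normal = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]
assert not is_anomalous(normal, (0.12, 0.18), threshold=0.5)
assert is_anomalous(normal, (5.0, 5.0), threshold=0.5)
```

The two concerns the abstract names map directly onto this sketch: codebook quality governs the detection rate, and the cost of the nearest-codeword search governs whether detection can keep up with live traffic.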

  • Lowering Error Floor of Irregular LDPC Codes by CRC and OSD Algorithm

    Satoshi GOUNAI  Tomoaki OHTSUKI  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E89-B No:1
      Page(s):
    1-10

Irregular Low-Density Parity-Check (LDPC) codes generally achieve better performance than regular LDPC codes at low Eb/N0 values. They have, however, higher error floors than regular LDPC codes. Constructing an irregular LDPC code thus involves a trade-off between performance degradation in the low Eb/N0 region and lowering the error floor. It is known that a decoding algorithm can achieve very good performance if it combines the Ordered Statistic Decoding (OSD) algorithm with the Log Likelihood Ratio-Belief Propagation (LLR-BP) decoding algorithm. Unfortunately, every codeword obtained by the OSD algorithm satisfies the parity check equations of the LDPC code, so those equations cannot be used to stop the decoding process, and a wrong codeword that satisfies them raises the error floor. Moreover, once a codeword that satisfies the parity check equations is generated by the LLR-BP decoding algorithm, that codeword is taken as the final estimate and decoding halts; the OSD algorithm is not performed. In this paper, we propose a new encoding/decoding scheme to lower the error floor of irregular LDPC codes. The proposed encoding scheme encodes the information bits with both a Cyclic Redundancy Check (CRC) and an LDPC code. The proposed decoding scheme, which consists of LLR-BP decoding, a CRC check, and OSD decoding, detects errors in the codewords obtained by the LLR-BP and OSD decoding algorithms using both the parity check equations of the LDPC code and the CRC. Computer simulations show that the proposed encoding/decoding scheme lowers the error floor of irregular LDPC codes.
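The decoding flow described above can be sketched as control logic: accept a decoder's output only if it passes both the LDPC parity checks and the CRC, falling back to OSD when LLR-BP fails either test. Here `zlib.crc32` stands in for the paper's CRC, and `llr_bp`, `osd`, and `parity_ok` are caller-supplied stand-in functions, since the full decoding algorithms are beyond a sketch.

```python
# Sketch of the proposed CRC-assisted decoding flow: the CRC catches wrong
# codewords that satisfy the LDPC parity checks, which would otherwise
# raise the error floor.
import zlib

def crc_append(info: bytes) -> bytes:
    """Encoder side: protect the information bits with a 32-bit CRC."""
    return info + zlib.crc32(info).to_bytes(4, "big")

def crc_ok(word: bytes) -> bool:
    return zlib.crc32(word[:-4]) == int.from_bytes(word[-4:], "big")

def decode(received, llr_bp, osd, parity_ok):
    """Accept an estimate only if the parity checks AND the CRC both pass."""
    est = llr_bp(received)
    if parity_ok(est) and crc_ok(est):
        return est                # LLR-BP found a consistent codeword: halt
    est = osd(received)           # otherwise fall back to OSD decoding
    if parity_ok(est) and crc_ok(est):
        return est
    return None                   # error detected but not corrected
```

With identity stand-ins for the decoders, `decode(crc_append(b"data"), lambda r: r, lambda r: r, lambda c: True)` returns the codeword unchanged, while a corrupted word fails the CRC and triggers the OSD fallback, mirroring the scheme's intent.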

  • An Automatic Extraction Method of F0 Generation Model Parameters

    Shehui BU  Mikio YAMAMOTO  Shuichi ITAHASHI  

     
    PAPER-Speech and Hearing

      Vol:
    E89-D No:1
      Page(s):
    305-313

In this paper, a revised method is proposed to determine the parameters of an F0 generation model automatically from an observed F0 contour. Compared with the previous method, two points are revised. First, we relax the endpoint constraint in the dynamic programming method; in particular, we allow the timing of the first phrase command to be earlier than the beginning of the actual F0 pattern. Second, the z-transform method is introduced to convert the equation of the F0 model in order to simplify the calculation and save computation time. An experiment with 100 sentences spoken by two males and two females, selected from the speech database "ATR 503 sentences," shows that the proposed method is effective, as expected.

  • High Quality and Low Complexity Speech Analysis/Synthesis Based on Sinusoidal Representation

    Jianguo TAN  Wenjun ZHANG  Peilin LIU  

     
    LETTER-Speech and Hearing

      Vol:
    E88-D No:12
      Page(s):
    2893-2896

Sinusoidal representation has been widely applied to speech modification and low-bit-rate speech and audio coding. Usually, the speech signal is analyzed and synthesized using the overlap-add algorithm or the peak-picking algorithm, but the overlap-add algorithm is well known for its high computational complexity, and the peak-picking algorithm cannot track transient and syllabic variation well. In this letter, both algorithms are applied to speech analysis/synthesis. Peaks are picked in the power spectral density curve of the speech signal, and the corresponding frequencies are arranged in descending order of their power spectral densities. These frequencies are taken as candidate frequencies, whose amplitudes and initial phases are determined according to the least mean square error criterion. The sum of the extracted sinusoidal components is used to successively approximate the original speech signal. The results show that the proposed algorithm can track transient and syllabic variation and attains good synthesized speech with low computational complexity.
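The analysis/synthesis loop described above can be sketched as: pick the strongest peaks of the power spectrum, then fit each candidate frequency's amplitude and initial phase by least squares (a cos/sin design matrix gives exactly the least-mean-square-error amplitudes and phases) and sum the fitted sinusoids. The test signal, sample rate, and peak count below are illustrative assumptions.

```python
# Peak-picking sinusoidal analysis/synthesis sketch: candidate frequencies
# come from the strongest power-spectrum bins, and amplitudes/phases are
# fitted jointly by least squares, then the sinusoids are summed.
import numpy as np

def sinusoidal_resynth(x, fs, n_peaks):
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2                # power spectral density
    order = np.argsort(spec[1:])[::-1] + 1            # skip DC, strongest first
    freqs = np.fft.rfftfreq(n, 1 / fs)[order[:n_peaks]]
    t = np.arange(n) / fs
    # cos/sin columns per frequency -> linear least-mean-square-error fit
    A = np.column_stack([f(2 * np.pi * fq * t) for fq in freqs
                         for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coef                                   # sum of fitted sinusoids

fs = 8000
t = np.arange(256) / fs
x = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
y = sinusoidal_resynth(x, fs, n_peaks=4)
assert np.mean((x - y) ** 2) < np.mean(x ** 2)        # residual energy shrinks
```

Fitting all candidate amplitudes and phases jointly, rather than reading them off individual FFT bins, is what lets a few components approximate the frame closely at modest cost.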

  • Stego-Encoding with Error Correction Capability

    Xinpeng ZHANG  Shuozhong WANG  

     
    LETTER-Information Security

      Vol:
    E88-A No:12
      Page(s):
    3663-3667

Although a previously proposed steganographic encoding scheme can reduce the distortion caused by data hiding, it makes the system susceptible to active-warden attacks due to error spreading. Meanwhile, a straightforward application of error correction encoding inevitably increases the required number of bit alterations, so the risk of detection increases. To overcome both drawbacks, an integrated approach is introduced that combines stego-encoding and error correction encoding to provide enhanced robustness against active attacks and channel noise while maintaining good imperceptibility.

  • Avoiding the Local Minima Problem in Backpropagation Algorithm with Modified Error Function

    Weixing BI  Xugang WANG  Zheng TANG  Hiroki TAMURA  

     
    PAPER-Neural Networks and Bioengineering

      Vol:
    E88-A No:12
      Page(s):
    3645-3653

    One critical "drawback" of the backpropagation algorithm is the local minima problem. We have noted that the local minima problem in the backpropagation algorithm is usually caused by update disharmony between weights connected to the hidden layer and the output layer. To solve this kind of local minima problem, we propose a modified error function with two terms. By adding one term to the conventional error function, the modified error function can harmonize the update of weights connected to the hidden layer and those connected to the output layer. Thus, it can avoid the local minima problem caused by such disharmony. Simulations on some benchmark problems and a real classification task have been performed to test the validity of the modified error function.

  • Analysis of Bit Error Probability of Trellis Coded 8-PSK

    Hideki YOSHIKAWA  

     
    LETTER-Communication Theory

      Vol:
    E88-A No:10
      Page(s):
    2956-2959

This letter presents an analysis of the bit error probability of trellis-coded 8-ary phase shift keying modulation with 2-state soft-decision Viterbi decoding. It is shown that exact numerical error performance can be obtained for low signal-to-noise power ratios, where bounds are useless.

  • Weight and Stopping Set Distributions of Two-Edge Type LDPC Code Ensembles

    Ryoji IKEGAYA  Kenta KASAI  Yuji SHIMOYAMA  Tomoharu SHIBUYA  Kohichi SAKANIWA  

     
    PAPER-Coding Theory

      Vol:
    E88-A No:10
      Page(s):
    2745-2761

In this paper, we explicitly formulate the average weight and stopping set distributions and their asymptotic exponents for two-edge type LDPC code ensembles. We also show some characteristics of the weight distributions of two code ensembles, such as their symmetry and the conditions under which they vanish. Further, we investigate the relation between the two code ensembles from the perspective of the weight and stopping set distributions.

641-660 hits (of 1060)