
Keyword Search Result

[Keyword] error (1060 hits)

Results 1001-1020 of 1060

  • Unidirectional Byte Error Locating Codes

    Shuxin JIANG  Eiji FUJIWARA  

     
    PAPER

      Vol:
    E77-A No:8
      Page(s):
    1253-1260

    This paper proposes a new type of unidirectional error control code which indicates the location of unidirectional errors clustered within a b-bit length, i.e., a unidirectional byte error in b (b ≥ 2) bits. Single unidirectional b-bit byte error locating codes, called SUbEL codes, are first characterized by necessary and sufficient conditions, and then a code construction algorithm is demonstrated. The lower bound on the check bit length of the SUbEL codes is derived, and based on this the proposed codes are shown to be very efficient. Using the code design concept presented for the SUbEL codes, it is demonstrated that generalized unidirectional byte error locating codes are easily constructed.

  • Variable Error Controlling Schemes for Intelligent Error Controlling Systems

    Taroh SASAKI  Ryuji KOHNO  Hideki IMAI  

     
    PAPER

      Vol:
    E77-A No:8
      Page(s):
    1281-1288

    Recently, a great deal of research has been carried out on intelligent communication. If the final information sink is assumed to be a human being, a communication channel can be used more effectively when encoders/decoders work "intelligently", i.e., take into account the semantics of the information to be sent. We have been studying error-controlling systems that exploit differences in the importance of segmental information. The system divides the information input into segments to which individual importance can be assigned. The segments are individually encoded by appropriate error-correcting codes (ECCs) corresponding to their importance, chosen among codes with different error-correcting capabilities. When the importance of the information is systematically aligned, conventional UEP (unequal error protection) codes can be applied; here we treat the case in which the importance of the information source is not systematically aligned. Since the system uses multiple ECCs with different (n, k, d) parameters, information on the length of the next codeword is required for decoding. We propose error-controlling schemes using multiple ECCs; the first and second schemes use explicit codelength-identifying information. In the second scheme, information bits are sorted so that segments with the same importance can be encoded by an ECC with the same error-correcting capability. The third scheme is the main proposal of this paper and uses a Variable Capability Coding (VCC) scheme, which employs ECCs having different error-correcting capabilities and codelengths. A sequence encoded by the VCC can be separated into appropriate segments without explicit codelength-identifying information when the channel error probability is low. We then evaluate these schemes by code rate when (1) the error-correcting capability and (2) the codelength-identifying capability are the same. One feature of the VCC is the capability of recovering from propagated errors that arise when errors beyond the codelength-identifying capability occur and the proper beginning of a codeword is lost in the decoder. We also evaluate this capability as (3) resynchronizing capability.
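
    The following minimal Python sketch illustrates only the general idea of encoding segments with ECCs of different strengths chosen by importance; simple repetition codes stand in for the ECCs, and it is not the authors' VCC construction. The function names and importance levels are illustrative assumptions.

      # Illustrative only: repetition codes of different lengths stand in for ECCs
      # with different (n, k, d) parameters; the decoder is assumed to be told the
      # repetition factor, i.e., the codelength-identifying information that the
      # paper's VCC scheme tries to avoid sending explicitly.
      REPS_BY_IMPORTANCE = {0: 1, 1: 3, 2: 5}   # importance level -> repetition factor

      def encode_segment(bits, importance):
          reps = REPS_BY_IMPORTANCE[importance]
          return [b for b in bits for _ in range(reps)], reps

      def decode_segment(coded, reps):
          # Majority vote over each group of 'reps' copies corrects up to
          # (reps - 1) // 2 bit flips per information bit.
          return [1 if sum(coded[i:i + reps]) * 2 > reps else 0
                  for i in range(0, len(coded), reps)]

      segments = [([1, 0, 1, 1], 2), ([0, 1], 0), ([1, 1, 0], 1)]  # (bits, importance)
      for bits, importance in segments:
          coded, reps = encode_segment(bits, importance)
          assert decode_segment(coded, reps) == bits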

  • On Trellis Structure of LUEP Block Codes and a Class of UEP QPSK Block Modulation Codes

    Robert MORELOS-ZARAGOZA  

     
    PAPER

      Vol:
    E77-A No:8
      Page(s):
    1261-1266

    Recently there has been considerable interest in coded modulation schemes that offer multiple levels of error protection, that is, constructions of (block or convolutional) modulation codes in which the signal sequences associated with some message symbols are separated by a squared Euclidean distance larger than the minimum squared Euclidean distance (MSED) of the code. In this paper, the trellis structure of linear unequal-error-protection (LUEP) codes is analyzed. First, it is shown that LUEP codes have trellises that can be expressed as a direct product of the trellises of subcodes, or clouds. This particular trellis structure results from the cloud structure of LUEP codes in general. A direct consequence of this property is that searching for trellises with a parallel structure for a block modulation code may be useful not only in analyzing its structure and in simplifying its decoding, but also in determining its UEP capabilities. A basic 3-level 8-PSK block modulation code is analyzed from this new perspective and shown to offer two levels of error protection. To illustrate the trellis structure of an LUEP code, we analyze a trellis diagram for an extended (64,24) BCH code, which is a two-level LUEP code. Furthermore, we introduce a family of LUEP codes based on the |||-construction, using Reed-Muller (RM) codes as component codes. LUEP codes in this family have the advantage of a well-known trellis structure. Their application in constructing LUEP-QPSK modulation codes is presented, and their error performance over an AWGN channel is examined.

  • Performance Evaluation Method of Trellis Coded Modulation Scheme without Uniformity

    Haruo OGIWARA  Kazuo OOHIRA  

     
    PAPER

      Vol:
    E77-A No:8
      Page(s):
    1267-1273

    An encoder for trellis coded modulation (TCM) is composed of a linear convolutional encoder followed by a mapper to channel signals. A new condition is proposed under which the performance of the TCM can be evaluated on the basis of the 2^ν-state error state transition diagram, where ν is the number of delay elements in the convolutional encoder. Three similar methods have been proposed previously. This paper points out the restriction of the previous methods and proposes a new one. The condition under which the previous methods are applicable is called uniformity, namely that the error weight profile is independent of the encoder state. When uniformity does not hold, we divide an error state into substates based on the coset decomposition of the output vectors of the convolutional encoder. The coset is determined by a vector called the coset selector. If a condition defined as equal dividing holds, the subdivided states can be merged and the performance can be evaluated on the basis of the 2^ν-state transition diagram, even for codes without uniformity. When the transformation matrix from the input vector of the encoder to the coset selector vector has full row rank, the equal dividing condition holds under the assumption of an equally probable i.i.d. (independent and identically distributed) input sequence. For TCM schemes without uniformity (to which the previous methods cannot be applied), upper bounds on the bit error rate are evaluated by the proposed method and compared with simulation results. The difference is less than 10% at a bit error rate of 10^-4.

  • An Error-Controlling Scheme according to the Importance of Individual Segments of Model-Based Coded Facial Images

    Noriko SUZUKI  Taroh SASAKI  Ryuji KOHNO  Hideki IMAI  

     
    PAPER

      Vol:
    E77-A No:8
      Page(s):
    1289-1297

    This paper proposes and investigates an intelligent error-controlling scheme that adapts to the differing importance of segmental information. In particular, the scheme is designed for facial images encoded by model-based coding, which is a kind of intelligent compression coding. Intelligent communication systems deal with the contents of the information to be transmitted with extremely high compression and reliability. After highly efficient information compression by model-based coding, errors in the compressed information lead to severe semantic errors. The proposed scheme reduces semantic errors of the information for the receiver. In this paper, we consider an Action Unit (AU) as a segment of a model-based coded facial image and define the importance of each AU. According to its importance, an AU is encoded by an appropriate code among codes with different error-correcting capabilities. For encoding with different error-controlling codes, we use three kinds of constructions to obtain unequal error protection (UEP) codes. One of them is the direct sum construction, and the others are proposed constructions based on joint and double coding. These UEP codes can have a higher code rate than other UEP codes when the minimum Hamming distance is small. By using these UEP codes, the proposed intelligent error-controlling scheme can protect information segment by segment so as to reduce semantic errors compared with a conventional error-controlling scheme in which the information is uniformly protected by a single error-correcting code.

  • New Go-Back-N ARQ Protocols for Point-to-Multipoint Communications

    Hui ZHAO  Toru SATO  Iwane KIMURA  

     
    PAPER-Communication Theory

      Vol:
    E77-B No:8
      Page(s):
    1013-1022

    This paper presents new go-back-N ARQ protocols for point-to-multipoint communications over broadcast channels such as satellite or broadcast radio channels. In conventional go-back-N ARQ protocols for multidestination communications, usually only error detection codes are used and m copies of a frame are transmitted at a time. In one of our protocols, a bit-by-bit majority-voting decoder operating on all m copies of a frame is used to recover the transmitted frame. In another protocol, a hybrid-ARQ protocol, an error detection code concatenated with a repetition convolutional code with Viterbi decoding is used. In these protocols, a dynamic programming technique is used to select the optimal number of copies of a frame to be transmitted at a time. The optimal number is determined by the round-trip propagation delay of the channel, the error probability, and the number of receivers that have not yet received the message. Analytic expressions are derived for the throughput efficiency of the proposed protocols. The proposed point-to-multipoint protocols provide satisfactory throughput efficiency and perform considerably better than the conventional protocols under high error rate conditions, especially in environments with a large number of receivers and long round-trip delays. In this paper we analyze the performance of the proposed protocols under random-error channel conditions.
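
    As a rough illustration of the bit-by-bit majority-voting idea (not the paper's full protocol or its dynamic-programming copy selection), the following Python sketch combines m noisy copies of a frame received over a binary symmetric channel; the error probability and frame length are arbitrary assumptions.

      import random

      def bsc(frame, p):
          # Binary symmetric channel: each bit flips independently with probability p.
          return [b ^ (random.random() < p) for b in frame]

      def majority_vote(copies):
          # Bit-by-bit majority vote over the m received copies of the same frame.
          m = len(copies)
          return [1 if sum(bits) * 2 > m else 0 for bits in zip(*copies)]

      random.seed(1)
      frame = [random.randint(0, 1) for _ in range(64)]
      copies = [bsc(frame, p=0.1) for _ in range(5)]          # m = 5 copies
      recovered = majority_vote(copies)
      print(sum(a != b for a, b in zip(frame, recovered)), "residual bit errors")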

  • Voice Activity Detection and Transmission Error Control for Digital Cordless Telephone System

    Seishi SASAKI  Ichiro MATSUMOTO  Osamu WATANABE  Kenzo URABE  

     
    PAPER

      Vol:
    E77-B No:7
      Page(s):
    948-955

    Personal Handy Phone (PHP), the Japanese digital cordless telephone system, is being developed. The 32 kbit/s ADPCM (Adaptive Differential Pulse Code Modulation) codec has been standardized for PHP. This paper describes, firstly, advanced algorithms for a Voice Activity Detection (VAD) function that reduces the power dissipation of a digital cordless telephone terminal; secondly, a comfort noise generator that operates in conjunction with the VAD; and finally, a transmission error control based on the prediction coefficients generated in the ADPCM codec. These proposed algorithms function in the low signal-to-noise ratio (SNR) environment of personal radio communications. The quality of the reconstructed speech is influenced by VAD decision errors (false detection when no voice is present, or missed detection when voice is present), the similarity of the generated comfort noise to the actual background noise, and the transmission quality. Simulation results on the performance achieved by these algorithms are shown, and the required computational load is also given.
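
    The paper's VAD and comfort-noise algorithms are not reproduced here; as a hedged illustration of the general principle only, the Python sketch below declares voice activity when the short-term frame energy exceeds a noise-floor estimate by a margin, and fills inactive frames with noise shaped only by a level estimate. Thresholds, frame size, and sampling rate are arbitrary assumptions.

      import numpy as np

      FRAME_LEN = 160                      # 20 ms at 8 kHz sampling (assumed)

      def frame_energy_db(frame):
          return 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)

      def simple_vad(frame, noise_floor_db, margin_db=6.0):
          # "Voice" when the frame energy exceeds the noise-floor estimate by a margin.
          return frame_energy_db(frame) > noise_floor_db + margin_db

      def comfort_noise(noise_floor_db, length=FRAME_LEN):
          # Crude comfort-noise stand-in: white noise scaled to the estimated level.
          return np.random.randn(length) * 10 ** (noise_floor_db / 20.0)

      np.random.seed(0)
      noise_floor_db = -40.0
      silence = comfort_noise(noise_floor_db)
      speech = silence + 0.1 * np.sin(2 * np.pi * 440 * np.arange(FRAME_LEN) / 8000)
      print(simple_vad(silence, noise_floor_db), simple_vad(speech, noise_floor_db))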

  • A Study on the Performance Improvements of Error Control Schemes in Digital Cellular DS/CDMA Systems

    Ill-Woo LEE  Dong-Ho CHO  

     
    PAPER

      Vol:
    E77-B No:7
      Page(s):
    883-890

    In this paper, the average error-rate characteristics of digital cellular DS/CDMA (Direct Sequence/Code Division Multiple Access) systems are investigated as the number of users increases. Then, the performances of various error control schemes applied to the data service of digital cellular DS/CDMA systems are compared and analyzed. That is, the performances of conventional error control schemes such as Go-back-N ARQ (Automatic Repeat Request) and Selective-Repeat ARQ are analyzed in a digital cellular DS/CDMA environment. In addition, improved error control schemes that utilize a variable window size and/or a variable data packet size are proposed and evaluated in order to improve the performance of conventional error control schemes such as Quick-Repeat ARQ and WORM ARQ in digital cellular DS/CDMA environments. According to the simulation results, the performance of the improved scheme with variable window and variable frame size is superior to that of the conventional scheme in terms of throughput and delay characteristics, owing to its robustness to fading channel impairments.

  • Effects of Non-matched Receiver Filters on π/4-DQPSK Bit Error Rate in Rayleigh Fading

    Chun Sum NG  Tjeng Thiang TJHUNG  Fumiyuki ADACHI  

     
    PAPER-Radio Communication

      Vol:
    E77-B No:6
      Page(s):
    800-807

    The effect of intersymbol interference resulting from non-matched receiver filtering on the bit error rate (BER) performance of π/4-DQPSK systems, recently adopted in the North American and Japanese digital cellular standards, is analyzed in Rayleigh fading. With a Gaussian or a Butterworth (of order N, 2 ≤ N ≤ 10) receiver filter, the BER performance is found to degrade by only a small fraction of a decibel from the performance with an ideally matched receiver filter. A 4th-order Butterworth receiver filter leads to BER curves which almost coincide with those for the ideally matched filtering condition.

  • Variance Distribution of Reflection Coefficients in Six-Port Reflectometer

    Manabu KINOSHITA  Hajime SUZUKI  Toshiyuki YAKABE  Hatsuo YABE  

     
    PAPER

      Vol:
    E77-C No:6
      Page(s):
    930-934

    This paper discusses the effect of random errors in the power meter readings of a six-port reflectometer. With six-port techniques, the determination of the reflection coefficient (Γ) of a device under test is reduced to the problem of finding a common intersection of three circles in the complex plane. Since the intersections usually form a cluster due to measurement error, a single value must be extracted from the cluster, one candidate being the radical center of the three circles. Two types of methods are presented for determining Γ: one uses a linear solution for the radical center, and the other is a statistically based nonlinear solution. In order to improve measurement accuracy, the effect of random errors in the sidearm power meter readings and the influence of the q-point locations are investigated for each method. By adding a random variation of 0.5% to each of the three port power ratios, the variance distributions of Γ over the entire area of the Smith chart are simulated to compare the two solutions. The three-dimensional variance distribution chart reveals that only the nonlinear solution suffers a variance increase, which appears as a ridge-like peak along the lines of centers of the three circles. The computer simulations clarify that the measurement accuracy of the reflectometer depends on the value of Γ. A new type of six-port model is suggested which is unlikely to be affected by random errors in the nonlinear solution.
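
    The "linear solution for the radical center" mentioned above can be illustrated with a small computation: the radical center of three circles is found by intersecting two radical axes, which reduces to a 2x2 linear system. The Python sketch below is a generic geometric construction, not the paper's six-port calibration; the circle data are made up.

      import numpy as np

      def radical_center(circles):
          # circles: three (cx, cy, r) triples. The radical center is the point with
          # equal power with respect to all three circles; with noisy radii it acts
          # as a linear estimate of their common "intersection".
          (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = circles
          c = lambda x, y, r: x * x + y * y - r * r        # power-of-a-point constant
          A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                        [2 * (x3 - x1), 2 * (y3 - y1)]])
          b = np.array([c(x2, y2, r2) - c(x1, y1, r1),
                        c(x3, y3, r3) - c(x1, y1, r1)])
          return np.linalg.solve(A, b)

      np.random.seed(0)
      gamma = np.array([0.3, -0.2])                        # "true" reflection coefficient
      centers = np.array([[1.0, 0.0], [-0.5, 0.8], [0.2, -1.1]])
      radii = np.linalg.norm(centers - gamma, axis=1) * (1 + 0.005 * np.random.randn(3))
      circles = [(cx, cy, r) for (cx, cy), r in zip(centers, radii)]
      print(radical_center(circles))                       # close to (0.3, -0.2)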

  • Traffic Analysis of the Stop-and-Wait ARQ over A Markov Error Channel

    Masaharu KOMATSU  Chun-Xiang CHEN  Kozo KINOSHITA  

     
    PAPER-Communication Theory

      Vol:
    E77-B No:4
      Page(s):
    477-484

    Recently, the throughput performance of ARQ schemes has been analyzed over a Markov error channel. It has been shown that, for a given round-trip delay, the throughput of the Stop-and-Wait ARQ depends only on the overall average packet-error probability. In this paper, we exactly analyze the Stop-and-Wait ARQ scheme under the condition that the channel is slotted and packet errors occur according to a two-state Markov chain characterized by a decay factor. The distribution of the packet delay time and the channel usage factor are obtained. The analytical results and numerical examples show that, for a given round-trip delay, the average packet delay time and the channel usage factor depend on both the overall average packet-error probability and the decay factor characterizing the two-state Markov chain. Furthermore, the decay factor influences the average delay time and the channel usage factor differently depending on whether or not the round-trip delay is an even number of slots.
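
    The exact analysis is in the paper; the following simplified Monte-Carlo sketch in Python only illustrates the setting: packet errors driven by a two-state Markov (good/bad) chain, and a Stop-and-Wait sender that retransmits every round-trip delay until a packet gets through. All transition and error probabilities below are invented for illustration.

      import random

      def markov_packet_errors(n_slots, p_gb, p_bg, p_err_good, p_err_bad, seed=0):
          # Two-state Markov packet-error process: each slot is "good" or "bad", and a
          # packet sent in that slot is corrupted with the state's error probability.
          random.seed(seed)
          state, errs = "good", []
          for _ in range(n_slots):
              errs.append(random.random() < (p_err_good if state == "good" else p_err_bad))
              if state == "good" and random.random() < p_gb:
                  state = "bad"
              elif state == "bad" and random.random() < p_bg:
                  state = "good"
          return errs

      def stop_and_wait_mean_delay(errs, rtt_slots):
          # A packet is retransmitted every rtt_slots until a slot without error; the
          # delay is counted from its first transmission to successful delivery.
          delays, slot, first_tx = [], 0, 0
          while slot < len(errs):
              if not errs[slot]:
                  delays.append(slot + rtt_slots - first_tx)
                  first_tx = slot + rtt_slots
              slot += rtt_slots
          return sum(delays) / max(len(delays), 1)

      errs = markov_packet_errors(200000, p_gb=0.01, p_bg=0.1, p_err_good=0.001, p_err_bad=0.3)
      print("mean packet delay (slots):", stop_and_wait_mean_delay(errs, rtt_slots=8))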

  • Comparison of Classifiers in Small Training Sample Size Situations for Pattern Recognition

    Yoshihiko HAMAMOTO  Shunji UCHIMURA  Shingo TOMITA  

     
    LETTER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:3
      Page(s):
    355-357

    The main problem in statistical pattern recognition is to design a classifier. Many researchers point out that a finite number of training samples causes practical difficulties and constraints in designing a classifier. However, very little is known about the performance of a classifier in small training sample size situations. In this paper, we compare the classification performance of well-known classifiers (k-NN, Parzen, Fisher's linear, quadratic, modified quadratic, and Euclidean distance classifiers) when the number of training samples is small.
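
    To make the small-sample setting concrete, the Python sketch below compares two of the listed classifiers, the Euclidean distance (nearest-mean) classifier and 1-NN, on synthetic Gaussian data with few training samples and many dimensions; it is a toy illustration under assumed data, not the paper's experiment.

      import numpy as np

      def euclidean_classifier(X_tr, y_tr, X_te):
          # Nearest-mean classifier: assign each test point to the closest class mean.
          classes = np.unique(y_tr)
          means = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
          d = ((X_te[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
          return classes[d.argmin(axis=1)]

      def knn_classifier(X_tr, y_tr, X_te, k=1):
          # k-nearest-neighbour classifier with majority vote among the k neighbours.
          d = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(axis=2)
          neighbours = y_tr[d.argsort(axis=1)[:, :k]]
          return np.array([np.bincount(row).argmax() for row in neighbours])

      rng = np.random.default_rng(0)
      n_tr, dim = 10, 20                                   # few samples, many features
      X_tr = np.vstack([rng.normal(0.0, 1.0, (n_tr, dim)), rng.normal(0.7, 1.0, (n_tr, dim))])
      y_tr = np.repeat([0, 1], n_tr)
      X_te = np.vstack([rng.normal(0.0, 1.0, (500, dim)), rng.normal(0.7, 1.0, (500, dim))])
      y_te = np.repeat([0, 1], 500)
      for name, pred in [("nearest mean", euclidean_classifier(X_tr, y_tr, X_te)),
                         ("1-NN", knn_classifier(X_tr, y_tr, X_te))]:
          print(name, "error rate:", float((pred != y_te).mean()))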

  • Soft-Error Study of DRAMs with Retrograde Well Structure by New Evaluation Method

    Yoshikazu OHNO  Hiroshi KIMURA  Ken-ichiro SONODA  Tadashi NISHIMURA  Shin-ichi SATOH  Hirokazu SAYAMA  Shigenori HARA  Mikio TAKAI  Hirokazu MIYOSHI  

     
    PAPER-Device Technology

      Vol:
    E77-C No:3
      Page(s):
    399-405

    A new method for DRAM soft-error evaluation was developed. By using a focused proton microprobe as the radiation source and scanning it over a memory cell plane, the locally sensitive structure of memory cells against soft errors could be investigated in the form of a susceptibility map. Cell-mode and bit-line-mode soft errors could be clearly distinguished by controlling the incident location and the proton dose, and it was also found that an incident beam within 4 µm of the monitored memory cell caused soft errors. The retrograde well formed by MeV ion implantation technology was examined by this method. It was confirmed that the B+ layers in the retrograde well are a sufficient barrier against charge collection. The generation rate of electron-hole pairs and the charge collection into n+ layers with a retrograde well and with a conventional well were estimated by a device simulator, and these estimates explained the experimental results.

  • Study on Snow Attaching to the TACAN Antenna

    Yoshihiko KUWAHARA  Naohito OSHIDA  Yoshihiko MATSUZAWA  Mitsuo KATO  

     
    PAPER-Electronic and Radio Applications

      Vol:
    E77-B No:2
      Page(s):
    248-255

    A TACAN is located where there is no obstruction to its line-of-sight coverage. When it snows, its radome, particularly the windward side, becomes covered with snow. This partial snow attaching to the radome causes azimuth errors in the TACAN. In this paper, a simple computer simulation for estimating the azimuth error caused by such attached snow is proposed. We then check the simulation results against test results for the azimuth error due to a pseudo ice/snow layer and against field measurements. Finally, we propose a spherical radome to alleviate this problem, and its test results are presented. We believe this study is also applicable to radar antennas.

  • Comparison between a posteriori Error Indicators for Adaptive Mesh Generation in Semiconductor Device Simulation

    Katsuhiko TANAKA  Paolo CIAMPOLINI  Anna PIERANTONI  Giorgio BACCARANI  

     
    PAPER-Numerics

      Vol:
    E77-C No:2
      Page(s):
    214-219

    In order to achieve efficient and reliable prediction of device performance by numerical device simulation, a discretization mesh must be generated with an adequate, but not redundant, density of mesh points. However, manual mesh optimization requires trial and error by the user. This task places a considerable burden on the user, especially when the device operation is not well known, when the required mesh-point density strongly depends on the bias condition, or when manipulation of the mesh is difficult, as is expected in 3D. Since these situations often arise in designing advanced VLSI devices, it is highly desirable to optimize the mesh automatically. Adaptive meshing techniques realize automatic optimization by refining the mesh according to the discretization error estimated from the solution. The performance of mesh optimization depends on the a posteriori error indicators adopted to evaluate the discretization error. In particular, to obtain a precise terminal-current value, a reliable error indicator for the current continuity equation is necessary. In this paper, adaptive meshing based on the current continuity equation is investigated. A heuristic error indicator is proposed, and a methodology is presented for extending a theoretical error indicator proposed for the finite element method to the requirements of device simulation. The theoretical indicator is based on the energy norm of the flux-density error and is applicable to both the Poisson and current continuity equations regardless of the mesh-element shape. These error indicators have been incorporated into the adaptive-mesh device simulator HFIELDS, and their practicality is examined by MOSFET simulation. Both indicators can produce a mesh with sufficient node density in the channel region, and precise drain current values are obtained on the optimized meshes. The theoretical indicator is superior because it provides better optimization performance and is applicable to general mesh elements.
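
    As a loose illustration of refining a mesh from an a posteriori indicator (in one dimension, and not the paper's HFIELDS indicators), the Python sketch below uses the jump of the discrete flux across element boundaries as a heuristic indicator and bisects the most offending elements; the test function and refinement fraction are arbitrary assumptions.

      import numpy as np

      def flux_jump_indicator(x, u):
          # The discrete flux du/dx is constant on each element; large jumps of the
          # flux across interior nodes indicate locally poor resolution.
          flux = np.diff(u) / np.diff(x)        # one value per element
          jumps = np.abs(np.diff(flux))         # one value per interior node
          eta = np.zeros(len(x) - 1)            # indicator per element
          eta[:-1] += jumps
          eta[1:] += jumps
          return eta

      def refine(x, eta, frac=0.3):
          # Bisect the elements whose indicator lies in the top 'frac' fraction.
          threshold = np.quantile(eta, 1.0 - frac)
          midpoints = [(x[i] + x[i + 1]) / 2 for i in range(len(eta)) if eta[i] >= threshold]
          return np.sort(np.concatenate([x, midpoints]))

      solution = lambda x: np.tanh(20.0 * (x - 0.5))       # sharp internal layer
      x = np.linspace(0.0, 1.0, 11)
      for _ in range(3):
          x = refine(x, flux_jump_indicator(x, solution(x)))
      print(len(x), "mesh points, clustered around x = 0.5")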

  • A Method for Estimating the Mean-Squared Error of Distributed Arithmetic

    Jun TAKEDA  Shin-ichi URAMOTO  Masahiko YOSHIMOTO  

     
    PAPER-Digital Signal Processing

      Vol:
    E77-A No:1
      Page(s):
    272-280

    It is important for LSI system designers to estimate computational errors when designing LSIs for numeric computation. Both for predicting the errors at an early stage of design and for choosing a hardware configuration that achieves a target performance, it is desirable that the errors can be estimated from a minimum of parameters. This paper presents a theoretical error analysis of multiply-accumulation implemented by distributed arithmetic (DA) and proposes a new method for estimating the mean-squared error. DA is a method of implementing the multiply-accumulation defined as an inner product of an input vector and a fixed coefficient vector. Using a ROM which stores partial products, DA calculates the output by accumulating the partial products bit-serially. As DA uses no parallel multipliers, it needs a smaller chip area than methods using parallel multipliers; thus DA is effectively utilized in LSI implementations of digital signal processing systems which require multiply-accumulation. It has been known that, if the input data are uniformly distributed, the mean-squared error of the multiply-accumulation implemented by DA is a function of only the word lengths of the input, the output, and the ROM. The proposed method can calculate the mean-squared error using the same parameters even when the input data are not uniformly distributed. The basic idea is to regard the input data as a combination of uniformly distributed partial data with different word lengths. The mean-squared error can then be predicted as a weighted sum of the contributions of the partial data, where each weight is the ratio of the partial data to the total input data. Finally, the method is applied to a two-dimensional inverse discrete cosine transform (IDCT), and its practicality is confirmed by computer simulations of the IDCT implemented by DA.
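
    To make the ROM-based, bit-serial mechanism concrete, the Python sketch below implements exact distributed arithmetic for unsigned inputs (the finite-word-length quantization that the paper's error analysis is actually about is omitted); the coefficients, inputs, and word length are arbitrary assumptions.

      def build_rom(coeffs):
          # The ROM stores, for every possible address, the sum of the coefficients
          # selected by the address bits (one address bit per input).
          K = len(coeffs)
          return [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
                  for addr in range(2 ** K)]

      def da_inner_product(rom, xs, width):
          # Bit-serial distributed arithmetic: for each bit plane of the unsigned
          # inputs (LSB first), look up the partial product in the ROM and
          # accumulate it with the appropriate binary weight.
          acc = 0
          for b in range(width):
              addr = sum(((x >> b) & 1) << i for i, x in enumerate(xs))
              acc += rom[addr] << b
          return acc

      coeffs = [3, -5, 7, 2]                       # fixed coefficient vector
      xs = [13, 200, 77, 5]                        # unsigned 8-bit input vector
      rom = build_rom(coeffs)
      assert da_inner_product(rom, xs, width=8) == sum(c * x for c, x in zip(coeffs, xs))
      print(da_inner_product(rom, xs, width=8))    # -412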

  • Throughput Performances of ARQ Protocols Operating over Generalized Two-State Markov Error Channel

    Masaharu KOMATSU  Yukuo HAYASHIDA  Kozo KINOSHITA  

     
    PAPER-Communication Theory

      Vol:
    E77-B No:1
      Page(s):
    35-42

    In this paper, we analyze the throughput of the Stop-and-Wait and Go-back-N ARQ schemes over an unreliable channel modeled by a two-state Markov process. In general, the block error probabilities in the two states are different. From analytical results and numerical examples, we show that the throughput of the Stop-and-Wait ARQ scheme depends only on the overall average error probability, while that of the Go-back-N ARQ scheme depends on the characteristics of the Markov process.

  • Optimal Redundancy of Systems for Minimizing the Probability of Dangerous Errors

    Kyoichi NAKASHIMA  Hitoshi MATZNAGA  

     
    PAPER-Reliability and Safety

      Vol:
    E77-A No:1
      Page(s):
    228-236

    For systems in which the probability that an incorrect output is observed differs with the input value, we adopt the redundant usage of n copies of identical systems, which we call the n-redundant system. This paper presents a method for finding the optimal redundancy of systems that minimizes the probability of dangerous errors (D-errors). First, it is proved that a k-out-of-n redundancy, or a mixture of two kinds of k-out-of-n redundancies, minimizes the probability of D-errors under the condition that the probability of output errors, including both dangerous errors and safe errors, is below a specified value. Next, an algorithm is given for finding the optimal series-parallel redundancy of systems by using properties of the distance between two structure functions.
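
    A minimal numerical illustration of how the choice of k trades dangerous errors against safe errors in a k-out-of-n arrangement is given below; it assumes independent copies, a single binary output, and made-up per-copy error probabilities, and it is not the paper's optimization algorithm.

      from math import comb

      def tail(n, k, p):
          # P[at least k of n independent copies err], each erring with probability p.
          return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))

      def k_out_of_n_error_probs(n, k, p0, p1):
          # The redundant system outputs 1 iff at least k of the n copies output 1.
          # Dangerous error: correct output is 0 but the system outputs 1 -> at least
          # k copies err.  Safe error: correct output is 1 but the system outputs 0
          # -> at least n - k + 1 copies err.
          return tail(n, k, p0), tail(n, n - k + 1, p1)

      n, p0, p1 = 5, 1e-3, 1e-2                    # assumed per-copy error probabilities
      for k in range(1, n + 1):
          p_danger, p_safe = k_out_of_n_error_probs(n, k, p0, p1)
          print(f"k={k}:  P(dangerous) = {p_danger:.2e}   P(safe) = {p_safe:.2e}")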

  • An Error-Correcting Version of the Leiss's Parser for Context-Free Languages

    Ken-ichi KURODA  Eiichi TANAKA  

     
    LETTER-Automaton, Language and Theory of Computing

      Vol:
    E76-D No:12
      Page(s):
    1528-1531

    This paper describes an error-correcting parser (ec-parser) for context-free languages that is an extension of Leiss's parser. Since the ec-parser uses precomputed information and a pruning technique based on lookahead, it is always faster than Lyon's parser. Several examples are shown.

  • A Hybrid-ARQ Protocol with Adaptive Rate Error Control

    Hui ZHAO  Toru SATO  Iwane KIMURA  

     
    PAPER-Information Theory and Coding Theory

      Vol:
    E76-A No:12
      Page(s):
    2095-2101

    This paper presents an adaptive-rate error control scheme for digital communication over time-varying channels. A cyclic code with majority-logic decoding is used in a cascaded way as an inner code to create a simple and powerful hybrid-ARQ error control scheme. The inner code is used only for error correction, while the outer code is used for both error correction and error detection. When an error is detected, retransmission is requested. Unsuccessful packets are not discarded as in conventional schemes, but are combined with their retransmitted copies. Approximations for the throughput efficiency and the undetectable error probability are given. High reliability coupled with a simple high-speed implementation makes the scheme suitable for high-data-rate error control over both stationary and nonstationary channels. Adaptive error control becomes the best solution for time-varying channels when the optimum code is selected according to the actual channel conditions to enhance system performance. The main feature of this system is that the basic structure of the encoder and decoder need not be modified as the error-correction capability of the code increases. Results of a comparative analysis show that the proposed scheme outperforms other similar ARQ protocols.
