
Keyword Search Result

[Keyword] error (1060 hits)

Results 841-860 of 1060

  • Optimal Grid Pattern for Automated Camera Calibration Using Cross Ratio

    Chikara MATSUNAGA  Yasushi KANAZAWA  Kenichi KANATANI  

     
    PAPER-Image Processing

      Vol:
    E83-A No:10
      Page(s):
    1921-1928

    With a view to virtual studio applications, we design an optimal grid pattern such that the observed image of a small portion of it can be matched to its corresponding position in the pattern easily. The grid shape is so determined that the cross ratio of adjacent intervals is different everywhere. The cross ratios are generated by an optimal Markov process that maximizes the accuracy of matching. We test our camera calibration system using the resulting grid pattern in a realistic setting and show that the performance is greatly improved by applying techniques derived from the designed properties of the pattern.
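
    The cross-ratio idea that underlies the matching step can be illustrated in a few lines. The sketch below is not the authors' code; it simply computes the cross ratio of four collinear grid-line positions and checks that the value survives a 1-D projective map (the reason a small observed portion of the pattern can be located in the whole grid).

```python
def cross_ratio(p0, p1, p2, p3):
    # Cross ratio of four collinear points (1-D coordinates), using the
    # convention ((p2 - p0)(p3 - p1)) / ((p2 - p1)(p3 - p0)); other
    # orderings appear in the literature.
    return ((p2 - p0) * (p3 - p1)) / ((p2 - p1) * (p3 - p0))

def project(x, a=2.0, b=1.0, c=0.3, d=1.0):
    # 1-D projective map x -> (ax + b)/(cx + d), a stand-in for
    # perspective imaging of points on a grid line.
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 3.0, 6.0]        # grid-line positions: intervals 1, 2, 3
before = cross_ratio(*pts)        # 1.25
after = cross_ratio(*[project(x) for x in pts])
print(before, after)              # equal up to rounding: projective invariance
```

    Because the cross ratio is invariant, making it different for every run of adjacent intervals (as the abstract describes) gives each local neighbourhood of the grid a signature that survives the camera projection.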

  • Local Maxima Error Intensity Functions and Its Application to Time Delay Estimator in the Presence of Shot Noise Interference

    Joong-Kyu KIM  

     
    PAPER-General Fundamentals and Boundaries

      Vol:
    E83-A No:9
      Page(s):
    1844-1852

    This paper concentrates on a model useful for analyzing the error performance of M-estimators of a single unknown signal parameter: the error intensity model. We develop the point process representation for the estimation error, the conditional distribution of the estimator, and the distribution of the error candidate point process. The error intensity function is then defined as the probability density of the estimate, and its general form is derived. We compute the explicit form of the intensity functions based on the local maxima model of the error-generating point process. While the methods described in this paper are applicable to any estimation problem with continuous parameters, our main application is time delay estimation. Specifically, we consider the case where coherent impulsive interference is present in addition to the Gaussian noise. Based on numerical simulation results, we compare the error intensity models in terms of the accuracy of both error probability and mean squared error (MSE) predictions, and also discuss extendibility to multiple parameter estimation.

  • A Fast Correction Method for Erroneous Sentences Using the LR Parsing

    Masami SHISHIBORI  Kazuaki ANDO  Yuuichirou KASHIWAGI  Jun-ichi AOE  

     
    PAPER-Natural Language Processing

      Vol:
    E83-D No:9
      Page(s):
    1797-1804

    Natural language interface systems can accept less restricted queries from users than other systems; however, they cannot understand erroneous sentences containing syntax errors, unknown words, and misspellings. To realize a superior natural language interface, automatic correction of erroneous sentences is one of the problems to be solved. Applying LR parsing strategies is one well-known approach to robust error recovery. This method achieves high correction accuracy, but it takes a great deal of time to parse a sentence, so improving its time cost is a very important task. In this paper, we propose a method that improves time efficiency while keeping the correction accuracy of the traditional method. The method makes use of a new parsing table that records the states to be entered after accepting each symbol. Using this table, the symbol located just after the error position can be utilized for selecting correction symbols; as a result, the number of candidates produced during the correction process is reduced and a fast system can be realized. Experimental results on 1,050 sentences containing character errors show that this method corrects error points 69 times faster than the traditional method while keeping the same correction accuracy.

  • Performance Analyses of Notch Fourier Transform (NFT) and Constrained Notch Fourier Transform (CNFT)

    Yegui XIAO  Takahiro MATSUO  Katsunori SHIDA  

     
    PAPER-Digital Signal Processing

      Vol:
    E83-A No:9
      Page(s):
    1739-1747

    Fourier analysis of sinusoidal and/or quasi-periodic signals in additive noise has been used in various fields. So far, many analysis algorithms, including the well-known DFT, have been developed. In particular, many adaptive algorithms have been proposed to handle non-stationary signals whose discrete Fourier coefficients (DFCs) are time-varying. The Notch Fourier Transform (NFT) and Constrained Notch Fourier Transform (CNFT), proposed by Tadokoro et al. and Kilani et al., respectively, are two of them; both are implemented by filter banks and estimate the DFCs via simple sliding algorithms of their own. This paper presents, for the first time, statistical performance analyses of the NFT and the CNFT. Estimation biases and mean square errors (MSEs) of their sliding algorithms are derived in closed form. As a result, it is revealed that both algorithms are unbiased, and that their estimation MSEs are related to the signal frequencies, the additive noise variance, and the orders of the comb filters used in their filter banks. Extensive simulations are performed to confirm the analytical findings.

  • Blind Separation of Sources: Methods, Assumptions and Applications

    Ali MANSOUR  Allan Kardec BARROS  Noboru OHNISHI  

     
    SURVEY PAPER

      Vol:
    E83-A No:8
      Page(s):
    1498-1512

    The blind separation of sources is a recent and important problem in signal processing. Since 1984 it has been studied by many authors, and many algorithms have been proposed. This paper discusses the problem, its assumptions, its current applications, and some of the principal algorithms and ideas.

  • Code Synchronization Error Control Scheme by Correlation of Received Sequence in Phase Rotating Modulation

    Hideki YOSHIKAWA  Ikuo OKA  Chikato FUJIWARA  

     
    PAPER

      Vol:
    E83-B No:8
      Page(s):
    1873-1879

    It is known that cycle slip due to frequency-selective fading causes a burst error through symbol deletion or insertion, and has a serious effect on mobile radio communication systems. In this paper, we first show that phase rotating modulation is suitable for code synchronization error detection. Next, we consider a code synchronization controller using a correlation estimator of the received sequence, and combine the estimator with 2π/3-shifted modulation to construct a new code synchronization error control scheme that acts as a cycle slip cancelling system. Furthermore, we apply the scheme to multilevel trellis coded modulation (TCM). Finally, computer simulation results confirm that the proposed scheme is capable of code synchronization error correction.

  • Weighted OFDM for Wireless Multipath Channels

    Homayoun NIKOOKAR  Ramjee PRASAD  

     
    PAPER

      Vol:
    E83-B No:8
      Page(s):
    1864-1872

    In this paper the novel method of "weighted OFDM" is addressed. Different types of weighting factors (including Rectangular, Bartlett, Gaussian, Raised cosine, Half-sin and Shannon) are considered. The impact of weighting on the peak-to-average power ratio (PAPR) of OFDM is investigated by means of simulation and compared across the above-mentioned weighting factors. Results show that weighting the OFDM signal reduces the PAPR. The bit error performance of weighted multicarrier transmission over a multipath channel is also investigated. Results indicate that weighting involves a trade-off between PAPR reduction and bit error performance degradation.
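
    The PAPR measurement described above is easy to reproduce in miniature. The sketch below is an illustrative assumption, not the authors' simulation: 64 QPSK subcarriers and two of the weighting factors named in the abstract, with the PAPR of one weighted OFDM symbol computed directly from its time-domain samples.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # number of subcarriers (assumed)

# Random QPSK symbols on the subcarriers
X = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
x = np.fft.ifft(X)                        # one OFDM time-domain symbol

def papr_db(sig):
    # Peak-to-average power ratio in dB
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

n = np.arange(N)
windows = {                               # two of the weighting factors above
    "rectangular": np.ones(N),
    "half-sin": np.sin(np.pi * (n + 0.5) / N),
}
for name, w in windows.items():
    print(f"{name:12s} PAPR = {papr_db(x * w):.2f} dB")
```

    Averaging such measurements over many random symbols would reproduce the kind of comparison the paper reports; the bit-error side of the trade-off requires a channel model as well.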

  • Tradeoffs between Error Performance and Decoding Complexity in Multilevel 8-PSK Codes with UEP Capabilities and Multistage Decoding

    Motohiko ISAKA  Robert H. MORELOS-ZARAGOZA  Marc P. C. FOSSORIER  Shu LIN  Hideki IMAI  

     
    PAPER-Coding Theory

      Vol:
    E83-A No:8
      Page(s):
    1704-1712

    In this paper, we investigate multilevel coding and multistage decoding for satellite broadcasting with moderate decoding complexity. An unconventional signal set partitioning is used to achieve unequal error protection capabilities. Two possibilities are shown and analyzed for practical systems: (i) linear block component codes with near optimum decoding, (ii) punctured convolutional component codes with a common trellis structure.

  • Channel State Dependent Resource Scheduling for Wireless Message Transport with Framed ALOHA-Reservation Access Protocol

    Masugi INOUE  

     
    PAPER

      Vol:
    E83-A No:7
      Page(s):
    1338-1346

    Channel-state-dependent (CSD) radio-resource scheduling algorithms for wireless message transport using a framed ALOHA-reservation access protocol are presented. In future wireless systems that provide Mbps-class high-speed wireless links using high frequencies, burst packet errors, which last a certain number of packets in time, would cause serious performance degradation. CSD resource scheduling algorithms utilize channel-state information for increasing overall throughput. These algorithms were comparatively evaluated in terms of average allocation plus transfer delay, average throughput, variance in throughput, and utilization of resources. Computer simulation results showed that the CSD mechanism has a good effect, especially on equal sharing (ES)-based algorithms, and also CSD-ES provides low allocation plus transfer delay, high average throughput, low variance in throughput, and efficient utilization of radio resources.

  • Fault Tolerance in Decentralized Systems

    Brian RANDELL  

     
    INVITED PAPER

      Vol:
    E83-B No:5
      Page(s):
    903-907

    In a decentralised system the problems of fault tolerance, and in particular error recovery, vary greatly depending on the design assumptions. For example, in a distributed database system, if one disregards the possibility of undetected invalid inputs or outputs, the errors that have to be recovered from will just affect the database, and backward error recovery will be feasible and should suffice. Such a system is typically supporting a set of activities that are competing for access to a shared database, but which are otherwise essentially independent of each other--in such circumstances conventional database transaction processing and distributed protocols enable backward recovery to be provided very effectively. But in more general systems the multiple activities will often not simply be competing against each other, but rather will at times be attempting to co-operate with each other, in pursuit of some common goal. Moreover, the activities in decentralised systems typically involve not just computers, but also external entities that are not capable of backward error recovery. Such additional complications make the task of error recovery more challenging, and indeed more interesting. This paper provides a brief analysis of the consequences of various such complications, and outlines some recent work on advanced error recovery techniques that they have motivated.

  • Realization of Admissibility for Supervised Learning

    Akira HIRABAYASHI  Hidemitsu OGAWA  Akiko NAKASHIMA  

     
    PAPER-Biocybernetics, Neurocomputing

      Vol:
    E83-D No:5
      Page(s):
    1170-1176

    In supervised learning, one of the major learning methods is memorization learning (ML). Since it reduces only the training error, ML does not guarantee good generalization capability in general. When ML is used, however, acquiring good generalization capability is expected. This usage of ML was interpreted by one of the present authors, H. Ogawa, as a means of realizing 'true objective learning' which directly takes generalization capability into account, and introduced the concept of admissibility. If a learning method can provide the same generalization capability as a true objective learning, it is said that the objective learning admits the learning method. Hence, if admissibility does not hold, making it hold becomes important. In this paper, we introduce the concept of realization of admissibility, and devise a realization method of admissibility of ML with respect to projection learning which directly takes generalization capability into account.

  • A Roman-Chinese Character Conversion System Correcting Pinyin Spell Errors with Application to the Chinese FEP

    Bin YE  Hirotada KAWAKAMI  Tadahiro MATSUMOTO  Munehiro GOTO  

     
    PAPER-Artificial Intelligence, Cognitive Science

      Vol:
    E83-D No:5
      Page(s):
    1153-1159

    Spelling Pinyin correctly is not easy, even for native Chinese speakers. Therefore, Pinyin-based Chinese character (Kanji) input systems, including Chinese word processors, are not easy to use. This paper proposes an FEP (Front End Processor) for Pinyin-based input systems that tolerates the user's slight mistakes due to unfamiliarity with the spelling or to dialect. The FEP uses the structural similarity of Kanji to confirm the correct Pinyin.

  • IFS Optimization Using Discrete Parameter Pools

    Hiroyuki HONDA  Miki HASEYAMA  Hideo KITAJIMA  

     
    PAPER-Image Processing, Image Pattern Recognition

      Vol:
    E83-D No:2
      Page(s):
    233-241

    This paper proposes an Iterated Function System (IFS) coding method that reduces the effects of quantization errors in the IFS parameters. The proposed method skips the conventional analog-parameter search and directly selects optimum IFS parameters from pools of discrete IFS parameters. In conventional IFS-based image coding, the IFS parameters are quantized after their optimum analog values are determined; the image reconstructed from the quantized parameters is degraded by errors traced back to quantization errors amplified in the iterated mappings. Simulation results demonstrate the effectiveness of this new approach over the conventional method.

  • Aggressive Packet Combining for Error Control in Wireless Networks

    Yiu-Wing LEUNG  

     
    PAPER-Radio Communication

      Vol:
    E83-B No:2
      Page(s):
    380-385

    In uplink data communication in wireless networks, a portable computer may retransmit a packet multiple times before the base station receives a correct one. Each retransmission consumes communication bandwidth and battery energy of the portable computer. Therefore, it is desirable to reduce the number of retransmissions. In this paper, we propose the aggressive packet combining scheme for this purpose. The base station executes this scheme to combine multiple erroneous copies as follows: (1) perform bit-by-bit majority voting on the erroneous copies to produce a combined packet, (2) identify the least reliable bits in this combined packet, and (3) search for the correct bit pattern over these bits. The base station may thereby recover the correct packet, reducing the mean number of retransmissions. The proposed scheme has several advantages: (1) it is more powerful than the majority packet combining scheme, (2) it can complement many existing ARQ protocols to improve their performance, (3) it adds no additional bits to the packet and hence consumes no extra bandwidth in the wireless channel, and (4) it is executed only by the base station, so the portable transceiver can be kept simple. Simulation results show that the proposed scheme is more bandwidth-efficient and energy-efficient than the majority packet combining scheme.
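
    The three combining steps can be sketched directly. The toy example below is an assumption-laden illustration, not the authors' implementation: CRC-32 stands in for whatever error-detecting code the packet actually carries, and the search width k is arbitrary.

```python
import itertools
import zlib

def to_bytes(bits):
    # Pack a bit list (length a multiple of 8) into bytes
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))

def combine(copies, crc, k=4):
    # Aggressive packet combining over several erroneous copies.
    # copies: equal-length bit lists; crc: CRC-32 of the correct packet;
    # k: number of least reliable bits to search (illustrative width).
    n, m = len(copies[0]), len(copies)
    ones = [sum(c[i] for c in copies) for i in range(n)]
    # (1) bit-by-bit majority voting
    voted = [1 if 2 * ones[i] > m else 0 for i in range(n)]
    # (2) least reliable bits = smallest voting margin
    weak = sorted(range(n), key=lambda i: abs(2 * ones[i] - m))[:k]
    # (3) exhaustive search over the weak positions, checked against the CRC
    for pattern in itertools.product((0, 1), repeat=len(weak)):
        trial = voted[:]
        for pos, bit in zip(weak, pattern):
            trial[pos] = bit
        if zlib.crc32(to_bytes(trial)) == crc:
            return trial
    return None                          # give up and request retransmission

truth = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
crc = zlib.crc32(to_bytes(truth))

def flip(bits, *pos):
    out = bits[:]
    for p in pos:
        out[p] ^= 1
    return out

# Three erroneous copies; bit 3 is wrong in a majority of them, so plain
# majority voting alone would fail
copies = [flip(truth, 3), flip(truth, 3, 7), flip(truth, 9)]
print(combine(copies, crc) == truth)     # True
```

    The example shows why the scheme is stronger than plain majority combining: the voted packet is still wrong at bit 3, but that bit has a small voting margin, so the bounded search over the least reliable positions recovers the packet without a further retransmission.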

  • A New Vector Error Measurement Scheme for Transmit Modulation Accuracy of OFDM Systems

    Satoru HORI  Tomoaki KUMAGAI  Tetsu SAKATA  Masahiro MORIKURA  

     
    PAPER

      Vol:
    E82-B No:12
      Page(s):
    1906-1913

    This paper proposes a new vector error measurement scheme for orthogonal frequency division multiplexing (OFDM) systems that is used to define transmit modulation accuracy. The transmit modulation accuracy is defined to guarantee inter-operability among wireless terminals. In OFDM systems, the transmit modulation accuracy measured by the conventional vector error measurement scheme cannot guarantee inter-operability because of the effect of phase noise. To overcome this problem, the proposed vector error measurement scheme utilizes pilot signals in multiple OFDM symbols to compensate for the phase rotation caused by the phase noise. Computer simulation results show that the vector error measured by the proposed scheme uniquely corresponds to the C/N degradation in packet error rate even if phase noise exists in the OFDM signals. This means that the proposed vector error measurement scheme makes it possible to define the transmit modulation accuracy and thus guarantee inter-operability among wireless terminals.
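
    The pilot-aided idea can be sketched numerically. The example below is illustrative only, not the paper's measurement procedure: QPSK subcarriers, arbitrary pilot positions, and a single common phase rotation standing in for phase noise. It compares the vector error with and without derotating by the phase estimated from the pilots.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sc = 52                                  # subcarriers per symbol (assumed)
pilots = np.array([5, 18, 33, 46])         # pilot positions (illustrative)

ref = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_sc)))  # QPSK

# Received symbol: a common phase error (phase-noise stand-in) plus AWGN
rx = ref * np.exp(1j * 0.2) + 0.02 * (rng.standard_normal(n_sc)
                                      + 1j * rng.standard_normal(n_sc))

def evm(rx, ref):
    # Root-mean-square vector error, normalized to the reference power
    return np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))

raw = evm(rx, ref)                         # conventional measurement

# Pilot-aided measurement: estimate the common phase from the pilots,
# derotate the whole symbol, then measure the vector error
phi = np.angle(np.sum(rx[pilots] * np.conj(ref[pilots])))
comp = evm(rx * np.exp(-1j * phi), ref)

print(f"vector error: {raw:.3f} raw, {comp:.3f} pilot-compensated")
```

    In this toy setting the uncompensated figure is dominated by the phase rotation, while the pilot-compensated figure reflects only the additive noise, which is the property the proposed scheme relies on.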

  • Theoretical and Approximate Derivation of Bit Error Rate in DS-CDMA Systems under Rician Fading Environment

    Fumihito SASAMORI  Fumio TAKAHATA  

     
    PAPER

      Vol:
    E82-A No:12
      Page(s):
    2660-2668

    The transmission quality in mobile wireless communications is affected not only by thermal noise but also by multipath fading, which drastically changes the amplitude and phase of the received signal. This paper proposes theoretical and approximate methods for deriving the average bit error rate in DS-CDMA systems under a Rician fading environment, on the assumption of frequency non-selective fading, with the number of simultaneously accessing stations, the maximum Doppler frequency, and so on as parameters. Agreement of the theoretical and approximate results with simulation confirms that the proposed approach is applicable to a variety of system parameters.
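
    The paper derives the bit error rate analytically. As a rough illustration only, a Monte Carlo estimate for coherently detected BPSK over flat Rician fading can be sketched as follows; this is a single-user simplification (no spreading, no multiple-access interference), not the paper's DS-CDMA analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def rician_gain(k_factor, n):
    # Complex flat-fading gains with Rician K = LOS power / scattered power
    los = np.sqrt(k_factor / (k_factor + 1))
    nlos = np.sqrt(1.0 / (2 * (k_factor + 1)))
    return los + nlos * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def ber_bpsk(ebn0_db, k_factor, n=200_000):
    # Monte Carlo BER of coherently detected BPSK over flat Rician fading
    ebn0 = 10.0 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n)
    s = 2.0 * bits - 1.0
    h = rician_gain(k_factor, n)
    noise = np.sqrt(1 / (2 * ebn0)) * (rng.standard_normal(n)
                                       + 1j * rng.standard_normal(n))
    r = h * s + noise
    detected = (np.real(r * np.conj(h)) > 0).astype(int)
    return float(np.mean(detected != bits))

for k in (0, 5, 20):                 # K = 0 is Rayleigh; large K nears AWGN
    print(f"K = {k:2d}: BER at Eb/N0 = 10 dB -> {ber_bpsk(10.0, k):.5f}")
```

    The error rate falls as the Rician K factor grows, which is the qualitative behaviour any closed-form derivation for this channel must reproduce.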

  • A High-Speed, Low-Power Phase Frequency Detector and Charge-Pump Circuits for High Frequency Phase-Locked Loops

    Won-Hyo LEE  Sung-Dae LEE  Jun-Dong CHO  

     
    PAPER

      Vol:
    E82-A No:11
      Page(s):
    2514-2520

    In this paper, we introduce a high-speed, low-power Phase-Frequency Detector (PFD) designed using a modified TSPC (True Single-Phase Clock) positive edge-triggered D flip-flop. The proposed PFD has a simple structure, using only 19 transistors. Its operating range extends beyond 1.4 GHz without additional prescaler circuits. Furthermore, the PFD has a dead zone of less than 0.01 ns in its phase characteristics and low phase sensitivity errors. The phase and frequency error detection range is not limited as in the case of the pt-type and nc-type PFDs, and the PFD is independent of the duty cycle of the input signals. We also present a new charge-pump circuit based on a charge amplifier. A stand-by current in the proposed charge-pump circuit enhances its speed and removes the charge sharing that causes phase noise in the charge-pump PLL. Furthermore, the effect of clock feedthrough is reduced by separating the output stage from the up and down signals. Simulation results based on a third-order PLL are presented to verify the lock-in process with the proposed PFD and charge-pump circuits. The proposed PFD and charge-pump circuits are designed in 0.8 µm CMOS technology with a 5 V supply voltage.

  • Determination of Error Values for Decoding Hermitian Codes with the Inverse Affine Fourier Transform

    Chih-Wei LIU  

     
    LETTER-Information Theory and Coding Theory

      Vol:
    E82-A No:10
      Page(s):
    2302-2305

    With knowledge of the syndromes S_{a,b}, 0 ≤ a,b ≤ q-2, the exact error values cannot be determined by using the conventional (q-1)^2-point discrete Fourier transform in the decoding of a plane algebraic-geometric code over GF(q). In this letter, the inverse q-point 1-dimensional and q^2-point 2-dimensional affine Fourier transforms over GF(q) are presented for retrieving the actual error values, but they require considerable computational effort. To save computational complexity, a modification of the affine Fourier transform is derived by using the property of the rational points of the plane Hermitian curve. The modified transform, which has almost the same computational complexity as the conventional discrete Fourier transform, requires knowledge of the syndromes S_{a,b}, 0 ≤ a,b ≤ q-2, and three more extended syndromes S_{q-1,q-1}, S_{0,q-1}, and S_{q-1,0}.

  • A Study on Performances of Soft-Decision Decoding Algorithm Based on Energy Minimization Principle

    Akira SHIOZAKI  Yasushi NOGAWA  Tomokazu SATO  

     
    LETTER-Coding Theory

      Vol:
    E82-A No:10
      Page(s):
    2194-2198

    We previously proposed a soft-decision decoding algorithm for cyclic codes based on the energy minimization principle. This letter presents an algorithm that improves the decoding performance and decoding complexity of the previous method by providing more initial positions and introducing a new criterion for terminating the decoding procedure. Computer simulation results show that both the decoded block error rate and the decoding complexity are lower with this method than with the previous one.

  • Iterative Processing for Improving Decode Quality in Mobile Multimedia Communications

    Shoichiro YAMASAKI  Hirokazu TANAKA  Atsushi ASANO  

     
    PAPER-Communication Systems

      Vol:
    E82-A No:10
      Page(s):
    2096-2104

    Multimedia communications over mobile networks suffer from fluctuating channel degradation. Conventional error handling schemes consist of first-stage error correction decoding in the wireless interface and second-stage error correction decoding in the multimedia demultiplexer, where the second-stage decoding result is not used to improve the first-stage decoding performance. To meet the requirements of more powerful error protection, we propose iterative soft-input/soft-output error correction decoding for multimedia communications, in which the likelihood output generated by the error correction decoding in the multimedia demultiplexer is fed back to the decoding in the wireless interface and the decoding procedure is iterated. Performance was evaluated by MPEG-4 video transmission simulations over mobile channels.
