
Keyword Search Result

[Keyword] maximum entropy (15 hits)

1-15 hits
  • Efficient RFID Data Cleaning in Supply Chain Management

    Hua FAN  Quanyuan WU  Jianfeng ZHANG  

     
    LETTER-Artificial Intelligence, Data Mining

    Vol: E96-D No:7  Page(s): 1557-1560

    Despite improvements in the accuracy of RFID readers, erroneous readings such as missed reads and ghost reads still occur. In this letter, we propose two effective models, a Bayesian inference-based decision model and a path-based detection model, to increase the accuracy of RFID data cleaning in RFID-based supply chain management. In addition, the maximum entropy model is introduced to determine the sliding window size. Experimental results validate the performance of the proposed method and show that it cleans raw RFID data with higher accuracy.
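    As a toy illustration of the Bayesian decision step (not the authors' model), the sketch below assumes that a tag which is present is detected in each read cycle with probability p_read, that a ghost read occurs with probability p_ghost, and computes the posterior probability of presence over a window of w cycles; all parameter values are hypothetical.

```python
from math import comb

def posterior_present(k, w, p_read=0.8, p_ghost=0.01, prior=0.5):
    """Posterior probability that a tag is physically present, given k positive
    reads in a sliding window of w read cycles (simple binomial likelihoods)."""
    like_present = comb(w, k) * p_read ** k * (1 - p_read) ** (w - k)
    like_absent = comb(w, k) * p_ghost ** k * (1 - p_ghost) ** (w - k)
    joint_present = like_present * prior
    return joint_present / (joint_present + like_absent * (1 - prior))

# e.g. only 2 positive reads in a 10-cycle window: most likely noise, not a real tag
print(posterior_present(k=2, w=10))
```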

  • A Novel Discriminative Method for Pronunciation Quality Assessment

    Junbo ZHANG  Fuping PAN  Bin DONG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Speech and Hearing

    Vol: E96-D No:5  Page(s): 1145-1151

    In this paper, we present a novel method for automatic pronunciation quality assessment. Unlike the popular “Goodness of Pronunciation” (GOP) method, this method does not map decoding confidence to a pronunciation quality score but directly discriminates between utterances of different pronunciation quality. In this method, the student's utterance is decoded twice. The first decoding obtains the time boundaries of each phone in the utterance by forced alignment with a conventionally trained acoustic model (AM). The second decoding differentiates the pronunciation quality of each triphone using a specially trained AM, in which triphones of different pronunciation quality are modeled as separate units and the model is trained discriminatively so that it best discriminates triphones that share the same name but have different pronunciation quality scores. Because the decoding network in the second pass contains these quality-specific triphones, phone-level scores can be read directly from the decoding result. The phone-level scores are then combined into sentence-level scores using the maximum entropy criterion. The experimental results show that the scoring performance improved significantly compared with the GOP method, especially at the sentence level.
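    The sentence-level combination can be pictured with a maximum entropy (multinomial logistic regression) classifier over summary features of the phone-level scores. The sketch below uses synthetic data and invented features (mean, minimum, fraction of low-scoring phones), so it only illustrates the general technique, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # multinomial logistic regression = ME classifier

rng = np.random.default_rng(0)

def sentence_features(phone_scores):
    """Summarise a variable-length sequence of phone-level quality scores."""
    s = np.asarray(phone_scores, dtype=float)
    return [s.mean(), s.min(), float((s < 0.5).mean())]

# synthetic training data: each utterance has a sentence-level label in {0, 1, 2}
# and phone scores drawn around a level-dependent centre
labels = rng.integers(0, 3, size=300)
utts = [np.clip(rng.normal(0.3 + 0.25 * y, 0.15, size=rng.integers(5, 20)), 0, 1) for y in labels]

X = np.array([sentence_features(u) for u in utts])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# score a new utterance from its phone-level scores
print(clf.predict([sentence_features([0.9, 0.85, 0.95, 0.7, 0.8])]))  # expect the top class
```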

  • Improving the Performance of the Hough Detector in Search Radars

    Ali MOQISEH  Mahdi HADAVI  Mohammad M. NAYEBI  

     
    PAPER-Sensing

    Vol: E94-B No:1  Page(s): 273-281

    In this paper, the inherent problem of the Hough transform when applied to search radars is considered. This problem makes the detection probability of a target depend on the length of the target line in the data space in addition to the received SNR from it. It is shown that this problem results in a non-uniform distribution of noise power in the parameter space. In other words, noise power in some regions of the parameter space is greater than in others. Therefore, the detection probability of the targets covered by these regions will decrease. Our solution is to modify the Hough detector to remove the problem. This modification uses non-uniform quantization in the parameter space based on the Maximum Entropy Quantization method. The details of implementing the modified Hough detector in a search radar are presented according to this quantization method. Then, it is shown that by using this method the detection performance of the target will not depend on its length in the data space. The performance of the modified Hough detector is also compared with the standard Hough detector by considering their probability of detection and probability of false alarm. This comparison shows the performance improvement of the modified detector.
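    A minimal sketch of the quantization idea, assuming the usual reading of Maximum Entropy Quantization as equal-probability binning: bin edges are placed at quantiles of the noise distribution in the parameter space so that every cell receives the same noise mass. The triangular noise profile below is invented purely for illustration.

```python
import numpy as np

def max_entropy_bin_edges(samples, n_bins):
    """Bin edges giving each bin equal probability mass; the entropy of the
    quantized variable is maximized when all bins are equally likely."""
    return np.quantile(samples, np.linspace(0.0, 1.0, n_bins + 1))

# toy illustration: noise mapped into the Hough parameter space piles up
# non-uniformly (a triangular profile is assumed here for the example)
rng = np.random.default_rng(1)
rho_noise = rng.triangular(left=0.0, mode=0.0, right=1.0, size=100_000)

uniform_edges = np.linspace(0.0, 1.0, 9)
maxent_edges = max_entropy_bin_edges(rho_noise, 8)

print(np.histogram(rho_noise, uniform_edges)[0])  # strongly unequal noise counts per cell
print(np.histogram(rho_noise, maxent_edges)[0])   # roughly equal counts per cell
```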

  • Semantic Classification of Bio-Entities Incorporating Predicate-Argument Features

    Kyung-Mi PARK  Hae-Chang RIM  

     
    LETTER-Natural Language Processing

    Vol: E91-D No:4  Page(s): 1211-1214

    In this paper, we propose new external context features for the semantic classification of bio-entities. In previous approaches, the words in the left or right context of a bio-entity are frequently used as external context features. However, in our prior experiments, external contexts in a flat representation did not improve performance. In this study, we incorporate predicate-argument features into the training of a maximum entropy (ME) based classifier. Through parsing and argument identification, we recognize biomedical verbs that have argument relations with the constituents containing a bio-entity, and then use the predicate-argument structures as external context features. Extracting the predicate-argument features involves two identification tasks: biomedically salient word identification, which determines whether or not a word is biomedically salient, and target verb identification, which identifies biomedical verbs that have argument relations with the constituents containing a bio-entity. Experiments show that the performance of semantic classification in the bio domain can be improved by utilizing such predicate-argument features.

  • Automatic Acquisition of Qualia Structure from Corpus Data

    Ichiro YAMADA  Timothy BALDWIN  Hideki SUMIYOSHI  Masahiro SHIBATA  Nobuyuki YAGI  

     
    PAPER

    Vol: E90-D No:10  Page(s): 1534-1541

    This paper presents a method to automatically acquire a given noun's telic and agentive roles from corpus data. These relations form part of the qualia structure assumed in the generative lexicon, where the telic role represents a typical purpose of the entity and the agentive role represents the origin of the entity. Our proposed method employs a supervised machine-learning technique which makes use of template-based contextual features derived from token instances of each noun. The output of our method is a ranked list of verbs for each noun, across the different qualia roles. We also propose a variant of Spearman's rank correlation to evaluate the correlation between two top-N ranked lists. Using this correlation measure, we demonstrate the ability of the proposed method to identify qualia structure relative to a conventional template-based method.
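    For intuition, a simplified stand-in for such a top-N rank comparison (not necessarily the variant proposed in the paper) is plain Spearman correlation computed over the items shared by the two lists; the verb lists below are hypothetical.

```python
from scipy.stats import spearmanr

def topn_rank_correlation(list_a, list_b):
    """Spearman's rho restricted to items that appear in both top-N lists
    (a simplified stand-in for the paper's top-N variant)."""
    common = [v for v in list_a if v in list_b]
    if len(common) < 2:
        return 0.0
    ranks_a = [list_a.index(v) for v in common]
    ranks_b = [list_b.index(v) for v in common]
    return float(spearmanr(ranks_a, ranks_b)[0])

# hypothetical example: verbs ranked for the telic role of "knife", system output vs. a gold list
system_ranking = ["cut", "slice", "chop", "hold", "wash"]
gold_ranking = ["cut", "chop", "slice", "carve", "sharpen"]
print(topn_rank_correlation(system_ranking, gold_ranking))
```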

  • Zero-Anaphora Resolution in Chinese Using Maximum Entropy

    Jing PENG  Kenji ARAKI  

     
    PAPER-Natural Language Processing

    Vol: E90-D No:7  Page(s): 1092-1102

    In this paper, we propose a learning classifier based on maximum entropy (ME) for resolving zero-anaphora in Chinese text. Besides regular grammatical, lexical, positional and semantic features motivated by previous research on anaphora resolution, we develop two innovative Web-based features for extracting additional semantic information from the Web. The values of the two features can be obtained easily by querying the Web using some patterns. Our study shows that our machine learning approach is able to achieve an accuracy comparable to that of state-of-the-art systems. The Web as a knowledge source can be incorporated effectively into the ME learning framework and significantly improves the performance of our approach.

  • Word Error Rate Minimization Using an Integrated Confidence Measure

    Akio KOBAYASHI  Kazuo ONOE  Shinichi HOMMA  Shoei SATO  Toru IMAI  

     
    PAPER-Speech and Hearing

    Vol: E90-D No:5  Page(s): 835-843

    This paper describes a new criterion for speech recognition that uses an integrated confidence measure to minimize the word error rate (WER). Conventional criteria for WER minimization obtain the expected WER of a sentence hypothesis merely by comparing it with the other hypotheses in an n-best list. The proposed criterion estimates the expected WER by using an integrated confidence measure together with word posterior probabilities for a given acoustic input. The integrated confidence measure, implemented as a classifier based on maximum entropy (ME) modeling or support vector machines (SVMs), is used to acquire probabilities reflecting whether the word hypotheses are correct. The classifier combines a variety of confidence measures and can deal with a temporal sequence of them to attain a more reliable confidence. Our proposed criterion achieved a WER of 9.8%, a 3.9% relative reduction over conventional n-best rescoring methods, in transcribing Japanese broadcast news under various conditions such as noisy field and spontaneous speech.
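    The rescoring idea can be sketched as follows, assuming per-word correctness probabilities are already available from the confidence classifier: the expected number of word errors in a hypothesis is the sum of (1 - confidence), and the hypothesis minimizing it is selected. The toy confidence values below are invented, and length normalization and insertion/deletion effects are ignored.

```python
def expected_word_errors(hypothesis, confidence):
    """Expected number of word errors: sum of (1 - P(word is correct)),
    with P coming from the integrated confidence classifier."""
    return sum(1.0 - confidence(word) for word in hypothesis)

def rescore_nbest(nbest, confidence):
    """Pick the hypothesis whose expected word-error count is smallest."""
    return min(nbest, key=lambda hyp: expected_word_errors(hyp, confidence))

# toy usage; a real confidence function would be an ME or SVM classifier fed with
# word posterior probabilities and other per-word confidence features
toy_confidence = {"this": 0.95, "is": 0.9, "a": 0.8, "test": 0.65, "text": 0.4}
nbest_list = [["this", "is", "a", "text"], ["this", "is", "a", "test"]]
print(rescore_nbest(nbest_list, lambda w: toy_confidence.get(w, 0.5)))
```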

  • A Probabilistic Sentence Reduction Using Maximum Entropy Model

    Minh LE NGUYEN  Masaru FUKUSHI  Susumu HORIGUCHI  

     
    PAPER-Natural Language Processing

    Vol: E88-D No:2  Page(s): 278-288

    This paper describes a new probabilistic sentence reduction method based on a maximum entropy model. In contrast to previous methods, the proposed method can produce multiple best results for a given sentence, which is useful in text summarization applications. Experimental results show that the proposed method improves on earlier methods in both accuracy and computation time.

  • An Integrated Dialogue Analysis Model for Determining Speech Acts and Discourse Structures

    Won Seug CHOI  Harksoo KIM  Jungyun SEO  

     
    PAPER-Natural Language Processing

    Vol: E88-D No:1  Page(s): 150-157

    Analysis of speech acts and discourse structures is essential to a dialogue understanding system because speech acts and discourse structures are closely tied to the speaker's intention. However, it has been difficult to infer a speech act and a discourse structure from a surface utterance because they depend heavily on the context of the utterance. We propose a statistical dialogue analysis model that determines both discourse structures and speech acts using a maximum entropy model. The model can automatically acquire probabilistic discourse knowledge from an annotated dialogue corpus, and it can analyze speech acts and discourse structures within one framework. In experiments, the model showed better performance than previous approaches.

  • A Statistical Model for Identifying Grammatical Relations in Korean Sentences

    Songwook LEE  

     
    PAPER-Natural Language Processing

    Vol: E87-D No:12  Page(s): 2863-2871

    This study aims to identify grammatical relations (GRs) in Korean sentences. The key task is to find the GRs in sentences in terms of such GR categories as subject, object, and adverbial. In doing so, we face both structural ambiguity and grammatical relational ambiguity. We propose a statistical model that first resolves the grammatical relational ambiguity and then resolves the structural ambiguity by using the probabilities of the GRs given the noun phrases and verb phrases in a sentence. The proposed model exploits characteristics of the Korean language such as distance, no-crossing, and case property. Experiments show that considering these characteristics produces better results than ignoring them. We further enhance the system by estimating the probabilities of the proposed model with the Maximum Entropy (ME) model and with Support Vector Machine (SVM) classifiers, and we confirm experimentally that the SVM classifiers improve the performance of the proposed model. In an experiment with a tree- and GR-tagged corpus for training the model, we achieved overall accuracies of 84.8%, 94.1%, and 84.8% in identifying subject, object, and adverbial relations, respectively.

  • An Approximate Analysis of a Shared Buffer ATM Switch Using Input Process Aggregation

    Jisoo KIM  Chi-Hyuck JUN  

     
    PAPER-Switching and Communication Processing

    Vol: E82-B No:12  Page(s): 2107-2115

    A shared buffer ATM switch loaded with bursty input traffic is modeled by a discrete-time queueing system; unbalanced and correlated routing traffic patterns are also considered. An approximation method for analyzing the queueing system is developed. To cope with the size of the state space involved, the entire switching system is decomposed into several subsystems, each of which is analyzed in isolation. We first propose an efficient algorithm for superposing all the individual bursty cell arrival processes to the switch. The maximum entropy method is then applied to obtain the steady-state probability distribution of the queueing system. From the obtained steady-state probabilities, performance measures such as cell loss probability and average delay can be derived. Numerical examples of the proposed approximation method are given and compared with simulation results.
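    The maximum entropy step can be illustrated in isolation: with only a normalization constraint and a known mean queue length, the maximum-entropy distribution of the queue length is a truncated geometric, p(n) proportional to x^n. The sketch below recovers it numerically for arbitrary example values; it is not the paper's full analysis.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_queue_distribution(mean_length, buffer_size):
    """Maximum-entropy distribution of the queue length on {0, ..., buffer_size}
    subject to a known mean: p(n) proportional to x**n (truncated geometric)."""
    n = np.arange(buffer_size + 1)

    def mean_for(x):
        w = x ** n
        p = w / w.sum()
        return float((n * p).sum())

    # find the geometric parameter that reproduces the target mean
    x = brentq(lambda x: mean_for(x) - mean_length, 1e-9, 1e3)
    w = x ** n
    return w / w.sum()

p = maxent_queue_distribution(mean_length=4.0, buffer_size=64)
print(float((np.arange(65) * p).sum()))  # the target mean is recovered (about 4.0)
```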

  • Symbol Error Probability of Time Spread PPM Signals in the Presence of Co-channel Interference

    Jinsong DUAN  Ikuo OKA  Chikato FUJIWARA  

     
    PAPER-Communication Theory

    Vol: E81-B No:1  Page(s): 66-72

    Time spread (TS) pulse position modulation (PPM) signals have been proposed for CDMA applications, where envelope detection is employed instead of coherent detection for easier PPM synchronization. In this paper, a new method of deriving the symbol error probability (SEP) of TS PPM signals in the presence of interference is introduced. The analysis is based on the moment technique: the maximum entropy criterion for estimating an unknown probability density function (PDF) from its moments is applied to evaluate the PDF of the envelope detector output. Numerical results of the SEP are shown for 4-, 8- and 16-PPM in the practical range of signal-to-noise power ratio (SNR) and for signal-to-interference power ratios (SIR) of 5, 10 and 20 dB. The SEP given by the union bound is also shown for comparison. The results indicate that when the PPM multilevel number is small, the union bound is close to the SEP obtained by the proposed method, but as the number of levels increases, the gap between the bound and the proposed method widens. The effect of the central frequency offset of the TS filter is evaluated as an illustrative example.
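    A minimal sketch of maximum entropy PDF estimation from moments, under the standard formulation p(x) = exp(-sum_k lambda_k x^k)/Z on a finite support: the multipliers are found by minimizing the convex dual log Z(lambda) + sum_k lambda_k mu_k. The support and target moments below are arbitrary examples, not the detector statistics of the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def maxent_pdf(moments, support=(0.0, 10.0)):
    """Density p(x) = exp(-sum_k lam_k * x**k) / Z on a finite support whose first
    moments match the targets (moments[k-1] = E[x**k]); solved via the convex dual."""
    K = len(moments)

    def unnormalized(x, lam):
        return np.exp(-sum(l * x ** (k + 1) for k, l in enumerate(lam)))

    def dual(lam):
        Z, _ = quad(lambda x: unnormalized(x, lam), *support)
        return np.log(Z) + sum(l * m for l, m in zip(lam, moments))

    lam = minimize(dual, x0=np.zeros(K), method="Nelder-Mead").x
    Z, _ = quad(lambda x: unnormalized(x, lam), *support)
    return lambda x: unnormalized(x, lam) / Z

# toy usage: the first two moments of a unit-mean exponential (E[x]=1, E[x^2]=2)
p = maxent_pdf([1.0, 2.0])
print(quad(lambda x: x * p(x), 0.0, 10.0)[0])  # approximately 1.0
```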

  • Human Sleep Electroencephalogram Analysis Based on The Instantaneous Maximum Entropy Method

    Sunao UCHIDA  Yumi TAKIZAWA  Nobuhide HIRAI  Makio ISHIGURO  

     
    PAPER

    Vol: E80-A No:6  Page(s): 965-970

    Analysis of the electroencephalogram (EEG) is presented for sleep physiology. The analysis is performed with the Instantaneous Maximum Entropy Method (IMEM), which was introduced by the author. The appearance and duration of characteristic waves in the EEG are not steady, and the behaviour of these waves with respect to sleep epochs is analyzed. The analysis clarified the behaviour of the waves as follows: (a) the time-dependent frequency of continuous alpha-rhythm oscillations was observed precisely, and sleep spindles were detected clearly within NREM sleep, with their time, frequency, and peak-energy parameters specified; (b) delta waves with very low frequencies and sleep spindles were observed simultaneously; and (c) the relationship between sleep spindles and delta waves was detected for the first time, showing a negative correlation along the time axis. The IMEM analysis was found to be effective compared with conventional analysis methods such as the FFT and bandpass filter banks.

  • Analysis Method of Nonstationary Waveforms Based on a Modulation Model

    Yumi TAKIZAWA  Atsushi FUKASAWA  

     
    PAPER

    Vol: E80-A No:6  Page(s): 951-957

    An analysis method is proposed for nonstationary waveforms. A model of a nonstationary waveform is first given: the waveform is represented by multiple oscillations, and the instantaneous phase angle of each oscillation is written as the sum of three terms, a predictive component, a residual component, and an initial phase constant. With this model, waveform analysis reduces to estimating the frequency and calculating the residual phase in the instantaneous phase angle. The Instantaneous Maximum Entropy Method (IMEM) is utilized for frequency estimation. The residual phase angle is obtained with the Vandermonde matrix and the condition of continuity of the phase angle within an n-neighbourhood. Another analysis method based on the normalization of waveform parameters is also proposed. The proposed methods are evaluated using artificially composed waveform signals, and the analysis provides novel and useful insights.

  • Adaptive Array Antenna Based on Spatial Spectral Estimation Using Maximum Entropy Method

    Minami NAGATSUKA  Naoto ISHII  Ryuji KOHNO  Hideki IMAI  

     
    PAPER

    Vol: E77-B No:5  Page(s): 624-633

    An adaptive array antenna is a useful tool for combating fading in mobile communications. If the arrival angles and signal-to-noise ratios (SNR) of the desired and undesired signals can be estimated accurately, the optimal weight coefficients can be obtained directly, without iterative updating over temporal samples. The Maximum Entropy Method (MEM) can estimate the arrival angles and the SNR from signals spatially sampled by an array antenna more precisely than the Discrete Fourier Transform (DFT). This paper therefore proposes and investigates an adaptive array antenna based on spatial spectral estimation using MEM, which we call the MEM array. To reduce the implementation complexity, we also propose a modified algorithm that uses temporal updating. Furthermore, we propose a method that both improves estimation accuracy and reduces the number of antenna elements: the arrival angles are estimated approximately by using temporal sampling instead of spatial sampling. Computer simulations evaluate the MEM array in comparison with the DFT array and the LMS array, and show the improvement achieved by the modified algorithm and the performance of the improved method.
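    Since the MEM spectrum coincides with an autoregressive (AR) spectrum, the spatial estimation idea can be sketched with Yule-Walker AR fitting of the spatial autocorrelation, assuming a uniform linear array with half-wavelength spacing; the array size, model order and signal scenario below are invented for illustration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def mem_spatial_spectrum(snapshots, order, d_over_lambda=0.5,
                         angles_deg=np.linspace(-90.0, 90.0, 361)):
    """Maximum-entropy (autoregressive) spatial spectrum of array snapshots.
    snapshots: complex array of shape (num_snapshots, num_elements)."""
    x = np.asarray(snapshots)
    n_elem = x.shape[1]
    # spatial autocorrelation estimates for lags 0..order
    r = np.array([np.mean(x[:, m:] * np.conj(x[:, :n_elem - m]))
                  for m in range(order + 1)])
    # Yule-Walker equations give the AR coefficients and prediction-error power
    a = solve_toeplitz(r[:order], -r[1:order + 1])
    sigma2 = np.real(r[0] + np.dot(np.conj(r[1:order + 1]), a))
    # evaluate the AR spectrum over the electrical angle phi(theta)
    phi = 2.0 * np.pi * d_over_lambda * np.sin(np.deg2rad(angles_deg))
    A = 1.0 + sum(a[k] * np.exp(-1j * (k + 1) * phi) for k in range(order))
    return angles_deg, sigma2 / np.abs(A) ** 2

# toy check: one plane wave from +20 degrees on an 8-element half-wavelength array
rng = np.random.default_rng(0)
n_elem, n_snap = 8, 200
steering = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(np.deg2rad(20.0)))
amp = rng.normal(size=(n_snap, 1)) + 1j * rng.normal(size=(n_snap, 1))
noise = 0.3 * (rng.normal(size=(n_snap, n_elem)) + 1j * rng.normal(size=(n_snap, n_elem)))
angles, spectrum = mem_spatial_spectrum(amp * steering + noise, order=4)
print(angles[np.argmax(spectrum)])  # should peak near 20 degrees
```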