
Keyword Search Result

[Keyword] hypothesis test (16 hits)

1-16 of 16 hits
  • Testing Homogeneity for Normal Mixture Models: Variational Bayes Approach

    Natsuki KARIYA  Sumio WATANABE  

     
    PAPER-Information Theory

    Vol: E103-A No:11  Page(s): 1274-1282

    The test of homogeneity for normal mixtures has been used in various fields, but its theoretical understanding is limited because the parameter set for the null hypothesis corresponds to singular points in the parameter space. In this paper, we shed light on this issue from a new perspective, variational Bayes, and offer a theory for testing homogeneity based on it. The stochastic behavior of the variational free energy, which is necessary for constructing a hypothesis test, has not been revealed by conventional theory. We clarify it for the first time and construct a new test based on it. Numerical experiments show the validity of our results.
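
    A minimal sketch of the idea behind such a test, using scikit-learn's variational inference for Gaussian mixtures; the bootstrap threshold calibration shown here is an illustrative stand-in, not the authors' theoretical construction.

```python
# Minimal sketch: compare the variational free energies (negative ELBOs) of a
# one-component and a two-component Bayesian mixture fit, and calibrate the
# rejection threshold by simulating under the null. Illustrative only; the
# paper characterizes the null behavior of this statistic theoretically.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def free_energy(x, n_components):
    model = BayesianGaussianMixture(n_components=n_components,
                                    max_iter=500, random_state=0)
    model.fit(x.reshape(-1, 1))
    return -model.lower_bound_        # variational free energy (up to scaling)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=300)    # observed sample (here generated under H0)

stat = free_energy(x, 1) - free_energy(x, 2)   # large values favor heterogeneity

# Parametric bootstrap under H0 (single normal with the fitted mean and sd).
null_stats = []
for _ in range(30):
    xb = rng.normal(x.mean(), x.std(), size=x.size)
    null_stats.append(free_energy(xb, 1) - free_energy(xb, 2))

threshold = np.quantile(null_stats, 0.95)      # 5% significance level
print("reject homogeneity" if stat > threshold else "retain homogeneity")
```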

  • A Unified Approach to Error Exponents for Multiterminal Source Coding Systems

    Shigeaki KUZUOKA  

     
    PAPER-Shannon theory

    Vol: E101-A No:12  Page(s): 2082-2090

    Two kinds of problems - multiterminal hypothesis testing and one-to-many lossy source coding - are investigated in a unified way. It is demonstrated that a simple key idea, which is developed by Iriyama for one-to-one source coding systems, can be applied to multiterminal source coding systems. In particular, general bounds on the error exponents for multiterminal hypothesis testing and one-to-many lossy source coding are given.

  • The Error Exponent of Zero-Rate Multiterminal Hypothesis Testing for Sources with Common Information

    Makoto UEDA  Shigeaki KUZUOKA  

     
    PAPER-Shannon Theory

    Vol: E98-A No:12  Page(s): 2384-2392

    The multiterminal hypothesis testing problem with zero-rate constraint is considered. For this problem, an upper bound on the optimal error exponent is given by Shalaby and Papamarcou, provided that the positivity condition holds. Our contribution is to prove that Shalaby and Papamarcou's upper bound is valid under a weaker condition: (i) two remote observations have a common random variable in the sense of Gács and Körner, and (ii) when the value of the common random variable is fixed, the conditional distribution of the remaining random variables satisfies the positivity condition. Moreover, a generalization of the main result is also given.

  • Non-coherent Power Decomposition-Based Energy Detection for Cooperative Spectrum Sensing in Cognitive Radio Networks

    Bingxuan ZHAO  Shigeru SHIMAMOTO  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

    Vol: E95-B No:1  Page(s): 234-242

    As the fundamental component of dynamic spectrum access, implementing spectrum sensing is one of the most important goals in cognitive radio networks due to its key functions of protecting licensed primary users from harmful interference and identifying spectrum holes for the improvement of spectrum utilization. However, its performance is generally compromised by the interference from adjacent primary channels. To cope with such interference and improve detection performance, this paper proposes a non-coherent power decomposition-based energy detection method for cooperative spectrum sensing. Due to its use of power decomposition, interference cancellation can be applied in energy detection. The proposed power decomposition does not require any prior knowledge of the primary signals. The power detection with its interference cancellation can be implemented indirectly by solving a non-homogeneous linear equation set with a coefficient matrix that involves only the distances between primary transmitters and cognitive secondary users (SUs). The optimal number of SUs for sensing a single channel and the number of channels that can be sensed simultaneously are also derived. The simulation results show that the proposed method is able to cope with the expected interference variation and achieve higher probability of detection and lower probability of false alarm than the conventional method in both hard combining and soft combining scenarios.
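
    A minimal numerical sketch of the decomposition step as described: stacking the received powers at the secondary users gives a non-homogeneous linear system whose coefficient matrix depends only on distances, and solving it recovers the per-transmitter powers. The path-loss model, exponent, and all numbers below are illustrative assumptions, not values from the paper.

```python
# Sketch: recover per-transmitter powers from total received powers at several
# SUs by solving A @ p = r, where A is built only from distances.
import numpy as np

rng = np.random.default_rng(1)
alpha = 3.5                                   # assumed path-loss exponent
d = rng.uniform(50, 500, size=(6, 3))         # distances: 6 SUs x 3 transmitters (m)
A = d ** (-alpha)                             # coefficient matrix of path gains

p_true = np.array([2.0, 0.0, 1.5])            # transmit powers (one channel idle)
noise_floor = 1e-12
r = A @ p_true + noise_floor                  # total powers measured at the SUs

p_hat, *_ = np.linalg.lstsq(A, r - noise_floor, rcond=None)
print(np.round(p_hat, 3))                     # decomposed per-transmitter powers

# Energy detection per channel: threshold each decomposed power instead of the
# raw, interference-corrupted measurement.
threshold = 0.1
print(p_hat > threshold)                      # per-channel occupancy decisions
```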

  • Signal Activity Detection of Offset-QPSK in Colored Gaussian Noise

    Sayed Jalal ZAHABI  Mohammadali KHOSRAVIFARD  Ali A. TADAION  T. Aaron GULLIVER  

     
    LETTER-Communication Theory and Signals

    Vol: E94-A No:11  Page(s): 2226-2229

    This letter considers the problem of detecting an offset quadrature phase shift keying (O-QPSK) modulated signal in colored Gaussian noise. The generalized likelihood ratio test (GLRT) is employed for detection. By deriving the GLRT, it is shown that the assumption of colored Gaussian noise results in a more complicated problem than the white noise assumption previously examined in the literature. An efficient solution to the maximization problem arising in the detector is proposed, based on which the GLRT is implemented. Results are presented to illustrate the detector performance.
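
    The detector itself is specific to O-QPSK, but the colored-noise aspect can be illustrated with a generic GLRT for a known signal shape with unknown amplitude: whiten by the noise covariance and threshold the resulting matched-filter statistic. The covariance model, signal shape, and threshold below are illustrative assumptions, not the letter's derivation.

```python
# Sketch: GLRT for x = theta*s + n with colored Gaussian noise n ~ N(0, C) and
# unknown amplitude theta; the statistic is (s' C^{-1} x)^2 / (s' C^{-1} s),
# which is chi-square with 1 degree of freedom under H0.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
N = 128
s = np.cos(2 * np.pi * 0.05 * np.arange(N))          # assumed signal shape

rho = 0.8                                             # illustrative AR(1)-like noise model
C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
Cinv_s = np.linalg.solve(C, s)

def glrt_statistic(x):
    return (s @ np.linalg.solve(C, x)) ** 2 / (s @ Cinv_s)

noise = np.linalg.cholesky(C) @ rng.standard_normal(N)
x_h0, x_h1 = noise, 0.5 * s + noise                   # noise only vs. signal present

eta = chi2.ppf(0.99, df=1)                            # 1% false-alarm threshold under H0
print(glrt_statistic(x_h0) > eta, glrt_statistic(x_h1) > eta)
```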

  • A Novel Pre-Processing Scheme to Enhance GNSS Signal Detection in the Presence of Blanking

    Chung-Liang CHANG  Jyh-Ching JUANG  

     
    PAPER-Navigation, Guidance and Control Systems

    Vol: E91-B No:5  Page(s): 1589-1598

    In air navigation, the rotation of the aircraft results in discontinuous tracking of GNSS signals. As the platform rotates, the GNSS signals are subject to blanking effects. To solve this problem, a ring-type antenna array is used to prevent signal discontinuity, and a hypothesis-test-based detection scheme is developed so that the correct antenna combination can be formed to provide uninterrupted reception of GNSS signals in the presence of blanking, noise, and interference. A fixed-threshold detection scheme is first developed by assuming that the statistics of the noise are known. It is shown that the scheme is capable of differentiating signal from noise at each antenna element. To account for the interference effect, a multiple hypothesis test scheme, together with an adaptive selection rule, is further developed. Through this detection and selection process, it is shown by simulations that the desired GNSS signals can be extracted and successfully tracked in the presence of blanking and co-channel interference.
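
    A minimal sketch of the fixed-threshold element test under known noise statistics; the chi-square energy detector and all numbers are illustrative assumptions, not the paper's exact statistic.

```python
# Sketch: per-element energy detection with a threshold fixed from the known
# noise statistics (complex Gaussian noise assumed; numbers are illustrative).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
M = 64                       # samples per antenna element per decision
sigma2 = 1.0                 # known noise power
p_fa = 1e-3                  # target false-alarm probability

# Sum of |noise|^2 over M complex samples is (sigma2/2) * chi-square with 2M dof.
eta = 0.5 * sigma2 * chi2.ppf(1 - p_fa, df=2 * M)

def energy(x):
    return np.sum(np.abs(x) ** 2)

noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) * np.sqrt(sigma2 / 2)
signal = np.exp(1j * 2 * np.pi * 0.1 * np.arange(M))   # unit-power tone (0 dB SNR)

print(energy(noise) > eta)            # blanked element: expected False
print(energy(noise + signal) > eta)   # element receiving the signal: expected True
```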

  • A Method for Detecting Shallowly Buried Landmines Using Sequential GPR Data

    Masahiko NISHIMOTO  Ken-ichiro SHIMO  

     
    PAPER

    Vol: E88-B No:6  Page(s): 2362-2368

    A method for detecting shallowly buried landmines using sequential ground penetrating radar (GPR) data is presented. After removing the dominant coherent component arising from the ground surface reflection from the GPR data, three kinds of target features related to wave correlation, energy ratio, and signal arrival time are extracted. Since the detection problem treated here reduces to a binary hypothesis test, an approach based on a likelihood ratio test is employed as the detection algorithm. In order to assess the detection performance, a Monte Carlo simulation is carried out for data generated by a two-dimensional finite-difference time-domain (FDTD) method. Results given in the form of receiver operating characteristic (ROC) curves show that good detection performance is obtained even for landmines buried at shallow depths under rough ground surfaces, where the responses from the landmines and from the ground surface overlap in time.
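
    A compact sketch of the decision stage: score candidate locations with a log-likelihood ratio under assumed Gaussian feature models and trace an ROC curve by Monte Carlo simulation. The three-feature Gaussian models below are toy assumptions, not the paper's FDTD-derived distributions.

```python
# Sketch: binary hypothesis test on an extracted feature vector via a
# log-likelihood ratio, with the ROC estimated by Monte Carlo simulation.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

# Toy 3-feature models (e.g., correlation, energy ratio, arrival-time residue).
mu0, mu1 = np.zeros(3), np.array([1.2, 0.8, 0.5])
cov = np.eye(3)
pdf0 = multivariate_normal(mu0, cov)
pdf1 = multivariate_normal(mu1, cov)

def llr(f):
    return pdf1.logpdf(f) - pdf0.logpdf(f)

f0 = rng.multivariate_normal(mu0, cov, size=2000)   # clutter-only trials
f1 = rng.multivariate_normal(mu1, cov, size=2000)   # mine-present trials
s0, s1 = llr(f0), llr(f1)

# Sweep the threshold to obtain (P_fa, P_d) points on the ROC curve.
for eta in np.quantile(s0, [0.5, 0.9, 0.99]):
    p_fa = np.mean(s0 > eta)
    p_d = np.mean(s1 > eta)
    print(f"eta={eta:6.2f}  P_fa={p_fa:.3f}  P_d={p_d:.3f}")
```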

  • A Novel Image Enhancement Algorithm for a Small Target Detection of Panoramic Infrared Imagery

    Ju-Young KIM  Ki-Hong KIM  Hee-Chul HWANG  Duk-Gyoo KIM  

     
    LETTER

    Vol: E88-A No:6  Page(s): 1520-1524

    A novel image enhancement algorithm that can efficiently detect a small target in panoramic infrared (IR) imagery is proposed. Image enhancement is the first step in detecting and recognizing a small target in IR imagery. The essence of the proposed algorithm is to apply independent histogram equalization (HE) separately to two sub-images obtained by decomposing the given image through statistical hypothesis testing (SHT). Experimental results show that the proposed algorithm has better discrimination and a lower false alarm rate than the conventional algorithms.
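
    The core operation can be sketched as: split the pixels into two sub-images by a simple statistical test, equalize each sub-image's histogram independently, and recombine. The z-test split below is an illustrative stand-in for the paper's SHT decomposition.

```python
# Sketch: decompose an 8-bit frame into two pixel populations by a z-test
# against the frame statistics, equalize each population separately, recombine.
import numpy as np

def equalize(values, levels=256):
    hist = np.bincount(values, minlength=levels)
    cdf = np.cumsum(hist) / values.size
    return np.round(cdf[values] * (levels - 1)).astype(np.uint8)

def enhance(img, z_thresh=2.0):
    mu, sigma = img.mean(), img.std() + 1e-9
    z = (img.astype(float) - mu) / sigma
    fg = z > z_thresh                        # "target-like" bright outlier pixels
    out = np.empty_like(img)
    for mask in (fg, ~fg):                   # independent HE over each sub-image
        if mask.any():
            out[mask] = equalize(img[mask].ravel())
    return out

rng = np.random.default_rng(5)
frame = rng.normal(60, 10, (64, 512)).clip(0, 255).astype(np.uint8)
frame[30:33, 250:253] += 80                  # implant a small hot target
print(enhance(frame).max(), frame.max())
```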

  • A Statistical Time Synchronization Method for Frequency-Synchronized Network Clocks in Distributed Systems

    Takao YAMASHITA  Satoshi ONO  

     
    PAPER-Computer Systems

    Vol: E87-D No:7  Page(s): 1878-1886

    In this paper, we propose a statistical method of time synchronization for computer clocks that have precisely frequency-synchronized oscillators. This method not only improves the accuracy of time synchronization but also prevents degradation in the frequency stability of the precise oscillators when the errors in the measured time offsets between computer clocks caused by network traffic possess a Gaussian distribution. Improved accuracy of time synchronization is achieved by estimating the confidence interval of the measured time offsets between the clocks. Degradation in frequency stability is prevented by eliminating unnecessary time correction for the computer clock, because time correction generally causes changes in the frequency accuracy and stability of the precise oscillators. To eliminate unnecessary time correction, our method uses an extended hypothesis test of the difference between the current mean and the mean at the last time adjustment to determine whether time correction is needed. Evaluation by simulating changes in the time offset of the existing ISDN clock synchronization system showed that this method achieves accurate time and stable frequency synchronization.
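
    The gating step can be sketched as a two-sample test on measured offsets: a correction is applied only when the mean offset has shifted significantly since the last adjustment. Welch's t-test and the offset values below are illustrative stand-ins for the paper's extended test.

```python
# Sketch: correct the clock only if the mean measured offset differs
# significantly from the mean at the last adjustment (Gaussian errors assumed).
import numpy as np
from scipy.stats import ttest_ind

def needs_correction(offsets_last, offsets_now, alpha=0.01):
    """True if the current mean offset differs from the mean at the last
    time adjustment at significance level alpha."""
    _, p = ttest_ind(offsets_last, offsets_now, equal_var=False)
    return p < alpha

rng = np.random.default_rng(6)
last = rng.normal(0.0, 2.0, 100)       # offsets (ms) around the last adjustment
drifted = rng.normal(1.5, 2.0, 100)    # current offsets after a real 1.5 ms shift
steady = rng.normal(0.0, 2.0, 100)     # current offsets with no real shift

print(needs_correction(last, drifted))  # expected True: apply a time correction
print(needs_correction(last, steady))   # expected False: skip it, keep frequency stable
```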

  • The Theory of Software Reliability Corroboration

    Bojan CUKIC  Erdogan GUNEL  Harshinder SINGH  Lan GUO  

     
    PAPER-Testing

    Vol: E86-D No:10  Page(s): 2121-2129

    Software certification is a notoriously difficult problem. From a software reliability engineering perspective, the certification process must provide evidence that the program meets or exceeds the required level of reliability. When certifying the reliability of a high assurance system, very few, if any, failures are observed by testing. In statistical estimation theory, the probability of an event is estimated by determining the proportion of the times it occurs in a fixed number of trials. In the absence of failures, the number of required certification tests becomes impractically large. We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with the hypothesized probability. The presumed reliability then needs to be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis accumulates, we change the degree of belief in the hypothesis. Depending on the corroboration evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required system certification tests, even for high assurance systems so far considered impossible to certify.
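
    A small worked example of why the number of failure-free certification tests explodes, and how prior (subjective) evidence can reduce it. The Beta-prior calculation is a standard Bayesian formulation used here for illustration; it is not the authors' exact corroboration scheme, and the prior strength is an assumed figure.

```python
# Sketch: number of failure-free certification tests needed to claim failure
# probability <= p_max with confidence c, with and without prior evidence.
import math

p_max, c = 1e-4, 0.99

# Classical demonstration test: require (1 - p_max)^n <= 1 - c.
n_classical = math.ceil(math.log(1 - c) / math.log(1 - p_max))

# Bayesian corroboration (illustrative formulation): a Beta(1, b) prior on the
# failure probability encodes b "virtual" failure-free tests of prior evidence
# from the development lifecycle; the posterior after n failure-free tests is
# Beta(1, b + n), so the same bound is met once b + n reaches the classical count.
b_prior = 30000                       # assumed strength of subjective prior evidence
n_bayes = max(0, n_classical - b_prior)

print(n_classical)                    # about 46,050 tests with no prior evidence
print(n_bayes)                        # substantially fewer once prior evidence is admitted
```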

  • On Automatic Speech Recognition at the Dawn of the 21st Century

    Chin-Hui LEE  

     
    INVITED SURVEY PAPER

    Vol: E86-D No:3  Page(s): 377-396

    In the last three decades of the 20th century, research in speech recognition has been intensively carried out worldwide, spurred on by advances in signal processing, algorithms, architectures, and hardware. Recognition systems have been developed for a wide variety of applications, ranging from small-vocabulary keyword recognition over dial-up telephone lines, to medium-vocabulary voice-interactive command and control systems for business automation, to large-vocabulary speech dictation, spontaneous speech understanding, and limited-domain speech translation. Although we have witnessed many new technological promises, we have also encountered a number of practical limitations that hinder a widespread deployment of applications and services. On the one hand, fast progress has been observed in statistical speech and language modeling. On the other hand, only spotty successes have been reported in applying knowledge sources in acoustics, speech, and language science to improving speech recognition performance and robustness to adverse conditions. In this paper we review some key advances in several areas of speech recognition. A bottom-up detection framework is also proposed to facilitate worldwide research collaboration for incorporating advances in both statistical modeling and knowledge integration, going beyond the current limitations of speech recognition and benefiting society in the 21st century.

  • Spectral Subtraction Based on Statistical Criteria of the Spectral Distribution

    Hidetoshi NAKASHIMA  Yoshifumi CHISAKI  Tsuyoshi USAGAWA  Masanao EBATA  

     
    PAPER-Digital Signal Processing

    Vol: E85-A No:10  Page(s): 2283-2292

    This paper addresses a single-channel speech enhancement method that utilizes the mean value and variance of the logarithmic noise power spectra. An important issue for a single-channel speech enhancement algorithm is determining the trade-off point between spectral distortion and residual noise, which requires accurate discrimination between speech spectral components and noise components. Conventional methods determine the trade-off point using parameters obtained experimentally; as a result, the spectral discrimination is not adequate and the enhanced speech is degraded by spectral distortion or residual noise. Therefore, a criterion for determining this point is necessary. The proposed method determines the trade-off point between spectral distortion and residual noise level by discriminating between speech spectral components and noise components based on statistical criteria. The spectral discrimination is performed using hypothesis testing that utilizes the means and variances of the logarithmic power spectra. The discriminated spectral components are divided into speech-dominant components and noise-dominant ones. For the speech-dominant components, spectral subtraction is performed to minimize the spectral distortion; for the noise-dominant ones, attenuation is performed to reduce the noise level. The performance of the method is confirmed in terms of waveform, spectrogram, noise reduction level, and a speech recognition task. As a result, the noise reduction level and speech recognition rate are improved, showing that the method reduces musical noise effectively and improves the quality of the enhanced speech.
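
    A minimal per-bin sketch of the discriminate-then-subtract-or-attenuate idea, using a z-test on the log power spectrum against noise statistics estimated from a noise-only segment. The frame length, window, thresholds, and floors below are illustrative assumptions, not the paper's settings.

```python
# Sketch: classify each frequency bin as speech- or noise-dominant with a
# z-test on the log power spectrum, then subtract or attenuate accordingly.
import numpy as np

def enhance_frame(noisy, noise_frames, z_thresh=2.0, atten=0.1, floor=1e-10):
    """noisy: one time-domain frame; noise_frames: noise-only frames (2-D array)."""
    X = np.fft.rfft(noisy * np.hanning(len(noisy)))
    logP = np.log(np.abs(X) ** 2 + floor)

    # Per-bin mean and variance of the logarithmic noise power spectrum.
    N = np.fft.rfft(noise_frames * np.hanning(noise_frames.shape[1]), axis=1)
    logN = np.log(np.abs(N) ** 2 + floor)
    mu, sigma = logN.mean(axis=0), logN.std(axis=0) + 1e-9

    speech_dominant = (logP - mu) / sigma > z_thresh   # hypothesis test per bin

    P = np.abs(X) ** 2
    noise_power = np.exp(mu)
    P_out = np.where(speech_dominant,
                     np.maximum(P - noise_power, floor),   # spectral subtraction
                     atten * P)                            # attenuate noise-dominant bins
    return np.fft.irfft(np.sqrt(P_out) * np.exp(1j * np.angle(X)), n=len(noisy))

rng = np.random.default_rng(7)
t = np.arange(512) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
noise_only = rng.normal(0, 0.3, (20, 512))
out = enhance_frame(clean + rng.normal(0, 0.3, 512), noise_only)
print(out.shape)
```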

  • A Study of Collaborative Discovery Processes Using a Cognitive Simulator

    Kazuhisa MIWA  

     
    PAPER-Artificial Intelligence, Cognitive Science

    Vol: E83-D No:12  Page(s): 2088-2097

    We discuss human collaborative discovery processes using a production system model as a cognitive simulator. We have developed an interactive production system architecture to construct the simulator. Two production systems interactively find targets while sharing only the experimental results; neither knows the hypothesis the other system holds. Through this kind of interaction, we verify whether or not the performance of two systems interactively finding targets exceeds that of two systems independently finding targets. If we confirm the superiority of collaborative discovery, we regard it as emergence produced by the interaction. The results are: (1) generally speaking, collaboration does not produce the emergence defined above, and (2) as the difference between the hypothesis-testing strategies that the two systems use gets larger, the benefit of interaction gradually increases.

  • A Coarse to Fine Image Segmentation Method

    Shanjun ZHANG  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E80-D No:7  Page(s): 726-732

    The segmentation of images into regions that have some common properties is a fundamental problem in low-level computer vision. In this paper, the region-growing approach to segmentation is studied. A coarse-to-fine processing strategy is adopted to identify the homogeneity of the subregions of an image. The pixels in the image are checked by a hypothesis test based on a nested triple-layer neighborhood system, and can then be classified into single pixels or grain pixels of different sizes and coarseness. Instead of applying a global threshold to the region growing, local thresholds are determined adaptively for each pixel in the image. The strength of the proposed method lies in the fact that the thresholds are computed automatically. Experiments on synthetic and natural images show the efficiency of our method.
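
    A compact sketch of region growing driven by a local hypothesis test rather than a fixed global threshold; the Welch t-test on a 3x3 patch is an illustrative substitute for the paper's nested triple-layer neighborhood test.

```python
# Sketch: grow a region from a seed, admitting a neighbour when a local
# two-sample test cannot reject homogeneity with the region so far.
import numpy as np
from collections import deque
from scipy.stats import ttest_ind

def region_grow(img, seed, alpha=0.01):
    h, w = img.shape
    def patch(y, x):
        return img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].ravel().astype(float)

    mask = np.zeros((h, w), bool)
    visited = np.zeros((h, w), bool)
    region = list(patch(*seed))
    mask[seed] = visited[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                visited[ny, nx] = True
                _, p = ttest_ind(np.asarray(region), patch(ny, nx), equal_var=False)
                if p > alpha:                    # homogeneity not rejected: grow
                    mask[ny, nx] = True
                    region.append(float(img[ny, nx]))
                    queue.append((ny, nx))
    return mask

rng = np.random.default_rng(8)
img = rng.normal(100, 3, (40, 40))
img[:, 20:] += 40                                # a second, brighter region
print(region_grow(img, (10, 10)).sum())          # pixels grown from the left region
```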

  • On Dimension Estimates with Surrogate Data Sets

    Tohru IKEGUCHI  Kazuyuki AIHARA  

     
    PAPER-Nonlinear Problems

    Vol: E80-A No:5  Page(s): 859-868

    In this paper, we propose a new strategy for estimating correlation dimensions in combination with the method of surrogate data, which is a kind of statistical control usually introduced to avoid spurious estimates of nonlinear statistics such as fractal dimensions and Lyapunov exponents. When analyzing time series with the method of surrogate data, it is desirable to determine the values of the estimated nonlinear statistics of the original data and the surrogate data sets as exactly as possible. However, when dimensional analysis is applied to possible attractors reconstructed from real time series, it is very dangerous to settle on a single value as the estimated dimension, and it is desirable to analyze its scaling property to avoid spurious estimates. In order to resolve this difficulty, a dimension estimation algorithm and the method of surrogate data are combined by introducing Monte Carlo hypothesis testing. To show the effectiveness of the new strategy, artificial time series are analyzed first, such as the Henon map with additive noise, filtered random numbers, and filtered random numbers transformed by a static monotonic nonlinearity, and then experimental time series are also examined, such as Wolfer's sunspot numbers and the fluctuations in far-infrared laser data.
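
    A minimal sketch of the surrogate-data side of such a procedure: generate phase-randomized surrogates, evaluate a discriminating statistic on the original series and on each surrogate, and apply a Monte Carlo rank test. The time-asymmetry statistic below is a simple stand-in, not the correlation-dimension estimator itself.

```python
# Sketch: Monte Carlo hypothesis test with phase-randomized surrogate data.
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but randomized phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def statistic(x, lag=1):
    """Cheap nonlinearity indicator: time-asymmetry of lagged differences."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

rng = np.random.default_rng(9)
x = np.empty(2048); x[0] = 0.3
for i in range(1, len(x)):               # deterministic (logistic-map) series
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

s_orig = statistic(x)
s_surr = [statistic(phase_randomized_surrogate(x, rng)) for _ in range(99)]

# One-sided Monte Carlo rank test at the 1% level (99 surrogates).
rank = np.sum(np.abs(s_orig) <= np.abs(s_surr))
print("reject linear-stochastic null" if rank == 0 else "cannot reject")
```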

  • Approximate Distribution of Processor Utilization and Design of an Overload Detection Scheme for SPC Switching Systems

    Toshihisa OZAWA  

     
    PAPER

    Vol: E75-B No:12  Page(s): 1287-1291

    Processors are important resources of stored program control (SPC) switching systems, and estimation of their workload level is crucial to maintaining service quality. Processor utilization is measured as processor usage per unit time, and the workload level is usually estimated from measurements of this utilization during a given interval. This paper provides an approximate distribution of the processor utilization of SPC switching systems and a method for designing an overload detection scheme. The method minimizes the observation interval required to keep the overload detection errors below specified values; this observation interval is obtained as the optimal solution of a linear programming problem.
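
    A small worked example of the design problem in the last sentence, posed here as a simple normal-approximation calculation; the utilization figures and error targets are illustrative assumptions, and the paper itself obtains the interval from a linear programming formulation rather than this closed form.

```python
# Sketch: choose the observation interval so that both overload-detection error
# probabilities stay below specified values, assuming the measured utilization
# averaged over n unit intervals is approximately normal.
import math
from scipy.stats import norm

u_normal, u_overload = 0.70, 0.85    # mean utilization under the two hypotheses (assumed)
sigma1 = 0.20                         # per-unit-interval standard deviation (assumed)
p_fa, p_miss = 0.01, 0.01             # specified error probabilities

# Declare overload when the n-interval average exceeds a threshold t:
#   P_fa   = P(avg > t | normal)   = Q((t - u_normal)   * sqrt(n) / sigma1)
#   P_miss = P(avg < t | overload) = Q((u_overload - t) * sqrt(n) / sigma1)
z_fa, z_miss = norm.ppf(1 - p_fa), norm.ppf(1 - p_miss)
n = math.ceil(((z_fa + z_miss) * sigma1 / (u_overload - u_normal)) ** 2)
t = u_normal + z_fa * sigma1 / math.sqrt(n)

print(n, round(t, 3))   # required number of unit intervals and the detection threshold
```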