The search functionality is under construction.

Keyword Search Result

[Keyword] Bayesian estimation (12 hits)

Showing results 1-12
  • Greedy Selection of Sensors for Linear Bayesian Estimation under Correlated Noise Open Access

    Yoon Hak KIM  

     
    LETTER-Fundamentals of Information Systems
    Publicized: 2024/05/14
    Vol: E107-D No:9
    Page(s): 1274-1277

    We consider the problem of finding the best subset of sensors in wireless sensor networks where linear Bayesian parameter estimation is conducted from the selected measurements corrupted by correlated noise. We aim to directly minimize the estimation error, which we manipulate using the QR and LU factorizations, and we derive an analytic result that expedites the sensor selection in a greedy manner. We also analyze the complexity of the proposed algorithm in comparison with previous selection methods. We evaluate the performance through numerical experiments using random measurements under correlated noise and demonstrate that the proposed algorithm achieves competitive estimation accuracy with a reasonable increase in complexity compared with the previous selection methods.
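As a rough illustration of the setting (a naive sketch, not the paper's algorithm, which uses QR and LU factorizations to avoid recomputing the cost from scratch), greedy minimization of the Bayesian MMSE on synthetic data might look like this; all dimensions and the noise model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 3, 10, 4                       # parameter dim, candidate sensors, budget

H = rng.standard_normal((n, m))          # per-sensor measurement vectors
P = np.eye(m)                            # prior covariance of the parameter
A = rng.standard_normal((n, n))
R = A @ A.T + n * np.eye(n)              # correlated measurement-noise covariance

def mse(S):
    """Trace of the Bayesian MMSE error covariance for sensor subset S."""
    Hs, Rs = H[S], R[np.ix_(S, S)]
    post = np.linalg.inv(np.linalg.inv(P) + Hs.T @ np.linalg.solve(Rs, Hs))
    return np.trace(post)

# Greedy selection: at each step add the sensor that most reduces the MSE.
selected, remaining = [], list(range(n))
for _ in range(k):
    best = min(remaining, key=lambda j: mse(selected + [j]))
    selected.append(best)
    remaining.remove(best)
```

Each greedy step here re-inverts the posterior covariance from scratch, which is exactly the cost the paper's factorization-based updates are designed to avoid.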

  • Bayesian Nagaoka-Hayashi Bound for Multiparameter Quantum-State Estimation Problem

    Jun SUZUKI  

     
    PAPER-Quantum Information Theory
    Publicized: 2023/08/16
    Vol: E107-A No:3
    Page(s): 510-518

    In this work, we propose a Bayesian version of the Nagaoka-Hayashi bound for estimating a parametric family of quantum states. This lower bound generalizes a recently proposed bound for point estimation to Bayesian estimation. We then show that the proposed lower bound can be computed efficiently as a semidefinite programming problem. From the Bayesian Nagaoka-Hayashi bound we also derive a Bayesian version of the Holevo-type bound. Lastly, we prove that the new lower bound is tighter than the Bayesian quantum logarithmic derivative bounds.

  • Discovery of Regular and Irregular Spatio-Temporal Patterns from Location-Based SNS by Diffusion-Type Estimation

    Yoshitatsu MATSUDA  Kazunori YAMAGUCHI  Ken-ichiro NISHIOKA  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2015/06/10
    Vol: E98-D No:9
    Page(s): 1675-1682

    In this paper, a new approach is proposed for extracting spatio-temporal patterns from a location-based social networking system (SNS) such as Foursquare. The proposed approach consists of the following procedures. First, the spatio-temporal behaviors of users in the SNS are approximated as a probabilistic distribution by using a diffusion-type formula. Since SNS datasets generally consist of sparse check-ins of users at some time points and locations, it is difficult to investigate the spatio-temporal patterns over a wide range of time and space scales. The proposed method can estimate such wide-range patterns by smoothing the sparse datasets with a diffusion-type formula. It is crucial in this method to robustly estimate the scale parameter by giving a prior generative model on the check-ins of users. The robust estimation enables the method to extract appropriate patterns even in small local areas. Next, the covariance matrix among the time points is calculated from the estimated distribution. Then, the principal eigenfunctions are approximately extracted as the spatio-temporal patterns by principal component analysis (PCA). The distribution is a mixture of various patterns, some regular with a periodic cycle and some irregular, corresponding to transient events. Though it is generally difficult to separate such complicated mixtures, experiments on an actual Foursquare dataset showed that the proposed method can extract many plausible and interesting spatio-temporal patterns.
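A minimal sketch of the overall pipeline (smooth sparse counts with a diffusion step, form the time-point covariance, extract patterns by PCA) on synthetic check-in data; the heat-equation smoother, grid sizes, and peak structure below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 48, 40                        # hourly time bins over two days, locations

# Sparse synthetic check-ins: a recurring lunch-time peak plus background noise.
counts = np.zeros((T, L))
for _ in range(400):
    if rng.random() < 0.5:
        t = int(rng.choice([11, 12, 35, 36]))   # daily peak (24-hour period)
    else:
        t = int(rng.integers(0, T))             # background
    counts[t, int(rng.integers(0, L))] += 1

# Diffusion-type smoothing: a few explicit steps of the 1-D heat equation
# along the time axis (periodic boundary, for simplicity).
X = counts.copy()
for _ in range(5):
    X = X + 0.25 * (np.roll(X, 1, axis=0) - 2 * X + np.roll(X, -1, axis=0))

# Covariance among time points, then PCA to extract temporal patterns.
Xc = X - X.mean(axis=1, keepdims=True)
C = Xc @ Xc.T / L
evals, evecs = np.linalg.eigh(C)     # ascending eigenvalues
top_pattern = evecs[:, -1]           # dominant temporal pattern
```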

  • Bayesian Estimation of Multi-Trap RTN Parameters Using Markov Chain Monte Carlo Method

    Hiromitsu AWANO  Hiroshi TSUTSUI  Hiroyuki OCHI  Takashi SATO  

     
    PAPER-Device and Circuit Modeling and Analysis
    Vol: E95-A No:12
    Page(s): 2272-2283

    Random telegraph noise (RTN) is a phenomenon that is considered to limit the reliability and performance of circuits using advanced devices. The time constants of carrier capture and emission and the associated change in the threshold voltage are important parameters commonly included in various models, but their extraction from time-domain observations has been a difficult task. In this study, we propose a statistical method for simultaneously estimating interrelated parameters: the time constants and magnitude of the threshold voltage shift. Our method is based on a graphical network representation, and the parameters are estimated using the Markov chain Monte Carlo method. Experimental application of the proposed method to synthetic and measured time-domain RTN signals was successful. The proposed method can handle interrelated parameters of multiple traps and thereby contributes to the construction of more accurate RTN models.

  • An Iterative MAP Approach to Blind Estimation of SIMO FIR Channels

    Koji HARADA  Hideaki SAKAI  

     
    PAPER-Digital Signal Processing
    Vol: E95-A No:1
    Page(s): 330-337

    In this paper, we present a maximum a posteriori probability (MAP) approach to the problem of blind estimation of single-input, multiple-output (SIMO), finite impulse response (FIR) channels. A number of methods have been developed to date for this blind estimation problem. Some of these utilize prior knowledge of the input signal statistics, but very few utilize channel statistics as well. In this paper, the unknown channel to be estimated is modeled as a frequency-selective Rayleigh fading channel, and we incorporate the channel prior distributions (and hyperprior distributions) into our model in two different ways. For each case an iterative MAP estimator is then derived approximately. Performance comparisons with existing methods are conducted via numerical simulation on channel coefficients randomly generated according to the Rayleigh fading channel model. It is shown that improved estimation performance can be achieved through the MAP approaches, especially for channel realizations that cause large estimation errors with existing methods.

  • Geometric BIC

    Kenichi KANATANI  

     
    PAPER-Image Recognition, Computer Vision
    Vol: E93-D No:1
    Page(s): 144-151

    The "geometric AIC" and the "geometric MDL" have been proposed as model selection criteria for geometric fitting problems. These correspond to Akaike's "AIC" and Rissanen's "MDL", well known in the statistical estimation framework. Another well-known criterion is Schwarz's "BIC", but its counterpart for geometric fitting has not been known. This paper introduces the corresponding criterion, which we call the "geometric BIC", and shows that it has the same form as the geometric MDL. Our result gives a justification of the geometric MDL from the Bayesian principle.
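For reference, the standard statistical criteria that the geometric variants mirror have the familiar forms below (with $\hat{L}$ the maximized likelihood, $k$ the number of parameters, and $N$ the sample size); these are the textbook definitions, not the geometric counterparts derived in the paper:

```latex
\mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad
\mathrm{BIC} = -2\ln\hat{L} + k\ln N
```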

  • Transfer Matrix Method for Instantaneous Spike Rate Estimation

    Kazuho WATANABE  Hiroyuki TANAKA  Keiji MIURA  Masato OKADA  

     
    INVITED PAPER
    Vol: E92-D No:7
    Page(s): 1362-1368

    The spike timings of neurons are irregular and can be modeled as a one-dimensional point process. The Bayesian approach is generally used to estimate the time-dependent firing-rate function from sequences of spike timings; it can even estimate the firing rate from a single spike sequence. However, the rate function generally has too many degrees of freedom, so approximation techniques are often used to carry out the Bayesian estimation. We applied the transfer matrix method, which efficiently computes the exact marginal distribution, to the estimation of the firing rate, and developed an algorithm that yields exact results within the Bayesian framework. Using this estimation method, we investigated how a mismatch in the prior hyperparameter value affects the marginal distribution and the firing-rate estimation.
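On a chain of discretized rate states, the transfer matrix method amounts to a forward-backward recursion that computes exact posterior marginals. A minimal sketch with a Bernoulli spike likelihood follows; the state grid, transfer matrix, and spike model are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretized firing-rate states (spike probability per bin) and a
# smoothness-inducing transfer matrix acting as the chain prior.
rates = np.linspace(0.05, 0.95, 10)
K, T = len(rates), 200
trans = np.exp(-0.5 * ((rates[:, None] - rates[None, :]) / 0.15) ** 2)
trans /= trans.sum(axis=1, keepdims=True)

# Synthetic spike train: low rate in the first half, high in the second.
true_p = np.r_[np.full(T // 2, 0.1), np.full(T - T // 2, 0.8)]
spikes = (rng.random(T) < true_p).astype(int)

# Likelihood of each rate state at each time bin.
lik = np.where(spikes[:, None] == 1, rates[None, :], 1 - rates[None, :])

# Forward-backward (transfer matrix) recursion: exact posterior marginals.
alpha = np.zeros((T, K))
beta = np.zeros((T, K))
alpha[0] = lik[0] / K
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    a = (alpha[t - 1] @ trans) * lik[t]
    alpha[t] = a / a.sum()
beta[-1] = 1.0
for t in range(T - 2, -1, -1):
    b = trans @ (lik[t + 1] * beta[t + 1])
    beta[t] = b / b.sum()
post = alpha * beta
post /= post.sum(axis=1, keepdims=True)
rate_est = post @ rates              # posterior-mean firing rate per bin
```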

  • Software Reliability Modeling Based on Capture-Recapture Sampling

    Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER
    Vol: E92-A No:7
    Page(s): 1615-1622

    This paper proposes a dynamic capture-recapture (DCR) model to estimate not only the total number of software faults but also quantitative software reliability from observed data. Compared to the conventional static capture-recapture (SCR) model and the usual software reliability models (SRMs) in the literature, the DCR model can handle the dynamic behavior of software fault-detection processes and can evaluate quantitative software reliability based on capture-recapture sampling of software fault data. It can be regarded as a unified modeling framework for SCR and SRMs with Bayesian estimation. Simulation experiments under several plausible testing scenarios show that our models are superior to SCR and SRMs in terms of estimation accuracy.
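For orientation, the static capture-recapture (SCR) baseline that the paper builds on can be illustrated with the classic Lincoln-Petersen estimator; the fault IDs below are made up, and this is the textbook SCR estimate, not the paper's DCR model:

```python
# Faults detected by two independent reviews of the same code base.
found_by_A = {1, 2, 3, 5, 8, 9, 11, 13}
found_by_B = {2, 3, 4, 5, 9, 10, 13, 14, 15}
n1, n2 = len(found_by_A), len(found_by_B)
m = len(found_by_A & found_by_B)     # faults "recaptured" by both reviews

# Lincoln-Petersen estimate of the total number of faults, including
# those not yet found by either review.
N_hat = n1 * n2 / m
```

For small samples the Chapman correction, (n1+1)(n2+1)/(m+1) - 1, is often preferred because it is less biased when m is small.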

  • On-Line Learning Methods for Gaussian Processes

    Shigeyuki OBA  Masa-aki SATO  Shin ISHII  

     
    LETTER-Pattern Recognition
    Vol: E86-D No:3
    Page(s): 650-654

    We propose two modifications of Gaussian processes, which aim to deal with dynamic environments. One is a weight decay method that gradually forgets old data, and the other is a time stamp method that regards the time course of data as a Gaussian process. We show experimental results when these modifications are applied to regression problems in dynamic environments. The weight decay method is found to follow the environmental change by automatically ignoring the past data, and the time stamp method is found to predict linear alteration.
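A minimal sketch of the weight-decay idea for Gaussian process regression in a drifting environment: old observations are gradually forgotten by inflating their noise variance. The exponential inflation schedule, kernel, and all parameters here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gp_predict(X, y, Xs, noise_var, ell=1.0, sf=1.0):
    """GP regression posterior mean, RBF kernel, per-point noise variance."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + np.diag(noise_var)
    return k(Xs, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 60)                  # observation times, old to new
# Drifting environment: the target function jumps by 2 halfway through.
y = np.sin(t) + np.where(t < 5.0, 0.0, 2.0) + 0.1 * rng.standard_normal(60)

# Weight decay: inflate the noise variance of older data so it is
# gradually ignored; recent data keeps its nominal noise level.
lam = 0.5
decay_var = 0.01 * np.exp(lam * (t.max() - t))

pred = gp_predict(t, y, np.array([10.0]), decay_var)[0]
```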

  • A Statistical Estimation Method of Optimal Software Release Timing Applying Auto-Regressive Models

    Tadashi DOHI  Hiromichi MORISHITA  Shunji OSAKI  

     
    PAPER-Reliability, Maintainability and Safety Analysis
    Vol: E84-A No:1
    Page(s): 331-338

    This paper proposes a statistical method to estimate the optimal software release time that minimizes the expected total software cost incurred in both the testing and operation phases. It is shown that the underlying cost minimization problem can be reduced to a graphical one, which implies that the software release problem under consideration is essentially equivalent to time series forecasting of software fault-occurrence time data. To predict future fault-occurrence times, we apply three auto-regressive models proposed by Singpurwalla and Soyer (1985) as prediction devices, as well as the well-known AR and ARIMA models. Numerical examples illustrate the predictive performance of the proposed method. We compare it with the classical exponential software reliability growth model based on the non-homogeneous Poisson process, using actual software fault-occurrence time data.
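As a toy illustration of AR-based forecasting of fault-occurrence times, here is a plain AR(1) least-squares fit on synthetic inter-failure times (not the specific Singpurwalla-Soyer models, and the data are made up):

```python
import numpy as np

# Cumulative software fault-detection times (synthetic, in hours).
t = np.array([5., 12., 21., 33., 48., 67., 90., 118., 152., 193.])
x = np.diff(t)                      # inter-failure times

# Fit AR(1) with intercept, x[k] ~ a * x[k-1] + b, by least squares.
A = np.c_[x[:-1], np.ones(len(x) - 1)]
(a, b), *_ = np.linalg.lstsq(A, x[1:], rcond=None)

# One-step-ahead forecast of the next inter-failure gap and failure time.
next_gap = a * x[-1] + b
next_time = t[-1] + next_gap
```

Growing inter-failure times (reliability growth) show up here as a slope a above 1, so the forecast gap exceeds the last observed one.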

  • Parameter-Free Restoration Algorithms for Two Classes of Binary MRF Images Degraded by Flip-Flap Noises

    Bing ZHANG  Mehdi N. SHIRAZI  Hideki NODA  

     
    PAPER-Image Theory
    Vol: E80-A No:10
    Page(s): 2022-2031

    The problem of restoring binary (black-and-white) images degraded by color-dependent flip-flap noise is considered. The true image is modeled by a Markov Random Field (MRF), and the Iterated Conditional Modes (ICM) algorithm is adopted. It is shown that under certain conditions the ICM algorithm is insensitive to the MRF image model and noise parameters. Using this property, we propose a parameter-free restoration algorithm that does not require estimation of the image model or noise parameters and thus can be implemented fully in parallel. The effectiveness of the proposed algorithm is shown by applying it to degraded hand-drawn and synthetic images.
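A minimal ICM sketch on a synthetic binary image: each pixel is repeatedly set to the label maximizing a local score that combines data fidelity with an Ising-style smoothness term. The weights beta and gamma below are illustrative assumptions; the paper's point is precisely that, under certain conditions, such parameters can be dispensed with:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth binary image: a white square on a black background.
img = np.zeros((32, 32), dtype=int)
img[8:24, 8:24] = 1

# Degrade with symmetric flip noise (each pixel flipped with prob. 0.15).
flip = rng.random(img.shape) < 0.15
noisy = np.where(flip, 1 - img, img)

# ICM: greedily set each pixel to the label maximizing its local score.
beta, gamma = 1.5, 1.0               # smoothness weight, data-fidelity weight
x = noisy.copy()
for _ in range(5):
    for i in range(32):
        for j in range(32):
            nb = [x[i2, j2] for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                  if 0 <= i2 < 32 and 0 <= j2 < 32]
            score = lambda v: gamma * (v == noisy[i, j]) + beta * sum(n == v for n in nb)
            x[i, j] = max((0, 1), key=score)

errors_before = int((noisy != img).sum())
errors_after = int((x != img).sum())
```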

  • On the Human Being Presupposition Used in Learning

    Eri YAMAGISHI  Minako NOZAWA  Yoshinori UESAKA  

     
    PAPER-Neural Nets and Human Being
    Vol: E79-A No:10
    Page(s): 1601-1607

    Conventional learning algorithms can be regarded as a sort of estimation of the true recognition function from sample patterns. Such estimation requires a good assumption about the prior distribution underlying the learning data. Humans, on the other hand, seem able to obtain better results from an extremely small number of samples. This suggests that humans might use a suitable prior (called a presupposition here), which is an essential key to making recognition machines highly flexible. In the present paper we propose a framework for guessing the presupposition a learner used in the learning process, based on the learning result. We first point out that such a guess requires an assumption about what kind of estimation method the learner uses, and that the problem of guessing the presupposition is in general ill-defined. With these in mind, the framework is developed under the assumption that the learner uses Bayesian estimation, and a method for determining the presupposition is demonstrated under two example constraints on the family of presuppositions and the set of recognition functions. Finally, a simple example of learning with a presupposition is given to show that the guessed presupposition yields a better fit to the samples and prevents the learning machine from overlearning.