
Keyword Search Result

[Keyword] expectation maximization (EM) (3 hits)

1-3 of 3 hits
  • Robust Sparse Signal Recovery in Impulsive Noise Using Bayesian Methods

    Jinyang SONG  Feng SHEN  Xiaobo CHEN  Di ZHAO  

     
    LETTER-Digital Signal Processing

    Vol: E101-A No:1    Page(s): 273-278

    In this letter, robust sparse signal recovery is considered in the presence of heavy-tailed impulsive noise. Two Bayesian approaches are developed within a framework that models the noise with a Laplace distribution. By rewriting the noise-fitting term as a reweighted quadratic function that is optimized in the sparse signal space, the Type I Maximum A Posteriori (MAP) approach is proposed. Next, by exploiting the hierarchical structure of the sparse prior and the likelihood function, we develop the Type II Evidence Maximization approach, which is optimized in the hyperparameter space. Numerical results verify the effectiveness of the proposed methods in the presence of impulsive noise.
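
    The letter itself gives no code; as a rough, minimal sketch only, the reweighted-quadratic idea behind a Type I MAP recovery under Laplace noise and a Laplace sparsity prior can be illustrated with iteratively reweighted least squares. The function name map_l1_recovery, the regularization weight lam, and all numerical settings below are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def map_l1_recovery(A, y, lam=0.1, n_iter=50, eps=1e-6):
        # Sketch: Laplace noise and a Laplace prior give the objective
        #   ||y - A x||_1 + lam * ||x||_1,
        # minimized here by iteratively reweighted least squares: each l1 term
        # is replaced by a weighted quadratic that is re-solved in closed form.
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(n_iter):
            r = y - A @ x
            w_r = 1.0 / (np.abs(r) + eps)   # weights for the noise-fitting term
            w_x = 1.0 / (np.abs(x) + eps)   # weights for the sparsity prior
            # Weighted normal equations: (A^T W_r A + lam * W_x) x = A^T W_r y
            H = A.T @ (w_r[:, None] * A) + lam * np.diag(w_x)
            x = np.linalg.solve(H, A.T @ (w_r * y))
        return x

    # Example usage on synthetic data with heavy-tailed (Laplace) noise.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
    y = A @ x_true + rng.laplace(scale=0.05, size=80)
    x_hat = map_l1_recovery(A, y)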

  • Efficient Parallel Learning of Hidden Markov Chain Models on SMPs

    Lei LI  Bin FU  Christos FALOUTSOS  

     
    INVITED PAPER

    Vol: E93-D No:6    Page(s): 1330-1342

    Quad-core CPUs have become a common desktop configuration in today's offices. The increasing number of processors on a single chip opens new opportunities for parallel computing. Our goal is to exploit multi-core as well as multi-processor architectures to speed up large-scale data mining algorithms. In this paper, we present a general parallel learning framework, Cut-And-Stitch, for training hidden Markov chain models. In particular, we propose two model-specific variants, CAS-LDS for learning linear dynamical systems (LDS) and CAS-HMM for learning hidden Markov models (HMM). Our main contribution is a novel method for handling the data dependencies introduced by the chain structure of the hidden variables, so as to parallelize the EM-based parameter learning algorithm. We implement CAS-LDS and CAS-HMM using OpenMP on two supercomputers and a quad-core commercial desktop. The experimental results show that parallel algorithms using Cut-And-Stitch achieve comparable accuracy and almost linear speedups over the traditional serial version.
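
    As a very loose illustration of the block-cut idea only: the paper parallelizes EM with OpenMP, and its Cut-And-Stitch method explicitly reconciles the dependencies between neighbouring blocks of the hidden chain, which the sketch below ignores. Here an HMM E-step is block-parallelized by cutting the observation sequence and running forward-backward on each block independently; the function names, block count, and the Python process-pool substitution are all assumptions, not the authors' method.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def forward_backward(obs, pi, A, B):
        # Scaled forward-backward pass for one block of a discrete-output HMM;
        # returns the per-time posterior over hidden states (the E-step output).
        T, K = len(obs), len(pi)
        alpha = np.zeros((T, K))
        beta = np.zeros((T, K))
        alpha[0] = pi * B[:, obs[0]]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            alpha[t] /= alpha[t].sum()
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)

    def block_parallel_e_step(obs, pi, A, B, n_blocks=4):
        # Cut the sequence into blocks and run forward-backward on each block
        # in parallel.  Unlike Cut-And-Stitch, no "stitch" step reconciles the
        # block boundaries, so posteriors near the cuts are only approximate.
        blocks = np.array_split(obs, n_blocks)
        with ProcessPoolExecutor(max_workers=n_blocks) as pool:
            gammas = list(pool.map(forward_backward, blocks,
                                   [pi] * n_blocks, [A] * n_blocks, [B] * n_blocks))
        return np.concatenate(gammas, axis=0)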

  • Unsupervised Classification of Polarimetric SAR Images by EM Algorithm

    Kamran-Ullah KHAN  Jian YANG  Weijie ZHANG  

     
    PAPER-Sensing

    Vol: E90-B No:12    Page(s): 3632-3642

    In this paper, the expectation maximization (EM) algorithm is used for unsupervised classification of polarimetric synthetic aperture radar (SAR) images. The EM algorithm provides an estimate of the parameters of the underlying probability distribution functions (pdf's) for each class. The feature vector is 9-dimensional, consisting of the six magnitudes and three angles of the elements of a coherency matrix. Each element of the feature vector is assigned a specific parametric pdf, and all the features are assumed to be statistically independent. We then present a two-stage unsupervised clustering procedure. The EM algorithm is first run for a few iterations to obtain an initial partition of, for example, four clusters. A randomly selected sample of, for example, 2% of the pixels of the polarimetric SAR image may be used for unsupervised training. In the second stage, the EM algorithm may be run again to reclassify the first-stage clusters into smaller sub-clusters, with each cluster from the first stage processed separately. This approach makes further classification possible, as shown in the results. The training cost is also reduced, since the number of feature vectors in a specific cluster is much smaller than in the whole image.
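
    The paper fits a specific parametric pdf to each of the nine features; as a rough stand-in sketch only, the sample-then-subdivide two-stage procedure can be mimicked with diagonal-covariance Gaussian mixtures from scikit-learn. The Gaussian substitution, the function name two_stage_em, and the cluster counts below are assumptions for illustration, not the authors' models.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def two_stage_em(features, k1=4, k2=2, sample_frac=0.02, seed=0):
        # Stage 1: run EM on a small random sample of pixels to get an initial
        # partition into k1 clusters.  Stage 2: re-run EM separately inside each
        # stage-1 cluster to split it into k2 sub-clusters.  Diagonal covariances
        # keep the features statistically independent.
        rng = np.random.default_rng(seed)
        n = features.shape[0]
        idx = rng.choice(n, size=max(k1, int(sample_frac * n)), replace=False)
        stage1 = GaussianMixture(n_components=k1, covariance_type="diag",
                                 random_state=seed).fit(features[idx])
        labels1 = stage1.predict(features)

        labels = np.zeros(n, dtype=int)
        for c in range(k1):
            members = np.where(labels1 == c)[0]
            if len(members) < k2:          # too few pixels to split further
                labels[members] = c * k2
                continue
            stage2 = GaussianMixture(n_components=k2, covariance_type="diag",
                                     random_state=seed).fit(features[members])
            labels[members] = c * k2 + stage2.predict(features[members])
        return labels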