The search functionality is under construction.

Keyword Search Result

[Keyword] Laplace prior (2 hits)

Results 1-2 of 2
  • Empirical Bayes Estimation for L1 Regularization: A Detailed Analysis in the One-Parameter Lasso Model

    Tsukasa YOSHIDA  Kazuho WATANABE  

    PAPER-Machine learning

    Vol: E101-A, No: 12, Page(s): 2184-2191

    Lasso regression, based on L1 regularization, is one of the most popular sparse estimation methods. It requires the regularization parameter, which determines the degree of regularization, to be set appropriately in advance. Although the empirical Bayes approach provides an effective method for estimating the regularization parameter, its solution has yet to be fully investigated in the lasso regression model. In this study, we analyze the empirical Bayes estimator of the one-parameter lasso regression model, show its uniqueness, and characterize its properties. Furthermore, we compare this estimator with that of the variational approximation and evaluate its accuracy.
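    The abstract above concerns estimating the L1 regularization parameter in a one-parameter lasso model. As background, the lasso (MAP) estimate under a Laplace prior in a simple scalar observation model reduces to soft-thresholding; the sketch below is an illustration of that standard fact, not the paper's empirical Bayes estimator, and the unit-noise model is an assumption made here for simplicity.

    ```python
    import numpy as np

    def soft_threshold(y, lam):
        """MAP estimate of theta in the scalar model y = theta + N(0, 1) noise
        under a Laplace prior with penalty lam * |theta| (i.e. lasso):
        shrink y toward zero by lam, clipping small values exactly to zero."""
        return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    # Larger lam (stronger regularization) sets more estimates exactly to zero.
    print(soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0))
    ```

    Choosing `lam` well is exactly the problem the empirical Bayes approach addresses: rather than fixing it in advance, it is estimated from the data.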

  • Speaker Recognition Using Sparse Probabilistic Linear Discriminant Analysis

    Hai YANG  Yunfei XU  Qinwei ZHAO  Ruohua ZHOU  Yonghong YAN  

    PAPER

    Vol: E96-A, No: 10, Page(s): 1938-1945

    Sparse representation has been studied within the field of signal processing as a means of providing a compact form of signal representation. This paper introduces a sparse representation based framework, named Sparse Probabilistic Linear Discriminant Analysis, for speaker recognition. In this latent variable model, probabilistic linear discriminant analysis is modified to obtain an algorithm for learning overcomplete sparse representations by replacing the Gaussian prior on the factors with a Laplace prior that encourages sparseness. For a given speaker signal, the dictionary obtained from this model has good representational power while supporting optimal discrimination of the classes. An expectation-maximization algorithm is derived to train the model with a variational approximation to a range of heavy-tailed distributions whose limit is the Laplace. The variational approximation is also used to compute the likelihood ratio score of all speaker trials. This approach performed well on the core-extended conditions of the NIST 2010 Speaker Recognition Evaluation and is competitive with Gaussian Probabilistic Linear Discriminant Analysis in terms of normalized Decision Cost Function and Equal Error Rate.
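    The key modeling choice in the abstract above is swapping a Gaussian prior for a Laplace prior to encourage sparseness. The toy sketch below (an illustration written for this listing, not the paper's model) contrasts the two priors' MAP shrinkage on synthetic coefficients: the Gaussian prior shrinks everything proportionally and produces no exact zeros, while the Laplace prior soft-thresholds and drives small coefficients exactly to zero.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(size=1000)  # synthetic noisy coefficients

    # Gaussian prior -> ridge-style MAP: uniform proportional shrinkage, no exact zeros.
    ridge = y / (1.0 + 1.0)

    # Laplace prior -> lasso-style MAP: soft-thresholding, small values become exactly zero.
    lasso = np.sign(y) * np.maximum(np.abs(y) - 1.0, 0.0)

    print("exact zeros under Gaussian prior:", int(np.sum(ridge == 0.0)))
    print("exact zeros under Laplace prior:", int(np.sum(lasso == 0.0)))
    ```

    This sparseness is what lets the learned factor representation be overcomplete yet compact; the paper handles the Laplace prior's non-Gaussianity inside EM via a variational approximation rather than the closed-form thresholding used here.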