Improved Sample Complexity Bounds for Parameter Estimation

Jun-ichi TAKEUCHI

Summary:

Various authors have proposed probabilistic extensions of Valiant's PAC (Probably Approximately Correct) learning model in which the target to be learned is a (conditional) probability distribution. In this paper, we improve upon the best known upper bounds on the sample complexity of the parameter estimation part of the learning problem for distributions and stochastic rules over a finite domain, with respect to the Kullback-Leibler divergence (KL-divergence). In particular, we improve the upper bound of order O(1/ε²) due to Abe, Takeuchi, and Warmuth to a bound of order O(1/ε). In obtaining our results, we make use of the properties of a specific estimator (a slightly modified maximum likelihood estimator) with respect to the KL-divergence, whereas previously known upper bounds were obtained using the uniform convergence technique.
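The abstract describes the estimator only as a "slightly modified maximum likelihood estimator". A common modification of this kind, used here purely for illustration, is add-constant (Laplace-style) smoothing: it keeps every estimated probability bounded away from zero, so the KL-divergence from the target stays finite, and its expected KL-divergence decays linearly in the sample size, consistent in spirit with an O(1/ε) sample complexity. The Python sketch below assumes this form; the smoothing constant alpha = 0.5 and all function names are illustrative, not taken from the paper.

    # A minimal sketch, assuming an add-constant smoothed maximum likelihood
    # estimator for a distribution over a finite domain. This is an
    # illustrative stand-in for the paper's "slightly modified" MLE, not
    # its exact construction.
    import math
    import random

    def smoothed_mle(samples, domain_size, alpha=0.5):
        """Add-constant smoothed ML estimate over {0, ..., domain_size-1}.
        alpha = 0.5 is an illustrative choice, not taken from the paper."""
        counts = [0] * domain_size
        for x in samples:
            counts[x] += 1
        m = len(samples)
        return [(c + alpha) / (m + alpha * domain_size) for c in counts]

    def kl_divergence(p, q):
        """KL(p || q); finite because the smoothed estimate q is never zero."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    if __name__ == "__main__":
        random.seed(0)
        target = [0.5, 0.25, 0.125, 0.125]
        # The measured divergence shrinks roughly linearly in m, the
        # behavior one expects from an O(1/ε) sample complexity bound.
        for m in (100, 1000, 10000):
            samples = random.choices(range(len(target)), weights=target, k=m)
            estimate = smoothed_mle(samples, len(target))
            print(f"m={m:6d}  KL(target || estimate) = "
                  f"{kl_divergence(target, estimate):.5f}")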

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E78-D, No. 5, pp. 526-531
Publication Date
1995/05/25
Type of Manuscript
Special Section PAPER (Special Issue on Algorithmic Learning Theory)
Category
Computational Learning Theory
