IEICE TRANSACTIONS on Information and Systems

Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution

Shuqiong WU, Hiroshi NAGAHASHI

Summary:

Gentle AdaBoost is widely used in object detection and pattern recognition because of its efficiency and stability. To focus on instances with small margins, Gentle AdaBoost assigns larger weights to these instances during training. However, small-margin instances can still be misclassified repeatedly, which causes their weights to grow round after round. Eventually, a few large-weight instances may dominate the whole data distribution, encouraging Gentle AdaBoost to select weak hypotheses that fit only these instances in the late training phase. This phenomenon, known as “classifier distortion”, increases the generalization error and can easily lead to overfitting, since the late-selected weak hypotheses increase the deviation of the whole set of selected hypotheses. To solve this problem, we propose a new variant called “Penalized AdaBoost”. In each iteration, our approach not only penalizes the misclassification of instances with small margins but also restrains the weight increase for instances with minimal margins. By effectively avoiding classifier distortion, our method outperforms Gentle AdaBoost. Experiments show that it achieves far lower generalization errors at a training speed similar to that of Gentle AdaBoost.
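
The abstract specifies the mechanism only at a high level, so the following is a minimal sketch rather than the paper's algorithm: it contrasts the standard exponential reweighting used by the AdaBoost family (w_i ← w_i · exp(−y_i f_t(x_i))) with a hypothetical capped update that limits any single instance's share of the total weight. The cap value, the function names (gentle_update, penalized_update, cap_distribution), and the simulated margins are illustrative assumptions of this sketch; the paper's actual penalty is derived from the margin distribution and is not reproduced here.

    import numpy as np

    def gentle_update(w, margins):
        # Standard AdaBoost-family reweighting: w_i <- w_i * exp(-y_i f_t(x_i)).
        # `margins` holds y_i * f_t(x_i) for the current weak hypothesis f_t.
        w = w * np.exp(-margins)
        return w / w.sum()

    def cap_distribution(w, cap):
        # Clip weights at `cap` and hand the excess mass, proportionally, to
        # the still-uncapped weights; repeat until no weight exceeds the cap.
        # (Requires cap * len(w) >= 1 so a valid distribution exists.)
        w = w / w.sum()
        capped = np.zeros(w.shape, dtype=bool)
        while True:
            over = (w > cap) & ~capped
            if not over.any():
                return w
            excess = (w[over] - cap).sum()
            w[over] = cap
            capped |= over
            free = ~capped
            w[free] += excess * w[free] / w[free].sum()

    def penalized_update(w, margins, cap=0.05):
        # Hypothetical restrained step standing in for the paper's penalty:
        # the same exponential reweighting, but no instance may hold more than
        # `cap` of the total weight, so small-margin instances cannot dominate.
        return cap_distribution(w * np.exp(-margins), cap)

    n = 100
    hard = np.arange(5)            # five instances every weak hypothesis gets wrong
    w_plain = np.full(n, 1.0 / n)
    w_pen = np.full(n, 1.0 / n)

    for _ in range(30):            # simulate 30 boosting rounds
        margins = np.full(n, 0.2)  # correct (positive margin) on easy instances
        margins[hard] = -0.2       # misclassified (negative margin) on hard ones
        w_plain = gentle_update(w_plain, margins)
        w_pen = penalized_update(w_pen, margins)

    print(f"plain Gentle AdaBoost: hard instances hold {w_plain[hard].sum():.1%} of the weight")
    print(f"capped variant: hard instances hold {w_pen[hard].sum():.1%} of the weight")

Run as-is, the plain update lets the five persistently misclassified instances absorb essentially all of the weight within 30 rounds, reproducing the distortion the abstract describes, while the capped variant holds them to 25% of the distribution.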

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E98-D, No.11, pp.1906-1915
Publication Date
2015/11/01
Publicized
2015/08/13
Online ISSN
1745-1361
DOI
10.1587/transinf.2015EDP7069
Type of Manuscript
PAPER
Category
Artificial Intelligence, Data Mining

Authors

Shuqiong WU
  Tokyo Institute of Technology
Hiroshi NAGAHASHI
  Tokyo Institute of Technology

Keyword