Abstract
We investigate further improvement of boosting in the case that the target concept belongs to the class of r-of-k threshold Boolean functions, which answer "+1" if at least r of k relevant variables are positive, and answer "-1" otherwise. Given m examples of an r-of-k function and literals as base hypotheses, popular boosting algorithms (e.g., AdaBoost) construct a consistent final hypothesis by using O(k² log m) base hypotheses. While this convergence speed is tight in general, we show that a modification of AdaBoost (confidence-rated AdaBoost [SS99] or InfoBoost [Asl00]) can make use of the property of r-of-k functions that they make less error on one side to find a consistent final hypothesis by using O(kr log m) hypotheses. Our result extends the previous investigation by Hatano and Warmuth [HW04] and gives more general examples where confidence-rated AdaBoost or InfoBoost has an advantage over AdaBoost.
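For illustration, the following is a minimal Python sketch (not taken from the paper; function and variable names are hypothetical) of the r-of-k threshold Boolean function described in the abstract, which returns +1 exactly when at least r of the k relevant variables are positive:

```python
def r_of_k(x, relevant, r):
    """Evaluate an r-of-k threshold function on a +1/-1 assignment.

    x        -- sequence of +1/-1 values for all variables
    relevant -- indices of the k relevant variables
    r        -- threshold: how many relevant variables must be positive
    """
    positives = sum(1 for i in relevant if x[i] == +1)
    return +1 if positives >= r else -1

# Example: a 2-of-3 function over variables 0, 1, 2.
print(r_of_k([+1, -1, +1, -1], relevant=[0, 1, 2], r=2))  # +1 (two positives)
print(r_of_k([+1, -1, -1, -1], relevant=[0, 1, 2], r=2))  # -1 (one positive)
```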
| Original language | English |
| --- | --- |
| Pages (from-to) | 114-126 |
| Number of pages | 13 |
| Journal | Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) |
| Volume | 3244 |
| Publication status | Published - 2004 |
| Externally published | Yes |
| Event | 15th International Conference ALT 2004: Algorithmic Learning Theory - Padova, Italy. Duration: Oct 2 2004 → Oct 5 2004 |
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- General Computer Science