Abstract
Following the formulation of Support Vector Regression (SVR), we consider a regression analogue of soft margin optimization over the feature space indexed by a hypothesis class H. More specifically, the problem is to find a linear model w ∈ ℝ^H that minimizes the sum of ρ-insensitive losses over all training data for as small a ρ as possible, where the ρ-insensitive loss for a single example (x_i, y_i) is defined as max{|y_i − ∑_h w_h h(x_i)| − ρ, 0}. Intuitively, the parameter ρ and the ρ-insensitive loss are analogous to the target margin and the hinge loss in soft margin optimization, respectively. Our formulation differs from SVR in two ways: (1) we use L1-norm regularization instead of L2-norm regularization, and (2) the feature space is implicitly defined by a hypothesis class rather than a kernel. We propose a boosting-type algorithm for solving the problem with a theoretically guaranteed convergence rate under a natural weak learnability assumption.
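To make the objective concrete, here is a minimal Python sketch, not code from the paper, that evaluates the ρ-insensitive loss for a linear model over a toy hypothesis class. The threshold-stump hypotheses, the weights, and the data are illustrative assumptions; the paper's H is an abstract hypothesis class.

```python
def rho_insensitive_loss(w, hypotheses, x, y, rho):
    """rho-insensitive loss for one example (x, y):
    max(|y - sum_h w_h * h(x)| - rho, 0), as defined in the abstract."""
    prediction = sum(w_h * h(x) for w_h, h in zip(w, hypotheses))
    return max(abs(y - prediction) - rho, 0.0)


# Hypothetical hypothesis class: three threshold stumps on a scalar input
# (illustrative only; any base hypothesis class could play this role).
hypotheses = [lambda x, t=t: 1.0 if x >= t else -1.0 for t in (0.0, 0.5, 1.0)]
w = [0.4, 0.3, 0.3]  # illustrative weights for the linear model w

# Sum of rho-insensitive losses over a toy training set, for a fixed rho.
data = [(0.2, -0.5), (0.7, 0.3), (1.2, 1.0)]
rho = 0.1
print(sum(rho_insensitive_loss(w, hypotheses, x, y, rho) for x, y in data))
```

The full problem additionally drives ρ down while L1-regularizing w; a boosting-type algorithm would grow the model one hypothesis at a time, which the fixed hypothesis list above deliberately elides.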
Original language | English
---|---
Pages (from-to) | 294-300
Number of pages | 7
Journal | IEICE Transactions on Information and Systems
Volume | E107.D
Issue number | 3
DOIs |
Publication status | Published - Mar 2024
All Science Journal Classification (ASJC) codes
- Software
- Hardware and Architecture
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering
- Artificial Intelligence