Solving Linear Regression with Insensitive Loss by Boosting

Ryotaro Mitsuboshi, Kohei Hatano, Eiji Takimoto

Research output: Contribution to journal › Article › peer-review

Abstract

Following the formulation of Support Vector Regression (SVR), we consider a regression analogue of soft margin optimization over the feature space indexed by a hypothesis class H. More specifically, the problem is to find a linear model w ∈ ℝ^H that minimizes the sum of ρ-insensitive losses over all training data for as small a ρ as possible, where the ρ-insensitive loss for a single example (x_i, y_i) is defined as max{|y_i − Σ_{h∈H} w_h h(x_i)| − ρ, 0}. Intuitively, the parameter ρ and the ρ-insensitive loss are analogous to the target margin and the hinge loss in soft margin optimization, respectively. Our formulation differs from SVR in two respects: (1) we use L1-norm regularization instead of L2-norm regularization, and (2) the feature space is implicitly defined by a hypothesis class instead of a kernel. We propose a boosting-type algorithm for solving the problem with a theoretically guaranteed convergence rate under a natural weak-learnability assumption.
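
To make the objective concrete, the following minimal Python sketch evaluates the ρ-insensitive loss and the resulting fit term Σ_i max{|y_i − Σ_{h∈H} w_h h(x_i)| − ρ, 0}. The hypothesis class H, the toy data, and the weight vector below are hypothetical placeholders for illustration; this is not the boosting algorithm proposed in the paper.

    import numpy as np

    def rho_insensitive_loss(y, pred, rho):
        # rho-insensitive loss for one example: max(|y - pred| - rho, 0)
        return max(abs(y - pred) - rho, 0.0)

    # Hypothetical hypothesis class H: each h maps an input x to a real value.
    H = [lambda x: x[0], lambda x: x[1], lambda x: x[0] * x[1]]

    def total_loss(w, X, Y, rho):
        # Sum of rho-insensitive losses of the linear model w over H,
        # i.e. sum_i max(|y_i - sum_h w_h h(x_i)| - rho, 0).
        total = 0.0
        for x, y in zip(X, Y):
            pred = sum(w_h * h(x) for w_h, h in zip(w, H))
            total += rho_insensitive_loss(y, pred, rho)
        return total

    # Toy data (hypothetical).
    X = [np.array([1.0, 2.0]), np.array([0.5, -1.0])]
    Y = [3.1, -0.4]
    w = np.array([1.0, 1.0, 0.0])  # w in R^H: one coordinate per hypothesis
    print(total_loss(w, X, Y, rho=0.1))  # only deviations outside the rho-tube are penalized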

Original language: English
Pages (from-to): 294-300
Number of pages: 7
Journal: IEICE Transactions on Information and Systems
Volume: E107.D
Issue number: 3
DOIs
Publication status: Published - Mar 2024

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Artificial Intelligence
