Equivalence between adaptive Lasso and generalized ridge estimators in linear regression with orthogonal explanatory variables after optimizing regularization parameters

Mineaki Ohishi, Hirokazu Yanagihara, Shuichi Kawano

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we deal with a penalized least-squares (PLS) method for a linear regression model with orthogonal explanatory variables. The penalties used are an adaptive Lasso (AL)-type ℓ1 penalty (AL penalty) and a generalized ridge (GR)-type ℓ2 penalty (GR penalty). Since the estimators obtained by minimizing the PLS criteria strongly depend on the regularization parameters, we optimize these parameters by a model selection criterion (MSC) minimization method. The estimators based on the AL penalty and the GR penalty have different properties and are generally regarded as completely different estimators. In this paper, however, we show the interesting result that, when the explanatory variables are orthogonal, the two estimators are exactly equal after the regularization parameters are optimized by the MSC minimization method.
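The intuition behind the equivalence can be seen in a small numerical sketch. Under an orthonormal design, both the GR estimator and the AL (soft-thresholding) estimator shrink each OLS coordinate by a factor between 0 and 1, so coordinate-wise they trace out the same set of candidate estimates; a selection criterion that depends on the estimator only through these shrinkage factors is then minimized at the same point for both families. The Python sketch below uses an assumed Cp-type criterion and a grid search purely for illustration; it is not the paper's MSC or its optimization method.

import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 5

# Orthonormal explanatory variables: X'X = I_k.
X, _ = np.linalg.qr(rng.standard_normal((n, k)))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0])
y = X @ beta_true + rng.standard_normal(n)

# Under orthonormality, the OLS estimator is simply X'y.
beta_ols = X.T @ y
sigma2 = np.sum((y - X @ beta_ols) ** 2) / (n - k)

# Coordinate-wise, both families shrink the OLS estimate by a factor
# c_j in [0, 1]:
#   GR: beta_j / (1 + lambda_j)           -> c_j = 1 / (1 + lambda_j)
#   AL: sign(beta_j) (|beta_j| - d_j)_+   -> c_j = (1 - d_j / |beta_j|)_+
# With X'X = I, the assumed Cp-type criterion separates across
# coordinates as (1 - c_j)^2 * beta_ols_j^2 + 2 * sigma2 * c_j (+ const),
# so each c_j can be optimized on a grid independently.
grid = np.linspace(0.0, 1.0, 2001)
c_opt = np.empty(k)
for j in range(k):
    crit = (1.0 - grid) ** 2 * beta_ols[j] ** 2 + 2.0 * sigma2 * grid
    c_opt[j] = grid[np.argmin(crit)]

# Map the optimal shrinkage factor back to each family's parameter.
beta_gr = c_opt * beta_ols            # GR: lambda_j = 1/c_j - 1 (c_j = 0 means lambda_j -> infinity)
d = (1.0 - c_opt) * np.abs(beta_ols)  # AL: threshold d_j
beta_al = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - d, 0.0)

print(np.allclose(beta_gr, beta_al))  # True: the optimized estimators agree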

Original language: English
Pages (from-to): 1501-1516
Number of pages: 16
Journal: Annals of the Institute of Statistical Mathematics
Volume: 72
Issue number: 6
DOIs
Publication status: Published - Dec 1 2020
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
