TY - GEN
T1 - End-to-End Learning for Prediction and Optimization with Gradient Boosting
AU - Konishi, Takuya
AU - Fukunaga, Takuro
N1 - Funding Information:
T. Konishi—Supported by JSPS KAKENHI Grant Number 17K12743 and JP18H05291, Japan. T. Fukunaga—Supported by JST PRESTO grant JPMJPR1759, Japan.
Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Mathematical optimization is a fundamental tool in decision making. However, it is often difficult to obtain an accurate formulation of an optimization problem because of uncertain parameters. Machine learning frameworks are attractive for addressing this issue: we predict the uncertain parameters and then solve the optimization problem based on the prediction. Recently, end-to-end learning approaches that predict and optimize the successive problems have received attention in both the optimization and machine learning communities. In this paper, we focus on gradient boosting, a powerful ensemble method, and develop an end-to-end learning algorithm that directly maximizes performance on the optimization problems. Our algorithm extends existing gradient-based optimization through implicit differentiation to second-order optimization for efficiently learning gradient boosting models. We also conduct computational experiments to analyze how well the end-to-end approaches work and to show the effectiveness of our end-to-end approach.
AB - Mathematical optimization is a fundamental tool in decision making. However, it is often difficult to obtain an accurate formulation of an optimization problem because of uncertain parameters. Machine learning frameworks are attractive for addressing this issue: we predict the uncertain parameters and then solve the optimization problem based on the prediction. Recently, end-to-end learning approaches that predict and optimize the successive problems have received attention in both the optimization and machine learning communities. In this paper, we focus on gradient boosting, a powerful ensemble method, and develop an end-to-end learning algorithm that directly maximizes performance on the optimization problems. Our algorithm extends existing gradient-based optimization through implicit differentiation to second-order optimization for efficiently learning gradient boosting models. We also conduct computational experiments to analyze how well the end-to-end approaches work and to show the effectiveness of our end-to-end approach.
UR - http://www.scopus.com/inward/record.url?scp=85103234233&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85103234233&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-67664-3_12
DO - 10.1007/978-3-030-67664-3_12
M3 - Conference contribution
AN - SCOPUS:85103234233
SN - 9783030676636
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 191
EP - 207
BT - Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2020, Proceedings
A2 - Hutter, Frank
A2 - Kersting, Kristian
A2 - Lijffijt, Jefrey
A2 - Valera, Isabel
PB - Springer Science and Business Media Deutschland GmbH
T2 - European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2020
Y2 - 14 September 2020 through 18 September 2020
ER -