Abstract
Building on recent work on linesearch-free adaptive proximal gradient methods, this paper proposes AdaPG^{q,r}, a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds. Different choices of the parameters q and r are discussed, and the efficacy of the resulting methods is demonstrated through numerical simulations. To better understand the underlying theory, convergence of the framework is established in a more general setting that allows for time-varying parameters. Finally, an adaptive alternating minimization algorithm is presented by exploring the dual setting. This algorithm not only incorporates additional adaptivity but also expands its applicability beyond standard strongly convex settings.
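The abstract gives no pseudocode, so as a rough illustration of the kind of method studied, the following is a minimal sketch of a linesearch-free adaptive proximal gradient iteration applied to a LASSO problem. It uses the simpler adaptive stepsize rule of Malitsky and Mishchenko (2020), which AdaPG^{q,r} generalizes with larger (q, r)-dependent stepsize policies; it is not the paper's actual update, and all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def prox_l1(z, t):
    """Proximal map of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_proxgrad(grad_f, prox_g, x0, gamma0=1e-3, max_iter=2000, tol=1e-9):
    """Linesearch-free adaptive proximal gradient sketch.

    Stepsizes follow the simple adaptive rule of Malitsky &
    Mishchenko (2020), estimated from observed local curvature;
    the AdaPG^{q,r} policy of the paper is larger and tighter.
    """
    x_prev, g_prev = x0, grad_f(x0)
    gamma_prev = gamma = gamma0
    x = prox_g(x_prev - gamma * g_prev, gamma)   # first proximal gradient step
    for _ in range(max_iter):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        if np.linalg.norm(dx) < tol:             # iterates have stalled
            break
        theta = gamma / gamma_prev               # previous stepsize growth ratio
        ell = np.linalg.norm(dg) / np.linalg.norm(dx)  # local Lipschitz estimate
        cap = 1.0 / (2.0 * ell) if ell > 0 else np.inf
        gamma_prev, gamma = gamma, min(np.sqrt(1.0 + theta) * gamma, cap)
        x_prev, g_prev = x, g
        x = prox_g(x - gamma * g, gamma)         # proximal gradient update
    return x

# Example: LASSO with f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
x_hat = adaptive_proxgrad(lambda x: A.T @ (A @ x - b),
                          lambda z, t: prox_l1(z, lam * t),
                          np.zeros(100))
```

The key point the sketch shares with the paper's setting is that no linesearch is performed: the stepsize grows geometrically and is capped by a curvature estimate computed from quantities the iteration already produces.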
Original language | English |
---|---|
Pages (from-to) | 197-208 |
Number of pages | 12 |
Journal | Proceedings of Machine Learning Research |
Volume | 242 |
Publication status | Published - 2024 |
Event | 6th Annual Learning for Dynamics and Control Conference, L4DC 2024 - Oxford, United Kingdom |
Duration | Jul 15, 2024 → Jul 17, 2024 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability