From sparse polynomial approximation to optimal restart schemes for accelerating first-order optimization algorithms

Abstract

Sparse polynomial methods have, over the last decade, become widely used tools for approximating smooth, high-dimensional functions from limited data. Notable applications include parametric modelling and computational uncertainty quantification, discovering governing equations of dynamical systems, and solving high-dimensional PDEs. A standard sparse polynomial approximation scheme is based on solving a convex optimization problem involving an $\ell^1$- or weighted $\ell^1$-norm. This is typically done using first-order methods. However, such methods generally converge very slowly, making them undesirable for the many applications that require high accuracy. Motivated by this problem, in this talk I will describe a general scheme for accelerating first-order optimization algorithms. This scheme applies to any convex optimization problem satisfying a certain $\textit{approximate sharpness}$ condition and to essentially any first-order optimization algorithm. For a wide range of problems (not just $\ell^1$-minimization-type problems), this scheme either achieves optimal rates of convergence or improves on previously established rates. To emphasize its usefulness and generality, I will showcase applications in compressive imaging and machine learning.
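For context, in one common formulation a (non-approximate) sharpness condition asserts that $f(x) - f^* \ge \alpha \, \mathrm{dist}(x, X^*)^{\beta}$ on a relevant set, for constants $\alpha > 0$ and $\beta \ge 1$, where $f^*$ is the optimal value and $X^*$ the set of minimizers; the approximate variant of the talk relaxes this so that it need only hold up to an additive error term. To make the restart idea concrete, the sketch below wraps a fixed-schedule restart loop around FISTA for a toy $\ell^1$-regularized least-squares (LASSO) problem. This is only a minimal illustration of the general restart principle, not the particular scheme described in the talk; the problem data, step size, and restart schedule are placeholder choices.

```python
import numpy as np

# Toy LASSO instance: min_x 0.5*||A x - b||^2 + lam*||x||_1  (placeholder data)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient of the smooth part

def fista(x0, num_iters):
    """Run FISTA from x0 for num_iters iterations and return the final iterate.
    Each fresh call resets the momentum, which is what a restart exploits."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(num_iters):
        z = y - A.T @ (A @ y - b) / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def restarted(solver, x0, num_restarts, iters_per_restart):
    """Generic fixed-schedule restart: repeatedly warm-start the solver with its own output."""
    x = x0
    for _ in range(num_restarts):
        x = solver(x, iters_per_restart)
    return x

# Compare the same iteration budget with and without restarts
objective = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
x_plain = fista(np.zeros(100), 500)
x_restart = restarted(fista, np.zeros(100), num_restarts=10, iters_per_restart=50)
print(f"objective without restarts: {objective(x_plain):.6f}")
print(f"objective with restarts:    {objective(x_restart):.6f}")
```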