Theme II: Optimization Theory and Algorithms for Machine Learning, Including Numerical Solvers for Large-Scale Nonconvex and Nonsmooth Learning Problems
Participants:
- CS - P. Devanbu, I. Tagkopoulos
- ECE - L. Lai
- Math - J. De Loera, A. Fannjiang, M. Köppe, S. Ma, T. Strohmer
- Stat - K. Balasubramanian, X. Li
Algorithmic tools from optimization have become increasingly important in machine learning and data analysis. The focus is shifting toward nonconvex optimization, in which the functions to be minimized may have a plethora of local minima and saddle points. Nonconvex problems appear in deep learning as well as in important applications in signal and image processing; examples include image classification, data clustering, and phase retrieval. Another important direction is how to implement nonconvex methods on devices with very limited resources. Solving this challenge is key for on-device machine learning, which offers benefits such as increased privacy and security.
UCD4IDS faculty have played key roles in the recent surge of work on nonconvex optimization. A very fruitful direction has been to find a suitable convex relaxation of a nonconvex problem and to demonstrate that, under practically relevant conditions, the solution of the convex problem coincides with that of the original nonconvex one. A now well-known example is the PhaseLift approach to the famous phase retrieval problem. We also initiated the theoretical analysis of the nonconvex geometry arising in signal processing and machine learning, while our more recent work represents a milestone in blind deconvolution, thereby igniting extensive work on biconvex optimization.
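To make the relaxation strategy concrete, the following is a brief sketch of the core PhaseLift formulation. Phase retrieval asks to recover a signal x ∈ C^n from quadratic (phaseless) measurements b_i = |⟨a_i, x⟩|², a problem that is nonconvex in x; lifting to the matrix variable X = xx* makes the measurements linear in X, and the nonconvex rank-one constraint is relaxed to a semidefinite program:

```latex
% Measurements are linear in the lifted variable X = x x^*:
%   |\langle a_i, x \rangle|^2 = a_i^* X a_i = \operatorname{Tr}(a_i a_i^* X).
% PhaseLift replaces the nonconvex constraint rank(X) = 1 by trace
% minimization over the positive semidefinite cone:
\begin{align*}
  \min_{X \in \mathbb{C}^{n \times n}} \quad & \operatorname{Tr}(X) \\
  \text{s.t.} \quad & \operatorname{Tr}(a_i a_i^{*} X) = b_i, \quad i = 1, \dots, m, \\
                    & X \succeq 0 .
\end{align*}
```

If the optimal X* has rank one, the signal is recovered, up to an unavoidable global phase, as its leading eigenvector; the PhaseLift analysis shows that this occurs with high probability under suitable random measurement models.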
Our research projects in this theme include:
- Stochastic algorithms (improving the stochastic gradient descent (SGD) algorithm); see the sketch following this list
- The optimization landscape of nonconvex problems, including analysis of escape from saddle points
- Privacy and security in machine learning
- Applications: phase retrieval and beyond
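As an illustration of the first two items, below is a minimal sketch, not any of the institute's actual methods, of gradient descent with injected isotropic noise, a simple stand-in for the stochastic noise in SGD. The test function and all parameter values are hypothetical choices for illustration: the function has a strict saddle at the origin, where exact gradient descent stalls, while the perturbed iteration escapes toward a minimizer.

```python
import numpy as np

def grad(w):
    # Gradient of the illustrative function f(w) = (w_0^2 - 1)^2 + w_1^2,
    # which has a strict saddle at the origin and minima at (+-1, 0).
    return np.array([4.0 * w[0] * (w[0] ** 2 - 1.0), 2.0 * w[1]])

def perturbed_sgd(w0, lr=0.05, noise=1e-3, steps=500, seed=0):
    """Gradient descent with small isotropic Gaussian perturbations.

    The injected noise mimics stochastic gradient noise and lets the
    iterate leave the neighborhood of a strict saddle point, where the
    exact gradient vanishes.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        g = grad(w) + noise * rng.standard_normal(w.shape)
        w = w - lr * g
    return w

if __name__ == "__main__":
    # Started exactly at the saddle, noiseless gradient descent is stuck
    # (the gradient is zero there); the perturbed variant converges to a
    # neighborhood of one of the two minima (+-1, 0).
    print(perturbed_sgd([0.0, 0.0]))
```

The noise scale trades off escape speed against the size of the fluctuation around the minimizer, a tension that motivates the variance-reduction and landscape-analysis projects listed above.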