EECS 559 - Optimization Methods for Signal and Image Processing and Machine Learning
Course Instructors: Qing Qu and Vladimir Dvorkin
Teaching Assistants: Siyi Chen and Soo-Min Kwon
Course Time: Mon/Wed 10:30 AM – 12:00 PM (Hybrid), Chrysler 133 and Online
Office Hour: Wed 1:00 PM – 2:30 PM (In-Person/Remote)
Enrollment is based on the ECE override system, with priority given to SIPML students. A previous version of the course, taught by Prof. Jeffrey Fessler, can be found here.
Prerequisites: EECS 545, EECS 551, or EECS 505 (previously offered as EECS 598 “Computational Data Science”) is essential.
Overview: This graduate-level course introduces optimization methods suitable for large-scale problems arising in data science and machine learning applications. We will explore several widely used optimization algorithms for solving convex/nonconvex and smooth/nonsmooth problems appearing in SIPML. We will study the efficacy of these methods, including (sub)gradient methods, proximal methods, Nesterov’s accelerated methods, ADMM, quasi-Newton, trust-region, and cubic regularization methods, as well as (some of) their stochastic variants. If time allows, we will also introduce constrained optimization over Riemannian manifolds. Along the way, we will show how these methods can be applied to concrete problems ranging from inverse problems in signal processing (e.g., sparse recovery, phase retrieval, blind deconvolution, matrix completion) and unsupervised learning (e.g., dictionary learning, independent component analysis, nonnegative matrix factorization) to supervised learning (e.g., deep learning).
Course Objectives: The course will involve extensive practical algorithm development, implementation, and investigation using Python. Emphasis will be placed on designing methods that scale to large SIPML applications, and students will be expected to learn and apply efficient coding practices.
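To give a flavor of the kind of implementation work involved, here is a minimal, hypothetical Python sketch (not course-provided code) of gradient descent with backtracking line search on a least-squares objective; the function and variable names are illustrative only.

```python
# Hypothetical illustration (not course-provided code): gradient descent with
# Armijo backtracking line search on the least-squares objective
#   f(x) = 0.5 * ||A x - b||^2,  with gradient  A^T (A x - b).
import numpy as np

def f(A, b, x):
    """Least-squares objective value."""
    r = A @ x - b
    return 0.5 * (r @ r)

def gradient_descent(A, b, x0, max_iter=200, tol=1e-8, beta=0.5, c=1e-4):
    """Gradient descent; shrink the step t until the Armijo sufficient-decrease
    condition f(x - t g) <= f(x) - c t ||g||^2 holds."""
    x = x0.copy()
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)           # gradient at the current iterate
        if np.linalg.norm(g) < tol:     # stop when the gradient is (nearly) zero
            break
        t, fx, gg = 1.0, f(A, b, x), g @ g
        while f(A, b, x - t * g) > fx - c * t * gg:
            t *= beta                   # backtrack
        x = x - t * g                   # take the accepted step
    return x

# Usage on a random overdetermined system (illustrative data only).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
x_hat = gradient_descent(A, b, np.zeros(20))
```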
Course Materials: Slides and videos will be available via Canvas. Below is a tentative list of the algorithms that will be covered in the course:
- 1st-order methods for smooth optimization: gradient descent, conjugate gradient, line-search method, momentum (Nesterov’s accelerated) method;
- 1st-order methods for nonsmooth optimization: subgradient methods, proximal methods, and their accelerated variants;
- Large-scale 1st-order optimization: ADMM, Frank-Wolfe method, and stochastic/incremental gradient methods;
- 2nd-order methods: Newton and quasi-Newton methods, the trust-region method, the cubic regularization method, and the curvilinear search method;
- Riemannian optimization: optimization over matrix manifolds such as the sphere, Stiefel manifold, Grassmannian manifold, etc.
Every optimization method introduced will be motivated by at least one SIPML application, and students will implement and test these methods on those applications; a minimal sketch of this workflow is shown below.
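For instance, a hypothetical sketch (not course-provided code) of the proximal gradient method (ISTA) applied to a sparse-recovery (LASSO) problem might look as follows; the data and parameter choices are toy assumptions for illustration.

```python
# Hypothetical sketch (not course-provided code): the proximal gradient method
# (ISTA) applied to the sparse-recovery / LASSO problem
#   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1 .
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, lam, max_iter=500):
    """Iterate x <- prox_{t*lam*||.||_1}( x - t * A^T (A x - b) ) with a fixed
    step t = 1 / L, where L = ||A||_2^2 bounds the gradient Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)                    # gradient of the smooth part
        x = soft_threshold(x - t * grad, lam * t)   # proximal step on the l1 part
    return x

# Usage: recover a sparse vector from noisy compressive measurements (toy data).
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
support = rng.choice(200, size=10, replace=False)
x_true[support] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, b, lam=0.05)
```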
Assessment: (i) homework assignments (biweekly, 40%), (ii) course project (15%), (iii) final take-home exam (40%), (iv) class participation (5%).
Literature:
- High-Dimensional Data Analysis with Low-Dimensional Models: Principles, Computation, and Applications, John Wright, Yi Ma (2021).
- Numerical Optimization, Jorge Nocedal, and Stephen Wright (2006)
- Convex Optimization, Stephen Boyd and Lieven Vandenberghe (2004).
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Stephen Boyd, Neal Parikh, Eric Chu (2011).
- Optimization Methods for Large-Scale Machine Learning, Leon Bottou, Frank Curtis, and Jorge Nocedal (2016).
- Proximal Algorithms, Neal Parikh and Stephen Boyd (2014).
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview, Yuejie Chi, Yue M. Lu, Yuxin Chen (2019).
- From Symmetry to Geometry: Tractable Nonconvex Problems, Yuqian Zhang, Qing Qu, John Wright (2020).
- An Introduction to Optimization on Smooth Manifolds, Nicolas Boumal (2020).
Related courses:
- EECS 600 (Function Space Methods for Systems Theory) is much more theoretical than this course because it deals with infinite-dimensional spaces, whereas EECS 559 focuses entirely on finite-dimensional problems. EECS 600 is also far more proof-oriented, although some proofs will be presented and expected in EECS 559 as well.
- IOE 410 (Advanced Optimization Methods) focuses on discrete methods and seems aimed at undergraduates.
- IOE 511 / Math 562 (Continuous Optimization Methods) has some overlap in terms of the optimization methods covered. IOE 511 uses Matlab, whereas EECS 559 focuses on SIPML applications.
- IOE 611 / Math 663 (Nonlinear Programming) covers important convex optimization principles. It uses the CVX package in Matlab, which does not scale to large problems; EECS 559 emphasizes large-scale SIPML applications.
- STAT 608 (Optimization Methods in Statistics) covers many of the same methods as EECS 559.
- EECS 556 (Image Processing) introduces some applications (e.g., image deblurring) that are used as examples in EECS 559. There is thus some overlap with EECS 556, as well as with the other courses listed above, but it is fine for students to take this course together with any or all of EECS 556, EECS 600, and IOE 611.
Syllabus (subject to change):
class | date | topic | content |
---|---|---|---|
1 | 1/10 | introduction | course logistics & overview |
2 | 1/17 | optimization basics | introduction to mathematical optimization |
3 | 1/22 | optimization basics | sample examples & applications, mathematical background |
4 | 1/24 | convex smooth | gradient descent method, line search |
5 | 1/29 | convex smooth | gradient descent method, line search |
6 | 1/31 | convex smooth | Nesterov’s acceleration, Newton’s method |
7 | 2/05 | convex smooth | stochastic gradient descent |
8 | 2/07 | convex nonsmooth | intro to nonsmooth problems, subgradient methods |
9 | 2/12 | convex nonsmooth | subgradient methods II |
10 | 2/14 | convex nonsmooth | smoothing & Moreau envelope |
11 | 2/19 | convex nonsmooth | proximal gradient method |
12 | 2/21 | convex nonsmooth | accelerated proximal gradient & homotopy continuation |
13 | 3/04 | convex nonsmooth | augmented Lagrangian method |
14 | 3/06 | convex nonsmooth | alternating direction method of multipliers (ADMM) I |
15 | 3/11 | convex nonsmooth | alternating direction method of multipliers (ADMM) II |
16 | 3/13 | convex nonsmooth | Frank-Wolfe method |
17 | 3/18 | nonconvex optimization | intro to nonconvex problems I |
18 | 3/20 | nonconvex optimization | intro to nonconvex problems II |
19 | 3/25 | nonconvex optimization | trust-region method I |
20 | 3/27 | nonconvex optimization | trust-region method II |
21 | 4/01 | nonconvex optimization | trust-region method III |
22 | 4/03 | nonconvex optimization | cubic regularization method |
23 | 4/08 | Riemannian optimization | Riemannian optimization I |
24 | 4/10 | Riemannian optimization | Riemannian optimization II |
25 | 4/15 | Riemannian optimization | Riemannian optimization III |
26 | 4/17 | TBA | TBA |
27 | 4/22 | TBA | TBA |