# EECS 559 - Optimization Methods for Signal and Image Processing and Machine Learning

**Course Instructors:** Qing Qu and Vladimir Dvorkin

**Teaching Assistants:** Siyi Chen and Soo-Min Kwon

**Course Time:** Mon/Wed 10:30 AM – 12:00 PM (Hybrid), Chrysler 133 and Online

**Office Hours:** Wed 1:00 PM – 2:30 PM (In-Person/Remote)

Enrollment is based on the ECE override system, with priority given to SIPML students. A previous version of the course, taught by Prof. Jeffrey Fessler, can be found here.

**Prerequisite:** EECS 545, EECS 551, or EECS 505 (aka EECS 598 “Computational Data Science”) is essential.

**Overview:** This graduate-level course introduces optimization methods suitable for the large-scale problems arising in data science and machine learning applications. We will explore several widely used algorithms for solving the convex/nonconvex and smooth/nonsmooth problems that appear in SIPML, and study their efficacy. The methods include (sub)gradient methods, proximal methods, Nesterov’s accelerated methods, ADMM, quasi-Newton, trust-region, and cubic regularization methods, along with (some of) their stochastic variants. If time allows, we will also introduce constrained optimization over Riemannian manifolds. Along the way, we will show how these methods apply to concrete problems ranging from inverse problems in signal processing (e.g., sparse recovery, phase retrieval, blind deconvolution, matrix completion) and unsupervised learning (e.g., dictionary learning, independent component analysis, nonnegative matrix factorization) to supervised learning (e.g., deep learning).

**Course Objectives:** The course involves extensive practical algorithm development, implementation, and investigation in Python. Designing methods that scale to large SIPML applications will be emphasized, and students will be expected to learn and apply efficient coding practices.

**Course Materials:** Slides and videos will be available via Canvas. Below are some tentative algorithms that will be covered in the course:

- 1st-order methods for smooth optimization: gradient descent, conjugate gradient, line-search method, momentum (Nesterov’s accelerated) method;
- 1st-order methods for nonsmooth optimization: subgradient methods, proximal methods, and their accelerated variants;
- Large-scale 1st-order optimization: ADMM, Frank-Wolfe method, and stochastic/incremental gradient methods;
- 2nd-order methods: Newton and quasi-Newton methods, trust-region methods, cubic regularization methods, and curvilinear search methods;
- Riemannian optimization: optimization over matrix manifolds such as the sphere, Stiefel manifold, Grassmannian manifold, etc.
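To give a flavor of the first bullet, here is a minimal sketch of gradient descent with Armijo backtracking line search on a least-squares problem. This is illustrative only, not course code; the function name and parameter choices are the author's own assumptions:

```python
import numpy as np

def gradient_descent_ls(A, b, t0=1.0, beta=0.5, alpha=1e-4, iters=500):
    """Minimize f(x) = 0.5 * ||Ax - b||^2 by gradient descent
    with Armijo backtracking line search."""
    f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # gradient of f at x
        t = t0
        # shrink the step until the sufficient-decrease (Armijo) condition holds
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# sanity check on a small, well-conditioned random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.ones(5)                     # so the minimizer is x* = ones(5)
x_hat = gradient_descent_ls(A, b)
```

Backtracking frees us from knowing the Lipschitz constant of the gradient in advance, at the cost of a few extra function evaluations per iteration.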

Every optimization method introduced will have at least one SIPML application that we will introduce as motivation. Students will implement and test these methods on those applications.
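As one example of such a pairing, sparse recovery via the lasso combines the proximal gradient method with the soft-thresholding operator. A minimal sketch is below; the problem sizes, regularization weight, and function names are illustrative assumptions, not the course's reference implementation:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=2000):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the smooth part, then prox step on the l1 part
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# recover a 3-sparse vector from 50 random measurements of a length-100 signal
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

With a small regularization weight and noiseless measurements, the recovered support matches the true one and the nonzero entries are accurate up to the lasso's shrinkage bias.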

**Assessment:** (i) homeworks (biweekly, 40%), (ii) course project (15%), (iii) final (take-home) exam (40%), (iv) class participation (5%)

**Literature:**

- High-Dimensional Data Analysis with Low-Dimensional Models: Principles, Computation, and Applications, John Wright and Yi Ma (2021).
- Numerical Optimization, Jorge Nocedal and Stephen Wright (2006).
- Convex Optimization, Stephen Boyd and Lieven Vandenberghe (2004).
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Stephen Boyd, Neal Parikh, and Eric Chu (2011).
- Optimization Methods for Large-Scale Machine Learning, Léon Bottou, Frank Curtis, and Jorge Nocedal (2016).
- Proximal Algorithms, Neal Parikh and Stephen Boyd (2014).
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview, Yuejie Chi, Yue M. Lu, and Yuxin Chen (2019).
- From Symmetry to Geometry: Tractable Nonconvex Problems, Yuqian Zhang, Qing Qu, and John Wright (2020).
- An Introduction to Optimization on Smooth Manifolds, Nicolas Boumal (2020).

**Related courses:**

- EECS 600 (Function Space Methods for Systems Theory) is much more theoretical than this course, since it deals with infinite-dimensional spaces, whereas EECS 559 focuses entirely on finite-dimensional problems. EECS 600 is also far more proof-oriented, though some proofs will be presented and expected in EECS 559 as well.
- IOE 410 (Advanced Optimization Methods) focuses on discrete methods and seems aimed at undergraduates.
- IOE 511/Math 562 (Continuous Optimization Methods) has some overlap in terms of the optimization methods covered; IOE 511 uses Matlab, while EECS 559 focuses on SIPML applications.
- IOE 611/Math 663 (Nonlinear Programming) covers important convex optimization principles. It uses the CVX package in Matlab, which does not scale to large problems; EECS 559 emphasizes large-scale SIPML applications.
- STAT 608 (Optimization Methods in Statistics) covers many of the same methods as EECS 559.
- EECS 556 (Image Processing) introduces some applications (e.g., image deblurring) that are used as examples in EECS 559. There is thus some overlap with EECS 556, as well as with the other courses listed above, but it is fine for students to take this course along with any or all of EECS 556, EECS 600, and IOE 611.

**Syllabus (subject to changes):**

| class | date | topic | content |
|---|---|---|---|
| 1 | 1/10 | introduction | course logistics & overview |
| 2 | 1/17 | optimization basics | introduction to mathematical optimization |
| 3 | 1/22 | optimization basics | sample examples & applications, mathematical background |
| 4 | 1/24 | convex smooth | gradient descent method, line search |
| 5 | 1/29 | convex smooth | gradient descent method, line search |
| 6 | 1/31 | convex smooth | Nesterov’s acceleration, Newton’s method |
| 7 | 2/05 | convex smooth | stochastic gradient descent |
| 8 | 2/07 | convex nonsmooth | intro to nonsmooth problems, subgradient methods |
| 9 | 2/12 | convex nonsmooth | subgradient methods II |
| 10 | 2/14 | convex nonsmooth | smoothing & Moreau envelope |
| 11 | 2/19 | convex nonsmooth | proximal gradient method |
| 12 | 2/21 | convex nonsmooth | accelerated proximal gradient & homotopy continuation |
| 13 | 3/04 | convex nonsmooth | augmented Lagrangian method |
| 14 | 3/06 | convex nonsmooth | alternating direction method of multipliers (ADMM) I |
| 15 | 3/11 | convex nonsmooth | alternating direction method of multipliers (ADMM) II |
| 16 | 3/13 | convex nonsmooth | Frank-Wolfe method |
| 17 | 3/18 | nonconvex optimization | intro to nonconvex problems I |
| 18 | 3/20 | nonconvex optimization | intro to nonconvex problems II |
| 19 | 3/25 | nonconvex optimization | trust-region method I |
| 20 | 3/27 | nonconvex optimization | trust-region method II |
| 21 | 4/01 | nonconvex optimization | trust-region method III |
| 22 | 4/03 | nonconvex optimization | cubic regularization method |
| 23 | 4/08 | Riemannian optimization | Riemannian optimization I |
| 24 | 4/10 | Riemannian optimization | Riemannian optimization II |
| 25 | 4/15 | Riemannian optimization | Riemannian optimization III |
| 26 | 4/17 | TBA | TBA |
| 27 | 4/22 | TBA | TBA |