Selected Publications
Here I list “selected publications” that are representative of my work overall.
February 11, 2025 By Laura Balzano
I am honored to have received the Sarah Goddard Power Award, an award given to those who contribute to the advancement of women in scholarship and academic leadership. Right now this work is critically important in my field, as technology in machine learning, artificial intelligence, and computing changes our world on a daily basis. Technology is often thought of as an objective pursuit, where the goals are clear and well-defined, and only those who are “math geniuses” can make a contribution. This couldn’t be further from the truth – we are constantly defining the goals and values of our technology, and diverse voices are key to creating technology that lifts us up as a whole society.
November 5, 2024 By Laura Balzano
I am the lead guest editor on a Signal Processing Magazine special issue on the Mathematics of Deep Learning: https://signalprocessingsociety.org/publications-resources/special-issue-deadlines/ieee-spm-special-issue-mathematics-deep-learning. My excellent co-editors are Joan Bruna, Gitta Kutyniok, Robert Nowak, and Jong Chul Ye. We have extended the White Paper deadline to this Friday, November 8. Please share with anyone who is interested but missed the deadline last Friday. We look forward to your submissions!
June 25, 2024 By Laura Balzano
I am excited to be a part of three papers at the International Conference on Machine Learning this July in Vienna.
Congratulations to Can Yaras for having his work on compression in deep low-rank learning, with co-authors Peng Wang and Qing Qu, accepted as an oral presentation for Tuesday afternoon! This work proves that when training deep linear networks, the gradient descent dynamics are limited to an invariant subspace. This subspace can be leveraged to make training and overparameterization more efficient, allowing us to reap the benefits of deep overparameterization without the computational burden. The code is available on Can’s GitHub site. I talked about this work at the 1W-MINDS seminar in April.
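To give a feel for the phenomenon, here is a minimal NumPy sketch (not Can’s code) of the deep matrix factorization setting: a depth-3 linear network with small orthogonal initialization is trained by plain gradient descent on a rank-2 target, and the accumulated update to each layer turns out to be numerically low-rank even though every weight matrix is full-size. The dimensions, initialization scale, and step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-r target with unit spectral norm (illustrative sizes)
d, r = 20, 2
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
V, _ = np.linalg.qr(rng.standard_normal((d, r)))
M = U @ np.diag([1.0, 0.5]) @ V.T

# Depth-3 linear network W3 W2 W1 with small orthogonal initialization
eps, lr, steps = 1e-2, 0.1, 3000
Ws = [eps * np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(3)]
W0s = [W.copy() for W in Ws]

for _ in range(steps):
    E = Ws[2] @ Ws[1] @ Ws[0] - M          # residual of the end-to-end product
    grads = [
        (Ws[2] @ Ws[1]).T @ E,             # dLoss/dW1
        Ws[2].T @ E @ Ws[0].T,             # dLoss/dW2
        E @ (Ws[1] @ Ws[0]).T,             # dLoss/dW3
    ]
    for W, G in zip(Ws, grads):
        W -= lr * G

# Each layer is d x d, yet the accumulated update W_l - W_l(0) is
# (numerically) confined to a subspace of dimension about 2r.
for l, (W, W0) in enumerate(zip(Ws, W0s), 1):
    s = np.linalg.svd(W - W0, compute_uv=False)
    print(f"layer {l}: numerical rank of update =", int((s > 1e-5 * s[0]).sum()))
print("fit error:", np.linalg.norm(Ws[2] @ Ws[1] @ Ws[0] - M))
```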
Peng Wang and Huikang Liu led our work on symmetric matrix completion with ReLU sampling that will be presented as a poster on Wednesday. We showed that it is possible to recover a low-rank matrix with sampling that is highly dependent on the matrix entries — we focus on ReLU sampling (and variants) where only positive entries are observed.
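As a concrete illustration of the observation model, here is a small NumPy sketch (an illustration of the setup only, not the paper’s algorithm or guarantees): generate a symmetric PSD rank-r matrix, keep only its positive entries, and run factored gradient descent on the observed entries to see how well the full matrix is recovered. The spectral-style initialization and step size are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: symmetric PSD rank-r matrix M = Z Z^T
n, r = 100, 3
Z = rng.standard_normal((n, r))
M = Z @ Z.T

# ReLU sampling: only the positive entries of M are observed
mask = M > 0

# Factored gradient descent on the observed entries:
#   minimize f(X) = 1/4 || mask * (X X^T - M) ||_F^2
# Spectral-style initialization from the zero-filled observed matrix
# (an illustrative choice, not necessarily the paper's scheme).
evals, evecs = np.linalg.eigh(np.where(mask, M, 0.0))
X = evecs[:, -r:] * np.sqrt(np.maximum(evals[-r:], 0.0))

lr, steps = 1e-3, 4000
for _ in range(steps):
    R = mask * (X @ X.T - M)   # residual on observed entries only
    X -= lr * R @ X            # gradient of f(X)

rel_err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
print(f"observed fraction: {mask.mean():.2f}, relative error on the full matrix: {rel_err:.1e}")
```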
Finally, University of Wisconsin–Madison PhD student Yuchen Li will be presenting his work on block Riemannian MM methods, also with a poster on Wednesday. He proved iteration complexity guarantees for convergence to a stationary point for general multi-block MM algorithms where any number of blocks may be constrained to a Riemannian manifold. His complexity results reduce to well-known results in the Euclidean case. This work is broadly applicable to alternating MM algorithms for machine learning problems.
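To make the structure of such algorithms concrete, here is a minimal two-block sketch in the spirit of block MM with one manifold-constrained block (a toy example of mine, not Yuchen’s algorithm): the Euclidean block is updated by exact minimization, while the Stiefel-manifold block is updated by minimizing a Lipschitz quadratic majorizer, which amounts to a gradient step followed by polar projection back onto the manifold. The hallmark MM property, a monotonically non-increasing objective, is easy to check numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-block problem: fit a symmetric matrix A by U S U^T, where
# U has orthonormal columns (Stiefel manifold) and S is unconstrained:
#   minimize f(U, S) = 1/2 || A - U S U^T ||_F^2
n, k = 30, 3
G = rng.standard_normal((n, n))
A = (G + G.T) / 2

def f(U, S):
    return 0.5 * np.linalg.norm(A - U @ S @ U.T) ** 2

def polar(B):
    # Nearest matrix with orthonormal columns (projection onto the Stiefel manifold)
    P, _, Qt = np.linalg.svd(B, full_matrices=False)
    return P @ Qt

U = polar(rng.standard_normal((n, k)))     # random feasible start
vals = []
for _ in range(300):
    # Euclidean block: exact minimization over S (itself a valid MM step)
    S = U.T @ A @ U
    # Riemannian block: majorize f(., S) at the current U by a Lipschitz
    # quadratic surrogate and minimize it over the Stiefel manifold,
    # i.e. a gradient step followed by polar projection.
    grad = 2 * (U @ S @ U.T - A) @ U @ S
    L = 6 * np.linalg.norm(S, 2) ** 2 + 2 * np.linalg.norm(A, 2) * np.linalg.norm(S, 2)
    U = polar(U - grad / L)
    vals.append(f(U, S))

# MM guarantees the objective never increases from one iteration to the next.
print("monotone descent:", all(a >= b - 1e-8 for a, b in zip(vals, vals[1:])))
print(f"final objective: {vals[-1]:.3f}")
```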