Research presentations at CPAL 2024

At the inaugural Conference on Parsimony and Learning (CPAL), my group is presenting three works that have come out of an exciting recent collaboration with UM Prof Qing Qu and other colleagues on low-rank learning in deep networks. Prof Qu’s prior work studying neural collapse in deep networks has opened many exciting directions for us to pursue! All three works study deep linear networks (DLNs), i.e., deep matrix factorization. In this setting, which simplifies deep neural networks by removing the nonlinear activations, we can prove several interesting fundamental facts about the way DLNs learn from data when trained with gradient descent. Congratulations to SPADA members Soo Min Kwon, Can Yaras, and Peng Wang (all co-advised by Prof Qu) on these publications!

Yaras, C., Wang, P., Hu, W., Zhu, Z., Balzano, L., & Qu, Q. (2023, December 1). Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Linear Networks. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=oSzCKf1I5N

Wang, P., Li, X., Yaras, C., Zhu, Z., Balzano, L., Hu, W., & Qu, Q. (2023, December 1). Understanding Hierarchical Representations in Deep Networks via Feature Compression and Discrimination. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=Ovuu8LpGZu

Kwon, S. M., Zhang, Z., Song, D., Balzano, L., & Qu, Q. (2023, December 1). Efficient Low-Dimensional Compression of Overparameterized Networks. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=1AVb9oEdK7
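
For readers new to this setting, here is a toy sketch of deep matrix factorization (my own illustration, not code from any of the papers above): a product of weight matrices is fit to a low-rank target with plain gradient descent, which is the kind of training dynamics these works analyze.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, depth, lr, steps = 10, 2, 3, 0.05, 5000

# planted low-rank target, normalized so gradient descent is well behaved
Y = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
Y /= np.linalg.norm(Y, 2)

# overparameterized square factors with small random initialization
Ws = [0.1 * rng.standard_normal((d, d)) for _ in range(depth)]

def end_to_end(Ws):
    """Product W_depth ... W_1 of all factors."""
    P = np.eye(d)
    for W in Ws:
        P = W @ P
    return P

for _ in range(steps):
    R = end_to_end(Ws) - Y                      # residual of the end-to-end map
    # gradient of 0.5 * ||W_depth ... W_1 - Y||_F^2 with respect to each factor
    new_Ws = []
    for i in range(depth):
        above = np.eye(d)
        for W in Ws[i + 1:]:
            above = W @ above                   # factors applied after layer i
        below = np.eye(d)
        for W in Ws[:i]:
            below = W @ below                   # factors applied before layer i
        new_Ws.append(Ws[i] - lr * above.T @ R @ below.T)
    Ws = new_Ws

print("final loss:", 0.5 * np.linalg.norm(end_to_end(Ws) - Y) ** 2)
```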

Congratulations Dr. Gilman and Dr. Du!

Last fall and winter, SPADA PhD students Kyle Gilman and Zhe Du graduated. Kyle’s thesis was titled “Scalable Algorithms Using Optimization on Orthogonal Matrix Manifolds,” and he continues to make fundamental contributions to interesting modern optimization problems. He is currently an Applied AI/ML Senior Associate at JPMorgan Chase. Zhe’s thesis was titled “Learning, Control, and Reduction for Markov Jump Systems,” with lots of interesting work at the intersection of machine learning and control. He is currently a postdoctoral researcher working with Samet Oymak and Fabio Pasqualetti. I am excited to follow their work into the future as they make an impact in optimization, machine learning, and control!

MLK Spirit Award

I am honored to have received an MLK Spirit Award from the Michigan College of Engineering. These awards are given to university members who exemplify the leadership and vision of Reverend Dr. Martin Luther King, Jr. through their commitment to social justice, diversity, equity, and inclusion. That commitment is a very high priority for me, so I am grateful that others have felt the impact of my actions. https://ece.engin.umich.edu/stories/laura-balzano-receives-2023-mlk-spirit-award

K-Subspaces Algorithm Results at ICML

I’m excited that our results for the K-Subspaces algorithm were accepted to ICML. My postdoc Peng Wang will be presenting his excellent work; you may read the paper here or attend his session if you are interested. K-Subspaces (KSS) is a natural generalization of K-means from point centers to higher-dimensional subspace centers, originally proposed by Bradley and Mangasarian in 2000. Peng not only showed that KSS converges locally, but also that a simple spectral initialization guarantees a close-enough starting point in the case of data drawn randomly from arbitrary subspaces. This is a giant step forward on a line of questioning that has been open for more than 20 years. Great work, Peng!
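
For the curious, here is a rough sketch of the alternating scheme KSS uses. This is my own simplified illustration with a random initialization, not Peng’s code or his analyzed spectral initialization.

```python
import numpy as np

def kss(X, K, dim, n_iters=50, seed=0):
    """K-Subspaces: X is (n_samples, ambient_dim); returns orthonormal bases and labels."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    # random orthonormal initialization (the analyzed algorithm uses a spectral init)
    bases = [np.linalg.qr(rng.standard_normal((D, dim)))[0] for _ in range(K)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iters):
        # assignment step: distance from each point to its projection onto each subspace
        resid = np.stack(
            [np.linalg.norm(X - (X @ U) @ U.T, axis=1) for U in bases], axis=1
        )
        labels = resid.argmin(axis=1)
        # update step: each subspace becomes the top principal components of its cluster
        for k in range(K):
            Xk = X[labels == k]
            if Xk.shape[0] >= dim:
                _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
                bases[k] = Vt[:dim].T
    return bases, labels

# toy data: points drawn from two random 2-dimensional subspaces of R^10
rng = np.random.default_rng(1)
U1, U2 = (np.linalg.qr(rng.standard_normal((10, 2)))[0] for _ in range(2))
X = np.vstack([rng.standard_normal((100, 2)) @ U1.T,
               rng.standard_normal((100, 2)) @ U2.T])
bases, labels = kss(X, K=2, dim=2)
```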

Code for Subspace Tracking with Missing Data

Code for my Proceedings of the IEEE paper with Yuejie Chi and Yue Lu, “Streaming PCA and Subspace Tracking: The Missing Data Case,” can be found here.

Optimally Weighted Heteroscedastic PCA Code

We have updated our paper on optimally weighted heteroscedastic PCA, and here is the code to run the experiments. In this work we show how to weight data before solving PCA under a spiked covariance model with heteroscedastic additive noise. Surprisingly, the optimal weights are neither inverse noise variance nor 0/1 weights that discard the noisiest points; instead, they lie in between these two standard heuristics.
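
To make the weighting idea concrete, here is a small illustration (my own sketch, not the paper’s code or its optimal weight formula) of applying per-sample weights before PCA, using the two standard heuristics mentioned above as examples.

```python
import numpy as np

def weighted_pca(X, w, rank):
    """X: (n, d) zero-mean samples, w: (n,) nonnegative per-sample weights."""
    C = (X * w[:, None]).T @ X / w.sum()     # weighted sample covariance
    evals, evecs = np.linalg.eigh(C)
    return evecs[:, -rank:]                  # top-`rank` principal components

rng = np.random.default_rng(0)
d, rank, n_per_group = 50, 2, 200
U = np.linalg.qr(rng.standard_normal((d, rank)))[0]             # planted components
noise_var = np.r_[np.full(n_per_group, 0.1), np.full(n_per_group, 4.0)]
X = rng.standard_normal((2 * n_per_group, rank)) @ (3.0 * U).T
X = X + np.sqrt(noise_var)[:, None] * rng.standard_normal((2 * n_per_group, d))

weights = {
    "inverse variance": 1.0 / noise_var,                   # heuristic 1
    "0/1 discard noisy": (noise_var < 1.0).astype(float),  # heuristic 2
}
for name, w in weights.items():
    Uhat = weighted_pca(X, w, rank)
    err = np.linalg.norm(U @ U.T - Uhat @ Uhat.T)
    print(f"{name}: subspace estimation error {err:.3f}")
```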

DoE funding for sketching algorithms and theory

Hessam Mahdavifar and I have been awarded funds from the Department of Energy to study sketching in the context of non-real-valued data. Randomized sketching and subsampling algorithms are revolutionizing the data processing pipeline by allowing significant compression of redundant information. However, current research assumes input data are real-valued, while many sensing, storage, and computation modalities in scientific and technological applications are best modeled mathematically as other types of data, including discrete-valued data and ordinal or categorical data, among others. You can read about the project here and read a Q&A here that was highlighted on the DoE Office of Science website. We are excited about the opportunity to expand in this new direction!
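
As a generic illustration of the kind of compression sketching provides (a textbook-style example with real-valued data, not code from this project), a random Gaussian sketch can shrink a tall data matrix dramatically while preserving its dominant row space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r, s = 10000, 50, 5, 200                 # keep only s << n sketched rows
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))   # redundant low-rank data

S = rng.standard_normal((s, n)) / np.sqrt(s)   # Gaussian sketching operator
SA = S @ A                                     # compressed: s x d instead of n x d

# the dominant right singular subspace is essentially unchanged by the sketch
_, _, Vt = np.linalg.svd(A, full_matrices=False)
_, _, Vt_sk = np.linalg.svd(SA, full_matrices=False)
gap = np.linalg.norm(Vt[:r].T @ Vt[:r] - Vt_sk[:r].T @ Vt_sk[:r])
print("difference between subspaces before/after sketching:", gap)
```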

HePPCAT in TSP

Our work on heteroscedastic PCA continues with our article “HePPCAT: Probabilistic PCA for Data with Heteroscedastic Noise,” published in IEEE Transactions on Signal Processing. In this paper we developed novel ascent algorithms to maximize the heteroscedastic PCA likelihood, simultaneously estimating the principal components and the heteroscedastic noise variances. We show a compelling application to air quality data, where it is common to have data both from high-quality EPA instruments and from consumer-grade sensors. Code for the paper experiments is available at https://gitlab.com/heppcat-group, and the HePPCAT method is available as a registered Julia package. Congratulations to my student Kyle Gilman, former student David Hong, and colleague Jeff Fessler.
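
Here is a small simulation (my own sketch, not the HePPCAT package itself) of the kind of data the model describes: all samples share the same low-rank factors, but the additive noise variance depends on which group of sensors a sample comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 30, 3
F = rng.standard_normal((d, k))                        # shared low-rank factor matrix
groups = {"epa": (100, 0.05), "consumer": (500, 2.0)}  # (num samples, noise variance)

Y = np.vstack([
    rng.standard_normal((n, k)) @ F.T + np.sqrt(v) * rng.standard_normal((n, d))
    for n, v in groups.values()
])
# HePPCAT jointly estimates F and the per-group noise variances by ascending the
# likelihood of this model, rather than treating every row of Y as equally noisy.
```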

Congratulations Dr. Bower!

Last fall, my PhD student Amanda Bower defended her thesis, titled “Dealing with Intransitivity, Non-Convexity, and Algorithmic Bias in Preference Learning.” Amanda was in the Applied Interdisciplinary Math program, co-advised by Martin Strauss. She will now be moving on to work with Twitter’s ML Ethics, Transparency, and Accountability (META) group. We are so proud that she is going to make her mark on the world. Congratulations Dr. Bower!

Online matrix factorization for Markovian data

Hanbaek Lyu, Deanna Needell, and I recently had a manuscript published in JMLR: “Online matrix factorization for Markovian data and applications to Network Dictionary Learning.” In this work we show that the well-known OMF algorithm, designed for an i.i.d. stream of data, converges almost surely to the set of critical points of the expected loss function even when the data stream is dependent but Markovian. It would be of great interest to show that this algorithm further converges to global minimizers, as has recently been proven for many batch-processing algorithms. We are excited about this important step, which generalizes the theory to the more practical case where the data aren’t i.i.d. Han’s work applying this to network sampling is super cool; in fact, it is impossible to sample a sparse network in an i.i.d. way, so this extension is critical for that application. The code is available here. Han is on the academic job market this year.
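
For a sense of how OMF processes a dependent stream, here is a simplified sketch (my own illustration, using ridge-regularized codes rather than the sparse coding and constraints in the paper): each incoming sample updates running sufficient statistics, and the dictionary is refit from those statistics, regardless of whether the stream is i.i.d. or Markovian.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lam = 20, 5, 0.1
D = rng.standard_normal((d, r))           # dictionary / factor matrix
A = lam * np.eye(r)                       # running sum of code outer products
B = np.zeros((d, r))                      # running sum of data-code outer products

def markovian_stream(T=2000):
    """Toy dependent stream: samples from a slowly drifting low-rank generator."""
    W = rng.standard_normal((d, r))
    for _ in range(T):
        W = W + 0.01 * rng.standard_normal((d, r))   # Markovian drift
        yield W @ rng.standard_normal(r)

for x in markovian_stream():
    # code for the new sample under the current dictionary (ridge least squares)
    h = np.linalg.solve(D.T @ D + lam * np.eye(r), D.T @ x)
    A += np.outer(h, h)
    B += np.outer(x, h)
    # dictionary update: minimize the aggregated surrogate loss, D = B A^{-1}
    D = np.linalg.solve(A, B.T).T
```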