Code for my Proceedings of the IEEE paper with Yuejie Chi and Yue Lu, Streaming PCA and Subspace Tracking: The Missing Data Case, can be found here.
May 1, 2026 By Laura Balzano
Today I hooded Dr. Javier Salazar Cavazos, who defended his thesis in March and walked the stage at commencement. His dissertation is entitled “Learning Representations from Noisy Data and Brain Imaging: Subspace Modeling for Heteroscedastic Data and Deep Learning for Functional MRI in Alzheimer’s Disease,” and includes his paper on ALPCAH, an algorithm for heteroscedastic subspace learning. Next he’ll be joining KLA. Congratulations, Javier!
April 13, 2026 By Laura Balzano
I am excited to say that our SPM special issue, part 1, on the Mathematics of Deep Learning, has now been published on IEEE Xplore.
As we said in our guest editorial, “The aim of this special issue is to capture some of the salient points of contact between the SP and DL disciplines so that a mathematical picture of the key questions and challenges ahead begins to emerge.” The eight papers in this part of the issue all display these contact points beautifully: from sparsity to Kalman filtering to probability.
SPM is a venue that provides tutorial-like material on important signal processing topics. I hope you will read the issue and share the material with your junior graduate students.
December 8, 2025 By Laura Balzano
SPADA lab had two interesting works to share at NeurIPS this year. The first was MonarchAttention, which received a spotlight; thanks to everyone who stopped by the poster. See our earlier post for an example of how our method offers a zero-shot, drop-in replacement for softmax attention with significant savings in memory and computation and very little loss in accuracy. This technique has a University of Michigan patent pending.
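For readers unfamiliar with the structure, here is a minimal sketch of why a Monarch matrix is cheap to apply: it is a product of two block-diagonal factors interleaved with a stride permutation, so a matrix-vector product costs O(n^1.5) rather than O(n^2). This is my own illustrative NumPy sketch of the square case, not code from the paper; the step where MonarchAttention actually fits these factors to approximate the softmax attention matrix is omitted.

```python
import numpy as np
from scipy.linalg import block_diag

def monarch_apply(B1, B2, x):
    """Apply the Monarch matrix M = P B2 P B1 to x, where B1 and B2 each
    hold m diagonal blocks of size m x m (so n = m * m) and P is the
    stride permutation given by transposing the (m, m) reshape of x.
    Cost: two batches of m small matvecs, i.e. O(n^1.5) vs O(n^2) dense."""
    m = B1.shape[0]
    z = x.reshape(m, m)                 # split x into m blocks of length m
    z = np.einsum('bij,bj->bi', B1, z)  # block-diagonal multiply by B1
    z = z.T                             # stride permutation P
    z = np.einsum('bij,bj->bi', B2, z)  # block-diagonal multiply by B2
    return z.T.reshape(m * m)           # P again (P is its own inverse here)

# Sanity check against the equivalent dense matrix.
rng = np.random.default_rng(0)
m = 4
n = m * m
B1 = rng.standard_normal((m, m, m))
B2 = rng.standard_normal((m, m, m))
x = rng.standard_normal(n)

perm = np.arange(n).reshape(m, m).T.ravel()    # the stride permutation
P = np.eye(n)[perm]
M = P @ block_diag(*B2) @ P @ block_diag(*B1)  # dense Monarch matrix
assert np.allclose(M @ x, monarch_apply(B1, B2, x))
```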
The second work is on the topic of Out-of-Distribution In-Context Learning, which we presented at the What Can’t Transformers Do? Workshop. We analyze the solution obtained when training linear attention on in-context linear regression tasks whose regression vector is drawn either from a single subspace or from a union of subspaces, and then evaluate it on an out-of-distribution regression test task. In the union-of-subspaces case, the trained model generalizes at test time to regression vectors in the span of the subspaces.
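To make the setup concrete, here is a hypothetical sketch (my own notation, not the paper's code) of the data-generating process: each training task draws its regression vector from one of two random k-dimensional subspaces of R^d, while the out-of-distribution test task draws it from their 2k-dimensional span, which generically never occurs during training.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_ctx = 8, 2, 32   # ambient dim, subspace dim, in-context examples

# Orthonormal bases for two random k-dimensional subspaces of R^d.
U1 = np.linalg.qr(rng.standard_normal((d, k)))[0]
U2 = np.linalg.qr(rng.standard_normal((d, k)))[0]

def make_task(beta):
    """One in-context regression task: pairs (x_i, y_i = beta^T x_i)."""
    X = rng.standard_normal((n_ctx, d))
    return X, X @ beta

# Training distribution: beta lies in the UNION of the two subspaces,
# i.e. in the column space of U1 or of U2, chosen at random per task.
U = U1 if rng.random() < 0.5 else U2
beta_train = U @ rng.standard_normal(k)

# OOD test distribution: beta lies in the SPAN of the two subspaces.
beta_test = np.hstack([U1, U2]) @ rng.standard_normal(2 * k)

X_test, y_test = make_task(beta_test)

# The test vector is (generically) outside each individual subspace:
resid = beta_test - U1 @ (U1.T @ beta_test)
print(np.linalg.norm(resid) > 1e-6)   # True: not in subspace 1 alone
```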
Nice work by all the students: Can and Soo Min (both SPADA lab members), as well as our treasured collaborators Alec, Pierre, and Changwoo!