Our ICML paper “Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation” shows that it is possible to get the benefits of deep overparameterization without increasing the number of trainable parameters. The code for our experiments can be found at Can’s GitHub site: https://github.com/cjyaras/deep-lora-transformers. We welcome any thoughts and questions if you use or adapt our code for your problem!
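As a rough illustration of the idea (a made-up toy sketch, not the code in Can’s repository), here is how a low-rank adapter can be overparameterized with extra factors while its effective update stays rank r, so the update can always be compressed back to a small number of parameters. All sizes and the factor count below are arbitrary placeholders.

```python
import numpy as np

# Toy illustration: a rank-r update to a frozen weight matrix W can be written as a
# deep product of factors. The extra depth changes the optimization dynamics, but the
# product still has rank <= r, so it compresses to roughly r * (d_in + d_out) numbers.
rng = np.random.default_rng(0)
d_out, d_in, r, depth = 64, 32, 4, 3           # hypothetical sizes

W = rng.standard_normal((d_out, d_in))         # frozen pretrained weight

# Deep overparameterized low-rank factors: d_out x r, then r x r blocks, then r x d_in.
factors = [rng.standard_normal((d_out, r))]
factors += [rng.standard_normal((r, r)) for _ in range(depth - 2)]
factors += [rng.standard_normal((r, d_in))]

delta = np.linalg.multi_dot(factors)           # the effective update
print(np.linalg.matrix_rank(delta))            # <= r, here 4, regardless of depth
print((W + delta).shape)                       # adapted layer weight
```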
Code for Subspace Tracking with Missing Data
Code for my Proceedings of the IEEE paper with Yuejie Chi and Yue Lu, “Streaming PCA and Subspace Tracking: The Missing Data Case,” can be found here.
Optimally Weighted Heteroscedastic PCA Code
We have updated our paper on optimally weighted heteroscedastic PCA, and here is the code to run the experiments. In this work we show how to optimally weight data before performing PCA, under a spiked covariance model with heteroscedastic additive noise. Surprisingly, the optimal weights are not inverse noise variance weights, nor are they 0/1 weights that discard the noisiest points; instead, they lie between these two standard heuristics.
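For intuition, here is a minimal sketch of weighting samples before PCA. The weights below are plain inverse noise variance, used only as a placeholder; the paper derives the optimal weights, which fall between inverse-variance and 0/1 weighting. The dimensions and variances are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 50, 2                                        # ambient dimension, number of components
n_clean, n_noisy = 200, 200
U = np.linalg.qr(rng.standard_normal((d, k)))[0]    # planted principal subspace

def samples(n, noise_var):
    z = rng.standard_normal((k, n))
    return U @ z + np.sqrt(noise_var) * rng.standard_normal((d, n))

X = np.hstack([samples(n_clean, 0.1), samples(n_noisy, 4.0)])   # columns are samples
noise_var = np.array([0.1] * n_clean + [4.0] * n_noisy)

# Placeholder weights (inverse noise variance); the paper's optimal weights differ.
w = 1.0 / noise_var

# Weighted PCA: scale column i by sqrt(w_i), then take the top singular vectors.
Xw = X * np.sqrt(w)
U_hat = np.linalg.svd(Xw, full_matrices=False)[0][:, :k]
print(np.linalg.norm(U_hat @ U_hat.T - U @ U.T))    # subspace estimation error
```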
HePPCAT in TSP
Our work on heteroscedastic PCA continues with our article “HePPCAT: Probabilistic PCA for Data with Heteroscedastic Noise,” published in IEEE Transactions on Signal Processing. In this paper we developed novel ascent algorithms that maximize the heteroscedastic PCA likelihood, simultaneously estimating the principal components and the heteroscedastic noise variances. We show a compelling application to air quality data, where it is common to have data both from high-quality EPA instruments and from consumer-grade sensors. Code for the paper experiments is available at https://gitlab.com/heppcat-group, and the HePPCAT method is available as a registered Julia package. Congratulations to my student Kyle Gilman, former student David Hong, and colleague Jeff Fessler.
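If you want a feel for the setting, here is a tiny made-up sketch (in Python, not the Julia package) of the kind of data HePPCAT models: a shared factor model with different noise variances in different sensor groups. The group sizes and variances below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 30, 3                                   # ambient dimension, number of factors
F = rng.standard_normal((d, k))                # planted factor matrix

# Two sensor groups with different (unknown, to-be-estimated) noise variances,
# e.g., high-quality EPA instruments vs. consumer-grade sensors.
groups = {"epa": (100, 0.05), "consumer": (500, 1.5)}   # (num samples, noise variance)

data = {}
for name, (n, v) in groups.items():
    Z = rng.standard_normal((k, n))            # latent factor scores
    data[name] = F @ Z + np.sqrt(v) * rng.standard_normal((d, n))

# HePPCAT jointly estimates F and the per-group variances by ascending the likelihood
# of this model; here we only simulate data from it.
print({name: X.shape for name, X in data.items()})
```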
Online matrix factorization for Markovian data
Hanbaek Lyu, Deanna Needell, and I recently had a manuscript published at JMLR: “Online matrix factorization for Markovian data and applications to Network Dictionary Learning.” In this work we show that the well-known OMF algorithm for i.i.d. data streams converges almost surely to the set of critical points of the expected loss function, even when the data stream is dependent but Markovian. It would be of great interest to show that this algorithm further converges to global minimizers, as has recently been proven for many batch-processing algorithms. We are excited about this important step, which generalizes the theory to the more practical case where the data aren’t i.i.d. Han’s work applying this to network sampling is super cool; in fact, it is impossible to sample a sparse network in an i.i.d. way, so this extension is critical for that application. The code is available here. Han is on the academic job market this year.
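For readers unfamiliar with OMF, here is a bare-bones sketch of a generic online matrix factorization update in the spirit of the algorithm the paper analyzes: each incoming sample updates aggregated statistics and then the dictionary by block coordinate descent. The constraints (sparsity, nonnegativity) and the Markovian sampling are omitted here; this is only an illustration, not the code in the repository.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, T = 20, 5, 1000                          # data dimension, factorization rank, stream length

D = rng.standard_normal((d, r))                # dictionary (factor) estimate
D /= np.linalg.norm(D, axis=0)
A = 1e-3 * np.eye(r)                           # aggregated code statistics, sum of h h^T
B = np.zeros((d, r))                           # aggregated data statistics, sum of x h^T

for t in range(T):
    x = rng.standard_normal(d)                 # stand-in for one (possibly Markovian) sample
    h = np.linalg.lstsq(D, x, rcond=None)[0]   # code for the current sample
    A += np.outer(h, h)
    B += np.outer(x, h)
    for j in range(r):                         # block-coordinate dictionary update
        D[:, j] += (B[:, j] - D @ A[:, j]) / A[j, j]
        D[:, j] /= max(np.linalg.norm(D[:, j]), 1.0)   # keep column norms at most 1

print(np.linalg.norm(x - D @ h))               # reconstruction error of the last sample
```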
Online Tensor Completion and Tracking
Kyle Gilman and I have a preprint out describing a new algorithm for online tensor completion and tracking. We derive and demonstrate an algorithm that operates on streaming tensor data, such as hyperspectral video collected over time or chemo-sensing experiments in space and time. Kyle presented his work at the first fully virtual ICASSP, which you can view here. Anyone can register for free for this year’s virtual ICASSP and watch the videos, post questions, and join the discussion. Kyle’s code is also available here. We think this algorithm will have a major impact in speeding up low-rank tensor processing, especially with time-varying data, and we welcome questions and feedback.
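To give a flavor of the kind of per-time-step computation a streaming tracker performs (this is a toy sketch, not Kyle’s algorithm), here is how the temporal weights of one new, partially observed slice can be fit by least squares given current estimates of the other CP factors. The dimensions, rank, and sampling rate below are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
I, J, r = 40, 30, 3                            # spatial dimensions and CP rank (made up)

A = rng.standard_normal((I, r))                # mode-1 factor (assumed current estimate)
B = rng.standard_normal((J, r))                # mode-2 factor (assumed current estimate)
c_true = rng.standard_normal(r)                # true temporal weights for the new slice

slice_full = (A * c_true) @ B.T                # new frontal slice, A diag(c) B^T
mask = rng.random((I, J)) < 0.3                # only 30% of entries observed

# Least-squares fit of the temporal weights from the observed entries only:
# row (i, j) of M is the elementwise product a_i * b_j, so M c approximates vec(slice).
M = (A[:, None, :] * B[None, :, :]).reshape(I * J, r)
obs = mask.ravel()
c_hat = np.linalg.lstsq(M[obs], slice_full.ravel()[obs], rcond=None)[0]

print(np.linalg.norm(c_hat - c_true))          # small: the slice weights are recovered
```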
Improving K-Subspaces via Coherence Pursuit
John Lipor, Andrew Gitlin, Biaoshuai Tao, and I have a new paper, “Improving K-Subspaces via Coherence Pursuit,” to be published in the IEEE Journal of Selected Topics in Signal Processing special issue “Robust Subspace Learning and Tracking: Theory, Algorithms, and Applications.” In it we present a new subspace clustering algorithm, Coherence Pursuit K-Subspaces (CoP-KSS). Here is the code for CoP-KSS and for our figures. Our paper considers specifically the PCA step in K-Subspaces, where a best-fit subspace estimate is determined from a (possibly incorrect) clustering. When a given cluster has points from multiple low-rank subspaces, PCA is not a robust approach. We replace that step with Coherence Pursuit, a new algorithm for Robust PCA. We prove that Coherence Pursuit can indeed recover the “majority” subspace when data from other low-rank subspaces contaminate the cluster. In this paper we also prove, to the best of our knowledge for the first time, that the K-Subspaces problem is NP-hard, and indeed even NP-hard to approximate within any finite factor for large enough subspace rank.
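To give a feel for the Coherence Pursuit step (a toy sketch, not our CoP-KSS implementation): points from the majority subspace tend to be highly coherent with the rest of the cluster, so each point is scored by its coherence with the others, the most coherent points are kept, and the subspace is fit to those by PCA. The sizes and the number of retained points below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
d, r = 50, 3
U_major = np.linalg.qr(rng.standard_normal((d, r)))[0]   # "majority" subspace
U_other = np.linalg.qr(rng.standard_normal((d, r)))[0]   # contaminating subspace

X = np.hstack([U_major @ rng.standard_normal((r, 80)),   # 80 points from the majority subspace
               U_other @ rng.standard_normal((r, 20))])  # 20 contaminating points

# Coherence Pursuit idea: normalize columns, score each point by its total coherence
# with all other points, keep the most coherent points, then fit a subspace by PCA.
Xn = X / np.linalg.norm(X, axis=0)
G = Xn.T @ Xn
np.fill_diagonal(G, 0.0)
coherence = np.linalg.norm(G, axis=1)          # per-point coherence score
keep = np.argsort(coherence)[-50:]             # keep the 50 most coherent points (made-up count)

U_hat = np.linalg.svd(X[:, keep], full_matrices=False)[0][:, :r]
print(np.linalg.norm(U_hat @ U_hat.T - U_major @ U_major.T))   # close to the majority subspace
```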
Group OWL Regularization for Deep Nets
My student Dejiao Zhang’s code for our paper “Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Nets” can be found at this link. We demonstrated that regularizing the weights in a deep network using the Group OWL norm allows for simultaneous enforcement of sparsity (meaning unimportant weights are eliminated) and parameter tying (meaning co-adapted or highly correlated weights are tied together). This is an exciting technique for learning compressed deep net architectures from data.
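For reference, the Group OWL penalty simply sorts the group norms in decreasing order and takes a weighted sum against a non-increasing weight sequence. Here is a tiny sketch, treating rows of a layer’s weight matrix as the groups; the weight sequence below is an arbitrary placeholder, not the one from our experiments.

```python
import numpy as np

def group_owl_penalty(W, lam):
    """Group OWL: sort group (row) norms in decreasing order and take a weighted
    sum against a non-increasing weight sequence lam."""
    norms = np.linalg.norm(W, axis=1)            # one group per row
    norms_sorted = np.sort(norms)[::-1]          # largest group norm first
    return float(np.dot(lam, norms_sorted))

rng = np.random.default_rng(6)
W = rng.standard_normal((8, 16))                 # hypothetical layer weights

# Made-up non-increasing weights: large leading entries penalize the biggest groups
# (encouraging tying), and the tail pushes the smallest groups toward zero (sparsity).
lam = np.linspace(1.0, 0.1, W.shape[0])
print(group_owl_penalty(W, lam))
```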
Monotonic Matrix Completion
Ravi Ganti, Rebecca Willett, and I had a paper at NIPS 2015 called “Matrix Completion under Monotonic Single Index Models.” We studied a matrix completion problem where a low-rank matrix is observed through a monotonic function applied to each entry. We developed a calibrated loss function that allowed a neat implementation and analysis. The code is now available for public use at this bitbucket link.
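Concretely, the observation model looks like the following toy sketch. The link function and sizes here are placeholders; in the paper the monotonic link is unknown and is handled through the calibrated loss.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, r = 100, 80, 4

# Low-rank parameter matrix.
X_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, m)) / np.sqrt(r)

g = np.tanh                                     # stand-in monotonic link (unknown in the paper)
Y = g(X_star)                                   # entrywise transformed matrix

mask = rng.random((n, m)) < 0.4                 # observe roughly 40% of entries
Y_obs = np.where(mask, Y, np.nan)

# Note that Y is generally high rank even though X_star has rank 4.
print(np.linalg.matrix_rank(X_star), np.linalg.matrix_rank(Y))
```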
Variety Matrix Completion code
The code for our matrix completion algorithm from the ICML paper “Algebraic Variety Models for High-rank Matrix Completion” can be found here in Greg Ongie’s GitHub repository. Using an algebraic variety as a low-dimensional model for data, Greg’s algorithm is a kernel method that performs matrix completion in the feature space associated with a polynomial kernel. It allows us to complete a matrix even when it does not have low linear rank, but instead has low dimension in the form of a nonlinear variety. Start with the README for example scripts.
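Here is a toy illustration of why the polynomial feature space helps (not Greg’s algorithm, just the underlying idea): points on a simple variety, here the unit circle, form a full-rank data matrix, yet their degree-2 monomial features are rank deficient, and that low rank in feature space is the structure the completion algorithm exploits.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
X = np.vstack([np.cos(theta), np.sin(theta)])        # 2 x n points on the unit circle

# Degree-2 monomial feature map: [1, x, y, x^2, x*y, y^2].
x, y = X
Phi = np.vstack([np.ones(n), x, y, x**2, x * y, y**2])

print(np.linalg.matrix_rank(X))     # 2: the data matrix itself has full (linear) rank
print(np.linalg.matrix_rank(Phi))   # 5 < 6: the relation x^2 + y^2 = 1 creates a rank deficit
```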