Optimally Weighted PCA for High-dimensional Heteroscedastic Data

Today I had the opportunity to speak about very recent results by my student David Hong (joint work also with Jeff Fessler) analyzing asymptotic recovery guarantees for weighted PCA on high-dimensional heteroscedastic data. In the paper we recently posted online, we give an asymptotic analysis of the recovery of weighted PCA components, amplitudes, and scores, as both the number of samples and the ambient dimension grow to infinity with their ratio converging to a fixed constant. These recovery expressions allow us to find weights that give optimal recovery, and the optimal weights turn out to be a very simple expression involving only the noise variances and the PCA amplitudes. To learn more, watch my talk here, and let us know if you have any questions!
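As a rough illustration of the idea (my own toy sketch, not the paper's derivation), weighted PCA just scales each sample by a weight before the SVD. The inverse-noise-variance weights below are an illustrative choice, not necessarily the optimal weights derived in the paper:

```python
import numpy as np

def weighted_pca(X, w, k):
    """Rank-k weighted PCA: scale each sample (column of X) by its
    weight before taking the SVD of the weighted data matrix."""
    U, s, _ = np.linalg.svd(X * w, full_matrices=False)  # w broadcasts over columns
    return U[:, :k], s[:k]

# Synthetic heteroscedastic data: planted rank-2 subspace, two noise levels.
rng = np.random.default_rng(0)
d, n, k = 50, 200, 2
U_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
noise_var = np.where(np.arange(n) < n // 2, 0.1, 2.0)
X = 3.0 * U_true @ rng.standard_normal((k, n)) \
    + rng.standard_normal((d, n)) * np.sqrt(noise_var)

# Illustrative inverse-variance weights (the paper derives the optimal ones).
U_hat, _ = weighted_pca(X, 1.0 / noise_var, k)
```

Here the cleaner half of the samples gets upweighted, so the recovered basis leans on the high-quality data.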


AFOSR Young Investigator

I have great news: my AFOSR Young Investigator proposal was accepted for funding. The proposal focuses on time-varying low-rank factorization models and on solving a variety of related non-convex problem formulations. Read more about it here. I look forward to the contributions we will be able to make with the support of AFOSR.

 

Ensemble K-Subspaces

Yesterday I gave a talk on Subspace Clustering using Ensemble methods at the Simons Institute. See the video here!

This is work with John Lipor, David Hong, and Yan Shuo Tan. Our related paper has just been updated on the arXiv. Our key observation was that, while K-Subspaces (KSS) depends heavily on initialization and often performs poorly on its own, it still seems to give partially correct clustering information. We therefore use it as a “weak clusterer” and combine ensembles of KSS runs (EKSS) by averaging their co-association/affinity matrices. This works extremely well, both in simulation and on real data, and also in theory. We were able to show that EKSS gives correct clustering in a variety of common cases: e.g. for subspaces with bounded affinity, and with noisy data and missing data. Our theory generalizes that of the Thresholded Subspace Clustering algorithm to show that any algorithm producing an affinity matrix that approximates a monotonic function of the absolute inner products will give correct clustering. This general theory should be broadly applicable to many geometric approaches to subspace clustering.
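The ensemble idea fits in a few lines of numpy. This is a simplified toy version under my own conventions (the base KSS clusterer, parameter choices, and function names here are illustrative, not the implementation from the paper):

```python
import numpy as np

def kss(X, K, r, n_iters=10, seed=None):
    """One 'weak' run of K-Subspaces: random initialization, then
    alternate cluster assignment and per-cluster PCA fits."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    labels = rng.integers(K, size=n)
    for _ in range(n_iters):
        bases = []
        for k in range(K):
            cols = X[:, labels == k]
            if cols.shape[1] < r:  # degenerate cluster: reseed from random points
                cols = X[:, rng.choice(n, size=r, replace=False)]
            U = np.linalg.svd(cols, full_matrices=False)[0]
            bases.append(U[:, :r])
        # assign each point to the subspace capturing the most energy
        energy = np.stack([np.linalg.norm(U.T @ X, axis=0) for U in bases])
        labels = energy.argmax(axis=0)
    return labels

def ekss_affinity(X, K, r, B=20):
    """Average the co-association matrices of B weak KSS runs."""
    n = X.shape[1]
    A = np.zeros((n, n))
    for b in range(B):
        labels = kss(X, K, r, seed=b)
        A += labels[:, None] == labels[None, :]
    return A / B

# Two planted 2-dimensional subspaces, 40 points each (noiseless toy data).
rng = np.random.default_rng(0)
d, r, K, m = 20, 2, 2, 40
U1 = np.linalg.qr(rng.standard_normal((d, r)))[0]
U2 = np.linalg.qr(rng.standard_normal((d, r)))[0]
X = np.hstack([U1 @ rng.standard_normal((r, m)),
               U2 @ rng.standard_normal((r, m))])
A = ekss_affinity(X, K, r)
```

Even when individual runs mis-cluster some points, the averaged affinity concentrates on the true within-cluster pairs, which a standard spectral clustering step can then exploit.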

Improving K-Subspaces via Coherence Pursuit

John Lipor, Andrew Gitlin, Biaoshuai Tao, and I have a new paper, “Improving K-Subspaces via Coherence Pursuit,” to be published in the IEEE Journal of Selected Topics in Signal Processing special issue “Robust Subspace Learning and Tracking: Theory, Algorithms, and Applications.” In it we present a new subspace clustering algorithm, Coherence Pursuit – K-Subspaces (CoP-KSS). Here is the code for CoP-KSS and for our figures. Our paper considers specifically the PCA step in K-Subspaces, where a best-fit subspace estimate is determined from a (possibly incorrect) clustering. When a given cluster contains points from multiple low-rank subspaces, PCA is not a robust approach. We replace that step with Coherence Pursuit, a new algorithm for Robust PCA. We prove that Coherence Pursuit can indeed recover the “majority” subspace when data from other low-rank subspaces contaminate the cluster. In this paper we also prove — to the best of our knowledge, for the first time — that the K-Subspaces problem is NP-hard, and indeed even NP-hard to approximate within any finite factor for large enough subspace rank.
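To give the flavor of the Coherence Pursuit step (a toy sketch under my own simplifications, not the CoP-KSS code linked above): each point in a possibly contaminated cluster is scored by its total coherence with the other points, and the subspace is fit only to the most coherent ones.

```python
import numpy as np

def cop_subspace(X, r, keep_frac=0.5):
    """Fit a rank-r subspace to the most 'coherent' columns of X,
    so points from a contaminating subspace are ignored."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)  # normalize columns
    G = Xn.T @ Xn
    np.fill_diagonal(G, 0.0)             # drop self-coherence
    scores = np.linalg.norm(G, axis=0)   # mutual coherence of each point
    n_keep = max(r, int(keep_frac * X.shape[1]))
    keep = np.argsort(scores)[-n_keep:]  # majority points cohere the most
    U = np.linalg.svd(X[:, keep], full_matrices=False)[0]
    return U[:, :r]

# A cluster with 60 'majority' points from one subspace and 15 contaminants.
rng = np.random.default_rng(0)
d, r = 20, 2
U_maj = np.linalg.qr(rng.standard_normal((d, r)))[0]
U_con = np.linalg.qr(rng.standard_normal((d, r)))[0]
X = np.hstack([U_maj @ rng.standard_normal((r, 60)),
               U_con @ rng.standard_normal((r, 15))])
U_hat = cop_subspace(X, r)
```

The intuition: majority points have many high-coherence neighbors in their own subspace, while contaminants have few, so ranking by coherence separates them without ever solving a robust optimization problem.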



 

Streaming PCA Review Article

The Proceedings of the IEEE posted our review article today on Streaming PCA and Subspace Tracking with Missing Data. It was a great experience to work with Yuejie Chi and Yue Lu on this survey. You can also find a less pretty version on the arXiv.


New paper in Journal of Multivariate Analysis

Congratulations to my student David Hong (and his co-advisor Jeff Fessler) on our published article in the Journal of Multivariate Analysis, titled “Asymptotic performance of PCA for high-dimensional heteroscedastic data.” Heteroscedastic data, where different data points have differing quality (precisely, different noise variances), are common in many interesting big data problems; sensor network data, medical imaging using historical data, and astronomical imaging are just a few examples. PCA is known to give the maximum likelihood estimate for data with additive Gaussian noise of a single variance across all data points. This work investigates the performance of PCA when that homoscedastic noise assumption is violated. We give precise predictions for the recovery of subspaces and singular values in a spiked/planted model, and show that vanilla PCA (perhaps unsurprisingly) has suboptimal subspace recovery when the data are heteroscedastic.
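A quick way to see the effect in simulation (my own toy experiment, not the paper's asymptotic analysis): hold the average noise variance fixed, split it unevenly across samples, and compare vanilla PCA's recovery of a planted component.

```python
import numpy as np

def pca_recovery(noise_vars, d=100, n=400, theta=1.5, seed=0):
    """Recovery |<u_hat, u>|^2 of a rank-1 planted component under
    sample-wise noise variances drawn cyclically from noise_vars."""
    rng = np.random.default_rng(seed)
    u = np.eye(d)[:, 0]                    # planted component
    sig = np.sqrt(np.resize(noise_vars, n))
    X = theta * np.outer(u, rng.standard_normal(n)) \
        + rng.standard_normal((d, n)) * sig
    u_hat = np.linalg.svd(X, full_matrices=False)[0][:, 0]
    return float((u @ u_hat) ** 2)

# Same average noise variance (1.0), even vs. very uneven splits,
# averaged over several trials to smooth out finite-sample noise.
homo = np.mean([pca_recovery([1.0, 1.0], seed=s) for s in range(20)])
hetero = np.mean([pca_recovery([0.01, 1.99], seed=s) for s in range(20)])
```

In this kind of experiment the homoscedastic split tends to give the best recovery for unweighted PCA, matching the paper's finding that heteroscedasticity degrades vanilla PCA at a fixed average noise level.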


Group OWL Regularization for Deep Nets

My student Dejiao Zhang’s code for our paper Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Nets can be found at this link. We demonstrated that regularizing the weights in a deep network using the Group OWL norm allows for simultaneous enforcement of sparsity (meaning unimportant weights are eliminated) and parameter tying (meaning co-adapted or highly correlated weights are tied together). This is an exciting technique for learning compressed deep net architectures from data.
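For reference, the Group OWL penalty itself is easy to state: take the l2 norms of the groups (here, rows of a layer's weight matrix, an illustrative convention), sort them in decreasing order, and weight them by a nonincreasing sequence. A minimal sketch:

```python
import numpy as np

def group_owl(W, lam):
    """Group OWL penalty: row norms of W sorted in decreasing order,
    weighted elementwise by the nonincreasing sequence lam."""
    norms = np.sort(np.linalg.norm(W, axis=1))[::-1]
    return float(norms @ lam)

W = np.array([[3.0, 4.0],    # row norms: 5, 0, 1
              [0.0, 0.0],
              [1.0, 0.0]])
lam = np.array([2.0, 1.0, 0.5])  # largest norm gets the largest weight
penalty = group_owl(W, lam)      # 2*5 + 1*1 + 0.5*0 = 11
```

The decreasing weights are what drive both effects at once: small group norms are pushed to zero (sparsification), while similar group norms are pulled toward equality (tying).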


3M Non-Tenured Faculty Award

I am honored to have received 3M’s Non-Tenured Faculty Award, which recognizes outstanding junior faculty nominated by 3M researchers on the basis of their demonstrated record of research, experience, and academic leadership. I look forward to working with 3M Machine Learning researchers to advance data science research!

Publications Update

We have had many exciting publications in the last several months.

My student Dejiao Zhang and I worked with Mario Figueiredo and two other Michigan students on applying OWL regularization in deep networks. The intuition is that since OWL can tie correlated regressors, it should be able to do the same in deep nets, which experience a high degree of co-adaptation (and correlation) of nodes in the network. Dejiao presented our paper Learning to Share: Simultaneous Parameter Tying and Sparsification for Deep Learning at ICLR last month, and we will present Simultaneous Sparsity and Parameter Tying for Deep Learning using Ordered Weighted L1 Regularization at SSP next month.

My colleague Johanna Mathieu, her student Greg Ledva, and I published a paper in IEEE Transactions on Power Systems studying Real-Time Energy Disaggregation of a Distribution Feeder’s Demand Using Online Learning. The work leverages recent results in dynamic online learning, where classes of dynamical models are used to apply online learning in the time-varying signal setting. It can leverage existing sensing infrastructure to improve prediction of distributed energy resources, demand-responsive electric loads, and residential solar generation. We also have a book chapter in Energy Markets and Responsive Grids, written with my student Zhe Du as well.

Greg Ongie, David Hong, Dejiao Zhang, and I have been working on adaptive sampling for subspace estimation. If you have a large matrix that is difficult to access in full, but want to compute a low-rank approximation of it, one approach is to sketch it: read only parts of the matrix and compute an approximation from those. Our paper Enhanced Online Subspace Estimation Via Adaptive Sensing describes an adaptive sampling scheme that does exactly this, and using that scheme together with the GROUSE subspace estimation algorithm, we proved global convergence guarantees to the true underlying low-rank matrix. We will also present Online Estimation of Coherent Subspaces with Adaptive Sampling at SSP next month, which constrains the adaptive samples to be entry-wise and sees similar improvements.
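As background, a single GROUSE-style update is quite compact. This is a simplified fully observed, noiseless version with a greedy step size (the papers above handle missing or adaptively sensed entries, which this sketch does not implement):

```python
import numpy as np

def grouse_step(U, v):
    """One greedy GROUSE-style update: rotate the orthonormal basis U
    (d x r) toward a fully observed vector v along a geodesic."""
    w = U.T @ v                 # least-squares weights
    p = U @ w                   # projection onto current subspace
    res = v - p                 # residual, orthogonal to span(U)
    if np.linalg.norm(res) < 1e-12 or np.linalg.norm(w) < 1e-12:
        return U                # v already in the subspace (or orthogonal)
    theta = np.arctan(np.linalg.norm(res) / np.linalg.norm(p))  # greedy step
    step = ((np.cos(theta) - 1) * p / np.linalg.norm(p)
            + np.sin(theta) * res / np.linalg.norm(res))
    return U + np.outer(step, w / np.linalg.norm(w))

# Stream noiseless vectors from a planted subspace and track it.
rng = np.random.default_rng(0)
d, r = 30, 3
U_true = np.linalg.qr(rng.standard_normal((d, r)))[0]
U = np.linalg.qr(rng.standard_normal((d, r)))[0]
init_err = np.linalg.norm(U @ U.T - U_true @ U_true.T)
for _ in range(300):
    U = grouse_step(U, U_true @ rng.standard_normal(r))
final_err = np.linalg.norm(U @ U.T - U_true @ U_true.T)
```

The geodesic form of the update keeps U exactly orthonormal at every step, which is what makes it a Grassmannian method rather than an ordinary gradient step.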

Rounding it out, Zhe Du will present our work with Necmiye Ozay on A Robust Algorithm for Online Switched System Identification at SYSID in July, and Bob Malinas and David Hong will present our work with Jeff Fessler on Learning Dictionary-Based Unions of Subspaces for Image Denoising at EUSIPCO in September. This spring Amanda Bower presented our work with Lalit Jain on The Landscape of Nonconvex Quadratic Feasibility, studying the minimizers of a non-convex formulation of the preference learning problem; and next week Naveen Murthy presents our work with Greg Ongie and Jeff Fessler on Memory-efficient Splitting Algorithms for Large-Scale Sparsity Regularized Optimization at the CT Meeting. Last fall Greg Ongie, Saket Dewangan, Jeff Fessler, and I had a paper, Online Dynamic MRI Reconstruction via Robust Subspace Tracking, at GlobalSIP, pursuing the interesting idea of online subspace tracking for time-varying signals.

So many exciting research directions that we will continue to pursue!

Monotonic Matrix Completion

Ravi Ganti, Rebecca Willett, and I had a paper at NIPS 2015 called “Matrix Completion under Monotonic Single Index Models.” We studied a matrix completion problem in which a low-rank matrix is observed through a monotonic function applied to each entry. We developed a calibrated loss function that allows a neat implementation and analysis. The code is now publicly available at this bitbucket link.
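The observation model can be sketched as follows (the link function, matrix sizes, and sampling rate here are illustrative choices, not from the paper): a low-rank parameter matrix is passed entrywise through a monotonic link and only partially observed.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 40, 30, 2
# Low-rank parameter matrix Theta = A @ B with rank r.
Theta = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
g = np.tanh                          # an illustrative monotonic link
mask = rng.random((d1, d2)) < 0.5    # observe roughly half the entries
Y = np.where(mask, g(Theta), np.nan) # observed matrix: g applied entrywise
```

Note that Y itself is generally not low rank; it is the latent Theta that is, which is why a calibrated loss tailored to the link is needed rather than vanilla matrix completion on Y.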