Code for Subspace Tracking with Missing Data

Code for my Proceedings of the IEEE paper with Yuejie Chi and Yue Lu, “Streaming PCA and Subspace Tracking: The Missing Data Case,” can be found here.
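
As a taste of the algorithms surveyed there, below is a minimal sketch of a GROUSE-style incremental subspace update from partially observed vectors. It is illustrative only: the step size and variable names are placeholders, and the released code is the reference.

```python
import numpy as np

def grouse_step(U, v_obs, omega, eta=0.5):
    """One GROUSE-style update of an orthonormal basis U (d x k) from a
    vector observed only on the index set omega (v_obs holds the observed
    values). Sketch only; see the paper and released code for details."""
    U_obs = U[omega, :]                      # rows of U at observed entries
    w, *_ = np.linalg.lstsq(U_obs, v_obs, rcond=None)  # LS fit on support
    p = U @ w                                # prediction of the full vector
    r = np.zeros(U.shape[0])                 # residual, zero off the support
    r[omega] = v_obs - U_obs @ w
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                             # vector already in span(U)
    t = eta * rn * pn                        # step along the geodesic
    return U + np.outer((np.cos(t) - 1) * p / pn + np.sin(t) * r / rn, w / wn)
```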

Optimally Weighted Heteroscedastic PCA Code

We have updated our paper on optimally weighted heteroscedastic PCA, and here is the code to run the experiments. In this work we show how to weight data before solving PCA under a spiked covariance model with heteroscedastic additive noise. Surprisingly, the optimal weights are neither inverse noise variance weights nor 0/1 weights that discard the noisiest points; they fall in between these two standard heuristics.
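
For concreteness, here is a minimal sketch of PCA on weighted data, with the per-sample weights left as an input; the optimal choice of weights is what the paper derives, and this sketch does not attempt to reproduce it.

```python
import numpy as np

def weighted_pca(X, w, k):
    """PCA of weighted data. X is d x n with one sample per column, w is a
    length-n vector of nonnegative per-sample weights, and k is the number
    of components. Scaling column i by sqrt(w_i) makes the left singular
    vectors those of the weighted covariance sum_i w_i x_i x_i^T."""
    Xw = X * np.sqrt(w)[None, :]
    U, _, _ = np.linalg.svd(Xw, full_matrices=False)
    return U[:, :k]
```

Inverse noise variance (w[i] = 1/sigma_i**2) and hard selection (w[i] in {0, 1}) are the two standard heuristics; the paper's optimal weights fall between them.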


HePPCAT in TSP

Our work on heteroscedastic PCA continues with our article “HePPCAT: Probabilistic PCA for Data with Heteroscedastic Noise,” published in IEEE Transactions on Signal Processing. In this paper we developed novel ascent algorithms that maximize the heteroscedastic PCA likelihood, simultaneously estimating the principal components and the heteroscedastic noise variances. We show a compelling application to air quality data, where it is common to have data both from high-quality EPA instruments and from consumer-grade sensors. Code for the paper experiments is available at https://gitlab.com/heppcat-group, and the HePPCAT method is available as a registered Julia package. Congratulations to my student Kyle Gilman, former student David Hong, and colleague Jeff Fessler.
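
To give a sense of the model, here is a toy EM loop for probabilistic PCA with group-wise noise variances. This generic EM sketch is for intuition only; it is not the ascent algorithms developed in the paper, and the HePPCAT Julia package is the reference implementation.

```python
import numpy as np

def heteroscedastic_ppca_em(X, groups, k, n_iters=100, seed=0):
    """Toy EM for the model x_i = F z_i + e_i with z_i ~ N(0, I_k) and
    e_i ~ N(0, v_g I_d) for the noise group g = groups[i]. X is d x n;
    groups is a length-n integer array with labels 0..G-1 (all nonempty)."""
    d, n = X.shape
    G = int(groups.max()) + 1
    rng = np.random.default_rng(seed)
    F = rng.standard_normal((d, k))
    v = np.ones(G)
    for _ in range(n_iters):
        # E-step: posterior mean/covariance of each z_i, by noise group.
        A = np.zeros((k, k))   # accumulates sum_i E[z_i z_i^T] / v_g(i)
        B = np.zeros((d, k))   # accumulates sum_i x_i E[z_i]^T / v_g(i)
        Ez = np.zeros((k, n))
        covs = []
        for g in range(G):
            idx = np.where(groups == g)[0]
            Minv = np.linalg.inv(F.T @ F + v[g] * np.eye(k))
            cov = v[g] * Minv   # shared posterior covariance within group g
            covs.append(cov)
            Ez[:, idx] = Minv @ F.T @ X[:, idx]
            A += (len(idx) * cov + Ez[:, idx] @ Ez[:, idx].T) / v[g]
            B += X[:, idx] @ Ez[:, idx].T / v[g]
        # M-step: update the factors, then each group's noise variance.
        F = B @ np.linalg.inv(A)
        for g in range(G):
            idx = np.where(groups == g)[0]
            R = X[:, idx] - F @ Ez[:, idx]
            v[g] = (np.sum(R**2)
                    + len(idx) * np.trace(F @ covs[g] @ F.T)) / (len(idx) * d)
    return F, v
```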

Online matrix factorization for Markovian data

Hanbaek Lyu, Deanna Needell, and I recently had a manuscript published at JMLR: “Online matrix factorization for Markovian data and applications to Network Dictionary Learning.” In this work we show that the well-known OMF algorithm for i.i.d. data streams converges almost surely to the set of critical points of the expected loss function even when the data stream is dependent but Markovian. It would be of great interest to show that this algorithm further converges to global minimizers, as has recently been proven for many batch-processing algorithms. We are excited about this important step, which generalizes the theory to the more practical case where the data aren't i.i.d. Han's work applying this to network sampling is super cool; in fact, it is impossible to sample a sparse network in an i.i.d. way, so this extension is critical for that application. The code is available here. Han is on the academic job market this year.
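
For readers new to OMF, here is a minimal Mairal-style online matrix factorization loop in the spirit of the algorithm analyzed in the paper: sparse-code each incoming sample, accumulate sufficient statistics, and update the dictionary by block coordinate descent. The coding step (ISTA) and the constants here are illustrative placeholders.

```python
import numpy as np

def sparse_code(W, x, lam, n_steps=100):
    # ISTA for h = argmin_h 0.5 * ||x - W h||^2 + lam * ||h||_1
    h = np.zeros(W.shape[1])
    L = np.linalg.norm(W, 2) ** 2 + 1e-8      # Lipschitz constant of gradient
    for _ in range(n_steps):
        h = h - W.T @ (W @ h - x) / L          # gradient step
        h = np.sign(h) * np.maximum(np.abs(h) - lam / L, 0.0)  # shrinkage
    return h

def omf_stream(stream, d, k, lam=0.1):
    """Process a stream of length-d samples, maintaining running sufficient
    statistics (A, B) and a dictionary W updated by block coordinate
    descent on its columns. Sketch only; not the paper's released code."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, k))
    A = np.zeros((k, k))
    B = np.zeros((d, k))
    for x in stream:
        h = sparse_code(W, x, lam)
        A += np.outer(h, h)
        B += np.outer(x, h)
        for j in range(k):                     # update one column at a time
            if A[j, j] > 1e-10:
                W[:, j] += (B[:, j] - W @ A[:, j]) / A[j, j]
                W[:, j] /= max(1.0, np.linalg.norm(W[:, j]))
    return W
```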

Online Tensor Completion and Tracking

Kyle Gilman and I have a preprint out describing a new algorithm for online tensor completion and tracking. We derive and demonstrate an algorithm that operates on streaming tensor data, such as hyperspectral video collected over time or chemo-sensing experiments in space and time. Kyle presented this work at the first fully virtual ICASSP, which you can view here. Anyone can register for free for this year's virtual ICASSP to watch the videos, post questions, and join the discussion. Kyle's code is also available here. We think this algorithm will have a major impact in speeding up low-rank tensor processing, especially with time-varying data, and we welcome questions and feedback.
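
To illustrate the streaming setting, here is a toy sketch of one step of streaming CP-style tensor completion: each arriving slice is only partially observed, its temporal weights are fit by least squares on the observed entries, and the spatial factors take a small gradient step. This is a generic sketch of the problem setup, not the algorithm from the preprint.

```python
import numpy as np

def stream_cp_step(A, B, Y, mask, lr=0.1):
    """One step of a streaming CP-completion sketch. Y is an incoming
    I x J slice, observed where mask is True; A (I x r) and B (J x r)
    are the spatial factors, so the slice model is A @ diag(w) @ B.T."""
    ii, jj = np.where(mask)
    Z = A[ii, :] * B[jj, :]                  # design row (A_i * B_j) per entry
    w, *_ = np.linalg.lstsq(Z, Y[ii, jj], rcond=None)  # temporal weights
    R = np.zeros_like(Y)
    R[ii, jj] = Y[ii, jj] - Z @ w            # residual on observed entries
    A += lr * R @ (B * w)                    # gradient steps on the factors
    B += lr * R.T @ (A * w)
    return A, B, w
```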


Improving K-Subspaces via Coherence Pursuit

John Lipor, Andrew Gitlin, Biaoshuai Tao, and I have a new paper, “Improving K-Subspaces via Coherence Pursuit,” to be published in the IEEE Journal of Selected Topics in Signal Processing issue “Robust Subspace Learning and Tracking: Theory, Algorithms, and Applications.” In it we present a new subspace clustering algorithm, Coherence Pursuit – K-Subspaces (CoP-KSS). Here is the code for CoP-KSS and for our figures. Our paper specifically considers the PCA step in K-Subspaces, where a best-fit subspace estimate is determined from a (possibly incorrect) clustering. When a given cluster contains points from multiple low-rank subspaces, PCA is not a robust approach. We replace that step with Coherence Pursuit, a new algorithm for Robust PCA, and we prove that Coherence Pursuit can indeed recover the “majority” subspace when data from other low-rank subspaces contaminate the cluster. In this paper we also prove, to the best of our knowledge for the first time, that the K-Subspaces problem is NP-hard, and indeed even NP-hard to approximate within any finite factor for large enough subspace rank.
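
Here is a small sketch of the Coherence Pursuit idea that replaces the PCA step: points from the majority subspace tend to be highly coherent with many other points, so score each point by its coherence with the rest, keep the most coherent points, and fit the subspace to the survivors. The keep fraction below is an illustrative hyperparameter; see the paper and code for the real method.

```python
import numpy as np

def coherence_pursuit_basis(X, k, keep_frac=0.5):
    """Coherence Pursuit sketch for robust subspace recovery. X is d x n
    (one point per column); returns a d x k orthonormal basis fit to the
    most mutually coherent points."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)  # normalize columns
    G = Xn.T @ Xn
    np.fill_diagonal(G, 0.0)                           # ignore self-coherence
    scores = np.linalg.norm(G, axis=1)                 # coherence per point
    n_keep = max(k, int(keep_frac * X.shape[1]))
    keep = np.argsort(scores)[-n_keep:]                # most coherent points
    U, _, _ = np.linalg.svd(Xn[:, keep], full_matrices=False)
    return U[:, :k]
```

In CoP-KSS, an estimate like this stands in for PCA inside each K-Subspaces iteration, so a cluster contaminated by points from other subspaces still yields a good basis for its majority subspace.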


Group OWL Regularization for Deep Nets

My student Dejiao Zhang's code for our paper “Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Nets” can be found at this link. We demonstrated that regularizing the weights in a deep network using the Group OWL norm allows for simultaneous enforcement of sparsity (meaning unimportant weights are eliminated) and parameter tying (meaning co-adapted or highly correlated weights are tied together). This is an exciting technique for learning compressed deep net architectures from data.
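
As a sketch of the regularizer itself: the Group OWL penalty takes the norm of each parameter group (for example, each row of a layer's weight matrix, i.e., one neuron's incoming weights), sorts the norms in decreasing order, and pairs them with a nonincreasing weight sequence. The row grouping below is an illustrative choice.

```python
import numpy as np

def group_owl_penalty(W, lam):
    """Group OWL penalty sketch: sum_i lam[i] * (i-th largest group norm).
    W is a layer's weight matrix with one group per row; lam is a
    nonincreasing, nonnegative vector with one entry per row. Trailing
    lam entries push small rows to zero (sparsity), while gaps between
    lam entries push similar rows toward equal norms (tying)."""
    norms = np.linalg.norm(W, axis=1)        # one Euclidean norm per row
    norms_sorted = np.sort(norms)[::-1]      # decreasing order
    return float(np.dot(lam, norms_sorted))
```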


Monotonic Matrix Completion

Ravi Ganti, Rebecca Willett, and I had a paper at NIPS 2015 called “Matrix Completion under Monotonic Single Index Models.” We studied a matrix completion problem where a low-rank matrix is observed through a monotonic function applied to each entry. We developed a calibrated loss function that allowed a neat implementation and analysis. The code is now available for public use at this bitbucket link.
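
The observation model is easy to state in code; here is a toy instance, where the link function, sizes, and sampling rate are arbitrary illustrative choices:

```python
import numpy as np

# Toy instance of the monotonic single index model: a low-rank matrix M
# observed through a monotonic link g applied entrywise, on a random
# subset of entries.
rng = np.random.default_rng(0)
d1, d2, r = 60, 50, 3
M = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))  # low rank
g = np.tanh                                  # a monotonic link function
mask = rng.random((d1, d2)) < 0.3            # observe roughly 30% of entries
Y = np.where(mask, g(M), np.nan)             # observations Y_ij = g(M_ij)
```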

Variety Matrix Completion code

The code for our matrix completion algorithm from the ICML paper “Algebraic Variety Models for High-rank Matrix Completion” can be found here in Greg Ongie's github repository. Using the algebraic variety as a low-dimensional model for data, Greg's algorithm is a kernel method that performs matrix completion in the feature space associated with a polynomial kernel. It allows us to complete a matrix even when it does not have low linear rank, but instead has low dimension in the form of a nonlinear variety. Start with the README for example scripts.
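
A quick numerical illustration of why the polynomial lift helps: columns lying on a nonlinear variety can form a full-rank matrix that becomes rank-deficient after a degree-2 monomial lift. In this toy example the columns lie on the circle x^2 + y^2 = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 40)
X = np.vstack([np.cos(theta), np.sin(theta)])        # 2 x 40 matrix, rank 2

def lift2(X):
    # all monomials of degree <= 2 in the two coordinates
    x, y = X
    return np.vstack([np.ones_like(x), x, y, x * x, x * y, y * y])

print(np.linalg.matrix_rank(X))         # 2: full rank, no linear structure
print(np.linalg.matrix_rank(lift2(X)))  # 5, not 6: the constraint
                                        # x^2 + y^2 - 1 = 0 appears as a
                                        # null vector of the lifted matrix
```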

Distance-Penalized Active Learning Using Quantile Search

Active sampling, where one chooses which samples to collect based on the data collected thus far, is an important approach for spatial environmental sampling, where resources are drastically limited compared to the extent of the signals of interest. However, most of the active learning literature studies the case where each sample has equal cost. In spatial sampling, the cost of a sample is often proportional to the distance traveled between samples. John Lipor and I collaborated with our colleagues in the Department of Civil and Environmental Engineering and the Department of Natural Resources to develop active sampling techniques for lake sampling.

The code is available here. You can also find a video about the project here.
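
The core idea is easy to sketch: plain bisection minimizes the number of samples but ignores travel, so instead probe at a tunable quantile of the remaining interval, measured from the endpoint nearer the current position. The toy below is a simplified rendering of that idea, not the algorithm as analyzed in the papers.

```python
import numpy as np

def quantile_search(f, lo, hi, pos, m=0.5, tol=0.01):
    """Toy distance-penalized search for the change point of a step
    function f on [lo, hi], starting at position pos. f(x) is True once
    x is past the change point. m = 0.5 recovers binary search; m < 0.5
    takes shorter trips per probe at the cost of more probes."""
    travel = 0.0
    while hi - lo > tol:
        if abs(pos - lo) <= abs(pos - hi):
            x = lo + m * (hi - lo)           # probe nearer the left end
        else:
            x = hi - m * (hi - lo)           # probe nearer the right end
        travel += abs(x - pos)
        pos = x
        if f(x):
            hi = x                           # change point is to the left
        else:
            lo = x                           # change point is to the right
    return 0.5 * (lo + hi), travel

# Example: change point at 0.37, starting from the left end of a transect.
est, dist = quantile_search(lambda x: x >= 0.37, 0.0, 1.0, pos=0.0, m=0.3)
```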

Lipor, J., B. P. Wong, D. Scavia, B. Kerkez, and L. Balzano. 2017. “Distance-Penalized Active Learning Using Quantile Search.” IEEE Transactions on Signal Processing 65 (20): 5453–65. https://doi.org/10.1109/TSP.2017.2731323.

Lipor, J., L. Balzano, B. Kerkez, and D. Scavia. 2015. “Quantile Search: A Distance-Penalized Active Learning Algorithm for Spatial Sampling.” In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1241–48. https://doi.org/10.1109/ALLERTON.2015.7447150.