Preference Learning with Salient Features

I am excited that Amanda Bower will have the opportunity to discuss our new work in preference learning, “Preference Modeling with Context-Dependent Salient Features,” at ICML next week. In this work, we propose a new model for preference learning that accounts for the fact that, when making pairwise comparisons, certain features may play an outsized role, making the pairwise comparison result inconsistent with a general preference order. Her video and the schedule for her poster session availability can be found here. We look forward to hearing people’s questions and feedback!
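For intuition, here is a minimal sketch of the kind of effect the model captures. The top-k salience rule and logistic link below are illustrative assumptions on my part, not necessarily the exact model in the paper.

```python
import numpy as np

def salient_comparison_prob(x_i, x_j, w, k):
    """Probability that item i beats item j when only the k most
    salient features (largest absolute difference) enter the
    comparison. Illustrative only; see the paper for the actual model."""
    diff = x_i - x_j
    salient = np.argsort(-np.abs(diff))[:k]      # context-dependent feature set
    score = w[salient] @ diff[salient]           # utility gap on salient features
    return 1.0 / (1.0 + np.exp(-score))          # logistic link

# Item i is slightly better overall (w @ (x_i - x_j) = 0.1 > 0), but
# feature 0 is the salient one and favors item j, flipping the outcome.
x_i = np.array([0.1, 0.6, 0.6, 0.6])
x_j = np.array([0.9, 0.3, 0.3, 0.3])
w = np.ones(4)
print(salient_comparison_prob(x_i, x_j, w, k=1))  # about 0.31: j usually wins
```

Note how the head-to-head result disagrees with the overall preference order, which is exactly the inconsistency the model is designed to capture.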

Online Tensor Completion and Tracking

Kyle Gilman and I have a preprint out describing a new algorithm for online tensor completion and tracking. We derive and demonstrate an algorithm that operates on streaming tensor data, such as hyperspectral video collected over time or chemo-sensing experiments in space and time. Kyle presented this work at the first fully virtual ICASSP, which you can view here. Anyone can register for free for this year’s virtual ICASSP to watch the videos, post questions, and join the discussion. Kyle’s code is also available here. We think this algorithm will have a major impact in speeding up low-rank tensor processing, especially with time-varying data, and we welcome questions and feedback.
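To give a flavor of the streaming setting, here is a generic stochastic-gradient step for rank-r CP tensor completion as each new slice arrives. This is a toy sketch of the problem setup, not the algorithm from our preprint; the CP slice model, mask handling, and step size are illustrative assumptions.

```python
import numpy as np

def stream_cp_step(A, B, Y_t, M_t, lr=0.1):
    """One step of streaming rank-r CP tensor completion (a generic
    sketch, not the preprint's algorithm). A (I x r) and B (J x r) are
    current factor estimates; Y_t is a newly arrived I x J slice,
    observed only where the boolean mask M_t is True."""
    rows, cols = np.nonzero(M_t)
    Z = A[rows] * B[cols]                        # row l is a_{rows[l]} * b_{cols[l]}
    # Least-squares fit of the new slice's temporal weights c_t.
    c_t, *_ = np.linalg.lstsq(Z, Y_t[rows, cols], rcond=None)
    # Residual on observed entries, then gradient steps on the factors.
    R = np.zeros_like(Y_t)
    R[rows, cols] = Y_t[rows, cols] - Z @ c_t
    A = A + lr * R @ (B * c_t)                   # grad of 0.5*||R||^2 w.r.t. A
    B = B + lr * R.T @ (A * c_t)                 # ... and w.r.t. B
    return A, B, c_t
```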

Fulbright Award to study in Portugal

I am so excited to have received a Fulbright award to study optimization and machine learning with my colleague Mario Figueiredo in Lisbon, Portugal.

Two U-M stories about it can be found here and here. We will be working on new regularizers for matrix optimization problems.

Education Excellence Award

I have been awarded an Education Excellence award from the U-M College of Engineering. This award recognizes faculty who have demonstrated sustained excellence in curricular development, instruction, and guidance at both the undergraduate and graduate levels. Students are the reason why I love having an academic job, so I am truly honored to have received this award.

Promotion to Associate Professor

I am thrilled that yesterday the Regents of the University of Michigan promoted me to Associate Professor with tenure, effective September 1. It feels like yesterday that I arrived in my faculty office for the first time, and yet it also feels long ago. I appreciate so much the support of my colleagues and mentors. I also would not be here without my outstanding, curious, enthusiastic, and hard-working graduate students — they are incredible. Here is the letter that my dean sent to the provost to recommend me for promotion. I am looking forward to the next phase!

Congratulations Dejiao and David!

SPADA lab is so proud of Dr. Dejiao Zhang and Dr. David Hong. They both successfully defended their PhD dissertations this spring. Dejiao is going to Amazon Web Services next, and David is going to a postdoc at the University of Pennsylvania. We expect you both to go off and do great things! Congratulations!

NSF CAREER Award

I am honored to have received the NSF CAREER award for a proposal on optimization methods and theory for the joint formulation of dimension reduction and clustering. You can read about the award here in the UM press release and also here on the NSF website. Dimension reduction and clustering are arguably the two most critical problems in unsupervised machine learning; they are used universally for data exploration and understanding. Often dimension reduction is applied before clustering (or vice versa) to make the modeling algorithm tractable. In real data, however, it is more typical to see clusters that each have their own low-dimensional structure, and so a joint formulation is of great interest. I look forward to working toward this end in the next stage of my career.
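To make the joint problem concrete, here is a sketch of the classical K-subspaces heuristic, which alternates between assigning points to subspaces and refitting each subspace by PCA on its assigned points. It is a standard baseline for this problem, not the new methods proposed in the project.

```python
import numpy as np

def k_subspaces(X, K, r, n_iters=50, seed=0):
    """Cluster the columns of X (d x n) into K groups, each modeled by
    an r-dimensional subspace, via the classical alternating heuristic."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    # Initialize with random orthonormal bases.
    U = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(K)]
    for _ in range(n_iters):
        # Assignment step: distance from each point to each subspace.
        resid = np.stack([
            np.linalg.norm(X - Uk @ (Uk.T @ X), axis=0) for Uk in U
        ])                                        # K x n residual matrix
        labels = resid.argmin(axis=0)
        # Update step: refit each subspace by PCA on its cluster.
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] >= r:
                U[k] = np.linalg.svd(Xk, full_matrices=False)[0][:, :r]
    return labels, U
```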

Army Young Investigator

I am very excited that my project “Mathematics for Learning Nonlinear Generalizations of Subspace Models in High Dimensions” has won the Army Young Investigator award! Subspace models are widely used because of their simplicity and ease of analysis. While these linear models are very powerful in many high-dimensional data contexts, they often miss important nonlinearities in real data. This project aims to extend recent advances in signal processing to the single-index model and the nonlinear variety model. Read the department’s announcement here.
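For readers unfamiliar with it, the single-index model posits y = f(w^T x) for an unknown link f and direction w. As a quick illustration (separate from the project itself), a classical observation due to Brillinger, based on Stein's lemma, says that with Gaussian inputs, ordinary least squares recovers the direction w up to scale even though it ignores f entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 50_000
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

X = rng.standard_normal((n, d))                    # Gaussian inputs
y = np.tanh(X @ w) + 0.1 * rng.standard_normal(n)  # unknown nonlinear link + noise

# Least squares ignores the nonlinearity but still finds the direction.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat /= np.linalg.norm(w_hat)
print(abs(w_hat @ w))  # close to 1: direction recovered up to sign and scale
```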

Postdoc Opportunity at the University of Michigan

We have an opening for a postdoctoral researcher, to begin in spring 2019.

Please email Laura Balzano <girasole@umich.edu> with the subject “Joining the Balzano lab — postdoc 2019” if you are interested.

We are seeking a postdoc who is interested in applying machine learning techniques to real-time dynamic data analysis. While machine learning has advanced significantly over the last decade, its application to dynamic time-varying data is still in its infancy. This project will focus on three ML areas: online learning, stochastic gradient methods, and streaming PCA. We will work on theory to understand how the standard approaches behave when the data are time-varying, develop appropriate models for time-varying data, and develop novel approaches along with convergence theory. Our main application focus areas will be power systems engineering and computer vision. In power systems, we will develop methodologies to infer the real-time behavior of aggregations of distributed energy resources from hierarchical, heterogeneous, and incomplete measurements of power system quantities. In computer vision, we will develop real-time algorithms for object tracking and activity recognition in video.
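For a flavor of the streaming PCA thread, here is the classic Oja-style update on a toy time-varying stream; the step size, drift model, and QR re-orthonormalization are illustrative choices, and the project will go well beyond this baseline.

```python
import numpy as np

def oja_step(U, x, lr):
    """One Oja-style streaming PCA update: U (d x r) is the current
    orthonormal basis estimate, x is a new d-dimensional sample."""
    U = U + lr * np.outer(x, U.T @ x)    # move the basis toward the new sample
    U, _ = np.linalg.qr(U)               # re-orthonormalize
    return U

# Toy time-varying stream: a slowly drifting 1-D subspace.
rng = np.random.default_rng(1)
d = 20
U = np.linalg.qr(rng.standard_normal((d, 1)))[0]
u_true = np.linalg.qr(rng.standard_normal((d, 1)))[0]
for t in range(2000):
    u_true += 0.001 * rng.standard_normal((d, 1))   # slow subspace drift
    u_true /= np.linalg.norm(u_true)
    x = (u_true * rng.standard_normal() + 0.05 * rng.standard_normal((d, 1))).ravel()
    U = oja_step(U, x, lr=0.05)
print(abs((U.T @ u_true).item()))  # alignment near 1 once tracking locks on
```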

Optimally Weighted PCA for High-dimensional Heteroscedastic Data

Today I had the opportunity to speak about very recent results by my student David Hong (joint work also with Jeff Fessler) in analyzing asymptotic recovery guarantees for weighted PCA for high-dimensional heteroscedastic data. In the paper we recently posted online, we have asymptotic analysis (as both the number of samples and the dimension of the problem grow to infinity, with their ratio converging to a fixed constant) of the recovery for weighted PCA components, amplitudes, and scores. Those recovery expressions allow us to find weights that give optimal recovery, and the optimal weights turn out to be a very simple expression involving only the noise variances and the PCA amplitudes. To learn more, watch my talk here, and let us know if you have any questions!
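To make the setup concrete, here is a minimal weighted PCA sketch on heteroscedastic data. The inverse-noise-variance weights used below are only a natural illustrative choice; as noted above, the optimal weights derived in the paper involve both the noise variances and the PCA amplitudes, so treat this purely as a sketch of the problem, not the paper's method.

```python
import numpy as np

def weighted_pca(Y, w, r):
    """Weighted PCA: columns of Y (d x n) are samples, w[l] weights
    sample l. Returns the top-r eigenvectors and eigenvalues of the
    weighted sample covariance (1/n) * sum_l w[l] y_l y_l^T."""
    C = (Y * w) @ Y.T / Y.shape[1]
    evals, evecs = np.linalg.eigh(C)              # ascending order
    return evecs[:, ::-1][:, :r], evals[::-1][:r]

# Heteroscedastic toy data: two groups of samples with different noise levels.
rng = np.random.default_rng(2)
d, r, n = 50, 2, 1000
U = np.linalg.qr(rng.standard_normal((d, r)))[0]
v = np.concatenate([np.full(n // 2, 0.1), np.full(n // 2, 4.0)])  # noise variances
Y = U @ rng.standard_normal((r, n)) + np.sqrt(v) * rng.standard_normal((d, n))
# Inverse-variance weights: a simple illustrative choice, NOT the
# paper's optimal weights (those also depend on the PCA amplitudes).
U_hat, _ = weighted_pca(Y, 1.0 / v, r)
print(np.linalg.norm(U_hat.T @ U))  # close to sqrt(r) when recovery is good
```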