EECS 598 Unsupervised Feature Learning

Instructor: Prof. Honglak Lee
Instructor webpage: http://www.eecs.umich.edu/~honglak/
Office hours: Thursdays, 5:00-6:00pm, 3773 CSE
Classroom: 1690 CSE
Time: Mondays and Wednesdays, 10:30am-12:00pm

Course Schedule
(Note: this schedule is subject to change.)
Each entry lists the date, topic, and presenter, followed by the assigned papers.
9/8  Introduction (Presenter: Honglak)
9/13  Sparse coding (Presenter: Honglak)
  B. Olshausen and D. Field. Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 1996.
  H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. NIPS, 2007.
9/15  Self-taught learning; Application: computer vision (Presenter: Honglak)
  R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML, 2007.
  H. Lee, R. Raina, A. Teichman, and A. Y. Ng. Exponential Family Sparse Coding with Application to Self-taught Learning. IJCAI, 2009.
  J. Yang, K. Yu, Y. Gong, and T. Huang. Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification. CVPR, 2009.
9/20  Neural networks and deep architectures I (Presenter: Deepak)
  Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 4.
  Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007.
9/22  Restricted Boltzmann machines (Presenter: Byung-soo)
  Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 5.
9/27  Variants of RBMs and autoencoders (Presenter: Chun-Yuen)
  P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
  H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. NIPS, 2008.
9/29  Deep belief networks (Presenter: Anna)
  Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 6.
  R. Salakhutdinov. Learning Deep Generative Models. PhD thesis, University of Toronto, 2009. Chapter 2.
10/4  Convolutional deep belief networks (Presenter: Min-Yian)
  H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009.
10/6  Application: audio (Presenter: Yash)
  H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. NIPS, 2009.
  A. R. Mohamed, G. Dahl, and G. E. Hinton. Deep belief networks for phone recognition. NIPS 2009 Workshop on Deep Learning for Speech Recognition.
10/11  Factorized models I (Presenter: Chun)
  M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images. AISTATS, 2010.
10/13  Factorized models II (Presenter: Soonam)
  M. Ranzato and G. E. Hinton. Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines. CVPR, 2010.
10/18  No class (study break)
10/20  Project proposal presentations
10/25  Temporal modeling I (Presenter: Jeshua)
  G. Taylor, G. E. Hinton, and S. Roweis. Modeling Human Motion Using Binary Latent Variables. NIPS, 2007.
  G. Taylor and G. E. Hinton. Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style. ICML, 2009.
10/27  Temporal modeling II (Presenter: Robert)
  G. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional Learning of Spatio-temporal Features. ECCV, 2010.
11/1  Energy-based models (Presenter: Ryan)
  K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning Invariant Features through Topographic Filter Maps. CVPR, 2009.
  K. Kavukcuoglu, M. Ranzato, and Y. LeCun. Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. Technical Report CBLL-TR-2008-12-01, 2008.
11/3  Pooling and invariance (Presenter: Min-Yian)
  K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition? ICCV, 2009.
11/8  Evaluating RBMs (Presenter: Jeshua)
  R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
  R. Salakhutdinov. Learning Deep Generative Models. PhD thesis, University of Toronto, 2009. Chapter 4.
11/10  Deep Boltzmann machines (Presenter: Dae Yon)
  R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. AISTATS, 2009.
11/15  Local coordinate coding (Presenter: Robert)
  K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning using Local Coordinate Coding. NIPS, 2009.
  J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained Linear Coding for Image Classification. CVPR, 2010.
11/17  Deep architectures II (Presenter: Soonam)
  H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring Strategies for Training Deep Neural Networks. JMLR, 2009.
11/22  Deep architectures III (Presenter: Chun)
  D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR, 2010.
11/24  Application: computer vision II (Presenter: Dae Yon)
  J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. CVPR, 2010.
  Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning Mid-Level Features for Recognition. CVPR, 2010.
11/29  Pooling and invariance II (Presenter: Anna)
  I. J. Goodfellow, Q. V. Le, A. M. Saxe, H. Lee, and A. Y. Ng. Measuring invariances in deep networks. NIPS, 2009.
  Y. Boureau, J. Ponce, and Y. LeCun. A Theoretical Analysis of Feature Pooling in Visual Recognition. ICML, 2010.
12/1  Application: natural language processing (Presenter: Guanyu)
  R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. ICML, 2008.
12/13  Project presentations I
12/15  Project presentations II
12/19  Final project report due