This page provides MATLAB code implementing the algorithms described in the NIPS paper "Efficient sparse coding algorithms".
In the paper, we propose fast algorithms for solving two general-purpose convex problems:
(1) an L1-regularized least squares solver based on the feature-sign search algorithm, and
(2) an L2-constrained least squares solver based on the Lagrange dual.
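As an illustration of problem (1), below is a minimal Python/NumPy sketch of the feature-sign search algorithm for min_x ||y - Ax||^2 + gamma*||x||_1. The released MATLAB code is the reference implementation; the function name, tolerances, and loop limits here are our own choices, and the sketch assumes the active columns of A stay linearly independent.

```python
import numpy as np

def feature_sign_search(A, y, gamma, tol=1e-8, max_iter=1000):
    """Sketch of feature-sign search for min_x ||y - A x||^2 + gamma * ||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    theta = np.zeros(n)              # sign vector in {-1, 0, +1}
    active = np.zeros(n, dtype=bool)
    AtA, Aty = A.T @ A, A.T @ y
    grad = lambda v: 2.0 * (AtA @ v - Aty)   # gradient of the quadratic part

    for _ in range(max_iter):
        # Step 2: among zero coefficients, activate the one whose gradient
        # most violates the zero-optimality condition |g_i| <= gamma.
        g = grad(x)
        cand = np.where(~active)[0]
        if cand.size:
            i = cand[np.argmax(np.abs(g[cand]))]
            if np.abs(g[i]) > gamma + tol:
                theta[i] = -np.sign(g[i])
                active[i] = True
        if not active.any():
            break  # gamma dominates every feature; x = 0 is optimal

        # Inner loop: steps 3-4 until the active coefficients are stationary.
        for _ in range(max_iter):
            idx = np.where(active)[0]
            Aa = A[:, idx]
            # Step 3: closed-form minimizer with the current signs held fixed
            # (assumes Aa has linearly independent columns).
            x_new = np.linalg.solve(Aa.T @ Aa, Aty[idx] - 0.5 * gamma * theta[idx])
            # Step 4: discrete line search from x[idx] toward x_new, checking
            # the full step and every point where a coefficient changes sign.
            x_old = x[idx]
            d = x_new - x_old
            ts = [1.0]
            nz = d != 0
            ts += [t for t in (-x_old[nz] / d[nz]) if 0.0 < t < 1.0]
            obj = lambda v: np.sum((y - Aa @ v) ** 2) + gamma * np.sum(np.abs(v))
            t_best = min(ts, key=lambda t: obj(x_old + t * d))
            x[idx] = x_old + t_best * d
            x[np.abs(x) < 1e-12] = 0.0       # drop coefficients that hit zero
            active = x != 0
            theta = np.sign(x)
            if not active.any():
                break
            g = grad(x)
            # condition (a): nonzero coefficients satisfy g_i + gamma*sign(x_i) = 0
            if np.max(np.abs(g[active] + gamma * theta[active])) <= tol:
                break

        # condition (b): zero coefficients satisfy |g_i| <= gamma
        g = grad(x)
        if not (~active).any() or np.max(np.abs(g[~active])) <= gamma + tol:
            break
    return x
```

For A = I this reduces to soft thresholding, e.g. feature_sign_search(np.eye(2), np.array([3.0, 0.1]), 1.0) returns [2.5, 0.0].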
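Similarly for problem (2), here is a hedged Python/SciPy sketch of learning norm-constrained bases via the Lagrange dual: minimize ||X - B S||_F^2 subject to ||b_j||^2 <= c for every column b_j of B. The dual is optimized over the multipliers lam >= 0, and B is recovered in closed form as B = X S^T (S S^T + diag(lam))^(-1). Function names and the L-BFGS-B setup are our assumptions, not the released code; the sketch assumes S S^T is nonsingular.

```python
import numpy as np
from scipy.optimize import minimize

def learn_bases_dual(X, S, c):
    """Sketch: solve min_B ||X - B S||_F^2  s.t.  ||b_j||^2 <= c  (each column)
    via the Lagrange dual over multipliers lam >= 0 (assumes S S^T nonsingular)."""
    SSt, XSt = S @ S.T, X @ S.T
    k = S.shape[0]

    def neg_dual(lam):
        M = SSt + np.diag(lam)
        B = np.linalg.solve(M, XSt.T).T        # B(lam) = X S^T M^{-1}
        f = np.sum(B * XSt) + c * lam.sum()    # negative dual, up to a constant
        g = c - np.sum(B * B, axis=0)          # analytic gradient w.r.t. lam
        return f, g

    res = minimize(neg_dual, x0=np.ones(k), jac=True,
                   method="L-BFGS-B", bounds=[(0.0, None)] * k)
    lam = res.x
    B = np.linalg.solve(SSt + np.diag(lam), XSt.T).T
    return B, lam
```

The dual has only k variables (one multiplier per basis vector), regardless of the number of training examples, which is what makes this formulation efficient for the basis-update step.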
In particular, the feature-sign search algorithm (the L1-regularized least squares solver) is very fast and can be applied to many other machine learning problems; on benchmark data, it outperforms many existing algorithms such as LARS, basis pursuit, and grafting.
For more details, see our NIPS'06 paper.
We also apply this efficient sparse coding algorithm in a new machine learning framework called "self-taught learning", in which we are given a small amount of labeled data for a supervised learning task, together with a large amount of additional unlabeled data that need not share the labels of the supervised task or arise from the same distribution. For more details, see our ICML'07 paper.