Sparse Recovery Using Sparse Random Matrices
Piotr Indyk
MIT
Friday, March 28 at 11:30am
DOW 1017
ABSTRACT:
In recent years, a new linear method for compressing high-dimensional
data (e.g., images) has been discovered. For any high-dimensional vector x,
its sketch is the vector Ax, where A is an m x n matrix (possibly chosen
at random).
Although typically the sketch length m is much smaller than the number of
dimensions n, the sketch contains enough information to recover an
approximation to x. At the same time, the linearity of the sketching method
is very convenient for many applications, such as data stream computing and
compressed sensing.
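As a concrete illustration of the sketching step (not the specific construction studied in the talk), the following minimal Python sketch builds a sparse random m x n matrix with d random +/-1 entries per column, a common sparse design, computes y = Ax, and checks the linearity property mentioned above. All names and parameter values here are illustrative assumptions.

```python
import random

def sparse_sketch_matrix(m, n, d, seed=0):
    """Build a sparse m x n sketching matrix in column form:
    each column has exactly d nonzero entries, each +1 or -1,
    placed in d distinct rows chosen at random."""
    rng = random.Random(seed)
    return [
        [(row, rng.choice((-1, 1))) for row in rng.sample(range(m), d)]
        for _ in range(n)
    ]

def sketch(A, m, x):
    """Compute y = A x from the column-sparse representation.
    The cost is O(d * nnz(x)): only nonzero entries of x are touched."""
    y = [0.0] * m
    for j, xj in enumerate(x):
        if xj != 0.0:
            for row, sign in A[j]:
                y[row] += sign * xj
    return y

# A sparse 1000-dimensional signal compressed into a length-40 sketch.
n, m, d = 1000, 40, 3
A = sparse_sketch_matrix(m, n, d, seed=42)
x = [0.0] * n
x[17], x[256], x[801] = 5.0, -2.0, 1.0
y = sketch(A, m, x)

# Linearity of sketching: sketch(x + z) == sketch(x) + sketch(z),
# which is what makes the method convenient for data streams.
z = [0.0] * n
z[17], z[600] = 1.0, 3.0
y_sum = sketch(A, m, [a + b for a, b in zip(x, z)])
y_z = sketch(A, m, z)
assert all(abs(y_sum[i] - (y[i] + y_z[i])) < 1e-9 for i in range(m))
```

Because each column of A has only d nonzeros, updating the sketch under a stream update x[j] += delta touches just d entries of y, which is the source of the speed advantage of sparse matrices discussed below.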
The sketching methods originally came in (at least) two different flavors:
- combinatorial: utilize sparse sketching matrices; the recovery algorithms
involve binary-search-like techniques
- geometric: utilize dense sketching matrices; the recovery algorithms
typically involve linear or convex programming
The two approaches achieve different trade-offs, notably between the
compression rate and the running time. It is therefore desirable to
understand the connections between them, with the ultimate goal of
obtaining a "best of both worlds" solution.
In this talk we present recent results on applying geometric recovery
methods to sparse sketching matrices. We show that, both in theory and in practice,
the sparse matrices are essentially as "good" as the dense ones. At the same
time, our approach inherits many advantages of sparse matrices, such as lower
sketching and recovery times, and the ability to construct the sketching
matrices explicitly.
Joint work with: Radu Berinde, Anna Gilbert, Howard Karloff and Martin
Strauss.