Compressed sensing belongs to the large field of inverse problems. A typical example of such problems consists in determining the signal $x \in \mathbb{R}^n$ that produces the measurement vector $y \in \mathbb{R}^m$ through the linear transformation $A \in \mathbb{R}^{m \times n}$, namely $y = Ax$.
Of course, without additional assumptions the problem is ill-posed if the rank of the matrix $A$ is strictly smaller than $n$, meaning that $A$ does not admit a left inverse. To be able to deal with cases where $m < n$, we need at least to assume some structure on the original signal $x$ and/or on the design matrix $A$. Compressed sensing deals with the case where $m \ll n$ and where $x$ is assumed to be sparse, meaning that only a few coefficients of $x$ are non-zero. For any $s \in [n]$, a vector $x \in \mathbb{R}^n$ is said to be $s$-sparse if

$$\|x\|_0 := \operatorname{card}\{i \in [n] : x_i \neq 0\} \leq s.$$
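As a running illustration, here is a minimal numpy sketch (the variable names and the Gaussian design are our own choices, not taken from the text) that builds an $s$-sparse signal and its measurement vector:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 100, 30, 3                       # ambient dimension, measurements, sparsity

# Build an s-sparse signal: s non-zero coefficients at random positions.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.normal(size=s)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian design matrix
y = A @ x                                  # measurement vector

print("||x||_0 =", np.count_nonzero(x))    # prints 3
```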
Denoting by $a_1, \dots, a_m \in \mathbb{R}^n$ the rows of the matrix $A$, typical problems addressed by compressed sensing are the following.

$(P_1)$: Construct a minimal number of sensing vectors $a_1, \dots, a_m$ such that any $s$-sparse vector $x$ can be efficiently recovered from $A$ and $y = Ax$.

$(P_2)$: Given sensing vectors $a_1, \dots, a_m$, construct a recovery algorithm for any $s$-sparse vector from $A$ and $y = Ax$; i.e. find a map $\Delta : \mathbb{R}^m \to \mathbb{R}^n$ which can be efficiently implemented and such that for any $s$-sparse vector $x$, $\Delta(Ax) = x$.
I. $\ell_0$-minimization
Reconstructing an $s$-sparse vector $x$ from its measurement vector $y = Ax$ amounts to solving the following $\ell_0$-minimization problem:

$$\min_{z \in \mathbb{R}^n} \|z\|_0 \quad \text{subject to} \quad Az = y. \qquad (1)$$
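In principle, (1) can be solved by scanning supports of increasing size, which makes its combinatorial cost explicit. Below is a minimal brute-force sketch (function name and tolerance are our own choices), continuing the numpy setup above:

```python
from itertools import combinations
import numpy as np

def l0_minimize(A, y, tol=1e-10):
    """Brute-force solver for (1): scan supports of increasing size k and
    return the sparsest z with Az = y. The cost is exponential in n, which
    illustrates the combinatorial nature of l0-minimization."""
    m, n = A.shape
    for k in range(n + 1):
        for support in combinations(range(n), k):
            S = list(support)
            z = np.zeros(n)
            if k > 0:
                # Least-squares fit of y on the columns of A indexed by S.
                z[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            if np.linalg.norm(A @ z - y) <= tol:
                return z
    return None
```

Even on the small running example the scan may visit up to $\binom{100}{3}$ supports, and the count grows combinatorially with $s$ and $n$.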
In the following, we present the two different problems that can be tackled, together with the minimal number of measurements ensuring that the solution of (1) is exactly the original sparse vector.
1) Non-uniform approach
For any $s \in [n]$, given an $s$-sparse vector $x \in \mathbb{R}^n$, there exists a measurement matrix $A$ with $m = s + 1$ rows such that the vector $x$ can be reconstructed from its measurement vector $y = Ax$ as a solution of the $\ell_0$-minimization problem (1).
2) Uniform approach
Given some matrix $A$, we want to ensure that we are able to recover any $s$-sparse vector $x$ from its measurements $y = Ax$. The following theorem characterizes this uniform recovery property; since it requires every set of $2s$ columns of $A$ to be linearly independent, a necessary condition is $m \geq 2s$.
Theorem
Given $A \in \mathbb{R}^{m \times n}$, the following properties are equivalent:

1. Every $s$-sparse vector $x \in \mathbb{R}^n$ is the unique $s$-sparse solution of $Az = Ax$, that is, if $Ax = Az$ and both $x$ and $z$ are $s$-sparse, then $x = z$.
2. The null space $\ker A$ does not contain any $2s$-sparse vector other than the zero vector, that is, $\ker A \cap \{z \in \mathbb{R}^n : \|z\|_0 \leq 2s\} = \{0\}$, where $\ker A := \{z \in \mathbb{R}^n : Az = 0\}$.
3. For every $S \subset [n]$ with $\operatorname{card}(S) \leq 2s$, the submatrix $A_S$ formed by the columns of $A$ indexed by $S$ is injective as a map from $\mathbb{R}^S$ to $\mathbb{R}^m$.
4. Every set of $2s$ columns of $A$ is linearly independent.
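For small instances, condition 4 can be checked directly by enumerating column subsets. A hedged sketch (the function name is ours):

```python
from itertools import combinations
import numpy as np

def columns_independent(A, k):
    """Check that every set of k columns of A is linearly independent
    (condition 4 of the theorem above). Only feasible for small n,
    since there are C(n, k) subsets to test."""
    m, n = A.shape
    if k > m:
        return False  # more than m vectors of R^m are always dependent
    return all(np.linalg.matrix_rank(A[:, list(S)]) == k
               for S in combinations(range(n), k))
```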
We are now going to see that $m = 2s$ measurements suffice to reconstruct every $s$-sparse vector – at least in theory.
Theorem
For any integer $s$ with $2s \leq n$, there exists a measurement matrix $A$ with $m = 2s$ rows such that every $s$-sparse vector $x$ can be recovered from its measurement vector $y = Ax$ as a solution of the $\ell_0$-minimization problem (1).
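One classical way to achieve $m = 2s$ is a Vandermonde-type construction: with distinct nodes $t_1 < \dots < t_n$, any $2s$ columns of the matrix $(t_j^{i-1})_{1 \leq i \leq 2s,\, 1 \leq j \leq n}$ form an invertible Vandermonde system. The following numerical check (our illustration, reusing `columns_independent` from above) confirms this on a small example:

```python
import numpy as np

n, s = 10, 2
t = np.linspace(1.0, 2.0, n)                  # n distinct nodes
A = np.vander(t, N=2 * s, increasing=True).T  # shape (2s, n), A[i, j] = t_j ** i

print(columns_independent(A, 2 * s))          # expected: True
```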
The optimization problem (1) is known to be NP-hard. In compressive sensing, we will rather consider special choices of $A$ and choose $y = Ax$ for some sparse $x$. We will see that a variety of tractable algorithms will then provably recover $x$ from $y$ and thereby solve (1) for such specifically designed matrices $A$. However, to emphasize this point once more, such algorithms will not successfully solve the $\ell_0$-minimization problem for all possible choices of $A$ and $y$, due to NP-hardness.
II. Convex relaxation through $\ell_1$-minimization
Hence one needs to transform the formulation of the problem to be able to recover the signal efficiently, without ending up with a problem that requires $m$ to be too large to ensure that $x$ is the unique optimal solution. The most natural way to do this is to propose a convex relaxation of the problem (1). This is done by replacing the $\ell_0$-norm in the objective function by the $\ell_1$-norm. Note that this is the best relaxation that we can propose, in the sense that the $\ell_1$-norm is the convex envelope of the $\ell_0$-norm on $B_\infty^n = [-1, 1]^n$. This means that on $B_\infty^n$, the $\ell_1$-norm is the supremum of all the affine functions that are smaller than the $\ell_0$-norm on $B_\infty^n$. An equivalent statement is that the $\ell_1$-norm coincides with the biconjugate of the $\ell_0$-norm on $B_\infty^n$. The relaxed problem reads

$$\min_{z \in \mathbb{R}^n} \|z\|_1 \quad \text{subject to} \quad Az = y. \qquad (2)$$
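Problem (2) is a linear program, so it can be solved with off-the-shelf solvers. A minimal sketch using scipy, based on the standard reformulation with auxiliary variables $u_i \geq |z_i|$ (the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve (2) as an LP over the stacked variable (z, u):
    minimize sum(u) subject to Az = y, z - u <= 0 and -z - u <= 0,
    so that u_i = |z_i| at the optimum."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of u
    A_eq = np.hstack([A, np.zeros((m, n))])         # Az = y
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),           #  z - u <= 0
                      np.hstack([-I, -I])])         # -z - u <= 0
    b_ub = np.zeros(2 * n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]
```

On the running example, `basis_pursuit(A, y)` returns a vector equal to `x` up to numerical tolerance whenever the recovery conditions discussed below hold.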
1) Necessary condition
Proposition
Let $A$ be an $m \times n$ matrix satisfying the exact recovery property ER($s$) defined below; then $m \geq C s \log(en/s)$ for some universal constant $C > 0$.

The number of measurements needed to reconstruct every $s$-sparse vector via (2) thus always satisfies $m \gtrsim s \log(en/s)$ up to a universal constant. It means that we are paying an extra log factor compared to the $\ell_0$-minimization procedure. We gain more on the computational side than we lose on the theoretical number of measurements.
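To fix ideas with numbers of our own choosing: for $n = 10^6$ and $s = 100$,

$$s \log(en/s) = 100 \, \log(e \cdot 10^4) \approx 100 \times 10.2 \approx 10^3,$$

i.e. roughly a thousand measurements instead of the $2s = 200$ that suffice in theory for $\ell_0$-minimization, in exchange for a polynomial-time recovery procedure.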
Definition
Let $A \in \mathbb{R}^{m \times n}$ and $s \in [n]$. We say that $A$ satisfies the exact recovery property of order $s$ (ER($s$)) if for any $s$-sparse vector $x \in \mathbb{R}^n$, one has

$$\{x\} = \underset{z \,:\, Az = Ax}{\operatorname{argmin}} \|z\|_1.$$
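ER($s$) cannot be certified by sampling, but a quick Monte Carlo sanity check on random sparse signals is often informative. A hedged sketch reusing `basis_pursuit` from above (names and defaults are ours):

```python
import numpy as np

def er_sanity_check(A, s, trials=50, tol=1e-6, seed=1):
    """Empirically test exact recovery of random s-sparse vectors via (2).
    Passing all trials does NOT prove ER(s); a single failure disproves it."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    for _ in range(trials):
        x = np.zeros(n)
        S = rng.choice(n, size=s, replace=False)
        x[S] = rng.normal(size=s)
        if np.linalg.norm(basis_pursuit(A, A @ x) - x) > tol:
            return False
    return True
```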
2) Equivalent condition: Nullspace property and cone of descent
A matrix $A$ is said to satisfy the null space property relative to a set $S \subset [n]$ if

$$\|v_S\|_1 < \|v_{S^c}\|_1 \quad \text{for all } v \in \ker A \setminus \{0\},$$

and the null space property of order $s$ (NSP($s$)) if it satisfies the null space property relative to every set $S \subset [n]$ with $\operatorname{card}(S) \leq s$.
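When $\ker A$ is one-dimensional, spanned by some $v$, NSP($s$) reduces to a condition on $v$ alone, since every non-zero kernel vector is a scalar multiple of $v$ and the inequality is scale-invariant. A small sketch under that assumption (ours, for illustration):

```python
import numpy as np

def nsp_1d_kernel(v, s):
    """NSP(s) for a matrix whose null space is spanned by v: the worst set S
    collects the s largest entries of |v|, so the property holds iff they
    carry less than half of the total l1 mass, i.e. ||v_S||_1 < ||v||_1 / 2."""
    a = np.sort(np.abs(v))[::-1]
    return a[:s].sum() < a.sum() / 2
```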
Theorem
Let $A \in \mathbb{R}^{m \times n}$. The following assertions are equivalent:

1. $A$ satisfies ER($s$).
2. $A$ satisfies NSP($s$).
3. $\ker A \cap \mathcal{D}(\|\cdot\|_1, x) = \{0\}$ for every $s$-sparse vector $x$, where $\mathcal{D}(\|\cdot\|_1, x) := \bigcup_{t > 0} \{h \in \mathbb{R}^n : \|x + t h\|_1 \leq \|x\|_1\}$ is the cone of descent of the norm $\|\cdot\|_1$ at $x$.
Theorem
If $A$ satisfies NSP($s$), then $\ker A$ contains no $2s$-sparse vector other than the zero vector, which means that uniform recovery is also ensured by solving (1).
Hence, the null space property is a necessary and sufficient condition for a matrix A to ensure recovery via (2).
It would be sufficient to construct sensing matrices satisfying NSP($s$) with a minimal number of rows (i.e. a minimal number of measurements) to solve the compressed sensing problem using (2). However, verifying such a condition is impossible in practice. We are going to use instead two stronger properties that imply NSP($s$). These two conditions are then merely sufficient – whereas NSP is necessary and sufficient – for (2) to succeed.
3) Sufficient conditions
a) Gelfand width
For all integers $1 \leq s \leq n$, let us denote by $B_1^n$ (resp. $B_2^n$) the $\ell_1$ (resp. $\ell_2$) unit ball of $\mathbb{R}^n$, and let us define, for any subset $K \subset \mathbb{R}^n$,

$$\operatorname{rad}(K) := \sup_{v \in K} \|v\|_2.$$
Under NSP($s$), the null space of $A$ does not contain any non-zero $2s$-sparse vector. Then $\ker A$ is far enough from the canonical axes and from all the $2s$-dimensional subspaces spanned by canonical basis vectors. Intuitively, $\ker A$ will be directed only along diagonal directions of $\mathbb{R}^n$. For these vectors, said to be well-spread, the ratio of their $\ell_2$-norm over their $\ell_1$-norm is much smaller than $1$. One can anticipate that NSP($s$) will be verified if the $\ell_2$-norm of vectors in $\ker A \cap B_1^n$ is much smaller than $1/\sqrt{s}$.
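As a concrete example of our own: the maximally spread vector $v = (1, \dots, 1)$ satisfies $\|v\|_2 / \|v\|_1 = 1/\sqrt{n}$, whereas a $1$-sparse vector $e_i$ satisfies $\|e_i\|_2 / \|e_i\|_1 = 1$. More generally, any $s$-sparse $v$ obeys $\|v\|_1 \leq \sqrt{s}\, \|v\|_2$ by the Cauchy–Schwarz inequality, so sparse vectors are precisely those for which this ratio is large.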


Definition
We say that $A$ satisfies the Gelfand property of order $s$ if

$$\operatorname{rad}(\ker A \cap B_1^n) < \frac{1}{2\sqrt{s}}.$$

We will write that $A \in \operatorname{Gelfand}(s)$.
Theorem
If $A$ satisfies Gelfand($s$), then $A$ satisfies NSP($s$).
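With the constant $1/(2\sqrt{s})$ used in the definition above, the proof is short enough to sketch: for any $v \in \ker A \setminus \{0\}$ and any $S \subset [n]$ with $\operatorname{card}(S) \leq s$,

$$\|v_S\|_1 \leq \sqrt{s}\, \|v_S\|_2 \leq \sqrt{s}\, \|v\|_2 < \sqrt{s} \cdot \frac{\|v\|_1}{2\sqrt{s}} = \frac{\|v\|_1}{2},$$

hence $\|v_S\|_1 < \|v\|_1 - \|v_S\|_1 = \|v_{S^c}\|_1$, which is exactly NSP($s$).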
b) Restricted isometry property
Definition
Let $\delta \in (0, 1)$. We say that $A$ satisfies the restricted isometry property of order $s$ if for any $s$-sparse vector $x \in \mathbb{R}^n$,

$$(1 - \delta)\|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta)\|x\|_2^2.$$

We will write that $A \in \operatorname{RIP}(s)$.
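Certifying RIP($s$) exactly requires examining all $\binom{n}{s}$ supports, but the near-isometry can be probed empirically on random $s$-sparse directions. A hedged numpy sketch (it only yields a lower bound on the true restricted isometry constant):

```python
import numpy as np

def empirical_rip_constant(A, s, trials=2000, seed=2):
    """Estimate the RIP constant delta of order s by sampling random
    s-sparse unit vectors; the true constant is at least the returned value."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    delta = 0.0
    for _ in range(trials):
        x = np.zeros(n)
        S = rng.choice(n, size=s, replace=False)
        x[S] = rng.normal(size=s)
        x /= np.linalg.norm(x)                 # restrict to unit s-sparse vectors
        delta = max(delta, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return delta
```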
The RIP($s$) condition is really strong, as can be seen from the following theorem.
Theorem
If $A$ satisfies RIP($s$) with a sufficiently small constant $\delta$, then $A$ satisfies Gelfand($s/65$).
In a nutshell
NSP($s$) is equivalent to ER($s$), so it is a tight condition for ER($s$). However it is impractical, because it is essentially impossible to verify. The RIP condition, even if it is more restrictive, gives a quantitative criterion which is much easier to deal with. Hence, the next goal is to construct sensing matrices satisfying RIP.