arXiv:1001.0041v2 [math.MG] 2 Jul 2010

Almost-Euclidean subspaces of ℓ_1^N via tensor products:
a simple approach to randomness reduction

Piotr Indyk^{1⋆} and Stanislaw Szarek^{2⋆⋆}
1 MIT, indyk@mit.edu
2 CWRU & Paris 6, szarek@math.jussieu.fr
Abstract. It has been known since the 1970s that the N-dimensional ℓ_1-space contains nearly Euclidean subspaces whose dimension is Ω(N). However, proofs of existence of such subspaces were probabilistic, hence non-constructive, which made the results not quite suitable for subsequently discovered applications to high-dimensional nearest neighbor search, error-correcting codes over the reals, compressive sensing and other computational problems. In this paper we present a "low-tech" scheme which, for any γ > 0, allows us to exhibit nearly Euclidean Ω(N)-dimensional subspaces of ℓ_1^N while using only N^γ random bits. Our results extend and complement (particularly) recent work by Guruswami, Lee and Wigderson. Characteristic features of our approach include (1) simplicity (we use only tensor products) and (2) yielding almost Euclidean subspaces with arbitrarily small distortions.
1 Introduction
It is a well-known fact that for any vector x ∈ R^N, its ℓ_2 and ℓ_1 norms are related by the (optimal) inequality ‖x‖_2 ≤ ‖x‖_1 ≤ √N·‖x‖_2. However, classical results in geometric functional analysis show that for a "substantial fraction" of vectors, the relation between the 1-norm and the 2-norm can be made much tighter. Specifically, [FLM77,Kas77,GG84] show that there exists a subspace E ⊂ R^N of dimension m = αN, and a scaling constant S, such that for all x ∈ E

    (1/D)·√N·‖x‖_2 ≤ S‖x‖_1 ≤ √N·‖x‖_2    (1)

where α ∈ (0,1) and D = D(α), called the distortion of E, are absolute (notably dimension-free) constants. Over the last few years, such "almost-Euclidean" subspaces of ℓ_1^N have found numerous applications, to high-dimensional nearest neighbor search [Ind00], error-correcting codes over the reals and compressive sensing [KT07,GLR08,GLW08], vector quantization [LV06], oblivious dimensionality reduction and ε-samples for high-dimensional half-spaces [KRS09], and to other problems.

⋆ This research has been supported in part by a David and Lucille Packard Fellowship, MADALGO (Center for Massive Data Algorithmics, funded by the Danish National Research Foundation) and NSF grant CCF-0728645.
⋆⋆ Supported in part by grants from the National Science Foundation (U.S.A.) and the U.S.-Israel BSF.
For the above applications, it is convenient and sometimes crucial that the subspace E is defined in an explicit manner^3. However, the aforementioned results do not provide much guidance in this regard, since they use the probabilistic method. Specifically, either the vectors spanning E, or the vectors spanning the space dual to E, are i.i.d. random variables from some distribution. As a result, the constructions require Ω(N^2) independent random variables as a starting point. Until recently, the largest explicitly constructible almost-Euclidean subspace of ℓ_1^N, due to Rudin [Rud60] (cf. [LLR94]), had only a dimension of Θ(√N).
During the last few years, there has been a renewed interest in the problem [AM06,Sza06,Ind07,LS07,GLR08,GLW08], with researchers using ideas gained from the study of expanders, extractors and error-correcting codes to obtain several explicit constructions. The work progressed on two fronts, focusing on (a) fully explicit constructions of subspaces attempting to maximize the dimension and minimize the distortion [Ind07,GLR08], as well as (b) constructions using limited randomness, with dimension and distortion matching (at least qualitatively) the existential dimension and distortion bounds [Ind00,AM06,LS07,GLW08]. The parameters of the constructions are depicted in Figure 1. Qualitatively, they show that in the fully explicit case, one can achieve either arbitrarily low distortion or arbitrarily high subspace dimension, but not (yet?) both. In the low-randomness case, one can achieve arbitrarily high subspace dimension and constant distortion while using randomness that is sub-linear in N; achieving arbitrarily low distortion was possible as well, albeit at the price of (super-)linear randomness.
Reference      Distortion                      Subspace dimension    Randomness
[Ind07]        1+ε                             N^{1−o_ε(1)}          explicit
[GLR08]        (log N)^{O_η(log log log N)}    (1−η)N                explicit
[Ind00]        1+ε                             Ω(ε^2/log(1/ε))·N     O(N log^2 N)
[AM06,LS07]    O_η(1)                          (1−η)N                O(N)
[GLW08]        2^{O_η(1/γ)}                    (1−η)N                O(N^γ)
This paper     1+ε                             (γε)^{O(1/γ)}·N       O(N^γ)

Fig. 1. The best known results for constructing almost-Euclidean subspaces of ℓ_1^N. The parameters ε, η, γ ∈ (0,1) are assumed to be constants, although we explicitly point out when the dependence on them is subsumed by the big-Oh notation.
3 For the purpose of this paper, "explicit" means "the basis of E can be generated by a deterministic algorithm with running time polynomial in N." However, the individual constructions can be even "more explicit" than that.

Our result. In this paper we show that, using sub-linear randomness, one can construct a subspace with arbitrarily small distortion while keeping its dimension proportional to N. More precisely, we have:
Theorem 1. Let ε, γ ∈ (0,1). Given N ∈ ℕ, assume that we have at our disposal a sequence of random bits of length max{N^γ, C(ε,γ)}·log(N/(εγ)). Then, in deterministic polynomial (in N) time, we can generate numbers M > 0, m ≥ c(ε,γ)·N, and an m-dimensional subspace E of ℓ_1^N, for which we have

    ∀x ∈ E, (1−ε)M‖x‖_2 ≤ ‖x‖_1 ≤ (1+ε)M‖x‖_2

with probability greater than 98%.
In a sense, this complements the result of [GLW08], optimizing the distortion of the subspace at the expense of its dimension. Our approach also allows us to retrieve – using a simpler and low-tech approach – the results of [GLW08] (see the comments at the end of the Introduction).
Overview of techniques. The ideas behind many of the prior constructions, as well as this work, can be viewed as variants of the related developments in the context of error-correcting codes. Specifically, the construction of [Ind07] resembles the approach of amplifying the minimum distance of a code using expanders developed in [ABN+92], while the constructions of [GLR08,GLW08] were inspired by low-density parity check codes. The reason for this state of affairs is that a vector whose ℓ_1 and ℓ_2 norms are very different must be "well-spread", i.e., a small subset of its coordinates cannot contain most of its ℓ_2 mass (cf. [Ind07,GLR08]). This is akin to a property required from a good error-correcting code, where the weight (a.k.a. the ℓ_0 norm) of each codeword cannot be concentrated on a small subset of its coordinates.
In this vein, our construction utilizes a tool frequently used for (linear) error-correcting codes, namely the tensor product. Recall that, for two linear codes C1 ⊂ {0,1}^{n1} and C2 ⊂ {0,1}^{n2}, their tensor product is a code C ⊂ {0,1}^{n1·n2} such that for any codeword c ∈ C (viewed as an n1×n2 matrix), each column of c belongs to C1 and each row of c belongs to C2. It is known that the dimension of C is the product of the dimensions of C1 and C2, and that the same holds for the minimum distance. This enables constructing a code of "large" block-length N^k by starting from a code of "small" block-length N and tensoring it k times. Here, we roughly show that the tensor product of two subspaces yields a subspace whose distortion is the product of the distortions of the subspaces. Thus, we can randomly choose an initial small low-distortion subspace, and tensor it with itself to yield the desired dimension.
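The multiplicativity of dimension and minimum distance under code tensoring is easy to verify by brute force on a toy pair of codes. The sketch below (plain Python; the particular codes – a length-3 even-weight code and a length-3 repetition code – are our illustrative choices, not taken from the paper) enumerates all 3×3 binary matrices whose columns lie in C1 and rows in C2:

```python
from itertools import product

def min_distance(code):
    # minimum Hamming weight over nonzero codewords (valid for linear codes)
    return min(sum(c) for c in code if any(c))

# C1: even-weight (single parity check) code of length 3 -> dimension 2, distance 2
C1 = {c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0}
# C2: repetition code of length 3 -> dimension 1, distance 3
C2 = {(0, 0, 0), (1, 1, 1)}

# Tensor product: 3x3 binary matrices whose columns lie in C1 and rows in C2
tensor = []
for bits in product((0, 1), repeat=9):
    m = [bits[0:3], bits[3:6], bits[6:9]]
    cols_ok = all(tuple(m[i][j] for i in range(3)) in C1 for j in range(3))
    rows_ok = all(tuple(row) in C2 for row in m)
    if cols_ok and rows_ok:
        tensor.append(bits)

print(len(tensor))           # -> 4 codewords, i.e. dimension 2 = 2 * 1
print(min_distance(tensor))  # -> 6 = 2 * 3
```

The same brute-force check works for any small pair of linear codes, though the enumeration is exponential in the block length.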
However, tensoring alone does not seem sufficient to give a subspace with distortion arbitrarily close to 1. This is because we can only analyze the distortion of the product space for the case when the scaling factor S in Equation 1 is equal to 1 (technically, we only prove the left inequality, and rely on the general relation between the ℓ_2 and ℓ_1 norms for the upper bound). For S = 1, however, the best achievable distortion is strictly greater than 1, and tensoring can make it only larger. To avoid this problem, instead of the ℓ_1^N norm we use the ℓ_1^{N/B}(ℓ_2^B) norm, for a "small" value of B. The latter norm (say, denoted by ‖·‖) treats the vector as a sequence of N/B "blocks" of length B, and returns the sum of the ℓ_2 norms of the blocks. We show that there exist subspaces E ⊂ ℓ_1^{N/B}(ℓ_2^B) such that for any x ∈ E we have

    (1/D)·√(N/B)·‖x‖_2 ≤ ‖x‖ ≤ √(N/B)·‖x‖_2

for D that is arbitrarily close to 1. Thus, we can construct almost-Euclidean subspaces of ℓ_1(ℓ_2) of the desired dimensions using tensoring, and get rid of the "inner" ℓ_2 norm at the end of the process.
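For concreteness, here is a minimal sketch of the ℓ_1(ℓ_2) block norm just described (plain Python; the sample vector and block length are arbitrary illustrative choices). Note that B = 1 recovers the ordinary ℓ_1 norm and B = N the ℓ_2 norm:

```python
from math import sqrt, isclose

def block_norm(x, B):
    """The l1(l2) norm: split x into blocks of length B and sum their l2 norms."""
    assert len(x) % B == 0
    return sum(sqrt(sum(v * v for v in x[i:i + B])) for i in range(0, len(x), B))

x = [3.0, 4.0, 0.0, 1.0, 2.0, 2.0]  # N = 6, B = 3 -> blocks (3,4,0) and (1,2,2)
print(block_norm(x, 3))             # -> 5.0 + 3.0 = 8.0
print(block_norm(x, 1))             # -> 12.0, the ordinary l1 norm
print(block_norm(x, 6))             # -> sqrt(34), the ordinary l2 norm
```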
We point out that if we do not insist on distortion arbitrarily close to 1, the "blocks" are not needed and the argument simplifies substantially. In particular, to retrieve the results of [GLW08], it is enough to combine the scalar-valued version of Proposition 1 below with "off-the-shelf" random constructions [Kas77,GG84] yielding – in the notation of Equation 1 – a subspace E for which the parameter α is close to 1.
2 Tensoring subspaces of L1
We start by defining some basic notions and notation used in this section.
Norms and distortion. In this section we adopt the "continuous" notation for vectors and norms. Specifically, consider a real Hilbert space H and a probability measure µ over [0,1]. For p ∈ [1,∞] consider the space L_p(H) of H-valued p-integrable functions f endowed with the norm

    ‖f‖_p = ‖f‖_{L_p(H)} = (∫ ‖f(x)‖_H^p dµ(x))^{1/p}.

In what follows we will omit µ from the formulae since the measure will be clear from the context (and largely irrelevant). As our main result concerns finite-dimensional spaces, it suffices to focus on the case where µ is simply the normalized counting measure over the discrete set {0, 1/n, ..., (n−1)/n} for some fixed n ∈ ℕ (although the statements hold in full generality). In this setting, the functions f from L_p(H) are equivalent to n-dimensional vectors with coordinates in H.^4 The advantage of using the L_p norms as opposed to the ℓ_p norms is that the relation between the 1-norm and the 2-norm does not involve scaling factors that depend on the dimension, i.e., we have ‖f‖_2 ≥ ‖f‖_1 for all f ∈ L_2(H) (note that, for the L_p norms, the "trivial" inequality goes in the other direction than for the ℓ_p norms). This simplifies the notation considerably.
4 The values from H roughly correspond to the finite-dimensional "blocks" in the construction sketched in the Introduction. Note that H can be discretized similarly to the L_p-spaces; alternatively, functions that are constant on intervals of the type ((k−1)/N, k/N] can be considered in lieu of discrete measures.

We will be interested in linear subspaces E ⊂ L_2(H) on which the 1-norm and 2-norm uniformly agree, i.e., for some c ∈ (0,1],

    ‖f‖_2 ≥ ‖f‖_1 ≥ c‖f‖_2    (2)

for all f ∈ E. The best (i.e., the largest) constant c that works in (2) will be denoted Λ_1(E). For completeness, we also define Λ_1(E) = 0 if no c > 0 works.
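Under the normalized counting measure, the inequality ‖f‖_1 ≤ ‖f‖_2 and the quantity Λ_1 can be probed numerically. The following sketch (plain Python; the dimension n = 16 and the test vectors are our illustrative choices) checks the inequality on random vectors and evaluates the ratio ‖f‖_1/‖f‖_2 – which for a one-dimensional subspace span{f} is exactly Λ_1 – at a "flat" and a "spiky" f:

```python
import random
from math import sqrt

def L1(f):  # L1 norm w.r.t. the normalized counting measure on len(f) points
    return sum(abs(v) for v in f) / len(f)

def L2(f):  # L2 norm w.r.t. the same measure
    return sqrt(sum(v * v for v in f) / len(f))

random.seed(0)
for _ in range(1000):
    f = [random.uniform(-1, 1) for _ in range(16)]
    assert L1(f) <= L2(f) + 1e-12  # the "trivial" L_p inequality, by Cauchy-Schwarz

flat = [1.0] * 16           # constant function: the extreme ratio 1
spike = [1.0] + [0.0] * 15  # single-coordinate function: ratio 1/sqrt(16)
print(L1(flat) / L2(flat), L1(spike) / L2(spike))  # -> 1.0 0.25
```

The "spiky" example shows why well-spread vectors are exactly the ones with Λ_1 bounded away from zero.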
Tensor products. If H, K are Hilbert spaces, H⊗₂K is their (Hilbertian) tensor product, which may be (for example) described by the following property: if (e_j) is an orthonormal sequence in H and (f_k) is an orthonormal sequence in K, then (e_j⊗f_k) is an orthonormal sequence in H⊗₂K (a basis if (e_j) and (f_k) were bases). Next, any element of L_2(H)⊗K is canonically identified with a function in the space L_2(H⊗₂K); note that such functions are H⊗K-valued, but are defined on the same probability space as their counterparts from L_2(H). If E ⊂ L_2(H) is a linear subspace, E⊗K is – under this identification – a linear subspace of L_2(H⊗₂K).
As hinted in the Introduction, our argument depends (roughly) on the fact that the property expressed by (1) or (2) "passes" to tensor products of subspaces, and that it "survives" replacing scalar-valued functions by ones that take values in a Hilbert space. Statements to similar effect, of various degrees of generality and precision, are widely available in the mathematical literature; see for example [MZ39,Bec75,And80,FJ80]. However, we are not aware of a reference that subsumes all the facts needed here, and so we present an elementary self-contained proof.

We start with two preliminary lemmas.
Lemma 1. If g_1, g_2, ... ∈ E ⊂ L_2(H), then

    ∫ (Σ_k ‖g_k(x)‖_H^2)^{1/2} dx ≥ Λ_1(E) · (∫ Σ_k ‖g_k(x)‖_H^2 dx)^{1/2}.
Proof. Let K be an auxiliary Hilbert space and (e_k) an orthonormal sequence (O.N.S.) in K. We will apply the Minkowski inequality – a continuous version of the triangle inequality, which says that for vector-valued functions ‖∫h‖ ≤ ∫‖h‖ – to the K-valued function h(x) = Σ_k ‖g_k(x)‖_H e_k. As is easily seen,

    ‖∫h‖_K = ‖Σ_k (∫ ‖g_k(x)‖_H dx) e_k‖_K = (Σ_k ‖g_k‖_{L_1(H)}^2)^{1/2}.

Given that g_k ∈ E, we have ‖g_k‖_{L_1(H)} ≥ Λ_1(E)‖g_k‖_{L_2(H)}, and so

    ‖∫h‖_K ≥ Λ_1(E) · (∫ Σ_k ‖g_k(x)‖_H^2 dx)^{1/2}.

On the other hand, the left-hand side of the inequality in Lemma 1 is exactly ∫‖h‖_K, so the Minkowski inequality yields the required estimate. □
We are now ready to state the next lemma. Recall that E is a linear subspace of L_2(H), and K is a Hilbert space.

Lemma 2. Λ_1(E⊗K) = Λ_1(E).

If E ⊂ L_2 = L_2(R), the lemma says that any estimate of type (2) for scalar functions f ∈ E carries over to their linear combinations with vector coefficients, namely to functions of the type Σ_j v_j f_j, with f_j ∈ E, v_j ∈ K. In the general case, any estimate for H-valued functions f ∈ E ⊂ L_2(H) carries over to functions of the form Σ_j f_j⊗v_j ∈ L_2(H⊗₂K), with f_j ∈ E, v_j ∈ K.

Proof of Lemma 2. Let (e_k) be an orthonormal basis of K. In fact, w.l.o.g. we may assume that K = ℓ_2 and that (e_k) is the canonical orthonormal basis. Consider g = Σ_j f_j⊗v_j, where f_j ∈ E and v_j ∈ K. Then also g = Σ_k g_k⊗e_k for some g_k ∈ E, and hence (pointwise) ‖g(x)‖_{H⊗₂K} = (Σ_k ‖g_k(x)‖_H^2)^{1/2}. Accordingly,

    ‖g‖_{L_2(H⊗₂K)} = (∫ Σ_k ‖g_k(x)‖_H^2 dx)^{1/2}, while ‖g‖_{L_1(H⊗₂K)} = ∫ (Σ_k ‖g_k(x)‖_H^2)^{1/2} dx.

Comparing such quantities is exactly the object of Lemma 1, which implies that ‖g‖_{L_1(H⊗₂K)} ≥ Λ_1(E)‖g‖_{L_2(H⊗₂K)}. Since g ∈ E⊗K was arbitrary, it follows that Λ_1(E⊗K) ≥ Λ_1(E). The reverse inequality is automatic (except in the trivial case dim K = 0, which we will ignore). □
If E ⊂ L_2(H) and F ⊂ L_2(K) are subspaces, E⊗F is the subspace of L_2(H⊗₂K) spanned by f⊗g with f ∈ E, g ∈ F. (For clarity, f⊗g is a function on the product of the underlying probability spaces, defined by (x,y) ↦ f(x)⊗g(y) ∈ H⊗K.)

The next proposition shows the key property of tensoring almost-Euclidean spaces.
Proposition 1. Λ_1(E⊗F) ≥ Λ_1(E)Λ_1(F).

Proof. Let (φ_j) and (ψ_k) be orthonormal bases of E and F respectively, and let g = Σ_{j,k} t_{jk} φ_j⊗ψ_k. We need to show that ‖g‖_{L_1(H⊗₂K)} ≥ Λ_1(E)Λ_1(F)‖g‖_{L_2(H⊗₂K)}, where the p-norms refer to the product probability space; for example,

    ‖g‖_{L_1(H⊗₂K)} = ∫∫ ‖Σ_{j,k} t_{jk} φ_j(x)⊗ψ_k(y)‖_{H⊗₂K} dx dy.

Rewriting the expression under the sum and subsequently applying Lemma 2 to the inner integral for fixed y gives

    ∫ ‖Σ_{j,k} t_{jk} φ_j(x)⊗ψ_k(y)‖_{H⊗₂K} dx = ∫ ‖Σ_j φ_j(x)⊗(Σ_k t_{jk} ψ_k(y))‖_{H⊗₂K} dx
        ≥ Λ_1(E) · (∫ ‖Σ_j φ_j(x)⊗(Σ_k t_{jk} ψ_k(y))‖_{H⊗₂K}^2 dx)^{1/2}
        = Λ_1(E) · (Σ_j ‖Σ_k t_{jk} ψ_k(y)‖_K^2)^{1/2}.

In turn, Σ_k t_{jk} ψ_k ∈ F (for all j) and so, by Lemma 1,

    ∫ (Σ_j ‖Σ_k t_{jk} ψ_k(y)‖_K^2)^{1/2} dy ≥ Λ_1(F) · (∫ Σ_j ‖Σ_k t_{jk} ψ_k(y)‖_K^2 dy)^{1/2} = Λ_1(F)‖g‖_{L_2(H⊗₂K)}.

Combining the above formulae yields the conclusion of the Proposition. □
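For one-dimensional subspaces, Proposition 1 in fact holds with equality, which gives a quick numeric sanity check: if E = span{f}, then Λ_1(E) = ‖f‖_1/‖f‖_2, and both the L_1 and L_2 norms factor over tensor products. A plain-Python sketch (the dimensions 8 and 12 and the normalized counting measures are our illustrative choices):

```python
import random
from math import sqrt, isclose

def L1(f):  # L1 norm w.r.t. the normalized counting measure
    return sum(abs(v) for v in f) / len(f)

def L2(f):  # L2 norm w.r.t. the same measure
    return sqrt(sum(v * v for v in f) / len(f))

def lam1_line(f):
    # For the 1-dimensional subspace E = span{f}, the ratio ||g||_1/||g||_2 is the
    # same for every nonzero g in E, so Lambda_1(E) = ||f||_1 / ||f||_2 exactly.
    return L1(f) / L2(f)

random.seed(1)
f = [random.uniform(-1, 1) for _ in range(8)]
g = [random.uniform(-1, 1) for _ in range(12)]
fg = [a * b for a in f for b in g]  # f tensor g, on the product of the point sets

# Both norms factor over tensor products, so Proposition 1 is an equality here
assert isclose(lam1_line(fg), lam1_line(f) * lam1_line(g))
```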
3 The construction
In this section we describe our low-randomness construction. We start from a recap of the probabilistic construction, since we use it as a building block.
3.1 Dvoretzky's theorem, and its "tangible" version

For general normed spaces, the following is one possible statement of the well-known Dvoretzky theorem [Dvo61]:

Given m ∈ ℕ and ε > 0, there is N = N(m,ε) such that, for any norm on R^N, there is an m-dimensional subspace on which the ratio of that norm and the ℓ_2 norm is (approximately) constant, up to a multiplicative factor 1+ε.

For specific norms this statement can be made more precise, both in describing the dependence N = N(m,ε) and in identifying the constant of (approximate) proportionality of the norms. The following version is (essentially) due to Milman [Mil71].
Dvoretzky's theorem (tangible version). Consider the N-dimensional Euclidean space (real or complex) endowed with the Euclidean norm ‖·‖_2 and some other norm ‖·‖ such that, for some b > 0, ‖·‖ ≤ b‖·‖_2. Let M = E‖X‖, where X is a random variable uniformly distributed on the unit Euclidean sphere. Then there exists a computable universal constant c > 0 such that if 0 < ε < 1 and m ≤ cε^2(M/b)^2 N, then for more than 99% (with respect to the Haar measure) of m-dimensional subspaces E we have

    ∀x ∈ E, (1−ε)M‖x‖_2 ≤ ‖x‖ ≤ (1+ε)M‖x‖_2.    (3)

Alternative good expositions of the theorem are in, e.g., [FLM77], [MS86] and [Pis89]. We point out that standard and most elementary proofs yield m ≤ cε^2/log(1/ε)·(M/b)^2 N; the dependence on ε of order ε^2 was obtained in the important papers [Gor85,Sch89]; see also [ASW10].
3.2 The case of ℓ_1^n(ℓ_2^B)

Our objective now is to apply Dvoretzky's theorem, and subsequently Proposition 1, to spaces of the form ℓ_1^n(ℓ_2^B) for some n, B ∈ ℕ; so from now on we set ‖·‖ := ‖·‖_{ℓ_1^n(ℓ_2^B)}. To that end, we need to determine the value of the parameter M that appears in the theorem. (The optimal value of b is clearly √n, as in the scalar case, i.e., when B = 1.) We have the following standard fact (cf. [Bal97], Lecture 9).

Lemma 3.

    M(n,B) := E_{x∈S^{nB−1}} ‖x‖ = [Γ((B+1)/2)/Γ(B/2)] · [Γ(nB/2)/Γ((nB+1)/2)] · n.

In particular, √(1+1/(n−1))·√(2/π)·√n > M(n,1) > √(2/π)·√n for all n ∈ ℕ (the scalar case), and M(n,B) > √(1−1/B)·√n for all n, B ∈ ℕ.
The equality is shown by relating (via passing to polar coordinates) spherical averages of norms to Gaussian means: if X is a random variable uniformly distributed on the Euclidean sphere S^{N−1} and Y has the standard Gaussian distribution on R^N, then, for any norm ‖·‖,

    E‖Y‖ = √2 · [Γ((N+1)/2)/Γ(N/2)] · E‖X‖.

The inequalities follow from the estimates √(x−1/2) < Γ(x+1/2)/Γ(x) < √x (for x ≥ 1/2), which in turn are consequences of the log-convexity of Γ and its functional equation Γ(y+1) = yΓ(y). (Alternatively, Stirling's formula may be used to arrive at a similar conclusion.)
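Lemma 3 and the stated bounds are easy to check numerically. A plain-Python sketch (using lgamma rather than gamma to avoid floating-point overflow for large arguments; the sample values of n and B are our illustrative choices):

```python
from math import lgamma, exp, sqrt, pi

def M(n, B):
    # M(n,B) = [Gamma((B+1)/2)/Gamma(B/2)] * [Gamma(nB/2)/Gamma((nB+1)/2)] * n
    log_m = (lgamma((B + 1) / 2) - lgamma(B / 2)
             + lgamma(n * B / 2) - lgamma((n * B + 1) / 2))
    return exp(log_m) * n

# Scalar case: sqrt(2/pi)*sqrt(n) < M(n,1) < sqrt(1 + 1/(n-1))*sqrt(2/pi)*sqrt(n)
for n in (2, 10, 100, 10_000):
    assert sqrt(2 / pi) * sqrt(n) < M(n, 1) < sqrt(1 + 1 / (n - 1)) * sqrt(2 / pi) * sqrt(n)

# Block case: sqrt(1 - 1/B)*sqrt(n) < M(n,B) < sqrt(n)
for n, B in ((10, 2), (10, 5), (50, 6), (200, 16)):
    assert sqrt(1 - 1 / B) * sqrt(n) < M(n, B) < sqrt(n)
```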
Combining Dvoretzky's theorem with Lemma 3 yields

Corollary 1. If 0 < ε < 1 and m ≤ c_1 ε^2 n, then for more than 99% of the m-dimensional subspaces E ⊂ ℓ_1^n we have

    ∀x ∈ E, (1−ε)·√(2/π)·√n·‖x‖_2 ≤ ‖x‖_1 ≤ (1+ε)·√(1+1/(n−1))·√(2/π)·√n·‖x‖_2.    (4)

Similarly, if B > 1 and m ≤ c_2 ε^2 nB, then for more than 99% of the m-dimensional subspaces E ⊂ ℓ_1^n(ℓ_2^B) we have

    ∀x ∈ E, (1−ε)·√(1−1/B)·√n·‖x‖_2 ≤ ‖x‖ ≤ √n·‖x‖_2.    (5)

We point out that the upper estimate on ‖x‖ in the second inequality is valid for all x ∈ ℓ_1^n(ℓ_2^B) and, like the estimate M(n,B) ≤ √n, follows just from the Cauchy–Schwarz inequality.
Since a random subspace chosen uniformly according to the Haar measure on the manifold of m-dimensional subspaces of R^N (or C^N) can be constructed from an N×m random Gaussian matrix, we may apply standard discretization techniques to obtain the following.

Corollary 2. There is a deterministic algorithm that, given ε, B, m, n as in Corollary 1 and a sequence of O(mn·log(mn/ε)) random bits, generates subspaces E as in Corollary 1 with probability greater than 98%, in time polynomial in 1/ε + B + m + n.
We point out that in the literature on "randomness reduction" one typically uses Bernoulli matrices in lieu of Gaussian ones. This enables avoiding the discretization issue, since the problem is phrased directly in terms of random bits. Still, since proofs of Dvoretzky-type theorems for Bernoulli matrices are often much harder than for their Gaussian counterparts, we prefer to appeal instead to a simple discretization of Gaussian random variables. We note, however, that the early approach of [Kas77] was based on Bernoulli matrices.
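The Gaussian route behind Corollary 2 is easy to simulate. The sketch below (plain Python with a fixed seed; the parameters n = 400, m = 5 and the 200 Monte Carlo samples are illustrative choices, and sampling only probes typical vectors of E rather than certifying the worst case) spans a random subspace by Gaussian columns and compares sampled ℓ_1/ℓ_2 ratios against the predicted constant M = √(2/π)·√n:

```python
import random
from math import sqrt, pi

random.seed(1234)
n, m = 400, 5  # ambient dimension and subspace dimension (m << n)

# Columns of an n x m Gaussian matrix span a "random" m-dimensional subspace of R^n
A = [[random.gauss(0.0, 1.0) for _ in range(m)] for _ in range(n)]

M = sqrt(2 / pi) * sqrt(n)  # the spherical mean of the l1 norm (Lemma 3, B = 1)
ratios = []
for _ in range(200):
    c = [random.gauss(0.0, 1.0) for _ in range(m)]          # random coefficients
    x = [sum(A[i][j] * c[j] for j in range(m)) for i in range(n)]
    l1 = sum(abs(v) for v in x)
    l2 = sqrt(sum(v * v for v in x))
    ratios.append(l1 / (M * l2))

print(min(ratios), max(ratios))  # empirically concentrated near 1
```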
We are now ready to conclude the proof of Theorem 1. Given ε ∈ (0,1) and n ∈ ℕ, choose B = ⌈ε^{−1}⌉ and m = ⌊cε^2(1−1/B)nB⌋ ≥ c_0 ε^2 nB. Corollary 2 (Equation 5) and repeated application of Proposition 1 give us a subspace F ⊂ ℓ_1^ν(ℓ_2^β) (where ν = n^k and β = B^k) of dimension m^k ≥ (c_0 ε^2)^k νβ such that

    ∀x ∈ F, (1−ε)^{3k/2} n^{k/2} ‖x‖_2 ≤ ‖x‖ ≤ n^{k/2} ‖x‖_2.

Moreover, F = E⊗E⊗...⊗E, where E ⊂ ℓ_1^n(ℓ_2^B) is a typical m-dimensional subspace. Thus, in order to produce E, and hence F, we only need to generate a "typical" m-dimensional subspace, m ≈ c_0 ε^2 (νβ)^{1/k}, of the nB = (νβ)^{1/k}-dimensional space ℓ_1^n(ℓ_2^B). Note that for fixed ε and k > 1, nB and m are asymptotically (substantially) smaller than dim F. Further, in order to efficiently represent F as a subspace of an ℓ_1-space, we only need to find a good embedding of ℓ_2^β into ℓ_1. This can be done using Corollary 2 (Equation 4); note that β depends only on ε and k. Thus we have reduced the problem of finding "large" almost-Euclidean subspaces of ℓ_1^N to similar problems in much smaller dimensions.

Theorem 1 now follows from the above discussion. The argument gives, e.g., c(ε,γ) = (cεγ)^{3/γ} and C(ε,γ) = c(ε,γ)^{−1}.
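The whole scheme, in its scalar (B = 1) version and ignoring discretization, fits in a few lines: draw a Gaussian basis for a small subspace E ⊂ R^n, then span E⊗E ⊂ R^{n^2} by Kronecker products of the basis vectors. The sketch below (plain Python; n = 30, m = 3 and the sample counts are illustrative choices, and random sampling only probes typical vectors rather than the worst case) illustrates that the ℓ_1/ℓ_2 ratio on the tensored subspace stays a constant fraction of its Cauchy–Schwarz maximum n:

```python
import random
from math import sqrt

random.seed(7)
n, m = 30, 3
# Small random subspace E of R^n, spanned by m Gaussian vectors: O(nm) randomness
E = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]

def kron(u, v):
    # Kronecker (tensor) product of two vectors
    return [a * b for a in u for b in v]

# Basis of the tensor square E (x) E inside R^(n*n): all m*m Kronecker products
E2 = [kron(u, v) for u in E for v in E]
assert len(E2) == m * m and len(E2[0]) == n * n

def ratio(x):  # ||x||_1 / ||x||_2
    return sum(abs(v) for v in x) / sqrt(sum(v * v for v in x))

# Sample vectors of E (x) E; Cauchy-Schwarz gives ratio <= n, while Proposition 1
# predicts a lower bound of roughly Lambda_1(E)^2 * n over the whole subspace.
samples = []
for _ in range(100):
    c = [random.gauss(0.0, 1.0) for _ in range(m * m)]
    x = [sum(c[k] * E2[k][i] for k in range(m * m)) for i in range(n * n)]
    samples.append(ratio(x) / n)
print(min(samples), max(samples))  # stays a constant fraction of n
```

Note that for product vectors the distortion multiplies exactly: ratio(kron(u, v)) equals ratio(u)·ratio(v), since both norms factor over Kronecker products.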
References

[ABN+92] Noga Alon, Jehoshua Bruck, Joseph Naor, Moni Naor, and Ronny Roth. Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs. IEEE Transactions on Information Theory, 38:509–516, 1992.
[And80] K. F. Andersen. Inequalities for scalar-valued linear operators that extend to their vector-valued analogues. J. Math. Anal. Appl. 77 (1980), 264–269.
[AM06] S. Artstein-Avidan and V. D. Milman. Logarithmic reduction of the level of randomness in some probabilistic geometric constructions. Journal of Functional Analysis, 235:297–329, 2006.
[ASW10] G. Aubrun, S. Szarek and E. Werner. Hastings's additivity counterexample via Dvoretzky's theorem. arXiv e-print 1003.4925.
[Bal97] K. Ball. An elementary introduction to modern convex geometry. In Flavors of Geometry, edited by S. Levy, Math. Sci. Res. Inst. Publ. 31, 1–58, Cambridge Univ. Press, Cambridge, 1997.
[Bec75] W. Beckner. Inequalities in Fourier analysis. Annals of Math. 102 (1975), 159–182.
[DS01] K. R. Davidson and S. J. Szarek. Local operator theory, random matrices and Banach spaces. In Handbook of the Geometry of Banach Spaces, edited by W. B. Johnson and J. Lindenstrauss (North-Holland, Amsterdam, 2001), Vol. 1, pp. 317–366; (North-Holland, Amsterdam, 2003), Vol. 2, pp. 1819–1820.
[Dvo61] A. Dvoretzky. Some results on convex bodies and Banach spaces. In Proc. Internat. Sympos. Linear Spaces (Jerusalem, 1960). Jerusalem Academic Press, Jerusalem; Pergamon, Oxford, 1961, pp. 123–160.
[FJ80] T. Figiel and W. B. Johnson. Large subspaces of ℓ_∞^n and estimates of the Gordon–Lewis constant. Israel J. Math. 37 (1980), 92–112.
[FLM77] T. Figiel, J. Lindenstrauss, and V. D. Milman. The dimension of almost spherical sections of convex bodies. Acta Math. 139 (1977), no. 1–2, 53–94.
[GG84] A. Garnaev and E. Gluskin. The widths of a Euclidean ball. Soviet Math. Dokl. 30 (1984), 200–204 (English translation).
[Gor85] Y. Gordon. Some inequalities for Gaussian processes and applications. Israel J. Math. 50 (1985), 265–289.
[GLR08] V. Guruswami, J. Lee, and A. Razborov. Almost Euclidean subspaces of ℓ_1^N via expander codes. SODA, 2008.
[GLW08] V. Guruswami, J. Lee, and A. Wigderson. Euclidean sections with sublinear randomness and error-correction over the reals. RANDOM, 2008.
[Ind00] P. Indyk. Dimensionality reduction techniques for proximity problems. Proceedings of the Ninth ACM-SIAM Symposium on Discrete Algorithms, 2000.
[Ind07] P. Indyk. Uncertainty principles, extractors and explicit embedding of ℓ_2 into ℓ_1. STOC, 2007.
[KRS09] Z. Karnin, Y. Rabani, and A. Shpilka. Explicit dimension reduction and its applications. ECCC TR09-121, 2009.
[Kas77] B. S. Kashin. The widths of certain finite-dimensional sets and classes of smooth functions. Izv. Akad. Nauk SSSR Ser. Mat., 41(2):334–351, 1977.
[KT07] B. S. Kashin and V. N. Temlyakov. A remark on compressed sensing. Mathematical Notes 82 (2007), 748–755.
[LLR94] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. FOCS, pages 577–591, 1994.
[LPRTV05] A. Litvak, A. Pajor, M. Rudelson, N. Tomczak-Jaegermann, R. Vershynin. Euclidean embeddings in spaces of finite volume ratio via random matrices. J. Reine Angew. Math. 589 (2005), 1–19.
[LS07] S. Lovett and S. Sodin. Almost Euclidean sections of the n-dimensional cross-polytope using O(n) random bits. ECCC Report TR07-012, 2007.
[LV06] Yu. Lyubarskii and R. Vershynin. Uncertainty principles and vector quantization. arXiv e-print math.NA/0611343.
[MZ39] J. Marcinkiewicz and A. Zygmund. Quelques inégalités pour les opérations linéaires. Fund. Math. 32 (1939), 113–121.
[Mil71] V. Milman. A new proof of the theorem of A. Dvoretzky on sections of convex bodies. Funct. Anal. Appl. 5 (1971), 28–37 (English translation).
[Mil00] V. Milman. Topics in asymptotic geometric analysis. In Visions in Mathematics. Towards 2000 (Tel Aviv, 1999). Geom. Funct. Anal. 2000, Special Volume, Part II, 792–815.
[MS86] V. D. Milman and G. Schechtman. Asymptotic Theory of Finite-Dimensional Normed Spaces. With an appendix by M. Gromov. Lecture Notes in Math. 1200, Springer-Verlag, Berlin, 1986.
[Pis89] G. Pisier. The Volume of Convex Bodies and Banach Space Geometry. Cambridge Tracts in Mathematics, 94. Cambridge University Press, Cambridge, 1989.
[Rud60] W. Rudin. Trigonometric series with gaps. J. Math. Mech., 9:203–227, 1960.
[Sch89] G. Schechtman. A remark concerning the dependence on ε in Dvoretzky's theorem. In Geometric Aspects of Functional Analysis (1987–88), 274–277, Lecture Notes in Math. 1376, Springer, Berlin, 1989.
[Sza06] S. Szarek. Convexity, complexity and high dimensions. Proceedings of the International Congress of Mathematicians (Madrid, 2006), Vol. II, 1599–1621, European Math. Soc., 2006. Available online from icm2006.org.
[TJ89] N. Tomczak-Jaegermann. Banach–Mazur Distances and Finite-Dimensional Operator Ideals. Longman Scientific & Technical, Harlow, 1989.