ROOT DATA WITH GROUP ACTIONS
arXiv:1707.01935v1 [math.RT] 6 Jul 2017
JEFFREY D. ADLER AND JOSHUA M. LANSKY
Abstract. Suppose k is a field, G is a connected reductive algebraic
k-group, T is a maximal k-torus in G, and Γ is a finite group that acts on
(G, T ). From the above, one obtains a root datum Ψ on which Gal(k)×Γ
acts. Provided that Γ preserves a positive system in Ψ, not necessarily
invariant under Gal(k), we construct an inverse to this process. That is,
given a root datum on which Gal(k) × Γ acts, we show how to construct
a pair (G, T ), on which Γ acts as above.
Although the pair (G, T ) and the action of Γ are canonical only up
to an equivalence relation, we construct a particular pair for which G
is k-quasisplit and Γ fixes a Gal(k)-stable pinning of G. Using these
choices, we can define a notion of taking “Γ-fixed points” at the level
of equivalence classes, and this process is compatible with a general
“restriction” process for root data with Γ-action.
1. Introduction
Let k be a field with separable closure ksep . Let Γ be a finite group.
Suppose Ψ is a (reduced) based root datum on which the absolute Galois
group Gal(k) acts. Then it is well known ([7, Theorem 6.2.7]) that there
exists a connected, reductive, k-quasisplit k-group G, uniquely determined
up to k-isomorphism, such that the root datum of G (with respect to a
maximal k-torus contained in a Borel k-subgroup) is isomorphic to Ψ and
carries the same action of Gal(k). We generalize this result in two directions.
(A) Suppose G is a connected reductive k-group, and T is an arbitrary
maximal torus. Then the root datum Ψ(G, T ) carries an action of
Gal(k). We show that one can reverse this process. That is, given a
root datum Ψ with an action of Gal(k), one can obtain a pair (G, T )
that gives rise to Ψ. In general, the pair (G, T ) need not be uniquely
determined up to k-isomorphism. However, we can always choose G to
be k-quasisplit, and all possibilities for G must be k-inner forms of each
other.
(B) Now suppose that Γ acts on a pair (G, T ) as above via k-automorphisms.
Then Γ acts on the root datum Ψ(G, T ), and the actions of Γ and
Gal(k) commute. We show that one can reverse this process under mild
Date: February 22, 2018.
2010 Mathematics Subject Classification. Primary 20G15, 20G40. Secondary 20C33.
Key words and phrases. Reductive algebraic group, root datum, quasi-semisimple
automorphisms.
conditions. That is, suppose that Ψ is a root datum with an action of
Gal(k) × Γ. Assume that Γ (but not necessarily Gal(k)) preserves a
base. Then one can obtain a pair (G, T ) as above, carrying an action
of Γ. That is, under appropriate conditions, we can lift an action of Γ
from a root datum to a reductive group. Moreover, one can choose G
to be k-quasisplit, and can choose the action of Γ to preserve a pinning.
The above are all contained in our main result, Theorem 1. In order to
state it more precisely, let us consider the collection of abstract root data Ψ
that carry an action of Gal(k) × Γ such that the action of Γ stabilizes a base
for Ψ. We consider two data Ψ and Ψ′ with such actions to be equivalent
if there is a Gal(k) × Γ-equivariant isomorphism Ψ −→ Ψ′ . Let RΓ denote
the set of equivalence classes of reduced data with such actions.
Let G be a connected reductive k-group and T ⊆ G a maximal k-torus.
Suppose there exists some Borel subgroup B ⊆ G (not necessarily defined over k) containing T , and a homomorphism ϕ from Γ to the group
Autk (G, B, T ) of k-automorphisms of G stabilizing T and B. Suppose G′ , T ′ ,
and ϕ′ are defined similarly. We say that the triples (G, T, ϕ) and (G′ , T ′ , ϕ′ )
are equivalent if there exists an isomorphism ν : G −→ G′ whose restriction
gives a Γ-equivariant k-isomorphism T −→ T ′ . (In this situation, ν must be
an inner twisting by [5, §3.2].) Let TΓ be the set of equivalence classes of
such triples (G, T, ϕ).
A triple (G, T, ϕ) as above naturally determines a root datum with appropriate actions of Gal(k) and Γ, hence an element of RΓ . It is easily seen
that if (G′ , T ′ , ϕ′ ) and (G, T, ϕ) are equivalent, then they determine the same
class in RΓ . Hence we have a natural map rΓ : TΓ −→ RΓ .
Our main result is the following:
Theorem 1. The map rΓ : TΓ −→ RΓ is a bijection.
We also prove a generalization of a second well-known result. Suppose
char k = 0, and Γ is a finite, cyclic group acting algebraically on a connected
reductive group G. Then Γ must fix a Borel-torus pair (B, T ) in G. If Γ
fixes a pinning, then the root system of the connected part Ḡ := (GΓ )◦ of
the group of fixed points is obtained as follows. The set of restrictions of
roots of G from T to T̄ := (T Γ )◦ is a root system, not necessarily reduced,
but there is a preferred way to choose a maximal reduced subsystem.
We generalize the above result in several directions.
(C) Instead of assuming char k = 0, we impose the weaker condition that Γ
fix a Borel-torus pair.
(D) The group Γ need not be cyclic.
(E) We describe the root datum, not just the root system, of Ḡ with respect
to T̄ .
The above are all contained in Theorem 2. To state this result more
precisely, suppose that the triple (G, T, ϕ) represents an element of TΓ .
Then we know [1, Proposition 3.5] that Ḡ is a connected reductive k-group,
and T̄ is a maximal k-torus in Ḡ. Thus, if we let “1” represent the map
from the trivial group 1 to Aut(G), then the triple (Ḡ, T̄ , 1) represents an
element of T1 . The equivalence class of (G, T, ϕ) does not determine that
of (Ḡ, T̄ , 1), or even the ksep -isomorphism class of Ḡ. Nonetheless, we can
obtain a well-defined map TΓ −→ T1 as follows: From Remark 25, we will
see that every class in TΓ contains a triple (G, T, ϕ) such that G is k-quasisplit
and ϕ fixes a Gal(k)-invariant pinning. Use this choice of triple to define Ḡ
and T̄ , and it is straightforward to show that our choices determine Ḡ and
T̄ up to k-isomorphism.
Suppose that the root datum Ψ represents an element of RΓ . We will see
in §2 that the action of Γ on Ψ allows us to construct a “restricted” root
datum Ψ̄ that has a preferred choice of reduced subdatum. We thus obtain
a map RΓ −→ R1 .
Theorem 2. Our maps TΓ −→ T1 and RΓ −→ R1 above are compatible with the maps of Theorem 1, in the sense that the following diagram commutes:
\[
\xymatrix{
T_\Gamma \ar[r]^{r_\Gamma} \ar[d] & R_\Gamma \ar[d] \\
T_1 \ar[r]^{r_1} & R_1
}
\]
We prove both theorems in §4.
2. Restrictions of root data
Let Ψ = (X ∗ , Φ, X∗ , Φ∨ ) be a root datum. (We do not assume that Φ is
reduced.) Let Γ denote a finite group of automorphisms of Ψ. We assume
that there exists a Γ-stable set ∆ of simple roots in Φ. Let V ∗ = X ∗ ⊗ Q and
V∗ = X∗ ⊗ Q. Let i∗ denote the quotient map from V ∗ to its space V̄ ∗ := V ∗_Γ of Γ-coinvariants. From [1, §2], there is an embedding ι : V̄ ∗ −→ V ∗ with image V ∗Γ given by
\[
\iota(\bar v) = \frac{1}{|\Gamma|} \sum_{\gamma\in\Gamma} \gamma v,
\]
where v is any preimage in V ∗ of v̄ ∈ V̄ ∗ . Let X̄ ∗ and Φ̄ denote the images of X ∗ and Φ under i∗ . Then X̄ ∗ is the torsion-free part of the module X ∗_Γ of Γ-coinvariants of X ∗ . It is straightforward to see that X̄ ∗ and X̄∗ := X∗^Γ , and thus V̄ ∗ and V̄∗ := V∗^Γ , are in duality via the pairing given by ⟨x̄, λ̄⟩ := ⟨ιx̄, i∗ λ̄⟩, where i∗ : X̄∗ −→ X∗ is the inclusion map. With respect to these pairings, the inclusion i∗ is the transpose of the quotient map i∗ .
For each β ∈ Φ, let wβ denote the automorphism of X ∗ defined by
\[
w_\beta(x) = x - \langle x, \beta^\vee\rangle\,\beta.
\]
Let W denote the Weyl group of Ψ, i.e., the (finite) subgroup of Aut(X ∗ ) generated by the wβ . Then Γ acts naturally on W , and W acts on X ∗ . The group W Γ of Γ-fixed elements of W acts on both V̄ ∗ and X̄ ∗ via the rule
w(i∗ x) := i∗ (w(x)) for w ∈ W Γ and x ∈ X ∗ . Equivalently, for x̄ ∈ X̄ ∗ and
w ∈ W Γ , we have ι(w(x̄)) = w(ιx̄).
Lemma 3 (cf. [8, §1.32(a)]). The natural action of W Γ on X̄ ∗ is faithful.
Proof. Let w be a nontrivial element of W Γ . Then there exists a positive
root β ∈ Φ such that w(β) is negative. Since Γ stabilizes ∆, it follows that
w(γ ·β) = γ ·(wβ) is also negative for every γ ∈ Γ. Thus ι(w(i∗ β)) is a linear
combination of roots in ∆ in which all of the coefficients are nonpositive, so
w(i∗ β) ≠ i∗ β.
Notation 4. For each root β ∈ Φ, define a Γ-orbit Ξβ in Φ as in [1, §5].
That is, let Ξβ = Γ · β if this is an orthogonal set. Otherwise, for each θ ∈ Γ · β, there exists a unique root θ′ ≠ θ in Γ · β such that θ and θ′ are not orthogonal. Moreover, θ + θ′ is a root in Φ and does not belong to Γ · β. In this case, let Ξβ = {θ + θ′ | θ ∈ Γ · β}.
Remark 5. Thus, in all cases, Ξβ is an orthogonal Γ-orbit of roots.
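For a concrete illustration (our example, not in the original): let Φ be of type A2 with simple roots α1 , α2 , and let Γ have order 2, acting by swapping α1 and α2 . For β = α1 , the orbit Γ · β = {α1 , α2 } is not orthogonal, since ⟨α1 , α2∨ ⟩ = −1; here θ′ = α2 when θ = α1 , the sum θ + θ′ = α1 + α2 is a root not lying in Γ · β, and Ξβ = {α1 + α2 }. For β = α1 + α2 , the orbit is {β} itself, which is trivially orthogonal, so Ξβ = Γ · β.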
Lemma 6. If α ∈ Φ̄, then (i∗ )−1 (α) is a Γ-orbit of roots in Φ.
Proof. This argument is similar to but more general than that given in the
proof of [6, Lemma 10.3.2(ii)]. Suppose β ∈ Φ and i∗ (β) = α. Then clearly
i∗ θ = α for any θ ∈ Γ · β.
Now suppose β′ ∈ Φ, β′ ≠ β, and i∗ β′ = α. Since ι(i∗ (β′ − β)) = 0 and since Γ preserves ∆, when β′ − β is written as a linear combination of simple roots, the coefficients must sum to 0. In particular, β′ − β ∉ Φ. Since β′ cannot be a multiple of β, we have that ⟨β′ , β∨ ⟩ ≤ 0 by standard results about root systems. Similarly, ⟨β′ , θ∨ ⟩ ≤ 0 for all θ ≠ β′ in Γ · β. Therefore,
\[
\Bigl\langle \beta' - \beta,\ \sum_{\theta\in\Gamma\cdot\beta}\theta^\vee \Bigr\rangle
= \Bigl\langle i^*(\beta' - \beta),\ \sum_{\theta\in\Gamma\cdot\beta}\theta^\vee \Bigr\rangle
= \Bigl\langle \beta' - \beta,\ i_*\!\sum_{\theta\in\Gamma\cdot\beta}\theta^\vee \Bigr\rangle,
\]
and since i∗ (β′ − β) = 0, this pairing vanishes. Thus
\[
\sum_{\theta\in\Gamma\cdot\beta}\langle \beta', \theta^\vee\rangle = \sum_{\theta\in\Gamma\cdot\beta}\langle \beta, \theta^\vee\rangle = 2 \text{ or } 1,
\]
depending on whether or not Γ · β is orthogonal. (This follows from the properties of root orbits discussed in [1, §5].) Since ⟨β′ , θ∨ ⟩ ≤ 0 for all θ ≠ β′ in Γ · β, it follows that β′ ∈ Γ · β.
For each α ∈ Φ̄, define
\[
\alpha^\vee = \frac{|\Gamma\cdot\beta|}{|\Xi_\beta|} \sum_{\xi\in\Xi_\beta} \xi^\vee \in \bar X_*,
\]
where β is any element of Φ such that i∗ β = α. The element of X̄∗ defined by the above formula is independent of the particular choice of β by Lemma 6. Note that α∨ does indeed lie in X̄∗ since |Γ · β|/|Ξβ | = 1 or 2. Let Φ̄∨ = {α∨ | α ∈ Φ̄}.
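The following short Python sketch (ours; the paper of course contains no code, and all function names are our own) carries out the restriction numerically for the A2 example above, where Γ swaps the two simple roots. It confirms that Φ̄ is the non-reduced system BC1 ; compare Remark 12 below, where type A2n is exactly the source of non-reduced restrictions.

# A numerical sketch (ours, not from the paper) of the restriction map i*
# for the A_2 root system folded by its diagram automorphism.  Roots are
# written in coordinates with respect to the simple roots (alpha_1, alpha_2),
# so the nontrivial element of Gamma swaps the two coordinates, and the
# space of Gamma-coinvariants is identified with Q via (x, y) -> x + y
# (this quotient map kills exactly the span of gamma*v - v).

roots = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)]

def i_star(v):
    x, y = v
    return x + y

restricted = sorted({i_star(beta) for beta in roots})
print(restricted)  # [-2, -1, 1, 2]: a non-reduced root system of type BC_1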
Theorem 7. With the above notation, Ψ̄ := (X̄ ∗ , Φ̄, X̄∗ , Φ̄∨ ) is a root datum.
Remark 8. If Ψ comes equipped with an action of Gal(k), and the action of
Γ commutes with that of Gal(k), then it is clear that the action of Gal(k)
preserves Ψ̄.
Proof of Theorem 7. According to [5, §1.1], it suffices to show that
• X̄ ∗ and X̄∗ are in duality (which we have already observed),
• ⟨α, α∨ ⟩ = 2 for all α ∈ Φ̄, and
• the automorphisms wα of X̄ ∗ of the form
\[
w_\alpha(\bar x) = \bar x - \langle \bar x, \alpha^\vee\rangle\,\alpha \qquad (\text{for } \alpha \in \bar\Phi)
\]
stabilize Φ̄ and generate a finite subgroup of Aut(X̄ ∗ ).
Let α ∈ Φ̄. Choose β ∈ Φ such that i∗ β = α, and choose ξ0 ∈ Ξβ . Then we have
\begin{align*}
\langle\alpha, \alpha^\vee\rangle &= \langle \iota\alpha, i_*\alpha^\vee\rangle \\
&= \Bigl\langle \frac{1}{|\Gamma\cdot\beta|}\sum_{\theta\in\Gamma\cdot\beta}\theta,\ \frac{|\Gamma\cdot\beta|}{|\Xi_\beta|}\sum_{\xi\in\Xi_\beta}\xi^\vee \Bigr\rangle \\
&= \frac{1}{|\Xi_\beta|}\Bigl\langle \sum_{\theta\in\Gamma\cdot\beta}\theta,\ \sum_{\xi\in\Xi_\beta}\xi^\vee \Bigr\rangle \\
&= \frac{1}{|\Xi_\beta|}\Bigl\langle \sum_{\xi'\in\Xi_\beta}\xi',\ \sum_{\xi\in\Xi_\beta}\xi^\vee \Bigr\rangle && \text{(by the definition of $\Xi_\beta$)} \\
&= \langle \xi_0, \xi_0^\vee\rangle = 2 && \text{(by Remark 5)},
\end{align*}
as desired.
Now let x̄ ∈ X̄ ∗ , and choose x ∈ X ∗ such that i∗ x = x̄. Then
\begin{align}
\langle \bar x, \alpha^\vee\rangle &= \langle x, i_*\alpha^\vee\rangle && \text{(by Remark 5)} \notag\\
&= \Bigl\langle x,\ \frac{|\Gamma\cdot\beta|}{|\Xi_\beta|}\sum_{\xi\in\Xi_\beta}\xi^\vee \Bigr\rangle \notag\\
&= \frac{|\Gamma\cdot\beta|}{|\Xi_\beta|}\sum_{\xi\in\Xi_\beta}\langle x, \xi^\vee\rangle. \tag{9}
\end{align}
It follows that
\begin{align*}
w_\alpha(\bar x) &= \bar x - \langle \bar x, \alpha^\vee\rangle\,\alpha \\
&= i^*x - \langle \bar x, \alpha^\vee\rangle\, i^*\beta \\
&= i^*x - \frac{\langle \bar x, \alpha^\vee\rangle}{|\Gamma\cdot\beta|}\, i^*\!\sum_{\theta\in\Gamma\cdot\beta}\theta && \text{(by Lemma 6)} \\
&= i^*x - \frac{\langle \bar x, \alpha^\vee\rangle}{|\Gamma\cdot\beta|}\, i^*\!\sum_{\xi'\in\Xi_\beta}\xi' && \text{(by the definition of $\Xi_\beta$)} \\
&= i^*x - \frac{1}{|\Xi_\beta|}\sum_{\xi\in\Xi_\beta}\langle x, \xi^\vee\rangle\ i^*\!\sum_{\xi'\in\Xi_\beta}\xi' && \text{(by (9))}.
\end{align*}
But by Remark 5, for any ξ ∈ Ξβ ,
\[
i^*\Bigl(\frac{1}{|\Xi_\beta|}\sum_{\xi'\in\Xi_\beta}\xi'\Bigr) = i^*\xi,
\]
so we have
\[
w_\alpha(\bar x) = i^*\Bigl(x - \sum_{\xi\in\Xi_\beta}\langle x, \xi^\vee\rangle\,\xi\Bigr). \tag{10}
\]
Also by Remark 5, the reflections wξ ∈ W for ξ ∈ Ξβ all commute with one another. If w denotes their product, then
\[
x - \sum_{\xi\in\Xi_\beta}\langle x, \xi^\vee\rangle\,\xi = w(x),
\]
so by (10), we have
\[
w_\alpha(\bar x) = i^*(w(x)). \tag{11}
\]
In particular, if α′ ∈ Φ̄, and β′ ∈ Φ satisfies i∗ β′ = α′ , then
\[
w_\alpha(\alpha') = i^*(w(\beta')) \in i^*(\Phi) = \bar\Phi,
\]
so Φ̄ is stable under the action of wα , as desired.
It remains to show that the group W̄ := ⟨wα | α ∈ Φ̄⟩ ⊂ Aut(X̄ ∗ ) is finite. To accomplish this, we show that W̄ embeds naturally in the finite group W Γ . By Lemma 3, there is a natural injection
\[
W^\Gamma \longrightarrow \operatorname{Aut}(\bar X^*).
\]
To construct an embedding W̄ −→ W Γ , it is therefore enough to show that the image of this injection contains W̄ . Thus, given w̄ ∈ W̄ , we will show that there exists w ∈ W Γ whose action on X̄ ∗ coincides with that of w̄. It suffices to prove the existence of w only in the case in which w̄ is a reflection wα through a root α ∈ Φ̄. In this case, let w = ∏_{ξ∈Ξβ} wξ , where β ∈ Φ is
such that i∗ β = α. It follows from Remark 5 that w ∈ W Γ , and it follows
from (11) that for any x ∈ X ∗ ,
wα (i∗ x) = i∗ (w(x)) = w(i∗ x).
This establishes the existence of the desired embedding.
Remark 12. If Φ is reduced, then so is the root system Φ̄ constructed above,
unless Φ has an irreducible factor of type A2n whose stabilizer in Γ acts
upon it nontrivially. To see this, it is easy to reduce to the case where Φ is
irreducible and Γ is cyclic (see [1, Proposition 3.5]). The result then follows
from [3, §1.3].
Remark 13. There is a way to choose a maximal reduced subsystem that we will later see is preferred. Specifically, take only the nondivisible (resp. nonmultipliable) roots of Φ̄ according as char k is not two (resp. two).
Lemma 14. The map i∗ induces a bijection between the set of Γ-invariant
positive systems in Φ and the set of positive systems in Φ̄.
Proof. Let Π ⊆ Φ be a Γ-invariant positive system. Let Π̄ = i∗ (Π) ⊆ Φ̄. Then there is some vector v ∈ V∗ such that for every root β ∈ Φ, we have that ⟨β, v⟩ ≠ 0, and ⟨β, v⟩ > 0 if and only if β ∈ Π. Since Π is Γ-invariant, we may replace v by Σ_{γ∈Γ} γv, and thus assume that v is Γ-invariant, and so lies in V̄∗ . Suppose α ∈ Φ̄. Then α = i∗ β for some β ∈ Φ, so ⟨α, v⟩ = ⟨β, v⟩. Thus, ⟨α, v⟩ ≠ 0, and ⟨α, v⟩ > 0 if and only if α ∈ Π̄. This shows that Π̄ is a positive system in Φ̄.
Conversely, suppose that Π̄ ⊆ Φ̄ is a positive system, and let Π = (i∗ )−1 (Π̄). Then there is some vector v̄ ∈ V̄∗ such that for every root α ∈ Φ̄, we have that ⟨α, v̄⟩ ≠ 0, and ⟨α, v̄⟩ > 0 if and only if α ∈ Π̄. For every root β ∈ Φ, we have ⟨β, i∗ v̄⟩ = ⟨i∗ β, v̄⟩, which is never zero, and is positive if and only if β ∈ Π. Thus, Π ⊂ Φ is a positive system. Since i∗ v̄ is Γ-invariant, so is Π.
Corollary 15. Let W̄ be the Weyl group of Ψ̄. Then the embedding of W̄
into W Γ in the proof of Theorem 7 is an isomorphism.
Proof. Since W̄ acts simply transitively on the set of positive systems in Φ̄,
and W Γ acts simply transitively on the set of Γ-invariant positive systems
in Φ, the result follows from Lemma 14.
3. From automorphisms of root data to automorphisms of
reductive groups
Let Ψ = (X ∗ , Φ, X∗ , Φ∨ ) be a root datum on which a group Λ acts via
automorphisms. Choosing a root basis ∆ of Φ, we obtain a corresponding
based root datum Ψ̇. Then Ψ̇ also carries an action of Λ. Namely, for σ ∈ Λ,
there exists a unique element c(σ) in the Weyl group W (Ψ) of Ψ such that
σ(∆) = c(σ)(∆). If we define σ⋆ to be the automorphism of Ψ̇ given by
\[
\sigma^\star \chi = c(\sigma)^{-1}(\sigma\chi) \tag{16}
\]
for χ ∈ X ∗ , then the action of Λ on Ψ̇ is given by σ 7→ σ ⋆ .
Since Λ acts on Ψ and Ψ̇, it acts naturally on Aut(Ψ) and Aut(Ψ̇), as
well as on the Weyl groups W (Ψ) ⊂ Aut(Ψ) and W (Ψ̇) ⊂ Aut(Ψ̇). Just as
the actions of Λ on Ψ̇ and on Ψ differ, so the actions of Λ on W (Ψ̇) and on
W (Ψ) differ, even though the Weyl groups themselves are equal. For σ ∈ Λ
and w ∈ W (Ψ̇), let (σ, w) 7→ σ ⋆ (w) denote the action of Λ on W (Ψ̇). Then
we have
\[
\sigma w = c(\sigma)\,(\sigma^\star w)\,c(\sigma)^{-1}.
\]
One can check readily that the map c : Λ −→ W (Ψ̇) is a cocycle in Z 1 (k, W (Ψ̇)).
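For the reader's convenience, here is the routine verification (our addition, using only the relation σ(∆) = c(σ)(∆) and the displayed comparison of the two actions): for σ, τ ∈ Λ,
\[
c(\sigma\tau)(\Delta) = (\sigma\tau)(\Delta) = \sigma\bigl(c(\tau)(\Delta)\bigr) = \bigl(\sigma\,c(\tau)\,\sigma^{-1}\bigr)\bigl(\sigma(\Delta)\bigr) = \bigl(\sigma\,c(\tau)\,\sigma^{-1}\bigr)\,c(\sigma)(\Delta),
\]
so c(στ) = (σ c(τ) σ−1 ) c(σ) = c(σ) σ⋆ (c(τ)), which is precisely the cocycle condition for the ⋆-action.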
We now turn our attention to based root data arising from reductive
algebraic groups. Let G be a connected reductive k-group, B a Borel subgroup of G, and T ⊆ B a maximal torus of G. Let Ψ̇(G, B, T ) denote the
corresponding based root datum.
Any map ϑ ∈ Aut(G) determines an obvious isomorphism
ϑ∗ : Ψ̇(G, B, T ) −→ Ψ̇(G, ϑ(B), ϑ(T )).
There is a natural homomorphism
π : Aut(G) −→ Aut(Ψ̇(G, B, T ))
defined as follows. For ϑ ∈ Aut(G), choose gϑ ∈ G(ksep ) such that Int(gϑ )
takes ϑ(B) to B, and ϑ(T ) to T . Then Int(gϑ ) ◦ ϑ stabilizes B and T , and
we let π(ϑ) be the automorphism (Int(gϑ ) ◦ ϑ)∗ of Ψ̇(G, B, T ) (which is, in
fact, independent of the choice of gϑ ).
Now suppose that T is defined over k. Then an element σ ∈ Gal(k) naturally determines an automorphism of Ψ(G, T ), hence an automorphism σ⋆ of Ψ̇(G, B, T ) as defined in (16). We thus obtain an action of Gal(k) on Ψ̇(G, B, T ), hence one on Aut(Ψ̇(G, B, T )) as above. These actions are independent of the particular choice of B and T in the sense that if g ∈ G(ksep ) and σ ∈ Gal(k), then we have
\[
\sigma^\star \circ \operatorname{Int}(g)^* = \operatorname{Int}(g)^* \circ \sigma^\star, \tag{17}
\]
where we use the notation σ⋆ to denote both the action of σ on Ψ̇(G, B, T ) and on Ψ̇(G, ^gB, ^gT ).
There is a well-known exact sequence
\[
1 \longrightarrow \operatorname{Inn}(G) \longrightarrow \operatorname{Aut}(G) \stackrel{\pi}{\longrightarrow} \operatorname{Aut}(\dot\Psi(G, B, T)) \longrightarrow 1. \tag{18}
\]
We note that the homomorphisms in (18) are Gal(k)-equivariant.
Remark 19. Let ∆ be the set of simple roots for (G, B, T ). Let {Xα }α∈∆ ⊂
Lie(G)(ksep ) be a pinning. It is well known [5, Cor. 2.14] that {Xα } determines a unique splitting ψ of (18). Namely, if f ∈ Aut(Ψ̇(G, B, T )), define
ψ(f ) to be the automorphism of G such that
• ψ(f ) stabilizes B and T ,
• the restriction of ψ(f ) to T is determined by the automorphism of
X ∗ (T ) given by f , and
• ψ(f )(Xα ) = Xf (α) .
Thus im ψ lies in the subgroup Aut(G, B, T, {Xα }) of Aut(G) consisting of
automorphisms that stabilize B, T , and the pinning {Xα }. If B and T are
defined over k, and {Xα } is Gal(k)-stable, it follows from [2, §3.10] that ψ
is Gal(k)-equivariant.
Lemma 20. Retain the notation of the previous remark, and assume that B
and T are defined over k, and {Xα } is Gal(k)-stable. Suppose a group Γ acts
on G via k-automorphisms, preserving B, T , and {Xα }. Then Ḡ = (GΓ )◦ is a reductive k-group, B̄ = (B Γ )◦ is a Borel k-subgroup of Ḡ, T̄ = (T Γ )◦ is a maximal k-torus in B̄, and W (G, T )Γ = W (Ḡ, T̄ ).
We prove this result by reducing to the well-known case where Γ is cyclic.
Proof. The statements about Ḡ and T̄ follow from [1, Proposition 3.5]. The
lemma follows for G if it holds for a central quotient of G. Therefore, we
may assume that (over ksep ) G is a direct product of almost simple groups.
We can also reduce to the case where Γ acts transitively on the factors of
G. As in the proof loc. cit., we may identify the factors of G with each
other, and replace Γ by a group S × Γ1 such that Ḡ = (GS×Γ1 )◦ , where S
acts by permuting the factors in our product decomposition of G, and Γ1
preserves each factor and acts in the same way on each. It is clear from the
construction that S × Γ1 preserves {Xα }.
Working in stages, we may assume that Γ is simple. Thus, either Γ acts
by permutation of the factors of G, or G is simple. In the former case,
our result is trivial, so assume that G is simple. Then G has a connected
Dynkin diagram, whose automorphism group is solvable. Since Γ embeds in
this automorphism group, it must be cyclic.
We let Ψ = (X ∗ (T ), Φ(G, T ), X∗ (T ), Φ∨ (G, T )) and will freely use the
notation of §2. We may identify X̄ ∗ with X ∗ (T̄ ). Under this identification,
the restriction βres of a root β ∈ Φ(G, T ) to T̄ corresponds to i∗ β. It follows
from [8, §8.2(2)] that since Γ fixes a pinning (i.e., cβ = 1 for each β ∈ ∆,
in the terminology loc. cit.), then for each β ∈ Φ(G, T ), there exists a root
α ∈ Φ(Ḡ, T̄ ) proportional to βres . Meanwhile, it follows from [1, Proposition
3.5(iv)] that every root in Φ(Ḡ, T̄ ) is the restriction of a root in Φ(G, T ).
It follows that the Weyl group W̄ of Ψ̄ is equal to W (Ḡ, T̄ ). But W̄ is
canonically isomorphic to W (G, T )Γ by Corollary 15.
4. Proofs of Theorems
Proof of Theorem 1. Consider an abstract root datum Ψ = (X ∗ , Φ, X∗ , Φ∨ )
with an action of Gal(k) × Γ. Suppose that ∆ is a Γ-stable base for Ψ. Let
Ψ̇ be the corresponding based root datum. As discussed in §3, the action of
Gal(k) × Γ on Ψ determines one of Gal(k) × Γ on Ψ̇. Since ∆ is Γ-stable, the
actions of Γ on Ψ and Ψ̇ coincide. In the notation of (16) with Λ = Gal(k),
the elements c(σ) ∈ W (Ψ̇) that arise from the action of Gal(k) on Ψ̇ must
lie in W (Ψ̇)Γ since this action commutes with that of Γ. Therefore, the
map c : Gal(k) −→ W (Ψ̇)Γ is a cocycle in Z 1 (k, W (Ψ̇)Γ ). We note that the
Gal(k) × Γ-isomorphism class of Ψ̇ depends only on that of Ψ.
By [7, Theorem 6.2.7], there exists a triple (G, B0 , T0 ), unique up to k-isomorphism, consisting of a k-quasisplit connected reductive group G, a
Borel k-subgroup B0 of G, and a maximal k-torus T0 of B0 , such that the
associated based root datum Ψ̇(G, B0 , T0 ) is Gal(k)-isomorphic to Ψ̇. We
will identify Ψ̇ and Ψ̇(G, B0 , T0 ) via such an isomorphism.
Let {Xα } be a Gal(k)-stable pinning for G relative to B0 and T0 . The
action of Γ on Ψ̇ determines a homomorphism φ : Γ −→ Aut(Ψ̇). Let ϕ be
the composition
\[
\varphi \colon \Gamma \stackrel{\phi}{\longrightarrow} \operatorname{Aut}(\dot\Psi) = \operatorname{Aut}(\dot\Psi(G, B_0, T_0)) \stackrel{\psi}{\longrightarrow} \operatorname{Aut}(G, B_0, T_0, \{X_\alpha\}),
\]
where ψ : Aut(Ψ̇(G, B0 , T0 )) −→ Aut(G, B0 , T0 , {Xα }) is the homomorphism
from Remark 19.
Let Ḡ = (G^{ϕ(Γ)} )◦ and T̄0 = (T0^{ϕ(Γ)} )◦ . By Lemma 20, Ḡ is a k-quasisplit reductive group, T̄0 a maximal k-torus of Ḡ, and
\[
W(\dot\Psi)^\Gamma = W(G, T_0)^{\varphi(\Gamma)} = W(\bar G, \bar T_0).
\]
Thus we may view c as a cocycle in Z 1 (k, W (Ḡ, T̄0 )).
By [4, Theorem 1.1], there is some g ∈ Ḡ(ksep ) such that for all σ ∈ Gal(k), g −1 σ(g) lies in the normalizer NḠ (T̄0 )(ksep ), and the image of g −1 σ(g) in W (Ḡ, T̄0 ) is equal to c(σ). Let T = ^gT0 and B = ^gB0 . Since g is ϕ(Γ)-fixed, T is a ϕ(Γ)-stable maximal k-torus of G, and B is a ϕ(Γ)-stable Borel subgroup of G containing T . We have therefore associated to Ψ a triple (G, T, ϕ) of the required type.
Suppose we vary the arbitrary choices made in the above construction of
(G, T, ϕ). That is, suppose we choose
• another based root datum Ψ̇′ with underlying datum Ψ, and hence
a cocycle c′ in Z 1 (k, W (Ψ̇′ )Γ );
• another triple of k-groups (G′ , B0′ , T0′ ) k-isomorphic to (G, B0 , T0 )
and an identification of Ψ̇(G′ , B0′ , T0′ ) with Ψ̇′ ; and
• a Gal(k)-stable pinning {Xα′ } of G′ relative to B0′ and T0′ , along with
the associated map ψ ′ : Aut(Ψ̇(G′ , B0′ , T0′ )) −→ Aut(G′ , B0′ , T0′ , {Xα′ })
from Remark 19.
We will show that these choices lead to a triple (G′ , T ′ , ϕ′ ) that is equivalent
to (G, T, ϕ). We note that replacing Ψ by another datum in its Gal(k) × Γ-isomorphism class has no additional effect on the triple arising from the
construction.
Use a particular k-isomorphism between (G′ , B0′ , T0′ ) and (G, B0 , T0 ) to identify these triples. Following the above construction, we obtain a homomorphism ϕ′ : Γ −→ Aut(G, B0 , T0 , {Xα′ }), as well as an element g′ ∈ (G^{ϕ′(Γ)} )◦ (ksep ) and a k-torus T ′ = ^{g′}T0 , analogous to g and T , respectively.
There is a unique element w ∈ W (Ψ) mapping Ψ̇ to Ψ̇′ , and by uniqueness,
w must in fact lie in W (Ψ)Γ , and the mapping it induces is equivariant with
respect to the actions of Gal(k) on these based root data. Via conjugation,
the element w induces Γ-equivariant isomorphisms W (Ψ̇) −→ W (Ψ̇′ ) and
τ : Z 1 (k, W (Ψ̇)) −→ Z 1 (k, W (Ψ̇′ )).
We have a unique element κ of Autk (Ψ̇(G, B0 , T0 )) that produces a commutative square
\[
\xymatrix{
\dot\Psi \ar[r] \ar[d]_{w} & \dot\Psi(G, B_0, T_0) \ar[d]^{\kappa} \\
\dot\Psi' \ar[r] & \dot\Psi(G, B_0, T_0)
}
\tag{21}
\]
Here the horizontal arrows are the identifications chosen in the respective
constructions of ϕ and ϕ′ . (That κ is Gal(k)-equivariant follows from the
equivariance of the other three maps in the square.) We therefore obtain a
diagram
\[
\xymatrix{
& & \operatorname{Aut}(\dot\Psi) \ar[r] \ar[dd] & \operatorname{Aut}(\dot\Psi(G, B_0, T_0)) \ar[dd] \\
\Gamma \ar[r] & \operatorname{Aut}(\dot\Psi)\cap\operatorname{Aut}(\dot\Psi') \ar[ru] \ar[rd] & & \\
& & \operatorname{Aut}(\dot\Psi') \ar[r] & \operatorname{Aut}(\dot\Psi(G, B_0, T_0))
}
\tag{22}
\]
in which the square on the right is induced by (21) (and hence commutes),
the vertical maps are given respectively by conjugation by w and κ, the
diagonal maps are given by inclusion, and the map out of Γ is given by the
action of Γ on Ψ.
The map τ(c) : σ 7→ wc(σ)w−1 is a cocycle in Z 1 (k, W (Ψ̇′ )Γ ), cohomologous to c′ ; more precisely, for σ ∈ Gal(k),
\[
c'(\sigma) = w^{-1}\bigl(wc(\sigma)w^{-1}\bigr)\,\sigma^{\star\prime}(w) = w^{-1}\bigl(\tau(c)(\sigma)\bigr)\,\sigma^{\star\prime}(w),
\]
where σ⋆′ (w) denotes the result of the action of σ on w, viewed as an element of W (Ψ̇′ ). Identifying c and c′ respectively with cocycles in Z 1 (k, W (G, T0 )^{ϕ(Γ)} ) and Z 1 (k, W (G, T0 )^{ϕ′(Γ)} ) as in the above construction, it follows from (22) that
\[
c'(\sigma) = w^{-1}\bigl(\kappa \circ c(\sigma) \circ \kappa^{-1}\bigr)\,\sigma(w), \tag{23}
\]
where σ(w) here denotes the result of σ ∈ Gal(k) acting on the element w ∈ W (Ψ̇′ ) via the identification of this group with the concrete Weyl group W (G, T0 ) in (22).
Let n ∈ NḠ (T̄0 )(ksep ) be a representative for w and set µ = ψ(κ) ∈
Autk (G, B0 , T0 ). Then by (23), g ′ −1 σ(g′ ) and n−1 µ(g−1 σ(g))σ(n) have the
same image in W (G, T0 ). Rearranging terms and letting h = g′ n−1 µ(g)−1 ,
we obtain that σ(h) and h have the same image modulo
\[
{}^{\sigma(\mu(g)n)}T_0 = \sigma\bigl({}^{\mu(g)n}T_0\bigr) = \sigma\bigl({}^{\mu(g)}T_0\bigr) = \sigma\bigl(\mu({}^{g}T_0)\bigr) = \sigma(\mu(T)) = \mu(T). \tag{24}
\]
Let ν = Int(h) ◦ µ. Since
\begin{align*}
\nu(T) = \operatorname{Int}(h)(\mu(T)) &= \operatorname{Int}(g'n^{-1}\mu(g)^{-1})(\mu(T)) = \operatorname{Int}(g'n^{-1})\bigl(\mu({}^{g^{-1}}T)\bigr) \\
&= \operatorname{Int}(g'n^{-1})(\mu(T_0)) = \operatorname{Int}(g'n^{-1})(T_0) = {}^{g'}T_0 = T',
\end{align*}
it follows from (24) that ν gives a k-isomorphism T −→ T ′ .
To show that (G′ , T ′ , ϕ′ ) is equivalent to (G, T, ϕ), it remains to show that
ν is Γ-equivariant. It follows from the construction of ϕ that π ◦ ϕ is equal
to the composition Γ −→ Aut(Ψ̇) −→ Aut(Ψ̇(G, B0 , T0 )) appearing in (22).
Similarly, π ◦ ϕ′ is equal to the analogous composition Γ −→ Aut(Ψ̇′ ) −→
Aut(Ψ̇(G, B0 , T0 )). Thus for any γ ∈ Γ,
π(ϕ′ (γ)) = κ ◦ π(ϕ(γ)) ◦ κ−1 .
Applying ψ to this equality and noting that ψ ◦ π ◦ ϕ = ϕ by construction,
we obtain
ψ(π(ϕ′ (γ))) = µ ◦ ϕ(γ) ◦ µ−1 .
Note that by definition, ψ(f ) and ψ ′ (f ) agree on T0 for any f ∈ Aut(Ψ̇(G, B0 , T0 )).
Therefore, as automorphisms of T0 , we have
ϕ′ (γ) = ψ ′ (π(ϕ′ (γ))) = ψ(π(ϕ′ (γ))) = µ ◦ ϕ(γ) ◦ µ−1 .
It follows that, as maps on T ,
ϕ′ (γ) ◦ ν
= ϕ′ (γ) ◦ Int(h) ◦ µ
= ϕ′ (γ) ◦ Int(g ′ n−1 µ(g)−1 ) ◦ µ
= Int(g ′ ) ◦ ϕ′ (γ) ◦ Int(µ(g)n)−1 ◦ µ
= Int(g ′ ) ◦ µ ◦ ϕ(γ) ◦ µ−1 ◦ Int(µ(g)n)−1 ◦ µ
= Int(g ′ ) ◦ µ ◦ ϕ(γ) ◦ Int(gµ−1 (n))−1
= Int(g ′ ) ◦ µ ◦ Int(gµ−1 (n))−1 ◦ ϕ(γ),
where the last equality above comes from the fact that g ∈ Ḡ(ksep ) and
Int(µ−1 (n)) ∈ W (G, T0 )ϕ(Γ) . Thus ϕ′ (γ) ◦ ν is equal to
Int(g ′ n−1 µ(g)−1 ) ◦ µ ◦ ϕ(γ) = ν ◦ ϕ(γ),
showing that ν is Γ-equivariant. Therefore, (G′ , T ′ , ϕ′ ) is equivalent to
(G, T, ϕ), and our construction induces a well-defined map sΓ : RΓ −→ TΓ .
We now show that rΓ ◦sΓ is the identity map on RΓ . Let Ψ be a root datum
representing some class in RΓ , and let (G, T, ϕ) be a triple representing the
image of the class of Ψ under sΓ . We need to show that Ψ(G, T ) is Gal(k) × Γ-isomorphic to Ψ. We will make free use of the notation developed in the
construction of sΓ .
The Gal(k)-equivariant isomorphism of based root data Ψ̇ −→ Ψ̇(G, B0 , T0 )
chosen in the definition of sΓ is Γ-equivariant by construction (where the action of Γ on Ψ̇(G, B0 , T0 ) is induced by ϕ). We may therefore identify Ψ̇ and
Ψ̇(G, B0 , T0 ) as based root data with Gal(k)×Γ-action via this isomorphism.
This allows us to identify Ψ and Ψ(G, T0 ) as root data with Γ-action (but
not necessarily with Gal(k)-action since the actions of Gal(k) on Ψ̇ and Ψ
differ in general).
Recall the element g ∈ Ḡ(ksep ) chosen in the definition of sΓ . The map
Int(g)∗ : Ψ = Ψ(G, T0 ) −→ Ψ(G, T ) is Γ-equivariant since g is ϕ(Γ)-fixed.
Furthermore, Int(g)∗ is Gal(k)-equivariant since for σ ∈ Gal(k) and χ ∈
X ∗ (T0 ),
\begin{align*}
\operatorname{Int}(g)^*(\sigma\chi) &= \operatorname{Int}(g)^*\bigl(c(\sigma)(\sigma^\star\chi)\bigr) \\
&= {}^{g\,g^{-1}\sigma(g)}(\sigma^\star\chi) \\
&= {}^{\sigma(g)}(\sigma^\star\chi) \\
&= \sigma({}^{g}\chi) \\
&= \sigma\bigl(\operatorname{Int}(g)^*(\chi)\bigr).
\end{align*}
Thus Ψ(G, T ) is Gal(k) × Γ-isomorphic to Ψ, as desired.
Finally, we show that sΓ ◦ rΓ is the identity map on TΓ . Let (G, T, ϕ)
represent a class in TΓ , and let (G′ , T ′ , ϕ′ ) represent the image of this class
under sΓ ◦ rΓ . Since rΓ ◦ (sΓ ◦ rΓ ) = (rΓ ◦ sΓ ) ◦ rΓ = rΓ , it follows that there
is a Gal(k) × Γ-isomorphism Ψ(G, T ) −→ Ψ(G′ , T ′ ). By [5, Theorem 2.9],
this isomorphism is induced by an isomorphism ν : G −→ G′ that restricts
to a Γ-equivariant k-isomorphism T −→ T ′ . Thus (G, T, ϕ) and (G′ , T ′ , ϕ′ )
are equivalent.
Remark 25. Observe that in the definition of the map sΓ above, the triple
(G, T, ϕ) is constructed in such a way that G is k-quasisplit and ϕ fixes a
Gal(k)-invariant pinning of G. Thus, since sΓ ◦ rΓ is the identity map on TΓ , we see that every equivalence class in TΓ contains such a triple.
Moreover, suppose that (G, T, ϕ) is a triple of this kind. Applying the
construction of sΓ ◦ rΓ to this triple, we see that the triple we obtain is
precisely (G, T, ϕ), provided that we make appropriate choices.
Remark 26. Recall that in the proof, it is shown that if (G, T, ϕ) and
(G′ , T ′ , ϕ′ ) are two triples that arise by applying the sΓ construction to
a root datum Ψ, then (G, T, ϕ) and (G′ , T ′ , ϕ′ ) are equivalent. We note that
the equivalence ν constructed in this case is of a special kind. Namely, ν is
of the form Int(h) ◦ µ, where h ∈ G(ksep ) and µ ∈ Autk (G, B0 , T0 ).
Now suppose that (G, T, ϕ) and (G′ , T ′ , ϕ′ ) are arbitrary equivalent triples
with the properties that G and G′ are k-quasisplit and ϕ and ϕ′ fix Gal(k)-invariant pinnings for G and G′ , respectively. Then combining the first part
of this remark with Remark 25, it follows that there is an equivalence ν
between (G, T, ϕ) and (G′ , T ′ , ϕ′ ) of the above special form.
Remark 27. Suppose that G′ is k-quasisplit and T ′ is a maximal k-torus
of G′ . Suppose that the finite group Γ acts via Gal(k)-equivariant automorphisms on Ψ(G′ , T ′ ) preserving a base. Then the equivalence class of
Ψ(G′ , T ′ ) lies in RΓ . Applying the construction in the definition of sΓ
to Ψ(G′ , T ′ ), we obtain a triple (G, T, ϕ) where G is k-quasisplit. Since
Ψ(G′ , T ′ ) and Ψ(G, T ) are Gal(k)-isomorphic, G′ can be taken to equal G.
Moreover, if g ∈ G(ksep ) is chosen such that T ′ = ^gT0 , then the cocycle c used to define T can be taken to be the image of σ 7→ g −1 σ(g) in
Z 1 (k, W (G, T0 )Γ ). In particular, it follows from [1, Proposition 6.1] that T ′
is stably conjugate to T .
Proof of Theorem 2. Consider a class in TΓ . From Remark 25, we can represent this class by a triple (G, T, ϕ), where G is k-quasisplit and the action ϕ of Γ on G fixes a Gal(k)-invariant pinning. Let Ḡ = (Gϕ(Γ) )◦ and
T̄ = (T ϕ(Γ) )◦ . Then Ψ = Ψ(G, T ) comes equipped with an action of Γ.
Consider the restricted root datum Ψ̄ of Theorem 7. Construct a new root
datum Ψ̄′ by replacing the root system Φ̄ of Ψ̄ by a maximal reduced subsystem Φ̄′ as in Remark 13, and do likewise with the coroot system. It is clear
from the constructions that the root data Ψ̄′ and Ψ(Ḡ, T̄ ) are equivalent,
provided that their root systems are equivalent.
As in the proof of Lemma 20, one may reduce to the case where Γ is
cyclic. From the proof of [8, §8.2(2′′′′ )], the root system Φ̄′ must contain the
root system Φ(Ḡ, T̄ ), with equality since Γ fixes a pinning.
References
[1] Jeffrey D. Adler and Joshua M. Lansky, Lifting representations of finite reductive
groups I: Semisimple conjugacy classes, Canad. J. Math. 66 (2014), 1201–1224, DOI
10.4153/CJM-2014-013-6, available at arXiv:1106.0706.
[2] M. Demazure, Automorphismes des groupes réductifs, Schémas en Groupes (Sém.
Géométrie Algébrique, Inst. Hautes Études Sci., 1963/64), Inst. Hautes Études Sci.,
Paris, 1965, pp. 87 (French). MR0228503 (37 #4083)
[3] Robert E. Kottwitz and Diana Shelstad, Foundations of twisted endoscopy, Astérisque
255 (1999), vi+190 (English, with English and French summaries). MR1687096
(2000k:22024)
[4] M. S. Raghunathan, Tori in quasi-split groups, J. Ramanujan Math. Soc. 19 (2004),
no. 4, 281–287. MR2125504 (2005m:20114)
[5] Tonny A. Springer, Reductive groups, Automorphic forms, representations, and L-functions. Part 1 (Armand Borel and W. Casselman, eds.), Proceedings of Symposia in Pure Mathematics, XXXIII, American Mathematical Society, Providence, R.I., 1979, pp. 3–27. MR546587 (80h:20062)
[6] Tonny A. Springer, Linear algebraic groups, Progress in Mathematics, vol. 9, Birkhäuser Boston Inc., Boston, MA, 1998. MR1642713 (99h:20075)
[7] Tonny A. Springer, Linear algebraic groups, Algebraic geometry IV, Encyclopedia of Mathematical Sciences, Springer-Verlag, 1994, pp. 1–121. MR1100484 (92g:20061)
[8] Robert Steinberg, Endomorphisms of linear algebraic groups, Memoirs of the American
Mathematical Society, No. 80, American Mathematical Society, Providence, R.I., 1968.
MR0230728 (37 #6288)
Department of Mathematics and Statistics, American University, Washington, DC 20016-8050
E-mail address, Adler: jadler@american.edu
E-mail address, Lansky: lansky@american.edu
arXiv:1408.0350v3 [] 25 Feb 2016
Factorizations of almost simple groups
with a solvable factor, and Cayley graphs
of solvable groups
Cai Heng Li, Binzhou Xia
(Li) The University of Western Australia, Crawley 6009, WA, Australia.
Email: cai.heng.li@uwa.edu.au
(Xia) Peking University, Beijing 100871, China.
Email: binzhouxia@pku.edu.cn
Abstract
A classification is given for factorizations of almost simple groups with at least one
factor solvable, and it is then applied to characterize s-arc-transitive Cayley graphs
of solvable groups, leading to a striking corollary: Except for the cycles, every non-bipartite connected 3-arc-transitive Cayley graph of a solvable group is a cover of
the Petersen graph or the Hoffman-Singleton graph.
Key words and phrases:
factorizations; almost simple groups; solvable groups; s-arc-transitive graphs
AMS Subject Classification (2010): 20D40, 20D06, 20D08, 05E18
Acknowledgements. We would like to thank Cheryl Praeger for valuable comments. We also thank Stephen Glasby and Derek Holt for their help with some of
the computation in Magma. The first author acknowledges the support of an NSFC
grant and an ARC Discovery Project Grant. The second author acknowledges the
support of NSFC grant 11501011.
Contents
Chapter 1. Introduction
1.1. Factorizations of almost simple groups
1.2. s-Arc transitive Cayley graphs
Chapter 2. Preliminaries
2.1. Notation
2.2. Results on finite simple groups
2.3. Elementary facts concerning factorizations
2.4. Maximal factorizations of almost simple groups
Chapter 3. The factorizations of linear and unitary groups of prime dimension
3.1. Singer cycles
3.2. Linear groups of prime dimension
3.3. Unitary groups of prime dimension
Chapter 4. Non-classical groups
4.1. The case that both factors are solvable
4.2. Exceptional groups of Lie type
4.3. Alternating group socles
4.4. Sporadic group socles
Chapter 5. Examples in classical groups
5.1. Examples in unitary groups
5.2. Examples in symplectic groups
5.3. Examples in orthogonal groups of odd dimension
5.4. Examples in orthogonal groups of plus type
Chapter 6. Reduction for classical groups
6.1. The case that A is solvable
6.2. Inductive hypothesis
6.3. The case that A has at least two unsolvable composition factors
Chapter 7. Proof of Theorem 1.1
7.1. Linear groups
7.2. Symplectic Groups
7.3. Unitary Groups
7.4. Orthogonal groups of odd dimension
7.5. Orthogonal groups of even dimension
7.6. Completion of the proof
Chapter 8. s-Arc transitive Cayley graphs of solvable groups
8.1. Preliminaries
8.2. A property of finite simple groups
8.3. Reduction to affine and almost simple groups
8.4. Proof of Theorem 1.2
Appendix A. Tables for nontrivial maximal factorizations of almost simple classical groups
Appendix B. Magma codes
Bibliography
CHAPTER 1
Introduction
A classification is given for factorizations of almost simple groups with at least
one factor solvable (Theorem 1.1), and it is then applied to characterize s-arc-transitive Cayley graphs of solvable groups (Theorem 1.2), leading to a striking corollary:
Except for the cycles, every non-bipartite connected 3-arc-transitive
Cayley graph of a solvable group is a cover of the Petersen graph
or the Hoffman-Singleton graph.
1.1. Factorizations of almost simple groups
For a group G, the expression G = HK with H, K proper subgroups of G is
called a factorization of G, and H, K are called its factors. A finite group G is
said to be almost simple if it lies between a nonabelian simple group S and its
automorphism group Aut(S), or equivalently, if G has a unique minimal normal
subgroup, which is nonabelian simple.
The socle of a group G is the product of all minimal normal subgroups of G, denoted by soc(G). For a finite almost simple group, a factorization with a factor containing the socle is trivial in some sense. Thus we only consider factorizations in which both factors are core-free. A factorization of an almost simple group is said to be nontrivial if both its factors are core-free, and we then also call the factors themselves nontrivial. When speaking of factorizations of almost simple groups, we always refer to nontrivial ones. Understanding the factorizations of almost simple groups is at the heart of the study of factorizations of general groups.
Problem A. Classify factorizations of finite almost simple groups.
This problem has been investigated for a long time. In the early stages, there were partial results for certain simple groups. For instance, Itô [29] determined the factorizations of PSL2 (q), and the factorizations of certain finite simple groups into two simple groups were classified in [19, 20, 52]. A factorization is said to be exact if
the intersection of the two factors is trivial. The exact factorizations of alternating
and symmetric groups are determined by Wiegold and Williamson [54]. Fisman
and Arad [16] in 1987 proved a conjecture of Szép [50, 51] saying that a nonabelian
simple group G does not have a factorization G = HK with the centers of H and
K both nontrivial.
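As a toy illustration of these notions (ours, not from the paper; the helper names are our own), the following Python snippet verifies by brute force that A5 = HK is an exact factorization, where H ≅ A4 is a point stabilizer and K is generated by a 5-cycle:

# Brute-force check of the exact factorization A_5 = H K with H = A_4
# and K = <(0 1 2 3 4)>.  Illustrative only; not part of the paper.
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations are tuples acting on {0, ..., 4}
    return tuple(p[q[i]] for i in range(5))

def is_even(p):
    seen, parity = set(), 0
    for i in range(5):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            parity ^= (length - 1) & 1
    return parity == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]
H = [p for p in A5 if p[4] == 4]   # the copy of A_4 fixing the point 4
c = (1, 2, 3, 4, 0)                # the 5-cycle (0 1 2 3 4)
K = [tuple(range(5))]
for _ in range(4):
    K.append(compose(K[-1], c))

products = {compose(h, k) for h in H for k in K}
print(len(products) == 60)  # True: |HK| = 12 * 5 = |A_5|, so H ∩ K = 1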
During the 1980s and 1990s, the status of research on Problem A changed dramatically as significant classification results were achieved. In 1987, Hering, Liebeck and Saxl [25] classified the factorizations of exceptional groups of Lie type. A factorization G = HK is called a maximal factorization of G if both
H, K are maximal subgroups of G. In 1990, Liebeck, Praeger and Saxl published
the landmark work [42] classifying maximal factorizations of finite almost simple
groups, which is the foundation of many further works on Problem A. When the
socle is an alternating group, in fact all the factorizations of an almost simple group
were determined in [42, THEOREM D]. Based on the maximal factorizations in [42,
THEOREM C], Giudici [21] in 2006 determined the factorizations of almost simple
groups with sporadic simple group socle.
Nevertheless, Problem A for classical groups of Lie type is widely open. Our
approach to this problem is to divide it into three major cases:
• at least one of the two factors is solvable;
• at least one of the two factors has at least two unsolvable composition
factors;
• both factors have a unique unsolvable composition factor.
In this paper, we solve Problem A for the first case (Theorem 1.1). In subsequent
work [39], we solve the problem for the second case.
For the notation in the following theorem, refer to Section 2.1.
Theorem 1.1. Suppose that G is an almost simple group with socle L and
G = HK for a solvable subgroup H and a core-free subgroup K of G. Then one of
the following statements holds.
(a) Both factors H, K are solvable, and the triple (G, H, K) is described in
Proposition 4.1.
(b) L = An , and the triple (G, H, K) is described in Proposition 4.3.
(c) L is a sporadic simple group, and the triple (G, H, K) is described in Proposition 4.4.
(d) L is a classical group of Lie type, and the triple (L, H ∩ L, K ∩ L) lies in
Table 1.1 or Table 1.2.
Conversely, for each simple group L in Table 1.1 and Table 1.2, there exist group
G and subgroups H, K as described such that soc(G) = L and G = HK.
Here are some remarks on Theorem 1.1.
(i) For the triple (G, H, K) in part (d) of Theorem 1.1, NL (K ∩ L) is maximal
in L except when K ∩ L = PSL3 (3) in row 6 of Table 1.2 or K ∩ L = A5 ,
S5 in row 11 of Table 1.2, respectively.
(ii) By the benefit of the isomorphisms PSL4(q) ≅ PΩ6^+(q), PSp2m(2^f) ≅ PΩ2m+1(2^f) and PSp4(q) ≅ PΩ5(q), there is a uniform description for rows 2–9 of Table 1.1: L = PSU2m(q^{1/2}) with m ≥ 2, or PΩ2m+1(q) with m ≥ 2, or PΩ2m^+(q) with m ≥ 3, and up to graph automorphisms of L, we have
H ∩ L ≤ Op(Pm):ˆGL1(q^m).m < Pm and NL(K ∩ L) = N1^ε,
where ε = 2m − n ∈ {0, −} with n being the dimension of L. To illustrate this, take row 4 of Table 1.1 as an instance. Denote by τ the graph automorphism of L of order 2, and consider the factorization Gτ = Hτ Kτ deriving from G = HK. Since τ maps P1 to P2 and Sp2(q^2).2 to O4^−(q), we deduce that Hτ ∩ L ≤ q^3:(q^2 − 1).2 < P2 and Kτ ∩ L ⊵ Ω4^−(q). This means that the triple (Gτ, Hτ, Kτ) is in our uniform description with L = soc(Gτ) = PΩ5(q) and q even.
(iii) Although Table 1.1 presents Pk as a maximal subgroup of L containing
H ∩ L for each of the rows 2–9, it does not assert that Pk is the only such
maximal subgroup. In fact, H ∩ L may be contained in the intersection of
two different maximal subgroups of L, one of which is Pk . For example,
G = Sp4(4) has a factorization G = HK with H = 2^4:15 < P2 ∩ O4^−(4) and K = Sp2(16):2.
Table 1.1.

row | L | H ∩ L ≤ | K ∩ L ⊵ | remark
1 | PSLn(q) | ˆGL1(q^n):n = ((q^n − 1)/((q − 1)d)):n | q^{n−1}:SLn−1(q) | d = (n, q − 1)
2 | PSL4(q) | q^3:((q^3 − 1)/d).3 < Pk | PSp4(q) | d = (4, q − 1), k ∈ {1, 3}
3 | PSp2m(q) | q^{m(m+1)/2}:(q^m − 1).m < Pm | Ω2m^−(q) | m ≥ 2, q even
4 | PSp4(q) | q^3:(q^2 − 1).2 < P1 | Sp2(q^2) | q even
5 | PSp4(q) | q^{1+2}:((q^2 − 1)/2).2 < P1 | PSp2(q^2) | q odd
6 | PSU2m(q) | q^{m^2}:((q^{2m} − 1)/((q + 1)d)).m < Pm | SU2m−1(q) | m ≥ 2, d = (2m, q + 1)
7 | Ω2m+1(q) | (q^{m(m−1)/2}.q^m):((q^m − 1)/2).m < Pm | Ω2m^−(q) | m ≥ 3, q odd
8 | PΩ2m^+(q) | q^{m(m−1)/2}:((q^m − 1)/d).m < Pk | Ω2m−1(q) | m ≥ 5, d = (4, q^m − 1), k ∈ {m, m − 1}
9 | PΩ8^+(q) | q^6:((q^4 − 1)/d).4 < Pk | Ω7(q) | d = (4, q^4 − 1), k ∈ {1, 3, 4}
Table 1.2.

row | L | H ∩ L ≤ | K ∩ L
1 | PSL2(11) | 11:5 | A5
2 | PSL2(16) | D34 | A5
3 | PSL2(19) | 19:9 | A5
4 | PSL2(29) | 29:14 | A5
5 | PSL2(59) | 59:29 | A5
6 | PSL4(3) | 2^4:5:4 | PSL3(3), 3^3:PSL3(3)
7 | PSL4(3) | 3^3:13:3 | (4 × PSL2(9)):2
8 | PSL4(4) | 2^6:63:3 | (5 × PSL2(16)):2
9 | PSL5(2) | 31:5 | 2^6:(S3 × PSL3(2))
10 | PSp4(3) | 3^3:S4 | 2^4:A5
11 | PSp4(3) | 3^{1+2}_+:2.A4 | A5, 2^4:A5, S5, A6, S6
12 | PSp4(5) | 5^{1+2}_+:4.A4 | PSL2(5^2), PSL2(5^2):2
13 | PSp4(7) | 7^{1+2}_+:6.S4 | PSL2(7^2), PSL2(7^2):2
14 | PSp4(11) | 11^{1+2}_+:10.A4 | PSL2(11^2), PSL2(11^2):2
15 | PSp4(23) | 23^{1+2}_+:22.S4 | PSL2(23^2), PSL2(23^2):2
16 | Sp6(2) | 3^{1+2}_+:2.S4 | A8, S8
17 | PSp6(3) | 3^{1+4}_+:2^{1+4}.D10 | PSL2(27):3
18 | PSU3(3) | 3^{1+2}_+:8 | PSL2(7)
19 | PSU3(5) | 5^{1+2}_+:8 | A7
20 | PSU4(3) | 3^4:D10, 3^4:S4, 3^4:3^2:4, 3^{1+4}_+.2.S4 | PSL3(4)
21 | PSU4(8) | 513:3 | 2^{12}:SL2(64).7
22 | Ω7(3) | 3^5:2^4.AGL1(5) | G2(3)
23 | Ω7(3) | 3^{3+3}:13:3 | Sp6(2)
24 | Ω9(3) | 3^{6+4}:2^{1+4}.AGL1(5) | Ω8^−(3), Ω8^−(3).2
25 | Ω8^+(2) | 2^2:15.4 < A9 | Sp6(2)
26 | Ω8^+(2) | 2^6:15.4 | A9
27 | PΩ8^+(3) | 3^6:2^4.AGL1(5) | Ω7(3)
28 | PΩ8^+(3) | 3^6:(3^3:13:3), 3^{3+6}:13.3 | Ω8^+(2)

Note the isomorphism PSp4(3) ≅ PSU4(2) for rows 10 and 11.
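A cheap arithmetic sanity check on such tables (our illustration, not the authors'): any factorization G = HK satisfies |G| = |H||K|/|H ∩ K|, so the orders alone pin down the intersection. For row 1:

# Order bookkeeping for row 1 of Table 1.2 (illustrative, not from the
# paper): a factorization G = HK forces |H ∩ K| = |H| * |K| / |G|.
order_G = 660  # |PSL_2(11)|
order_H = 55   # |11:5|
order_K = 60   # |A_5|
assert order_H * order_K % order_G == 0
print(order_H * order_K // order_G)  # 5: the factorization forces |H ∩ K| = 5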
1.2. s-Arc transitive Cayley graphs
The second part of this paper gives a characterization of non-bipartite connected
s-arc-transitive Cayley graphs of solvable groups where s ≥ 2, as an application of
Theorem 1.1. Before stating the result, we introduce some definitions.
Let Γ = (V, E) be a simple graph with vertex set V and edge set E. An
automorphism of Γ is a permutation on V that preserves the edge set E. The
group consisting of all the automorphisms of Γ is called the (full) automorphism
group of Γ and denoted by Aut(Γ ). For a positive integer s, an s-arc in Γ is a
sequence of vertices (α0 , α1 , . . . , αs ) such that {αi−1 , αi } ∈ E and αi−1 ≠ αi+1 for all admissible i. A 1-arc is simply called an arc. For some G ≤ Aut(Γ ), the graph Γ is said to be (G, s)-arc-transitive if Γ is regular (meaning that each vertex has the same number of neighbors) and G acts transitively on the s-arcs of Γ ; Γ is said to be (G, s)-transitive if Γ is (G, s)-arc-transitive but not (G, s + 1)-arc-transitive. If Γ is (Aut(Γ ), s)-arc-transitive or (Aut(Γ ), s)-transitive, then we say that Γ is s-arc-transitive or s-transitive, respectively. Note that the cycles can be s-arc-transitive
for any positive integer s.
Given a group R and a subset S ⊂ R which does not contain the identity of R
such that S = S −1 := {g −1 | g ∈ S}, the Cayley graph Cay(R, S) of R is the graph
with vertex set R such that two vertices x, y are adjacent if and only if yx−1 ∈ S.
It is easy to see that Cay(R, S) is connected if and only if S generates the group R,
and a graph Γ is (isomorphic to) a Cayley graph of R if and only if Aut(Γ ) has a
subgroup isomorphic to R acting regularly on the vertices of Γ .
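As a small executable illustration (ours, not part of the paper; the function name and the choice R = Z/8Z, S = {±1} are assumptions made purely for the example), the following Python snippet builds a Cayley graph directly from this definition:

# Build Cay(R, S) from the definition: vertices are the elements of R,
# and x ~ y iff y * x^{-1} lies in S.  Here R = Z/8Z written additively,
# so y * x^{-1} means y - x.  Illustrative sketch only.

def cayley_graph_edges(elements, op, inv, S):
    edges = set()
    for x in elements:
        for y in elements:
            if x != y and op(y, inv(x)) in S:
                edges.add(frozenset((x, y)))
    return edges

R = range(8)
S = {1, 7}  # 7 = -1 (mod 8), so S = S^{-1} as the definition requires
edges = cayley_graph_edges(R, lambda a, b: (a + b) % 8, lambda a: (-a) % 8, S)
print(len(edges))  # 8: Cay(Z/8Z, {1, -1}) is the cycle C_8; it is connected
                   # because S = {1, -1} generates Z/8Z.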
There have been classification results for certain classes of s-arc-transitive Cayley
graphs in the literature. For instance, see [1] for 2-arc-transitive circulants (Cayley graphs of cyclic groups), [15, 45, 46] for 2-arc-transitive dihedrants (Cayley
graphs of dihedral groups) and [30, 37] for 2-arc-transitive Cayley graphs of abelian
groups. Cubic s-arc-transitive Cayley graphs are characterized in [9, 40, 56, 57],
and the tetravalent case is studied in [36].
A graph Γ is said to be a cover of a graph Σ , if there exists a surjection φ
from the vertex set of Γ to the vertex set of Σ which preserves adjacency and is
a bijection from the neighbors of v to the neighbors of v φ for any vertex v of Γ .
Suppose that Γ = (V, E) is a (G, s)-arc-transitive graph with s > 2, and G has a
normal subgroup N which has at least three orbits on the vertex set V . Then N
induces a graph ΓN = (VN , EN ), where VN is the set of N-orbits on V and EN is
the set of N-orbits on E, called the normal quotient of Γ induced by N. In this
context, we call Γ a normal cover of ΓN , and call Γ a proper normal cover of ΓN if
in addition N ≠ 1.
A transitive permutation group G is called primitive if G does not preserve
any nontrivial partition of the points, and is called quasiprimitive if every nontrivial
normal subgroup of G is transitive. Given a non-bipartite (G, s)-arc-transitive graph
Σ with s ≥ 2, one can take a maximal intransitive normal subgroup N of G, so that
ΣN is (G/N, s)-arc-transitive with G/N vertex-quasiprimitive and Σ is a cover of
ΣN , see [48]. If in addition Σ is a Cayley graph of a solvable group, then the
normal quotient ΣN admits a solvable vertex-transitive group of automorphisms.
Thus to characterize s-arc-transitive Cayley graphs of solvable groups, the first step
is to classify (G, s)-arc-transitive graphs with G vertex-quasiprimitive containing a
solvable vertex-transitive subgroup. This is essentially the result in Theorem 1.2,
where Σ is extended to the class of graphs admitting a solvable vertex-transitive
group of automorphisms, although our original purpose is those Cayley graphs of solvable groups. A group G is called affine if C_p^d ⊴ G ≤ AGLd(p) for some prime p
and positive integer d. The (G, 2)-arc-transitive graphs for primitive affine groups
G are classified in [30]. The (G, 2)-arc-transitive graphs for almost simple groups G
with socle PSL2 (q) are classified in [23].
Theorem 1.2. Let Σ be a non-bipartite connected (X, s)-transitive graph admitting a solvable vertex-transitive subgroup of X, where s ≥ 2 and the valency of
Σ is at least three. Then s = 2 or 3, and X has a normal subgroup N with the
(G, s)-transitive normal quotient Γ described below, where G = X/N and Γ = ΣN .
(a) G is a 3-transitive permutation group of degree n, Γ = Kn , and s = 2.
(b) G is a primitive affine group, so that Γ is classified in [30], and s = 2.
(c) PSL2 (q) 6 G 6 PΓL2 (q) for some prime power q > 4, so that Γ is classified
in [23].
(d) G = PSU3 (5) or PΣU3 (5), Γ is the Hoffman-Singleton graph, and s = 3.
(e) G = HS or HS.2, Γ is the Higman-Sims graph, and s = 2.
In particular, s = 3 if and only if Γ is the Petersen graph or the Hoffman-Singleton
graph.
A Cayley graph Γ of a group R is called a normal Cayley graph of R if the right
regular representation of R is normal in Aut(Γ ), and a Cayley graph of a solvable
group is called a solvable Cayley graph. An instant consequence of Theorem 1.2 is
the following.
Corollary 1.3. A connected non-bipartite 2-arc-transitive solvable Cayley graph
of valency at least three is a normal cover of
(a) a complete graph, or
(b) a normal Cayley graph of C_2^d, or
(c) a (G, 2)-arc-transitive graph with soc(G) = PSL2 (q), or
(d) the Hoffman-Singleton graph, or
(e) the Higman-Sims graph.
In particular, a connected non-bipartite 3-arc-transitive solvable Cayley graph of valency at least three is a normal cover of the Petersen graph or the Hoffman-Singleton graph.
We remark that neither the Petersen graph nor the Hoffman-Singleton graph is
a Cayley graph. Thus a non-bipartite 3-arc-transitive solvable Cayley graph (if any)
is a proper normal cover of the Petersen graph or the Hoffman-Singleton graph.
CHAPTER 2
Preliminaries
We collect in this chapter the notation and elementary facts as well as some technical lemmas. Some basic facts will be used in the sequel without further reference.
2.1. Notation
In this paper, all groups are assumed to be finite, and all graphs are assumed to be finite and simple, unless otherwise specified. We set up the notation
below, where G, H, K are groups, M is a subgroup of G, n and m are positive
integers, p is a prime number, and q is a power of p (so that q is called a p-power).
Z(G) | center of G
rad(G) | solvable radical (the largest solvable normal subgroup) of G
soc(G) | socle (product of the minimal normal subgroups) of G
G^(∞) | = ∩_{i=1}^∞ G^(i)
CG(M) | centralizer of M in G
NG(M) | normalizer of M in G
Op(G) | largest normal p-subgroup of G
[G:M] | set of right cosets of M in G
G ◦ H | a central product of G and H
G:H | a split extension of G by H
G.H | an extension of G by H
G:H:K | a group of both type G:(H:K) and (G:H):K
G:H.K | a group of type G:(H.K) (this is automatically of type (G:H).K)
G.H.K | a group of type G.(H.K) (this is automatically of type (G.H).K)
Sn | symmetric group of degree n (naturally acting on {1, 2, . . . , n})
An | alternating group of degree n (naturally acting on {1, 2, . . . , n})
Cn | cyclic group of order n (sometimes just denoted by n)
D2n | dihedral group of order 2n
[n] | an unspecified group of order n
n_p | p-part of n (the largest p-power that divides n)
n_{p′} | = n/n_p
p^n | elementary abelian group of order p^n
p^{n+m} | = p^n.p^m
p^{1+2n}_+ | extraspecial group of order p^{1+2n} with exponent p when p is odd
p^{1+2n}_− | extraspecial group of order p^{1+2n} with exponent p^2 when p is odd
GF(q) | finite field of q elements
ΓLn(q) | group of semilinear bijections on GF(q)^n
GLn(q) | general linear group on GF(q)^n
SLn(q) | special linear group on GF(q)^n
PΓLn(q) | = ΓLn(q)/Z(GLn(q))
PGLn(q) | projective general linear group on GF(q)^n
PSLn(q) | projective special linear group on GF(q)^n
Sp2n(q) | symplectic group on GF(q)^{2n}
PSp2n(q) | projective symplectic group on GF(q)^{2n}
GUn(q) | general unitary group on GF(q^2)^n
SUn(q) | special unitary group on GF(q^2)^n
PGUn(q) | projective general unitary group on GF(q^2)^n
PSUn(q) | projective special unitary group on GF(q^2)^n
O2n+1(q) | general orthogonal group on GF(q)^{2n+1}
O2n^+(q) | general orthogonal group on GF(q)^{2n} with Witt index n
O2n^−(q) | general orthogonal group on GF(q)^{2n} with Witt index n − 1
SO2n+1(q) | special orthogonal group on GF(q)^{2n+1}
SO2n^+(q) | special orthogonal group on GF(q)^{2n} with Witt index n
SO2n^−(q) | special orthogonal group on GF(q)^{2n} with Witt index n − 1
Ω2n+1(q) | derived subgroup of SO2n+1(q)
Ω2n^+(q) | derived subgroup of SO2n^+(q)
Ω2n^−(q) | derived subgroup of SO2n^−(q)
PSO2n+1(q) | projective special orthogonal group on GF(q)^{2n+1}
PSO2n^+(q) | projective special orthogonal group on GF(q)^{2n} with Witt index n
PSO2n^−(q) | projective special orthogonal group on GF(q)^{2n} with Witt index n − 1
PΩ2n+1(q) | derived subgroup of PSO2n+1(q)
PΩ2n^+(q) | derived subgroup of PSO2n^+(q)
PΩ2n^−(q) | derived subgroup of PSO2n^−(q)
Kn | complete graph on n vertices
Kn,m | complete bipartite graph with parts of sizes n and m
Let T be a classical linear group on V with center Z such that T /Z is a classical
simple group, and X be a subgroup of GL(V ) containing T as a normal subgroup.
Then for any subgroup Y of X, denote by ˆY the subgroup (Y ∩ T )Z/Z of T /Z.
For example, if X = GLn(q) and Y = GL1(q^n):n ≤ X (see Section 3.1), then ˆY = ((q^n − 1)/((q − 1)(n, q − 1))):n, as in the third column of row 1 in Table 1.1.
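For instance (a numerical illustration of ours): with n = 2 and q = 5, the cyclic part of ˆY has order 3, so ˆY = 3:2, the dihedral normalizer of a nonsplit torus in PSL2(5) ≅ A5.

# Arithmetic check (ours) of the order of the cyclic part of
# ^Y = ^GL_1(q^n):n, namely (q^n - 1)/((q - 1) * gcd(n, q - 1)).
from math import gcd

n, q = 2, 5
print((q**n - 1) // ((q - 1) * gcd(n, q - 1)))  # 3, so ^Y = 3:2 in PSL_2(5)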
If G is a linear group, define Pk [G] to be the stabilizer of a k-space in G. For
the rest of this section, let G be a symplectic, unitary or orthogonal group. If G
is transitive on the totally singular k-spaces, then define Pk [G] to be the stabilizer
of a totally singular k-space in G. If G is not transitive on the totally singular
k-spaces (so that G is an orthogonal group of dimension 2k with Witt index k, see
[42, 2.2.4]), then G has precisely two orbits on them and
(i) define Pk−1 [G], Pk [G] to be the stabilizers of totally singular k-spaces in the
two different orbits of G;
(ii) define Pk−1,k [G] to be the stabilizer of a totally singular (k − 1)-space in G;
(iii) define P1,k−1 [G] to be the intersection P1 [G] ∩ Pk−1 [G], where the 1-space
stabilized by P1 [G] lies in the k-space stabilized by Pk−1 [G];
(iv) define P1,k [G] to be the intersection P1 [G] ∩ Pk [G], where the 1-space stabilized by P1 [G] lies in the k-space stabilized by Pk [G].
Let W be a non-degenerate k-space. We define
(i) N_k[G] = G_W if either G is symplectic or unitary, or G is orthogonal of even dimension with k odd;
(ii) N^ε_k[G] = G_W for ε = ± if G is orthogonal and W has type ε;
(iii) N^ε_k[G] = G_W for ε = ± if G is orthogonal of odd dimension and W^⊥ has type ε.
For the groups Pk[G], Pi,j[G], Nk[G], N^−_k[G] and N^+_k[G] defined above, we will simply write Pk, Pi,j, Nk, N^−_k and N^+_k, respectively, if the classical group G is clear from the context.
We shall employ the families C1–C8 of subgroups of classical groups defined in [2]; see [34, 33] for their group structure and [34, 7] for the determination of their maximality in classical simple groups. The subgroups Pk, Nk, N^+_k and N^−_k defined in the previous paragraph are actually C1 subgroups of classical groups.
2.2. Results on finite simple groups
This section contains some information about the finite simple groups which is
needed in the sequel.
2.2.1. Classification of the finite simple groups. Let q be a prime power.
By the classification of finite simple groups (CFSG), the finite nonabelian simple
groups are:
(i) An with n ≥ 5;
(ii) classical groups of Lie type:
PSLn(q) with n ≥ 2 and (n, q) ≠ (2, 2) or (2, 3),
PSp2m(q) with m ≥ 2 and (m, q) ≠ (2, 2),
PSUn(q) with n ≥ 3 and (n, q) ≠ (3, 2),
Ω2m+1(q) with m ≥ 3 and q odd,
PΩ2m^+(q) with m ≥ 4, PΩ2m^−(q) with m ≥ 4;
(iii) exceptional groups of Lie type:
G2(q) with q ≥ 3, F4(q), E6(q), E7(q), E8(q),
^2B2(2^{2c+1}) = Sz(2^{2c+1}) with c ≥ 1, ^2G2(3^{2c+1}) with c ≥ 1,
^2F4(2^{2c+1}) with c ≥ 1, ^2F4(2)′, ^3D4(q), ^2E6(q);
(iv) 26 sporadic simple groups.
Furthermore, the only repetitions among (i)–(iv) are:
PSL2(4) ≅ PSL2(5) ≅ A5, PSL3(2) ≅ PSL2(7), PSL2(9) ≅ A6, PSL4(2) ≅ A8, PSU4(2) ≅ PSp4(3).
For the orders of the groups listed in (ii)–(iv), see [42, TABLE 2.1].
Remark 2.1. We note that
PSp4(2) ≅ S6, G2(2) ≅ PSU3(3):2, ^2G2(3) ≅ PΓL2(8),
and the groups PSL2(2), PSL2(3), PSU3(2), ^2B2(2) are all solvable.
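A quick order computation (ours, not from the paper; the order formula for Sp2m(q) is standard) is consistent with the first isomorphism above:

# |Sp_4(2)| versus |S_6| = 720, consistent with PSp_4(2) = Sp_4(2) ≅ S_6.
from math import factorial, prod

def sp_order(m, q):
    # |Sp_{2m}(q)| = q^{m^2} * prod_{i=1}^{m} (q^{2i} - 1)
    return q**(m * m) * prod(q**(2 * i) - 1 for i in range(1, m + 1))

print(sp_order(2, 2), factorial(6))  # 720 720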
2.2.2. Outer automorphism group. As a consequence of CFSG, the outer
automorphism groups of finite simple groups are explicitly known. In particular,
the Schreier conjecture is true, that is, the outer automorphism group of every finite
simple group is solvable. We collect some information for the outer automorphism
groups of simple groups of Lie type in the following two tables, where q = pf with
p prime. For the presentations of outer automorphism groups of classical simple
groups, see [7, 1.7].
Table 2.1.

L | Out(L) | d
PSL2(q) | Cd × Cf | (2, q − 1)
PSLn(q), n ≥ 3 | Cd:(C2 × Cf) | (n, q − 1)
PSUn(q), n ≥ 3 | Cd:C2f | (n, q + 1)
PSp2m(q), (m, p) ≠ (2, 2) | Cd × Cf | (2, q − 1)
PSp4(q), q even | C2f | 1
Ω2m+1(q), m ≥ 3, q odd | C2 × Cf | 1
PΩ2m^−(q), m ≥ 4, q^m ≢ 3 (mod 4) | Cd × C2f | (2, q − 1)
PΩ2m^−(q), m ≥ 4, q^m ≡ 3 (mod 4) | D8 × Cf | 4
PΩ2m^+(q), m ≥ 5, q^m ≢ 1 (mod 4) | C2 × Cd × Cf | (2, q − 1)
PΩ2m^+(q), m ≥ 5, q^m ≡ 1 (mod 4) | D8 × Cf | 4
PΩ8^+(q) | Sd × Cf | 2 + (2, q − 1)
Table 2.2.

L | |Out(L)| | d
G2(q), q ≥ 3 | (3, p)f | 1
F4(q) | (2, p)f | 1
E6(q) | 2df | (3, q − 1)
E7(q) | df | (2, q − 1)
E8(q) | f | 1
²B2(q), q = 2^{2c+1} ≥ 2^3 | f | 1
²G2(q), q = 3^{2c+1} ≥ 3^3 | f | 1
²F4(q), q = 2^{2c+1} ≥ 2^3 | f | 1
²F4(2)′ | 2 | 1
³D4(q) | 3f | 1
²E6(q) | 2df | (3, q + 1)
Let a and m be positive integers. A prime number r is called a primitive prime divisor of the pair (a, m) if r divides a^m − 1 but does not divide a^ℓ − 1 for any positive integer ℓ < m. We will simply say that r is a primitive prime divisor of a^m − 1 when a is prime.
Lemma 2.2. A prime number r is a primitive prime divisor of (a, m) if and only if a has order m in GF(r)^×. In particular, if r is a primitive prime divisor of (a, m), then m divides r − 1 and r > m.
The following theorem of Zsigmondy shows exactly when primitive prime divisors exist.
Theorem 2.3. (Zsigmondy, see [3, Theorem IX.8.3]) Let a and m be integers greater than 1. Then (a, m) has a primitive prime divisor except for (a, m) = (2, 6) or (2^k − 1, 2) with some positive integer k.
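For the small parameters arising in this memoir, primitive prime divisors can be found by brute force. The following Python sketch (an illustration only; the computations cited in this memoir are carried out in Magma) implements the definition directly and checks the two exceptional families of Theorem 2.3.

```python
# A brute-force computation of primitive prime divisors of (a, m),
# directly from the definition; an illustration only.
from sympy import primefactors

def primitive_prime_divisors(a, m):
    """Primes r with r | a^m - 1 but r not dividing a^l - 1 for 0 < l < m."""
    return [r for r in primefactors(a**m - 1)
            if all((a**l - 1) % r != 0 for l in range(1, m))]

# The exceptions in Zsigmondy's theorem have no primitive prime divisor:
assert primitive_prime_divisors(2, 6) == []      # (a, m) = (2, 6)
assert primitive_prime_divisors(7, 2) == []      # (a, m) = (2^3 - 1, 2)
# A typical case, illustrating Lemma 2.2: m divides r - 1 and r > m.
print(primitive_prime_divisors(2, 10))           # [11], and 10 | 11 - 1
```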
Checking the orders of the outer automorphism groups of the finite simple groups in view of Lemma 2.2, one obtains the following consequence; see [42, p. 38, PROPOSITION B].
Lemma 2.4. Let L be a simple group of Lie type over GF(q), where q = p^f with p prime, and let m ≥ 3. If (q, m) ≠ (2, 6) or (4, 3), then no primitive prime divisor of p^{fm} − 1 divides |Out(L)|.
2.2.3. Minimal index of proper subgroups. A group G is said to be perfect
if G′ = G. A group G is said to be quasisimple if G is perfect and G/Z(G) is
nonabelian simple.
Lemma 2.5. Let P(X) denote the smallest index of proper subgroups of an arbitrary group X. Then the following statements hold.
(a) If G is almost simple with socle L, then |G|/|H| ≥ P(L) for any core-free subgroup H of G.
(b) If G is quasisimple with center Z, then P(G/Z) = P(G).
(c) If G is a permutation group on n points and N is a normal subgroup of G, then P(G/N) ≤ n.
(d) If H and K are subgroups of G such that |G|/|K| < P(H), then H ≤ K.
Proof. The proofs of parts (a) and (b) are fairly easy, so we omit them. To prove part (c), use induction on n. When n = 1, the conclusion holds trivially. Let K ≤ G be the stabilizer of an arbitrary point. Evidently, |G|/|K| ≤ n, and K is a permutation group on n − 1 points. As K/(K ∩ N) ≅ KN/N, we see that K/(K ∩ N) is isomorphic to a subgroup H of G/N. If H ≠ G/N, then
P(G/N) ≤ |G/N|/|H| = |G||K ∩ N|/(|N||K|) ≤ |G|/|K| ≤ n.
If H = G/N, then P(G/N) = P(K/(K ∩ N)) ≤ n − 1 by the inductive hypothesis. Consequently, part (c) is true.
It remains to prove part (d). Suppose on the contrary that H ≰ K. Then H ∩ K is a proper subgroup of H, and so |H|/|H ∩ K| ≥ P(H). This yields the contradiction
P(H) > |G|/|K| ≥ |HK|/|K| = |H|/|H ∩ K| ≥ P(H).
Thereby we have H ≤ K.
A list of the smallest indices of proper subgroups of the classical simple groups was obtained by Cooperstein [11]. In fact, the smallest index of proper subgroups of a classical simple group follows immediately from the classification of its maximal subgroups. It is stated in the following theorem, cited from [34, Theorem 5.2.2], which also points out two errors in Cooperstein's list.
Theorem 2.6. The smallest index P (L) of proper subgroups of a classical simple
group L is as in Table 2.3.
Table 2.3.

L | P(L)
PSLn(q), (n, q) ≠ (2, 5), (2, 7), (2, 9), (2, 11), (4, 2) | (q^n − 1)/(q − 1)
PSL2(5), PSL2(7), PSL2(9), PSL2(11), PSL4(2) | 5, 7, 6, 11, 8
PSp2m(q), m ≥ 2, q > 2, (m, q) ≠ (2, 3) | (q^{2m} − 1)/(q − 1)
Sp2m(2), m ≥ 3 | 2^{m−1}(2^m − 1)
PSp4(3) | 27
Ω2m+1(q), m ≥ 3, q ≥ 5 odd | (q^{2m} − 1)/(q − 1)
Ω2m+1(3), m ≥ 3 | 3^m(3^m − 1)/2
PΩ^+_{2m}(q), m ≥ 4, q ≥ 3 | (q^m − 1)(q^{m−1} + 1)/(q − 1)
PΩ^+_{2m}(2), m ≥ 4 | 2^{m−1}(2^m − 1)
PΩ^−_{2m}(q), m ≥ 4 | (q^m + 1)(q^{m−1} − 1)/(q − 1)
PSU3(q), q ≠ 5 | q^3 + 1
PSU3(5) | 50
PSU4(q) | (q + 1)(q^3 + 1)
PSUn(q), n ≥ 5, (n, q) ≠ (6m, 2) | (q^n − (−1)^n)(q^{n−1} − (−1)^{n−1})/(q^2 − 1)
PSUn(2), n ≡ 0 (mod 6) | 2^{n−1}(2^n − 1)/3
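For instance, the generic value in the first row of Table 2.3 is the number of points of the projective space, and the second row records the finitely many exceptions; the following Python lines (a hedged illustration only) evaluate the formula.

```python
# Evaluating the first row of Table 2.3: for PSL_n(q) the generic value of
# P(L) is the number of 1-spaces of GF(q)^n. (Illustration only.)
def p_psl(n, q):
    return (q**n - 1) // (q - 1)

print(p_psl(3, 4))   # 21
print(p_psl(2, 7))   # 8, while Table 2.3 records the exceptional value P(PSL2(7)) = 7
```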
2.3. Elementary facts concerning factorizations
We first give several equivalent conditions for a factorization.
Lemma 2.7. Let H, K be subgroups of G. Then the following are equivalent.
(a) G = HK.
(b) G = H^x K^y for any x, y ∈ G.
(c) |H ∩ K||G| = |H||K|.
(d) |G| ≤ |H||K|/|H ∩ K|.
(e) H acts transitively on [G:K] by right multiplication.
(f) K acts transitively on [G:H] by right multiplication.
Due to part (b) of Lemma 2.7, we will consider conjugacy classes of subgroups when studying factorizations of a group. Given a group G and subgroups H, K, in order to decide whether G = HK holds we only need to compute the orders of G, H, K and H ∩ K by part (c) or (d) of Lemma 2.7. This is usually easier than checking the equality G = HK directly; see our Magma codes in APPENDIX B. Utilizing the equivalent conditions (e) and (f) in Lemma 2.7, one can construct factorizations in terms of permutation groups.
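As a toy illustration of condition (e) (our actual computations are in Magma; here we use Python/SymPy instead), take K = Sn−1, the stabilizer of a point in G = Sn; then [G:K] may be identified with the n points, so G = HK if and only if H is transitive.

```python
# Lemma 2.7(e) in action for G = S_n and K = S_{n-1} (a point stabilizer):
# S_n = H * S_{n-1} if and only if H is transitive on the n points.
from sympy.combinatorics import Permutation, PermutationGroup

n = 5
H = PermutationGroup([Permutation(list(range(1, n)) + [0])])  # C_5 = <(0 1 2 3 4)>
print(H.is_transitive())  # True, hence S_5 = H * S_4, as in Example 2.8(a)
```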
Example 2.8. According to Lemma 2.7, each k-homogeneous permutation group H of degree n gives rise to a factorization Sn = H(Sk × Sn−k).
(a) Each transitive permutation group H of degree n gives rise to a factorization Sn = HSn−1. Since a group is transitive in its regular permutation representation, we see that each group is a factor of some symmetric group.
(b) Each 2-homogeneous permutation group H of degree n gives rise to a factorization Sn = H(S2 × Sn−2). The solvable 2-homogeneous groups H of degree n ≥ 5 exist only for prime power n, and each is either a subgroup of AΓL1(n) or one of the following (see [31] and [4, Theorem XII.7.3]):

H | n
5^2:SL2(3), 5^2:Q8.6, 5^2:SL2(3).4 | 5^2
7^2:Q8.S3, 7^2:SL2(3).6 | 7^2
11^2:SL2(3).5, 11^2:SL2(3).10 | 11^2
23^2:SL2(3).22 | 23^2
3^4:2^{1+4}.5, 3^4:2^{1+4}.D10, 3^4:2^{1+4}.AGL1(5) | 3^4

(c) There are only three solvable 3-homogeneous groups of degree n ≥ 5, namely AGL1(8) and AΓL1(8) with n = 8, and AΓL1(32) with n = 32. Each of them is a factor of Sn with the other factor being S3 × Sn−3.
See the proof of Proposition 4.3 for a more comprehensive treatment of these factorizations.
The following simple lemma will be used repeatedly in subsequent chapters.
Lemma 2.9. Let H, K be subgroups of G and L be a normal subgroup of G. If
G = HK, then we have the following divisibilities.
(a) |G| divides |H||K|.
(b) |G| divides |H ∩ L||K||G/L|.
(c) |L| divides |H ∩ L||K|.
(d) |L| divides |H ∩ L||K ∩ L||G/L|.
Proof. It derives from G = HK that |H ∩ K||G| = |H||K| and thus statement
(a) holds. Then since |H| = |H ∩ L||HL/L| divides |H ∩ L||G/L|, we conclude that
|G| divides |H ∩ L||K||G/L|. Hence statement (b) holds, which is equivalent to (c).
Since |K| = |K ∩ L||KL/L| divides |K ∩ L||G/L|, we then further deduce statement
(d).
In the remainder of this section, we present some lemmas relating factorizations
of a group and those of its subgroups.
Lemma 2.10. Let H, K and M be subgroups of G. If G = HK, then M = (H ∩ M)(K ∩ M) if and only if |HM||KM| ≤ |G||(H ∩ K)M|. In particular, if G = HK and H ≤ M, then M = H(K ∩ M).
Proof. By Lemma 2.7, M = (H ∩ M)(K ∩ M) if and only if
(2.1) |M| ≤ |H ∩ M||K ∩ M|/|H ∩ K ∩ M|.
Substituting |H ∩ M| = |H||M|/|HM|, |K ∩ M| = |K||M|/|KM| and |H ∩ K ∩ M| = |H ∩ K||M|/|(H ∩ K)M| into (2.1), we obtain
|HM||KM||H ∩ K| ≤ |H||K||(H ∩ K)M|.
Since |H||K|/|H ∩ K| = |G| in view of G = HK, the above inequality turns out to be |HM||KM| ≤ |G||(H ∩ K)M|. This proves the lemma.
Lemma 2.11. Let K, M be subgroups of G and let H be a subgroup of M. If M = H(K ∩ M), then G = HK if and only if G = MK.
Proof. If G = HK, then H ≤ M implies G = MK. If G = MK, then
G = (H(K ∩ M))K = H((K ∩ M)K) = HK.
The above two lemmas enable us to construct new factorizations from given ones.
Example 2.12. Let G = PSL4(3), M = PSp4(3).2 < G and K = P1 = 3^3:PSL3(3). Then we have G = MK and K ∩ M = 3^3:(S4 × 2) by [42, 3.1.1]. Since the almost simple group M = PSp4(3).2 has a factorization M = H(K ∩ M) with H = 2^4:AGL1(5) (see row 11 of Table 4.1), there holds G = HK by Lemma 2.11. This factorization is as described in row 6 of Table 1.2.
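The order computation behind Example 2.12 can be checked by hand, or by the short Python lines below (an illustration of Lemma 2.7(c); the subgroup computation itself is done in Magma): the product of the orders of H and K is exactly 8|G|, so G = HK amounts to |H ∩ K| = 8.

```python
# Consistency check for Example 2.12 via Lemma 2.7(c): if G = HK then
# |H ∩ K| = |H||K|/|G|, which must at least be a positive integer.
G = 6065280           # |PSL4(3)|
H = 2**4 * 20         # |2^4:AGL1(5)|, with |AGL1(5)| = 20
K = 3**3 * 5616       # |3^3:PSL3(3)|, with |PSL3(3)| = 5616
assert (H * K) % G == 0
print(H * K // G)     # 8, the required order of H ∩ K
```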
Example 2.13. Let G = PΩ^+_8(3), M = P4 = 3^6:PSL4(3) and K = N1 = Ω7(3). Then we have G = MK and K ∩ M = 3^{3+3}:PSL3(3) by [42, 5.1.15]. Write M = R:S, where R = C_3^6 and S = PSL4(3). As shown in Example 2.12, S has a factorization S = H1K1 with H1 = 2^4:AGL1(5) and K1 = 3^3:PSL3(3) < K ∩ M. Let H = R:H1 < M. It follows that
M = RS = RH1K1 = HK1 = H(K ∩ M),
and thus G = HK by Lemma 2.11. This factorization is as described in row 27 of Table 1.2.
2.4. Maximal factorizations of almost simple groups
The nontrivial maximal factorizations of almost simple groups are classified by Liebeck, Praeger and Saxl [42]. According to [42, THEOREM A], any nontrivial maximal factorization G = AB of an almost simple group G with socle L classical of Lie type lies in TABLEs 1–4 of [42]. In TABLE 1 of [42], the maximal subgroups A and B are given by some natural subgroups XA and XB of A ∩ L and B ∩ L respectively such that A = NG(XA) and B = NG(XB). In fact, the explicit group structures of A ∩ L and B ∩ L in TABLE 1 of [42] can be read off from [34], which gives the following lemma.
Lemma 2.14. Let G be an almost simple group with socle L classical of Lie type.
If G = AB is a nontrivial maximal factorization as described in TABLE 1 of [42],
then letting XA and XB be as defined in TABLE 1 of [42], we have one of the
following.
(a) A ∩ L = XA and B ∩ L = XB .
(b) L = PSLn(q) with n ≥ 4 even, B ∩ L = XB, and A ∩ L = PSpn(q).a where a = (2, q − 1)(n/2, q − 1)/(n, q − 1).
(c) L = PSU2m(q) with m ≥ 2, A ∩ L = XA, and B ∩ L = PSp2m(q).a where a = (2, q − 1)(m, q + 1)/(2m, q + 1).
(d) L = PΩ^+_{2m}(q) with m ≥ 6 even and q > 2, A ∩ L = XA, and B ∩ L = (PSp2(q) ⊗ PSpm(q)).a where a = gcd(2, m/2, q − 1).
In particular, |A ∩ L|/|XA| ≤ 2 and |B ∩ L|/|XB| ≤ 2.
In light of Lemma 2.14, we restate THEOREM A of [42] as follows.
Theorem 2.15. (Liebeck, Praeger and Saxl) Let G be an almost simple group
with socle L classical of Lie type not isomorphic to A5 , A6 or A8 . If G = AB is a
nontrivial maximal factorization of G, then interchanging A and B if necessary, the
triple (L, A ∩ L, B ∩ L) lies in Tables A.1–A.7 in APPENDIX A.
For a group G, the solvable radical of G, denoted by rad(G), is the product of
all the solvable normal subgroups of G.
Lemma 2.16. If a group G has precisely one (counting multiplicity) unsolvable composition factor, then G/rad(G) is almost simple.
Proof. Let R = rad(G) be the solvable radical of G. If G/R has an abelian
minimal normal subgroup N say, then the full preimage of N is solvable and normal
in G, which is a contradiction since R is the largest solvable normal subgroup of
G. Thus each minimal normal subgroup of G/R is nonabelian. As G has only one
unsolvable composition factor, so does G/R. Therefore, G/R has only one minimal
normal subgroup and the minimal normal subgroup is nonabelian simple, which
shows that G/R is almost simple.
For a group G, let G^{(∞)} = ⋂_{i=1}^∞ G^{(i)} be the first perfect group in the derived series of G. Obviously, G is solvable if and only if G^{(∞)} = 1. In fact, G^{(∞)} is the smallest normal subgroup of G such that G/G^{(∞)} is solvable.
The following proposition plays a fundamental role in our further analysis.
Proposition 2.17. Suppose G is an almost simple group with socle classical of
Lie type, and G = AB with subgroups A, B maximal and core-free in G. If A has
exactly one unsolvable composition factor and R = rad(A), then (A ∩ B)R/R is
core-free in A/R.
Proof. Let S be the unique unsolvable composition factor of A. By Lemma 2.16, A/R is an almost simple group with socle S. Suppose that (A ∩ B)R/R contains soc(A/R) = (A/R)^{(∞)} = A^{(∞)}R/R. Then (A ∩ B)R ≥ A^{(∞)}R. From G = AB we deduce that |G|/|B| = |A|/|A ∩ B| divides |A||R|/|(A ∩ B)R|. Hence |G|/|B| divides
|A||R|/|A^{(∞)}R| = |A||A^{(∞)} ∩ R|/|A^{(∞)}| = |A|/|A^{(∞)}/rad(A^{(∞)})|.
Since A^{(∞)}/rad(A^{(∞)}) is also an almost simple group with socle S, this implies that |G|/|B| divides |A|/|S|. However, inspecting the candidates in Tables A.1–A.7, we conclude that the factorization G = AB does not satisfy this divisibility condition. Thus (A ∩ B)R/R does not contain soc(A/R), that is, (A ∩ B)R/R is core-free in A/R.
In order to apply the classification of maximal factorizations to investigate the general factorizations of an almost simple group G, we need to embed a nontrivial factorization G = HK into a maximal factorization G = AB. This can easily be accomplished by taking arbitrary maximal subgroups A, B of G containing H, K respectively. However, such maximal subgroups A and B are not necessarily core-free.
Lemma 2.18. Suppose that G is an almost simple group with socle L and G has a nontrivial factorization G = HK. Then the following statements hold.
(a) HL = KL = G if and only if, for any maximal subgroups A, B of G containing H, K respectively, A, B are both core-free.
(b) There exist L ⊴ G* ≤ G and a factorization G* = H*K* of G* such that H* ∩ L = H ∩ L, K* ∩ L = K ∩ L and H*L = K*L = G*.
Proof. Assume that HL = KL = G. For any maximal subgroup A of G containing H, since A ≥ L would lead to the contradiction A ≥ HL = G, we know that A is core-free in G. Similarly, any maximal subgroup B of G containing K is core-free in G.
Conversely, assume that any maximal subgroups of G containing H, K, respectively, are core-free in G. If HL < G, then a maximal subgroup of G containing HL (and thus containing H) is not core-free in G, contrary to the assumption. Hence HL = G, and similarly we have KL = G. Therefore, part (a) is true.
For part (b), take G* = HL ∩ KL, H* = H ∩ G* and K* = K ∩ G*. Then L ⊴ G* ≤ G, H* ∩ L = H ∩ L and K* ∩ L = K ∩ L. It follows that H* and K* are both core-free in G* since H and K are both core-free in G. By [44, Lemma 2(i)], we have G* = H*K* and H*L = K*L = G*. Thus G* = H*K* is a factorization satisfying part (b).
Remark 2.19. For a nontrivial factorization G = HK, if we are concerned with
H ∩soc(G) and K ∩soc(G) instead of H and K (such as in proving Theorem 1.1(d)),
then Lemma 2.18 allows us to assume that any maximal subgroups of G containing
H, K, respectively, are core-free in G.
CHAPTER 3
The factorizations of linear and unitary groups of prime dimension
We classify factorizations of almost simple linear and unitary groups of prime
dimension in this chapter. It turns out that all these factorizations have at least one
solvable factor unless the socle is PSL3 (4).
3.1. Singer cycles
Let n = ab ≥ 2 and V = GF(q^n). Then V may be viewed as an n-dimensional vector space over GF(q) and an a-dimensional vector space over GF(q^b). Let g be a GF(q^b)-linear transformation on V. This means that, by definition, g is a bijection on V satisfying
(u + v)^g = u^g + v^g for any u, v ∈ V, and (λv)^g = λ(v^g) for any λ ∈ GF(q^b).
Since GF(q) ⊆ GF(q^b), we see from the above conditions that g is also a GF(q)-linear transformation on V. Therefore, GLa(q^b) ≤ GLn(q). In fact, for any g ∈ GLa(q^b) and σ ∈ Gal(GF(q^b)/GF(q)), it is direct to check by definition that σ^{−1}gσ is still a GF(q^b)-linear transformation. Hence we have
GLa(q^b):Gal(GF(q^b)/GF(q)) = GLa(q^b):b ≤ GLn(q).
Taking a = 1 and b = n in the above argument, one obtains GL1(q^n) < GL1(q^n):n < GLn(q). We call the cyclic group GL1(q^n) and its conjugates in GLn(q) the Singer cycles of GLn(q), and call ˆGL1(q^n) and its conjugates in PGLn(q) the Singer cycles of PGLn(q). Obviously, the group GL1(q^n), consisting of all GF(q^n)-linear transformations of V, is transitive on V \ {0}. It follows that Singer cycles are transitive on the 1-spaces and on the (n − 1)-spaces of V. Accordingly, for k = 1 or n − 1 we obtain the factorizations
GLn(q) = GL1(q^n)Pk[GLn(q)] = (GL1(q^n):n)Pk[GLn(q)]
and
PGLn(q) = ˆGL1(q^n)Pk[PGLn(q)] = (ˆGL1(q^n):n)Pk[PGLn(q)].
It is worth mentioning that ΓL1(q^n) actually has many subgroups which are transitive on V \ {0}; see [17, 18].
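The transitivity of GL1(q^n) on V \ {0} is transparent in a tiny example. The sketch below is an assumption-laden illustration: we realize GF(2^3) via the irreducible polynomial x^3 + x + 1 (a choice made here for illustration) and exhibit multiplication by the field generator x as a GF(2)-linear map whose powers reach every nonzero vector.

```python
# GL1(8) < GL3(2): multiplication by x on GF(2^3) = GF(2)[x]/(x^3 + x + 1),
# encoded on 3-bit integers, is transitive on the 7 nonzero vectors.
MOD = 0b1011                      # x^3 + x + 1

def times_x(v):
    """Multiply the field element v by x and reduce modulo x^3 + x + 1."""
    v <<= 1
    if v & 0b1000:
        v ^= MOD
    return v

orbit, v = [], 1
for _ in range(7):
    orbit.append(v)
    v = times_x(v)
assert sorted(orbit) == list(range(1, 8))   # a single orbit on V \ {0}
print([format(u, "03b") for u in orbit])
```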
We have seen above that there exist almost simple groups G with socle L = PSLn(q) and subgroups H, K such that G = HK, H ∩ L ≤ ˆGL1(q^n):n and K ∩ L ≤ P1 or Pn−1. In the rest of this section, we will show that if such a factorization holds then K ∩ L must contain a large normal subgroup of P1 or Pn−1.
Lemma 3.1. Let G be an almost simple group with socle L = PSLn(q) and (n, q) ≠ (3, 2), (3, 3) or (4, 2). If G = HK for subgroups H and K of G such that H ∩ L ≤ ˆGL1(q^n):n and K ∩ L ≤ P1 or Pn−1, then K < PΓLn(q).
Proof. Suppose K ≰ PΓLn(q), and write d = (n, q − 1). Then n ≥ 3, K involves the graph automorphism of L, and K ∩ L stabilizes a decomposition V = V1 ⊕ Vn−1, where V is an n-dimensional vector space over GF(q) and V1, Vn−1 are a 1-space and an (n − 1)-space, respectively, in V. This implies that |K ∩ L| divides |GLn−1(q)|/d. Moreover, we conclude from H ∩ L ≤ ˆGL1(q^n):n that |H ∩ L| divides n(q^n − 1)/(d(q − 1)). Hence by Lemma 2.9 we obtain
(3.1) q^{n−1} divides 2fn,
and thereby 2^{n−1} ≤ p^{n−1} ≤ p^{f(n−1)}/f ≤ 2n. This yields n ≤ 4, and so n = 3 or 4. However, we deduce q = 2 from (3.1) if n = 3 or 4, contrary to our assumption that (n, q) ≠ (3, 2) or (4, 2). Thus K ≤ PΓLn(q), and further K < PΓLn(q) since K ≠ PΓLn(q).
Lemma 3.2. Let G be an almost simple group with socle L = PSLn(q). If G = HK for subgroups H, K of G such that H ∩ L ≤ ˆGL1(q^n):n and K ∩ L ≤ P1 or Pn−1, then one of the following holds.
(a) q^{n−1}:SLn−1(q) ⊴ K ∩ L.
(b) (n, q) ∈ {(2, 4), (3, 2), (3, 3), (3, 4), (3, 8)} and K is solvable.
Proof. Let q = p^f with p prime. We divide the discussion into two cases according to whether n = 2 or not. Note that if n = 2, part (a) turns out to be C_p^f ⊴ K ∩ L.
Case 1. Assume n = 2, and suppose q ≠ 4. Then q ≥ 5 and K ∩ L ≤ C_p^f:C_{(q−1)/(2,q−1)}.
First suppose f ≤ 2. Then p > 2 as q ≥ 5. It derives from Lemma 2.9 that q divides |K ∩ L|. Hence q:SL1(q) = C_p^f ⊴ K ∩ L, as in part (a) of Lemma 3.2.
Next suppose f ≥ 3. By Zsigmondy's theorem, p^f − 1 has a primitive prime divisor r except for (p, f) = (2, 6). If (p, f) = (2, 6), then 2^4·3·7 divides |K ∩ L| by Lemma 2.9, but the only subgroups of 2^6:63 in PSL2(64) with order divisible by 2^4·3·7 are 2^6:21 and 2^6:63, which leads to 2^6 ⊴ K ∩ L. If (p, f) ≠ (2, 6), then Lemma 2.9 implies that K ∩ L has order divisible by r and thus has an element of order r. Note that Cr does not have any faithful representation over GF(p) of dimension less than f. We then have C_p^f:Cr ⊴ K ∩ L.
Case 2. Assume n ≥ 3, and assume without loss of generality that K ∩ L ≤ P1. Write O = G/L. By Lemma 2.18, we may assume that there exist core-free maximal subgroups A, B of G containing H, K, respectively. Then G = AB with A = ˆGL1(q^n):n.O and B ≤ PΓLn(q) by Lemma 3.1. It follows from Theorem 2.15 that B ∩ L = P1. Thus B = P1.O and O ≤ PΓLn(q)/PSLn(q) = [(n, q − 1)f]. Since |G|/|A| divides |K|, |B|/|K| divides
|A||B|/|G| = |A|(q − 1)/(q^n − 1) = |ˆGL1(q^n)|(q − 1)n|O|/(q^n − 1) = n|O|/(n, q − 1).
This yields that |B|/|K| divides nf. Suppose that (n, q) ≠ (3, 2) or (3, 3). Then B has a unique unsolvable composition factor PSLn−1(q), and B^{(∞)} = (B ∩ L)^{(∞)} = q^{n−1}:SLn−1(q). Let R = rad(B), B̄ = B/R and K̄ = KR/R. It follows from Lemma 2.16 that B̄ is an almost simple group with socle PSLn−1(q). Notice that |B̄|/|K̄| divides |B|/|K| and thus divides nf. Moreover, we conclude from Theorem 2.6 that
either each proper subgroup of PSLn−1(q) has index greater than nf, or (n, q) ∈ {(3, 4), (3, 8), (3, 9)}.
First assume that K̄ is core-free in B̄. Then the observation
|soc(B̄)|/|K̄ ∩ soc(B̄)| = |K̄ soc(B̄)|/|K̄| ≤ |B̄|/|K̄| ≤ nf
implies that (n, q) ∈ {(3, 4), (3, 8), (3, 9)}. If (n, q) = (3, 4) or (3, 8), then K̄ is solvable since it is core-free in PΓL2(q), which implies that K is solvable, as part (b) asserts. If (n, q) = (3, 9), then computation in Magma [6] shows that 3^4:SL2(9) ⊴ K ∩ L, as in part (a) of Lemma 3.2.
Next assume that K̄ ≥ soc(B̄). Since soc(B̄) = B̄^{(∞)}, this yields KR ≥ B^{(∞)}. As a consequence, K ≥ SLn−1(q), which implies K ∩ L ≥ SLn−1(q). Note that B ∩ L = q^{n−1}:ˆGLn−1(q) and SLn−1(q) ≤ ˆGLn−1(q) ≤ B^{(∞)} acts irreducibly on q^{n−1}. We conclude that either K ∩ L ≥ q^{n−1}:SLn−1(q) or K ∩ L ≤ ˆGLn−1(q). However, the latter causes |A|_p|K ∩ L|_p|O|_p < |G|_p, contrary to Lemma 2.9. Thus K ∩ L ≥ q^{n−1}:SLn−1(q), as part (a) asserts.
3.2. Linear groups of prime dimension
Now we determine the factorizations of almost simple groups with socle PSLn(q) for prime dimension n. We exclude the values (n, q) ∈ {(2, 4), (2, 5), (2, 9), (3, 2)} due to the isomorphisms PSL2(4) ≅ PSL2(5) ≅ A5, PSL2(9) ≅ A6 and PSL3(2) ≅ PSL2(7).
Theorem 3.3. Let G be an almost simple group with socle L = PSLn(q), where n is a prime with (n, q) ∉ {(2, 4), (2, 5), (2, 9), (3, 2)}. If G = HK is a nontrivial factorization, then interchanging H and K if necessary, one of the following holds.
(a) H ∩ L ≤ ˆGL1(q^n):n, and q^{n−1}:SLn−1(q) ⊴ K ∩ L ≤ P1 or Pn−1.
(b) n = 2 with q ∈ {7, 11, 16, 19, 23, 29, 59}, or n = 3 with q ∈ {3, 4, 8}, or (n, q) = (5, 2), and (G, H, K) lies in Table 3.1.
Conversely, for each L there exists a factorization G = HK satisfying part (a) with soc(G) = L, and each triple (G, H, K) in Table 3.1 gives a factorization G = HK.
Proof. Suppose that G = HK is a nontrivial factorization. By Lemma 2.18,
there exist a group G∗ with socle L and its factorization G∗ = H ∗ K ∗ such that
H ∗ ∩ L = H ∩ L, K ∗ ∩ L = K ∩ L, and the maximal subgroups A∗ , B ∗ containing
H ∗ , K ∗ respectively are core-free in G∗ . Now G∗ = A∗ B ∗ is determined by Theorem
2.15. Inspecting candidates there and interchanging A∗ and B ∗ if necessary, we
obtain the following possibilities as n is prime.
(i) A∗ ∩ L = ˆGL1 (q n ):n and B ∗ ∩ L = P1 or Pn−1 .
(ii) n = 2 and q ∈ {7, 11, 16, 19, 23, 29, 59} as in rows 5–8 of Table A.1.
(iii) L = PSL3 (4), A∗ ∩ L = PSL2 (7) and B ∗ ∩ L = A6 .
(iv) L = PSL5 (2), A∗ ∩ L = 31.5 and B ∗ ∩ L = P2 or P3 .
Let A, B be maximal core-free subgroups (maximal among the core-free subgroups) of G containing H, K respectively.
Case 1. Assume that (i) appears, which implies H ∩ L = H* ∩ L ≤ ˆGL1(q^n):n and K ∩ L = K* ∩ L ≤ P1 or Pn−1. If (n, q) ∉ {(3, 2), (3, 3), (3, 8)}, then Lemma 3.2
Table 3.1.

row | G | H | K
1 | PSL2(7).O | 7:O, 7:(3 × O) | S4
2 | PSL2(11).O | 11:(5 × O1) | A4.O2
3 | PSL2(11).O | 11:O, 11:(5 × O) | A5
4 | PΓL2(16) | 17:8 | (A5 × 2).2
5 | PSL2(19).O | 19:(9 × O) | A5
6 | PSL2(23).O | 23:(11 × O) | S4
7 | PSL2(29) | 29:7, 29:14 | A5
8 | PGL2(29) | 29:28 | A5
9 | PSL2(59).O | 59:(29 × O) | A5
10 | PSL3(2).2 | 7:6 | 8
11 | PSL3(3).O | 13:(3 × O) | 3^2:ΓL1(9)
12 | PSL3(4).2 | PGL2(7) | M10
13 | PSL3(4).2^2 | PGL2(7) | M10:2
14 | PSL3(4).2^2 | PGL2(7) × 2 | M10
15 | PSL3(4).(S3 × O) | 7:(3 × O).S3 | (2^4.(3 × D10)).2
16 | PSL3(8).(3 × O) | 7^3:(9 × O1) | 2^{3+6}:7^2:(3 × O2)
17 | PSL5(2).O | 31:(5 × O) | 2^6:(S3 × SL3(2))
where O ≤ C2, and O1, O2 are subgroups of O such that O = O1O2.
already leads to part (a). It then remains to treat (n, q) = (3, 3) and (3, 8), respectively.
First consider (n, q) = (3, 3). Since |G|/|A| = 144, we deduce from the factorization G = AK that |K| is divisible by 144. Suppose that part (a) fails, that is, 3^2:SL2(3) ⋬ K ∩ L. Then simple computation in Magma [6] shows that K = AΓL1(9) = 3^2:ΓL1(9) in order that G = AK. Now |K| = 144 = |G|/|A|, and it derives from G = HK that H = A. Thus row 11 of Table 3.1 occurs.
Next consider (n, q) = (3, 8), and suppose that part (a) fails. Then by Lemma 3.2, K is solvable. Besides, H ≤ A is solvable as well. Searching by Magma [6] for the factorizations of G with two solvable factors, we obtain the factorizations in row 16 of Table 3.1.
Case 2. Assume that (ii) appears. In this case, A* ∩ L and B* ∩ L are described in rows 5–8 of Table A.1 as follows:

A* ∩ L | B* ∩ L | q
P1 | A5 | 11, 19, 29, 59
P1 | S4 | 7, 23
P1 | A4 | 11
D34 | PSL2(4) | 16

Arguing in the same vein as in the above case leads to rows 1–9 of Table 3.1. We have also confirmed these factorizations by computation in Magma [6].
Case 3. Assume that (iii) appears. Computation in Magma [6] for this case gives the triples (G, H, K) in rows 12–15 of Table 3.1. We remark that there are H2 ≅ PGL2(7), K2 ≅ M10:2, H3 ≅ PGL2(7) × 2 and K3 ≅ M10 such that PSL3(4):2^2 = H2K2 = H3K3, but PSL3(4):2^2 ≠ H3K2. In fact, the subgroup H1 of H3 isomorphic to PGL2(7) is not conjugate to H2 in PSL3(4):2^2, and the subgroup K1 of K2 isomorphic to M10 is not conjugate to K3 in PSL3(4):2^2.
Case 4. Finally, assume that (iv) appears. In this case, B ≅ P2 ≤ L and |G|/|A| = |B|; refer to [10]. Thus H = A and K = B, as in row 17 of Table 3.1.
Conversely, since the Singer cycle H of G = PGLn(q) is transitive on the 1-spaces and on the (n − 1)-spaces, it gives a factorization G = HK satisfying part (a), where K = P1[PGLn(q)] or Pn−1[PGLn(q)], respectively. Moreover, computation in Magma [6] verifies that each triple (G, H, K) in Table 3.1 gives a factorization G = HK. The proof is thus completed.
As a consequence of Theorem 3.3, we have
Corollary 3.4. Let G be an almost simple linear group of prime dimension.
Then any nontrivial factorization of G has at least one factor solvable, unless soc(G) =
PSL3 (4) and the factorization G = HK is described in rows 12–14 of Table 3.1.
3.3. Unitary groups of prime dimension
The factorizations of unitary groups of prime dimension are classified in the
following theorem.
Theorem 3.5. Let G be an almost simple group with socle L = PSUn (q) for an
odd prime n. If G = HK is a nontrivial factorization, then interchanging H and K
if necessary, (G, H, K) lies in Table 3.2. Conversely, each triple (G, H, K) in Table
3.2 gives a factorization G = HK.
Table 3.2.

row | G | H | K | remark
1 | PSU3(3).O | 3^{1+2}_+:8:O | PSL2(7).O | O ≤ C2
2 | PSU3(3).2 | 3^{1+2}_+:8 | PGL2(7) |
3 | PSU3(5).O | 5^{1+2}_+:8.O | A7 | O ≤ S3
4 | PSU3(5).2 | 5^{1+2}_+:8, 5^{1+2}_+:8:2 | S7 |
5 | PSU3(5).S3 | 5^{1+2}_+:(3:8), 5^{1+2}_+:24:2 | S7 |
6 | PSU3(8).3^2.O | 57:9.O1 | 2^{3+6}:(63:3).O2 | O1O2 = O ≤ C2
Proof. Suppose that G = HK is a nontrivial factorization. By Lemma 2.18, there exist a group G* with socle L and a factorization G* = H*K* such that H* ∩ L = H ∩ L, K* ∩ L = K ∩ L, and the maximal subgroups A*, B* containing H*, K* respectively are core-free in G*. Now G* = A*B* is determined by Theorem 2.15, which shows that, interchanging A* and B* if necessary, the triple (L, A* ∩ L, B* ∩ L) lies in rows 5–7 of Table A.3 as follows:

L | A* ∩ L | B* ∩ L
PSU3(3) | PSL2(7) | P1
PSU3(5) | A7 | P1
PSU3(8) | 19.3 | P1
Computation in Magma [6] for these cases produces all the nontrivial factorizations, as listed in Table 3.2. We remark that there are two non-isomorphic groups of type 3^{1+2}_+:8 for H in row 2 of Table 3.2. Also, there are two non-isomorphic groups of type 5^{1+2}_+:(3:8) for H in row 5 of Table 3.2, where one is 5^{1+2}_+:24 and the other is not.
As a consequence of Theorem 3.5, we have
Corollary 3.6. Let G be an almost simple unitary group of odd prime dimension. Then any nontrivial factorization of G has at least one factor solvable.
CHAPTER 4
Non-classical groups
For non-classical almost simple groups, the factorizations are classified in [25, 42, 21], so we can read off those factorizations which have a solvable factor.
4.1. The case that both factors are solvable
We first classify factorizations of almost simple groups with both factors solvable
based on Kazarin’s result [32], so that we can focus later on the factorizations with
precisely one solvable factor.
Proposition 4.1. Let G be an almost simple group with socle L. If G = HK for solvable subgroups H, K of G, then interchanging H and K if necessary, one of the following holds.
(a) L = PSL2(q), H ∩ L ≤ D_{2(q+1)/d} and q ⊴ K ∩ L ≤ q:((q − 1)/d), where q is a prime power and d = (2, q − 1).
(b) L is one of the groups PSL2(7) ≅ PSL3(2), PSL2(11), PSL3(3), PSL3(4), PSL3(8), PSU3(8), PSU4(2) ≅ PSp4(3) and M11; moreover, (G, H, K) lies in Table 4.1.
Conversely, for each prime power q there exists a factorization G = HK satisfying part (a) with soc(G) = L = PSL2(q), and each triple (G, H, K) in Table 4.1 gives a factorization G = HK.
Table 4.1.

row | G | H | K
1 | PSL2(7).O | 7:O, 7:(3 × O) | S4
2 | PSL2(11).O | 11:(5 × O1) | A4.O2
3 | PSL2(23).O | 23:(11 × O) | S4
4 | PSL3(3).O | 13:O, 13:(3 × O) | 3^2:2.S4
5 | PSL3(3).O | 13:(3 × O) | AΓL1(9)
6 | PSL3(4).(S3 × O) | 7:(3 × O).S3 | 2^4:(3 × D10).2
7 | PSL3(8).(3 × O) | 7^3:(9 × O1) | 2^{3+6}:7^2:(3 × O2)
8 | PSU3(8).3^2.O | 57:9.O1 | 2^{3+6}:(63:3).O2
9 | PSU4(2).O | 2^4:5 | 3^{1+2}_+:2.(A4.O)
10 | PSU4(2).O | 2^4:D10.O1 | 3^{1+2}_+:2.(A4.O2)
11 | PSU4(2).2 | 2^4:5:4 | 3^{1+2}_+:S3, 3^3:(S3 × O), 3^3:(A4 × 2), 3^3:(S4 × O)
12 | M11 | 11:5 | M9.2
where O ≤ C2, and O1, O2 are subgroups of O such that O = O1O2.
Proof. Suppose G = HK for solvable subgroups H, K of G. By the result of [32], either L = PSL2(q), or L is one of the groups
PSU3(8), PSU4(2) ≅ PSp4(3), PSL4(2), M11, PSL3(q) with q = 3, 4, 5, 7, 8.
For the latter case, computation in Magma [6] excludes PSL3(5) and PSL3(7), and produces for the other groups all the factorizations G = HK with H, K solvable, as in rows 4–12 of Table 4.1. Next we assume the former case, namely, L = PSL2(q). If L = PSL2(4) ≅ PSL2(5) ≅ A5 or L = PSL2(9) ≅ A6, it is easy to see that part (a) holds. Thus we may assume L = PSL2(q) with q ∉ {4, 5, 9}. Appealing to Theorem 3.3, we have the factorization G = HK as described in either part (a) or the first three rows of Table 4.1. This completes the proof.
4.2. Exceptional groups of Lie type
Consulting the classification [25] of factorizations of exceptional groups of Lie
type, one obtains the following proposition.
Proposition 4.2. Suppose G is an almost simple group with socle L and G =
HK with core-free subgroups H, K of G. If H is solvable, then L is not an exceptional
group of Lie type.
4.3. Alternating group socles
In this section we study the case of alternating group socle.
Proposition 4.3. Suppose n ≥ 5 and soc(G) = An, acting naturally on Ωn = {1, . . . , n}. If G = HK for a solvable subgroup H and a core-free unsolvable subgroup K of G, then one of the following holds.
(a) An ⊴ G ≤ Sn with n ≥ 6, H is transitive on Ωn, and An−1 ⊴ K ≤ Sn−1.
(b) An ⊴ G ≤ Sn with n = p^f for some prime p, H is 2-homogeneous on Ωn, and An−2 ⊴ K ≤ Sn−2 × S2; moreover, either H ≤ AΓL1(p^f) or (H, n) lies in the table:

H | n
5^2:SL2(3), 5^2:Q8.6, 5^2:SL2(3).4 | 5^2
7^2:Q8.S3, 7^2:SL2(3).6 | 7^2
11^2:SL2(3).5, 11^2:SL2(3).10 | 11^2
23^2:SL2(3).22 | 23^2
3^4:2^{1+4}.5, 3^4:2^{1+4}.D10, 3^4:2^{1+4}.AGL1(5) | 3^4

(c) An ⊴ G ≤ Sn with n = 8 or 32, An−3 ⊴ K ≤ Sn−3 × S3, and (H, n) = (AGL1(8), 8), (AΓL1(8), 8) or (AΓL1(32), 32).
(d) A6 ⊴ G ≤ S6, H ≤ S4 × S2, and K = PSL2(5) or PGL2(5).
(e) A6 ⊴ G ≤ S6, H ≤ S3 ≀ S2, and K = PSL2(5) or PGL2(5).
(f) n = 6 or 8, and (G, H, K) lies in Table 4.2.
Conversely, for each G in parts (a)–(e) there exists a factorization G = HK as described, and each triple (G, H, K) in Table 4.2 gives a factorization G = HK.
Table 4.2.

row | G | H | K
1 | M10 | 3^2:Q8 | PSL2(5)
2 | PGL2(9) | AGL1(9) | PSL2(5)
3 | PΓL2(9) | AΓL1(9) | PSL2(5)
4 | PΓL2(9) | AGL1(9), 3^2:Q8, AΓL1(9) | PGL2(5)
5 | A8 | 15, 3 × D10, ΓL1(16) | AGL3(2)
6 | S8 | D30, S3 × 5, S3 × D10, 3 × AGL1(5), S3 × AGL1(5) | AGL3(2)
Proof. Since all core-free subgroups of S5 are solvable, we obtain n ≥ 6 by our assumption that K is unsolvable. If n = 6 and G ≰ S6, then computation in Magma [6] leads to rows 1–4 of Table 4.2. Thus we assume G ≤ Sn with n ≥ 6 in the rest of the proof. By THEOREM D of [42] and Remark 2 after it, one of the following cases appears.
(i) H is k-homogeneous on Ωn and An−k ≤ K ≤ Sn−k × Sk for some k ∈ {1, 2, 3, 4, 5}.
(ii) An−k ≤ H ≤ Sn−k × Sk and K is k-homogeneous on Ωn for some k ∈ {1, 2, 3, 4, 5}.
(iii) n = 6, H ∩ A6 ≤ S3 ≀ S2 and K ∩ A6 = PSL2(5), with H, K both transitive on Ω6.
(iv) n = 8, H ≥ C15 and K = AGL3(2).
The k-homogeneous but not k-transitive permutation groups are determined by Kantor [31]. Besides, the 2-transitive permutation groups are well known; see for example [8, Tables 7.3–7.4]. Indeed, Huppert [27] classified the solvable 2-transitive permutation groups much earlier; see also [4, Theorem XII.7.3]. From these results, we conclude that any solvable 2-homogeneous permutation group H ≤ Sn is as described in part (b). Moreover, for k ≥ 3 the only solvable k-homogeneous subgroups of Sn are AGL1(8) with (n, k) = (8, 3), AΓL1(8) with (n, k) = (8, 3) and AΓL1(32) with (n, k) = (32, 3).
First suppose that (i) appears. This is just part (a) of Proposition 4.3 if k = 1. If k ≥ 2, then the conclusion in the last paragraph leads to parts (b) and (c).
Next suppose that (ii) appears. Since n ≥ 6 and An−k ≤ H is solvable, k ≠ 1. Assume k = 2. We then have n = 6, as An−2 ≤ H is solvable. Note that the only unsolvable 2-homogeneous subgroups of S6 not containing A6 are PSL2(5) and PGL2(5) (see for example [14, Table 2.1]). This corresponds to part (d). Similarly, we obtain part (e) if k = 3. If k ≥ 4, then An−k ≤ H solvable implies 5 ≤ n ≤ 9, but there are no k-homogeneous permutation groups for such degrees by [31] and [8, Tables 7.3 and 7.4].
Finally, for (iii) and (iv), computation in Magma [6] leads to part (e) and rows 5–6 of Table 4.2. Thus the proof is completed.
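For row 6 of Table 4.2 the orders already tell most of the story: |H||K| = |S8| exactly, so by Lemma 2.7 the factorization holds precisely when H ∩ K = 1. The arithmetic below is a Python illustration only; the subgroup computation itself is done in Magma.

```python
# Order count for row 6 of Table 4.2: S8 = D30 * AGL3(2) is an 'exact'
# factorization, i.e. |H||K| = |G| and H meets K trivially.
from math import factorial

G = factorial(8)        # |S8| = 40320
K = 2**3 * 168          # |AGL3(2)| = |2^3| * |GL3(2)| = 1344
H = 30                  # |D30|
assert H * K == G       # so G = HK if and only if H ∩ K = 1
print(G // K)           # 30, the index of AGL3(2) in S8
```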
4.4. Sporadic group socles
Now we consider the sporadic almost simple groups.
Proposition 4.4. Let L = soc(G) be a sporadic simple group. If G = HK for a solvable subgroup H and a core-free unsolvable subgroup K of G, then one of the following holds.
(a) M12 ≤ G ≤ M12.2, H is transitive on [G:M11], and K = M11.
(b) G = M24, H is transitive on [M24:M23], and K = M23.
(c) (G, H, K) lies in Table 4.3.
Conversely, each triple (G, H, K) in parts (a)–(c) gives a factorization G = HK.
Table 4.3.

row | G | H | K
1 | M11 | 11, 11:5 | M10
2 | M11 | M9, M9.2, AGL1(9) | PSL2(11)
3 | M12 | M9.2, M9.S3 | PSL2(11)
4 | M12.2 | M9.2, M9.S3 | PGL2(11)
5 | M22.2 | 11:2, 11:10 | PSL3(4):2
6 | M23 | 23 | M22
7 | M23 | 23:11 | M22, PSL3(4):2, 2^4:A7
8 | J2.2 | 5^2:4, 5^2:(4 × 2), 5^2:12, 5^2:Q12, 5^2:(4 × S3) | G2(2)
9 | HS | 5^{1+2}_+:8:2 | M22
10 | HS.2 | 5^{1+2}_+:(4 ≀ S2) | M22, M22.2
11 | HS.2 | 5^2:4, 5^2:(4 × 2), 5^2:4^2, 5^{1+2}_+:4, 5^{1+2}_+:(4 × 2), 5^{1+2}_+:4^2, 5^{1+2}_+:8:2 | M22.2
12 | He.2 | 7^{1+2}_+:6, 7^{1+2}_+:(6 × 2), 7^{1+2}_+:(6 × 3), 7^{1+2}_+:(S3 × 3), 7^{1+2}_+:(S3 × 6) | Sp4(4).4
13 | Suz.2 | 3^5:12, 3^5:((11:5) × 2) | G2(4).2
Proof. Suppose that G = HK is a nontrivial factorization with H solvable and K unsolvable. If G = L, then from [21, Theorem 1.1] we directly read off the triples (G, H, K), as stated in the proposition. Now suppose G ≠ L. Since this indicates Out(L) ≠ 1, we have L ≠ Mk for k ∈ {11, 23, 24}.
First assume L = (H ∩ L)(K ∩ L). Then since H ∩ L is solvable, we deduce from [21, Theorem 1.1] that L = M12 or HS. Consequently, G = M12.2 or HS.2. Searching in Magma [6] for factorizations in these two groups leads to part (a) or one of rows 4, 10, 11 of Table 4.3.
Next assume L ≠ (H ∩ L)(K ∩ L). Then by [21, Theorem 1.2] and computation results of Magma [6], (G, H, K) lies in one of rows 5, 8, 10–13 of Table 4.3.
Remark 4.5.
(i) For G = J2.2 in row 8 of Table 4.3, there are two non-isomorphic groups of shape 5^2:4 for H.
(ii) For G = HS.2 in row 11 of Table 4.3, there are four non-isomorphic groups of shape 5^2:4, two non-isomorphic groups of shape 5^2:(4 × 2) and two non-isomorphic groups of shape 5^{1+2}_+:4 for H.
(iii) For G = He.2 in row 12 of Table 4.3, there are three non-isomorphic groups of shape 7^{1+2}_+:6 for H.
CHAPTER 5
Examples in classical groups
We present in this chapter examples for the infinite families of factorizations appearing in Table 1.1. Let q = p^f throughout this chapter, where p is a prime and f is a positive integer.
5.1. Examples in unitary groups
Let V be a vector space over GF(q^2) of dimension 2m ≥ 4 equipped with a non-degenerate unitary form β. There is a basis e1, . . . , em, f1, . . . , fm of V such that
β(ei, ej) = β(fi, fj) = 0, β(ei, fj) = δi,j
for any i, j ∈ {1, . . . , m} (see [42, 2.2.3]). Let
G = GU(V, β) = GU2m(q)
be the general unitary group of dimension 2m over GF(q^2), and let A be the stabilizer of the totally singular subspace ⟨e1, . . . , em⟩ in G, a parabolic subgroup. Then
A = Pm[G] = q^{m^2}:GLm(q^2),
see [55, 3.6.2]. Fix an element μ of GF(q^2) such that μ + μ^q ≠ 0. Then em + μfm is a nonisotropic vector. Let K be the stabilizer of em + μfm in G. Then
K = GU2m−1(q) ≤ N1[G].
We now construct a solvable subgroup H of A such that G = HK.
Construction 5.1. Let R = Op(A) = q^{m^2} and C = GLm(q^2) < A. Take S to be a Singer cycle of C and H = RS.
Proposition 5.2. In the above notation, H = q^{m^2}:GL1(q^{2m}) is a solvable subgroup of Pm[G], H ∩ K = R ∩ K = q^{(m−1)^2}, and
GU2m(q) = G = HK = (q^{m^2}:GL1(q^{2m}))GU2m−1(q).
Proof. Define linear maps wk(λ) for 1 ≤ k ≤ m and λ ∈ GF(q^2)^*, xi,j(λ) for i ≠ j and λ ∈ GF(q^2), yi,j(λ) for 1 ≤ i < j ≤ m and λ ∈ GF(q^2), and zk(λ) for 1 ≤ k ≤ m, λ ∈ GF(q^2) and λ + λ^q = 0, by
wk(λ): ek ↦ λek, fk ↦ λ^{−q}fk,
xi,j(λ): ej ↦ ej + λei, fi ↦ fi − λ^q fj,
yi,j(λ): fi ↦ fi + λej, fj ↦ fj − λ^q ei,
zk(λ): fk ↦ fk + λek,
fixing all the other basis vectors of V. By [55, 3.6.2], A = R:C, with C the group generated by all wk(λ) and xi,j(λ), and R the group generated by all yi,j(λ) and zk(λ).
Since S is a Singer cycle of C = GLm(q^2), the group H = R:S = q^{m^2}:GL1(q^{2m}) is solvable. It is obvious that S is a Hall p′-subgroup of H. Let M be a Hall p′-subgroup of H ∩ K. Since M is a p′-subgroup of H, M ≤ S^h for some h ∈ R. We then have M ≤ S^h ∩ K, and so |H ∩ K|_{p′} divides |S^h ∩ K|. Similarly, |H ∩ K|_p divides |R ∩ K|, and thus |H ∩ K| divides |R ∩ K||S^h ∩ K|.
First we calculate |R ∩ K|. For all 1 ≤ i < j ≤ m − 1, 1 ≤ k ≤ m − 1 and λ ∈ GF(q), it is obvious that yi,j(λ) and zk(λ) both fix em + μfm, that is to say, yi,j(λ) and zk(λ) are both in K. These yi,j(λ) and zk(λ) generate an elementary abelian group of order q^{(m−1)^2}. Now consider an element
g = y1,m(λ1) . . . ym−1,m(λm−1)zm(λm)
in R. Notice that g sends em + μfm to
−∑_{i=1}^{m−1} μλ_i^q ei + (1 + μλm)em + μfm.
Then g ∈ K if and only if λ1 = · · · = λm = 0, that is, g = 1. Thereby we conclude |R ∩ K| = q^{(m−1)^2}.
Next we show |S^h ∩ K| = 1. Suppose on the contrary that |S^h ∩ K| is divisible by a prime number r. Then there exists a subgroup X of order |S^h ∩ K|_r in S such that X^h ≤ K. Denote by Y the subgroup of A ∩ K generated by
{wk(λ) : k ≤ m − 1, λ ∈ GF(q^2)^*} ∪ {xi,j(λ) : i ≤ m − 1, j ≤ m − 1, λ ∈ GF(q^2)}.
Then Y ≅ GLm−1(q^2) and Y fixes em. Since |A ∩ K| = q^{(2m−1)(m−1)}(q^{2m−2} − 1) . . . (q^2 − 1) (see [42, 3.3.3]), we know that Y contains a Sylow r-subgroup of A ∩ K. It follows that X^h ≤ Y^{h1} for some h1 ∈ A ∩ K. Hence X ≤ Y^{h1h^{−1}}, and so X fixes e_m^{h1h^{−1}}. This is a contradiction since S acts regularly on the nonzero vectors of ⟨e1, . . . , em⟩.
Now that |H ∩ K| divides |R ∩ K||S^h ∩ K| = |R ∩ K| = q^{(m−1)^2}, we conclude G = HK by Lemma 2.7. We also see that H ∩ K = R ∩ K is an elementary abelian group of order q^{(m−1)^2}.
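The order identity underlying Proposition 5.2 can also be verified numerically; the Python sketch below (an illustration only, using the standard order formula for GUn(q)) checks |H||K| = |G||H ∩ K| for several small parameters.

```python
# Numeric check of |H||K| = |GU_{2m}(q)| * |H ∩ K| with
# |H| = q^(m^2) (q^(2m) - 1), |K| = |GU_{2m-1}(q)|, |H ∩ K| = q^((m-1)^2).
def gu_order(n, q):
    """|GU_n(q)| = q^(n(n-1)/2) * prod_{i=1}^n (q^i - (-1)^i)."""
    N = q**(n * (n - 1) // 2)
    for i in range(1, n + 1):
        N *= q**i - (-1)**i
    return N

for m in (2, 3, 4):
    for q in (2, 3, 4, 5):
        H = q**(m * m) * (q**(2 * m) - 1)
        assert H * gu_order(2 * m - 1, q) == gu_order(2 * m, q) * q**((m - 1)**2)
print("Proposition 5.2 order identity verified")
```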
The lemma below excludes certain factorizations of unitary groups, which will be useful in the proof of Theorem 1.1.
Lemma 5.3. In the above notation, let τ be the semilinear map on V such that (λv)^τ = λ^{p^e}v for any λ ∈ GF(q^2) and v ∈ V, where e | 2f. Let Σ = SU(V, β):⟨τ⟩ and let N ≤ Σ be the stabilizer of e2 + μf2. If M is a maximal solvable subgroup of Σ stabilizing ⟨e1, e2⟩, then Σ ≠ MN.
Proof. Suppose Σ = MN. Let L = SU(V, β), Γ = GU(V, β):⟨τ⟩ and let Y ≤ Γ be the stabilizer of e2 + μf2. Evidently, M = X ∩ Σ with X a maximal solvable subgroup of Γ stabilizing ⟨e1, e2⟩. Note that the stabilizer of ⟨e1, e2⟩ in Γ is A:⟨τ⟩ = R:(C:⟨τ⟩). Then R ≤ X, and X = RQ with Q maximal solvable in C:⟨τ⟩. Since Σ = (X ∩ Σ)N, |X| is divisible by a primitive prime divisor r of p^{4f} − 1, which implies that |Q| is divisible by r.
Take S and H as in Construction 5.1. Viewing C:⟨τ⟩ ≤ GL4f(p), we conclude that S ≤ Q and |Q| = 4f|S|/e. As a consequence, X ≥ H and |X| = 4f|H|/e.
Moreover, Γ = XG = XGY = X(HK)Y = (XH)(KY) = XY. It then derives from Σ = (X ∩ Σ)N = (X ∩ Σ)(Y ∩ Σ) and Lemma 2.10 that
(5.1) |XΣ||YΣ| ≤ |Γ||(X ∩ Y)Σ|.
It is easily seen that HL = KL = G. Hence HΣ = KΣ = Γ, and so XΣ = YΣ = Γ. From |X| = 4f|H|/e we deduce that |X ∩ Y| ≤ 4f|H ∩ Y|/e and thus |(X ∩ Y)Σ| ≤ 4f|(H ∩ Y)Σ|/e. This in conjunction with H ∩ Y = H ∩ K = R ∩ K ≤ L yields |(X ∩ Y)Σ| ≤ 4f|Σ|/e. Therefore, (5.1) implies that
(q + 1)|Σ| = |Γ| ≤ |(X ∩ Y)Σ| ≤ 4f|Σ|/e,
which gives (q, e) = (2, 1), (3, 1), (4, 1) or (8, 1). However, computation in Magma [6] shows that none of these is possible.
5.2. Examples in symplectic groups
Let p = 2 and let V be a vector space over GF(q) of dimension 2m ≥ 4 equipped with a non-degenerate symplectic form β. There is a basis e1, . . . , em, f1, . . . , fm of V such that
β(ei, ej) = β(fi, fj) = 0, β(ei, fj) = δi,j
for any i, j ∈ {1, . . . , m}. Let
G = Sp(V, β) = Sp2m(q)
be the symplectic group of dimension 2m over GF(q), and let A be the stabilizer of the totally singular subspace ⟨e1, . . . , em⟩ in G. Then
A = Pm[G] = q^{m(m+1)/2}:GLm(q),
see [55, 3.5.4]. Let Q be a quadratic form of minus type on V associated with the form β. Fix an element σ with x^2 + x + σ an irreducible quadratic over GF(q) such that
Q(e1) = · · · = Q(em−1) = Q(f1) = · · · = Q(fm−1) = 0, Q(em) = 1, Q(fm) = σ
(see [42, 2.2.3]). Take
K = O^−(V, Q) = O^−_{2m}(q),
the general orthogonal group of minus type of dimension 2m over GF(q).
We construct a solvable subgroup H of A such that G = HK.
Construction 5.4. Let R = O2(A) = q^{m(m+1)/2} and C = GLm(q) < A. Take S to be a Singer cycle of C and H = RS.
Proposition 5.5. In the above notation, H = q^{m(m+1)/2}:GL1(q^m) is a solvable subgroup of Pm[G], and
Sp2m(q) = G = HK = (q^{m(m+1)/2}:GL1(q^m))O^−_{2m}(q).
Proof. Define linear maps wk(λ) for 1 ≤ k ≤ m and λ ∈ GF(q)^*, xi,j(λ) for i ≠ j and λ ∈ GF(q), yi,j(λ) for 1 ≤ i < j ≤ m and λ ∈ GF(q), and zk(λ) for 1 ≤ k ≤ m and λ ∈ GF(q), by
wk(λ): ek ↦ λek, fk ↦ λ^{−1}fk,
xi,j(λ): ej ↦ ej + λei, fi ↦ fi − λfj,
yi,j(λ): fi ↦ fi + λej, fj ↦ fj + λei,
zk(λ): fk ↦ fk + λek,
fixing all the other basis vectors of V. By [55, 3.5.4], A = R:C, with C the group generated by all wk(λ) and xi,j(λ), and R the group generated by all yi,j(λ) and zk(λ).
Since S is a Singer cycle of C = GLm(q), H = R:S = q^{m(m+1)/2}:GL1(q^m) is solvable. Clearly, S is a Hall 2′-subgroup of H. Let M be a Hall 2′-subgroup of H ∩ K. Since M is a 2′-subgroup of H, M ≤ S^h for some h ∈ R. We then have M ≤ S^h ∩ K, and so |H ∩ K|_{2′} divides |S^h ∩ K|. Similarly, |H ∩ K|_2 divides |R ∩ K|, and thus |H ∩ K| divides |R ∩ K||S^h ∩ K|.
First we calculate |R ∩ K|. For all 1 ≤ i < j ≤ m − 1 and λ ∈ GF(q), it is obvious that yi,j(λ) fixes Q(ek) and Q(fk) for k = 1, . . . , m, and thus yi,j(λ) ∈ K. These yi,j(λ) generate an elementary abelian group of order q^{(m−1)(m−2)/2}. Now consider an element
g = y1,m(λ1) . . . ym−1,m(λm−1)z1(μ1) . . . zm(μm)
in R. Notice that g fixes e1, . . . , em and
g: fi ↦ fi + λiem + μiei, i = 1, . . . , m − 1,
fm ↦ fm + ∑_{i=1}^{m−1} λiei + μmem.
Then g ∈ K if and only if
Q(fi + λiem + μiei) = Q(fi), i = 1, . . . , m − 1, and Q(fm + ∑_{i=1}^{m−1} λiei + μmem) = Q(fm),
which is equivalent to μi = −λ_i^2 for i = 1, . . . , m − 1 and μm = 0 or 1. Hence all the elements of shape g in K generate an elementary abelian group of order 2q^{m−1}. Therefore, we conclude |R ∩ K| = 2q^{m−1} · q^{(m−1)(m−2)/2} = 2q^{m(m−1)/2}.
Next we show |S^h ∩ K| = 1. Suppose on the contrary that |S^h ∩ K| is divisible by an odd prime r. Then there exists a subgroup X of order r in S such that X^h ≤ K. Denote by Y the subgroup of A ∩ K generated by
{wk(λ) : k ≤ m − 1, λ ∈ GF(q)^*} ∪ {xi,j(λ) : i ≤ m − 1, j ≤ m − 1, λ ∈ GF(q)}.
Then Y ≅ GLm−1(q) and Y fixes em. Since |A ∩ K| = 2q^{m(m−1)}(q^{m−1} − 1) . . . (q − 1) (see [42, 3.2.4(a)]), we know that Y contains a Sylow r-subgroup of A ∩ K. It follows that X^h ≤ Y^{h1} for some h1 ∈ A ∩ K. Hence X ≤ Y^{h1h^{−1}}, and so X fixes e_m^{h1h^{−1}}. This is a contradiction since S acts regularly on the nonzero vectors of ⟨e1, . . . , em⟩.
Now that |H ∩ K| divides |R ∩ K||S^h ∩ K| = 2q^{m(m−1)/2}, we conclude G = HK by Lemma 2.7.
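The same kind of numeric check applies to Proposition 5.5 (again only an illustration), now with the standard order formulas for Sp2m(q) and O^−_{2m}(q) and with |H ∩ K| = 2q^{m(m−1)/2}.

```python
# Numeric check of |H||K| = |Sp_{2m}(q)| * |H ∩ K| for Proposition 5.5, q even.
def sp_order(m, q):
    """|Sp_{2m}(q)| = q^(m^2) * prod_{i=1}^m (q^(2i) - 1)."""
    N = q**(m * m)
    for i in range(1, m + 1):
        N *= q**(2 * i) - 1
    return N

def o_minus_order(m, q):
    """|O^-_{2m}(q)| = 2 q^(m(m-1)) (q^m + 1) * prod_{i=1}^{m-1} (q^(2i) - 1)."""
    N = 2 * q**(m * (m - 1)) * (q**m + 1)
    for i in range(1, m):
        N *= q**(2 * i) - 1
    return N

for m in (2, 3, 4):
    for q in (2, 4, 8):
        H = q**(m * (m + 1) // 2) * (q**m - 1)
        assert H * o_minus_order(m, q) == sp_order(m, q) * 2 * q**(m * (m - 1) // 2)
print("Proposition 5.5 order identity verified")
```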
5.3. Examples in orthogonal groups of odd dimension
Let p > 2 and let V be a vector space over GF(q) of dimension 2m + 1 ≥ 5 equipped with a non-degenerate quadratic form Q and the associated bilinear form β. There is a basis e1, . . . , em, f1, . . . , fm, d of V such that
β(ei, ej) = β(fi, fj) = β(ei, d) = β(fi, d) = 0, β(ei, fj) = δi,j,
Q(ei) = 0, Q(fj) = 0, Q(d) = 1
for any i, j ∈ {1, . . . , m} (see [42, 2.2.3]). Let
G = SO(V, Q) = SO2m+1(q)
be the special orthogonal group of dimension 2m + 1 over GF(q), and let A = Pm[G] be the stabilizer of the totally singular subspace ⟨e1, . . . , em⟩ in G. Then
A = Pm[G] = (q^{m(m−1)/2}.q^m):GLm(q),
where q^{m(m−1)/2}.q^m is a special p-group with center of order q^{m(m−1)/2}; see [55, 3.7.4]. Fix a non-square element μ in GF(q), and let K be the stabilizer of the vector em + μfm in G. Then
K = SO^−_{2m}(q) ≤ N^−_1[G].
We show that A has a solvable subgroup H such that G = HK.
Construction 5.6. Let R = Op(A) = q^{m(m−1)/2}.q^m and C = GLm(q) < A. Take S to be a Singer cycle of C and H = RS.
Proposition 5.7. In the above notation, H = (q^{m(m−1)/2}.q^m):GL1(q^m) is a solvable subgroup of Pm[G], and
SO2m+1(q) = G = HK = ((q^{m(m−1)/2}.q^m):GL1(q^m))SO^−_{2m}(q).
Proof. Define linear maps wk(λ) for 1 ≤ k ≤ m and λ ∈ GF(q)^*, and xi,j(λ) for i ≠ j and λ ∈ GF(q), by
wk(λ): ek ↦ λek, fk ↦ λ^{−1}fk,
xi,j(λ): ej ↦ ej + λei, fi ↦ fi − λfj,
fixing all the other basis vectors of V. By [55, 3.7.4], A = R:C, with C the group generated by all wk(λ) and xi,j(λ), and R the kernel of the action of A on ⟨e1, . . . , em⟩.
Since S is a Singer cycle of C = GLm(q), H = R:S = (q^{m(m−1)/2}.q^m):GL1(q^m) is solvable. It is obvious that S is a Hall p′-subgroup of H. Let M be a Hall p′-subgroup of H ∩ K. Since M is a p′-subgroup of H, M ≤ S^h for some h ∈ R. We then have M ≤ S^h ∩ K, and so |H ∩ K|_{p′} divides |S^h ∩ K|. Similarly, |H ∩ K|_p divides |R ∩ K|, and thus |H ∩ K| divides |R ∩ K||S^h ∩ K|.
First we calculate |R ∩ K|. For any g ∈ R ∩ K, since g fixes both em and em + μfm, g fixes fm as well, and so g fixes
W := ⟨em, fm⟩^⊥ = ⟨e1, . . . , em−1, f1, . . . , fm−1, d⟩.
Thus we conclude that R ∩ K equals the pointwise stabilizer of ⟨e1, . . . , em−1⟩ in SO(W, β). According to the previous paragraph, this is a special group of order q^{m(m−1)/2}, whence |R ∩ K| = q^{m(m−1)/2}.
Next we show that |S^h ∩ K| = 1. Suppose on the contrary that |S^h ∩ K| is divisible by a prime number r. Then there exists a subgroup X of order |S^h ∩ K|_r in S such that X^h ≤ K. Denote by Y the subgroup of A ∩ K generated by
{wk(λ) : k ≤ m − 1, λ ∈ GF(q)^*} ∪ {xi,j(λ) : i ≤ m − 1, j ≤ m − 1, λ ∈ GF(q)}.
Then Y ≅ GLm−1(q) and Y fixes em. Since |A ∩ K| = q^{m(m−1)}(q^{m−1} − 1) . . . (q − 1) (see [42, 3.4.1]), we know that Y contains a Sylow r-subgroup of A ∩ K. It follows that X^h ≤ Y^{h1} for some h1 ∈ A ∩ K. Hence X ≤ Y^{h1h^{−1}}, and so X fixes e_m^{h1h^{−1}}. This is a contradiction since S acts regularly on the nonzero vectors of ⟨e1, . . . , em⟩.
Now that |H ∩ K| divides |R ∩ K||S^h ∩ K| = q^{m(m−1)/2}, we conclude G = HK by Lemma 2.7.
5.4. Examples in orthogonal groups of plus type
Let V be a vector space over GF(q) of dimension 2m ≥ 6 equipped with a non-degenerate quadratic form Q and the associated bilinear form β such that the Witt index of Q equals m. There is a basis e1, . . . , em, f1, . . . , fm of V such that
Q(ei) = Q(fi) = β(ei, ej) = β(fi, fj) = 0, β(ei, fj) = δi,j
for any i, j ∈ {1, . . . , m} (see [42, 2.2.3]). Let
G = SO(V, Q) = SO^+_{2m}(q)
be the special orthogonal group of plus type of dimension 2m over GF(q), and let A be the stabilizer of the totally singular subspace ⟨e1, . . . , em⟩ in G. Then
A = Pm[G] = q^{m(m−1)/2}:GLm(q),
see [55, 3.7.4]. Let K be the stabilizer of the vector em + fm in G. Then
K = SO2m−1(q) ≤ N1[G].
We show in the following that A has a solvable subgroup H such that G = HK.
Construction 5.8. Let R = Op(A) = q^{m(m−1)/2} and C = GLm(q) < A. Take S to be a Singer cycle of C and H = RS.
Proposition 5.9. In the above notation, H = q^{m(m−1)/2}:GL1(q^m) is a solvable subgroup of Pm[G], and
SO^+_{2m}(q) = G = HK = (q^{m(m−1)/2}:GL1(q^m))SO2m−1(q).
Proof. Define linear maps wk(λ) for 1 ≤ k ≤ m and λ ∈ GF(q)^*, xi,j(λ) for i ≠ j and λ ∈ GF(q), and yi,j(λ) for 1 ≤ i < j ≤ m and λ ∈ GF(q), by
wk(λ): ek ↦ λek, fk ↦ λ^{−1}fk,
xi,j(λ): ej ↦ ej + λei, fi ↦ fi − λfj,
yi,j(λ): fi ↦ fi + λej, fj ↦ fj − λei,
fixing all the other basis vectors of V. By [55, 3.7.4], A = R:C, with C the group generated by all wk(λ) and xi,j(λ), and R the group generated by all yi,j(λ).
Since S is a Singer cycle of C = GLm(q), H = R:S = q^{m(m−1)/2}:GL1(q^m) is solvable. It is obvious that S is a Hall p′-subgroup of H. Let M be a Hall p′-subgroup of H ∩ K. Since M is a p′-subgroup of H, M ≤ S^h for some h ∈ R. We then have M ≤ S^h ∩ K, and so |H ∩ K|_{p′} divides |S^h ∩ K|. Similarly, |H ∩ K|_p divides |R ∩ K|, and thus |H ∩ K| divides |R ∩ K||S^h ∩ K|.
First we calculate |R ∩ K|. For all 1 ≤ i < j ≤ m − 1 and λ ∈ GF(q), it is obvious that yi,j(λ) ∈ K. These yi,j(λ) generate an elementary abelian group of order q^{(m−1)(m−2)/2}. Now consider an element
g = y1,m(λ1) . . . ym−1,m(λm−1)
in R. Notice that g sends em + fm to
−∑_{i=1}^{m−1} λiei + em + fm.
Then g ∈ K if and only if λ1 = · · · = λm−1 = 0, which is equivalent to g = 1. Therefore, we conclude |R ∩ K| = q^{(m−1)(m−2)/2}.
Next we show |S^h ∩ K| = 1. Suppose on the contrary that |S^h ∩ K| is divisible by a prime number r. Then there exists a subgroup X of order |S^h ∩ K|_r in S such that X^h ≤ K. Denote by Y the subgroup of A ∩ K generated by
{wk(λ) : k ≤ m − 1, λ ∈ GF(q)^*} ∪ {xi,j(λ) : i ≤ m − 1, j ≤ m − 1, λ ∈ GF(q)}.
Then Y ≅ GLm−1(q) and Y fixes em. Since |A ∩ K| = q^{(m−1)^2}(q^{m−1} − 1) . . . (q − 1) (see [42, 3.6.1(a)]), we know that Y contains a Sylow r-subgroup of A ∩ K. It follows that X^h ≤ Y^{h1} for some h1 ∈ A ∩ K. Hence X ≤ Y^{h1h^{−1}}, and so X fixes e_m^{h1h^{−1}}. This is a contradiction since S acts regularly on the nonzero vectors of ⟨e1, . . . , em⟩.
Now that |H ∩ K| divides |R ∩ K||S^h ∩ K| = q^{(m−1)(m−2)/2}, we conclude G = HK by Lemma 2.7.
CHAPTER 6
Reduction for classical groups
Towards the proof of Theorem 1.1, we set up the strategy for the classical group
case when precisely one factor is solvable, and establish some preconditioning results.
Throughout this chapter, let G be an almost simple group with socle L classical
of Lie type, and G = HK be a nontrivial factorization of G with H solvable. By the
results in Chapter 4, we assume that L is not isomorphic to A5 , A6 or A8 . To show
that part (d) of Theorem 1.1 holds, we may further assume that the factorization
G = HK can be embedded into a nontrivial maximal factorization G = AB by
virtue of Lemma 2.18. The candidates for the factorization G = AB are listed in
Tables A.1–A.7.
6.1. The case that A is solvable
First of all, we determine the case where A is solvable. An inspection of Tables A.1–A.7 shows that this occurs for three classes of socle L:
(i) L = PSLn(q) with n prime,
(ii) L = PSp4(3) ≅ PSU4(2),
(iii) L = PSU3(3), PSU3(5) or PSU3(8).
The candidates in (i) and (iii) are treated in Theorem 3.3 and Theorem 3.5, respectively. Thus we only need to deal with (ii). For an almost simple group G with socle PSp4(3) ≅ PSU4(2), all the nontrivial factorizations of G can be produced instantly by Magma [6]. We state the computation result here.
Proposition 6.1. Let G be an almost simple group with socle L = PSp4(3) ≅ PSU4(2). Then the following four cases give all the nontrivial factorizations G = HK with H solvable.
(a) Both H and K are solvable, and (G, H, K) lies in rows 9–11 of Table 4.1.
(b) G = PSp4(3), H ≤ Pk[PSp4(3)], and (G, H, K, k) lies in rows 1–4 of Table 6.1; moreover, Pk[PSp4(3)] is the only maximal subgroup of G containing H.
(c) G = PSp4(3).2, and L = (H ∩ L)(K ∩ L); in particular, H = (H ∩ L).O1 and K = (K ∩ L).O2, where Oi ≤ C2 for i ∈ {1, 2} and O1O2 = C2.
(d) G = PSp4(3).2, L ≠ (H ∩ L)(K ∩ L), H ≤ Pk[PSp4(3).2], and (G, H, K, k) lies in rows 5–6 of Table 6.1; moreover, Pk[PSp4(3).2] is the only maximal subgroup of G containing H.
To sum up, we have the following proposition.
Proposition 6.2. Let G be an almost simple group with socle L classical of
Lie type not isomorphic to A5 , A6 or A8 . If G = HK is a nontrivial factorization
such that H is contained in some solvable maximal subgroup of G, then one of the
following holds as in Theorem 1.1.
Table 6.1.

row | G | H | K | k
1 | PSp4(3) | 3^{1+2}_−, 3^{1+2}_+, 3^{1+2}_+:2, [3^4], [3^4]:2 | 2^4:A5 | 1, 2
2 | PSp4(3) | 3^{1+2}_+:4 | 2^4:A5 | 1
3 | PSp4(3) | 3^{1+2}_+:Q8, 3^{1+2}_+:2.A4 | 2^4:A5, S5, A6, S6 | 1
4 | PSp4(3) | 3^3:A4, 3^3:S4 | 2^4:A5 | 2
5 | PSp4(3).2 | 3^{1+2}_+:8 | S5 × 2, A6 × 2, S6, S6 × 2 | 1
6 | PSp4(3).2 | 3^{1+2}_+:ΓL1(9), 3^{1+2}_+:2.S4 | S5 | 1
(a) L = PSLn(q) with n prime, and (G, H, K) lies in row 1 of Table 1.1, or rows 1–7 of Table 4.1, or rows 1–5, 9 of Table 1.2.
(b) L = PSp4(3) ≅ PSU4(2), and (G, H, K) lies in rows 9–11 of Table 4.1 or rows 10–11 of Table 1.2.
(c) L = PSU3(q) with q ∈ {3, 5, 8}, and (G, H, K) lies in row 8 of Table 4.1 or rows 18–19 of Table 1.2.
6.2. Inductive hypothesis
The lemma below enables us to prove Theorem 1.1 by induction.
Lemma 6.3. Let G = HB be a factorization and let H ≤ A ≤ G. If N is a normal subgroup of A such that A/N is almost simple, then either soc(A/N) ⊴ (A ∩ B)N/N or HN/N is a nontrivial factor of A/N.
Proof. Since G = HB and H ≤ A, we have A = H(A ∩ B) by Lemma 2.10. Write Ā = A/N, H̄ = HN/N and C̄ = (A ∩ B)N/N. Then Ā = H̄C̄, and the lemma follows.
Combining Lemma 6.3 and Proposition 2.17, we obtain the following lemma.
Lemma 6.4. Let G be an almost simple classical group of Lie type, and let G = HK be a nontrivial factorization of G with H solvable and HL = KL = G. If A is a maximal subgroup of G containing H such that A has precisely one unsolvable composition factor, then A/rad(A) is an almost simple group with a nontrivial solvable factor Hrad(A)/rad(A).
Proof. Take B to be a maximal subgroup of G containing K. By Lemma 2.18, B is core-free. Since H ≤ A, we then obtain a nontrivial maximal factorization G = AB. Write R = rad(A). Then A/R is almost simple, and Proposition 2.17 asserts that soc(A/R) ⋬ (A ∩ B)R/R. Applying Lemma 6.3 with N = R, we know that HR/R is a nontrivial factor of A/R. Besides, HR/R is obviously solvable as H is solvable. This completes the proof.
Let G be an almost simple group. We will use induction on the order of G to complete the proof of Theorem 1.1. The base case of the induction is settled by Propositions 4.1–4.4 and 6.2. Now we make the inductive hypothesis:
Hypothesis 6.5. For any almost simple group G1 with order properly dividing |G|, if G1 = H1K1 for a solvable subgroup H1 and a core-free subgroup K1 of G1, then the triple (G1, H1, K1) satisfies Theorem 1.1.
For our convenience when applying the inductive hypothesis in subsequent chapters, we draw together some essential information in the next lemma under Hypothesis 6.5.
Lemma 6.6. Suppose G is an almost simple group satisfying Hypothesis 6.5 and G = HK is a nontrivial factorization with H solvable and HL = KL = G. If A is a maximal subgroup of G containing H such that A has the unique unsolvable composition factor S, then writing R = rad(A) and A/R = S.O, we have the following statements.
(a) S is not an exceptional group of Lie type.
(b) S ≠ PΩ^−_{2ℓ}(q) for ℓ ≥ 4.
(c) If S = PSLℓ(q), then H is as described in one of the following rows:

row | H ≤ | remark
1 | R.(((q^ℓ − 1)/((q − 1)d)).ℓ).O | d = (ℓ, q − 1)
2 | R.(q:((q − 1)/d)).O | ℓ = 2, d = (2, q − 1)
3 | R.(q^3:((q^3 − 1)/d).3).O | ℓ = 4, d = (4, q − 1)
4 | R.(S2 × S3) | ℓ = 2, q = 4
5 | R.S4 | ℓ = 2, q ∈ {5, 7, 11, 23}
6 | R.(S2 × S4) | ℓ = 2, q = 9
7 | R.(3^2:2.S4) | ℓ = 3, q = 3
8 | R.(2^4.(3 × D10)).2 | ℓ = 3, q = 4
9 | R.(2^6:(56:7):6) | ℓ = 3, q = 8
10 | R.(S2 ≀ S4) | ℓ = 4, q = 2
11 | R.(S4 ≀ S2) | ℓ = 4, q = 2
12 | R.(2^4:5:4).O | ℓ = 4, q = 3
(d) If S = PSp2ℓ (q) with ℓ > 2, then H is as described in one of the following
rows:
row
1
2
3
4
5
6
7
8
9
H6
R.(q ℓ(ℓ+1)/2 :(q ℓ − 1).ℓ).O
R.(q 1+2 :((q 2 − 1)/2).2).O
R.(24 :AGL1 (5))
R.(33 :(S4 × 2))
R.(31+2
+ :2.S4 )
R.(q 3 :(q − 1).A4 ).2
R.(q 3 :(q − 1).S4 ).2
R.(31+2
+ :2.S4 )
1+4
R.(31+4
.D10 ).2
+ :2
remark
q even
ℓ = 2, q
ℓ = 2, q
ℓ = 2, q
ℓ = 2, q
ℓ = 2, q
ℓ = 2, q
ℓ = 3, q
ℓ = 3, q
odd
=3
=3
=3
∈ {5, 11}
∈ {7, 23}
=2
=3
(e) If S = PSU2ℓ+1 (q) with ℓ > 1, then ℓ = 1 and H is as described in one of
the following rows:
row
1
2
3
4
H6
R.(31+2
+ :8:2)
R.(51+2
+ :24:2)
R.(57:9.2)
R.(23+6 :63:3).2
remark
q=3
q=5
q=8
q=8
42
6. REDUCTION FOR CLASSICAL GROUPS
(f) If S = PSU2ℓ (q) with ℓ > 2, then H is as described in one of the following
rows:
row H 6
remark
ℓ2
2ℓ
1
R.(q :((q − 1)/((q + 1)d)).ℓ).O d = (2ℓ, q + 1)
2
R.(33 :(S4 × 2))
ℓ = 2, q = 2
1+2
3
R.(3+ :2.S4 )
ℓ = 2, q = 2
4
R.(34 :S4 ).O
ℓ = 2, q = 3
4 2
5
R.(3 :3 :4).O
ℓ = 2, q = 3
6
R.(31+4
ℓ = 2, q = 3
.2.S
).O
4
+
7
R.(513:3).O
ℓ = 2, q = 8
(g) If S = Ω2ℓ+1 (q) with ℓ > 3 and q odd, then H is as described in one of the
following rows:
row
1
2
3
H6
remark
ℓ(ℓ−1)/2 ℓ
ℓ
R.((q
.q ):((q − 1)/2).ℓ).O
R.(35 :24 .AGL1 (5)).2
ℓ = 3, q = 3
6+4 1+4
R.(3 :2 .AGL1 (5)).2
ℓ = 4, q = 3
(h) If S = PΩ+
2ℓ (q) with ℓ > 4, then H is as described in one of the following
rows:
row H 6
remark
ℓ(ℓ−1)/2
ℓ
1
R.(q
:((q − 1)/d).ℓ).O d = (4, q ℓ − 1)
6 4
2
R.(3 :2 .AGL1 (5)).O
ℓ = 4, q = 3
9
3
R.([3 ].13.3).O
ℓ = 4, q = 3
Proof. Let A = A/R and H = HR/R. Then A is an almost simple group with
socle S, O = A/S 6 Out(S), and
H 6 HR = R.H 6 R.(H ∩ S).O.
By Lemma 6.4, H is a nontrivial solvable factor of A. By Hypothesis 6.5, the pair
(A, H) satisfies Theorem 1.1. Thus the candidates for (A, H) are from Propositions
4.1, 4.3, 4.4 or Tables 1.1–1.2. In particular, statements (a) and (b) of the lemma
are both true. Next we prove statements (c)–(h) case by case.
Case 1. Let S = PSLℓ (q). If the solvable factor H of A is from Proposition 4.1,
then in view of PSL2 (4) ∼
= A5 , we see that one of rows 1–2, 4–5 and 7–9
= PSL2 (5) ∼
appears.
Next, consider the solvable factors H from Proposition 4.3, which occurs only
for (ℓ, q) = (2, 9) or (4, 2). For (ℓ, q) = (2, 9), S = A6 and one of rows 1–2 and 6
appears. For (ℓ, q) = (4, 2), S = A8 and one of rows 1, 3 and 10–11 appears.
Finally, the candidates from Tables 1.1–1.2 are just rows 1, 3 and 12 in statement
(c). This completes the verification of statement (c).
Case 2. Let S = PSp2ℓ (q) with ℓ > 2. Then (A, H) either satisfies Proposition
4.1 or lies in Tables 1.1–1.2. For the former, (ℓ, q) = (2, 3) and one of rows 3–5 in
statement (d) appears. For the latter, we have rows 1–2 and 4–9 in statement (d).
Case 3. Assume that S = PSU2ℓ+1 (q). Then ℓ = 1, and (A, H) lies in rows
18–19 of Table 1.2 or row 8 of Table 4.1. This leads to the four rows in statement
(e).
6.3. THE CASE THAT A HAS AT LEAST TWO UNSOLVABLE COMPOSITION FACTORS 43
Case 4. Let S = PSU2ℓ (q) with ℓ > 2. Then (A, H) either satisfies Proposition
4.1 or is from Tables 1.1–1.2. For the former, (ℓ, q) = (2, 2) and one of rows 1–3 in
statement (f) appears. For the latter, we have rows 1 and 4–7 in statement (f).
Case 5. Let S = Ω2ℓ+1 (q) with ℓ > 3. In this case, (A, H) only arises in Tables
1.1–1.2, from which we read off the three rows in statement (g).
Case 6. Now let S = Ω+
2ℓ (q) with ℓ > 4. Then the only candidates in Tables
1.1–1.2 lead to the three rows in statement (h).
6.3. The case that A has at least two unsolvable composition factors
This section is devoted to the case when A has at least two unsolvable composition factors. We can read off the candidates of maximal factorizations from Tables
A.1–A.7.
Lemma 6.7. Let G be an almost simple group with socle L classical of Lie type.
If G = AB is a nontrivial maximal factorization of G such that A has at least
two unsolvable composition factors, then A has exactly two unsolvable composition
factors and (L, A ∩ L, B ∩ L) lies in Table 6.2, where a = gcd(2, ℓ, q − 1).
Table 6.2.
row
1
2
3
4
5
6
7
8
9
10
L
Sp4ℓ (2f ), f ℓ > 2
Sp4ℓ (4), ℓ > 2
Sp4 (2f ), f > 2
Sp4 (2f ), f > 3 odd
Sp6 (2f ), f > 2
Sp4 (4)
PΩ+
4ℓ (q), ℓ > 3, q > 4
PΩ+
8 (q), q > 5 odd
Ω+
(2)
8
Ω+
8 (4)
A∩L
Sp2ℓ (2f ) ≀ S2
Sp2 (4) × Sp4ℓ−2 (4)
f
O+
4 (2 )
+ f
O4 (2 )
Sp2 (2f ) × Sp4 (2f )
O+
4 (4)
(PSp2 (q) × PSp2ℓ (q)).a
(PSp2 (q) × PSp4 (q)).2
(SL2 (4) × SL2 (4)).22
(SL2 (16) × SL2 (16)).22
B∩L
f
O−
4ℓ (2 )
Sp2ℓ (16).2
Sp2 (4f ).2
Sz(2f )
G2 (2f )
O−
4 (4)
N1
Ω7 (q)
Sp6 (2)
Sp6 (4)
Remark 6.8. In row 6 of Table 6.2, either A, B are both C8 subgroups, or A is
in C2 and B is in C3 .
Now we analyze the candidates in Table 6.2 under Hypothesis 6.5.
Proposition 6.9. Let G be an almost simple group satisfying Hypothesis 6.5
with socle L classical of Lie type. If G = HK is a nontrivial factorization with H
solvable and HL = KL = G, and a maximal subgroup A of G containing H has at
least two unsolvable composition factors, then one of the following holds.
(a) L = Sp4 (4), and (L, H ∩ L, K ∩ L) lies in row 3 or 4 of Table 1.1.
+
(b) L = Ω+
8 (2) or Ω8 (4), and (L, H ∩ L, K ∩ L) lies in row 9 of Table 1.1.
Proof. Take B to be a maximal subgroup of G containing K. By Lemma 2.18,
A, B are both core-free in G. Hence G = AB is a nontrivial maximal factorization,
and by Lemma 6.7, (L, A ∩ L, B ∩ L) lies in Table 6.2. If L = Sp4 (4), then computation in Magma[6] shows that (L, H ∩ L, K ∩ L) lies in row 3 or 4 of Table 1.1,
44
6. REDUCTION FOR CLASSICAL GROUPS
+
as described in part (a) of the proposition. For L = Ω+
8 (2) or Ω8 (4), computation
in Magma[6] shows that part (b) of the proposition holds. Thus we assume that
+
L 6∈ {Sp4 (4), Ω+
8 (2), Ω8 (4)}, and in particular, none of rows 6 and 9–10 of Table 6.2
appears.
Case 1. Suppose L = PSp4 (q). Then it is seen in Table 6.2 that q = 2f > 4
and one of rows 1 (with ℓ = 1), 3 and 4 appears. Consequently, A ∩ L ∼
= SL2 (q) ≀ S2 ,
2
∼
B ∩ L = SL2 (q ).2 or Sz(q), and G/L = Ce 6 Cf . Viewing
2 2
q (q − 1)/2,
if B ∩ L ∼
= SL2 (q 2 ).2,
|L|/|B ∩ L| =
2 2
q (q − 1)(q + 1), if B ∩ L = Sz(q),
we conclude that |L|/|B ∩ L| is divisible by q 2 (q 2 − 1)/2, and so is |H ∩ L||G/L| by
Lemma 2.9.
Write A ∩ L = (T1 × T2 ).S2 , where T1 ∼
= T2 ∼
= SL2 (q), and let X = (H ∩ L) ∩
(T1 × T2 ). Then X is a solvable subgroup of T1 × T2 , and X has index at most 2 in
H ∩ L. Thus |X||G/L| is divisible by q 2 (q 2 − 1)/4. Since G/L = Ce 6 Cf , it follows
that e|X| is divisible by q 2 (q 2 − 1)/4. Let Xi = XTi /Ti . T3−i for i = 1, 2. Then
X1 , X2 are solvable, and X . X1 × X2 .
Note that A has a subgroup T of index 2 such that T ∩ L ∼
= SL2 (q) × SL2 (q). If
H 6 T , then G = T B and A = T (A ∩ B), contrary to the facts that A ∩ B 6 T (see
[42, 3.2.4(b) and 5.1.7(b)]). Hence H T , and thus there exists an element g ∈ H\T
interchanging T1 and T2 by conjugation. Since X g = (H ∩ L)g ∩ (T1 × T2 )g = X,
g induces an isomorphism between X1 and X2 . As a consequence, |X| divides
|X1 ||X2 | = |X1 |2 .
Any solvable subgroup of SL2 (2f ) has order dividing q(q − 1), 2(q − 1), 2(q + 1)
or 12, refer to [28, Chapter 2, 8.27]. Hence |X| divides (q(q − 1))2 , (2(q − 1))2 ,
(2(q + 1))2 or 144. Since q 2 (q 2 − 1)/4 divides e|X| and e divides f , we deduce that
q 2 (q 2 −1)/4 divides f q 2 (q −1)2 , 4f (q −1)2 , 4f (q +1)2 or 144f . If q 2 (q 2 −1)/4 divides
f q 2 (q−1)2 , or equivalently q+1 divides 4f (q−1), then observing (q+1, 4(q−1)) = 1,
we have q + 1 f , which is impossible. In the same vein, we see that the others are
impossible too. This excludes the possibility of L = PSp4 (q), and especially, rows 3
and 4 of Table 6.2.
To complete the proof, we still need to exclude rows 1, 2, 5 and 7–8.
Case 2. Suppose that (L, A ∩ L, B ∩ L) lies in one of row 1 of Table 6.2.
From case 1 we see that ℓ > 2. Let G/L = Ce 6 Cf and T = (T1 × T2 ):e, where
+
f
f
T1 ∼
= Sp2ℓ (2f ). Observe that A ∩ B = (O−
= T2 ∼
2ℓ (2 ) × O2ℓ (2 )):e (see [42, 3.2.4(b)]).
f
Without loss of generality, suppose A ∩ B ∩ T1 = O−
2ℓ (2 ). It derives from G = HB
that A = H(A∩B), and thus T = (H ∩T )(A∩B) since A∩B 6 T . Now consider the
factorization T = X1 A ∩ B, where T = T /T1 = Sp2ℓ (2f ).e, A ∩ B = (A∩B)T1 /T1 =
f
O+
2ℓ (2 ).e and X1 = (H ∩ T )T1 /T1 is solvable. By Hypothesis 6.5, this factorization
must satisfy the conclusion of Theorem 1.1. Hence we have (ℓ, f ) = (2, 1) or (3, 1).
Now L = Sp4ℓ (2) with ℓ ∈ {2, 3}. Replacing T1 with T2 in the above paragraph,
we obtain the factorization Sp2ℓ (2) = X2 O−
2ℓ (2) along the same lines, where X2 =
(H ∩ T )T2 /T2 is solvable. Moreover, since B is not transitive on the 2ℓ-dimensional
non-degenerate symplectic subspaces, we know that G 6= T B. Consequently, H T
as G = HB. It follows that each element in H \ T induces an isomorphism between
X1 and X2 . However, computation in Magma[6] shows that there do not exist two
factorizations Sp2ℓ (2) = X(3−ε)/2 Oε2ℓ (2), where ε = ±1, such that X1 and X2 are
6.3. THE CASE THAT A HAS AT LEAST TWO UNSOLVABLE COMPOSITION FACTORS 45
both solvable and have the same order. This contradiction implies that row 1 of
Table 6.2 cannot appear.
Case 3. Suppose that (L, A ∩ L, B ∩ L) lies in one of rows 2, 5, 7 and 8 of
Table 6.2. Then A ∩ L has a subgroup PSp2 (q) × T of index at most 2, where
T = PSp2k (q) with k > 2 and q > 4. We deduce from Zsigmondy’s theorem that
q 2k − 1 has a primitive prime divisor r. Write N = CA (T ), and note T ⊳ A and
N ∩T = 1. Then N ⊳ NA (T ) = A, and T ∼
= NT /N 6 A/N . Aut(T ), which means
that A/N is an almost simple group with socle PSp2k (q). By N ∼
= NT /T 6 A/T
we know that |N| divides 2|PSp2 (q)||Out(L)|. Consequently, |N| is not divisible by
r.
As the intersection A ∩ B is determined in [42] (see 3.2.1(a), 5.23(b), 3.6.1(d)
and 5.1.15 there), it follows readily that (A ∩ B)N/N does not contain PSp2k (q).
Hence by Lemma 6.3, HN/N is a nontrivial factor of A/N. Then since HN/N is
solvable, Hypothesis 6.5 implies that either HN/N ∩ PSp2k (q) 6 q k(k+1)/2 :(q k − 1).k,
or q ∈ {5, 7, 11, 23} and HN/N ∩PSp2k (q) 6 q 3 :(q−1).S4 . Thereby we conclude that
|HN/N| is not divisible by r. Since |H| = |HN/N||H ∩ N| divides |HN/N||N|, this
implies that |H| is not divisible by r. However, the factorization G = HB requires
|H| to be divisible by |G|/|B| and thus by r, a contradiction.
CHAPTER 7
Proof of Theorem 1.1
We will complete the proof of Theorem 1.1 by induction on the order of the
almost simple group. Thus throughout the first five sections of this chapter, we
assume Hypothesis 6.5.
7.1. Linear groups
The main result of this section is the following proposition, which verifies Theorem 1.1 for linear groups under Hypothesis 6.5.
Proposition 7.1. Let G be an almost simple group with socle L = PSLn (q)
not isomorphic to A5 , A6 or A8 . If G = HK is a nontrivial factorization with H
solvable, K unsolvable and HL = KL = G, then under Hypothesis 6.5, one of the
following holds.
(a) H ∩ L 6 ˆGL1 (q n ).n, and q n−1 :SLn−1 (q) E K ∩ L 6 P1 or Pn−1 .
(b) L = PSL4 (q), H ∩ L 6 q 3 :((q 3 − 1)/(4, q − 1)).3, and PSp4 (q) E K ∩ L 6
PSp4 (q).a, where a = (4, q + 1)/(2, q − 1) 6 2.
(c) L = PSL2 (q) with q ∈ {11, 16, 19, 29, 59} or PSL4 (q) with q ∈ {3, 4} or
PSL5 (2), and (L, H ∩ L, K ∩ L) is as described in Table 7.1.
Table 7.1.
row
1
2
3
4
5
6
7
8
9
L
PSL2 (11)
PSL2 (16)
PSL2 (19)
PSL2 (29)
PSL2 (59)
PSL4 (3)
PSL4 (3)
PSL4 (4)
PSL5 (2)
H ∩L6
11:5
D34
19:9
29:14
59:29
24 :5:4
33 :26:3
26 :63:3
31:5
K∩L
A5
A5
A5
A5
A5
PSL3 (3), 33 :PSL3 (3)
(4 × PSL2 (9)):2
(5 × PSL2 (16)):2
26 :(S3 × PSL3 (2))
Proof. By Lemma 2.18, we may take A, B to be core-free maximal subgroups of
G containing H, K respectively. Then the maximal factorizations G = AB are listed
in Table A.1 (interchanging A, B if necessary) by Theorem 2.15. If n is a prime,
then the proposition holds by Theorem 3.3. We thus assume that n is a composite
number. Under this assumption, (L, A ∩ L, B ∩ L) lies in rows 1–4 of Table A.1. For
L = PSL4 (3) or PSL4 (4), computation in Magma[6] shows that one of parts (a),
(b) and (c) of Proposition 7.1 holds. Thus assume (n, q) 6∈ {(4, 3), (4, 4)} for the
rest of the proof.
47
48
7. PROOF
Case 1. Suppose that B ∩ L = ˆGLa (q b ).b with ab = n and b prime, as in
row 1 or row 4 of Table A.1 (with A, B interchanged). Then A ∩ L = P1 , Pn−1
or Stab(V1 ⊕ Vn−1 ). Notice that A has the unique unsolvable composition factor
PSLn−1 (q). Write q = pf with p prime, R = rad(A) and A/R = PSLn−1 (q).O.
As (n, q) 6∈ {(4, 3), (4, 4)}, we deduce from Lemma 6.6(c) that either |H|p divides
(n − 1)|R|p |O|p , or (n, q) = (4, 8) and |H| divides 26 · 56 · 7 · 6|R|. If the latter
occurs, then |H| is not divisible by 73, contrary to the factorization G = HB since
a = b = 2 and |G|/|B| is divisible by 73. Hence |H|p divides (n − 1)|R|p |O|p , and
then as |R||O| = |A|/|PSLn−1 (q)|, we have that |H|p divides (n − 1)f q n−1. Since
G = HB, we know that |H|p|B ∩ L|p is divisible by |L|p , and hence |L|p divides
(n − 1)f q n−1|B ∩ L|p . Consequently,
q n(n−1)/2 (n − 1)f q n−1 · q n(a−1)/2 b,
that is, q n(n−a)/2−n+1 b(n − 1)f . Since (b, n − 1) = 1 we conclude that either
q n(n−a)/2−n+1 bf or q n(n−a)/2−n+1 (n − 1)f . Since a 6 n/2 and b < n − 1, it follows
that
(7.1)
qn
2 /4−n+1
6 q n(n−a)/2−n+1 6 max(b, n − 1)f = (n − 1)f.
This implies
2
2
2
2n /4−n+1 6 pn /4−n+1 6 pf (n /4−n+1) /f 6 n − 1,
which leads to n = 4. However, substituting n = 4 into (7.1) gives q 6 3f , contradicting the assumption (n, q) 6∈ {(4, 2), (4, 3)}.
Case 2. Suppose that B ∩ L = P1 or Pn−1 , as in row 1 or row 2 of Table A.1.
Assume that row 2 of Table A.1 appears. Then A ∩ L = PSpn (q).c with n > 4
even and c = (2, q − 1)(n/2, q − 1)/(n, q − 1) 6 2. As (n, q) 6= (4, 3), Lemma 6.6(d)
implies that either (n, q) = (6, 2) or |H| is not divisible by any primitive prime
divisor of q n − 1. Since the factorization G = HK requires |H| to be divisible by
|G|/|B| = (q n − 1)/(q − 1), we have (n, q) = (6, 2) and thus |H| is divisible by
(q n − 1)/(q − 1) = 63. However, by Lemma 6.6(d), |H| divides 26 · (23 − 1) · 3 or
33 · 2 · 24, which is a contradiction.
We thus have row 1 of Table A.1, namely, A ∩ L = ˆGLa (q b ).b with ab = n and
b prime. Note that A has the unique unsolvable composition factor PSLa (q b ). By
Lemma 6.6(c), H 6 PΓL1 ((q b )a ) = PΓL1 (q n ). This by Lemma 3.2 leads to part (a)
of Proposition 7.1.
Case 3. Suppose that B ∩ L = PSpn (q).c with n > 4 even and c = (2, q −
1)(n/2, q − 1)/(n, q − 1), as in row 2 or row 3 of Table A.1 (with A, B interchanged).
Then
A ∩ L = P1 , Pn−1 or Stab(V1 ⊕ Vn−1 ).
Notice that A has the unique unsolvable composition factor PSLn−1 (q). Write q = pf
with p prime, R = rad(A) and A/R = PSLn−1 (q).O.
Assume A ∩ L = Stab(V1 ⊕ Vn−1 ) ∼
= ˆGLn−1 (q). Then by Lemma 6.6(c), |H|p
divides 2(n − 1)f . This implies that |H|p is not divisible by |L|p /|B ∩ L|p =
2
q n(n−1)/2−n /4 . Consequently, |H| is not divisible by |L|/|B ∩ L|, which is a contradiction to the factorization G = HB. Therefore, A ∩ L = P1 or Pn−1 .
Assume n > 6. By Zsigmondy’s theorem, (q, n − 3) has a primitive prime divisor
r as n 6= 9. Then Lemma 6.6(c) implies that |H| is not divisible by r. Observe that
7.2. SYMPLECTIC GROUPS
49
r does not divide |B|. This yields that r does not divide |H||B|, contrary to the
factorization G = HB since |G| is divisible by r.
We thus have n = 4 and A ∩ L ∼
= P1 = q 3 :(GL3 (q)/(4, q − 1)). Hence c =
2
(2, q − 1) /(4, q − 1) = (4, q + 1)/(2, q − 1). By Lemma 6.6(c), either H 6 R.(((q 3 −
1)/((q − 1)(3, q − 1))).3).O, or q = 8 and |H| divides 26 · 56 · 7 · 6|R|. If the latter
occurs, then |H| is not divisible by 73, contrary to the factorization G = HB since
|L|/|B ∩ L| = |PSL4 (8)|/|PSp4 (8)| is divisible by 73. Therefore,
q3 − 1
q3 − 1
3
.3 .O = q .
.3 .(G/L).
H 6 R.
(q − 1)(3, q − 1)
(4, q − 1)
Accordingly we have H ∩ L 6 q 3 :((q 3 − 1)/(4, q − 1)).3, and this further yields by
Lemma 2.9 that q 3 (q 4 − 1)(q 2 − 1) divides 6(4, q − 1)f |K ∩ L|. Consequently,
|PSp4 (q).a|
6(4, q − 1)f |PSp4 (q).a|
q4 − 1
6
<
.
|K ∩ L|
q 3 (q 4 − 1)(q 2 − 1)
q−1
Since (q 4 − 1)/(q − 1) equals the smallest index of proper subgroups of PSp4 (q) by
Theorem 2.6, we obtain from Lemma 2.5 that PSp4 (q) E K ∩ L. Thus part (b) of
Proposition 7.1 follows.
Case 4. Suppose that B ∩ L = Stab(V1 ⊕ Vn−1 ) = ˆGLn−1 (q) with n > 4 even,
as in row 3 or row 4 of Table A.1. Note that the factorization G = HK requires |H|
to be divisible by |L|/|B ∩ L| = q n−1 (q n − 1)/(q − 1).
Assume that row 3 of Table A.1 appears. Then A ∩ L = PSpn (q).c with c =
(2, q − 1)(n/2, q − 1)/(n, q − 1) 6 2. As (n, q) 6= (4, 3), Lemma 6.6(d) implies
that either (n, q) = (6, 2) or |H| is not divisible by any primitive prime divisor of
q n − 1. Since |H| is divisible by (q n − 1)/(q − 1), we have (n, q) = (6, 2) and thus
|H| is divisible by (q n − 1)/(q − 1) = 63. However, by Lemma 6.6(d), |H| divides
26 · (23 − 1) · 3 or 33 · 2 · 24, which is a contradiction.
Now row 4 of Table A.1 appears, that is, A ∩ L = ˆGLn/2 (q 2 ).2 with q ∈ {2, 4}.
As (n, q) 6∈ {(4, 2), (4, 4)}, we have n > 6. Since A has the unique unsolvable
composition factor PSLn/2 (q 2 ), Lemma 6.6(c) holds with R = rad(A) and A/R =
PSLn/2 (q 2 ).O. To be more precise, row 1, 3 or 8 in Lemma 6.6(c) holds as q ∈ {2, 4}
and n > 6. Observe that |R||O| = |A|/|PSLn/2 (q 2 )| divides 2(q 2 − 1)(n/2, q 2 − 1). If
q n −1 has a primitive prime divisor r, then the factorization G = HK requires |H| to
be divisible by q n−1 r, but none of rows 1, 3 and 8 in Lemma 6.6(c) satisfies this. Thus
q n −1 does not have any primitive prime divisor, which is equivalent to (n, q) = (6, 2)
by Zsigmondy’s theorem. In this situation, row 1 or 8 in Lemma 6.6(c) appears,
but neither of them allows |H| to be divisible by q n−1 (q n − 1)/(q − 1) = 25 · 32 · 7, a
contradiction.
7.2. Symplectic Groups
In this section we verify Theorem 1.1 for symplectic groups under Hypothesis
6.5.
Proposition 7.2. Let G be an almost simple group with socle L = PSp2m (q),
where m > 2. If G = HK is a nontrivial factorization with H solvable, K unsolvable
and HL = KL = G, then under Hypothesis 6.5, one of the following holds.
50
7. PROOF
(a) q is even, H ∩ L 6 q m(m+1)/2 :(q m − 1).m < Pm , and Ω−
2m (q) E K ∩ L ≤
O−
(q).
2m
(b) m = 2, q is even, H ∩ L 6 q 3 :(q 2 − 1).2 < P1 , and Sp2 (q 2 ) E K ∩ L ≤
Sp2 (q 2 ).2.
(c) m = 2, q is odd, H ∩ L 6 q 1+2 :((q 2 − 1)/2).2 < P1 and PSp2 (q 2 ) E K ∩ L 6
PSp2 (q 2 ).2.
(d) L = PSp4 (q) with q ∈ {3, 5, 7, 11, 23} or PSp6 (q) with q = 2 or 3, and
(L, H ∩ L, K ∩ L) is as described in Table 7.2.
Table 7.2.
row
1
2
3
4
5
6
7
8
L
PSp4 (3)
PSp4 (3)
PSp4 (5)
PSp4 (7)
PSp4 (11)
PSp4 (23)
Sp6 (2)
PSp6 (3)
H∩L6
33 :S4
31+2
+ :2.A4
53 :4.A4
73 :6.S4
113 :10.A4
233 :22.S4
31+2
+ :2.S4
1+4
31+4
.D10
+ :2
K∩L
24 :A5
A5 , 24 :A5 , S5 , A6 , S6
PSL2 (52 ), PSL2 (52 ):2
PSL2 (72 ), PSL2 (72 ):2
PSL2 (112 ), PSL2 (112 ):2
PSL2 (232 ), PSL2 (232 ):2
A8 , S8
PSL2 (27):3
Throughout this section, fix G, L, H, K to be the groups in the condition of
Proposition 7.2, and take A, B to be core-free maximal subgroups of G containing
H, K respectively (such A and B are existent by Lemma 2.18). We shall first treat
symplectic groups of dimension four in Section 7.2.1 (see Lemma 7.3), and then
treat the general case in Section 7.2.2 by Lemmas 7.6 and 7.7, distinguishing the
maximal factorization G = AB in rows 1–12 or 13–16 of Table A.2. These lemmas
together prove Proposition 7.2.
7.2.1. Symplectic groups of dimension four. The main result of this subsection is stated in the following lemma.
Lemma 7.3. If L = PSp4 (q) with q > 3, then one of the following holds.
−
(a) q is even, H ∩ L 6 q 3 :(q 2 − 1).2 < P2 , and Ω−
4 (q) E K ∩ L ≤ O4 (q).
(b) q is even, H ∩ L 6 q 3 :(q 2 − 1).2 < P1 , and Sp2 (q 2 ) E K ∩ L ≤ Sp2 (q 2 ).2.
(c) q is odd, H ∩ L 6 q 1+2 :((q 2 − 1)/2).2 < P1 and PSp2 (q 2 ) E K ∩ L 6
PSp2 (q 2 ).2.
(d) L = PSp4 (q) with q ∈ {3, 5, 7, 11, 23}, and (L, H ∩ L, K ∩ L) is as described
in rows 1–6 of Table 7.2.
Proof. For L = PSp4 (3), the lemma holds as a consequence of Proposition
6.1. Thus we assume q > 4 for the rest of our proof. By Proposition 6.9, we further
assume that A∩L has at most one unsolvable composition factor. Therefore, we only
need to consider rows 1–11 of Table A.2 for the maximal factorization G = AB by
Lemma 2.15. Moreover, A ∩ L 6= Sz(q) as Lemma 6.6(a) asserts, which rules out row
11 of Table A.2. Hence we have the following candidates for the pair (A ∩ L, B ∩ L).
(i) A ∩ L ∼
= PSL2 (q 2 ).2 and B ∩ L = P1 or P2 .
+
(ii) A ∩ L ∼
= O−
= PSL2 (q 2 ).2 and B ∩ L ∼
4 (q) or O4 (q).
7.2. SYMPLECTIC GROUPS
51
(iii) A ∩ L = P1 and B ∩ L = PSp2 (q 2 ).2.
(iv) q is even, A ∩ L = P2 and B ∩ L = O−
4 (q).
− f
f
(v) q = 4 with f ∈ {1, 2}, A ∩ L = O4 (4 ) and B ∩ L = Sp4 (2f ).
f
(vi) q = 4f with f ∈ {1, 2}, A ∩ L = Sp4 (2f ) and B ∩ L = O−
4 (4 ).
Case 1. Suppose that (A ∩ L, B ∩ L) is as described in (i). Let R = rad(A) and
S = soc(A/R). Write A/R = S.O and q = pf with p prime. Then S = PSL2 (q 2 ),
and we see from Lemma 6.6(c) that H 6 R.(((q 4 − 1)/((q 2 − 1)(2, q − 1)).2).O or
R.(q 2 :((q 2 − 1)/(2, q − 1))).O as in row 1 or row 2 of the table there. In particular,
|H| divides 2(q 2 + 1)|R||O|/(2, q − 1) or q 2 (q 2 − 1)|R||O|/(2, q − 1). Since |R||O| =
|A|/|S| = 2|A|/|A ∩ L| divides f (2, q − 1), we thus obtain that |H| divides 4f (q 2 + 1)
or 2f q 2 (q 2 − 1). Moreover, |L|/|B ∩ L| = (q 4 − 1)/(q − 1) divides |H| according to
the factorization G = HB. Hence (q 4 − 1)/(q − 1) divides 4f (q 2 + 1) or 2f q 2 (q 2 − 1),
which is impossible.
Case 2. Suppose that (A ∩ L, B ∩ L) is as described in (ii). Let R = rad(A) and
S = soc(A/R). Write A/R = S.O and q = pf with p prime. Then S = PSL2 (q 2 ),
and we see from Lemma 6.6(c) that H 6 R.(((q 4 − 1)/((q 2 − 1)(2, q − 1)).2).O or
R.(q 2 :((q 2 − 1)/(2, q − 1))).O as in row 1 or row 2 of the table there. In particular,
|H| divides 2(q 2 + 1)|R||O|/(2, q − 1) or q 2 (q 2 − 1)|R||O|/(2, q − 1). Since |R||O| =
|A|/|S| = 2|A|/|A ∩ L| = 2|G|/|L| divides 2f (2, q − 1), we thus obtain that |H|
divides 4f (q 2 + 1) or 2f q 2 (q 2 − 1). Moreover, |L| divides |H||B ∩ L| due to the
factorization G = HB. Hence
(7.2)
|L|/|B ∩ L| divides 4f (q 2 + 1) or 2f q 2(q 2 − 1).
2 2
∼ +
If B ∩ L ∼
= O−
4 (q), then |L|/|B ∩ L| = q (q − 1)/2. If B ∩ L = O4 (q), then
|L|/|B ∩ L| = q 2 (q 2 + 1)/2. Thus by (7.2), either
2 2
B∩L∼
= O−
4 (q) and |H| divides 2f q (q − 1), or
q = 4 and B ∩ L = O+
4 (4).
Computation in Magma[6] shows that the latter gives no factorization G = HB
with H solvable. Thus we have the former, which indicates p = 2 since row 1 of
Table A.2 is now excluded. Also, H ∩ L 6 P1 [PSL2 (q 2 ).2] since H 6 R.(q 2 :((q 2 −
1)/(2, q − 1))).O as row 2 of the table in Lemma 6.6(c). Combining the condition
that |H| divides 2f q 2(q 2 − 1) with the conclusion of the factorization G = HK that
|L| divides |H||K ∩ L|, we deduce that |L| divides 2f q 2(q 2 − 1)|K ∩ L|. That is to
say, q 4 (q 4 − 1)(q 2 − 1) divides 2f q 2 (q 2 − 1)|K ∩ L|, or equivalently, q 2 (q 4 − 1) divides
2f |K ∩ L|. Since K ∩ L 6 B ∩ L ∼
= SL2 (q 2 ).2, it then follows that SL2 (q 2 ) . K ∩ L.
Note that A ∩ L lie in two possible Aschbacher classes of subgroups of L, namely,
C3 and C8 . We distinguish these two classes in the next two paragraphs.
Assume that A ∩ L = Sp2 (q 2 ).2 is a C3 subgroup of L. Then B ∩ L = O−
4 (q),
and P1 [A ∩ L] 6 P2 [L]. Hence H ∩ L 6 P2 [L]. Since P2 [L] = q 3 :GL2 (q), we
have H ∩ L 6 q 3 :M for some maximal solvable subgroup M of GL2 (q). From
the factorization G = HB we deduce that |L| divides f |B ∩ L||H ∩ L|, that is,
q 4 (q 4 − 1)(q 2 − 1) divides 2f q 2 (q 4 − 1)|H ∩ L|. Consequently, q 2 (q 2 − 1) divides
2f |H ∩ L|, which further yields that (q 2 − 1) divides 2f |M|. This implies that
M = (q 2 − 1):2, and thus H ∩ L 6 q 3 :(q 2 − 1).2. Therefore, part (a) of Lemma 7.3
appears.
52
7. PROOF
2
Assume that A ∩ L = O−
4 (q) is a C8 subgroup of L. Then B ∩ L = Sp2 (q ).2,
3
and P1 [A ∩ L] 6 P1 [L]. Hence H ∩ L 6 P1 [L]. Since P1 [L] = q :GL2 (q), we
have H ∩ L 6 q 3 :M for some maximal solvable subgroup M of GL2 (q). According
to the factorization G = HB we have that |L| divides f |B ∩ L||H ∩ L|, that is,
q 4 (q 4 − 1)(q 2 − 1) divides 2f q 2 (q 4 − 1)|H ∩ L|. Consequently, q 2 (q 2 − 1) divides
2f |H ∩L|, which yields that (q 2 −1) divides 2f |M|. This implies that M = (q 2 −1):2,
and thus H ∩ L 6 q 3 :(q 2 − 1).2. Therefore, part (b) of Lemma 7.3 appears.
Case 3. Consider the pair (A ∩ L, B ∩ L) as described in (iii). If q is even, then
since H ∩ L 6 P1 [L], arguing as the second paragraph of Case 1 leads to part (b) of
Lemma 7.3. Thus assume q = pf with odd prime p. Let X be a maximal solvable
subgroup of
A ∩ L = P1 = q 1+2 :((q − 1) × Sp2 (q))/2 = q 1+2 :(q − 1).PSp2 (q)
containing H ∩L. Then X = q 1+2 :((q−1)×Y )/2 for some maximal solvable subgroup
Y of Sp2 (q). By Lemma 6.6(c), Y /C2 6 PSp2 (q) lies in the following table:
row
1
2
3
4
5
Y /C2
Dq+1
q:((q − 1)/2)
A4
S4
S4
q
odd
odd
5, 11
7, 23
9
Moreover, the factorization G = HB together with H ∩ L 6 X implies that |L|
divides |X||B ∩ L||Out(L)|, that is, q + 1 divides 2qf |Y |. Thereby checking the
above table we conclude that only its rows 1 and 3–4 are possible. Notice that |L|
divides |X||K ∩ L||Out(L)| due to the factorization G = HK. If row 1 appears,
then X = q 1+2 :C(q2 −1)/2 .C2 and it follows that q(q 4 − 1) divides 4f |K ∩ L|. This
implies that PSp2 (q 2 ) E K ∩ L, as part (c) of Lemma 7.3. If row 3 appears, then
X = q 1+2 :(q − 1).A4 with q ∈ {5, 11} and it follows that q(q 4 − 1)(q + 1) divides
48f |K ∩ L|. This implies that PSp2 (q 2 ) 6 K ∩ L, and leads to part (d) of Lemma
7.3. Similarly, if row 4 appears, then X = q 1+2 :(q − 1).S4 with q ∈ {7, 23} and it
follows that q(q 4 −1)(q + 1) divides 96f |K ∩L|. This implies that PSp2 (q 2 ) 6 K ∩L,
as part (d) of Lemma 7.3.
Case 4. Consider the pair (A∩L, B∩L) as described in (iv). Since H∩L 6 P2 [L],
arguing as the third paragraph of Case 1 leads to part (a) of Lemma 7.3.
Case 5. Suppose that (A ∩ L, B ∩ L) is as described in (v). If f = 1, then
|L|/|B ∩ L| is divisible by 5 · 17, but Lemma 6.6(c) implies that |H| is not divisible
by 5 · 17, contrary to the factorization G = HB. If f = 2, then |L|/|B ∩ L| is
divisible by 17 · 257, but Lemma 6.6(c) implies that |H| is not divisible by 17 · 257,
contrary to the factorization G = HB. Hence this case is not possible.
Case 6. Finally, consider the pair (A ∩ L, B ∩ L) as described in (vi). If f = 1,
then the factorization G = HB requires |H ∩ L| to be divisible by 15, contrary to
the fact that A ∩ L = Sp4 (2) ∼
= S6 does not possess any solvable subgroup of order
divisible by 15. If f = 2, then |L|/|B ∩ L| is divisible by 17, but Lemma 6.6(d)
implies that |H| is not divisible by 17, contradicting the factorization G = HB.
Hence this case is not possible either. We thus complete the proof.
7.2. SYMPLECTIC GROUPS
53
7.2.2. Symplectic groups of dimension at least six. We first embark on
the infinite families for the maximal factorization G = AB, namely, rows 1–12 of
Table A.2.
Lemma 7.4. Let m > 3, q = pf with p prime and (m, q) 6= (3, 2). If the
maximal factorization G = AB lies in rows 1–12 of Table A.2 (interchanging A, B
if necessary), then each primitive prime divisor of p2f m − 1 divides |B ∩ L|.
Proof. By Lemma 2.9, it suffices to show that any primitive prime divisor
r of p2f m − 1 does not divide |H ∩ L|. Suppose to the contrary that r divides
|H ∩ L|. Then r divides |A ∩ L|. Inspecting rows 1–12 of Table A.2, we conclude
that A ∩ L = PSp2a (q b ).b with ab = m and b prime or SO−
2m (q) with q even or G2 (q)
−
∼
with m = 3. Notice that SO6 (q) = PSU4 (q).2 if q is even. Since r divides |H|, we
deduce from Lemma 6.6(a), (b) and (f) that either A ∩ L = PSp2a (q b ).b with ab = m
and b prime or L = Sp6 (8) and A ∩ L = SO−
6 (8).
Suppose that L = Sp6 (8) and A ∩ L = SO−
6 (8). Then one sees from Table A.2
that B ∩ L = Sp2 (83 ).3, P3 or G2 (8). For these candidates of B ∩ L, it holds that
|L|/|B ∩ L| is divisible by 13. However, Lemma 6.6(f) shows that |H| is not divisible
by 13, contrary to the factorization G = HB. Therefore, A ∩ L = PSp2a (q b ).b with
ab = m and b prime.
Write R = rad(A) and A/R = PSp2a (q b ).O. Since r divides |H|, we deduce
from Lemma 6.6(c) (with ℓ = 2 there) and (d) that a = 1, b = m is prime and
H 6 R.D2(qm +1)/(2,q−1) .O. This together with the equality |R||O| = |A|/|PSp2 (q m )|
yields that |H| divides
2(q m + 1)|A|
2m(q m + 1)|A|
2m(q m + 1)|G|
=
=
,
(2, q − 1)|PSp2 (q m )|
(2, q − 1)|A ∩ L|
(2, q − 1)|L|
whence |H| divides 2f m(q m + 1). Then from the factorization G = HB we obtain
that
(7.3)
|L|/|B ∩ L| divides 2f m(q m + 1).
As seen from Table A.2, the possibilities for B ∩ L are:
P1 , SO+
2m (q) with q even,
SO−
2m (q) with q even.
We proceed by these three cases for B ∩ L.
Case 1: B ∩ L = P1 . Then (q 2m − 1)/(q − 1) divides 2f m(q m + 1) by (7.3), that
is, the divisibility in
(7.4)
q m − 1 2f m(q − 1).
It follows by (7.4) that (m, q) 6= (3, 4). Hence pf m − 1 has a primitive prime divisor
s by Zsigmondy’s theorem as m is prime. However, since 2f m(q − 1) is not divisible
by s, (7.4) does not hold, which is a contradiction.
m m
m
Case 2: B ∩ L = SO+
2m (q) with p = 2. Then q (q + 1)/2 divides 2f m(q + 1)
by (7.3), that is, 2f m 4f m. This forces f m = 4, contrary to the condition that
m > 3 is prime.
m m
m
Case 3: B ∩ L = SO−
2m (q) with p = 2. Then q (q − 1)/2 divides 2f m(q + 1)
m m
m
m m
m
m
by (7.3), that is, q (q − 1) 4f m(q + 1). Since (q , q + 1) = (q − 1, q + 1) = 1,
this implies q m (q m − 1) 4f m, which is impossible.
54
7. PROOF
Lemma 7.5. Let m > 3, q = 2f and (m, q) 6= (3, 2). If the maximal factorization
G = AB lies in rows 1–12 of Table A.2 (interchanging A, B if necessary) with
B ∩ L = SO−
2m (q), then H 6 Pm [G].
Proof. By Proposition 6.9, the maximal factor A has at most one unsolvable
composition factor. Consequently, A ∩ L 6= Spm ≀ S2 . Then in rows 1–12 of Table
A.2, the possibilities for A ∩ L are:
Sp2a (q b ).b with ab = m and b prime, Pm ,
1/2
O+
) with q ∈ {4, 16}.
2m (q) with q ∈ {2, 4}, Sp2m (q
We proceed by these four cases for A ∩ L.
Case 1: A ∩ L = Sp2a (q b ).b with ab = m and b prime. In this case, A =
Sp2a (q b ).Cbe , where Ce = G/L 6 Cf . By Lemma 6.3, H is a solvable nontrivial
factor of A.
Assume H Pa [A], a parabolic subgroup of A. By Hypothesis 6.5, we see from
Theorem 1.1 that either a = 1 and |H| divides 2(q m +1)me, or a = 2 and H 6 P1 [A].
For the latter, the observation P1 [A] 6 Pm/2 [G] yields that H 6 Pm/2 [G], and thus
the factorization G = Pm/2 [G]B arises, contrary to Theorem 2.15. Hence we have
the former, and in particular |H| divides 2(q m + 1)mf . Then the factorization
G = HB implies that |L|/|B ∩ L| = q m (q m − 1)/2 divides 2(q m + 1)mf , that is,
q m (q m − 1) 4f m(q m + 1). Since (q m , q m + 1) = (q m − 1, q m + 1) = 1, this derives
f q m (q m − 1) 4f m, which is impossible.
Therefore, H 6 Pa [A], then the observation Pa [A] 6 Pm [G] implies that H 6
Pm [G], as the lemma asserts.
Case 2: A ∩ L = Pm . Then H 6 A = Pm [G] as stated in the lemma.
+
Case 3: A ∩ L = O+
2m (q) with q ∈ {2, 4}. In this case, A = O2m (q).Ce , where
Ce 6 Cf . By Lemma 6.3, H is a solvable nontrivial factor of A.
Assume H Pm [A]. Viewing O+
6 (q) = PSL4 (q).2, we conclude from Hypothesis
6.5 and Theorem 1.1 that one of the following holds.
(i) m = 4 and H 6 P1 [A].
(ii) (m, q) = (4, 2) and H 6 S9 .
(iii) (m, q) = (3, 4) and |H| divides 4(44 − 1)|Out(PSL4 (4))| = 24 · 3 · 5 · 17.
If (i) holds, then the observation P1 [A] 6 P1 [G] yields that H 6 P1 [G], and thus the
factorization G = P1 [G]B arises, contrary to Theorem 2.15. For (ii), computation
in Magma[6] shows that it gives no factorization G = HB with H solvable. Thus
we have (iii), and in particular |H| is not divisible by 7. However, the factorization
G = HB requires |H| to be divisible by |L|/|B ∩ L|, and thus |H| is divisible by 7,
a contradiction.
Consequently, H 6 Pm [A], and it follows that H 6 Pm [G], as the lemma states.
Case 4: A ∩ L = Sp2m (q 1/2 ) with q ∈ {4, 16}. If (m, q) = (3, 4), then Hypothesis
6.5 in conjunction with Theorem 1.1 implies that |H| is not divisible by 63, contrary
to the condition that 25 · 63 = |L|/|B ∩ L| divides |H|. Thus (m, q) 6= (3, 4), and it
follows that 2f m −1 has a primitive prime divisor r. Since r divides |G|/|B|, r should
also divide |H| as the factorization G = HB requires. However, by Hypothesis 6.5
and Theorem 1.1, r does not divide |H|, which is a contradiction.
7.2. SYMPLECTIC GROUPS
55
Now we finish the analysis for maximal factorizations G = AB from rows 1–12
of Table A.2.
Lemma 7.6. If m > 3 and the maximal factorization G = AB lies in rows 1–12
of Table A.2 (A, B may be interchanged), then one of the following holds.
(a) q is even, H ∩ L 6 q m(m+1)/2 :(q m − 1).m < Pm , and Ω−
2m (q) E K ∩ L ≤
−
O2m (q).
+
∼
(b) G = PSp6 (2), H = 31+2
+ :2.M with M 6 S4 , and A8 E K 6 O6 (2) = S8 ;
moreover, (M, A, K) lies in the following table:
M
A
K
2 2 , 4 O−
(2),
G
(2)
S8
2
6
−
D8
O6 (2), G2 (2) A8 , S8
A4
O−
S8
6 (2)
−
S4
O6 (2)
A8 , S8
1+4
(c) G = PSp6 (3).2, A = P1 [G], K = B = PSL2 (27):6, H = 31+4
.AGL1 (5)
+ :2
1+4 1+4
and H ∩ L = 3+ :2 .D10 .
Proof. Let q = pf with p prime. For (m, q) = (3, 2), computation in Magma[6]
verifies the lemma directly. Thus we assume (m, q) 6= (3, 2) henceforth, and it follows
that p2f m − 1 has a primitive prime divisor r. By Lemma 7.4, r divides |B ∩ L|.
Then according to Table A.2, the possibilities for B ∩ L are:
PSp2a (q b ).b with ab = m and b prime,
O−
G2 (q) with m = 3 and q even.
2m (q) with q even,
We proceed by these three cases for B ∩ L.
Case 1. Suppose that B ∩ L = PSp2a (q b ).b, where ab = m and b is prime. By
Proposition 6.9, A has at most one unsolvable composition factor. Then inspecting
Table A.2, we obtain the following candidates for A ∩ L:
−
P1 , O+
2m (q) with q even, O2m (q) with q even, N2 with b = 2 and q = 2.
If (m, q) 6= (4, 2), then pf (2m−2) − 1 has a primitive prime divisor s by Zsigmondy’s
theorem. Let s = 33 ·7 if (m, q) = (4, 2). Since s divides |L|/|B ∩L|, the factorization
G = HB requires |H| to be divisible by s.
First assume that A ∩ L = P1 or A ∩ L = N2 with b = q = 2. Then A has the
unique unsolvable composition factor PSp2m−2 (q). Since s divides |H|, we conclude
from Lemma 6.6(d) that (m, q) = (3, 3), A ∩ L = P1 and H 6 rad(A).(24 :AGL1 (5)).
In this situation, computation in Magma[6] shows that part (c) of Lemma 7.6
appears.
Next assume that A ∩ L = O+
2m (q) with p = 2. If m > 4, then there is a
contradiction that |H| is not divisible by s by Lemma 6.6(h). Thus m = 3, which
indicates that A has the unique unsolvable composition factor PSL4 (q). Since |H|
is divisible by s, we deduce from Lemma 6.6(c) that |H| divides 8f (q 4 − 1)/(q − 1).
Therefore, the factorization G = HB implies that |L| divides |H||PSp2 (q 3 ).3| and
thus 8f (q 2 + 1)(q + 1)|PSp2 (q 3 ).3|. This turns out to be that q 6 (q 2 − 1)(q − 1) 24f ,
which is impossible.
Now assume that A ∩ L = O−
2m (q) with p = 2. By Lemma 6.6(b), m = 3
and thus A has the unique unsolvable composition factor PSU4 (q). It then derives
56
7. PROOF
from Lemma 6.6(f) that either |H| divides 4f q 4 (q 4 − 1)/(q + 1) or q = 8 and |H|
is not divisible by 13. The latter is contrary to the factorization G = HB since
|L|/|B ∩ L| is divisible by 13. Consequently, |H| divides 4f q 4 (q 4 − 1)/(q + 1).
Thereby the factorization G = HB implies that |L| divides |H||PSp2 (q 3 ).3| and thus
4f q 4 (q 2 + 1)(q − 1)|PSp2 (q 3 ).3|. This turns out to be that q 2 (q 2 − 1)(q + 1) 12f ,
which is impossible.
Case 2. Suppose that B ∩ L = O−
2m (q) with p = 2. By Lemma 7.5 we have
H 6 Pm [G]. Let X = Pm [G] be a maximal subgroup of G containing H. Then
X = q m(m+1)/2 :GLm (q).Ce and B = O−
2m (q).Ce , where Ce 6 Cf . If (m, q) 6= (3, 4) or
(6, 2), then 2f m − 1 has a primitive prime divisor s by Zsigmondy’s theorem. Let
s = 7 if (m, q) = (3, 4) or (6, 2). Since s divides |L|/|B ∩ L|, we conclude by Lemma
2.9 that s also divides |H|. This together with Lemma 6.6(c) implies that
(7.5)
H 6 (q m(m+1)/2 :(q m − 1).m).Ce < X,
and thus H ∩ L 6 q m(m+1)/2 :(q m − 1).m < Pm .
It derives from the factorization G = HK that B = (H ∩ B)K. Note that H ∩ B
is solvable and K is unsolvable. Then by Hypothesis 6.5 and Theorem 1.1, either
B (∞) E K, or m = 3 and B has the unique unsolvable composition factor PSU4 (q)
with |K| not divisible by any primitive prime divisor of 24f − 1. In light of (7.5),
the latter indicates that |H||K| is not divisible by any primitive prime divisor of
24f − 1, contradicting the factorization G = HK. Thus we have B (∞) E K, which
leads to Ω−
2m (q) E K ∩ L. Therefore, part (a) of Lemma 7.6 appears.
Case 3. Suppose that m = 3 and B ∩ L = G2 (q) with p = 2. In this case, we
have f > 1 by our assumption that (m, q) 6= (3, 2). By Proposition 6.9, A has at
most one unsolvable composition factor. Then inspecting Table A.2, we obtain the
following candidates for A ∩ L:
−
O+
6 (q), O2m (q), P1 .
From the factorization G = HB we know that |L|/|B ∩ L| divides |H|, that is,
(7.6)
q 3 (q 4 − 1) |H|.
By Zsigmondy’s theorem, q 4 − 1 has a primitive prime divisor s. Then s divides
|H| as a consequence of (7.6). Accordingly, we exclude the candidate A ∩ L = P1
by Lemma 6.6(d) since in this situation A has the unique unsolvable composition
factor PSp4 (q).
Assume A ∩ L = O+
6 (q). Then A has the unique unsolvable composition factor
PSL4 (q). Since s divides |H|, we deduce from Lemma 6.6(c) that |H|2 divides 8f .
This together with (7.6) yields that q 3 8f , which is impossible.
−
Now assume A ∩ L = O−
6 (q). Note that A = Ω6 (q).C2f /e = SU4 (q).C2f /e , where
e 2f . Let V be a vector space over GF(q 2 ) of dimension 4 equipped with a nondegenerate unitary form β, and u1 , u2 , u3, u4 be a orthonormal basis of V . Let τ be
the semilinear transformation of V fixing the basis vectors u1 , u2, u3 , u4 such that
e
(λv)τ = λp v for any v ∈ V and λ ∈ GF(q 2 ). Then we can write A = S:T with
S = SU(V, β) and T = hτ i = C2f /e such that A∩B = (S∩B):T and S∩B = SU3 (q) is
the stabilizer of u1 in S by [42, 5.2.3(b)]. It derives from G = HB that A = H(A∩B)
and thus A = H(S ∩ B)T . Denote the stabilizer of u1 in A by N. Take e1 , e2 , f1 , f2
7.3. UNITARY GROUPS
57
to be a basis of V and µ ∈ GF(q 2 ) such that e2 + µf2 = u1 and
β(ei , ej ) = β(fi , fj ) = 0,
β(ei , fj ) = δi,j
for any i, j ∈ {1, 2}. Then A = HN since (S ∩ B)T ⊆ N. By Hypothesis 6.5,
this factorization should satisfy Theorem 1.1, that is, H x 6 M with some x ∈ A
and maximal solvable subgroup M of A stabilizing he1 , e2 i. However, this gives
A = H x N = MN, contrary to Lemma 5.3.
For rows 13–16 of Table A.2, since the case L = PSp4 (3) has been treated in
Lemma 7.3, we only need to consider rows 14–16.
Lemma 7.7. The maximal factorization G = AB does not lie in rows 14–16 of
Table A.2 (A, B may be interchanged).
Proof. By Lemma 6.6(b), A ∩ L 6= O−
8 (2). We now deal with the remaining
candidates in rows 14–16 of Table A.2.
Case 1. L = PSp6 (3), A ∩ L = PSL2 (13) and B ∩ L = P1 . Since |L|/|B ∩ L| =
6
(3 − 1)/(3 − 1), we deduce from G = HB that |H| is divisible by 7 · 13. However,
Lemma 6.6(c) shows that |H| is not divisible by 7 · 13, a contradiction.
Case 2. L = PSp6 (3), A ∩ L = P1 and B ∩ L = PSL2 (13). In order that B is
a maximal subgroup of G such that B ∩ L = PSL2 (13), we have G = L (see [10]).
7
Consequently, A = A ∩ L = P1 = 31+4
+ :Sp4 (3). Since |L|/|B ∩ L| is divisible by 2 · 5,
we deduce from G = HB that |H| is divisible by 27 · 5. However, Sp4 (3) has no
solvable subgroup of order divisible by 27 · 5, which indicates that A has no solvable
subgroup of order divisible by 27 · 5. Hence this case is not possible either.
Case 3. L = Sp8 (2), A ∩ L = S10 and B ∩ L = O−
8 (2). In this case, G = L
and |G|/|B| = 120. Hence |H| is divisible by 120 according to the factorization
G = HB. Applying Hypothesis 6.5 to the factorization A = H(H ∩ B), we view
from Proposition 4.3 that H is a transitive subgroup of S10 . However, S10 does not
have a solvable transitive subgroup of order divisible by 120, which is a contradiction.
Case 4. L = Sp8 (2), A ∩ L = PSL2 (17) and B ∩ L = O+
8 (2). In this case,
G = L and A ∩ B = D18 (see [42, 5.1.9]). Since |G|/|B| is divisible by 23 · 17, we
deduce from G = HB that |H| is divisible by 23 · 17. Hence H must be C17 :C8 as
a solvable subgroup of PSL2 (17). However, this leads to H ∩ (A ∩ B) = 2 and thus
|H||A ∩ B| < |H ∩ (A ∩ B)||A|, contradicting the factorization A = H(A ∩ B).
Case 5. L = Sp8 (2), A ∩ L = O+
8 (2) and B ∩ L = PSL2 (17). From the
factorization G = HB we deduce that |H| is divisible by |G|/|B| = 212 · 33 · 52 · 7,
+
12
3
2
but the only proper subgroup of O+
8 (2) with order divisible by 2 · 3 · 5 · 7 is Ω8 (2),
which is unsolvable. Hence this case is impossible too.
7.3. Unitary Groups
This section is devoted to the proof of the proposition below, which confirms
Theorem 1.1 for unitary groups under Hypothesis 6.5.
Proposition 7.8. Let G be an almost simple group with socle L = PSUn (q),
where n > 3. If G = HK is a nontrivial factorization with H solvable, K unsolvable
and HL = KL = G, then under Hypothesis 6.5, one of the following holds.
2
(a) n = 2m, H ∩ L 6 q m :((q 2m − 1)/((q + 1)(n, q + 1))).m < Pm , and
SU2m−1 (q) E K ∩ L 6 N1 .
58
7. PROOF
(b) L = PSU3 (q) with q =∈ {3, 5} or PSU4 (q) with q ∈ {2, 3, 8}, and (L, H ∩
L, K ∩ L) is as described in Table 7.3.
Table 7.3.
row
1
2
3
4
5
6
L
PSU4 (2)
PSU4 (2)
PSU3 (3)
PSU3 (5)
PSU4 (3)
H ∩L6
33 :S4
31+2
+ :2.A4
31+2
+ :8
51+2
+ :8
34 :D10 , 34 :S4 ,
34 :32 :4, 31+4
+ .2.S4
PSU4 (8) 513:3
K∩L
24 :A5
A5 , 24 :A5 , S5 , A6 , S6
PSL2 (7)
A7
PSL3 (4)
212 :SL2 (64).7
We start the proof of the proposition by taking A, B to be core-free maximal
subgroups of G containing H, K, respectively (such A and B are existent by Lemma
2.18). The maximal factorizations G = AB are listed in Table A.3 by Theorem
2.15. If n is prime, then the proposition holds clearly by Theorem 3.5. We thus
assume that n is a composite number. For L = PSU4 (2) ∼
= PSp4 (3), it is seen
from Proposition 6.1 that Proposition 7.8 is true. For L = PSU4 (3), computation
in Magma[6] shows that part (a) or (b) of Proposition 7.8 holds. Thus assume
(n, q) 6∈ {(4, 2), (4, 3)} in the following.
Under the above assumptions, we only need to consider rows 1–4 and 10–12
of Table A.3. After analysis for these rows below, the proposition will follow by
Lemmas 7.10 and 7.11.
Lemma 7.9. If (n, q) 6∈ {(4, 2), (4, 3)} and the maximal factorization G = AB lies
in rows 1–4 of Table A.3 (interchanging A, B if necessary), then either H ∩ L 6 Pm
and B ∩ L = N1 or (L, H ∩ L, K ∩ L) is as described in row 6 of Table 7.3.
Proof. As in rows 1–4 of Table A.3, n = 2m > 4 and either A ∩ L or B ∩ L is
N1 .
Case 1. Assume that A∩L = N1 . Then A has the unique unsolvable composition
factor PSU2m−1 (q). By Lemma 6.6(e), we have m = 2 and q ∈ {5, 8} since (n, q) 6=
(4, 3). Consequently, neither row 3 nor 4 of Table A.3 appears. Now B ∩ L = P2
or PSp4 (q).a with a 6 2 as in row 1 or row 2 of Table A.3. One checks directly for
q ∈ {5, 8} that r = (q 2 − q + 1)/3 is a prime number dividing |L|/|B ∩ L|. It follows
then from the factorization G = HB that r divides |H|, which leads to q = 8 and
H ∩ L 6 GU1 (83 ):3 = 513:3 by Lemma 6.6(e). In in turn implies that |L| divides
513 · 3|K ∩ L||Out(L)|, that is, 217 · 32 · 5 · 72 · 13 divides |K ∩ L|. Thereby we obtain
K ∩ L = P2 = 212 :SL2 (64).7 as in row 6 of Table 7.3.
Case 2. Assume that B ∩ L = N1 = ˆGU2m−1 (q) and one of the following four
cases appears.
(i) A ∩ L = Pm .
(ii) A ∩ L = PSp2m (q).a with a = (2, q − 1)(m, q + 1)/(n, q + 1) 6 2.
(iii) q = 2 and A ∩ L = ˆSLm (4).2.
(iv) q = 4 and A ∩ L = ˆSLm (16).3.2.
7.3. UNITARY GROUPS
59
Let q = pf with p prime. For case (i), the lemma already holds. By Zsigmondy’s
theorem, pf n − 1 has a primitive prime divisor r if (n, q) 6= (6, 2). Let r = 7 if
(n, q) = (6, 2). Since G = HB and r divides |L|/|B ∩ L|, it follows that r divides
|H| by Lemma 2.9.
First suppose that case (ii) appears. Then from Lemma 6.6(d) we conclude that
(n, q) = (6, 2) and H 6 P3 [A]. Note that P3 [PSp6 (2)] 6 P3 [PSU6 (2)]. We thereby
obtain H ∩ L 6 P3 [PSU6 (2)] as the lemma asserts.
Next suppose that case (iii) or (iv) appears. Then p = 2 and f ∈ {1, 2}. Write
R = rad(A) and A/R = PSLm (q 2 ).O. Noticing that r divides |H|, we conclude from
Lemma 6.6(c) that
q 2m − 1
H 6 R.
.m .O.
(q 2 − 1)(m, q 2 − 1)
In particular, |H|2 divides m(|R||O|)2. This together with the equality |R||O| =
|A|/|PSLm (q 2 )| implies that |H|2 divides 2m(|A|/|A ∩ L|)2 = 2m|G/L|2 and thus
divides 4f m. However, the factorization G = HB indicates that |H|2 is divisible by
(|G|/|B|)2. Hence q 2m−1 = (|G|/|B|)2 divides 4f m, which is impossible as (n, q) 6=
(4, 2).
Lemma 7.10. If (n, q) 6∈ {(4, 2), (4, 3)} and the maximal factorization G = AB
lies in rows 1–4 of Table A.3 (interchanging A, B if necessary), then either part (a)
of Proposition 7.8 holds or (L, H ∩ L, K ∩ L) is as described in row 6 of Table 7.3.
Proof. By Lemma 7.9, we may assume that A ∩ L = Pm with n = 2m, and
B ∩ L = N1 . Let d = (n, q + 1) and q = pf with p prime. Note that Pm =
2
q m :(SLm (q 2 ).(q − 1)/d) and N1 = GUn−1 (q)/d. Then A has the unique unsolvable
composition factor PSLm (q 2 ). By Zsigmondy’s theorem, (q 2 , m) has a primitive
prime divisor r. It derives from the factorization G = HB that r divides |H| since
r divides |L|/|B ∩ L|. Hence by Lemma 6.6(c),
2m
q 2m − 1
−1
m2 q
H 6 rad(A).
.m .O = q .
.m .(G/L)
(q 2 − 1)(m, q 2 − 1)
(q + 1)d
2
with A/rad(A) = PSLm (q 2 ).O. Accordingly, we have H ∩ L 6 q m :((q 2m − 1)/((q +
1)d)).m.
Suppose m = 2. Then the conclusion H ∩ L 6 q 4 :((q 4 − 1)/((q + 1)d)).2 above
implies that |L| divides
2q 4 (q 4 − 1)|K ∩ L||Out(L)|
= 4f q 4 (q 2 − 1)(q − 1)|K ∩ L|.
(q + 1)d
due to the factorization G = HK. It follows that q 2 (q 3 + 1)(q 2 − 1)(q + 1) divides
4df |K ∩ L|, and thus
|B ∩ L|
4df |B ∩ L|
6 2 3
= 4f q.
|K ∩ L|
q (q + 1)(q 2 − 1)(q + 1)
Note that each proper subgroup of SU3 (q) has index greater than 4f q by Theorem 2.6. We thereby obtain SU3 (q) E K ∩ L from Lemma 2.5, whence part (a) of
Proposition 7.8 appears.
60
7. PROOF
Now suppose m > 3. Let R = rad(B), B = B/R, H ∩ B = (H ∩ B)R/R and
K = KR/R. By Lemma 2.16, B is almost simple with socle PSU2m−1 (q). From
G = HK we deduce B = (H ∩ B)K and further B = H ∩ B K. Note that H ∩ B is
solvable, and B does not have any solvable nontrivial factor by Hypothesis 6.5. We
then have PSU2m−1 (q) E K. Therefore, K ∩ L contains (B ∩ L)(∞) = SU2m−1 (q),
and so SU2m−1 (q) E K ∩ L 6 B ∩ L = N1 as part (a) of Proposition 7.8. This
completes the proof.
Lemma 7.11. The maximal factorization G = AB does not lie in rows 10–12 of
Table A.3 (interchanging A, B if necessary).
Proof. For the factorization G = AB in these rows, A has a unique unsolvable
composition factor S. By Lemma 6.4 and Hypothesis 6.5, S 6= PSU5 (2), J3 , PSU7 (2)
or PSU11 (2). Thus we have the following three cases for (L, A ∩ L, B ∩ L).
(i) L = PSU6 (2), A ∩ L = PSU4 (3).2 and B ∩ L = N1 .
(ii) L = PSU6 (2), A ∩ L = M22 and B ∩ L = N1 .
(iii) L = PSU12 (2), A ∩ L = Suz and B ∩ L = N1 .
First assume that case (i) appears. Then by Lemma 6.6(f), |H| is not divisible
by 7. However, |G|/|B| is divisible by 7, which is a contradiction to the factorization
G = HB.
Next assume the case (ii) appears. Then by Lemma 6.4 and Hypothesis 6.5, |H|
is not divisible by 7 as row 5 of Table 4.3 suggests. However, |G|/|B| is divisible by
7, contrary to the factorization G = HB.
Finally assume the case (iii) appears. Then again by Lemma 6.4 and Hypothesis
6.5, |H| is not divisible by 7 as row 13 of Table 4.3 suggests. However, |G|/|B| is
divisible by 7, contrary to the factorization G = HB.
7.4. Orthogonal groups of odd dimension
In this section we verify Theorem 1.1 for orthogonal groups of odd dimension
under Hypothesis 6.5. First of all, we compute in Magma[6] all the nontrivial
factorizations of G with soc(G) = Ω7 (3), and list them in the following proposition.
Proposition 7.12. Let G be an almost simple group with socle L = Ω7 (3).
Then the following three cases give all the nontrivial factorizations G = HK with
H solvable.
(a) G = Ω7 (3), H < Pk [Ω7 (3)], and (G, H, K, k) lies in rows 1–2 of Table 7.4;
moreover, Pk [Ω7 (3)] is the only maximal subgroup of G containing H.
(b) G = SO7 (3), and L = (H ∩ L)(K ∩ L); in particular, H = (H ∩ L).O1 and
K = (K ∩ L).O2 , where Oi 6 C2 for i ∈ {1, 2} and O1 O2 = C2 .
(c) G = SO7 (3), L 6= (H ∩ L)(K ∩ L), H < P3 [SO7 (3)], K < N−
1 [SO7 (3)] and
(G, H, K, k) lies in row 3 of Table 7.4; moreover, P3 [SO7 (3)] is the only
maximal subgroup of G containing H.
Now we state the main result of this section.
Proposition 7.13. Let G be an almost simple group with socle L = Ω2m+1 (q)
with m > 3 and q odd. If G = HK is a nontrivial factorization with H solvable,
K unsolvable and HL = KL = G, then under Hypothesis 6.5, one of the following
holds.
7.4. ORTHOGONAL GROUPS OF ODD DIMENSION
61
Table 7.4.
row
1
2
3
G
Ω7 (3)
Ω7 (3)
SO7 (3)
H
35 :24 .5, 35 :24 .D10 , 35 :24 .AGL1 (5)
33+3 :13, 33+3 :13:3
33+3 :13:2, 33+3 :13:6
K
G2 (3)
N−
1 , Sp6 (2)
−
Ω6 (3).2
k
1
3
3
−
(a) H ∩ L 6 (q m(m−1)/2 .q m ):((q m − 1)/2).m < Pm , and Ω−
2m (q) E K ∩ L 6 N1 .
(b) L = Ω7 (3) or Ω9 (3), and (L, H ∩ L, K ∩ L) is as described in Table 7.5.
Table 7.5.
row
1
2
3
L
Ω7 (3)
Ω7 (3)
Ω9 (3)
H ∩L6
35 :24 .AGL1 (5)
33+3 :13:3
36+4 :21+4 .AGL1 (5)
K ∩L
G2 (3)
Sp6 (2)
−
Ω−
8 (3), Ω8 (3).2
Proof. Let q = pf with odd prime p. By Lemma 2.18, we may take A, B to be
core-free maximal subgroups of G containing H, K respectively. For L = Ω7 (3), the
proposition follows immediately from Proposition 7.12. Thus we assume (m, q) 6=
(3, 3) for the rest of the proof. Under this assumption, the maximal factorizations
G = AB lies in rows 1–5 of Table A.4 (interchanging A, B if necessary) by Theorem
2.15. Further by Lemma 6.6, the possibilities for A ∩ L are:
+
N−
1 with m = 3, Pm , P1 with m = 3, N1 with m = 3,
+
N−
2 with m = 3, N2 with m = 3, PSp6 (3).a with (m, q) = (6, 3) and a 6 2.
Case 1. Suppose that A ∩ L = N−
1 with m = 3. Then A has the unique
unsolvable composition factor PSU4 (q), and B ∩ L = Pm or G2 (q) seen from Table
A.4. By Zsigmondy’e theorem, p6f − 1 has a primitive prime divisor r, and we
conclude from Lemma 6.6(f) that r does not divide |H|. It follows that r divides
|B ∩ L| according to the factorization G = HB. This implies that B ∩ L 6= Pm , and
thus B ∩ L = G2 (q). Consequently, we have SO7 (q) G by [7, Proposition 5.7.2],
since B is a maximal subgroup of G. Hence A = SU4 (q).C2f /e with e 2f .
Let V be a vector space over GF(q 2 ) of dimension 4 equipped with a nondegenerate unitary form β, and u1 , u2 , u3, u4 be a orthonormal basis of V . Let τ
be the semilinear transformation of V fixing the basis vectors u1 , u2 , u3, u4 such that
e
(λv)τ = λp v for any v ∈ V and λ ∈ GF(q 2 ). Then we can write A = S:T with
S = SU(V, β) and T = hτ i = C2f /e such that A∩B = (S ∩B):T and S ∩B = SU3 (q)
is the stabilizer of u1 in S by [42, 5.1.14]. It derives from G = HB that A = H(A∩B)
and thus A = H(S ∩ B)T . Denote the stabilizer of u1 in A by N. Take e1 , e2 , f1 , f2
to be a basis of V and µ ∈ GF(q 2 ) such that e2 + µf2 = u1 and
β(ei , ej ) = β(fi , fj ) = 0,
β(ei , fj ) = δi,j
for any i, j ∈ {1, 2}. Then A = HN since (S ∩ B)T ⊆ N. By Hypothesis 6.5,
this factorization should satisfy Theorem 1.1, that is, H x 6 M with some x ∈ A
and maximal solvable subgroup M of A stabilizing he1 , e2 i. However, this gives
A = H x N = MN, contrary to Lemma 5.3.
62
7. PROOF
Case 2. Suppose that A ∩ L = Pm . Then it is seen from Table A.4 that
m(m−1)/2 m
B ∩ L = N−
.q ):SLm (q).((q − 1)/2), we conclude that A
1 . Noticing Pm = (q
has the unique unsolvable composition factor PSLm (q). By Zsigmondy’s theorem,
p2f m − 1 has a primitive prime divisor r, and pf m − 1 has a primitive prime divisor
s. Note that s divides |H| as s divides |L|/|B ∩ L|. Then by Lemma 6.6(c), either
m
qm − 1
m(m−1)/2 m q − 1
.m .O = (q
.q ):
.m .(G/L),
H 6 R.
(q − 1)(m, q − 1)
2
or (m, q) = (4, 3) and H 6 R.(24 :5:4).O, where R, O are defined in Lemma 6.6 with
A/R = PSLm (q).O. Thus either
H ∩ L 6 (q m(m−1)/2 .q m ):((q m − 1)/2).m, or
L = Ω9 (3) and H ∩ L 6 36+4 :21+4 .AGL1 (5).
As a consequence, r does not divide |H|. This in conjunction with the factorization
G = HK yields that r divides |K| since r divides |L|.
Let R = rad(B), B = B/R, H ∩ B = (H ∩ B)R/R and K = KR/R. By
Lemma 2.16, B is almost simple with socle PΩ−
2m (q). From G = HK we deduce
B = (H ∩ B)K and further B = H ∩ B K. Moreover, H ∩ B is solvable and r
divides |K| since r divides |K|. By Hypothesis 6.5, B does not have any solvable
nontrivial factor of order divisible by r. We then conclude that PΩ−
2m (q) E K.
−
−
(∞)
Hence K ∩ L contains (B ∩ L)
= Ω2m (q), and so Ω2m (q) E K ∩ L 6 B ∩ L = N−
1.
To sum up, we have shown in this case that either part (a) of Proposition 7.13
holds, or the triple (L, H ∩ L, K ∩ L) is described in row 3 of Table 7.5.
−
+
Case 3. Suppose that A ∩ L ∈ {P1 , N+
1 , N2 , N2 } with m = 3. In this case,
we have B ∩ L = G2 (q) from Table A.4. By Zsigmondy’s theorem, p4f − 1 has a
primitive prime divisor r, and r divides |H| since r divides |L|/|B ∩ L|. Thereby we
2
conclude from Lemma 6.6(c) and (d) that A ∩ L = N+
1 and H 6 R.(((q + 1)(q +
1)/(4, q − 1)).4).O with R = rad(A) and A/R = PSL4 (q).O. This implies that |H|p
divides
(|R||O|)p = (|A|/|PSL4 (q)|)p = (|A|/|A ∩ L|)p = |G/L|p
and thus divides f . As the factorization G = HB requires |H|p to be divisible by
|L|p /|B ∩ L|p = q 3 , we then obtain a contradiction that q 3 f .
Case 4. Finally suppose that PSp6 (3).a with (m, q) = (6, 3) and a 6 2. Then
we have B ∩ L = N−
1 as Table A.4 shows. It follows that |L|/|B ∩ L| is divisible by 7,
and thereby the factorization G = HB forces |H| to be divisible by 7. However, we
view from Lemma 6.6(d) that |H| is not divisible by 7. Thus this case is impossible
too. The proof is completed.
7.5. Orthogonal groups of even dimension
The main result of this section is the proposition below, which confirms Theorem 1.1 for orthogonal groups of even dimension under Hypothesis 6.5.
Proposition 7.14. Let G be an almost simple group with socle L = PΩε2m (q)
with m > 4 and ε = ±. If G = HK is a nontrivial factorization with H solvable, K
unsolvable and HL = KL = G, then under Hypothesis 6.5, ε = + and one of the
following holds.
7.5. ORTHOGONAL GROUPS OF EVEN DIMENSION
63
(a) m > 5, H ∩ L 6 q m(m−1)/2 :((q m − 1)/(4, q m − 1)).m < Pm or Pm−1 , and
Ω2m−1 (q) E K ∩ L 6 N1 .
(b) m = 4, H ∩ L 6 q 6 :((q 4 − 1)/(4, q 4 − 1)).4 < P1 or P3 or P4 , and K ∩ L =
Ω7 (q).
+
(c) L = Ω+
8 (2) or PΩ8 (3), and (L, H ∩ L, K ∩ L) is as described in Table 7.6.
Table 7.6.
row
1
2
3
4
L
Ω+
8 (2)
Ω+
8 (2)
PΩ+
8 (3)
PΩ+
8 (3)
H ∩L6
22 :15.4
26 :15.4
36 :24 .AGL1 (5)
36 :(33 :13:3), 33+6 :13.3
K ∩L
Sp6 (2)
A9
Ω7 (3)
Ω+
8 (2)
We treat the orthogonal groups of minus type, plus type of dimension 8, and
plus type of dimension at least 10, respectively in the following three subsections.
The above proposition will follow by Lemmas7.15, 7.18 and 7.20. Throughout this
section, fix G, L, H, K to be the groups in the condition of Proposition 7.2, and take
A, B to be core-free maximal subgroups of G containing H, K respectively (such A
and B exist by Lemma 2.18).
7.5.1. Orthogonal groups of minus type. First, we exclude the possibility
of orthogonal groups of minus type.
Lemma 7.15. Let G be an almost simple group with socle L. If G = HK is a
nontrivial factorization with H solvable, K unsolvable and HL = KL = G, then
under Hypothesis 6.5 we have L 6= PΩ−
2m (q) for m > 4.
Proof. Suppose on the contrary that L = PΩ−
2m (q) with m > 4. By Lemma
2.18, we may take A, B to be core-free maximal subgroups of G containing H, K
respectively. By Theorem 2.15, the maximal factorizations G = AB lies in Table
A.5 (interchanging A, B if necessary). If m is odd, then we have by Lemma 6.6(b)
and (e) that A ∩ L 6= P1 , N+
2 or ˆGUm (q). Hence the candidates for A ∩ L are:
2
N1 , Ω−
m (q ).2 with m even and q ∈ {2, 4}, A12 with (m, q) = (5, 2).
Let q = pf for prime number p. We proceed by the above three cases for A ∩ L.
Case 1. Suppose A ∩ L = N1 . In this case, as Table A.5 shows, either B ∩
2
L = ˆGUm (q) with q odd, or B ∩ L = Ω−
m (q ).2 with m even and q ∈ {2, 4}. If
f (2m−2)
(m, q) 6= (4, 2), then p
− 1 has a primitive prime divisor r by Zsigmondy’s
theorem. If (m, q) = (4, 2), then let r = 63. Then r divides |L|/|B ∩ L|, and thus
r divides |H| due to the factorization G = HK. However, since A has the unique
unsolvable composition factor PΩ−
2m−1 (q), we deduce from Lemma 6.6(d) and (g)
that |H| is not divisible by r, a contradiction.
2
Case 2. Suppose that A∩L = Ω−
m (q ).2 with m even and q ∈ {2, 4}. Then p = 2
and it is seen from Table A.5 that B ∩ L = N1 and G = Aut(L). By Zsigmondy’s
theorem, 22f m − 1 has a primitive prime divisor r. Since r divides |L|/|B ∩ L|, we
know that r divides |H| according to the factorization G = HB. It follows that
64
7. PROOF
m = 4 and H 6 R.D2(q4 +1) .O by Lemma 6.6(b), (c) and (f), where R, O are defined
in Lemma 6.6 with A/R = PSL2 (q 4 ).O. In particular, |H| divides
2(q 4 + 1)|R||O| =
2(q 4 + 1)|A|
4(q 4 + 1)|A|
=
= 4(q 4 + 1)|Out(L)| = 8f (q 4 + 1),
4
|PSL2 (q )|
|A ∩ L|
which implies that |G|/|B| divides 8f (q 4 +1) due to the factorization G = HB. As a
consequence, q 3 = (|G|/|B|)2 divides 8f . This restricts q = 2, whence L = PΩ−
8 (2).
−
However, computation in Magma[6] shows that G = SO8 (2) allows no factorization
G = HB with H solvable. Thus this case is impossible too.
Case 3. Finally suppose that A∩L = A12 with (m, q) = (5, 2). Then B∩L = P1 ,
A12 E A 6 S12 , and A ∩ B = (S4 × S8 ) ∩ A (see [42, 5.2.16]). This indicates that the
factorization A = H(A ∩ B) does not satisfy Theorem 1.1, contrary to Hypothesis
6.5. This completes the proof.
7.5.2. Orthogonal groups of dimension eight. We find out in Magma[6]
all the nontrivial factorizations of G with soc(G) = PΩ+
8 (2) in the next proposition.
Proposition 7.16. Let G be an almost simple group with socle L = Ω+
8 (2).
Then the following five cases give all the nontrivial factorizations G = HK with H
solvable.
(a) G = Ω+
8 (2), H is contained in Pk for some k ∈ {1, 3, 4}, and (H, K) lies in
row 5 of Table 7.7; moreover H is not contained in any maximal subgroup
of G other than P1 , P3 and P4 .
(b) G = Ω+
8 (2), H is contained in A9 , and (H, K) lies in row 1 of Table 7.7;
moreover H is not contained in any maximal subgroup of G other than A9 .
(c) G = Ω+
8 (2), H is contained in (3 × PSU4 (2)):2 and Pk simultaneously for
some k ∈ {1, 3, 4}, and (H, K) lies in rows 2–4 of Table 7.7; moreover H
is not contained in any maximal subgroup of G other than P1 , P3 , P4 and
(3 × PSU4 (2)):2.
2
(d) G = Ω+
8 (2), H is contained in (PSL2 (4)×PSL2 (4)).2 and Pk simultaneously
for some k ∈ {1, 3, 4}, and (H, K) lies in row 1 of Table 7.7; moreover H
is not contained in any maximal subgroup of G other than P1 , P3 , P4 and
(PSL2 (4) × PSL2 (4)).22 .
(e) G 6= Ω+
8 (2), and L = (H ∩ L)(K ∩ L).
Table 7.7.
row
1
2
3
4
5
maximal subgroups of G containing H
P1 , P3 , P4 , A9 , (PSL2 (4) × PSL2 (4)).22
P1 , P3 , P4 , (3 × PSU4 (2)):2
P1 , P3 , P4 , (3 × PSU4 (2)):2
P1 , P3 , P4 , (3 × PSU4 (2)):2
P1 , P3 , P4
H
22 :15.4
24 :15
24 :15.2
24 :15.4
26 :15, 26 :15.2, 26 :15.4
K
Sp6 (2)
Sp6 (2)
Sp6 (2)
Sp6 (2), A9
Sp6 (2), A9
f
For the rest of this subsection, let d = (2, q − 1) and L = PΩ+
8 (q) with q = p for
prime number p. We aim to show that part (b) or (c) of Proposition 7.14 appears
in this situation.
7.5. ORTHOGONAL GROUPS OF EVEN DIMENSION
65
Lemma 7.17. If q > 3 and B ∩ L = Ω7 (q), then H stabilizes a totally isotropic
k-space with k = 1 or 4.
Proof. By Proposition 6.9, we may assume that A has at most one unsolvable
composition factor. Then in view of Lemma 6.6(b), we see from Table A.7 that the
possibilities for A ∩ L are:
d
P1 , P3 , P4 , Ω7 (q), ˆ((q + 1)/d × Ω−
6 (q)).2 ,
+
d
ˆ((q − 1)/d × Ω+
6 (q)).2 , Ω8 (2) with q = 3.
If A∩L = P1 or P3 or P4 , then the lemma holds already. We deal with the remaining
candidates one by one below. By Zsigmondy’s theorem, p4f −1 has a primitive prime
divisor r, and r divides |H| as r divides |G|/|B|.
Case 1. Suppose A ∩ L = Ω7 (q). Since r divides |H|, it derives from Lemma
6.6(d) and (g) that q = 3 and H 6 rad(A).(35 :24 .AGL1 (5)).2. Note that L has a
graph automorphism of order 3 (see [2, (15.1)]) which permutes {P1 , P3 , P4 }. We
may assume without loss of generality that A ∩ L = N1 . Then by Proposition 7.12,
we have H < P1 [A]. Consequently, H stabilizes a totally isotropic 1-space since
P1 [A] 6 P1 [G].
d
Case 2. Suppose A ∩ L = ˆ((q + 1)/d × Ω−
6 (q)).2 . Note that L has a graph
automorphism of order 3 which sends N−
2 to a C3 subgroup ˆGU4 (q).2 of L and
permutes {P1 , P3 , P4 } [2, (15.1)]. We may assume without loss of generality that
A ∩ L = ˆGU4 (q).2 ∈ C3 . Since r divides |H|, we deduce from Lemma 6.4 and
Hypothesis 6.5 that H ∩ L 6 P2 [ˆGU4 (q).2]. A totally singular unitary 2-space
over GF(q 2 ) is also a totally singular orthogonal 4-space over GF(q). Therefore, H
stabilizes a totally isotropic 4-space.
d
Case 3. Suppose A ∩ L = ˆ((q − 1)/d × Ω+
6 (q)).2 . In this case, A has the unique
unsolvable composition factor PSL4 (q). Write R = rad(A) and A/R = PSL4 (q).O.
Since r divides |H|, we deduce from Lemma 6.6(c) that either
q4 − 1
H 6 R.
.4 .O, or q = 3 and H 6 R.(24 :5:4).O.
(q − 1)(4, q − 1)
Consequently, |H|p divides
4(|R||O|)p =
4|A|p
4|A|p (2d )p
=
= 4|G/L|p (2d )p ,
|PSL4 (q)|p
|A ∩ L|p
and thus divides 48f . According to the factorization G = HB we know that |H|p
is divisible by |L|p /|B ∩ L|p = q 3 . Hence we obtain q 3 48f , which is impossible as
q > 3.
Case 4. Suppose that A ∩ L = Ω+
8 (2) with q = 3. In this case, |L|/|B ∩ L|
is divisible by 27. Thus the factorization G = HB requires |H| to be divisible by
27. However, Lemma 6.6(h) implies that |H| is not divisible by 27, a contradiction.
This completes the proof.
Lemma 7.18. If L = PΩ+
8 (q), then one of the following holds.
(a) m = 4, H ∩ L 6 q 6 :((q 4 − 1)/(4, q 4 − 1)).4 < P1 or P3 or P4 , and K ∩ L =
Ω7 (q).
+
(b) L = Ω+
8 (2) or PΩ8 (3), and (L, H ∩ L, K ∩ L) is as described in Table 7.6.
66
7. PROOF
Proof. If L = Ω+
8 (2), the lemma follows directly by Proposition 7.16. Thus we
assume q > 3 for the rest of the proof. As a consequence, p6f − 1 has a primitive
prime divisor r.
Assume that r divides |H ∩ L|. Then r divides |A ∩ L|, and inspecting Table A.7
we obtain the candidates for A ∩ L:
(7.7)
+
d
6
Ω7 (q), ˆ((q + 1)/d × Ω−
6 (q)).2 , Ω8 (2) with q = 3, 2 :A8 with q = 3.
Since r divides |H|, we deduce from Lemma 6.6(d) and (f)–(h) that either A ∩ L =
d
6
ˆ((q + 1)/d × Ω−
6 (q)).2 with q = 8 or A ∩ L = 2 :A8 with q = 3. Suppose that the
former occurs. Then B ∩ L = Ω7 (8), P1 , P3 or P4 , and hence |L|/|B ∩ L| is divisible
by 13. However, Lemma 6.6(f) implies that |H| is not divisible by 13 since r divides
|H|. This contradicts the factorization G = HB as it requires |L|/|B ∩ L| to divide
|H| by Lemma 2.9.
Now A ∩ L = 26 :A8 with q = 3, and it follows that B ∩ L = P1 , P3 or P4 , as
in Table A.7. Therefore, |L|/|B ∩ L| is divisible by 35, which indicates that |H| is
divisible by 35 due to the factorization G = HB. Note that A/rad(A) is an almost
simple group with socle A8 , and Hrad(A)/rad(A) is a nontrivial solvable factor of
A/rad(A) by Lemma 6.4. This is contrary to Hypothesis 6.5 as Proposition 4.3
implies that A/rad(A) has no nontrivial solvable factor of order divisible by 35.
Consequently, r does not divide |H ∩ L|. Thus the factorization G = HB forces
r to divide |B ∩ L|. We thereby obtain the candidates for B ∩ L as in (7.7). By
Zsigmondy’s theorem, p4f − 1 has a primitive prime divisor s.
Case 1. Suppose B ∩ L = Ω7 (q). Then by Lemma 7.17, we may assume A ∩ L =
Pk with k ∈ {1, 3, 4}. Note that Pk = q 6 :(SL4 (q).(q − 1)/d)/d. In particular, A has
the unique unsolvable composition factor PSL4 (q). It derives from the factorization
G = HB that s divides |H| since s divides |L|/|B ∩ L|. Hence by Lemma 6.6(c),
either
q4 − 1
q4 − 1
6
H 6 R.
.4 .O = q .
.4 .(G/L),
(q − 1)(4, q − 1)
(4, q 4 − 1)
or q = 3 and H 6 R.(24 :5:4).O = (36 .24 .AGL1 (5)).(G/L), where R, O are defined
in Lemma 6.6 with A/R = PSL4 (q).O. Accordingly, either
H ∩ L 6 q6:
q4 − 1
.4, or q = 3 and H ∩ L 6 36 :24 .AGL1 (5).
(4, q 4 − 1)
Now |H ∩L| divides 24 q 6 (q 4 −1)/(4, q 4 −1), and we deduce from the factorization
G = HB that |L| divides 24 q 6 (q 4 − 1)|K ∩ L||G/L|/(4, q 4 − 1) by Lemma 2.9. Since
|G/L| divides 2f (4, q 4 − 1), this implies that |L| divides 25 f q 6(q 4 − 1)|K ∩ L|, that
is,
q 6 (q 6 − 1)(q 4 − 1)(q 2 − 1) divides 25 f d|K ∩ L|.
Hence we conclude K ∩ L = Ω7 (q) (see [7, Tables 8.28–8.29 and 8.39–8.40]). Therefore, either part (a) of the lemma holds, or (L, H ∩ L, K ∩ L) is as described in row
3 of Table 7.6.
d
Case 2. Suppose B ∩ L = ˆ((q + 1)/d × Ω−
6 (q)).2 . By Theorem 2.15, we have
the candidates for A ∩ L:
Ω7 (q), P1 , P3 , P4 , (3 × Ω+
6 (4)).2 with q = 4.
7.5. ORTHOGONAL GROUPS OF EVEN DIMENSION
67
Let t be a primitive prime divisor of p3f −1. According to the factorization G = HB,
the order |H| is divisible by |L|/|B ∩ L|, and thus divisible by st. However, since A
has socle Ω7 (q) or PSL4 (q), we deduce from Lemma 6.6(c), (d) and (g) that st does
not divide |H|, which is a contradiction.
Case 3. Suppose that B ∩ L = Ω+
8 (2) with q = 3. It is seen in Table A.7 that
the candidates for A ∩ L are P1 , P3 , P4 , P13 , P14 and P34 . From the factorization
G = HB we know that |H| is divisible by |L|/|B ∩ L| and thus divisible by 13.
Assume A ∩ L = P1 , P3 or P4 . Then A ∩ L = 36 :PSL4 (3), which implies that A
has the unique unsolvable composition factor PSL4 (3). Since |H| is divisible by 13,
we conclude from Lemma 6.6(c) that H ∩ L 6 36 :(33 :13:3), as in row 4 of Table 7.6.
Next assume A ∩ L = P13 , P14 or P34 . Then A ∩ L = 33+6 :PSL3 (3), which shows
that A has the unique unsolvable composition factor PSL3 (3). Since |H| is divisible
by 13, we conclude from Lemma 6.6(c) that H ∩ L 6 33+6 :13.3, as in row 4 of Table
7.6.
Case 4. Suppose that B ∩ L = 26 :A8 with q = 3. Then seen from Table A.7,
A ∩ L = Pk = 36 :PSL4 (3) with k ∈ {1, 3, 4}. In view of the factorization G = HB,
we know that |H| is divisible by |L|/|B ∩ L|, and thus divisible by 5 · 13. However, as
A has the unique unsolvable composition factor PSL4 (3), Lemma 6.6(c) shows that
|H| is not divisible by 5 · 13, which is a contradiction. The proof is thus finished.
7.5.3. Orthogonal groups of even dimension at least ten. In this subsecf
tion, let L = PΩ+
2m (q) with m > 5 and q = p for prime number p. We aim to show
that part (a) of Proposition 7.14 holds for such L.
Lemma 7.19. If B ∩ L = N1 , then H stabilizes a totally singular m-space.
Proof. Due to Hypothesis 6.5, the conclusion of Theorem 1.1 excludes the
possibility of the case A ∩ L = Co1 . Thus we see from Table A.6 that there are six
cases for A ∩ L.
(i) A ∩ L = Pm or Pm−1 .
(ii) m is even, and A ∩ L = ˆGUm (q).2.
(iii) m is even, q > 2, and A ∩ L = (PSp2 (q) ⊗ PSpm (q)).a with a 6 2.
(iv) A ∩ L = ˆGLm (q).2.
2
2
(v) m is even, q = 2 or 4, and A ∩ L = Ω+
m (q ).2 .
(vi) m = 8, and A ∩ L = Ω9 (q).a with a 6 2.
If A ∩ L = Pm or Pm−1 , then the lemma holds already. Now we deal with (ii)–(vi).
Case 1. Suppose that A∩L = ˆGUm (q).2 with m = 2ℓ, as in (ii). By Hypothesis
6.5, we see from Theorem 1.1 that H 6 Pℓ [A]. Note that a totally singular unitary
ℓ-space over GF(q 2 ) is also a totally singular orthogonal m-space over GF(q). We
then conclude that H 6 Pℓ [A] stabilizes a totally singular m-space.
Case 2. Suppose that (iii) appears. As Proposition 6.9 indicates that A has at
most one unsolvable composition factor, we have q = 3 and A ∩ L = (PSp2 (3) ×
PSpm (3)).a with a 6 2. By Zsigmondy’s theorem, 3m − 1 has a primitive prime
divisor r. From the factorization G = HB we conclude that r divides |H| since r
divides |L|/|B ∩ L|. However, Lemma 6.6(d) implies that |H| is not divisible by r,
which is a contradiction.
Case 3. Suppose A ∩ L = ˆGLm (q).2, as in (iv). By Lemma 6.6(c), we have
H 6 R.(((q m − 1)/((q − 1)(m, q − 1))).m).O, where R and O are defined in Lemma
68
7. PROOF
6.6 with A/R = PSLm (q).O. It follows that |H|p divides
m(|R||O|)p =
m(2|A|)p
m|A|p
=
= m(2|G/L|)p
|PSLm (q)|p
|A ∩ L|p
and thus divides 4f m. This implies that q m−1 = (|L|/|B ∩ L|)p divides 4f m due to
the factorization G = HB. As a consequence, we obtain
2m−1 6 pm−1 6 pf (m−1) /f 6 4m,
which is impossible as m > 5.
2
2
Case 4. Suppose that (v) appears, that is, A ∩ L = Ω+
m (q ).2 with m = 2ℓ
and q ∈ {2, 4}. By Zsigmondy’s theorem, 2f m − 1 has primitive prime divisor r if
(m, q) 6= (6, 2). Set r = 7 if (m, q) = (6, 2). Since r divides |G|/|B|, r also divides
|H| as the factorization G = HB requires. Due to Hypothesis 6.5, it derives from the
conclusion of Theorem 1.1 that H 6 Pℓ [A] or Pℓ−1 [A]. Note that a totally singular
orthogonal ℓ-space over GF(q 2 ) is also a totally singular orthogonal m-space over
GF(q). We then conclude have H stabilizes a totally singular m-space.
Case 5. Assume that L = PΩ+
16 (q) and A ∩ L = Ω9 (q).a with a 6 2, as in
(vi). By Zsigmondy’s theorem, p8f − 1 has a primitive prime divisor r. According
to the factorization G = HB we know that r divides |H| since r divides |L|/|B ∩ L|.
However, Lemma 6.6(g) implies that |H| is not divisible by r, which is a contradiction.
m(m−1)/2
Lemma 7.20. If L = PΩ+
:((q m −
2m (q) with m > 5, then H ∩ L 6 q
m
1)/(4, q − 1)).m < Pm or Pm−1 , and Ω2m−1 (q) E K ∩ L 6 N1 .
Proof. By Zsigmondy’s theorem, pf (2m−2) − 1 has a primitive prime divisor r.
Assume that r divides |H ∩ L|. Then r divides |A ∩ L|, and inspecting Table A.6
we obtain the candidates for A ∩ L:
(7.8)
N1 , ˆGUm (q).2 with m even,
N−
2.
Since r divides |H|, we conclude from Lemma 6.6(b), (d) and (f) that none of them
is possible. Therefore, r does not divide |H ∩ L|. Hence the factorization G = HB
forces r to divide |B ∩ L| by Lemma 2.9. We thus obtain the candidates for B ∩ L
as in (7.8).
Case 1. Suppose B ∩ L = N1 . Then by Lemma 7.19, we may assume that
A∩L = Pk with k ∈ {m, m−1}. Let d = (4, q m −1)/(2, q−1). Then Z(Ω+
2m (q)) = Cd
m(m−1)/2
and Pk = q
:(SLm (q).(q − 1)/(2, q − 1))/d (see [34, Proposition 4.1.20]). In
particular, A has the unique unsolvable composition factor PSLm (q). Hence we
deduce from Lemma 6.6(c) that
qm − 1
qm − 1
m(m−1)/2
H 6 rad(A).
.m .O = q
.
.m .(G/L),
(q − 1)(m, q − 1)
(4, q m − 1)
with A/rad(A) = PSLm (q).O. Consequently,
(7.9)
H ∩ L 6 q m(m−1)/2 :
qm − 1
.m.
(4, q m − 1)
Let R = rad(B), B = B/R, K = KR/R and H ∩ B = (H ∩ B)R/R. From
G = HK we deduce B = (H ∩ B)K and B = H ∩ B K. Note that B is almost
7.6. COMPLETION OF THE PROOF
69
simple with socle Ω2m−1 (q). Since H ∩ B is solvable, it then derives from Hypothesis
6.5 that either K ∩ soc(B) 6 Ω−
2m−2 (q).2 or K D Ω2m−1 (q).
Assume that K ∩ soc(B) 6 Ω−
2m−2 (q).2. Since |B|/|K| divides |B|/|K| and
|G|/|K| divides |H ∩ L||Out(L)|, we deduce that (|G|/|B|)(|B|/|K|) divides |H ∩
L||Out(L)|. This together with (7.9) leads to
q 2(m−1) (q m − 1)(q m−1 − 1)
2f mq m(m−1)/2 (q m − 1),
2(2, q − 1)
which gives
(7.10)
q m−1 − 1 4f m(2, q − 1).
As a consequence,
pf (m−1) − 1
p4f − 1
6
6 4(2, q − 1),
5f
fm
which forces q = 2. Substituting this into the above inequality, we obtain that
2m−1 − 1 6 4m and thus m = 5. However, the pair (m, q) = (5, 2) does not satisfy
(7.10), a contradiction.
Therefore, we have K D Ω2m−1 (q). It follows that K ∩ L contains (B ∩ L)(∞) =
Ω2m−1 (q), and thus Ω2m−1 (q) E K ∩ L 6 B ∩ L = N1 , as the lemma states.
Case 2. Suppose that B ∩ L = ˆGUm (q).2 with m even. As listed in Table A.6,
there are three candidates for A ∩ L:
N1 , P1 , N+
2 with q = 4.
By Zsigmondy’s theorem, pf (2m−4) − 1 has a primitive prime divisor s. According
to the factorization G = HB, the order |H| is divisible by |L|/|B ∩ L|, and thus
divisible by s. However, since A has socle Ω2m−1 (q) or PΩ+
2m−2 (q), we deduce from
Lemma 6.6(d), (g) and (h) that s does not divide |H|, which is a contradiction.
Case 3. Finally suppose B ∩ L = N−
2 . Then by Theorem 2.15, we obtain all the
candidates for A ∩ L:
Pm , Pm−1 , ˆGLm (q).2 with q ∈ {2, 4}.
By Zsigmondy’s theorem, pf (m−1) −1 has a primitive prime divisor s if (m, q) 6= (7, 2).
Set s = 7 if (m, q) = (7, 2). From the factorization G = HB we know that |L|/|B∩L|
divides |H|, whence s divides |H|. However, since A has the unique unsolvable
composition factor PSLm (q), we conclude from Lemma 6.6(c) that s does not divide
|H|, a contradiction. The proof is thus completed.
7.6. Completion of the proof
We are now able to complete the proof of Theorem 1.1 by summarizing the
results obtained in previous sections.
Lemma 7.21. Let G be an almost simple group with socle L. Then each nontrivial
factorization G = HK with H solvable satisfies Theorem 1.1.
Proof. Proposition 4.2 shows that L cannot be an exceptional group of Lie
type. If K is also solvable or L is an alternating group or a sporadic simple group,
then Propositions 4.1, 4.3 and 4.4, respectively, describe the triple (G, H, K).
70
7. PROOF
Now assume that K is unsolvable and L is a classical group of Lie type not
isomorphic to any alternating group. To prove that part (d) of Theorem 1.1 holds,
we may embed G = HK into a nontrivial maximal factorization G = AB (this means
that we can find core-free maximal subgroups A, B containing H, K respectively, see
Remark 2.19). If A is solvable, then part (d) of Theorem 1.1 follows by Proposition
6.2. Under Hypothesis 6.5, Propositions 7.1, 7.2, 7.8, 7.13 and 7.14 show that the
triple (G, H, K) lies in Table 1.1 or Table 1.2. Therefore, we conclude by induction
that any nontrivial factorization G = HK with H solvable satisfies Theorem 1.1.
To finish the proof of Theorem 1.1, it remains to verify that for each socle L
listed in Table 1.1 or Table 1.2, there exist factorizations as described.
Lemma 7.22. For each L in Table 1.1 and Table 1.2, there exist group G and
subgroups H, K as described such that soc(G) = L and G = HK.
Proof. To see the existence for row 1 of Table 1.1, let G = PGLn (q), L =
soc(G), H be a Singer cycle and K be the stabilizer of a 1-space in G. Then
H ∩ L = ˆGL1 (q n ) = (q n − 1)/(n, q − 1), K ∩ L = P1 , and G = HK. Moreover, take
τ to be a graph automorphism of L of order 2 such that K τ ∩ L = Pn−1 , and then
we have soc(Gτ ) = L and Gτ = H τ K τ .
∼
Take m = 3 in Proposition 5.9. Then by the isomorphisms PΩ+
6 (q) = PSL4 (q)
and PΩ5 (q) ∼
= PSp4 (q), we see that row 2 of Table 1.1 arises.
Next let G, H, K be defined as in Proposition 5.5. Then G = soc(G) = Sp2m (q)
with q even, H is solvable, H = q m(m+1)/2 :(q m −1) 6 Pm , K = O−
2m (q), and G = HK
by Proposition 5.5. Hence row 3 of Table 1.1 arises. If m = 2, then further take
τ to be a graph automorphism of G of order 2, under which we have Gτ = H τ K τ ,
H τ 6 (P2 )τ = P1 and K τ = Sp2 (q 2 ).2 by [2, (14.1)]. This shows that the row 4 of
Table 1.1 arises.
Take m = 2 in Proposition 5.7. Then by the isomorphisms PΩ5 (q) ∼
= PSp4 (q)
−
2
∼
and PΩ4 (q) = PSp2 (q ), we see that row 5 of Table 1.1 arises.
For row 6 of Table 1.1, let G, H, K be defined as in Proposition 5.2, Z = Z(G),
G = G/Z, L = soc(G), H = HZ/Z and K = KZ/Z. Then G = PGU2m (q),
2
H ∩ L = q m :(q 2m − 1)/((q + 1)(2m, q + 1)) < Pm , K ∩ L = N1 , and G = H K holds
since G = HK by Proposition 5.2.
For row 7 of Table 1.1, let G, H, K be defined as in Proposition 5.7 and L =
soc(G). Then G = SO2m+1 (q), H ∩L = (q m(m−1)/2 .q m ):(q m −1)/2 < Pm , K∩L = N−
1,
and G = HK by Proposition 5.7.
Now let G, A, H, K be defined as in Proposition 5.9, Z = Z(G), G = G/Z, L =
soc(G), H = HZ/Z, A = AZ/Z and K = KZ/Z. Then G = PSO+
2m (q), A∩L is one
of Pm and Pm−1 , say A∩L = Pm , H ∩L = q m(m−1)/2 :(q m −1)/(4, q m −1), K ∩L = N1 ,
and G = H K by Proposition 5.9. Moreover, take σ to be an automorphism of L
σ
σ σ
such that A ∩ L = Pm−1 . We have G = H K . Hence row 8 of Table 1.1 arises. If
m = 4, then further take τ to be a graph automorphism of L of order 3 such that
τ
τ
τ
τ
τ
A ∩ L = P1 , under which we have soc(G ) = L and G = H K . Consequently,
row 9 of Table 1.1 arises.
Finally, computation in Magma[6] shows that for each row in Table 1.2, there
exists factorization G = HK with soc(G) = L and (H ∩ L, K ∩ L) as described.
This completes the proof.
CHAPTER 8
s-Arc transitive Cayley graphs of solvable groups
This chapter is devoted to proving Theorem 1.2, and is organized as follows.
Section 8.1 collects preliminary results that are needed in the ensuing arguments.
In particular, Lemma 8.4 reduces the proof of Theorem 1.2 to the quasiprimitive
case. Section 8.2 presents a result regarding the outer automorphism group of simple
groups, which generalizes a well-known result of Gorenstein and plays an important
role in the proof of Theorem 1.2. Section 8.3 further reduces the quasiprimitive case
to the affine case and the almost simple case. Then the final section completes the
proof Theorem 1.2 by citing the classification of the affine case in [30] and treating
the almost simple case based on Theorem 1.1.
8.1. Preliminaries
Throughout this section, let Γ = (V, E) be a connected G-arc-transitive graph
and {α, β} be an edge of Γ . Denote by Γ (α) the set of neighbors of α in Γ , and
Γ (α)
Gα the permutation group on Γ (α) induced by Gα .
8.1.1. Normal subgroups and normal quotients. As mentioned in the Introduction chapter, for a normal subgroup N of G which is intransitive on V , we
have a normal quotient graph ΓN = (VN , EN ) of Γ , where VN the set of N-orbits
on V and EN the set of N-orbits on E. We shall see that the normal quotient ΓN
inherits certain properties of Γ .
The first lemma is a well-known result, see for example [47, Lemma 1.6].
Lemma 8.1. Let Γ = (V, E) be a connected G-arc-transitive graph such that
is primitive, N be a normal subgroup of G, and G = G/N. If |VN | > 3, then
the following statements hold.
Γ (α)
Gα
(a) N is semiregular on V and G is faithful on VN .
(b) For any α ∈ V and B ∈ VN , Gα ∼
= GB .
The next lemma shows that the s-arc-transitivity is inherited by normal quotients.
Lemma 8.2. (Praeger [48]) Assume that Γ is a connected non-bipartite (G, s)arc-transitive graph with s > 2. Then there exists a normal subgroup N of G
such that G/N is a quasiprimitive permutation group on VN of type almost simple (AS), affine (HA), twisted wreath product (TW) or product action (PA) and ΓN
is (G/N, s)-arc transitive.
Although a normal quotient of a Cayley graph is not necessarily a Cayley graph,
we have the following observation.
71
72
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
Lemma 8.3. If Γ is a Cayley graph of a group R and N is a normal subgroup
of G. Then RN/N is transitive on VN .
This reduces the proof of Theorem 1.2 to the quasiprimitive case.
Lemma 8.4. Theorem 1.2 holds if it holds for the case where X = G is vertexquasiprimitive.
For vertex-transitive normal subgroups of G, we have the following consequences.
Lemma 8.5. Let K be a vertex-transitive normal subgroup of G. Then the following statements hold.
(a) Gα /Kα ∼
= G/K.
(b) If K is arc-transitive, then Gαβ /Kαβ ∼
= G/K.
Γ (α)
(c) If Gα is primitive and Kα 6= 1, then K is arc-transitive.
Proof. Since K is vertex-transitive, we have G = KGα , and thus
Gα /Kα = Gα /Gα ∩ K ∼
= Gα K/K = G/K,
as part (a) states. If K is arc-transitive, then G = KGαβ , and so
Gαβ /Kαβ = Gαβ /Gαβ ∩ K ∼
= Gαβ K/K = G/K,
Γ (α)
proving part (b). Now suppose that Gα
is primitive and Kα 6= 1. Since Γ
Γ (α)
is connected, we see that if Kα
= 1 then Kα = 1. Thus by our assumption,
Γ (α)
Γ (α)
Γ (α)
Γ (α)
Kα
6= 1. It follows that Kα
is transitive since Kα
is normal in Gα
and
Γ (α)
Gα is primitive. This implies that K is arc-transitive.
8.1.2. Vertex and arc stabilizers. Now suppose further that Γ is (G, 2)-arc[1]
transitive. Denote by Gα the kernel of the action induced by Gα on Γ (α), and
[1]
[1]
[1]
[1]
[1]
Gαβ := Gα ∩ Gβ . Noting that Gα E Gαβ and the kernel of the action of Gα on
[1]
Γ (β) is equal to Gαβ , we have the observation as follows.
Γ (α)
[1]
Γ (β)
[1]
[1]
Lemma 8.6. Gα /Gαβ ∼
= Gαβ .
= (Gα )Γ (β) E Gαβ ∼
The next theorem is a fundamental result in the study of 2-arc-transitive graphs,
see [53, §4].
Theorem 8.7. Let Γ be a connected (G, 2)-arc-transitive graph, and {α, β} be
an edge of Γ . Then the following statements hold.
[1]
(a) Gαβ is a p-group for some prime p.
[1]
Γ (α)
(b) If Gαβ is a nontrivial p-group with prime p, then Gα
D PSLd (q) and
d
|Γ (α)| = (q − 1)/(q − 1) for some p-power q and integer d > 2.
Utilizing Theorem 8.7 and results of [38], we establish the following lemma.
Lemma 8.8. Let Γ be (G, 2)-arc-transitive, and K be a vertex-transitive normal
Γ (α)
subgroup of G such that Kα 6= 1 is imprimitive on Γ (α). Then Gα
is an affine
2-transitive permutation group of degree pd , where p is prime and d > 2, and the
following hold.
[1]
(a) Gαβ = 1.
8.1. PRELIMINARIES
73
(b) Kαβ ∼
= Ck × Cm with k m and Cm = Kαβ 6 GL1 (pf ) for some proper
divisor f of d.
(c) Kα ∼
= Ck × (Cdp :Cm ).
[1]
[1]
(d) Gα 6 Cm . In particular, |Gα | divides pf − 1.
Γ (α)
Γ (α)
Proof. By [38, Corollary 1.2 and Lemma 3.3], Gα
is an affine 2-transitive
Γ (α)
d
permutation group of degree p with p prime, and Kαβ 6 GL1 (pf ), where f is a
Γ (α)
proper divisor of d. Consequently, Kαβ = Cm for some m (pf − 1). Applying
[1]
Theorem 8.7 we have Gαβ = 1, as part (a) asserts. This together with Lemma 8.6
[1]
[1]
[1]
Γ (α)
[1]
implies that Kα = Kα /Kαβ is isomorphic to a subgroup of Kαβ , so that Kα = Ck
for some k m.
[1]
Γ (α)
[1]
Now that Kαβ /Kα ∼
= Kαβ is cyclic, Kα contains the commutator subgroup
[1]
[1]
[1]
′
′
′
Kαβ
of Kαβ . For the same reason, Kβ > Kαβ
. Therefore, Kαβ
6 Kα ∩ Kβ =
[1]
[1]
[1]
Kαβ = 1, and hence Kαβ is abelian. Since Kα = Ck and Kαβ /Kα = Cm are
[1]
both cyclic, we conclude that Kαβ = Kα × Cm , proving part (b). It follows that
[1]
Kα = Op (Kα ):Kαβ = Kα × (Op (Kα ):Cm ), as in part (c).
[1]
[1]
[1]
Γ (α)
Γ (α)
Γ (α)
Γ (α)
Finally, let X = KGα . Then Xα = Gα , Xα
= Kα , and Xαβ = Kαβ .
[1]
Γ (β)
Γ (β)
[1]
[1]
[1]
[1]
=K
,
Viewing X 6 G = 1, we deduce that Gα = Xα ∼
= (Xα )Γ (β) ✁ X
αβ
αβ
αβ
αβ
which leads to part (d).
8.1.3. Coset graph construction. As Γ is G-arc-transitive, there exists g ∈
G interchanging α and β. Consequently,
Ggαβ = (Gα ∩ Gβ )g = Ggα ∩ Ggβ = Gβ ∩ Gα = Gαβ ,
namely, g normalizes the arc stabilizer Gαβ . Because Gα is transitive on Γ (α), so
the valency of Γ equals
|Γ (α)| = |Gα |/|Gαβ | = |Gα |/|Gα ∩ Gβ | = |Gα |/|Gα ∩ Ggα |.
Moreover, hGα , gi = G since Γ is connected. Thus we have the following lemma.
Lemma 8.9. There exists g ∈ NG (Gαβ ) such that
(a) g 2 ∈ Gαβ and hGα , NG (Gαβ )i = hGα , gi = G;
(b) the valency of Γ equals |Gα |/|Gα ∩ Ggα |;
(c) Γ is (G, 2)-arc-transitive if and only if Gα is 2-transitive on [Gα :Gα ∩ Ggα ].
Conversely, suppose that K is a core-free subgroup of G and g is an element in
G \ K with g 2 ∈ K. Then G acts faithfully on [G:K] by right multiplication, so
that G can be regarded as a transitive permutation group on [G:K]. Denote the
points K and Kg in [G:K] by α and β, respectively. Then Gα = K, Gβ = K g , and
(α, β)g = (β, α). The graph with vertex set [G:K] and edge set {α, β}G is G-arctransitive, where two vertices Kx, Ky are adjacent if and only if yx−1 ∈ KgK. Such
a graph is called a coset graph, denoted by Cos(G, K, KgK). It is steadily seen that
Cos(G, K, KgK) is connected if and only if hK, gi = G.
Remark 8.10. Replacing g by some power of g, we may assume that g is a
2-element. This accelerates the search for (G, 2)-arc-transitive graphs for given G
and Gα , see the Magmacodes in Appendix B.
74
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
We introduce the Hoffman-Singlton graph and the Higman-Sims graph in terms
of coset graphs.
Example 8.11. Let G = PΣU3 (5), K be a subgroup of G isomorphic to S7 and
M a subgroup of K isomorphic to S6 . The normalizer NG (M) of M in G is an
extension of M by C2 . Take g to be any element of NG (M) \ M. Then g 2 ∈ M and
hK, gi = G, whence Cos(G, K, KgK) is connected. The graph Cos(G, K, KgK) is
the well-known Hoffman-Singlton graph [26], which is non-bipartite and 3-transitive.
Moreover, the full automorphism group of Cos(G, K, KgK) is isomorphic to G and
has a subgroup 51+2
+ :8:2 transitive on vertices.
Example 8.12. Let G = HS, K be a subgroup of G isomorphic to M22 and
M a subgroup of K isomorphic to PSL3 (4). The normalizer NG (M) of M in G
is an extension of M by C2 . Take g to be any element of NG (M) \ M. Then
g 2 ∈ M and hK, gi = G. Thus Cos(G, K, KgK) is a connected (G, 2)-arc-transitive
graph. This is the Higman-Sims graph, see [10]. The full automorphism group
of Cos(G, K, KgK) is isomorphic to G.2 and has a subgroup 51+2
+ :8:2 transitive
on vertices. Note that the valency of Cos(G, K, KgK) is |K|/|M| = 22. Hence
Cos(G, K, KgK) is not 3-arc-transitive since |M| is not divisible by 32 ·72 = (22−1)2.
Moreover, Cos(G, K, KgK) is non-bipartite since G does not have a subgroup of
index two.
8.1.4. Other facts. The following lemma is a well-known result in elementary
number theory, which is a consequence of so-called Legendre’s formula.
Lemma 8.13. For any positive integer n and prime p, the p-part (n!)p < pn/(p−1) .
We also need a theorem of Dixon on the maximal order of solvable permutation
groups of given degree.
Theorem 8.14. (Dixon [13, Theorem 3]) For any solvable subgroup R of Sn ,
the order |R| 6 24(n−1)/3 .
The next lemma is a consequence of [43, Corollary 5].
Lemma 8.15. Let T ba a nonabelian simple group. Then for any solvable subgroup R of T , there exists a prime divisor r of |T | not dividing |R|.
8.2. A property of finite simple groups
A well-known theorem of Gorenstein [22, Theorem 1.53] says that for a nonabelian simple group T of order divisible by a prime r, if |T | is not divisible by r 2
then |Out(T )| is not divisible by r. We need an improved version of this theorem,
which will be established in this section. The proof will invoke the following elementary number theory result. Recall that for any positive integer f , we denote the
r-part of f by fr and denote fr′ = f /fr .
Lemma 8.16. Let f be a positive integer, r be a prime number, and
(
r, if r > 2
r0 =
4, if r = 2.
Then for any integer t > 1, the following statements hold.
8.2. A PROPERTY OF FINITE SIMPLE GROUPS
75
(a) r tf − 1 if and only if r tfr′ − 1.
(b) If t ≡ 1 (mod r0 ), then (tf − 1)r = fr (t − 1)r .
(c) If r0 tf − 1, then (tf − 1)r > r0 fr .
Proof. Let fr = r m , where m > 0. As r − 1 divides r m − 1, we derive
m
tf = tfr′ (r −1) tfr′ ≡ tfr′ (mod r) by Fermat’s little theorem. Then part (a) follows immediately.
Suppose that t ≡ 1 (mod r0 ). Then writing u = t − 1 we have
r
tr − 1
(u + 1)r − 1
r(r − 1)u X r k−1
u .
=
=r+
+
k
t−1
u
2
k=3
Since r0 u, the above equality implies that (tr − 1)/(t − 1) ≡ r (mod r 2 ). As a
consequence, (tr − 1)r = r(t − 1)r . Applying this repeatedly, we obtain
m
(tf − 1)r = (tfr′ r − 1)r = r(tfr′ r
m−1
− 1)r = · · · = r m (tfr′ − 1)r .
In the meanwhile, the condition t ≡ 1 (mod r) implies that
tfr′ − 1
= 1 + t + t2 + · · · + tfr′ −1 ≡ fr′ (mod r)
t−1
and thus (tfr′ − 1)/(t − 1) is not divisible by r. Hence (tf − 1)r = r m (tfr′ − 1)r =
r m (t − 1)r as part (b) asserts.
Next suppose that t > 1 is an integer with r0 tf − 1. It follows from part (a)
that r tfr′ − 1. If r > 2, then by part (b) (replacing t with tfr′ there), one has
(tf − 1)r = fr (tfr′ − 1)r > rfr . If r = 2 and fr = 1, then part (c) holds trivially. If
r = 2 and ir > 2, then t is odd and so t2 ≡ 1 (mod 8), whence we conclude from
part (b) that
m−1
(tf − 1)r = (t2f2′ ·2
− 1)2 = 2m−1 (t2f2′ − 1)2 > 2m−1 · 8 = 2m+2 = r0 fr .
This proves part (c).
Now we give the improved version of Gorenstein’s theorem mentioned above.
Theorem 8.17. Let T be a simple group, and r be a common prime divisor of
|T | and |Out(T )|. Then |T |r > r|Out(T )|r , and further, for r = 2 or 3, |T |r =
r|Out(T )|r if and only if one of the following occurs, where p is a prime.
(a) r = 2, and T ∼
= PSL2 (pf ) with p ≡ ±3 (mod 8).
∼
(b) r = 3, and T = PSL2 (pf ) with p ≡ ±2 or ±4 (mod 9) and f ≡ 0 (mod 3).
(c) r = 3, and T ∼
= PSL3 (pf ), where either p ≡ 2 or 5 (mod 9) and f ≡ 2, 3
or 4 (mod 6), or p ≡ 4 or 7 (mod 9) and f 6≡ 0 (mod 3).
(d) r = 3, and T ∼
= PSU3 (pf ), where either p ≡ 2 or 5 (mod 9) and f ≡ 0, 1
or 5 (mod 6), or p ≡ 4 or 7 (mod 9) and f ≡ 0 (mod 3).
Proof. Assume that T is alternating or sporadic. Then either |Out(T )| 6 2,
or |Out(T )| = 4 and T = A6 . Hence r = 2, and the inequality |T |r > r|Out(T )|r
holds steadily. Further, |T |2 = 2|Out(T )|2 if and only if T = A5 ∼
= PSL2 (5) or
∼
T = A6 = PSL2 (9), as in part (a) of the lemma.
In the remainder of the proof, assume that T is a simple group of Lie type defined
over a field of characteristic p, and T is not isomorphic to A5 , A6 , A8 or A9 . If r = p,
then either T = PSL2 (pf ), or |T |p > q 3 > p|Out(T )|p . Suppose that T = PSL2 (pf ).
76
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
Then pf > 8. For p = 2 or 3, |T |p = pf > pf > pfp = p|Out(T )|p . For p > 5,
|T |p = pf > pf > pfp = p|Out(T )|p . Thus assume that r 6= p hereafter.
Case 1. Suppose that r = 2. Then, in particular, T 6= 2 B2 (q) or 2 F4 (q).
An inspection of |T | and |Out(T )| for simple groups T of Lie type shows that
|Out(T )|2 6 2d2 f2 , and one of the following holds.
(i) |T | is divisible by d(q 2 − 1).
(ii) T = PSL2 (q), |T |2 = (q 2 − 1)2 /2, and |Out(T )|2 = 2f2 .
(iii) T = 2 G2 (q) with q = 32c+1 > 33 , |T |2 = (q 3 +1)2 (q−1)2 , and |Out(T )|2 = 1.
Since 4 q 2 − 1, we conclude from Lemma 8.16(c) that
(8.1)
(q 2 − 1)2 = (p2f − 1)2 > 4(2f )2 = 8f2 .
If (i) occurs, then the above equality implies
|T |2 > d2 (q 2 − 1)2 > 8d2 f2 > 2|Out(T )|2 .
If (iii) occurs, then apparently |T |2 > 2 = 2|Out(T )|2.
Now assume that (ii) occurs. Then by (8.1),
|T |2 = (q 2 − 1)2 /2 > 4f2 = 2|Out(T )|2 .
Moreover, |T |2 = 2|Out(T )|2 if and only if
(8.2)
(q 2 − 1)2 = 8f2 .
Note that (q 2 − 1)2 = ((p2 )f − 1)2 = f2 (p2 − 1)2 by Lemma 8.16(b). We see that
(8.2) is equivalent to (p2 − 1)2 = 8, which is further equivalent to the condition in
part (a) of the lemma.
Case 2. Suppose that r = 3. Then T 6= 2 G2 (q) and p2 ≡ 1 (mod 3). An
inspection of |T | and |Out(T )| divides simple groups T of Lie type into the following
six classes.
(i) |T | is divisible by 3d(q 2 − 1), and |Out(T )|3 = d3 f3 .
(ii) T = PSL2 (q), |T |3 = (q 2 − 1)3 , and |Out(T )|3 = f3 .
(iii) T = PSL3 (q), |T |3 = (q 3 − 1)3 (q 2 − 1)3 /d3, and |Out(T )|3 = d3 f3 .
(iv) T = PSU3 (q), |T |3 = (q 3 + 1)3 (q 2 − 1)3 /d3 , and |Out(T )|3 = d3 f3 .
6
4
2 2
(v) T = PΩ+
8 (q), |T |3 = (q − 1)3 (q − 1)3 (q − 1)3 , and |Out(T )|3 = 3f3 .
(vi) T = 3 D4 (q), |T |3 = (q 8 + q 4 + 1)3 (q 6 − 1)3 (q 2 − 1)3 , and |Out(T )|3 = 3f3 .
By Lemma 8.16(b) we have (q 2 − 1)3 = f3 (p2 − 1)3 . If (i), (v) or (vi) occurs, then
|T |3 /|Out(T )|3 > 3(q 2 − 1)3 /f3 = 3(p2 − 1)3 > 3.
Assume that (ii) occurs. Then |T |3 = (q 2 −1)3 = f3 (p2 −1)3 = (p2 −1)3 |Out(T )|3.
Hence |T |3 > 3|Out(T )|3 , and |T |3 = 3|Out(T )|3 if and only if (p2 − 1)3 = 3, which
is equivalent to p ≡ ±2 or ±4 (mod 9). This together with the condition that 3
divides |Out(T )| leads to part (b) of the lemma.
Next assume that (iii) appears. Then |T |3 /|Out(T )|3 = (q 3 − 1)3 (p2 − 1)3 /d23 .
If q ≡ 2 (mod 3), then |T |3 /|Out(T )|3 = (p2 − 1)3 > 3, and |T |3 /|Out(T )|3 = 3 is
equivalent to (p2 − 1)3 = 3, which occurs exactly when p ≡ 2 or 5 (mod 9) and f
odd. If q ≡ 1 (mod 3), then
|T |3
(q 3 − 1)3 (p2 − 1)3
(q 6 − 1)3 (p2 − 1)3
3f3 (p2 − 1)23
=
=
=
> 3,
|Out(T )|3
9
9
9
8.2. A PROPERTY OF FINITE SIMPLE GROUPS
77
and |T |3 /|Out(T )|3 = 3 is equivalent to f3 (p2 − 1)23 = 9, which occurs exactly when
either p ≡ 2 or 5 (mod 9) and f ≡ ±2 (mod 6), or p ≡ 4 or 7 (mod 9) and f 6≡ 0
(mod 3). To sum up, |T |3 = 3|Out(T )|3 if and only if either p ≡ 2 or 5 (mod 9) and
f 6≡ 0 (mod 6), or p ≡ 4 or 7 (mod 9) and f 6≡ 0 (mod 3). This together with the
condition that 3 divides |Out(T )| leads to part (c) of the lemma.
Now assume that (iv) appears. Then |T |3/|Out(T )|3 = (q 3 + 1)3 (p2 − 1)3 /d23 .
If q ≡ 1 (mod 3), then |T |3 /|Out(T )|3 = (p2 − 1)3 > 3, and |T |3 /|Out(T )|3 = 3
is equivalent to (p2 − 1)3 = 3, which occurs exactly when p ≡ 2 or 5 (mod 9) and
f ≡ ±1 (mod 6). If q ≡ 2 (mod 3), then
|T |3
(q 3 + 1)3 (p2 − 1)3
(q 6 − 1)3 (p2 − 1)3
3f3 (p2 − 1)23
=
=
=
> 3,
|Out(T )|3
9
9
9
and |T |3 /|Out(T )|3 = 3 is equivalent to f3 (p2 − 1)23 = 9, which occurs exactly when
either p ≡ 2 or 5 (mod 9) and f is even, or p ≡ 4 or 7 (mod 9). To sum up,
|T |3 = 3|Out(T )|3 if and only if either p ≡ 2 or 5 (mod 9) and f 6≡ 3 (mod 6), or
p ≡ 4 or 7 (mod 9). This together with the condition that 3 divides |Out(T )| leads
to part (d) of the lemma.
Case 3. Assume that r > 5. Then an inspection of |T | and |Out(T )| for simple
groups T of Lie type shows that |T |p > q and |Out(T )|r = dr fr .
Suppose that dr > 1. Then T = PSLn (q) or PSUn (q), and n > 5. First
assume that T = PSLn (q). Then r q − 1 = pf − 1, and hence we conclude from
Lemma 8.16(c) that (q − 1)r = (pf − 1)r > rfr . This together with the observation
that |T | is divisible by d(q − 1) yields |T |r > dr (q − 1)r > rdr fr = r|Out(T )|r .
Now assume that T = PSUn (q). Then r q + 1, whence r divides q 2 − 1 but not
q − 1. It follows that (q + 1)r = (q 2 − 1)r , and appealing Lemma 8.16(c) we obtain
(q + 1)r = (p2f − 1)r > rfr . This together with the observation that |T | is divisible
by d(q + 1) yields |T |r > dr (q + 1)r > rdr fr = r|Out(T )|r .
Suppose that dr = 1. As r divides |T |, an inspection of |T | and |Out(T )| for
simple groups T of Lie type shows that one the following happens.
(i) r q i − 1 for some i > 1 with |T |r > (q i − 1)r .
(ii) r q i + 1 for some i > 2 with |T |r > (q i + 1)r .
(iii) r q 8 + q 4 + 1 with |T |r > (q 8 + q 4 + 1)r .
First assume that (i) appears. Then we conclude from Lemma 8.16(c) that (q i −1)r =
(pif −1)r > rfr , which implies |T |r > (q i −1)r > rfr = r|Out(T )|r . Next assume that
(ii) appears. Then r divides q 2i −1 but not q i −1. It follows that (q i +1)r = (q 2i −1)r ,
and appealing Lemma 8.16(c) we obtain (q i + 1)r = (p2if − 1)r > rfr . This leads
to |T |r > (q i + 1)r > rfr = r|Out(T )|r . Finally assume that (iii) appears. If
r q 4 − 1, then q 8 + q 4 + 1 ≡ 3 (mod r), contrary to the condition that r q 8 + q 4 + 1.
Consequently, r does not divide q 4 − 1, and so (q 8 + q 4 + 1)r = (q 12 − 1)r . Now
appealing Lemma 8.16(c) we have (q 8 + q 4 + 1)r = (p12f − 1)r > rfr , which implies
|T |r > (q 8 + q 4 + 1)r > rfr = r|Out(T )|r .
The proof is thus completed.
78
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
8.3. Reduction to affine and almost simple groups
In [48, Theorem 2] Praeger proved that the quasiprimitive groups acting 2-arctransitively on a connected graph can be divided into four different types, which
were later called HA, AS, TW and PA [49, Theorem 5.1], see Lemma 8.2. In
this section, we determine the quasiprimitive types for those containing a vertextransitive solvable subgroup. It will be shown that only types HA and AS can
occur.
We fix some notation throughout this section. Let Γ = (V, E) be a connected
(G, 2)-arc-transitive graph with G quasiprimitive on V , and R be a solvable vertextransitive subgroup of G. Take an edge {α, β} ∈ E and denote N = soc(G). Then
N = T1 × · · · × Tℓ , where ℓ > 2 and T1 ∼
= T for some nonabelian simple
= Tℓ ∼
= ... ∼
group T . Let T = {T1 , . . . , Tℓ }, and K be the kernel of G acting by conjugation on
T . Then N E K 6 Aut(T1 ) × · · · × Aut(Tℓ ).
Lemma 8.18. Assume that Nα is solvable. Then either T = PSL2 (q) with q > 5,
or T = PSU3 (8), PSU4 (2), PSL4 (2), M11 , or PSL3 (q) with q ∈ {3, 4, 5, 7, 8}.
Proof. Let X = NR. Then X = NXα , and
Xα /Nα ∼
= R/(R ∩ N),
= NXα /N = X/N = NR/N ∼
whence Xα /Nα is solvable. This together with the assumption that Nα is solvable
implies that Xα is solvable. Hence X is a product of two solvable groups R and Xα ,
and then by [32] we conclude that T is a one of the groups listed in the lemma.
One will see that for TW or PA type G, the assumption of Lemma 8.18 is
satisfied, so that the simple groups listed in Lemma 8.18 are all the possibilities for
T . For later use, we give some properties for these groups in the next lemma.
Lemma 8.19. Let T be a simple group listed in Lemma 8.18, r be the largest
prime divisor of |T |, and m be the smallest index of solvable subgroups in T . Then
r 6 m, and the following statements hold.
(a) If m < 3|Out(T )|, then T = PSL2 (5) or PSL2 (9).
(b) If m 6 60, then either (T, m) = (PSL2 (q), q + 1) with prime power 8 6 q 6
59, or (T, m) lies in Table 8.1.
Table 8.1.
T PSL2 (5) PSL2 (7) PSL3 (3) PSL4 (2) PSU4 (2) M11
m 5
7
13
35
40
55
Proof. First, for T = PSU3 (8), PSU4 (2), PSL4 (2), M11 , or PSL3 (q) with q ∈
{3, 4, 5, 7, 8}, the value of m can be derived from [10], verifying the lemma directly.
Next assume that T = PSL2 (q) with prime power q > 5. For q = 5, 7, 9, 11, we see
from [10] that m = 5, 7, 10, 12, respectively, satisfying the conclusion of the lemma.
Thus assume that q = 8 or q > 13 in the following. Then according to Theorem 2.6,
the smallest index of proper subgroups of T is q + 1. As a consequence, m > q + 1.
At the meanwhile, T = PSL2 (q) has a solvable subgroup q:(q − 1)/(2, q − 1) of index
q +1. Thereby we obtain m = q +1 > 3|Out(T )|. Moreover, r q(q +1)(q −1) implies
that r q, q + 1 or q − 1, and hence r 6 q + 1 = m. This proves the lemma.
8.3. REDUCTION TO AFFINE AND ALMOST SIMPLE GROUPS
79
Lemma 8.20. G is not of type TW.
Proof. Suppose that G is quasiprimitive of TW type. Then Nα = 1, and
|V | = |N| = |T |ℓ . As a consequence, T is a simple group listed in Lemma 8.18. Let
m be the smallest index of solvable subgroups of Aut(T ), and R = RK/K. Then
R 6 G/K . Sℓ , and |R ∩ K||R| = |R| is divisible by |T |ℓ since R is transitive
on V . Let Ri be the projection of R ∩ K into Aut(Ti ), where 1 6 i 6 ℓ. Then
R ∩ K . R1 × · · · × Rℓ . By Lemma 8.19, either m > 3|Out(T )|, or T = PSL2 (5) or
PSL2 (9).
First assume that m > 3|Out(T )|. For 1 6 i 6 ℓ, since |Ri |m 6 |Aut(T )|, we
have |Ri | 6 |T |/3. Now |R ∩ K| 6 |R1 | · · · |Rℓ | 6 |T |ℓ /3ℓ , while |R ∩ K||R| > |T |ℓ .
Thus |R| > 3ℓ , which violates Theorem 8.14 on the order of the solvable subgroup
R in Sℓ .
Next assume that T = PSL2 (5). Then any solvable subgroup of Aut(T ) has
order dividing 24 or 20. Hence |R1 | · · · |Rℓ | divides 24x 20y for some nonnegative
integers x and y with x + y = ℓ. Viewing that |R ∩ K| divides |R1 | · · · |Rℓ | and |R|
divides ℓ!, we obtain 60ℓ 24x 20y ℓ! since 60ℓ divides |R| = |R ∩ K||R|. In particular,
3ℓ 3x (ℓ!)3 and 5ℓ 5y (ℓ!)5 . This implies ℓ < x + ℓ/2 and ℓ < y + ℓ/4 by Lemma 8.13,
which leads to a contradiction that 2ℓ < (x + ℓ/2) + (y + ℓ/4) = x + y + 3ℓ/4 = 7ℓ/4.
Finally assume that T = PSL2 (9). Then any solvable subgroup of Aut(T ) has
order dividing 288 or 40, and so |R∩K| divides 288x 40y for some nonnegative integers
x and y with x+y = ℓ. It follows that 360ℓ 288x 40y ℓ!. In particular, 32ℓ 32x (ℓ!)3 and
5ℓ 5y (ℓ!)5 . This implies 2ℓ < 2x + ℓ/2 and ℓ < y + ℓ/4 by Lemma 8.13, which leads
to a contradiction that 2ℓ < x + y + ℓ/2 = 3ℓ/2. The proof is thus completed.
In the next few lemmas we exclude quasiprimitive type PA for G. Recall that
K 6 Aut(T1 ) × · · · × Aut(Tℓ ). We call a subgroup of K diagonal if it isomorphic to
its projection into Aut(Ti ) for each 1 6 i 6 ℓ.
Γ (α)
Lemma 8.21. If G is of type PA, then Kα
is transitive and imprimitive.
Proof. As G is quasiprimitive of type PA, we have Nα 6= 1. It follows from
Γ (α)
Γ (α)
Lemma 8.5 that Nα
is transitive, and so is Kα . For 1 6 i 6 ℓ, let Ri be the
projection of R ∩ N into Ti and Mi = CK (Ti ). Then Mi ✁ K, and K/Mi is almost
simple with socle T . Note that Mi × Ti > N is transitive on V while Mi is not. We
infer that Ti transitively permutes the orbits of Mi , and so Mi has more than two
orbits.
Γ (α)
Suppose for a contradiction that Kα
is primitive. Then Mi is semiregular
on V by Lemma 8.1. Hence Kα ∩ Mi = 1 for each 1 6 i 6 ℓ, which means that
Kα is diagonal. Without loss of generality, assume Kα = {(x, . . . , x) | x ∈ P } for
some P 6 Aut(T ). Then Kα ∼
= P and Nα = Kα ∩ N = Kα ∩ (T1 × · · · × Tℓ ) =
{(x, . . . , x) | x ∈ P ∩ T }. Consequently, |V | = |N|/|Nα | is divisible by |T |ℓ−1, and
Kα /Nα ∼
= P T /T 6 Out(T ). Since N is transitive on V , we derive that
= P/P ∩ T ∼
K/N = Kα N/N ∼
= Kα /Kα ∩ N = Kα /Nα . Out(T ).
Hence |R ∩ K|/|R ∩ N| divides |Out(T )|, and |R ∩ K||RK/K| = |R| is divisible by
|T |ℓ−1 due to the vertex-transitivity of R. Then as |R ∩ N| divides |R1 × · · · × Rℓ |,
(8.3)
|T |ℓ−1 divides |R1 × · · · × Rℓ ||Out(T )||RK/K|.
80
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
Consider an arbitrary orbit of R by conjugation on T , say {T1 , . . . , Tk }, where
1 6 k 6 ℓ. Clearly, R1 ∼
= Rk . Let N = T1 × · · · × Tk , L = M1 ∩ · · · ∩ Mk ,
= ... ∼
K = KL/L, V be the orbits of L on V , v ∈ V , and R be the permutation group on
V induced by R. Then N , K are transitive permutation groups on V , and since Kα
is diagonal, K v is diagonal too. Moreover, the projection of R ∩ N into Ti is still
Ri for 1 6 i 6 k. Noticing R K/K . Sk , then along the same lines of the previous
paragraph, we derive that
(8.4)
|T |k−1 divides |R1 × · · · × Rk ||Out(T )||Sk | = |R1 |k |Out(T )|k!.
Assume that k > 1. Since R1 is a solvable subgroup of T1 , we know from Lemma 8.15
that there exists a prime r dividing |T | but not |R1 |. By Theorem 8.17, |T |r >
r|Out(T )|r , whence r|T |rk−2 divides (k!)r by (8.4). Accordingly,
(8.5)
r|T |rk−2 6 (k!)r < r k/(r−1)
by Lemma 8.13. In particular, r k−1 < r k/(r−1) , which forces r = 2. As |T | is
divisible by 4, (8.5) implies that 22k−3 < 2k , and so k 6 2. If k = 2, then |T |2
divides 2|Out(T )|2 because of (8.4), and thereby we have T = PSL2 (pf ) with prime
p ≡ ±3 (mod 8) by Theorem 8.17. However, in this situation (8.4) requires that
|PSL2 (pf )| = pf (p2f − 1)/2 divides 2|R1 |2 |Out(T )| = 4f |R1 |2 , which is not possible
for a solvable subgroup R1 of PSL2 (pf ). Therefore, k = 1. It follows that K = K/M1
is an almost simple group with socle T , and R ∩ N = R ∩ soc(K) ∼
= R1 .
The conclusion of the previous paragraph implies that R normalizes Ti for
each 1 6 i 6 ℓ, whence R 6 K. For 1 6 i 6 ℓ denote by αi the orbit of
Mi on V containing α. Then by the transitivity of R we have a factorization
K/Mi = (RMi /Mi )(K/Mi )αi . Moreover, K/Mi is an almost simple group with
socle T , (RMi /Mi ) ∩ soc(K/Mi ) ∼
= Ri , and (K/Mi )αi ∼
= Kα by Lemma 8.1. Let
Y = RT1 . Then T1 is normal in Y as T1 is normal in K. Since (T1 )α = 1 by
Lemma 8.1, we have Yα ∩ T1 = 1, and thus
Yα = Yα /Yα ∩ T1 ∼
= R/R ∩ T1
= Yα T1 /T1 6 Y /T1 = RT1 /T1 ∼
is solvable. Now Y = RYα is a product of two solvable subgroups, so by [32], T ∼
= T1
is one of the groups: PSL2 (q) with q > 5, PSU3 (8), PSU4 (2), PSL4 (2), M11 , and
PSL3 (q) with q ∈ {3, 4, 5, 7, 8}. Applying Theorem 1.1 to the factorizations K/Mi =
(RMi /Mi )(K/Mi )αi for 1 6 i 6 ℓ, where soc(K/M1 ) ∼
= T is one
= soc(K/Mℓ ) ∼
= ... ∼
∼
∼
of the above groups and (K/M1 )α1 = . . . = (K/Mℓ )αℓ , we conclude that there exists
a prime divisor r of |T | dividing none of |(RMi /Mi ) ∩ soc(K/Mi )| for 1 6 i 6 ℓ.
Consequently, r does not divide |R1 ×· · ·×Rk |. Thus (8.3) implies that |T |rℓ−1 divides
|Out(T )|r , which is not possible by Theorem 8.17. This contradiction completes the
proof.
Γ (α)
Lemma 8.22. Let R = RK/K. If G is of type PA, then Gα
is an affine
2-transitive permutation group of degree pd , where p is prime and d > 2, and the
following statements hold.
(a) R is isomorphic to a solvable subgroup of Gαβ /Kαβ , and |T |ℓ /|R1 × . . . Rℓ |
divides pd |Kαβ ||R|.
Γ (α)
Γ (α)
(b) |T |ℓ /|R1 × . . . Rℓ | divides |Kαβ ||Gα |.
(c) Kα is solvable, and Kαβ is diagonal.
8.3. REDUCTION TO AFFINE AND ALMOST SIMPLE GROUPS
81
(d) T is one of the groups listed in Lemma 8.18.
Γ (α)
Proof. Suppose that G has type PA. Then Kα is imprimitive by Lemma 8.21.
Γ (α)
We thereby see from Lemma 8.8 that Gα
is an affine 2-transitive permutation
group of degree pd , where p is prime and d > 2, and Kα is solvable.
Since Nα 6= 1 and Kα 6= 1, Lemma 8.5 shows that G/N ∼
= Gαβ /Nαβ and G/K ∼
=
Gαβ /Kαβ . Consequently, R is isomorphic to a solvable subgroup of Gαβ /Kαβ , and
|K/N| = |Kαβ /Nαβ |. Because N is arc-transitive by Lemma 8.5 and R is transitive
on V , so |R| is divisible by |V | = |N|/|Nα| = |T |ℓ /(pd |Nαβ |). Then as |R| = |R ∩
K||R| divides |K/N||R ∩ N||R| = |Kαβ /Nαβ ||R ∩ N||R|, we deduce that |T |ℓ /|R1 ×
. . . Rℓ | divides pd |Kαβ ||R|. Hence part (a) holds.
Viewing R . Gαβ /Kαβ , we derive from part (a) that |T |ℓ /|R1 × . . . Rℓ | divides
[1]
Γ (α)
[1]
Γ (α)
pd |Gαβ | = |Gα | = |Gα ||Gα |. Thus part (b) is true since |Gα | divides |Kαβ | by
Lemma 8.8.
To prove that Kαβ is diagonal, take an arbitrary i ∈ {1, . . . , ℓ} and let M =
Q
Γ (α)
is transitive, then the neighbors of α are in the same orbit of M
j6=i Kj . If Mα
Γ (α)
and so Γ will be bipartite. Hence Mα
is an intransitive normal subgroup of the
Γ (α)
Γ (α)
[1]
Frobenius group Kα . This forces Mαβ = 1, which means Mαβ = Mα . Similarly,
[1]
[1]
[1]
[1]
Mαβ = Mβ . It follows that Kαβ ∩ M = Mαβ = Mα ∩ Mβ = Mαβ = 1. Therefore,
Kαβ is diagonal, proving part (c).
Finally, as Kα is solvable we know that Nα is solvable too. Then Lemma 8.18
shows that T is one of the groups listed there.
Lemma 8.23. G is not of type PA.
Proof. Let r be the largest prime divisor of |T |, m be the smallest index of
solvable subgroups in T , and R = RK/K. Suppose for a contradiction that G
Γ (α)
has type PA. Then from Lemma 8.22 we know that Gα
is an affine 2-transitive
permutation group of degree pd , where p is prime and d > 2. The affine 2-transitive
permutation groups were classified by Hering [24], see also [41]. By virtue of this
classification, we will exclude the candidates of 2-transitive permutation groups for
Γ (α)
Gα
case by case. According to Lemma 8.22(d), T satisfies Lemma 8.18, and so
Γ (α)
Γ (α)
the integers r 6 m satisfy Lemma 8.19. Noticing Nα
> soc(Gα ) = Cdp as
Γ (α)
Γ (α)
Nα
⊳ Gα , we see that |T |ℓ = |N| is divisible by p. Hence |T | is divisible by p,
and so m > r > max{5, p}. Then the observation mℓ 6 |T |ℓ /|R1 × . . . Rℓ | together
with Lemma 8.22(b) yields
(8.6)
Γ (α)
max{5, p}ℓ 6 mℓ 6 |Kαβ ||GΓα (α) |.
For any group X, let P (X) denote the smallest index of a proper subgroup of X.
Note that G/K ∼
= Gαβ /Kαβ by Lemma 8.5, and G/K is isomorphic to a transitive
subgroup of Sℓ since N is the unique minimal normal subgroup of G.
Γ (α)
Case 1. Suppose that SLn (q) 6 Gαβ 6 ΓLn (q), where n > 2 and q = pf . Then
Γ (α)
Kαβ 6 Cq−1 , and PSLn (q) is a section of Gαβ /Kαβ . It follows from Lemma 2.5(c)
that ℓ > P (PSLn (q)) due to Gαβ /Kαβ . Sℓ . Thereby (8.6) leads to
(8.7)
max{5, p}P (PSLn (q)) 6 (q − 1)|AΓLn (q)|.
82
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
Assume that (n, q) 6= (4, 2) or (2, q) with q 6 11. Then we have P (PSLn (q)) =
(q n − 1)/(q − 1) according to Table 2.3, and hence (8.7) implies
(8.8)
5(q
n −1)/(q−1)
Γ (α)
6 |Kαβ ||GΓα (α) | 6 (q − 1)|AΓLn (q)| < q n
2 +n+2
.
n
2
If q = 2, then n > 3 and the above inequality turns out to be 52 −1 < 2n +n+2 ,
n−1
n
2
not possible. It derives from (8.8) that 5q
< 5(q −1)/(q−1) < q n +n+2 . Hence
q n−1 ln 5 < (n2 + n + 2) ln q < (n2 + n + 2)q/2, and so q n−2 ln 25 < n2 + n + 2. This
2
indicates n 6 3 as q > 3. Further, if n = 3, then (8.8) turns out to be 5q +q+1 < q 14 ,
which is not possible. We therefore have n = 2. However, (8.8) then turns out to
be 5q+1 < q 8 , violating our assumption that q > 11.
Consequently, we only need to deal with the candidates in the table below.
n
2 2 2 2 2 2 2 2 4
q
2 3 4 5 7 8 9 11 2
P (PSLn (q)) 2 3 5 5 7 9 6 11 8
For (n, q) = (2, 2), (2, 7), (2, 11) or (4, 2), the inequality (8.7) does not hold, a
contradiction. Thus n = 2 and q ∈ {3, 4, 5, 8, 9}. Recall from (8.6) that
(8.9)
Γ (α)
mℓ 6 |Kαβ ||GΓα (α) | 6 (q − 1)|AΓL2 (q)| = q 3 (q 2 − 1)(q − 1)2 f.
Γ (α)
Assume that q = 3. Then Kαβ
6 C2 , and
Γ (α)
Γ (α)
|Sℓ | > |Gαβ /Kαβ | > |Gαβ /Kαβ | > |SL2 (3)|/2 > 12.
This implies ℓ > 4, and so (8.9) requires m4 6 25 · 33 . Accordingly we obtain m 6 5,
and thus T = A5 by Lemma 8.19. Observe that any solvable subgroup of T has
index divisible by 5 or 6. We infer from Lemma 8.22(b) that 5x 6y 25 · 33 for some
nonnegative integers x and y with x + y = ℓ, which is impossible as ℓ > 4.
Assume that q = 4. Then ℓ > P (PSL2 (4)) = 5, and (8.9) requires m5 6 26 · 33 · 5.
Hence m 6 7, and so T = A5 or PSL2 (7) by Lemma 8.19. If T = A5 , then any
solvable subgroup of T has index divisible by 5 or 6, and we infer from Lemma 8.22(b)
that 5x 6y 26 · 33 · 5 for some nonnegative integers x and y with x + y = ℓ > 5, a
contradiction. Thus T 6= A5 . Similarly, we derive that T 6= PSL2 (7).
Assume that q = 5. Then ℓ > P (PSL2 (5)) = 5, and (8.9) requires m5 6 27 · 3 · 53.
Hence m 6 8, and thus Lemma 8.19 implies T = A5 as |T | is divisible by p = 5. Now
we derive from Lemma 8.22(b) that 5x 6y 27 · 3 · 53 for some nonnegative integers x
and y with x + y = ℓ > 5, a contradiction.
Assume that q = 8. Then ℓ > P (PSL2 (4)) = 9, and (8.9) requires m5 6 29 ·32 ·73.
Hence m 6 5, and so T = A5 by Lemma 8.19. Now we derive from Lemma 8.22(b)
that 5x 6y 29 · 32 · 73 for some nonnegative integers x and y with x + y = ℓ > 9, a
contradiction.
Assume that q = 9. Then ℓ > P (PSL2 (9)) = 6, and (8.9) requires m6 6 211 ·36 ·5.
Hence m 6 13, and thus Lemma 8.19 implies that T = A5 , PSL2 (7), PSL2 (8), A6 ,
PSL2 (11) or PSL3 (3). If T = PSL2 (7), then any solvable subgroup of T has index
divisible by 7 or 8, and thereby we derive from Lemma 8.22(b) that 7x 8y 211 · 36 · 5
for some nonnegative integers x and y with x + y = ℓ > 6, not possible. Similarly
we exclude the candidates PSL2 (8), A6 , PSL2 (11) and PSL3 (3) for T . It follows
that T = A5 , and so |T |ℓ /|R1 × . . . Rℓ | is divisible by 5x 6y for some nonnegative
8.3. REDUCTION TO AFFINE AND ALMOST SIMPLE GROUPS
83
integers x and y with x + y = ℓ. According to Lemma 8.22(c), Kαβ . Aut(T ) = S5 .
Γ (α)
This together with Lemma 8.8(b) and Kαβ 6 C8 implies that |Kαβ | divides 4.
Hence 5x 6y divides 22 · 34 |R| by Lemma 8.22(a). Note that Lemma 8.22(b) implies
5x 6y 211 · 36 · 5, which yields x 6 1 and y 6 6. Therefore, (x, y) = (1, 6), (1, 5) or
(0, 6) as ℓ = x + y > 6. However, none of these values for (x, y) allows 5x 6y to divide
22 · 34 |R| for some solvable R . Sℓ = Sx+y , a contradiction.
Γ (α)
Case 2. Suppose that Gαβ D Sp2n (q)′ , where n > 2 and q = pf . Then
Γ (α)
Γ (α)
Gαβ 6 (Cq−1 ◦ Sp2n (q)).Cf , Kαβ 6 Cq−1 , and PSp2n (q)′ is a section of Gαβ /Kαβ .
It follows from Lemma 2.5(c) that ℓ > P (PSp2n (q)′ ) due to Gαβ /Kαβ . Sℓ . Let
q = pf . Then (8.6) yields
(8.10)
′
max{5, p}P (PSp2n (q) ) 6 q 2n |Sp2n (q)|(q − 1)2 f.
Noticing that f 6 q, we obtain
(8.11)
′
max{5, p}P (PSp2n (q) ) 6 q 2n+1 |Sp2n (q)|(q − 1)2 < q 2n
2 +3n+3
.
Moreover, by Theorem 2.6, one of the following appears.
(i) q > 2, (n, q) 6= (2, 3) and P (PSp2n (q)′ ) = (q 2n − 1)/(q − 1).
(ii) q = 2, n > 3 and P (PSp2n (q)′ ) = 2n−1 (2n − 1).
(iii) (n, q) = (2, 3) and P (PSp2n (q)′ ) = 27.
(iv) (n, q) = (2, 2) and P (PSp2n (q)′ ) = 6.
The possibility for (iii) and (iv) is directly ruled out by (8.10). If (i) occurs, then
(8.11) implies that q 2n−1 < (q 2n − 1)/(q − 1) < (2n2 + 3n + 3)f , and thus 32n−1 6
q 2n−1 /f < 2n2 + 3n + 3, not possible. If (ii) occurs, then (8.11) implies that
n (2n −1)
22
n−1 (2n −1)
< 52
< 22n
2 +3n+3
,
and thus 2n (2n − 1) < 2n2 + 3n + 3, not possible either.
Γ (α)
Γ (α)
Case 3. Suppose that Gαβ D G2 (q)′ , where q = 2f . Then pd = q 6 , Gαβ 6
Γ (α)
(Cq−1 ◦ G2 (q)).Cf , Kαβ 6 Cq−1 , and G2 (q)′ is a section of Gαβ /Kαβ . It follows
from Lemma 2.5(c) that ℓ > P (G2(q)′ ) due to Gαβ /Kαβ . Sℓ . Then (8.6) yields
(8.12)
′
5P (G2 (q) ) 6 q 6 |G2 (q)|(q − 1)2 f < q 23 .
For q = 2 or 4, P (G2(q)′ ) = 28 or 416 by [10], contrary to (8.12). Thus q > 8.
Consulting the classification of maximal subgroups of G2 (2f )′ = G2 (2f ) in [12], one
immediately concludes that P (G2(q)′ ) = (q 6 − 1)/(q − 1). Thereby (8.12) implies
that
5
5
6
2q < 5q < 5(q −1)/(q−1) < q 23 = 223f ,
and so 85 /3 6 q 5 /f < 23, a contradiction.
Γ (α)
Γ (α)
Case 4. Suppose that Gα 6 AΓL1 (pd ). Let H = Gαβ ∩ GL1 (pd ), and M be
Γ (α)
the full preimage of H under the natural homomorphism from Gαβ to Gαβ . Note
[1]
Γ (α)
by Lemma 8.8 that Gαβ = 1, Kαβ
6 GL1 (pf ) for some proper divisor f of d, and
[1]
|Gα | divides pf − 1. From (8.6) we deduce
(8.13)
Γ (α)
max{5, p}ℓ 6 |Kαβ ||GΓα (α) | 6 (pf − 1)pd (pd − 1)d.
84
8. CAYLEY GRAPHS OF SOLVABLE GROUPS
As M/(M ∩ Gα) ≅ M^Γ(α) = H is cyclic, we have M′ ≤ Gα. Similarly, M/(M ∩ Gβ) ≅ M^Γ(β) ≅ H
yields M′ ≤ Gβ. Thus M′ ≤ Gα^[1] ∩ Gβ^[1] = Gαβ^[1] = 1, which means
that M is abelian. Consequently, M has a subgroup L isomorphic to M^Γ(β) ≅ H.
Denote Ḡ = Gαβ/Kαβ, M̄ = MKαβ/Kαβ and L̄ = LKαβ/Kαβ. Clearly, L̄ is a cyclic
normal subgroup of M̄. Since M̄ is a normal subgroup of Ḡ and Ḡ is a
transitive permutation group on T, the orbits of M̄ on T, say Δ1, . . . , Δk, are of
equal size. For 1 ≤ i ≤ k, let M̄i and L̄i be the restrictions of M̄ and L̄, respectively,
to Δi. Then M̄ = M̄1 × · · · × M̄k, L̄ = L̄1 × · · · × L̄k, and L̄i is a cyclic normal
subgroup of M̄i for each 1 ≤ i ≤ k. From the transitivity of M̄i on Δi we deduce
that L̄i is semiregular on Δi, and thus |L̄i| divides |Δi| = |Δ1| for each 1 ≤ i ≤ k.
Noticing that |L̄1|, . . . , |L̄k| are pairwise coprime as L̄ is cyclic, we conclude that
|L̄| = |L̄1| · · · |L̄k| divides |Δ1|. In particular, |L̄| divides ℓ. Because |M̄/L̄| divides
|M/L| = |M ∩ Gα^[1]| and |Gα^[1]| divides p^f − 1, |M̄/L̄| divides p^f − 1. Moreover,
|Ḡ/M̄| divides d, since |Ḡ/M̄| divides |Gαβ^Γ(α)/H| and |Gαβ^Γ(α)/H| divides
|ΓL1(p^d)|/|GL1(p^d)| = d. It follows that |Ḡ| divides (p^f − 1)d|L̄| and thus divides
(p^f − 1)dℓ. Since |Ḡ| is divisible by |Gαβ^Γ(α)/Kαβ^Γ(α)| while |Gαβ^Γ(α)/Kαβ^Γ(α)| is divisible by
(p^d − 1)/(p^f − 1), we thereby obtain
(8.14)  (p^d − 1)/(p^f − 1) divides (p^f − 1)dℓ.
Observe that the greatest common divisor of (p^d − 1)/(p^f − 1) and p^f − 1 equals
(d/f, p^f − 1), due to (p^d − 1)/(p^f − 1) ≡ d/f (mod p^f − 1). Then (8.14) is equivalent
to (p^d − 1)/(p^f − 1) dividing (d/f, p^f − 1)dℓ, which implies (since f ≤ d/2) that
(8.15)  ℓ ≥ (p^d − 1)f/((p^f − 1)d^2) ≥ (p^d − 1)(d/2)/((p^{d/2} − 1)d^2) = (p^{d/2} + 1)/(2d).
Suppose p = 2. Combining (8.13) and (8.15) one obtains
(8.16)  5^{(2^{d/2}+1)/(2d)} ≤ (2^{d/2} − 1) 2^d (2^d − 1) d.
As a consequence, 2^{(2^{d/2}+1)/d} < 5^{(2^{d/2}+1)/(2d)} < 2^{3d}, or equivalently, 2^{d/2} + 1 < 3d^2.
This implies that d ≤ 20. However, d = 20 does not satisfy (8.14), whence d ≤ 19.
If d is prime, then f = 1 and (8.14) yields (2^d − 1) | ℓ, which in conjunction with (8.13)
gives 5^{2^d−1} ≤ 2^d (2^d − 1) d, a contradiction. If d is a power of 2, then (8.14) yields
(2^{d/2} + 1) | ℓ, and so (8.13) leads to 5^{2^{d/2}+1} ≤ (2^{d/2} − 1) 2^d (2^d − 1) d, which is not possible. If
d = 9, then 73 | ℓ by (8.14), not satisfying (8.13). If d = 10, then by (8.14), either
f ≤ 2 and 31 | ℓ, or f = 5 and 33 | ℓ, still not satisfying (8.13). The same argument
excludes d ∈ {12, 14, 15, 18}. Therefore, d = 6. If f ≤ 2, then 7 | ℓ by (8.14), not
satisfying (8.13). Thus f = 3 as a proper divisor of d. Since |Gαβ^Γ(α)/Kαβ^Γ(α)| is divisible
by (2^6 − 1)/(2^3 − 1) = 9 and Gαβ^Γ(α)/Kαβ^Γ(α) is a section of Sℓ, we conclude ℓ ≥ 6. Then
(8.6) requires
m^6 ≤ m^ℓ ≤ |Kαβ^Γ(α)| |Gα^Γ(α)| ≤ |GL1(2^3)| |AΓL6(2)| = 2^7 · 3^3 · 7^2,
which means m ≤ 7. It follows from Lemma 8.19 that T = A5 or PSL2(7). If
T = A5, then any solvable subgroup of T has index divisible by 5 or 6, and thereby
we derive from Lemma 8.22(b) that 5^x 6^y divides 2^7 · 3^3 · 7^2 for some nonnegative integers
x and y with x + y = ℓ ≥ 6, which is not possible. Hence T ≠ A5. Similarly T ≠ PSL2(7), a
contradiction.
Suppose p = 3. Combining (8.13) and (8.15) one obtains
5^{(3^{d/2}+1)/(2d)} ≤ (3^{d/2} − 1) 3^d (3^d − 1) d.
As a consequence, 5^{(3^{d/2}+1)/(2d)} < 3^{3d}, and thus 5^{3^{d/2}+1} < 3^{6d^2} < 5^{5d^2}. This is
equivalent to 3^{d/2} + 1 < 5d^2, whence d ≤ 11. If d = 2, then f = 1, and (8.13) yields
ℓ ≤ 3 while |Gαβ^Γ(α)/Kαβ^Γ(α)| is divisible by (3^2 − 1)/(3 − 1) = 4, contrary to the fact
that Gαβ^Γ(α)/Kαβ^Γ(α) is a section of Sℓ. If d = 3, then 13 | ℓ by (8.14), not satisfying
(8.13). The same argument excludes d ∈ {5, 6, 7, 8, 9, 10, 11}. Therefore, d = 4, and
so Kαβ^Γ(α) ≤ GL1(3^2) as f ≤ 2. By (8.14) we have 5 | ℓ. Then since (8.6) requires
m^5 ≤ m^ℓ ≤ |Kαβ^Γ(α)| |Gα^Γ(α)| ≤ |GL1(3^2)| |AΓL4(3)| = 2^9 · 3^4 · 5,
we obtain m ≤ 11. It follows from Lemma 8.19 that T = A5, PSL2(7), PSL2(8) or
A6. If T = PSL2(7), then any solvable subgroup of T has index divisible by 7 or 8,
and thereby we derive from Lemma 8.22(b) that 7^x 8^y divides 2^9 · 3^4 · 5 for some nonnegative
integers x and y with x + y = ℓ ≥ 5, which is impossible. Hence T ≠ PSL2(7).
Similarly, T ≠ PSL2(8) or A6. Now T = A5, and thus |T|^ℓ/|R1 × · · · × Rℓ| is divisible by
5^x 6^y for some nonnegative integers x and y with x + y = ℓ. Thereby Lemma 8.22(b)
implies that 5^x 6^y divides 2^9 · 3^4 · 5, which leads to (x, y) = (1, 4) as x + y = ℓ ≥ 5. In view
of Kαβ^Γ(α) ≤ C8, we derive from Lemma 8.8(b) that Kαβ is a 2-group. Accordingly,
5^x 3^y divides 3^4 |R| by Lemma 8.22(a), whence |R| is divisible by 5. It follows that R
is transitive on T. However, this indicates that R1 ≅ · · · ≅ R5, and thus (|A5|/|R1|)^5
divides 2^9 · 3^4 · 5, which is not possible.
Thus far we have shown that p ≥ 5. Then (8.13) together with (8.15) implies
(8.17)  p^{(p^{d/2}+1)/(2d)} ≤ (p^{d/2} − 1) p^d (p^d − 1) d < p^{5d/2} d,
or equivalently, p^{d/2} + 1 < 5d^2 + 2d log_p d. As a consequence, 5^{d/2} + 1 < 5d^2 + 2d log_5 d,
which restricts d ≤ 6. If d = 6, then (8.17) forces p = 5, and so (8.14) yields
21 | ℓ, not satisfying (8.13). If d = 5, then (8.14) implies that (p^5 − 1)/(p − 1) divides 25ℓ
and so ℓ > p^4/25, which leads to p^{p^4/25} < 5p^{11} ≤ p^{12} by (8.13), a contradiction.
If d = 3, then (8.14) implies that (p^2 + p + 1)/(9, p^2 + p + 1) divides ℓ, which leads to
p^{(p^2+p+1)/(9, p^2+p+1)} < 3p^7 < p^8 by (8.13), which is not possible. If d = 4, then it follows from
(8.17) that p = 5 or 7. If d = 4 and p = 5, then (8.14) yields 13 | ℓ, violating (8.13).
If d = 4 and p = 7, then (8.14) yields 25 | ℓ, again violating (8.13). Consequently,
d ≠ 4, and thereby we have d = 2. Now (8.17) gives p ∈ {5, 7, 11, 13, 17, 19}. If
p = 13 or 17, then (8.14) yields 7 | ℓ or 9 | ℓ, respectively, not satisfying (8.13). We
next rule out the possibilities for p ∈ {5, 7, 11, 19}.
Assume p = 5. Then (8.14) yields 3 | ℓ, and so (8.6) requires
m^3 ≤ m^ℓ ≤ |Kαβ^Γ(α)| |Gα^Γ(α)| ≤ |GL1(5)| |AΓL2(5)| = 2^7 · 3 · 5^3,
giving m ≤ 36. Moreover, |T| is divisible by p = 5. Thus it follows from Lemma 8.19
that T = A8 or PSL2(q) with q ∈ {5, 9, 11, 16, 19, 25, 29, 31}. If T = A8, then any
solvable subgroup of T has index divisible by 3 or 7, and thereby we derive from
Lemma 8.22(b) that 3^x 7^y divides 2^7 · 3 · 5^3 for some nonnegative integers x and y with
x + y = ℓ ≥ 3, which is not possible. Hence T ≠ A8. Similarly, T ≠ PSL2(q) for q ∈
{11, 16, 25, 29, 31}. Therefore, T = A5, A6 or PSL2(19). We derive a contradiction
below under the assumption that T = A5, while a similar contradiction can be
derived for T = A6 or PSL2(19). Note that |T|^ℓ/|R1 × · · · × Rℓ| is divisible by 5^x 6^y
for some nonnegative integers x and y with x + y = ℓ, and thereby Lemma 8.22(b)
implies that 5^x 6^y divides 2^7 · 3 · 5^3. This shows that x ≤ 3 and y ≤ 1, which leads to (x, y) = (3, 0)
or (2, 1) since 3 divides x + y. In view of Kαβ^Γ(α) ≤ C4, we derive from Lemma 8.8(b) that
Kαβ is a 2-group, and so 5^x 3^y divides 5^2 |R| by Lemma 8.22(a). If (x, y) = (3, 0), then
5 divides |R|, contrary to the condition R ≲ Sℓ = S3. Consequently, (x, y) = (2, 1),
and thus 3 divides |R|. It follows that R is transitive on T. However, this indicates
that R1 ≅ R2 ≅ R3 and so (|A5|/|R1|)^3 divides 5^2 |R|, again contrary to R ≲ S3.
Assume p = 7. Since |Gαβ^Γ(α)/Kαβ^Γ(α)| is divisible by (7^2 − 1)/(7 − 1) = 8 and
Gαβ^Γ(α)/Kαβ^Γ(α) is a section of Sℓ, we conclude ℓ ≥ 4. Then (8.6) requires
m^4 ≤ m^ℓ ≤ |Kαβ^Γ(α)| |Gα^Γ(α)| ≤ |GL1(7)| |AΓL2(7)| = 2^6 · 3^2 · 7^2,
which means m ≤ 12. Moreover, |T| is divisible by p = 7. Thus it follows from
Lemma 8.19 that T = PSL2(7) or PSL2(8). If T = PSL2(8), then any solvable subgroup of T has index divisible by 9 or 28, and thereby we derive from Lemma 8.22(b)
that 9^x 28^y divides 2^6 · 3^2 · 7^2 for some nonnegative integers x and y with x + y = ℓ ≥ 4,
which is not possible. Now T = PSL2(7), and so |T|^ℓ/|R1 × · · · × Rℓ| is divisible by 7^x 8^y for
some nonnegative integers x and y with x + y = ℓ. Then Lemma 8.22(b) implies
that 7^x 8^y divides 2^6 · 3^2 · 7^2, which leads to (x, y) = (2, 2) as x + y = ℓ ≥ 4. In view of
Kαβ^Γ(α) ≤ C6, we derive from Lemma 8.8(b) that |Kαβ| divides 36. Consequently,
7^x 8^y divides 2^2 · 3^2 · 7^2 |R| by Lemma 8.22(a). However, this indicates that |R| is
divisible by 2^4, contrary to the condition R ≲ Sℓ = S4.
Assume p = 11. Since |Gαβ^Γ(α)/Kαβ^Γ(α)| is divisible by (11^2 − 1)/(11 − 1) = 12
and Gαβ^Γ(α)/Kαβ^Γ(α) is a section of Sℓ, we have ℓ ≥ 4. Moreover, (8.14) indicates 3 | ℓ.
Thereby we conclude that ℓ ≥ 6. Then (8.6) requires
m^6 ≤ m^ℓ ≤ |Kαβ^Γ(α)| |Gα^Γ(α)| ≤ |GL1(11)| |AΓL2(11)| = 2^5 · 3 · 5^3 · 11^3,
which means m ≤ 15. Also, |T| is divisible by p = 11. Thus it follows from
Lemma 8.19 that T = PSL2(11). Now any solvable subgroup of T has index divisible
by 11 or 12, and thereby we derive from Lemma 8.22(b) that 11^x 12^y divides 2^5 · 3 · 5^3 · 11^3
for some nonnegative integers x and y with x + y = ℓ ≥ 6, which is not possible.
Assume p = 19. Then (8.14) yields 5 | ℓ, and so (8.6) requires
m^5 ≤ m^ℓ ≤ |Kαβ^Γ(α)| |Gα^Γ(α)| ≤ |GL1(19)| |AΓL2(19)| = 2^5 · 3^4 · 5 · 19^2,
giving m ≤ 21. Moreover, |T| is divisible by p = 19. Thus it follows from
Lemma 8.19 that T = PSL2(19). Note that any solvable subgroup of T has index divisible by 19 or 20. Thereby we derive from Lemma 8.22(b) that 19^x 20^y divides 2^5 · 3^4 · 5 · 19^2
for some nonnegative integers x and y with x + y = ℓ ≥ 5, which is not possible.
Case 5. Suppose that Gα^Γ(α) is not one of the 2-transitive permutation groups
in the previous cases. Then (p^d, Gαβ^Γ(α), Kαβ^Γ(α)) lies in the table below.

row | p^d  | Gαβ^Γ(α)                               | Kαβ^Γ(α)
1   | 3^6  | SL2(13)                                | 2
2   | 2^4  | A7                                     | 1
3   | 3^4  | 2^{1+4}.5 ≤ Gαβ^Γ(α) ≤ 2^{1+4}.S5      | 2
4   | 5^2  | SL2(3)                                 | 2
5   | 5^2  | Q8.6, SL2(3).4                         | 4
6   | 7^2  | Q8.S3, SL2(3).6                        | 6
7   | 11^2 | SL2(3).5, SL2(3).10                    | 10
8   | 23^2 | SL2(3).22                              | 22
9   | 11^2 | SL2(5), 5 × SL2(5)                     | 10
10  | 19^2 | 9 × SL2(5)                             | 18
11  | 29^2 | 7 × SL2(5), 28.PSL2(5)                 | 28
12  | 59^2 | 29 × SL2(5)                            | 58
Assume that row 1 appears. Then it follows from Lemma 2.5(c) that ℓ ≥
P(PSL2(13)) = 14, since PSL2(13) is a section of Gαβ/Kαβ ≲ Sℓ. Thereby (8.6)
yields 5^{14} ≤ 2 · 3^6 |SL2(13)|, a contradiction. A similar argument rules out row 2.
Assume that row 3 appears. Then Gαβ^Γ(α)/Kαβ^Γ(α) has a section 2^4:5, and so Sℓ has a
section 2^4:5. This implies that ℓ ≥ 10. It follows from (8.6) that 5^{10} ≤ 2 · 3^4 |2^{1+4}.S5|,
a contradiction.
Next consider rows 4–8. As |Sℓ| ≥ |Gαβ/Kαβ| ≥ |Gαβ^Γ(α)/Kαβ^Γ(α)| ≥ 12, we have
ℓ ≥ 4. Note that |Kαβ^Γ(α)| |Gα^Γ(α)| divides 24p^2(p − 1)^2. Then (8.6) yields m^4 ≤
24p^2(p − 1)^2, which gives an upper bound for m in the table below. This together
with the condition that p divides |T| restricts the possibilities for T by Lemma 8.19,
also shown in the table.

p   | 5  | 7                          | 11       | 23
m ≤ | 9  | 14                         | 23       | 49
T   | A5 | PSL2(7), PSL2(8), PSL2(13) | PSL2(11) | PSL2(23)

For p = 5, since any solvable subgroup of T = A5 has index divisible by 5 or 6, we
derive from Lemma 8.22(b) that 5^x 6^y divides 24 · 5^2 · 4^2 for some nonnegative integers x
and y with x + y = ℓ ≥ 4, a contradiction. For p = 7, if T = PSL2(7), then since
any solvable subgroup of T has index divisible by 7 or 8, Lemma 8.22(b) yields that
7^x 8^y divides 24 · 7^2 · 6^2 for some nonnegative integers x and y with x + y = ℓ ≥ 4, which is not
possible. Similarly, T is not PSL2(8) or PSL2(13) either for p = 7. For p = 11, since
any solvable subgroup of T = PSL2(11) has index divisible by 11 or 12, it follows from
Lemma 8.22(b) that 11^x 12^y divides 24 · 11^2 · 10^2 for some nonnegative integers x and y with
x + y = ℓ ≥ 4, a contradiction. For p = 23, any solvable subgroup of T = PSL2(23)
has index divisible by 23 or 24, and thereby it follows from Lemma 8.22(b) that
23^x 24^y divides 24 · 23^2 · 22^2 for some nonnegative integers x and y with x + y = ℓ ≥ 4, again
a contradiction.
Finally, consider rows 9–12. Note that |Kαβ^Γ(α)| |Gα^Γ(α)| divides 60p^2(p − 1)^2, and
ℓ ≥ P(PSL2(5)) = 5 since PSL2(5) is a section of Gαβ/Kαβ ≲ Sℓ. It follows from
(8.6) that m^5 ≤ 60p^2(p − 1)^2, which gives m ≤ 14, 23, 33 or 58 corresponding to
p = 11, 19, 29 or 59, respectively. We conclude that T = PSL2(p) by Lemma 8.19
as |T| is divisible by p. Then any solvable subgroup of T has index divisible by
p or p + 1, and so Lemma 8.22(b) implies that p^x (p + 1)^y divides 60p^2(p − 1)^2 for some
nonnegative integers x and y with x + y = ℓ ≥ 5, which is not possible.
We summarize the outcomes of this section in the following proposition.
Proposition 8.24. Let Γ = (V, E) be a connected (G, 2)-arc-transitive graph,
and R be a solvable vertex-transitive subgroup of G. If G is quasiprimitive on V ,
then G is either an almost simple group or an affine group.
Proof. We have seen in Lemmas 8.20 and 8.23 that G is not of type TW or
PA. Then by [48, Theorem 2], the quasiprimitive type of G must be HA or AS.
8.4. Proof of Theorem 1.2
We shall complete the proof of Theorem 1.2 in this section. Since quasiprimitive
2-arc-transitive graphs of affine type are classified in [30], we only need to treat the
almost simple case by Proposition 8.24. For convenience, we make a hypothesis as
follows.
Hypothesis 8.25. Let Γ = (V, E) be a connected (G, s)-arc-transitive graph,
where s ≥ 2 and the valency of Γ is at least three. Suppose that G is almost simple
with socle L, and that G is quasiprimitive on V and contains a solvable vertex-transitive
subgroup R. Let {α, β} ∈ E.
By the quasiprimitivity of G we immediately derive from the above hypothesis
that L is transitive on V and Γ is non-bipartite. Furthermore, by Lemmas 8.5
and 8.9 we have the consequences below.
Lemma 8.26. Under Hypothesis 8.25, the following statements hold.
(a) G has a factorization G = RGα with a solvable factor R and a core-free
factor Gα.
(b) Either G = L or Gα ≰ L.
(c) There exists a 2-element g in NG(Gαβ) such that
⟨Gα, NG(Gαβ)⟩ = ⟨Gα, g⟩ = G
and Gα is 2-transitive on [Gα:Gα ∩ Gα^g].
The candidates for (G, R, Gα) in the factorization G = RGα are categorized by
Theorem 1.1 into parts (a)–(d) there, and we will analyze them separately. First of
all, we consider the candidates (G, R, Gα) = (G, H, K) as in part (a) of Theorem 1.1,
which is the case where both H and K are solvable.
Lemma 8.27. Under Hypothesis 8.25, if Gα is solvable, then L = PSL2 (q) for
some prime power q, so that Γ is classified in [23].
Proof. Suppose that L ≠ PSL2(q) for any prime power q. Then by Proposition 4.1, (G, R, Gα) = (G, H, K) or (G, K, H) for some triple (G, H, K) in rows 4–12
of Table 4.1.
Assume that L = PSL3(3) as in rows 4–5 of Table 4.1. Then since Gα is 2-transitive on Γ(α), we infer that Gα = 3^2:2.S4 or AΓL1(9), and in particular Gα < L.
This yields G = L by Lemma 8.26(b), and so |V| = |G|/|Gα| = 13 or 39. As a
consequence, the valency of Γ is even, which restricts Gα = 3^2:2.S4 and Gαβ =
3^2:2.S3. However, this causes NG(Gαβ) = Gαβ < Gα and then ⟨Gα, NG(Gαβ)⟩ = Gα,
contrary to Lemma 8.26(c).
For L = PSL3(4) as in row 6 of Table 4.1, since the quasiprimitivity of G on V
requires that there is no subgroup of index two in G containing Gα, the possibilities
for (G, Gα) are (L.S3, 7:3.S3), (L.(S3 × 2), 7:6.S3) and (L.S3, 2^4:(3 × D10).2). Searching in Magma [6] for these candidates gives no such connected (G, 2)-arc-transitive
graph.
In the following we exclude rows 7–12 of Table 4.1. If Γ has valency at least
five, then since Gα is solvable, we have Gαβ^[1] = 1 by Theorem 8.7 and thus Gα^[1] ⊴
Gαβ^Γ(β) ≅ Gαβ^Γ(α) by Lemma 8.6. In particular, if Γ has valency at least five, then
(8.18)  |Gα| = |Gα^[1]| |Gα^Γ(α)| divides |Gαβ^Γ(α)| |Gα^Γ(α)| = |Gα^Γ(α)|^2/|Γ(α)|.
Let L = PSL3(8) as in row 7 of Table 4.1. Since Gα is 2-transitive on Γ(α),
we have Gα = 2^{3+6}:7^2:3 or 2^{3+6}:7^2:6. If Gα = 2^{3+6}:7^2:6, then |Γ(α)| = 7 and
G = PSL3(8).6, which leads to a contradiction that both |Γ(α)| and |V| = |G|/|Gα|
are odd. Thus Gα = 2^{3+6}:7^2:3, and so Gα^Γ(α) = 2^3:7:3. However, (8.18) requires |Gα|
to divide |Gα^Γ(α)|^2, a contradiction.
Let L = PSU3(8) as in row 8 of Table 4.1. Since Gα is 2-transitive on Γ(α)
and G does not have a subgroup of index two containing Gα, we have (G, Gα) =
(PSU3(8).3^2.O, 2^{3+6}:(63:3).O) with O ≤ C2. As a consequence, |V| = |G|/|Gα| =
513 is odd, and so |Γ(α)| is even. Hence |Γ(α)| = 2^6 and Gα^Γ(α) = 2^6:(63:3).O, but
this does not satisfy (8.18), a contradiction.
Assume next that L = PSU4(2) and Gα = H or K as in rows 9–11 of Table 4.1.
For each candidate for Gα, let X be the maximal subgroup of G containing Gα,
where X ∩ L = 2^4:A5, 3^{1+2}_+:2.A4 or 3^3:S4. Then computation in Magma [6] verifies
that for any subgroup M of Gα such that Gα acts 2-transitively on [Gα:M], one has
NG(M) ≤ X. This violates Lemma 8.26(c).
For G = M11 as in row 12 of Table 4.1, Gα = M9.2 = 3^2:Q8.2 and Gαβ = Q8.2,
which makes |V| = |G|/|Gα| and |Γ(α)| = |Gα|/|Gαβ| both odd, a contradiction.
This proves the lemma.
Next we treat the candidates for (G, R, Gα) = (G, H, K) as described in part (b)
of Theorem 1.1.
Lemma 8.28. Under Hypothesis 8.25, if L = An with n ≥ 7, then Γ = Kn.
Proof. The triple (G, R, Gα) = (G, H, K) is classified in Proposition 4.3, which
shows that one of the following occurs.
(i) An ⊴ G ≤ Sn with n ≥ 6, and An−1 ⊴ Gα ≤ Sn−1.
(ii) An ⊴ G ≤ Sn with n = p^f for some prime p, and An−2 ⊴ Gα ≤ Sn−2 × S2.
(iii) An ⊴ G ≤ Sn with n = 8 or 32, and An−3 ⊴ Gα ≤ Sn−3 × S3.
(iv) (G, R, Gα) = (G, H, K) in rows 5–6 of Table 4.2.
First assume that (i) occurs. Then viewing Lemma 8.26(b) we have (G, Gα) =
(Sn, Sn−1) or (An, An−1). It follows that G is 2-transitive on [G:Gα], and hence
Γ = Kn.
Next assume that (ii) occurs. For n = 7, 8 and 9, respectively, computation in
Magma [6] shows that there is no such connected (G, 2)-arc-transitive graph. Thus
n ≥ 11.
Suppose G = An. Then either Gα = An−2 or Gα = (Sn−2 × S2) ∩ An ≅ Sn−2.
Assume that Gα = An−2. Because Gα acts 2-transitively on [Gα:Gαβ], we have
Gαβ = An−3. It follows that NG(Gαβ) = (Sn−3 × S3) ∩ An. Thus for any 2-element g ∈
NG(Gαβ) \ Gα, we have ⟨Gα, g⟩ = ⟨An−2, (1, 2)(n − 2, n − 1)⟩, ⟨An−2, (1, 2)(n − 2, n)⟩
or ⟨An−2, (1, 2)(n − 1, n)⟩, and hence ⟨Gα, g⟩ ≅ An−1 or Sn−2. This is contrary
to Lemma 8.26(c). Now Gα = (Sn−2 × S2) ∩ An = ⟨An−2, (1, 2)(n − 1, n)⟩. Since
Gα acts 2-transitively on [Gα:Gαβ], we may assume Gαβ = ⟨An−3, (1, 2)(n − 1, n)⟩
without loss of generality. It follows that NG(Gαβ) = Gαβ < Gα, still contrary to
Lemma 8.26(c).
Suppose G = Sn. Then Gα = Sn−2, An−2 × S2 or Sn−2 × S2 by Lemma 8.26(b).
This implies that L = An is 2-arc-transitive on Γ, which is not possible as shown in
the previous paragraph.
Now assume that (iii) appears. If n = 8, then searching in Magma [6] shows that
no such connected (G, 2)-arc-transitive graph arises. Therefore, n = 32. According
to Proposition 4.3, R = AΓL1(32), which implies that |Gα| is divisible by 3|S29|
since |Gα| is divisible by |G|/|R|. Hence A29 × C3 ⊴ Gα, and so C3 ≤ Gα^[1], A29 ≤
Gα^Γ(α) ≤ S29 and A28 ≤ Gαβ^Γ(α) ≤ S28. By Theorem 8.7(b) we conclude Gαβ^[1] = 1, and
it then follows from Lemma 8.6 that Gα^[1] ≅ (Gα^[1])^Γ(β) ⊴ Gαβ^Γ(α), which is not possible.
Finally, for case (iv), computation in Magma [6] shows that there exists no such
connected (G, 2)-arc-transitive graph. This completes the proof.
The case where (G, R, Gα) = (G, H, K) as in part (c) of Theorem 1.1 is dealt
with in the next lemma.
Lemma 8.29. Under Hypothesis 8.25, if L is a sporadic simple group and Gα is
unsolvable, then either Γ is a complete graph, or Γ is the Higman-Sims graph and
(G, Gα ) = (HS, M22 ) or (HS.2, M22 .2).
Proof. By Proposition 4.4, either G acts 2-transitively on [G:K], or (G, K) lies
in rows 3–4 and 7–13 of Table 4.3. For the former, Γ is a complete graph. Since Gα
is 2-transitive on Γ(α), it is not possible for Gα = Sp4(4).4 or G2(4).2 as in row 12
or 13, respectively, of Table 4.3. Moreover, computation in Magma [6] shows that
there exists no non-bipartite connected (G, 2)-arc-transitive graph for rows 3–4 of
Table 4.3. Thus we only need to consider rows 7–11 of Table 4.3.
Let G = M23 as in row 7 of Table 4.3. If Gα = M22, then G acts 2-transitively on
[G:Gα] and so Γ = K23. If Gα = PSL3(4):2, then Gαβ = 2^4:S5 and NG(Gαβ) = Gαβ,
contradicting Lemma 8.26(c). If Gα = 2^4:A7, then Gαβ = A7, 2^4:A6 or 2^4:GL3(2)
and NG(Gαβ) = Gαβ, again contradicting Lemma 8.26(c).
If G = J2.2 as in row 8 of Table 4.3, then Gα = G2(2) and Gαβ = 3^{1+2}_+:8:2 as Gα
is 2-transitive on Γ(α), but NG(Gαβ) = Gαβ, violating the connectivity of Γ.
Now let L = HS as in rows 9–11 of Table 4.3. Viewing Lemma 8.26(b), we have
(G, Gα) = (HS.O, M22.O) with O ≤ C2. From the 2-transitivity of Gα on Γ(α)
we derive that Gαβ = PSL3(4).O. Hence Γ has order |G|/|Gα| = 100 and valency
|Gα|/|Gαβ| = 22. This is the well-known Higman-Sims graph; see Example 8.12.
Finally we embark on the candidates for (G, R, Gα) = (G, H, K) as in part (d)
of Theorem 1.1.
Lemma 8.30. Under Hypothesis 8.25, if G is a classical group of dimension
greater than two and Gα is unsolvable, then Γ is the Hoffman-Singleton graph and
(G, Gα) = (PSU3(5), A7) or (PΣU3(5), S7).
Proof. According to part (d) of Theorem 1.1, (G, R, Gα) = (G, H, K) with
(L, H ∩ L, K ∩ L) described in Tables 1.1–1.2. Since Gα acts 2-transitively on Γ(α),
inspecting the candidates in Tables 1.1–1.2 we conclude that one of the following
occurs.
(i) L = PSLn(q), where n ≥ 3 and (n, q) ≠ (3, 2) or (3, 3), and Lα ⊵ q^{n−1}:SLn−1(q).
(ii) L = PSp4(q), and PSL2(q^2) ⊴ Lα ≤ PSL2(q^2).2 as in row 4 or 5 of Table 1.1.
(iii) L = PSU4(q), and SU3(q) ⊴ Lα as in row 6 of Table 1.1.
(iv) (L, H ∩ L, K ∩ L) lies in rows 6–21, 23 and 25–26 of Table 1.2.
Case 1. Suppose that (i) appears. Then either Lα^Γ(α) ⊵ q^{n−1}:SLn−1(q) is affine
with |Γ(α)| = q^{n−1}, or Lα^Γ(α) ⊵ PSLn−1(q) is almost simple with |Γ(α)| = (q^{n−1} −
1)/(q − 1). Accordingly, Gαβ is the maximal subgroup of index q^{n−1} or (q^{n−1} − 1)/(q −
1) in Gα. If G ≤ PΓLn(q), then NG(Gαβ) ≤ P1[G], which leads to ⟨Gα, NG(Gαβ)⟩ ≤
P1[G], contrary to Lemma 8.9(c). Consequently, G ≰ PΓLn(q), and so G ∩ PΓLn(q)
has index 2 in G. Moreover, G ∩ PΓLn(q) contains Gα by Lemma 3.1. This implies
that G ∩ PΓLn(q) is not transitive on V, which is impossible as G is quasiprimitive
on V.
Case 2. Suppose that (ii) appears. In light of the graph automorphism of L
for even q, we may assume without loss of generality that Lα is a C3-subgroup of L.
Denote the stabilizer of a totally isotropic 1- or 2-space in L by P1 or P2, respectively.
Since Lα ⊴ Gα and Gα is 2-transitive on Γ(α), we deduce from the classification of
2-transitive permutation groups that Lα is 2-transitive on Γ(α) and
Lαβ = Lα ∩ P2 = q^2:((q^2 − 1)/(2, q − 1)) or q^2:((q^2 − 1)/(2, q − 1)).2.
Therefore, Γ is (L, 2)-arc-transitive, and then by Lemma 8.9 there exists g ∈ L
such that g normalizes Lαβ, g^2 ∈ Lαβ and ⟨Lα, g⟩ = L. Let X = PSL2(q^2).2
be the maximal subgroup of L containing Lα, and let N = ⟨g⟩Lαβ = Lαβ⟨g⟩. Since
O_p(Lαβ) ≠ 1, we know that O_p(N) ≠ 1, and hence N is contained in the parabolic
subgroup P1 or P2 of L by the Borel-Tits theorem. Note that the intersection of P1 with
the C3-subgroup X has order |X||P1|/|L| = 2q^2(q − 1)/(2, q − 1), as the factorization
L = XP1 holds. If N ≤ P1, then Lαβ ≤ P1, which leads to the contradiction
2q^2(q − 1) ≥ |P1 ∩ X| ≥ |P1 ∩ Lα| ≥ |Lαβ| ≥ q^2(q^2 − 1).
Consequently, N ≤ P2, from which we conclude
N = q^2:((q^2 − 1)/(2, q − 1)).2 = X ∩ P2
as |N|/|Lαβ| = 2. It follows that N ≤ X, and then ⟨Lα, g⟩ ≤ ⟨Lα, N⟩ ≤ X ≠ L,
contrary to the connectivity of Γ.
Case 3. Suppose that (iii) appears. In this case, Y ⊴ Lα ≤ X, where Y =
SU3(q) and X = GU3(q)/d with d = (4, q + 1). Denote the stabilizer of a totally
isotropic 1- or 2-space in L by P1 or P2, respectively. Since Lα ⊴ Gα and Gα is
2-transitive on Γ(α), we deduce from the classification of 2-transitive permutation
groups that Lα is 2-transitive on Γ(α) and Lαβ ∩ Y = q^{1+2}:(q^2 − 1). Therefore,
Γ is (L, 2)-arc-transitive, and then by Lemma 8.9 there exists g ∈ L such that g
normalizes Lαβ, g^2 ∈ Lαβ and ⟨Lα, g⟩ = L. Let N = ⟨g⟩Lαβ = Lαβ⟨g⟩. Since
O_p(Lαβ) ≠ 1, we know that O_p(N) ≠ 1, and hence N is contained in the parabolic
subgroup P1 or P2 of L by the Borel-Tits theorem. We conclude that
N ≤ q^{1+2}:(q^2 − 1).((q + 1)/d) ≤ X
as |N|/|Lαβ| = 2, but this implies ⟨Lα, g⟩ ≤ ⟨Lα, N⟩ ≤ X ≠ L, contrary to the
connectivity of Γ.
Case 4. Suppose that (iv) appears. For the candidates of (L, H ∩ L, K ∩ L) in
rows 6–11 of Table 1.2, searching in Magma [6] gives no such connected (G, 2)-arc-transitive graph.
Let L = PSp4(p), where p ∈ {5, 7, 11, 23}, as in rows 12–15 of Table 1.2. Then
Lα = PSL2(p^2):O with O ≤ C2, and since Gα^Γ(α) is 2-transitive, Lα^Γ(α) is also 2-transitive. Therefore, Γ is (L, 2)-arc-transitive. By the 2-transitivity of Lα on Γ(α)
we derive that Lαβ = (p^2:((p^2 − 1)/2)).O and then NL(Lαβ) < PSL2(p^2):2. However,
this implies that ⟨Lα, NL(Lαβ)⟩ ≤ PSL2(p^2):2 < L, contrary to the connectivity of
Γ.
Assume that L = Sp6(2) as in row 16 of Table 1.2. Then G = L, and Gα = A8 or
S8. If Gα = S8, then G is 2-transitive but not 3-transitive on [G:Gα], a contradiction.
Thus Gα = A8, and we have Gαβ = 2^3:GL3(2) or A7 since Gα acts 2-transitively on
Γ(α). If Gαβ = 2^3:GL3(2), then NG(Gαβ) = Gαβ < Gα, violating the connectivity
of Γ. If Gαβ = A7, then S7 ≅ NG(Gαβ) < Gα, still violating the connectivity of Γ.
Let L = PSp6(3) as in row 17 of Table 1.2. Then Lα = PSL2(27).3 is also 2-transitive on Γ(α), and hence Γ is (L, 2)-arc-transitive. Due to the 2-transitivity
of Lα^Γ(α) we deduce Gαβ = 3^3:13:3. However, it follows that NL(Lαβ) = Lαβ < Lα,
contrary to the connectivity of Γ. The same argument rules out rows 18 and 20 of
Table 1.2.
Now let L = PSU3(5) as in row 19 of Table 1.2. Then Lα = A7 is also 2-transitive on Γ(α), and hence Lαβ = PSL2(7) or A6. If Lαβ = PSL2(7), then
NL(Lαβ) = Lαβ < Lα, which is not possible. Consequently, Lαβ = A6, and Γ has order
|L|/|Lα| = 50 and valency |Lα|/|Lαβ| = 7. This is the well-known Hoffman-Singleton
graph introduced in Example 8.11, where we see that Aut(Γ) = PΣU3(5) and so
(G, Gα) = (PSU3(5), A7) or (PΣU3(5), S7).
Let L = PSU4(8) as in row 21 of Table 1.2. Then Lα = 2^{12}:SL2(64).7, and
|V| = |L|/|Lα| = 4617 is odd. Since Gα is 2-transitive on Γ(α), it follows that
|Γ(α)| = 65 is odd, a contradiction. (We remark that the subgroup 2^{12}:SL2(64) of Lα
is not isomorphic to ASL2(64) and does not have any 2-transitive affine permutation
representation.) The same argument excludes row 23 of Table 1.2.
Assume finally that L = Ω+8(2) as in rows 25–26 of Table 1.2, where Lα = Sp6(2)
or A9. Then Lα is also 2-transitive on Γ(α). If Lα = Sp6(2), then Lαβ = S8 or
SO−6(2), but NL(Lαβ) = Lαβ < Lα, violating the connectivity of Γ. If Lα = A9,
then Lαβ = A8, but NL(Lαβ) = Lαβ < Lα, still violating the connectivity of Γ. This
completes the proof.
Proof of Theorem 1.2: Let Σ be a non-bipartite connected (X, s)-transitive
graph and let R be a solvable vertex-transitive subgroup of X, where s ≥ 2 and the
valency of Σ is at least three. By virtue of Lemma 8.4 we may assume that X = G is
vertex-quasiprimitive and Σ = Γ. By Proposition 8.24, G is affine or almost simple.
If G is affine, then Γ is not (G, 3)-arc-transitive by [35, Proposition 2.3], and so
s = 2 as in part (b) of Theorem 1.2.
Now assume that G is almost simple. Then (G, Γ) satisfies Hypothesis 8.25.
If PSL2(q) ≤ G ≤ PΓL2(q) for some prime power q ≥ 4, then Γ is classified
in [23], from which we conclude that s = 3 if and only if Γ is the Petersen graph.
Thus assume that soc(G) ≠ PSL2(q). Then Lemmas 8.27–8.30 show that (G, Γ)
satisfies part (a), (d) or (e) of Theorem 1.2. Note that the complete graph is not
3-arc-transitive, the Hoffman-Singleton graph is 3-transitive by Example 8.11, and
the Higman-Sims graph is not 3-arc-transitive by Example 8.12. This completes the
proof of Theorem 1.2.
APPENDIX A
Tables for nontrivial maximal factorizations of almost
simple classical groups
Table A.1. L = PSLn(q), n ≥ 2

row | A ∩ L           | B ∩ L            | remark
1   | ˆGLa(q^b).b     | P1, Pn−1         | ab = n, b prime
2   | PSpn(q).a       | P1, Pn−1         | n ≥ 4 even, a = (2, q−1)(n/2, q−1)/(n, q−1)
3   | PSpn(q).a       | Stab(V1 ⊕ Vn−1)  | n ≥ 4 even, G ≰ PΓLn(q), a = (2, q−1)(n/2, q−1)/(n, q−1)
4   | ˆGLn/2(q^2).2   | Stab(V1 ⊕ Vn−1)  | n ≥ 4 even, q ∈ {2, 4}, G ≰ PΓLn(q)
5   | P1              | A5               | n = 2, q ∈ {11, 19, 29, 59}
6   | P1              | S4               | n = 2, q ∈ {7, 23}
7   | P1              | A4               | G = PGL2(11)
8   | D34             | PSL2(4)          | G = PΓL2(16)
9   | PSL2(7)         | A6               | n = 3, q = 4, G = L.2
10  | 31.5            | P2, P3           | n = 5, q = 2
Table A.2. L = PSp2m(q), m ≥ 2

row | A ∩ L         | B ∩ L                      | remark
1   | PSp2a(q^b).b  | P1                         | ab = m, b prime
2   | Sp2a(q^b).b   | O+2m(q), O−2m(q)           | q even, ab = m, b prime
3   | O−2m(q)       | Pm                         | q even
4   | O−2m(q)       | Spm(q) ≀ S2                | m even, q even
5   | Spm(4).2      | N2                         | m ≥ 4 even, q = 2
6   | O−2m(2)       | O+2m(2)                    | q = 2
7   | O−2m(4)       | O+2m(4)                    | q = 4, G = L.2, two classes of factorizations if m = 2
8   | Spm(16).2     | N2                         | m ≥ 4 even, q = 4, G = L.2
9   | O−2m(4)       | Sp2m(2)                    | q = 4, G = L.2
10  | O−2m(16)      | Sp2m(4)                    | q = 16, G = L.4
11  | Sz(q)         | O+4(q)                     | m = 2, q = 2^f, f ≥ 3 odd, two classes of factorizations
12  | G2(q)         | O+6(q), O−6(q), P1, N2     | m = 3, q even
13  | 2^4.A5        | P1, P2                     | m = 2, q = 3
14  | PSL2(13)      | P1                         | m = 3, q = 3
15  | O−8(2)        | S10                        | m = 4, q = 2
16  | PSL2(17)      | O+8(2)                     | m = 4, q = 2
Table A.3. L = PSUn(q), n ≥ 3

row | A ∩ L     | B ∩ L              | remark
1   | N1        | Pm                 | n = 2m
2   | N1        | PSp2m(q).a         | n = 2m, a = (2, q−1)(m, q+1)/(n, q+1)
3   | N1        | ˆSLm(4).2          | n = 2m ≥ 6, q = 2
4   | N1        | ˆSLm(16).3.2       | n = 2m, q = 4, G ≥ L.4
5   | PSL2(7)   | P1                 | n = 3, q = 3
6   | A7        | P1                 | n = 3, q = 5
7   | 19.3      | P1                 | n = 3, q = 8, G ≥ L.3^2
8   | 3^3.S4    | P2                 | n = 4, q = 2
9   | PSL3(4)   | P1, PSp4(3), P2    | n = 4, q = 3
10  | N1        | PSU4(3).2, M22     | n = 6, q = 2
11  | J3        | P1                 | n = 9, q = 2
12  | Suz       | N1                 | n = 12, q = 2
Table A.4. L = Ω2m+1(q), m ≥ 3, q odd

row | A ∩ L       | B ∩ L                   | remark
1   | N−1         | Pm                      |
2   | G2(q)       | P1, N+1, N−1, N−2       | m = 3
3   | G2(q)       | N+2                     | m = 3, q > 3
4   | PSp6(q).a   | N−1                     | m = 6, q = 3^f, a ≤ 2
5   | F4(q)       | N−1                     | m = 12, q = 3^f
6   | G2(3)       | S9, Sp6(2)              | m = 3, q = 3
7   | N+1         | S9, Sp6(2)              | m = 3, q = 3
8   | P3          | S9, Sp6(2), 2^6.A7      | m = 3, q = 3
Table A.5. L = PΩ−2m(q), m ≥ 4

row | A ∩ L | B ∩ L         | remark
1   | P1    | ˆGUm(q)       | m odd
2   | N1    | ˆGUm(q)       | m odd
3   | P1    | Ω−m(q^2).2    | m even, q ∈ {2, 4}, G = Aut(L)
4   | N+2   | GUm(4)        | m odd, q = 4, G = Aut(L)
5   | A12   | P1            | m = 5, q = 2
Table A.6. L = PΩ+2m(q), m ≥ 5

row | A ∩ L     | B ∩ L                        | remark
1   | N1        | Pm, Pm−1                     |
2   | N1        | ˆGUm(q).2                    | m even
3   | N1        | (PSp2(q) ⊗ PSpm(q)).a        | m even, q > 2, a = gcd(2, m/2, q − 1)
4   | N−2       | Pm, Pm−1                     |
5   | P1        | ˆGUm(q).2                    | m even
6   | N1        | ˆGLm(q).2                    | G ≥ L.2 if m odd
7   | N1        | Ω+m(4).2^2                   | m even, q = 2
8   | N1        | Ω+m(16).2^2                  | m even, q = 4, G ≥ L.2
9   | N−2       | ˆGLm(2).2                    | q = 2, G ≥ L.2 if m odd
10  | N−2       | ˆGLm(4).2                    | q = 4, G ≥ L.2, G ≠ O+2m(4)
11  | N+2       | ˆGUm(4).2                    | m even, q = 4, G = L.2
12  | Ω9(q).a   | N1                           | m = 8, a ≤ 2
13  | Co1       | N1                           | m = 12, q = 2
Table A.7. L = PΩ+8(q)

row | A ∩ L            | B ∩ L                                 | remark
1   | Ω7(q)            | Ω7(q)                                 | A = B^τ for some triality τ
2   | Ω7(q)            | P1, P3, P4                            |
3   | Ω7(q)            | ˆ((q + 1)/d × Ω−6(q)).2^d             | d = (2, q − 1), B in C1 or C3
4   | Ω7(q)            | ˆ((q − 1)/d × Ω+6(q)).2^d             | q > 2, G = L.2 if q = 3, d = (2, q − 1), B in C1 or C2
5   | Ω7(q)            | (PSp2(q) ⊗ PSp4(q)).2                 | q odd, B in C1 or C4
6   | Ω7(q)            | Ω−8(q^{1/2})                          | q square, B in C5 or S
7   | P1, P3, P4       | ˆ((q + 1)/d × Ω−6(q)).2^d             | d = (2, q − 1), B in C1 or C3
8   | A9               | Ω7(2), P1, P3, P4, (3 × Ω−6(2)).2     | q = 2
9   | Ω7(2)            | (PSL2(4) × PSL2(4)).2^2               | q = 2, B in C2 or C3
10  | Ω+8(2)           | Ω7(3)                                 | q = 3
11  | Ω+8(2)           | P1, P3, P4                            | q = 3
12  | Ω+8(2)           | P13, P14, P34                         | q = 3, G ≥ L.2
13  | 2^6.A8           | P1, P3, P4                            | q = 3, G ≥ L.2
14  | Ω7(4)            | (PSL2(16) × PSL2(16)).2^2             | q = 4, G ≥ L.2
15  | (5 × Ω−6(4)).2   | (3 × Ω+6(4)).2                        | q = 4, G ≥ L.2
APPENDIX B
Magma codes
/*
Input : a finite group G
Output: sequence of pairs [H,K] such that G=HK with H and K both core-free
*/
_Factorizations:=function(G)
    list:=[];
    S:=[i : i in Subgroups(G) | #Core(G,i`subgroup) eq 1];
    T:=MaximalSubgroups(G);
    while #T gt 0 do
        K:=T[1]`subgroup;
        core:=Core(G,K);
        if #core eq 1 then
            for i in S do
                H:=i`subgroup;
                if #H*#K eq #G*#(H meet K) then
                    Append(~list,[H,K]);
                    T:=T cat MaximalSubgroups(K);
                end if;
            end for;
        else
            T:=T cat MaximalSubgroups(K);
        end if;
        Remove(~T,1);
    end while;
    return list;
end function;
/*
Input : a finite group G
Output: sequence of pairs [H,K] such that G=HK with H solvable and K core-free
*/
_SolvableFactorizations:=function(G)
    list:=[];
    S:=SolvableSubgroups(G);
    T:=MaximalSubgroups(G);
    while #T gt 0 do
        K:=T[1]`subgroup;
        core:=Core(G,K);
        if #core eq 1 then
            for i in S do
                H:=i`subgroup;
                if #H*#K eq #G*#(H meet K) then
                    Append(~list,[H,K]);
                    T:=T cat MaximalSubgroups(K);
                end if;
            end for;
        else
            T:=T cat MaximalSubgroups(K);
        end if;
        Remove(~T,1);
    end while;
    return list;
end function;
/*
Input : a finite group G
Output: the maximal solvable subgroups as well as some other solvable subgroups
of G
*/
_MaximalSolvableCandidates:=function(G)
    list:=[];
    temp:=MaximalSubgroups(G);
    while #temp gt 0 do
        if IsSolvable(temp[1]`subgroup) eq true then
            Append(~list,temp[1]`subgroup);
        else
            temp:=temp cat MaximalSubgroups(temp[1]`subgroup);
        end if;
        Remove(~temp,1);
    end while;
    return list;
end function;
/*
Input : a finite group G
Output: sequence of pairs [H,K] such that G=HK with H maximal solvable and K
core-free
*/
_MaximalSolvableFactorizations1:=function(G)
    list:=[];
    S:=_MaximalSolvableCandidates(G);
    T:=MaximalSubgroups(G);
    while #T gt 0 do
        K:=T[1]`subgroup;
        core:=Core(G,K);
        if #core eq 1 then
            for H in S do
                if #H*#K eq #G*#(H meet K) then
                    Append(~list,[H,K]);
                    T:=T cat MaximalSubgroups(K);
                end if;
            end for;
        else
            T:=T cat MaximalSubgroups(K);
        end if;
        Remove(~T,1);
    end while;
    return list;
end function;
/*
Input : a finite group G and a subgroup A
Output: sequence of pairs [H,K] such that G=HK with H maximal solvable in A
and K core-free
*/
_MaximalSolvableFactorizations2:=function(G,A)
    list:=[];
    S:=_MaximalSolvableCandidates(A);
    T:=MaximalSubgroups(G);
    while #T gt 0 do
        K:=T[1]`subgroup;
        core:=Core(G,K);
        if #core eq 1 then
            for H in S do
                if #H*#K eq #G*#(H meet K) then
                    Append(~list,[H,K]);
                    T:=T cat MaximalSubgroups(K);
                end if;
            end for;
        else
            T:=T cat MaximalSubgroups(K);
        end if;
        Remove(~T,1);
    end while;
    return list;
end function;
/*
Input : a finite group G
Output: the core-free maximal subgroups of G
*/
_CoreFreeMaximalSubgroups:=function(G)
    L:=MaximalSubgroups(G);
    return [i : i in L | #Core(G,i`subgroup) eq 1];
end function;
/*
Input : a finite group G
Output: sequence of pairs [H,K] such that G=HK with H maximal solvable and K
core-free maximal
*/
_MaximalFactorizations1:=function(G)
    list:=[];
    S:=_MaximalSolvableCandidates(G);
    T:=_CoreFreeMaximalSubgroups(G);
    for H in S do
        for i in T do
            K:=i`subgroup;
            if #H*#K eq #G*#(H meet K) then
                Append(~list,[H,K]);
            end if;
        end for;
    end for;
    return list;
end function;
/*
Input : a finite group G and a subgroup A
Output: sequence of pairs [H,K] such that G=HK with H maximal solvable in A
and K core-free maximal
*/
_MaximalFactorizations2:=function(G,A)
    list:=[];
    S:=_MaximalSolvableCandidates(A);
    T:=_CoreFreeMaximalSubgroups(G);
    for H in S do
        for i in T do
            K:=i`subgroup;
            if #H*#K eq #G*#(H meet K) then
                Append(~list,[H,K]);
            end if;
        end for;
    end for;
    return list;
end function;
/*
Input : a finite group G and a subgroup K
Output: sequence of conjugacy classes of subgroups M of K such that K=Gα and
M=Gαβ for an edge {α, β} of some (G,2)-arc-transitive graph
*/
_TwoArcTransitive:=function(G,K)
    list:=[];
    I:=[i : i in MaximalSubgroups(K) |
        Transitivity(CosetImage(K,i`subgroup)) gt 1
        and sub<G|K,Normalizer(G,i`subgroup)> eq G];
    for i in I do
        M:=i`subgroup;
        N:=Normaliser(G,M);
        P:=Sylow(N,2);
        R,_:=RightTransversal(N,Normaliser(N,P));
        s:=0;
        for x in R do
            if exists(g){g : g in P^x | K^g meet K eq M
                         and sub<G|K,g> eq G and g^2 in K} then
                s:=s+1;
            end if;
        end for;
        if s gt 0 then
            Append(~list,M);
        end if;
    end for;
    return list;
end function;
Bibliography
[1] B. Alspach, M. Conder, D. Marušič and M. Y. Xu, A classification of 2-arc-transitive circulants, J. Algebraic Combin. 5 (1996), no. 2, 83–86.
[2] M. Aschbacher, On the maximal subgroups of the finite classical groups, Invent. Math., 76
(1984), no. 3, 469–514.
[3] N. Blackburn and B. Huppert, Finite groups II, Springer-Verlag, Berlin-New York, 1982.
[4] N. Blackburn and B. Huppert, Finite groups III, Springer-Verlag, Berlin-New York, 1982.
[5] N. L. Biggs and M. J. Hoare, The sextet construction for cubic graphs, Combinatorica, 3
(1983), no. 2, 153–165.
[6] W. Bosma, J. Cannon and C. Playoust, The Magma algebra system I: The user language, J.
Symbolic Comput., 24 (1997), no. 3-4, 235–265.
[7] J. N. Bray, D. F. Holt and C. M. Roney-Dougal, The maximal subgroups of the low-dimensional
finite classical groups, Cambridge University Press, Cambridge, 2013.
[8] P. J. Cameron, Permutation groups. London Mathematical Society Student Texts, 45. Cambridge University Press, Cambridge, 1999.
[9] M. Conder, On symmetries of Cayley graphs and the graphs underlying regular maps, J.
Algebra, 321 (2009), no. 11, 3112–3127.
[10] J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker and R. A. Wilson, Atlas of finite groups:
maximal subgroups and ordinary characters for simple groups, Clarendon Press, Oxford, 1985.
[11] B. N. Cooperstein, Minimal degree for a permutation representation of a classical group, Israel
J. Math., 30 (1978), no. 3, 213–235.
[12] B. N. Cooperstein, Maximal subgroups of G2 (2n ), J. Algebra, 70 (1981), no. 1, 23–36.
[13] J. D. Dixon, The Fitting subgroup of a linear solvable group, J. Austral. Math. Soc., 7 (1967),
417–424.
[14] J. D. Dixon and B. Mortimer, Permutation groups. Graduate Texts in Mathematics, 163.
Springer-Verlag, New York, 1996.
[15] S. F. Du, A. Malnič and D. Marušič, Classification of 2-arc-transitive dihedrants, J. Combin.
Theory Ser. B, 98 (2008), no. 6, 1349–1372.
[16] E. Fisman and Z. Arad, A proof of Szép’s conjecture on non-simplicity of certain finite groups,
J. Algebra, 108 (1987), no. 2, 340–354.
[17] D. A. Foulser, Solvable flag transitive affine groups. Math. Z. 86 (1964), 191–204.
[18] D. A. Foulser, Solvable primitive permutation groups of low rank. Trans. Amer. Math. Soc.
143 (1969), 1–54.
[19] T. R. Gentchev, Factorizations of the sporadic simple groups, Arch. Math. (Basel), 47 (1986),
no. 2, 97–102.
[20] T. R. Gentchev, Factorizations of the groups of Lie type of Lie rank 1 or 2, Arch. Math.
(Basel), 47 (1986), no. 6, 493–499.
[21] M. Giudici, Factorisations of sporadic simple groups, J. Algebra, 304 (2006), no. 1, 311–323.
[22] D. Gorenstein, Finite simple groups: An introduction to their classification, Plenum Press,
New York, 1982.
[23] A. Hassani, L. R. Nochefranca, and C. E. Praeger, Two-arc transitive graphs admitting a
two-dimensional projective linear group, J. Group Theory, 2 (1999), no. 4, 335–353.
[24] C. Hering, Transitive linear groups and linear groups which contain irreducible subgroups of
prime order, II, J. Algebra, 93 (1985), no. 1, 151–164.
[25] C. Hering, M. W. Liebeck, and J. Saxl, The factorizations of the finite exceptional groups of
Lie type, J. Algebra, 106 (1987), no. 2, 517–527.
[26] A. J. Hoffman and R. R. Singleton, On Moore graphs with diameters 2 and 3, IBM J. Res.
Develop., 4 (1960), 497–504.
[27] B. Huppert, Zweifach transitive, auflösbare Permutationsgruppen, Math. Z., 68 (1957), no. 1,
126–150.
[28] B. Huppert, Endliche Gruppen I, Springer-Verlag, Berlin-New York, 1967.
[29] N. Itô, On the factorizations of the linear fractional group LF (2, pn ), Acta Sci. Math. Szeged,
15 (1953), 79–84.
[30] A. A. Ivanov and C. E. Praeger, On finite affine 2-arc transitive graphs. Algebraic combinatorics (Vladimir, 1991). European J. Combin. 14 (1993), no. 5, 421–444.
[31] W. M. Kantor, k-homogeneous groups, Math. Z., 124 (1972), 261–265.
[32] L. Kazarin, Groups that can be represented as a product of two solvable subgroups, Comm.
Algebra, 14 (1986), no. 6, 1001–1066.
[33] P. B. Kleidman, The maximal subgroups of the finite 8-dimensional orthogonal groups PΩ+8(q)
and of their automorphism groups, J. Algebra, 110 (1987), no. 1, 173–242.
[34] P. B. Kleidman and M. W. Liebeck, The subgroup structure of the finite classical groups,
Cambridge University Press, Cambridge, 1990.
[35] C. H. Li, Finite s-arc transitive Cayley graphs and flag-transitive projective planes, Proc.
Amer. Math. Soc., 133 (2005), no. 1, 31–41.
[36] C. H. Li, Z. P. Lu and H. Zhang, Tetravalent edge-transitive Cayley graphs with odd number
of vertices, J. Combin. Theory Ser. B, 96 (2006), no. 1, 164–181.
[37] C. H. Li and J. Pan, Finite 2-arc-transitive abelian Cayley graphs, European J. Combin, 29
(2008), no. 1, 148–158.
[38] C. H. Li, Á. Seress and S. J. Song, s-Arc-transitive graphs and normal subgroups, J. Algebra,
421 (2015), 331–348.
[39] C. H. Li and B. Xia, Factorizations of almost simple groups with a factor having at least two
unsolvable composition factors, in preparation.
[40] J. J. Li and Z. P. Lu, Cubic s-arc transitive Cayley graphs, Discrete Math., 309 (2009), no.
20, 6014–6025.
[41] M. W. Liebeck, The affine permutation groups of rank three, Proc. London Math. Soc. (3),
54 (1987), no. 3, 477–516.
[42] M. W. Liebeck, C. E. Praeger and J. Saxl, The maximal factorizations of the finite simple
groups and their automorphism groups, Mem. Amer. Math. Soc., 86 (1990), no. 432.
[43] M. W. Liebeck, C. E. Praeger and J. Saxl, Transitive subgroups of primitive permutation
groups, J. Algebra, 234 (2000), no. 2, 291–361.
[44] M. W. Liebeck, C. E. Praeger and J. Saxl, On factorizations of almost simple groups, J.
Algebra, 185 (1996), no. 2, 409–419.
[45] D. Marušič, On 2-arc-transitivity of Cayley graphs, J. Combin. Theory Ser. B, 87 (2003), no.
1, 162–196.
[46] D. Marušič, Corrigendum to “On 2-arc-transitivity of Cayley graphs”, J. Combin. Theory
Ser. B, 96 (2006), no. 5, 761–764.
[47] C. E. Praeger, Imprimitive symmetric graphs, Ars Combin. 19 (1985), A, 149–163.
[48] C. E. Praeger, An O’Nan-Scott theorem for finite quasiprimitive permutation groups and an
application to 2-arc transitive graphs, J. London Math. Soc., 47 (1993), no. 2, 227–239.
[49] C. E. Praeger, Finite quasiprimitive graphs, Surveys in combinatorics, 1997 (London), 65–85,
London Math. Soc. Lecture Note Ser., 241, Cambridge Univ. Press, Cambridge, 1997.
[50] J. Szép, Sui gruppi fattorizzabili non semplici, Rend. Mat. e Appl., 22 (1963), no. 5, 245–252.
[51] J. Szép, Sui gruppi fattorizzabili, Rend. Sem. Mat. Fis. Milano, 38 (1968), 228–230.
[52] K. B. Tachakerian and T. R. Gentchev, Factorizations of the groups G2 (q), Arch. Math.
(Basel), 44 (1985), no. 3, 230–232.
[53] R. Weiss, s-transitive graphs, Algebraic methods in graph theory, Colloq. Math. Soc. János
Bolyai 25 (1981), 827–847.
[54] J. Wiegold and A. G. Williamson, The factorisation of the alternating and symmetric group,
Math. Z., 175 (1980), no. 2, 171–179.
[55] R. Wilson, The finite simple groups, Graduate Texts in Mathematics, 251. Springer, 2009.
[56] S. J. Xu, X. G. Fang, J. Wang and M. Y. Xu, On cubic s-arc transitive Cayley graphs of finite
simple groups, European J. Combin. 26 (2005), no. 1, 133–143.
[57] S. J. Xu, X. G. Fang, J. Wang and M. Y. Xu, 5-Arc transitive cubic Cayley graphs on finite
simple groups, European J. Combin. 28 (2007), no. 3, 1023–1036.
Separating Sets of Strings by Finding Matching
Patterns is Almost Always Hard
Giuseppe Lancia
Dipartimento di Matematica e Informatica, University of Udine, Via delle Scienze 206,
33100 Udine, Italy
arXiv:1604.03243v3 [cs.CC] 19 Dec 2016
Luke Mathieson
School of Electrical Engineering and Computer Science, University of Newcastle,
Callaghan, NSW 2308, Australia
Pablo Moscato
School of Electrical Engineering and Computer Science, University of Newcastle,
Callaghan, NSW 2308, Australia
Abstract
We study the complexity of the problem of searching for a set of patterns
that separate two given sets of strings. This problem has applications in a
wide variety of areas, most notably in data mining, computational biology,
and in understanding the complexity of genetic algorithms. We show that the
basic problem of finding a small set of patterns that match one set of strings
but do not match any string in a second set is difficult (NP-complete, W[2]-hard when parameterized by the size of the pattern set, and APX-hard). We
then perform a detailed parameterized analysis of the problem, separating
tractable and intractable variants. In particular we show that parameterizing
by the size of the pattern set together with the number of strings, or by the size
of the alphabet together with the number of strings, gives FPT results, amongst others.
Keywords: pattern identification, parameterized complexity,
computational complexity
1. Introduction
Finding patterns in a collection of data is one of the fundamental problems
in data mining, data science, artificial intelligence, bioinformatics and many
other areas of both theoretical and applied computer science. Accordingly
there are a large number of formulations of this problem. In this paper we
develop a particular formulation, drawn from two central motivations:
1. multiparent recombination in genetic and evolutionary algorithms, and
2. the construction of explanatory patterns in single-nucleotide polymorphisms related to disease.
It should not be construed however that these motivations are limitations on
the applicability of the problem we develop and study. As will be seen, the
underlying computational problem is a general one that occurs as a fundamental component of many other computational problems.
1.1. The Central Problem
Before expanding upon the motivations, we briefly introduce the core computational problem to provide a semi-formal context and some unifying vocabulary. For full definitions we refer the reader to Section 2. Central to the
problem is the notion of pattern, a string over an alphabet Σ which has been
augmented with a special symbol ∗. A pattern matches a string over Σ if the
pattern and the string are the same length and each character of the pattern
is the same as the character of the string at that position, or the pattern has
an ∗ at that position, i.e. ∗ ‘matches’ any symbol from the alphabet. The
fundamental problem is then, given two sets, G and B, of strings over Σ,
can we find a set of patterns of size at most k such that every string in G
matches one of our patterns, and none of the strings in B match any of our
patterns.
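
To make the matching convention concrete, the following sketch (illustrative Python, not taken from any reference implementation; all names are ours) checks whether a candidate set of patterns separates G from B in the sense just described.

    WILDCARD = "*"

    def matches(pattern: str, s: str) -> bool:
        # A pattern matches an equal-length string if every position agrees
        # exactly or the pattern carries the wildcard symbol.
        return len(pattern) == len(s) and all(
            p == WILDCARD or p == c for p, c in zip(pattern, s)
        )

    def separates(patterns, good, bad) -> bool:
        # Every string in `good` must match some pattern; no string in `bad` may.
        return all(any(matches(p, g) for p in patterns) for g in good) \
            and not any(matches(p, b) for p in patterns for b in bad)

    # Tiny example: the single pattern "ab*" suffices here.
    print(separates(["ab*"], ["abc", "abd"], ["xbc"]))  # True

A brute-force decision procedure would simply run this check over all pattern sets of size at most k, which already suggests why parameterizations that bound the search space are of interest.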
1.2. Separating Healthy Patterns from Diseased
A significant portion of bioinformatics and computational medicine efforts
are focused on developing diagnostic tools. The identification of explanatory
genes, uncovering of biomarkers, metabolic network analysis and protein interaction analysis all have as a key (but not sole) motivation the identification
of differential markers of disease and consequently routes to treatment. Consider the following problem as a motivating archetypal example: we have two
sets of individuals, healthy and diseased, and for each example we are given a
string that encodes the single-nucleotide polymorphism (SNP) states across
the two copies of each genome, giving us two sets of strings G and B.¹ A
SNP has several alleles, of which an individual has two. The individual may
thus be homozygous in any of the alleles, or heterozygous with any choice of
pairs of alleles, giving the underlying alphabet Σ.
It is easy to see that if we can identify patterns of SNPs that separate the
healthy from the diseased individuals, we have a source of genetic information
that may assist in explaining and treating the disease.

¹ Whether healthy is G and diseased is B, or vice versa, depends on what information
we wish the set of patterns to extract.
This problem is even more apparent in its computational form when considering a biologically motivated form of computation, i.e., evolutionary algorithms.
1.3. Patterns in Multiparent Recombination
The central mechanism for optimization in Genetic Algorithms (GAs) is the
recombination of parent solutions to produce a new child solution which
ideally retains the positive aspects of the parents. The mechanism derives
from an analogy with sexual reproduction in biological evolution and hence
typically combines two existing solutions to produce the offspring. In the
optimization setting however, there’s no conceptual reason for this restriction.
Given that recombination can be viewed as a local move in the search space
from one individual solution to another as mediated by a third individual
solution, a natural generalization of this is to employ multiple parents in the
hope of further refining the components of the solution that promote quality,
while producing new solutions that effectively cover the search space.
The central theoretical formalization for describing this process is that of
schemata². An individual solution in a (simple) GA is described by an array,
which we can represent as a string, of length n over a given alphabet Σ.
A schema is a string of length n over the same alphabet augmented with
the special “wild card” character ∗, i.e., a pattern. A schema can then be
thought of as representing a portion of the search space. The preservation of
desirable shared characteristics of two or more parent individuals can then be
viewed as the problem of defining a suitable schema. We can define a set G
using the individuals selected as parents for a recombination operation, and,
if desired, a set B from any individuals whose characteristics we may wish to
avoid. The child individual(s) can then be generated from this schema with
the wild cards replaced in whichever manner is chosen. Thus we can use
schemata to model the basic operation of genetic recombination operators.
This idea not only models multiparent recombination but also multi-child
recombination. When considering simply a set of parents from which we wish
to generate a set of children, constructing schemata that are compatible with
the parents is straightforward. A single schema consisting of n ∗ symbols is
a trivial solution, and the natural construction in which the schema keeps the
common symbol at each position where all parents agree, and has ∗ at every
other position, also provides a simple solution (sketched in code below). However,
in these cases it is reasonably easy to see that the generated schemata can
be under-specified, leading to a loss of useful information and rendering
the recombination operation ineffective. One solution to this problem is to
ask for a small set of schemata that are compatible with the parents, but are
incompatible with a set of forbidden strings – akin to the list of forbidden
elements in Tabu search. In this paper, we elaborate upon and examine this
idea.

² We use the third declension neuter form of schema, as it better matches the Greek
roots of the word.
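
As a concrete illustration of the natural construction just mentioned, the sketch below (illustrative Python; the function name is ours) builds the consensus schema of a set of parents and shows how it can be under-specified with respect to strings we may wish to avoid.

    WILDCARD = "*"

    def consensus_schema(parents):
        # Keep the common symbol where all parents agree; wildcard elsewhere.
        # Assumes all parents have the same length.
        return "".join(
            column[0] if len(set(column)) == 1 else WILDCARD
            for column in zip(*parents)
        )

    parents = ["0110", "0100", "0111"]
    print(consensus_schema(parents))  # "01**"
    # "01**" matches every parent, but it also matches strings such as "0101"
    # that occur in no parent and may be explicitly forbidden.

This is precisely the loss of information alluded to above: a single consensus schema cannot exclude forbidden strings, which motivates asking for a small set of patterns together with an explicit forbidden set.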
Some further complexity issues surrounding multiparent recombination have
been examined in [10].
1.4. Our Contribution
In this paper we formalize the problem of finding a small set of patterns that
match a set of strings, without matching a set of forbidden strings, as discussed in the introduction and examine its complexity. We call the general
form of the problem Pattern Identification and introduce some useful
variants. In most cases these problems turn out to be hard. We naturally
then consider the problem from a Parameterized Complexity perspective.
The problem has a rich parameter ecology and also provides an interesting
example of a non-graph theoretic problem. Unfortunately for many parameterizations the problem turns out to be hard in this setting as well. The
natural parameterization by the number of desired schemata is W[2]-hard.
Even if we take the length of the strings as the parameter, the problem is
para-NP-complete. Table 1 gives a summary of the parameterized results,
and some key open problems. It is also inapproximable and for some cases
we obtain parameterized inapproximability results as well. The only case
for which we are able to obtain fixed-parameter tractability relies on a small
number of input strings which have a limited number of symbols which are
different from a given “base” symbol.
1.5. Related Work
The identification of patterns describing a set of strings forms a well studied
family of problems with a wide series of applications. Although, as best
as we can determine, the precise problems we studied here have not yet
been considered, a number of interesting related problems are explored in
the literature. We present here a selection of some of the more relevant
and interesting results, however these can at best form a basis for further
exploration by the interested reader.
One of the most immediately similar variants is that where pattern variables are
allowed. In contrast to the work here, these variables can act as substrings
of arbitrary length. Kearns and Pitt [23] give a family of polynomial time
algorithms for learning the language generated by a single such pattern with
a given number k of pattern variables. Angluin [1] studies the inverse problem
of generating a pattern, with a polynomial time algorithm for the case where
the pattern contains a single pattern variable being the central result. We
note that a central difference here is the repeated use of variables, allowing
the same undefined substring to be repeated. The properties of these pattern
languages have since been studied in some detail, far beyond the scope of this
paper.
Bredereck, Nichterlein and Niedermeier [4] employ a similar, but not identical, formalism to that employed here, but study the problem of taking a set
of strings and a set of patterns and determining whether the set of strings
can be altered to match the set of patterns. In their formalism patterns are
strings over the set {, ?}. We note in particular though that their definition of matching differs from our definition of compatibility in that a string
matches a pattern if and only if the string has the special symbol ? exactly
where the pattern does. They show this problem to be NP-hard, but in FPT
when parameterized by the combined parameter of the number of patterns
and the number of strings. They also present an ILP based implementation and computational results. Bredereck et al. [3] examine forming teams,
i.e., mapping the set of strings to the set of patterns in a consistent manner.
They use a similar basis, excepting that the special ? symbol in a pattern now
matches any symbol in a string and that the symbol requires homogeneity
of the matched strings (i.e. the symbol it matches is not specified, but all
matching strings must have the same symbol at that point). They give a series of classification results, with the problem mostly being intractable, but
in FPT for the number of input strings, the number of different input strings
and the combined parameter of alphabet size with the length of the strings.
Gramm, Guo and Niedermeier [18] study another similar problem, Distinguishing Substring Selection, where the input is two sets of strings
(“good” and “bad”), and two integers dg and db with the goal of finding a
single string of length L whose Hamming distance from all length L substrings of every “good” string is at least dg and from at least one length L
substring for each “bad” string is at most db . An extension of the Closest
String [19, 24] and Closest Substring [15] problems, the problem has
a PTAS [12], but they show that it is W[1]-hard when parameterized by any
combination of the parameters dg, db and the number of “good” or “bad”
strings. Under sufficient restriction they demonstrate an FPT result, requiring a binary alphabet, a ‘dual’ parameter d′g = L − dg and that d′g is optimal
in the sense that it is the minimum possible value. We note that, in relation to the problems studied here, although the number of ∗ symbols in the
patterns provides an upper-bound for the Hamming distance, the Hamming
distance for a set of strings may be much lower; consider a set of strings with
one position set to 1 and all others to 0 such that for every possible position
there is a string with a 1 at that point, then the string (or indeed substring)
of all 0 has Hamming distance at most one from each input string, but a
single pattern would need to be entirely ∗ symbols to match the entire set.
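
The observation above is easy to verify directly; the following snippet (illustrative Python, ours) constructs such a family of strings and confirms both claims for n = 5.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    n = 5
    strings = ["0" * i + "1" + "0" * (n - i - 1) for i in range(n)]
    # The all-zero string is within Hamming distance 1 of every input string...
    print(max(hamming("0" * n, s) for s in strings))  # 1
    # ...yet every position carries both symbols somewhere in the set, so the
    # only single pattern matching all of the strings is the all-wildcard one.
    print(all({s[i] for s in strings} == {"0", "1"} for i in range(n)))  # True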
Hermelin and Rozenberg introduce a further variant of the Closest String
problem [21], the Closest String with Wildcards problem. The input
is a set of strings {si }, which may include wildcard characters, and an integer
d. The goal is to find a string with Hamming distance at most d to each si.
The solution is required to have no wildcard characters. They examine a number of parameters: the length n of the input strings, the number m of input
strings, d, the number |Σ| of characters in the alphabet, and the minimum
number k of wildcard characters in any input string. They show that the
problem is in FPT (with varying explicit running times) when parameterized
by m, m+n, |Σ|+k +d and k +d. They also show that the special case where
d = 1 can be solved in polynomial time, whereas the problem is NP-hard for
every d ≥ 2.
Bulteau et al. [5] also give a survey of the parameterized complexity of a variety of more distantly related string problems, with multivariate parameterizations similar to those in other work in this area. They cover, amongst others, Closest String, Closest Substring, Longest Common Subsequence, Shortest Common Supersequence, Shortest Common Superstring, Multiple Sequence Alignment and Minimum Common String Partition.
Introduced by Cannon and Cowen [6], the Class Cover problem is a geometric relative of Pattern Identification where the input is two sets
of points colored red and blue, with the goal of selecting a minimum set of
blue points (centers) that “cover” the full set of blue points, in the sense
that any blue point is closer to its nearest center than any red point. It
is NP-hard with an O(log n + 1)-factor approximation algorithm, bearing a
close similarity to Dominating Set.
2. Preliminaries and Definitions
We now give the relevant definitions for the complexity analysis that follows. In the reductions we use the well-known Dominating Set and
Vertex Cover problems. The graphs taken as input for these problems
are simple, undirected and unweighted. To assist with notation and indexing,
we take the vertex set V (G) of a graph G to be the set {1, . . . , n}. The edge
set E(G) is then a set of pairs drawn from V (G) and we denote the edge
between vertices i and j by ij (= ji). The Set Cover and k-Feature Set
problems are also employed. The problems are defined as follows:
Dominating Set:
Instance: A graph G and an integer k.
Question: Is there a set V′ ⊆ V(G) with |V′| ≤ k such that for every u ∈ V(G) there exists a v ∈ V′ with u ∈ N(v)?

Vertex Cover:
Instance: A graph G and an integer k.
Question: Is there a set V′ ⊆ V(G) with |V′| ≤ k such that for every uv ∈ E(G) we have u ∈ V′ or v ∈ V′?

Set Cover:
Instance: A base set U, a set S ⊆ P(U) and an integer k.
Question: Is there a set S′ ⊆ S with |S′| ≤ k such that ∪S′ = U?

k-Feature Set:
Instance: An n × m 0-1 matrix M, an n × 1 0-1 vector f and an integer k.
Question: Is there a set of indices I ⊆ {1, . . . , m} with |I| ≤ k such that for all a, b where f_a ≠ f_b there exists i ∈ I such that M_{a,i} ≠ M_{b,i}?
We note the following key classification results:
• Dominating Set is NP-complete, O(log n)-APX-hard³ and W[2]-complete when parameterized by k, the size of the dominating set.
• Vertex Cover is NP-complete and APX-hard, and remains NP-complete when the input is a planar graph [17].
³That is, there exists some c > 0 such that Dominating Set has no c · log n-factor approximation algorithm unless P = NP.
• Set Cover is W[2]-complete when parameterized by the size of the
set cover.
• k-Feature Set is W[2]-complete when parameterized by the size of
the feature set [9].
We also employ a parameterized version of the Model Checking problem,
which takes as input a finite structure and a logical formula and asks the
question of whether the structure is a model of the formula, i.e. whether
there is a suitable assignment of elements of the universe of the structure
to variables of the formula such that the formula evaluates to true under
that assignment. The parameter is the length of the logic formula. While
we informally introduce the finite structural elements as needed, we briefly
describe here the fragments of first-order logic we employ. Let Σ0 = Π0 be
the set of unquantified Boolean formulae. The classes Σt and Πt for t > 0
can be defined recursively as follows:
Σt = {∃x1 . . . ∃xk ϕ | ϕ ∈ Πt−1 }
Πt = {∀x1 . . . ∀xk ϕ | ϕ ∈ Σt−1 }
The class Σt,u is the subclass of Σt where each quantifier block after the first
existential block has length at most u. We note that trivially Πt−1 ⊂ Σt .
We note that these classes are specified in prenex normal form, and are, in
general, not robust against Boolean combinations of formulae. In general,
the process of converting a formula to prenex normal form (where all quantifiers are “out the front”) increases the number of quantifier alternations. An
analog of the Σ classes is Σ∗t,u . Let Θ0,u be the set of quantifier free formulae,
and Θt,u for t > 0 be the set of Boolean combinations of formulae where
each leading quantifier block is existential and quantifies over a formula in
Θt−1,u , where the length of each quantifier block is at most u. That is, the
formulae in Θt,u are not required to be in prenex normal form, and Boolean
connectives may precede some quantifiers. We can deal with leading universal quantification by the normal expedient of the introduction of a trivial
existential block. Then Σ∗t,u is the class of formulae of the form ∃x1 . . . ∃xk ϕ
where ϕ ∈ Θt−1,u and where k may be greater than u.
Thus we refer to the Model Checking problem as MC(Φ) where Φ is the
first-order fragment employed. In the parameterized setting, MC(Σt,u ) is
W[t]-complete for every u ≥ 1, and MC(Σ∗t,u ) is W∗ [t]-complete for every
u ≥ 1. The W∗ -hierarchy is the hierarchy analogous to the W-hierarchy
obtained from using MC(Σ∗t,u ) as the complete problem instead of MC(Σt,u ).
While it is known that W[1] = W∗[1] and W[2] = W∗[2], for t ≥ 3 the best known containment relationship is W[t] ⊆ W∗[t] ⊆ W[2t − 2]. For more
detail on these results and the full definitions relating to first-order logic
and structures we refer the reader to [16]. The W∗-hierarchy, introduced by Downey, Fellows and Taylor [14] but more fully explored later [7, 16], is a
parameterized hierarchy which takes into account the Boolean combinations
of quantified first-order formulae, but is otherwise similar to the more usual
W-hierarchy.
In several of our intractability results we make use of the class para-NP,
and a useful corollary due to Flum and Grohe with a detailed explanation
in [16] (presented as Corollary 2.16). The class para-NP is the direct parameterized complexity translation of NP, where we replace “polynomial-time”
with "fixed-parameter tractable time" (or fpt-time for short) in the definition. Flum and Grohe's result states that if, given a parameterized problem
(Π, κ), the classical version of the problem Π is NP-complete for at least
one fixed value of κ, then (Π, κ) is para-NP-complete. As may be expected,
FPT = para-NP if and only if P = NP; thus para-NP-completeness is strong
evidence of intractability. We also make reference to parameterized approximation. A parameterized approximation algorithm is, in essence, a standard
approximation algorithm, but where we relax the running time to fpt-time,
rather than polynomial-time. We refer to [25] for a full introduction to this
area.
The other parameterized complexity theory employed is more standard, thus
for general definitions we refer the reader to standard texts [13, 16].
We write A ≤FPT B to denote that there exists a parameterized reduction
from problem A to problem B, and similarly A ≤P B to denote the existence
of a polynomial-time many-one reduction from problem A to problem B. We
also use strict polynomial-time reductions to obtain some approximation results. A strict reduction is one that, given two problems A and B, guarantees
that the approximation ratio for A is at least as good as that of B. In the
cases we present, we employ them for approximation hardness results, so the
precise ratio is not discussed. For a full definition of strict reductions (and
other approximation preserving reductions) we refer to [11].
Definition 2.1 (Pattern). A pattern is a string over an alphabet Σ and a
special symbol ∗.
Given a string s ∈ Σ∗ and an integer i, we denote the ith symbol of s by s[i].
Definition 2.2 (Compatible). A pattern p is compatible with a string g, denoted p → g, if for all i such that p[i] ≠ ∗ we have g[i] = p[i]. If a pattern and string are not compatible, we write p ↛ g. We extend this notation to sets of strings, writing p → G to denote ∀g ∈ G, p → g and P → G for ∀g ∈ G ∃p ∈ P, p → g.
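In code, compatibility is a single pass over the positions. The following Java sketch (class and method names are ours, purely illustrative) checks p → g and P → G:

    public final class Patterns {
        public static final char WILDCARD = '*';

        // p -> g: every non-wildcard position of p agrees with g
        // (assumes |p| == |g|).
        public static boolean compatible(String p, String g) {
            for (int i = 0; i < p.length(); i++) {
                char c = p.charAt(i);
                if (c != WILDCARD && c != g.charAt(i)) return false;
            }
            return true;
        }

        // P -> G: every string in G matches at least one pattern in P.
        public static boolean covers(Iterable<String> P, Iterable<String> G) {
            for (String g : G) {
                boolean matched = false;
                for (String p : P) {
                    if (compatible(p, g)) { matched = true; break; }
                }
                if (!matched) return false;
            }
            return true;
        }
    }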
Definition 2.3 (G-B-Separated Sets). A set P of patterns G-B-separates
an ordered pair (G, B) of sets of strings, written P → (G, B) if
• P → G, and
• for every b ∈ B and p ∈ P we have p ↛ b.
Thus we can state the central problem for this paper:
Pattern Identification:
Instance: A finite alphabet Σ, two disjoint sets G, B ⊆ Σ^n of strings and an integer k.
Question: Is there a set P of patterns such that |P| ≤ k and P → (G, B)?
The complexity analysis of the problem in the parameterized setting leads
to the definition of a second, subsidiary problem which allows a convenient
examination of sets of strings which are very similar.
Definition 2.4 (Small String). A string s over an alphabet Σ is d-small if, given an identified symbol σ ∈ Σ, for exactly d values of i, s[i] ≠ σ. We call σ the base symbol. A set of strings S is d-small if, given a fixed base symbol, all strings in S are d-small.
This restriction on the structure of the input gives further insight into the
complexity of Pattern Identification and is key to some of the tractability results in Section 4. For convenience we phrase a restricted version of the
Pattern Identification problem:
PI with Small Strings:
Instance: An alphabet Σ, two disjoint d-small sets G, B ⊆ Σ^n, an integer k.
Question: Is there a set P of patterns with |P| ≤ k such that P → (G, B)?
From the perspective of multiparent recombination, minimizing the number
of wildcard symbols in each pattern is also an interesting objective:
PI with Large Patterns:
Instance: An alphabet Σ, two disjoint sets G, B ⊆ Σ^n, integers k and r.
Question: Is there a set P of patterns with |P| ≤ k such that P → (G, B) and for each p ∈ P the number of ∗ symbols in p is at most r?
We implicitly define the obvious intersection of the two restricted problems,
PI with Large Patterns and Small Strings.
From a combinatorial perspective, the inverse problem is also interesting:
PI with Small Patterns:
Instance: An alphabet Σ, two disjoint sets G, B ⊆ Σ^n, integers k and s.
Question: Is there a set P of patterns with |P| ≤ k such that P → (G, B) and for each p ∈ P the number of non-∗ symbols in p is at most s?
3. Hard Cases of the Pattern Identification Problem
We first examine the intractable cases of the Pattern Identification
problem. This narrows down the source of the combinatorial complexity
of the problem.
Theorem 3.1. Pattern Identification is W[2]-hard when parameterized
by k, even if |Σ| = 2 and |B| = 1.
Lemma 3.2. Dominating Set ≤FPT Pattern Identification.
[Figure 1 here: the example graph on vertices {1, . . . , 5}, the derived string sets G and B, and the resulting pattern set P = {1∗∗∗∗}.]
Figure 1: An example of the reduction used in Lemma 3.2 with k = 1. The dominating set is highlighted in red, and the corresponding set of patterns (a singleton) is shown.
Proof. Let (G, k) be an instance of Dominating Set. Let n = |V (G)| and
assume V (G) = {1, . . . , n}. We construct an instance (Σ, G, B, k) as follows:
1. Σ = {1, 0},
2. G = {g_1, . . . , g_n} where for each i, g_i ∈ Σ^n with g_i[j] = 1 if ij ∈ E(G) or i = j, and g_i[j] = 0 otherwise,
3. B = {0^n}.
An example of the reduction is given in Figure 1.
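The construction itself is straightforward to implement; a hedged Java sketch (our naming; the graph is given as an adjacency list over vertices 1..n) is:

    import java.util.*;

    public final class DomSetToPI {
        // Build G = {g_1, ..., g_n}: g_i[j] = 1 iff ij is an edge or i = j.
        public static List<String> goodStrings(List<Set<Integer>> adj) {
            int n = adj.size() - 1;                  // vertices are 1..n
            List<String> G = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                StringBuilder gi = new StringBuilder();
                for (int j = 1; j <= n; j++) {
                    gi.append(i == j || adj.get(i).contains(j) ? '1' : '0');
                }
                G.add(gi.toString());
            }
            return G;
        }

        // B consists of the single all-zero string 0^n.
        public static String badString(int n) {
            return "0".repeat(n);
        }
    }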
Claim 3.3. If (G, k) is a Yes instance of Dominating Set then (Σ, G, B, k)
is a Yes instance of Pattern Identification.
Let D ⊆ V (G) with |D| ≤ k be a dominating set witnessing that (G, k) is a
Yes instance. We can construct a witness set of patterns P with |P | = |D|
such that P → (G, B). For each i ∈ D, we create a pattern p_i where p_i[i] = 1 and p_i[j] = ∗ for all j ≠ i.
As D is a dominating set, for every vertex j ∈ V(G) there is a vertex i ∈ D such that ij ∈ E(G) or i = j. Then for string g_j ∈ G, pattern p_i is compatible with g_j as by construction g_j[i] = 1. Therefore for every string g ∈ G there exists p ∈ P such that p → g.
Moreover there is no p ∈ P such that p → b where b ∈ B. As B consists of the single string b = 0^n, and each p ∈ P has exactly one position that is neither 0 nor ∗, there is one position where the pattern does not match b.
Thus (Σ, G, B, k) is a Yes instance.
Claim 3.4. If (Σ, G, B, k) is a Yes instance of Pattern Identification then (G, k) is a Yes instance of Dominating Set.
Let P with |P| ≤ k be the witness set of patterns showing that (Σ, G, B, k) is a Yes instance. We may assume that every p ∈ P is compatible with at least one element of G; if not, P \ {p} constitutes an alternative witness.
First we note that no p ∈ P can consist of only ∗ and 0 symbols, as this
would be compatible with the single element of B. Therefore each p ∈ P has
at least one i such that p[i] = 1.
Consider a p ∈ P, and the corresponding set of vertices V_p (i.e. each position i where p[i] = 1). Let g_j ∈ G be a string such that p → g_j. By construction, for every i ∈ V_p, ij ∈ E(G). Let V_{p→g} be the set of vertices corresponding to the set G_p ⊆ G where p → G_p. Each vertex j ∈ V_{p→g} is adjacent (or identical) to every vertex in V_p. Thus we may arbitrarily select a single vertex from V_p to be in the dominating set D.
Thus we have D with |D| ≤ |P | ≤ k, where every vertex in V (G) is adjacent
(or identical) to some vertex in D. Therefore (G, k) is a Yes instance.
The construction can clearly be performed in fpt time (in fact, polynomial time), and the lemma follows.
Proof of Theorem 3.1. The theorem follows immediately from Lemma 3.2.
The structure of the reduction then gives the following:
Corollary 3.5. PI with Small Patterns is W[2]-complete when parameterized by k and NP-complete even when s = 1, |Σ| = 2 and |B| = 1.
Proof. The W[2]-hardness is apparent from the proof of Lemma 3.2 (in fact
the restriction would make the proof simpler). To show containment in W[2],
we reduce PI with Small Patterns to MC(Σ_{2,1}). The first-order structure is equipped with four unary relations N, Σ, G and B and a binary function symbol C. Each string is represented by an integer, according to an arbitrary fixed ordering. Gi is true if string i is in G, and Bi is true if string i is in B. Σσ is true if σ ∈ Σ, and Ni is true if i ∈ N. The function C : N × N → Σ is defined by Cij = σ if σ is the jth symbol of string i.
We now provide the first-order formula expressing PI with Small Patterns:

$$\exists i_{1,1},\ldots,i_{k,s},\,c_{1,1},\ldots,c_{k,s}\;\forall j\,\bigg(\Big(\bigwedge_{l\in[k],\,b\in[s]} N i_{l,b}\Big)\wedge\Big(\bigwedge_{l\in[k],\,b\in[s]} \Sigma c_{l,b}\Big)\wedge\Big(Gj \rightarrow \bigvee_{l\in[k]}\bigwedge_{b\in[s]} C j i_{l,b} = c_{l,b}\Big)\wedge\Big(Bj \rightarrow \bigwedge_{l\in[k]}\bigvee_{b\in[s]} C j i_{l,b} \neq c_{l,b}\Big)\bigg)$$
The formula states that a solution to PI with Small Patterns consists of k sets of s symbols, along with their positions, such that each string in G is compatible with at least one set of symbols, and no string in B is compatible with any set.
Containment in NP can be demonstrated by the usual polynomial verification
approach (indeed in much the same format as the above formula).
Corollary 3.6. Pattern Identification has no constant factor fpt-approximation algorithm unless FPT = W[2], and there exists a c > 0 such that Pattern Identification has no c · log n polynomial time approximation algorithm unless P = NP, even when |Σ| = 2 and the optimization goal is min k.
Proof. As Dominating Set has no constant factor fpt-approximation [8]
unless FPT = W[2] and no c · log n polynomial time approximation [26] for
some c > 0 unless P = NP and the reduction of Lemma 3.2 is a strict
polynomial-time reduction, the corollary follows.
Given the construction in the proof of Lemma 3.2, we can deduce that one
source of complexity might be the freedom (unboundedness) in the alphabet and the structure of the strings. We demonstrate that restricting these
parameters is fruitless from a computational complexity perspective.
Corollary 3.7. PI with Small Strings is NP-complete even when |Σ| =
2, d = 4, s = 1 and |B| = 1.
Proof. As Dominating Set is NP-complete on planar graphs of maximum
degree 3 [17], the number of 1s in each string in the construction of the proof
of Lemma 3.2 is at most 4, where we take the base symbol to be 0.
This result also demonstrates the following:
Lemma 3.8. PI with Large Patterns and Small Strings and
PI with Large Patterns are both NP-complete even when |Σ| = 2, d = 4,
r = 9 and |B| = 1.
Proof. Following Corollary 3.7, we can see from the construction given in the
proof of Lemma 3.2 that for each p ∈ P , instead of setting p[i] = ∗ for each
i not in the dominating set, we can choose r to be nine, and set p[i] := 1 if i
is in the dominating set, p[j] = ∗ for the at most three values of j such that
ij ∈ E(G) and the at most six additional values of j at distance two⁴ from i,
and p[l] = 0 for all other l ∈ {1, . . . , n}. For the reverse argument, we have similar conditions as before: at least one symbol of each pattern must be a 1 and at most four can be 1s. With at most nine ∗ symbols, the pattern is
compatible with all the strings that the corresponding vertex dominates, and
all other symbols in these strings are 0.
Corollary 3.9. The following are true:
1. PI with Large Patterns and Small Strings is para-NP-complete when parameterized by |Σ| + d + r + |B|.
2. PI with Large Patterns is para-NP-complete when parameterized by |Σ| + r + |B|.
3. PI with Small Patterns and Small Strings is para-NP-complete when parameterized by |Σ| + d + s + |B|.
4. PI with Small Patterns is para-NP-complete when parameterized by |Σ| + s + |B|.
⁴As G has maximum degree three, each neighbor of i has at most two other neighbors, so the pattern representing each of these neighbors has a 1 in the ith position, a 1 for its own position and two other 1s. Therefore we need only three ∗ symbols for the neighbors themselves, and two more per neighbor for the distance-two neighborhood.
Proof. The results are obtained as follows:
1. Lemma 3.8 gives NP-completeness with fixed |Σ|, d, r and |B|. With
Corollary 2.16 from [16], the result follows.
2. The preservation of hardness when taking subsets of a set of parameters
gives the result from 1.
3. Corollary 3.7 shows NP-completeness with fixed |Σ|, d, s and |B|.
Corollary 2.16 from [16] completes the result.
4. The result follows immediately from 3.
We note that Dominating Set is in FPT for graphs of bounded degree, so
we do not obtain a W[2]-hardness result. However we can tighten this result
a little further:
Theorem 3.10. Pattern Identification is NP-complete and APX-hard
even when Σ = {0, 1} and all strings have at most two symbols as 1 (equiv.
at most two symbols as 0) and |B| = 1.
Lemma 3.11. Vertex Cover ≤P Pattern Identification.
[Figure 2 here: the example graph on vertices {1, . . . , 5}, the derived string sets G and B, and the resulting pattern set P = {1∗∗∗∗, ∗1∗∗∗, ∗∗∗1∗}.]
Figure 2: An example of the reduction used in Lemma 3.11 with k = 3. The vertex cover is highlighted in red, and the corresponding set of patterns is shown.
Proof. Given an instance (G, k) of Vertex Cover with V (G) = {1, . . . , n},
we construct an instance (Σ, G, B, k) of Pattern Identification as follows:
1. Σ = {0, 1}.
2. G = {g_ij | ij ∈ E(G)} with g_ij ∈ Σ^n where g_ij[i] = g_ij[j] = 1 and g_ij[u] = 0 for u ≠ i, j.
3. B = {0^n}.
Clearly this construction can be performed in polynomial time. The construction is illustrated in Figure 2.
Claim 3.12. If (G, k) is a Yes instance of Vertex Cover then (Σ, G, B, k)
is a Yes instance of Pattern Identification.
Let V′ ⊆ V(G) where |V′| ≤ k be a vertex cover witnessing that (G, k) is a Yes instance of Vertex Cover. We construct a set of patterns P with |P| = |V′| that is a solution for (Σ, G, B, k), where for each i ∈ V′ there is a pattern p_i ∈ P with p_i[i] = 1 and p_i[j] = ∗ for j ≠ i. For each edge ij ∈ E(G), either i ∈ V′ or j ∈ V′ (or both). Therefore for the string g_ij corresponding to ij, we have either p_i ∈ P or p_j ∈ P such that p_i → g_ij or p_j → g_ij. Hence P → G. Moreover there is no p_i ∈ P such that p_i → b where b is the single element of B, as each p_i, by construction, contains a 1, whereas b consists of only 0s. Therefore (Σ, G, B, k) is a Yes instance of Pattern Identification.
Claim 3.13. If (Σ, G, B, k) is a Yes instance of Pattern Identification
then (G, k) is a Yes instance of Vertex Cover.
Let P with |P| ≤ k be the set of patterns witnessing the fact that (Σ, G, B, k) is a Yes instance of Pattern Identification. We may assume without loss of generality that for every p ∈ P, there exists some g ∈ G such that p → g. Each p ∈ P must contain at least one 1, otherwise p → b where b is the single element of B. No p ∈ P can contain more than two 1s, as there exists g ∈ G such that p → g, and every such g has exactly two 1s. We note that if a pattern p has two 1s, then there is exactly one g ∈ G such that p → g.
Let P_1 ⊆ P be the set of patterns with exactly one 1 and P_2 ⊆ P be the set of patterns with exactly two 1s. We have P_1 ∪ P_2 = P. We construct a vertex cover V′ ⊆ V(G) with |V′| ≤ |P| as follows:
1. for each p ∈ P_1 add i to V′ where p[i] = 1,
2. for each p ∈ P_2 where p[i] = p[j] = 1, arbitrarily add one of i or j to V′.
Consider any edge ij ∈ E(G); for the corresponding g_ij ∈ G there exists a p ∈ P such that p → g_ij. As each p has at least one 1, this 1 must be at position i or j (or both). Therefore i or j is in V′ (or perhaps both), so V′ forms a valid vertex cover for G.
Proof of Theorem 3.10. The NP-hardness follows from Lemma 3.11. The
containment in NP follows from the usual verification algorithm. The
APX-hardness follows as the reduction of Lemma 3.11 is strict and
Vertex Cover is APX-hard [20].
Finally, as restricting the alphabet did not reduce the complexity, we consider
the case where the strings themselves are short. Again the problem is hard,
but we note that to achieve this reduction we relax the bound on Σ (or, in parameterized complexity terms, |Σ| is no longer a parameter; if |Σ| is a parameter, the problem is in FPT).
Theorem 3.14. PI with Small Strings is NP-complete even when n =
4, d = 4 and |B| = 1.
Lemma 3.15. Planar Vertex Cover ≤P PI with Small Strings
even when the length of strings is restricted to 4.
[Figure 3 here: the example graph on vertices {1, . . . , 5}, the derived string sets G and B over the alphabet {σ_1, . . . , σ_6}, and the resulting pattern set P = {σ_1∗∗∗, ∗σ_2∗∗, ∗σ_4∗∗}.]
Figure 3: An example of the reduction used in Lemma 3.15 with k = 3. The vertex cover is highlighted in red, and the corresponding set of patterns is shown. Note the difference with the reduction in Lemma 3.11: here the position encodes the coloring and the symbols encode the edges, whereas previously the strings more directly encoded the graph.
Proof. Let (G, k) be an instance of Planar Vertex Cover. We assume
without loss of generality that V (G) = {1, . . . , n}. As G is planar, we can
compute a proper 4-coloring in polynomial time [2]. Let C : V (G) →
{1, 2, 3, 4} be such a coloring. We construct an instance (Σ, G, B, k, d) of PI with Small Strings as follows:
1. Σ = {σ_1, . . . , σ_{n+1}}.
2. G = {g_ij | ij ∈ E(G)} where for k ∈ {1, . . . , 4} we set
$$g_{ij}[k] := \begin{cases} \sigma_i & \text{if } C(i) = k\\ \sigma_j & \text{if } C(j) = k\\ \sigma_{n+1} & \text{otherwise.} \end{cases}$$
3. B = {σ_{n+1}^4}, i.e. the single string consisting of four σ_{n+1} symbols.
4. d = 4.
We note that as C is a proper coloring, C(i) ≠ C(j) for any ij ∈ E(G). Moreover, for i ∈ V(G), σ_i only appears as the C(i)th symbol in any string.
The construction can clearly be performed in polynomial time. The construction is illustrated in Figure 3.
Claim 3.16. If (G, k) is a Yes instance of Planar Vertex Cover then
(Σ, G, B, k, d) is a Yes instance of PI with Small Strings.
Let V′ ⊆ V(G) with |V′| ≤ k be a vertex cover witnessing that (G, k) is a Yes instance of Planar Vertex Cover. We construct a set P with |P| = |V′| ≤ k of patterns that forms a solution for (Σ, G, B, k, d) in the following manner: for each i ∈ V′, we add the pattern p_i to P where p_i[C(i)] = σ_i and all other symbols in p_i are ∗. No pattern in P is compatible with the singleton element of B, as each has a symbol σ_i with 1 ≤ i ≤ n. For every edge ij ∈ E(G), at least one of i and j is in V′. Without loss of generality assume that i ∈ V′. By construction the string g_ij is compatible with the pattern p_i ∈ P, therefore every string in G is compatible with some pattern in P.
Claim 3.17. If (Σ, G, B, k, d) is a Yes instance of PI with Small Strings
then (G, k) is a Yes instance of Planar Vertex Cover.
Let P with |P| ≤ k be a set of patterns such that P → (G, B). As before we may assume that P is minimal in the sense that each pattern is compatible with some string in G. Each p ∈ P must have at least one symbol drawn from the set {σ_1, . . . , σ_n}, otherwise p → b for the single element b of B. No pattern p ∈ P can have more than two symbols from {σ_1, . . . , σ_n}, otherwise p is compatible with no string in G. As before, we partition P into P_1, the subset of patterns with one symbol from {σ_1, . . . , σ_n}, and P_2, the subset of patterns with two symbols from {σ_1, . . . , σ_n}. We construct a vertex cover V′ ⊆ V(G) for G with |V′| ≤ |P| ≤ k as follows:
• for each p ∈ P_1 add i to V′ if p[C(i)] = σ_i,
• for each p ∈ P_2 where p[C(j)] = σ_j and p[C(i)] = σ_i, arbitrarily add either i or j to V′.
Consider any edge ij ∈ E(G). The string g_ij is compatible with some pattern p ∈ P, therefore at least one of i and j is in V′; thus V′ forms a valid vertex cover for G.
Proof of Theorem 3.14. The construction used in the proof of Lemma 3.15
has the required structural properties. Again containment in NP is apparent
from the usual verification algorithm techniques.
Corollary 3.18. PI with Small Strings is para-NP-complete when parameterized by n + d + |B|.
Proof. The corollary follows from Theorem 3.14 and Corollary 2.16 from [16].
3.1. Containment
Although the W[2]-hardness reduction is quite direct, containment of
Pattern Identification when parameterized by k is not apparent. In
fact it is not clear that the problem lies in W[P] or even XP. As the non-parameterized version of the problem is NP-complete, it is at least contained
in para-NP. For PI with Small Patterns we have shown containment in
W[2]. In contrast, for PI with Large Patterns we can show containment
in W∗ [5].
Theorem 3.19. PI with Large Patterns ∈ W∗ [5] when parameterized
by k + r.
Proof. We reduce the problem to MC(Σ∗_{5,1}), which is complete for W∗[5] [7, 16]. We use the same first-order structure as in the proof of Corollary 3.5, and give a suitable first-order formula:

$$\exists s_1,\ldots,s_k,\,i_{1,1},\ldots,i_{k,r}\;\forall j\,\bigg(\Big(Gj \rightarrow \exists l \bigvee_{c\in[k]}\big(l = s_c \wedge \forall b\,(Cjb = Clb \vee \bigvee_{d\in[r]} b = i_{c,d})\big)\Big)\wedge\Big(Bj \rightarrow \forall l \bigwedge_{c\in[k]}\big(l = s_c \rightarrow \exists b\,(Cjb \neq Clb \wedge \bigwedge_{d\in[r]} b \neq i_{c,d})\big)\Big)\wedge\Big(\bigwedge_{c\in[k]}\big(N s_c \wedge \bigwedge_{d\in[r]} N i_{c,d}\big)\Big)\bigg)$$
The formula picks out k indices of strings (implicitly in G, as a choice of
a string from B will fail) and for each of these, r indices which will be the
location of the ∗ symbols in the patterns. For each index, if the index selects a string in G, then one of the patterns is compatible with the string; if it selects a string in B, no pattern is compatible with the string. We note that the B clause is in Π_{2,1}, and hence Σ_{3,1}, giving the final bound of Σ∗_{5,1}.
This also places PI with Large Patterns somewhere between W[5] and W[8] [7]. We note that the above formula could be converted into prenex normal form, giving a tighter containment; however, the central observation is that it will be greater than W[2], in contrast to the hardness result and the containment of PI with Small Patterns.
4. Tractable Cases of the Pattern Identification Problem
Guided by the results of Section 3, we identify the following cases where the
Pattern Identification problem is tractable.
Theorem 4.1. Pattern Identification is fixed-parameter tractable when
parameterized by |Σ| + n.
Proof. Taking the alphabet size and the string length as a combined parameter gives an immediate kernelization. The total number of strings of length n over alphabet Σ is |Σ|^n. Thus |G| + |B| ≤ |Σ|^n.
Theorem 4.2. PI with Small Strings is fixed-parameter tractable when parameterized by d + |G| + |B|, with a kernel of size O(d · (|G| + |B|)²) in both the total number of symbols across all strings and the size of the alphabet.
Proof. As G and B are d-small, there can be at most d · (|G| + |B|) positions
where any pair of strings in G∪B differ, that is, every other position must be
the base symbol uniformly across all strings. The positions where every string
is identical cannot be of use in generating patterns, thus we may ignore these
positions. This gives restricted sets G′ and B′ of size |G′| + |B′| ≤ |G| + |B| containing strings of length at most d · (|G| + |B|). Furthermore this restricts the number of symbols used from Σ to at most d · (|G| + |B|)². Thus we can restrict our alphabet to these symbols alone; denote this new alphabet by Σ′. This gives our required kernel size.
The initial determination of which positions to ignore can be computed in
O(n·(|G|+|B|)) time, thus the kernelization can be performed in polynomial
time.
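A minimal Java sketch of this pruning step (our naming; the caller passes G ∪ B in a fixed order and splits the result back afterwards) could look as follows:

    import java.util.*;

    public final class Kernelize {
        // Delete every position on which all input strings agree; such
        // positions cannot help any pattern separate G from B.
        public static List<String> dropUniformPositions(List<String> strings) {
            int n = strings.get(0).length();
            List<Integer> keep = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                char c = strings.get(0).charAt(i);
                for (String s : strings) {
                    if (s.charAt(i) != c) { keep.add(i); break; }
                }
            }
            List<String> reduced = new ArrayList<>();
            for (String s : strings) {
                StringBuilder sb = new StringBuilder();
                for (int i : keep) sb.append(s.charAt(i));
                reduced.add(sb.toString());
            }
            return reduced;              // at most d*(|G|+|B|) columns remain
        }
    }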
Theorem 4.3. Pattern Identification is fixed-parameter tractable when
parameterized by k + n.
Proof. Let (Σ, G, B, k) be an instance of Pattern Identification. If
(Σ, G, B, k) is a Yes instance, by definition, there exists a P with |P | ≤ k
such that every string g ∈ G must be compatible with at least one p ∈ P .
Therefore given g, the compatible p must consist of, at each position, either
the ∗ symbol, or the symbol at the same position in g.
This gives a direct bounded search tree algorithm for Pattern Identification. At each node in the tree we select an arbitrary g from G. We then branch on all possible patterns p that are
compatible with g, with a new set G := G \ {h ∈ G | p → h} (note that this
removes g from further consideration). If there is a b ∈ B such that p → b,
then we terminate the branch. If we reach depth k and G 6= ∅, we terminate
the branch. Otherwise if at any point we have G = ∅, we answer Yes.
Obviously the depth of the search tree is explicitly bounded by k. The branching factor is equal to the number of patterns compatible with a string of length n, which is 2^n. The adjustment of G and checks against B at each node individually take O(n) time, giving O((|G| + |B|) · n) time at each node. Combined, the algorithm takes O(2^{kn} · (|G| + |B|) · n) time, and the theorem follows.
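A direct Java sketch of this search tree (our naming, reusing the compatible method from the sketch after Definition 2.2; bit i of mask decides whether position i of g is kept or replaced by ∗):

    import java.util.*;

    public final class SearchTree {
        public static boolean solve(Set<String> G, List<String> B, int k) {
            if (G.isEmpty()) return true;            // everything covered
            if (k == 0) return false;                // depth bound reached
            String g = G.iterator().next();          // arbitrary g in G
            int n = g.length();
            for (int mask = 0; mask < (1 << n); mask++) {
                char[] chars = g.toCharArray();
                for (int i = 0; i < n; i++) {
                    if ((mask & (1 << i)) == 0) chars[i] = '*';
                }
                String p = new String(chars);
                boolean hitsBad = false;             // terminate branch if p -> b
                for (String b : B) {
                    if (Patterns.compatible(p, b)) { hitsBad = true; break; }
                }
                if (hitsBad) continue;
                Set<String> rest = new HashSet<>();  // G minus strings p covers
                for (String h : G) {
                    if (!Patterns.compatible(p, h)) rest.add(h);
                }
                if (solve(rest, B, k - 1)) return true;
            }
            return false;
        }
    }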
Theorem 4.4. Pattern Identification is fixed-parameter tractable when
parameterized by |G| + n.
Proof. The search tree approach used in the proof of Theorem 4.3 can also be adapted to the combined parameter |G| + n. Again we select an arbitrary g from G. We branch on all possible patterns p that are compatible with g, of which there are at most 2^n, with the new set G := G \ {h ∈ G | p → h}. If p → b for any b ∈ B, the branch is terminated. When we have G = ∅, we check the collected set P of patterns in that branch. If |P| ≤ k we answer Yes, otherwise the branch is terminated. If all branches terminate with no Yes answer, we answer No.
Theorem 4.5. PI with Large Patterns and Small Strings is fixedparameter tractable when parameterized by k + |Σ| + d + r + |B|.
Proof. As each pattern can have at most r many ∗ symbols, every other symbol in each pattern is fixed. Thus each pattern is compatible with at most |Σ|^r strings. This limits the number of strings in G to k · |Σ|^r.
The tractability then follows from Theorem 4.2.
5. Discussion
Complementing the classification results given above, we now discuss some related issues. Firstly (in Section 5.1), given the complex parameter landscape introduced, what problems remain unsolved, and which are the interesting parameterizations for future work? Secondly, we relate Pattern Identification to some similar problems that give some small intuition as to sources of complexity in Pattern Identification (Section 5.2).
5.1. The Mire of Multivariate Analysis: Untangling the Parameters
The complexity analysis in this work involves a considerable number of parameters and, unsurprisingly, there are some relationships between them that can be identified, allowing a better perspective on the sources of complexity in the problem, and what cases remain open. The immediately obvious relationships, for non-trivial parameter values⁵, are r ≤ n, s ≤ n and d ≤ n. We also note that k ≤ |G| and k ≤ (|Σ| + 1)^n, again for non-trivial values of k.
This helps to unravel some of the relationships present in the results of this
work. We also note that, of course, expanding a list of parameters preserves tractability, while reducing a list of parameters preserves intractability.
A visual summary of the tractable, intractable and open cases for a simplified
parameter space is given in Figure 4. Given the relationships between s, r,
d and n, we reduce the parameter space to k, |Σ|, n, |G| and |B|. Although
this reduces the accuracy of the space, the broad complexity landscape of
the problem becomes more comprehensible.
Speculatively, we may observe that the problem seems to require at least two
parameters for tractability. This is perhaps unsurprising, given the nature of
the input – we need some parametric “handle” on the length of the strings
and another on the number of strings.
From Figure 4 it is clear that the central parameter in the open cases is |G|,
though we note that in the full parameter space, there are combinations of
s, r and d with other parameters for which the complexity remains open⁶.
5.2. Ties to Other Problems
The Pattern Identification problem, as would be expected, has ties to
other problems that (can) model the general search for patterns that separate
two sets of data. These ties also illustrate some features of the computational
complexity of the problem.
⁵By non-trivial we mean values which differentiate the parameters – for example, if s > n, s becomes meaningless as any number of ∗ symbols would be allowed, within the limitation of length n strings.
⁶At last hand-count, 72 cases out of the 256 possible parameterizations with these parameters remain open, compared to 8 with the reduced parameter space.
[Figure 4 here: a diagram of the simplified parameter space over |G|, |B|, k, n and |Σ|, marking parameter combinations as para-NP-complete, W[2]-hard, FPT or Open.]
Figure 4: Simplified representation of the parameter space and the complexity results. We note in particular that n or at least one of its related
parameters s, r or d seems essential for tractability (though never sufficient).
Given the nature of the input as a set of strings, it is perhaps unsurprising
that at least two parameters are (apparently) needed for tractability. The
obvious open cases are dominated by the parameter |G|.
5.2.1. Set Covering
When the length n of the strings is small, Pattern Identification
can be easily reduced to Set Cover. Given an instance (Σ, G, B, k) of
Pattern Identification, we can generate the set P of all patterns that
are compatible with some string in G. We know that |P| ≤ |G| · 2^n. From P we remove any pattern that is compatible with a string in B. Let P′ be the set thus obtained. For each p ∈ P′, let s_p = {g ∈ G | p → g}, and let S = {s_p | p ∈ P′}. Taking G as the base set, (G, S, k) forms an instance of Set Cover (parameterized by k). This reduction can be performed in O((|B| + |G|) · |G| · 2^n · n) time.
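A hedged Java sketch of this reduction (our naming, again reusing the compatible method sketched in Section 2):

    import java.util.*;

    public final class ToSetCover {
        // Enumerate all patterns compatible with some g in G, discard any
        // that also match a bad string, and record the set s_p of good
        // strings each survivor covers.
        public static List<Set<String>> buildSets(List<String> G, List<String> B) {
            Set<String> candidates = new HashSet<>();
            for (String g : G) {                    // at most |G| * 2^n patterns
                int n = g.length();
                for (int mask = 0; mask < (1 << n); mask++) {
                    char[] chars = g.toCharArray();
                    for (int i = 0; i < n; i++) {
                        if ((mask & (1 << i)) == 0) chars[i] = '*';
                    }
                    candidates.add(new String(chars));
                }
            }
            List<Set<String>> S = new ArrayList<>();
            for (String p : candidates) {
                boolean hitsBad = false;
                for (String b : B) {
                    if (Patterns.compatible(p, b)) { hitsBad = true; break; }
                }
                if (hitsBad) continue;
                Set<String> sp = new HashSet<>();   // s_p = { g in G | p -> g }
                for (String g : G) {
                    if (Patterns.compatible(p, g)) sp.add(g);
                }
                if (!sp.isEmpty()) S.add(sp);
            }
            return S;                               // cover G with at most k sets
        }
    }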
This leads to the following theorem:
Theorem 5.1. Pattern Identification ∈ W[2] when n ≤ f (k) · log |I|
where |I| is the overall size of the instance and f (k) is a computable function
of the parameter k.
Proof. The reduction above is a parameterized reduction if 2^n ∈ O(g(k) · |I|^c) for some computable function g.
It is not clear that we retain W[2]-hardness in this case however, so we unfortunately do not obtain a W[2]-completeness result.
This does, however, give us an immediate approximation algorithm for this case. As Set Cover has a (1 + log |S|)-factor linear time approximation algorithm [22], we obtain a (1 + log(|G|² · log(|I|) · 2^{f(k)}))-factor fpt-time approximation algorithm.
5.2.2. Feature Set
The k-Feature Set problem bears a strong resemblance to the Pattern Identification problem⁷, except in the k-Feature Set case,
the problem asks for a set of features that separate the “good” examples from
the “bad” rather than a set of patterns. In fact, given a feasible solution for
one problem, we can construct a feasible (but not necessarily optimal) solution to the other.
Given a set I = {i_1, . . . , i_k} of indices of columns forming a feature set, we can construct a set of patterns that separates G and B as follows: for each g ∈ G, let p_g be the pattern where p_g[i] = g[i] if i ∈ I and p_g[i] = ∗ otherwise. We note that this gives a set of small patterns (i.e., s = k), however the number of patterns may be as large as |G|.
Conversely, given a set of patterns P with at most s non-∗ symbols in each
pattern, the set I = {i ∈ [n] | ∃p ∈ P (p[i] ≠ ∗)} forms a feature set. Again
we note that the size of the feature set may be as large as |G| · s.
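The feature-set-to-patterns direction is only a few lines of Java (our naming):

    import java.util.*;

    public final class FeatureSetToPatterns {
        // One pattern per good string: keep the selected columns I,
        // wildcard everything else (so each pattern has s = |I| non-*
        // symbols, but there may be up to |G| patterns).
        public static Set<String> toPatterns(List<String> G, Set<Integer> I) {
            Set<String> P = new HashSet<>();
            for (String g : G) {
                char[] p = new char[g.length()];
                for (int i = 0; i < g.length(); i++) {
                    p[i] = I.contains(i) ? g.charAt(i) : '*';
                }
                P.add(new String(p));
            }
            return P;
        }
    }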
If we consider a variant of PI with Small Patterns where we relax the
constraint on the number of patterns in the solution, it is easy to see that this
problem is in W[2]. This suggests that the solution size plays a significant role
in raising the complexity of the problem from a parameterized perspective.
6. Conclusion and Future Directions
There are a number of open complexity questions prompted by this paper,
three of which we think are particularly interesting.
The central question is of course the precise classification of Pattern Identification. Although PI with Small Patterns is W[2]-complete, the general problem is only W[2]-hard, and the containment of PI with Large Patterns simply gives a loose upper bound, although it does suggest that the problem is harder than PI with Small Patterns.
⁷Indeed, variants of k-Feature Set have also been considered for use in similar applications as Pattern Identification [10].
The problem, intuitively, also shares some similarities with p-Hypergraph-(Non)-Dominating-Set, which is W[3]-complete [7]. p-Colored-Hypergraph-(Non)-Dominating-Set, however, is W∗[3]-complete [7] and appears "harder" than Pattern Identification, hence we conjecture:
Conjecture 6.1. Pattern Identification is W[3]-complete when parameterized by k.
There are also some interesting parameterizations for which the complexity
remains open:
• PI with Small Strings parameterized by k + |Σ| + d, and
• PI with Large Patterns and Small Strings parameterized by
k + d + r.
Turning to the parameter |G|, results for the following combinations of parameters would also close some of the significant open cases:
• Pattern Identification parameterized by k + |Σ| + |G|,
• Pattern Identification parameterized by |G| + |Σ| + |B|, and
• Pattern Identification parameterized by k + |B| + |G|.
As a matter of prognostication, we would guess that the first of these is in
FPT, and the latter two are hard for some level of the W-hierarchy, but as
yet have no strong evidence for these claims.
7. Acknowledgements
PM acknowledges funding of his research by the Australian Research Council
(ARC, http://www.arc.gov.au/) grants Future Fellowship FT120100060 and
Discovery Project DP140104183.
8. References
[1] Dana Angluin. Finding patterns common to a set of strings. Journal of
Computer and System Sciences, 21(1):46–62, 1980.
25
[2] Kenneth Appel and Wolfgang Haken. Every Planar Map is Four-Colorable, volume 98 of Contemporary Mathematics. American Mathematical Society, Providence, RI, 1989. With the collaboration of J.
Koch.
[3] Robert Bredereck, Thomas Köhler, André Nichterlein, Rolf Niedermeier,
and Geevarghese Philip. Using patterns to form homogeneous teams.
Algorithmica, 71:517–538, 2015.
[4] Robert Bredereck, André Nichterlein, and Rolf Niedermeier. Patternguided k-anonymity. Algorithms, 6:678–701, 2013.
[5] Laurent Bulteau, Falk Hüffner, Christian Komusiewicz, and Rolf Niedermeier. Multivariate algorithmics for NP-hard string problems. Bulletin of the EATCS, 114, 2014.
[6] Adam Cannon and Lenore Cowen. Approximation algorithms for the
class cover problem. Annals of Mathematics and Artificial Intelligence,
40(3-4):215–224, 2004.
[7] Yijia Chen, Jörg Flum, and Martin Grohe. An analysis of the W∗-hierarchy. The Journal of Symbolic Logic, 72(2):513–534, 2007.
[8] Yijia Chen and Bingkai Lin. The constant inapproximability of the
parameterized dominating set problem. CoRR, abs/1511.00075, 2015.
[9] Carlos Cotta and Pablo Moscato. The k-Feature Set problem is W[2]-complete. Journal of Computer and System Sciences, 67(4):686–690,
2002.
[10] Carlos Cotta and Pablo Moscato. The parameterized complexity of
multiparent recombination. In Proceedings of the 6th Metaheuristics
International Conference (MICS2005), pages 237–242, 2005.
[11] Pierluigi Crescenzi. A short guide to approximation preserving reductions. In Proceedings of the Twelfth Annual IEEE Conference on Computational Complexity, pages 262–273. IEEE Computer Society, 1997.
[12] Xiaotie Deng, Guojun Li, Zimao Li, Bin Ma, and Lusheng Wang. A
PTAS for distinguishing (sub)string selection. In Proceedings of the 29th
International Colloquium on Automata, Languages and Programming,
ICALP ’02, pages 740–751. Springer-Verlag, 2002.
[13] Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Texts in Computer Science. Springer, 2013.
26
[14] Rodney G. Downey, Michael R. Fellows, and Udayan Taylor. The parameterized complexity of relational database queries and an improved
characterization of W[1]. In Douglas S. Bridges, Cristian S. Calude,
Jeremy Gibbons, Steve Reeves, and Ian H. Witten, editors, First Conference of the Centre for Discrete Mathematics and Theoretical Computer Science, DMTCS 1996, Auckland, New Zealand, December 9–13,
1996, pages 194–213. Springer-Verlag, Singapore, 1996.
[15] Michael R. Fellows, Jens Gramm, and Rolf Niedermeier. On the parameterized intractability of CLOSEST SUBSTRING size and related
problems. In Helmut Alt and Afonso Ferreira, editors, Proceedings of
the 19th Annual Symposium on Theoretical Aspects of Computer Science (STACS 2002), volume 2285 of Lecture Notes in Computer Science,
pages 262–273. Springer, 2002.
[16] Jörg Flum and Martin Grohe. Parameterized Complexity Theory. Texts
in Theoretical Computer Science. An EATCS Series. Springer, 2006.
[17] M. R. Garey and D. S. Johnson. Computers and Intractability – A Guide to the Theory of NP-completeness. Freeman and Company, San
Francisco, 1979.
[18] Jens Gramm, Jiong Guo, and Rolf Niedermeier. Parameterized intractability of distinguishing substring selection. Theory of Computing
Systems, 39(4):545–560, 2006.
[19] Jens Gramm, Rolf Niedermeier, and Peter Rossmanith. Fixed-parameter
algorithms for CLOSEST STRING and related problems. Algorithmica,
37(1):25–42, 2003.
[20] J. Håstad. Some optimal inapproximability results. In Proceedings of
the 29th ACM Symposium on the Theory of Computing (STOC), pages
1–10, 1997.
[21] Danny Hermelin and Liat Rozenberg. Parameterized complexity analysis
for the closest string with wildcards problem. Theoretical Computer
Science, 600:11–18, 2015.
[22] D. S. Johnson. Approximation algorithms for combinatorial problems.
Journal of Computer and System Sciences, pages 256–278, 1974.
[23] Michael Kearns and Leonard Pitt. A polynomial-time algorithm for
learning k–variable pattern languages from examples. In Proceedings
27
of the 2nd Annual ACM Workshop on Computational Learning Theory,
pages 57–71, 1991.
[24] Ming Li, Bin Ma, and Lusheng Wang. On the closest string and substring
problems. Journal of the ACM, 49(2):157–171, March 2002.
[25] Dániel Marx. Parameterized complexity and approximation algorithms.
The Computer Journal, 51(1):60–78, 2008.
[26] Ran Raz and Shmuel Safra. A sub-constant error-probability low-degree
test, and a sub-constant error-probability PCP characterization of NP.
In Proceedings of the 29th ACM Symposium on the Theory of Computing
(STOC), pages 475–484, 1997.
Parameter                  Complexity          Theorem
k + |Σ| + |B|              W[2]-hard           3.1
k + |Σ| + s + |B|          W[2]-complete       3.5
n + d + |B|                para-NP-complete    3.18
|Σ| + d + r + |B|          para-NP-complete    3.9
|Σ| + d + s + |B|          para-NP-complete    3.9
d + |G| + |B|              FPT                 4.2
|Σ| + n                    FPT                 4.1
k + n                      FPT                 4.3
|G| + n                    FPT                 4.4
k + |Σ| + d + r + |B|      FPT                 4.5
k + |G| + |B|              Open                –
|Σ| + |G| + |B|            Open                –
k + |Σ| + d                Open                –
Table 1: Summary of the parameterized results of the paper. |Σ| is the size
of the alphabet, n is the length of the strings and patterns, |G| and |B| are
the sizes of the two input string sets, k is the number of patterns, r is the
maximum number of ∗ symbols in a pattern, s is the maximum number of
non-∗ symbols in a pattern and d is the number of ‘non-base’ elements in each
string. Of course the usual inferences apply: tractable cases remain tractable
when expanding the parameter and intractable cases remain intractable when
restricting the parameter. We note that a number of cases remain open, of which we include some of the more pertinent here; however, given the number of parameters under consideration, we refer the reader to Sections 5.1 and 6 for a proper discussion of the open cases.
Planning system for deliveries in Medellín
Catalina Patiño-Forero, Universidad EAFIT, Medellín, Colombia (cpatin10@eafit.edu.co)
Mateo Agudelo-Toro, Universidad EAFIT, Medellín, Colombia (magude29@eafit.edu.co)
Mauricio Toro, Universidad EAFIT, Medellín, Colombia (mtorobe@eafit.edu.co)
ABSTRACT
Here we present the implementation of an application capable of
planning the shortest delivery route in the city of Medellín,
Colombia. We discuss the different approaches to this problem
which is similar to the famous Traveling Salesman Problem
(TSP), but differs in the fact that, in our problem, we can visit
each place (or vertex) more than once. Solving this problem is
important since it would help people, especially stores with delivery services, to save time and money spent on fuel, because they can plan any route in an efficient way.
To solve this we need to construct a subgraph with the
delivering points, based on the city’s map, and it will be a
complete one i.e. all of its vertices are connected. Then we will
give the user different options that will determine which
algorithm will be used to solve the problem. Between these
options there is only one that will surely give the shortest route
and works only with twenty or less points. The other options are
quite fast but may or may not give the shortest route.
Depending on the chosen algorithm, the results in time, memory
and total distance will vary. For example, we have an algorithm
that does not create a subgraph to give an answer, so it takes less
memory and time, but will not give the total distance. Others can
give a better answer quite fast, even though they require computing a subgraph, but still the tour produced may not be the shortest one. Finally, there is an algorithm that can give the shortest route every time, but it needs to look through all possible answers, so it takes much more time.
For the problem of planning delivery routes in Medellín our
proposed solution to find the shortest route can be of huge help
for small companies if their couriers do not visit more than 20
points per trip.
Author Keywords
Planning, deliveries, routing, graph, complexity, shortest path.
ACM Classification Keywords
Theory of computation → Design and analysis of algorithms →
Approximation algorithms analysis → Routing and network
design problems
1. INTRODUCTION
Efficiently planning the deliveries is something really useful for
any company in the field. Here we talk about creating an
efficient program that gives an optimal delivering route for a
courier, in order to minimize the time spent traveling; the
courier can pass over one place more than once. Without this
last condition we would have a TSP which, though it is a
“simple” problem formulated over 200 years ago [9], does not
have any optimal solution for big graphs (thousands of
vertexes). Since it is simpler (and possible) to treat our problem
as TSP, we are going to do so.
We will see the different approaches to this problem and also
discuss the selection of the best available choice for our specific
case.
2. PROBLEM
As we just stated, we are trying to create an efficient program
that gives an optimal (shortest total distance) delivering route for
a courier, which minimizes the time spent traveling; this route
can repeat places which were already visited. In our case, we
will implement it for the city of Medellín in Colombia, but it
does not mean the algorithm cannot be used for other cities.
This efficient route planning request is quite difficult to compute
if we want to get an optimal answer. This is due to the incredible
amount of possibilities we will have, since the idea is to use the
algorithm for real cities, for example Medellín, which has a
population that surpasses 2 million people [11]. So it is to be
expected that the algorithm will take an incredible amount of
time to give an appropriate answer, time that may exceed what
we can spend on it. We can take the TSP as an example, which
requires a time proportional to (n-1)!/2 to execute (where n is
the number of places or nodes) [10], which means that for 20
destinations it would require about 12 years to compute using
an average computer. We will treat our problem as TSP but
using a faster algorithm that requires less than 3 seconds to
compute the path for 20 points, but that would require 14 years
for 45 points on the same computer.
3. RELATED WORK
3.1 Minimum Spanning Tree (MST)
Given a weighted graph, the MST is the cheapest subset of
edges that keeps the graph in one connected component [1].
Those are very useful because they give approximate solutions
to the traveling salesman problem very efficiently.
One efficient way to compute the MST of a graph is the
Kruskal’s algorithm. It is a greedy algorithm that starts by
placing each vertex on its own connected component. It then
iterates over the edges having them sorted in non-descending
order, merging the two vertices connected by the current edge if,
and only if, they are currently on different components. The
complexity of this algorithm is O(m log m), where m is the number of edges.
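A minimal Java sketch of Kruskal's algorithm with union-find (our naming; edges are {u, v, weight} triples over vertices 0..n−1; path compression keeps the find operation cheap):

    import java.util.*;

    public final class Kruskal {
        private static int find(int[] parent, int x) {
            while (parent[x] != x) {
                parent[x] = parent[parent[x]];   // path compression
                x = parent[x];
            }
            return x;
        }

        public static List<int[]> mst(int n, List<int[]> edges) {
            edges.sort(Comparator.comparingInt((int[] e) -> e[2]));
            int[] parent = new int[n];           // each vertex starts alone
            for (int i = 0; i < n; i++) parent[i] = i;
            List<int[]> tree = new ArrayList<>();
            for (int[] e : edges) {
                int ru = find(parent, e[0]), rv = find(parent, e[1]);
                if (ru != rv) {                  // different components: merge
                    parent[ru] = rv;
                    tree.add(e);
                }
            }
            return tree;
        }
    }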
3.2 Hamiltonian Path and Cycle
A Hamiltonian Path is a path between two vertices of a graph
that visits each vertex exactly once [5]. A Hamiltonian Cycle is
a closed loop through a graph that visits each node exactly once
[6]. A closed loop is a cycle in a graph in which the first vertex
is the same as the last [7]. A graph possessing a Hamiltonian
Path is said to be a Hamiltonian Graph [8].
There is a backtracking approach to find whether an undirected
graph is Hamiltonian. We start by creating an empty array and
adding vertex 0 to it. We will try to add the other vertices
starting from 1, but before that, we check whether it is adjacent
to the previously added vertex and if it is not already added. If
we find such a vertex, we add it as part of the solution. If we do not find such a vertex then we return false [1]. In any case, the complexity of this algorithm is O(n!) where n is the number of vertices, just like the naïve approach.
3.3 Eulerian Path and Cycle
An Eulerian Path is a path in a graph that visits each edge
exactly once [3], and an Eulerian Cycle is an Eulerian Path
which starts and ends in the same vertex [2]. It is similar to the
Hamiltonian path because in both we want to visit some part of
the graph only once. The difference is that in this case we want
to walk through each edge instead of visiting each vertex. This
difference changes everything: while the Hamiltonian path is an
NP-Complete problem for a general graph, finding whether a
given graph is Eulerian (has an Eulerian Cycle) can be done in
O(n + m), where n is the number of vertices in the graph and m
the number of edges.
To find whether an undirected graph is Eulerian, check that all its non-zero degree vertices are connected (which can be done using a DFS traversal) and that the number of vertices with odd degree is 0 (if it is 2 then the graph has an Eulerian Path instead) [4].
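These two conditions translate directly into code; a hedged Java sketch (our naming; the graph is an adjacency list over vertices 0..n−1):

    import java.util.*;

    public final class Eulerian {
        public static boolean hasEulerianCycle(List<List<Integer>> adj) {
            int n = adj.size(), start = -1;
            for (int v = 0; v < n; v++) {
                if (!adj.get(v).isEmpty()) { start = v; break; }
            }
            if (start == -1) return true;        // no edges at all
            boolean[] seen = new boolean[n];     // DFS from a non-zero-degree vertex
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(start);
            seen[start] = true;
            while (!stack.isEmpty()) {
                int v = stack.pop();
                for (int w : adj.get(v)) {
                    if (!seen[w]) { seen[w] = true; stack.push(w); }
                }
            }
            for (int v = 0; v < n; v++) {
                if (!adj.get(v).isEmpty() && !seen[v]) return false; // disconnected
                if (adj.get(v).size() % 2 != 0) return false;        // odd degree
            }
            return true;
        }
    }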
3.4 Chinese Postman Problem (CPP)
In this problem, given a weighted graph, the postman wants to
find the shortest path that visits every edge at least once
returning to the starting point.
This problem can be solved in an optimal way by adding the
appropriate edges to the graph to make it Eulerian, because that is basically what the problem is: finding a (special) Eulerian Cycle in the graph. Specifically, we find the shortest path
Cycle in the graph. Specifically, we find the shortest path
between each pair of odd-degree vertices in the graph. Adding a
path between two odd-degree vertices in G turns both of them to
even-degree, moving G closer to becoming an Eulerian graph.
Finding the best set of shortest paths to add to G reduces to
identifying a minimum-weight perfect matching in a special
graph G′. For undirected graphs, the vertices of G′ correspond to the odd-degree vertices of G, with the weight of edge (i, j)
defined to be the length of the shortest path from i to j in G. For
directed graphs, the vertices of G′ correspond to the degree-imbalanced vertices from G, with the bonus that all edges in G′
go from out-degree deficient vertices to in-degree deficient ones.
Thus, bipartite matching algorithms suffice when G is directed
[1]. Once the graph is Eulerian, the actual cycle can be extracted
using the algorithm described above.
4. DATA STRUCTURES
To implement our algorithm, we need two graphs: one to store the city itself and another to store the points we need to visit.
For the graph of the city, we use adjacency list representation.
To achieve this we created four classes: vertex, edge, point and
pair. The first two, vertex and edge, are used directly on the
representation of the graph: we use HashMaps with vertex
objects as keys and edge objects as values. Point class represents
the latitude and longitude of the vertex in the world, and is used to access vertices given their latitude and longitude. Pair objects
are used during the execution of A* and Dijkstra’s algorithm,
which require a priority queue of vertices (first value of the pair)
sorted by some value acquired during the execution of the
algorithm (second value of the pair).
The other is a complete graph that contains only the vertices we
want to visit. It is stored as an adjacency matrix, using a
primitive double’s 2D array with dynamically assigned integer
codes to vertices (used as indices), and where the edges are
completed using either Dijkstra’s or A* algorithms on the city’s
graph (depending on the amount of points). Since it is a
complete graph, using an adjacency matrix is better than an
adjacency list, because both need the same memory space, but
the adjacency matrix is faster when looking up for the distance
between any two vertices and that is a main requirement of our
algorithms.
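A hedged sketch of how the matrix can be filled (class, method and parameter names are ours; shortestPaths stands for either Dijkstra's or A* run on the city's graph and is assumed to return the distance from a source vertex to every vertex):

    import java.util.function.IntFunction;

    public final class Subgraph {
        // One single-source shortest-path run per point of interest.
        public static double[][] build(int[] points, IntFunction<double[]> shortestPaths) {
            int n = points.length;
            double[][] dist = new double[n][n];
            for (int i = 0; i < n; i++) {
                double[] d = shortestPaths.apply(points[i]); // O(E + V log V) each
                for (int j = 0; j < n; j++) {
                    dist[i][j] = d[points[j]];
                }
            }
            return dist;
        }
    }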
There are other auxiliary data structures used during the
execution of different parts of the program. The Natural
Approximation algorithm uses a HashMap with vertices as keys
and integers as values to remember the original positions of the
vertices to visit. To read the vertices the user wants to visit, we
use a HashMap with points as keys and vertices as values,
because the URL only contains information about the point and
not the vertex itself. When reading the edges, we use a HashMap
with the code of the vertices (long data type) as keys and
vertices as values to access the vertices in the graph since the
edge’s specifications contain only the codes of the vertices it
connects and its cost (distance). In the case where the user gives
a point that it is not in our map, we compute a close enough one
and use a HashMap with vertex objects as keys and point objects
as values to store the original points of the approximated
vertices we found. This is done with the aim of returning to the
user the same points he or she entered. Finally, we use
ArrayLists of vertex objects to store both the points entered by
the user and the generated tours.
The reason to use HashMaps is that we do not require our
mapping to be stored in order, which allows us to use the most
efficient map available in Java.
5. ALGORITHM
First, the program creates the city’s graph by reading its
specifications which are given in two text files, one for the
vertices and other for the edges.
Then it reads the URL containing the points from the user, and the program finds the nearest vertex to each of the given coordinates, which can be the exact same point. After this,
the program will compute the complete subgraph containing
only the points of interest. In order to create the subgraph the
program will choose between two different algorithms,
depending on the number of nodes given by the user: A*
algorithm is used if this number is less than or equal to five; otherwise it uses Dijkstra's algorithm. For the heuristic of the
A* algorithm, it uses the Manhattan distance. Then it will
execute one of the following three algorithms, according to
user’s choice:
1. Natural approximation: the algorithm is a simple sort over the vertices using their 2D coordinates, with a custom
comparator to determine whether a point is to the left or to
the right of another. This algorithm has two versions: the
first (or the fastest) version just performs the sort described above and completes the tour by adding the initial vertex,
while the second version does the same as the first one, but
also computes the subgraph and the reverse tour, compares
both total lengths and returns the best tour. Notice that the
fastest version does not generate the subgraph. The method
for comparing the points is based on [11].
2. Nearest neighbor: This algorithm starts with the first vertex
and greedily chooses the nearest neighbor and moves to it.
It keeps doing the same thing until all the vertices are
visited. Finally, it adds the starting vertex to the end in
order to complete the tour. This algorithm will not always
give an optimal solution, but it’s execution time is much
faster than the brute force and can produces better tours
than natural approximation.
Brute force: we try every possible path starting with the
initial vertex, ending at i and passing through all the other
vertices different to i. To this path we add the distance from
i to the initial vertex to complete the tour.
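As promised above, here is a hedged sketch of the kind of
left/right comparator the natural approximation relies on; it is
our reconstruction of the standard cross-product test in the
spirit of [11], not the authors' exact code.

    import java.util.Comparator;

    // Orders points radially around a reference point (cx, cy): points in the
    // upper half-plane come first, and within a half-plane the sign of the cross
    // product says whether one point lies to the left or right of the other.
    public class RadialOrder implements Comparator<double[]> {
        private final double cx, cy;

        public RadialOrder(double cx, double cy) { this.cx = cx; this.cy = cy; }

        private boolean upperHalf(double[] p) {
            double dx = p[0] - cx, dy = p[1] - cy;
            return dy > 0 || (dy == 0 && dx > 0);   // angle in [0, pi)
        }

        private double cross(double[] a, double[] b) {
            return (a[0] - cx) * (b[1] - cy) - (a[1] - cy) * (b[0] - cx);
        }

        @Override
        public int compare(double[] a, double[] b) {
            if (upperHalf(a) != upperHalf(b)) return upperHalf(a) ? -1 : 1;
            double c = cross(a, b);
            return c > 0 ? -1 : (c < 0 ? 1 : 0);    // left of, right of, collinear
        }
    }

Sorting the ArrayList of vertices with such a comparator and then
appending the initial vertex closes the tour, which is all the fast
mode does.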
We now present the complexity analysis for the algorithms,
where n is the number of points to visit, E the number of edges
(335719) and V the number of vertices (179403) in the city's
graph.

Algorithm                              Worst-case time      Worst-case memory
Build subgraph                         O(n(E + V log V))    O(n^2)
Brute Force                            O(n^2 * 2^n)         O(n * 2^n)
Nearest Neighbor                       O(n^2)               O(n)
Natural approximation (Normal mode)    O(n log n)           O(n)
Natural approximation (Fast mode)      O(n log n)           O(n)

Table 1. Complexity table

6. IMPLEMENTATION
When starting the program, the first thing the user will see is an
"Initializing" signal, shown to make the user wait until the
program builds the city's graph. After this, the program will
show the time required to build the city's graph and ask for the
Google Maps URL containing the points to be visited (Figure 1).
If the user inputs an invalid URL, the program will say so and
then ask for a new one (Figure 1). It will repeat this process
until the user gives a valid input. If the user wants to end the
program from here, he or she may input an 'x' (lower or upper
case) as the URL and the program will finish.

Figure 1. Starting the program and invalid URL

After this, the program shows a warning telling the user that the
route given by our program may differ from Google's, because
the points we used to build the city's graph are different. It then
offers a menu with 6 or 7 different options (Figure 2), depending
on how many points were given, to compute a tour which covers
all the given points and comes back to the first one (the user
should not add the first point at the end of the path).

Figure 2. Program's menu

The first five options choose the way the user wants the program
to calculate the route; after choosing, the program will show the
time needed to generate the route and will take the user to a
Google Maps page where he or she can see this route. Since
options two, three, four and five need to internally create a
subgraph, the first time any of these options is chosen, the
program will also show the time spent creating it.

1. "Natural approximation fast mode" will present a route that
tends to be circular and, compared with the other options, it
is the fastest, but it will not give any distance and may not
show the shortest route.
2. "Natural approximation normal mode" will also present a
route that tends to be circular, but this time it shows the
total distance and may even show a route that is shorter
than the one obtained using the last mode, although it may
still not be the shortest possible route.
3. "Nearest Neighbor" will give a route formed by finding the
closest point to the current one and then moving to it. This
process repeats until reaching the last point (corresponding
to the first one). It may not give the shortest possible route
and its execution time is similar to the second option. This
option shows the total distance too.
4. "The best of both" will choose the best route between the
ones generated by the last two options and show the total
distance; this will take a little more time. We encourage its
usage when there are more than 20 points in the URL and
the user wants the shortest route.
5. "Exact" will always show the shortest route with its
respective distance. It is potentially much slower than the
other options, which is why it is not present in the menu
when the URL has more than twenty points, since that
would take a lot of time. If the user wants, he or she can
expand the maximum number of points to 'unlimited'. For
this, when the program asks for a URL (either when starting
the program or changing the current one), "extreme-mode"
must be written followed by the enter key; then a warning
will appear and the program will ask for the URL to
continue as usual.
6. "Change URL" lets the user change the current URL; if the
new URL is not valid, the program will indicate so and ask
for a new one; if the given input is an 'x', the program will
end.
7. "Exit" will end the program.
Something important to consider is that, when the URL contains
at least two points that are not connected to each other, the
program cannot calculate a distance, so it tells the user and
uses the first option ("Natural approximation fast mode") to
compute a possible route.
7. RESULTS WITH THE MAP OF MEDELLÍN

7.1 Execution Time
The following table shows the time in seconds that each
algorithm takes to process a route for different numbers of
vertices. The time required to build the subgraph for that same
number of points is also shown.
Algorithm \ Vertices                   5        10       15       20
Build subgraph                         0.4182   1.5146   2.1448   2.7558
Brute Force                            0        0.0014   0.0428   2.1206
Best of Both                           0        0        0.0001   0.0002
Nearest Neighbor                       0        0        0        0
Natural approximation (Normal mode)    0        0        0        0
Natural approximation (Fast mode)      0        0        0        0

Table 2. Execution time (seconds) on a Core i7 2410U processor.
Considering current limits imposed by Google Maps in terms of
the available number of destinations for a single route (ten
vertices), it is easy to notice that our algorithm will run in
feasible time even when looking for an optimal solution after
building the subgraph, and in under two seconds if it has not
been built.
7.2 Memory space
The following table shows the memory in megabytes that each
algorithm takes to process a route for different numbers of
vertices. It is important to note that the memory used by the
subgraph is only attributed to the algorithms that need it.
Algorithm \ Vertices                   5     10    15    20     24
Build subgraph                         27    58    73    111    136
Brute Force                            0     1     2     100    4771
Nearest Neighbor                       3     4     3     4      4
Natural approximation (Normal mode)    3     3     3     4      4
Natural approximation (Fast mode)      3     4     2     4      4

Table 3. Memory used (in MB)
Remember that the Natural Approximation (Fast mode) does not
require the subgraph to be computed by the time it is called. For
the rest of the algorithms, the program only needs to compute
the subgraph once: after building it, it is saved and reused until
a new subgraph is required (new URL).
It is easy to notice that for all algorithms, except brute force, the
memory use is almost constant. This is because the number of
vertices is not enough to show any difference.
As expected, the brute force algorithm is the slowest. For the
others, it is hard to compare their running times with such a
small number of vertices.
On the other hand, one can see the dramatic difference in
memory used by the brute force algorithm. This was
expected, since its memory use increases exponentially
(O(n * 2^n)). In Graphic 3 we plot the obtained results.
Graphic 3. Memory for Brute force
Graphic 1. Brute force’s execution time on a Core i7 2410U
processor.
Graphic 2. Subgraph construction time on a Core i7 2410U
processor.
7.3 Total distance
The following graphic shows the total distance (in meters) of the
routes found as shortest by the different algorithms for random
sets of points.
Graphic 4. Total distance
Graphic 4 shows how volatile the Nearest Neighbor and Natural
Approximation algorithms are: in one case they got the optimal
answer, whereas in other cases both obtained routes more than
10 kilometers longer than the optimal solution.
The code can be found at
https://subversion.assembla.com/svn/planningdeliverysystemmedellin/
8. CONCLUSIONS
The problem of calculating a delivery route given some points on a
map is not new and has been researched for years. As proof of
that, we can point to the famous Traveling Salesman Problem (TSP),
in which the delivery route may not have repeated points,
except for the first one, which is also the last. This
problem was defined in the 1800s and even today there is no
algorithm that can give the best route efficiently: we still
have to choose between efficiency and precision. For the
problem of planning delivery routes in Medellín, which can be
modeled as the TSP, our proposed solution can be of
huge help for small companies when their couriers go out on a
route, because it is unlikely that, on a single trip, they will visit
more than 20 points. If the delivery route may contain repeated
points, the problem becomes harder to solve, so it is better to simply
solve the TSP.
The biggest problem is to find an efficient way to give an
answer: algorithms that give optimal answers require a lot of
time and memory, to the point that they cannot be used for big
graphs, and algorithms that give an answer without
consuming too many resources may not be able to give the
shortest route. Even if we apply all possible optimizations to the
code, it is still not enough to efficiently compute an optimal
solution.
Thanks to this work, we were able to understand the limitations
a computer has and the need for implementing efficient
algorithms with the appropriate data structures; otherwise a
program could take more time to execute than the user is able
or willing to spend waiting, to the point where it may need years
to compute an answer. And this gets worse considering that we
now live in a world where a lot of data is stored, and merely
accessing it may take a lot of the computer's resources.
9. FUTURE WORK
Currently, there are two big limitations we would like to fix in
the future:
1. Our graph does not fit Google Maps' graph very well, which
causes the distances and the routes to be shown very
differently in many cases.
2. The only way to add more than 10 points on a route is by
working with the URL and the points' latitude and
longitude.
10. REFERENCES
[1] S. Skiena, The Algorithm Design Manual, New York: Springer, 2008.
[2] E. W. Weisstein, "Eulerian Cycle," Wolfram MathWorld. [Online]. Available: http://mathworld.wolfram.com/EulerianCycle.html. [Accessed 27 August 2016].
[3] E. W. Weisstein, "Eulerian Path," Wolfram MathWorld. [Online]. Available: http://mathworld.wolfram.com/EulerianPath.html. [Accessed 28 August 2016].
[4] GeeksForGeeks, "Eulerian path and circuit for undirected graph." [Online]. Available: http://www.geeksforgeeks.org/eulerian-path-and-circuit/. [Accessed 28 August 2016].
[5] E. W. Weisstein, "Hamiltonian Path," Wolfram MathWorld. [Online]. Available: http://mathworld.wolfram.com/HamiltonianPath.html. [Accessed 28 August 2016].
[6] E. W. Weisstein, "Hamiltonian Cycle," Wolfram MathWorld. [Online]. Available: http://mathworld.wolfram.com/HamiltonianCycle.html. [Accessed 28 August 2016].
[7] E. W. Weisstein, "Graph Cycle," Wolfram MathWorld. [Online]. Available: http://mathworld.wolfram.com/GraphCycle.html. [Accessed 28 August 2016].
[8] GeeksForGeeks, "Backtracking | Set 6 (Hamiltonian Cycle)." [Online]. Available: http://www.geeksforgeeks.org/backtracking-set-7-hamiltonian-cycle/. [Accessed 28 August 2016].
[9] University of Waterloo, "History of the TSP," January 2007. [Online]. Available: http://www.math.uwaterloo.ca/tsp/history/index.html. [Accessed 28 August 2016].
[10] A. Levitin, Introduction to The Design and Analysis of Algorithms, 3rd Edition, New Jersey: Pearson, 2012, p. 116.
[11] C. e. dos, "Github," 18 April 2016. [Online]. Available: https://github.com/mvpossum/eldiego/blob/master/geometria/orden.radial.cpp. [Accessed 4 September 2016].
Shaping Influence and Influencing Shaping: A
Computational Red Teaming Trust-based Swarm
Intelligence Model
Jiangjun Tang1 , Eleni Petraki2 , and Hussein Abbass1
arXiv:1802.09647v1 [] 26 Feb 2018
1
University of New South Wales, School of Engineering and Information Technology,
Canberra, ACT 2600, Australia?? .
2
Faculty of Science, Technology, Education, and Mathematics, University of
Canberra Canberra, Australia.
j.tang@adfa.edu.au,Eleni.Petraki@canberra.edu.au,h.abbass@adfa.edu.au
Abstract. Sociotechnical systems are complex systems, where nonlinear
interaction among different players can obscure causal relationships. The
absence of mechanisms to help us understand how to create a change in
the system makes it hard to manage these systems.
Influencing and shaping are social operators acting on sociotechnical
systems to design a change. However, the two operators are usually discussed
in an ad-hoc manner, without proper guiding models and metrics which
assist in adopting these models successfully. Moreover, both social operators
rely on an accurate understanding of the concept of trust. Without
such understanding, neither of these operators can create the required
level of trust to create a change in a desirable direction.
In this paper, we define these concepts in a concise manner suitable for
modelling the concepts and understanding their dynamics. We then introduce
a model for influencing and shaping and use Computational Red
Teaming principles to design and demonstrate how this model operates.
We validate the results computationally through a simulation environment
to show social influencing and shaping in an artificial society.
Keywords: Influence, Shaping, Trust, Boids
1
Introduction
Recently, computational social scientists have been attracted to studying means
for measuring the concepts of influence and shaping. Influencing works by
exerting a form of social power. Servi and Elson [1] introduce a new definition
of influence, which they apply to online contexts, as 'the capacity to shift the
patterns of emotion levels expressed by social media users'. They propose that
measuring influence entails identifying shifts in users' emotion levels followed by
the examination of the extent to which these shifts can be connected with a user.
?? Portions of this work were funded by the Australian Research Council Discovery
Grant number DP140102590.
However, if the process of influence creates shifts in patterns of emotions which
can be detected in the short-term, can a persistent application of influencing
operators create a long-term shift (i.e. shaping)?
Shmueli et al. [2] discuss computational tools to measure processes for shaping
and affecting human behaviour in real-life scenarios. Trust was identified as
a means to influence humans in a social system. Moreover, trust was found to
have a significant impact on social persuasion. Trust is a complex psychological
and social concept; a review of the concept can be found in [3].
Larson et al. [4] are among a few to imply a distinction between influence
and shaping, whereby shaping is perceived to be a change to the organization
or the environment, while influence fosters attitudes, behaviours or decisions of
individuals or groups. However, the majority of the literature follows a tendency
to assume that social influencing would lead to shaping.
In this paper, we aim to distil subtle differences to distinguish between the
two concepts. This distinction is very important for a number of reasons. First,
it examines the validity of the implicit assumption that influencing is a sufficient
condition for shaping. Second, it is important when studying social sciences
using computational models (computational social sciences) to create models
that are not ambiguous about the social and psychological phenomena under
investigation. Third, it is vital to make it explicit that social influencing and
shaping work on different time scales; that is, social influencing is effective in
the short run, while shaping requires time and is more effective in the long run.
We will use a computational red teaming (CRT) model, whereby a red agent
acts on a blue team to influence, shape and sometimes distract the blue team.
The idea of the model should not be seen from a competition or conflict perspective.
The model is general, where the red agent can be an agent that promotes a
positive attitude within a team (a servant leader) or a social worker correcting
the social attitude of a gang.
2
Influence, Shaping, and Trust
Influence will be defined in this paper as: an operation which causes a short-term
effect in the attitude or behaviour of an individual, group or an organization.
Shaping, on the other hand, is defined as: an operation which causes a long-term
effect in the attitude or behaviour of an individual, group or an organization.
We use the more accurate term “effect” rather than the term “change” because sometimes social influence and shaping need to operate to maintain the
status quo. If agent A is attempting to influence agent B by changing B’s behaviour, agent C can attempt to counteract agent A’s influence by influencing
B to maintain its behaviour. Therefore, influence does not necessarily require a
change to occur.
In a strict mathematical sense, social influence would change the parameters
of a model, while social shaping would alter the constraint system.
To illustrate the difference, we will use a model, whereby a group of blue
agents attempts to follow a blue leader. A red agent has a self-interest to influence
or shape the blue team. All agents are connected through a network. Each agent,
excluding the blue leader and the red agent, attempts to align their behaviour
with their neighbours (the other agents it is connected to). The blue leader
attempts to reach the blue goal (a position in space). When all agents fully trust
each other, and in the absence of the red agent’s effect, it is expected that the
intention of the blue leader will propagate throughout the network. Over time,
the blue agents will also move towards the blue goal.
The red agent has a different goal. It aligns with the agents it is connected
to, but it also attempts to influence and/or shape them towards its own goal
(or away from the blue’s goal). Social influence by red is represented through
changing red movements; thus, affecting the movements of its neighbours. Social
shaping by red is represented through a network re-wiring mechanism. Connections in a network are the constraints on the network’s topology. By rewiring the
network, the red agent changes the constraints system. We abstract trust to a
scale between -1 and 1, whereby “1” implies maximum trust, while “-1” implies
maximum distrust. We do not differentiate in this paper between distrust and
mistrust. A “0” value is a neutral indicator that is equivalent to not knowing a
person.
3
The Model
An agent-based Boids [5] model is proposed in this paper. All agents are randomly initialized with random headings and locations. Agents are connected
through a network structure that allows information exchange among the agents.
In this setup, the neighborhood is mostly defined by the Hamming distance between two agents in the network, while sometimes it will be defined by the
proximity of one agent to another in the physical space. This latter definition is
the classic and default one used in the Boids model. Three Boids rules: cohesion,
alignment, and separation, are still adopted here. However, the first two vectors are sensed by network connections while the separation vector is perceived
through the Euclidean distance in the physical space. Each agent has a trust
factor value which decides how much this agent trusts the information perceived
from others. The first two vectors are scaled using the trust factor before an
agent’s velocity gets updated. An agent 100% trusts the cohesion and alignment
information from its linked neighbours when it has a trust factor of 1. When the
trust factor is -1, the agent totally believes that the information is deliberately
altered to the opposite value, and therefore the agent reverses the information
it receives.
In the model, there are three types of agents: blue leader (AB ), red agent
(AR ), and blue agent. The blue leader always moves towards a specific location/goal, and attempts to make the other blue agents follow it. The blue agent
is an agent that senses its neighbours through network links for both cohesion
and alignment but by Euclidean distance for separation, and then makes decisions on its new velocity. The red agent is a special agent in the model who
controls the level of noise (η) in the velocity and network connections for
influencing and shaping. Many blue agents can exist but there is only a single blue
leader and a single red agent.
Agents form a set A and live in a space (S) defined by a given width (spaceW )
and a given length (spaceL). All agents are connected by a random network. To
establish network connections, a probability (p) is defined. If we have n agents
including one blue leader, one red agent, and n − 2 blue agents, the network
can be denoted as G(n, p). A Goal (G) is a 2-D position that sits at one of the
corners of S. The blue leader always aims to move towards G. The area
surrounding G is denoted by δ. Once the blue leader enters this area, the position of G
changes. An agent has the following common attributes:
– Position (p), p ∈ S, is a 2-D coordinate.
– Velocity (v) is a 2-D vector representing the agent’s movement (heading and
speed) in a time unit.
– Cohesion Velocity (cohesionV ) of an agent is the velocity calculated based
on the mass of all agents that are connected to this agent.
– Alignment Velocity (alignmentV ) of an agent is the velocity calculated based
on the average velocity of all agents that are connected to this agent.
– Separation Velocity (separationV ) of an agent is the velocity that forces this
agent to keep a certain small distance from its neighbors and is based on the
Euclidean distance.
– Velocity weights:
• Cohesion weight (wc ): a scaler for the cohesion velocity.
• Alignment weight (wa ): a scaler for the alignment velocity.
• Separation weight (ws ): a scaler for the separation velocity.
– Trust factor (τ ) defines how much an agent trusts its connected neighbours.
It has an impact on both the cohesion velocity and alignment velocity but
not on the separation velocity.
All agents except the blue leader attempt to move towards their neighbours'
location, guided by the cohesion vector. The cohesion vector cohesionV_i of an
agent A_i is:

    cohesionV_i = ( Σ_{j=0}^{|N|} p_j ) / |N| − p_i        (1)

where |N| is the cardinality of the neighbourhood N.
The alignment velocity of an agent with its linked neighbours is:

    alignmentV_i = ( Σ_{j=0}^{|N|} v_j ) / |N| − v_i        (2)
The separation velocity of an agent is calculated using the neighbours N_d in the
spatial proximity of the agent as follows:

    separationV_i = − Σ_{j=0}^{|N_d|} ( p_j − p_i )        (3)
The trust factor of a blue agent is updated by the average trust factor of all
its connected neighbours (N) as below:

    τ_i = 0.5 × ( τ_i + ( Σ_{j=0}^{|N|} τ_j ) / |N| )        (4)
The blue leader and red agent’s trust factors are not updated.
The velocity of the blue leader always aims at the goal G at each step and it
is not affected by any other factor. The velocities at time t of all agents except
the blue leader are updated by Equation 5.
    v = v + τ × (w_c × cohesionV + w_a × alignmentV) + w_s × separationV        (5)

where cohesionV, alignmentV, and separationV are normalized vectors. The
position at time t of each agent can be updated by:

    p = p + v t        (6)
If an agent's new position is outside the bounds of S, the reflection rule is
applied. According to Equation 5, an agent adjusts its own velocity in compliance
with both cohesionV and alignmentV when it has a positive trust value, and
follows the opposite direction to the one suggested by cohesionV and alignmentV
when its trust factor is negative. If τ = 0, only the separation vector takes effect,
so that the agent doesn't follow anyone.
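To make the update concrete, here is a minimal sketch of Equations (4)-(6) for
a blue agent; the field and parameter names follow the model's notation but are
otherwise our own, and the input vectors are assumed already normalized.

    // One simulation step for a blue agent (sketch of Equations (4)-(6)).
    public class BlueAgent {
        double[] p = new double[2];   // position
        double[] v = new double[2];   // velocity
        double tau;                   // trust factor in [-1, 1]

        void step(double[] cohesionV, double[] alignmentV, double[] separationV,
                  double wc, double wa, double ws, double avgNeighbourTau) {
            tau = 0.5 * (tau + avgNeighbourTau);   // Eq. (4): trust update
            for (int k = 0; k < 2; k++) {
                // Eq. (5): trust scales cohesion and alignment, but not separation.
                v[k] += tau * (wc * cohesionV[k] + wa * alignmentV[k])
                      + ws * separationV[k];
                p[k] += v[k];                      // Eq. (6) with a unit time step
            }
        }
    }

With tau = 1 the agent follows its neighbours fully, with tau = -1 it reverses the
perceived cohesion and alignment, and with tau = 0 only separation acts, exactly
as described above.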
The red agent introduces heading noise, changes its network structure, or
does both at each time step. The heading noise can be propagated to blue agents
through the connections of the network to cause a deviation in some blue agents’
moving directions. Changes in the network structure may result in long term
effects on blue agents.
At each time step, the red agent updates its own velocity (v_RedAgent) by
Equation 5, and then Equation 7 uses a normal distribution N(0, η) to generate noise
and add it to Red's velocity:

    v_RedAgent = v_RedAgent + N(0, η)        (7)
Equation 6 is used to update the red agent’s position.
Furthermore, the red agent has the ability to re-configure network connections
by using the noise level η as a probability that governs the eventuality of
the following steps (a code sketch follows the list):
1. Randomly pick a blue agent (A_i) who is connected with the red agent.
2. Randomly pick another blue agent (A_j) who is connected with A_i.
3. Break the connection between A_i and A_j.
4. Connect the red agent with a randomly chosen blue agent A_j.
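A sketch of this rewiring move follows; the graph representation is a hypothetical
helper of ours, since the paper does not fix an implementation, and the code
simply follows the four listed steps.

    import java.util.*;

    // Minimal undirected graph and the red agent's rewiring move (a sketch).
    public class Rewiring {
        final Map<Integer, Set<Integer>> adj = new HashMap<>(); // node id -> neighbours
        final List<Integer> blueAgents = new ArrayList<>();     // ids of blue agents

        void addEdge(int a, int b) {
            adj.computeIfAbsent(a, k -> new HashSet<>()).add(b);
            adj.computeIfAbsent(b, k -> new HashSet<>()).add(a);
        }

        void removeEdge(int a, int b) {
            adj.getOrDefault(a, new HashSet<>()).remove(b);
            adj.getOrDefault(b, new HashSet<>()).remove(a);
        }

        // Attempted once per time step; the noise level eta acts as a probability.
        void maybeRewire(int red, Random rnd, double eta) {
            if (rnd.nextDouble() >= eta || blueAgents.isEmpty()) return;
            List<Integer> redNbrs = new ArrayList<>(adj.getOrDefault(red, Set.of()));
            if (redNbrs.isEmpty()) return;
            int ai = redNbrs.get(rnd.nextInt(redNbrs.size()));           // step 1
            List<Integer> aiNbrs = new ArrayList<>(adj.get(ai));
            if (aiNbrs.isEmpty()) return;
            int aj = aiNbrs.get(rnd.nextInt(aiNbrs.size()));             // step 2
            removeEdge(ai, aj);                                          // step 3
            int target = blueAgents.get(rnd.nextInt(blueAgents.size()));
            addEdge(red, target);                                        // step 4
        }
    }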
In this way, the connection between the red agent and blue agents changes
but the number of edges of the whole network remains as before. Long-term
effects of these topological updates are expected because the paths along which
information propagates change, and some blue agents may not get consistent
updates from their neighbours.
The blue leader attempts to lead other blue agents towards a given destination,
and the red agent attempts to disorient them through influence (deliberate
changes in heading) and/or shaping (deliberate re-configuration of the network
structure). Therefore, the "effect" in our model can be derived as how well the blue
agents follow the blue leader given the influence/shaping by the red agent. A
straightforward measure of this effect within our model is the average distance
between the blue agents and the goal when the blue leader reaches the goal. If this
distance is small, the blue agents followed the blue leader; if it is large, the red
agent distracted them.
During a single simulation run, the blue leader is tasked to reach the goal
multiple times. Each time it reaches the goal (an iteration), the location of the
goal changes. The effect is measured at the end of each iteration. The overall
effect of a simulation run is the average of all iterations except the first iteration, which is excluded to eliminate the warm-up period in the system resultant
from the random initialisation of agents. In summary, the effect is defined by
Equation 8.
    d̄ = (1/M) Σ_{m=1}^{M} ( (1/n) Σ_{i=1}^{n} d_{m,i} )        (8)
where, M is the number of iterations except the first one, n is the number of
blue agents, and dm,i is the distance between agent i and the goal location at
the m’th iteration.
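As a small illustration, the measure in Equation 8 amounts to a nested average;
a sketch under the assumption that the distances are already collected per
iteration:

    // Effect measure of Equation (8): d[m][i] holds the distance of blue agent i
    // to the goal at the end of scored iteration m (the warm-up iteration excluded).
    static double effect(double[][] d) {
        double sumOverIterations = 0.0;
        for (double[] iteration : d) {
            double sumOverAgents = 0.0;
            for (double dist : iteration) sumOverAgents += dist;
            sumOverIterations += sumOverAgents / iteration.length; // (1/n) Σ_i d_{m,i}
        }
        return sumOverIterations / d.length;                       // (1/M) Σ_m (...)
    }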
4
Experimental Design
Our aim is to evaluate the "effects" of the short-term influence and long-term
shaping caused by the red agent. Two stages are used for the experiments. The
first stage focuses on the noise of red agent where the effect from the trust factor
is minimised. The second stage investigates both the trust factor and the red
agent’s noise. The number of blue agents is 25, so there are a total of 27 agents
including a blue leader and a red agent. All agents’ initial locations are uniformly
initialised at random within S, where S is a square space with the dimension of
500×500. All agents’ headings are uniformly initialised at random with constant
speed of 1. All agents except the blue leader have the same velocity weights:
wc = 0.4 for cohesion, wa = 0.4 for alignment, and ws = 0.2 for separation. The
initial trust factor values of all blue agents are uniformly assigned at random
within the range of [−1, 1]. Connections among agents are created by a random
network G(n, 0.1), where n = 27.
In all experiments, two levels of noise (η − = 0.1 and η + = 0.9) are used. In
the first stage, to reduce the effect of the trust factor, it is assumed constant with
a value of 1 for all agents; that is, all blue agents trust any perceived information,
including the information arriving from red. In the second stage, the blue leader
has two trust levels: τB− = 0.2 and τB+ = 1, and the red agent has two levels of
trust: τR− = −0.2 and τR+ = −1.
Three scenarios are designed for investigating the red agent’s impact in our
experiments. In Scenario 1, the red agent introduces noise to its heading at
each time step thus this noise can immediately affect direct neighbours and can
be propagated through the network. In Scenario 2, the red agent changes the
network structure at each time step thus shaping the environment of the blue
agents. In Scenario 3, the red agent introduces noises to its heading and changes
network structures at each time step, so that both influence and shaping take
place in our model.
Using a 2^k factorial design [6], a total of 2 factor combinations is available at
the first stage and 8 at the second stage. Each combination has three individual
scenarios to study. Moreover, randomness exists in the initialisation phase;
therefore 10 runs for each factor combination and impact are desired in order to
obtain meaningful results. In summary, there are 60 (3 × 2 × 10) simulation runs
in the first stage and 240 (3 × 8 × 10) runs in the second stage. The results and
analysis are provided in the next section.
5
Results and Discussion
The results from the first stage experiments are presented in Table 1. The
distance between the blue agents and the goal location for each run (d̄) is listed
from the second column to the eleventh column, and the last three columns of the
table are the averages over 10 runs, the standard deviations and the confidence
intervals obtained at α = 0.05.
Table 1: Results of red agent's noise impact when τB = 1 and τR = 1

Scenario 1: Velocity Noise
          R1     R2     R3     R4     R5     R6     R7     R8     R9     R10    | Avg    STD   Conf.
η = 0.1   47.64  46.78  39.26  56.63  47.09  67.29  60.65  38.76  42.99  44.86  | 49.19  8.90  6.36
η = 0.9   145.90 155.75 168.04 199.94 171.94 243.61 162.15 144.08 103.82 117.94 | 161.32 37.65 26.93
e_η       98.27  108.97 128.78 143.31 124.85 176.33 101.50 105.33 60.83  73.08  | 112.12 31.70 22.68

Scenario 2: Network Changes
η = 0.1   45.71  59.28  47.39  54.31  58.14  69.65  50.27  44.35  43.90  48.83  | 52.18  7.78  5.56
η = 0.9   61.23  57.63  56.30  81.25  53.65  74.69  55.76  40.86  47.74  52.03  | 58.11  11.36 8.13
e_η       15.52  -1.65  8.91   26.94  -4.49  5.04   5.49   -3.49  3.85   3.20   | 5.93   9.00  6.44

Scenario 3: Velocity Noise and Network Changes
η = 0.1   45.34  47.09  65.90  54.05  51.93  84.91  54.66  41.11  43.88  52.21  | 54.11  12.23 8.75
η = 0.9   213.49 168.69 197.52 188.80 171.62 236.93 174.46 183.98 84.95  122.82 | 174.32 41.20 29.47
e_η       168.15 121.59 131.62 134.75 119.69 152.02 119.80 142.87 41.07  70.61  | 120.22 35.90 25.68
The results show that the more noise the red agent has in its velocity, the more
deviation from the goal is observed for the blue agents. Changes in the network
structure can lower the blue agents' performance, although the magnitude of this
decrease may not be significant. This is expected since the shaping operator
works on a slower timescale than the influence operator. When both influence
and shaping work together, the effect is more profound than in either of the
individual cases in isolation.
Table 2: Results of effects from red agent's noise impact and trust factors. The
confidence level is at 0.05.

Scenario 1: Velocity Noise
        R1      R2      R3      R4     R5      R6    R7      R8      R9      R10     | Avg     STD   Conf.
e_τB    -170.40 -146.54 -83.92  -55.00 -131.83 -6.05 -128.09 -110.66 -184.81 -152.86 | -117.02 55.01 39.35
e_τR    165.32  160.08  95.83   71.21  149.41  8.11  133.20  111.97  167.79  160.23  | 122.31  51.75 37.02
e_N     1.78    15.84   -9.18   6.22   6.74    3.40  -13.50  0.35    -14.64  -17.58  | -2.06   11.04 7.90

Scenario 2: Network Changes
e_τB    -122.99 -165.55 -144.34 -64.94 -168.15 -8.09 -154.27 -170.61 -189.59 -187.66 | -137.62 58.31 41.71
e_τR    142.72  177.17  154.10  77.45  186.62  24.03 164.25  172.21  171.13  172.02  | 144.17  52.25 37.38
e_N     42.90   1.75    12.81   19.08  8.50    -1.08 -0.29   -12.17  25.41   -15.03  | 8.19    17.62 12.61

Scenario 3: Velocity Noise and Network Changes
e_τB    -151.65 -140.59 -132.44 -35.97 -166.98 -7.63 -159.37 -171.89 -194.38 -163.85 | -132.47 61.12 43.72
e_τR    157.63  152.23  147.97  38.06  176.90  16.03 175.96  163.48  171.95  174.82  | 137.50  59.31 42.43
e_N     2.27    22.60   15.16   14.70  21.23   4.64  5.73    10.22   20.50   8.15    | 12.52   7.38  5.28
The results from the second stage are summarised in Table 2. Interestingly,
the trust factors of the blue leader and the red agent become critical, while the
red agent's noise does not. The model responds to the trust factor as expected.
When the blue leader has a higher level of trust, all blue agents follow it better
(smaller effect values can be observed from e_τB). On the other hand, the blue
agents demonstrate disordered behaviours (larger values of e_τR) if the red agent
has a small negative trust. These situations are found in all three scenarios, and
the effects of the trust factors are all statistically significant. Although the red
agent's noise has some effect on the blue agents, it is very small and can be
ignored when compared to the effect of the trust factor. Negative trust values
taken by the blue agents counteract the influence generated by both blue and
red agents.
The red agent's noise can have an impact on the blue agents' behaviours
through short-term influence (velocity) and long-term shaping (network structure)
if the effects of trust are low. When the trust factors are high, the situation
changes: trust has a significant impact on the blue agents' behaviours.
Figure 1 illustrates the agents’ footprints when the red agent impacts velocity,
network structure or both but with minimum trust effects. These footprints are
obtained from the first runs of all three scenarios in the first stage and the results
are listed in the column “R1” of Table 1.
Figures 1a, 1b, and 1c show that the blue leader leads the other agents towards
the goal well, as demonstrated by a few congested trajectories. When noise
increases, the blue agents' trajectories are disturbed, as shown in Figure 1d.
Figure 1e shows that changes in the network structure seem not to generate much
effect on the blue agents' behaviours. However, the blue agents' behaviours are
more random when red affects both velocity and network structure. This manifests
as disorderliness, with more scattered blue agents' footprints observable in the
last panel.
Fig. 1: Agents' footprints under Red agent's noise (η) impacts on velocity and
network with minimum trust effects (τB = 1 and τR = 1). Panels: (a) Scenario 1:
η = 0.1; (b) Scenario 2: η = 0.1; (c) Scenario 3: η = 0.1; (d) Scenario 1: η = 0.9;
(e) Scenario 2: η = 0.9; (f) Scenario 3: η = 0.9.

Figure 2 shows two examples of agents' footprints that are affected by trust
with small noise values (η = 0.1). The footprints presented in Figure 2a are
extracted from the first run of the third scenario in the second stage with τB =
0.2 and τR = −1. When the red agent's trust is -1, the negative effect on blue
agents' trust is continuously broadcast throughout the network. Eventually,
all blue agents will have a negative trust value close to -1, since the blue
leader doesn't have much power (τB = 0.2) to compete against the red agent.
This results in all blue agents distrusting each other. In this case, the blue agents
spread out to the boundaries. However, the reflection rule forces them back into
the given space, causing the blue agents to move around the corners after several
time steps, as shown in Figure 2a.
The right side of Figure 2 depicts agents' footprints extracted from the third
scenario in the second stage with τB = 1, τR = −0.2, and η = 0.1. Some
trajectory patterns can be observed in Figure 2b. In this case, the blue leader
has enough power to beat the red agent in terms of trust. All blue agents will
have positive trust passed on from the blue leader. Although the red agent
has influence on their velocity and connections, the blue agents are still capable
of following the blue leader to the goal locations (corners), as the trajectory
patterns show.
From the above examples and previous results, it can be concluded that trust
has a more significant impact on blue agents' behaviours than the effect of noise
caused by the red agent.
Fig. 2: Trust effects on agents' behaviours with red agent noise level at 0.1 in
Scenario 3. Panels: (a) τB = 0.2, τR = −1, and η = 0.1; (b) τB = 1, τR = −0.2,
and η = 0.1.

6 Conclusion
In this paper, we presented a CRT trust-based model which is an extension of
the classic Boids. The network topologies for situation awareness and a trust
factor on perceived information are introduced into our model. They provide
the necessary tools to investigate influence and shaping using CRT.
A number of experiments were designed and conducted in order to differentiate
the potential impacts of influence and shaping on a system. As the results of
the first experimental stage suggest, short-term influence can have an immediate,
easily observed effect on the system. The long-term shaping effects may
not be easily observable, although they do affect the system, especially when
shaping interacts with influence. However, trust among agents plays a critical role in
the model: based on our findings in the second experiment, trust dominates the
agents' behaviours regardless of noise.
Acknowledgement
This is a pre-print of an article published in International Conference in Swarm
Intelligence, Springer, Cham, 2016. The final publication is available at https:
//doi.org/10.1007/978-3-319-41000-5_2.
References
1. Les Servi and Sara Beth Elson. A mathematical approach to gauging influence
by identifying shifts in the emotions of social media users. Computational Social
Systems, IEEE Transactions on, 1(4):180–190, 2014.
2. Erez Shmueli, Vivek K Singh, Bruno Lepri, and Alex Pentland. Sensing, understanding, and shaping social behavior. Computational Social Systems, IEEE Transactions
on, 1(1):22–34, 2014.
3. Eleni Petraki and Hussein Abbass. On trust and influence: A computational red
teaming game theoretic perspective. In Computational Intelligence for Security
and Defense Applications (CISDA), 2014 Seventh IEEE Symposium on, pages 1–7.
IEEE, 2014.
4. Eric V Larson, Richard E Darilek, Daniel Gibran, Brian Nichiporuk, Amy Richardson, Lowell H Schwartz, and Cathryn Q Thurston. Foundations of effective influence
operations: A framework for enhancing army capabilities. Technical report, DTIC
Document, 2009.
5. Craig W Reynolds. Flocks, herds and schools: A distributed behavioral model. In
ACM SIGGRAPH computer graphics, volume 21, pages 25–34. ACM, 1987.
6. Douglas C Montgomery. Design and analysis of experiments. John Wiley & Sons,
2008.
Fuzzy Recommendations in Marketing Campaigns
S. Podapati, L. Lundberg, L. Skold, O. Rosander, J. Sidorova
Abstract
The population in Sweden is growing rapidly due to immigration. In this light, the issue of infrastructure upgrades to provide
telecommunication services is of importance. New antennas can
be installed at hot spots of user demand, which will require an
investment, and/or the clientele expansion can be carried out in a
planned manner to promote the exploitation of the infrastructure
in the less loaded geographical zones. In this paper, we explore
the second alternative. Informally speaking, the term Infrastructure-Stressing describes a user who stays in the zones of high
demand, which are prone to produce service failures, if further
loaded. We have studied the Infrastructure-Stressing population
in the light of their correlation with geo-demographic segments.
This is motivated by the fact that specific geo-demographic
segments can be targeted via marketing campaigns. Fuzzy logic
is applied to create an interface between big data, numeric
methods for processing big data and a manager.
Key Words: intelligent data mining, call detail records, fuzzy
membership function, geo-demographic segments, marketing.
1 Introduction
In the era of big data a mapping is desired from
multitudes of numeric data to a useful summary and
insights expressed in a natural language yet with a
mathematical precision [Zadeh, 2009]. Fuzzy logic
bridges from mathematics to the way humans reason and
the way the human world operates. Clearly, the "class of
all real numbers which are much greater than 1," or "the
class of beautiful women," or "the class of tall men," do
not constitute classes or sets in the usual mathematical
sense of these terms. Yet, “the fact remains that such
imprecisely defined notions play an important role in
human thinking, particularly in the domains of decisionmaking, abstraction and communication of information”
[Zadeh, 1965]. According to [Meyer, Zimmerman, 2011],
few works exist in business intelligence that use fuzzy
logic due to certain inherent difficulties of creating such
applications, and yet; despite them, such applications are
possible and very useful. The difficulties are as follows.
Firstly, many applications do not permit a trial and error
calibration, because the results of a fuzzy model cannot
easily be compared to the results of the behaviour of the
real system. Secondly, the operators, membership
functions, and inference methods have to properly map
the counterparts of human mind, in which they are very
often very context dependent. Thirdly, this is no longer a
mathematical problem but predominantly a problem of
psycholinguistics or similar disciplines, and unluckily this
part of science is much less developed than the
mathematics of fuzzy set theory. The main two types of
fuzzy technology are fuzzy knowledge based systems, e.g.
[Meyer, Zimmerman, 2011] and fuzzy clustering e.g.
[Tettamanzi et al, 2007].
Our idea is different from the above. Fuzzy logic
enables us to formulate a natural language interface
between big data, numeric analytics, and a manager,
hiding the complexity of data and methods. We summarize
data using linguistic hedges and formulate queries in a
natural language. Our specific application is targeting
different user segments to fill in the spare capacity of the
network in a network-friendly manner. In [Sidorova et al,
2017], the notion of Infrastructure-Stressing (IS) Client
was proposed together with the method to reveal such
clients from the customer base. Informally, IS clients use
the infrastructure in a stressing manner, such as always
staying in the zones of high demand, where the antennas
are prone to service failures, if further loaded. Being IS is
not only a function of user’s qualities, but also of the
infrastructure, and of the relative mobility of the rest of
the population. It is not possible to directly use this
knowledge in marketing campaigns, where the desired
action is to avoid recruiting IS clients, at least recruiting
them in disproportionally large quantities. This paper
aims to make the knowledge about IS users applicable in
marketing.
For marketing campaigns, geodemographic
segmentations (like ACORN or MOSAIC) are used, since
it is known how the segments can be targeted to achieve
the desired goal, as for example the promotion of a new
mobile service in certain neighbourhoods. The client's
home address determines the geodemographic category.
People of similar social status and lifestyle tend to live
close. Compared to conventional occupational measures
of social class, postcode classifications typically achieve
higher levels of discrimination, whether averaged across a
random basket of behaviors recorded on the Target Group
Index or surveys of citizen satisfaction with the provision
of local authority services. One of the reasons that
segmentation systems like MOSAIC are so effective is
that they are created by combining statistical averages for
both census data and consumer spending data in predefined geographical units [Grubesic, 2004]. The
postcode descriptors allow us powerful means to unravel
lifestyle differences in ways that are difficult to
distinguish using conventional survey research given
limited sources and sample size constraints [Webber and
Butler, 2007]. For example, it was demonstrated that
middle-class MOSAIC categories in the UK such as ‘New
Urban Colonists’, ‘Bungalow Retirement’, ‘Gentrified
Villages’ and ‘Conservative Values’, whilst very similar
in terms of overall social status, nonetheless register
widely different public attitudes and voting intentions,
show support for different kinds of charities and
preferences for different media as well as different forms
of consumption. Geodemographic categories correlate to
diabetes propensity [Levy, 2006], school students’
performance [Webber and Butler, 2007], broadband
access and availability [Grubesic, 2004] and so on.
Industries rely increasingly on geodemographic
segmentation to classify their markets when acquiring
new customers [Haenlein and Kaplan, 2009]. The
localized versions of MOSAIC have been developed for a
number of countries, including the USA and most of the
EU countries. The main geodemographic systems are in
competition with each other and the exact details of the
data and methods for generating lifestyles segments are
never released [Debenham et al., 2003] and, as a result,
the specific variables or the derivations of these variables
are unknown. To conclude, geodemographic segmentation
provides a collective view point, where the client is seen
as a representative of the population who live nearby.
However, in recent research, it has been shown that the
problem of resource allocation in the zones with nearly
overloaded and underloaded antennas is better handled
relying on individual modelling based on client’s
historical trajectories [Sagar, 2016]. The author completed
a user segmentation based on clustering of user
trajectories and demonstrated that network planning is
more effective, if trajectory-based segments are used
instead of MOSAIC segments.
Our aim is to explore ways to connect the
individual trajectory-based view of IS customers and the
geo-demographic view, in order to devise analytics
capable of completing the efficient analysis based on the
individual view point and yet be useful in marketing
campaigns in which geodemographic groups are targeted.
As a practical conclusion, we have compiled a ranked list
of the segments according to their propensity to contain
IS clients (expressed as a fuzzy notion) and crafted two
queries:
1. Which segments are more or less devoid of IS
clients? (attract them while the infrastructure
is still rather underloaded)
2. Which segments are highly devoid of IS clients?
(when the customer base becomes mature and
the infrastructure becomes increasingly
loaded)
The contributions of this paper are as follows. Firstly, we
have studied the correlation between IS users and the
MOSAIC segments. For different contexts, we have
completed candidate rankings of geodemographic
segments; given an absence of other preferences, the
top-tier segments are preferable. Which ranking out of
several candidates is taken depends on the hedge
(degree) calculated for the intensity of infrastructure
exploitation. Secondly, the verification/simulation of the
resulting fuzzy recommendations guarantees the absence
of false negatives, such as concluding that certain
segments can be hired from when in fact that would lead
to a service failure at some place in the network.
The rest of the paper is organised as follows. Section 2
describes the data set. In Section 3 the proposed
methodology is explained. In Section 4, experiments are
reported, and finally the conclusions are drawn and
discussion is held in Section 5.
2
Data Set
The study has been conducted on anonymized geospatial
and geo-demographic data provided by a Scandinavian
telecommunication operator. The data consist of CDRs
(Call Detail Records) containing historical location data
and calls made during one week in a midsized region in
Sweden with more than one thousand radio cells. Several
cells can be located on the same antenna. The cell density
varies in different areas and is higher in city centers,
compared to rural areas. The locations of 27010 clients
are registered together with which cell serves the client.
The location is registered every five minutes. In the
periods when the client does not generate any traffic, she
does not make any impact on the infrastructure and such
periods of inactivity are not included in the resource
allocation analysis. Every client in the database is labeled
with her MOSAIC segment. The fields of the database
used in this study are:
• the cell IDs, with information about which user was
served at different time points,
• the location coordinates of the cells,
• the time stamps of every event (5 minute resolution),
• the MOSAIC geodemographic segment for each client,
and
• the Telenor geodemographic segment for each client.
There are 14 MOSAIC segments present in the database;
for their detailed description the reader is referred to
[InsightOne]. The six in-house Telenor segments were
developed by Telenor in collaboration with InsightOne,
and, to our best knowledge, though not conceptually
different from MOSAIC, they are especially crafted for
telecommunication businesses.
3 A Link between IS and Geo-demographic
Segments
3.1 Notation and Definitions
Definition (in the style of [Zadeh, 1965]). A fuzzy set A
in X is characterized by a membership function fA(x),
which associates with each point in X a real number in the
interval [0, 1], with the value of fA(x) at x representing the
"grade of membership" of x in A. For the opposite quality:
f_notA(x) = 1 − f_A(x).
Fuzzy membership scores reflect the varying degree to
which different cases belong to a set:
• Under the six-value fuzzy set, there are six tiers of
membership: 1: fully in; 0.9: mostly but not fully in;
0.6: more or less in; 0.4: more or less out; 0.1:
mostly but not fully out; 0: fully out.
• Thus, fuzzy sets combine qualitative and quantitative
assessment: 1 and 0 are qualitative assignments
(“fully in” and “fully out”, respectively); values
between 0 and 1 indicate partial membership. The
0.5 score is also qualitatively anchored, for it
indicates the point of maximum ambiguity
(fuzziness) in the assessment of whether a case is
more “in” or “out” of a set.
For a comprehensive guide of good practices in fuzzy
logic analysis in social sciences the reader is referred to,
for example, [Ragin, 2009].
Interpretation:
• Rather will be added to a quality A if the square root
of its membership function, f_A(x)^(1/2), is close to 1.
• Very will be added to a quality A if the square of its
membership function, f_A(x)^2, is close to 1.
• Extremely will be added to a quality A if f_A(x)^3 is
close to 1.
The interpretation follows from the application of the
hedge operator, which adds quantifiers such as very,
rather, extremely, to the membership function, for
example f_veryA(x) = f_A(x)^2 [Zadeh, 1972]. Then, given the
new membership function, the same principle applies: the
closer to 1, the higher the degree of membership. Inside
a tier, the hedged membership functions obey an inclusion
relation: extremely f ⊂ very f ⊂ rather f.
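A minimal sketch of these hedge operators, with the 0.9 tier
threshold used later in the paper, could read as follows; the
class name and the sample value are ours.

    public class Hedges {
        static double rather(double f)    { return Math.sqrt(f); } // f^(1/2)
        static double very(double f)      { return f * f; }        // f^2
        static double extremely(double f) { return f * f * f; }    // f^3

        public static void main(String[] args) {
            double fIF = 0.93; // membership in "infrastructure-friendly"
            // A segment enters a hedged tier when the hedged membership is close to 1.
            System.out.println("rather IF:    " + (rather(fIF) >= 0.9));    // true
            System.out.println("very IF:      " + (very(fIF) >= 0.9));      // false
            System.out.println("extremely IF: " + (extremely(fIF) >= 0.9)); // false
        }
    }

For f_IF = 0.93 this yields rather IF but not very IF, matching
the pattern of the segment table given later in the paper.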
3.2 Query Formulation
As mentioned above, within the same geo-demographic
segment, the clients differ with respect to being IS or not.
When the infrastructure is not overloaded, that is, the
recent historical load is still significantly smaller than the
capacity, then virtually any client is welcome. The
following two queries are formulated reflecting the desire
to apply context-dependent strategies. As the
infrastructure becomes more loaded, the operator wants to
be more discriminative with respect to the degree of the
IS/IF quality. In its turn, “loaded” for an antenna is
naturally formulated as a fuzzy variable:
  f_loaded(antenna j) = max_{all t} { load(j,t) × capacity(antenna j)^(−1) }.
The f_loaded(antenna j) is calculated in man units. The load
in the analyzed zone is set to the maximum peak of
demand registered:
  f_loaded(infrastructure) = max_{all antennas j} { f_loaded(antenna j) }.
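As a sketch, and assuming the load history is available as a
per-antenna time series, these two definitions translate directly:

    public class LoadDegree {
        // Peak of load/capacity over time for one antenna.
        static double loadedAntenna(double[] loadOverTime, double capacity) {
            double peak = 0.0;
            for (double load : loadOverTime) peak = Math.max(peak, load / capacity);
            return peak;
        }

        // Maximum over all antennas in the analyzed zone.
        static double loadedInfrastructure(double[][] loads, double[] capacities) {
            double max = 0.0;
            for (int j = 0; j < loads.length; j++)
                max = Math.max(max, loadedAntenna(loads[j], capacities[j]));
            return max;
        }
    }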
Queries:
• Which segments to target, provided that rather IF
are acceptable clientele?
• Which segments to target, provided that only very
IF are wanted?
Depending on the load, different rankings of segments
become available. If initially some segments were in the
same tier, for example very IF segments, some of them
fall out of the tier as the hedge operator is applied and the
value of the membership function is raised to a higher
power (e.g., cubed for extremely IF). The context in which
to apply Query 1 or 2 becomes clear by comparing the
network load (measured as the network peak load) to the
network capacity. The method to obtain fuzzy heuristics is
summarized as the sequence of the following steps, depicted
as a flow chart in Figure 1 and formalized as Algorithm 1.
Figure 1: The flow chart for the calculation of a fuzzy
recommendation for a marketing campaign. Stages: reveal IS;
analysis of the recent load; fuzzy modeling of the load; rank the
segments into different tiers for different contexts; calculate
which hedge should apply; a recommendation in a natural
language; simulation/verification of the expected effect on the
infrastructure; fuzzy recommendation.
Step 1: The IS clients in the customer base are revealed
with the method [Sidorova et al, 2017] (the algorithm is
reproduced as the function reveal_IS clients in the
Algorithm 1), and each client is labeled with the IS/notIS
descriptor.
Step 2: The propensity of a segment to contain IS clients
is defined as the frequency of IS clients among its
members and it is calculated from the data:
  f_IS(segment_i) = frequency_IS(segment_i).
Infrastructure-Friendly (IF) is set to:
  f_IF(segment_i) = 1 − f_IS(segment_i).
Step 3: The ranking of segments is carried out with
respect to their IF quality: for all segments i,
f_ratherIF(segment_i), f_veryIF(segment_i), and f_extremelyIF(segment_i).
Within a context, the segments fall into the different tiers
(corresponding to one of the fuzzy values): “fully in”,
“mostly but not fully in”, “more or less in”, and so on.
Step 4: Locally for the region under analysis, the
infrastructure is assessed as loaded, very loaded, and
extremely loaded, in order to map the context into a
corresponding hedge. Among several candidate rankings,
the one for a corresponding hedge is selected (as a leap of
faith further verified in the next section).
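Before the formal pseudocode, a hedged Java sketch of Steps 2
and 3 may help; the Client record and all names are ours, with
the IS flag assumed to come from Step 1 and the 0.9 threshold
mirroring the tier test above.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SegmentRanking {
        // Hypothetical record: segment name plus the IS label from Step 1.
        record Client(String segment, boolean isStressing) {}

        // Returns, per segment, the hedged memberships {rather, very, extremely} IF.
        static Map<String, double[]> rank(List<Client> clients) {
            Map<String, int[]> counts = new HashMap<>(); // segment -> {IS count, total}
            for (Client c : clients) {
                int[] t = counts.computeIfAbsent(c.segment(), k -> new int[2]);
                if (c.isStressing()) t[0]++;
                t[1]++;
            }
            Map<String, double[]> hedged = new HashMap<>();
            for (Map.Entry<String, int[]> e : counts.entrySet()) {
                double fIF = 1.0 - (double) e.getValue()[0] / e.getValue()[1];
                hedged.put(e.getKey(),
                           new double[] { Math.sqrt(fIF), fIF * fIF, fIF * fIF * fIF });
            }
            return hedged;
        }
    }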
Algorithm
1:
computation
recommendation heuristic.
of
the
fuzzy
Variables:
• clientSet: set of with IDs of clients;
• I: the set with geo-demographic segments
{segment1, …, segmentk};
• D: the mobility data for a region that for
each user contain client’s ID, client’s geodemographic segment, time stamps when the client
generated traffic, and which antenna served the
client.
• Si: the number of subscribers that belong to a
geo-demographic segment i;
• Σall i Si,t,j : the footprint, i.e. the number of
subscribers that belong to a geo-demographic
segment i, at time moment t, who are registered
with a particular cell j;
• Cj: the capacity of cell j in terms of how many
persons it can safely handle simultaneously;
• x: the vector with the scaling coefficients
for the geo-demographic segments or other groups
such as IS clients;
• xIS: the coefficent for the IS segment from the
vector x;
• Nt,j= number of users at cell j at time t;.
Input: data set D: <userID, time stamp t, cell j>.

function reveal_ISclients {
[I. Characterize each user with respect to her relative mobility.]
for each userID {
    trajectory_ID = cell_t1, …, cell_t2016;
    relativeTrajectory_ID = N_t1,j, …, N_t2016,j;
    sortedTrajectory_ID = sort_decreasing_order(relativeTrajectory_ID);
    topHotSpots_ID = Σ_k=1..100 (top 5%) sortedTrajectory_ID[k];
    userTopHotSpots = <userID, topHotSpots_ID>
}
rankedUserList = sort_decreasing_order(userTopHotSpots);
[II. Initialization.]
x_stressing = 0;
setStressingUsers = ∅;
[III. Reveal the infrastructure-stressing clients.]
while (x_stressing = 0) do {
    tentativeStressingUsers = head_1%(rankedUserList);
    setFriendlyUsers = bottom_1%(rankedUserList);
    otherUsers = rankedUserList - tentativeStressingUsers - setFriendlyUsers;
    [Confirm the tentative labeling via combinatorial optimization.]
    I = {stressing, medium, friendly};
    {x_stressing, x_medium, x_friendly} = combinatorial_optimization(I, D);
    if (x_stressing = 0) then {
        tentativeSetStressingUsers = the field userID from tentativeStressingUsers;
        setStressingUsers = setStressingUsers + tentativeSetStressingUsers;
        D = D - D_stressing
    }
} [end of while]
for id in <userIDs> do {
    if (id ∈ setStressingUsers) then label(id, "IS")
    else label(id, "notIS")
} [end loop on id in <userIDs>]
} [end reveal_ISclients]

function combinatorial_optimization(I, D) {
solve:
    Maximize Σ_{i ∈ {IF, other, IS}} Si xi,
subject to:
    for all j, t: Σ_{i ∈ {IF, other, IS}} Si,t,j xi ≤ Cj;
returns {xIF, xother, xIS}.
}

[Main: compute the fuzzy descriptors.]
reveal_ISclients;
for i in I {
    ratherIF[i] = false; veryIF[i] = false; extremelyIF[i] = false;
    degreeIS = frequency(userID_IS, i);
    degreeIF = 1 - degreeIS;
    if (degreeIF^(1/2) ≥ 0.9) then ratherIF[i] = true;
    if (degreeIF^2 ≥ 0.9) then veryIF[i] = true;
    if (degreeIF^3 ≥ 0.9) then extremelyIF[i] = true
}
Output: arrays ratherIF[], veryIF[], extremelyIF[].
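The function combinatorial_optimization solves a small linear program. The following is a sketch using scipy.optimize.linprog; the array names and shapes are our own illustrative assumptions about how the footprint data are laid out, not the paper's implementation.

```python
# A sketch of combinatorial_optimization as a linear program.
import numpy as np
from scipy.optimize import linprog

def combinatorial_optimization(S, footprint, C):
    """S: (3,) sizes of the three groups; footprint: (3, T, J) with S_{i,t,j};
    C: (J,) cell capacities. Maximizes sum_i S_i * x_i subject to
    sum_i S_{i,t,j} * x_i <= C_j for all t, j."""
    n_groups, T, J = footprint.shape
    c = -S                                           # linprog minimizes, so negate
    # One inequality row per (t, j) pair
    A_ub = footprint.transpose(1, 2, 0).reshape(T * J, n_groups)
    b_ub = np.tile(C, T)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_groups)
    return res.x                                     # (x_IF, x_other, x_IS)
```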
3.3 Query Simulation
In the above, when deciding which context should be applied, we relied on an intuitive rule: if the load is <hedge X> big, then <hedge X> IS segments are suitable to hire clients from (where hedge X is, for example, "very"). This rule does not necessarily hold, since the calibration of the fuzzy functions depends on the expected outcome of the campaign and its consequent effect on the infrastructure. For example, a campaign can attract 300 new clients or 1500 new clients. To avoid false negatives, the fuzzy heuristic is subjected to a validation procedure, which simulates the impact of the expected result on the infrastructure.
I. It throws a warning if some antenna is overloaded. That is, if the expected footprint of a segment violates the restriction
α Si,j,t ≤ Cj
for some segment i, some antenna j and some time moment t, where α is a scaling coefficient: α = (expected number of new clients) × (total number of clients)^(-1). This is a justifiable approximation, because the consensus in the literature is that there is high predictability in user trajectories within different segments, e.g. [Song et al, 2010], [Lu et al, 2013].
II. It recalculates the hedge for "being loaded".
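A sketch of the overload check in step I is given below; the array layout (footprint[i, t, j] = S_{i,t,j}) is an illustrative assumption.

```python
# A sketch of the overload warning: flags (i, t, j) where alpha * S_{i,t,j} > C_j.
import numpy as np

def overload_warnings(footprint, C, expected_new_clients, total_clients):
    alpha = expected_new_clients / total_clients     # scaling coefficient
    violated = alpha * footprint > C[None, None, :]  # restriction alpha*S <= C broken
    return np.argwhere(violated)                     # (i, t, j) triples to warn about
```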
4 Experiment
1. Reveal the IS clients. Applying the algorithm to reveal the IS clients, we have added a field to the data set with the label IS or notIS as a descriptor for each client.
2. Calculate the degree of infrastructure-friendliness for each segment. The charts with the number of customers in each MOSAIC and Telenor segment in the geographic region are presented in Figures 2 and 3. In the whole customer base, 7% of subscribers were revealed to be IS [Sidorova et al, 2017]. We have obtained the distribution of the IS clients within the MOSAIC and Telenor segments and depicted them in Figures 4 and 5, respectively. The degree of infrastructure-friendliness is reported in Tables 1 and 2, for the MOSAIC and Telenor segments, respectively.
Figure 2: The distribution of users across Telenor segments in the region.
Figure 3: The distribution of users across MOSAIC segments in the region.
Figure 4: The percent of IS clients in different MOSAIC categories.
Figure 5: The percent of IS clients in different Telenor segments.
II. Reasoning behind the queries. Tables 1 and 2 simulate the reasoning behind the query results for different contexts (codified with a hedge) for the MOSAIC and Telenor segments, respectively. Each of the 14 MOSAIC classes qualifies as rather IF, i.e., has fIF(i)^(1/2) > 0.9. Once the customer base becomes larger and the spare capacity diminishes, only very IF segments will be wanted, which are those with fIF(i)^2 > 0.9. Out of those, nine segments qualify as very IF and five segments qualify as extremely IF (fIF(i)^3 > 0.9). The customer population was subjected to the same analysis with respect to the Telenor segmentation. As follows from Table 2, each of the six Telenor segments is rather friendly, and there are four very IF and three extremely IF segments, respectively.
segment | fIF(i) | fIF(i)^1/2 | rather IF? | fIF(i)^2 | very IF? | fIF(i)^3 | extremely IF?
A  | 0.96 | 0.97 | yes | 0.92 | yes | 0.88 | no
B  | 0.98 | 0.98 | yes | 0.96 | yes | 0.94 | yes
C  | 0.93 | 0.96 | yes | 0.86 | no  | 0.79 | no
D  | 0.92 | 0.95 | yes | 0.84 | no  | 0.77 | no
E  | 0.96 | 0.97 | yes | 0.92 | yes | 0.88 | no
F  | 0.92 | 0.95 | yes | 0.86 | no  | 0.79 | no
G  | 0.93 | 0.96 | yes | 0.86 | no  | 0.79 | no
H  | 0.96 | 0.97 | yes | 0.92 | yes | 0.88 | no
I  | 0.97 | 0.98 | yes | 0.94 | yes | 0.91 | yes
J  | 0.92 | 0.95 | yes | 0.86 | no  | 0.79 | no
K  | 0.97 | 0.98 | yes | 0.94 | yes | 0.91 | yes
L  | 0.98 | 0.98 | yes | 0.96 | yes | 0.94 | yes
M  | 0.96 | 0.97 | yes | 0.92 | yes | 0.88 | no
N  | 0.95 | 0.97 | yes | 0.9  | yes | 0.85 | no
Table 1: The reasoning behind the query results for the MOSAIC segments.

segment | fIF(i) | fIF(i)^1/2 | rather IF? | fIF(i)^2 | very IF? | fIF(i)^3 | extremely IF?
CA | 0.94 | 0.97 | yes | 0.88 | no  | 0.82 | no
MM | 0.99 | 0.89 | yes | 0.98 | yes | 0.97 | yes
QA | 0.96 | 0.92 | yes | 0.92 | yes | 0.88 | no
T  | 0.98 | 0.87 | yes | 0.96 | yes | 0.94 | yes
CC | 0.92 | 0.8  | yes | 0.86 | no  | 0.79 | no
VA | 0.97 | 0.91 | yes | 0.94 | yes | 0.91 | yes
Table 2: The reasoning behind the query results for the Telenor segments.
6 Results
When it comes to designing strategies for accommodating many more clients, being IS-prone is an important quality of a segment. We have studied the correlation between IS users and the MOSAIC segments, motivated by the fact that we can target the MOSAIC segments in marketing campaigns. For different contexts, we have computed candidate rankings of the geodemographic segments and, given an absence of other preferences, the top-tier segments are preferable. Which ranking out of the several candidates is taken depends on the hedge calculated for the intensiveness of the infrastructure exploitation. The verification/simulation guarantees no false negatives, i.e., cases where certain segments are declared safe to hire from when in fact doing so would lead to a service failure at some place in the network.
Acknowledgments
References
Debenham, J., Clarke, G. and Stillwell, J., 2003. Extending geodemographic classification: a new regional prototype. Environment and Planning A, 35(6), pp. 1025-1050.
Grubesic, T.H., 2004. The geodemographic correlates of broadband access and availability in the United States. Telematics and Informatics, 21(4), pp. 335-358.
Haenlein, M. and Kaplan, A.M., 2009. Unprofitable customers and their management. Business Horizons, 52(1), pp. 89-97.
InsightOne. MOSAIC lifestyle classification for Sweden. http://insightone.se/en/mosaic-lifestyle/. Retrieved on Apr 15, 2017.
Levy, Jonathan C. How to market better health -- diabetes. A Dr. Foster Community Health Workbook. London: Dr. Foster.
Lu, X., Wetter, E., Bharti, N., Tatem, A.J. and Bengtsson, L., 2013. Approaching the limit of predictability in human mobility. Scientific Reports, 3.
Meyer, A. and Zimmermann, H.J., 2011. Applications of fuzzy technology in business intelligence. International Journal of Computers Communications & Control, 6(3), pp. 428-441.
Pappalardo, L., Simini, F., Rinzivillo, S., Pedreschi, D., Giannotti, F. and Barabási, A.L., 2015. Returners and explorers dichotomy in human mobility. Nature Communications, 6.
Ragin, C.C., 2009. Qualitative comparative analysis using fuzzy sets (fsQCA). In: Rihoux, B.
Sagar, S., Skold, L., Lundberg, L. and Sidorova, J. Trajectory segmentation for a recommendation module of a customer relationship management system. The 2017 Int. Symposium on Advances in Smart Big Data Processing. In press. Retrieved at https://www.researchgate.net/publication/316657841_Trajectory_Segmentation_for_a_Recommendation_Module_of_a_Customer_Relationship_Management_System
Sidorova, J., Skold, L. and Lundberg, L. Revealing infrastructure-stressing clients in the customer base of a Scandinavian operator using Telenor mobility data and HPI Future SoC Lab hardware resources. Hasso Plattner Institute. Technical report. In press. Available online on ResearchGate.
Sidorova, J., Skold, L., Rosander, O. and Lundberg, L., 2017. Discovering insights in telecommunication business from an interplay of geospatial and geo-demographic factors. Manuscript under review.
Song, C., Qu, Z., Blumm, N. and Barabási, A.L., 2010. Limits of predictability in human mobility. Science, 327(5968), pp. 1018-1021.
Tettamanzi, A., Carlesi, M., Pannese, L. and Santalmasi, M., 2007. Business intelligence for strategic marketing: predictive modelling of customer behaviour using fuzzy logic and evolutionary algorithms. Applications of Evolutionary Computing, pp. 233-240.
Webber, R. and Butler, T., 2007. Classifying pupils by where they live: how well does this predict variations in their GCSE results? Urban Studies, 44(7), pp. 1229-1253.
Zadeh, L., 2014. Fuzzy logic and beyond - a new look. In: Zadeh, L., King-Sun, F. and Konichi, T. (Eds.), Fuzzy Sets and Their Applications to Cognitive and Decision Processes: Proceedings of the US-Japan Seminar on Fuzzy Sets and their Applications, held at the University of California, Berkeley, California, July 1-4. Academic Press.
Zadeh, L.A., 1965. Fuzzy sets. Information and Control, 8(3), pp. 338-353.
Zadeh, L.A., 1972. A fuzzy-set-theoretic interpretation of linguistic hedges.
Targeted Random Projection for Prediction from High-Dimensional Features
arXiv:1712.02445v1 [] 6 Dec 2017
Minerva Mukhopadhyay
David B. Dunson
Abstract
We consider the problem of computationally-efficient prediction from high-dimensional and highly correlated predictors in challenging settings where accurate variable selection is effectively impossible. Direct application of penalization or Bayesian methods implemented with Markov chain Monte Carlo can be computationally daunting and unstable. Hence, some type of dimensionality reduction prior to statistical analysis is in order. Common solutions include application of screening algorithms to reduce the regressors, or dimension reduction using projections of the design matrix. The former approach can be highly sensitive to threshold choice in finite samples, while the latter can have poor performance in very high-dimensional settings. We propose a TArgeted Random Projection (TARP) approach that combines positive aspects of both strategies to boost performance. In particular, we propose to use information from independent screening to order the inclusion probabilities of the features in the projection matrix used for dimension reduction, leading to data-informed sparsity. We provide theoretical support for a Bayesian predictive algorithm based on TARP, including both statistical and computational complexity guarantees. Examples for simulated and real data applications illustrate gains relative to a variety of competitors.
Some key words: Bayesian; Dimension reduction; High-dimensional; Large p, small n; Random projection; Screening.
Short title: Targeted Random Projection
1 Introduction
In many applications, the focus is on prediction of a response variable y given a massive-dimensional vector of predictors x = (x1, x2, ..., xp)'. Often enormous numbers of possibly collinear predictors x are collected, and the sample size n is modest relative to p, so that p >> n. In such situations, it is common to assume that x can be replaced by a much lower-dimensional feature vector comprised of sparse linear combinations of the original predictors. However, accurate learning of the precise lower-dimensional structure is often not possible, as the data simply do not contain sufficient information, even putting aside the intractable computational problem.
There is a large literature on variable selection in p >> n settings. Much of the focus has been on penalized optimization-based approaches, with some popular methods including LASSO (Tibshirani, 1996), SCAD (Fan and Li, 2001), elastic net (Zou and Hastie, 2005), the Dantzig selector (Candes and Tao, 2007), and MCP (Zhang et al., 2010). There is
also a large literature on Bayesian approaches that attempt to characterize uncertainty
in variable selection. Most approaches use some variation on the spike and slab prior
(e.g. Ishwaran and Rao (2005)) or continuous shrinkage priors concentrated at zero with
heavy tails (e.g. Carvalho et al. (2009)). There is an increasingly rich theoretical literature
providing asymptotic support for such methods in the p → ∞ as n → ∞ setting. However,
positive results rely on strong assumptions in terms of a high signal-to-noise ratio and low
linear dependence in the columns of the design matrix.
We are interested in settings where practical performance of the above methods is poor,
due to a combination of statistical and computational intractability. In such settings, it
is common to use variable screening as a pre-processing step. In particular, independent
screening tests for association between y and each xj separately, and selects predictors with
the largest or most significant associations for second stage analysis. In general, screening
can be guaranteed asymptotically to select a superset of the ‘true’ predictors (Fan et al.,
2009). When the number of predictors is sufficiently reduced, one can apply a simple
maximum likelihood approach, penalized optimization, or Bayesian Markov chain Monte
Carlo (MCMC) algorithms in the second stage. However, when the predictors are highly
correlated and/or the true data generating process does not exhibit strong sparsity with
a high signal-to-noise ratio, it may be necessary to use a very conservative threshold for
the measure of marginal association, limiting the dimensionality reduction occurring in the
first stage.
As an alternative to variable screening, there is a rich literature on using random projections (RPs) to reduce data dimensionality prior to statistical analysis. For example, compressive sensing uses RPs to reduce storage and communication costs in signal processing. By exploiting sparsity, the original signal can be reconstructed from the compressive measurements with high accuracy (see, e.g., Candes and Tao (2005), Donoho (2006), Davenport et al. (2010)). Usual compressive sensing acts in a row-wise manner, reducing the dimensionality of the design matrix from n × p to m × p, with m << n. This does not solve the big p problem. There is a relatively smaller literature on column-wise compression, which instead reduces the design matrix from n × p to n × m, with m << p, while providing bounds on predictive errors (see, e.g., Maillard and Munos (2009), Fard et al. (2012), Kabán (2014), Thanei et al. (2017), Guhaniyogi and Dunson (2015), Pettenuzzo et al. (2016)). Guhaniyogi and Dunson (2015) concentrate on approximating predictive distributions in Bayesian regression. The above literature on RPs focuses primarily on random matrices with i.i.d. elements.
When predictors are very high-dimensional, existing RP methods can fail as they tend
to include many unimportant predictors in each linear combination, diluting the signal.
Potentially, one can attempt to improve performance by estimating the projection matrix,
but this results in a daunting computational and statistical problem. Alternatively, we
propose a TArgeted Random Projection (TARP) approach, which includes predictors in
the RP matrix with probability proportional to their marginal utilities. These utilities are
estimated quickly in a first stage using an independent screening-type approach. To reduce
sensitivity of the results to the different realizations of the RP matrices, we aggregate over
multiple realizations. TARP can be viewed as a type of rapid preconditioning, enabling
improved predictive performance in high-dimensional settings. Compared with applying
RPs after screening out predictors, TARP has the advantage of removing sensitivity to
threshold choice by using a soft probabilistic approach.
In Section 2, we propose the methodology including the computational algorithm, choice
of different tuning parameters and an analysis of computational complexity. We focus on
generalized linear models (GLMs) for ease in presentation and development of a strong
theoretical justification, although TARP can be applied directly in general settings to provide lower-dimensional features that can then be used in any predictive algorithm (random
forests, Gaussian processes, neural nets, etc). Section 3 provides theory on convergence
rates for the predictive distribution of y. Section 4 contains a simulation study comparing
performance with a variety of competitors. In Section 5, we apply TARP to a variety of
real data applications including a genomics dataset with millions of predictors. Section 6
contains a brief discussion, and proofs are included in an Appendix.
2 The Proposed Method
Let Dn = {(yn ; Xn ) : yn ∈ Rn , Xn ∈ Rn×pn } denote the dataset consisting of n observations
on pn predictors x1 , x2 , . . . , xpn and a response y, and (yi ; xi ) denote the ith data point,
i = 1, 2, . . . , n. Suppose that the data can be characterized by a generalized linear model
(GLM). The density of y is related to the predictors as
$$f(y_i \mid \beta, \sigma^2) = \exp\left[\frac{1}{d(\sigma^2)}\left\{y_i\, a(x_i'\beta) + b(x_i'\beta) + c(y_i)\right\}\right], \qquad (1)$$
where a(·) and b(·) are continuously differentiable functions, a(·) has non-zero derivative
and d(·) is a non-zero function. The vector of regression coefficients β ∈ Rpn , and σ 2 ∈ R+ is
the scale parameter. We approximate the density of y in a compressed regression framework
as follows:
$$f(y_i \mid \theta, R_n, \sigma^2) = \exp\left[\frac{1}{d(\sigma^2)}\left\{y_i\, a((R_n x_i)'\theta) + b((R_n x_i)'\theta) + c(y_i)\right\}\right].$$
Here R_n ∈ R^{m_n × p_n} is a random projection matrix, θ ∈ R^{m_n} is the vector of compressed regression coefficients, and m_n << p_n. We discuss the choice of the random matrix R_n in Section 2.1, and illustrate the method in detail in Section 2.2.
Priors. We assume that the covariates are standardized. Taking a Bayesian approach to inference, we assign priors to θ and σ². The vector of compressed regression parameters θ is assigned a N_{m_n}(0, σ²I) prior given σ², where 0 is a vector of zeros and I is the identity matrix. The scale parameter σ² is assigned an Inv-Gamma(a_σ, b_σ) prior, with a_σ, b_σ > 0. The Normal-Inverse Gamma (NIG) prior is a common choice of prior for GLMs. In the special case of a Gaussian likelihood, this prior is conjugate, and the posterior and predictive distributions are available in analytic form.
2.1 Choice of the projection matrix
The projection matrix R_n embeds X_n into a lower-dimensional subspace. If p_n is not large, the best linear embedding can be estimated using a singular value decomposition (SVD) of X_n. The projection of X_n onto the space spanned by the singular vectors associated with the first m_n singular values is guaranteed to be closer to X_n, in an appropriate sense, than any other m_n-dimensional projection. However, if p_n is very large with p_n >> n, then it is problematic to estimate the projection, both computationally and statistically, and random projection (RP) provides a practical alternative. If an appropriate RP matrix is chosen, then due to Johnson-Lindenstrauss (JL) type embedding results, distances between sample points are maintained (see Dasgupta and Gupta (2003), Achlioptas (2003)).
Our focus is on modifying current approaches by constructing RP matrices that incorporate sparsity in such a way that predictors x_j having relatively weak marginal relationships with y are less likely to be included in the matrix. In particular, the TArgeted Random Projection (TARP) matrices are constructed as follows:
$$\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_{p_n})' \text{ with } \gamma_j \overset{i.i.d.}{\sim} \mathrm{Bernoulli}(q_j), \quad q_j \propto |r_{x_j,y}|^{\delta} \text{ for some constant } \delta > 0, \quad R_{\bar\gamma} = O_{m_n \times (p_n - p_\gamma)} \text{ and } R_\gamma = R_n^*, \qquad (2)$$
where r_{x_j,y} is Pearson's correlation coefficient of x_j and y_n, q_j is the inclusion probability of x_j, R_γ and R_γ̄ are the sub-matrices of R_n with columns corresponding to the non-zero and zero values of γ, respectively, and R_n^* is the m_n × p_γ projection matrix, where p_γ = Σ_j γ_j.
We prioritize predictors based on their marginal utilities, q = (q1 , q2 , . . . , qpn )0 , and
consider a random subset of the predictors with inclusion probabilities q. This can be
viewed as a randomized version of independent screening. The selected subset is further
projected to a lower dimensional sub-space using Rn∗ . There are many possible choices of Rn∗
which can successfully reduce the dimension of the selected variables while having minimal
impact on prediction accuracy. Two predominant classes of such projection matrices are
based on partial SVD and random projections facilitating JL type embedding. We consider
both these choices, as described below.
Random projection. Each element R*_{k,j} of R_n^* is sampled independently from a three-point distribution:
$$R^*_{k,j} = \pm 1/\sqrt{2\psi} \text{ with probability } \psi \text{ each}, \quad 0 \text{ with probability } 1 - 2\psi, \qquad (3)$$
where ψ ∈ (0, 0.5) is a constant.
Projection matrices of this form are widely used due to their inter-point distance preservation property. Incorporating zero values facilitates data reduction and improves computational efficiency. We refer to the method that generates R_n^* in (2) from (3) as RIS-RP (Randomized Independent Screening-Random Projection).
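As an illustration of (2)-(3), the following is a minimal sketch of drawing a single TARP matrix; the function name, the default δ and ψ values, and the use of NumPy are our own illustrative choices rather than the paper's implementation.

```python
# A minimal sketch of drawing a targeted random projection matrix as in (2)-(3).
import numpy as np

def ris_rp_matrix(X, y, m, delta=2.0, psi=1/6, rng=None):
    rng = rng or np.random.default_rng()
    n, p = X.shape
    # Marginal utilities: absolute Pearson correlations of each x_j with y
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    q = np.abs(r) ** delta
    q /= q.max()                        # inclusion probabilities q_j
    gamma = rng.random(p) < q           # randomized independent screening
    R = np.zeros((m, p))
    # Three-point distribution of (3) on the selected columns
    vals = rng.choice([1.0, 0.0, -1.0], size=(m, int(gamma.sum())),
                      p=[psi, 1 - 2 * psi, psi]) / np.sqrt(2 * psi)
    R[:, gamma] = vals
    return R, gamma
```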
Remark 1. The choice of projection matrix in (3) can be replaced by a wide variety of matrices having i.i.d. components with mean zero and finite fourth moments. One of the sparsest choices is of the form R*_{k,j} = ±n^{κ/2}/√(m_n) with probability 1/(2n^κ) each, and 0 with probability (1 − 1/n^κ), where m_n ∼ n^κ (see Li et al. (2006)). This choice of projection matrix reduces the data to a great extent, and is useful in compressing extremely large dimensional data. Our theoretical results would also hold if we consider a random matrix R_n^* as above.
Principal component projection. Another alternative is to use the matrix of principal component scores. Let X_γ be the sub-matrix of X_n with columns corresponding to the non-zero values of γ. Consider the partial spectral decomposition X_γ'X_γ = V_{γ,m_n} D_{γ,m_n} V_{γ,m_n}'. The projection matrix R_n^* we consider is
$$R_n^* = V_{\gamma,m_n}'. \qquad (4)$$
The method corresponding to this choice of projection matrix combines a randomized version of independence screening with principal component regression (PCR). Therefore, we refer to this method as RIS-PCR (Randomized Independence Screening-PCR).
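Analogously, here is a sketch of the RIS-PCR projection in (4), with a truncated SVD standing in for the partial spectral decomposition; names are illustrative.

```python
# A sketch of the RIS-PCR projection for a given screening vector gamma.
import numpy as np

def ris_pcr_matrix(X, gamma, m):
    X_gamma = X[:, gamma]               # screened sub-matrix X_gamma
    # Right singular vectors of X_gamma are eigenvectors of X_gamma' X_gamma
    _, _, Vt = np.linalg.svd(X_gamma, full_matrices=False)
    R_star = Vt[:m, :]                  # m_n x p_gamma projection matrix
    R = np.zeros((m, X.shape[1]))
    R[:, gamma] = R_star                # zero columns for screened-out predictors
    return R
```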
The performance of TARP depends on tuning parameters mn , δ and ψ. In addition, for
any given choice of tuning parameters, different realizations of the random projection matrix will vary, leading to some corresponding variation in the results. To limit dependence
of the results on the choice of tuning parameters and random variation in the projection,
we take the approach of generating multiple realizations of the matrix for different choices
of tuning parameters and aggregating these results together. Potentially, one could estimate weights for aggregation using Bayesian methods (see Hoeting et al. (1999)) or other
ensemble learning approaches, but we focus on simple averaging due to its computational
and conceptual simplicity.
2.2 Posteriors and Predictive Distribution
We illustrate the proposed method in normal linear regression for simplicity, although it is applicable to more general settings. In the compressed regression framework, we replace the normal linear model y_i = x_i'β + e_i by y_i = (R_n x_i)'θ + e_i, where e_i ∼ N(0, σ²). Given the NIG prior structure stated above, the posterior distribution of θ follows a scaled m_n-variate t distribution with degrees of freedom (d.f.) n + 2a_σ, location vector µ_t, and scale matrix Σ_t, where µ_t = Z(X_n R_n')'y_n, Σ_t = (y_n'y_n − µ_t'Z^{-1}µ_t + 2b_σ)Z/(n + 2a_σ) and Z = (R_n X_n'X_n R_n' + I)^{-1}. Moreover, the posterior distribution of σ², given D_n and R_n, is inverse gamma with parameters a_σ + n/2 and (y_n'y_n − µ_t'Σ_t^{-1}µ_t + b_σ)/2.
Consider the problem of point prediction of y when n_new new data points on the predictors are obtained, given the dataset D_n. The predicted values of y, say y_new, can be obtained using the Bayes estimator of θ under squared error loss as follows:
ŷ_new = X_new R_n' θ̂_Bayes, where θ̂_Bayes = (Z_n'Z_n + I)^{-1} Z_n'y_n and Z_n = X_n R_n'.
Here X_new is the new design matrix. Moreover, the posterior predictive distribution of y_new is an n_new-variate t distribution with degrees of freedom n + 2a_σ, location vector ŷ_new and scale matrix (y_n'y_n − µ_t'Z^{-1}µ_t + 2b_σ)(I + Z_new Z Z_new')/(n + 2a_σ), where Z_new = X_new R_n'.
When the distribution of y_n is non-normal, analytical expressions for the posteriors of θ and σ² and the predictive distribution of y_new are not available. In such cases, it is common to rely on a Laplace approximation (see Tierney and Kadane (1986)) or a sampling algorithm, such as MCMC. In the compressed regression framework, as the p_n-dimensional predictors are projected onto a much lower-dimensional m_n-hyperplane, with m_n << p_n, we are no longer in a high-dimensional setting. Hence, MCMC is computationally tractable.
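For the conjugate normal case above, the point prediction reduces to a ridge-type computation on the compressed design. The sketch below assumes a given projection matrix R (from either of the earlier sketches); names are illustrative.

```python
# A sketch of the conjugate normal-linear-case point prediction.
import numpy as np

def tarp_predict(X, y, X_new, R):
    Z = X @ R.T                                  # compressed design Z_n
    Z_new = X_new @ R.T
    G = np.linalg.inv(Z.T @ Z + np.eye(R.shape[0]))
    theta_hat = G @ Z.T @ y                      # Bayes estimator of theta
    return Z_new @ theta_hat                     # point prediction y_hat_new
```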
2.3 Tuning Parameter Choice
Next, we describe the choices of the tuning parameters that are involved in TARP.
Choice of m_n. The parameter m_n determines the number of linear combinations of the predictors we consider. Instead of choosing a fixed value of m_n, we consider a range of values. In particular, we suggest choosing values in the range (2 log p_n, min{3n/4, p_n}), consistent with our theoretical results in Section 3 requiring that m_n < n, and with our experiments assessing performance for different choices in a variety of scenarios.
Choice of δ and q. The parameter δ plays an important role in screening. Higher values of δ lead to fewer variables being selected in the screening step. If p_n >> n, one would like to select a small proportion of the predictors, and if p_n ∼ n then selection of a moderate proportion is desirable. We recommend δ = max{0, (1 + log(p_n/n))/2} as a default. This function selects all the predictors if p_n << n, and becomes more restrictive as p_n becomes larger than n. The selection probabilities in the RIS stage are then q_j = |r_{x_j,y}|^δ / max_j |r_{x_j,y}|^δ, j = 1, 2, ..., p_n. Hence, the variable with the highest marginal correlation is always included in the model.
Choice of ψ. The value of ψ controls the sparsity of the random matrix, and it is necessary that ψ ∈ (0, 0.5). Achlioptas (2003) suggests ψ = 1/6 as a default value. To avoid sensitivity of the results to a particular choice of ψ, we choose ψ ∈ (0.1, 0.4), avoiding the very sparse and dense cases.
2.4 Computational Algorithm and Complexity
We now illustrate the time complexity of TARP, along with the algorithm for computation in normal linear models.
RIS-RP. For a specific choice of (m_n, δ, ψ), calculation of ŷ_new using RIS-RP involves the following steps:
1: Calculate r_{x_j,y} for j = 1, ..., p_n.
2: Generate γ_j ∼ Bernoulli(q_j), where q_j = |r_{x_j,y}|^δ / max_j{|r_{x_j,y}|^δ}, j = 1, ..., p_n. If γ_j = 1, generate the jth column of R_n as in (3); else set the jth column of R_n to zero.
3: Post-multiply X_n by R_n' to obtain Z_n = X_n R_n'.
4: Compute θ̂_Bayes = (Z_n'Z_n + I)^{-1} Z_n'y_n.
5: For a given X_new, compute Z_new = X_new R_n' and ŷ_new = Z_new θ̂_Bayes.
The complexities of steps 1, 2-3, 4 and 5 are O(p_n), O(n p_γ m_n), O(n m_n²) and O(n_new p_γ m_n), respectively, where p_γ = Σ_j γ_j. Thus, if n_new ≤ n, the total complexity for a single choice of (m_n, δ, ψ) is O(p_n) + 2O(n m_n p_γ) + O(n m_n²) without using parallelization.
RIS-PCR. RIS-PCR differs from RIS-RP in step 2 of the algorithm. After generation of γ, RIS-PCR requires the SVD of X_γ, involving complexity O(n p_γ min{n, p_γ}). The total complexity of RIS-PCR for a single choice of (m_n, δ, ψ) can be derived similarly. Therefore, the two methods have comparable time complexity unless either n or p_γ is much larger than m_n. Although theoretically we do not impose any restriction on p_γ, in practice when p_n = exp{o(n)} and δ ≥ 2, p_γ is usually of order n.
Increment of complexity due to aggregation. Suppose N different choices of (m_n, ψ, R_n) are considered. Each choice yields a model M_l: y ∼ f(y|x, m_{n,l}, ψ_l, R_{n,l}), along with a corresponding estimate of y_new (say ŷ_{new,l}), where l ∈ {1, 2, ..., N}. The proposed estimate is the simple average of these N estimates of y_new.
Step 1 in the TARP algorithm is not repeated over the aggregation replicates, while the remaining steps are repeated N times. In addition, the first step of screening and the aggregation are embarrassingly parallelizable. Hence, given k processors, if n_new ≤ n, the total complexity is O(p_n/k) + 2O(N n m_n p_γ/k) + O(N n m_n²/k) for RIS-RP, and O(p_n/k) + O(N n p_γ min{n, p_γ}/k) + O(N n m_n p_γ/k) + O(N n m_n²/k) for RIS-PCR.
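A sketch of the aggregation strategy, reusing the earlier sketches, is given below; the uniform draws of m_n and ψ over the recommended ranges are our own illustrative choices.

```python
# A sketch of aggregation: average predictions over N random draws of (m_n, psi, R).
import numpy as np

def tarp_aggregate(X, y, X_new, N=100, delta=2.0, rng=None):
    rng = rng or np.random.default_rng()
    n, p = X.shape
    m_lo, m_hi = int(2 * np.log(p)), int(min(3 * n / 4, p))
    preds = []
    for _ in range(N):
        m = rng.integers(m_lo, m_hi + 1)         # random subspace dimension
        psi = rng.uniform(0.1, 0.4)              # random sparsity level
        R, _ = ris_rp_matrix(X, y, m, delta=delta, psi=psi, rng=rng)
        preds.append(tarp_predict(X, y, X_new, R))
    return np.mean(preds, axis=0)                # simple averaging
```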
3 Theory on Predictive Accuracy
We study the asymptotic performance of the predictive distribution produced by TARP
for a single random projection matrix without considering the aggregation step. We focus
on weakly sparse and dense cases where the absolute sum of the true regression coefficients
is bounded. This condition also includes strong sparsity where only a few covariates have
non-zero coefficients.
The projection matrix in TARP depends on the random variable γ, and therefore is
denoted by Rγ . We denote a particular realization of the response variable as y, and a
particular realization of the variables (x1 , x2 , . . . , xpn )0 as x. Let f0 be the true density of
y given the predictors, and f (y|x, γ, Rγ , θ) be the conditional density of y given the model
induced by (γ, Rγ ), the corresponding vector of regression coefficients θ and x. We follow
Jiang (2007) in showing that the predictive density under our procedure is close to the true
predictive density in an appropriate sense.
We assume that each covariate xj is standardized so that |xj | < M , for j = 1, 2, . . . , pn ,
with M a constant. We also assume that the scale parameter σ 2 in (1) is known. We
require the following two assumptions on the design matrix.
Assumption (A1). Let r_{x_j,y} denote the correlation coefficient of the observed values of x_j and y, 1 ≤ j ≤ p_n. Then for each data point (y, x) and the constant δ in (2), there exists a positive constant α_δ such that
$$\lim_{p_n \to \infty} \frac{1}{p_n} \sum_{j=1}^{p_n} x_j^2 |r_{x_j,y}|^{\delta} = \alpha_{\delta}.$$
Assumption (A2). Let q(γ) = Π_{j=1}^{p_n} q_j^{γ_j}(1 − q_j)^{(1−γ_j)}, with q_j defined in (2), denote the probability of obtaining a particular γ = (γ_1, ..., γ_{p_n})' in the random screening step. Let Γ_l ⊂ {0,1}^{p_n} denote the set of γ vectors such that p_γ = Σ_{j=1}^{p_n} γ_j = l, and let M_l ⊂ Γ_l denote the first p_n^{k_n} elements of Γ_l, ordered by their q(γ) values. Let A_n denote the event that γ ∈ M_l for some l. Then, P(A_n^c) = P{γ : γ ∉ ∪_l M_l} ≤ exp(−nε_n²/4), for some increasing sequence of integers {k_n} and sequence {ε_n} satisfying 0 < ε_n² < 1 and nε_n² → ∞.
Remark 2. As the probability of selection in the random screening step depends on the empirical correlation between the predictor and the response, assumption (A2) is a condition on the data generating process. If l ≤ k_n or l ≥ p_n − k_n, then all models of dimension l belong to M_l, and none of the corresponding γ vectors belong to A_n^c. If k_n < l < p_n − k_n, there are more than p_n^{k_n} models of dimension l, but as the models are ordered in terms of their q(γ) values, the models not falling in M_l should have extremely small probabilities of selection, hence satisfying (A2). Violations of (A2) would imply that large numbers of predictors have empirical correlations that are not close to zero.
Measure of closeness. Let ν_x(dx) be the probability measure for x, and ν_y(dy) be the dominating measure for the conditional densities f and f_0. The dominating measure of (y, x) is taken to be the product ν_y(dy)ν_x(dx).
The Hellinger distance between f and f_0 is given by
$$d(f, f_0) = \sqrt{\int \left(\sqrt{f} - \sqrt{f_0}\right)^2 \nu_x(dx)\,\nu_y(dy)}.$$
The Kullback-Leibler divergence between f and f_0 is given by
$$d_0(f, f_0) = \int f_0 \ln\frac{f_0}{f}\, \nu_x(dx)\,\nu_y(dy).$$
Define
$$d_t(f, f_0) = t^{-1}\left[\int f_0 \left(\frac{f_0}{f}\right)^t \nu_x(dx)\,\nu_y(dy) - 1\right], \quad \text{for any } t > 0.$$
Consider the following two facts: (i) d(f, f_0) ≤ (d_0(f, f_0))^{1/2}, and (ii) d_t(f, f_0) decreases to d_0(f, f_0) as t decreases to 0 (see Jiang (2007)).
Let P_n be a sequence of sets of probability densities, and ε_n a sequence of positive numbers. Let N(ε_n, P_n) be the ε_n-covering number, i.e., the minimal number of Hellinger balls of radius ε_n needed to cover P_n.
RIS-RP. The result showing asymptotic accuracy in approximating the predictive density using RIS-RP is stated below.
Theorem 1. Let θ ∼ N(0, σ_θ²I), and let f(y|x, γ, R_γ, θ) be the conditional density of y given the model induced by (γ, R_γ), where R_γ is as in (2) and (3). Let β_0 be the true regression parameter with Σ_j |β_{0,j}| < K for some constant K, and suppose assumptions (A1)-(A2) hold. Consider the sequence {ε_n} as in assumption (A2) satisfying 0 < ε_n² < 1 and nε_n² → ∞, and assume that the following statements hold for sufficiently large n:
(i) m_n |log ε_n²| < nε_n²/4,
(ii) k_n log p_n < nε_n²/4, and
(iii) m_n log(1 + D(σ_θ √(6nε_n² p_n m_n))) < nε_n²/4, where D(h*) = h* sup_{h≤h*}|a'(h)| sup_{h≤h*}|a'(h)/b'(h)|, with b(·) as in (1). Then,
$$P_{f_0}\left[\pi\{d(f, f_0) > 4\varepsilon_n \mid D_n\} > 2e^{-n\varepsilon_n^2/4}\right] \le 2e^{-n\varepsilon_n^2/5},$$
where π{·|D_n} is the posterior measure.
The proof of Theorem 1 is given in the Appendix.
Remark 3. If we consider the sparse choice of R_γ described in Remark 1, the same line of proof (as that of Theorem 1) would go through. For both choices of R_γ, each component of the projection matrix has expectation zero and finite fourth moment. For the random matrix in (3), the probability of choosing a non-zero element, P(R_{i,j} ≠ 0), is fixed, while this probability decays with n for the sparse choice of R_γ. However, the rate of decay is such that distances between the sample points are preserved, a critical property for proving consistency.
RIS-PCR. Asymptotic guarantees on predictive approximation for RIS-PCR require an additional assumption.
Assumption (A3). Let X_γ be the sub-matrix of X_n with columns corresponding to the non-zero values of γ, and let x_γ be a row of X_γ. Let V_γ be the m_n × p_γ matrix of the m_n eigenvectors corresponding to the first m_n eigenvalues of X_γ'X_γ. Then, for each γ and data point x_γ,
$$\frac{\|V_\gamma x_\gamma\|^2}{\|x_\gamma\|^2} \ge \alpha_n,$$
where α_n ∼ (nε_n²)^{-1} and the sequence {ε_n²} is as in assumption (A2).
Remark 4. If the matrix X_γ'X_γ has rank less than m_n, then α_n = 1 by Parseval's identity. Consider the situation where the rank of the gram matrix, say r_n (≤ n), is bigger than m_n. Then the row space of X_γ, or that of X_γ'X_γ, is spanned by a set of r_n orthonormal basis vectors v_1, v_2, ..., v_{r_n}. Therefore, any data point x can be written as a linear combination of these r_n vectors, x = a_1v_1 + a_2v_2 + ··· + a_{r_n}v_{r_n}, where a_1, a_2, ..., a_{r_n} are constants, not all equal to zero. As the vectors v_j are orthonormal, v_j'x = a_j for all j = 1, 2, ..., r_n, which in turn implies that x'x = Σ_{j=1}^{r_n} a_j². Also, note that the first m_n among these r_n vectors constitute V_γ', which implies ‖V_γ x‖² = Σ_{j=1}^{m_n} a_j². Thus ‖V_γ x‖²/‖x‖² = Σ_{j=1}^{m_n} a_j² / Σ_{j=1}^{r_n} a_j², and the magnitude of the ratio depends on the part of x explained by the last few principal component directions. The lower bound α_n ∼ (nε_n²)^{-1} is weaker than many real data scenarios, where most of the variation is explained by the first few principal components.
Theorem 2. Let θ ∼ N(0, σ_θ²I), and let f(y|x, γ, R_γ, θ) be the conditional density of y given the model induced by (γ, R_γ), where R_γ is as in (2) and (4). Let β_0 be the true regression parameter with Σ_j |β_{0,j}| < K for some constant K, and suppose assumptions (A1)-(A3) hold. Assume that conditions (i)-(iii) of Theorem 1 hold for the sequence {ε_n} as in assumption (A2) satisfying 0 < ε_n² < 1 and nε_n² → ∞. Then,
$$P_{f_0}\left[\pi\{d(f, f_0) > 4\varepsilon_n \mid D_n\} > 2e^{-n\varepsilon_n^2/4}\right] \le 2e^{-n\varepsilon_n^2/5},$$
where π{·|D_n} is the posterior measure.
The proof of Theorem 2 is given in the Appendix.
Remark 5. Conditions (i)-(iii) in Theorems 1 and 2 relate the sizes of p_n, m_n and k_n to nε_n². A sufficient condition for (i) is m_n log n < nε_n²/4, providing an upper bound on the dimension m_n of the subspace. Condition (ii) restricts the permissible number of regressors p_n and the number of possible models of each dimension, k_n. If there is a strict ordering of the marginal correlation coefficients |r_{x_j,y}|, so that k_n ≤ κ for some large number κ (see assumption (A2)), then the condition reduces to log p_n < nε_n²/4. To illustrate that condition (iii) tends to be weak, consider distributions of y corresponding to the Bernoulli, Poisson and normal. For these cases, the quantity D(h*) is at most of order O(h*). Therefore, condition (iii) does not impose much additional restriction over (i)-(ii), except m_n log p_n < nε_n²/4, inducing a stronger upper bound on m_n.
4 Simulation Study
In this section, we consider different simulation schemes (Schemes I-IV) to compare TARP with a variety of methods. We mainly focus on high-dimensional and weakly sparse regression problems with a variety of correlation structures in the predictors. The sample size is taken to be 200, while p_n varies. Additional results for different choices of n are provided in the Supplement.
Competitors. We compare with: SCAD screened by iterative SIS (ISIS), ISIS-SCAD; the minimax concave penalty (MCP) method screened by ISIS, ISIS-MCP; LASSO screened by the sequential strong rule (SSR, Tibshirani et al. (2012)), SSR-LASSO; ridge regression screened by SSR, SSR-Ridge; elastic net screened by SSR, SSR-EN; principal component regression (PCR); sparse PCR, SPCR (see Witten et al. (2009)); robust PCR, RPCR (see Candès et al. (2011)); and Bayesian compressed regression (BCR). ISIS-SCAD and ISIS-MCP are available in the 'SIS' package, and LASSO, ridge and elastic net are available in the 'biglasso' package (Zeng and Breheny (2017)). SPCR and RPCR are performed using the 'PMA' and 'rsvd' packages in R, respectively. To estimate PC scores, we rely on approximate SVD using fast.svd in the 'corpcor' package. For BCR, we average over 100 different random projections with varying m_n values within the range [2 log p_n, 3n/4]. We use the qr function in R to apply QR factorization in place of Gram-Schmidt orthogonalization of the random matrix, which is computationally prohibitive for large p_n.
The proposed method. We select the tuning parameters of TARP as described in Section 2.3. The parameter m_n is chosen in the range [2 log p_n, 3n/4]. We assign δ = 2, as the function max{0, (1 + log(p_n/n))/2} is close to 2 for all the choices of (n, p_n). Further, the hyperparameters of the inverse gamma priors are set to 0.02 to correspond to a minimally informative prior.
Simulation Schemes. In the first three simulation schemes, the predictors were generated from N(0, Σ), with different choices of p_n, Σ and the regression coefficients. In Scheme IV we consider a functional regression setup. The different methods are compared with respect to their performance in out-of-sample prediction. We calculate the mean square prediction error (MSPE), the empirical coverage probability (ECP) of a 50% prediction interval (PI), and the width of the PI for each of 100 replicate datasets in each simulation case.
Scheme I: First order autoregressive structure. Σ_{i,j} = (0.9)^{|i−j|}, i, j = 1, ..., p_n, with p_n ∈ {2000, 3000}, and β_j = 0 for all but a randomly selected set of 30 predictors having β_j = 1.
Figure 1: Box-plots of MSPE for p_n = 2000 in Scheme I.
Scheme II: Block diagonal covariance structure. We choose (p_n/100 − 2) blocks of 100 predictors each, along with 200 independent predictors, with p_n ∈ {10⁴, 2×10⁴}. The within-block correlation is ρ and the across-block correlation is zero, with ρ = 0.3 for half of the blocks and ρ = 0.9 for the remaining half. There are 21 non-zero β_j's, each having β_j = 1, with 20 of the corresponding predictors in the ρ = 0.9 blocks and the remaining one among the independent predictors.
Scheme III: Principal Component Regression. We first choose a matrix P with orthonormal columns and a 3 × 3 matrix D = diag(15², 10², 7²). We set Σ = PDP' and choose β = P_{·,1}, where P_{·,1} is the first column of P. This produces an X_n with three dominant principal components, with the response y_n dependent on the first, and p_n ∈ {10⁴, 5×10⁴}.
Scheme IV: Functional Regression. Finally, we consider a functional regression setup, where the covariates are generated from a Brownian bridge B_t with t ∈ (0, 5) and values ranging over (0, 10). A set of 20 covariates is randomly selected as active, each having regression parameters in the range (2, 2.5), and p_n ∈ {10⁴, 2×10⁴}.
Figure 2: Box-plots of MSPE for p_n = 10000 in Scheme II.
Figure 3: Box-plots of MSPE for p_n = 5000 in Scheme III.
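To make the designs concrete, here is a minimal sketch of the Scheme I data-generating process; the function name and the unit-variance noise are our own illustrative assumptions, not stated in the paper.

```python
# A sketch of Scheme I: AR(1) predictor covariance with 30 active coefficients.
import numpy as np

def simulate_scheme_one(n=200, p=2000, n_active=30, rho=0.9, rng=None):
    rng = rng or np.random.default_rng()
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.zeros(p)
    beta[rng.choice(p, n_active, replace=False)] = 1.0
    y = X @ beta + rng.standard_normal(n)        # unit-variance noise assumed
    return X, y, beta
```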
Results. For each of the simulation schemes, we present the results corresponding to the
first choice of pn , with the other results provided in the Supplement.
In Scheme I (see Figure 1) SSR-Ridge, PCR, RIS-RP and RIS-PCR show competitive
performance, with RIS-PCR showing the best overall result. Performance of RIS-PCR is
closely followed by RIS-RP, which in turn is followed by PCR and SSR-Ridge. Apart from
these four methods, ISIS based approaches and SSR-EN exhibit reasonable performance.
Although the ISIS based methods have a lower average MSPE, the variances of their MSPEs are high, indicating less stability. SPCR, RPCR, BCR and SSR-LASSO fail to perform adequately.
Figure 4: Box-plots of MSPE for p_n = 10000 in Scheme IV: (a) MSPE of all the methods; (b) MSPE of selected methods.
For Scheme II, PCR, RIS-RP and RIS-PCR again have the best performance (see Figure 2). RIS-PCR yields the lowest overall MSPE, closely followed by RIS-RP and PCR. SSR-EN, SSR-Ridge and RPCR exhibit moderate performance. These three methods are followed by LASSO, SPCR and BCR. Although SPCR enjoys a better average MSPE, it yields higher dispersion as well. ISIS based methods fail to perform adequately.
In Scheme III (see Figure 3), the SSR based methods fail to exhibit competitive performance. Among the SSR based methods, the performance of SSR-LASSO is better than that of the others. All the other methods perform well, although ISIS based methods occasionally show higher MSPEs.
In Scheme IV (see Figure 4(a)), SPCR and ISIS based methods fail completely, making the other methods indistinguishable in the figure. Hence, we show these methods separately in Figure 4(b). Among the other methods, PCR, RIS-RP and RIS-PCR have the best overall performance, closely followed by BCR and then by SSR-EN. The other three methods fail to exhibit competitive performance.
We next consider the empirical coverage probabilities (ECPs) of 50% prediction intervals (PIs), and the widths of the PIs. For the Bayesian methods, the PI is obtained from the highest credible region of the predictive distribution of y_new given D_n, X_new. The PIs for the frequentist methods can be obtained as
$$\hat{y}_{new} \pm t_{n-p_\gamma-1,\, \alpha/2} \sqrt{MSPE \left(1 + x_{\gamma,new}'(X_\gamma'X_\gamma)^{-1} x_{\gamma,new}\right)},$$
where t_{n−p_γ−1, α/2} is the upper α/2 point of the t distribution with (n − p_γ − 1) degrees of freedom, the suffix γ indicates consideration of the regressors selected by the corresponding method, and p_γ is the number of selected regressors. The PIs for the PCR based methods can be obtained similarly, except that x_{γ,new} and X_γ are replaced by the principal component scores. For SSR-Ridge and SSR-EN, p_γ is much larger than n. Therefore, the PIs of these methods could not be calculated using the above formula; instead we obtain the interval as
$$\hat{y}_{new} \pm t_{n,\, \alpha/2} \sqrt{MSPE + se(\hat{y}|D_n, x_{new})^2}, \quad \text{or} \quad \hat{y}_{new} \pm t_{n,\, \alpha/2} \sqrt{2\, MSPE},$$
whichever gives better results, where se(ŷ|D_n, x_new)² is the variance of the fitted values. The results are summarized in Table 1.
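For concreteness, here is a sketch of computing a 50% PI and its empirical coverage from point predictions, dropping the leverage term in the formula above for brevity; names are illustrative.

```python
# A sketch of a 50% prediction interval and its empirical coverage probability.
import numpy as np
from scipy.stats import t as t_dist

def pi_coverage(y_hat, y_test, mspe, df, level=0.5):
    half = t_dist.ppf(1 - (1 - level) / 2, df) * np.sqrt(mspe)
    lower, upper = y_hat - half, y_hat + half
    ecp = np.mean((y_test >= lower) & (y_test <= upper))
    return ecp, 2 * half                         # ECP and interval width
```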
Summary. Considering all the simulation schemes, RIS-RP, RPCR, SPCR and SSR-LASSO have the best performance with respect to mean ECP, with RIS-RP having the lowest average width among these four methods. The average width of the PI is much bigger for SPCR, particularly in Scheme IV. ISIS based methods and RIS-PCR have relatively lower coverage probabilities in general, although among these methods, RIS-PCR has higher coverage with much lower average width than the others, especially in Scheme IV. The average ECP for PCR is satisfactory, although the corresponding widths of the PIs have large variances in almost all the simulation schemes. This indicates instability in the overall performance
Table 1: Mean and standard deviation (sd, in braces) of empirical coverage probabilities of 50% prediction intervals, and mean and sd of the width of 50% prediction intervals.

Average and standard deviation (in braces) of empirical coverage probability:
Scheme, p_n | ISIS-SCAD | ISIS-MCP | SSR-LASSO | SSR-EN | SSR-Ridge | PCR | SPCR | RPCR | BCR | RIS-RP | RIS-PCR
I, 2×10³ | 0.309 (.052) | 0.311 (.058) | 0.432 (.051) | 0.549 (.040) | 0.692 (.055) | 0.498 (.051) | 0.457 (.055) | 0.433 (.290) | 0.286 (.053) | 0.493 (.059) | 0.431 (.057)
II, 10⁴ | 0.327 (.056) | 0.324 (.055) | 0.429 (.049) | 0.607 (.062) | 0.702 (.075) | 0.455 (.308) | 0.499 (.058) | 0.445 (.050) | 0.278 (.049) | 0.502 (.056) | 0.362 (.050)
III, 5×10³ | 0.494 (.053) | 0.494 (.053) | 0.503 (.058) | 0.657 (.031) | 0.678 (.127) | 0.503 (.058) | 0.494 (.049) | 0.494 (.049) | 0.527 (.054) | 0.494 (.055) | 0.507 (.054)
IV, 10⁴ | 0.425 (.053) | 0.416 (.056) | 0.477 (.065) | 0.660 (.031) | 0.665 (.102) | 0.448 (.300) | 0.488 (.048) | 0.491 (.053) | 0.487 (.058) | 0.696 (.052) | 0.418 (.055)

Average and standard deviation (in braces) of the width of the 50% prediction interval:
Scheme, p_n | ISIS-SCAD | ISIS-MCP | SSR-LASSO | SSR-EN | SSR-Ridge | PCR | SPCR | RPCR | BCR | RIS-RP | RIS-PCR
I, 2×10³ | 3.894 (.500) | 3.857 (.480) | 5.939 (.424) | 7.410 (.578) | 8.192 (.528) | 7.739 (8.309) | 7.964 (.700) | 6.483 (.451) | 5.260 (.416) | 5.029 (.194) | 4.253 (.365)
II, 10⁴ | 5.868 (.657) | 5.836 (.684) | 5.113 (.527) | 7.335 (.615) | 8.139 (.830) | 8.407 (25.713) | 6.066 (1.103) | 4.635 (.407) | 4.894 (.433) | 4.204 (.198) | 2.744 (.232)
III, 5×10³ | 1.424 (.174) | 1.385 (.144) | 1.362 (.072) | 1.945 (.145) | 5.755 (1.928) | 1.362 (.072) | 1.366 (.069) | 1.463 (.073) | 1.351 (.079) | 1.391 (.077) | 1.366 (.069)
IV, 10⁴ | 13.63 (2.705) | 13.34 (2.540) | 2.170 (.422) | 1.537 (.139) | 3.582 (.501) | 2.508 (3.169) | 26.512 (6.209) | 2.480 (.345) | 1.792 (.115) | 2.284 (.268) | 1.154 (.123)
of PCR. BCR shows under-coverage in the first two simulation schemes, but performs well with respect to both measures in Schemes III and IV. Finally, the other two methods, viz., SSR-Ridge and SSR-EN, have higher values of ECP, along with higher widths of PIs. SSR-Ridge has the highest average width of PI in Schemes I and III. In all the simulation schemes, SSR-EN outperforms SSR-Ridge with respect to the width of the PI.
4.1 Computational Time
The computational time of a method may depend on the simulation scheme due to varying levels of complexity in the dataset. We only present the computational time for Scheme IV as an example. Figure 5 presents the time (in minutes) taken by different methods to compute ŷ_new using a single core, as p_n grows and n = n_new = 100. We run all the methods in R 3.4.1 on a 64-bit Dell Inspiron desktop with the Ubuntu 16.04 LTS operating system, 15.6 GB of random access memory and an Intel Core i5-4460 CPU @ 3.20GHz processor.
Results. When p_n is below 10⁴, all the methods require comparable computational time. When p_n is increased, the SSR based methods (except SSR-Ridge), RPCR, and fast.svd based PCR continue to require low computational time. ISIS-SCAD, SPCR and RIS-RP also have reasonable computational expense (approximately 5 minutes for p_n = 5×10⁵). The computational time of BCR, ISIS-MCP and RIS-PCR tends to increase rapidly after p_n = 10⁵. Among these three methods, RIS-PCR requires the highest system time (approximately 27 minutes for p_n = 5×10⁵). The computational time required by SSR-Ridge exceeds all other methods for p_n > 5×10⁴, and for p_n = 5×10⁵ it becomes computationally prohibitive (taking more than 2 hours).
The increase in the computational time of RIS-PCR is due to the computation of the exact SVD of the screened design matrix X_γ. However, this burden would immediately be reduced if one uses an approximation to the SVD; in that case the computational time would be comparable to that of RIS-RP.
5 Real Data Analysis
In this section, we study the performance of TARP using three real datasets, viz., the Golub dataset, the GTEx dataset and the Eye dataset. The Golub dataset is available at GitHub (https://github.com/ramhiser/datamicroarray/wiki), the GTEx dataset is available at the GTEx portal (https://www.gtexportal.org/home/), and the eye data is available in the flare package of R. In each case, we assess out-of-sample predictive performance averaging over multiple training-test splits of the data.
Golub data. The Golub data consist of 47 patients with acute lymphoblastic leukemia
(ALL) and 25 patients with acute myeloid leukemia (AML). Each of the 72 (= n) patients
had bone marrow samples obtained at the time of diagnosis (see Golub et al. (1999)).
Expression levels of 7129 (= pn ) genes have been measured for each patient. We consider
a training set of size 60 with 20 AML patients, and 40 ALL patients. The test set consists
of the remaining 12 samples.
GTEx Data. To understand the functional consequences of genetic variation, Consortium et al. (2015) presented an analysis of RNA sequencing data from 1641 samples across 43 tissues from 175 individuals, generated as part of the pilot phase of the Genotype-Tissue Expression (GTEx) project. We selected RNA-seq data on two normal tissues, viz., Artery-Aorta and Artery-Tibial. The dataset contains RNA-seq expressions on 36115 (= p_n) genes and 556 (= n) samples, among which 224 are from Artery-Aorta and 332 are from Artery-Tibial. A training set of 100 samples from each of the tissue types is considered, and the remaining samples are used as the test set.
Eye Data. The Eye dataset consists of gene expressions for 200 (= pn ) gene probes
from the microarray experiments of mammalian-eye tissue samples of 120 (= n) rats (see
Scheetz et al. (2006)). The response variable is the expression level of the TRIM32 gene.
We consider 100 sample points as the training set, and the remaining 20 samples as the
test set.
Golub and GTEx datasets have nominal response, and therefore the methods are evaluated by the misclassification rate (in %) and the area under the receiver operating characteristic (ROC) curve. Table 2 provides the average and standard deviation (sd) of percentages
of misclassification, and those for the area under the ROC curve over 100 random subsets
of the same size chosen from the dataset for the competing methods. We further compare
the predictive performance of the methods in terms of mean squared difference of predictive and empirical probabilities for these two datasets. Most methods (except SSR based
methods and SPCR) exhibit similar performance in this aspect. We provide the details of
the predictive calibration in the Supplement.
The eye dataset has continuous response, and therefore we evaluate the methods by
MSPE and the empirical coverage probabilities (ECP) of 50% prediction intervals (PI), as in Section 4. As the variation in the expression levels of the TRIM32 gene is very small (the range is 1.37), we multiply the MSPEs of the different methods by 10 to increase the variability. Table 2 provides the mean and sd of the MSPEs, the ECPs of 50% PIs, and the widths of the PIs over 100 different training and test sets selected from the dataset, for the competing methods.

Table 2: Mean and standard deviation (in braces) of the percentage of misclassification and the area under the ROC curve for the Golub and GTEx datasets, and of the MSPE, ECP of 50% PI and width of PI for the Eye dataset, for all the competing methods.

Misclassification rate and area under the ROC curve for the datasets with categorical response.
Mean and SD of misclassification rate (in %):
Dataset | ISIS-SCAD | ISIS-MCP | SSR-LASSO | SSR-EN | SSR-Ridge | PCR | SPCR | RPCR | BCR | RIS-RP | RIS-PCR
Golub | 11.82 (6.90) | 11.50 (7.06) | 45.45 (0.00) | 45.45 (0.00) | 45.45 (0.00) | 7.09 (5.68) | 41.36 (13.31) | 9.73 (7.28) | 19.36 (9.79) | 5.54 (4.36) | 5.77 (4.52)
GTEx | 0.00 (0.00) | 0.00 (0.00) | 34.83 (0.00) | 34.83 (0.00) | 34.83 (0.00) | 0.06 (0.13) | 3.53 (3.31) | 0.22 (0.18) | 13.28 (3.79) | 0.39 (0.20) | 0.49 (0.32)

Mean and SD of area under the receiver operating characteristic curve:
Golub | 0.876 (.073) | 0.879 (.074) | 0.500 (.000) | 0.500 (.000) | 0.500 (.000) | 0.923 (.062) | 0.582 (.134) | 0.895 (.078) | 0.816 (.093) | 0.978 (.027) | 0.943 (.044)
GTEx | 1.00 (.000) | 1.00 (.000) | 0.500 (.000) | 0.500 (.000) | 0.500 (.000) | 0.999 (.001) | 0.964 (.033) | 0.998 (.001) | 0.877 (.041) | 1.00 (.000) | .996 (.002)

MSPE, ECP of 50% PI and width of 50% PI for the Eye dataset.
Mean and SD of mean square prediction error (×10):
Eye | 11.66 (4.06) | 11.66 (4.06) | 20.92 (19.33) | 20.92 (19.33) | 7.31 (2.91) | 13.84 (3.94) | 8.65 (3.08) | 7.67 (3.30) | 10.01 (4.04) | 8.54 (3.09) | 8.29 (2.99)

Mean and SD of empirical coverage probability and width of the prediction interval:
Empirical coverage | 0.502 (.138) | 0.502 (.138) | 0.634 (.130) | 0.709 (.106) | 0.700 (.076) | 0.423 (.325) | 0.508 (.123) | 0.522 (.114) | 0.564 (.117) | 0.598 (.101) | 0.507 (.107)
Width of interval | 1.208 (.057) | 1.208 (.057) | 1.970 (.190) | 2.033 (.917) | 1.539 (.303) | 1.884 (1.612) | 1.202 (.079) | 1.055 (.049) | 1.249 (.056) | 1.341 (.038) | 1.056 (.036)
Results. For the Golub dataset, both the lowest misclassification rate and the highest area under the ROC curve are achieved by RIS-RP, closely followed by RIS-PCR. The TARP based methods attain lower sd than the other methods as well. PCR also yields reasonable performance, with a 7% average misclassification rate and an area under the ROC curve of more than 0.9. RPCR and the ISIS based methods produce average rates of misclassification of roughly 10% or more, with areas under the ROC curve of about 0.9. BCR possesses a high misclassification rate (about 19%), although its area under the ROC curve is more than 0.8. Neither the misclassification rate nor the area under the ROC curve is satisfactory for SPCR. Finally, for all 100 repetitions, the SSR based methods invariably select the intercept-only model. Thus, the error rates of these methods depend entirely on the proportion of test samples obtained from the two classes.
For the GTEx dataset, perfect classification is achieved by the ISIS based methods. These methods, along with RIS-RP, have the highest area under the ROC curve. PCR, RPCR, RIS-RP and RIS-PCR also yield satisfactory results, having less than 0.5% average misclassification rates and more than 99% area under the ROC curve. SPCR is comparable, with an average misclassification rate of less than 4%. BCR attains a 13.3% average misclassification rate, with an area under the ROC curve of almost 0.9. The SSR based methods fail to show any discriminatory power on the GTEx dataset.
SSR-Ridge, RPCR, RIS-PCR, RIS-RP and SPCR yield excellent performance in terms of MSPE on the eye dataset, with an average MSPE of less than 0.9. SSR-Ridge has an average ECP of about 0.7. RIS-PCR shows more stable performance in terms of ECP, followed by SPCR. BCR and the ISIS based methods have similar overall performance: in terms of MSPE, BCR outperforms the ISIS based methods, but it is outperformed by them in terms of ECP. PCR is not quite as good in terms of either measure. SSR-LASSO and SSR-EN again fail to perform adequately on the eye dataset.
The GEUVADIS cis-eQTL dataset. We conclude this section by illustrating the TARP approach on a massive dataset. The GEUVADIS cis-eQTL dataset (Lappalainen et al. (2013)) is publicly available at http://www.ebi.ac.uk/Tools/geuvadis-das/. This dataset consists of messenger RNA and microRNA on lymphoblastoid cell line (LCL) samples from 462 individuals provided by the 1000 Genomes Project, along with roughly 38 million SNPs. E2F2 plays a key role in the control of the cell cycle. Hence, as in Chen and Dunson (2017), we choose the gene E2F2 (Ensembl ID: ENSG00000000003) as the response. A total of 8.2 million (= p_n) SNPs are pre-selected as candidate predictors on the basis of having at least 30 non-zero expressions. The total number of subjects included in the dataset is about 450 (= n). The genotype of each SNP is coded as 0, 1 or 2, corresponding to the number of copies of the minor allele.
TARP is applied to this dataset. We consider four different training sample sizes, viz., n_t = 200, 250, 300 and 350, and a test sample size of 100 in each case. As p_n is huge, we applied three different values of δ, namely 2, 5 and 8, to analyze the effect of a conservative screening. The recommended choice of δ lies within (5, 6) when p_n = 8.2×10⁶ and n ∈ [200, 400]. To perform the SVD for RIS-PCR, we use fast.svd instead of the usual svd to cope with the massive number of regressors. Table 3 provides the MSPE, the ECP of 50% PIs and the widths of the PIs obtained by the two variants of TARP.
Table 3: MSPE, ECP and width of PI obtained by RIS-RP and RIS-PCR for three values of δ and different training sample sizes (n_t).

RIS-RP:
n_t | δ=2: MSPE, ECP, Width | δ=5: MSPE, ECP, Width | δ=8: MSPE, ECP, Width
200 | 0.800, 0.39, 1.059 | 0.872, 0.42, 0.983 | 0.855, 0.34, 0.928
250 | 0.852, 0.39, 1.102 | 0.920, 0.42, 1.023 | 0.921, 0.35, 1.013
300 | 0.860, 0.36, 1.126 | 0.855, 0.44, 1.075 | 0.866, 0.36, 1.069
350 | 0.778, 0.45, 1.210 | 0.779, 0.48, 1.221 | 0.829, 0.46, 1.219

RIS-PCR:
n_t | δ=2: MSPE, ECP, Width | δ=5: MSPE, ECP, Width | δ=8: MSPE, ECP, Width
200 | 0.834, 0.06, 0.177 | 0.838, 0.12, 0.192 | 0.831, 0.10, 0.252
250 | 0.858, 0.14, 0.355 | 0.882, 0.12, 0.289 | 0.896, 0.19, 0.420
300 | 0.845, 0.14, 0.399 | 0.867, 0.20, 0.511 | 0.865, 0.20, 0.487
350 | 0.757, 0.35, 0.893 | 0.786, 0.36, 0.886 | 0.826, 0.41, 0.984
Results: The MSPEs of RIS-RP and RIS-PCR are comparable for all choices of n. However, RIS-RP yields much better empirical coverage probabilities than RIS-PCR, especially when n ≤ 300. The three choices of δ yield comparable results in terms of all the measures in general. For RIS-RP, δ = 5 results in higher ECP, and for RIS-PCR higher ECP is obtained using δ = 8. Moreover, the choice δ = 8 makes both procedures much faster compared to the other choices of δ. When the training sample size is 350, δ = 2, 5 and 8 select about 290800, 12600 and 7960 variables, respectively, on average in the screening stage, out of 8.2×10⁶ variables. In view of the results on this massive dimensional dataset, it seems reasonable to use a higher value of δ, both for filtering out noisy regressors and for computational convenience.
6 Appendix
This section contains proofs of the theorems stated in the paper. We use the generic notation c for constants, although they may not all be equal.
Some Useful Results
Lemma 1. Let $\varepsilon_n$ be a sequence of positive numbers such that $n\varepsilon_n^2 \gg 1$, and suppose the following conditions hold:
(a) $\ln N(\varepsilon_n, P_n) \le n\varepsilon_n^2$ for all sufficiently large n.
(b) $\pi(P_n^c) \le e^{-2n\varepsilon_n^2}$ for all sufficiently large n.
(c) $\pi\left(\{f : d_t(f, f_0) \le \varepsilon_n^2/4\}\right) \ge \exp\{-n\varepsilon_n^2/4\}$ for all sufficiently large n and for some t > 0.
Then
$$P_{f_0}\left[\pi\left\{d(f, f_0) > 4\varepsilon_n \mid (y^n, X)\right\} > 2e^{-n\varepsilon_n^2(0.5\wedge(t/4))}\right] \le 2e^{-n\varepsilon_n^2(0.5\wedge(t/4))}.$$
The proof is given in Jiang (2007).
Lemma 2. Suppose assumption (A1) holds. Let $\alpha_\delta$ be such that $\sum_j x_j^2 |r_{x_j,y}|^\delta / p_n \to \alpha_\delta$ as $n \to \infty$, and let x be a $p_n \times 1$ sample vector of the regressors. Then the following holds:
a. The random matrix $R_\gamma$ described in (2) and (3) satisfies $\|R_\gamma x\|^2/p_n \xrightarrow{p} c\alpha_\delta$.
b. Let $\|x_\gamma\|^2 = \sum_{j=1}^{p_n} x_j^2 I(\gamma_j = 1)$, where $\gamma_j$ is the jth element of the vector γ described in (2) and I(·) is the indicator function. Then $\|x_\gamma\|^2/p_n \xrightarrow{p} c\alpha_\delta$, where c is the proportionality constant in (2).
The proof is given in the Supplement.
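The convergence in Lemma 2 b is easy to probe by simulation, since $\|x_\gamma\|^2/p_n$ is an average of independent terms $x_j^2 I(\gamma_j = 1)$ with means $x_j^2 q_j$. A small sketch (the correlations and the constant c below are arbitrary stand-ins satisfying $q_j = c|r_j|^\delta < 1$):

```python
import numpy as np

rng = np.random.default_rng(2)
p_n, delta, c = 50_000, 5.0, 0.9
x = rng.uniform(-1, 1, p_n)              # |x_j| <= 1, as assumed in the proofs
r = rng.uniform(0, 1, p_n)               # stand-in for |r_{x_j, y}|
q = c * r ** delta                       # P(gamma_j = 1), all < 1

limit = (x ** 2 * q).sum() / p_n         # finite-p_n analogue of c * alpha_delta
draws = [(x ** 2 * (rng.random(p_n) < q)).sum() / p_n for _ in range(100)]
print(limit, np.mean(draws), np.std(draws))   # draws concentrate around the limit
```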
Lemma 3. Let $\theta \sim N(0, \sigma_\theta^2 I)$. Then, for given $R_\gamma$, x and y, the following holds:
$$P\left(|(R_\gamma x)'\theta - x'\beta_0| < \Delta\right) > \frac{2^4\Delta^4}{\sigma_\theta^2\|R_\gamma x\|^2}\exp\left\{-\frac{(x'\beta_0)^2 + \Delta^2}{\sigma_\theta^2\|R_\gamma x\|^2}\right\}.$$
The proof is given in Guhaniyogi and Dunson (2015).
Proofs of the Theorems. Without loss of generality we consider |xj | < M with M = 1,
j = 1, 2, . . . , pn and d(σ 2 ) = 1, although the proofs go through for any fixed value of M and σ 2 .
Proof of Theorem 1. Define the sequence of events
$$B_n = \left[\pi\left\{d(f, f_0) > 4\varepsilon_n \mid (y^n, X)\right\} > 2e^{-n\varepsilon_n^2/4}\right];$$
we need to show that $P(B_n^c) > 1 - 2e^{-n\varepsilon_n^2/5}$. We first consider the sequence of events $A_n$ in assumption (A2), and show that $P(B_n^c \mid A_n) > 1 - 2e^{-n\varepsilon_n^2/4}$. The proof then follows from assumption (A2) for moderately large n.
The proof of $P(B_n^c \mid A_n) > 1 - 2e^{-n\varepsilon_n^2/4}$ hinges on showing the three conditions of Lemma 1 for the approximating distribution
$$f(y) = \exp\{y\,a(h) + b(h) + c(y)\} \quad \text{with } h = (R_\gamma x)'\theta, \qquad (5)$$
and the true distribution $f_0$, where $R_\gamma$ is as given in (4).
Checking condition (a). Let $P_n$ be the set of densities f(y) stated above with parameters $|\theta_j| < c_n$, $j = 1, 2, \ldots, m_n$, where $c_n = \sigma_\theta\sqrt{5n}\,\varepsilon_n$, and the model γ is such that $\gamma \in M_k$ for some $k \in \{0, 1, \ldots, p_n\}$, given $A_n$. For any γ the corresponding set of regression parameters can be covered by $l_\infty$ balls of the form $B = (v_j - \epsilon, v_j + \epsilon)_{j=1}^{m_n}$ of radius $\epsilon > 0$ and center $v_j$. It takes $(c_n/\epsilon + 1)^{m_n}$ balls to cover the parameter space for each model γ in $P_n$. There are at most $\min\{\binom{p_n}{k}, p_n^{k_n}\}$ models for each k under consideration, as we are only concerned with models in $A_n$ (see assumption (A2)), and there are $(p_n + 1)$ possible choices of k. Hence it requires at most $N(\epsilon, k) \le c\,p_n^{k_n+1}(c_n/\epsilon + 1)^{m_n}$ $l_\infty$ balls to cover the space of regression parameters $P_n$, for some constant c.
Next we find the number of Hellinger balls required to cover $P_n$. We first consider the KL distance between f and $f_0$, and then use the fact that $d(f, f_0) \le (d_0(f, f_0))^{1/2}$. Any density in $P_n$ can be represented by a set of regression parameters $(u_j)_{j=1}^{m_n}$ falling in one of these $N(\epsilon, k)$ balls $B = (v_j - \epsilon, v_j + \epsilon)_{j=1}^{m_n}$ with $p_\gamma = k$. More specifically, let $f_u$ and $f_v$ be two densities in $P_n$ of the form (5), where $u = (R_\gamma x)'\theta_1$, $v = (R_\gamma x)'\theta_2$ with $|\theta_{i,j}| < c_n$, $i = 1, 2$, and $p_\gamma = k$. Then
$$d_0(f_u, f_v) = \int\!\!\int f_v \log\frac{f_v}{f_u}\,\nu_y(dy)\,\nu_x(dx) = \int\!\!\int \left\{y\left(a(u) - a(v)\right) + \left(b(u) - b(v)\right)\right\} f_v\,\nu_y(dy)\,\nu_x(dx) = \int (u - v)\left\{a'(u_v)\left(-\frac{b'(v)}{a'(v)}\right) + b'(u_v)\right\}\nu_x(dx).$$
The last expression is obtained by integrating with respect to y and using the mean value theorem, where $u_v$ is an intermediate point between u and v. Next consider
$$|u - v| = |(R_\gamma x)'\theta_1 - (R_\gamma x)'\theta_2| \le \|R_\gamma x\|\,\|\theta_1 - \theta_2\|,$$
using the Cauchy–Schwarz inequality. Now, by Lemma 2 we have $\|R_\gamma x\|^2/p_n \xrightarrow{p} c\alpha_\delta$ as $n \to \infty$, with $0 < c\alpha_\delta < 1$. Therefore we can assume that, for sufficiently large $p_n$, $\|R_\gamma x\| \le \sqrt{p_n}$. Also, $\|\theta_1 - \theta_2\| \le \epsilon\sqrt{m_n}$. Combining these facts we have $|u - v| \le \epsilon\sqrt{m_n p_n}$. Similarly, $\max\{|u|, |v|\} \le c_n\sqrt{m_n p_n}$. These together imply that
$$d_0(f_u, f_v) \le \epsilon\sqrt{m_n p_n}\,\sup_{|h|\le c_n\sqrt{m_n p_n}}|a'(h)|\,\sup_{|h|\le c_n\sqrt{m_n p_n}}\frac{|b'(h)|}{|a'(h)|}.$$
Therefore $d(f_u, f_v) \le \varepsilon_n$ if we choose
$$\epsilon = \varepsilon_n^2 \Big/ \left\{\sqrt{m_n p_n}\,\sup_{|h|\le c_n\sqrt{m_n p_n}}|a'(h)|\,\sup_{|h|\le c_n\sqrt{m_n p_n}}\left(|b'(h)|/|a'(h)|\right)\right\}.$$
Therefore, the density $f_u$ falls in a Hellinger ball of size $\varepsilon_n$ centered at $f_v$. As shown earlier, there are at most $N(\epsilon, k)$ such balls. Thus, the Hellinger covering number satisfies
$$N(\varepsilon_n, P_n) \le N(\epsilon, k) = c\,p_n^{k_n+1}\left(\frac{c_n}{\epsilon} + 1\right)^{m_n} = c\,p_n^{k_n+1}\left[\frac{c_n\sqrt{m_n p_n}}{\varepsilon_n^2}\sup_{|h|\le c_n\sqrt{m_n p_n}}|a'(h)|\sup_{|h|\le c_n\sqrt{m_n p_n}}\left(\frac{|b'(h)|}{|a'(h)|}\right) + 1\right]^{m_n} \le c\,p_n^{k_n+1}\left[\frac{1}{\varepsilon_n^2}\left(D(c_n\sqrt{m_n p_n}) + 1\right)\right]^{m_n},$$
where $D(R) = R\sup_{|h|\le R}|a'(h)|\sup_{|h|\le R}|b'(h)/a'(h)|$. The logarithm of the above quantity is no more than $\log c + (k_n+1)\log p_n - m_n\log(\varepsilon_n^2) + m_n\log\left(1 + D(c_n\sqrt{m_n p_n})\right)$, as $0 < \varepsilon_n^2 < 1$. Using the assumptions in Theorem 1, condition (a) follows.
n
Checking condition (b) For the Pn defined in condition (a), π(Pnc ) ≤ π(∪m
j=1 |θj | > cn ).
q
Observe that π(|θj | > cn ) ≤ 2 exp{−c2n /(2σθ2 )}/ 2πc2n /σθ2 by Mills ratio. Now for the choice that
p
√
cn = σθ 5nεn the above quantity is 2 exp{−5nε2n /2}/ 10πnε2n . Therefore
π(Pnc ) ≤
mn
X
j=1
π(|θj | > cn ) ≤ 2mn exp{−5nε2n /2}/
p
2
10πnε2n ≤ e−2nεn
for sufficiently large n. Thus condition (b) follows.
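For completeness, the Mills-ratio bound invoked above is the standard Gaussian tail inequality:
$$\pi(|\theta_j| > c_n) = 2\int_{c_n}^\infty \frac{1}{\sqrt{2\pi}\,\sigma_\theta}\,e^{-t^2/(2\sigma_\theta^2)}\,dt \le \frac{2}{c_n}\int_{c_n}^\infty \frac{t}{\sqrt{2\pi}\,\sigma_\theta}\,e^{-t^2/(2\sigma_\theta^2)}\,dt = \frac{2\exp\{-c_n^2/(2\sigma_\theta^2)\}}{\sqrt{2\pi c_n^2/\sigma_\theta^2}},$$
which follows by inserting the factor $t/c_n \ge 1$ into the integrand.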
Checking condition (c). Condition (c) is verified for t = 1. Observe that
$$d_{t=1}(f, f_0) = \int\!\!\int f_0\left(\frac{f_0}{f} - 1\right)\nu_y(dy)\,\nu_x(dx).$$
Integrating out y we get $\int E_{y|x}\left[(f_0/f)(Y) - 1\right]\nu_x(dx)$. Note that under f and $f_0$ we have the same function of y as given in (5), with $h = x'\beta_0$ for $f_0$. Therefore, the above can be written as $E_x\left[\left\{(R_\gamma x)'\theta - x'\beta_0\right\}g(u^*)\right]$ using the mean value theorem, where g is a continuous function (the derivative arising from the mean value theorem) and $u^*$ is an intermediate point between $(R_\gamma x)'\theta$ and $x'\beta_0$. Therefore, if $|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n$, then $|u^*| < |x'\beta_0| + \Delta_n$. This in turn implies that, for sufficiently small $\Delta_n$, $|g(u^*)|$ is bounded, say by M. Consider a positive constant $\Delta_n$. From Lemma 3 we have
$$P\left(|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n\right) = \sum_\gamma P\left(|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n \mid \gamma\right)\pi(\gamma)$$
$$\ge E_\gamma\left[\frac{2^4\Delta_n^4}{\sigma_\theta^2\|R_\gamma x\|^2}\exp\left\{-\frac{(x'\beta_0)^2 + \Delta_n^2}{\sigma_\theta^2\|R_\gamma x\|^2}\right\}\right] = \frac{2^4\Delta_n^4}{(x'\beta_0)^2 + \Delta_n^2}\,E_\gamma\left[\frac{Z_\gamma}{p_n}\exp\left\{-\frac{Z_\gamma}{p_n}\right\}\right], \qquad (6)$$
where $Z_\gamma = \left[(x'\beta_0)^2 + \Delta_n^2\right]\big/\left(\sigma_\theta^2\|R_\gamma x\|^2/p_n\right)$. By part (a) of Lemma 2 and the continuous mapping theorem, $Z_\gamma \xrightarrow{p} \left[(x'\beta_0)^2 + \Delta_n^2\right]/(\sigma_\theta^2 c\alpha_\delta) > \Delta_n^2/(\sigma_\theta^2 c\alpha_\delta)$.
For a non-negative random variable Z and non-random positive numbers p, a and b, consider the following fact:
$$E\left[\frac{Z}{p}\exp\left(-\frac{Z}{p}\right)\right] \ge a\,P\left(\frac{Z}{p}\exp\left(-\frac{Z}{p}\right) > a\right) \ge a\,P\left(\frac{Z}{p} > \frac{a}{b},\ \exp\left(-\frac{Z}{p}\right) > ab\right) = a\,P\left(Z > \frac{ap}{b},\ Z < -p\log(ab)\right) = a\,P\left(\frac{ap}{b} < Z < -p\log(ab)\right). \qquad (7)$$
Replace Z by $Z_\gamma$ and p by $p_n$, and take $a = \Delta_n^2\exp\{-n\varepsilon_n^2/3\}/(\sigma_\theta^2 c\alpha_\delta)$ and $b = p_n\exp\{-n\varepsilon_n^2/3\}$. Then $-p_n\log(ab) = -p_n\log\left(\Delta_n^2 p_n\exp\{-2n\varepsilon_n^2/3\}/(\sigma_\theta^2 c\alpha_\delta)\right) > p_n n\varepsilon_n^2/2$ and $ap_n/b = \Delta_n^2/(\sigma_\theta^2 c\alpha_\delta)$ for sufficiently large n. Therefore the expression in (7) is greater than
$$\frac{\Delta_n^2}{\sigma_\theta^2 c\alpha_\delta}\,e^{-n\varepsilon_n^2/3}\,P\left(\frac{\Delta_n^2}{\sigma_\theta^2 c\alpha_\delta} \le Z_\gamma \le \frac{1}{2}p_n n\varepsilon_n^2\right).$$
Note that $(x'\beta_0)^2 < \left(\sum_{j=1}^{p_n}|\beta_{0,j}|\right)^2 < K$, and the probability involved in the above expression can be shown to be larger than some positive constant p for sufficiently large n. Using these facts along with equation (6), we have $P(|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n) > \exp\{-n\varepsilon_n^2/4\}$. Choosing $\Delta_n < \varepsilon_n^2/(4M)$, condition (c) follows.
Proof of Theorem 2. The outline of the proof of Theorem 2 closely follows the arguments given in the proof of Theorem 1; therefore we only present those parts of the proof that are different. As in Theorem 1, we show that $P(B_n^c \mid A_n) > 1 - 2e^{-n\varepsilon_n^2/4}$ by checking the three conditions of Lemma 1.
The proof of condition (a) is the same as for Theorem 1, except for the places involving the projection matrix $R_\gamma$. Observe that, given a dataset $D_n$ and the other tuning parameters, we fix a particular projection matrix $R_\gamma$. The only property of $R_\gamma$ needed to prove condition (a) is $\|R_\gamma x\|^2 \le p_n$ for sufficiently large n. To show this we use the fact that $R_\gamma$ is a matrix with orthonormal row vectors, and therefore $\|R_\gamma x\|^2 \le \|x_\gamma\|^2 \le p_n$.
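For completeness, a one-line justification we add: since the rows of $R_\gamma$ are orthonormal, $P = R_\gamma'R_\gamma$ is the orthogonal projection onto the row space of $R_\gamma$, so
$$\|R_\gamma x\|^2 = x_\gamma' R_\gamma' R_\gamma x_\gamma = \|P x_\gamma\|^2 \le \|x_\gamma\|^2,$$
and $\|x_\gamma\|^2 \le p_n$ because at most $p_n$ coordinates, each bounded by 1 in absolute value, contribute.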
The proof of condition (b) depends only on the prior assigned to θ, and therefore remains the same under the conditions of Theorem 2. The proof of condition (c) differs from that of Theorem 1 in showing $P(|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n) > \exp\{-n\varepsilon_n^2/4\}$ for some constant $\Delta_n$. To see this, consider a positive constant $\Delta_n$. As before, from Lemma 3 we have
$$P\left(|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n\right) \ge E_\gamma\left[\frac{2^4\Delta_n^4}{\sigma_\theta^2\|R_\gamma x\|^2}\exp\left\{-\frac{(x'\beta_0)^2 + \Delta_n^2}{\sigma_\theta^2\|R_\gamma x\|^2}\right\}\right] \ge E_\gamma\left[\frac{2^4\Delta_n^4}{\sigma_\theta^2\|x_\gamma\|^2}\exp\left\{-\frac{(x'\beta_0)^2 + \Delta_n^2}{\sigma_\theta^2\alpha_n\|x_\gamma\|^2}\right\}\right] = \frac{2^4\Delta_n^4}{(x'\beta_0)^2 + \Delta_n^2}\,E_\gamma\left[\frac{Z_\gamma}{p_n}\exp\left\{-\frac{Z_\gamma}{\alpha_n p_n}\right\}\right], \qquad (8)$$
where $Z_\gamma = \left[(x'\beta_0)^2 + \Delta_n^2\right]\big/\left(\sigma_\theta^2\|x_\gamma\|^2/p_n\right)$ and $\alpha_n$ is as in (A3). From part (b) of Lemma 2 and the continuous mapping theorem, we have $Z_\gamma \xrightarrow{p} \left[(x'\beta_0)^2 + \Delta_n^2\right]/(\sigma_\theta^2 c\alpha_\delta) > \Delta_n^2/(\sigma_\theta^2 c\alpha_\delta)$.
For a positive random variable Z and non-random positive numbers p, a and b, consider the following:
$$E\left[\frac{Z}{p}\exp\left(-\frac{Z}{\alpha p}\right)\right] \ge a\,P\left(\frac{Z}{p}\exp\left(-\frac{Z}{\alpha p}\right) \ge a\right) \ge a\,P\left(\frac{Z}{p} \ge \frac{a}{b},\ \exp\left(-\frac{Z}{\alpha p}\right) \ge ab\right) \qquad (9)$$
$$= a\,P\left(\frac{ap}{b} < Z < -\alpha p\log(ab)\right). \qquad (10)$$
Replace Z by $Z_\gamma$, p by $p_n$ and α by $\alpha_n$, and take $a = \Delta_n^2\exp\{-n\varepsilon_n^2/3\}/(\sigma_\theta^2 c\alpha_\delta)$ and $b = p_n\exp\{-n\varepsilon_n^2/3\}$. Then
$$-\alpha_n p_n\log(ab) = -\alpha_n p_n\log\left(\Delta_n^2 p_n\exp\{-2n\varepsilon_n^2/3\}/(\sigma_\theta^2 c\alpha_\delta)\right) \sim 2p_n\log\left(\Delta_n^2 p_n/(\sigma_\theta^2 c\alpha_\delta)\right)/3 > 2p_n/3$$
for sufficiently large n, and $ap_n/b = \Delta_n^2/(\sigma_\theta^2 c\alpha_\delta)$. Therefore the expression in (10) is greater than
$$\frac{\Delta_n^2}{\sigma_\theta^2 c\alpha_\delta}\,e^{-n\varepsilon_n^2/3}\,P\left(\frac{\Delta_n^2}{\sigma_\theta^2 c\alpha_\delta} \le Z_\gamma \le \frac{2}{3}p_n\right).$$
Note that $(x'\beta_0)^2 < \left(\sum_{j=1}^{p_n}|\beta_{0,j}|\right)^2 < K$, and the probability involved in the above expression can be shown to be larger than some positive constant p for sufficiently large n. Using these facts along with equation (8), we have $P(|(R_\gamma x)'\theta - x'\beta_0| < \Delta_n) > \exp\{-n\varepsilon_n^2/4\}$. Choosing $\Delta_n < \varepsilon_n^2/(4M)$, condition (c) follows.
References
Achlioptas, D. (2003). Database-friendly random projections: Johnson-Lindenstrauss with binary
coins. Journal of Computer and System Sciences, 66(4):671–687.
Candes, E. and Tao, T. (2007). The Dantzig selector: statistical estimation when p is much larger
than n. The Annals of Statistics, 35(6):2313–2351.
Candès, E. J., Li, X., Ma, Y., and Wright, J. (2011). Robust principal component analysis?
Journal of the ACM, 58(3):Art. 11, 37.
Candes, E. J. and Tao, T. (2005). Decoding by linear programming. IEEE Transactions on
Information Theory, 51(12):4203–4215.
Carvalho, C. M., Polson, N. G., and Scott, J. G. (2009). Handling sparsity via the horseshoe.
In International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of
Machine Learning Research, pages 73–80.
Chen, Y. and Dunson, D. B. (2017). Modular Bayes screening for high-dimensional predictors.
ArXiv preprint arXiv:1703.09906.
GTEx Consortium (2015). The genotype-tissue expression (GTEx) pilot analysis: Multitissue gene regulation in humans. Science, 348(6235):648–660.
Dasgupta, S. and Gupta, A. (2003). An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms, 22(1):60–65.
Davenport, M. A., Boufounos, P. T., Wakin, M. B., and Baraniuk, R. G. (2010). Signal processing with compressive measurements. IEEE Journal of Selected Topics in Signal Processing,
4(2):445–460.
Donoho, D. L. (2006). Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle
properties. Journal of the American Statistical Association, 96(456):1348–1360.
Fan, J., Samworth, R., and Wu, Y. (2009). Ultrahigh dimensional feature selection: Beyond the
linear model. Journal of Machine Learning Research, 10(Sep):2013–2038.
Fard, M. M., Grinberg, Y., Pineau, J., and Precup, D. (2012). Compressed least-squares regression
on sparse spaces. In Association for the Advancement of Artificial Intelligence, pages 1055–1060.
Golub, T. R., Slonim, D. K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J. P., Coller, H.,
Loh, M. L., Downing, J. R., Caligiuri, M. A., Bloomfield, C. D., and Lander, E. S. (1999).
Molecular classification of cancer: Class discovery and class prediction by gene expression
monitoring. Science, 286(5439):531–537.
Guhaniyogi, R. and Dunson, D. B. (2015). Bayesian compressed regression. Journal of the
American Statistical Association, 110(512):1500–1514.
Hoeting, J. A., Madigan, D., Raftery, A. E., and Volinsky, C. T. (1999). Bayesian model averaging:
A tutorial. Statistical Science. A Review Journal of the Institute of Mathematical Statistics,
14(4):382–417.
Ishwaran, H. and Rao, J. S. (2005). Spike and slab variable selection: Frequentist and Bayesian
strategies. The Annals of Statistics, 33(2):730–773.
Jiang, W. (2007). Bayesian variable selection for high dimensional generalized linear models:
Convergence rates of the fitted densities. The Annals of Statistics, 35(4):1487–1511.
Kabán, A. (2014). New bounds on compressive linear least squares regression. In Artificial
Intelligence and Statistics, pages 448–456.
Lappalainen, T., Sammeth, M., Friedländer, M. R., 't Hoen, P. A. C., Monlong, J., Rivas, M. A., Gonzalez-Porta, M., Kurbatova, N., Griebel, T., Ferreira, P. G., et al. (2013). Transcriptome and genome sequencing uncovers functional variation in humans. Nature, 501(7468):506.
Li, P., Hastie, T. J., and Church, K. W. (2006). Very sparse random projections. In Proceedings of
the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,
KDD ’06, pages 287–296.
Maillard, O. and Munos, R. (2009). Compressed least-squares regression. In Advances in Neural
Information Processing Systems, pages 1213–1221.
Pettenuzzo, D., Koop, G., Korobilis, D., et al. (2016). Bayesian compressed vector autoregressions. Working Papers 103, Brandeis University, Department of Economics and International Business School.
Scheetz, T. E., Kim, K.-Y. A., Swiderski, R. E., Philp, A. R., Braun, T. A., Knudtson, K. L.,
Dorrance, A. M., DiBona, G. F., Huang, J., Casavant, T. L., et al. (2006). Regulation of gene
expression in the mammalian eye and its relevance to eye disease. Proceedings of the National
Academy of Sciences, 103(39):14429–14434.
Thanei, G.-A., Heinze, C., and Meinshausen, N. (2017). Random projections for large-scale
regression. In Big and Complex Data Analysis, pages 51–68.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society. Series B. Methodological, 58(1):267–288.
Tibshirani, R., Bien, J., Friedman, J., Hastie, T., Simon, N., Taylor, J., and Tibshirani, R. J.
(2012). Strong rules for discarding predictors in lasso-type problems. Journal of the Royal
Statistical Society: Series B (Statistical Methodology), 74(2):245–266.
Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and
marginal densities. Journal of the American Statistical Association, 81(393):82–86.
Witten, D. M., Tibshirani, R., and Hastie, T. (2009). A penalized matrix decomposition, with
applications to sparse principal components and canonical correlation analysis. Biostatistics,
10(3):515–534.
Zeng, Y. and Breheny, P. (2017). The biglasso package: A memory-and computation-efficient
solver for lasso model fitting with big data in R. ArXiv preprint arXiv:1701.05936.
Zhang, C.-H. et al. (2010). Nearly unbiased variable selection under minimax concave penalty.
The Annals of Statistics, 38(2):894–942.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal
of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320.
arXiv:1712.02445v1 [] 6 Dec 2017
Supplementary Material
1 Additional Simulation Results
1.1 Results for Larger pn
Here we present the performance of the different methods with respect to mean square prediction error (MSPE) and empirical coverage probability (ECP) of 50% prediction intervals
(PI), along with the width of PIs for a larger choice of pn in each scheme.
Mean Square Prediction Error (MSPE). In Scheme I (see Figure 1) for pn = 3000,
TARP based methods yield the best results, with RIS-PCR having marginally better performance than RIS-RP. These methods are followed by SSR-Ridge and PCR. ISIS based methods and SSR-EN have overall comparable performance, with SSR-EN showing more stable results and ISIS-MCP having the minimum average MSPE among these three methods. The performance of these six methods is followed by BCR, which in turn is followed by RPCR. SSR-LASSO and SPCR have very poor performance in this scenario.
As in the simulation case in the main paper, TARP based methods have the best overall performance in Scheme II for $p_n = 2\times10^4$, immediately followed by PCR (see Figure 2). These three methods are followed by SSR-EN and SSR-Ridge. Unlike for $p_n = 10^4$, here
Figure 1: Box-plot of MSPE of eleven competing methods for pn = 3000 in Scheme I.
SSR-EN outperforms SSR-Ridge. RPCR is substantially worse, but is still better than SSR-LASSO, BCR and SPCR (in that order). ISIS based methods have very poor performance.
In Scheme III (see Figure 3), the performance of all the methods is similar, except for the SSR based methods. Among the SSR based methods, SSR-EN and SSR-LASSO have very similar results: SSR-EN has a lower average MSPE, while SSR-LASSO has a more stable performance. Unlike for $p_n = 5000$, ISIS based methods often yield large values of MSPE for $p_n = 10^4$.
The relative performance of SPCR and ISIS based methods is unsatisfactory in Scheme IV (see Figure 4(a)). Among the other methods (see Figure 4(b)), TARP based methods and PCR have the best performance, followed by BCR and SSR-EN. The relative performance of RPCR and SSR-LASSO improves significantly for $p_n = 2\times10^4$. These two methods outperform SSR-Ridge, and among them, RPCR yields much better results than SSR-LASSO.
Empirical Coverage Probability (ECP) and Prediction Interval (PI) Widths.
The relative performance of all the methods is quite similar to that observed in the main
Figure 2: Box-plot of MSPE of eleven competing methods for pn = 20000 in Scheme II.
Figure 3: Box-plot of MSPE of eleven competing methods for pn = 10000 in Scheme III.
paper (see Table 1 of the paper and Table 4) for all the simulation schemes. The detailed description is given below.
The average ECP of SPCR is closest to 0.5 in Scheme I, followed by RPCR, RIS-RP and SSR-EN, with RIS-RP showing the smallest average width among these methods. The sd of the ECPs and of the widths of the PIs is highest for PCR, indicating a lack of stability in its performance. RIS-PCR yields some undercoverage; however, the average width of the PI for RIS-PCR is the lowest among all the competing methods. SSR-Ridge has an average ECP of
(a) MSPE of all the methods.
(b) MSPE of selected methods.
Figure 4: Box-plot of MSPE of the competing methods for pn = 20000 in Scheme IV.
Table 4: Mean and standard deviation (sd) of empirical coverage probabilities of 50% prediction intervals, and mean and sd of width of 50% prediction intervals.

Average and standard deviation (in parentheses) of empirical coverage probability

Scheme, pn   ISIS-SCAD    ISIS-MCP     SSR-LASSO    SSR-EN       SSR-Ridge    PCR          SPCR         RPCR         BCR          RIS-RP       RIS-PCR
I, 3×10^3    0.307(.052)  0.306(.050)  0.434(.059)  0.536(.043)  0.631(.048)  0.437(.288)  0.498(.059)  0.471(.055)  0.283(.053)  0.464(.052)  0.382(.054)
II, 2×10^4   0.334(.051)  0.333(.048)  0.422(.055)  0.577(.045)  0.637(.049)  0.365(.260)  0.505(.059)  0.454(.052)  0.280(.058)  0.488(.056)  0.312(.055)
III, 10^4    0.495(.053)  0.494(.053)  0.502(.058)  0.653(.033)  0.671(.144)  0.503(.058)  0.498(.052)  0.498(.052)  0.550(.054)  0.487(.055)  0.499(.054)
IV, 2×10^4   0.432(.063)  0.433(.060)  0.485(.099)  0.658(.034)  0.664(.108)  0.489(.308)  0.492(.052)  0.483(.058)  0.494(.051)  0.675(.060)  0.408(.052)
Average and standard deviation (in parentheses) of width of the 50% prediction interval

Scheme, pn   ISIS-SCAD    ISIS-MCP     SSR-LASSO    SSR-EN       SSR-Ridge    PCR           SPCR          RPCR         BCR          RIS-RP       RIS-PCR
I, 3×10^3    4.096(.522)  3.942(.531)  6.148(.410)  7.388(.543)  8.098(.484)  8.082(7.354)  8.441(.608)   7.023(.492)  5.720(.411)  5.192(.152)  4.090(.335)
II, 2×10^4   5.964(.554)  5.954(.554)  5.276(.525)  7.039(.479)  8.604(.674)  6.589(7.380)  7.649(.929)   6.270(.565)  5.687(.465)  4.594(.204)  2.635(.209)
III, 10^4    1.483(.174)  1.452(.144)  1.36(.072)   1.942(.122)  5.439(2.018) 1.362(.072)   1.373(.070)   1.343(.079)  1.551(.073)  1.373(.070)  1.383(.077)
IV, 2×10^4   18.00(2.876) 17.98(2.789) 3.428(2.522) 1.621(.167)  4.087(.634)  2.895(3.808)  32.430(7.647) 1.925(.234)  1.794(.144)  2.066(.142)  1.070(.071)
about 0.63 and a high width of the PI. ISIS based methods and BCR have lower coverage probabilities on average.
For Scheme II, RIS-RP and SPCR have the best overall results, with SPCR having higher PI widths than RIS-RP. The ECPs of SSR-LASSO, SSR-EN and RPCR are closer to 0.5 than those of the remaining methods, with SSR-LASSO having a lower average width of the PIs. The average ECP of PCR is close to 0.5; however, there are huge fluctuations in the widths of its PIs (sd = 7.4). Among the other methods, SSR-Ridge shows higher coverage with a comparatively large width of PI. The remaining four methods show undercoverage for both choices of pn. Among these four methods, RIS-PCR attains the lowest average width of PI.
All the methods except SSR-EN and SSR-Ridge have comparable performance in terms of both criteria in Scheme III. SSR-EN and SSR-Ridge tend to have higher ECP and larger width of PI in this setup.
SSR-LASSO, RPCR and BCR yield the best performance in Scheme IV, and BCR achieves the lowest average width of PI among them. Although the average ECP of SPCR is close to 0.5, it possesses a much higher width than all the other methods. SSR-Ridge, SSR-EN and RIS-RP show higher ECP in general. In terms of the PI widths of these three methods, the performance of RIS-RP is the best, followed by SSR-EN. In Scheme IV as well, PCR has a higher sd of the width of PIs, indicating instability in its performance. ISIS-SCAD, ISIS-MCP and RIS-PCR show some undercoverage, and the averages of their ECPs are all around 0.4. RIS-PCR yields the lowest overall width of PI among all the methods. The average width of PI for the ISIS based methods and SPCR is much higher than for all the other methods in Scheme IV.
1.2 Effect of sample size on different competing methods
We consider three different choices of n, viz., 150, 200, 250. The results for n = 200 are presented in the paper and in the previous sections of the Supplement. The results corresponding to the two other choices of n are presented in this section. The relative performance of the competing methods for different values of n remains similar for the two choices of pn; therefore, we present results corresponding to the higher values of pn only.
Comparison with respect to MSPE. In Scheme I (see Figures 5(a)-5(b)), SSR-Ridge, SSR-EN, PCR and TARP based methods work well for the lower sample size. Among these methods, RIS-RP has the best overall performance, immediately followed by RIS-PCR.
(a) n = 150
(b) n = 250
Figure 5: Box-plot of MSPE of the competing methods for pn = 3 × 10^3 in Scheme I.
Among the other methods, BCR and SSR-LASSO yield better performance than the rest. The overall scenario changes considerably for the higher sample size, n = 250. ISIS-MCP outperforms the other methods with respect to average MSPE, closely followed by RIS-RP, ISIS-SCAD and RIS-PCR, which also have satisfactory performance. However, the ISIS based methods have much larger dispersion than the TARP based methods, indicating a lack of stability in their results. The other methods do not show much improvement as the sample size increases. Among these methods, SSR-Ridge yields relatively better results, followed by SSR-EN and PCR.
The relative performance of all these methods does not vary much with the sample size in Scheme II (see Figures 6(a)-6(b)). For both choices of n, TARP outperforms all the other methods. SSR based methods show reasonable results, although the difference between their performance and that of TARP increases with the sample size. The other methods do not perform well for either choice of n. Notably, PCR yields a low median MSPE in general; however, it occasionally results in very high values of MSPE. For n = 150, the
(a) n = 150
(b) n = 250
Figure 6: Box-plot of MSPE of the competing methods for pn = 2 × 104 in Scheme II.
MSPE is as large as 1038, while for n = 250 the MSPE increases up to 3089.
Projection based methods work extremely well in Scheme III for both choices of n (see Figures 7(a)-7(b)). SSR based methods fail to perform well in this scheme, with SSR-Ridge showing the largest overall MSPE. ISIS based methods show low MSPE in general; however, they occasionally yield higher values of MSPE.
As the ISIS based methods and SPCR yield extremely large values of MSPE in Scheme IV, we avoid presenting further results on those methods (see Figures 8(a)-8(b)). TARP yields the best performance for both choices of n in this scheme. SSR-EN also gives comparable performance. Among the other methods, SSR-LASSO and BCR yield reasonable results. The other three methods tend to have higher values of MSPE in this scheme. As in Scheme II, PCR tends to have extremely large MSPEs. For n = 250, the highest MSPE obtained by PCR is around 125, whereas the average MSPE of all the other methods is below 7.
(a) n = 150
(b) n = 250
Figure 7: Box-plot of MSPE of the competing methods for pn = 104 in Scheme III.
Table 5: Mean and standard deviation (in parentheses) of empirical coverage probabilities of 50% prediction intervals.

             n = 150                                              n = 250
Methods      I, 3×10^3    II, 2×10^4   III, 10^4    IV, 2×10^4    I, 3×10^3    II, 2×10^4   III, 10^4    IV, 2×10^4
ISIS-SCAD    0.290(.060)  0.340(.068)  0.488(.064)  0.429(.057)   0.344(.060)  0.349(.057)  0.507(.061)  0.447(.050)
ISIS-MCP     0.294(.052)  0.336(.070)  0.489(.063)  0.427(.058)   0.345(.065)  0.354(.059)  0.503(.061)  0.443(.053)
SSR-LASSO    0.406(.059)  0.386(.053)  0.494(.064)  0.660(.031)   0.441(.052)  0.436(.050)  0.502(.061)  0.656(.049)
SSR-EN       0.658(.035)  0.660(.035)  0.655(.031)  0.657(.030)   0.656(.031)  0.661(.035)  0.657(.033)  0.656(.030)
SSR-Ridge    0.660(.031)  0.652(.034)  0.652(.032)  0.656(.030)   0.661(.031)  0.660(.034)  0.659(.032)  0.657(.033)
PCR          0.407(.200)  0.450(.285)  0.494(.064)  0.484(.287)   0.421(.193)  0.416(.275)  0.502(.061)  0.504(.316)
SPCR         0.507(.064)  0.499(.055)  0.494(.064)  0.490(.056)   0.491(.054)  0.497(.050)  0.502(.061)  0.501(.054)
RPCR         0.487(.052)  0.475(.056)  0.494(.065)  0.490(.053)   0.489(.057)  0.484(.051)  0.502(.061)  0.494(.058)
BCR          0.519(.056)  0.353(.062)  0.626(.061)  0.525(.058)   0.555(.057)  0.353(.054)  0.557(.059)  0.510(.052)
RIS-RP       0.508(.063)  0.460(.059)  0.492(.065)  0.723(.050)   0.598(.058)  0.504(.057)  0.499(.059)  0.704(.047)
RIS-PCR      0.434(.062)  0.270(.049)  0.508(.066)  0.395(.058)   0.549(.054)  0.338(.051)  0.510(.060)  0.420(.052)
Empirical Coverage Probability (ECP) and Prediction Interval (PI) Widths.
Tables 5 and 6 provide the means and standard deviations (sd) of empirical coverage probabilities (ECP) of 50% prediction intervals (PI) and the widths of the PIs, respectively,
(a) n = 150
(b) n = 250
Figure 8: Box-plot of MSPE of the competing methods for pn = 2 × 10^4 in Scheme IV.
for two different choices of n.
As n increases, ISIS based methods tend to improve rapidly with respect to both measures. The other methods show little difference as n increases. ISIS based methods show undercoverage in all the simulation schemes except Scheme III. However, their PI widths are low in general, except in Scheme IV, where they result in the highest average width. SSR-EN and SSR-Ridge tend to have higher coverage probabilities and higher average PI widths. For most of the schemes, these two methods possess the highest average width. PCR tends to show undercoverage in Schemes I and II, and yields reasonable ECPs for the other two schemes. However, the widths of the PIs for PCR attain the highest variance in all the schemes except Scheme III, indicating a lack of stability in the results. SPCR and RPCR yield the best performance in terms of ECP. The width of the PI is higher on average for SPCR than for RPCR. For Scheme IV, SPCR yields extremely large widths of PI. Except for Scheme II, BCR and SSR-LASSO perform reasonably well in terms of ECP. However, these two methods result in some over-coverage in Schemes III and IV. With respect to PI width, SSR-LASSO outperforms BCR. TARP based methods yield the lowest PI widths in general. RIS-RP yields an average ECP close to 0.5 in most of the cases, and occasionally tends to show high coverage probabilities. However, RIS-PCR tends to show low coverage probabilities in general, especially for n = 150.
2 Predictive Calibration in Binary Response Datasets
Apart from measuring the misclassification rates and the area under the ROC curve, we validate TARP in terms of its ability to quantify uncertainty in real datasets with binary responses. To this end, we partition the interval [0, 1] into ten equal sub-intervals, viz., [0, 0.1), [0.1, 0.2) and so on, and assign a test data point (x, y)_{i,new} to the kth class if the predictive probability of y_{i,new} falls in the kth sub-interval. Next, we consider the squared difference between the empirical proportion of y_{i,new} = 1 among the data points classified into a given interval and the midpoint of that interval, and take the mean of these squared differences (MSD) over all the intervals. If a method is well calibrated, then the MSD will be small.
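A minimal sketch of this calibration measure (the function and variable names are ours; the bins follow the ten sub-intervals described above, and empty bins are skipped):

```python
import numpy as np

def msd_calibration(p_pred, y, n_bins=10):
    """Mean squared difference between empirical frequency and bin midpoint."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    sq = []
    for k in range(n_bins):
        in_bin = (p_pred >= edges[k]) & (p_pred < edges[k + 1])
        if k == n_bins - 1:
            in_bin |= p_pred == 1.0      # include the right endpoint in the last bin
        if in_bin.any():
            mid = (edges[k] + edges[k + 1]) / 2
            sq.append((y[in_bin].mean() - mid) ** 2)
    return np.mean(sq)

rng = np.random.default_rng(3)
p = rng.uniform(size=500)
y = (rng.uniform(size=500) < p).astype(float)   # perfectly calibrated probabilities
print(msd_calibration(p, y))                    # small MSD for a calibrated method
```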
The following table shows means and standard deviations of MSDs of eleven competing
methods for Golub and GTEx datasets.
Table 7 indicates that the competing methods do not show much difference in the MSD of empirical and predictive probabilities. ISIS based methods, PCR, RPCR and TARP perform relatively well compared to the other methods. Among these methods, RIS-PCR and RIS-RP have the lowest MSD for Golub and GTEx, respectively. The relative performance of the SSR based methods is deficient in terms of MSD as well. SPCR results in a high MSD for both datasets too.
Table 6: Mean and standard deviation (in parentheses) of width of 50% prediction intervals.

             n = 150                                              n = 250
Methods      I, 3×10^3    II, 2×10^4   III, 10^4    IV, 2×10^4    I, 3×10^3    II, 2×10^4   III, 10^4    IV, 2×10^4
ISIS-SCAD    4.39(.50)    6.13(.85)    7.87(44.45)  13.76(2.60)   2.99(.87)    5.87(.54)    1.49(.25)    9.67(4.11)
ISIS-MCP     4.41(.54)    6.12(.86)    1.46(.30)    14.20(2.24)   2.78(.95)    5.90(.55)    1.40(.14)    9.71(4.01)
SSR-LASSO    5.80(.41)    4.95(.38)    1.36(.08)    3.92(.40)     6.45(.42)    5.49(.50)    1.36(.06)    3.76(.47)
SSR-EN       9.66(.77)    8.87(.79)    3.35(.69)    2.65(.34)     9.35(.78)    8.11(.74)    3.27(.53)    2.67(.38)
SSR-Ridge    9.31(.75)    9.64(1.19)   5.39(2.05)   6.32(.99)     8.12(.64)    8.83(.95)    5.73(1.97)   4.49(.63)
PCR          5.64(2.89)   7.97(8.33)   1.36(.08)    2.60(2.82)    5.91(2.97)   6.77(6.41)   1.36(.06)    2.82(3.53)
SPCR         8.44(.61)    8.22(.80)    1.36(.08)    31.08(9.79)   8.31(.53)    7.93(.70)    1.36(.06)    30.21(10.41)
RPCR         7.76(.50)    7.73(.64)    1.36(.08)    2.82(.49)     7.72(.45)    7.54(.54)    1.36(.06)    2.75(.46)
BCR          7.44(.48)    6.28(.52)    1.81(.07)    2.74(.23)     8.18(.42)    5.97(.38)    1.55(.05)    2.05(.14)
RIS-RP       6.12(.26)    4.91(.22)    1.35(.08)    2.46(.16)     6.26(.21)    4.35(.17)    1.36(.06)    1.97(.13)
RIS-PCR      5.81(.33)    2.53(.20)    1.40(.08)    1.09(.08)     6.10(.37)    2.63(.20)    1.39(.06)    0.98(.06)
Table 7: Mean and standard deviation (in parentheses) of mean square differences (MSD) of empirical and predictive probabilities.

Dataset   ISIS-SCAD    ISIS-MCP     SSR-LASSO    SSR-EN       SSR-Ridge    PCR          SPCR         RPCR         BCR          RIS-RP       RIS-PCR
Golub     2.605(.036)  2.606(.035)  2.804(.000)  2.804(.000)  2.742(.028)  2.589(.012)  3.429(.073)  2.587(.017)  2.886(.032)  2.611(.045)  2.555(.044)
GTEx      2.784(.000)  2.784(.000)  3.325(.000)  3.325(.000)  3.325(.000)  2.784(.000)  3.216(.102)  2.784(.000)  2.873(.007)  2.782(.001)  2.783(.001)

3 Mathematical Details
Proof of Lemma 2 a. Consider the conditional expectation and variance of $\|R_\gamma x\|^2$ given (γ, x):
$$E\left(\|R_\gamma x\|^2 \mid \gamma\right) = \|x_\gamma\|^2, \qquad \mathrm{var}\left(\|R_\gamma x\|^2 \mid \gamma\right) = 4\|x_\gamma\|^2\left\{1 + \left(\frac{1}{\psi} - 1\right)\frac{\sum_{j=1}^{p_\gamma} x_{\gamma,j}^4}{2\|x_\gamma\|^2}\right\},$$
where $x_\gamma$ includes the regressors j for which $\gamma_j = 1$. The detailed proof is given in Result 1 below.
Next, the conditional expectation of $\|R_\gamma x\|^2$ given x is
$$E_\gamma\left[E\left(\|R_\gamma x\|^2 \mid \gamma\right)\right] = E_\gamma\left(\sum_j x_j^2 I(\gamma_j = 1)\right) = c\sum_j x_j^2|r_{x_j,y_n}|^\delta, \qquad (1)$$
where c > 0 is the proportionality constant. Also, the conditional variance of $\|R_\gamma x\|^2$ given x is
$$\mathrm{var}_\gamma\left[E\left(\|R_\gamma x\|^2 \mid \gamma\right)\right] + E_\gamma\left[\mathrm{var}\left(\|R_\gamma x\|^2 \mid \gamma\right)\right]. \qquad (2)$$
We consider the two terms in (2) separately:
$$\mathrm{var}_\gamma\left[E\left(\|R_\gamma x\|^2 \mid \gamma\right)\right] = \mathrm{var}_\gamma\left(\sum_j x_j^2 I(\gamma_j = 1)\right) = c\sum_j x_j^4|r_{x_j,y_n}|^\delta\left(1 - c|r_{x_j,y_n}|^\delta\right) \le p_n, \qquad (3)$$
as, given x, the $\gamma_j$'s are independent, each $|x_j| \le 1$, and $q_j = c|r_j|^\delta < 1$. Again,
$$E_\gamma\left[\mathrm{var}\left(\|R_\gamma x\|^2 \mid \gamma\right)\right] = E_\gamma\left[4\|x_\gamma\|^2\left\{1 + \left(\frac{1}{\psi} - 1\right)\frac{\sum_{j=1}^{p_\gamma} x_{\gamma,j}^4}{2\|x_\gamma\|^2}\right\}\right] \le c\,E_\gamma\left(\|x_\gamma\|^2\right) \le c\sum_j x_j^2|r_{x_j,y_n}|^\delta, \qquad (4)$$
for some constant c, as $\sum_{j=1}^{p_\gamma} x_{\gamma,j}^4 < \|x_\gamma\|^2$.
Therefore, from (1), (3) and (4) it can be shown that the expectation of $\|R_\gamma x\|^2/p_n$ converges to the limit $c\alpha_\delta$, and the variance of the same converges to 0.
Proof of Lemma 2 b. The proof follows from observing that
$$E_\gamma\left(\|x_\gamma\|^2\right) = c\sum_j x_j^2|r_{x_j,y_n}|^\delta \quad\text{and}\quad \mathrm{var}_\gamma\left(\|x_\gamma\|^2\right) = c\sum_j x_j^4|r_{x_j,y_n}|^\delta\left(1 - c|r_{x_j,y_n}|^\delta\right) \le p_n.$$
Therefore it can be shown that the expectation of $\|x_\gamma\|^2/p_n$ converges to the limit $c\alpha_\delta$, and the variance of the same converges to 0.
Result 1. Consider a random matrix $R_\gamma$ which depends on another random vector γ distributed as in (2). Then the conditional distribution of $R_\gamma$ satisfies the following:
a. $E\left(\|R_\gamma x\|^2 \mid \gamma\right) = \|x_\gamma\|^2$, and
b. $\mathrm{var}\left(\|R_\gamma x\|^2 \mid \gamma\right) = 4\|x_\gamma\|^2\left\{1 + (\psi^{-1} - 1)\sum_{j=1}^{p_\gamma} x_{\gamma,j}^4\big/\left(2\|x_\gamma\|^2\right)\right\}$.
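The first-moment identity in Result 1 a is easy to probe numerically for a sparse random projection of the Achlioptas/Li-et-al. type. The sketch below assumes entries $r_{i,j}$ taking the values $\pm\psi^{-1/2}$ with probability $\psi/2$ each and 0 otherwise, so that $E(r_{i,j}^2) = 1$; the exact row normalization used in (2)–(3) of the paper is not reproduced here, so the simulated mean is $m_n\|x_\gamma\|^2$ before dividing by $m_n$:

```python
import numpy as np

rng = np.random.default_rng(4)
p_gamma, m_n, psi = 300, 25, 0.2
x_gamma = rng.uniform(-1, 1, p_gamma)

def sample_R():
    # entries: +/- psi^{-1/2} w.p. psi/2 each, 0 w.p. 1 - psi
    z = rng.choice([-1.0, 0.0, 1.0], size=(m_n, p_gamma), p=[psi/2, 1-psi, psi/2])
    return z / np.sqrt(psi)

vals = np.array([np.linalg.norm(sample_R() @ x_gamma) ** 2 for _ in range(4000)])
print(vals.mean() / m_n, (x_gamma ** 2).sum())   # agree up to Monte Carlo error
```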
Proof of part a. Observe that
$$\|R_\gamma x\|^2 = \left(\sum_j r_{1,j}\gamma_j x_j,\ \sum_j r_{2,j}\gamma_j x_j,\ \ldots,\ \sum_j r_{m_n,j}\gamma_j x_j\right)\left(\sum_j r_{1,j}\gamma_j x_j,\ \ldots,\ \sum_j r_{m_n,j}\gamma_j x_j\right)'$$
$$= \left(\sum_j r_{1,j}\gamma_j x_j\right)^2 + \left(\sum_j r_{2,j}\gamma_j x_j\right)^2 + \cdots + \left(\sum_j r_{m_n,j}\gamma_j x_j\right)^2. \qquad (5)$$
Now
$$E\left(\sum_j r_{1,j}\gamma_j x_j\right)^2 = E\left\{\sum_j r_{1,j}^2\gamma_j x_j^2 + \sum_{j\ne j'} r_{1,j}r_{1,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right\} = 2\sum_j \gamma_j x_j^2 = 2\|x_\gamma\|^2,$$
as $E(r_{i,j}^2) = 1$ and $E(r_{i,j}r_{i,j'}) = 0$ for $i = 1, 2, \ldots, m_n$, $j, j' = 1, 2, \ldots, p_n$ and $j \ne j'$.
Proof of part b. From (5) we have
$$\mathrm{var}\left(\|R_\gamma x\|^2 \mid \gamma\right) = \mathrm{var}\left\{\sum_i\left(\sum_j r_{i,j}\gamma_j x_j\right)^2\right\} = \sum_i \mathrm{var}\left(\sum_j r_{i,j}\gamma_j x_j\right)^2 + \sum_{i\ne i'}\mathrm{cov}\left\{\left(\sum_j r_{i,j}\gamma_j x_j\right)^2,\ \left(\sum_j r_{i',j}\gamma_j x_j\right)^2\right\}. \qquad (6)$$
We will consider each term of (6) one by one. Consider the first term. Note that
$$\mathrm{var}\left(\sum_j r_{i,j}\gamma_j x_j\right)^2 = \mathrm{var}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2 + \sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right\}$$
$$= \mathrm{var}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2\right\} + \mathrm{var}\left\{\sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right\} + \mathrm{cov}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2,\ \sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right\}.$$
Consider the first of these terms:
$$\mathrm{var}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2\right\} = \sum_j \mathrm{var}\left(r_{i,j}^2\gamma_j x_j^2\right) + \sum_{j\ne j'}\mathrm{cov}\left(r_{i,j}^2\gamma_j x_j^2,\ r_{i,j'}^2\gamma_{j'}x_{j'}^2\right)$$
$$= \sum_j \gamma_j x_j^4\,\mathrm{var}\left(r_{i,j}^2\right) + \sum_{j\ne j'}\gamma_j x_j^2\,\gamma_{j'}x_{j'}^2\,\mathrm{cov}\left(r_{i,j}^2, r_{i,j'}^2\right) = \sum_j \gamma_j x_j^4\left\{E\left(r_{i,j}^4\right) - \left[E\left(r_{i,j}^2\right)\right]^2\right\} = 2\left(\frac{1}{\psi} - 1\right)\sum_j \gamma_j x_j^4.$$
Again,
$$\mathrm{var}\left\{\sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right\} = E\left\{\left(\sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right)^2\right\}$$
$$= \sum_{j\ne j'}\gamma_j\gamma_{j'}x_j^2x_{j'}^2\,E\left(r_{i,j}^2r_{i,j'}^2\right) + \sum_{\substack{(j,j')\ne(k,k')\\ j\ne j',\ k\ne k'}}\gamma_j\gamma_{j'}\gamma_k\gamma_{k'}\,x_jx_{j'}x_kx_{k'}\,E\left(r_{i,j}r_{i,j'}r_{i,k}r_{i,k'}\right) = 4\sum_{j\ne j'}\gamma_j\gamma_{j'}x_j^2x_{j'}^2,$$
as the other term will be zero. Next,
$$\mathrm{cov}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2,\ \sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_jx_{j'}\right\} = \sum_j\sum_{k\ne k'}\gamma_j x_j^2\,\gamma_k\gamma_{k'}x_kx_{k'}\,\mathrm{cov}\left(r_{i,j}^2,\ r_{i,k}r_{i,k'}\right) = 0.$$
Therefore the first term in (6) is
$$\sum_i \mathrm{var}\left(\sum_j r_{i,j}\gamma_j x_j\right)^2 = \frac{\kappa}{m_n}\left[(n-1)\sum_j \gamma_j x_j^4 + \sum_{j\ne j'}\gamma_j\gamma_{j'}x_j^2x_{j'}^2\right] = 2\left(\frac{1}{\psi} - 1\right)\sum_j \gamma_j x_j^4 + 2\left(\sum_j \gamma_j x_j^2\right)^2. \qquad (7)$$
Now consider the last term in (6):
$$\mathrm{cov}\left\{\left(\sum_j r_{i,j}\gamma_j x_j\right)^2,\ \left(\sum_j r_{i',j}\gamma_j x_j\right)^2\right\} = \mathrm{cov}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2 + \sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_j^2x_{j'}^2,\ \sum_k r_{i',k}^2\gamma_k x_k^2 + \sum_{k\ne k'} r_{i',k}r_{i',k'}\gamma_k\gamma_{k'}x_k^2x_{k'}^2\right\}$$
$$= \mathrm{cov}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2,\ \sum_k r_{i',k}^2\gamma_k x_k^2\right\} + \mathrm{cov}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2,\ \sum_{k\ne k'} r_{i',k}r_{i',k'}\gamma_k\gamma_{k'}x_k^2x_{k'}^2\right\}$$
$$+ \mathrm{cov}\left\{\sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_j^2x_{j'}^2,\ \sum_k r_{i',k}^2\gamma_k x_k^2\right\} + \mathrm{cov}\left\{\sum_{j\ne j'} r_{i,j}r_{i,j'}\gamma_j\gamma_{j'}x_j^2x_{j'}^2,\ \sum_{k\ne k'} r_{i',k}r_{i',k'}\gamma_k\gamma_{k'}x_k^2x_{k'}^2\right\}. \qquad (8)$$
Consider the first term of (8):
$$\mathrm{cov}\left\{\sum_j r_{i,j}^2\gamma_j x_j^2,\ \sum_k r_{i',k}^2\gamma_k x_k^2\right\} = \sum_j\sum_k \gamma_j x_j^2\,\gamma_k x_k^2\,\mathrm{cov}\left(r_{i,j}^2,\ r_{i',k}^2\right) = 0.$$
Similarly, it can be shown that all the other terms in (8) are zero. Combining the above result and (7), the proof follows.
| 10 |
arXiv:1609.03710v4 [] 18 Sep 2017
ARITHMETICAL RANK OF BINOMIAL IDEALS
ANARGYROS KATSABEKIS
Abstract. In this paper, we investigate the arithmetical rank of a binomial
ideal J. We provide lower bounds for the binomial arithmetical rank and the
J-complete arithmetical rank of J. Special attention is paid to the case where
J is the binomial edge ideal of a graph. We compute the arithmetical rank of
such an ideal in various cases.
1. Introduction
Consider the polynomial ring K[x1 , . . . , xm ] in the variables x1 , . . . , xm over a
field K. For the sake of simplicity we will denote by $x^u$ the monomial $x_1^{u_1}\cdots x_m^{u_m}$ of K[x1, . . . , xm], with $u = (u_1, \ldots, u_m) \in \mathbb{N}^m$, where N stands for the set of non-negative integers. A binomial in the sense of [12, Chapter 8] is a difference of two monomials, i.e. it is of the form $x^u - x^v$. A binomial ideal is an ideal generated by
binomials.
Toric ideals serve as important examples of binomial ideals. Let A = {a1 , . . . , am }
be a subset of $\mathbb{Z}^n$. The toric ideal $I_A$ is the kernel of the K-algebra homomorphism $\phi : K[x_1, \ldots, x_m] \to K[t_1^{\pm 1}, \ldots, t_n^{\pm 1}]$ given by
$$\phi(x_i) = \mathbf{t}^{a_i} = t_1^{a_{i,1}}\cdots t_n^{a_{i,n}} \quad\text{for all } i = 1, \ldots, m,$$
where $a_i = (a_{i,1}, \ldots, a_{i,n})$.
We grade K[x1, . . . , xm] by the semigroup $\mathbb{N}A := \{l_1a_1 + \cdots + l_ma_m \mid l_i \in \mathbb{N}\}$, setting $\deg_A(x_i) = a_i$ for i = 1, . . . , m. The A-degree of a monomial $x^u$ is defined by
$$\deg_A(x^u) = u_1a_1 + \cdots + u_ma_m \in \mathbb{N}A.$$
A polynomial F ∈ K[x1, . . . , xm] is A-homogeneous if the A-degrees of all the monomials that occur in F are the same. An ideal is A-homogeneous if it is generated by A-homogeneous polynomials. The ideal $I_A$ is generated by all the binomials $x^u - x^v$ such that $\deg_A(x^u) = \deg_A(x^v)$ (see [11, Lemma 4.1]); thus $I_A$ is A-homogeneous.
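A standard small example, which we add to fix ideas: for $A = \{(1,0), (1,1), (1,2)\} \subset \mathbb{Z}^2$, the map φ sends $x_1 \mapsto t_1$, $x_2 \mapsto t_1t_2$, $x_3 \mapsto t_1t_2^2$, and one checks that
$$I_A = (x_1x_3 - x_2^2),$$
a binomial ideal in which, for instance, $\deg_A(x_1x_3) = (2,2) = \deg_A(x_2^2)$, so the generator is indeed A-homogeneous.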
Let J ⊂ K[x1 , . . . , xm ] be a binomial ideal. There exist a positive integer n and
a vector configuration A = {a1 , . . . , am } ⊂ Zn such that J ⊂ IA , see for instance
[7, Theorem 1.1]. We say that a polynomial F = c1 M1 + . . . + cs Ms ∈ J, where
ci ∈ K and M1 , . . . , Ms are monomials, is J-complete if Mi − Ml ∈ J for every
1 ≤ i < l ≤ s. Clearly every J-complete polynomial F is also A-homogeneous.
Computing the least number of polynomial equations defining an algebraic set is
a classical problem in Algebraic Geometry which goes back to Kronecker [9]. This
problem is equivalent, over an algebraically closed field, with the corresponding
problem in Commutative Algebra of the determination of the smallest integer s for
2010 Mathematics Subject Classification. 13F20, 14M12, 05C25.
Key words and phrases. Arithmetical rank; binomial ideals; graphs; indispensable monomials.
which there exist polynomials F1 , . . . , Fs in J such that rad(J) = rad(F1 , . . . , Fs ).
The number s is commonly known as the arithmetical rank of J and will be denoted
by ara(J). Since J is generated by binomials, it is natural to define the binomial
arithmetical rank of J, denoted by bar(J), as the smallest integer s for which there
exist binomials B1 , . . . , Bs in J such that rad(J) = rad(B1 , . . . , Bs ). Furthermore
we can define the J-complete arithmetical rank of J, denoted by arac (J), as the
smallest integer s for which there exist J-complete polynomials F1 , . . . , Fs in J such
that rad(J) = rad(F1 , . . . , Fs ). Finally we define the A-homogeneous arithmetical
rank of J, denoted by araA (J), as the smallest integer s for which there exist
A-homogeneous polynomials F1 , . . . , Fs in J such that rad(J) = rad(F1 , . . . , Fs ).
From the definitions and [2, Corollary 3.3.3] we deduce the following inequalities:
cd(J) ≤ ara(J) ≤ araA (J) ≤ arac (J) ≤ bar(J)
where cd(J) is the cohomological dimension of J.
In section 2 we introduce the simplicial complex ∆J and use combinatorial invariants of the aforementioned complex to provide lower bounds for the binomial
arithmetical rank and the J-complete arithmetical rank of J. In particular we prove
that $\mathrm{bar}(J) \ge \delta(\Delta_J)_{\{0,1\}}$ and $\mathrm{ara}_c(J) \ge \delta(\Delta_J)_{\Omega}$; see Theorem 2.6.
In section 3 we study the arithmetical rank of the binomial edge ideal JG of a graph G. This class of ideals naturally generalizes the determinantal ideal generated by the 2-minors of the matrix
$$\begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ x_{n+1} & x_{n+2} & \cdots & x_{2n} \end{pmatrix}.$$
We prove (see Theorem 3.3) that, for a binomial edge ideal JG , both the binomial
arithmetical rank and the JG -complete arithmetical rank coincide with the number
of edges of G. If G is the complete graph on the vertex set {1, . . . , n}, then, from [3,
Theorem 2], the arithmetical rank of JG equals 2n − 3. It is still an open problem
to compute ara(JG ) when G is not the complete graph. We show that ara(JG ) ≥
n + l − 2, where n is the number of vertices of G and l is the vertex connectivity of
G. Furthermore we prove that in several cases ara(JG ) = cd(JG ) = n + l − 2, see
Theorem 3.7, Theorem 3.9 and Theorem 3.13.
2. Lower bounds
First we will use the notion of indispensability to introduce the simplicial complex
∆J . Let J ⊂ K[x1 , . . . , xm ] be a binomial ideal containing no binomials of the form
xu − 1, where u 6= 0. A binomial B = M − N ∈ J is called indispensable of J if
every system of binomial generators of J contains B or −B, while a monomial M
is called indispensable of J if every system of binomial generators of J contains a
binomial B such that M is a monomial of B. Let MJ be the ideal generated by all
monomials M for which there exists a nonzero M − N ∈ J. By [7, Proposition 1.5]
the set G(MJ ) of indispensable monomials of J is the unique minimal generating
set of MJ .
The support of a monomial xu of K[x1 , . . . , xm ] is supp(xu ) := {i|xi divides xu }.
Let T be the set of all E ⊂ {1, . . . , m} for which there exists an indispensable
monomial M of J such that E = supp(M ). Let Tmin denote the set of minimal
elements of T .
Definition 2.1. We associate to J a simplicial complex ∆J with vertices the elements of Tmin. Let T = {E1, . . . , Ek} be a subset of Tmin; then T ∈ ∆J if there exist monomials Mi, 1 ≤ i ≤ k, such that supp(Mi) = Ei and Mi − Ml ∈ J for every 1 ≤ i < l ≤ k.
Next we will study the connection between the radical of J and ∆J . The induced
subcomplex ∆′ of ∆J by certain vertices V ⊂ Tmin is the subcomplex of ∆J with
vertices V and T ⊂ V is a simplex of the subcomplex ∆′ if T is a simplex of ∆J . A
subcomplex H of ∆J is called a spanning subcomplex if both have exactly the same
set of vertices.
Let F be a polynomial in K[x1 , . . . , xm ]. We associate to F the induced subcomplex ∆J (F ) of ∆J consisting of those vertices Ei ∈ Tmin with the property:
there exists a monomial Mi in F such that Ei = supp(Mi ). The next theorem
provides a necessary condition under which a set of polynomials in the binomial
ideal J generates the radical of J up to radical.
Proposition 2.2. Let K be any field. If rad(J) = rad(F1, . . . , Fs) for some polynomials F1, . . . , Fs in J, then $\cup_{i=1}^s \Delta_J(F_i)$ is a spanning subcomplex of ∆J.
Proof. Let E = supp(xu ) ∈ Tmin , where B = xu − xv ∈ J and xu is an indispensable monomial of J. We will show that there exists a monomial M in some
Fl , 1 ≤ l ≤ s, such that E = supp(M ). Since rad(J) = rad(F1 , . . . , Fs ), there
is a power B r , r ≥ 1, which belongs to the ideal generated by F1 , . . . , Fs . Thus
there is a monomial M in some Fl dividing the monomial (xu )r and therefore
supp(M ) ⊆ supp(xu ). But Fl ∈ J and J is generated by binomials, so there exists
xz − xw ∈ J such that xz divides M . Since xz ∈ MJ and G(MJ ) generates MJ ,
there is an indispensable monomial N dividing xz , thus
supp(N ) ⊆ supp(xz ) ⊆ supp(M ) ⊆ E.
Since E ∈ Tmin , we deduce that E = supp(N ), and therefore E = supp(M ).
Remark 2.3. (1) If F is a J-complete polynomial of J, then ∆J (F ) is a simplex.
To see that ∆J (F ) is a simplex suppose that ∆J (F ) 6= ∅ and let T = {E1 , . . . , Ek }
be the set of vertices of ∆J (F ). For every 1 ≤ i ≤ k there exists a monomial Mi ,
1 ≤ i ≤ k, in F such that Ei = supp(Mi ). Since F is J-complete, we have that
Mi − Ml ∈ J for every 1 ≤ i < l ≤ k. Thus ∆J (F ) is a simplex.
(2) If B is a binomial of J, then ∆J (B) is either a vertex, an edge or the empty
set.
Remark 2.4. If the equality rad(J) = rad(F1 , . . . , Fs ) holds for some J-complete
polynomials F1 , . . . , Fs in J, then ∪si=1 ∆J (Fi ) is a spanning subcomplex of ∆J and
each ∆J (Fi ) is a simplex.
For a simplicial complex ∆ we denote by r∆ the smallest number s of simplices Ti
of ∆, such that the subcomplex ∪si=1 Ti is spanning and by b∆ the smallest number
s of simplices Ti of ∆, such that the subcomplex ∪si=1 Ti is spanning and each Ti is
either an edge, a vertex or the empty set.
Theorem 2.5. Let K be any field, then b∆J ≤ bar(J) and r∆J ≤ arac (J).
It turns out that both b∆J and r∆J have a combinatorial interpretation in terms
of matchings in ∆J .
Let ∆ be a simplicial complex on the vertex set Tmin and Q be a subset of
Ω := {0, 1, . . . , dim(∆)}. A set N = {T1, . . . , Ts} of simplices of ∆ is called a Q-matching in ∆ if Tk ∩ Tl = ∅ for every 1 ≤ k < l ≤ s and dim(Tk) ∈ Q for every 1 ≤ k ≤ s; see also Definition 2.1 in [8]. Let supp(N) = $\cup_{i=1}^s T_i$, which is a subset of the vertices Tmin. We denote by card(N) the cardinality s of the set N. A
Q-matching N in ∆ is called a maximal Q-matching if supp(N ) has the maximum
possible cardinality among all Q-matchings. By δ(∆)Q , we denote the minimum of
the set
{card(N) | N is a maximal Q-matching in ∆}.
Theorem 2.6. Let K be any field, then bar(J) ≥ δ(∆J ){0,1} and arac (J) ≥
δ(∆J )Ω .
Proof. By Proposition 3.3 in [8], b∆J = δ(∆J ){0,1} and r∆J = δ(∆J )Ω . Now the
result follows from Theorem 2.5.
Proposition 2.7. Let J be a binomial ideal. Suppose that there exists a minimal
generating set S of J such that every element of S is a difference of two squarefree
monomials. Assume that J is generated by the indispensable binomials, namely
S consists precisely of the indispensable binomials (up to sign). Then bar(J) =
card(S).
Proof. Let card(S) = t. Since S is a generating set of J, we have that bar(J) ≤ t.
It is enough to prove that t ≤ bar(J). Let |Tmin | = g. By [4, Corollary 3.6] it holds
that card(G(MJ)) = 2t, so g = 2t. For every maximal {0, 1}-matching M in ∆J we have that supp(M) = Tmin, so $\delta(\Delta_J)_{\{0,1\}} \ge g/2$ and therefore $\delta(\Delta_J)_{\{0,1\}} \ge t$.
Thus, from Theorem 2.6, bar(J) ≥ t.
Example 2.8. Let J be the binomial ideal generated by f1 = x1 x6 − x2 x5 , f2 =
x2 x7 − x3 x6 , f3 = x1 x8 − x4 x5 , f4 = x3 x8 − x4 x7 and f5 = x1 x7 − x3 x5 . Actually
J is the binomial edge ideal of the graph G with edges {1, 2}, {2, 3}, {1, 4}, {3, 4}
and {1, 3}; see section 3 for the definition of such an ideal. Note that J is A-homogeneous, where A = {a1, . . . , a8} is the set of columns of the matrix
$$D = \begin{pmatrix} 1&0&0&0&1&0&0&0 \\ 0&1&0&0&0&1&0&0 \\ 0&0&1&0&0&0&1&0 \\ 0&0&0&1&0&0&0&1 \end{pmatrix}.$$
By [4, Theorem 3.3] every binomial fi is indispensable of J. Thus
Tmin = {E1 = {1, 6}, E2 = {2, 5}, E3 = {2, 7}, E4 = {3, 6}, E5 = {1, 8},
E6 = {4, 5}, E7 = {3, 8}, E8 = {4, 7}, E9 = {1, 7}, E10 = {3, 5}}.
By Proposition 2.7 the binomial arithmetical rank of J equals 5. The simplicial complex ∆J has 5 connected components, and all of them are 1-simplices, namely ∆1 = {E1, E2}, ∆2 = {E3, E4}, ∆3 = {E5, E6}, ∆4 = {E7, E8} and ∆5 = {E9, E10}. Consequently
$$\delta(\Delta_J)_\Omega = \sum_{i=1}^5 \delta(\Delta_i)_\Omega = 1 + 1 + 1 + 1 + 1 = 5,$$
and therefore 5 ≤ arac (J). Since arac (J) ≤ bar(J), we get that arac (J) = 5. We
will show that araA (J) = 5. Suppose that araA (J) = s < 5 and let F1 , . . . , Fs be
A-homogeneous polynomials in J such that rad(J) = rad(F1 , . . . , Fs ). For every
vertex Ei ∈ Tmin there exists, from Proposition 2.2, a monomial Mi in Fk such that
Ei = supp(Mi ). But s < 5, so there exist Ei ∈ Tmin and Ej ∈ Tmin such that
(1) {Ei , Ej } is not a 1-simplex of ∆J ,
(2) Ei = supp(Mi ), Ej = supp(Mj ) and
(3) Mi and Mj are monomials of some Fk .
Since Fk is A-homogeneous, it holds that degA (Mi ) = degA (Mj ). Considering
all possible combinations of Ei and Ej we finally arrive at a contradiction. Thus
araA (J) = 5. Note that J is B-homogeneous where B is the set of columns of the
matrix
$$N = \begin{pmatrix} 1&1&1&1&0&0&0&0 \\ 1&0&0&0&1&0&0&0 \\ 0&1&0&0&0&1&0&0 \\ 0&0&1&0&0&0&1&0 \\ 0&0&0&1&0&0&0&1 \end{pmatrix}.$$
Since every row of D is a row of N, we deduce that every B-homogeneous polynomial in J is also A-homogeneous. So araB(J) is an upper bound for araA(J); therefore araB(J) = 5. We have that rad(J) = rad(f1, f2 + f3, f4, f5), since the second power of both binomials f2 and f3 belongs to the ideal generated by the polynomials f1, f2 + f3, f4, f5. Remark that the polynomials f1, f2 + f3, f4 and f5 are C-homogeneous, where C is the set of columns of the matrix
$$\begin{pmatrix} 1&2&3&4&1&2&3&4 \\ 5&6&7&8&5&6&7&8 \end{pmatrix}.$$
Thus araC (J) ≤ 4, so ara(J) ≤ 4. A primary decomposition of J is
J = (f1 , f2 , f3 , f4 , f5 , x2 x8 − x4 x6 ) ∩ (x1 , x3 , x5 , x7 ).
Hence, by [2, Proposition 19.2.7], it follows that ara(J) ≥ 4. Thus
ara(J) = araC (J) = 4 < 5 = araA (J) = araB (J) = arac (J) = bar(J).
3. Binomial edge ideals of graphs
In this section we consider a special class of binomial ideals, namely binomial
edge ideals of graphs. This ideal was introduced in [6] and independently at the
same time in [10].
Let G be an undirected connected simple graph on the vertex set [n] := {1, . . . , n}
and with edge set E(G). Consider the polynomial ring
R := K[x1 , . . . , xn , xn+1 , . . . , x2n ]
in 2n variables, x1 , . . . , xn , xn+1 , . . . , x2n , over K.
Definition 3.1. The binomial edge ideal JG ⊂ R associated to the graph G is the
ideal generated by the binomials fij = xi xn+j − xj xn+i , with i < j, such that {i, j}
is an edge of G.
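For instance (an example we add for concreteness), if G is the path on [3] with edges {1, 2} and {2, 3}, then n = 3 and
$$J_G = (f_{12}, f_{23}) = (x_1x_5 - x_2x_4,\ x_2x_6 - x_3x_5) \subset K[x_1, \ldots, x_6],$$
the ideal of adjacent 2-minors of the generic 2 × 3 matrix with rows $(x_1, x_2, x_3)$ and $(x_4, x_5, x_6)$.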
Remark 3.2. From [7, Corollary 1.13] every binomial fij, where {i, j} is an edge of G, is indispensable of JG. Thus
$$T_{\min} = \left\{E_{ij}^1 = \{i, n+j\},\ E_{ij}^2 = \{j, n+i\} \mid \{i, j\} \in E(G)\right\}.$$
We recall some fundamental material from [6]. Let G be a connected graph on
[n] and let S ⊂ [n]. By G \ S, we denote the graph that results from deleting all
vertices in S and their incident edges from G. Let c(S) be the number of connected components of G \ S, and let $G_1, \ldots, G_{c(S)}$ denote the connected components of G \ S. Also let $\widetilde{G}_i$ denote the complete graph on the vertices of $G_i$. We set
$$P_S(G) = \left(\cup_{i\in S}\{x_i, x_{n+i}\},\ J_{\widetilde{G}_1}, \ldots, J_{\widetilde{G}_{c(S)}}\right)R.$$
Then PS(G) is a prime ideal for every S ⊂ [n]. The ring R/P∅(G) has Krull dimension n + 1. For S ≠ ∅ the ring R/PS(G) has Krull dimension n − card(S) + c(S). The ideal PS(G) is a minimal prime of JG if and only if S = ∅, or S ≠ ∅ and for each i ∈ S one has c(S \ {i}) < c(S). Moreover, JG is a radical ideal and it admits the minimal primary decomposition JG = ∩S∈M(G) PS(G), where M(G) = {S ⊂ [n] : PS(G) is a minimal prime of JG}.
Theorem 3.3. Let G be a connected graph on the vertex set [n] with m edges.
Then bar(JG ) = arac (JG ) = m.
Proof. Every binomial fij, where {i, j} is an edge of G, is indispensable of JG; thus, from Proposition 2.7, bar(JG) = m. Note that, for every edge {i, j} of G, $\{E_{ij}^1, E_{ij}^2\}$ is a 1-simplex of ∆JG. Furthermore, ∆JG has exactly m connected components, and all of them are 1-simplices. Thus δ(∆JG)Ω = m and therefore, from Theorem 2.6, arac(JG) ≥ m. Consequently arac(JG) = m.
Theorem 3.4. Let G be a connected graph on the vertex set [n] with m edges. Consider the canonical basis {e1 , . . . , en } of Zn and the canonical basis {w1 , . . . , wn+1 }
of Zn+1 . Let A = {a1 , . . . , a2n } ⊂ Nn be the set of vectors where ai = ei , 1 ≤ i ≤ n,
and an+i = ei for 1 ≤ i ≤ n. Let B = {b1 , . . . , b2n } ⊂ Nn+1 be the set of vectors where bi = w1 + wi+1 , 1 ≤ i ≤ n, and bn+i = wi+1 for 1 ≤ i ≤ n. Then
araA (JG ) = araB (JG ) = m.
Proof. Suppose that araA(JG) = t < m and let F1, . . . , Ft be A-homogeneous polynomials in JG such that JG = rad(F1, . . . , Ft). For every edge {i, j} of G with i < j there exist, from Proposition 2.2, monomials $M_{ij}^k$ and $N_{ij}^l$ in Fk and Fl, respectively, such that $E_{ij}^1 = \mathrm{supp}(M_{ij}^k)$ and $E_{ij}^2 = \mathrm{supp}(N_{ij}^l)$. But t < m, so there exists $E_{rs}^1 \in T_{\min}$, where {r, s} is an edge of G with r < s, such that
(1) $\{E_{ij}^1, E_{rs}^1\}$ is not a 1-simplex of ∆JG,
(2) $E_{ij}^1 = \mathrm{supp}(M_{ij}^k)$, $E_{rs}^1 = \mathrm{supp}(M_{rs}^k)$, and
(3) $M_{ij}^k$ and $M_{rs}^k$ are monomials of some Fk.
Let $M_{ij}^k = x_i^{g_i}x_{n+j}^{g_j}$ and $M_{rs}^k = x_r^{g_r}x_{n+s}^{g_s}$. Since Fk is A-homogeneous, we deduce that $\deg_A(M_{ij}^k) = \deg_A(M_{rs}^k)$, and therefore $g_ie_i + g_je_j = g_re_r + g_se_s$. Consequently i = r, j = s and also $M_{ij}^k = M_{rs}^k$, a contradiction. Let D and Q be the matrices with columns A and B, respectively. Since every row of D is a row of Q, we deduce that every B-homogeneous polynomial in JG is also A-homogeneous. Thus araB(JG) is an upper bound for araA(JG), so m ≤ araB(JG) and therefore araB(JG) = m.
The graph G is called l-vertex-connected if l < n and G \ S is connected for every
subset S of [n] with card(S) < l. The vertex connectivity of G is defined as the
maximum integer l such that G is l-vertex-connected.
In [1] the authors study the relationship between algebraic properties of a binomial edge ideal JG , such as the dimension and the depth of R/JG , and the vertex
connectivity of the graph. It turns out that this notion is also useful for the computation of the arithmetical rank of a binomial edge ideal.
Theorem 3.5. Let K be a field of any characteristic and G be a connected graph
on the vertex set [n]. Suppose that the vertex connectivity of G is l. Then ara(JG ) ≥
n + l − 2.
Proof. If G is the complete graph on the vertex set [n], then its vertex connectivity is n − 1 and ara(JG) = 2n − 3 = n + l − 2 by [3, Theorem 2]. Assume now that
G is not the complete graph. Let P∅ (G), W1 , . . . , Wt be the minimal primes of
JG. It holds that JG = P∅(G) ∩ L where $L = \cap_{i=1}^t W_i$. First we will prove that
dim (R/(P∅ (G) + L)) ≤ n−l +1. For every prime ideal Q such that P∅ (G)+L ⊆ Q,
we have that L ⊆ Q, so there is 1 ≤ i ≤ t such that Wi ⊆ Q. Thus P∅ (G) + Wi ⊆ Q
and therefore dim (R/(P∅ (G) + L)) ≤ dim (R/(P∅ (G) + Wi )). It is enough to show
that dim(R/(P∅(G) + Wi)) ≤ n − l + 1. Let Wi = PS(G) for ∅ ≠ S ⊂ [n]. We have
that P∅ (G) + PS (G) is generated by
{xi xn+j − xj xn+i : i, j ∈ [n] \ S} ∪ {xi , xn+i : i ∈ S}.
Then dim (R/(P∅ (G) + PS (G))) = n − card(S) + 1. If l = 1, then card(S) ≥ 1
since S ≠ ∅, and therefore dim(R/(P∅(G) + Wi)) ≤ n. Suppose that l ≥ 2 and
also that card(S) < l. Since PS (G) is a minimal prime, for every i ∈ S we have
that c(S \ {i}) < c(S). But G is l-vertex-connected, namely G \ S is connected, so
P∅(G) ⊂ PS(G), a contradiction to the fact that PS(G) is a minimal prime. Thus
dim (R/(P∅ (G) + Wi )) ≤ n − l + 1 and therefore dim (R/(P∅ (G) + L)) ≤ n − l + 1.
Next we will show that min{dim (R/P∅ (G)) , dim (R/L)} > dim (R/(P∅ (G) + L)).
Recall that dim (R/P∅ (G)) = n + 1, so dim (R/(P∅ (G) + L)) < dim (R/P∅ (G)).
Since L ⊂ P∅ (G) + L, we deduce that dim (R/(P∅ (G) + L)) ≤ dim (R/L). Suppose
that dim(R/(P∅(G) + L)) = dim(R/L), say equal to s, and let $Q_1 \subsetneq Q_2 \subsetneq \cdots \subsetneq Q_s$
be a chain of prime ideals containing P∅ (G) + L. Then there is 1 ≤ j ≤ t such that
Q1 = Wj . So P∅ (G) ⊂ Wj , a contradiction. By [2, Proposition 19.2.7] it holds that
cd(JG ) ≥ dim(R) − dim (R/(P∅ (G) + L)) − 1 = 2n − dim (R/(P∅ (G) + L)) − 1 ≥
2n − (n − l + 1) − 1 = n + l − 2.
Consequently ara(JG ) ≥ n + l − 2.
Example 3.6. Let G be the graph on the vertex set [5] with edges {1, 2}, {2, 3},
{1, 3}, {2, 4}, {4, 5} and {3, 5}. Here the vertex connectivity is l = 2. By Theorem
3.5, ara(JG ) ≥ 5. The ideal JG is generated up to radical by the polynomials
f12, f23, f13 + f24, f35 and f45, since both $f_{13}^2$ and $f_{24}^2$ belong to the ideal generated by f12, f23, f13 + f24, f35 and f45. Thus ara(JG) = 5 < 6 = bar(JG).
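The lower bound of Theorem 3.5 is straightforward to evaluate mechanically; a small sketch using the networkx Python library on the graph of Example 3.6 (the library call is standard, and the script is only illustrative):

```python
import networkx as nx

# the graph of Example 3.6
G = nx.Graph([(1, 2), (2, 3), (1, 3), (2, 4), (4, 5), (3, 5)])
n = G.number_of_nodes()
l = nx.node_connectivity(G)          # vertex connectivity: here l = 2
print(n + l - 2)                     # lower bound for ara(J_G): prints 5
```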
Theorem 3.7. If G is a cycle of length n ≥ 3, then ara(JG ) = bar(JG ) = n.
Proof. The vertex connectivity of G is 2, so, from Theorem 3.5, the inequality
n ≤ ara(JG ) holds. Since G has n edges, we have that ara(JG ) ≤ bar(JG ) = n and
therefore ara(JG ) = n.
Proposition 3.8. Let G be a connected graph on [n], with m edges and n ≥ 4. If
G contains an odd cycle of length 3, then ara(JG ) ≤ m − 1.
Proof. Let C be an odd cycle of G of length 3, with edge set {{1, 2}, {2, 3}, {1, 3}}.
Since G is connected, without loss of generality there is a vertex 4 ≤ i ≤ n such
that {1, i} is an edge of G. We will show that (x1 xn+i − xi xn+1 )2 belongs to the
ideal L generated by the polynomials f12 , f13 , f1i + f23 . We have that
$$x_1^2x_{n+i}^2 \equiv x_1x_{n+i}x_ix_{n+1} - x_1x_2x_{n+i}x_{n+3} + x_1x_3x_{n+i}x_{n+2} \equiv x_1x_ix_{n+i}x_{n+1} - x_2x_{n+i}x_3x_{n+1} + x_2x_3x_{n+1}x_{n+i} \equiv x_1x_ix_{n+i}x_{n+1} \bmod L.$$
Similarly we have that $x_i^2x_{n+1}^2 \equiv x_1x_ix_{n+i}x_{n+1} \bmod L$. Thus $x_1^2x_{n+i}^2 + x_i^2x_{n+1}^2 \equiv 2x_1x_ix_{n+i}x_{n+1} \bmod L$, so $(x_1x_{n+i} - x_ix_{n+1})^2$ belongs to L. Next we prove that $(x_2x_{n+3} - x_3x_{n+2})^2$ belongs to L. We have that
$$x_2^2x_{n+3}^2 \equiv x_2x_{n+3}x_3x_{n+2} - x_2x_{n+3}x_1x_{n+i} + x_2x_{n+3}x_ix_{n+1} \equiv x_2x_{n+3}x_3x_{n+2} - x_2x_{n+i}x_3x_{n+1} + x_{n+3}x_ix_1x_{n+2} \equiv x_2x_{n+3}x_3x_{n+2} - x_1x_{n+2}x_{n+i}x_3 + x_ix_{n+2}x_3x_{n+1} \bmod L.$$
Furthermore
$$x_3^2x_{n+2}^2 \equiv x_2x_{n+3}x_3x_{n+2} - x_3x_{n+2}x_ix_{n+1} + x_3x_{n+2}x_1x_{n+i} \bmod L.$$
Thus $x_2^2x_{n+3}^2 + x_3^2x_{n+2}^2 \equiv 2x_2x_{n+3}x_3x_{n+2} \bmod L$, so $(x_2x_{n+3} - x_3x_{n+2})^2 \in L$. Let
H be the subgraph of G consisting of the cycle C and the edge {1, i}. Then JG is
generated up to radical by the following set of m − 1 binomials:
{fkl |{k, l} ∈ E(G) \ E(H)} ∪ {f12 , f13 , f1i + f23 }.
Therefore ara(JG ) ≤ m − 1.
Let G1 = (V (G1 ), E(G1 )), G2 = (V (G2 ), E(G2 )) be graphs such that G1 ∩ G2
is a complete graph. The new graph G = G1 ⊕ G2 with the vertex set V (G) =
V (G1 ) ∪ V (G2 ) and edge set E(G) = E(G1 ) ∪ E(G2 ) is called the clique sum of G1
and G2 in G1 ∩ G2 . If the cardinality of V (G1 ) ∩ V (G2 ) is k + 1, then this operation
is called a k-clique sum of the graphs G1 and G2 . We write G = G1 ⊕W G2 to
indicate that G is the clique sum of G1 and G2 and that V (G1 ) ∩ V (G2 ) = W .
Theorem 3.9. Let G be a connected graph on the vertex set [n]. Suppose that G
has exactly one cycle C. If n ≥ 4 and C is odd of length 3, then ara(JG ) = n − 1.
Proof. The graph G can be written as the 0-clique sum of the cycle C and some
trees. More precisely,
G = C ⊕v1 T1 ⊕v2 · · · ⊕vs Ts
for some vertices v1 , . . . , vs of C. The vertex connectivity of G is 1. By Theorem 3.5, the inequality n − 1 ≤ ara(JG ) holds. Since G has exactly one cycle, we
have that card(E(G)) = n. From Proposition 3.8, ara(JG ) ≤ n − 1, and therefore
ara(JG ) = n − 1.
Let ht(JG ) denote the height of JG ; by the generalized Krull principal ideal
theorem, ht(JG ) ≤ ara(JG ). We say that JG is a set-theoretic
complete intersection if ara(JG ) = ht(JG ).
Corollary 3.10. Let G be a connected graph on the vertex set [n] with n ≥ 4.
Suppose that G has exactly one cycle C and its length is 3. Then the following
properties are equivalent:
(a) JG is unmixed,
(b) JG is Cohen-Macaulay,
(c) JG is a set-theoretic complete intersection,
(d) G = C ⊕v1 T1 ⊕v2 · · · ⊕vs Ts , where {v1 , . . . , vs } ⊂ V (C), s ≥ 1, vh are
pairwise distinct and Th are paths.
In particular, if one of the above conditions is true, then ara(JG ) = ht(JG ) = n − 1.
Proof. The implication (b)⇒(a) is well known. If JG is a set-theoretic complete
intersection, then, from Theorem 3.9, ht(JG ) = n − 1 and dim(R/JG ) = n + 1.
Also depth(R/JG ) = n + 1 by [5, Theorem 1.1], so JG is Cohen-Macaulay, whence
(c)⇒(b). Recall that M(G) = {S ⊂ [n] : PS (G) is a minimal prime of JG }. If
JG is unmixed, then every vertex v of Th , v ≠ vh , has degree at most 2. In fact,
{v} ∈ M(G) and, if degG (v) ≥ 3, then by [6, Lemma 3.1], one has ht(P{v} (G)) =
n+card({v})−c({v}) = n+1−degG (v) ≤ n−2 < n−1 = ht(P∅ (G)), a contradiction.
Moreover, vh has degree at most 3 for every h. In fact, {vh } ∈ M(G) and, if
degG (vh ) ≥ 4, then by [6, Lemma 3.1], one has ht(P{vh } (G)) = n + card({vh }) −
c({vh }) = n + 1 − (degG (vh ) − 1) ≤ n − 2 < n − 1 = ht(P∅ (G)), a contradiction.
Thus, (d) follows. Finally, assuming (d), JG is unmixed by [5, Theorem 1.1] and
ht(JG ) = n − 1. By Theorem 3.9, it follows that
ara(JG ) = n − 1 = ht(JG ).
If C1 and C2 are cycles of G having no common vertex, then a bridge between
C1 and C2 is an edge {i, j} of G with i ∈ V (C1 ) and j ∈ V (C2 ).
Proposition 3.11. Let G be a connected graph on the vertex set [n] with m edges.
Suppose that G contains a subgraph H consisting of two vertex disjoint odd cycles
of length 3, namely C1 and C2 , and also two bridges between the cycles C1 and C2 .
Then ara(JG ) ≤ m − 2.
Proof. Let E(C1 ) = {{1, 2}, {2, 3}, {3, 1}} and E(C2 ) = {{4, 5}, {5, 6}, {4, 6}}.
Suppose first that the bridges have no common vertex. Let e1 = {1, 4} and e2 =
{3, 6} be the bridges of the two cycles. Then f14² belongs to the ideal generated by
the polynomials f12 , f13 , f14 + f23 . Furthermore f36² belongs to the ideal generated by
the polynomials f46 , f56 , f36 + f45 . Thus JG is generated up to radical by the union
of {f12 , f13 , f14 + f23 , f46 , f56 , f36 + f45 } and {fij | {i, j} ∈ E(G) and {i, j} ∉ E(H)}.
If the bridges have a common vertex, then without loss of generality we can assume
that e1 = {1, 4} and e2 = {3, 4} are the bridges of the two cycles. Applying similar
arguments as before, we deduce that ara(JG ) ≤ m − 2.
Example 3.12. Suppose that G is a graph with 6 vertices and 8 edges consisting
of two vertex disjoint odd cycles of length 3, namely C1 and C2 , and also two vertex
disjoint bridges between the cycles C1 and C2 . Here the vertex connectivity is l = 2.
Thus ara(JG ) ≥ 6. By Proposition 3.11, ara(JG ) ≤ 6 and therefore ara(JG ) = 6.
Theorem 3.13. Let Gk be a graph containing k odd cycles C1 , . . . , Ck of length 3
such that the cycles Ci and Cj have disjoint vertex sets, for every 1 ≤ i < j ≤ k.
Suppose that there exists exactly one path Pi,i+1 of length ri ≥ 2 connecting a vertex
of Ci with a vertex of Ci+1 , 1 ≤ i ≤ k − 1. If Gk has no more vertices or edges, then
ara(JGk ) = ht(JGk ) = 2k + ∑_{i=1}^{k−1} ri . In particular, JGk is a set-theoretic complete
intersection.
Proof. The graph Gk has 3k + ∑_{i=1}^{k−1} (ri − 1) vertices. Here the vertex connectivity
is l = 1, so
2k + ∑_{i=1}^{k−1} ri = 3k + ∑_{i=1}^{k−1} (ri − 1) + 1 − 2 ≤ ara(JGk ).
We will prove that ara(JGk ) ≤ 2k + ∑_{i=1}^{k−1} ri by induction on k ≥ 2. Suppose that
k = 2 and let E(C1 ) = {{1, 2}, {2, 3}, {1, 3}}, P1,2 = {{3, 4}, {4, 5}, . . . , {r + 2, r +
3}} and C2 = {{r + 3, r + 4}, {r + 4, r + 5}, {r + 3, r + 5}}. Then JG2 is generated
up to radical by the union of
{f12 + f34 , xr+2 xn+r+3 − xr+3 xn+r+2 + xr+4 xn+r+5 − xr+5 xn+r+4 }
and
{fij |{i, j} ∈ E(G2 ) \ {{1, 2}, {3, 4}, {r + 2, r + 3}, {r + 4, r + 5}}}.
Thus ara(JG2 ) ≤ 4 + r. Assume that the inequality ara(JGk ) ≤ 2k + ∑_{i=1}^{k−1} ri
holds for k and we will prove that ara(JGk+1 ) ≤ 2(k + 1) + ∑_{i=1}^{k} ri . We have that
JGk+1 = JGk + JH where H is the graph consisting of the path Pk,k+1 and the cycle
Ck+1 . By Theorem 3.9, ara(JH ) = rk + 2. Then, from the induction hypothesis,
ara(JGk+1 ) ≤ ara(JGk ) + ara(JH ) ≤ 2k + ∑_{i=1}^{k−1} ri + rk + 2 = 2(k + 1) + ∑_{i=1}^{k} ri .
Since JGk is unmixed by [5, Theorem 1.1], we have that
ht(JGk ) = card(V (Gk )) − 1 = 2k + ∑_{i=1}^{k−1} ri .
Remark 3.14. All the results presented are independent of the field K.
Acknowledgments
The author is grateful to an anonymous referee for useful suggestions and comments that helped improve an earlier version of the manuscript. This work was supported by the Scientific and Technological Research Council of Turkey (TÜBITAK)
through BIDEB 2221 grant.
References
[1] A. Banerjee and L. Núñez-Betancourt, Graph Connectivity and Binomial Edge Ideals, Proc.
Amer. Math. Soc. 145 (2017), 487-499.
[2] M. P. Brodmann and R. Y. Sharp, Local cohomology: an algebraic introduction with geometric applications, volume 60 of Cambridge Studies in Advanced Mathematics. Cambridge
University Press, Cambridge, 1998.
[3] W. Bruns and R. Schwänzl, The number of equations defining a determinantal variety, Bull.
London Math. Soc. 22 (1990), 439-445.
[4] H. Charalambous, A. Thoma and M. Vladoiu, Binomial fibers and indispensable binomials,
J. Symbolic Comput. 74 (2016), 578-591.
[5] V. Ene, J. Herzog and T. Hibi, Cohen-Macaulay binomial edge ideals, Nagoya Math. J. 204
(2011), 57-68.
[6] J. Herzog, T. Hibi, F. Hreinsdóttir, T. Kahle and J. Rauh, Binomial edge ideals and conditional independence statements, Adv. in Appl. Math. 45 (2010), no. 3, 317-333.
[7] A. Katsabekis and I. Ojeda, An indispensable classification of monomial curves in A4 (K),
Pac. J. Math. 268 (2014), 95-116.
[8] A. Katsabekis and A. Thoma, Matchings in simplicial complexes, circuits and toric varieties,
J. Comb. Theory Ser. A 114 (2007), 300-310.
[9] L. Kronecker, Grundzüge einer arithmetischen Theorie der algebraischen Grössen, Jour. für
die Reine und Angew. Math. 92 (1882), 1-122.
[10] M. Ohtani, Graphs and ideals generated by some 2-minors, Comm. Algebra 39 (2011), 905-917.
[11] B. Sturmfels, Gröbner Bases and Convex Polytopes, University Lecture Series, No. 8, American Mathematical Society, Providence, R.I., 1995.
[12] R. Villarreal, Monomial Algebras, Second Edition, Monographs and Research Notes in Mathematics, Chapman and Hall/CRC, 2015.
Department of Mathematics, Bilkent University, 06800 Ankara, Turkey
E-mail address: katsampekis@bilkent.edu.tr
| 0 |
Adapting Graph Application Performance Via
Alternate Data Structure Representations
Amlan Kusum
Iulian Neamtiu
Rajiv Gupta
University Of California,
Riverside
University Of California,
Riverside
University Of California,
Riverside
arXiv:1412.8120v1 [] 28 Dec 2014
akusu001@cs.ucr.edu
neamtiu@cs.ucr.edu
gupta@cs.ucr.edu
ABSTRACT
Graph processing is used extensively in areas from social
networking mining to web indexing. We demonstrate that
the performance and dependability of such applications critically hinges on the graph data structure used, because a
fixed, compile-time choice of data structure can lead to poor
performance or applications unable to complete. To address this problem, we introduce an approach that helps
programmers transform regular, off-the-shelf graph applications into adaptive, more dependable applications where
adaptations are performed via runtime selection from alternate data structure representations. Using our approach,
applications dynamically adapt to the input graph’s characteristics and changes in available memory so they continue to run when faced with adverse conditions such as low
memory. Experiments with graph algorithms on real-world
(e.g., Wikipedia metadata, Gnutella topology) and synthetic
graph datasets show that our adaptive applications run to
completion with lower execution time and/or memory utilization in comparison to their non-adaptive versions.
Keywords
runtime data structure selection, space-time trade-off
1.
INTRODUCTION
Graph processing continues to increase in popularity with
the emergence of applications such as social network mining, real-time network traffic monitoring, etc. Due to their
data-intensive nature, the performance and dependability of
such applications depends upon how well the choice of runtime data structure matches the input data characteristics
and availability of memory (low memory can prevent the
applications from completing).
Input Data Characteristics. Programmers often choose specific, fixed data structures when developing graph applications. The memory used by the data structure can be greatly
influenced by the input data characteristics. Thus, it is possible that the characteristics of data may not match the
choice of the data structure. This is particularly problematic
when the application is expected to encounter a wide range
of input data characteristics, and these characteristics may
change during the course of execution. For example, matrices can be represented in the Compressed Column Storage
(CCS) format, appropriate for sparse matrices, or the array
representation, appropriate for dense matrices. An application, e.g., matrix multiplication, programmed to use the
sparse CCS format, could take longer to complete when presented with a dense input. Similarly, evolving graphs [33],
where nodes or edges are added during execution, are another example of changes in input data characteristics. The
data structure selection based on input pre-analysis will fail
under such a scenario. Therefore, in our approach, adaptive
applications tailor the choice of data structure to match input data characteristics at runtime.
Availability of Memory. Since real-world applications often do not run in isolation, they share the available memory resources with other applications. There could be times
where the application experiences a resource crunch, caused
by other running programs. In this scenario the performance of the application may be degraded, or the application may even be prematurely terminated. Therefore, in
our approach, adaptive applications tailor the choice of data
structure to match availability of memory at runtime.
It is well known that for data-intensive applications, the
choice of data structure is critical to memory usage and execution time. There has been previous work on data structure
identification [11], as well as data structure prediction and
selection [5, 6, 17, 22]. While these prior approaches help
in data structure selection, none of them support switching from one data structure to another as the application
executes. There has also been work on dynamically adapting the representation of individual data items for impacting
memory usage and performance—employing data compression [29] or replacing float data with int data [20]. These
techniques are orthogonal to our work that switches between
alternate high level data structures. Other approaches dynamically switch between implementations. Elastin [20] allows a program to switch between versions using dynamic
software update techniques [21, 9]; however, it does not consider switching between alternate high level data structures.
IBM’s K42 Operating System [1, 3] supports hot-swapping
classes as a mechanism for performing dynamic updates.
Scenario Based Optimization [19], a binary level online optimization technique dynamically changes the course of execution through a route meant for a particular runtime scenario as predefined by developer. Wang et al. [32] proposed
dynamic resource management techniques based on userspecific, application-specific and hardware-specific management policies. In contrast, our objective is to simultaneously
support alternate data structures and switch between them.
In this paper we consider several widely-used graph applications and study how data structure representations impact
execution time and memory consumption on a range of input
graphs (Section 2). The input graphs consist of both realworld graphs such as Wikipedia metadata, Gnutella network topology (from the SNAP library [31]), and synthetic
graphs. Based upon the observations from our study, we
design a concrete adaptation system that supports switch-
ing between alternate representations of the data in memory (Section 3). We demonstrate that the cost of performing the runtime adaptations is quite small in comparison
to the benefits of adaptation (Section 4). Moreover, the
lightweight monitoring we employ to detect adaptation opportunities imposes acceptable overhead even when no adaptations are triggered at runtime. Thus, our adaptive versions
have nearly the same performance as the most appropriate
non-adaptive versions for various input characteristics. We
compare our approach with related work in Section 5, and
in Section 6 we conclude.
2. A STUDY OF GRAPH APPLICATIONS

Table 1: Relative performance ranges (ratios ADJLIST / ADJMAT).
Application | Memory Usage | Execution Time
MSSP | 0.68 - 8.02 | 0.54 - 5.47
BC | 0.40 - 4.00 | 0.72 - 3.14
MST-K | 0.40 - 4.00 | 0.71 - 1.67
BFS | 0.59 - 2.88 | 0.16 - 7.21
MST-B | 0.72 - 2.44 | 0.60 - 5.40
PP | 0.35 - 3.83 | 0.50 - 3.53

Table 2: Density ranges where each data structure prevails.
Application | ADJLIST best space & time | ADJLIST best space, ADJMAT best time | ADJMAT best space & time
MSSP | < 9% | 9% - 25% | > 25%
BC | < 10% | 10% - 25% | > 25%
MST-K | < 25% | 25% - 37% | > 37%
BFS | < 8% | 8% - 25% | > 25%
MST-B | < 10% | 10% - 40% | > 40%
PP | < 2% | 2% - 34% | > 34%
In this section we study the execution time and memory
usage behavior of a set of graph applications. The goal of
this study is two fold. First, we want to quantify how input
data characteristics and the choice of data structures used
to represent the graphs impact memory usage and execution
time. Second, we would like to develop a simple characterization of program behavior that can be used to guide data
structure selection at runtime.
We considered six graph algorithms: Multiple Source Shortest Path (MSSP) finds the shortest path from all the nodes
to every other node; Betweenness Centrality (BC) computes
the importance of a node in a network; Breadth First Search
(BFS) traverses the graph with each node as root per iteration; Boruvka’s Algorithm (MST-B) and Kruskal’s Algorithm (MST-K) find the minimum spanning tree; Preflow
Push (PP) finds the maximum flow in a network starting with each individual node as source. The core data
structure used in these applications is a graph. We consider two different representations of graphs: Adjacency List
(ADJLIST); and Adjacency Matrix (ADJMAT). When the graph
is sparse, it is expected that ADJLIST will use less memory
than ADJMAT. On the other hand, for highly dense graphs
ADJMAT may use less memory than ADJLIST. Determining
whether a pair of nodes is connected by an edge can be
done in constant time using ADJMAT while it may require
searching through a list with ADJLIST. Thus, the runtime
memory usage and execution time depend upon the sparsity, or conversely the density, of the input graph. The input
graphs with relevant properties and densities were generated
to study program behavior.
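As a rough illustration of the space trade-off (the byte counts are illustrative assumptions, not measurements from our system):

def adjmat_bytes(v):
    return 4 * v * v              # dense |V| x |V| matrix of 4-byte cells

def adjlist_bytes(v, e):
    return 4 * v + 8 * e          # list heads plus one 8-byte entry per edge

# For a sparse graph ADJLIST is far smaller; the ranking flips as density grows.
print(adjmat_bytes(8114), adjlist_bytes(8114, 26013))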
To observe the trade-offs of using the alternative representations of graphs, we executed each of the programs using
the two representations. The programs were run on inputs
consisting of randomly-generated graphs with varying density, computed as |E| / (|V |(|V | − 1)), where |V | and |E| are the
number of nodes and edges in the graph. The inputs were selected such that the trade-offs could be exposed easily. The
results of these executions are summarized as follows:
Impact of data structure selection on memory usage and execution time. We present the relative memory usage and execution time of program versions in Table 1. In particular, we
computed the ratios of memory usages and execution times
for ADJLIST and ADJMAT versions across all graph densities
considered. The minimum and maximum values of observed
ratios is given in Table 1. As we can see, in terms of both
memory usage and execution time, the relative performances
vary a great deal. Moreover, neither representation gives the
best memory usage or execution time performance across all
graph densities. Hence, it is crucial to select the data structure at runtime, based upon the input data characteristics.
Characterization of application behavior. For the purpose of
runtime data structure selection, we characterize the behavior of each application as shown in Table 2. Note that graph
densities are divided into three subranges. In the first range
(e.g., < 9% for MSSP) the ADJLIST is both more memoryand time-efficient than ADJMAT. In the second range (e.g.,
9% − 25%) ADJLIST is more memory-efficient while ADJMAT
is more time-efficient. Thus, the selection can be made at
runtime based upon memory availability. Finally, in the
third range (e.g., > 25% for MSSP) ADJMAT is both more
memory and time efficient than ADJLIST.
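This characterization can be read as a simple per-application lookup. A minimal Python sketch (thresholds are the Table 2 densities, in percent; the names are illustrative):

RANGES = {'MSSP': (9, 25), 'BC': (10, 25), 'MST-K': (25, 37),
          'BFS': (8, 25), 'MST-B': (10, 40), 'PP': (2, 34)}

def pick(app, density, memory_tight):
    lo, hi = RANGES[app]
    if density < lo:
        return 'ADJLIST'    # best space and time
    if density > hi:
        return 'ADJMAT'     # best space and time
    # middle range: ADJLIST saves memory, ADJMAT saves time
    return 'ADJLIST' if memory_tight else 'ADJMAT'

print(pick('MSSP', 15, memory_tight=False))   # prints ADJMAT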
3. ADAPTIVE APPLICATIONS
Figure 1: High level overview of our approach.
We now present our approach for building adaptive applications; an overview is shown in Figure 1. The starting
point is the annotated source code: in the source code, programmers add annotations to identify the alternative data
structures, e.g., DS1 and DS2 , and functions operating on
them. The compiler takes heed of these annotations and
generates the source code with transition logic, that is capable of dynamically switching among alternative data structure representations. The transitions are allowed at selected
program points where the processing of an input item has
just completed and that of another item is about to begin. Lastly, the adaptation module consists of the runtime
monitors for tracking input data characteristics and memory usage as well as the code that implements the transition
policy that triggers the switch from one data structure representation to another. The adaptation can be triggered by
a mismatch between the input data characteristics and the
data structure currently in use. To discover this mismatch
the characterization of application behavior as performed in
the previous section is used. The adaptation can also be
triggered by the system during high memory usage.
Programming for adaptation. To enable adaptation, the
programmer implements the alternate data structures. In
addition, a compute-intensive function during whose execution adaptation may be performed, must be coded as follows.
First, it should contain a variable that tracks the progress
in terms of processing steps defined as either the amount of
input processed or results produced. Second, it should be
written so that it can commence execution from any point
between two processing steps. The latter is needed because
we allow execution to switch from one data representation to
another at these points. We used a set of pragmas in our approach to identify alternate data structure representations,
enable generation of code that transfers code from one representation to another, and identify program points where
transitions may be performed. First, the programmer identifies the data structure to the compiler. The programmer
annotates the alternate representation of data structures
in multiple files with #pragma ADP(<SRC_FILENAME>, "data1_def").
<SRC_FILENAME>’s presence clearly differentiates the alternate
representation of the data structure in multiple files. If there
are multiple data structures with alternate representations
in different files, then they could be annotated with a different index, e.g., #pragma ADP(<SRC_FILENAME>, "data2_def"). Second, the programmer uses several pragmas to identify the
key methods (insert, delete, traverse, and fetch) that manage data stored in the data structure. Another pragma allows access to the initialization parameters which must be
migrated from one data structure to another. All of this information is used to generate the code for data and function
migration when we switch between data structures.
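The required shape, a progress counter plus the ability to resume between two processing steps, can be sketched as follows (illustrative Python, not our actual implementation):

def compute(items, result, progress, change_requested):
    for step in range(progress[0], len(items)):
        if change_requested():       # safe transition point
            progress[0] = step       # remember where to resume
            return False             # paused so the representation can switch
        result.append(items[step] * 2)   # stand-in for the real per-step work
        progress[0] = step + 1
    return True                      # ran to completion

compute(list(range(10)), [], [0], lambda: False)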
Triggering adaptations. The adaptation module decides
whether or not to switch between data structures based
upon the input from runtime monitors and the transition
policy. Since the adaptation could be program-triggered or
system-triggered, there are two kinds of monitors which are
required by the adaptation module. The input data monitor
captures input data characteristics and the memory monitor reports the available system memory. The transition
policy defines which data structure representation is better for what range of input data characteristics in terms of
execution time and memory consumption. Its specification
consist of three parts, as illustrated below:
/* EXECUTION TIME */
DS1 [0,9)
DS2 [9,100]
/*MEMORY*/
DS1 [0,25)
DS2 [25,100]
/*THRESHOLD*/
MEMORY 100
The first part indicates the ranges for which a particular
data structure representation is best in terms of execution
time: under EXECUTION TIME in the figure, the input data property for which ADJLIST (DS1) is better is denoted by directives DS1, which means that ADJLIST is favorable in terms
of execution time if the input data property or density of
the graph (in case of MSSP) is in between 0% and 9%. The
second part consists of the ranges of the input data property for which a particular data structure representation is
better in terms of memory. According to the figure, under
MEMORY, we see that ADJLIST (DS1) is better when the density
of the input graph is between 0% and 25% while ADJMATRIX
(DS2) is better when the density of the graph is between
26% and 100%. The third part is the threshold for memory,
defined by the programmer to notify the system that if the
available memory is below this threshold then, regardless
of input data characteristics always use the representation
requiring least memory; in the figure (under THRESHOLD) the
threshold is set to 100MB.
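Taken together, the policy amounts to the following decision rule (a sketch, with DS1 = ADJLIST and DS2 = ADJMAT; the ranges and the 100MB threshold mirror the sample policy above):

TIME_BEST = [('DS1', 0, 9), ('DS2', 9, 100)]
MEM_BEST = [('DS1', 0, 25), ('DS2', 25, 100)]
MEM_THRESHOLD_MB = 100

def choose(density, free_mem_mb):
    # below the threshold, always pick the most memory-efficient choice
    table = MEM_BEST if free_mem_mb < MEM_THRESHOLD_MB else TIME_BEST
    for ds, lo, hi in table:
        if lo <= density < hi:
            return ds
    return table[-1][0]   # density == 100 falls into the closing range

print(choose(density=15, free_mem_mb=2048))   # prints DS2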
/* Migrate data from representation DS1 to DS2: initialize DS2 from DS1's
   invariant parameters, move the elements, then delete DS1. */
void dataMigrationDS1DS2(void* DS1, void* DS2)
{
  initializationParameters* ip;
  ip = getInitializationParameter(DS1);
  initializeDS2(&DS2, ip);
  transferDataDS1DS2(&DS1, &DS2);
  deleteDS1(&DS1);
}
/* Move each stored value into DS2, freeing it from DS1 as we go. */
void transferDataDS1DS2(void** DS1, void** DS2)
{
  int i; void* dataValue;
  for (i = 0; i < (*DS1)->maxData; i++) {
    dataValue = fetchDataDS1(i, *DS1);
    if (dataValue != NULL) {
      insertDataDS2(*DS2, dataValue, i);
      deleteDataDS1(i, *DS1);
    }
  }
}
Figure 2: Data migration.
Switching between data structure representations. The data
structure transition logic is inserted into the source files by
the compiler, guided by the pragmas. This transition logic
carries out on-the-fly transitions from one data structure
representation to another whenever required. To accomplish
the transition, the in-memory data must be transformed
from one representation to another, along with the functions operating on them. The transition logic handles this
by function migration and in-memory data methods contained in the logic. When the code for transition logic is
inserted, appropriate header files are also inserted such that
source code after modification compiles and links properly.
To avoid recomputation of already-computed results, the result transfer logic (injected into code along with the transition logic) will transfer the already-computed results from
one representation to the other representation.
An example data migration function is shown in Figure 2.
The code in the figure transfers the data from the data structure representation DS1 to another representation DS2. It
begins with initialization of the DS2 data structure representation. The initialization parameters are fetched from
DS1 and they consist of standard parameters that are invariant in both DS1 and DS2. For example, in the MSSP
benchmark the invariant data is the number of nodes. In
the PP benchmark the invariant data consists of number
of nodes, the height, capacity and flow of each node. The
transferData function is generated from traverseData function
of DS1 as provided by the developer. This function traverses
through the data by reading each data value, migrating it
to DS2 representation using insertDataDS2 and also deleting
that data from DS1 using deleteDataDS1 thus releasing memory. The deleteDS1 clears memory which contains the data
regarding the initialization parameters.
The transition between implementations, i.e., switching
from one set of functions operating on representation DS1
to functions operating on representation DS2 must be carefully orchestrated. The developer denotes an operation with
a directive such as #pragma ADP("DS1","data1_op1"), which informs the compiler that the function is compute-intensive,
as shown in Figure 3. Any call to that function is replaced by our customized method, which checks and executes operations with the suitable data structure. In this
example computeMSSP_DS1 is replaced by callOP1. The additional parameter, startDS, denotes the type of the current
data structure representation in memory. The other three
parameters are the data structure, a progress gauge, and
the result set for storing the result. For example in the
/* Figure 3 (left), before compilation: */
#pragma ADP("DS1", "ds1_op1")
void computeMSSP_DS1(void* graph, void* rs, int* progress);
...
computeMSSP_DS1(graph, rs, progress);
...

/* Figure 3 (right), after compilation: */
//#pragma ADP("DS1", "ds1_op1")
void computeMSSP_DS1(void* graph, void* rs, int* progress);
...
callOP1(graph, rs, progress, startDS);
...

/* Figure 5 (left), before compilation: */
void computeMSSP_DS1(void* graph, void* rs, int* progress){
  ...
  #pragma ADP("DS1", "ds1_op1_safe")
  ...
}

/* Figure 5 (right), after compilation: */
void computeMSSP_DS1(void* graph, void* rs, int* progress){
  ...
  //#pragma ADP("DS1", "ds1_op1_safe")
  if(checkChangeStatus()==1){
    *progress = curProgress;
    return;
  }
  ...
}
void callOP1(void* ds, void* rs, int progress, currentDS){
extern int changeReq; void* newDS; void* newRS;
while(progress < 100){
if(changeReq == 1){ switch(currentDS) {
case 1:
currentDS = 2; dataMigrationDS1DS2(ds, newDS);
resultMigrationRS1RS2(rs, newRS);
ds = newDS; newDS = NULL; rs = newRS; newRS = NULL;
computeMSSPDS2(ds, rs, progress);
break;
case 2:
currentDS = 1; dataMigrationDS2DS1(ds, newDS);
resultMigrationRS2RS1(rs, newRS);
ds = newDS; newDS = NULL; rs = newRS; newRS = NULL;
computeMSSPDS1(ds, rs, progress);
break;
}}
else { switch(currentDS) {
case 1: computeMSSPDS1(ds, rs, progress); break;
case 2: computeMSSPDS2(ds, rs, progress); break;
}}}}
Figure 4: Switching between implementations.
case of MSSP, a method that finds MSSP has the signature
void computeMSSP_DS1(void* graph, void* rs ,int* progress). The
first parameter is the input graph and the second parameter rs stands for the result set and its declaration must
be annotated by the programmer with #pragma ADP("DS1",
"data1_res1"). The last parameter identifies the progress,
which is the iteration number of the outer most long running
loop. For example, if the method is called with a progress
value 10, then the execution is started from progress value 10
and continuously updated with the loop iteration number.
The detailed function selection and migration activity is
shown in Figure 4 for the MSSP benchmark. An external variable changeReq, set by the adaptation module, is checked
(line 4). If a transition has been requested, then first the
data is migrated from one data structure representation to
another (lines 6 and 12). Next, if needed, the result is migrated from one representation to another (lines 7 and 13).
Finally, the corresponding MSSP function for that data structure is called (lines 9 and 15) and the operation is resumed
from the progress point. If there is a change request from
the adaptation module, then operation is paused and it returns back to callOP1. This process continues until the MSSP
computation completes.
The question arises where ongoing MSSP computations
should be interrupted to check if the adaptation module has
requested a change or not. To solve this problem, we rely
on the programmers to use the directive #pragma ADP("DS1",
"ds1_op1_safe") to indicate the safe transition points in operation1
as shown in Figure 5. This directive notifies our framework
that, if the operation is paused and the transformation is
performed at that point, then there is minimal recomputation of result. This is typically the end of an iteration in
long-running loops. Since the programmer is well aware of
the long running loops in the compute-intensive function, it
is best to have the programmer mark the points
Figure 5: Adaptation module interrupt before compilation
(left) and after compilation (right).
Table 3: Comparison of execution times of non-adaptive versions with the adaptive version under program-triggered adaptations, on the original p2p-Gnutella graph. Note that ADJLIST is the better representation.
App. | Non-Adaptive ADJLIST (sec) | Non-Adaptive ADJMAT (sec) | Adaptive ADJMAT→ADJLIST Ex. Time (sec) | Transition Latency (sec) | Benefit Realized (%)
MSSP | 1,386 | 2,489 | 1,408 | 3.00 | 98.05
BC | 1,362 | 2,565 | 1,383 | 2.73 | 98.32
MST-K | 389 | 956 | 397 | 3.17 | 98.55
BFS | 1,434 | 2,594 | 1,457 | 2.56 | 98.05
MST-B | 1,454 | 2,114 | 1,465 | 3.15 | 98.35
PP | 81 | 256 | 87 | 3.2 | 96.35
appropriate for the insertion of adaptation module interrupts. The
directive is replaced by an interrupt which checks if there is
a change required and thus returns back to callOP1.
Figure 3: Function migration method before compilation (left) and after compilation (right).
[Figure 6 plots: execution time (sec) versus % density for MSSP, BC, BFS, MST-B, MST-K, and PP, comparing the adaptive version against non-adaptive ADJLIST and ADJMAT.]
Figure 6: Adaptive vs. non-adaptive performance.
4. EVALUATION
In this section we evaluate the performance of adaptive
versions of graph algorithms and compare them with corresponding non-adaptive versions of the applications. The
goals of these experiments are as follows. First, we evaluate the efficiency of our approach by measuring its benefits
and overhead. Second, we consider the benefits of adaptation under two scenarios: adaptation triggered by the input characteristic, i.e., graph density; and system triggered
adaptation. All experiments were run on a 24-core machine
(4 six-core AMD OpteronT M 8431 processors) with 32GB
RAM. The system ran Ubuntu 10.04, Linux kernel version
2.6.32-21-server. The sources were compiled with Gcc 4.4.3.
Real World Data-sets: We evaluate our system on some of
the real-world graphs from the SNAP graph library [31]. The
first graph, wiki-Vote, contains the who-votes-for-whom
graph in Wikipedia administrator elections. This graph has
7,115 nodes and 103,689 edges. The second graph, p2p-Gnutella, is a snapshot of Gnutella, a decentralized peer-to-peer file sharing network from August 9, 2002.
Table 4: Programming effort.
Application | # pragmas | Additional LOC
MSSP | 8 | 9
BC | 9 | 12
MST-K | 9 | 10
BFS | 8 | 8
MST-B | 9 | 14
PP | 8 | 6

Table 5: Breakdown of adaptation overhead (sec).
Application | DS Conversion | Monitoring & Transition Logic | Suboptimal Mode
MSSP | 1.56 | 10.78 | 9.07
BC | 1.44 | 10.11 | 8.62
MST-K | 1.98 | 5.89 | 0.3
BFS | 1.4 | 12.74 | 8.41
MST-B | 1.98 | 7.19 | 1.71
PP | 1.98 | 2.92 | 1.48
This graph has 8,114 nodes representing hosts and 26,013 edges representing the connections between these hosts. For experiments, in cases where a more dense graph was needed, we
added edges in both the graphs to raise the required density.
4.1 Programming Effort
The programmers need to add annotations to transform
off-the-shelf applications to adaptive ones. In addition to
this, programmers also need to modify the compute-intensive
methods so they can be executed in incrementalized fashion.
The number of pragmas added and the number of additional
lines of code added to modify the methods are shown in Table 4. As we can see, these numbers are fairly modest.
4.2 Input Triggered Adaptation
In this scenario we study how adaptive applications respond to the mismatch between the data structure representation fixed a priori at compile time and the density of
the input graph. We compute the benefit realized by our approach for various applications. In particular, we start the
program by using the ADJMAT representation and select a real
world graph (p2p-Gnutella) which is 0.004% dense, which
makes ADJLIST the ideal representation. Therefore, when
the adaptive application is run, it dynamically switches from
the ADJMAT to the ADJLIST representation.
In Table 3 we present the execution times of the nonadaptive (ADJLIST and ADJMAT representations) and adaptive (ADJMAT→ADJLIST) versions of the applications. For the
latter version, we also present the transition latency which
is the execution time after which the program has completed
the transition to the ADJLIST representation. From the results in Table 3, we observe the following. The execution
time of the adaptive version, on average, is 2.49% higher
than the non-adaptive ADJLIST version; but 48.09% lower
than the non-adaptive ADJMAT version. For example, for
MSSP, the execution of the adaptive version is 1,408 seconds
which is 1.54% higher than the execution time of the nonadaptive ADJLIST version (1386 seconds) and 56.55% lower
than the execution time of the non-adaptive ADJMAT version
(2,489 seconds). In addition, we observe that the transition
latency of the adaptive version is small in comparison to the
total execution time. For example, for MSSP, the transition
latency of 3 seconds is approximately 0.21% of the total execution time of 1,408 seconds. That is, the adaptation is performed quickly (low transition latency) and efficiently (low
transition overhead). Thus, nearly all the benefits of using
ADJLIST over ADJMAT are realized by the adaptive version.
We quantify the benefit realized by our approach as follows. The maximum possible benefit is given by the difference in the execution times of the non-adaptive ADJMAT and
non-adaptive ADJLIST versions. The benefit our approach
realizes is the difference between the execution times of the
non-adaptive ADJMAT version and the adaptive version. The
realized benefit, as a percentage of maximum possible benefit, is given in the last column of Table 3. As we can see,
the realized benefit is over 96% for these applications.
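For MSSP, with the numbers from Table 3, this works out as:

t_list, t_mat, t_adaptive = 1386, 2489, 1408
max_benefit = t_mat - t_list               # 1,103 seconds possible
realized = t_mat - t_adaptive              # 1,081 seconds achieved
print(100.0 * realized / max_benefit)      # ~98.0, as in Table 3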
The additional execution time taken by the adaptive version over the non-adaptive ADJLIST version can be divided
into three categories:
[Figure 7 plots: memory consumption (MB) over time (sec) for MSSP, BC, BFS, MST-K, MST-B, and PP, showing the drop at the ADJMAT→ADJLIST transition.]
Figure 7: Program-triggered adaptation.
time spent on converting from one data structure representation to another; time spent on runtime monitoring and transition logic to trigger adaptation; and the time lost due to running the application in suboptimal mode, i.e., with the ADJMAT data structure. The breakdown of the extra execution time into the three categories is
shown in Table 5. As we can see, the majority of the time is
spent on runtime monitoring and transition logic. The next
significant component is the time spent due to running the
program in the suboptimal configuration before the transition occurs. Note that the time spent on converting one
data structure into another (column 2) is the least.
An intuitive way to visualize adaptation is to plot how
the memory used by applications varies before, during, and
after adaptation. In Figure 7 we show how memory (y-axis) varies over time (x-axis) when starting the application
in the ADJMAT representation and then through adaptation,
the application transitions to ADJLIST. The charts point out
several aspects. First, since we are using sparse graphs, as
expected, the memory used is reduced significantly (tens of
megabytes) when we switch from the ADJMAT to ADJLIST
representation. Second, the switch from one data structure
to the other takes place fairly early in the execution of the
program. Third, the time to perform adaptation and the
extra memory used during adaptation are very low.
In Figure 6 we show the execution time of the adaptive
version for varying input densities over the range where we
expect the adaptive application to switch from the ADJLIST
to the ADJMAT representation. For these experiments, we
have used graph size of 4000 nodes and varied densities.
The execution times of the non-adaptive versions that use
fixed representations (ADJLIST and ADJMAT) are also shown.
As we can see, the performance of the adaptive application
is very close to the best of the two non-adaptive versions.
4.3 System Triggered Adaptation
In this section we study the second scenario, i.e., when the
adaptation is triggered by the system. The graph used for
these experiments was p2p-Gnutella at 20% density. However, we select ADJMAT as the initial data structure representation so that no adaptation was triggered due to the
mismatch between the data structure and graph density. Instead we provided the program with a system trigger that
forces the program to reduce its memory consumption.
[Figure 8 bar chart: normalized execution time for MSSP, BC, MST-K, BFS, MST-B, and PP under five configurations: ADJMAT only; 75% ADJMAT − 25% ADJLIST; 50% ADJMAT − 50% ADJLIST; 25% ADJMAT − 75% ADJLIST; ADJLIST only.]
Figure 8: Comparison of adaptive vs. non adaptive normalized execution time in system-triggered adaptation.
[Figure 9 plots: memory consumption (MB) over time (sec) for MSSP, BC, BFS, MST-B, MST-K, and PP, alternating between ADJMAT and ADJLIST across repeated transitions.]
Figure 9: System-triggered adaptation for wiki-Vote.
This causes adaptation to be triggered, and the program to switch
from ADJMAT to ADJLIST representation to save memory. As
expected, the execution takes longer. Since the conversion
from one representation to another can be triggered at any
time during a program’s execution, in this study we present
data for different trigger points – after 25%, 50%, and 75% of
total processing. We controlled the trigger point by tracking
the amount of processing that has been completed.
The results are presented in Figure 8. The execution times
of the following versions are presented: non-adaptive version
in the ADJMAT representation (leftmost bar); three adaptive
versions with different trigger points (middle three bars);
and non-adaptive ADJLIST (rightmost bar). All times are
normalized with respect to the time for non-adaptive ADJLIST. As we can see, the execution time of the adaptive version is always greater than the non-adaptive ADJMAT version
and less than the non-adaptive ADJLIST version. In other
words, if large amounts of memory are available for longer
duration, the adaptive version yields greater reduction in
execution time over the non-adaptive ADJLIST version.
To study the behavior of our approach when there are
multiple transitions, we ran experiments on wiki-Vote at
10% density in the following scenario. For each benchmark,
the execution was started with ADJMAT and then switched to
ADJLIST and vice versa after 20 %, 40%, 60% and 80%. We
controlled the triggers for memory changes from the system
by tracking the amount of processing that has been completed. We present the results in Figure 9. We can clearly
see that, during a resource crunch when available memory
decreases, our applications adapt to decrease their memory
requirements accordingly, hence running slower; after the
resource crunch is over, our applications re-assume the uncompressed representation and their performance increases.

4.4 Limitations of Our Approach

First, our approach is only useful when the alternative data structures offer a significant trade-off between memory usage and execution time. For example, for the agglomerative clustering benchmark, when we tried using two alternate data structures, kd-tree and r-tree, we observed no significant trade-off between memory usage and execution time. Since there is a need to bulk load the data, the kd-tree always outperforms the r-tree. Second, our approach is only useful when the application is sufficiently compute and data intensive to justify the cost of runtime monitoring and transition logic. For example, in the case of the Max Cardinality Bipartite Matching benchmark, although the trade-off exists, the benchmark is not sufficiently compute-intensive to justify the adaptation cost.

5. RELATED WORK
There is a large body of work on program transformations applied at compile-time or runtime to enhance program
performance, which also influences resource usage. Some of
these techniques can be used to support adaptation. ContextErlang [4] supports the construction of self-adaptive software using different call back modules. Compiler-enabled
adaptation techniques include altering of the contentiousness of an application [14, 16], which enables co-location
of applications without interfering with their performance;
data spreading [18] migrates the application across multiple cores; adaptive loop transformation [7] allows a program
to execute in more than one way during execution based on
runtime information. Multiple applications that are running
on multicore systems can significantly impact each other’s
performance as they must share hardware resources (e.g.,
last level cache, access paths to memory) [27]. The impact
of interference on program performance can be predicted and
estimated [10, 15], and contention management techniques
guided by last level shared cache usage and lock contention
have been developed [2, 25, 24, 13, 26, 12].
Huang et al. proposed Self Adaptive Containers [8] where
they provide the developer with a container library which
adjusts the underlying data structure associated with the
container to meet Service Level Objectives (SLO); adaptation occurs during SLO violations. Similarly, CoCo [28] allows adaptation by switching between Java collections during execution depending on the size of collection. These
methods are orthogonal to our approach as they do not have
scope for user-defined data structures, and the space-time
tradeoff is not taken into consideration.
6. CONCLUSION
Graph applications have resource requirements that vary
greatly across runs due to differences in graph characteristics; moreover, the required memory might not be available due to pressure from co-located applications. We have
observed that data structure choice is crucial for allowing
the application to get the best out of available resources.
We propose an approach that uses programming and runtime support to allow graph applications to be transformed
into adaptive applications by choosing the most appropriate
data structure. Experiments with graph-manipulating applications which adapt by switching between data structure
representations show that our approach is easy to use on off-the-shelf applications, is effective at performing adaptations,
and imposes very little overhead.
Acknowledgments
This work was supported in part by NSF grants CCF-0963996
and CCF-1149632. This research was sponsored by the
Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in
this document are those of the authors and should not be
interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the
U.S. Government. The U.S. Government is authorized to
reproduce and distribute reprints for Government purposes
notwithstanding any copyright notation here on.
7.
REFERENCES
[1] A. Baumann, J. Appavoo, D. D. Silva, J. Kerr,
O. Krieger, and R. W. Wisniewski. Providing dynamic
update in an operating system. USENIX ATC’05.
[2] M. Bhadauria and S. A. McKee. An approach to
resource-aware co-scheduling for cmps. ICS’10.
[3] C. A. N. Soules, J. Appavoo, D. D. Silva,
M. Auslander, G. R. Ganger, M. Ostrowski, et al.
System support for online reconfiguration. USENIX
ATC’03.
[4] C. Ghezzi, M. Pradella, and G. Salvaneschi.
Programming language support to context-aware
adaptation: a case-study with erlang. SEAMS’10.
[5] C. Jung, S. Rus, B. P. Railing, N. Clark, and S. Pande.
Brainy: effective selection of data structures. PLDI’11.
[6] E. Schonberg, J. T. Schwartz, and M. Sharir. An
automatic technique for selection of data
representations in setl programs. ACM TOPLAS’81.
[7] R. Gupta and R. Bodik. Adaptive loop
transformations for scientific programs. IPDPS’95.
[8] W.-C. Huang and W. J. Knottenbelt. Self-adaptive
containers: Building resource-efficient applications
with low programmer overhead. SEAMS 2013.
[9] I. Neamtiu, M. Hicks, G. Stoyle, and M. Oriol.
Practical dynamic software updating for C. PLDI’06.
[10] J. Mars, L. Tang, R. Hundt, K. Skadron, and M. L.
Soffa. Bubble-up: increasing utilization in modern
warehouse scale computers. In MICRO-44 ’11.
[11] C. Jung and N. Clark. Ddt: design and evaluation of a
dynamic program analysis for optimizing data
structure usage. MICRO-42 ’09.
[12] K. Kumar Pusukuri, R. Gupta, and L. N. Bhuyan.
Adapt: A framework for coscheduling multithreaded
programs. In ACM TACO’13.
[13] K. Pusukuri, R. Gupta, and L. Bhuyan. No more
backstabbing... a faithful scheduling policy for
multithreaded programs. PACT’11.
[14] L. Tang, J. Mars, and M. L. Soffa. Compiling for
niceness: mitigating contention for qos in warehouse
scale computers. CGO’12.
[15] L. Tang, J. Mars, N. Vachharajani, R. Hundt, and
M. Soffa. The impact of memory subsystem resource
sharing on datacenter applications. ISCA’11.
[16] L. Tang, J. Mars, W. Wang, T. Dey, M. Soffa. Reqos:
Reactive static/dynamic compilation for qos in
warehouse scale computers. ASPLOS’13.
[17] L. Liu and S. Rus. Perflint: A context sensitive
performance advisor for C++ programs. CGO’09.
[18] M. Kamruzzaman, S. Swanson, and D. M. Tullsen.
Software data spreading: leveraging distributed caches
to improve single thread performance. PLDI’10.
[19] J. Mars and R. Hundt. Scenario based optimization:
A framework for statically enabling online
optimizations. In CGO, pages 169–179, 2009.
[20] I. Neamtiu. Elastic executions from inelastic
programs. SEAMS’11
[21] I. Neamtiu and M. W. Hicks. Safe and timely updates
to multi-threaded programs. PLDI’09.
[22] O. Shacham, M. Vechev, and E. Yahav. Chameleon:
adaptive selection of collections. PLDI’09.
[23] K. Pingali, D. Nguyen, M. Kulkarni, M. Burtscher,
M. A. Hassaan, R. Kaleem, T.-H. Lee, A. Lenharth,
R. Manevich, M. Méndez-Lojo, D. Prountzos, and
X. Sui. The tao of parallelism in algorithms. PLDI’11.
[24] R. Knauerhase, P. Brett, B. Hohlt, T. Li, and
S. Hahn. Using os observations to improve
performance in multicore systems. Micro’93.
[25] S. Blagodurov, S. Zhuravlev, A. Fedorova, and
A. Kamali. A case for numa-aware contention
management on multicore systems. PACT’10.
[26] S. Zhuravlev, S. Blagodurov, and A. Fedorova.
Addressing shared resource contention in multicore
processors via scheduling. ASPLOS’10.
[27] W. Wang, T. Dey, J. Mars, L. Tang, J. Davidson, and
M. Soffa. Performance analysis of thread mappings
with a holistic view of the hardware resources.
ISPASS’12.
[28] Guoqing Xu. Coco: Sound and Adaptive replacement
of java collections. In ECOOP, pages 1–26, June 2013.
[29] Y. Zhang and R. Gupta. Data compression
transformations for dynamically allocated data
structures. LNCS 2304 ’02.
[30] J. Eastep, D. Wingate and A. Agarwal. Smart Data
Structures: An Online Machine Learning Approach to
Multicore Data Structures. In ICAC, pages 11–20,
2011.
[31] J. Leskovec et al. Stanford Network Analysis Platform.
http://snap.stanford.edu/snap/index.html
[32] Wei Wang et al. REEact: A Customizable Virtual
Execution Manager for Multicore Platforms. VEE’12.
[33] Kunegis, Jérôme. KONECT: The Koblenz Network
Collection. WWW ’13.
| 6 |
THE SUBGROUP IDENTIFICATION PROBLEM FOR
FINITELY PRESENTED GROUPS
arXiv:1109.1792v4 [] 19 Oct 2016
MAURICE CHIODO
Abstract. We introduce the subgroup identification problem, and show
that there is a finitely presented group G for which it is unsolvable, and
that it is uniformly solvable in the class of finitely presented locally Hopfian
groups. This is done as an investigation into the difference between strong
and weak effective coherence for finitely presented groups.
1. Introduction
Decision problems for finitely presented groups are well-studied, going back
to the work of Dehn [8] in 1911. The word problem, of deciding when a word
represents the trivial element in a group, was the first of these to be well understood, with the independent work of Boone, Britton and Novikov [3, 5, 17]. A
clever adaptation of this problem by the independent work of Adian and Rabin
[1, 18] gives rise to the impossibility of the isomorphism problem, of deciding
if two finite presentations describe isomorphic groups. To list all major results
in group-theoretic decision problems would be a substantial undertaking, and
we defer to [16] for an excellent introduction and survey of the discipline up to
1992.
For the remainder of this section we write X ∗ to denote all finite words on
a set X, P to denote the group given by a presentation P , and ⟨w1 , . . . , wn ⟩P
to denote the subgroup in P generated by words w1 , . . . , wn (see section 2.1 for
a clarification of this notation if required).
We introduce the following problem, and give the main result of this work.
Definition 1.1. We say that the subgroup identification problem for a finite
presentation P = ⟨X∣R⟩ is solvable if there is a partial algorithm that, on input
of any finite set of words w1 , . . . , wn ∈ X ∗ , and any finite presentation Q = ⟨Y ∣S⟩,
outputs (if the subgroup ⟨w1 , . . . , wn ⟩P ≅ Q) an explicit set map φ from Y to
{w1 , . . . , wn }∗ which extends to an isomorphism φ ∶ Q → ⟨w1 , . . . , wn ⟩P . That is,
the induced homomorphism φ ∶ Q → P is injective, and Im(φ) = ⟨w1 , . . . , wn ⟩P .
Otherwise, we place no conditions on the output, or even whether it halts.
Theorem 4.6. There is a finitely presented group with unsolvable subgroup
identification problem.
2010 AMS Classification: 20F10, 03D40, 03D80.
Keywords: Effectively coherent groups, subgroup identification problem, decidability.
The author was supported by:
An Australian Postgraduate Award, and a David Hay Postgraduate Writing-Up Award.
Date: October 20, 2016.
As a consequence of lemma 2.6, having solvable subgroup identification problem depends only on the isomorphism class of a finite presentation.
In [9, Definition 1.1] a finitely generated group G is said to be effectively
coherent if it is coherent (all of its finitely generated subgroups are finitely
presentable) and there is an algorithm for G that, on input of a finite collection
of words S, outputs a finite presentation for the subgroup generated by S. This
motivates us to make the following two definitions:
Definition 1.2. We say a coherent group G = ⟨X∣R⟩ is weakly effectively coherent if there is an algorithm for G that, on input of a finite collection of words
S ⊂ X ∗ , outputs a finite presentation for the subgroup ⟨S⟩G generated by S.
Definition 1.3. We say a coherent group G = ⟨X∣R⟩ is strongly effectively
coherent if there is an algorithm for G that, on input of a finite collection of
words S ⊂ X ∗ , outputs a finite presentation P = ⟨Y ∣V ⟩ for the subgroup ⟨S⟩G
generated by S, along with a map φ ∶ Y → X ∗ which extends to an injection
φ ∶ P → G whose image is ⟨S⟩G .
By definition, strongly effectively coherent groups are weakly effectively coherent. Conversely, it is immediate that the notions of strong and weak effective
coherence are equivalent in groups with solvable subgroup identification problem. A motivating (unresolved) question for this work is whether strong and
weak effective coherence are equivalent notions for all finitely presented groups.
The subgroup identification problem was formulated after the author read
the proof of [9, Lemma 1.2] and [9, Remark 1.3] where, were it not for a later
typographical correction provided in [10, Definition 1.2], it would suggest that
the subgroup identification problem is (uniformly) solvable over all finitely presented groups. This would, in turn, imply that weak and strong effective coherence are equivalent notions. The groups encountered in [9] all have solvable
subgroup identification problem, as they are all locally Hopfian (see theorem 3.8
below). In addition, for all results shown in [9] all weakly effectively coherent
groups are actually proven to be strongly effectively coherent, so combined with
the typographical correction from [10, Definition 1.2] this then becomes a moot
point. We do not suggest that there are any errors with the main conclusions of
[9]. However, [9, Lemma 1.2 and Remark 1.3] do raise an interesting question
regarding the connection between strong and weak effective coherence.
An application of standard techniques gives the following result, which should
be read very carefully as one could misinterpret it as showing that the subgroup
identification problem is uniformly solvable for all finitely presented groups.
Proposition 3.5. There is a uniform partial algorithm that, on input of: A finite presentation P = ⟨X∣R⟩ of a group, and a finite set of words w1 , . . . , wn ∈ X ∗
such that ⟨w1 , . . . , wn ⟩P is finitely presentable, and Q = ⟨Y ∣S⟩ a finite presentation such that Q ≅ ⟨w1 , . . . , wn ⟩P ;
Outputs: A finite set of words c1 , . . . , ck ∈ {w1 , . . . , wn }∗ such that each ci is
trivial in P (when viewed as a word in X ∗ ) and ⟨w1 , . . . , wn ∣c1 , . . . , ck ⟩ is isomorphic to Q and hence also to ⟨w1 , . . . , wn ⟩P .
A group G is Hopfian if every surjective endomorphism of G is an isomorphism, and locally Hopfian if every finitely generated subgroup is Hopfian.
Theorem 3.8. The subgroup identification problem is uniformly solvable in the
class of finitely presented locally Hopfian groups. Hence, in this class of groups,
weak effective coherence is equivalent to strong effective coherence.
Thus the notions of weak and strong effective coherence are equivalent in
the class of locally Hopfian groups, and are so because in this class the subgroup identification problem is uniformly solvable. However, as soon as we start
dealing with non-Hopfian groups, the situation is quite different.
The following theorem, and subsequent lemma, are vital to our main result.
They show that a solution to the word problem for a finite presentation cannot
be uniformly lifted to a solution to the word problem for a recursive presentation isomorphic to that same group (so an isomorphism between the two cannot
be constructed). This contrasts the case of having a pair of isomorphic finite
presentations, where an isomorphism can be uniformly constructed (lemma 2.6).
Theorem 4.1. There is a finite presentation P of a group with solvable word
problem (namely, the Baumslag-Solitar group BS(2, 3)) for which there is no
partial algorithm that, on input of a recursive presentation Q such that P ≅ Q,
outputs a solution to the word problem for Q.
Lemma 4.2. There is a finite presentation P = ⟨X∣R⟩ of a group with solvable
word problem (namely, BS(2, 3)) for which there is no partial algorithm that,
on input of a recursive presentation Q = ⟨Y ∣S⟩ such that P ≅ Q, outputs a set
map φ ∶ X → Y ∗ which extends to an isomorphism φ ∶ P → Q.
By viewing isomorphisms between groups as Tietze transformations rather
than set maps (see [13, §1.5] for a full exposition of this concept), we can reinterpret lemma 4.2 in the following way.
Lemma 4.4. There is a finite presentation P of a group with solvable word
problem (namely, BS(2, 3)) for which there is no algorithm that, on input of
a recursive presentation Q such that P ≅ Q, outputs a recursive enumeration
of Tietze transformations of type (T 1) and (T 2) (manipulation of relators),
and a finite set of Tietze transformations of type (T 3) and (T 4) (addition and
removal of generators), with notation as per [13, §1.5], transforming P to Q.
A direct consequence of the Higman embedding theorem [11, Theorem 1] is
that any recursively presented group can be uniformly embedded into a finitely
presented group. By a careful application of this result, paying special attention to the uniformity in which such a finite presentation is constructed as
given in the proof of [20, Theorem 12.18], our main result (theorem 4.6) follows.
Acknowledgements: The author would like to thank Jack Button and Andrew Glass for their general advice, and Chuck Miller for giving general comments and corrections as well as pointing out a more elegant proof of lemma
3.6 and a much more direct proof of theorem 4.6. Thanks also go to Nicholas
Touikan for his helpful suggestions for a revised version of this work, and to
Daniel Groves and Henry Wilton for explaining their work in [9].
2. Preliminaries
2.1. Standard notation. With the convention that N contains 0, we take ϕm to be the mth partial recursive function ϕm ∶ N → N, and the mth recursively enumerable (r.e.) set Wm to be the domain of ϕm . If P = ⟨X∣R⟩ is a group
presentation with generating set X and relators R, then we denote by P the
group presented by P . A presentation P = ⟨X∣R⟩ is said to be a recursive
presentation if X is a finite set and R is a recursive enumeration of relators;
P is said to be an infinite recursive presentation if instead X is a recursive
enumeration of generators. A group G is said to be finitely presentable if G ≅ P
for some finite presentation P . If P, Q are group presentations then we denote
their free product presentation by P ∗ Q, given by taking the disjoint union
of their generators and relators; this extends to the free product of arbitrary
collections of presentations. If X is a set, then we denote by X −1 a set of
the same cardinality as X along with a fixed bijection φ ∶ X → X −1 , where
we denote x−1 ∶= φ(x). We write X ∗ for the set of finite words on X ∪ X −1 ,
including the empty word ∅. If g1 , . . . , gn are a collection of elements of a
group G, then we write ⟨g1 , . . . , gn ⟩G for the subgroup in G generated by these
elements, and ⟪g1 , . . . , gn ⟫G for the normal closure of these elements in G. A
group G is Hopfian if every surjective endomorphism of G is an isomorphism,
and locally Hopfian if every finitely generated subgroup is Hopfian. Finally, the
commutator [x, y] is taken to be xyx−1 y −1 .
2.2. Groups. It is a result by Mihailova [14] that there exists a finitely presented group with solvable word problem for which the subgroup membership
problem, of deciding when a word on the generators lies in a given finitely generated subgroup, is unsolvable. We mention this in the context of the following
known result about the word problem for HNN extensions of groups.
Lemma 2.1. Let H, K ⩽ G be isomorphic finitely generated subgroups of G,
each having solvable subgroup membership problem in G. Let ϕ ∶ H → K be an
isomorphism. Then the HNN extension G∗ϕ has solvable word problem.
Proof. This is immediate from the normal form theorem for HNN extensions
(see [12, §IV Theorem 2.1]).
The following group construction from [2] plays an important part in our
arguments, as it is an explicit example of a finitely presented non-Hopfian group
with solvable word problem.
Definition 2.2. The Baumslag-Solitar groups BS(m, n) are defined via the
following finite presentations, each an HNN extension of Z:
BS(m, n) ∶= ⟨s, t ∣ s⁻¹tᵐs = tⁿ⟩
Theorem 2.3 (Baumslag-Solitar [2, Theorem 1]). The group BS(2, 3) is non-Hopfian and has solvable word problem. The map f ∶ BS(2, 3) → BS(2, 3) given by extending the map f ∶ {s, t} → {s, t}∗ , f (s) = s, f (t) = t², is a non-injective epimorphism. The word [s⁻¹ts, t] is non-trivial in BS(2, 3) and lies in ker(f ).
Proof. That BS(2, 3) has solvable word problem comes from the fact that it is
an HNN extension of Z (see lemma 2.1, or alternatively a result by Magnus in
[12, § IV Theorem 5.3] that all 1-relator groups have solvable word problem).
The remainder of the theorem is proved in [2, Theorem 1].
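For the reader's convenience, the verification (standard, and easily checked by hand) runs as follows: applying f to the defining relator s⁻¹t²st⁻³ gives s⁻¹t⁴st⁻⁶ = (s⁻¹t²s)(s⁻¹t²s)t⁻⁶ = t³t³t⁻⁶ = e, so f extends to an endomorphism of BS(2, 3); and f is surjective since t³ = s⁻¹t²s lies in the image ⟨s, t²⟩, whence t = t³(t²)⁻¹ does as well.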
2.3. Enumerable processes in groups. We note some partial algorithms
and recursively enumerable sets, in the context of group presentations. These
are standard results, which we state (without proof) for the convenience of any
reader not familiar with the area.
Lemma 2.4. Let P = ⟨X∣R⟩ be a recursive presentation. Then the words in
X ∗ which represent the identity in P are recursively enumerable. Moreover,
this algorithm is uniform over all recursive presentations.
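To make the enumeration behind lemma 2.4 concrete, here is a minimal Python sketch (an illustration, not taken from the original text): words are tuples of letters, with the inverse of a generator 'x' written 'x-', and the relators are assumed to be given as a finite list (for a recursive presentation one would dovetail the relator enumeration into the stage counter n in the same way). Stage n forms all products of at most n conjugates w r^±1 w⁻¹ with ∣w∣ ≤ n, so every word representing the identity eventually appears, up to free reduction.

from itertools import count, product

def inv(g):
    # 'x' <-> 'x-' swaps a letter with its formal inverse
    return g[:-1] if g.endswith('-') else g + '-'

def free_reduce(word):
    out = []
    for g in word:
        if out and out[-1] == inv(g):
            out.pop()  # cancel an adjacent inverse pair
        else:
            out.append(g)
    return tuple(out)

def invert(word):
    return tuple(inv(g) for g in reversed(word))

def trivial_words(gens, relators):
    # Yield the freely reduced words that are trivial in <gens | relators>.
    letters = list(gens) + [inv(g) for g in gens]
    seen = set()
    for n in count(1):
        conjugators = [w for L in range(n + 1)
                       for w in product(letters, repeat=L)]
        pieces = [free_reduce(tuple(w) + tuple(r) + invert(w))
                  for w in conjugators for r in relators]
        pieces += [invert(p) for p in pieces]
        for m in range(1, n + 1):
            for combo in product(pieces, repeat=m):
                t = free_reduce(sum(combo, ()))
                if t not in seen:
                    seen.add(t)
                    yield t

For instance, trivial_words(('s', 't'), [('s-', 't', 't', 's', 't-', 't-', 't-')]) enumerates the (reduced) trivial words of the presentation of BS(2, 3) from definition 2.2.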
Lemma 2.5. There is a partial algorithm that, on input of two finite presentations P = ⟨X∣R⟩ and Q = ⟨Y ∣S⟩, and a set map φ ∶ X → Y ∗ , halts if and only
if φ extends to a homomorphism φ ∶ P → Q.
Lemma 2.6. There is a partial algorithm that, on input of two finite presentations P = ⟨X∣R⟩ and Q = ⟨Y ∣S⟩, halts if and only if P ≅ Q, and outputs an
isomorphism between them.
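The following Python sketch (again illustrative, reusing inv, invert, free_reduce and trivial_words from the sketch above) shows one way to realize the partial algorithm of lemma 2.6. It dovetails over pairs of candidate generator maps and a verification budget n: a pair (φ, ψ) of mutually inverse homomorphisms exists iff finitely many words are trivial, and each such triviality is witnessed at some finite stage of the lemma 2.4 enumeration, so the search halts exactly when the two groups are isomorphic. It is, of course, wildly impractical.

from itertools import count, islice, product

def apply_map(f, word):
    # extend a generator map f: gens -> words to all words, with f(x⁻¹) = f(x)⁻¹
    out = ()
    for g in word:
        out += invert(f[g[:-1]]) if g.endswith('-') else f[g]
    return out

def bounded_trivial(word, gens, relators, steps):
    # truncated lemma 2.4: is `word` among the first `steps` reduced trivial words?
    w = free_reduce(word)
    return any(t == w for t in islice(trivial_words(gens, relators), steps))

def iso_search(X, R, Y, S):
    for n in count(1):
        wordsY = [w for L in range(n + 1)
                  for w in product(list(Y) + [inv(y) for y in Y], repeat=L)]
        wordsX = [w for L in range(n + 1)
                  for w in product(list(X) + [inv(x) for x in X], repeat=L)]
        for imgs in product(wordsY, repeat=len(X)):
            phi = dict(zip(X, imgs))
            for imgs2 in product(wordsX, repeat=len(Y)):
                psi = dict(zip(Y, imgs2))
                # obligations in Q: phi respects R, and phi(psi(y)) = y
                in_Q = [apply_map(phi, r) for r in R] + \
                       [apply_map(phi, psi[y]) + (inv(y),) for y in Y]
                # obligations in P: psi respects S, and psi(phi(x)) = x
                in_P = [apply_map(psi, s) for s in S] + \
                       [apply_map(psi, phi[x]) + (inv(x),) for x in X]
                if all(bounded_trivial(w, Y, S, n) for w in in_Q) and \
                   all(bounded_trivial(w, X, R, n) for w in in_P):
                    return phi, psi  # an explicit isomorphism pair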
It is important to note that lemma 2.6 does not hold if we instead consider
recursive presentations. In fact, even if we start with one recursive presentation,
and one finite presentation, the lemma does not hold; we show this later as
lemma 4.2.
Lemma 2.7. Let P = ⟨X∣R⟩ be a finite presentation. Then the set of finite presentations defining groups isomorphic to P is recursively enumerable. Moreover,
this enumeration algorithm is uniform over all finite presentations.
3. Initial questions and observations
3.1. The finite presentation problem. Following [4, Definition 1.1], we say
that the finite presentation problem for a finitely presented group ⟨X∣R⟩ is
solvable if there is a partial algorithm that, on input of a finite set of words
w1 , . . . , wn ∈ X ∗ such that ⟨w1 , . . . , wn ⟩P is finitely presentable, outputs a finite
presentation Q for ⟨w1 , . . . , wn ⟩P . By definition, a finitely presented weakly effectively coherent group has solvable finite presentation problem. The existence
of a finitely presented group with unsolvable word problem immediately gives
rise to the following observation:
Lemma 3.1. There is a finite presentation P = ⟨X∣R⟩ of a group for which the
finite presentation problem is unsolvable.
Proof. Take P = ⟨X∣R⟩ to be a finite presentation of a group with unsolvable
word problem (see [20, Lemma 12.7]), and suppose this has an algorithm to
solve the finite presentation problem. Note that for any word w ∈ X ∗ , we have
that ⟨w⟩P is cyclic, and hence finitely presentable. So, on input of a single word
w, the algorithm will output a finite presentation Q of the cyclic group ⟨w⟩P .
But the isomorphism problem for finitely presented abelian groups is solvable
(see [16] p. 31), so we can decide if Q is trivial, and hence if w is trivial in P ,
which is impossible since P has unsolvable word problem.
Remark. As pointed out to the author by Chuck Miller, the work of Collins in [7]
shows that the group in lemma 3.1 can be taken to have solvable word problem.
This is because such an algorithm for the finite presentation problem would
explicitly solve the order problem, of deciding what the order of an element of
the group is, and a direct consequence of [7, Theorem A] is that there is a finitely
presented group with solvable word problem and unsolvable order problem.
Bridson and Wilton [4] give an in-depth analysis of the finite presentation
problem, showing that it is not uniformly solvable in several classes of finitely
presented groups [4, Corollary C]. Moreover, they construct a finite presentation
of a group with polynomial Dehn function (and hence solvable word problem)
which has unsolvable finite presentation problem [4, Theorem E].
Having seen that the finite presentation problem is unsolvable in general, we
shift our attention to similar questions. By considering the trivial group, and
the fact that the triviality problem is undecidable, one can show that there is
no algorithm that, given two finite presentations P, Q, determines if P embeds
in Q or not. The following two stronger results were obtained in [6], and are
closely related to the subgroup identification problem.
Theorem 3.2 (Chiodo [6, Theorem 6.8]). There is a finitely presented group
G such that the set of finite presentations of groups which embed into G is not
recursively enumerable.
Theorem 3.3 (Chiodo [6, Theorem 6.6]). There is no algorithm that, on input
of two finite presentations P = ⟨X∣R⟩, Q = ⟨Y ∣S⟩ such that P embeds in Q,
outputs an explicit map φ ∶ X → Y ∗ which extends to an embedding φ ∶ P ↪ Q.
In the proof of theorem 3.3, as found in [6], we see that the algorithmic
problem arises from not definitely knowing a set of target words for an injection
from P into Q. Knowing which elements P maps to in Q brings us precisely to
the subgroup identification problem.
3.2. The subgroup identification problem for Hopfian groups. The following is a standard result about finitely presentable groups.
Lemma 3.4. Let ⟨X∣R⟩ be a recursive presentation of a finitely presentable
group. Then there is a finite subset R′ ⊆ R such that ⟨X∣R⟩ ≅ ⟨X∣R′ ⟩ via extension of the identity map on X. That is, there is a finite truncation R′ of R such that all other relators are consequences of those in R′ .
The following result could lead us to think that all finitely presented groups
have solvable subgroup identification problem, and we urge the reader to study
this carefully to become convinced that it is not what is proved.
Proposition 3.5. There is a uniform partial algorithm that, on input of: A finite presentation P = ⟨X∣R⟩ of a group, and a finite set of words w1 , . . . , wn ∈ X ∗
such that ⟨w1 , . . . , wn ⟩P is finitely presentable, and Q = ⟨Y ∣S⟩ a finite presentation such that Q ≅ ⟨w1 , . . . , wn ⟩P ;
Outputs: A finite set of words c1 , . . . , ck ∈ {w1 , . . . , wn }∗ such that each ci is
trivial in P (when viewed as a word in X ∗ ) and ⟨w1 , . . . , wn ∣c1 , . . . , ck ⟩ is isomorphic to Q and hence also to ⟨w1 , . . . , wn ⟩P .
Proof. Begin an enumeration c1 , c2 , . . . of all words in {w1 , . . . , wn }∗ which are trivial in P (when viewed as words in X ∗ ); this can be done by repeated application of lemma 2.4. Define the presentation Pl ∶= ⟨w1 , . . . , wn ∣c1 , . . . , cl ⟩. By lemma 3.4, as ⟨w1 , . . . , wn ⟩P is finitely presentable, and T ∶= ⟨w1 , . . . , wn ∣c1 , c2 , . . .⟩ is a recursive presentation of a group isomorphic to ⟨w1 , . . . , wn ⟩P via extension of the map wi ↦ wi , there exists some finite m such that P m ≅ T (again via extension of the map wi ↦ wi ). That is, using lemma 3.4 we can truncate the relations of T at position m (forming Pm ) such that all successive relations are consequences of the first m (note that selecting m is not an algorithmic process; for the moment we merely appeal to the fact that such an m exists). So we have Q ≅ ⟨w1 , . . . , wn ⟩P ≅ T ≅ P m , where Q is our given explicit finite presentation. Now we use lemma 2.6 to begin checking for an isomorphism between Q and P 1 , P 2 , . . . as a parallel process, allotting time slices to each check in turn. Eventually this process will stop at some k (perhaps different from m) such that P k ≅ T ≅ Q.
Note that we were very careful to mention that the selection of m in the above
proof was existential, but not necessarily recursive. This is very important, as
later in corollary 4.3 we construct a class of recursive presentations for which
the selection of such an m is provably non-recursive.
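For illustration only, here is the shape of the algorithm of proposition 3.5 in Python, assuming two helpers that are not defined here: subgroup_trivial_words, an enumerator of the words in {w1 , . . . , wn }∗ that are trivial in P (lemma 2.4), and iso_checker(Pl, Q), a step-sliceable version of the lemma 2.6 search (e.g. a generator yielding None until the isomorphism check succeeds).

def identify_subgroup(subgroup_trivial_words, Q, iso_checker):
    cs, checkers = [], []
    source = iter(subgroup_trivial_words)
    while True:
        cs.append(next(source))                      # extend c1, c2, ...
        checkers.append(iso_checker(tuple(cs), Q))   # start a check of P_l ≅ Q
        for l, chk in enumerate(checkers, start=1):
            if next(chk) is not None:                # one time slice per checker
                return cs[:l]                        # c1, ..., ck as in the proposition

The crucial point mirrored here is that no checker may be run to completion before the others are started: any individual check of whether Pl presents a group isomorphic to Q may fail to halt.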
Remark. With the above proof in mind, one would naively hope that the map from Pk = ⟨w1 , . . . , wn ∣c1 , . . . , ck ⟩ to P = ⟨X∣R⟩ given by extending wi ↦ wi ∈ X ∗ would be an injection onto its image ⟨w1 , . . . , wn ⟩P ; in the case that P k is Hopfian this would indeed be true, as shown later in lemma 3.7. But theorem 4.6 shows that this is not true in general.
The existence of a finitely presented non-Hopfian group, combined with
lemma 3.4, immediately gives rise to the following:
Lemma 3.6. There exist finite presentations P = ⟨x1 , . . . , xn ∣r1 , . . . , rm ⟩ and
Q = ⟨x1 , . . . , xn ∣r1 , . . . , rm , s1 , . . . , sk ⟩, along with a recursive presentation T =
⟨x1 , . . . , xn ∣r1 , r2 , . . .⟩ (where ri are the same in P, Q, T for all i ≤ m) such that:
1. P ≅ Q ≅ T (and hence T is finitely presentable).
2. The quotient maps φ ∶ P → Q and ψ ∶ P → T , each an extension of the map
xi ↦ xi , are not isomorphisms.
Observe that the above quotient maps are always isomorphisms in the case of
Hopfian groups, by the very definition. The point of making the above observation
is to stress the following:
1. There may be many ways to truncate the relators of T to get a presentation
of a group isomorphic to P .
2. Truncating T to get a finite presentation T ′ with T ′ ≅ T may not give
a presentation which is isomorphic to P via the quotient map we have been
discussing.
Lemma 3.7. Let P , Pk and {w1 , . . . , wn } be as in the proof of proposition 3.5.
If ⟨w1 , . . . , wn ⟩P is Hopfian, then the map φ ∶ {w1 , . . . , wn } → X ∗ sending wi
(as a generator of Pk ) to wi (as a word in X ∗ ) extends to a monomorphism
φ ∶ P k ↪ P , and thus to an isomorphism to its image ⟨w1 , . . . , wn ⟩P .
Proof. By the proof of proposition 3.5, we know that φ extends to a surjection
φ ∶ P k ↠ ⟨w1 , . . . , wn ⟩P . But P k ≅ ⟨w1 , . . . , wn ⟩P , which is Hopfian. Hence φ
must be injective, and thus an isomorphism.
Theorem 3.8. The subgroup identification problem is uniformly solvable in the
class of finitely presented locally Hopfian groups. Hence, in this class of groups,
weak effective coherence is equivalent to strong effective coherence.
Proof. The first part follows from lemma 3.7; the second is then immediate.
4. Main results
We begin by making the important observation that a solution to the word
problem cannot be algorithmically lifted from a finite presentation of BS(2, 3) to
a recursive presentation of BS(2, 3).
Theorem 4.1. There is a finite presentation P of a group with solvable word
problem (namely, BS(2, 3)) for which there is no partial algorithm that, on input
of a recursive presentation Q such that P ≅ Q, outputs a solution to the word
problem for Q.
Proof. Assume we have such an algorithm. Take P = ⟨X∣R⟩ to be a finite
presentation for BS(2, 3). Fix Q = ⟨X∣R∪S⟩ to be a finite presentation of a nontrivial quotient which is isomorphic to P , but not by the extension IdX of the
map IdX ∶ X → X (say, instead, by the extension φ of the map φ ∶ X → X ∗ ). Fix
a word w ∈ X ∗ such that w lies in the kernel of φ, but is not trivial in BS(2, 3).
Given any r.e. set Wi , we form the recursive presentation Pi,j ∶= ⟨X∣R (∪ S if j ∈ Wi )⟩. That is, Pi,j ∶= ⟨X∣R⟩ if j ∉ Wi , and Pi,j ∶= ⟨X∣R ∪ S⟩ if j ∈ Wi . So Pi,j is a recursive presentation (we add all the relators S to the enumeration if we see j ∈ Wi ).
Now use our assumed algorithm that solves the word problem in Pi,j to test
if φ(w) = e in P i,j ; this will occur if and only if j ∈ Wi . By taking Wi to be
non-recursive, we derive a contradiction.
Note. The above proof is not the original proof as developed by the author, but
a simplified version due to Chuck Miller which follows from lemma 3.6 of this
work. The author is grateful to Chuck for this simplification.
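As a sketch of the construction in the proof above, the relators of Pi,j can be produced by a Python generator as follows; run_machine_i is an assumed helper returning the finite portion of Wi enumerated within t steps.

from itertools import count

def relators_P_ij(R, S, run_machine_i, j):
    for r in R:
        yield r
    for t in count(1):
        if j in run_machine_i(t):
            for s in S:          # j ∈ W_i detected at a finite stage: adjoin S
                yield s
            return
        yield R[0]               # repeating a relator changes nothing, but keeps
                                 # the enumeration total while we wait

A word problem solver for Pi,j would then decide whether the test word above is trivial, and hence whether j ∈ Wi , which is the contradiction used in the proof.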
Lemma 4.2. There is a finite presentation P = ⟨X∣R⟩ of a group with solvable
word problem (namely, BS(2, 3)) for which there is no partial algorithm that,
on input of a recursive presentation Q = ⟨Y ∣S⟩ such that P ≅ Q, outputs a set
map φ ∶ X → Y ∗ which extends to an isomorphism φ ∶ P → Q.
Proof. Suppose such an algorithm exists. Given a recursive presentation Q =
⟨Y ∣S⟩ such that P ≅ Q, use our supposed algorithm to output a set map φ ∶
X → Y ∗ which extends to an isomorphism φ ∶ P → Q. But since we have a
solution to the word problem for P , we can combine this with the map φ to get
a solution to the word problem for Q, thus contradicting theorem 4.1.
Note that, by lemma 3.4, in the above proof there will always be a finite
subset S ′ of S such that all other relators are consequences of S ′ , and hence
⟨Y ∣S⟩ ≅ ⟨Y ∣S ′ ⟩. So we have the following immediate corollary:
Corollary 4.3. There is a finite presentation P = ⟨X∣R⟩ of a group with solvable word problem (namely, BS(2, 3)) for which there is no partial algorithm
that, on input of a recursive presentation Q = ⟨Y ∣S⟩ such that P ≅ Q, outputs
a finite subset S ′ ⊆ S such that all other relators S ∖ S ′ are consequences of S ′ .
Equivalently, construction of a finite truncation point m of S (with m as in the
proof of proposition 3.5) is not algorithmically possible in general.
As an aside, if we were instead to view isomorphisms between groups as
Tietze transformations rather than set maps (see [13, §1.5] for a full exposition
of this concept) then we can interpret lemma 4.2 in the following way.
Lemma 4.4. There is a finite presentation P of a group with solvable word
problem (namely, BS(2, 3)) for which there is no algorithm that, on input of
a recursive presentation Q such that P ≅ Q, outputs a recursive enumeration
of Tietze transformations of type (T 1) and (T 2) (manipulation of relators),
and a finite set of Tietze transformations of type (T 3) and (T 4) (addition and
removal of generators), with notation as per [13, §1.5], transforming P to Q.
Remark. It should be pointed out that such a sequence of Tietze transformations
as described above will always exist, which follows directly from lemma 3.4; we
merely truncate the relators of Q to get a finite presentation Q′ , perform a
finite sequence of Tietze transformations which takes P to Q′ (possible by [13,
Corollary 1.5]), and then add the rest of the enumeration of relators of Q to Q′ .
The point here is that we cannot compute an enumeration of such a sequence
in general.
Before proceeding to our main result, we note the following lemma which
shows that the direction in which we define our isomorphism in lemma 4.2 is
inconsequential.
Lemma 4.5. Using the notation of lemma 4.2, having an explicit set map
φ ∶ X → Y ∗ which extends to an isomorphism φ allows us to construct a set map
ψ ∶ Y → X ∗ which extends to the inverse of φ (and hence is an isomorphism).
The reverse also holds.
Proof. This follows from the fact that we only deal with recursive presentations, and we can uniformly enumerate all trivial words of such presentations
by lemma 2.4.
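A Python sketch of this direction (illustrative; free_reduce, invert, inv and apply_map are as in the earlier sketches, and trivial_in_Q is the lemma 2.4 enumerator for the recursive presentation Q): to invert φ on a generator y of Q, dovetail candidate words u over X against the enumeration of the trivial words of Q.

from itertools import count, product

def invert_on_generator(y, phi, X, trivial_in_Q):
    letters = list(X) + [inv(x) for x in X]
    words = (w for L in count(0) for w in product(letters, repeat=L))
    targets, candidates = set(), []
    for t in trivial_in_Q:
        targets.add(t)
        candidates.append(next(words))
        for u in candidates:
            # phi(u) * y^{-1} is trivial in Q  iff  phi(u) = y in Q
            if free_reduce(apply_map(phi, u) + (inv(y),)) in targets | {()}:
                return u

Running this for each y ∈ Y assembles the set map ψ ∶ Y → X ∗ of the lemma; the reverse direction is symmetric.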
We now have all the technical machinery required to prove our main result:
Theorem 4.6. There is a finitely presented group with unsolvable subgroup
identification problem.
Note that, by theorem 3.8, such a group cannot be locally Hopfian.
Proof. Take a recursive enumeration Pi ∶= ⟨Xi ∣Ri ⟩ of all recursive presentations
of groups (fix some countably infinite alphabet A; each Xi is a finite subset of
A, and each Ri is a recursive enumeration of words in Xi∗ ). By the Higman
embedding theorem (see [20, Theorem 12.18]) we can embed their free product
(with presentation P1 ∗P2 ∗. . .) into a finitely presented group with presentation
P = ⟨X∣R⟩. Then the group P does not have solvable subgroup identification
problem, for if it did then we could use this to contradict lemma 4.2 as follows:
By the uniformity of Higman’s result, as described in the proof of [20, Theorem
12.18], there is a uniform procedure that, for any i, outputs a finite set of
words Si ⊂ X ∗ in 1-1 correspondence with Xi such that the subgroup ⟨Si ⟩P
is isomorphic to P i , and an explicit bijection φi ∶ Xi → Si which extends to
an isomorphism φi ∶ Pi → ⟨Si ⟩P . That is, we can keep track of where each
P i is sent in this embedding. So, given a recursive presentation Q = ⟨Y ∣S⟩
and a finite presentation H = ⟨Z∣V ⟩ such that H ≅ Q, compute j such that
Pj = Q as recursive presentations (that is, number all Turing machines which
read alphabet Y , and look for one identical to the description S). We know that
H ≅ ⟨Sj ⟩P . So use our algorithm to output a set map ψ ∶ Z → Sj∗ which extends
to an isomorphism ψ ∶ H → ⟨Sj ⟩P . But we can compose ψ with φ−1
j to get the
∗
−1
set map φ−1
j ○ ψ ∶ Z → Y which extends to an isomorphism φj ○ ψ ∶ H → Q.
But this is impossible by lemma 4.2, so our assumed algorithm can’t exist.
5. Further work
The group G constructed in theorem 4.6 contains an embedded copy of every
recursively presented group, and so is not coherent, let alone weakly effectively
coherent. We ask the question of whether theorem 4.6 can be modified so that
G is weakly effectively coherent, or even merely coherent. Either of these would
help make the result even more relevant, as we conjecture that strong and weak
effective coherence are not equivalent properties for finitely presented groups.
References
[1] S. I. Adian, Finitely presented groups and algorithms, Dokl. Akad. Nauk SSSR 117, 9–12
(1957).
[2] G. Baumslag, D. Solitar, Some two generator one-relator non-Hopfian groups, Bull. Amer.
Math. Soc. 68, 199–201 (1962).
[3] W. W. Boone, The word problem, Ann. of Math., 70, 207–265 (1959).
[4] M. Bridson, H. Wilton, On the difficulty of presenting finitely presentable groups, Groups
Geom. Dyn. 5, 301–325 (2011).
[5] J. L. Britton, The word problem for groups, Proc. London Math. Soc. (3) 8, 493–506
(1958).
[6] M. Chiodo, Finding non-trivial elements and splittings in groups, J. Algebra. 331, 271–
284 (2011).
[7] D. J. Collins, The word, power and order problems in finitely presented groups, in Word
Problems, eds. Boone, Cannonito, and Lyndon, Amsterdam, North-Holland, 401–420
(1973).
[8] M. Dehn, Über unendliche diskontinuerliche Gruppen (German), Math. Ann. 69, 116–144
(1911).
[9] D. Groves, H. Wilton, Enumerating limit groups, Groups Geom. Dyn. 3, 389–399 (2009).
[10] D. Groves, H. Wilton, Enumerating limit groups: A Corrigendum, arXiv:1112.1223v1
(2011).
[11] G. Higman, Subgroups of finitely presented groups, Proc. Royal Soc. London Ser. A 262,
455–475 (1961).
[12] R. Lyndon, P. Schupp, Combinatorial Group Theory, Springer (2001).
[13] W. Magnus, A. Karrass, D. Solitar, Combinatorial group theory, Dover (2004).
[14] K. A. Mihailova, The occurrence problem for direct products of groups, Dokl. Akad. Nauk
SSSR 119, 1103–1105 (1958).
[15] C. F. Miller III, On group theoretic decision problems and their classification, Annals of
Math. Study 68, Princeton University Press, Princeton, NJ (1971).
[16] C. F. Miller III, Decision problems for groups: survey and reflections. Algorithms and classification in combinatorial group theory (Berkeley, CA, 1989), Math. Sci. Res. Inst. Publ.,
23, Springer, New York, 1–59 (1992).
[17] P. S. Novikov, On the algorithmic unsolvability of the word problem in group theory, Trudy
Mat. Inst. Steklov 44, 1–143 (1955).
[18] M. O. Rabin, Recursive unsolvability of group theoretic problems, Annals of Math. 67,
172–194 (1958).
[19] H. Rogers Jr, Theory of recursive functions and effective computability, MIT Press (1987).
[20] J. Rotman, An introduction to the theory of groups, Springer-Verlag, New York (1995).
Dipartimento di Matematica ‘Federigo Enriques’
Università degli Studi di Milano
Via Cesare Saldini 50, Milano, 20133, ITALIA
maurice.chiodo@unimi.it
| 4 |
Efficient Enumeration of Unidirectional Cuts for
Technology Mapping of Boolean Networks
Niranjan Kulkarni, Sarma Vrudhula
School of Computing, Informatics and Decision Systems Engineering
Arizona State University
Email: vrudhula@asu.edu
Abstract—In technology mapping, enumeration of subcircuits
or cuts to be replaced by a standard cell is an important step
that decides both the quality of the solution and execution speed.
In this work, we view cuts as sets of edges instead of as sets of nodes and, based on this, provide a classification of cuts. It is shown that if enumeration is restricted to a subclass of cuts called unidirectional cuts, the quality of the solution does not degrade. We
also show that such cuts are equivalent to a known class of cuts
called strong line cuts first proposed in [14]. We propose an
efficient enumeration method based on a novel graph pruning
algorithm that utilizes network flow to approximate minimum
strong line cut. The runtimes for the proposed enumeration
method are shown to be quite practical for enumeration of a
large number of cuts.
I. INTRODUCTION
This research is supported by NSF PFI award no. 1237856 and NSF FRP award no. 1230401.

TECHNOLOGY mapping (TM) is a process of transforming a generic Boolean network, which is a network consisting of primitive gates (e.g. AND/OR), into an equivalent
mapped network, that consists of a network of cells from a
given technology library. Depending on the target implementation, the library cells can correspond to either lookup tables
(LUT) in the case of an FPGA, or to a pre-designed set of
standard cells in the case of an ASIC. The measures of delay,
power, area, or some combination of them serve as an objective
function to optimize during the transformation process.
TM is formalized as a graph covering problem. A given
Boolean network is represented as a directed acyclic graph
(DAG) G = (V, E), which is referred to as the subject graph.
The cells in the library, being single output Boolean functions,
are also represented by DAGs, and each is referred to as
a pattern graph. A feasible solution of TM is a complete
covering of the subject graph with one or more of the pattern
graphs (see Figure 1). A core step in this process is to identify
a subgraph in the subject graph to match with one or more
pattern graphs. In the structural approach to TM [24], the
selection of subgraphs is achieved by computing a cut.
The recent works on structural approaches to TM [4], [7], [16], [19] are based on a particular type of cut, called a minimal node-cut. A node-cut Cv associated with a node v ∈ V is a subset of nodes in the transitive fanin cone of v such that every path from the primary inputs to v includes a node in Cv . Note that in the existing literature, the definition of a node-cut restricts it to be minimal, i.e., no proper superset of a node cut is a legal node cut. In this article we will explicitly refer to a minimal node cut, if that is the case. By definition, a k-feasible minimal node cut of a node v is a minimal node cut of cardinality k [4].
A cut together with a node v defines a single-sink subgraph
that is either directly mappable onto a k-input LUT in the
case of an FPGA, or constitutes a match candidate for an NPN-equivalent standard cell, for an ASIC design, provided such an entity exists in the given library. In the literature k-feasible node cuts are usually referred to simply as k-feasible cuts, as no other type of cut is usually considered for technology mapping [4], [7], [16], [19].
In Figure 1, associated with node G0, the sets {G6, G7},
and {G6, G8} are 2-feasible minimal node cuts, whereas
{G2, G5, G7} is a 3-feasible minimal node cut. When the
cut {G6, G7} is selected, the actual subgraph replaced by a
cell in TM consists of gates {G8, G0}. Similarly, when the
cut {G2, G5, G7} is chosen the subcircuit consisting of gates
{G8, G6, G0} is replaced.
Fig. 1: a) A covered Boolean network. b) Its graph representation. c) The network mapped on the library gates.
A node may have many associated k-feasible node cuts,
which result in different coverings. In TM, the quality of
a covering is usually taken to be the delay of the critical
path, and/or the total area of the mapped circuit. Hence, to
evaluate the quality of the complete covering, cuts also need
to be evaluated. The quality of a cut is typically a combined
measure of the subcircuit that will be replaced by a library
cell and the subcircuit that feeds that cell. Since neither of
these can be evaluated without knowing the cut, enumeration
of cuts is a core task in TM. Consequently, there has been
a significant amount of effort devoted to the development of
efficient algorithms for enumerating k-feasible node cuts [4],
[5], [16], [22].
While structural technology mapping turned out to be very
successful, minimal node cuts bear some bias that eventually
limits the number of possible matches. As demonstrated in
[18], this may result in excluding many feasible, high quality
matches from the evaluation. The authors of [18] address this
problem by constructing so-called supergates i.e. single output
networks of library gates that are added to the library in order
to increase the number of matches. This demonstrates that
enhancing a match space could yield significant benefits for
structural technology mapping.
Existing work on TM focuses on minimal node cuts. However, including non-minimal node cuts can substantially increase the number of possible matches. Consider Figure 2. The
node cut C = {a, b, x, d} is not minimal since {a, b, d} is
also a (minimal) node cut. However C corresponds to the
function ab + x + d. This happens to be a linearly separable
(threshold) function, which would never be found by any cut
enumerator that enumerates only minimal node cuts. Another
representation of a cut, based on edges, is called a line cut,
which includes both minimal and non-minimal node cuts.
Definition 1: (a) A line cut is a minimal set of directed
edges in a single sink DAG which when removed,
eliminates all paths from any of source (primary input)
to the sink i.e. produces an “S-T bipartition”.
(b) A line cut is k-feasible if its cardinality (number of
edges) is k or smaller.
(c) A line cut is called a strong line cut [14] if no two edges
in the line cut are on the same directed path.
Note that line cuts and node cuts are simply two representations of the same entity, and either form can be converted into the other. They both partition the nodes of the DAG into two
mutually exclusive sets S and T . In such an S-T partition,
the set S of nodes must contain primary inputs and T must
contain the sink node v.
In Figure 2(a) the line cut {(x, z), (a, y), (b, y), (d, w)}
(corresponding node cut being C = {a, b, x, d}) generates
S = {a, b, c, d, e, x}, and T = {y, z, w}, and is unidirectional.
In Figure 2(b), the line cut {(a, c), (b, g)} (corresponding
node cut is {a, b}) has S = {a, b} and T = {c, g}, and is
bidirectional since (b, g) is a “forward edge” and (c, b) is a
“backward edge”. Note also that the minimal node cut {a, b, d}
would identify ab + a′b′ + d as a function to be replaced by some cell from the library. However this is neither a member of any well-defined class of functions, e.g. threshold functions, nor is it a function that would be in a typical cell library. Thus,
minimal cuts are not always the most useful or desirable. In
addition, we show that bidirectional cuts are not necessary
and discarding them does not degrade the quality of TM with
regard to critical path delay. Thus we need only enumerate
unidirectional node cuts (minimal or non-minimal).
We establish a one-to-one correspondence between unidirectional node cuts and strong line cuts [14]. This correspondence
is important because there exists a well established relation
between strong line cuts in a DAG and independent sets in
its corresponding line dependency graph (LDG) [14]. This
allows for construction of efficient strong line cut enumeration
techniques based on enumeration of independent sets. Since
the latter is a problem well researched in computational
graph theory, techniques exist that can be directly applied
to enumerating line cuts by enumerating independent sets.
Figure 3 shows classification of cuts and their relationships.
The proposed technique for enumerating strong line cuts was
used in [15] to find threshold functions in a logic circuit.
Fig. 3: Classification of cuts and their relationships
Fig. 2: (a) Unidirectional node cut denoted as {a, b, x, d} (b)
Bidirectional node cut denoted by {a, b}
Figure 2 a) shows a line cut consisting of edges
{(x, z), (a, y), (b, y), (d, w)}. The node cut corresponding to
this line cut is C = {a, b, x, d}, which is not minimal. Since
minimal and non-minimal node cuts are useful, it appears that
line cuts are the cuts that should be enumerated. A cut can
also be classified as being unidirectional or bidirectional. It is
unidirectional if all the edges between S and T originate in
S and terminate in T . Otherwise it is bidirectional.
The main contributions of this article include:
• Introduction of unidirectional node cuts and a proof
showing that restricting the cut space to only unidirectional node cuts does not degrade the quality of mapping
for delay.
• Establishing the equivalence of unidirectional node cuts and
strong line cuts.
• An efficient k-feasible strong line cut enumeration algorithm based on the relationship between a DAG and its
LDG.
• A general framework and the specific implementation
of a pruning technique for k-feasible strong line cut
enumeration.
• An efficient implicit representation of bounded size MISs
of a graph that allows for both unconstrained and constrained enumeration of such MISs.
II. RELATED WORK
The importance of cut computation in TM for FPGAs was
first identified by Cong et al. [5]. They developed a
novel and elegant network flow based algorithm that directly
identified a single, depth-optimal, k-feasible node cut, without
enumerating cuts. Later, Pan et al. [21], [22] developed an
efficient algorithm for enumerating cuts that avoided the large
computational requirements of network flow. More recently,
Ling et. al [16] developed a novel scheme for implicitly
encoding cuts using Binary Decision Diagrams (BDD). This
representation allowed for extraction of cuts when the value
of a cut could be computed recursively. However, the authors
admit that the BDD approach is not very well suited for cut
enumeration since non-cuts, which dominate cuts, are also
implicitly included and need to be pruned during enumeration.
Finding a computationally efficient way of encoding and
enumerating cuts is of fundamental importance to technology
mapping. Recently Takata et al. [25] proposed a top-down
scheme for the procedure of [22] and demonstrated speed-ups of 3x-8x for larger k = 8, 9. Unfortunately, since the number of cuts of size at most k is O(n^k), cut enumeration algorithms inherently suffer from poor scalability. To alleviate
this problem, techniques for ranking and pruning of cuts were
first proposed by Cong et al. in [7]. The basic observation
of this work is that for certain optimization objectives it is
possible to narrow the search down efficiently and extract
depth-maximal or area-minimal cuts directly. Similar ideas,
referred to as priority cuts, were proposed by Mischenko et al.
in [19], where appropriate seeding of the procedure from [22]
assured enumeration of only O(n^2) priority cuts instead of O(n^k) cuts. These can be further sorted by quality, and pruned.
An alternative approach to pruning was proposed by Chatterjee
et al. in [4] where they introduced hierarchical partitioning
of the cut space based on a novel concept that is similar
to algebraic factorization. The authors showed that while
complete factorization may still suffer from poor scalability,
partial factorization of the cut space could yield good, practical
solutions with very short runtimes. Takata [25] proposed a
partial enumeration scheme that enumerates only a subset
called label cuts. The scheme improves scalability of cut
enumeration and guarantees to maintain the circuit’s depth, at
the expense of a small increase in the estimated network area.
All the works identified above, and many others have
demonstrated that structural technology mapping, the core of
which involves cut enumeration, leads to far superior solutions
than the traditional graph/tree matching based algorithms. Cut
enumeration has also found uses in related applications such
as re-synthesis through rewriting [17], application specific
instruction set extension generation/optimization [6], hardware/software co-design [23], model checking in verification
[3], and SAT problem preprocessing for simplification [10].
The use of a line dependency graph (LDG) derived from
a DAG was proposed by Kagaris et al. [14] to compute
the maximum strong cut in a circuit for the purpose of
delay testing. Based on the observation that an LDG is a
transitively-oriented graph, hence a comparability graph [13],
they provide an efficient and elegant algorithm that computes
a maximum independent set of the LDG using network flow.
This set represents a maximum strong cut in the corresponding
DAG. While their approach generated interest in the area of
delay-testing, we will demonstrate that there is still greater
opportunity for further exploration and exploitation of the
DAG/LDG duality for strong cut enumeration.
III. STRONG LINE CUTS
We now describe the relation between DAGs and their
corresponding line dependency graphs (LDG). An LDG is
an undirected graph derived from a DAG that encodes the
transitive dependencies between DAG edges [14]. Each edge
e of the DAG has a corresponding node v in the LDG.
Two nodes of an LDG are connected if and only if their
corresponding lines in the DAG are transitively dependent,
i.e., there exists a path in the DAG from some source to some sink that contains both edges. Consequently, if an edge in the
DAG is transitively dependent on another edge, by definition
the corresponding nodes in LDG will be neighbors. Since
LDGs are by definition transitively oriented, they are also
comparability graphs [13].
An independent set (IS) in a graph G(V, E) is a set S
of vertices no two of which share an edge. A maximal
independent set (MIS) is an independent set which is not a
proper subset of any other independent set in the graph.
Lemma 1: (From [14]) A strong line cut of a DAG forms
a maximal independent set (MIS) in its corresponding LDG.
Fig. 4 illustrates the relation between DAGs and LDGs
established by Lemma 1, on an example that we will use
throughout this article for illustration. The direct consequence
of this lemma is that enumerating all k-feasible strong line
cuts in a DAG is equivalent to enumerating all maximal
independent sets of size ≤ k in the LDG.
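A small Python sketch (ours, not from [14]) of the LDG construction makes the duality concrete: the DAG is a dict mapping each node to its successors, LDG vertices are the DAG edges, and two are adjacent exactly when one edge's head reaches the other's tail, i.e., some directed path contains both lines.

from functools import lru_cache

def line_dependency_graph(dag_succ):
    edges = [(u, v) for u, vs in dag_succ.items() for v in vs]

    @lru_cache(maxsize=None)
    def reaches(x, y):
        # True iff x == y or a directed path x -> ... -> y exists
        return x == y or any(reaches(s, y) for s in dag_succ.get(x, ()))

    adj = {e: set() for e in edges}
    for i, (a, b) in enumerate(edges):
        for (c, d) in edges[i + 1:]:
            if reaches(b, c) or reaches(d, a):   # the two lines share a path
                adj[(a, b)].add((c, d))
                adj[(c, d)].add((a, b))
    return adj

By Lemma 1, the maximal independent sets of the returned graph are exactly the strong line cuts of the DAG.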
Fig. 4: a) a DAG with strong cuts annotated. b) The corresponding maximal independent sets in LDG.
A. Relationship between unidirectional node cuts and strong
line cuts
In this section, we show the equivalence between unidirectional node cuts and strong line cuts. We also establish the
fact that if the cut space is restricted to unidirectional node
cuts then the quality of technology mapping for minimizing
delay remains the same.
In the following we restrict the DAG to be a transitive fanin cone of some gate in a circuit, since in TM only transitive
Fig. 5: a) Classification of edges in a bidirectional cut b) Replication after TM, (c) Classification of edges in corresponding
unidirectional cut.
fan-in cones of gates/nodes are considered for enumerating
cuts. v refers to the root node whose transitive fan-in cone is
being considered.
Lemma 2: A strong line cut corresponds to a unidirectional
set of edges crossing an S − T partition.
Proof: For an arbitrary node u ∈ S, there exists a path u ⇝ v. This is straightforward from the definition of the DAG considered here, which is the transitive fan-in cone of the node v. Assume that the S − T partition corresponding to a strong line cut Cv is bidirectional, i.e., there exists a directed edge (p, q) such that p ∈ T and q ∈ S. Then for some x ∈ S and y ∈ T with (x, y) ∈ Cv , there must exist a path x → y ⇝ p. Since edge (p, q) exists, there must exist a path x → y ⇝ p → q. Since q ∈ S, the root node v must be reachable from q through another edge in the cut Cv , say (r, s). Therefore we have a complete path that looks like x → y ⇝ p → q ⇝ r → s ⇝ v. Note that edges (x, y) and (r, s) both belong to the cut. This is clearly a contradiction, since no two lines in the cut Cv should lie on the same path. Also (x, y) ≠ (r, s), because that would lead to a directed cycle x → y ⇝ p → q ⇝ x in the directed acyclic graph under consideration.
Conversely, assume a unidirectional node cut in which all the edges are from S to T , and suppose the corresponding line cut is not a strong line cut. Then there must exist at least two edges e1 = (x, y) and e2 = (u, w) in the line cut that are on the same path to node v (the output). Assume e1 precedes e2 in the path. By definition of an S − T partition, x ∈ S, y ∈ T , u ∈ S and w ∈ T . However, since y ⇝ u, we have an edge starting in T and ending in S, which contradicts the assumption that it is a unidirectional node cut.
Lemma 2 confirms that a strong line cut must be unidirectional and a unidirectional cut must be a strong line cut. Note
that the cardinality of a strong line cut and the unidirectional
node cut can be different. The reason we convert back from a
strong line cut to a node cut (which is unidirectional) is that
eventually a node cut is what is mapped onto a cell. A node
cut form of a line cut would always require a smaller library cell
whether mapping is done using standard Boolean functions or
threshold functions.
Next we show that restricting node cuts to unidirectional
node cuts will not increase the critical path delay when that is
the objective function being minimized by TM. Note that in
TM, the delay of path is the sum of the delays of gates in the
path. We show that the set of paths to the output node v in
a bidirectional cut is the same as those in the corresponding
unidirectional cut.
Figure 5(a) shows a classification of the edges in a bidirectional cut. T = X ∪Y , where X is the set of logic cones (node
and all nodes in its fanin cone) whose output has a directed
edge to some node in S, and Y is the set of nodes with no
paths to S. Note v ∈ Y . TM would replicate X in S and then
replace T with some appropriate cell in the library. This is
depicted in Figure 5(b). Four types of edges can be identified
in the S − T partition: (1) E1 are edges from S to X, (2) E2
are edges from X to S, (3) E3 are edges from S to Y , and
(4) E4 are edges from X to Y .
Now a path from an input node in S to the output node v ∈ T can be one of three types:
1) S =(E1)⇒ X =(E4)⇒ Y =⇒ v.
2) S =(E2)⇒ S =(E3)⇒ Y =⇒ v.
3) S =(E3)⇒ Y =⇒ v.
Note that every one of the above paths (sequence of nodes)
in the graph of Figure 5(a) also exists in the graph shown
in Figure 5(b). Now consider the corresponding unidirectional
cut shown in Figure 5(c). Every path that exists in Figure 5(a)
also exists in Figure 5(c), and vice versa. This shows that there
is no disadvantage to retaining only unidirectional cuts.
IV. CUT ENUMERATION
Enumerating MISs is a key step in many computational
graph theory problems [1], [2], [11]. In TM, because there
is a fixed library of functions, MIS enumeration needs to be
restricted to sets of size ≤ k. Without any restrictions, the
number of MISs in arbitrary graphs grows exponentially in the
size of the graph [2]. However, in TM, the size k of the MIS is
bounded above by some constant, and independent of n, which
is the size of the graph. Therefore the number of MISs of size
≤ k is at most n^k. A brute force approach, in which all subsets of size ≤ k are examined and those that are not an MIS are discarded, is not practical even for realistic values of n and k.
Existing algorithms exploit specific properties of small MISs
to facilitate enumeration [11]. We now describe a method that
can significantly speed up existing MIS enumeration algorithms
by pruning away many MISs that will not be part of the final
solution.
A. MIS pruning
The LDG of a DAG encodes MISs, many of which have
sizes > k. The basic idea in the pruning algorithm is to
(efficiently) transform an LDG into a new, smaller, and denser graph G′ which contains all the MISs of size ≤ k of the original LDG, and as few other (parasitic) MISs as possible. The objective is to construct a transformation which is computationally efficient and would significantly reduce the runtime
of enumeration.
The graph G′ to be constructed must satisfy the following conditions: every vertex v of G′, as well as every disconnected pair of vertices in G′, must independently be a part of some MIS of size ≤ k of the original graph G. This condition translates into two steps of the pruning algorithm. In the first step, for each vertex v we attempt to determine the size of the smallest MIS to which v belongs. If this MIS is of size ≤ k then v is included in G′. The second step decides if any two disconnected vertices in G may safely share an edge in G′, implying that they will not both be part of any MIS of size ≤ k. Again for each pair of disconnected vertices (u, v) we attempt to determine the size of the smallest MIS containing both of the vertices. If such an MIS is of size > k then an edge (u, v) is added to G′. This is the key step in the following pruning algorithm.
Input: A DAG D(VN , EN ), an LDG G = (V, E) and an integer k
Output: Graph G′ = (V′, E′) characterized above
dl = ∅, el = ∅;
for vertex v in G do
    λ = Min-MIS(D, G, v);
    if λ > k then
        dl = dl ∪ {v};
    end
end
for disconnected pair (u, v) in G such that u ∉ dl and v ∉ dl do
    λ = Min-MIS(D, G, u, v);
    if λ > k then
        el = el ∪ {(u, v)};
    end
end
E′ = E ∪ el;
V′ = V − dl;
return G′(V′, E′);
Algorithm 1: Algorithm to prune MISs of an LDG
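In Python, Algorithm 1 might look as follows (a sketch under the same interface as the other code fragments in this article: ldg_adj as returned by line_dependency_graph, and min_mis_size the flow-based Min-MIS routine sketched after Algorithm 2 below, whose optional partner argument gives the two-vertex variant).

def prune_ldg(dag_succ, sources, root, ldg_adj, k):
    keep = [v for v in ldg_adj
            if min_mis_size(dag_succ, sources, root, v, ldg_adj) <= k]
    pruned = {v: ldg_adj[v] & set(keep) for v in keep}
    for i, u in enumerate(keep):
        for v in keep[i + 1:]:
            if v not in pruned[u] and \
               min_mis_size(dag_succ, sources, root, u, ldg_adj, partner=v) > k:
                pruned[u].add(v)   # u, v join no MIS of size <= k: connect them in G'
                pruned[v].add(u)
    return pruned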
There is no known polynomial procedure to compute the
size of the smallest MIS that contains a given vertex or
a pair of vertices for comparability graphs [9]. Hence we
approximate the size of the minimum MIS containing a vertex
v or a pair of vertices (u, v) of a given LDG by exploiting the
duality between MISs in LDGs and strong line cuts in DAGs.
The minimum MIS in an LDG is the minimum strong cut
in the DAG. We use this fact to approximate the minimum
MIS size in an LDG by means of a flow computation in its
corresponding DAG.
It is well known that a minimum s-t cut (min-cut) is
equivalent to the maximum flow the sink t receives from the
source s [8], [12]. The size of a cut is a sum of capacities
of edges involved in it. If unit edge capacities are assigned,
then the size of a cut is equivalent to number of edges. We
note that an edge with a capacity of ∞ can never be part
of a finite size cut. The size of the minimum MIS in an
LDG containing a vertex v or a pair of vertices (u, v) is
approximated by computing the min-cut in the DAG with unit
edge capacities. The procedure (Alg. 2) assigns a capacity of
∞ to the dependent lines of the given line (e.g. corresponding
to node v) and a capacity of 1 to all other lines. The capacities
of edges attached to s and t are always ∞. This is because
a min-cut must consist of circuit lines only. Finally it returns
the size of the minimum s-t cut of the network (λ).
Input: A DAG D(VN , EN ), LDG G and a line v
Output: Approximate size of minimum MIS (strong cut)
containing v
for u in G do
capacity[u] = 1;
end
for neighbor w of v in G do
capacity[w] = ∞;
end
return Min-cut(D,capacity);
Algorithm 2: Min-MIS procedure for single edge in DAG
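A self-contained Python sketch of the Min-MIS procedure (an illustration under our own representation, with the DAG as a successor dict, `sources` its primary inputs and `root` the node whose fan-in cone is considered): build the unit-capacity flow network, give infinite capacity to the dependent lines and to the edges touching s and t, and return the max-flow value, computed here with a plain Edmonds-Karp routine.

from collections import deque

INF = float('inf')

def max_flow(cap, s, t):
    # Edmonds-Karp on a dict-of-dicts of residual capacities
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)   # finite here: no all-∞ s-t path exists
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug  # residual reverse edge
        flow += aug

def min_mis_size(dag_succ, sources, root, line, ldg_adj, partner=None):
    # capacity 1 on every line, ∞ on lines dependent on `line` (and on
    # `partner`, for the two-vertex variant) and on edges touching s, t
    special = set(ldg_adj[line]) | (set(ldg_adj[partner]) if partner else set())
    special -= {line, partner}
    cap = {}
    def add(u, v, c):
        cap.setdefault(u, {})[v] = c
        cap.setdefault(v, {}).setdefault(u, 0)
    for u, vs in dag_succ.items():
        for v in vs:
            add(('n', u), ('n', v), INF if (u, v) in special else 1)
    for src in sources:
        add('s', ('n', src), INF)
    add(('n', root), 't', INF)
    return max_flow(cap, 's', 't')

By Lemma 3 the returned value only ever under-approximates the true minimum, so the pruning in Algorithm 1 never discards a vertex (or connects a pair) that some MIS of size ≤ k still needs.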
Lemma 3: Let Smin be the minimum strong cut containing
line v. Let λ be the size of a min-cut of the network with
capacities modified based on line v. Then λ ≤ |Smin |.
Proof: The Min-MIS procedure never assigns a capacity
of ∞ to any line l ∈ Smin . Thus |Smin | is finite. It
immediately follows that Smin is an arbitrary s-t cut, and
cannot be smaller than the min-cut of the network.
Lemma 3 states that the size of the minimum strong cut is
guaranteed to be greater than or equal to the size of a min-cut. Hence if the result of Min-MIS is > k then the size of a minimum strong cut is also > k. Consequently, the vertex in G for which Min-MIS was computed to be > k can be safely discarded from G′, or an unconnected pair of vertices for which Min-MIS was computed can be connected in G′.
As an example, consider the LDG in Fig. 4(b), and suppose
we wish to check whether vertex p belongs to an MIS of
size ≤ 2. This is equivalent to determining if line p belongs
to minimum strong cut of size at most 2 in the DAG. We
assign capacities to the edges, as shown in Fig. 6(a). Line p
is assigned a capacity of 1 and its dependent lines, t and v,
are assigned a capacity of ∞. After this, the s-t minimum cut
size (λ) is determined. In this example, it is lines p, q and u,
and its size is 3 — i.e. λ = 3.
Theorem 1: Let S be an MIS in G such that |S| ≤ k. Then S is also an MIS in G′.
Proof: From Lemma 3, every vertex of S must be present in G′, and no edge was added between any two of its vertices. Thus S is an independent set in G′. Assuming S is not an MIS in G′, there exists an independent set S′ in G′ such that S ⊂ S′. It follows that S′ is also an independent set in G, contradicting the fact that S is an MIS in G.
Lemma 4: The pruning algorithm runs in O(n^3).
of a cut containing line p. b) LDG from Fig. 4 pruned for
k ≤ 2.
Proof: Let n be the number of nodes and m the number of edges in the DAG. The second for loop in the pruning algorithm performs O(m^2) iterations and dominates the overall complexity. Determining the min-cut takes O(km) time [8]. Since k ≤ n and is independent of n, the pruning has fixed complexity of O(m^3). We know that m ≤ ∆n where ∆ is the maximum degree found in the DAG. For most of the circuits with limited fanin (and fanout) capacities, ∆ can be regarded as a small constant independent of n. Hence the time complexity of the pruning procedure is O(n^3).
As a result of the pruning transformation we perform MIS enumeration on G′ instead of G. Note that not all MISs in G′ are of size ≤ k in G. However, our experiments demonstrate that using G′ instead of G to enumerate MISs significantly reduces
the enumeration time. In fact, the enumeration runtimes we
observe in our experiments are practical for all of the evaluated
benchmark circuits, suggesting that the approximation of a
Min-MIS used in pruning must be quite accurate.
B. Enumerating MISs
As stated earlier, there are many known MIS enumeration
techniques in computational graph theory. In fact, the choice of
enumeration algorithm is independent of MIS pruning. In our
experiments, we used a recursive procedure that is basically
a simplified form of the algorithm presented in [1]. The idea
is to recursively enumerate MISs that contain a specific node
and then those that do not contain that same node. Once we
designate a node to be part of the MIS, none of its neighbors
can be part of that MIS. Similarly, if a node is not a part of
the MIS, any of its neighbors can be part of the MIS.
Note that, due to the pruning transformation being approximate, G′ may still contain some MISs of size > k. Since we
are interested only in maximal independent sets of size ≤ k,
we simply discard, on the fly, the MISs whose size is greater
than k.
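The recursion we used can be sketched as follows (our rendering of the simplified scheme described above): branch on whether an arbitrary vertex enters the set, prune any branch whose partial set already exceeds k, and keep a leaf only if it is maximal, i.e., every vertex of the graph is chosen or dominated.

def maximal_independent_sets_upto_k(adj, k):
    vertices = set(adj)

    def rec(chosen, pool):
        if len(chosen) > k:
            return                       # discarded on the fly: too large
        if not pool:
            # maximal iff every vertex is chosen or has a chosen neighbour
            if all(w in chosen or adj[w] & chosen for w in vertices):
                yield frozenset(chosen)
            return
        v = next(iter(pool))
        yield from rec(chosen | {v}, pool - {v} - adj[v])  # v in the set
        yield from rec(chosen, pool - {v})                 # v excluded

    yield from rec(set(), vertices)

By Theorem 1, every MIS of size ≤ k of the original LDG appears among the outputs when this is run on the pruned graph G′.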
Unfortunately, the number of MISs of a graph increases exponentially as a function of its size [20]. However, in practice we found that, while running it on a pruned graph G′, enumeration time is dominated by pruning time for sufficiently small k. Hence, even the simple recursive algorithm that we used is still efficient. More sophisticated approaches to enumerate MISs of size ≤ k [1], [11] could be used to improve runtime for larger values of k. Their application on the transformed graph may further improve total runtimes as well as scalability of the solution.
C. Results
The procedures described in this article were implemented
in C++ and run on a 2GHz PC with 2 GB of RAM. The results
of these runs are summarized in Table I. We ran the simple cut
enumeration algorithm after pruning the MISs, and enumerated
all cuts in the ISCAS benchmark circuits. The starting point
was an AND-INVERTER Graph (AIG) obtained from [19].
Note that direct MIS enumeration, without pruning, is not practical with any of the existing MIS enumeration schemes; even a graph with as few as a hundred nodes would be out of reach.
The proposed pruning procedure makes it possible to exhaustively enumerate large numbers of strong line cuts in
reasonable time. We also evaluated the combined effect of
constraining the cone size and increasing the value of k.
The results demonstrate that for sufficiently small k, line
cut enumeration is dominated by our pruning transformation
and as such is practically polynomial. For larger values of k, however, the procedure could benefit from more efficient MIS enumeration procedures.
V. CONCLUSION
In this work, we presented a novel cut enumeration framework that exploits the duality between enumerable entities in DAGs and LDGs. Apart from a resource-efficient computational procedure, it also introduces into the area of technology mapping the concept of k-feasible strong line cuts (or unidirectional cuts), which are distinct from conventional node cuts.
The advantages are two-fold. On one hand they are enumerable
with low per-unit computational effort. On the other hand
they potentially open up a new space available to be explored
by the technology mapper without degrading the quality of
the mapping. Line cuts provide choices not available to node
cut based technology mappers. More importantly line cuts
inherently mitigate some of the structural bias of node cuts and
unlike node cuts they guarantee a mappable cut of a circuit.
REFERENCES
[1] J. Byskov. Algorithms for k-colouring and finding maximal independent
sets. In Proc. Symp. on Discrete Algorithms, pages 456–457. PA, USA,
2003.
[2] J. M. Byskov. Enumerating maximal independent sets with applications
to graph colouring. Oper. Res. Lett., 32(6):547–556, Nov. 2004.
[3] M. L. Case, A. Mishchenko, and R. K. Brayton. Cut-based inductive
invariant computation. In Proc. IWLS’08, pages 172–179, 4–6 Jun. 2008.
[4] S. Chatterjee, A. Mishchenko, and R. K. Brayton. Factor cuts. In Proc.
ICCAD ’06, pages 143–150, 5–9 Nov. 2006.
[5] J. Cong and Y. Ding. FlowMap: an optimal technology mapping
algorithm for delay optimization in lookup-table based FPGA designs.
IEEE Trans. CAD, 13(1):1–12, Jan. 1994.
[6] J. Cong, G. Han, and Z. Zhang. Architecture and compiler optimizations
for data bandwidth improvement in configurable processors. IEEE Trans.
on VLSI Syst., 14(9):986–997, 2006.
TABLE I: Running times for enumeration

K = 6, no cone size limit:
circuit   # inputs   # nodes   # cuts after pruning   pruning time (s)   enum. time (s)
c432      36         356       4317                   0.93               0.02
c1355     41         1133      228,950                49.04              3.22
c1908     33         1793      9,519,182              48.12              9.43
c6288     32         4864      1,494,636              6085.06            190.86
c7552     207        7233      5,949,016              182.96             17.43

K = 6, cone size = 300:
circuit   # cuts after pruning   pruning time (s)   enum. time (s)
c432      4253                   0.98               0.02
c1355     129,994                15                 2.5
c1908     2,099,375              18.49              5.02
c6288     737,587                480.83             63.96
c7552     2,521,039              55.48              10.75
[7] J. Cong, C. Wu, and Y. Ding. Cut ranking and pruning: enabling a
general and efficient FPGA mapping solution. In Proc. FPGA’99, pages
29–35, New York, 21–23, Feb. 1999. ACM.
[8] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to
algorithms. MIT Press, Cambridge, MA, 2001.
[9] D. Corneil and Y. Perl. Clustering and domination in perfect graphs.
Discrete Applied Mathematics, 9(1):27–39, 1984.
[10] N. Een, A. Mishchenko, and N. Sorensson. Applying logic synthesis
for speeding up SAT. Lect. N. Comp. Sci., 4501:272, 2007.
[11] D. Eppstein. Small maximal independent sets and faster exact graph
coloring. Lect. N. Comp. Sci., pages 462–470, 2001.
[12] L. Ford and D. Fulkerson. Flow in networks. Princeton University
Press, Princeton, NJ, 1962.
[13] M. Golumbic. Algorithmic graph theory and perfect graphs. North-Holland, 2004.
[14] D. Kagaris and S. Tragoudas. Maximum weighted independent sets
on transitive graphs and applications. Integration, the VLSI Journal,
27(1):77–86, 1999.
[15] N. Kulkarni, N. Nukala, and S. Vrudhula. Minimizing area and power of
sequential cmos circuits using threshold decomposition. In Proceedings
of the International Conference on Computer-Aided Design, ICCAD ’12,
pages 605–612, New York, NY, USA, 2012. ACM.
[16] A. C. Ling, J. Zhu, and S. D. Brown. BddCut: Towards scalable symbolic
cut enumeration. In Proc. ASP-DAC’07, pages 408–413, 23–26 Jan.
2007.
[17] A. Mishchenko, R. Brayton, J.-H. R. Jiang, and S. Jang. Scalable don't-care-based logic optimization and resynthesis. In Proc. FPGA '09, pages
151–160, NY, 21–23 Feb. 2009. ACM.
[18] A. Mishchenko, S. Chatterjee, R. Brayton, X. Wang, and T. Kam.
Technology mapping with boolean matching, supergates and choices.
[19] A. Mishchenko, S. Cho, S. Chatterjee, and R. Brayton. Combinational
and sequential mapping with priority cuts. In Proc. ICCAD’07, pages
354–361, 4–8 Nov. 2007.
[20] J. Moon and L. Moser. On cliques in graphs. Israel Journal of
Mathematics, 3(1):23–28, 1965.
[21] P. Pan and C.-C. Lin. A new retiming-based technology mapping
algorithm for LUT-based FPGAs. In Proc. FPGA ’98, pages 35–42,
New York, 22–24 Feb. 1998. ACM.
[22] P. Pan and C. L. Liu. Optimal clock period FPGA technology mapping
for sequential circuits. ACM TODAES, 3(3):437–462, 1998.
[23] J. Peddersen, S. Shee, A. Janapsatya, and S. Parameswaran. Rapid
embedded hardware/software system generation. In Int. Conf. on VLSI
Design, pages 111–116, 2005.
[24] S. Chatterjee. On Algorithms for Technology Mapping, PhD Thesis.
2007.
[25] T. Takata and Y. Matsunaga. An efficient cut enumeration for depth-optimum technology mapping for LUT-based FPGAs. In Proceedings of
the 19th ACM Great Lakes Symposium on VLSI, GLSVLSI ’09, pages
351–356, New York, NY, USA, 2009. ACM.
TABLE I (continued): K = 10, cone size = 100:
circuit   # cuts after pruning   pruning time (s)   enum. time (s)
c432      382,697                2.22               1.60
c1355     26,194,458             29.25              286.06
c1908     1,311,442,759          36.46              6510.5
c6288     150,921,850            379.75             4570.26
c7552     3,113,268,991          193.59             7046.68
Feedback Acquisition and Reconstruction of
Spectrum-Sparse Signals by Predictive Level
Comparisons
arXiv:1711.09658v1 [] 27 Nov 2017
Mahdi Boloursaz Mashhadi, Student Member, IEEE, Saeed Gazor, Nazanin
Rahnavard, and Farokh Marvasti, Senior Members, IEEE
Abstract—In this letter, we propose a sparsity promoting feedback acquisition and reconstruction scheme for sensing, encoding
and subsequent reconstruction of spectrally sparse signals. In the
proposed scheme, the spectral components are estimated utilizing
a sparsity-promoting, sliding-window algorithm in a feedback
loop. Utilizing the estimated spectral components, a level signal
is predicted and sign measurements of the prediction error are
acquired. The sparsity promoting algorithm can then estimate
the spectral components iteratively from the sign measurements.
Unlike many batch-based Compressive Sensing (CS) algorithms,
our proposed algorithm gradually estimates and follows slow
changes in the sparse components utilizing a sliding-window
technique. We also consider the scenario in which possible
flipping errors in the sign bits propagate along iterations (due to
the feedback loop) during reconstruction. We propose an iterative
error correction algorithm to cope with this error propagation
phenomenon considering a binary-sparse occurrence model on
the error sequence. Simulation results show effective performance
of the proposed scheme in comparison with the literature.
Index Terms—Sparse Signal Acquisition, 1-Bit Compressive
Sensing (CS), Level Comparison (LC) Sign Measurements,
Binary-Sparse Error Correction.
I. INTRODUCTION

SPECTRUM sparse signals arise in many applications such
as cognitive radio networks, frequency hopping communications, radar/sonar imaging systems, musical audio signals
and many more. In such cases, the signal components may be
sparsely spread over a wide spectrum and need to be acquired
without prior knowledge of their frequencies. This is a major
challenge in spectrum sensing that is an essential block in any
spectrum-aware communication system. In this research, we
propose a scheme and the corresponding signal processing
algorithms for acquisition of spectrally sparse signals. The
proposed scheme utilizes tools from the general theory of
Compressive Sensing (CS) [1], [2] to address spectral sparsity.
Several schemes have already been proposed for sparse signal acquisition. These include the Random Demodulator (RD)
[3], the Multi-coset Sampler [4] and the Modulated Wideband
Converter (MWC) [5]. However, the acquired measurements
Mahdi Boloursaz Mashhadi is both with the ECE Department, University of
Central Florida, Orlando, USA and the Advanced Communications Research
Institute (ACRI), Sharif University of Technology, Tehran, Iran, e-mail:
boloursaz@eecs.ucf.edu.
Saeed Gazor is with the ECE Department, Queen's University, Kingston,
Canada, Nazanin Rahnavard is with the ECE Department, University of
Central Florida, Orlando, USA, and Farokh Marvasti is with the Advanced
Communications Research Institute (ACRI), Sharif University of Technology,
Tehran, Iran.
need to be quantized and encoded to bits for subsequent
transmission or processing. This is addressed in the Quantized
Compressive Sensing [6], [7], [8] literature.
The extreme case of 1-bit compressive sensing has been
extensively studied [9], [10], [11], [12], [13], [14] and proved
to be robust against high levels of additive noise on the
measurements [8]. However, the 1-bit measurements acquired
in these works provide no information on the norm of the
sparse signal. Hence in these works, reconstruction is possible
only up to a scale factor.
In the proposed scheme, the input signal is compared with
a level signal [15], [16], [17] and sign measurements of the
error are acquired. The level signal is estimated adaptively in a
feedback loop utilizing a sparse reconstruction algorithm. The
reconstruction algorithm utilizes the previously acquired sign
values to estimate the sparse signal components and predict the
level signal, subsequently. This overcomes the scale ambiguity
of 1-bit CS reconstruction.
The idea of acquiring sign measurements of level comparisons was also applied in [18], [19], [20]. Previous studies on
one-bit sigma-delta quantization [21], [22], [23] investigate
how adaptivity in the level values can improve the reconstruction error bound in terms of the number of measurements.
The approach in [24] achieves exponential decay in the reconstruction error as a function of the number of measurements but requires the levels themselves to be transmitted
for reconstruction. This is in contrast to our proposed scheme
where the adaptive levels are estimated from the sequence of
previously acquired sign measurements themselves. Moreover,
unlike many previously proposed batch-based reconstruction
algorithms, our proposed algorithm applies one iteration on
each sliding window on the input signal using the previous
estimate of the sparse vector as an initial estimate. This not
only can decrease the computational complexity for large
values of batch sizes and iteration counts, but also enables
the proposed algorithm to better follow possible slow changes
in the sparse components along iterations. In Section IV, we
provide performance comparisons with state-of-the-art techniques in [23], [24] and show effective performance of the
proposed scheme by simulations.
In case the acquired sign bits are subsequently transmitted
over a channel, the sign bits available to the receiver may
contain flipping errors. Due to the feedback, these errors will
propagate and make reconstruction unstable. To cope with
this, we propose an iterative algorithm for correcting possible
2
sign flip errors assuming a binary-sparse occurrence model
on the error sequence. The iterations for error correction are
performed along iterations of the main sparse component
estimation algorithm at the receiver to gradually estimate
the error sequence and avoid error propagation. Unlike the
previously proposed error-robust 1-bit CS reconstruction techniques [25], [26], [27], our proposed error correction algorithm
alleviates the need for prior knowledge of the number of errors
by applying a binary-sparse occurrence model on the error
sequence.
This paper is organized as follows. In Section II we describe our proposed feedback acquisition and the corresponding reconstruction scheme. Section III presents the algorithms performed in the main building blocks of our proposed scheme. Section IV provides the simulation results, and finally Section V concludes the paper.
II. THE PROPOSED ACQUISITION AND RECONSTRUCTION SCHEME
In this research, we adopt the sparse exponential model in order to accommodate the general class of spectrally sparse signals that arise in many real-world applications. Assuming that the power spectrum of x(t) is sparse, we may approximate x(t) = Σ_{z∈Z} s_z(t) as the sum of exponential components for Z = {z1, z2, · · · , zN}, zi ∈ C, where each component can be predicted by s_{zi}(t + ǫ) = e^{zi ǫ} s_{zi}(t). Also assume that x(t) is sparse in the sense that only a few of its components have significant amplitudes |s_z(t)| at any time. Note that the adopted model allows non-equidistant frequencies and hybrid real/imaginary exponentials.
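As an illustration of the adopted model, a minimal Python sketch for generating such a spectrally sparse test signal is given below; the choice of Z and of the few non-zero amplitudes is up to the caller and is not prescribed by the text.

```python
import numpy as np

def sparse_exponential_signal(t, Z, amplitudes):
    """Generate x(t) = sum_z s_z(t) for the sparse exponential model of
    Section II: each component s_z(t) = a_z * exp(z*t), with only a few
    non-zero amplitudes a_z.  A sketch; e.g. Z = 1j*omega0*np.arange(1, N+1)."""
    t = np.asarray(t)
    x = np.zeros_like(t, dtype=complex)
    for z, a in zip(Z, amplitudes):
        if a != 0:
            # the prediction property s_z(t+eps) = exp(z*eps) * s_z(t) holds
            x += a * np.exp(z * t)
    return x
```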
Fig. 1a shows the block diagram of the proposed feedback
acquisition scheme. In this figure, the complex input signal
x(t) is compared with the level signal ℓ(t) utilizing a simple
comparator. The error signal e(t) goes through the complex sign (csgn) block and is then sampled uniformly at t = mτ, resulting in the output sequence of sign values bm ∈ {±1 ± 1j}. To
encode the signal more efficiently, ℓ(t) is calculated from bm
in a feedback loop utilizing a sparse component estimation
algorithm followed by prediction.
In many cases, the acquired signal needs to be subsequently
transmitted over a channel. In these cases, the sign bits
available for reconstruction at the receiver experience flipping
errors. These errors cause the receiver to estimate inaccurate
level values. If the levels estimated at the receiver are inaccurate, the subsequent sign bits received will be wrongly interpreted, which introduces further errors into the reconstruction. In other words, due to the feedback, the error propagates and may destabilize the whole reconstruction. To prevent error
propagation, we propose secondary iterations that are applied
along iterations of the main sparse component estimation
algorithm at the receiver to correct the sign-flip errors as
depicted in Fig. 1b.
In the next section, we elaborate on the algorithms performed
in the main building blocks of the proposed scheme.
Fig. 1: (a) Block diagram for the proposed acquisition scheme: the input x(t) is compared with the level signal ℓ(t); the error e(t) passes through the csgn block and is sampled at t = mτ, producing bm; the level is generated in a feedback loop by a sparse component estimation block followed by a predict & hold block. (b) Block diagram for reconstruction at the receiver: the received bits b̂m drive the sparse component estimation and predict & hold blocks, complemented by a sparse error correction block.
III. THE PROPOSED ALGORITHMS
In this section, we first elaborate on our proposed algorithm
to be performed in the sparse component estimation block to
reconstruct the spectral components from the sign bits. Then,
we introduce our proposed sparsity-promoting algorithm to
correct sign-flip errors at the receiver.
A. Sparse Component Estimation
Consider a sliding window on the input samples, Xm = [x(mτ), x((m−1)τ), · · · , x((m−M+1)τ)]^T, in which τ is the sampling period. Moreover, denote the corresponding level and sign values by Lm = [ℓ(mτ), ℓ((m−1)τ), · · · , ℓ((m−M+1)τ)]^T and Bm = [bm, bm−1, · · · , bm−M+1]^T, respectively. Utilizing this vector notation, we get Bm = csgn(Xm − Lm). Defining Sm = [s_{z1}(mτ), s_{z2}(mτ), · · · , s_{zN}(mτ)]^T as the state vector for the observed signal x(t), we can write Xm = ΦSm, where Φ is a Vandermonde matrix defined by

Φ = [ 1                 1                 · · ·   1
      e^{−z1 τ}         e^{−z2 τ}         · · ·   e^{−zN τ}
      ⋮                 ⋮                 ⋱       ⋮
      e^{−z1 (M−1)τ}    e^{−z2 (M−1)τ}    · · ·   e^{−zN (M−1)τ} ].   (1)
The exponential modeling s_{zi}(t + ǫ) = e^{zi ǫ} s_{zi}(t) simplifies to a one-step predictor Sm = P ⊙ Sm−1, where P = [e^{z1 τ}, e^{z2 τ}, · · · , e^{zN τ}]^T and ⊙ is the element-wise multiplication of two vectors. To estimate and update the sparse state vector Sm, we propose to iteratively minimize

Ŝm = arg min_S ‖B̂m − csgn(ΦS − Lm)‖²₂ + λ1 ‖S − P ⊙ Ŝm−1‖²₂ + λ2 Σ_{i=1}^{N} gσ([S]i),   (2)
¹The complex sign function is defined as csgn(·) = sgn(Re(·)) + j sgn(Im(·)), where sgn(x) = 1 for x ≥ 0, sgn(x) = −1 for x < 0, and j = √−1; csgn(·) operates element-wise on vectors.
where Ŝm and Ŝm−1 represent estimates of the vector of sparse components for the sliding windows corresponding to t = mτ and t = (m − 1)τ, respectively, and [S]i denotes the i-th element of the vector S. Note that B̂m is the vector
of observed sign bits and is different from the true Bm in
the sense that it may contain bit-flip errors. The first term of
the cost function in (2) enforces consistency with the encoded
sequence of sign values, the second term guarantees smooth
update of the solution and the last term promotes sparsity.
For the sparsity-promoting term, we set gσ(s) = arctan(σ|s|)/arctan(σ) [28], [29], [30]. It is easy to show that lim_{σ→∞} Σi gσ([S]i) = ‖S‖₀ and lim_{σ→0} Σi gσ([S]i) = ‖S‖₁. Thus, by starting from a small σ value and increasing it along the iterations, we migrate gradually from the convex ℓ1 norm to the non-convex ℓ0 norm. Similarly, for ease of calculating the gradient, we replace the sign function with an S-shaped, infinitely differentiable function [31], [32], [33]. We set fδ(s) = (2/π) arctan(δs) for some δ > 0, which is differentiable with derivative f′δ(s) = (d/ds) fδ(s) = (2/π) · δ/(1 + δ²s²). Since lim_{δ→∞} fδ(s) = sgn(s), we increase the δ value exponentially along the iterations. Making these substitutions, we get²
Ŝm = arg min_S C(S)
   = arg min_S ‖B̂m − cf(ΦS − Lm)‖²₂ + λ1 ‖S − P ⊙ Ŝm−1‖²₂ + λ2 Σ_{i=1}^{N} arctan(σ|[S]i|)/arctan(σ).   (3)
To solve (3), we shall find the roots of ∂C(S)/∂S = 0. In order to decrease the computational cost, we apply only one iteration on each sliding window but gradually increase the σ and δ parameters along temporal iterations. Utilizing a sliding-window approach also enables following possible changes in the spectral components along iterations. We get

2Φ^H [cf′(ΦS − Lm) ⊙ (cf(ΦS − Lm) − B̂m)] + 2λ1 (S − P ⊙ Ŝm−1) + (λ2/arctan(σ)) G ⊙ S = 0,   (4)
where

[G]i = 1 / ( |[S]i| (1 + σ²|[S]i|²) ),   for i = 1, · · · , N.   (5)
To solve this non-linear equation, we approximate the first term in (4) by its value at the prior state estimate; denoting Ym−1 = 2λ1 P ⊙ Ŝm−1 − 2Φ^H [cf′(ΦŜm−1 − Lm) ⊙ (cf(ΦŜm−1 − Lm) − B̂m)], we get

(2λ1 1N + (λ2/arctan(σ)) G) ⊙ S = Ym−1,   (6)
where 1N = [1, · · · , 1]^T ∈ R^N. The elements of 2λ1 1N + (λ2/arctan(σ)) G are 2λ1 + λ2 / (arctan(σ)|[S]i|(1 + σ²|[S]i|²)), which are all real positive values; therefore, from (6) we obtain

∠[S]i = ∠[Ym−1]i,   (7)

2λ1 |[S]i| + λ2 / (arctan(σ)(1 + σ²|[S]i|²)) = |[Ym−1]i|.   (8)

²For a function f : R → R, we denote cf(·) = f(Re(·)) + j f(Im(·)).
Denoting β = λ2/arctan(σ) and αi = |[Ym−1]i|, we can rewrite (8) as a cubic polynomial equation in terms of ri = |[S]i|, given by

2λ1 σ² ri³ − αi σ² ri² + 2λ1 ri + (β − αi) = 0.   (9)
The coefficients of the cubic polynomial (9) are real. Hence (9) has either three real roots or a single real root and a complex conjugate pair. To enforce sparsity, coefficients with smaller amplitudes are encouraged, and hence (3) is minimized by choosing the smallest non-negative real root of (9). We propose to solve (9) as follows.

Case 1: All roots of (9) are real. The sum of the three roots, αi σ² / (2λ1 σ²) = αi / (2λ1) > 0, is always positive, and hence there exists at least one positive root. The smallest positive root is feasible for the algorithm.

Case 2: One of the roots is real and the other two are a complex conjugate pair. If the product of the roots is positive, i.e., (αi − β) / (2λ1 σ²) > 0, the real root is positive and hence feasible. Hence we must enforce β = λ2/arctan(σ) < αi, or equivalently increase σ such that arctan(σ) > λ2/αi. Note that σ is already increased along the iterations; hence, if this situation occurs, σ is further increased until arctan(σ) > λ2/αi holds.
As described above, the magnitude and phase of [Ŝm ]i are
given by the solution of (9) and (7), respectively.
Using the state estimate Ŝm, the predict & hold block calculates the next level value as ℓ((m + 1)τ) = Σ_{i=1}^{N} [P ⊙ Ŝm]i. Finally, to obtain ℓ(t) from its samples, each ℓ(mτ) is held by this block at the output for the duration of the sampling period τ.
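A minimal Python sketch of the resulting per-component update — the smallest non-negative real root of (9) for the magnitude and (7) for the phase — is given below. The handling of Case 2 (increasing σ) is simplified to a fallback, and all names are illustrative.

```python
import numpy as np

def update_component(y_i, lam1, lam2, sigma):
    """Per-component update from (7)-(9): given [Y_{m-1}]_i, return the
    new complex estimate [S_m]_i.  A sketch; the on-line increase of
    sigma in Case 2 is replaced here by a zero fallback."""
    alpha = np.abs(y_i)
    beta = lam2 / np.arctan(sigma)
    # Cubic (9): 2*lam1*sigma^2 r^3 - alpha*sigma^2 r^2 + 2*lam1 r + (beta - alpha) = 0
    coeffs = [2 * lam1 * sigma**2, -alpha * sigma**2, 2 * lam1, beta - alpha]
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    feasible = real_roots[real_roots >= 0]
    if feasible.size == 0:
        return 0.0 + 0.0j               # Case 2: sigma would be increased instead
    r = feasible.min()                  # smallest non-negative real root, per (9)
    return r * np.exp(1j * np.angle(y_i))   # phase taken from (7)
```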
B. Sparse Error Correction

Let us define the real and imaginary error vectors E^r_m and E^i_m with elements e^r_m, e^i_m ∈ {0, 1}: e^r_m = 1 if Re(bm) is flipped and e^r_m = 0 otherwise. Hence, we get Re(bm) = Re(b̂m)(1 − 2e^r_m) and Im(bm) = Im(b̂m)(1 − 2e^i_m). Note that for ease of calculation, we consider the real and imaginary error vectors separately and provide our algorithm for the real part; the imaginary part is treated similarly. Obviously, E^r_m itself is a sparse vector with elements in {0, 1}. Hence, we propose secondary iterations to update and estimate E^r_m along the primary iterations of the sparse component estimation algorithm. Let Ê^r_{m−1} ← S(Ê^r_{m−1}), in which the S(·) operator denotes sliding the estimated error vector by one sample and inserting a zero as the initial estimate of its new element. Now, to estimate E^r_m, we solve

Ê^r_m = arg min_E  h(E) + θ Σ_{i=1}^{M} [E]i   (10)
s.t.  ‖E − Ê^r_{m−1}‖₂ ≤ ǫ,   [E]i ∈ [0, 1],

where the range for [E^r_m]i is relaxed to the convex interval [0, 1], and the second term of the cost function is the ℓ1 norm, which promotes sparsity in E^r_m since the elements of E^r_m are non-negative. Here h(E) = ‖Re(B̂m) ⊙ (1 − 2E) − sgn(Re(ΦŜm − Lm))‖²₂ is a quadratic convex term with respect to E.
TABLE I: The MSE Values (dB) Achieved by the Proposed Scheme

           p = 0    p = 0.0125         p = 0.025          p = 0.05
                    w/o EC   w/ EC     w/o EC   w/ EC     w/o EC   w/ EC
k = 2.5%   -19.9    -16.3    -19.4     -10.2    -18.2     -4.1     -17.8
k = 5%     -19.6    -12.1    -18.3     -8.7     -17.4     -2.3     -16.5
k = 10%    -17.9    -10.3    -14.5     -4.6     -12.5     F        -10.2
k = 20%    -10.4    -6.3     -7.8      F        -7.1      F        -5.8
To solve (10), we use the gradient descent algorithm followed by projection onto [0, 1] and stochastic rounding [34],
[35] to {0, 1}. Note that both the projected gradient and
stochastic rounding techniques have convergence guarantees
for the convex case as in (10). The gradient descent step is
given by
Tm = Ê^r_{m−1} − ǫ D/‖D‖₂,   (11)

where ǫ is a small step size and

D = −4 Re(B̂m) ⊙ (Re(B̂m) ⊙ (1 − 2Ê^r_{m−1}) − sgn(Re(ΦŜm − Lm))) + θ 1M.   (12)

The projection and stochastic rounding are performed by

[Em]i = 0 if [Tm]i ≤ u,  and  [Em]i = 1 if [Tm]i > u,   (13)

where u is generated as a uniformly distributed random variable over the interval [0, 1].
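A minimal Python sketch of one such secondary iteration — gradient step (11)–(12) followed by the stochastic rounding (13) — is given below; the variable names mirror the text but do not come from a reference implementation.

```python
import numpy as np

def error_correction_step(E_prev, B_hat, Phi, S_hat, L, theta, eps, rng):
    """One secondary iteration of (11)-(13) for the real part: a gradient
    step on the relaxed problem (10), then stochastic rounding to {0,1}.
    A sketch; E_prev is the slid previous estimate, rng a numpy Generator."""
    sign_term = np.where(np.real(Phi @ S_hat - L) >= 0, 1.0, -1.0)
    resid = np.real(B_hat) * (1 - 2 * E_prev) - sign_term
    D = -4 * np.real(B_hat) * resid + theta * np.ones_like(E_prev)   # (12)
    T = E_prev - eps * D / np.linalg.norm(D)                          # (11)
    u = rng.uniform(0.0, 1.0)                                         # (13)
    return (T > u).astype(float)
```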
IV. SIMULATION RESULTS

TABLE II: MSE Comparisons (dB) between our Proposed Scheme and the Literature

Reconstruction MSE (dB):
            M = 50    M = 100   M = 200
[23]        -13.34    -18.54    -25.88
[24]        -15.76    -29.61    -57.21
This Work   -16.12    -29.73    -56.96

Fig. 2: MSE vs. Iteration for the Off-Grid Scenario (k = 0.05); curves are shown for error rates p = 0.0125, p = 0.025 and p = 0.05.
To numerically evaluate the performance of our proposed scheme, we generate random spectrally sparse signals according to the model presented in Section II with N = 500, M = 50, τ = 5 × 10⁻⁴ sec, and Z = {1j, 2j, · · · , 500j} × ω0, ω0 = 10 rad/sec. The non-zero spectral components are selected uniformly at random and the corresponding amplitudes come from a N(0, 1) distribution. For comparison, the final normalized reconstruction Mean Square Error values, MSE = 10 log10(‖S − Ŝ‖²₂ / ‖S‖²₂), averaged over 100 runs are reported in Table I for different sparsity factors. The sparsity factor k is defined as the ratio of the number of nonzero spectral components to the total number of components N. The algorithm parameters are experimentally optimized for the best performance as δm = 1.01 × δm−1, σm = 1.1 × σm−1. In this table, p denotes the rate at which sign-flip errors occur, "w/ EC" and "w/o EC" represent the results with and without the proposed error correction (EC) iterations, and the letter "F" indicates divergence of the proposed algorithm (MSE > −5 dB) due to the error propagation phenomenon. As shown, EC is necessary to avoid error propagation.
Next, we investigate the general scenario in which x(t) both
contains frequencies that do not lie on any of the quantized
frequencies Z = {1j, 2j, · · · , 500j} × 10 rad/sec (the off-grid problem) and may also have stable real exponential
parts. Note that exp(γt + j(Kω0 + ∆ω)t) = exp((γ +
j∆ω)t) × exp(jKω0 t), γ ∈ R− which is the grid frequency
exp(jKω0 t) with an amplitude that varies with time according
to exp((γ + j∆ω)t). Hence, if γ and ∆ω are small, the
algorithm will still be able to converge and follow the smooth
changes in the component amplitudes. To investigate this, we
generate x(t) with a sparsity factor of k = 0.05 that contains
components on ω = j214.8 × 10, −1.5 + j442.1 × 10 rad/sec
and provide the MSE curves versus iteration in Fig. 2. These curves confirm the ability of the proposed algorithm to follow smooth changes in the component amplitudes.
Finally in Table II, we compare the performance of our
proposed algorithm with state-of-the-art techniques in [23],
[24] for different values of the window length M where
k = 5%, p = 0 and the other simulation parameters are
fixed as previously. This table provides the final normalized
reconstruction MSEs (dB) achieved by the three acquisition/reconstruction schemes averaged over 20 runs when there
exists an additive zero-mean Gaussian pre-quantization noise
with standard deviation 0.1 and [24] is applied in a hard
thresholding scheme. As observed in this table, both our
proposed algorithm and [24] outperform [23] especially for
larger values of M . This is due to an exponential error decay
bound for our proposed algorithm and [24] in comparison with
a root exponential decay bound for the Σ∆ scheme in [23].
Our proposed scheme shows a slightly improved performance
in comparison with [24] for smaller values of M, which may
be due to improved robustness to noise by the proposed error
correction algorithm.
V. CONCLUSION
In this letter, we proposed a feedback acquisition scheme for
encoding of spectrally sparse signals to a stream of 1-bit measurements. We proposed a sparsity promoting reconstruction
algorithm to predict comparison levels in a feedback loop to
facilitate more efficient 1-bit measurements of the input signal.
We also proposed a sparse error correction technique to cope
with possible sign flip errors during transmission. Finally, we
reported simulation results to confirm effective performance
of the proposed scheme and algorithms.
REFERENCES
[1] D.L. Donoho, “Compressed sensing,” IEEE Transactions on Information
Theory, vol. 52, no. 4, pp. 1289–1306, 4 2006.
[2] E.J. Candes and M.B. Wakin, “An Introduction To Compressive
Sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30,
3 2008.
[3] Joel A. Tropp, Jason N. Laska, Marco F. Duarte, Justin K. Romberg, and
Richard G. Baraniuk, “Beyond Nyquist: Efficient Sampling of Sparse
Bandlimited Signals,” IEEE Transactions on Information Theory, vol.
56, no. 1, pp. 520–544, 1 2010.
[4] M. Mishali and Y.C. Eldar, “Blind Multiband Signal Reconstruction:
Compressed Sensing for Analog Signals,” IEEE Transactions on Signal
Processing, vol. 57, no. 3, pp. 993–1009, 3 2009.
[5] M. Mishali and Y.C. Eldar, “From Theory to Practice: Sub-Nyquist
Sampling of Sparse Wideband Analog Signals,” IEEE Journal of
Selected Topics in Signal Processing, vol. 4, no. 2, pp. 375–391, 4 2010.
[6] A. Zymnis, S. Boyd, and E. Candes, “Compressed Sensing With
Quantized Measurements,” IEEE Signal Processing Letters, vol. 17,
no. 2, pp. 149–152, 2 2010.
[7] Wei Dai and Olgica Milenkovic, “Information Theoretical and Algorithmic Approaches to Quantized Compressive Sensing,” IEEE Transactions
on Communications, vol. 59, no. 7, pp. 1857–1866, 7 2011.
[8] Jason N. Laska and Richard G. Baraniuk, “Regime Change: Bit-Depth
Versus Measurement-Rate in Compressive Sensing,” IEEE Transactions
on Signal Processing, vol. 60, no. 7, pp. 3496–3505, 7 2012.
[9] Petros T. Boufounos, “Greedy sparse signal reconstruction from sign
measurements,” in 2009 Conference Record of the Forty-Third Asilomar
Conference on Signals, Systems and Computers. 2009, pp. 1305–1309,
IEEE.
[10] J. N. Laska, Zaiwen Wen, Wotao Yin, and R. G. Baraniuk, “Trust, But
Verify: Fast and Accurate Signal Recovery From 1-Bit Compressive
Measurements,” IEEE Transactions on Signal Processing, vol. 59, no.
11, pp. 5289–5301, 11 2011.
[11] Yaniv Plan and Roman Vershynin, “Robust 1-bit Compressed Sensing
and Sparse Logistic Regression: A Convex Programming Approach,”
IEEE Transactions on Information Theory, vol. 59, no. 1, pp. 482–494,
1 2013.
[12] Laurent Jacques, Jason N. Laska, Petros T. Boufounos, and Richard G.
Baraniuk, “Robust 1-Bit Compressive Sensing via Binary Stable
Embeddings of Sparse Vectors,” IEEE Transactions on Information
Theory, vol. 59, no. 4, pp. 2082–2102, 4 2013.
[13] Fuwei Li, Jun Fang, Hongbin Li, and Lei Huang, “Robust One-Bit
Bayesian Compressed Sensing with Sign-Flip Errors,” IEEE Signal
Processing Letters, vol. 22, no. 7, pp. 857–861, 7 2015.
[14] Sohail Bahmani, Petros T Boufounos, and Bhiksha Raj, “Robust 1-bit
Compressive Sensing via Gradient Support Pursuit,” .
[15] Mahdi Boloursaz Mashhadi and Farokh Marvasti, “Iterative Methods
for Sparse Signal Reconstruction from Level Crossings,” 11 2016.
[16] F. Marvasti and M.B. Mashadi, “Wideband analog to digital conversion
by random or level crossing sampling,” Aug. 8 2017, US Patent
9,729,160.
[17] M. B. Mashhadi, N. Salarieh, E. S. Farahani, and F. Marvasti, “Level
crossing speech sampling and its sparsity promoting reconstruction using
an iterative method with adaptive thresholding,” IET Signal Processing,
vol. 11, no. 6, pp. 721–726, 2017.
[18] U. S. Kamilov, A. Bourquard, A. Amini, and M. Unser, “One-bit measurements with adaptive thresholds,” IEEE Signal Processing
Letters, vol. 19, no. 10, pp. 607–610, Oct 2012.
[19] K. Knudson, R. Saab, and R. Ward, “One-bit compressive sensing with
norm estimation,” IEEE Transactions on Information Theory, vol. 62,
no. 5, pp. 2748–2758, May 2016.
[20] C. Qian and J. Li, “ADMM for harmonic retrieval from one-bit sampling
with time-varying thresholds,” in 2017 IEEE International Conference
on Acoustics, Speech and Signal Processing (ICASSP), March 2017, pp.
3699–3703.
[21] C. S. Güntürk, M. Lammers, A. Powell, R. Saab, and Ö. Yılmaz, “Sigma
delta quantization for compressed sensing,” in 2010 44th Annual
Conference on Information Sciences and Systems (CISS), March 2010,
pp. 1–6.
[22] Felix Krahmer, Rayan Saab, and Özgür Yılmaz, “Sigma-Delta quantization
of sub-Gaussian frame expansions and its application to compressed
sensing,” 6 2013.
[23] Rayan Saab, Rongrong Wang, and Özgür Yılmaz, “Quantization of
compressive samples with stable and robust recovery,” 2015.
[24] R. G. Baraniuk, S. Foucart, D. Needell, Y. Plan, and M. Wootters,
“Exponential decay of reconstruction error from binary measurements
of sparse signals,” IEEE Transactions on Information Theory, vol. 63,
no. 6, pp. 3368–3385, June 2017.
[25] M. Yan, Y. Yang, and S. Osher, “Robust 1-bit compressive sensing using
adaptive outlier pursuit,” IEEE Transactions on Signal Processing, vol.
60, no. 7, pp. 3868–3875, July 2012.
[26] A. Movahed, A. Panahi, and G. Durisi, “A robust RFPI-based 1-bit compressive sensing reconstruction algorithm,” in 2012 IEEE Information
Theory Workshop, Sept 2012, pp. 567–571.
[27] A. Movahed, A. Panahi, and M. C. Reed, “Recovering signals with
variable sparsity levels from the noisy 1-bit compressive measurements,”
in 2014 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), May 2014, pp. 6454–6458.
[28] Y. E. Salehani, S. Gazor, I. M. Kim, and S. Yousefi, “Sparse hyperspectral unmixing via arctan approximation of l0 norm,” in 2014 IEEE
Geoscience and Remote Sensing Symposium, July 2014, pp. 2930–2933.
[29] I. W. Selesnick and İ. Bayram, “Sparse signal estimation by maximally
sparse convex optimization,” IEEE Transactions on Signal Processing,
vol. 62, no. 5, pp. 1078–1092, March 2014.
[30] Yaser Esmaeili Salehani, Saeed Gazor, Il-Min Kim, and Shahram
Yousefi, “ℓ0-norm sparse hyperspectral unmixing using arctan smoothing,” Remote Sensing, vol. 8, no. 3, 2016.
[31] Zhaohui Guo, Todd Wittman, and Stanley Osher, “L1 unmixing and its
application to hyperspectral image enhancement,” 2009.
[32] H. Zayyani, M. Korki, and F. Marvasti, “Dictionary learning for blind
one bit compressed sensing,” IEEE Signal Processing Letters, vol. 23,
no. 2, pp. 187–191, Feb 2016.
[33] H. Zamani, H. Zayyani, and F. Marvasti, “An iterative dictionary
learning-based algorithm for doa estimation,” IEEE Communications
Letters, vol. 20, no. 9, pp. 1784–1787, Sept 2016.
[34] Prabhakar Raghavan and Clark D. Tompson, “Randomized rounding:
A technique for provably good algorithms and algorithmic proofs,”
Combinatorica, vol. 7, no. 4, pp. 365–374, 12 1987.
[35] P. Raghavan, “Probabilistic construction of deterministic algorithms: Approximating
packing integer programs,” Journal of Computer and System Sciences,
vol. 37, no. 2, pp. 130–143, 10 1988.
Model reduction of linear time-varying systems with
applications for moving loads
Maria Cruz Varona† and Boris Lohmann†
Preprint: June 15, 2016
arXiv:1607.02846v1 [math.DS] 11 Jul 2016
Abstract
In this paper we consider different model reduction techniques for systems with moving
loads. Due to the time-dependency of the input and output matrices, the application of time-varying projection matrices for the reduction offers new degrees of freedom, which also come along with some challenges. This paper deals both with simple methods for the reduction of particular linear time-varying systems and with a more advanced technique considering the emerging time derivatives.
Keywords: model reduction; time-varying systems; moving loads
1 Introduction
The detailed modeling of physical and technical phenomena arising in many engineering and
computer science applications may yield models of very large dimension. This is particularly
the case in fields such as thermo-fluid dynamics, structural mechanics or integrated circuit
design, where the models are mostly obtained from a spatial discretization of the underlying
partial differential equations. The resulting large systems of ordinary differential equations or
differential-algebraic equations are computationally expensive to simulate and handle. In order
to reduce the computational effort, model reduction techniques that generate reduced-order
models that approximate the dynamic behaviour and preserve the relevant properties of the
original model are required. For the reduction of linear time-invariant (LTI) systems, various
well-established reduction approaches exist (see e.g. [1]). In the past ten years, further model
reduction methods have been developed for linear, parametric and nonlinear systems [4, 3] and
applied in a wide variety of domains.
In this contribution, we investigate model order reduction of linear time-varying (LTV)
systems. Such systems arise in many real-life applications, since dynamical systems often depend
on parameters which vary over time or might alter their behaviour due to ageing, degradation,
environmental changes and time-dependent operating conditions. Another possible application
for LTV systems are moving loads. This particular but still very frequent problem arises, for
example, in working gears, cableways, bridges with moving vehicles or milling processes. Since
the position of the acting force varies over time, systems with sliding components exhibit a time-variant behaviour. The varying load location can be modeled and considered in different ways,
thus yielding diverse alternative representations for systems with moving loads and, according
to this, leading to different approaches to reduce them.
One possibility is to represent moving loads as LTV systems, in which only the input and/or
output matrices are time-dependent. Such systems can be then reduced using balanced truncation model reduction methods developed in [16, 15]. These approaches, however, require a
high computational and storage effort, since two differential Lyapunov equations must be solved.
†Chair of Automatic Control, Technical University of Munich, Boltzmannstr. 15, D-85748 Garching ({maria.cruz,lohmann}@tum.de)
Recently, a practical and efficient procedure of balanced truncation for LTV systems has been
presented in [12]. Note that these aforementioned balanced truncation techniques can be applied
to general LTV systems, where all system matrices are time-dependent. For the reduction of
systems with only time-varying input and output matrices the two-step approach proposed in
[17, 2] can also be pursued. This method consists, first, of a low-rank approximation of the time-dependent input matrix and, subsequently, of applying standard model reduction techniques to
the resulting LTI system with a modified input. The approximation of the input matrix in a
low-dimensional subspace is performed via the solution of a linear least squares minimization
problem.
Systems with moving loads can further be modeled by means of linear switched systems.
Well-known reduction methods such as balanced truncation can then be applied for the reduction
of each LTI subsystem [11].
A last alternative option for describing systems with moving loads is to consider the load
position as a time-dependent parameter of the system model. This results in a linear parameter-varying (LPV) system, in which only the input and/or output matrices depend on a time-varying parameter. In many recent publications, e.g. [8, 11, 9, 2], the parameter is assumed to be time-independent. Thereby any parametric model order reduction (pMOR) approach [4] can be
applied to the resulting parametric LTI system. In some other recent publications [18, 5, 6, 7]
the time variation of the parameter is taken into account, whereby new time derivative terms
emerge during the time-dependent parametric model reduction process.
This paper deals with different time-varying model reduction techniques for systems with
moving loads. Firstly, LTV systems are considered and the time-dependent projective reduction
framework is briefly explained in section 2. Since moving loads represent particular LTV systems,
we then introduce some straightforward reduction approaches for the resulting special cases in
section 3. In the second part of the paper, we focus on LPV systems and present a time-dependent parametric model reduction approach by matrix interpolation [5, 7] in section 4.
Some numerical results for the reduction of systems with moving loads applying the proposed
methods are finally reported and discussed in section 5.
2 Linear Time-Varying Model Order Reduction
In the following we first consider a high-dimensional linear time-varying system of the form
E(t) ẋ(t) = A(t) x(t) + B(t) u(t),
y(t) = C(t) x(t),
(1)
where E(t), A(t) ∈ Rn×n , B(t) ∈ Rn×m and C(t) ∈ Rq×n are the time-dependent system
matrices, x(t) ∈ Rn is the state vector and u(t) ∈ Rm , y(t) ∈ Rq represent the inputs and
outputs of the system, respectively. The system matrix E(t) is assumed to be nonsingular for all
t ∈ [0, T ]. Note that it is also possible to consider second-order systems and reformulate them
into the first-order form (1).
2.1 Time-dependent Projective Reduction Framework
In projective model order reduction, we aim to find a reduced-order model by approximating
the state vector x(t) on a subspace of lower dimension r ≪ n. In the time-varying case, the state vector x(t) might be projected onto a varying subspace spanned by the columns of a time-dependent projection matrix V(t) ∈ R^{n×r} [16, 18]. Therefore, the approximation equations read
x(t) ≈ V(t) xr (t),
ẋ(t) ≈ V̇(t) xr (t) + V(t) ẋr (t),
(2)
whereby the product rule must be considered in this case for the time derivative of the state
vector. Plugging these two equations into (1), and applying thereon a properly chosen time-dependent projection matrix W(t) which enforces the Petrov-Galerkin condition, leads to the time-varying reduced-order model

Er(t) ẋr(t) = [Ar(t) − W(t)^T E(t) V̇(t)] xr(t) + Br(t) u(t),
yr(t) = Cr(t) xr(t),   (3)

with Er(t) = W(t)^T E(t) V(t), Ar(t) = W(t)^T A(t) V(t), Br(t) = W(t)^T B(t) and Cr(t) = C(t) V(t).
It is noteworthy to mention that the system matrix of the reduced-order model (3) not only
comprises the reduced matrix Ar (t), but also includes a further term which depends on the
time derivative V̇(t) of the time-varying projection matrix. This additional term influences the
dynamic behaviour of the reduced-order model and should therefore be taken into account.
The usage of time-dependent projection matrices for the reduction of linear time-varying
systems certainly offers benefits regarding the approximation quality. For their computation,
however, standard reduction methods such as balanced truncation cannot be directly applied,
but must be adapted instead. Furthermore, the time derivative of V(t) should be approximated
numerically (thus increasing the computational effort) and included in the time integration
scheme of the reduced-order model [12].
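For illustration, a minimal Python/NumPy sketch of assembling the reduced matrices of (3) at one time instant, including the additional V̇-term, could look as follows (dense arrays for brevity; all names are illustrative):

```python
import numpy as np

def reduce_ltv(E, A, B, C, V, W, Vdot):
    """Assemble the reduced LTV matrices of (3) at one time instant t,
    given the full-order matrices and the (time-dependent) bases V(t),
    W(t) together with a derivative approximation Vdot(t).  A sketch;
    large-scale code would keep E and A sparse."""
    Er = W.T @ E @ V
    Ar = W.T @ A @ V - W.T @ E @ Vdot   # extra term from the product rule
    Br = W.T @ B
    Cr = C @ V
    return Er, Ar, Br, Cr
```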
3 Straightforward Reduction Approaches for particular Linear Time-Varying Systems
In the previous section we have seen that the application of time-dependent projection matrices
for the reduction of LTV systems comes along with some difficulties and challenges. For the
reduction of particular LTV systems, in which only the input and/or output matrices depend on
time, the usage of time-independent projection matrices V and W might be sufficient. In this
section we discuss some special cases for LTV systems and propose straightforward approaches
to reduce them.
3.1 Case 1: Moving Loads
The first case we want to consider is a high-dimensional LTV system with only time-varying
input matrix, and all other matrices being time-independent:
E ẋ(t) = A x(t) + B(t) u(t),
y(t) = C x(t).
(4)
The time-dependent input matrix describes the position of the moving forces at time t. In the
following we present two straightforward approaches to reduce a system in the form above using
time-independent projection matrices.
Approach 1: Two-step method
The first straightforward reduction method is derived from the two-step approach presented
in [17, 2]. The method is composed of two steps:
1. The time-variability of the input matrix is shifted to the input variables through a low-rank approximation of the input matrix by B(t) ≈ B B̃(t), where B ∈ R^{n×m̃} with m̃ ≪ n is a constant matrix and B̃(t) ∈ R^{m̃×m}. Introducing a new input ũ(t) = B̃(t) u(t), the original model (4) can be transformed to:

   E ẋ(t) = A x(t) + B ũ(t),
   y(t) = C x(t).   (5)
2. The resulting multiple-input multiple-output (MIMO) LTI system (E, A, B, C) can subsequently be reduced by means of balanced truncation, MIMO rational Krylov or MIMO-IRKA, for instance. The reduced-order model is then given by

   W^T E V ẋr(t) = W^T A V xr(t) + W^T B ũ(t),
   yr(t) = C V xr(t),   (6)

   with Er = W^T E V, Ar = W^T A V, Br = W^T B and Cr = C V, where the reduced time-varying input matrix reads Br(t) = Br B̃(t).
For the approximation of the input matrix B(t), unlike [17, 2], we simply take the correct input columns bi(t) with the moving load acting at the corresponding nodes i of a coarse finite element grid and form the low-rank matrix B with them, without performing a least squares minimization with the basis functions. Note that the two-step approach only provides satisfactory results if the number of columns m̃ of the low-rank matrix B is sufficiently large [17]. Otherwise, the overall approximation error in the output (due to the approximation error in the input matrix and the model reduction error) can become inadmissibly large. Note also that this reduction method is limited to systems in which the trajectory of the load is known before the simulation.
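A minimal Python sketch of this simple column-selection variant of step 1 is given below; the unit-column construction of B and the linear interpolation weights in B̃(t) are plausible choices for a 1D beam, not prescribed by [17, 2].

```python
import numpy as np

def build_low_rank_B(coarse_nodes, n):
    """Constant factor B of the low-rank approximation B(t) ~ B * Btilde(t):
    one unit column per coarse-grid node the load passes through.  A sketch;
    a consistent FE formulation would distribute the point load over the
    element shape functions instead of a single unit entry."""
    B = np.zeros((n, len(coarse_nodes)))
    for j, node in enumerate(coarse_nodes):
        B[node, j] = 1.0
    return B

def Btilde(t, v, coarse_positions):
    """Time-varying factor for a single moving force (m = 1): linear
    interpolation weights between the two coarse nodes closest to the
    current load position x = v*t."""
    x = v * t
    w = np.zeros((len(coarse_positions), 1))
    j = int(np.clip(np.searchsorted(coarse_positions, x) - 1,
                    0, len(coarse_positions) - 2))
    h = coarse_positions[j + 1] - coarse_positions[j]
    a = (x - coarse_positions[j]) / h
    w[j, 0], w[j + 1, 0] = 1.0 - a, a
    return w
```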
Approach 2: One-sided reduction with output Krylov subspace
The second straightforward method uses Krylov subspaces for the reduction and exploits the fact that the only time-varying element in system (4) is the input matrix B(t). Since an input Krylov subspace would yield a time-varying projection matrix

V(t) := [As0^{−1} B(t),  As0^{−1} E As0^{−1} B(t),  . . . ,  (As0^{−1} E)^{r−1} As0^{−1} B(t)],   (7)

where As0 = A − s0 E, the idea of this approach is to perform a one-sided reduction with V = W, where the columns of W form a basis of the output Krylov subspace:

W := [As0^{−T} C^T,  As0^{−T} E^T As0^{−T} C^T,  . . . ,  (As0^{−T} E^T)^{r−1} As0^{−T} C^T].   (8)

Thereby, time-independent projection matrices are obtained for computing the reduced-order model

W^T E W ẋr(t) = W^T A W xr(t) + W^T B(t) u(t),
yr(t) = C W xr(t),   (9)

with Er = W^T E W, Ar = W^T A W, Br(t) = W^T B(t) and Cr = C W. Although only the first r Taylor coefficients (so-called moments) of the transfer functions of the original and the reduced model around the expansion point s0 match due to the application of a one-sided reduction, we obtain time-independent projection matrices with this approach and can therefore get rid of the time derivative V̇(t).
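A minimal Python/SciPy sketch of building such an output Krylov basis by repeated solves with As0^T (here with a dense LU factorization for brevity, and explicit orthonormalization via a QR decomposition instead of an Arnoldi loop) could look as follows:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr

def output_krylov_basis(E, A, c, r, s0=0.0):
    """Orthonormal basis W of the output Krylov subspace (8) for a SISO
    system with output vector c (so C = c^T).  A sketch with dense
    factorizations; robust Krylov codes orthogonalize on the fly."""
    lu = lu_factor((A - s0 * E).T)      # factorize As0^T once
    cols = []
    w = lu_solve(lu, c)                 # As0^{-T} c
    cols.append(w)
    for _ in range(r - 1):
        w = lu_solve(lu, E.T @ w)       # next moment direction
        cols.append(w)
    W, _ = qr(np.column_stack(cols), mode='economic')
    return W
```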
3.2 Case 2: Moving Sensors
Now we consider a LTV system with only time-varying output matrix
E ẋ(t) = A x(t) + B u(t),
(10)
y(t) = C(t) x(t).
The time-dependent output matrix describes the position of the moving sensors at time t. This
particular LTV system can easily be reduced in the following ways.
Approach 1: Two-step method
1. We shift the time-variability of the output matrix to the output variables through a low-rank approximation by C(t) ≈ C̃(t) C, where C ∈ R^{q̃×n} with q̃ ≪ n is a constant matrix and C̃(t) ∈ R^{q×q̃}. Introducing a new output ỹ(t) = C x(t), the original model (10) can be transformed to:

   E ẋ(t) = A x(t) + B u(t),
   ỹ(t) = C x(t),   y(t) = C̃(t) ỹ(t).   (11)
2. The resulting system (E, A, B, C) can subsequently be reduced by means of any appropriate multiple-input multiple-output LTI reduction technique. The calculated time-independent projection matrices lead to the reduced-order model

   W^T E V ẋr(t) = W^T A V xr(t) + W^T B u(t),
   yr(t) = C̃(t) C V xr(t),   (12)

   with Er = W^T E V, Ar = W^T A V, Br = W^T B, Cr = C V, and the reduced time-varying output matrix Cr(t) = C̃(t) Cr.
The approximation of C(t) is performed by simply taking the output rows with the moving
sensor at the corresponding nodes of a coarse finite element grid. Note that the approximation
of the output matrix yields additional errors in the output [17].
Approach 2: One-sided reduction with input Krylov subspace
Since in this case an output Krylov subspace would lead to a time-varying projection matrix

W(t) := [As0^{−T} C(t)^T,  As0^{−T} E^T As0^{−T} C(t)^T,  . . . ,  (As0^{−T} E^T)^{r−1} As0^{−T} C(t)^T]   (13)

due to the time-dependent output matrix C(t), the idea is now to perform a one-sided reduction with W = V, where the columns of V form a basis of the input Krylov subspace:

V := [As0^{−1} B,  As0^{−1} E As0^{−1} B,  . . . ,  (As0^{−1} E)^{r−1} As0^{−1} B].   (14)
The reduced model is then given by:
V^T E V ẋr(t) = V^T A V xr(t) + V^T B u(t),
yr(t) = C(t) V xr(t),   (15)

with Er = V^T E V, Ar = V^T A V, Br = V^T B and Cr(t) = C(t) V.
Due to the application of a one-sided reduction, only r moments are matched. Nevertheless, the
time derivative is avoided, since V and W are time-independent (V̇ = 0).
3.3 Case 3: Moving Loads and Sensors
Finally, we consider the combined case with time-varying input and output matrices
E ẋ(t) = A x(t) + B(t) u(t),
(16)
y(t) = C(t) x(t).
If the sensor position coincides with the location of the load, then C(t) = B(t)T .
Approach 1: Two-step method
In this case, the respective two-step techniques explained before have to be combined properly:
1. The time-variability of B(t) is shifted to the input variables and the time-dependency of
C(t) to the output variables, thus obtaining a MIMO LTI system.
2. Time-independent projection matrices are then calculated applying an appropriate model
order reduction method to the resulting system (E, A, B, C). The reduced-order model is
finally given by
   W^T E V ẋr(t) = W^T A V xr(t) + W^T B B̃(t) u(t),
   yr(t) = C̃(t) C V xr(t),   (17)

   with Er = W^T E V, Ar = W^T A V, Br = W^T B, Cr = C V and ũ(t) = B̃(t) u(t).
Approach 2: Reduction with modal truncation
Unfortunately, in this Case 3 the application of a one-sided reduction with either an input or an output Krylov subspace would yield time-varying projection matrices V(t) and W(t) according to (7) or (13), respectively. A possible alternative that still yields time-independent projection matrices, and thus gets rid of the time derivative V̇, is to use modal truncation as the reduction approach. This method only uses the time-independent matrices A and E for computing dominant eigenvalues (e.g. those with smallest magnitude or smallest real part) and eigenvectors, thus yielding time-independent projection matrices for the reduction.
4 Time-Varying Parametric Model Order Reduction
After having considered linear time-varying systems and presented some straightforward approaches to reduce special cases arising in moving load and sensor problems, in this section we
focus on linear parameter-varying systems of the form
E(p(t)) ẋ(t) = A(p(t)) x(t) + B(p(t)) u(t),
y(t) = C(p(t)) x(t).
(18)
Such systems also exhibit a time-varying dynamic behaviour, since the system matrices explicitly
depend on parameters p(t) which vary over time. Note that moving load and sensor problems
can be represented as LPV systems with only parameter-varying input and/or output matrices,
if the load and sensor location are considered as time-dependent parameters of the system model.
Due to the time-dependency of the parameters, in the next subsection we derive a projection-based, time-varying parametric model order reduction approach called p(t)MOR, to obtain a
reduced-order model of a LPV system [5, 6]. Based on that, we then adapt the pMOR approach
by matrix interpolation [14] to the parameter-varying case, whereby new time derivative terms
emerge [6, 7]. For the sake of a concise presentation, the time argument t will be omitted in the
state, input and output vectors hereafter.
4.1 Projective p(t)MOR
Similarly as explained in subsection 2.1, in the case of projection-based time-dependent parametric model order reduction we aim to approximate the state vector x by x ≈ V(p(t)) xr using a
parameter-varying projection matrix V(p(t)). Plugging the corresponding approximation equations for x and its derivative ẋ in (18), and applying thereon a properly chosen projection matrix
W(p(t)) that imposes the Petrov-Galerkin condition yields the reduced-order model
Er(p(t)) ẋr = [Ar(p(t)) − W(p(t))^T E(p(t)) V̇(p(t))] xr + Br(p(t)) u,
yr = Cr(p(t)) xr,   (19)
with the time-dependent parametric reduced matrices
Er (p(t)) = W(p(t))T E(p(t))V(p(t)), Ar (p(t)) = W(p(t))T A(p(t))V(p(t)),
Br (p(t)) = W(p(t))T B(p(t)),
(20)
Cr (p(t)) = C(p(t))V(p(t)).
The reduced model comprises an additional term depending on the time derivative V̇(p(t)),
which has to be considered during the extension of the matrix interpolation method to the
parameter-varying case.
4.2 p(t)MOR by Matrix Interpolation
The local pMOR technique of matrix interpolation can be applied to efficiently obtain a parametric reduced-order model from the interpolation of reduced matrices precomputed at different
grid points in the parameter space. Similarly as in the classic method [14], the LPV system (18)
is first evaluated and individually reduced at certain parameter samples pi , i = 1, . . . , k with
respective projection matrices Vi := V(pi ) and Wi := W(pi ). The reduced state vectors xr,i
of the independently calculated reduced models
Er,i ẋr,i = [Ar,i − Wi^T Ei V̇(p(t))] xr,i + Br,i u,
yr,i = Cr,i xr,i   (21)
generally lie in different subspaces and have, therefore, different physical meanings. For this
reason, the direct interpolation of the reduced matrices is not meaningful, and hence the local
reduced models have to be transformed into a common set of coordinates first. This is performed
applying state transformations of the form
xr,i = Ti x̂r,i ,
ẋr,i = Ṫi x̂r,i + Ti x̂˙ r,i ,
(22)
with regular matrices Ti := T(pi ), whereby the product rule is required again for the differentiation of xr,i . These state transformations serve to adjust the different right local bases Vi
to new bases V̂i = Vi Ti . In order to adjust the different left local bases Wi by means of
Ŵi = Wi Mi as well, the reduced models from (21) are subsequently multiplied from the left
with regular matrices Mi^T. The resulting reduced and transformed systems are thus given by

Mi^T Er,i Ti (d x̂r,i / dt) = [Mi^T Ar,i Ti − Mi^T Wi^T Ei V̇(p(t)) Ti − Mi^T Er,i Ṫi] x̂r,i + Mi^T Br,i u,
yr,i = Cr,i Ti x̂r,i,   (23)

with Êr,i = Mi^T Er,i Ti, Âr,i = Mi^T Ar,i Ti, B̂r,i = Mi^T Br,i and Ĉr,i = Cr,i Ti; the complete bracketed expression constitutes the new system matrix Ânew r,i.
One possible way to calculate the transformation matrices Ti and Mi is based on making
the state vectors x̂r,i compatible with respect to a reference subspace spanned by the columns
of the orthogonal matrix R. To this end, the matrices are chosen as Ti := (RT Vi )−1 and
Mi := (RT Wi )−1 , where the columns of R correspond to the r most important directions of
Vall = [V1 . . . Vk ] calculated by a Singular Value Decomposition (SVD) [14].
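A minimal Python sketch of this offline transformation step — the SVD-based reference basis R and the transformations Ti, Mi — is given below; the V̇- and Ṫ-terms of (23) are omitted here and would be added as described in Sections 4.2.1 and 4.2.2 (all names are illustrative).

```python
import numpy as np

def transform_local_models(local_models, Vall, r):
    """Offline phase of p(t)MOR by matrix interpolation: compute the
    reference basis R from an SVD of the collected local bases and map
    every local reduced model into the common coordinates of (23).
    `local_models` is a list of tuples (Er, Ar, Br, Cr, Vi, Wi)."""
    U, _, _ = np.linalg.svd(Vall, full_matrices=False)
    R = U[:, :r]                                   # r dominant directions
    transformed = []
    for (Er, Ar, Br, Cr, Vi, Wi) in local_models:
        Ti = np.linalg.inv(R.T @ Vi)
        Mi = np.linalg.inv(R.T @ Wi)
        transformed.append((Mi.T @ Er @ Ti, Mi.T @ Ar @ Ti,
                            Mi.T @ Br, Cr @ Ti))
    return R, transformed
```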
The resulting system matrix Ânew r,i not only comprises the expected reduced matrix Âr,i ,
but also consists of two further terms that depend on V̇(p(t)) and Ṫi , respectively. The calculation of these time derivatives that are required for the computation of the reduced-order model
will be discussed in the next two sections.
After the transformation of the local models and the computation of the new emerging time
derivatives, a parameter-varying reduced-order model for a new parameter value p(t) is obtained
in the online phase by a weighted interpolation between the reduced matrices from (23) according
to
Ẽr(p(t)) = Σ_{i=1}^{k} ωi(p(t)) Êr,i,   Ãnew r(p(t)) = Σ_{i=1}^{k} ωi(p(t)) Ânew r,i,
B̃r(p(t)) = Σ_{i=1}^{k} ωi(p(t)) B̂r,i,   C̃r(p(t)) = Σ_{i=1}^{k} ωi(p(t)) Ĉr,i,   (24)

where Σ_{i=1}^{k} ωi(p(t)) = 1. For simplicity, here we use piecewise linear interpolation of the reduced matrices; higher-order interpolation schemes could also be applied.
4.2.1 Time derivative of V
The time derivative of the projection matrix V(p(t)) can be numerically calculated using a finite
difference approximation. Applying the chain rule first and employing a finite difference method
thereon, the time derivative is given by:
V̇(p(t)) = (∂V/∂p) ṗ = ((V̄t − V̲t) / (p̄t − p̲t)) · ((pt − pt−1) / ∆t).   (25)

Here p̄t and p̲t denote the upper and lower limits of the interval [p̲t, p̄t] in which the parameter vector pt is located at time instant t, and the local bases at these parameter sample points are given by V̄t and V̲t, respectively. The partial derivatives ∂V/∂p for each pair of parameter sample points are calculated in the offline phase of the matrix interpolation approach. In the online phase, the current time derivative V̇(p(t)) is then computed by multiplying the partial derivative of the corresponding parameter interval at time instant t with ṗ, which represents the current velocity of the moving load. Fig. 1 illustrates the aforementioned intervals and the efficient numerical calculation of the time derivative V̇(p(t)) by a finite difference approximation using only precomputed local bases.
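A minimal Python sketch of the online evaluation of (25) for a scalar parameter, using only the precomputed interval-wise partial derivatives, could look as follows:

```python
import numpy as np

def Vdot_online(p, p_prev, dt, sample_points, partial_derivs):
    """Online evaluation of (25): look up the precomputed partial
    derivative dV/dp on the parameter interval containing p and scale it
    by the finite-difference parameter velocity.  A sketch for a scalar
    parameter; partial_derivs[i] = (V_{i+1} - V_i) / (p_{i+1} - p_i)."""
    idx = np.searchsorted(sample_points, p) - 1      # interval [p_i, p_{i+1}]
    idx = int(np.clip(idx, 0, len(partial_derivs) - 1))
    p_dot = (p - p_prev) / dt                        # current load velocity
    return partial_derivs[idx] * p_dot
```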
Figure 1: Graphical representation of the calculation of the time derivative V̇(p(t)) using the local bases Vi computed at the parameter sample points pi.
4.2.2 Time derivative of T
As explained before, in this paper the transformation matrices Ti are calculated with Ti = (R^T Vi)^{−1} := K^{−1}. For the computation of the time derivative Ṫi we make use of the following definition [10, p. 67]:

Definition 1. Let the matrix K be nonsingular. The time derivative of the inverse matrix is then given by dK^{−1}/dt = −K^{−1} (dK/dt) K^{−1}.

This leads to:

Ṫi = dK^{−1}/dt = −(R^T Vi)^{−1} R^T V̇(p(t)) (R^T Vi)^{−1} = −Ti R^T V̇(p(t)) Ti.   (26)

4.3 p(t)MOR by Matrix Interpolation for particular cases
For the reduction of general linear parameter-varying systems the application of time-dependent
parametric projection matrices undoubtedly provides an accurate consideration of the arising
time variability. Their usage, however, involves some difficulties, like the calculation of the
additional derivatives and their incorporation in the numerical simulation of the reduced-order
model. Particular LPV systems with only parameter-varying input and/or output matrices,
arising e.g. in moving load and sensor problems, can efficiently be reduced using the matrix
interpolation approach combined with the usage of parameter-independent projection matrices.
In the following, this technique is briefly explained for some special cases:
Moving Loads
The application of parameter-varying projection matrices V(p(t)) and W(p(t)) for the individual reduction of the local systems within matrix interpolation results in a reduced model, where
all reduced matrices vary with the time-dependent parameter, although the original LPV system
only contains variations in the input matrix. In order to get rid of the emerging derivatives and
only have to interpolate the input matrix in the online phase of matrix interpolation, one-sided
reductions with an output Krylov subspace W = span(W) should be employed.
Moving Sensors
In a similar manner, for the case of a LPV system with only parameter-varying output matrix
C(p(t)) one-sided projections with a single input Krylov subspace V = span(V) computed with
the input matrix should be performed for the reduction of the sampled models during matrix
interpolation. In this way, we obtain parameter-independent projection matrices V = W and
only have to interpolate the output matrix, thus reducing the computational effort in the online
phase.
Moving Loads and Sensors
For the combined moving load and sensor example the application of one-sided projections with
either input or output Krylov subspaces is not helpful, since both the input and output matrices
are parameter-varying in this case. Therefore, parameter-independent projection matrices can
only be calculated using modal truncation. By doing so, the reduced-order model only contains
parameter variations in the input and output reduced matrices like in the original system.
5 Numerical Examples
In this section, we present some numerical results for systems with moving loads.
5.1 Timoshenko beam
The presented reduction approaches are first applied to the finite element model of a simply
supported Timoshenko beam of length L subjected to a moving load.
Figure 2: A simply supported Timoshenko beam subjected to a moving force F(t).
Since the moving force F (t) is applied in the negative z-direction and we are only interested in
the vertical displacement of the beam, the model described in [13] is adapted from a 3D to a 1D
finite element model. Furthermore, both the moving load and/or sensor case are incorporated
into the model, yielding time-dependent input and/or output matrices. The resulting singleinput single-output second-order system is reformulated into a LTV first-order model of the
form
b(t)
E
A
ẋ
x
z }| {
z }| { z}|{
z
}|
{ z}|{
0
F 0
ż
0
F
z
(t) =
(t) +
F (t),
0 M z̈
−K −D ż
b̂(t)
(27)
z
y(t) = ĉ(t)T 0T
(t),
|
{z
} ż
c(t)T
where the arbitrary nonsingular matrix $F \in \mathbb{R}^{2N \times 2N}$ is chosen in our case as $F = K$ with the aim of stability preservation using a one-sided reduction (cf. [13, 6]). The dimension of the original model is then given by $n = 2 \cdot 2N$ with $N$ finite elements.
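For illustration, the block structure of (27) can be assembled as follows; the function names are ours and dense numpy arrays are assumed for simplicity.

```python
# Sketch: assemble the first-order LTV model (27) from M, D, K with F = K.
import numpy as np

def first_order_lti(M, D, K, F=None):
    n2 = M.shape[0]
    if F is None:
        F = K                      # choice F = K for stability preservation
    Z = np.zeros((n2, n2))
    E = np.block([[F, Z], [Z, M]])
    A = np.block([[Z, F], [-K, -D]])
    return E, A

def b_of_t(bhat_t):
    """Time-varying input vector [0; bhat(t)] of the first-order model."""
    return np.concatenate([np.zeros_like(bhat_t), bhat_t])
```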
Moving load case

We first consider the reduction of a beam of length L = 1 m subjected to a point force moving from the tip to the support with a constant velocity v and an amplitude of F(t) = 20 N.
For the numerical simulation we use an implicit Euler scheme with a step size of dt = 0.001 s.
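A minimal sketch of such an implicit Euler scheme for the descriptor system E ẋ = A x + b(t) F(t) is given below; since E and A are time-invariant here, the factorization of (E − dt·A) can be reused in every step. This is an illustration under our own naming, not the simulation code used for the results.

```python
# Sketch: implicit Euler for E x' = A x + b(t) u(t) with fixed step dt.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def implicit_euler(E, A, b_of_t, u_of_t, x0, dt, n_steps):
    lu = lu_factor(E - dt * A)            # factor once, reuse every step
    x, xs = x0.copy(), [x0.copy()]
    for k in range(1, n_steps + 1):
        t = k * dt
        rhs = E @ x + dt * b_of_t(t) * u_of_t(t)
        x = lu_solve(lu, rhs)
        xs.append(x.copy())
    return np.array(xs)
```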
In Fig. 3 the simulation results for the different proposed reduction methods are presented.

Figure 3: Simulation results for the Timoshenko beam with moving load for different reduction methods (FOM; ROM MatrInt V(p(t)) 1-RK; ROM MatrInt V̇, Ṫ 1-RK; ROM MatrInt W 1-RK; ROM approx B + IRKA) and velocities v = 1, 5, 10, 20 m/s, showing the z-deflection of the beam tip over time. Original dimension n = 2 · 2 · 451 = 1804, reduced dimension r = 10. Krylov-based reductions performed with expansion points s0 = 0

We first apply the standard matrix interpolation (MatrInt) approach using k = 76 equidistantly distributed local models with corresponding current input, which are individually reduced applying one-sided projections with input Krylov subspaces (V(p(t))) for r = 10.
The consideration of the theoretically emerging derivatives V̇ and Ṫ according to (23) in the matrix interpolation scheme only yields better results than the standard MatrInt method for large velocities of the moving load. In any case, the application of a single time-independent output Krylov subspace (W) during MatrInt and the two-step method (m̃ = 76) combined with MIMO-IRKA yield the best results (see Table 1).
Moving load and sensor case
Now we consider a larger beam of length L = 50 m with both moving load and sensor. The
observation of the z-deflection of the beam coincides at any time with the position of the moving
load, meaning that c(p(t)) = b(p(t)). First we apply the matrix interpolation approach and
use modal truncation for the individual reduction of the k = 201 sampled models constructed
with the input and output vectors corresponding to each parameter sample point. Since modal
truncation only considers the matrices A and E for the reduction and these matrices do not vary
over time, we only have to compute one single pair of time-independent projection matrices V and W in the offline phase. During the online phase, only the parameter-varying input and output vectors have to be interpolated in order to obtain a reduced-order model for each current position of the load/sensor.

Table 1: Absolute L2 output error norms ‖y − y_r‖_{L2}

Method             v = 1 m/s   v = 5 m/s   v = 10 m/s   v = 20 m/s
MatrInt V(p(t))    4.8e−4      3.0e−3      2.6e−3       2.2e−2
MatrInt V̇, Ṫ       1.5e−2      6.8e−2      1.0e−3       5.4e−3
MatrInt W          3.0e−5      1.8e−5      1.7e−5       1.2e−5
approx B + IRKA    1.0e−4      2.0e−4      5.8e−5       3.9e−5
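For reference, the reported error measure ‖y − y_r‖_{L2} can be approximated from the sampled outputs; using trapezoidal quadrature for the time integral is our assumption.

```python
# Small helper: approximate ||y - y_r||_{L2} from time samples.
import numpy as np

def l2_error(t, y_full, y_reduced):
    return np.sqrt(np.trapz((y_full - y_reduced) ** 2, t))
```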
Next, we further apply the aforementioned two-step method for the reduction. To this end,
the time-varying input and output vectors are first approximated by low-rank matrices B and
C on a coarse finite element grid. To ensure a proper comparability with MatrInt, we choose
the same m̃ = 201 nodes where local models were constructed before. The resulting approximated output y(t) and the approximation errors are depicted in Fig. 4. One can see that the number of chosen columns m̃ is sufficiently large, since the approximation error is adequately small. After that, we apply both two-sided MIMO rational Krylov (2-RK) and MIMO-IRKA for the reduction of the resulting LTI system. Fig. 4 shows the simulated output for the different reduction methods as well as the corresponding absolute and relative L2 errors. Although all results show a similar behaviour, the matrix interpolation approach combined with modal truncation and the two-step method with IRKA lead to the smallest errors. Simulations were also conducted with the extended p(t)MOR approach by matrix interpolation considering the time derivatives as in (23). Unfortunately, these additional terms often render the pencils $(\hat{A}_{\mathrm{new},r,i}, \hat{E}_{r,i})$ unstable, yielding unstable interpolated systems and results.
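The first step of this two-step method can be sketched as follows; the snapshot-SVD construction shown here is one plausible way to obtain the constant low-rank matrix B, with names of our own choosing.

```python
# Sketch: compress snapshots of the time-varying input vector b(t_j),
# collected on a coarse grid, into a constant low-rank matrix B so that
# b(t) ~ B * alpha(t); the system then becomes an LTI MIMO model.
import numpy as np

def lowrank_input(snapshots, tol=1e-8):
    """snapshots: n x m_tilde matrix whose columns are b(t_j) samples."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    B = U[:, :rank]        # constant input matrix of the resulting MIMO system
    return B               # afterwards reduce the LTI system with 2-RK / IRKA
```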
5.2 Beam with moving heat source

We now apply the presented techniques to a second example [12], which describes the heat transfer along a beam of length L with a moving heat source. The temperature is observed at the same position as the heat source, thus c(t) = b(t). In our case, we consider a system
dimension of n = 2500, apply an input heating source of u(t) = 50 ◦ C and use an implicit
Euler scheme with a step size of dt = 1 s for the time integration. Fig. 5 shows the simulation
results, and the absolute and relative errors for the different employed reduction methods. One
interesting observation is that in this case the application of the extended MatrInt approach
with the consideration of the time derivatives yields a slightly better approximation than the
classic MatrInt combined with modal truncation (k = 84). In general, this fact could also
be observed for the previous and some other numerical experiments with higher velocities, as
long as the overall interpolated systems were stable. This slightly better approximation can be explained by the more accurate consideration of the arising time variability using time-dependent projection matrices, as opposed to modal truncation, which does not consider the
moving interactions. The approximation of the time-dependent input and output vectors by
low-rank matrices using m̃ = 84 nodes, and the subsequent application of balanced truncation
(TBR) or MIMO-IRKA for the reduction shows a similar behaviour. Although the extended
MatrInt shows in this case the best results, it is difficult to clearly identify a superior method,
since all presented approaches are suitable for the reduction of systems with moving loads.
12
Figure 4: Simulation results for the Timoshenko beam with moving load and sensor (v = 11 m/s) for different reduction methods (FOM; ROM MatrInt V, W MODAL; FOM approx B, C; ROM approx B, C + 2-RK; ROM approx B, C + IRKA), showing the z-deflection at the current sensor position together with the absolute and relative errors. Original dimension n = 2 · 2 · 1001 = 4004, reduced dimension r = 80. Krylov-based reductions performed with expansion points s0 = 0
Figure 5: Simulation results for the 1D beam with moving heat source for different reduction methods (FOM; ROM MatrInt V, W MODAL; ROM MatrInt V̇, Ṫ 1-RK; FOM approx B, C; ROM approx B, C + TBR; ROM approx B, C + IRKA), showing the temperature at the current sensor position [°C] together with the absolute and relative errors. Original dimension n = 2500, reduced dimension r = 40. Krylov-based reductions performed with expansion points s0 = 0
6 Conclusions
In this paper, we have presented several time-varying model reduction techniques for systems
with moving loads. Such particular, but still frequent problems lead to high-dimensional systems
with time-varying input and/or output matrices. For their reduction, time-dependent projection
matrices can be applied, thus offering an accurate consideration of the time variation, but also leading to an additional derivative in the reduced model which has to be taken into account. Since
moving load problems represent particular LTV systems, we have presented straightforward
reduction approaches for some special cases, where time-independent projection matrices are
calculated and therefore the emerging time derivative is avoided. Systems with moving loads
can also be modeled as special LPV systems, where the input and/or output matrices depend
on a time-varying parameter describing the position of the load. In this context we have derived
a projection-based, time-varying parametric model reduction approach and extended the matrix
interpolation scheme to the parameter-varying case. With the appropriate combination of this
method with the application of parameter-independent projection matrices, special LPV systems
can be efficiently reduced avoiding the time derivatives. The proposed methods have been tested
on two different beam models for both the moving load and/or sensor cases. All techniques have
provided similar satisfactory results, showing that all methods are suitable for the reduction of
systems with moving loads. In particular, the presented straightforward approaches using time-independent projection matrices are very simple, but may be absolutely sufficient for certain
problems. They provide a basis for comparison with more complex techniques that consider the
time variability using time-dependent projection matrices. These advanced techniques should be
investigated more deeply in the future, especially concerning general LTV systems, the increased
computational effort due to the time-dependent projection matrices and derivatives, fast-varying
load variations and stability preservation in the reduced-order model.
Acknowledgements
The authors would like to thank N. Lang, T. Stykel and J. Saak for kindly providing us with the 1D heat transfer model used for the numerical example in Section 5.2. Furthermore, we thank the
former and current members of our model reduction lab for the fruitful discussions.
References
[1] A. C. Antoulas. Approximation of Large-Scale Dynamical Systems. SIAM, Philadelphia,
PA, 2005.
[2] M. Baumann, A. Vasilyev, T. Stykel, and P. Eberhard. Comparison of two model order
reduction methods for elastic multibody systems with moving loads. In Proceedings of the
Institution of Mechanical Engineers, Part K: Journal of Multi-body Dynamics, 2016.
[3] U. Baur, P. Benner, and L. Feng. Model order reduction for linear and nonlinear systems: a system-theoretic perspective. Archives of Computational Methods in Engineering,
21(4):331–358, 2014.
[4] P. Benner, S. Gugercin, and K. Willcox. A survey of projection-based model reduction
methods for parametric dynamical systems. SIAM Review, 57(4):483–531, 2015.
[5] M. Cruz Varona, M. Geuss, and B. Lohmann. p(t)MOR: Time-varying parametric model
order reduction and applications for moving loads. In Proceedings of the 8th Vienna Conference on Mathematical Modelling (MATHMOD), volume 48, pages 677–678. Elsevier, 2015.
15
[6] M. Cruz Varona, M. Geuss, and B. Lohmann. Zeitvariante parametrische Modellordnungsreduktion am Beispiel von Systemen mit wandernder Last. In Methoden und Anwendungen der Regelungstechnik – Erlangen-Münchener Workshops 2013 und 2014, pages
57–70. B. Lohmann und G. Roppenecker (Hrsg.), Shaker-Verlag, 2015.
[7] M. Cruz Varona and B. Lohmann. Time-varying parametric model order reduction by matrix interpolation. In Proceedings of the MoRePaS 2015–Model Reduction of Parametrized
Systems III, 2015.
[8] M. Fischer and P. Eberhard. Application of parametric model reduction with matrix interpolation for simulation of moving loads in elastic multibody systems. Advances in Computational Mathematics, pages 1–24, 2014.
[9] M. Fischer and P. Eberhard. Interpolation-based parametric model order reduction for
material removal in elastic multibody systems. In Proceedings ECCOMAS Thematic Conference on Multibody Dynamics, 2015.
[10] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press,
Baltimore, 4th edition, 2013.
[11] N. Lang, J. Saak, and P. Benner. Model order reduction for systems with moving loads.
at-Automatisierungstechnik, 62(7):512–522, June 2014.
[12] N. Lang, J. Saak, and T. Stykel. Balanced truncation model reduction for linear timevarying systems. MCMDS, to appear.
[13] H. Panzer, J. Hubele, R. Eid, and B. Lohmann. Generating a parametric finite element model of a 3D cantilever Timoshenko beam using MATLAB. TRAC, Lehrstuhl für
Regelungstechnik, Technische Universität München, 09 2009.
[14] H. Panzer, J. Mohring, R. Eid, and B. Lohmann. Parametric model order reduction by
matrix interpolation. at–Automatisierungstechnik, 58(8):475–484, 8 2010.
[15] H. Sandberg and A. Rantzer. Balanced truncation of linear time-varying systems. IEEE
Transactions on Automatic Control, 49(2):217–229, 2004.
[16] S. Shokoohi, L. M. Silverman, and P. M. van Dooren. Linear time-variable systems: balancing and model reduction. IEEE Transactions on Automatic Control, 28(8):810–822,
1983.
[17] T. Stykel and A. Vasilyev. A two-step model reduction approach for mechanical systems
with moving loads. Journal of Computational and Applied Mathematics, 297:85–97, 2016.
[18] T. Tamarozzi, G. Heirman, and W. Desmet. An on-line time dependent parametric model
order reduction scheme with focus on dynamic stress recovery. Computer Methods in Applied
Mechanics and Engineering, 268(0):336 – 358, 2014.
Subspace Network: Deep Multi-Task Censored Regression for Modeling Neurodegenerative Diseases

Mengying Sun¹, Inci M. Baytas¹, Liang Zhan², Zhangyang Wang³, Jiayu Zhou¹
¹ Computer Science and Engineering, Michigan State University
² Computer Engineering Program, University of Wisconsin-Stout
³ Computer Science and Engineering, Texas A&M University
{sunmeng2,baytasin}@msu.edu, zhanl@uwstout.edu, atlaswang@tamu.edu, jiayuz@msu.edu
ABSTRACT
Over the past decade a wide spectrum of machine learning models have been developed to model neurodegenerative diseases, associating biomarkers, especially non-intrusive neuroimaging markers,
with key clinical scores measuring the cognitive status of patients.
Multi-task learning (MTL) has been commonly utilized by these
studies to address high dimensionality and small cohort size challenges. However, most existing MTL approaches are based on linear
models and suffer from two major limitations: 1) they cannot explicitly consider upper/lower bounds in these clinical scores; 2) they
lack the capability to capture complicated non-linear interactions
among the variables. In this paper, we propose Subspace Network,
an efficient deep modeling approach for non-linear multi-task censored regression. Each layer of the subspace network performs a
multi-task censored regression to improve upon the predictions
from the last layer via sketching a low-dimensional subspace to
perform knowledge transfer among learning tasks. Under mild assumptions, for each layer the parametric subspace can be recovered
using only one pass of training data. Empirical results demonstrate
that the proposed subspace network quickly picks up the correct parameter subspaces, and outperforms the state of the art in predicting neurodegenerative clinical scores using information from brain imaging.
CCS CONCEPTS
• Computing methodologies → Multi-task learning; Machine learning;

KEYWORDS
Censoring, Subspace, Multi-task Learning, Deep Network

ACM Reference Format:
Mengying Sun, Inci M. Baytas, Liang Zhan, Zhangyang Wang, and Jiayu Zhou. 2018. Subspace Network: Deep Multi-Task Censored Regression for Modeling Neurodegenerative Diseases. In Proceedings of ACM Conference (Conference'17). ACM, New York, NY, USA, 9 pages. https://doi.org/10.475/123_4

1 INTRODUCTION

Recent years have witnessed increasing interest in applying machine learning (ML) techniques to analyze biomedical data. Such
data-driven approaches deliver promising performance improvements in many challenging predictive problems. For example, in
the field of neurodegenerative diseases such as Alzheimer’s disease
and Parkinson’s disease, researchers have exploited ML algorithms
to predict the cognitive functionality of the patients from the brain
imaging scans, e.g., using the magnetic resonance imaging (MRI) as
in [1, 36, 40]. A key finding points out that there are typically various types of prediction targets (e.g., cognitive scores), and they can
be jointly learned using multi-task learning (MTL), e.g., [6, 9, 36],
where the predictive information is shared and transferred among
related models to reinforce their generalization performance.
Two challenges persist despite the progress of applying MTL
in disease modeling problems. First, it is important to notice that
clinical targets, different from typical regression targets, are often
naturally bounded. For example, in the output of Mini-Mental State
Examination (MMSE) test, a key reference for deciding cognitive
impairments, ranges from 0 to 30 (a healthy subject): a smaller
score indicates a higher level of cognitive dysfunction (please refer
to [31]). Other cognitive scores, such as Clinical Dementia Rating
Scale (CDR) [11] and Alzheimer’s Disease Assessment Scale-Cog
(ADAS-Cog) [23], also have specific upper and lower bounds. Most
existing approaches, e.g., [22, 36, 40], relied on linear regression
without considering the range constraint, partially due to the fact
that mainstream MTL models for regression, e.g., [2, 13, 36, 39],
are developed using the least squares loss and cannot be directly
extended to censored regressions. As the second challenge, a majority of MTL research focused on linear models because of computational efficiency and theoretical guarantees. However, linear models
cannot capture the complicated non-linear relationship between
features and clinical targets. For example, [3] showed the early
onset of Alzheimer’s disease to be related to single-gene mutations
on chromosomes 21, 14, and 1, and the effects of such mutations on
the cognitive impairment are hardly linear (please refer to [19, 30]).
Recent advances in multi-task deep neural networks [26, 33, 38]
provide a promising direction, but their model complexity and demand for huge numbers of training samples prohibit their broader usage in clinical cohort studies.
To address the aforementioned challenges, we propose a novel
and efficient deep modeling approach for non-linear multi-task
censored regression, called Subspace Network (SN), highlighting the
following multi-fold technical innovations:
• It efficiently builds up a deep network in a layer-by-layer feedforward fashion, and in each layer considers a censored regression problem. The layer-wise training allows us to grow a deep
model efficiently.
• It explores a low-rank subspace structure that captures task relatedness for better predictions. A critical difference in subspace decoupling between previous studies such as [18, 28] and our method lies in our assumption of a low-rank structure in the parameter space among tasks, rather than in the original feature space.
• By leveraging the recent advances in online subspace sensing [18,
28], we show that the parametric subspace can be recovered for each layer by feeding only one pass of the training data, which
allows more efficient layer-wise training.
Synthetic experiments verify the technical claims of the proposed SN, and it outperforms various state-of-the-art methods in modeling neurodegenerative diseases on real datasets.

Figure 1: The proposed subspace network via hierarchical subspace sketching and refinement. (Each layer k sketches the parameters V(k) within the subspace spanned by U(k) and applies a ReLU; the original features x are fed to every layer.)
2 MULTI-TASK CENSORED REGRESSION VIA PARAMETER SUBSPACE SKETCHING AND REFINEMENT
In censored regression, we are given a set of $N$ observations $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$ of $D$-dimensional feature vectors $x_i \in \mathbb{R}^D$ and $T$ corresponding outcomes $y_i \in \mathbb{R}_+^T$, where each outcome $y_{i,t} \in \mathbb{R}_+$, $t \in \{1, \dots, T\}$, can be a cognitive score (e.g., MMSE and ADAS-Cog) or another biomarker of interest such as proteomics¹. For each outcome, the censored regression assumes a nonlinear relationship between the features and the outcome through a rectified linear unit (ReLU) transformation, i.e., $y_{i,t} = \mathrm{ReLU}(W_t^\top x_i + \epsilon)$, where $W_t \in \mathbb{R}^D$ is the coefficient for input features, $\epsilon$ is i.i.d. noise, and ReLU is defined by $\mathrm{ReLU}(z) = \max(z, 0)$. We can thus collectively represent the censored regression for multiple tasks by:

$$y_i = \mathrm{ReLU}(W x_i + \epsilon), \tag{1}$$

where $W = [W_1, \dots, W_T]^\top \in \mathbb{R}^{T \times D}$ is the coefficient matrix. We consider the regression problem for each outcome as a learning task. One commonly used task relationship assumption is that the transformation matrix $W$ belongs to a linear low-rank subspace $\mathcal{U}$. The subspace allows us to represent $W$ as the product of two matrices, $W = UV$, where the columns of $U = [U_1, \dots, U_T]^\top \in \mathbb{R}^{T \times R}$ span the linear subspace $\mathcal{U}$, and $V \in \mathbb{R}^{R \times D}$ is the embedding coefficient. We note that the output $y$ can be entry-wise decoupled, such that for each component $y_{i,t} = \mathrm{ReLU}(U_t^\top V x_i + \epsilon)$. By assuming Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2)$, we derive the following likelihood function:

$$\Pr(y_{i,t}, x_i \mid U_t, V) = \phi\!\left(\frac{y_{i,t} - U_t^\top V x_i}{\sigma}\right) \mathbb{I}(y_{i,t} \in (0, \infty)) + \left[1 - Q\!\left(\frac{0 - U_t^\top V x_i}{\sigma}\right)\right] \mathbb{I}(y_{i,t} = 0),$$

where $\phi$ is the probability density function of the standardized Gaussian $\mathcal{N}(0, 1)$ and $Q$ is the standard Gaussian tail. $\sigma$ controls how accurately the low-rank subspace assumption can fit the data. Note that other noise models can be assumed here as well. The likelihood of an $(x_i, y_i)$ pair is thus given by:

$$\Pr(y_i, x_i \mid U, V) = \prod_{t=1}^T \left[ \phi\!\left(\frac{y_{i,t} - U_t^\top V x_i}{\sigma}\right) \mathbb{I}(y_{i,t} \in (0, \infty)) + \left(1 - Q\!\left(-\frac{U_t^\top V x_i}{\sigma}\right)\right) \mathbb{I}(y_{i,t} = 0) \right].$$

The likelihood function allows us to estimate the subspace $U$ and coefficient $V$ from data $\mathcal{D}$. To enforce a low-rank subspace, one common approach is to impose a trace norm on $UV$, where the trace norm of a matrix $A$ is defined by $\|A\|_* = \sum_j s_j(A)$ and $s_j(A)$ is the $j$-th singular value of $A$. Since $\|UV\|_* = \min_{U,V} \frac{1}{2}(\|U\|_F^2 + \|V\|_F^2)$, e.g., see [18, 29], the objective function of the multi-task censored regression problem is given by:

$$\min_{U,V} \; -\sum_{i=1}^N \log \Pr(y_i, x_i \mid U, V) + \frac{\lambda}{2}\left(\|U\|_F^2 + \|V\|_F^2\right). \tag{2}$$

¹ Without loss of generality, in this paper we assume that outcomes are lower censored at 0. By using variants of Tobit models, e.g., as in [28], the proposed algorithms and analysis can be extended to other censored models with minor changes in the loss function.
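For concreteness, the generative model (1) with a low-rank $W = UV$ can be sketched as follows, using the synthetic settings later reported in Section 4.1; this is a toy illustration of ours, not the released code.

```python
# Sketch: synthesize censored multi-task data Y = ReLU(X W^T + eps),
# with W = U V of rank R (sizes follow the synthetic study).
import numpy as np

rng = np.random.default_rng(0)
N, D, T, R, sigma = 5000, 200, 100, 10, 3.0

X = rng.standard_normal((N, D))
U = rng.standard_normal((T, R))
V = rng.standard_normal((R, D))
W = U @ V                                       # low-rank coefficient matrix

Y = np.maximum(X @ W.T + sigma * rng.standard_normal((N, T)), 0.0)
```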
2.1 An online algorithm

We propose to solve the objective in (2) via the block coordinate descent approach, which is reduced to iteratively updating the following two subproblems:

$$V^+ = \arg\min_V \; -\sum_{i=1}^N \log \Pr(y_i, x_i \mid U^-, V) + \frac{\lambda}{2}\|V\|_F^2, \tag{P:V}$$

$$U^+ = \arg\min_U \; -\sum_{i=1}^N \log \Pr(y_i, x_i \mid U, V^+) + \frac{\lambda}{2}\|U\|_F^2. \tag{P:U}$$
Algorithm 1 Single-layer parameter subspace sketching and refinement.
Require: Training data $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, parameters $\lambda$ and $R$
Ensure: parameter subspace $U$, parameter sketch $V$
  Initialize $U^-$ at random
  for $i = 1, \dots, N$ do
    // 1. Sketching parameters in the current subspace
    $V^+ = \arg\min_V -\log \Pr(y_i, x_i \mid U^-, V) + \frac{\lambda}{2}\|V\|_F^2$
    // 2. Parallel subspace refinement $\{U_t\}_{t=1}^T$
    for $t = 1, \dots, T$ do
      $U_t^+ = \arg\min_{U_t} -\log \Pr(y_{i,t}, x_i \mid U_t, V^+) + \frac{\lambda}{2}\|U_t\|_2^2$
    end for
    Set $U^- = U^+$, $V^- = V^+$
  end for

Define the instantaneous cost of the $i$-th datum:

$$g(x_i, y_i, U, V) = -\log \Pr(x_i, y_i \mid U, V) + \frac{\lambda}{2}\|U\|_F^2 + \frac{\lambda}{2}\|V\|_F^2,$$

and the online optimization form of (2) can be recast as an empirical cost minimization given below:

$$\min_{U,V} \frac{1}{N} \sum_{i=1}^N g(x_i, y_i, U, V).$$

According to the analysis in Section 2.2, one pass of the training data can warrant the subspace learning problem. We outline the solver for each subproblem as follows:

Problem (P:V) sketches parameters in the current space. We solve (P:V) using gradient descent. The parameter sketching couples all the subspace dimensions in $V$ (not decoupled as in [28]), and thus we need to solve this collectively. The update of $V$ ($V^+$) can be obtained by solving the online problem given below:

$$\min_V g(x_i, y_i; U^-, V) \equiv -\sum_{t=1}^T \log \Pr(y_{i,t}, x \mid U_t^-, V) + \frac{\lambda}{2}\|V\|_F^2$$
$$= -\sum_{t=1}^T \log\!\left[ \phi\!\left(\frac{y_{i,t} - (U_t^-)^\top V x}{\sigma}\right) \mathbb{I}(y_{i,t} \in (0,\infty)) + \left(1 - Q\!\left(\frac{-(U_t^-)^\top V x}{\sigma}\right)\right) \mathbb{I}(y_{i,t} = 0) \right] + \frac{\lambda}{2}\|V\|_F^2.$$

$V^+$ can be computed by the following gradient update: $V^+ = V^- - \eta \nabla_V g(x_i, y_i; U^-, V^+)$, where the gradient is given by:

$$\nabla_V g(x_i, y_i; U^-, V) = \lambda V + \sum_{t=1}^T \begin{cases} -\dfrac{y_{i,t} - (U_t^-)^\top V x_i}{\sigma^2}\, U_t^- x_i^\top & y_{i,t} \in (0, \infty) \\[2mm] \dfrac{\phi(z_{i,t})}{\sigma\left[1 - Q(z_{i,t})\right]}\, U_t^- x_i^\top & y_{i,t} = 0 \end{cases}$$

where $z_{i,t} = \sigma^{-1}\!\left(-(U_t^-)^\top V x\right)$. The algorithm for solving (P:V) is summarized in Alg. 2.

Algorithm 2 Gradient descent algorithm for problem (P:V).
Require: Training data $(x_i, y_i)$, $U^-$, step size $\eta$
Ensure: sketch $V$
  Initialize $V^-$ at random.
  // 1. Perform gradient step and update the current solution of $V$.
  for $t = 1, \dots, T$ do
    Compute $z_{i,t} = \sigma^{-1}\!\left(-(U_t^-)^\top V x_i\right)$.
    Compute the gradient for task $t$:
    $$\nabla g_t(x_i, y_{i,t}; U^-, V^+) = \begin{cases} -\dfrac{y_{i,t} - (U_t^-)^\top V x_i}{\sigma^2}\, U_t^- x_i^\top & y_{i,t} \in (0, \infty) \\[2mm] \dfrac{\phi(z_{i,t})}{\sigma\left[1 - Q(z_{i,t})\right]}\, U_t^- x_i^\top & y_{i,t} = 0 \end{cases}$$
  end for
  // 2. Update the current sketch $V^-$:
  $V^+ = V^- - \eta \left[\sum_{t=1}^T \nabla g_t(x, y_t; U^-, V^+) + \lambda V\right]$
  Set $V^- = V^+$

Problem (P:U) refines the subspace $U^+$ based on sketching. We solve (P:U) using stochastic gradient descent (SGD). We note that the problem is decoupled for different subspace dimensions $t = 1, \dots, T$ (i.e., rows of $U$). With careful parallel design, this procedure can be done very efficiently. Given a training data point $(x_i, y_i)$, the problem related to the $t$-th subspace basis is:

$$\min_{U_t} g_t(x_i, y_{i,t}; U_t, V^+) \equiv -\log \Pr(y_{i,t}, x_i \mid U_t, V^+) + \frac{\lambda}{2}\|U_t\|_2^2$$
$$= -\log\!\left[ \phi\!\left(\frac{y_{i,t} - U_t^\top V^+ x_i}{\sigma}\right) \mathbb{I}(y_{i,t} \in (0, \infty)) + \left(1 - Q\!\left(\frac{-U_t^\top V^+ x_i}{\sigma}\right)\right) \mathbb{I}(y_{i,t} = 0) \right] + \frac{\lambda}{2}\|U_t\|_2^2.$$

We can revise the subspace by the following gradient update: $U_t^+ = U_t^- - \mu_t \nabla_{U_t} g_t(x_i, y_{i,t}; U_t, V^+)$, where the gradient is given by:

$$\nabla_{U_t} g_t(x_i, y_{i,t}; U_t, V) = \lambda U_t + \begin{cases} -\dfrac{y_{i,t} - U_t^\top V^+ x}{\sigma^2}\, V^+ x_i & y_{i,t} \in (0, \infty) \\[2mm] \dfrac{\phi(z_{i,t})}{\sigma\left[1 - Q(z_{i,t})\right]}\, V^+ x_i & y_{i,t} = 0 \end{cases}$$

where $z_{i,t} = \sigma^{-1}\!\left(-U_t^\top V^+ x_i\right)$. We summarize the procedure in Algorithm 1 and show in Section 2.2 that under mild assumptions this procedure will be able to capture the underlying subspace structure in the parameter space with just one pass of the data.
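The per-sample updates of Algorithms 1 and 2 can be condensed into the following Python sketch. It follows the gradients derived above, using scipy's standard normal pdf and cdf (note that $1 - Q(z) = \Phi(z)$). This is a minimal illustration of ours rather than the released implementation, and it omits numerical safeguards (e.g., guarding $\Phi(z)$ against underflow).

```python
# Sketch of one online pass over a sample (x, y) for the censored model:
# x has shape (D,), y has shape (T,), U is (T, R), V is (R, D).
import numpy as np
from scipy.stats import norm

def online_step(x, y, U, V, sigma=1.0, lam=1e-3, eta=1e-3, mu=1e-3):
    def coef(U, V):
        m = U @ (V @ x)                       # m_t = U_t^T V x, length T
        z = -m / sigma
        return np.where(y > 0,
                        -(y - m) / sigma**2,                  # observed y_t > 0
                        norm.pdf(z) / (sigma * norm.cdf(z)))  # censored y_t = 0

    # (P:V): gradient step on the sketch; all tasks are coupled
    c = coef(U, V)
    V = V - eta * (lam * V + np.outer(U.T @ c, x))

    # (P:U): refinement, decoupled across the T subspace rows
    c = coef(U, V)
    U = U - mu * (lam * U + np.outer(c, V @ x))
    return U, V
```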
2.2 Theoretical results

We establish both asymptotic and non-asymptotic convergence properties for Algorithm 1. The proof scheme is inspired by a series of previous works: [14, 16–18, 27, 28]. We briefly present the proof sketch; more details can be found in the Appendix. At each iteration $i = 1, 2, \dots, N$, we sample $(x_i, y_i)$, and let $U^i, V^i$ denote the intermediate $U$ and $V$, to be differentiated from $U_t, V_t$, which are the $t$-th columns of $U, V$. For the proof feasibility, we assume that $\{(x_i, y_i)\}_{i=1}^N$ are sampled i.i.d., and the subspace sequence $\{U^i\}_{i=1}^N$ lies in a compact set.

Asymptotic Case: To estimate $U$, the Stochastic Gradient Descent (SGD) iterations can be seen as minimizing the approximate cost $\frac{1}{N}\sum_{i=1}^N g'(x_i, y_i, U, V)$, where $g'$ is a tight quadratic surrogate for $g$ based on the second-order Taylor approximation around $U^{N-1}$. Furthermore, $g$ can be shown to be smooth, by bounding its first-order and second-order gradients w.r.t. each $U_t$ (similar to Appendix 1 of [28]).
Following [16, 18], it can then be established that, as $N \to \infty$, the subspace sequence $\{U^i\}_{i=1}^N$ asymptotically converges to a stationary point of the batch estimator, under a few mild conditions. We can sequentially show: 1) $\sum_{i=1}^N g'(x_i, y_i, U^i, V^i)$ asymptotically converges to $\sum_{i=1}^N g(x_i, y_i, U^i, V^i)$, according to the quasi-martingale property in the almost sure sense, owing to the tightness of $g'$; 2) the first point implies convergence of the associated gradient sequence, due to the regularity of $g$; 3) $g_t(x_i, y_i, U, V)$ is bi-convex for the block variables $U_t$ and $V$.

Non-Asymptotic Case: When $N$ is finite, [17] asserts that the distance between successive subspace estimates will vanish as fast as $O(1/i)$: $\|U^i - U^{i-1}\|_F \leq \frac{B}{i}$, for some constant $B$ that is independent of $i$ and $N$. Following [28] to leverage the unsupervised formulation of regret analysis as in [14, 27], we can similarly obtain a tight regret bound that will again vanish as $N \to \infty$.
3 SUBSPACE NETWORK VIA HIERARCHICAL SKETCHING AND REFINEMENT
The single layer model in (1) has limited capability to capture the
highly nonlinear regression relationships, as the parameters are linearly linked to the subspace except for a ReLU operation. However,
the single-layer procedure in Algorithm 1 has provided a building
block, based on which we can develop an efficient algorithm to train
a deep subspace network (SN) in a greedy fashion. We thus propose
a network expansion procedure to overcome this limitation.
After we obtain the parameter subspace U and sketch V for the
single-layer case (1), we project the data points by x̄ = ReLU(UV x).
A straightforward idea of the expansion is to use (x̄, y) as the new
samples to train another layer. Let $f_{[k-1]}$ denote the network structure we obtained before the $k$-th expansion starts, $k = 1, 2, \dots, K-1$; the expansion can recursively stack more ReLU layers:

$$f_{[k]}(x) = \mathrm{ReLU}\left(U_{[k]} V_{[k]} f_{[k-1]}(x) + \epsilon\right), \tag{3}$$
However, we observe that simply stacking layers by repeating (3) many times can cause substantial information loss and degrade the generalization performance, especially since our training is layer-by-layer without "looking back" (i.e., top-down joint tuning). Inspired by deep residual networks [10] that exploit "skip connections" to pass lower-level data and features to higher levels, we concatenate the original samples with the newly transformed, censored outputs after each expansion, i.e., reformulating $\bar{x} = [\mathrm{ReLU}(UVx); x]$ (similar constructions can be found in [41]). The new formulation after the expansion is given below:

$$f_{[k]}(x) = \mathrm{ReLU}\left(U_{[k]} V_{[k]} \left[f_{[k-1]}(x); x\right] + \epsilon\right).$$

Algorithm 3 Network expansion via hierarchical parameter subspace sketching and refinement.
Require: Training data $\mathcal{D} = \{(x_i, y_i)\}$, target network depth $K$.
Ensure: The deep subspace network $f$
  Set $f_{[0]}(x) = y$ and solve $f_{[0]}$ using Algorithm 1.
  for $k = 1, \dots, K-1$ do
    // 1. Subspace sketching based on the current subspace using Algorithm 1:
    $\{U_{[k]}^*, V_{[k]}^*\} = \arg\min_{U_{[k]}, V_{[k]}} \mathbb{E}_{(x,y)\sim\mathcal{D}}\, \ell\!\left(y, \mathrm{ReLU}\!\left(U_{[k]} V_{[k]} f_{[k-1]}(x)\right)\right)$
    // 2. Expand the layer using the refined subspace as our new network:
    $f_{[k]}(x) = \mathrm{ReLU}\!\left(U_{[k]}^* V_{[k]}^* f_{[k-1]}(x)\right)$
  end for
  return $f = f_{[K]}$

We summarize the network expansion process in Alg. 3. The architecture of the resulting SN is illustrated in Fig. 1. Compared
to the single layer model (1), SN gradually refines the parameter
subspaces by multiple stacked nonlinear projections. It is expected
to achieve superior performance due to the higher learning capacity,
and the proposed SN can also be viewed as a gradient boosting
method. Meanwhile, the layer-wise low-rank subspace structural
prior would further improve generalization compared to naive
multi-layer networks.
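A compact sketch of this expansion loop, including the skip connection, is given below; here single_layer stands for any routine implementing the Algorithm 1 updates (e.g., the online_step sketch above, looped over the data) and is a placeholder of ours.

```python
# Sketch of Algorithm 3: grow the subspace network layer by layer,
# concatenating the original features X back after every censored layer.
import numpy as np

def expand_network(X, Y, K, single_layer):
    """X: N x D features, Y: N x T censored targets, K: target depth."""
    layers, H = [], X
    for k in range(K):
        U, V = single_layer(H, Y)              # fit one censored layer on H
        layers.append((U, V))
        Z = np.maximum(H @ (U @ V).T, 0.0)     # ReLU(U V h) for every sample
        H = np.hstack([Z, X])                  # skip connection [ReLU(UVh); x]
    return layers
```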
4 EXPERIMENT
The subspace network code and scripts for generating the results in
this section are available at https://github.com/illidanlab/subspace-net.
4.1 Simulations on Synthetic Data
Subspace recovery in a single layer model. We first evaluate
the subspace recovered by the proposed Algorithm 1 using synthetic
data. We generated $X \in \mathbb{R}^{N \times D}$, $U \in \mathbb{R}^{T \times R}$ and $V \in \mathbb{R}^{R \times D}$, all as i.i.d. random Gaussian matrices. The target matrix $Y \in \mathbb{R}^{N \times T}$ was then synthesized using (1). We set $N = 5{,}000$, $D = 200$, $T = 100$, $R = 10$, and random noise as $\epsilon \sim \mathcal{N}(0, 3^2)$.
Figure 2a shows the plot of the subspace difference between the ground-truth $U$ and the learned subspace $U^i$ throughout the iterations, i.e., $\|U - U^i\|_F / \|U\|_F$ w.r.t. $i$. This result verifies that Algorithm 1 is able to correctly find and smoothly converge to the underlying low-rank subspace of the synthetic data. The objective values throughout the online training process of Algorithm 1 are plotted in Figure 2b. We further show the plot of iteration-wise subspace differences, defined as $\|U^i - U^{i-1}\|_F / \|U\|_F$, in Figure 2c, which complies with the $O(1/i)$ result in our non-asymptotic analysis. Moreover, the distribution of the correlations between recovered weights and true weights for all tasks is given in Figure 3, with most predicted weights having correlations with the ground truth above 0.9.
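For completeness, the comparison metrics used here and in Table 1 below can be computed as in the following sketch; the helper functions are our own, assuming the bases are given as column matrices.

```python
# Sketches of the evaluation metrics: mutual coherence (max and mean
# absolute column correlation) and the relative subspace difference.
import numpy as np

def mutual_coherence(A, B):
    An = A / np.linalg.norm(A, axis=0)
    Bn = B / np.linalg.norm(B, axis=0)
    C = np.abs(An.T @ Bn)
    return C.max(), C.mean()

def subspace_difference(U_true, U_est):
    return np.linalg.norm(U_true - U_est) / np.linalg.norm(U_true)
```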
Subspace recovery in a multi-layer subspace network. We regenerated synthetic data by repeatedly applying (1) three times,
each time following the same setting as the single-layer model. A
three-layer SN was then learned using Algorithm 3. As one simple
baseline, a multi-layer perceptron (MLP) is trained, whose three
hidden layers have the same dimensions as the three ReLU layers
of the SN. Inspired by [24, 32, 34], we then applied low-rank matrix
factorization to each layer of MLP, with the same desired rank R,
creating the factorized MLP (f-MLP) baseline that has the identical
architecture (including both ReLU hidden layers and linear bottleneck layers) to SN. We further re-trained the f-MLP on the same
data from end to end, leading to another baseline, named retrained
factorized MLP (rf-MLP).
Figure 2: Experimental results on subspace convergence. (a) Subspace differences, w.r.t. the index i; (b) convergence of Algorithm 1, w.r.t. the index i; (c) iteration-wise subspace differences, w.r.t. the index i.

Figure 3: (a) Predicted weight vs. true weight for task 1; (b) predicted weight vs. true weight for task 2; (c) distribution of correlations between predicted weights and true weights for all tasks.
Table 1: Comparison of subspace differences for each layer of SN, f-MLP, and rf-MLP.

         Subspace Difference        Maximum Mutual Coherence    Mean Mutual Coherence
         SN      f-MLP   rf-MLP     SN      f-MLP   rf-MLP      SN      f-MLP   rf-MLP
Layer 1  0.0313  0.0315  0.0317     0.7608  0.7727  0.7895      0.2900  0.2725  0.2735
Layer 2  0.0321  0.0321  0.0321     0.8283  0.7603  0.7654      0.2882  0.2820  0.2829
Layer 3  0.0312  0.0315  0.0313     0.8493  0.7233  0.7890      0.2586  0.2506  0.2485

Table 1 evaluates the subspace recovery fidelity in three layers, using three different metrics: (1) the maximum mutual coherence of all column pairs from two matrices, defined in [5] as a classical measurement of how correlated the two matrices' column subspaces are; (2) the mean mutual coherence of all column pairs from two matrices; (3) the subspace difference, defined the same as in the single-layer case². Note that the two mutual-coherence-based metrics are immune to linear transformations of the subspace coordinates, to which the ℓ2-based subspace difference might be fragile. SN achieves clear overall advantages under all three measurements over f-MLP and rf-MLP. More notably, while the performance margin of SN in subspace difference seems to be small, the much sharper margins in the two (more robust) mutual-coherence-based measurements suggest that the subspaces recovered by SN are significantly better aligned with the ground truth.

² The higher the two mutual-coherence-based metrics, the better the subspace recovery; this is different from the subspace difference, where smaller is better.

Benefits of Going Deep. We re-generate synthetic data again in the same way as in the first single-layer experiment; yet differently, we now aim to show that a deep SN will boost performance over single-layer subspace recovery, even though the data generation does not follow a known multi-layer model. We compare SN (both 1-layer and 3-layer) with two carefully chosen sets of state-of-the-art approaches: (1) single
and multi-task “shallow” models; (2) deep models. For the first set,
the least squares (LS) is treated as a naive baseline, while ridge (LS
+ ℓ2 ) and lasso (LS + ℓ1 ) regressions are considered for shrinkage or
variable selection purposes; censor regression, also known as the Tobit model, is a non-linear method to predict bounded targets, e.g., [4]. Multi-task models with regularizations on the trace norm
(Multi Trace) and ℓ2,1 norm (Multi ℓ2,1 ) have been demonstrated
to be successful on simultaneous structured/sparse learning, e.g.,
[35, 37].3 We also verify the benefits of accounting for boundedness
of targets (Uncensored vs. Censored) in both single-task and multitask settings, with best performance reported for each scenario (LS
+ ℓ1 for single-task and Multi Trace for multi-task). For the set of
deep model baselines, we construct three DNNs for fair comparison:
i) A 3-layer fully connected DNN with the same architecture as
SN, with a plain MSE loss; ii) A 3-layer fully connected DNN as i)
with ReLU added for output layer before feeding into the MSE loss,
which naturally implements non-negativity censored training and
evaluation; iii) A factorized and re-trained DNN from ii), following
the same procedure as rf-MLP in the multi-layer synthetic experiment. Apparently, ii) and iii) are constructed to verify whether DNN also
³ Least squares, ridge, lasso, and censor regression are implemented with the Matlab optimization toolbox. MTL methods are implemented through MALSAR [39] with parameters carefully tuned.
Table 2: Average normalized mean square error under different approaches for synthetic data.

Percent   Single Task (Shallow)                                                 Multi Task (Shallow)
          Uncensored (LS + ℓ1)  Censored (LS + ℓ1)  Nonlinear Censored (Tobit)  Uncensored (Multi Trace)  Censored (Multi Trace)
40%       0.1412 (0.0007)       0.1127 (0.0010)     0.0428 (0.0003)             0.1333 (0.0009)           0.1053 (0.0027)
50%       0.1384 (0.0005)       0.1102 (0.0010)     0.0408 (0.0004)             0.1323 (0.0010)           0.1054 (0.0042)
60%       0.1365 (0.0005)       0.1088 (0.0009)     0.0395 (0.0003)             0.1325 (0.0012)           0.1031 (0.0046)
70%       0.1349 (0.0005)       0.1078 (0.0010)     0.0388 (0.0004)             0.1315 (0.0013)           0.1024 (0.0042)
80%       0.1343 (0.0011)       0.1070 (0.0012)     0.0383 (0.0006)             0.1308 (0.0008)           0.1040 (0.0011)

Percent   Deep Neural Network                                                   Subspace Net (SN)
          DNN i (naive)    DNN ii (censored)  DNN iii (censored + low-rank)     Layer 1          Layer 3
40%       0.0623 (0.0041)  0.0489 (0.0035)    0.0431 (0.0041)                   0.0390 (0.0004)  0.0369 (0.0002)
50%       0.0593 (0.0048)  0.0462 (0.0042)    0.0400 (0.0039)                   0.0389 (0.0007)  0.0366 (0.0003)
60%       0.0587 (0.0053)  0.0455 (0.0054)    0.0395 (0.0050)                   0.0388 (0.0006)  0.0364 (0.0003)
70%       0.0590 (0.0071)  0.0447 (0.0043)    0.0386 (0.0058)                   0.0388 (0.0006)  0.0363 (0.0003)
80%       0.0555 (0.0057)  0.0431 (0.0053)    0.0380 (0.0057)                   0.0390 (0.0008)  0.0364 (0.0005)
Table 3: Average normalized mean square error at each layer for subspace network (R = 10) for synthetic data.

Perc.  Layer 1          Layer 2          Layer 3          Layer 10         Layer 20
40%    0.0390 (0.0004)  0.0381 (0.0005)  0.0369 (0.0002)  0.0368 (0.0002)  0.0368 (0.0002)
50%    0.0389 (0.0007)  0.0379 (0.0005)  0.0366 (0.0003)  0.0366 (0.0003)  0.0365 (0.0003)
60%    0.0388 (0.0006)  0.0378 (0.0004)  0.0364 (0.0003)  0.0364 (0.0003)  0.0363 (0.0003)
70%    0.0388 (0.0006)  0.0378 (0.0005)  0.0363 (0.0003)  0.0363 (0.0003)  0.0362 (0.0003)
80%    0.0390 (0.0008)  0.0378 (0.0006)  0.0364 (0.0005)  0.0363 (0.0005)  0.0363 (0.0005)
benefits from the censored target and the low-rank assumption,
respectively.
We performed 10-fold random-sampling validation on the same
dataset, i.e., randomly splitting into training and validation data 10
times. For each split, we fitted the model on the training data and evaluated
the performance on validation data. Average normalized mean
square error (ANMSE) across all tasks was obtained as the overall
performance for each split. For methods without hyper parameters
(least square and censor regression), an average of ANMSE for 10
splits was regarded as the final performance; for methods with
tunable parameters, e.g., λ in lasso, we performed a grid search
on λ values and chose the optimal ANMSE result. We considered
different splitting sizes with training samples containing [40%, 50%,
60%, 70%, 80%] of all the samples.
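The evaluation protocol can be sketched as follows; normalizing each task's MSE by the target variance is our reading of "normalized", since the exact normalizer is not spelled out, and fit_predict is a placeholder for any of the compared methods.

```python
# Sketch of the 10-fold random-sampling validation with ANMSE.
import numpy as np

def anmse(Y_true, Y_pred):
    mse = np.mean((Y_true - Y_pred) ** 2, axis=0)
    return np.mean(mse / np.var(Y_true, axis=0))   # normalization: assumption

def random_split_eval(X, Y, fit_predict, train_frac=0.8, n_rep=10, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_rep):
        idx = rng.permutation(len(X))
        n_tr = int(train_frac * len(X))
        tr, va = idx[:n_tr], idx[n_tr:]
        scores.append(anmse(Y[va], fit_predict(X[tr], Y[tr], X[va])))
    return np.mean(scores), np.std(scores)
```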
Table 2 further compares the performance of all approaches.
Standard deviation of 10 trials is given in parenthesis (same for all
following tables). We can observe that: (1) all censored models significantly outperform their uncensored counterparts, verifying the
necessity of adding censoring targets for regression. Therefore, we
will use censored baselines hereinafter, unless otherwise specified;
(2) the more structured MTL models tend to outperform single task
models by capturing task relatedness. That is also evidenced by
the performance margin of DNN iii over DNN i; (3) the nonlinear models are undoubtedly more favorable: we even see the single-task Tobit model outperform MTL models; (4) as a nonlinear, censored MTL model, SN combines the best of them all, accounting for its superior performance over all competitors. In particular, even a 1-layer SN already produces performance comparable to the 3-layer DNN iii (which is also a nonlinear, censored MTL model trained with back-propagation, with three times the parameter count of SN), thanks to SN's theoretically solid online algorithm for sketching subspaces.
Furthermore, increasing the number of layers in SN from 2 to 20 demonstrates that SN can also benefit from growing depth without an end-to-end training scheme. As Table 3 reveals, SN steadily improves with
Table 4: Running time on synthetic data.

Method          Time (s)   Platform
Least Square    0.02       Matlab
LS+ℓ2           0.02       Matlab
LS+ℓ1           18.4       Matlab
Multi-trace     32.3       Matlab
Multi-ℓ21       27.0       Matlab
Censor          1680       Matlab
SN (per layer)  109        Python
DNN             659        Tensorflow
more layers, until reaching a plateau at ∼ 5 layers (as the underlying data distribution is relatively simple here). The observation is
consistent among all splits.
Computation speed. All experiments run on the same machine (one six-core Intel Xeon E5-1650 v3 [3.50 GHz], 12 logical cores, 128 GB RAM). GPU acceleration is enabled for the DNN baselines, while SN has not exploited the same acceleration yet. The running time for a single round of training on the synthetic data (N = 5000, D = 200, T = 100) is given in Table 4. Training each layer of SN costs 109 seconds on average. As we can see, SN improves generalization performance without a significant computation time burden. Furthermore, we could accelerate SN further by reading data in batch mode and performing parallel updates.
4.2 Experiments on Real data
We evaluated SN in a real clinical setting to build models for the
prediction of important clinical scores representing a subject’s cognitive status and signaling the progression of Alzheimer’s disease
(AD), from structural Magnetic Resonance Imaging (sMRI) data.
AD is one major neurodegenerative disease that accounts for 60 to
80 percent of dementia. The National Institutes of Health has thus focused on studies investigating brain and fluid biomarkers of the disease, and has supported the long-running project Alzheimer's Disease Neuroimaging Initiative (ADNI) since 2003. We used the ADNI-1 cohort (http://adni.loni.usc.edu/). In the experiments, we used the 1.5 Tesla structural MRI collected at baseline, and performed cortical reconstruction and volumetric segmentation with FreeSurfer following the protocol in [12]. For each MRI image, we extracted
138 features representing the cortical thickness and surface areas
of region-of-interests (ROIs) using the Desikan-Killiany cortical
atlas [8]. After preprocessing, we obtained a dataset containing
670 samples and 138 features. These imaging features were used
to predict a set of 30 clinical scores including ADAS scores [23]
at baseline and future (6 months from baseline), baseline Logical
Memory from Wechsler Memory Scale IV [25], Neurobattery scores (i.e., immediate recall total score and Rey Auditory Verbal Learning Test scores), and the Neuropsychiatric Inventory [7] at baseline and future.
Table 5: Average normalized mean square error at each layer for subspace network (R = 5) for real data.

Percent  Layer 1          Layer 2          Layer 3          Layer 5          Layer 10
40%      0.2016 (0.0057)  0.2000 (0.0039)  0.1981 (0.0031)  0.1977 (0.0031)  0.1977 (0.0031)
50%      0.1992 (0.0040)  0.1992 (0.0053)  0.1971 (0.0038)  0.1968 (0.0036)  0.1967 (0.0035)
60%      0.1990 (0.0061)  0.1990 (0.0047)  0.1967 (0.0038)  0.1964 (0.0039)  0.1964 (0.0038)
70%      0.1981 (0.0046)  0.1966 (0.0052)  0.1953 (0.0039)  0.1952 (0.0039)  0.1951 (0.0038)
80%      0.1970 (0.0034)  0.1967 (0.0044)  0.1956 (0.0040)  0.1955 (0.0039)  0.1953 (0.0039)
Table 6: Average normalized mean square error under different approaches for real data.

Percent  Least Square     Single Task (Censored)                   Multi Task (Censored)
                          LS + ℓ1          Tobit (Nonlinear)       Multi Trace      Multi ℓ2,1
40%      0.3874 (0.0203)  0.2393 (0.0056)  0.3870 (0.0306)         0.2572 (0.0156)  0.2006 (0.0099)
50%      0.3119 (0.0124)  0.2202 (0.0049)  0.3072 (0.0144)         0.2406 (0.0175)  0.2002 (0.0132)
60%      0.2779 (0.0123)  0.2112 (0.0055)  0.2719 (0.0114)         0.2596 (0.0233)  0.2072 (0.0204)
70%      0.2563 (0.0108)  0.2037 (0.0042)  0.2516 (0.0108)         0.2368 (0.0362)  0.2017 (0.0116)
80%      0.2422 (0.0112)  0.2005 (0.0054)  0.2384 (0.0099)         0.2176 (0.0171)  0.2009 (0.0050)

Percent  Deep Neural Network                                                  Subspace Net (SN)
         DNN i (naive)    DNN ii (censored)  DNN iii (censored + low-rank)    Layer 1          Layer 3
40%      0.2549 (0.0442)  0.2388 (0.0121)    0.2113 (0.0063)                  0.2016 (0.0057)  0.1981 (0.0031)
50%      0.2236 (0.0066)  0.2208 (0.0062)    0.2127 (0.0118)                  0.1992 (0.0040)  0.1971 (0.0038)
60%      0.2215 (0.0076)  0.2200 (0.0076)    0.2087 (0.0102)                  0.1990 (0.0061)  0.1967 (0.0038)
70%      0.2149 (0.0077)  0.2141 (0.0079)    0.2093 (0.0137)                  0.1981 (0.0046)  0.1953 (0.0039)
80%      0.2132 (0.0138)  0.2090 (0.0079)    0.2069 (0.0135)                  0.1970 (0.0034)  0.1956 (0.0040)
Table 7: Average normalized mean square error under different rank assumptions for real data.

SN:
Percent  R = 1            R = 3            R = 5            R = 10
40%      0.2052 (0.0030)  0.1993 (0.0036)  0.1981 (0.0031)  0.2010 (0.0044)
50%      0.2047 (0.0029)  0.1983 (0.0034)  0.1971 (0.0038)  0.2001 (0.0046)
60%      0.2052 (0.0033)  0.1988 (0.0047)  0.1967 (0.0038)  0.1996 (0.0052)
70%      0.2043 (0.0044)  0.1975 (0.0042)  0.1953 (0.0039)  0.1990 (0.0057)
80%      0.2058 (0.0051)  0.1977 (0.0042)  0.1956 (0.0040)  0.1990 (0.0058)

DNN iii (censored + low-rank):
Percent  R = 1            R = 3            R = 5            R = 10
40%      0.2322 (0.0146)  0.2360 (0.0060)  0.2113 (0.0063)  0.2196 (0.0124)
50%      0.2298 (0.0093)  0.2256 (0.0127)  0.2127 (0.0118)  0.2235 (0.0142)
60%      0.2244 (0.0132)  0.2277 (0.0099)  0.2087 (0.0102)  0.2145 (0.0208)
70%      0.2178 (0.0129)  0.2177 (0.0115)  0.2093 (0.0137)  0.2083 (0.0127)
80%      0.2256 (0.0117)  0.2250 (0.0079)  0.2069 (0.0135)  0.2158 (0.0183)
Table 8: Average normalized mean square error for non-calibrated vs. calibrated SN for real data (6 layers).

Percent  Non-calibrated   Calibrated
40%      0.1993 (0.0034)  0.1977 (0.0031)
50%      0.1987 (0.0043)  0.1967 (0.0036)
60%      0.1991 (0.0044)  0.1964 (0.0039)
70%      0.1982 (0.0042)  0.1951 (0.0038)
80%      0.1984 (0.0041)  0.1954 (0.0039)
Calibration. In MTL formulations we typically assume that noise
variance σ 2 is the same across all tasks, which may not be true in
many cases. To deal with heterogeneous σ 2 among tasks, we design
a calibration step in our optimization process, where we estimate a task-specific $\hat{\sigma}_t^2$ using $\|y - \hat{y}\|_2^2 / N$ before the ReLU, use it as the input for the next layer, and repeat this layer-wise. We compare the performance of both non-calibrated and calibrated methods.
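A minimal sketch of this estimate is given below; computing the per-task noise level from the pre-ReLU residuals follows the description above, while the exact plumbing into the next layer is omitted.

```python
# Sketch: task-specific noise calibration from pre-ReLU residuals.
import numpy as np

def calibrate_sigma(Y, Y_hat_pre_relu):
    """Per-task sigma_t^2 = ||y_t - yhat_t||_2^2 / N, returned as shape (T,)."""
    N = Y.shape[0]
    return np.sum((Y - Y_hat_pre_relu) ** 2, axis=0) / N
```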
Performance. We adopted the two sets of baselines used in the last synthetic experiment for the real-world data. Different from the synthetic data, where the low-rank structure was predefined, for real data there is no ground-truth rank available, and we have to try different rank assumptions. Table 8 compares the performance of σ²-non-calibrated versus calibrated models. We observe a clear improvement from assuming different σ² across tasks. Table 6 shows the results for all comparison methods, with SN outperforming all others. Table 5 shows the SN performance growth as the number of layers increases. Table 7 further reveals the performance of DNNs and SN using varying rank estimates on the real data.
As expected, the U-shape curve suggests that an overly low rank
may not be informative enough to recover the original weight space,
while a high-rank structure cannot enforce as strong a structural
prior. However, the overall robustness of SN to rank assumptions
is fairly remarkable: its performance under all ranks is competitive,
consistently outperforming DNNs under the same rank assumptions and other baselines.
Qualitative Assessment. From the multi-task learning perspective, the subspaces serve as the shared component for transferring
predictive knowledge among the censored learning tasks. The subspaces thus capture important predictive information for predicting cognitive changes. We normalized the magnitude of the subspaces into the range of [−1, 1] and visualized them in brain mappings. The 5 lowest-level subspaces in $V_1$ are the most important ones and are illustrated in Figure 4.
Figure 4: Brain mapping of 5 lowest-level subspaces identified in the proposed Subspace Network.
We find that each subspace captures very different information.
In the first subspace, the volumes of the right banks of the superior temporal sulcus, which is found to be involved in prodromal AD [15], of the rostral middle frontal gyrus, which shows the highest Aβ loads in AD pathology [21], and of the inferior parietal lobule, which was found to have increased S-glutathionylated proteins in a proteomics study [20], have significant magnitude.
strong association between AD pathology and brain regions of
large magnitude in other subspaces. The subspaces in remaining
levels and detailed clinical analysis will be available in a journal
extension of this paper.
5 CONCLUSIONS AND FUTURE WORK
In this paper, we proposed a Subspace Network (SN), an efficient
deep modeling approach for non-linear multi-task censored regression, where each layer of the subspace network performs a
multi-task censored regression to improve upon the predictions
from the last layer via sketching a low-dimensional subspace to
perform knowledge transfer among learning tasks. We show that
under mild assumptions, for each layer we can recover the parametric subspace using only one pass of training data. We demonstrate
empirically that the subspace network can quickly capture the correct parameter subspaces, and outperforms the state of the art in predicting neurodegenerative clinical scores from brain imaging. Based on
similar formulations, the proposed method can be easily extended
to cases where the targets have nonzero bounds, or both lower and
upper bounds.
APPENDIX

We hereby give more details for the proofs of both asymptotic and non-asymptotic convergence properties for Algorithm 1 to recover the latent subspace $U$. The proofs heavily rely on a series of previous results in [14, 16–18, 27, 28], and many key results are directly referred to hereinafter for conciseness. We include the proofs for the manuscript to be self-contained.

At iteration $i = 1, 2, \dots, N$, we sample $(x_i, y_i)$, and let $U^i, V^i$ denote the intermediate $U$ and $V$, to be differentiated from $U_t, V_t$, which are the $t$-th columns of $U, V$. For the proof feasibility, we assume that $\{(x_i, y_i)\}_{i=1}^N$ are sampled i.i.d., and the subspace sequence $\{U^i\}_{i=1}^N$ lies in a compact set.

Proof of Asymptotic Properties

For infinite data streams with $N \to \infty$, we recall the instantaneous cost of the $i$-th datum:

$$g_i(x_i, y_i, U, V) = -\log \Pr(x_i, y_i \mid U, V) + \frac{\lambda}{2}\|U\|_F^2 + \frac{\lambda}{2}\|V\|_F^2,$$

and the online optimization form recast as an empirical cost minimization:

$$\min_U \frac{1}{N} \sum_{i=1}^N g_i(x_i, y_i, U, V).$$

The Stochastic Gradient Descent (SGD) iterations can be seen as minimizing the approximate cost:

$$\min_U \frac{1}{N} \sum_{i=1}^N g_i'(x_i, y_i, U, V),$$

where $g_N'$ is a tight quadratic surrogate for $g_N$ based on the second-order Taylor approximation around $U^{N-1}$:

$$g_N'(x_N, y_N, U, V) = g_N(x_N, y_N, U^{N-1}, V) + \langle \nabla_U g_N(x_N, y_N, U^{N-1}, V), U - U^{N-1} \rangle + \frac{\alpha_N}{2}\|U - U^{N-1}\|_F^2,$$

with $\alpha_N \geq \|\nabla_U^2 g_N(x_N, y_N, U^{N-1}, V)\|$. $g_N'$ is further recognized as a locally tight upper-bound surrogate for $g_N$, with locally tight gradients. Following Appendix 1 of [28], we can show that $g_N$ is smooth, with its first-order and second-order gradients bounded w.r.t. each $U^N$.

With the above results, the convergence of subspace iterates can be proven in the same regime developed in [18], whose main inspirations came from [16], which established convergence of an online dictionary learning algorithm using the martingale sequence theory. In a nutshell, the proof procedure proceeds by first showing that $\sum_{i=1}^N g_i'(x_i, y_i, U^i, V^i)$ converges to $\sum_{i=1}^N g_i(x_i, y_i, U^i, V^i)$ asymptotically, according to the quasi-martingale property in the almost sure sense, owing to the tightness of $g'$. It then implies convergence of the associated gradient sequence, due to the regularity of $g$.

Meanwhile, we notice that $g_i(x_i, y_i, U, V)$ is bi-convex for the block variables $U_t$ and $V$ (see Lemma 2 of [28]). Therefore, due to the convexity of $g_N$ w.r.t. $V$ when $U = U^{N-1}$ is fixed, the parameter sketches $V$ can also be updated exactly per iteration.

All the above combined, we can claim the asymptotic convergence for the iterations of Algorithm 1: as $N \to \infty$, the subspace sequence $\{U^i\}_{i=1}^N$ asymptotically converges to a stationary point of the batch estimator, under a few mild conditions.

Proof of Non-Asymptotic Properties

For finite data streams, we rely on the unsupervised formulation of regret analysis [14, 27] to assess the performance of online iterates. Specifically, at iteration $t$ ($t \leq N$), we use the previous $U^{t-1}$ to span the partial data at $i = 1, 2, \dots, t$. Prompted by the alternating nature of the iterations, we adopt a variant of the unsupervised regret to assess the goodness of online subspace estimates in representing the partially available data. With $g_t(x_t, y_t, U^{t-1}, V)$ being the loss incurred by the estimate $U^{t-1}$ for predicting the $t$-th datum, the cumulative online loss for a stream of size $T$ is given by:

$$\bar{C}_T := \frac{1}{T} \sum_{\tau=1}^T g_\tau(x_\tau, y_\tau, U^{\tau-1}, V). \tag{4}$$

Further, we will assess the cost of the last estimate $U^T$ using:

$$\hat{C}_T = \frac{1}{T} \sum_{\tau=1}^T g_\tau(x_\tau, y_\tau, U^T, V). \tag{5}$$

We define $C_T$ as the batch estimator cost. For the sequence $\{U^t\}_{t=1}^T$, we define the online regret:

$$R_T := \hat{C}_T - \bar{C}_T. \tag{6}$$

We investigate the convergence rate of the sequence $\{R_T\}$ to zero as $T$ grows. Due to the nonconvexity of the online subspace iterates, it is challenging to directly analyze how fast the online cumulative loss $\bar{C}_t$ approaches the optimal batch cost $C_t$. As [28] advocates, we instead investigate whether $\hat{C}_t$ converges to $\bar{C}_t$. That is established by first referring to Lemma 2 of [17]: the distance between successive subspace estimates will vanish as fast as $O(1/t)$: $\|U^t - U^{t-1}\|_F \leq \frac{B}{t}$, for some constant $B$ that is independent of $t$ and $N$. Following the proof of Proposition 2 in [28], we can similarly show that if $\{U^t\}_{t=1}^T$ and $\{V_t x_t\}_{t=1}^T$ are uniformly bounded, i.e., $\|U^t\|_F \leq B_1$ and $\|V_t x_t\|_2 \leq B_2$ for all $t \leq T$, with constants $B_1, B_2 > 0$, then by choosing a constant step size $\mu_t = \mu$ we have a bounded regret:

$$R_T \leq \frac{B^2(\ln(T) + 1)^2}{2\mu T} + \frac{5B^2}{6\mu T}.$$

This concludes the proof.
ACKNOWLEDGMENTS

This research is supported in part by the National Science Foundation under grants IIS-1565596 and IIS-1615597, and by the Office of Naval Research under grants N00014-17-1-2265 and N00014-14-1-0631.
REFERENCES
[1] Ehsan Adeli-Mosabbeb, Kim-Han Thung, Le An, Feng Shi, and Dinggang Shen.
2015. Robust feature-sample linear discriminant analysis for brain disorders
diagnosis. In NIPS. 658–666.
[2] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2007. Multitask feature learning. NIPS 19 (2007), 41.
[3] Alzheimer’s Association et al. 2013. 2013 Alzheimer’s disease facts and figures.
Alzheimer’s & dementia 9, 2 (2013), 208–245.
[4] Dimitris Berberidis, Vassilis Kekatos, and Georgios B Giannakis. 2016. Online
censoring for large-scale regressions with application to streaming big data. TSP
64, 15 (2016), 3854–3867.
[5] Emmanuel Candes and Justin Romberg. 2007. Sparsity and incoherence in compressive sampling. Inverse problems 23, 3 (2007), 969.
[6] Rich Caruana. 1998. Multitask learning. In Learning to learn. Springer, 95–133.
[7] Jeffrey L Cummings. 1997. The Neuropsychiatric Inventory Assessing psychopathology in dementia patients. Neurology 48, 5 Suppl 6 (1997), 10S–16S.
[8] Rahul S Desikan, Florent Ségonne, Bruce Fischl, et al. 2006. An automated labeling
system for subdividing the human cerebral cortex on MRI scans into gyral based
regions of interest. Neuroimage 31, 3 (2006), 968–980.
[9] Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi–task
learning. In SIGKDD. ACM, 109–117.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual
learning for image recognition. In CVPR. 770–778.
[11] Charles P Hughes, Leonard Berg, Warren L Danziger, et al. 1982. A new clinical
scale for the staging of dementia. The British journal of psychiatry 140, 6 (1982),
566–572.
[12] Clifford R Jack, Matt A Bernstein, Nick C Fox, Paul Thompson, et al. 2008. The
Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. of mag. res.
imag. 27, 4 (2008), 685–691.
[13] Ali Jalali, Sujay Sanghavi, Chao Ruan, and Pradeep K Ravikumar. 2010. A dirty
model for multi-task learning. In NIPS. 964–972.
[14] Shiva P Kasiviswanathan, Huahua Wang, Arindam Banerjee, and Prem Melville.
2012. Online l1-dictionary learning with application to novel document detection.
In NIPS. 2258–2266.
[15] Ronald J Killiany, Teresa Gomez-Isla, Mark Moss, Ron Kikinis, Tamas Sandor,
Ferenc Jolesz, Rudolph Tanzi, Kenneth Jones, Bradley T Hyman, and Marilyn S
Albert. 2000. Use of structural magnetic resonance imaging to predict who will
get Alzheimer’s disease. Annals of neurology 47, 4 (2000), 430–439.
[16] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. 2010. Online
learning for matrix factorization and sparse coding. JMLR 11, Jan (2010), 19–60.
[17] Morteza Mardani, Gonzalo Mateos, and Georgios B Giannakis. 2013. Dynamic
anomalography: Tracking network anomalies via sparsity and low rank. J. of Sel.
To. in Sig. Proc. 7, 1 (2013), 50–66.
[18] Morteza Mardani, Gonzalo Mateos, and Georgios B Giannakis. 2015. Subspace
learning and imputation for streaming big data matrices and tensors. TSP 63, 10
(2015), 2663–2677.
[19] CAR Martins, A Oulhaj, CA De Jager, and JH Williams. 2005. APOE alleles predict
the rate of cognitive decline in Alzheimer disease A nonlinear model. Neurology
65, 12 (2005), 1888–1893.
[20] Shelley F Newman, Rukhsana Sultana, Marzia Perluigi, Rafella Coccia, Jian Cai,
William M Pierce, Jon B Klein, Delano M Turner, and D Allan Butterfield. 2007.
An increase in S-glutathionylated proteins in the Alzheimer’s disease inferior
parietal lobule, a proteomics approach. Journal of neuroscience research 85, 7
(2007), 1506–1514.
[21] James AR Nicoll, David Wilkinson, Clive Holmes, Phil Steart, Hannah Markham,
and Roy O Weller. 2003. Neuropathology of human Alzheimer disease after
immunization with amyloid- β peptide: a case report. Nature medicine 9, 4 (2003),
448.
[22] Stéphane P Poulin, Rebecca Dautoff, John C Morris, et al. 2011. Amygdala atrophy
is prominent in early Alzheimer’s disease and relates to symptom severity. Psy.
Res.: Neur. 194, 1 (2011), 7–13.
[23] Wilma G Rosen, Richard C Mohs, and Kenneth L Davis. 1984. A new rating scale
for Alzheimer’s disease. The American journal of psychiatry (1984).
[24] Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, et al. 2013. Low-rank matrix
factorization for deep neural network training with high-dimensional output
targets. In ICASSP. IEEE, 6655–6659.
[25] D. Wechsler. 2009. Wechsler Memory Scale—Fourth Edition (WMS-IV). New York: Psychological Corporation.
[26] Michael L Seltzer and Jasha Droppo. 2013. Multi-task learning in deep neural
networks for improved phoneme recognition. In ICASSP. IEEE, 6965–6969.
[27] Shai Shalev-Shwartz et al. 2012. Online learning and online convex optimization.
Foundations and Trends® in Machine Learning 4, 2 (2012), 107–194.
[28] Yanning Shen, Morteza Mardani, and Georgios B Giannakis. 2016. Online Categorical Subspace Learning for Sketching Big Data with Misses. arXiv preprint
arXiv:1609.08235 (2016).
[29] Nathan Srebro, Jason Rennie, and Tommi S Jaakkola. 2005. Maximum-margin
matrix factorization. In NIPS. 1329–1336.
[30] Robert A Sweet, Howard Seltman, James E Emanuel, et al. 2012. Effect of
Alzheimer’s disease risk genes on trajectories of cognitive function in the Cardiovascular Health Study. Ame. J. of Psyc. 169, 9 (2012), 954–962.
[31] Tom N Tombaugh and Nancy J McIntyre. 1992. The mini-mental state examination: a comprehensive review. Journal of the American Geriatrics Society 40, 9
(1992), 922–935.
[32] Zhangyang Wang, Jianchao Yang, Hailin Jin, et al. 2015. Deepfont: Identify your
font from an image. In MM. ACM, 451–459.
[33] Zhizheng Wu, Cassia Valentini-Botinhao, Oliver Watts, and Simon King. 2015.
Deep neural networks employing multi-task learning and stacked bottleneck
features for speech synthesis. In ICASSP. IEEE, 4460–4464.
[34] Jian Xue, Jinyu Li, and Yifan Gong. 2013. Restructuring of deep neural network
acoustic models with singular value decomposition.. In Interspeech. 2365–2369.
[35] Haiqin Yang, Irwin King, and Michael R Lyu. 2010. Online learning for multi-task
feature selection. In CIKM. ACM, 1693–1696.
[36] Daoqiang Zhang, Dinggang Shen, Alzheimer’s Disease Neuroimaging Initiative,
et al. 2012. Multi-modal multi-task learning for joint prediction of multiple
regression and classification variables in Alzheimer’s disease. NeuroImage 59, 2
(2012), 895–907.
[37] Tianzhu Zhang, Bernard Ghanem, Si Liu, and Narendra Ahuja. 2013. Robust
visual tracking via structured multi-task sparse learning. IJCV 101, 2 (2013),
367–383.
[38] Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. 2014. Facial
landmark detection by deep multi-task learning. In ECCV. Springer, 94–108.
[39] Jiayu Zhou, Jianhui Chen, and Jieping Ye. 2011. Malsar: Multi-task learning via
structural regularization. Arizona State University 21 (2011).
[40] Jiayu Zhou, Lei Yuan, Jun Liu, and Jieping Ye. 2011. A multi-task learning
formulation for predicting disease progression. In SIGKDD. ACM, 814–822.
[41] Zhi-Hua Zhou and Ji Feng. 2017. Deep forest: Towards an alternative to deep
neural networks. arXiv preprint arXiv:1702.08835 (2017).
An Incremental Self-Organizing Architecture
for Sensorimotor Learning and Prediction
arXiv:1712.08521v2 [] 9 Mar 2018
Luiza Mici, German I. Parisi, and Stefan Wermter
Abstract—During visuomotor tasks, robots must compensate
for temporal delays inherent in their sensorimotor processing
systems. Delay compensation becomes crucial in a dynamic
environment where the visual input is constantly changing, e.g.,
during interaction with a human demonstrator. For this purpose, the robot must be equipped with a prediction mechanism
for using the acquired perceptual experience to estimate possible
future motor commands. In this paper, we present a novel
neural network architecture that learns prototypical visuomotor
representations and provides reliable predictions on the basis of
the visual input. These predictions are used to compensate for the
delayed motor behavior in an online manner. We investigate the
performance of our method with a set of experiments comprising
a humanoid robot that has to learn and generate visually
perceived arm motion trajectories. We evaluate the accuracy
in terms of mean prediction error and analyze the response of
the network to novel movement demonstrations. Additionally, we
report experiments with incomplete data sequences, showing the
robustness of the proposed architecture in the case of a noisy
and faulty visual sensor.
Index Terms—Self-organized networks, hierarchical learning,
motion prediction
I. INTRODUCTION

REAL-TIME interaction with the environment requires robots to adapt their motor behavior according to perceived events. However, each sensorimotor cycle of the robot
is affected by an inherent latency introduced by the processing
time of sensors, transmission time of signals, and mechanical
constraints [1][2][3]. Due to this latency, robots exhibit a
discontinuous motor behavior which may compromise the
accuracy and execution time of the assigned task.
For social robots, delayed motor behavior makes human-robot interaction (HRI) asynchronous and less natural. Synchronization of movements during HRI may increase rapport
and endow humanoid robots with the ability to collaborate
with humans during daily tasks [4]. A possible solution to the
sensorimotor latency is the application of predictive mechanisms which accumulate information from the robot's perceptual
and motor experience and learn an internal model which
estimates possible future motor states [5][6]. The learning of
these models in an unsupervised manner and their adaptation
throughout the acquisition of new sensorimotor information
remains an open challenge.
Latencies between perception and possible motor behavior occur in human beings [7] as well. Such discrepancies
Authors are with the Department of Informatics, Knowledge Technology, University of Hamburg, Vogt-Koelln-Strasse 30, 22527 Hamburg, Germany. E-mail: {mici, parisi, wermter}@informatik.uni-hamburg.de
Preprint submitted to IEEE Transactions on Cognitive and Developmental
Systems (TCDS)
are caused by neural transmission delays and are constantly
compensated by predictive mechanisms in our sensorimotor
system that account for both motor prediction and anticipation
of the target movement. Miall et al. [8] have proposed that
the human cerebellum is capable of estimating the effects
of a motor command through an internal action simulation
and a prediction model. Furthermore, there are additional
mechanisms for visual motion extrapolation which account
for the anticipation of the future position and movement of
the target [9]. Not only do we predict sensorimotor events in
our everyday tasks, but we also constantly adjust our delay
compensation mechanisms to the sensory feedback [10] and
to the specific task [11].
Recently, there has been a considerable growth of learning-based prediction techniques, which mainly operate in a “learn
then predict” approach, i.e., typical motion patterns are extracted and learned from training data sequences and then
learned patterns are used for prediction [2][12][13][14]. The
main issue with this approach is that the adaptation of the
learned models is interrupted by the prediction stage. However,
it is desirable for a robot operating in natural environments
to be able to learn incrementally, i.e., over a lifetime of
observations, and to refine the accumulated knowledge over
time. Therefore, the development of learning-based predictive
methods accounting for both incremental learning and predictive behavior still needs to be fully investigated.
In this work, we propose a novel architecture that learns
sensorimotor patterns and predicts the future motor states in
an on-line manner. We evaluate the architecture in the context
of an imitation task in an HRI scenario. In this scenario,
body motion patterns demonstrated by a human demonstrator
are mapped to trajectories of robot joint angles and then
learned for subsequent imitation by a humanoid robot. We
approach the demonstration of the movements through motion
capture with a depth sensor, which provides us with reliable
estimations and tracking of a 3D human body pose. Thus,
the three-dimensional joint positions of the skeleton model
constitute the input to the architecture. The learning module
captures spatiotemporal dependencies through a hierarchy of
Growing When Required (GWR) [15] networks, which has
been successfully applied for the classification of human
activities [16][17]. The learning algorithm processes incoming robot joint angles and progressively builds a dictionary
of motion segments. Finally, an extended GWR algorithm,
implemented at the last layer of our architecture, approximates
a prediction function and utilizes the learned motion segments
for predicting forthcoming motor commands.
We evaluate our system on a dataset of three subjects performing 10 arm movement patterns. We study the prediction
accuracy of our architecture while being continuously trained.
Experimental results show that the proposed architecture can
adapt quickly to an unseen pattern and can provide accurate
predictions albeit continuously incorporating new knowledge.
Moreover, we show that the system can maintain its performance even when training takes place with missing sensory
information.
II. RELATED WORK
growing extensions of the SOM such as the GWR algorithm
[15]. The GWR algorithm has the advantage of a nonfixed, but
varying topology that requires no specification of the number
of neurons in advance. Moreover, the prediction capability of
the self-organizing approaches in the case of multidimensional
data sequences has not been thoroughly analyzed in the literature. In the current work, we present experimental results in
the context of a challenging robotic task, whereby real-world
sensorimotor sequences have to be learned and predicted.
A. Motion prediction
Motion analysis and prediction are an integral part of robotic
platforms that counterbalance the imminent sensorimotor latency. Well-known methods for tracking and prediction are the
Kalman Filter models, as well as their extended versions which
assume non-linearity of the system, and the Hidden Markov
Models (HMM). Kalman filter-based prediction techniques
require a precise kinematic or dynamic model that describes
how the state of an object evolves while being subject to a
set of given control commands. HMMs describe the temporal
evolution of a process through a finite set of states and transition probabilities. Predictive approaches based on dynamic
properties of the objects are not able to provide correct long-term predictions of human motion [18] due to the fact that
human motion also depends on other higher-level factors than
kinematic constraints, such as plans or intentions.
There are some alternatives to approaches based on probabilistic frameworks in the literature and neural networks are
probably the most popular ones. Neural networks are known to
be able to learn universal function approximations and thereby
predict non-linear data even though dynamic properties of a
system or state transition probabilities are not known [19][3].
For instance, Multilayer Perceptrons (MLP) and Radial Basis Function (RBF) networks as well as Recurrent Neural
Networks have found successful applications as predictive
approaches [12][20][13][2]. A subclass of neural network
models, namely the Self-Organizing Map (SOM) [21], is able
to perform local function approximation by partitioning the
input space and learning the dynamics of the underlying
process in a localized region. The advantage of the SOM-based methods is their ability to achieve long-term predictions
at much less expensive computational time [22].
Johnson and Hogg [23] first proposed the use of multilayer
self-organizing networks for the motion prediction of a tracked
object. Their model consisted of a bottom SOM layer learning
to represent the object states and the higher SOM layer learning motion trajectories through the leaky integration of neuron
activations over time. Similar approaches were proposed later
by Sumpter and Bulpitt [24] and Hue et al. [25], who modeled
time explicitly by adding lateral connections between neurons
in the state layer, obtaining performances comparable to that
of the probabilistic models.
Several other approaches use SOMs extended with temporal associative memory techniques [20], e.g., associating to
each neuron a linear Autoregressive (AR) model [26][27].
A drawback which is common to these approaches is their
assumption of knowing a priori the number of movement
patterns to be learned. This issue can be mitigated by adopting
B. Incremental learning of motion patterns
In the context of learning motion sequences, an architecture
capable of incremental learning should identify unknown patterns and adapt its internal structure in consequence. This topic
has been the focus of a number of studies on programming
by demonstration (PbD) [28]. Kulić et al. [29] used HMMs
for segmenting and representing motion patterns together with
a clustering algorithm that learns in an incremental fashion
based on intra-model distances. In a more recent approach, the
authors organized motion patterns as leaves of a directed graph
where edges represented temporal transitions [30]. However,
the approach was built upon automatic segmentation which
required observing the complete demonstrated task, thereby
becoming task-dependent. A number of other works have also
adapted HMMs to the problem of incremental learning of
human motion [31][32][33][34]. The main drawback of these
methods is their requirement for knowing a priori the number
of motions to be learned or the number of Markov models
comprising the learning architecture.
Ogata et al. [35] proposed a model that considers the
case of long-term incremental learning. In their work, a
recurrent neural network was used to learn a navigation task
in cooperation with a human partner. The authors introduced
a new training method for the recursive neural network in
order to avoid the problem of memory corruption during new
training data acquisition. Calinon et al. [36] showed that the
Gaussian Mixture Regression (GMR) technique can be successfully applied for encoding demonstrated motion patterns
incrementally through a Gaussian Mixture Model (GMM)
tuned with an expectation-maximization (EM) algorithm. The
main limitation of this method is the need to specify in advance
the number and complexity of tasks in order to find an optimal
number of Gaussian components. Therefore, Khansari-Zadeh
and Billard [37] suggested a learning procedure capable of
modelling demonstrated motion sequences through an adaptive
GMM. Cederborg et al. [38] suggested to perform a local
partitioning of the input space through kd-trees and training
several local GMR models.
However, for high-dimensional data, partitioning of input
space in a real-time system requires additional computational
time. Regarding this issue, it is convenient to adopt self-organized network-based methods that perform, in parallel, partitioning of the input space through the creation of prototypical
representations as well as the fitting of necessary local models.
The application of a growing self-organizing network, such
as the GWR, allows for the learning of prototypical motion
patterns in an incremental fashion [39].
Fig. 1: Overview of the proposed system for the sensorimotor delay compensation during an imitation scenario. The vision
module acquires motion from a depth sensor and estimates the three-dimensional position of joints. Shoulder and elbow angle
values are extracted and fed to the visuomotor learning algorithm. The robot then receives predicted motor commands processed
by the delay compensation module.
III. METHODOLOGY
A. Overview
The proposed learning architecture consists of a hierarchy of
GWR networks [15] which process input data sequences and
learn inherent spatiotemporal dependencies (Fig. 1). The first
layer of the hierarchy learns a set of spatial prototype vectors
which will then encode incoming data samples. The temporal
dependence of the input data is captured as temporally ordered
concatenations of consecutively matched prototypes which
become more complex and of higher dimensionality when
moving towards the last layer. When body motion sequences
are provided, the response of the neurons in the architecture
resembles the neural selectivity towards temporally ordered
body pose snapshots in the human brain [40]. This simple, but
effective data sequence representation is also convenient in a
prediction application due to implicitly mapping past values to
the future ones. The concatenation vector is composed of two
parts: the first part carries information about the input data at
previous time steps and the second part concerns the desired
output of this mapping.
The evaluation of the predictive capabilities of the proposed
architecture for compensating robot sensorimotor delay will
be conducted in an imitation scenario where a simulated Nao
robot imitates a human demonstrator while compensating for
the sensorimotor delay in an on-line manner.
B. Learning with the GWR algorithm
The GWR network is composed of neurons and edges
that link the neurons forming neighborhood relationships. The
network starts with a set of two neurons randomly initialized
and, during the learning iterations, both neurons and edges can
be created, updated, or removed. At each learning iteration,
t, the first and the second best-matching units (BMUs) are
computed as the neurons with the smallest Euclidean distance
with the input sample x(t). The activity of the network, a(t),
is computed as a function of the Euclidean distance between
the weight vector of the first BMU, wb , and the input data
sample $x(t)$:

$$a(t) = \exp(-\|x(t) - w_b\|). \qquad (1)$$
Whenever the activity of the network is smaller than a given threshold $a_T$, a new neuron is added with weight vector:

$$w_r = 0.5 \cdot (x(t) + w_b). \qquad (2)$$
The activation threshold parameter, aT , modulates the amount
of generalization, i.e., the largest discrepancy between an incoming stimulus and its BMU. Edges are created between the
first and the second BMUs. An edge ageing mechanism takes
care of removing rarely activated edges, i.e., edges exceeding
the age threshold, and unconnected neurons consequently. In
this way, representations of data samples that have been seen
in the far past are eliminated leading to an efficient use
of available resources from the lifelong learning perspective.
Moreover, a firing rate mechanism that measures how often
each neuron has been activated by the input leads to a sufficient
training before new neurons are created. The firing rate is
initially set to one and then decreases every time a neuron and its neighbors are activated, in the following way:

$$\Delta h_i = \rho_i \cdot \kappa \cdot (1 - h_i) - \rho_i, \qquad (3)$$
where $\rho_i$ and $\kappa$ are the constants controlling the behaviour of the decreasing function curve. Typically, the $\rho$ constant is set higher for the BMU ($\rho_b$) than for its topological neighbors ($\rho_n$). Given an input data sample $x(t)$, if no new neurons are
(ρn ). Given an input data sample x(t), if no new neurons are
added, the weights of the first BMU and its neighbors are
updated as follows:
$$\Delta w_i = \epsilon_i \cdot h_i \cdot (x(t) - w_i), \qquad (4)$$

where $\epsilon_i$ and $h_i$ are the constant learning rate and the firing counter variable, respectively. The learning of the GWR algorithm stops when a given criterion is met, e.g., a maximum
network size or a maximum number of learning epochs.
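For illustration, the following Python sketch condenses one GWR iteration from Eqs. (1), (2), and (4); edge creation, edge ageing, neighbor updates, and the firing-counter decay of Eq. (3) are omitted, and all data structures are assumptions of this sketch rather than the authors' implementation:

```python
import numpy as np

def gwr_step(x, W, h, a_T=0.98, f_T=0.1, eps_b=0.1):
    """One (simplified) GWR iteration on input sample x.

    W : (n_neurons, dim) array of prototype weight vectors
    h : (n_neurons,) array of firing counters, initialized to 1
    """
    dists = np.linalg.norm(W - x, axis=1)
    b, s = np.argsort(dists)[:2]   # BMUs; s is used for edge creation in the full algorithm
    a = np.exp(-dists[b])          # network activity, Eq. (1)
    if a < a_T and h[b] < f_T:
        # Insert a new neuron halfway between the input and the BMU, Eq. (2)
        W = np.vstack([W, 0.5 * (x + W[b])])
        h = np.append(h, 1.0)
    else:
        # Adapt the BMU (its neighbors are updated analogously), Eq. (4)
        W[b] += eps_b * h[b] * (x - W[b])
    return W, h, int(b)
```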
C. Temporal sequence representations
GWR networks do not encode temporal relationships of
the input. This limitation has been addressed by different
extensions, such as hierarchies of GWRs augmented with a
window in time memory or recurrent connections [41][16][17].
Since our goal is to both encode data sequences and generate
them, we adopt the first approach, in which the relevant
information regarding data samples in a window of time is
always explicitly available. Moreover, the use of a hierarchy
instead of a shallow architecture allows for the encoding of
multiple time-varying sequences through prototype neurons
which can be reused for representing different sequences.
The GWR learning mechanism described in Section III-B
is employed for training the first two layers of the proposed
architecture, namely the GWR1 and GWR2 networks. The
Fig. 2: Schematic description of the output computation for the
first two layers of our learning architecture (not all neurons
and connections are shown). Given an input data sample x(t),
the weight of the best-matching unit is concatenated with the
weights of the previously activated neurons (depicted in fading
yellow) in order to compute the output o(t). The length of the
concatenation vector is a pre-defined constant τ (τ = 3 in this
example). The z −1 blocks denote the time delay.
output of both networks is computed as the concatenation of
the weights of consecutively activated neurons within a pre-defined temporal window $\tau$ (see Fig. 2):

$$o(t) = w_{b(t)} \oplus w_{b(t-1)} \oplus \ldots \oplus w_{b(t-\tau+1)}, \qquad (5)$$
where ⊕ denotes the concatenation operation. Moving up in
the hierarchy, the output o(t) will represent the input for the
GWR network of the higher layer. In this way, the GWR1
network learns a dictionary of prototypes of the spatial body
configurations domain, while the GWR2 and P-GWR networks
encode body motion patterns accumulated over a short and a
longer time period respectively.
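A minimal sketch of this window-based encoding (Eq. 5), assuming BMU weight vectors arrive one per time step; the factory function and all names are illustrative, not the authors' code:

```python
from collections import deque
import numpy as np

def make_window_encoder(tau):
    """Return a function mapping the current BMU weight w_b(t) to the
    concatenation o(t) = w_b(t) (+) ... (+) w_b(t - tau + 1), Eq. (5)."""
    window = deque(maxlen=tau)

    def encode(w_b):
        window.appendleft(w_b)     # newest weight first, oldest dropped
        if len(window) < tau:
            return None            # not enough history yet
        return np.concatenate(list(window))
    return encode

encode = make_window_encoder(tau=3)
for t in range(5):
    o = encode(np.full(2, float(t)))  # toy 2-D "BMU weights"
    print(t, o)
```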
Following this hierarchical learning scheme, we adapt the
GWR neuron elimination strategy in a layer-wise manner to
address the problem of forgetting rarely encountered, but still
relevant information. For instance, at the level of the GWR1
network, which represents spatial body configurations, it is
more probable that rarely seen input data samples are due
to sensory noise. Therefore, we can set a lower edge age
threshold here, leading to a higher rate of neuron elimination.
For the GWR2 and P-GWR networks, on the other hand, rarely
seen data samples are most probably due to sub-sequences
encountered in the far past. We can set a higher edge age
threshold so that neurons are removed more rarely.
D. The Predictive GWR algorithm
The problem of one-step-ahead prediction can be formalized as a function approximation problem. Given a multidimensional time series denoted by $\{y(t)\}$, the function approximation is of the form:

$$\hat{y}(t+1) = \hat{f}\left(y(t), y(t-1), \ldots, y(t-(p-1)) \mid \Theta\right), \qquad (6)$$

where the input of the function, or regressor, has an order of regression $p \in \mathbb{Z}^+$, with $\Theta$ denoting the vector of adjustable parameters of the model and $\hat{y}(t+1)$ the predicted value.
In other words, the prediction function maps the past p input
values to the observed value y(t + 1) directly following them.
We adapt the GWR learning algorithm in order to implement this input-output mapping and apply this learning algorithm to the last layer of our architecture, i.e., to the P-GWR network. The input samples fed to the P-GWR network are concatenations of the temporally ordered BMUs from the preceding layer (Eq. 5). We divide the input into two parts: the regressor, $x^{in}(t)$, and the desired output, i.e., the value to predict, $x^{out}(t)$:

$$x^{in}(t) = x(t) \oplus x(t-1) \oplus \ldots \oplus x(t-p+1), \qquad x^{out}(t) = x(t+1), \qquad (7)$$
with p denoting the maximum index of the past values. Each
neuron of the P-GWR network will then have two weight
vectors which we will call the input $w^{in}$ and the output $w^{out}$
weight vectors. During training, the input weight vector will
learn to represent the input data regressor and the output
weight vector will represent the corresponding predicted value.
This learning scheme has been successfully applied to the
Vector-Quantized Temporal Associative Memory (VQTAM)
model [20], shown to perform well on tasks such as time series
prediction and predictive control [42].
The learning procedure for the Predictive GWR algorithm
resembles the original GWR with a set of adaptations for
temporal processing. During training, the first and the second
best-matching units, b and s, at time step t are computed
considering only the regressor part of the input:
$$b = \arg\min_{n \in A} \|x^{in}(t) - w^{in}_n\|, \qquad s = \arg\min_{n \in A \setminus \{b\}} \|x^{in}(t) - w^{in}_n\|, \qquad (8)$$

where $w^{in}_n$ is the input weight vector of the neuron $n$ and $A$ is the set of all neurons. However, for the weight updates both $x^{in}(t)$ and $x^{out}(t)$ are considered:

$$\Delta w^{in}_i = \epsilon_i \cdot c_i \cdot (x^{in}(t) - w^{in}_i), \qquad \Delta w^{out}_i = \epsilon_i \cdot c_i \cdot (x^{out}(t) - w^{out}_i), \qquad (9)$$
with the learning rates $0 < \epsilon_i < 1$ being higher for the BMUs ($\epsilon_b$) than for the topological neighbors ($\epsilon_n$). This learning mechanism guarantees that the regressor space is vector-quantized while the prediction error is minimized at each learning iteration.
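The following sketch illustrates how the training pairs of Eq. (7) could be formed and how the regressor-only BMU search of Eq. (8) and the coupled update of Eq. (9) act on them; the firing-counter array `c` and all signatures are assumptions of this illustration, not the authors' code:

```python
import numpy as np

def make_training_pair(seq, t, p):
    """Regressor/target split of Eq. (7): x_in = seq[t] (+) ... (+) seq[t-p+1],
    x_out = seq[t+1]. seq holds encoded frames from the previous layer;
    requires t - p + 1 >= 0 and t + 1 < len(seq)."""
    x_in = np.concatenate([seq[t - k] for k in range(p)])
    return x_in, seq[t + 1]

def pgwr_update(W_in, W_out, x_in, x_out, c, eps=0.1):
    """BMU search on the regressor part only (Eq. 8) and the coupled
    input/output weight update (Eq. 9). c holds per-neuron firing counters."""
    b = int(np.argmin(np.linalg.norm(W_in - x_in, axis=1)))
    W_in[b] += eps * c[b] * (x_in - W_in[b])
    W_out[b] += eps * c[b] * (x_out - W_out[b])
    return b
```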
The Predictive GWR algorithm operates differently from
supervised prediction approaches. In the latter, the prediction
error signal is the factor that guides the learning, whereas in
the Predictive GWR the prediction error is implicitly computed and minimized without affecting the learning dynamics.
Moreover, unlike the SOM-based VQTAM model, the number of input-output mapping neurons, or local models, is neither pre-defined nor fixed, but instead adapts to the input data. It should
be noted that overfitting does not occur with the growth of
the network due to the fact that neural growth decreases the
quantization error which is proportional to the prediction error.
E. Predicting sequences
Given an input regressor at time step $t$, $x^{in}(t)$, the one-step-ahead estimate is defined as the output weight vector of the P-GWR best-matching unit:

$$\hat{y}(t+1) = w^{out}_b, \qquad (10)$$
where b is the index of the best-matching unit (Eq. 8). In
the case that the desired prediction horizon is greater than
1, the multi-step-ahead prediction can be obtained by feeding
back the predicted values into the regressor and computing
Eq. 8 recursively until the whole desired prediction vector
is obtained. An alternative to the recursive prediction is the
vector prediction, which is obtained by increasing the dimension of the $x^{out}$ vector with as many time steps as the desired
prediction horizon h. Thus, the input regressor and the desired
output would have the following form:
$$x^{in}(t) = x(t) \oplus x(t-1) \oplus \ldots \oplus x(t-p+1), \qquad x^{out}(t) = x(t+1) \oplus x(t+2) \oplus \ldots \oplus x(t+h), \qquad (11)$$
where $p$ denotes the maximum index of the past values. The same dimensionality should be defined for the weight vectors $w^{in}$ and $w^{out}$ of the P-GWR neurons as well. This solution requires training the architecture with this setting of the weights.
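A hedged sketch of the recursive variant described above, assuming a newest-first regressor layout as in Eq. (7); the function name and argument conventions are hypothetical:

```python
import numpy as np

def predict_recursive(W_in, W_out, regressor, h, frame_dim):
    """Recursive h-step-ahead prediction (Section III-E).

    regressor : concatenation of the last p frames, newest first (Eq. 7)
    W_out     : per-neuron one-step predictions, each of length frame_dim
    """
    preds, reg = [], regressor.copy()
    for _ in range(h):
        b = int(np.argmin(np.linalg.norm(W_in - reg, axis=1)))
        y_next = W_out[b]                                # Eq. (10)
        preds.append(y_next)
        # Slide the window: prepend the prediction, drop the oldest frame.
        reg = np.concatenate([y_next, reg[:-frame_dim]])
    return preds
```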
IV. THE IMITATION SCENARIO
A. Overview
The scenario consists of a Nao robot incrementally learning
a set of visually demonstrated body motion patterns and
directly imitating them while compensating for the sensorimotor delay. We showcase the predictive capabilities of the
proposed architecture in the context of an imitation scenario
motivated by the fact that it can potentially enable behaviour synchronization in human-robot interaction. For humans,
the synchronization of behavior is a fundamental principle for
motor coordination and is known to increase rapport in daily
social interaction [4]. Psychological studies have shown that
during conversation humans tend to coordinate body posture
and gaze direction [43]. This phenomenon is believed to be
connected to the mirror neuron system [44], suggesting a
common neural mechanism for both motor control and action
understanding. Interpersonal coordination is an integral part
of human interaction, thus we assume that applied to HRI
scenarios it may promote the social acceptance of robots.
A schematic description of the proposed system is given in
Fig. 1. The user’s body motion is the input into the model and
the motor commands for the robot are obtained by mapping
the user’s arm skeletal structure to the robot’s arm joint
angles. This direct motion transfer allows for a simple, yet
compact representation of the visuomotor states that does not
require the application of computationally expensive inverse
kinematics algorithms. Demonstrated motion trajectories are
learned incrementally by training our hierarchical GWR-based
learning algorithm. This allows for extracting prototypical
motion patterns which can be used for the generation of robot
movements as well as prediction of future target trajectories in
parallel. In this robot task, the prediction of future visuomotor
states is necessary to compensate for the sensory delay introduced by the vision sensor camera, the signal transmission
delay as well as the robot’s motor latency during motion
generation. The simulated Nao robot is used as the robotic
platform for the experimental evaluation.
Fig. 3: Examples of arm movement patterns. The visual input
data, represented as three-dimensional skeleton sequences, are
mapped to the robots’ joint angles.
B. System description
A general overview of the proposed architecture is depicted
in Fig. 1. The system consists of three main modules: 1) The
vision module, which includes the depth sensor and the tracking of the 3D skeleton through OpenNI/NITE framework; 1
2) The visuomotor learning module, which receives angle
values and provides future motor commands; 3) The robot
control module, which processes motor commands and relays
them to the microcontrollers of the robot, which in our case
is a locally simulated Nao.
Our contribution is the visuomotor learning module which
performs incremental adaptation and early prediction of human
motion patterns. Although the current setup uses a simulated
environment, we will consider a further extension of the
experiments towards the real robot. Therefore, we simulate
the same amount of motor response latency as it has been
quantified in the real Nao robot, being between 30 to 40
ms [2]. This latency could be even higher due to reduced
motor performance, friction, or worn hardware. Visual sensor
latency on the other hand, for an RGB and depth resolution of
640x480, together with the computation time required by the
skeleton estimation middleware can peak up to 500 ms [45].
Taking into consideration also possible transmission delays
due to connectivity issues, we assume a maximum of 600 ms
1 OpenNI/NITE: http://www.openni.org/software
of overall sensorimotor latency in order to carry out experiments described in Section V.
C. Data acquisition and representation
The motion sequences were collected with an Asus Xtion
Pro camera at 30 frames per second. This type of sensor
is capable of providing synchronized color information and
depth maps at a reduced power consumption and weight,
making it a more suitable choice than a Microsoft Kinect
for being placed on our small humanoid robot. Moreover, it
offers a reliable and markerless body tracking method [46]
which makes the interface less invasive. The distance of each
participant from the visual sensor was maintained within the sensor's operational range, i.e., 0.8−3.5 meters. To attenuate
noise, we computed the median value for each body joint every
3 frames resulting in 10 joint position vectors per second [16].
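This downsampling step admits a compact implementation; the following sketch (with assumed array shapes, not the authors' code) takes the median of every three frames, reducing 30 fps input to 10 fps:

```python
import numpy as np

def median_downsample(joints, k=3):
    """Attenuate noise by taking the median of every k consecutive frames
    (Section IV-C): 30 fps joint positions become 10 fps for k = 3.

    joints : (n_frames, n_joints, 3) array of 3D joint positions
    """
    n = (len(joints) // k) * k                      # drop a trailing partial block
    blocks = joints[:n].reshape(-1, k, *joints.shape[1:])
    return np.median(blocks, axis=1)
```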
We selected joint angles to represent the demonstrator’s
postures. Joint angles allow a straightforward reconstruction
of the regressed motion without applying inverse kinematics,
which may be difficult due to redundancy and leads to less
natural movements. Nao’s arm kinematic configuration differs
from the human arm in terms of degrees of freedom (DoF).
For instance, the shoulder and the elbow joints have only
two DoFs while human arms have three. For this reason, we
compute only shoulder pitch and yaw and elbow yaw and
roll from the skeletal representation by applying trigonometric
functions and map them to the Nao’s joints by appropriate
rotation of the coordinate frames. Wrist orientations are not
considered since they are not provided by the OpenNI/NITE
framework. Considering the two arms, a frame contains a total
of 8 angle values of body motion, which are given as input to
the visuomotor learning module.
V. EXPERIMENTAL RESULTS
We conducted experiments with a set of movement patterns
that were demonstrated either with one or with both arms
simultaneously: raise arm(s) laterally, raise arm(s) in front,
wave arm(s), rotate arms in front of the body both clockwise
and counter-clockwise. Some of the movement patterns are
illustrated in Fig. 3. In total, 10 different motion patterns were
obtained, each repeated 10 times by three participants (one
female and two male) who were given no explicit indication of
the purpose of the study nor instructions on how to perform the
arm movements. In total, we obtained 30 demonstrations for
each pattern. We first describe the incremental training
procedure, then we assess and analyze in details the prediction
accuracy of the proposed learning method. We focus on the
learning capabilities of the method while simulating a possible
recurring malfunctioning of the visual system leading to loss
of entire data chunks. We conclude with a model for choosing
the optimal predicted value for a system with a variable delay.
TABLE I: Training parameters for each GWR network in
our architecture for the incremental learning of sensorimotor
patterns.
Parameter                   Value
Activation threshold        a_T = 0.98
Firing threshold            f_T = 0.1
Learning rates              ε_b = 0.1, ε_n = 0.01
Firing counter behavior     ρ_b = 0.3, ρ_n = 0.1, κ = 1.05
Maximum edge age            {100, 200, 300}
Training epochs             50
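For convenience, the same settings can be expressed as a plain configuration mapping; the per-layer assignment of the three edge-age values to (GWR1, GWR2, P-GWR) is our assumption, inferred from the layer-wise discussion in Section III-C:

```python
# Illustrative configuration mirroring Table I (not the authors' code).
GWR_PARAMS = {
    "activation_threshold": 0.98,                      # a_T
    "firing_threshold": 0.1,                           # f_T
    "learning_rates": {"eps_b": 0.1, "eps_n": 0.01},
    "firing_counter": {"rho_b": 0.3, "rho_n": 0.1, "kappa": 1.05},
    "max_edge_age": {"GWR1": 100, "GWR2": 200, "P-GWR": 300},  # assumed per-layer
    "training_epochs": 50,
}
```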
A. Hierarchical training

The training of our architecture is carried out in an online manner. This requires that the GWR networks are trained sequentially, one data sample at a time. The initialization phase sees all networks composed of two neurons with random
weight vectors, i.e., carrying no relevant information about
the input data. The GWR1 network is trained in order to
perform spatial vector quantization. Then the current sequence
is gradually encoded as a trajectory of activated neurons as
described in Eq. 5 and given in input to the GWR2 network
of the second layer. The same procedure is then repeated
for the second layer until the training of the full architecture
is performed. The learning of the 30 demonstrations of one
motion pattern from all three subjects constitutes an epoch.
The learning parameters used throughout our experiments
are listed in Table I. The parameters have been empirically
fine-tuned by considering the learning factors of the GWR
algorithm. The firing threshold $f_T$ and the parameters $\rho_b$, $\rho_n$, and $\kappa$ define the decrease function of the firing counter (Eq. 3) and were set in order to have at least seven trainings of a best-matching unit before inserting a new neuron. It has been shown
that increasing the number of trainings per neuron does not
affect the performance of a GWR network significantly [15].
The learning rates are generally chosen to yield a faster
training for the BMUs than for their topological neighbors.
However, given that the neurons’ decreasing firing counter
modulates the weights update (Eq. 4), an optimal choice of the
learning rates has little impact on the architecture’s behaviour
in the long run. The training epochs were chosen by analysing
the converging behaviour of the composing GWR networks in
terms of neural growth.
The activation threshold parameter aT , which modulates the
number of neurons, has the largest impact on the architecture’s behaviour. The closer to 1 this value is, the greater
is the number of neurons created and the better is the data
reconstruction during the prediction phase. Therefore, we kept
aT relatively high for all GWR networks. We provide an
analysis of the impact of this parameter on the prediction
performance of our architecture in Section V-B3. Finally, the
maximum edge age parameter, which modulates the removal
of rarely used neurons, was set increasingly higher with each
layer. As discussed in Section III-C, the neurons activated less
frequently in the lower layer may be representing noisy input
data samples, whereas in higher layers the neurons capture
spatiotemporal dependencies which may vary significantly
from sequence to sequence.
B. Predictive behavior
We now assess the predictive capabilities of the proposed
method while the training is occurring continuously. Consid-
Fig. 4: Behavior of the proposed architecture during training on an unseen sequence demonstrated by one subject (the sequence
is presented three times to the network). From top to bottom illustrated are: the skeleton model of the visual sequence, the
ground truth data of robot joint angles, the values predicted from the network, and the Euclidean distance between predicted
values and the ground truth over time (red dashed line indicating the statistical trend).
ering that the data sample rate is 10 fps (see Section IV-C), we
set a prediction horizon of 6 frames in order to compensate
for the estimated delay of 600 ms.
1) How fast does the architecture adapt to a new sequence?: An example of the on-line response of the architecture is shown in Fig. 4. We observed that, except in cases
of highly noisy trajectories, the network adapted to an unseen
input already after a few video frames, e.g., ≈ 100 frames
which correspond to 10 seconds of the video sequence, and
refined its internal representation after three presentations of
the motion sequence demonstrated by one subject, i.e., after
30 demonstrations. This can be seen by the statistical trend of
the prediction error.
2) Behaviour analysis and prediction performance during
incremental learning: We presented the movement sequences
one at a time and let the architecture train for 50 epochs
on each new sequence. The training phase was in total of
500 epochs for the whole dataset. Then, we re-ran the same
experiment by varying the presentation order of the sequences
and report the results averaged across all trials. In this way,
the behavior analysis does not depend on the order of the data
given during training. We analyzed the cumulative prediction
error (C.P.E) of the model by computing the mean squared
error (MSE) over all movement sequences learned up to each
training epoch. For comparison, we also computed the MSE
between the values predicted by the model and the sensory
input after being processed by the GWR1 and the GWR2 net-
works. We refer to this performance measure as the prediction
error (P.E.) since it evaluates directly the prediction accuracy
of the P-GWR network while removing the quantization error
propagated from the first two layers.
The flow of the overall MSE during training and the neural
growth of the GWR networks composing the architecture are
reported in Fig. 5. The moment in which we introduce a
new motion sequence is marked by a vertical dashed line. As
expected, the cumulative prediction error increases as soon
as a new sequence is introduced (leading to the high peaks
in Fig. 5.a.), for then decreasing immediately. However, the
error does not grow but stays constant even though new
knowledge is being added every 50 learning epochs. This is
a desirable feature for an incremental learning approach. In
Fig. b., we observe that with the introduction of a new motion
sequence there is an immediate neural growth of the three
GWR networks followed by the stabilisation of the number of
neurons indicating a fast convergence. This neural growth is
an understandable consequence of the fact that the movement
sequences are very different from each other. In fact, the
GWR1 network, performing quantization of the spatial domain,
converges to a much lower number of neurons, whereas the
higher layers, namely the GWR2 and the P-GWR network,
have to capture a high variance of spatiotemporal patterns.
However, the computational complexity of a prediction step is
O(n), where n is the number of neurons. Thus the growth of
the network does not introduce significant computational cost.
Fig. 6: Prediction mean squared error (MSE) versus the
number of neurons in the P-GWR network.
Fig. 5: (a) The cumulative prediction error (C.P.E) averaged
over all learned sequences up to each learning epoch (in blue)
and the prediction error (P.E.) computed between the predicted
sequence and the sequence represented by the architecture (in
red), (b) Average and standard deviation of the neural growth
of the three GWR networks during learning.
3) Impact of the activation threshold: In the described
experiments, we set a relatively high activation threshold
parameter aT which led to a continuous growth of the GWR
networks. Thus, we further investigated how a decreased
number of neurons in the P-GWR network would affect
the overall prediction error. For this purpose, we fixed the
weight vectors of the first two layers after having been
trained on the entire dataset, and ran multiple times the
incremental learning procedure on the P-GWR network, each
time with a different activation threshold parameter aT ∈
{0.5, 0.55, 0.6, ..., 0.9, 0.95, 0.99}. We observed that a lower
number of neurons, obtained through lower threshold values,
led to quite high values of the mean squared error (Fig. 6).
However, due to the hierarchical structure of our architecture,
the quantization error can be propagated from layer to layer. It
is expected that similar performances can be reproduced with
a lower number of neurons in the P-GWR network when a
lower quantization error is obtained in the preceding layers.
4) Sensitivity to the prediction horizon: We now take the
architecture trained on the whole dataset and evaluate its
prediction accuracy while increasing the prediction horizon
up to 20 frames which correspond to 2s of a video sequence.
For achieving multi-step-ahead prediction, we compute the
predicted values recursively as described in Section III-E. In
Fig. 7, we report the mean absolute error and the standard
deviation in radians in order to give a better idea of the error
range. The results show a relatively high magnitude of error for
prediction horizons bigger than 10 frames. This should come as no surprise since producing accurate long-term predictions is a challenging task when dealing with human-like motion sequences. However, it seems that on average the error does not grow linearly but remains under 0.25 radians.

C. Learning with missing sensory data
In the following set of experiments, we analyze how the
predictive performance of the network changes when trained
on input data produced by a faulty visual sensor. We simulate
an occurring loss of entire input data chunks in the following
way: during the presentation of a motion pattern, we randomly
choose video frames where a whole second of data samples
(i.e., 10 frames) is eliminated. The network is trained for
50 epochs on a motion sequence, each time with a different
missing portion of information. We repeat the experiment
thereby increasing the occurrence of this event in order to
compromise up to 95% of the data and see how much the
overall prediction error increases. Results are averaged over
epochs and are presented in Fig. 8. As it can be seen, the
prediction MSE stays almost constant up to 30% of data loss.
This means that the network can still learn and predict motion
sequences even under such circumstances.
D. Compensating a variable delay
Experimental results reported so far have accounted for
compensating a fixed time delay which has been measured
empirically by generating motor behavior with the real robot.
However, the proposed architecture can also be used when the
delay varies due to changes in the status of the hardware. In
this case, given the configuration of the robot at time step t
in terms of joint angle values Jξ (t), where ξ is the time delay
estimation, the optimal predicted angle values to execute in
the next step can be chosen in the following way:
$$P^* = \arg\min_{i \in [0,h]} \|J_\xi(t) - P(t+i)\|, \qquad (12)$$

where $P(t+i)$ are the predictions computed up to a maximum $h$ of the prediction horizon.
The application of this prediction step requires a method for
the estimation of the time-delay ξ, which is out of the scope
of this work. Current time-delay estimation techniques mainly
cover constant time delays, random delay with a specific noise characteristic, or restricted dynamic time delay [47], which nonetheless do not address uncertainty affecting real-world robot applications. Computational models inspired by biology have also been proposed for the time-delay estimation [47]. However, these models assume knowledge of the sensorimotor dynamics.
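A minimal sketch of the selection rule in Eq. (12), with hypothetical names and the candidate predictions assumed to be precomputed:

```python
import numpy as np

def select_command(J_xi, predictions):
    """Pick the prediction closest to the delayed robot state (Eq. 12).

    J_xi        : current joint angles observed under an estimated delay xi
    predictions : list [P(t), P(t+1), ..., P(t+h)] of predicted angle vectors
    """
    dists = [np.linalg.norm(J_xi - P) for P in predictions]
    return predictions[int(np.argmin(dists))]
```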
Fig. 7: Mean absolute error (in radians) for increasing values
of prediction horizons (expressed in frames). In our case, 20
frames correspond to 2 seconds of a video sequence.
Fig. 8: Prediction MSE averaged over 50 epochs of training
on each motion pattern. For up to 30% of data loss the MSE
does not grow linearly but rather stays almost constant. From
this point on, the increasing percentage of data loss leads to
an inevitable growth of the prediction error.
VI. DISCUSSION
A. Summary
In this paper, we presented a self-organized hierarchical
neural architecture for sensorimotor delay compensation in
robots. In particular, we evaluated the proposed architecture in
an imitation scenario, in which a simulated robot had to learn
and reproduce visually demonstrated arm movements. Visuomotor sequences were extracted in the form of joint angles,
which can be computed from a body skeletal representation in
a straightforward way. Sequences generated by multiple users
were learned using hierarchically-arranged GWR networks
equipped with an increasingly large temporal window.
The prediction of the visuomotor sequences was obtained
by extending the GWR learning algorithm with a mapping
mechanism of input and output vectors lying in the spatiotemporal domain. We conducted experiments with a dataset of 10
arm movement sequences showing that our system achieves
low prediction error values on the training data and can adapt
to unseen sequences in an online manner. Experiments also showed that a possible system malfunction causing loss of data samples has a relatively low impact on the overall performance of the system.

B. Growing self-organization and hierarchical learning
The building block of our architecture is the GWR network [15], which belongs to the unsupervised competitive
learning class of artificial neural networks. A widely known
algorithm of this class is the SOM [21]. The main component
of these algorithms are the neurons equipped with weight
vectors of a dimensionality equal to the input size. Through
learning, the neurons become prototypes of the input space
while preserving the input’s topology, i.e., similar inputs are
mapped to neurons that are near to each other. In the case of
SOMs, these neurons are distributed and remain fixed in a 2D
or a 3D lattice which has to be defined a priori and requires an
optimal choice of its size. In the GWR network, the topological
structure of the neurons is dynamic and grows to adapt to the
topological properties of the input space. In this regard, the
GWR network is similar to the GNG algorithm [48], another
widely used growing self-organizing neural network. However,
the neural growth of the GWR algorithm is not constant, as
in the case of the GNG, but rather depends on how well
the current state of the network represents the input data.
Thus, from the perspective of incremental learning, the GWR
algorithm is more suitable than the GNG since new knowledge
can be added to the network as soon as new data become
available.
The hierarchical arrangement of the GWR networks
equipped with a window in time memory is appealing due
to the fact that it can dynamically change the topological
structure in an unsupervised manner and learn increasingly
more complex spatiotemporal dependencies of the data in the
input space. This allows for a reuse of the neurons during
sequence encoding, having learned prototypical spatiotemporal
patterns out of the training sequences. Although this approach
seems to be quite resource-efficient for the task of learning
visuomotor sequences, the extent to which neurons are reused
is tightly coupled with the input domain. In fact, in our
experiments with input data samples represented as multidimensional vectors of both arms’ shoulder and elbow angles,
there was little to no overlap between training sequences. This
led to a significant growth of the network with each sequence
presentation.
The parameters modulating the growth rate of each GWR
network are the activation threshold and the firing counter
threshold. The activation threshold aT establishes the maximum discrepancy between the input and the prototype neurons
in the network. The larger we set the value of this parameter,
the smaller is the discrepancy, i.e., the quantization error
of the network. The firing counter threshold fT is used to
ensure the training of recently added neurons before creating
new ones. Thus, smaller thresholds lead to more training of
existing neurons and the slower creation of new ones, favoring
again better network representations of the input. Intuitively,
the less discrepancy between the input and the network
representations, the smaller the input reconstruction error during the prediction phase. However, less discrepancy also means more neurons. This proved not to be the main issue in
our experiments since the number of neurons did not affect
significantly the computation of the predicted values.
A limitation of the sliding time-window technique for the
encoding of temporal sequences is the high computational
cost it introduces due to the data’s higher dimensionality.
However, in our case using angles as body pose features
leads to a low-dimensional input compared to e.g. images.
So, the training with long time windows does not pose a
computational challenge. Furthermore, it has been shown that
long-term predictions based on a sliding window are more
accurate than recurrent approaches [49].
The use of joint angles as visuomotor representations may
seem to be a limitation of the proposed architecture due to the
fact that it requires sensory input and robot actions to share
the same representational space. For instance, in an object
manipulation task, this requirement is not satisfied, since the
visual feedback would be the position information given by
the object tracking algorithm. This issue can be addressed by
including both the position information and the corresponding
robot joint angles as input to our architecture. Due to the
associative nature of self-organizing networks and their capability to function properly when receiving an incomplete input
pattern, only the prediction of the object movement patterns
would trigger the generation of corresponding patterns of the
robot behavior.
C. Future work
An interesting direction for future work is the extension of
the current implementation towards the autonomous generation
of robot movements that account for both delay compensation
as well as reaching a given action goal. For this purpose, the
implementation of bidirectional Hebbian connections would
have to be investigated in order to connect the last layer of
the proposed architecture with a symbolic layer containing
action labels [16][50] and explore how such symbolic layer
can modulate the generation of the movement patterns when
diverging from the final goal.
Future studies with the real robot will address the introduction of overall body configuration constraints for learning
the perceived motion. The visual body tracking framework becomes unreliable in certain conditions, e.g., when the demonstrator is sitting or is touching objects in the background. In
these cases, the provided body configurations may become
unrealistic and cannot be mapped to the robot, or, even
worse, when mapped to the robot may lead to a hardware
break. For this reason, outlier detection mechanisms should
be investigated in order to discard these unrealistic body
configurations during training.
The imitation scenario studied in this paper was carried
out offline, i.e., the synchronization was evaluated on an
acquired data set of motion patterns. However, the successful
application of the proposed learning algorithm in providing
accurate motor commands, thereby compensating the sensorimotor delay, encourages future experiments comprising an
HRI user study in which participants will be able to teach the
motion patterns directly to the robot.
ACKNOWLEDGMENT
The authors gratefully acknowledge partial support by
the EU- and City of Hamburg-funded program ProExzellenzia 4.0, the German Research Foundation DFG under project CML (TRR 169), and the Hamburg Landesforschungsförderungsprojekt.
REFERENCES
[1] J. Mainprice, M. Gharbi, T. Siméon, and R. Alami, “Sharing effort in
planning human-robot handover tasks,” in Proceedings of IEEE ROMAN, 2012, pp. 764–770.
[2] J. Zhong, C. Weber, and S. Wermter, “A predictive network architecture
for a robust and smooth robot docking behavior,” Paladyn, vol. 3, no. 4,
pp. 172–180, 2012.
[3] R. Saegusa, F. Nori, G. Sandini, G. Metta, and S. Sakka, “Sensory prediction for autonomous robots,” in Proceedings of IEEE-RAS Humanoid
Robots, 2007, pp. 102–108.
[4] T. Lorenz, A. Mörtl, B. Vlaskamp, A. Schubö, and S. Hirche, “Synchronization in a goal-directed task: human movement coordination with
each other and robotic partners,” in Proceedings of IEEE RO-MAN,
2011, pp. 198–203.
[5] A. Bahill, “A simple adaptive smith-predictor for controlling time-delay
systems: A tutorial,” IEEE Control systems magazine, vol. 3, no. 2, pp.
16–22, 1983.
[6] S. Behnke, A. Egorova, A. Gloye, R. Rojas, and M. Simon, “Predicting
away robot control latency,” in Robot Soccer World Cup. Springer,
2003, pp. 712–719.
[7] R. Nijhawan and S. Wu, “Compensating time delays with neural predictions: are predictions sensory or motor?” Philosophical Transactions of
the Royal Society of London A: Mathematical, Physical and Engineering
Sciences, vol. 367, no. 1891, pp. 1063–1078, 2009.
[8] R. Miall, D. J. Weir, D. M. Wolpert, and J. Stein, “Is the cerebellum a
smith predictor?” Journal of motor behavior, vol. 25, no. 3, pp. 203–216,
1993.
[9] D. Kerzel and K. R. Gegenfurtner, “Neuronal processing delays are
compensated in the sensorimotor branch of the visual system,” Current
Biology, vol. 13, no. 22, pp. 1975–1978, 2003.
[10] M. Rohde, L. C. van Dam, and M. O. Ernst, “Predictability is necessary
for closed-loop visual feedback delay adaptation,” Journal of vision,
vol. 14, no. 3, pp. 4–4, 2014.
[11] C. de la Malla, J. López-Moliner, and E. Brenner, “Dealing with delays
does not transfer across sensorimotor tasks,” Journal of Vision, vol. 14,
no. 12, p. 8, 2014.
[12] J. Mainprice and D. Berenson, “Human-robot collaborative manipulation
planning using early prediction of human motion,” in Proceedings of
IEEE/RSJ IROS, 2013, pp. 299–306.
[13] M. Ito and J. Tani, “On-line imitative interaction with a humanoid robot
using a mirror neuron model,” in Proceedings of IEEE ICRA, vol. 2,
2004, pp. 1071–1076.
[14] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, “Learning hand-eye
coordination for robotic grasping with deep learning and large-scale data
collection,” arXiv preprint arXiv:1603.02199, 2016.
[15] S. Marsland, J. Shapiro, and U. Nehmzow, “A self-organising network
that grows when required,” Neural Networks, vol. 15, no. 8, pp. 1041–
1058, 2002.
[16] G. I. Parisi, J. Tani, C. Weber, and S. Wermter, “Emergence of multimodal action representations from neural network self-organization,”
Cognitive Systems Research, 2016.
[17] L. Mici, G. I. Parisi, and S. Wermter, “Recognition of transitive actions
with hierarchical neural network learning,” in Proceedings of ICANN.
Springer International Publishing, 2016, pp. 472–479.
[18] D. Vasquez, T. Fraichard, O. Aycard, and C. Laugier, “Intentional motion
on-line learning and prediction,” Machine Vision and Applications,
vol. 19, no. 5, pp. 411–425, 2008.
[19] A. M. Schaefer, S. Udluft, and H.-G. Zimmermann, “Learning long-term
dependencies with recurrent neural networks,” Neurocomputing, vol. 71,
no. 13, pp. 2481–2488, 2008.
[20] G. A. Barreto, “Time series prediction with the self-organizing map: A
review,” in Perspectives of neural-symbolic integration. Springer, 2007,
pp. 135–158.
[21] T. Kohonen, Self-organization and associative memory. Berlin: Springer
Science & Business Media, 1993.
[22] G. Simon, J. A. Lee, M. Cottrell, and M. Verleysen, “Forecasting the
cats benchmark with the double vector quantization method,” Neurocomputing, vol. 70, no. 13, pp. 2400–2409, 2007.
[23] N. Johnson and D. Hogg, “Learning the distribution of object trajectories
for event recognition,” Image and Vision computing, vol. 14, no. 8, pp.
609–615, 1996.
[24] N. Sumpter and A. Bulpitt, “Learning spatio-temporal patterns for
predicting object behaviour,” Image and Vision Computing, vol. 18,
no. 9, pp. 697–704, 2000.
[25] W. Hu, D. Xie, T. Tan, and S. Maybank, “Learning activity patterns using
fuzzy self-organizing neural network,” IEEE Transactions on Systems,
Man, and Cybernetics, Part B (Cybernetics), vol. 34, no. 3, pp. 1618–
1626, 2004.
[26] J. Walter, H. Ritter, and K. Schulten, “Nonlinear prediction with self-organizing maps,” in Proceedings of IEEE IJCNN, 1990, pp. 589–594.
[27] J. Vesanto, “Using the SOM and local models in time-series prediction,”
in Proceedings of Workshop on Self-Organizing Maps 1997, 1997, pp.
209–214.
[28] A. G. Billard, S. Calinon, and R. Dillmann, Learning from Humans.
Springer International Publishing, 2016, pp. 1995–2014.
[29] D. Kulić, W. Takano, and Y. Nakamura, “Incremental learning, clustering
and hierarchy formation of whole body motion patterns using adaptive
hidden markov chains,” The International Journal of Robotics Research,
vol. 27, no. 7, pp. 761–784, 2008.
[30] D. Kulić, C. Ott, D. Lee, J. Ishikawa, and Y. Nakamura, “Incremental
learning of full body motion primitives and their sequencing through
human motion observation,” The International Journal of Robotics
Research, vol. 31, no. 3, pp. 330–345, 2012.
[31] W. Takano and Y. Nakamura, “Humanoid robot’s autonomous acquisition of proto-symbols through motion segmentation,” in Proceedings of
IEEE-RAS Humanoid Robots, 2006, pp. 425–431.
[32] A. G. Billard, S. Calinon, and F. Guenter, “Discriminative and adaptive
imitation in uni-manual and bi-manual tasks,” Robotics and Autonomous
Systems, vol. 54, no. 5, pp. 370–384, 2006.
[33] S. Ekvall, D. Aarno, and D. Kragic, “Online task recognition and realtime adaptive assistance for computer-aided machine control,” IEEE
Transactions on Robotics, vol. 22, no. 5, pp. 1029–1033, 2006.
[34] K. R. Dixon, J. M. Dolan, and P. K. Khosla, “Predictive robot programming: Theoretical and experimental analysis,” The International Journal
of Robotics Research, vol. 23, no. 9, pp. 955–973, 2004.
[35] T. Ogata, S. Sugano, and J. Tani, “Open-end human robot interaction
from the dynamical systems perspective: Mutual adaptation and incremental learning,” in Proceedings of IEA-AIE. Springer, 2004, pp. 435–
444.
[36] S. Calinon and A. Billard, “Incremental learning of gestures by imitation
in a humanoid robot,” in Proceedings of ACM/IEEE HRI, 2007, pp. 255–
262.
[37] S. M. Khansari-Zadeh and A. Billard, “Bm: An iterative algorithm
to learn stable non-linear dynamical systems with gaussian mixture
models,” in Proceedings of IEEE ICRA, 2010, pp. 2381–2388.
[38] T. Cederborg, M. Li, A. Baranes, and P.-Y. Oudeyer, “Incremental local
online gaussian mixture regression for imitation learning of multiple
tasks,” in Proceedings of IEEE/RSJ IROS, 2010, pp. 267–274.
[39] G. I. Parisi, S. Magg, and S. Wermter, “Human motion assessment in
real time using recurrent self-organization,” in Proceedings of IEEE ROMAN, Aug 2016, pp. 71–76.
[40] M. A. Giese and T. Poggio, “Neural mechanisms for the recognition of
biological movements,” Nature Reviews Neuroscience, vol. 4, no. 3, pp.
179–192, 2003.
[41] G. I. Parisi, C. Weber, and S. Wermter, “Self-organizing neural integration of pose-motion features for human action recognition,” Frontiers in
Neurorobotics, vol. 9, 2015.
[42] G. D. A. Barreto, A. F. Araújo, and H. J. Ritter, “Self-organizing feature
maps for modeling and control of robotic manipulators,” Journal of
Intelligent and Robotic Systems, vol. 36, no. 4, pp. 407–450, 2003.
[43] K. Shockley, D. C. Richardson, and R. Dale, “Conversation and coordinative structures,” Topics in Cognitive Science, vol. 1, no. 2, pp.
305–319, 2009.
[44] G. Tessitore, R. Prevete, E. Catanzariti, and G. Tamburrini, “From
motor to sensory processing in mirror neuron computational modelling,”
Biological Cybernetics, vol. 103, no. 6, pp. 471–485, 2010.
[45] M. A. Livingston, J. Sebastian, Z. Ai, and J. W. Decker, “Performance
measurements for the microsoft kinect skeleton,” in Proceedings of IEEE
VRW, 2012, pp. 119–120.
[46] J. Han, L. Shao, D. Xu, and J. Shotton, “Enhanced computer vision with
microsoft kinect sensor: A review,” IEEE Transactions on Cybernetics,
vol. 43, no. 5, pp. 1318–1334, Oct 2013.
[47] A. Sargolzaei, M. Abdelghani, K. K. Yen, and S. Sargolzaei, “Sensorimotor control: computing the immediate future from the delayed
present,” BMC Bioinformatics, vol. 17, no. 7, 2016.
[48] B. Fritzke, “A growing neural gas network learns topologies,” Advances
in Neural Information Processing Systems, vol. 7, pp. 625–632, 1995.
[49] J. Bütepage, M. Black, D. Kragic, and H. Kjellström, “Deep representation learning for human motion prediction and classification,” ArXiv
preprint arXiv:1702.07486, Feb. 2017.
[50] F. Schrodt and M. V. Butz, “Just imagine! Learning to emulate and infer
actions with a stochastic generative architecture,” Frontiers in Robotics
and AI, vol. 3, p. 5, 2016.
Luiza Mici received her Bachelor’s and Master’s degrees in Computer Engineering from the University of Siena, Italy. Since 2015, she has been a research associate and PhD candidate in the Knowledge Technology Group at the University of Hamburg, Germany, where she has been part of the research project CML (Crossmodal Learning). Her main research interests include perception and learning, neural network self-organization, and bio-inspired action recognition.
German I. Parisi received his Bachelor’s and Master’s degrees in Computer Science from the University of Milano-Bicocca, Italy. In 2017 he received his PhD in Computer Science from the University of Hamburg, Germany, where he was part of the research project CASY (Cognitive Assistive Systems) and the international PhD research training group CINACS (Cross-Modal Interaction in Natural and Artificial Cognitive Systems). In 2015 he was a visiting researcher at the Cognitive Neuro-Robotics Lab at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea. Since 2016 he has been a research associate of the international project Transregio TRR 169 on Crossmodal Learning in the Knowledge Technology Institute at the University of Hamburg, Germany. His main research interests include neurocognitive systems for human-robot assistance, computational models for multimodal integration, neural network self-organization, and deep learning.
Stefan Wermter is Full Professor at the University of Hamburg and Director of the Knowledge Technology Institute. He holds an MSc in Computer Science from the University of Massachusetts, and a PhD and Habilitation in Computer Science from the University of Hamburg. He was a research scientist at the International Computer Science Institute in Berkeley, US, before leading the Chair in Intelligent Systems at the University of Sunderland, UK. His main research interests are in the fields of neural networks, hybrid systems, neuroscience-inspired computing, cognitive robotics and natural communication. He was general chair of the International Conference on Artificial Neural Networks 2014. He is an associate editor of the journals Transactions on Neural Networks and Learning Systems, Connection Science, International Journal for Hybrid Intelligent Systems and Knowledge and Information Systems, and he is on the editorial board of the journals Cognitive Systems Research, Cognitive Computation and Journal of Computational Intelligence. Currently he serves as Co-coordinator of the DFG-funded SFB/Transregio International Collaborative Research Centre on Crossmodal Learning and is coordinator of the European Training Network SECURE on Safe Robots.
Optimizing PID parameters with machine learning
Adam Nyberg
arXiv:1709.09227v1 [] 26 Sep 2017
Abstract
This paper examines the Evolutionary Programming (EP) method for optimizing PID parameters. PID is the most common type of regulator within control theory, partly because it is relatively simple and yields stable results for most applications. The p, i and d parameters vary for each application; therefore, choosing the right parameters is crucial for obtaining good results but also somewhat difficult. EP is a derivative-free optimization algorithm, which makes it suitable for PID optimization. The experiments in this paper demonstrate the power of EP to solve the problem of optimizing PID parameters without getting stuck in local minima.
1 Introduction
This paper will examine one approach to the optimization of Proportional-Integral-Derivative
(PID) parameters. PID is the most common type of regulator within control theory. The
mathematical expression of a PID regulator is relatively simple and yields stable results for
most applications. The p, i and d parameters vary for each application; therefore, choosing
the right parameters is crucial for obtaining good results but also somewhat difficult. According to K. J. Åström and T. Hägglund [1], there are several methods used in industry
for tuning the p, i and d parameters. Some methods involve developing models while others
involve manual tuning by trial and error.
This paper examines the Evolutionary programming (EP) method for optimizing PID
parameters. EP is an iterative algorithm that runs until some performance criteria are met.
The basic steps are as follows: EP generates a population, evaluates every member in the
population, selects the best member, creates a new population based on the best member,
and then repeats the aforementioned process with an evaluation of the new population.
The new population is generated by adding a Gaussian mutation to the parent. The results of this paper are compared with those of R. Johns [4], which uses a Genetic Algorithm (GA), and S. Hadenius [3], which uses Particle Swarm Optimization (PSO). The GA and PSO methods are further explained in [5].
All training and testing of the algorithm was done in a simulated environment called MORSE on the Robot Operating System (ROS). The algorithm and the movements of the robot were developed as three ROS nodes. A ROS node is one subsystem; typically, a robotics environment is built from multiple ROS nodes. The actual simulation of the robot was done by MORSE, which is a generic 3D simulator for robots. Nodes are only able to communicate
with other nodes using streaming topics, Remote Procedure Calls (RPC) services, and the
Parameter Server. One route was used for training and a separate route was used for
evaluating the step response. The experiments in this paper tuned two PIDs simultaneously.
The PIDs tuned were the ones controlling linear velocity and angular velocity on the robot.
Only the Husky robot was used.
EP can perform effectively in this type of domain due to its ability to optimize without getting stuck in local minima and its efficiency compared to brute force. The resulting parameters were evaluated by looking at the step response. Important measurements are rise time, overshoot, and the steady-state error.
2 Method
2.1 Set up environment
This paper describes three different experiments that all ran in the same environment. Everything required to run the experiments is listed in Table 1.
Name | Description
Roscore | Roscore is a collection of nodes and programs that serve as prerequisites of a ROS-based system.
State machine | The state machine is the ROS node that controls general movement and includes the PID for both linear velocity and angular velocity. This node is built in Modelled Architecture (March).
Route | This node is only responsible for making the robot drive a predefined route. It is triggered by the Determinator, and it sends a callback back to the Determinator when the route is completed. The route used for training can be seen in Algorithm 1 with parameters start = −0.3 and end = 0.3. The route used for testing uses the same algorithm but with parameters start = 0.1 and end = 0.7. The parameters start and end determine the velocity of the robot. The test route was run with the best PID parameters from the training.
Determinator | This node runs the algorithm and logs data to file. This is also the node that starts the route node. A detailed description of the EP algorithm is in section 2.3.

Table 1: Table of systems required for running the experiments.
Algorithm 1 The route used by the robot for train and test.
1: procedure Route(start, end)
2:     linearVelocity ← start
3:     angularVelocity ← start
4:     wait 3 seconds
5:     linearVelocity ← end
6:     angularVelocity ← end
7:     wait 3 seconds
8:     Notify finished to Determinator
2.2 Definitions
The Determinator receives both the actual velocity and the desired velocity of the robot 50 times per second. With that data, an error can be estimated by calculating abs(desired − actual). That calculation is executed for every sample, summed, and then divided by the total number of samples. Thus, the total error for one run of a route is the average error of that route. This average error (AE) is used by the algorithm to evolve the parameters. Fitness refers to the lowest AE in this paper; a sketch of the computation is given below.
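As a minimal illustration, the AE computation can be sketched as follows (the function name and the plain-Python representation of the two logged velocity streams are illustrative assumptions, not part of the implementation described above):

```python
def average_error(desired, actual):
    """Average error (AE) for one run of a route: the mean of
    abs(desired - actual) over all velocity samples (logged at 50 Hz)."""
    assert len(desired) == len(actual) and len(desired) > 0
    return sum(abs(d - a) for d, a in zip(desired, actual)) / len(desired)
```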
A population, or generation, is a collection of individuals in which each individual contains values for kpv, kiv, kdv, kpa, kia, kda. The values kpv, kiv, kdv control the linear velocity PID, and the values kpa, kia, kda control the angular velocity PID. The velocity and angular parameters were evaluated separately from one another. Figure 1 provides a visual illustration of the generations within the EP algorithm of section 2.3. Each box in Figure 1 illustrates an individual and each row illustrates a population. The green boxes indicate the fittest individual of each population (i.e., the lowest AE).
2.3 Evolutionary programming algorithm
The version of evolutionary programming used in this paper follows the algorithm described in [5]; a minimal sketch in code is given after the list.

1. Generate an initial population of individuals. The number of individuals differs between the experiments.
2. Evaluate fitness as described in section 2.2.
3. Select the fittest individual of the population, i.e., the individual with the lowest AE.
4. If the fittest individual has an average error better than 0.01, return that individual and exit. The threshold 0.01 was chosen as a way to try to get the AE lower than 1 percent.
5. Otherwise, generate a new population by applying a mutation to the fittest individual. The mutation differs between the experiments, but the general principle is to add a random number drawn from a Gaussian distribution to each of the new offspring. The mutations used for the experiments are described in detail in section 2.4.
6. Go to step 2.
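The following is a hedged sketch of this loop. The callbacks evaluate (one route run returning the AE) and mutate (see section 2.4) stand in for the actual ROS nodes and are assumptions; an individual is represented as a plain list of the six gains.

```python
def evolutionary_programming(initial_population, evaluate, mutate,
                             max_generations=100, target_ae=0.01):
    """Sketch of the EP loop: evaluate all individuals, select the
    fittest (lowest AE), stop once the AE criterion is met, otherwise
    rebuild the population by mutating each gain of the fittest one."""
    population = initial_population
    best = None
    for _ in range(max_generations):
        fitnesses = [evaluate(ind) for ind in population]    # step 2
        best = population[fitnesses.index(min(fitnesses))]   # step 3
        if min(fitnesses) < target_ae:                       # step 4
            break                                            # criterion met
        population = [[mutate(gain) for gain in best]        # step 5
                      for _ in range(len(population))]       # step 6 loops
    return best
```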
Algorithm 2 Function used in experiment 1.
procedure Mutate(value)
    add ∼ N(0, 0.05)
    while value + add < 0 do
        add ← add ∗ 0.5
    return value + add
Figure 1: Figure illustrating 3 generations with 6 individuals in each generation.
Algorithm 3 Function used in experiments 2 and 3.
1: procedure Mutate(value)
2:     add ∼ value ∗ N(0, 0.5)
3:     while value + add < 0 do
4:         add ← add ∗ 0.5
    return value + add
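In Python, the two mutation functions can be sketched as below (random.gauss draws the Gaussian term; the while loop terminates for positive gain values, which holds for the initial populations used here):

```python
import random

def mutate_experiment_1(value):
    """Algorithm 2: additive Gaussian mutation, halved until the
    mutated gain stays non-negative."""
    add = random.gauss(0.0, 0.05)
    while value + add < 0:
        add *= 0.5
    return value + add

def mutate_experiments_2_and_3(value):
    """Algorithm 3: the Gaussian term is scaled by the value being
    mutated, so gains differing by orders of magnitude mutate
    proportionally."""
    add = value * random.gauss(0.0, 0.5)
    while value + add < 0:
        add *= 0.5
    return value + add
```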
2.4 Experiments
Table 2 describes all of the experimental configurations. All experiments ran for 100 generations, and no experiment reached the set criterion of AE < 0.01. All experiments began with the same initial population, drawn as follows: kpv ∼ U(0, 1), kiv ∼ U(0, 0.1), kdv ∼ U(0, 0.01), kpa ∼ U(0, 1), kia ∼ U(0, 0.1), kda ∼ U(0, 0.01).
The mutation algorithm provided in [2] and used in experiment 1 did not perform well for this application because the parameters differ in size by more than an order of magnitude. Therefore, experiments 2 and 3 used a mutation algorithm that scales the mutation with the value being mutated.
Experiment | Mutation algorithm | Population size
1 | See Algorithm 2 | 10
2 | See Algorithm 3 | 10
3 | See Algorithm 3 | 20

Table 2: Configurations for the experiments.
3 Results
For each experiment, this paper presents the AE value as well as a graph illustrating the step response of the PID for both train and test. Table 3 lists every experiment's best parameters and lowest AE for train and test.
Experiment | Type | kp | ki | kd | AE train | AE test
1 | Linear | 7.78 × 10^-2 | 1.63 × 10^-1 | 0 | 0.0563 | 0.0707
1 | Angular | 7.83 × 10^-2 | 4.24 × 10^-2 | 0 | 0.0615 | 0.0529
2 | Linear | 8.16 × 10^-2 | 2.12 × 10^-6 | 0 | 0.0312 | 0.0406
2 | Angular | 1.17 × 10^-1 | 6.34 × 10^-6 | 2.60 × 10^-9 | 0.0253 | 0.0414
3 | Linear | 1.23 × 10^-1 | 2.12 × 10^-5 | 0 | 0.0313 | 0.0514
3 | Angular | 1.21 × 10^-1 | 3.25 × 10^-4 | 3.35 × 10^-8 | 0.0234 | 0.0421

Table 3: Results for the experiments.
In addition to the results presented in Table 3, Figure 2 illustrates how the PID parameters evolve over every generation in experiment 2. For experiment 2, Figure 3 shows the
AE across all generations and Figure 4 shows the step response for train and test for both
linear and angular velocity.
Figure 2: Figure showing PID parameters evolving over generations.
5
Figure 3: The AE over generations during experiment 2.
Figure 4: Step response for train and test for both linear and angular velocity.
4 Discussion
Overall, we can conclude that EP is successful in finding sufficient parameters given enough training time, as described in [2]. The results presented in this paper are somewhat surprising. As we can see, the d part of the PID becomes either 0 or negligible after running the algorithm for a few generations. The same thing happens to the i part, but to a lesser extent. An explanation for this behavior could be the way that the simulator is built. This raises the question of whether the results are useful in an environment outside of our simulation. Finally, we noticed that increasing the population size from 10 to 20 did not significantly improve the results.
When comparing our results with those of R. Johns [4] and S. Hadenius [3], we can see that they obtained similar results, including very small i and d parameters. This adds confidence that the algorithm works, but also that the simulation is not optimal for tuning the PIDs.
There are several variations of the experiment that could be done to improve the results. One way is to divide the route into different parts and then calculate the AE for each part. That would minimize the risk of a good integral value being punished by the algorithm for a slow rise time. Second, other fitness functions could be used instead of AE, including the mean squared error. Third, training the parameters on a dynamic route could take the algorithm out of a local minimum. Lastly, running the robot with the best known parameters in between every run would allow each individual to start with as low an error as possible. In this paper, bad parameters from a previous individual could affect the AE of the next tested individual.
References

[1] K. J. Åström and T. Hägglund. “The future of PID control”. In: Control Engineering Practice, vol. 9, no. 11 (2001), pp. 1163–1175.

[2] David B. Fogel and Lawrence J. Fogel. “An introduction to evolutionary programming”. In: Alliot J.-M., Lutton E., Ronald E., Schoenauer M., Snyers D. (eds.), Artificial Evolution (AE 1995), Lecture Notes in Computer Science, vol. 1063. Springer, Berlin, Heidelberg (1995), p. 30.

[3] S. Hadenius. “Optimizing PID Parameters with PSO” (2017).

[4] R. Johns. “Tuning a PID-controller with machine learning” (2017).

[5] B. Nagaraj, S. Subha, and B. Rampriya. “Tuning Algorithms for PID Controller Using Soft Computing Techniques”. In: IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 4 (2008).
Localisation, Communication and Networking with
VLC: Challenges and Opportunities
Rong Zhang
School of Electronics and Computer Science, University of Southampton, UK
arXiv:1709.01899v1 [] 6 Sep 2017
Abstract—The forthcoming Fifth Generation (5G) era raises the expectation for ubiquitous wireless connectivity to enhance human experiences in information and knowledge sharing as well as in entertainment and social interactions. The promising Visible Light Communications (VLC) lies in the intersection field of optical and wireless communications, where a substantial amount of new knowledge has been generated by multi-faceted investigations, ranging from the understanding of optical communications and signal processing techniques to the development of disruptive networking solutions and the exploitation of joint localisation and communications. Building on these new understandings and exciting developments, this paper provides an overview of the three inter-linked research strands of VLC, namely localisation, communications and networking. Recent advanced research activities are comprehensively reviewed and intriguing future research directions are actively discussed, along with the identification of a range of challenges, both for enhancing the established applications and for stimulating the emerging applications.
I. INTRODUCTION
We are at the dawn of an era in information and communications technology with unprecedented demand for digitalised everything, for connected everything and for automated everything [1]. The next decade will witness a range of disruptive changes in wireless technologies, embracing all aspects of cross-disciplinary innovations [2]. Fundamental challenges arise as we reach the limit of the conventional Radio Frequency (RF) based wireless technologies, which are increasingly less capable of meeting the escalating traffic demands and of satisfying the emerging use-cases. In particular, substantial research efforts have been dedicated to the high carrier frequencies, including the millimetre wave [3] and the visible light spectrum [4], in the forthcoming Fifth Generation (5G) wireless networks landscape. In this paper, we provide an overview of the fast growing technology of Visible Light Communications (VLC), which lies at the intersection of optical and wireless communications and focuses on the human perceivable part of the electromagnetic spectrum, corresponding to wavelengths from 380 nm to 780 nm.
The use of the visible light spectrum for wireless communications has gained great interest. This is because the visible light spectrum is licence-free, has a vast bandwidth and does not interfere with the RF band. Historically, in the late 19th century, Alexander Graham Bell invented the photo-phone by transmitting voice signals over modulated sunlight [5]. Almost a century later, artificial light generated by fluorescent lamps was also successfully demonstrated to support low data-rate communications [6]. It is these exciting experiments that inspired modern VLC using Light Emitting Diodes (LEDs). In addition to facilitating communications, LEDs, whose main function is modern illumination, have been increasingly dominating over the traditional incandescent and fluorescent lamps, owing to their higher energy-efficiency, colour-rendering capability and longevity [7]. Hence, the potential of VLC is further supported by the anticipated presence of a ubiquitous and efficient LED lighting infrastructure.
The pioneering implementation of VLC using LEDs for the dual purpose of indoor illumination and communications was carried out by the Nakagawa laboratory in the early 2000s [8]. Subsequently, tremendous research efforts have been invested in improving the link-level performance between a single LED array and a single receiver, focusing on both the LED components and the VLC transceivers, where ambitious multi-Gbps targets have been achieved. These exciting link-level achievements set the basis for broadening the scope of VLC research beyond point-to-point applications [9], with the focus on VLC aided networking, where various promising protocols, architectures and cross-layer solutions have been proposed. Furthermore, LED based localisation has also attracted dedicated research interest, where recent advances have demonstrated sub-centimetre accuracy and 3D positioning capability [10]. In addition to the main research thrusts in localisation, communications and networking, VLC can also provide innovative solutions for a number of use-cases [11], including vehicular, Device-to-Device (D2D) and underwater applications, just to name a few.
Along with the above technical advances, activities in the VLC domain have increased significantly. Large scale research programmes were launched, bringing together research-led universities and industries, as exemplified by the Visible Light Communications Consortium (VLCC), the EU-FP7 project OMEGA and the consortium on ultra-parallel VLC. Most recently, VLC was also included in the scope of networking research beyond 5G under the framework of EU H2020. Dedicated research centres have also been established, such as the Smart Lighting Engineering Research Center in the US, the Li-Fi Research and Development Centre in the UK and the Optical Wireless Communication and Network Centre in China. Furthermore, in recent years, ComSoc has published three special issues on VLC in the Communications Magazine [12], [13] and in the Wireless Communications Magazine [14].
Fig. 1. Biannual statistics on the number of papers published in IEEE and OSA on the topic of visible light communications and positioning. These data are gathered by searching for the keywords (Visible Light Communications, VLC, Visible Light Positioning, VLP).

Moreover, three consecutive
international workshops on VLC have also been sponsored by
ComSoc along with ICC’15, ICC’16 and ICC’17. Meanwhile,
GlobeCom has continuously offered annual workshops on
optical wireless since 2010. These investments and efforts have
all contributed to the growing success of the subject.
Hence, we see a strong momentum in the research on VLC, as evidenced by the publication statistics in Fig. 1, which motivates this special issue. Our paper is organised as follows. We first introduce some basics in Section II-A. We then review the achievements of the three main research strands of VLC, namely localisation in Section II-B, communications in Section II-C and networking in Section II-D. After that, we discuss the challenges of VLC in Section III-A and paint our vision of the future of VLC in Section III-B. Finally, we conclude our discourse in Section IV.
II. MAIN RESEARCH STRANDS
A. Basics
Modern VLC relies on LEDs for the dual purpose of illumination and communications. LEDs are solid-state semiconductor devices [7], [15], which are capable of converting electrical energy to incoherent light through the process of electro-luminescence. When using LEDs for VLC, data can be readily modulated by electronically flickering the intensity of light at a rate fast enough to be imperceptible to human eyes. At the receiver, data can be directly detected using inexpensive photo-detectors or imaging sensors. This transceiver structure may be referred to as Intensity Modulation (IM) and Direct Detection (DD), which is much simpler than conventional coherent RF receivers relying on a complicated and energy-consuming heterodyne structure. Although there exist implementations of Laser Diodes (LDs) for VLC [16], [17] that can provide an even higher data-rate, our following discussions will be based on the more popular LEDs.
The primary function of LEDs is illumination, hence the performance of VLC naturally depends on the specific characteristics of the LEDs [18]. White light is the most widely used colour for many applications, and there are three different methods to produce white light LEDs. The most common method is the use of blue LEDs with a yellow phosphor coating layer [19]. This is a low-cost method, but the coating layer limits the modulation bandwidth to only tens of MHz. Early experiments demonstrated data-rates of tens of Mbps [20], while hundreds of Mbps may also be achieved with advanced processing [21] and an adaptive receiver [22]. The second method to generate white light is to carefully mix the colours of Red, Green and Blue (RGB) emitted from RGB LEDs [23]. This method is more expensive, but it offers a much wider modulation bandwidth of up to hundreds of MHz, supporting multi-Gbps data-rates [24]. The third method results from the recent advances on micro-LED arrays [25], which produce white light through wavelength conversion. This method is capable of facilitating parallel communications with an even higher data-rate [26].
On the receiving side, there are generally two types of detectors that are often employed, namely photo-detectors and imaging sensors [27]. In particular, Positive-Intrinsic-Negative (PIN) photo-detectors are most widely considered in current studies, owing to their low implementation cost and high sensitivity to strong incident light. By contrast, in the scenario of weak incident light, avalanche photo-detectors are often preferred. On the other hand, imaging sensors are constituted by an array of photo-detectors, which inherently supports the use of pixelated reception owing to its desirable capability of better separating the optical channels [28]. Another advantage of imaging sensors is that they can be readily integrated with the camera lenses in smart hand-held devices [29]. Since the frame-rate of commonly used camera lenses is low, this type of receiver is particularly suitable for applications requiring a low data-rate [30], such as the project CeilingCast [31]. Sometimes, focusing lenses may also be used on top of photo-detectors [32] or imaging sensors [33] to enhance the reception by collecting more light energy from diffuse links with an increased Field of View (FoV). In addition to the above receivers, other interesting developments include the prism-array receiver [34], the solar-cell receiver [35], the rotational receiver [36] and the aperture-based receiver [37].
Since the surface diameter of typical photo-detectors is several thousand times the light wavelength, fast fading effects are averaged out at the receiver. Hence, when indoor propagation is considered, a Line of Sight (LoS) path-loss effect is typically observed, plus reflections from diffuse links [38]. In most studies, (dispersive) channel modelling is carried out by assuming Lambertian-type emitters and, for simplicity, by adopting the well-established Infra-Red (IR) channel modelling [39], with the aid of the convolution-based simulation method to properly characterise the Power Delay Profile (PDP) of the propagation channels [40]. This method comes at the expense of complexity, hence more efficient computational methods were proposed [41]. Furthermore, there are three typical sources of noise, namely ambient noise, shot noise and thermal noise. In fact, they are mutually dependent, hence it is important to carry out the system design by taking this property into account [42].
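As an illustration of the Lambertian LoS modelling mentioned above, the widely used DC channel gain can be sketched as follows (a textbook formula rather than the exact model of any single cited work; parameter names are illustrative and angles are in radians):

```python
import math

def lambertian_los_gain(d, phi, psi, half_power_angle, detector_area,
                        fov, ts=1.0, g=1.0):
    """LoS DC gain of a Lambertian emitter:
        H(0) = (m + 1) A / (2 pi d^2) * cos(phi)^m * Ts * g * cos(psi)
    for incidence angles psi within the receiver FoV, zero otherwise.
    d: Tx-Rx distance [m]; phi: angle of irradiance; psi: angle of
    incidence; Ts, g: optical filter and concentrator gains."""
    if psi > fov:
        return 0.0
    # Lambertian order from the LED's semi-angle at half power
    m = -math.log(2) / math.log(math.cos(half_power_angle))
    return ((m + 1) * detector_area / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * ts * g * math.cos(psi))
```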
B. Localisation
The widely used Global Navigation Satellite System
(GNSS) technology provides a coarse localisation capability
that works well for the majority of outdoor applications [43]. However, its use for indoor localisation fails, since the satellite signal has poor indoor penetration and the multi-path reflections are highly complex in indoor environments. Various alternatives were proposed for indoor localisation services [44], mainly based on RF techniques such as Wireless Fidelity (Wi-Fi) or Ultra-Wide Band (UWB). However, the associated cost is often too high to reach the desired accuracy. Hence, LED based localisation becomes an interesting option to meet the above demand [45]. This is because LEDs can be readily integrated in the existing lighting infrastructure and several nearby LEDs may provide joint localisation to achieve a very high level of accuracy. Furthermore, the visible light spectrum can be used in RF-sensitive environments, and the imaging sensors built into smart hand-held devices constitute a convenient receiver for LED based localisation; alternatively, inexpensive PDs may be purposely installed. In the future 5G
systems, both manufacturing and service robots will become
dominant Machine Type Communication (MTC) devices. In
homes, factories and warehouses, light is useful for accurate
position control of robots [46], again thanks to its linear propagation property and short wavelength. Explicitly, there exist
several approaches for LEDs based localisation, including the
proximity based approach, the triangulation based approach
and the fingerprinting based approach, where each of them
will be elaborated in detail as follows.
The simplest way to realise LEDs based localisation is the
proximity based approach, where the position of the object is
roughly classified to its nearest LED transmitter. This procedure can be conveniently integrated in the cell search stage,
where no complicated algorithms are imposed [47]. Hence,
the achievable localisation accuracy is very coarse, in terms of
decimetres. When further exploiting the geometric properties
of multiple LEDs, the triangulation based approach is capable
of determining the absolute position of the object and reaching
a localisation accuracy in terms of centimetres. This approach
exploits the measured distance (angles) between the multiple
LEDs and the localisation object, where the measurement
could be Time of Arrival (ToA) [48], Time Difference of
Arrival (TDoA) [49], Received Signal Strength (RSS) [50]
and Angle of Arrival (AoA) [51]. There are several challenges
associated with these measurements that may deteriorate the
localisation accuracy. For example, both ToA and TDoA
require strict timing synchronisation, RSS requires detailed
knowledge of radiation pattern and AoA requires careful
calibration of light directionality. Hence, localisation based
on a combination of different measurements is a good practice [52], [53], particularly for 3D localisation [54]. Finally, the
fingerprinting based approach determines the position of the
object by comparing on-line measured data with pre-stored
off-line measured data [55]. Depending on the richness of
the pre-stored data set, this approach may provide a higher
level of localisation accuracy, but at the cost of an increased
complexity. Nevertheless, this approach scales poorly owing to its scene-dependence.
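As a toy illustration of the distance-based triangulation described above, the classic linearised least-squares solution is sketched below (numpy-based; the LED anchor coordinates and the RSS-derived distances are assumed to be given, and measurement noise handling is omitted):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2D position from distances to several LED anchors.
    Each circle equation |p - a_i|^2 = d_i^2 is subtracted from the
    first one, yielding a linear system solved in the least-squares
    sense."""
    anchors = np.asarray(anchors, dtype=float)   # shape (n, 2), n >= 3
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y)

# Example: three ceiling LEDs and noise-free distances from (1, 1).
print(trilaterate([(0, 0), (4, 0), (0, 3)],
                  [2 ** 0.5, 10 ** 0.5, 5 ** 0.5]))
```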
A unique approach exists for LED based localisation using the camera sensors available in smart hand-held devices [56]. Thanks to the high density of pixels, camera sensors can extract detailed spatial information in order to determine the position of the object with high localisation accuracy by using vision analysis and image processing. In addition to camera sensors alone, the accelerometers built into smart hand-held devices can also be exploited to achieve 3D localisation [57], [58]. To conclude, despite the existence of various LED based localisation approaches and their achievements, LoS blockage constitutes the single dominant problem, which has received little consideration in current research. Hence, the recent study on the effect of multi-path reflections on positioning accuracy becomes highly practical [59]. More importantly, when localisation is considered, it should be included in the entire design chain for achieving better integrity [60]. We believe that with further scientific advances, powerful and robust solutions for LED based localisation will be developed to meet diverse requirements.
C. Communications
As discussed above, VLC is capable of realising the dual
purpose of illumination and communications. Amongst the
cascaded physical layer components of VLC, we elaborate
on those that require careful design and dedicated discussion, namely optical domain modulation and Multiple
Input Multiple Output (MIMO). Generally, there are two types
of modulation techniques that are commonly employed for
VLC, namely the single-carrier modulation and multi-carrier
modulation. To elaborate on the former, On-Off Keying (OOK)
constitutes the simplest technique for VLC modulation. Hence,
it is the most commonly studied and experimented technique together with various different types of LEDs. Despite its simplicity, a multi-Gbps data-rate has been reported in a recent experiment [61]. Developed from plain OOK, there is a family
of pulse-based modulations, such as Pulse Width Modulation
(PWM) [62] and Pulse Position Modulation (PPM) [63]. These
modulations belong to the M-ary orthogonal signalling, which
is particularly suitable for IM. Unique to VLC, Color Shift
Keying (CSK) has been introduced in the VLC standard IEEE 802.15.7 for simultaneously achieving a higher data-rate
and a better dimming support [64], [65]. Specifically, CSK
modulates the signal based on the intensity of the RGB color,
relying on the employment of RGB LEDs. Other interesting
schemes exploiting the color property of VLC may be found
in [66], [67]. It is worth noting that as a modulation technique,
CSK is different from the concept of Wavelength Division
Multiplexing (WDM) [68]. This is because in WDM system,
additional modulation may be designed together with each of
the multiplexing layers.
When considering the multi-carrier modulation family, the
celebrated Optical Orthogonal Frequency Division Multiplexing (OOFDM) schemes are often employed. This is because
the OOFDM scheme allows parallel data transmissions, and it is
also capable of combating the detrimental channel dispersion
without complex time domain equalisations. Different from the
conventional RF based OFDM schemes, transmitted signals
of OOFDM need to be real-valued and positive in order to
facilitate IM. There is a family of OOFDM schemes, typically including the Asymmetrically Clipped OOFDM (ACO-OFDM) scheme and the DC-biased OOFDM (DCO-OFDM)
scheme [69]. In both schemes, the property of Hermitian
symmetry is exploited for rendering the complex-valued signal
to be real-valued, at the cost of halving the bandwidth efficiency. In order to maintain positivity of the transmitted signal,
the ACO-OFDM scheme resorts to using only the odd sub-carriers, while the DCO-OFDM scheme applies a sufficiently large DC bias.
Hence, the former approach is more power efficient, while the
latter approach is more bandwidth efficient [70]. Moreover,
there exist many other interesting realisations of OOFDM,
such as the flip OOFDM [71], unipolar OOFDM [72], multilayer OOFDM [73], hybrid OOFDM [74], and DC-informative
OOFDM [75]. In general, OOFDM schemes suffer from
the classic problem of high Peak to Average Power Ratio
(PAPR) [76] and their performance is also limited by the
clipping distortion [77] and the LEDs’ non-linearity [78],
where many countermeasures have thus been proposed [79],
[80], [81]. Despite all these challenges, OOFDM schemes
have attracted great attention owing to their high data-rate
potential, robustness to channel dispersion and flexibility in
resource allocation [82].
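To make the Hermitian-symmetry construction concrete, a minimal DCO-OFDM modulator might look as follows (a sketch only: the bias level, normalisation and clipping treatment differ across the cited designs):

```python
import numpy as np

def dco_ofdm_symbol(data_syms, n_fft=64, dc_bias=3.0):
    """Map complex data symbols onto subcarriers with Hermitian
    symmetry so that the IFFT output is real-valued, then add a DC
    bias and clip residual negative samples for intensity modulation.
    data_syms must contain n_fft//2 - 1 constellation points."""
    assert len(data_syms) == n_fft // 2 - 1
    X = np.zeros(n_fft, dtype=complex)
    X[1:n_fft // 2] = data_syms
    X[n_fft // 2 + 1:] = np.conj(data_syms[::-1])  # Hermitian symmetry
    # X[0] and X[n_fft//2] carry no data (DC and Nyquist bins)
    x = np.fft.ifft(X).real * np.sqrt(n_fft)       # real-valued signal
    return np.maximum(x + dc_bias, 0.0)            # bias, then clip
```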
It is important that the above modulation techniques should
be co-designed with LEDs’ illumination requirements to avoid
flickering and support dimming [83]. Flickering refers to the
undesirable effect of human perceivable brightness fluctuation,
or in other words light intensity changes. This is relatively
easy to cope with by using Run Length Limited (RLL) coding
in order to balance the resultant zeros and ones [84], [85].
On the other hand, modern LEDs are now capable of
supporting arbitrary levels of dimming for energy saving and
environmentally friendly illumination. Hence, a more intriguing
aspect is to jointly design modulation and dimming [86]. In
general, there are two approaches to support dimming with
modulation, namely to control either the light intensity or the
signal duty cycle, where the former is easier to implement
and the latter achieves higher precision [87]. As an example,
in OOK, the on and off levels can be redefined to support dimming, or one may introduce a compensation period along with the signalling period without resorting to intensity modification.
Amongst others, pulse-based modulations show great flexibility in supporting dimming, such as the variable PPM scheme
proposed in the 802.15.7 standard [88]. On the other hand,
dimming support in multi-carrier modulation requires further
investigations, where recent research has been dedicated to
this direction [89], [90]. In addition to modulation, channel
coding schemes can also be co-designed with dimming support
in mind, as demonstrated in various research efforts, including
turbo codes [91], Reed-Muller codes [92], adaptive codes [93]
and concatenated codes [94].
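As a simple worked example of duty-cycle based dimming via compensation periods, the split between OOK data time (average duty 50%) and compensation time for a target brightness can be computed as below (an illustrative helper, not taken from the standard's text):

```python
def ook_dimming_split(target_brightness):
    """Return (data_fraction, compensation_level) for a frame whose
    data portion has 50% average intensity: with on-compensation,
    0.5*t + 1.0*(1 - t) = b; with off-compensation, 0.5*t = b."""
    b = target_brightness
    if not 0.0 <= b <= 1.0:
        raise ValueError("brightness must lie in [0, 1]")
    if b >= 0.5:
        return 2.0 * (1.0 - b), 1.0   # fill the rest with 'on' time
    return 2.0 * b, 0.0               # fill the rest with 'off' time
```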
Similar to the conventional RF based wireless communications systems, MIMO in VLC is also capable of providing
promising throughput enhancements [95]. It is known that the
full potential of MIMO can only be achieved in a fully scattered environment. However, in VLC, the particular challenge
is that the optical MIMO channels are often highly correlated
and the resultant MIMO channel matrix appears to be rank
deficient [96]. To create a full-rank MIMO channel matrix, it is
crucial to maintain clear separation of the transmitters at the
(non-)imaging receiver by careful calibration. In addition to supporting a full-rank MIMO channel matrix, robustness to receiver
tilts and blockage is also desirable, where it has been shown
that specifically designed receivers harnessing angle diversity constitute a good design practice [97], [98]. Most recently,
the research on MIMO in VLC under diffuse channel is also
emerging, leading to substantial practical insights [99]. As
far as the MIMO functions are considered, MIMO in VLC
can achieve diversity gain by using space time coding [100],
multiplexing gain by using parallel transmission [101] and
beamforming gain by using electronic beam steering [102].
Multiple transmit luminaires can also be used to improve
the security of VLC transmission by making the VLC signal
difficult to intercept by an eavesdropper. Several linear beamforming schemes for active and passive eavesdroppers have
recently been presented in [103], [104], [105]. Being an
important generalisation, Multiple Input Single Output (MISO)
transmission and in particular, Multi-User MISO (MU-MISO)
transmission have attracted substantial research interests [106].
This is because the MU-MISO scheme provides beneficial
multi-user diversity gain without incurring rank deficient
MIMO channel matrix. However, in this multi-user scenario,
challenges arise when performing inter-user interference cancellation, where a (non-)linear transmit pre-coding scheme is required.
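A minimal sketch of linear zero-forcing pre-coding, one common choice for such inter-user interference cancellation, is given below (optical power constraints and signal non-negativity are deliberately ignored for brevity):

```python
import numpy as np

def zero_forcing_precoder(H):
    """Channel inversion for MU-MISO: with H of shape (users, LEDs),
    the pseudo-inverse W = H^T (H H^T)^{-1} satisfies H W = I, so each
    user receives its own stream free of inter-user interference.
    IM/DD optical channels are real-valued, hence the transpose."""
    H = np.asarray(H, dtype=float)
    return H.T @ np.linalg.inv(H @ H.T)
```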
D. Networking
The above mentioned advances in physical layer research
of VLC have led to the development of VLC aided networking [107]. Immediately above the physical layer, there are
several different candidates for the Medium Access Control
(MAC) layer, including both the multiple access and random
access schemes. Elaborating on multiple access first, the most straightforward arrangement is the
Time Division Multiple Access (TDMA) scheme [108], [109],
where users simply access the network in different time slots.
When multi-carrier modulation is employed, the Orthogonal
Frequency Division Multiple Access (OFDMA) scheme allows
users to be allocated different Time and Frequency (TF)
resource blocks [110]. When compared to TDMA scheme,
OFDMA scheme provides a higher flexibility in terms of resource allocation and user scheduling, at a modestly increased
complexity. In addition to the above orthogonal multiple
access schemes, in Non-orthogonal Multiple Access (NOMA)
scheme [111], [112], two or more users may be multiplexed in
power domain in addition to the conventional orthogonal TF
domain. At the receiver side, onion-stripping type interference
cancellation is required to separate the users from power-domain non-orthogonal multiplexing [113]. Other than relying
on the power-domain, spatial domain could also be exploited
at the transmitter to realise NOMA [114]. Differently, (multicarrier) Optical Code Division Multiple Access (OCDMA)
scheme relies on assigning each user a unique and specific
optical code [115], [116]. Finally, when random access is
considered, the classic Carrier Sense Multiple Access (CSMA)
scheme remains highly attractive. Importantly, early implementations have already demonstrated its successful usage [117]. In slotted random access, both contention access periods and contention-free periods are included, where the latter ensure guaranteed time slots for resource-limited applications.
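For concreteness, a toy two-user power-domain NOMA superposition with onion-stripping (SIC) at the near user is sketched below (the power split and the detect slicer are illustrative assumptions, and noise is omitted):

```python
import numpy as np

def noma_superpose(s_near, s_far, p_near=0.2, p_far=0.8):
    """Superpose two users' symbols in the power domain, allocating
    more power to the far (weaker-channel) user."""
    assert abs(p_near + p_far - 1.0) < 1e-9
    return np.sqrt(p_near) * s_near + np.sqrt(p_far) * s_far

def sic_near_user(y, detect, p_near=0.2, p_far=0.8):
    """Near-user receiver: detect the far user's stronger signal
    first, strip it from the superposition, then detect the own
    signal from the residual."""
    s_far_hat = detect(y / np.sqrt(p_far))
    residual = y - np.sqrt(p_far) * s_far_hat
    return detect(residual / np.sqrt(p_near))
```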
The most straightforward way of constructing an indoor
VLC cell is to simply consider each Access Point (AP)
function as an individual cell and to adopt the unity frequency
reuse across all cells. This construction would result in the
highest spatial reuse but it tends to suffer from the typical
problem of Inter-Cell Interference (ICI) amongst neighbouring
cells. Following the traditional cellular design principle [118],
different (fractional) frequency reuse patterns could be used to
mitigate ICI, at the cost of a reduced bandwidth efficiency. An
effective method of improving the efficiency, whilst mitigating
the detrimental ICI is to employ cell merging [119], where a
group of neighbouring APs jointly form an enlarged VLC cluster. In this way, the previously ICI-contaminated area becomes
the cluster-centre of the newly formed cluster. Multiple users
can be served simultaneously by using sophisticated Vectored
Transmission (VT) techniques. The underlying principle is to
totally eliminate the ICI at the multiple transmitters side by
using transmit pre-coding, so that the multiple users receive
mutually interference-free signals [120]. However, this technique requires that both Channel State Information (CSI) and
the users’ data have to be shared amongst multiple APs. All of
the above cell formations follow the conventional cell-centric
design approach, which is based on defining a cell constituted
by a fixed set of one or more APs and then associating the
users with it. By contrast, the newly proposed user-centric
design approach relies on the dynamic association between
APs and users [121]. More explicitly, by taking into account
the users’ geo-locations, the new user-centric design flow is
based on grouping the users together and then associating the
APs with them, leading to amorphous cells [122], which is
capable of supporting video service on the move [123]. Finally,
an intriguing piece of research emerges to consider the optimal
placement of LEDs [124], resulting in rich implications for throughput, delay and mobility.
Holistically, owing to the existence of the lighting infrastructure, Power-Line Communications (PLC) constitute a convenient back-haul for VLC as an indoor access technology [125],
[126]. PLC can reach luminaries that serve as VLC transmitters to supply the data streams as well as to coordinate
transmission between multiple VLC transmitters to support
multi-user broadcasting [127]. The VLC transceivers can be
considered as relays that can operate in a full-duplex model
and different relaying paradigms such as amplify-and-forward
and decode-and-forward are possible [125]. From a networking perspective, VLC can be considered as a new member in
the small-cell family of the Heterogeneous Networks (HetNet)
landscape for complementing the over-loaded Radio Access
Technology (RAT). Indeed, the interplay between VLC and
RAT system has been an active area of research [128], [129],
where there are two different interplay scenarios that may be
envisioned, namely single-homing and multi-homing. In the
single-homing scenario, only one access system is allowed
to maintain its association at any instant. In this scenario,
dynamic Load Balancing (LB) will prevent traffic congestion
caused by blockage or mobility through diverting the traffic
flow appropriately [130]. To better exploit the access system’s
diversity potentials, in the multi-homing scenario, each user
maintains multiple associations at the same time by using the
Multipath Transport Control Protocol (TCP) to connect multiple interfaces [131]. In either scenario, robust vertical han-
dover has to be properly designed to mitigate any ping-pong
effect, where load-aware mobility management appears to
be a promising solution [132]. Different from using higher
layer converging approaches, seamless rate adaptation between
VLC and RAT systems may also be achieved by network
coding and rate-less coding [133]. To sum up, in the light of
the information and communications technology convergence,
the above mentioned network layer functions should be software defined to maximise its full potential [134], [135].
III. CHALLENGES AND OPPORTUNITIES
A. Challenges
1) Channel Modelling: Most of the current channel modelling in VLC was directly adapted from IR communications. However, it would be ideal to develop specific VLC
channel modelling, corresponding to different types of LEDs
for the ease of cross-calibration. In particular, with regards
to shadowing, there is a lack of both empirical and statistical
modelling. In most of the studies, the shadowing effect was
often assumed to follow the over-simplified Bernoulli distribution. However, given the sensitivity of VLC over shadowing
in all aspects of localisation, communications and networking,
a well-calibrated model is indeed of critical importance. Also
importantly, VLC channel modelling for vehicular applications
is still in its infancy, which requires dedicated efforts.
2) Interference Mitigation and Management: VLC relies
on visible light spectrum which overlaps with solar radiation
and indoor/outdoor lighting and display. A VLC system is
inevitably interfered by those sources, from day and night
time. Therefore, effective techniques are needed to mitigate
not only inter-system interference from neighbouring LEDs,
but also external light interference. Considering those interference sources typically emit a large dynamic range of light
intensities, a robust and sensitive detector needs to respond
reliably to transmitted signals while suppressing interference
to certain extent. It is necessary but very challenging to
develop advanced methods and algorithms to recover useful
signals, weak or strong, from noisy signals overwhelmed by
interference, weak or strong. For example, normal operation of
a vehicular receiver or an indoor receiver under direct sunlight
through a window at noon is extremely difficult.
3) Channel Feedback: Several contributions make massive use of channel state information. This aspect is not critical in approaches that move all the processing towards the receiver, since estimation techniques are not difficult to implement, whether training-based or (semi-)blind. However, when the communications require a feedback link, as for pre-equalization, bit loading or retransmission schemes, the feedback channel performance becomes an issue. How a feedback channel should be realised is a very challenging problem, whether it is an optical one or based on RF. In fact, important aspects such as quantisation and error control must be properly taken into account, in addition to delay and jitter.
4) Access Techniques: A high percentage of the access techniques proposed in the literature and described above start from attempts to adapt access mechanisms and procedures already used in RF to the VLC context. However, mechanisms should be investigated that take into account the fact that VLC receivers usually have one or more photo-detectors that are not omnidirectional, as well as the emerging VLC applications such as vehicular and camera communications. Access techniques become more complicated when a user is covered by several LEDs in an indoor environment: due to the tilt of the device, the signal may be received from a different set of LEDs. Hence, access techniques are also strictly related to the localisation techniques.
5) Mobility Management: In many VLC applications, VLC
terminals or transceivers are mobile, such as hand-held devices, vehicles, and robots. Maintaining a quality communications link in a point-to-point case and network connectivity in
a multiple-user case are important to avoid communications
losses. Protocols for mobile optical wireless communication networks are urgently needed, for dynamic system resource
adaptation to communications environments, easy node access
and drop-off, smooth handover from one AP to another and
from one network to another. Meanwhile, mobility prediction
and network topology modelling are under-explored, and these
topics open new room for further investigation.
6) Integrated Smart Lighting: Since VLC is not a paradigm separate from illumination, the ultimate future challenge can
be a full integration that takes into account not only communications performance related to the single link under illumination constraints but also a holistic optimisation regarding both
the networking (system performance and seamless handover)
and the perceived illumination (a good level of light all over
the room). This is an issue especially in large rooms such
as museums, where multimedia content may be based on the
position of the user in the information centric system. Hence,
the aim is to integrate lighting, communications, networking
and positioning in a single homogeneous framework.
B. Opportunities
1) 5G-Home: The concept of 5G-Home is based on a unified heterogeneous access-home network with wired and (optical) wireless connectivity, which is capable of creating tens of millions of additional 5G-Home sites. Technically, to succeed in a timely and affordable manner, both the existing copper- or cable-based fixed access and in-home wiring techniques need to be exploited in addition to fibre [136], [137]. Deep in the home, the number of wireless APs and terminals will grow with the increasing bandwidth, delay and coverage requirements. At least one AP per room may become the norm, given the higher carrier frequencies that will be used for 5G, including the Wi-Fi standards 802.11ac/ad and VLC aided networking [138], [139]. Hence, we envision a beneficial convergence between the fixed access network infrastructure and in-home (optical) wireless networks, in order to eliminate the boundary between these two domains [140].
2) 5G-Vehicle: Future 5G systems will embody a number of new applications spanning vast areas beyond enhanced mobile broadband services, such as media distribution, Smart Cities, and the Internet of Things (IoT). In Smart Cities, light-enabled vehicular communications networks utilize a large number of densely distributed street lamp poles as APs, while vehicular lights, roadside street lights, pedestrian signage and traffic lights can all act as ubiquitous transmitters to meet the need for massive connectivity and high throughput [141], [142]. Equipped with image sensors or other types of receivers, these nodes in the immediate vicinity of vehicles provide ultra-reliable and low-latency (mission-critical) communications and control links [143], [144], which will serve intelligent transportation, collision avoidance, autonomous driving and telematics well. However, challenges will arise under strong ambient noise, while uneven speeds and diverting routes may also be difficult to handle [145], [146]. Nevertheless, vehicles will become mobile offices and entertainment spaces, enjoying a wealth of exterior resources.
3) Underwater Communications: In addition to ground-based indoor and outdoor applications, VLC also paves the way for applications that reach deep into water. Currently, underwater acoustics is the major wireless technology for underwater object detection and communications. Given the very low data rates allowed by acoustic communications in the underwater environment, and the poor propagation performance of the RF bands under water, light sources offer a unique opportunity for short-range high-speed communications, potentially at low cost and low power [147], [148], [149], [150]. Data rates of hundreds to thousands of Mbps are possible, and the communications distance is extendible to hundreds of meters in clear water. The VLC technology will undoubtedly play a critical role in marine resource exploration, water sensing, monitoring and tracking of marine organisms, and saving endangered species.
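The quoted ranges can be sanity-checked with a first-order Beer–Lambert attenuation model, as in the sketch below. The clear-water attenuation coefficient of about 0.15 per meter is an indicative assumption (real coefficients vary strongly with wavelength and turbidity), and geometric spreading and pointing losses are ignored.

```python
import math

DB_PER_NEPER = 10 * math.log10(math.e)  # ~4.34 dB per neper

def received_power_dbm(p_tx_dbm, distance_m, c_per_m=0.15):
    """Line-of-sight underwater optical link with exponential
    (Beer-Lambert) attenuation exp(-c * d); other losses ignored."""
    return p_tx_dbm - DB_PER_NEPER * c_per_m * distance_m

def max_range_m(p_tx_dbm, sensitivity_dbm, c_per_m=0.15):
    """Largest distance at which received power meets the sensitivity."""
    budget_db = p_tx_dbm - sensitivity_dbm
    return budget_db / (DB_PER_NEPER * c_per_m)

# 20 dBm source, -60 dBm receiver sensitivity, assumed clear-water c
print(max_range_m(20, -60))  # on the order of a hundred meters
```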
4) Emerging Applications: LEDs are naturally cheap communications transmitters, and CMOS image sensors are embedded as detectors in pervasive consumer electronics. New applications emerge around these tiny optical sensors. Image Sensor Communications (ISC) and positioning are easily realized with the CMOS sensors built into smart phones. Millions of pixels can be exploited for communications [151] and accurate positioning [152]. LED screen-to-camera communications constitute another interesting application, which may unlock significant potential in entertainment, education, broadcasting and near-field communications. Several early experiments have already been carried out with exciting findings, including the projects SoftLight [153], Uber-in-light [154] and SBVLC [155]. In addition to traditional LEDs, emerging Organic LEDs (OLEDs) are attractive for their flexibility, easy integration and fabrication, convenient color selectivity, and wide viewing angle. Thus they can serve as wearable and portable VLC transmitters as well as fixed-location large-screen communication transmitters [156], [157], [158]. Their application domains encompass those of LEDs, as well as new body area networks and sensor networks. Last but not least, LED based lighting infrastructures are becoming ubiquitous and have already been dubbed the 'eyes and ears of the IoT' in the context of smart lighting systems. Hence, VLC is a promising solution for indoor communications with a broad class of smart objects, in particular in scenarios with a high density of IoT devices. Steps in this direction have been presented in [159], [160], [161].
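The rolling-shutter effect underlying much of this camera-based reception can be sketched in a few lines: a flickering LED leaves bright and dark stripes across the rows of a CMOS frame, which are thresholded back into bits. The frame size, rows-per-bit value and thresholding rule below are illustrative assumptions; practical ISC systems add synchronisation, equalisation and channel coding.

```python
import numpy as np

def decode_rolling_shutter(frame, rows_per_bit=8, threshold=None):
    """Recover an OOK bit pattern from the stripes that a flickering
    LED leaves in a rolling-shutter CMOS image (simplified sketch)."""
    row_mean = frame.mean(axis=1)            # one intensity per image row
    if threshold is None:
        threshold = row_mean.mean()
    bits = []
    for start in range(0, len(row_mean) - rows_per_bit + 1, rows_per_bit):
        bits.append(int(row_mean[start:start + rows_per_bit].mean() > threshold))
    return bits

# Synthetic frame: 64 rows alternating bright/dark every 8 rows, 48 columns
stripes = np.repeat(np.tile([200.0, 50.0], 4), 8)
frame = np.tile(stripes[:, None], (1, 48))
print(decode_rolling_shutter(frame))  # [1, 0, 1, 0, 1, 0, 1, 0]
```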
5) Commercialisation: To create a larger economic and societal impact, the academic research mentioned above has to proceed in collaboration with industry. More importantly, the future success of VLC calls for joint efforts from both the information industry and the illumination industry, driven by demands from verticals such as health, automotive, manufacturing and entertainment. The inclusion of mobile phones within the ecosystem would be highly desirable for wide public acceptance. One promising area for real-world VLC to grow is the future pervasive IoT in consumer electronics, retail, warehouses, offices and hospitals. For example, Philips has recently commercialised VLC aided indoor localisation for a hypermarket in France 1 . They have also teamed up with Cisco on an IoT project called 'Digital Ceiling' 2 , which connects all of a building's services in a single converged network, where empowered LEDs can be used to collect, send and analyse data.
1 http://www.lighting.philips.co.uk/systems/themes/led-based-indoor-positioning.html
2 http://www.cisco.com/c/en/us/solutions/digital-ceiling/partner-ecosystem.html

IV. CONCLUSION

This paper has provided an overview of the localisation, communications and networking aspects of VLC, along with discussions of various challenges and opportunities. It is envisioned that future research on VLC will open up new scientific areas in the wider academic context, which will be extremely beneficial to the whole community. VLC will create a unique opportunity to innovate in all areas related to the future evolution, deployment and operation of ultra-dense small-cell networks, where potentially hundreds of people and thousands of appliances will be connected. More broadly, VLC will stimulate commercial solutions supporting new killer applications, facilitating innovations in entertainment, collaborative design and vehicular networking, etc. By simultaneously exploiting illumination and communications, VLC will directly contribute towards the all-important 'green' agenda, which has been one of the salient topics of the 21st century.

REFERENCES

[1] M. K. Weldon, The Future X Network: A Bell Labs Perspective, 1st ed. CRC Press, 2016.
[2] L. Hanzo, H. Haas, S. Imre, D. O'Brien, M. Rupp, and L. Gyongyosi, "Wireless myths, realities, and futures: From 3G/4G to optical and quantum wireless," Proceedings of the IEEE, vol. 100, no. Special Centennial Issue, pp. 1853–1888, May 2012.
[3] T. S. Rappaport, S. Sun, R. Mayzus, H. Zhao, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi, and F. Gutierrez, "Millimeter wave mobile communications for 5G cellular: It will work!" IEEE Access, vol. 1, pp. 335–349, 2013.
[4] S. Wu, H. Wang, and C. H. Youn, "Visible light communications for 5G wireless networking systems: from fixed to mobile communications," IEEE Network, vol. 28, no. 6, pp. 41–45, Nov 2014.
[5] A. G. Bell, "Upon the production and reproduction of sound by light," Journal of the Society of Telegraph Engineers, vol. 9, no. 34, pp. 404–426, 1880.
[6] D. K. Jackson, T. K. Buffaloe, and S. B. Leeb, "Fiat lux: a fluorescent lamp digital transceiver," IEEE Transactions on Industry Applications, vol. 34, no. 3, pp. 625–630, May 1998.
[7] J. Y. Tsao, M. E. Coltrin, M. H. Crawford, and J. A. Simmons, "Solid-state lighting: An integrated human factors, technology, and economic perspective," Proceedings of the IEEE, vol. 98, no. 7, pp. 1162–1179, July 2010.
[8] T. Komine and M. Nakagawa, "Fundamental analysis for visible-light communication system using LED lights," IEEE Transactions on Consumer Electronics, vol. 50, no. 1, pp. 100–107, Feb 2004.
[9] H. Burchardt, N. Serafimovski, D. Tsonev, S. Videv, and H. Haas, "VLC: Beyond point-to-point communication," IEEE Communications Magazine, vol. 52, no. 7, pp. 98–105, July 2014.
[10] J. Lim, "Ubiquitous 3D positioning systems by LED-based visible light communications," IEEE Wireless Communications, vol. 22, no. 2, pp. 80–85, Apr 2015.
[11] P. H. Pathak, X. Feng, P. Hu, and P. Mohapatra, "Visible light communication, networking, and sensing: A survey, potential and challenges," IEEE Communications Surveys and Tutorials, vol. 17, no. 4, pp. 2047–2077, Fourth Quarter 2015.
[12] S. Hranilovic, L. Lampe, and S. Hosur, "Visible light communications: the road to standardization and commercialization (Part 1) [Guest Editorial]," IEEE Communications Magazine, vol. 51, no. 12, pp. 24–25, Dec 2013.
[13] S. Hranilovic, L. Lampe, S. Hosur, and R. D. Roberts, "Visible light communications: the road to standardization and commercialization (Part 2) [Guest Editorial]," IEEE Communications Magazine, vol. 52, no. 7, pp. 62–63, July 2014.
[14] N. Chi, H. Haas, M. Kavehrad, T. D. C. Little, and X.-L. Huang, "Visible light communications: demand factors, benefits and opportunities [Guest Editorial]," IEEE Wireless Communications, vol. 22, no. 2, pp. 5–7, Apr 2015.
[15] M. H. Crawford, "LEDs for solid-state lighting: Performance challenges and recent advances," IEEE Journal of Selected Topics in Quantum Electronics, vol. 15, no. 4, pp. 1028–1040, July 2009.
[16] C. Lee, C. Zhang, M. Cantore, R. M. Farrell, S. H. Oh, T. Margalith, J. S. Speck, S. Nakamura, J. E. Bowers, and S. P. DenBaars, "4 Gbps direct modulation of 450 nm GaN laser for high-speed visible light communication," Opt. Express, vol. 23, no. 12, pp. 16 232–16 237, Jun 2015.
[17] I. C. Lu, C. H. Yeh, D. Z. Hsu, and C. W. Chow, "Utilization of 1-GHz VCSEL for 11.1-Gbps OFDM VLC wireless communication," IEEE Photonics Journal, vol. 8, no. 3, pp. 1–6, Jun 2016.
[18] S. Rajbhandari, H. Chun, G. Faulkner, K. Cameron, A. V. N. Jalajakumari, R. Henderson, D. Tsonev, M. Ijaz, Z. Chen, H. Haas, E. Xie, J. J. D. McKendry, J. Herrnsdorf, E. Gu, M. D. Dawson, and D. O'Brien, "High-speed integrated visible light communication system: Device constraints and design considerations," IEEE Journal on Selected Areas in Communications, vol. 33, no. 9, pp. 1750–1757, Sep 2015.
[19] D. A. Steigerwald, J. C. Bhat, D. Collins, R. M. Fletcher, M. O. Holcomb, M. J. Ludowise, P. S. Martin, and S. L. Rudaz, "Illumination with solid state lighting technology," IEEE Journal of Selected Topics in Quantum Electronics, vol. 8, no. 2, pp. 310–320, Mar 2002.
[20] H. L. Minh, D. O'Brien, G. Faulkner, L. Zeng, K. Lee, D. Jung, and Y. Oh, "High-speed visible light communications using multiple-resonant equalization," IEEE Photonics Technology Letters, vol. 20, no. 14, pp. 1243–1245, July 2008.
[21] H. L. Minh, D. O'Brien, G. Faulkner, L. Zeng, K. Lee, D. Jung, Y. Oh, and E. T. Won, "100-Mb/s NRZ visible light communications using a postequalized white LED," IEEE Photonics Technology Letters, vol. 21, no. 15, pp. 1063–1065, Aug 2009.
[22] M. Biagi, T. Borogovac, and T. D. C. Little, "Adaptive receiver for indoor visible light communications," Journal of Lightwave Technology, vol. 31, no. 23, pp. 3676–3686, Dec 2013.
[23] S. Muthu, F. J. P. Schuurmans, and M. D. Pashley, "Red, green, and blue LEDs for white light illumination," IEEE Journal of Selected Topics in Quantum Electronics, vol. 8, no. 2, pp. 333–338, Mar 2002.
[24] G. Cossu, A. M. Khalid, P. Choudhury, R. Corsini, and E. Ciaramella, "3.4 Gbit/s visible optical wireless transmission based on RGB LED," Opt. Express, vol. 20, no. 26, pp. B501–B506, Dec 2012.
[25] J. J. D. McKendry, D. Massoubre, S. Zhang, B. R. Rae, R. P. Green, E. Gu, R. K. Henderson, A. E. Kelly, and M. D. Dawson, "Visible-light communications using a CMOS-controlled micro-light-emitting-diode array," Journal of Lightwave Technology, vol. 30, no. 1, pp. 61–67, Jan 2012.
[26] R. X. G. Ferreira, E. Xie, J. J. D. McKendry, S. Rajbhandari, H. Chun, G. Faulkner, S. Watson, A. E. Kelly, E. Gu, R. V. Penty, I. H. White, D. C. O'Brien, and M. D. Dawson, "High bandwidth GaN-based micro-LEDs for multi-Gb/s visible light communications," IEEE Photonics Technology Letters, vol. 28, no. 19, pp. 2023–2026, Oct 2016.
[27] K. D. Dambul, D. C. O'Brien, and G. Faulkner, "Indoor optical wireless MIMO system with an imaging receiver," IEEE Photonics Technology Letters, vol. 23, no. 2, pp. 97–99, Jan 2011.
[28] S. Hranilovic and F. R. Kschischang, “A pixelated MIMO wireless
optical communication system,” IEEE Journal of Selected Topics in
Quantum Electronics, vol. 12, no. 4, pp. 859–874, July 2006.
[29] R. Boubezari, H. L. Minh, Z. Ghassemlooy, and A. Bouridane,
“Smartphone camera based visible light communication,” Journal of
Lightwave Technology, vol. 34, no. 17, pp. 4121–4127, Sep 2016.
[30] C. W. Chow, C. Y. Chen, and S. H. Chen, “Enhancement of signal
performance in LED visible light communications using mobile phone
camera,” IEEE Photonics Journal, vol. 7, no. 5, pp. 1–7, Oct 2015.
[31] J. Hao, Y. Yang, and J. Luo, “CeilingCast: Energy efficient and
location-bound broadcast through LED-camera communication,” in
IEEE INFOCOM 2016, Apr 2016, pp. 1–9.
[32] T. Q. Wang, Y. A. Sekercioglu, and J. Armstrong, “Analysis of an
optical wireless receiver using a hemispherical lens with application in
MIMO visible light communications,” Journal of Lightwave Technology, vol. 31, no. 11, pp. 1744–1754, Jun 2013.
[33] T. Chen, L. Liu, B. Tu, Z. Zheng, and W. Hu, “High-spatial-diversity
imaging receiver using Fisheye lens for indoor MIMO VLCs,” IEEE
Photonics Technology Letters, vol. 26, no. 22, pp. 2260–2263, Nov
2014.
[34] T. Q. Wang, R. J. Green, and J. Armstrong, “MIMO optical wireless
communications using ACO-OFDM and a prism-array receiver,” IEEE
Journal on Selected Areas in Communications, vol. 33, no. 9, pp. 1959–
1971, Sep 2015.
[35] Y. Liu, H. Y. Chen, K. Liang, C. W. Hsu, C. W. Chow, and C. H. Yeh,
“Visible light communication using receivers of camera image sensor
and solar cell,” IEEE Photonics Journal, vol. 8, no. 1, pp. 1–7, Feb
2016.
[36] W. A. Cahyadi, Y. H. Kim, Y. H. Chung, and C. J. Ahn, “Mobile
phone camera-based indoor visible light communications with rotation
compensation,” IEEE Photonics Journal, vol. 8, no. 2, pp. 1–8, Apr
2016.
[37] T. Q. Wang, C. He, and J. Armstrong, “Performance analysis of
aperture-based receivers for MIMO IM/DD visible light communications,” Journal of Lightwave Technology, early access.
[38] Z. Zhou, C. Chen, and M. Kavehrad, "Impact analyses of high-order light reflections on indoor optical wireless channel model and
calibration,” Journal of Lightwave Technology, vol. 32, no. 10, pp.
2003–2011, May 2014.
[39] K. Lee, H. Park, and J. R. Barry, “Indoor channel characteristics for
visible light communications,” IEEE Communications Letters, vol. 15,
no. 2, pp. 217–219, Feb 2011.
[40] F. Miramirkhani and M. Uysal, “Channel modeling and characterization
for visible light communications,” IEEE Photonics Journal, vol. 7,
no. 6, pp. 1–16, Dec 2015.
[41] P. Chvojka, S. Zvanovec, P. A. Haigh, and Z. Ghassemlooy, “Channel
characteristics of visible light communications within dynamic indoor
environment,” Journal of Lightwave Technology, vol. 33, no. 9, pp.
1719–1725, May 2015.
[42] Q. Gao, S. Hu, C. Gong, and Z. Xu, “Modulation designs for visible
light communications with signal-dependent noise,” Journal of Lightwave Technology, vol. 34, no. 23, pp. 5516–5525, Dec 2016.
[43] J. L. Volakis, A. J. O’Brien, and C. C. Chen, “Small and adaptive
antennas and arrays for GNSS applications,” Proceedings of the IEEE,
vol. 104, no. 6, pp. 1221–1232, Jun 2016.
[44] H. Liu, H. Darabi, P. Banerjee, and J. Liu, “Survey of wireless indoor
positioning techniques and systems,” IEEE Transactions on Systems,
Man, and Cybernetics, Part C (Applications and Reviews), vol. 37,
no. 6, pp. 1067–1080, Nov 2007.
[45] J. Armstrong, Y. A. Sekercioglu, and A. Neild, “Visible light positioning: a roadmap for international standardization,” IEEE Communications Magazine, vol. 51, no. 12, pp. 68–73, Dec 2013.
[46] J. Hu, C. Gong, and Z. Xu, “Demonstration of a robot controlling and
positioning system based on visible light,” in 2016 8th International
Conference on Wireless Communications Signal Processing (WCSP),
Oct 2016, pp. 1–6.
[47] D. Zheng, G. Chen, and J. A. Farrell, “Joint measurement and trajectory recovery in visible light communication,” IEEE Transactions on
Control Systems Technology, vol. 25, no. 1, pp. 247–261, Jan 2017.
[48] T. Q. Wang, Y. A. Sekercioglu, A. Neild, and J. Armstrong, “Position
accuracy of time-of-arrival based ranging using visible light with
application in indoor localization systems,” Journal of Lightwave
Technology, vol. 31, no. 20, pp. 3302–3308, Oct 2013.
[49] S. Y. Jung, S. Hann, and C. S. Park, “TDOA-based optical wireless
indoor localization using LED ceiling lamps,” IEEE Transactions on
Consumer Electronics, vol. 57, no. 4, pp. 1592–1597, Nov 2011.
[50] X. Zhang, J. Duan, Y. Fu, and A. Shi, “Theoretical accuracy analysis
of indoor visible light communication positioning system based on
received signal strength indicator,” Journal of Lightwave Technology,
vol. 32, no. 21, pp. 4180–4186, Nov 2014.
[51] S. H. Yang, E. M. Jeong, and S. K. Han, “Indoor positioning based
on received optical power difference by angle of arrival,” Electronics
Letters, vol. 50, no. 1, pp. 49–51, Jan 2014.
[52] S. H. Yang, H. S. Kim, Y. H. Son, and S. K. Han, “Three-dimensional
visible light indoor localization using AOA and RSS with multiple
optical receivers,” Journal of Lightwave Technology, vol. 32, no. 14,
pp. 2480–2485, July 2014.
[53] M. F. Keskin and S. Gezici, “Comparative theoretical analysis of
distance estimation in visible light positioning systems,” Journal of
Lightwave Technology, vol. 34, no. 3, pp. 854–865, Feb 2016.
[54] A. Şahin, Y. S. Eroğlu, İ. Güvenç, N. Pala, and M. Yüksel, "Hybrid 3D localization for visible light communication systems," Journal of
Lightwave Technology, vol. 33, no. 22, pp. 4589–4599, Nov 2015.
[55] S. Feng, X. Li, R. Zhang, M. Jiang, and L. Hanzo, “Hybrid positioning aided amorphous-cell assisted user-centric visible light downlink
techniques,” IEEE Access, vol. 4, pp. 2705–2713, 2016.
[56] J. Y. Kim, S. H. Yang, Y. H. Son, and S. K. Han, “High-resolution
indoor positioning using light emitting diode visible light and camera
image sensor,” IET Optoelectronics, vol. 10, no. 5, pp. 184–192, 2016.
[57] M. Yasir, S. W. Ho, and B. N. Vellambi, “Indoor positioning system using visible light and accelerometer,” Journal of Lightwave Technology,
vol. 32, no. 19, pp. 3306–3316, Oct 2014.
[58] P. Huynh and M. Yoo, “VLC-based positioning system for an indoor
environment using an image sensor and an accelerometer sensor,”
Sensors, vol. 16, no. 6, p. 783, 2016.
[59] W. Gu, M. Aminikashani, P. Deng, and M. Kavehrad, “Impact of multipath reflections on the performance of indoor visible light positioning
systems,” Journal of Lightwave Technology, vol. 34, no. 10, pp. 2578–
2587, May 2016.
[60] M. Biagi, S. Pergoloni, and A. M. Vegni, “LAST: A framework
to localize, access, schedule, and transmit in indoor VLC systems,”
Journal of Lightwave Technology, vol. 33, no. 9, pp. 1872–1887, May
2015.
[61] B. Fahs, A. J. Chowdhury, and M. M. Hella, “A 12-m 2.5-Gb/s lighting
compatible integrated receiver for OOK visible light communication
links,” Journal of Lightwave Technology, vol. 34, no. 16, pp. 3768–
3775, Aug 2016.
[62] S. He, G. Ren, Z. Zhong, and Y. Zhao, “M-ary variable period
modulation for indoor visible light communication system,” IEEE
Communications Letters, vol. 17, no. 7, pp. 1325–1328, July 2013.
[63] J. H. Yoo, B. W. Kim, and S. Y. Jung, “Modelling and analysis of M-ary
variable pulse position modulation for visible light communications,”
IET Optoelectronics, vol. 9, no. 5, pp. 184–190, 2015.
[64] E. Monteiro and S. Hranilovic, "Design and implementation of color-shift keying for visible light communications," Journal of Lightwave
Technology, vol. 32, no. 10, pp. 2053–2060, May 2014.
[65] J. Jiang, R. Zhang, and L. Hanzo, "Analysis and design of three-stage concatenated color-shift keying," IEEE Transactions on Vehicular
Technology, vol. 64, no. 11, pp. 5126–5136, Nov 2015.
[66] Q. Gao, R. Wang, Z. Xu, and Y. Hua, "DC-informative joint color-frequency modulation for visible light communications," Journal of
Lightwave Technology, vol. 33, no. 11, pp. 2181–2188, Jun 2015.
[67] S. Pergoloni, M. Biagi, S. Rinauro, S. Colonnese, R. Cusani, and
G. Scarano, “Merging color shift keying and complementary pulse
position modulation for visible light illumination and communication,”
Journal of Lightwave Technology, vol. 33, no. 1, pp. 192–200, Jan
2015.
[68] H. Chun, S. Rajbhandari, G. Faulkner, D. Tsonev, E. Xie, J. J. D.
McKendry, E. Gu, M. D. Dawson, D. C. O’Brien, and H. Haas,
“LED based wavelength division multiplexed 10 Gb/s visible light
communications,” Journal of Lightwave Technology, vol. 34, no. 13,
pp. 3047–3052, July 2016.
[69] J. Armstrong, “OFDM for optical communications,” Journal of Lightwave Technology, vol. 27, no. 3, pp. 189–204, Feb 2009.
[70] R. Mesleh, H. Elgala, and H. Haas, “On the performance of different
OFDM based optical wireless communication systems,” IEEE/OSA
Journal of Optical Communications and Networking, vol. 3, no. 8,
pp. 620–628, Aug 2011.
[71] N. Fernando, Y. Hong, and E. Viterbo, “Flip-OFDM for unipolar communication systems,” IEEE Transactions on Communications, vol. 60,
no. 12, pp. 3726–3733, Dec 2012.
[72] D. Tsonev, S. Videv, and H. Haas, “Unlocking spectral efficiency in
intensity modulation and direct detection systems,” IEEE Journal on
Selected Areas in Communications, vol. 33, no. 9, pp. 1758–1770, Sep 2015.
[73] R. Zhang and L. Hanzo, "Multi-layer modulation for intensity-modulated direct-detection optical OFDM," IEEE/OSA Journal of Optical Communications and Networking, vol. 5, no. 12, pp. 1402–1412, Dec 2013.
[74] B. Ranjha and M. Kavehrad, "Hybrid asymmetrically clipped OFDM-based IM/DD optical wireless system," IEEE/OSA Journal of Optical Communications and Networking, vol. 6, no. 4, pp. 387–396, Apr 2014.
[75] Q. Gao, C. Gong, S. Li, and Z. Xu, "DC-informative modulation for visible light communications under lighting constraints," IEEE Wireless Communications, vol. 22, no. 2, pp. 54–60, Apr 2015.
[76] J. Wang, Y. Xu, X. Ling, R. Zhang, Z. Ding, and C. Zhao, "PAPR analysis for OFDM visible light communication," Optics Express, vol. 24, no. 24, pp. 27 457–27 474, Nov 2016.
[77] S. Dimitrov, S. Sinanovic, and H. Haas, "Clipping noise in OFDM-based optical wireless communication systems," IEEE Transactions on Communications, vol. 60, no. 4, pp. 1072–1081, Apr 2012.
[78] S. Dimitrov and H. Haas, "Information rate of OFDM-based optical wireless communication systems with nonlinear distortion," Journal of Lightwave Technology, vol. 31, no. 6, pp. 918–929, Mar 2013.
[79] H. Qian, S. J. Yao, S. Z. Cai, and T. Zhou, "Adaptive postdistortion for nonlinear LEDs in visible light communications," IEEE Photonics Journal, vol. 6, no. 4, pp. 1–8, Aug 2014.
[80] K. Ying, Z. Yu, R. J. Baxley, H. Qian, G. K. Chang, and G. T. Zhou, "Nonlinear distortion mitigation in visible light communications," IEEE Wireless Communications, vol. 22, no. 2, pp. 36–45, Apr 2015.
[81] M. S. A. Mossaad, S. Hranilovic, and L. Lampe, "Visible light communications using OFDM and multiple LEDs," IEEE Transactions on Communications, vol. 63, no. 11, pp. 4304–4313, Nov 2015.
[82] L. Wu, Z. Zhang, J. Dang, and H. Liu, "Adaptive modulation schemes for visible light communications," Journal of Lightwave Technology, vol. 33, no. 1, pp. 117–125, Jan 2015.
[83] J. Gancarz, H. Elgala, and T. D. C. Little, "Impact of lighting requirements on VLC systems," IEEE Communications Magazine, vol. 51, no. 12, pp. 34–41, Dec 2013.
[84] K. Kim, K. Lee, and K. Lee, "Appropriate RLL coding scheme for effective dimming control in VLC," Electronics Letters, vol. 52, no. 19, pp. 1622–1624, 2016.
[85] X. Lu and J. L. Tiffany, "Achieving FEC and RLL for VLC: A concatenated convolutional-Miller coding mechanism," IEEE Photonics Technology Letters, vol. 28, no. 9, pp. 1030–1033, May 2016.
[86] K. Lee and H. Park, "Modulations for visible light communications with dimming control," IEEE Photonics Technology Letters, vol. 23, no. 16, pp. 1136–1138, Aug 2011.
[87] F. Zafar, D. Karunatilaka, and R. Parthiban, "Dimming schemes for visible light communication: the state of research," IEEE Wireless Communications, vol. 22, no. 2, pp. 29–35, Apr 2015.
[88] S. Rajagopal, R. D. Roberts, and S. K. Lim, "IEEE 802.15.7 visible light communication: modulation schemes and dimming support," IEEE Communications Magazine, vol. 50, no. 3, pp. 72–82, Mar 2012.
[89] Q. Wang, Z. Wang, and L. Dai, "Asymmetrical hybrid optical OFDM for visible light communications with dimming control," IEEE Photonics Technology Letters, vol. 27, no. 9, pp. 974–977, May 2015.
[90] Y. Yang, Z. Zeng, J. Cheng, and C. Guo, "An enhanced DCO-OFDM scheme for dimming control in visible light communication systems," IEEE Photonics Journal, vol. 8, no. 3, pp. 1–13, Jun 2016.
[91] S. H. Lee and J. K. Kwon, "Turbo code-based error correction scheme for dimmable visible light communication systems," IEEE Photonics Technology Letters, vol. 24, no. 17, pp. 1463–1465, Sep 2012.
[92] S. Kim and S. Y. Jung, "Modified Reed–Muller coding scheme made from the bent function for dimmable visible light communications," IEEE Photonics Technology Letters, vol. 25, no. 1, pp. 11–13, Jan 2013.
[93] S. Kim, "Adaptive FEC codes suitable for variable dimming values in visible light communication," IEEE Photonics Technology Letters, vol. 27, no. 9, pp. 967–969, May 2015.
[94] S. Zhao, "A serial concatenation-based coding scheme for dimmable visible light communication systems," IEEE Communications Letters, vol. 20, no. 10, pp. 1951–1954, Oct 2016.
[95] A. H. Azhar, T. A. Tran, and D. O'Brien, "A Gigabit/s indoor wireless transmission using MIMO-OFDM visible-light communications," IEEE Photonics Technology Letters, vol. 25, no. 2, pp. 171–174, Jan 2013.
[96] L. Zeng, D. C. O'Brien, H. L. Minh, G. E. Faulkner, K. Lee, D. Jung, Y. Oh, and E. T. Won, "High data rate multiple input multiple output (MIMO) optical wireless communications using white LED lighting," IEEE Journal on Selected Areas in Communications, vol. 27, no. 9, pp. 1654–1662, Dec 2009.
[97] C. He, T. Q. Wang, and J. Armstrong, "Performance of optical receivers using photodetectors with different fields of view in a MIMO ACO-OFDM system," Journal of Lightwave Technology, vol. 33, no. 23, pp. 4957–4967, Dec 2015.
[98] A. Nuwanpriya, S. W. Ho, and C. S. Chen, "Indoor MIMO visible light communications: Novel angle diversity receivers for mobile users," IEEE Journal on Selected Areas in Communications, vol. 33, no. 9, pp. 1780–1792, Sep 2015.
[99] P. F. Mmbaga, J. Thompson, and H. Haas, "Performance analysis of indoor diffuse VLC MIMO channels using angular diversity detectors," Journal of Lightwave Technology, vol. 34, no. 4, pp. 1254–1266, Feb 2016.
[100] M. Biagi, A. M. Vegni, S. Pergoloni, P. M. Butala, and T. D. C. Little, "Trace-orthogonal PPM-space time block coding under rate constraints for visible light communication," Journal of Lightwave Technology, vol. 33, no. 2, pp. 481–494, Jan 2015.
[101] W. O. Popoola, E. Poves, and H. Haas, "Error performance of generalised space shift keying for indoor visible light communications," IEEE Transactions on Communications, vol. 61, no. 5, pp. 1968–1976, May 2013.
[102] A. T. Hussein, M. T. Alresheedi, and J. M. H. Elmirghani, "Fast and efficient adaptation techniques for visible light communication systems," IEEE/OSA Journal of Optical Communications and Networking, vol. 8, no. 6, pp. 382–397, Jun 2016.
[103] A. Mostafa and L. Lampe, "Physical-layer security for MISO visible light communication channels," IEEE Journal on Selected Areas in Communications, vol. 33, no. 9, pp. 1806–1818, Sept 2015.
[104] ——, "Optimal and robust beamforming for secure transmission in MISO visible-light communication links," IEEE Transactions on Signal Processing, vol. 64, no. 24, pp. 6501–6516, Dec 2016.
[105] S. Ma, Z. L. Dong, H. Li, Z. Lu, and S. Li, "Optimal and robust secure beamformer for indoor MISO visible light communication," Journal of Lightwave Technology, vol. 34, no. 21, pp. 4988–4998, Nov 2016.
[106] B. Li, J. Wang, R. Zhang, H. Shen, C. Zhao, and L. Hanzo, "Multiuser MISO transceiver design for indoor downlink visible light communication under per-LED optical power constraints," IEEE Photonics Journal, vol. 7, no. 4, pp. 1–15, Aug 2015.
[107] H. Haas, L. Yin, Y. Wang, and C. Chen, "What is LiFi?" Journal of Lightwave Technology, vol. 34, no. 6, pp. 1533–1544, Mar 2016.
[108] Y. Hou, S. Xiao, H. Zheng, and W. Hu, "Multiple access scheme based on block encoding time division multiplexing in an indoor positioning system using visible light," IEEE/OSA Journal of Optical Communications and Networking, vol. 7, no. 5, pp. 489–495, May 2015.
[109] A. Chaaban, Z. Rezki, and M. S. Alouini, "Fundamental limits of parallel optical wireless channels: Capacity results and outage formulation," IEEE Transactions on Communications, early access.
[110] D. Bykhovsky and S. Arnon, "Multiple access resource allocation in visible light communication systems," Journal of Lightwave Technology, vol. 32, no. 8, pp. 1594–1600, Apr 2014.
[111] H. Marshoud, V. M. Kapinas, G. K. Karagiannidis, and S. Muhaidat, "Non-orthogonal multiple access for visible light communications," IEEE Photonics Technology Letters, vol. 28, no. 1, pp. 51–54, Jan 2016.
[112] L. Yin, W. O. Popoola, X. Wu, and H. Haas, "Performance evaluation of non-orthogonal multiple access in visible light communication," IEEE Transactions on Communications, vol. 64, no. 12, pp. 5162–5175, Dec 2016.
[113] R. Zhang and L. Hanzo, "A unified treatment of superposition coding aided communications: Theory and practice," IEEE Communications Surveys and Tutorials, vol. 13, no. 3, pp. 503–520, Third Quarter 2011.
[114] O. González, M. F. Guerra-Medina, I. R. Martín, F. Delgado, and R. Pérez-Jiménez, "Adaptive WHTS-assisted SDMA-OFDM scheme for fair resource allocation in multi-user visible light communications," IEEE/OSA Journal of Optical Communications and Networking, vol. 8, no. 6, pp. 427–440, Jun 2016.
[115] Y. A. Chen, Y. T. Chang, Y. C. Tseng, and W. T. Chen, "A framework for simultaneous message broadcasting using CDMA-based visible light communications," IEEE Sensors Journal, vol. 15, no. 12, pp. 6819–6827, Dec 2015.
[116] M. H. Shoreh, A. Fallahpour, and J. A. Salehi, "Design concepts and performance analysis of multicarrier CDMA for indoor visible light communications," IEEE/OSA Journal of Optical Communications and Networking, vol. 7, no. 6, pp. 554–562, Jun 2015.
[117] C. G. Gavrincea, J. Baranda, and P. Henarejos, “Rapid prototyping
of standard-compliant visible light communications system,” IEEE
Communications Magazine, vol. 52, no. 7, pp. 80–87, July 2014.
[118] R. Zhang and L. Hanzo, “Wireless cellular networks,” IEEE Vehicular
Technology Magazine, vol. 5, no. 4, pp. 31–39, Dec 2010.
[119] R. Zhang, J. Wang, Z. Wang, Z. Xu, C. Zhao, and L. Hanzo, “Visible
light communications in heterogeneous networks: Paving the way for
user-centric design,” IEEE Wireless Communications, vol. 22, no. 2,
pp. 8–16, Apr 2015.
[120] R. Zhang and L. Hanzo, “Cooperative downlink multicell preprocessing
relying on reduced-rate back-haul data exchange,” IEEE Transactions
on Vehicular Technology, vol. 60, no. 2, pp. 539–545, Feb 2011.
[121] X. Li, F. Jin, R. Zhang, J. Wang, Z. Xu, and L. Hanzo, “Users first:
User-centric cluster formation for interference-mitigation in visible-light networks," IEEE Transactions on Wireless Communications,
vol. 15, no. 1, pp. 39–53, Jan 2016.
[122] R. Zhang, H. Claussen, H. Haas, and L. Hanzo, “Energy efficient visible
light communications relying on amorphous cells,” IEEE Journal on
Selected Areas in Communications, vol. 34, no. 4, pp. 894–906, Apr
2016.
[123] X. Li, Y. Huo, R. Zhang, and L. Hanzo, “User-centric visible light
communications for energy-efficient scalable video streaming,” IEEE
Transactions on Green Communications and Networking, early access.
[124] S. Pergoloni, M. Biagi, S. Colonnese, R. Cusani, and G. Scarano,
“Optimized LEDs footprinting for indoor visible light communication
networks,” IEEE Photonics Technology Letters, vol. 28, no. 4, pp. 532–
535, Feb 2016.
[125] H. Ma, L. Lampe, and S. Hranilovic, “Integration of indoor visible
light and power line communication systems,” in 2013 IEEE 17th
International Symposium on Power Line Communications and Its
Applications, March 2013, pp. 291–296.
[126] J. Song, W. Ding, F. Yang, H. Yang, B. Yu, and H. Zhang, “An
indoor broadband broadcasting system based on PLC and VLC,” IEEE
Transactions on Broadcasting, vol. 61, no. 2, pp. 299–308, Jun 2015.
[127] H. Ma, L. Lampe, and S. Hranilovic, “Coordinated broadcasting for
multiuser indoor visible light communication systems,” IEEE Transactions on Communications, vol. 63, no. 9, pp. 3313–3324, Sept 2015.
[128] X. Bao, X. Zhu, T. Song, and Y. Ou, “Protocol design and capacity
analysis in hybrid network of visible light communication and OFDMA
systems,” IEEE Transactions on Vehicular Technology, vol. 63, no. 4,
pp. 1770–1778, May 2014.
[129] S. Shao, A. Khreishah, M. Ayyash, M. B. Rahaim, H. Elgala, V. Jungnickel, D. Schulz, T. D. C. Little, J. Hilt, and R. Freund, “Design
and analysis of a visible-light-communication enhanced WiFi system,”
IEEE/OSA Journal of Optical Communications and Networking, vol. 7,
no. 10, pp. 960–973, Oct 2015.
[130] X. Li, R. Zhang, and L. Hanzo, “Cooperative load balancing in
hybrid visible light communications and WiFi,” IEEE Transactions on
Communications, vol. 63, no. 4, pp. 1319–1329, Apr 2015.
[131] F. Jin, R. Zhang, and L. Hanzo, "Resource allocation under delay-guarantee constraints for heterogeneous visible-light and RF femtocell,"
IEEE Transactions on Wireless Communications, vol. 14, no. 2, pp.
1020–1034, Feb 2015.
[132] Y. Wang and H. Haas, “Dynamic load balancing with handover in
hybrid Li-Fi and Wi-Fi networks,” Journal of Lightwave Technology,
vol. 33, no. 22, pp. 4671–4682, Nov 2015.
[133] M. Wang, J. Wu, W. Yu, H. Wang, J. Li, J. Shi, and C. Luo,
“Efficient coding modulation and seamless rate adaptation for visible
light communications,” IEEE Wireless Communications, vol. 22, no. 2,
pp. 86–93, Apr 2015.
[134] I. T. Haque and N. Abu-Ghazaleh, “Wireless software defined networking: A survey and taxonomy,” IEEE Communications Surveys and
Tutorials, vol. 18, no. 4, pp. 2713–2737, Fourth Quarter 2016.
[135] A. S. Thyagaturu, A. Mercian, M. P. McGarry, M. Reisslein, and
W. Kellerer, “Software defined optical networks (SDONs): A comprehensive survey,” IEEE Communications Surveys and Tutorials, vol. 18,
no. 4, pp. 2738–2786, Fourth Quarter 2016.
[136] S. Galli, K. J. Kerpez, H. Mariotte, and F. Moulin, “PLC-to-DSL
interference: Statistical model and impact on VDSL2, vectoring, and
G.Fast,” IEEE Journal on Selected Areas in Communications, vol. 34,
no. 7, pp. 1992–2005, July 2016.
[137] F. Effenberger, “Future broadband access networks [point of view],”
Proceedings of the IEEE, vol. 104, no. 11, pp. 2078–2081, Nov 2016.
[138] G. Dede, T. Kamalakis, and D. Varoutas, “Evaluation of optical
wireless technologies in home networking: An analytical hierarchy
process approach,” IEEE/OSA Journal of Optical Communications and
Networking, vol. 3, no. 11, pp. 850–859, Nov 2011.
[139] M. Ayyash, H. Elgala, A. Khreishah, V. Jungnickel, T. Little, S. Shao,
M. Rahaim, D. Schulz, J. Hilt, and R. Freund, “Coexistence of WiFi
and LiFi toward 5G: concepts, opportunities, and challenges,” IEEE
Communications Magazine, vol. 54, no. 2, pp. 64–71, Feb 2016.
[140] M. Maier, M. Chowdhury, B. P. Rimal, and D. P. Van, “The tactile
internet: vision, recent progress, and open challenges,” IEEE Communications Magazine, vol. 54, no. 5, pp. 138–145, May 2016.
[141] K. Cui, G. Chen, Z. Xu, and R. D. Roberts, “Traffic light to vehicle
visible light communication channel characterization,” Appl. Opt.,
vol. 51, no. 27, pp. 6594–6605, Sep 2012.
[142] M. Uysal, Z. Ghassemlooy, A. Bekkali, A. Kadri, and H. Menouar,
“Visible light communication for vehicular networking: Performance
study of a V2V system using a measured headlamp beam pattern
model,” IEEE Vehicular Technology Magazine, vol. 10, no. 4, pp. 45–
53, Dec 2015.
[143] T. Yamazato, I. Takai, H. Okada, T. Fujii, T. Yendo, S. Arai, M. Andoh,
T. Harada, K. Yasutomi, K. Kagawa, and S. Kawahito, "Image-sensor-based visible light communication for automotive applications," IEEE
Communications Magazine, vol. 52, no. 7, pp. 88–97, July 2014.
[144] Y. Goto, I. Takai, T. Yamazato, H. Okada, T. Fujii, S. Kawahito, S. Arai,
T. Yendo, and K. Kamakura, “A new automotive VLC system using
optical communication image sensor,” IEEE Photonics Journal, vol. 8,
no. 3, pp. 1–17, June 2016.
[145] T. Yamazato, M. Kinoshita, S. Arai, E. Souke, T. Yendo, T. Fujii,
K. Kamakura, and H. Okada, “Vehicle motion and pixel illumination
modeling for image sensor based visible light communication,” IEEE
Journal on Selected Areas in Communications, vol. 33, no. 9, pp. 1793–
1805, Sep 2015.
[146] A. M. Căilean, M. Dimian, V. Popa, L. Chassagne, and B. Cagneau,
“Novel DSP receiver architecture for multi-channel visible light
communications in automotive applications,” IEEE Sensors Journal,
vol. 16, no. 10, pp. 3597–3602, May 2016.
[147] M. A. Khalighi, C. Gabriel, T. Hamza, S. Bourennane, P. Léon,
and V. Rigaud, “Underwater wireless optical communication: recent
advances and remaining challenges,” in 2014 16th International Conference on Transparent Optical Networks (ICTON), July 2014, pp. 1–4.
[148] W. Liu, D. Zou, P. Wang, Z. Xu, and L. Yang, “Wavelength dependent
channel characterization for underwater optical wireless communications,” in 2014 IEEE International Conference on Signal Processing,
Communications and Computing (ICSPCC), Aug 2014, pp. 895–899.
[149] P. Wang, C. Li, B. Wang, and Z. Xu, “Real-time 25Mb/s data
transmission for underwater optical wireless communication using a
commercial blue LED and APD detection,” in Asia Communications
and Photonics Conference 2016. Optical Society of America, 2016,
p. AS2C.3.
[150] C. Li, B. Wang, P. Wang, Z. Xu, Q. Yang, and S. Yu, “Generation
and transmission of 745Mb/s OFDM signal using a single commercial
blue LED and an analog post-equalizer for underwater optical wireless
communications,” in Asia Communications and Photonics Conference
2016. Optical Society of America, 2016, p. AS3B.4.
[151] W. Huang, P. Tian, and Z. Xu, “Design and implementation of a
real-time CIM-MIMO optical camera communication system,” Opt.
Express, vol. 24, no. 21, pp. 24 567–24 579, Oct 2016.
[152] D. Li, W. Huang, and Z. Xu, “Flicker free indoor visible light
positioning system assisted by a filter and mobile phone camera,” in
2016 IEEE/CIC International Conference on Communications in China
(ICCC), July 2016, pp. 1–5.
[153] W. Du, J. C. Liando, and M. Li, “SoftLight: Adaptive visible light
communication over screen-camera links,” in IEEE INFOCOM 2016,
Apr 2016, pp. 1–9.
[154] M. Izz, Z. Li, H. Liu, Y. Chen, and F. Li, “Uber-in-light: Unobtrusive
visible light communication leveraging complementary color channel,”
in IEEE INFOCOM 2016, Apr 2016, pp. 1–9.
[155] B. Zhang, K. Ren, G. Xing, X. Fu, and C. Wang, “SBVLC: Secure
barcode-based visible light communication for smartphones,” IEEE
Transactions on Mobile Computing, vol. 15, no. 2, pp. 432–446, Feb
2016.
[156] H. Chun, C. J. Chiang, A. Monkman, and D. O'Brien, "A study of
illumination and communication using organic light emitting diodes,”
Journal of Lightwave Technology, vol. 31, no. 22, pp. 3511–3517, Nov
2013.
[157] P. A. Haigh, Z. Ghassemlooy, I. Papakonstantinou, F. Arca, S. F. Tedde,
O. Hayden, and E. Leitgeb, “A 1-Mb/s visible light communications
link with low bandwidth organic components,” IEEE Photonics Technology Letters, vol. 26, no. 13, pp. 1295–1298, July 2014.
[158] H. Chen, S. Li, B. Huang, Z. Xu, W. Li, G. Dong, and J. Xie, "A 1.9-Mbps OFDM-based all-organic visible light communication system," in 2016 IEEE International Conference on Communication Systems (ICCS), Dec 2016, pp. 1–6.
[159] S. Schmid, T. Bourchas, S. Mangold, and T. R. Gross, "Linux light bulbs: Enabling internet protocol connectivity for light bulb networks," in Proceedings of the 2nd International Workshop on Visible Light Communications Systems, ser. VLCS '15. New York, NY, USA: ACM, 2015, pp. 3–8.
[160] J. Li, A. Liu, G. Shen, L. Li, C. Sun, and F. Zhao, "Retro-VLC: Enabling battery-free duplex visible light communication for mobile and IoT applications," in Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, ser. HotMobile '15. New York, NY, USA: ACM, 2015, pp. 21–26.
[161] K. Warmerdam, A. Pandharipande, and D. Caicedo, "Connectivity in IoT indoor lighting systems with visible light communications," in 2015 IEEE Online Conference on Green Communications (OnlineGreenComm), Nov 2015, pp. 47–52.
[162] R. Zhang, Y. Cui, H. Claussen, H. Haas, and L. Hanzo, "Anticipatory association for indoor visible light communications: Light, follow me!" IEEE Transactions on Wireless Communications, submitted.
[163] R. Zhang, M. Biagi, L. Lampe, T. D. Little, S. Mangold, and Z. Xu, "Localisation, communication and networking with VLC: Challenges and opportunities," IEEE Journal on Selected Areas in Communications, submitted.
[164] R. Zhang, A. F. A. Rawi, L. D. Humphrey, and L. Hanzo, "Expanded constellation mapping for enhanced far-end-cross-talk cancellation in G.fast," IEEE Communications Letters, vol. 21, no. 1, pp. 56–59, Jan 2017.
[165] R. Zhang, L. L. Yang, and L. Hanzo, "Performance analysis of non-linear generalized pre-coding aided spatial modulation," IEEE Transactions on Wireless Communications, vol. 15, no. 10, pp. 6731–6741, Oct 2016.
[166] ——, "Energy pattern aided simultaneous wireless information and power transfer," IEEE Journal on Selected Areas in Communications, vol. 33, no. 8, pp. 1492–1504, Aug 2015.
[167] R. Zhang, R. G. Maunder, and L. Hanzo, "Wireless information and power transfer: from scientific hypothesis to engineering practice," IEEE Communications Magazine, vol. 53, no. 8, pp. 99–105, August 2015.
[168] R. Zhang, L. L. Yang, and L. Hanzo, "Error probability and capacity analysis of generalised pre-coding aided spatial modulation," IEEE Transactions on Wireless Communications, vol. 14, no. 1, pp. 364–375, Jan 2015.
[169] ——, "Generalised pre-coding aided spatial modulation," IEEE Transactions on Wireless Communications, vol. 12, no. 11, pp. 5434–5443, November 2013.
[170] R. Zhang and L. Hanzo, "Advances in base- and mobile-station aided cooperative wireless communications: An overview," IEEE Vehicular Technology Magazine, vol. 8, no. 1, pp. 57–69, March 2013.
[171] ——, "Multiple-source cooperation: From code-division multiplexing to variable-rate network coding," IEEE Transactions on Vehicular Technology, vol. 60, no. 3, pp. 1005–1015, March 2011.
[172] ——, "Superposition-aided delay-constrained hybrid automatic repeat request," IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 2109–2115, May 2010.
[173] R. Zhang, L. Xu, S. Chen, and L. Hanzo, "EXIT-chart-aided hybrid multiuser detector for multicarrier interleave-division multiple access," IEEE Transactions on Vehicular Technology, vol. 59, no. 3, pp. 1563–1567, March 2010.
[174] R. Zhang and L. Hanzo, "Interleaved random space time coding for multisource cooperation," IEEE Transactions on Vehicular Technology, vol. 58, no. 4, pp. 2120–2125, May 2009.
[175] ——, "Superposition-coding-aided multiplexed hybrid ARQ scheme for improved end-to-end transmission efficiency," IEEE Transactions on Vehicular Technology, vol. 58, no. 8, pp. 4681–4686, Oct 2009.
[176] ——, "Coding schemes for energy efficient multi-source cooperation aided uplink transmission," IEEE Signal Processing Letters, vol. 16, no. 5, pp. 438–441, May 2009.
[177] ——, "Iterative multiuser detection and channel decoding for DS-CDMA using harmony search," IEEE Signal Processing Letters, vol. 16, no. 10, pp. 917–920, Oct 2009.
[178] ——, "Three design aspects of multicarrier interleave division multiple access," IEEE Transactions on Vehicular Technology, vol. 57, no. 6, pp. 3607–3617, Nov 2008.
[179] ——, "Space-time coding for high-throughput interleave division multiplexing aided multi-source co-operation," Electronics Letters, vol. 44, no. 5, pp. 367–368, Feb 2008.
[180] J. Zhang, R. Zhang, R. G. Maunder, S. Chen, and L. Hanzo, “Adaptive
coding and modulation for large-scale antenna array based aeronautical
communications in the presence of co-channel interference,” IEEE
Transactions on Wireless Communications, submitted.
[181] Q. Wang, R. Zhang, L. L. Yang, and L. Hanzo, “Non-orthogonal multiple access: A unified perspective,” IEEE Wireless Communications,
submitted.
[182] S. Gupta, R. Zhang, and L. Hanzo, “Energy harvesting aided device-todevice communication in the heterogeneous two-tier downlink,” IEEE
Transactions on Communications, submitted.
[183] Y. Wang, Y. Xu, Y. Yang, X. Gao, B. Zhu, W. Cai, J. Yuan, R. Zhang,
and H. Zhu, “Simultaneous light emission and detection of InGaN/GaN
multiple quantum well diodes for in-plane visible light communication,” Optics Communications, (early access), 2016.
[184] S. Gupta, R. Zhang, and L. Hanzo, “Energy harvesting aided deviceto-device communication underlaying the cellular downlink,” IEEE
Access, (early access), 2016.
[185] F. Jin, X. Li, R. Zhang, C. Dong, and L. Hanzo, “Resource allocation
under delay-guarantee constraints for visible-light communication,”
IEEE Access, vol. 4, pp. 7301–7312, 2016.
[186] B. Li, R. Zhang, W. Xu, C. Zhao, and L. Hanzo, “Joint dimming control
and transceiver design for MIMO-aided visible light communication,”
IEEE Communications Letters, vol. 20, no. 11, pp. 2193–2196, Nov
2016.
[187] S. Gupta, S. Kumar, R. Zhang, S. Kalyani, K. Giridhar, and L. Hanzo,
“Resource allocation for D2D links in the FFR and SFR aided cellular
downlink,” IEEE Transactions on Communications, vol. 64, no. 10, pp.
4434–4448, Oct 2016.
[188] S. Gupta, R. Zhang, and L. Hanzo, “Throughput maximization for a
buffer-aided successive relaying network employing energy harvesting,” IEEE Transactions on Vehicular Technology, vol. 65, no. 8, pp.
6758–6765, Aug 2016.
[189] J. Jiang, P. Zhang, R. Zhang, S. Chen, and L. Hanzo, “Aperture
selection for ACO-OFDM in free-space optical turbulence channel,”
IEEE Transactions on Vehicular Technology, vol. 65, no. 8, pp. 6089–
6100, Aug 2016.
[190] C. Zhu, Y. Huo, J. Jiang, H. Sun, C. Dong, R. Zhang, and L. Hanzo,
“Hierarchical colour-shift-keying aided layered video streaming for the
visible light downlink,” IEEE Access, vol. 4, pp. 3127–3152, 2016.
[191] C. Zhu, Y. Huo, B. Zhang, R. Zhang, M. El-Hajjar, and L. Hanzo,
“Adaptive-truncated-HARQ-aided layered video streaming relying on
interlayer FEC coding," IEEE Transactions on Vehicular Technology,
vol. 65, no. 3, pp. 1506–1521, March 2016.
[192] F. Wu, R. Zhang, L. L. Yang, and W. Wang, “Transmitter precodingaided spatial modulation for secrecy communications,” IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 467–471, Jan 2016.
[193] J. Feng, R. Zhang, L. Hanzo, and S. X. Ng, “Cooperative medium
access control based on spectrum leasing,” IEEE Transactions on
Vehicular Technology, vol. 63, no. 1, pp. 297–307, Jan 2014.
[194] J. Zhang, F. Jin, R. Zhang, G. Li, and L. Hanzo, “Analysis and design
of distributed antenna-aided twin-layer femto- and macrocell networks
relying on fractional frequency reuse,” IEEE Transactions on Vehicular
Technology, vol. 63, no. 2, pp. 763–774, Feb 2014.
[195] F. Jin, R. Zhang, and L. Hanzo, “Fractional frequency reuse aided twinlayer femtocell networks: Analysis, design and optimization,” IEEE
Transactions on Communications, vol. 61, no. 5, pp. 2074–2085, May
2013.
[196] J. Zhang, R. Zhang, G. Li, and L. Hanzo, “Distributed antenna
systems in fractional-frequency-reuse-aided cellular networks,” IEEE
Transactions on Vehicular Technology, vol. 62, no. 3, pp. 1340–1349,
March 2013.
[197] L. Li, X. Zhou, R. Zhang, D. Zhang, and L. Hanzo, “Performance and
capacity analysis of Poisson photon-counting based iter-PIC OCDMA
systems,” Optics Express, vol. 21, no. 22, pp. 25 954–25 967, Nov 2013.
[198] X. Zhou, X. Zheng, R. Zhang, and L. Hanzo, “Chip-interleaved optical
code division multiple access relying on a photon-counting iterative
successive interference canceller for free-space optical channels,” Optics Express, vol. 21, no. 13, pp. 15 926–15 937, Jul 2013.
[199] X. Zhou, D. Zhang, R. Zhang, and L. Hanzo, “A photon-counting
spatial-diversity-and-multiplexing MIMO scheme for Poisson atmospheric channels relying on Q-ary PPM," Optics Express, vol. 20,
no. 24, pp. 26 379–26 393, Nov 2012.
[200] J. Zhang, R. Zhang, G. Li, and L. Hanzo, “Remote coalition network
elements for base station cooperation aided multicell processing,” IEEE
Transactions on Vehicular Technology, vol. 61, no. 3, pp. 1406–1415,
March 2012.
[201] J. Feng, R. Zhang, and L. Hanzo, “A spectrum leasing cooperative
medium access protocol and its stability analysis,” IEEE Transactions
on Vehicular Technology, vol. 61, no. 8, pp. 3718–3730, Oct 2012.
[202] X. Xu, R. Zhang, S. Ghafoor, and L. Hanzo, "Imperfect digital-fiber-optic-link-based cooperative distributed antennas with fractional
frequency reuse in multicell multiuser networks,” IEEE Transactions
on Vehicular Technology, vol. 60, no. 9, pp. 4439–4449, Nov 2011.
[203] N. Bonello, R. Zhang, S. Chen, and L. Hanzo, “Channel code-division
multiple access and its multilevel-structured LDPC-based instantiation,” IEEE Transactions on Vehicular Technology, vol. 58, no. 5, pp.
2549–2553, Jun 2009.
[204] ——, “Reconfigurable rateless codes,” IEEE Transactions on Wireless
Communications, vol. 8, no. 11, pp. 5592–5600, November 2009.
[205] X. Li, F. Jin, R. Zhang, and L. Hanzo, “Joint cluster formation and
user association under delay guarantees in visible-light networks,” in
2016 IEEE Global Telecommunications Conference, Dec (early access)
2016.
[206] X. Li, R. Zhang, J. Wang, and L. Hanzo, “Cell-centric and user-centric
multi-user scheduling in visible light communication aided networks,”
in 2015 IEEE International Conference on Communications, June
2015, pp. 5120–5125.
[207] J. Zhang, R. Zhang, G. Li, and L. Hanzo, “Coalition network elements
for base station cooperation,” in 2012 IEEE Vehicular Technology
Conference Fall, Sept 2012, pp. 1–5.
[208] F. Jin, R. Zhang, and L. Hanzo, “Frequency-swapping aided femtocells
in twin-layer cellular networks relying on fractional frequency reuse,”
in 2012 IEEE Wireless Communications and Networking Conference,
April 2012, pp. 3097–3101.
[209] J. Zhang, R. Zhang, X. Xu, G. Li, and L. Hanzo, “Effects of practical
impairments on cooperative distributed antennas combined with fractional frequency reuse,” in 2012 IEEE Wireless Communications and
Networking Conference, April 2012, pp. 1088–1092.
[210] X. Xu, R. Zhang, and L. Hanzo, “Digital RoF aided cooperative
distributed antennas with FFR in multicell multiuser networks,” in 2011
IEEE Vehicular Technology Conference Fall, Sept 2011, pp. 1–5.
[211] J. Feng, R. Zhang, and L. Hanzo, “Auction-style cooperative medium
access control,” in 2011 IEEE Vehicular Technology Conference Fall,
Sept 2011, pp. 1–5.
[212] J. Feng, R. Zhang, S. X. Ng, and L. Hanzo, “Relay selection for energyefficient cooperative media access control,” in 2011 IEEE Wireless
Communications and Networking Conference, March 2011, pp. 287–
292.
[213] R. Zhang and L. Hanzo, “Variable-rate network coding for multi-source
cooperation,” in 2011 IEEE Wireless Communications and Networking
Conference, March 2011, pp. 1517–1522.
[214] R. Zhang, K. Giridhar, and L. Hanzo, “Distributed downlink multi-cell
processing requiring reduced-rate back-haul data exchange,” in 2011
IEEE Wireless Communications and Networking Conference, March
2011, pp. 1277–1281.
[215] R. Zhang, X. Xu, and L. Hanzo, “Co-channel interference mitigation
capability of fixed relays connected by optical fibre,” in 2010 IEEE
Vehicular Technology Conference Fall, Sept 2010, pp. 1–5.
[216] X. Xu, R. Zhang, and L. Hanzo, “Imperfect radio over fibre aided
distributed antennas with fractional frequency reuse,” in 2010 IEEE
Vehicular Technology Conference Fall, Sept 2010, pp. 1–5.
[217] R. Zhang and L. Hanzo, “Joint and distributed linear precoding for
centralised and decentralised multicell processing,” in 2010 IEEE
Vehicular Technology Conference Fall, Sept 2010, pp. 1–5.
[218] ——, “Harmony search aided iterative channel estimation, multiuser
detection and channel decoding for DS-CDMA,” in 2010 IEEE Vehicular Technology Conference Fall, Sept 2010, pp. 1–5.
[219] ——, “Multiplexed hybrid ARQ for energy efficient transmissions
under delay constraints,” in 2010 IEEE International Conference on
Communications, May 2010, pp. 1–5.
[220] M. F. U. Butt, R. Zhang, S. X. Ng, and L. Hanzo, “Superposition coding
aided bi-directional relay transmission employing iteratively decoded
self-concatenated convolutional codes,” in 2010 IEEE Vehicular Technology Conference Spring, May 2010, pp. 1–5.
[221] R. Zhang and L. Hanzo, “Superposition-coding aided multiplexed
hybrid ARQ scheme for improved link-layer transmission efficiency,” in
2009 IEEE International Conference on Communications, June 2009,
pp. 1–5.
[222] N. Bonello, R. Zhang, S. Chen, and L. Hanzo, “Reconfigurable rateless
codes,” in 2009 IEEE Vehicular Technology Conference Spring, April
2009, pp. 1–5.
[223] R. Zhang and L. Hanzo, “Physical-layer algebraic network coding and
superposition coding for the multi-source cooperation aided uplink,” in
2009 IEEE Vehicular Technology Conference Spring, April 2009, pp. 1–5.
[224] ——, "High-throughput non-orthogonal interleaved random space-time coding for multi-source cooperation," in 2008 IEEE Global Telecommunications Conference, Nov 2008, pp. 1–5.
[225] N. Bonello, R. Zhang, S. Chen, and L. Hanzo, "Channel code division multiple access and its multilevel structured LDPC based instantiation," in 2008 IEEE Vehicular Technology Conference Fall, Sept 2008, pp. 1–5.
[226] R. Zhang, L. Xu, S. Chen, and L. Hanzo, "Repeat accumulate code division multiple access and its hybrid detection," in 2008 IEEE International Conference on Communications, May 2008, pp. 4790–4794.
[227] L. Xu, R. Zhang, S. Chen, and L. Hanzo, "EXIT-chart aided hybrid multiuser detector design for frequency-domain-spread chip-interleaved MC-CDMA," in 2008 IEEE Vehicular Technology Conference Spring, May 2008, pp. 1816–1820.
[228] R. Zhang and L. Hanzo, "Interleave division multiplexing aided space-time coding for high-throughput uplink cooperative communications," in 2008 IEEE Wireless Communications and Networking Conference, March 2008, pp. 465–469.
[229] ——, "EXIT chart based joint code-rate and spreading-factor optimisation of single-carrier interleave division multiple access," in 2007 IEEE Wireless Communications and Networking Conference, March 2007, pp. 735–739.
arXiv:1601.06688v1 [math.AG] 25 Jan 2016
BERNSTEIN–SATO POLYNOMIALS FOR MAXIMAL MINORS AND SUB–MAXIMAL
PFAFFIANS
ANDRÁS C. LŐRINCZ, CLAUDIU RAICU, ULI WALTHER, AND JERZY WEYMAN
Abstract. We determine the Bernstein-Sato polynomials for the ideal of maximal minors of a generic m × n
matrix, as well as for that of sub-maximal Pfaffians of a generic skew-symmetric matrix of odd size. As a
corollary, we obtain that the Strong Monodromy Conjecture holds in these two cases.
1. Introduction
Consider a polynomial ring $S = \mathbb{C}[x_1, \cdots, x_N]$ and let $D = S[\partial_1, \cdots, \partial_N]$ denote the associated Weyl algebra of differential operators with polynomial coefficients ($\partial_i = \frac{\partial}{\partial x_i}$). For a non-zero element $f \in S$, the set of polynomials $b(s) \in \mathbb{C}[s]$ for which there exists a differential operator $P_b \in D[s]$ such that
\[
P_b \cdot f^{s+1} = b(s) \cdot f^{s} \tag{1.1}
\]
form a non-zero ideal. The monic generator of this ideal is called the Bernstein–Sato polynomial (or the
b-function) of f , and is denoted bf (s). The b-function gives a measure of the singularities of the scheme
defined by f = 0, and its zeros are closely related to the eigenvalues of the monodromy on the cohomology
of the Milnor fiber. In the case of a single hypersurface, its study originated in [Ber72, SS74], and it was later extended to more general schemes in [BMS06] (see Section 2.5). Despite much research,
the calculation of b-functions remains notoriously difficult: several algorithms have been implemented to
compute b-functions, and a number of examples have been worked out in the literature, but basic instances
such as the b-functions for determinantal varieties are still not understood. In [Bud13] and [Bud15], Budur
posed as a challenge and reviewed the progress on the problem of computing the b-function of the ideal of
p × p minors of the generic m × n matrix. We solve the challenge for the case of maximal minors in this
paper, and we also find the b-function for the ideal of 2n × 2n Pfaffians of the generic skew-symmetric matrix
of size (2n + 1) × (2n + 1). For maximal minors, our main result is as follows:
Theorem on Maximal Minors (Theorem 4.1). Let m ≥ n be positive integers, consider the generic m × n
matrix of indeterminates (xij ), and let I = In denote the ideal in the polynomial ring S = C[xij ] which is
generated by the n × n minors of (xij ). The b-function of I is given by
bI(s) = ∏_{i=m−n+1}^{m} (s + i).
When m = n, I is generated by a single equation – the determinant of the generic n × n matrix – and the formula for bI(s) is well-known (see [Kim03, Appendix] or [Rai15, Section 5]). For general m ≥ n, if we let Zm,n denote the zero locus of I, i.e. the variety of m × n matrices of rank at most n − 1, then using the renormalization (2.28) our theorem states that the b-function of Zm,n is ∏_{i=0}^{n−1} (s + i). It is interesting to note that this only depends on the value of n and not on m.
Date: January 26, 2016.
2010 Mathematics Subject Classification. Primary 13D45, 14F10, 14M12, 32C38, 32S40.
Key words and phrases. Bernstein–Sato polynomials, b-functions, determinantal ideals, local cohomology.
The statement of the Strong Monodromy Conjecture of Denef and Loeser [DL92] extends naturally from
the case of one hypersurface to arbitrary ideals, and it asserts that the poles of the topological zeta function
of I are roots of bI (s). We verify this conjecture for maximal minors and sub-maximal Pfaffians in Section 5.
When I = In is the ideal of maximal minors of (xij ), the methods of [Doc13] can be used to show that the
set of poles of the topological zeta function of I is {−m, −m + 1, · · · , −m + n − 1}, and therefore it coincides
precisely with the set of roots of bI (s). If we replace I by the ideal Ip of p × p minors of (xij ), 1 < p < n,
then this is no longer true: as explained in [Bud15, Example 2.12], a computer calculation of T. Oaku shows
that for m = n = 3 one has bI2 (s) = (s + 9/2)(s + 4)(s + 5), while [Doc13, Thm. 6.5] shows that the only
poles of the zeta function of I2 are −9/2 and −4. Besides the Strong Monodromy Conjecture which predicts
some of the roots of bIp (s), we are not aware of any general conjectural formulas for bIp (s) when 1 < p < n.
In the case of Pfaffians we prove:
Theorem on sub–maximal Pfaffians (Theorem 3.9). Let n be a positive integer, and with the conventions xii = 0, xij = −xji, consider the generic (2n+1) × (2n+1) skew-symmetric matrix of indeterminates (xij). If we let I denote the ideal in the polynomial ring S = C[xij] which is generated by the 2n × 2n Pfaffians of (xij), then the b-function of I is given by
bI(s) = ∏_{i=0}^{n−1} (s + 2i + 3).
If we write Zn for the zero locus of I, i.e. the variety of (2n+1) × (2n+1) skew-symmetric matrices of rank at most (2n − 2), then by (2.28) we get bZn(s) = ∏_{i=0}^{n−1} (s + 2i). By [Kim03, Appendix] or [Rai15, Section 6], this is the same as the b-function of the hypersurface of singular 2n × 2n skew-symmetric matrices.
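For a quick concrete illustration of the two theorems above, the predicted root sets are easy to tabulate. The sketch below (plain Python, our own illustration only; it tabulates the stated formulas rather than performing any D-module computation, and the function names are ours) prints the roots of bI(s) in both cases.

def minor_roots(m, n):
    # roots of b_I(s) = prod_{i=m-n+1}^{m} (s + i), Theorem on Maximal Minors
    return [-i for i in range(m - n + 1, m + 1)]

def pfaffian_roots(n):
    # roots of b_I(s) = prod_{i=0}^{n-1} (s + 2i + 3), Theorem on sub-maximal Pfaffians
    return [-(2 * i + 3) for i in range(n)]

print(minor_roots(3, 2))    # [-2, -3]
print(pfaffian_roots(2))    # [-3, -5]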
Organization. In Section 2 we review some generalities on representation theory and D-modules, we recall
the necessary results on invariant differential operators and their eigenvalues, and we state the basic results
and definitions regarding b-functions of arbitrary ideals. In Section 3 we illustrate some methods for bounding
the b-function of an ideal: for upper bounds we use invariant differential operators, while for lower bounds we
show how non-vanishing of local cohomology can be used to exhibit roots of the b-functions. These methods
allow us to compute the b-function for sub-maximal Pfaffians, and to bound from above the b-function for
maximal minors. In Section 4 we employ the SLn -symmetry in the definition of the b-function of maximal
minors in order to show that the upper bound obtained in Section 3 is in fact sharp. In Section 5 we give a
quick derivation, based on our main results, of the Strong Monodromy Conjecture for maximal minors and
sub-maximal Pfaffians.
Notation and conventions. We write [N] for the set {1, · · · , N}, and for k ≤ N we let ([N] choose k) denote the collection of k-element subsets of [N]. Throughout the paper, X = A^N is an affine space, and S = C[x1, · · · , xN] denotes the coordinate ring of X. We write DX or simply D for the Weyl algebra of differential operators on X: DX = S[∂1, · · · , ∂N] where ∂i = ∂/∂xi. In order to distinguish between the various kinds of tuples that arise in this paper, we will try as much as possible to stick to the following conventions: we write
• f = (f1, · · · , fr) ∈ S^r for a tuple of polynomials in S.
• ĉ = (c1, · · · , cr) ∈ Z^r for a tuple of integers indexing the operators in the definition of b-functions.
• s = (s1, · · · , sr) for a tuple of independent variables used to define b-functions.
• α = (α1, · · · , αr) ∈ Z^r when αi are exponents which arise as specializations of the variables si.
• λ = (λ1, · · · , λr) ∈ Z^r for a dominant weight or partition.
2. Preliminaries
2.1. Representation Theory. We consider the group GLN = GLN (C) of invertible N × N complex
matrices, and denote by TN the maximal torus of diagonal matrices. We will refer to N –tuples λ =
(λ1, · · · , λN) ∈ Z^N as weights of TN and write |λ| for the total size λ1 + · · · + λN of λ. We say that λ is a dominant weight if λ1 ≥ λ2 ≥ · · · ≥ λN and denote the collection of dominant weights by Z^N_dom. A dominant weight with λN ≥ 0 is a partition, and we write P^N for the set of partitions in Z^N_dom. We will implicitly identify P^{N−1} with a subset of P^N by setting λN = 0 for any λ ∈ P^{N−1}. For 0 ≤ k ≤ N and a ≥ 0 we write (a^k) for the partition λ ∈ P^k ⊂ P^N with λ1 = · · · = λk = a. Irreducible rational representations of GLN(C) are in one-to-one correspondence with dominant weights λ. We denote by Sλ C^N the irreducible representation associated to λ, often referred to as a Schur functor, and note that S(1^k) C^N = Λ^k C^N is the k-th exterior power of C^N for every 0 ≤ k ≤ N. When m ≥ n, we have Cauchy's formula [Wey03, Cor. 2.3.3]
Sym(C^m ⊗ C^n) = ⊕_{λ∈P^n} Sλ C^m ⊗ Sλ C^n.    (2.1)
If we identify C^m ⊗ C^n with the linear forms on the space X = Xm×n of m × n complex matrices, then (2.1) is precisely the decomposition into irreducible GLm × GLn representations of the coordinate ring of X. For a partition λ we write λ(2) = (λ1, λ1, λ2, λ2, · · · ) for the partition obtained by repeating each part of λ twice. The skew-symmetric version of Cauchy's formula [Wey03, Prop. 2.3.8(b)] yields
Sym(Λ² C^{2n+1}) = ⊕_{λ∈P^n} Sλ(2) C^{2n+1}.    (2.2)
If we identify Λ² C^{2n+1} with the linear forms on the space X = Xn of (2n + 1) × (2n + 1) skew-symmetric matrices, then (2.2) describes the decomposition into irreducible GL2n+1-representations of the coordinate ring of X.
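Formula (2.1) can be spot-checked numerically in small cases. The sketch below (plain Python, our own illustration; the hook content formula it uses for dim Sλ C^N is standard) compares the dimension of the degree-d part of Sym(C^m ⊗ C^n) with the corresponding sum of products of Schur module dimensions.

from math import comb

def partitions(d, max_parts, largest=None):
    # partitions of d into at most max_parts parts, each part <= largest
    if largest is None:
        largest = d
    if d == 0:
        yield ()
        return
    if max_parts == 0:
        return
    for first in range(min(d, largest), 0, -1):
        for rest in partitions(d - first, max_parts - 1, first):
            yield (first,) + rest

def schur_dim(lam, N):
    # hook content formula: dim S_lam(C^N) = prod over cells of (N + content)/hook
    num, den = 1, 1
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []
    for i, part in enumerate(lam):
        for j in range(part):
            num *= N + j - i
            den *= part - j + conj[j] - i - 1
    return num // den

m, n, d = 3, 2, 4
lhs = comb(m * n + d - 1, d)    # dim of the degree-d part of Sym(C^m tensor C^n)
rhs = sum(schur_dim(l, m) * schur_dim(l, n) for l in partitions(d, n))
print(lhs, rhs)                 # both equal 126 here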
2.2. Invariant operators and D-modules. Throughout this paper we will be studying various (left) DX-modules when X is a finite dimensional representation of some connected reductive linear algebraic group G.
Differentiating the G-action on X yields a map from the Lie algebra g into the vector fields on X, which in
turn induces a map
τ : U(g) −→ DX,    (2.3)
where U (g) denotes the universal enveloping algebra of g. In particular, any DX -module M inherits via τ
the structure of a g-representation: if g ∈ g and m ∈ M then g · m = τ (g) · m. In order to make the action
of DX on M compatible with the g-action we need to consider the action of g on DX given by
g • p = τ(g) · p − p · τ(g) for g ∈ g and p ∈ DX.    (2.4)
The induced Lie algebra action of g on the tensor product DX ⊗ M makes the multiplication DX ⊗ M → M
into a g-equivariant map: g · (p · m) = (g • p) · m + p · (g · m) for g ∈ g, p ∈ DX , m ∈ M.
We also use the symbol • to avoid a possible source of confusion that may arise as follows. Since S is both
a DX -module and a subset of DX , the multiplication of an element p ∈ DX with an element f ∈ S can have
two meanings: we write p • f for the result of applying the operator p to f , and p · f for the multiplication
of p with f inside DX . The operation p • f is only used twice in our paper: to discuss the pairing between
differential operators and polynomials (see (2.7)), and in Section 2.5 when we refer to ∂i • fj , the i-th partial
derivative of fj .
4
ANDRÁS C. LŐRINCZ, CLAUDIU RAICU, ULI WALTHER, AND JERZY WEYMAN
For a Lie subalgebra a ⊂ g, and a DX-module M, we consider the collection M^a of a-invariant sections in M:
M^a = {m ∈ M : τ(a) · m = 0 for all a ∈ a}.
The main examples that we study arise from a tuple f = (f1, · · · , fr) ∈ S^r of polynomial functions on X, where each fi is a-invariant, and M = S_{f1···fr} is the localization of S at the product f1 · · · fr. In this case we have that M^a = (S_{f1···fr})^a coincides with (S^a)_{f1···fr}, the localization of S^a at f1 · · · fr.
The ring of a-invariant differential operators on X, denoted by D_X^a (not to be confused with M^a for M = DX as defined above), is defined via
D_X^a = {p ∈ DX : a • p = 0 for all a ∈ a},    (2.5)
and M^a is a D_X^a-module whenever M is a DX-module. If we write ZU(a) for the center of U(a) then it follows from (2.4) and (2.5) that
τ(ZU(a)) ⊆ D_X^a.    (2.6)
An alternative way of producing a-invariant differential operators is as follows. Let P = C[∂1 , · · · , ∂N ] and
write Sk (resp. Pk ) for the subspace of S (resp. P ) of homogeneous elements of degree k. The action of P
on S by differentiation induces a-equivariant perfect pairings ⟨ , ⟩ : Pk × Sk → C for each k ≥ 0, namely ⟨w, v⟩ = w • v. If V ⊂ Sk, W ⊂ Pk are dual a-subrepresentations, with (almost dual) bases v = (v1, · · · , vt) and w = (w1, · · · , wt), such that for some non-zero constant c
⟨wi, vj⟩ = 0 for i ≠ j,    ⟨wi, vi⟩ = c for all i,    (2.7)
then we can define elements of D_X^a via
Dv,w = ∑_{i=1}^{t} vi · wi,    Dw,v = ∑_{i=1}^{t} wi · vi.    (2.8)
In the examples that we consider, the basis w will have a very simple description in terms of v. For
p = p(x1 , · · · , xN ) ∈ S, we define p∗ = p(∂1 , · · · , ∂N ) ∈ P . For the tuples of maximal minors and submaximal Pfaffians, it will suffice to take wi = vi∗ in order for (2.7) to be satisfied, in which case we’ll simply
write Dv instead of Dw,v and Dv∗ or Dw instead of Dv,w .
We specialize our discussion to the case when X = Xm,n is the vector space of m × n matrices, m ≥ n,
and G = GLm × GLn , g = glm ⊕ gln . The coordinate ring of X is S = C[xij ] with i ∈ [m], j ∈ [n]. We
consider the tuple of maximal minors d = (dK)_{K ∈ ([m] choose n)} of the generic matrix of indeterminates (xij), where
dK = det(xij)_{i∈K, j∈[n]},    (2.9)
and the tuple ∂ = (∂K)_{K ∈ ([m] choose n)} of maximal minors in the dual variables
∂K = d*_K = det(∂ij)_{i∈K, j∈[n]}.    (2.10)
The elements dK form a basis for the irreducible representation V = Λ^n C^m ⊗ Λ^n C^n in (2.1), indexed by the partition λ = (1^n), while the ∂K form a basis for the dual representation W. If we let c = n! then it follows from Cayley's identity [CSS13, (1.1)] that (2.7) holds for the tuples d and ∂, so we get g-invariant operators
D∂ = ∑_{K ∈ ([m] choose n)} dK · ∂K,    Dd = ∑_{K ∈ ([m] choose n)} ∂K · dK.    (2.11)
If we consider sln ⊂ gln ⊂ g, the special linear Lie algebra of n × n matrices with trace 0, then
S^{sln} = C[dK : K ∈ ([m] choose n)]
is the C-subalgebra of S generated by the maximal minors dK. Moreover, S^{sln} can be identified with the homogeneous coordinate ring of the Grassmannian G(n, m) of n-planes in C^m. We let
p0 = d[n], and pij = d_{([n]\{i})∪{j}}, for i ∈ [n], j ∈ [m] \ [n],    (2.12)
and note that pij/p0 give the coordinates on the open Schubert cell defined by p0 ≠ 0 inside G(n, m). It follows that if we take any K ∈ ([m] choose n), set |[n] \ K| = k, and enumerate the elements of the sets [n]\K = {i1, · · · , ik} and K\[n] = {j1, · · · , jk} in increasing order, then:
dK = p0^{1−k} · det( (p_{i_a j_b})_{1≤a,b≤k} ).    (2.13)
It will be important in Section 4 to note moreover that p0, pij are algebraically independent and that
{ p0^{c0} · ∏_{i∈[n], j∈[m]\[n]} pij^{cij} : c0, cij ∈ Z } forms a C-basis of (S_{p0·∏_{i,j} pij})^{sln}.    (2.14)
2.3. Capelli elements, eigenvalues, and the Fourier transform [HU91]. Throughout the paper, by
the determinant of a matrix A = (aij )i,j∈[r] with non-commuting entries we mean the column-determinant:
if Sr is the symmetric group of permutations of [r], and sgn denotes the signature of a permutation, then
col-det(aij) = ∑_{σ∈Sr} sgn(σ) · a_{σ(1)1} · a_{σ(2)2} · · · a_{σ(r)r}.    (2.15)
We consider the Lie algebra glr and choose a basis {Eij : i, j ∈ [r]} for it, where Eij is the matrix whose
only non-zero entry is in row i, column j, and it is equal to one. We think of Eij as the inputs of an
r × r matrix E with entries in U (glr ). We consider an auxiliary variable z, consider the diagonal matrix
∆ = diag(r − 1 − z, r − 2 − z, · · · , 1 − z, −z) and define the polynomial C(z) ∈ U (glr )[z] using notation (2.15):
C(z) = col-det(E + ∆).    (2.16)
For a ≥ 0 we write
[z]_a = z(z − 1) · · · (z − a + 1)    (2.17)
and define elements Ca ∈ U (glr ), a = 0, · · · , r, by expanding the polynomial C(z) into a linear combination
C(z) = ∑_{a=0}^{r} (−1)^{r−a} Ca · [z]_{r−a}.    (2.18)
In the case when r = 2 we obtain
C(z) = col-det( E11 + 1 − z    E12
                E21            E22 − z )
     = [z]_2 − (E11 + E22) · [z]_1 + ((E11 + 1) · E22 − E21 · E12), thus
C0 = 1,    C1 = E11 + E22,    C2 = (E11 + 1) · E22 − E21 · E12.
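The r = 2 expansion above is easy to verify symbolically; here is a minimal sketch assuming the sympy library, with E11, E12, E21, E22 treated as noncommutative symbols (the check is ours, not part of the paper).

import sympy as sp

z = sp.Symbol('z')  # z is a commuting variable
E11, E12, E21, E22 = sp.symbols('E11 E12 E21 E22', commutative=False)

# column-determinant of [[E11 + 1 - z, E12], [E21, E22 - z]], per (2.15)
coldet = (E11 + 1 - z) * (E22 - z) - E21 * E12

# the claimed expansion, with [z]_2 = z(z - 1) and [z]_1 = z
C1 = E11 + E22
C2 = (E11 + 1) * E22 - E21 * E12
print(sp.expand(coldet - (z * (z - 1) - C1 * z + C2)))  # prints 0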
The elements Ca , a = 1, · · · , r are called the Capelli elements of U (glr ), and ZU (glr ) is a polynomial algebra
with generators C1, · · · , Cr. For λ ∈ Z^r_dom, let Vλ denote an irreducible glr-representation of highest weight λ, and pick vλ ∈ Vλ to be a highest weight vector in Vλ, so that
Eii · vλ = λi · vλ,    Eij · vλ = 0 for i < j.    (2.19)
Since Ca are central, their action on Vλ is by scalar multiplication, and the scalar (called the eigenvalue of Ca
on Vλ ) can be determined by just acting on vλ . To record this action more compactly, we will consider how
6
ANDRÁS C. LŐRINCZ, CLAUDIU RAICU, ULI WALTHER, AND JERZY WEYMAN
C(z) acts on vλ . Expanding C(z) via (2.15), it follows from (2.19) that the only term that doesn’t annihilate
vλ is the product of diagonal entries in the matrix E + ∆, hence
C(z) acts on Vλ[z] by multiplication by ∏_{i=1}^{r} (λi + r − i − z).    (2.20)
We can think of U (glr ) in terms of generators and relations as follows: it is generated as a C-algebra by
Eij , i, j ∈ [r], subject to the relations
[Eij, Ekl] = δjk · Eil − δil · Ekj,    (2.21)
where [a, b] = ab − ba denotes the usual commutator, and δ is the Kronecker delta function. For every
complex number u ∈ C, the substitutions
Eij −→ −Eji for i ≠ j,    Eii −→ −Eii − u
preserve (2.21), so they define an involution Fu : U(glr) −→ U(glr) which we call the Fourier transform with parameter u. We can apply Fu to C(z) and obtain
Fu C(z) = col-det(−E^t − u · Id_r + ∆),    (2.22)
where E^t is the transpose of E, and Id_r denotes the r × r identity matrix. The Fourier transforms Fu C1, · · · , Fu Cr
of the Capelli elements form another set of polynomial generators for ZU (glr ), hence they act by scalar multiplication on any irreducible glr -representation Vλ . To determine the scalars, we will consider the action on
a lowest weight vector wλ ∈ Vλ , so that
Eii · wλ = λ_{r+1−i} · wλ,    Eji · wλ = 0 for i < j.    (2.23)
Expanding (2.22) via (2.15), it follows from (2.23) that the action of Fu Ca on Vλ is encoded by the fact that Fu C(z) acts on Vλ[z] by multiplication by
∏_{i=1}^{r} (−λ_{r+1−i} − u + r − i − z).    (2.24)
Lemma 2.1. For s ∈ Z, let λ = (s^r) denote the dominant weight with all λi = s, and for a = 1, · · · , r let Pa(s) (resp. Fu Pa(s)) denote the eigenvalue of Ca (resp. Fu Ca) on Vλ. We have that Pa(s) and Fu Pa(s) are polynomial functions in s, and as such Fu Pa(s) = Pa(−s − u).
Proof. If we let P(s, z) = ∑_{a=0}^{r} (−1)^{r−a} Pa(s) · [z]_{r−a} then it follows from (2.18) and (2.20) that
P(s, z) = ∏_{i=1}^{r} (s + r − i − z).
Expanding the right hand side as a linear combination of [z]_0, [z]_1, · · · , [z]_r shows that Pa(s) is a polynomial in s. We define Fu P(s, z) by replacing Pa(s) with Fu Pa(s) and obtain using (2.24) that
Fu P(s, z) = ∏_{i=1}^{r} (−s − u + r − i − z).
Since Fu P(s, z) = P(−s − u, z), the conclusion follows.
Lemma 2.2. For s ∈ Z≥0, let λ = (s^{r−1}) denote the partition with λ1 = · · · = λ_{r−1} = s, λr = 0, and for a = 1, · · · , r let Qa(s) (resp. F_{r−1} Qa(s)) denote the eigenvalue of Ca (resp. F_{r−1} Ca) on Vλ. We have that Qa(s) and F_{r−1} Qa(s) are polynomial functions in s, and as such F_{r−1} Qa(s) = Qa(−s − r).
Proof. We define Q(s, z) and F_{r−1} Q(s, z) as in the proof of Lemma 2.1 and obtain using (2.20), (2.24) that
Q(s, z) = (∏_{i=1}^{r−1} (s + r − i − z)) · (0 − z) = (s + r − 1 − z) · (s + r − 2 − z) · · · (s + 1 − z) · (−z),
F_{r−1} Q(s, z) = (−0 − (r − 1) + r − 1 − z) · (∏_{i=2}^{r} (−s − (r − 1) + r − i − z)) = (−z) · (−s − 1 − z) · (−s − 2 − z) · · · (−s − r + 1 − z).
It is immediate to check that F_{r−1} Q(s, z) = Q(−s − r, z), from which the conclusion follows.
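The final identity in this proof is an elementary polynomial computation; the following sketch (our own check, assuming sympy) verifies F_{r−1} Q(s, z) = Q(−s − r, z) for a few small values of r.

import sympy as sp

s, z = sp.symbols('s z')
for r in range(2, 6):
    Q = sp.prod([s + r - i - z for i in range(1, r)]) * (-z)
    FQ = (-z) * sp.prod([-s - 1 - j - z for j in range(r - 1)])
    assert sp.expand(FQ - Q.subs(s, -s - r)) == 0
print("identity verified for r = 2, ..., 5")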
2.4. A little linear algebra. We let Xm,n , m ≥ n, denote the vector space of m × n matrices, write Zm,n
for the subvariety of Xm,n consisting of matrices of rank at most n − 1, and let U ⊂ Xm,n denote the open
affine subset consisting of matrices u = (uij ) with u11 6= 0.
Lemma 2.3. There exists an isomorphism of algebraic varieties
π : U ∩ Zm,n −→ C∗ × Cm−1 × Cn−1 × Zm−1,n−1 .
Proof. We define π : U → C∗ × Cm−1 × Cn−1 × Xm−1,n−1 via π(u) = (t, ~c, ~r, M ) where if u = (uij ) then
t = u11 , ~c = (u21 , u31 , · · · , um1 ), ~r = (u12 , u13 , · · · , u1n ),
Mij = det{1,i+1},{1,j+1} for i ∈ [m − 1], j ∈ [n − 1],
where det{1,i+1},{1,j+1} = u11 ·ui+1,j+1 − u1,j+1 ·ui+1,1 is the determinant of the 2× 2 submatrix of u obtained
by selecting rows 1, i + 1 and columns 1, j + 1. It follows for instance from [Joh03, Section 3.4] that the
map π is an isomorphism, and that it sends U ∩ Zm,n onto C∗ × Cm−1 × Cn−1 × Zm−1,n−1 , which yields the
desired conclusion.
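A quick numerical experiment (our own sketch, assuming numpy) illustrates the rank bookkeeping behind Lemma 2.3: for a random rank n − 1 matrix u with u11 ≠ 0, the matrix M of 2 × 2 minors from the proof has rank n − 2.

import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
u = rng.standard_normal((m, n - 1)) @ rng.standard_normal((n - 1, n))  # rank n - 1
assert abs(u[0, 0]) > 1e-8

# M_ij = det_{ {1,i+1},{1,j+1} } = u11 * u_{i+1,j+1} - u_{1,j+1} * u_{i+1,1}
M = u[0, 0] * u[1:, 1:] - np.outer(u[1:, 0], u[0, 1:])
print(np.linalg.matrix_rank(u), np.linalg.matrix_rank(M))  # expect 2 and 1 here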
We let Xn denote the vector space of (2n + 1) × (2n + 1) skew-symmetric matrices, and define Zn ⊂ Xn
to be the subvariety of matrices of rank at most (2n − 2). We let U ⊂ Xn denote the open affine subset
defined by matrices (uij ) with u12 6= 0.
Lemma 2.4. There exists an isomorphism of algebraic varieties
π : U ∩ Zn −→ C∗ × C2n−1 × C2n−1 × Zn−1 .
Proof. We define π : U → C∗ × C2n−1 × C2n−1 × Xn−1 via π(u) = (t, ~c, ~r, M ) where if u = (uij ) then
t = u12 , ~c = (u13 , u14 , · · · , u1,2n+1 ), ~r = (u23 , u24 , · · · , u2,2n+1 ),
Mij = Pf_{1,2,i+2,j+2} / u12 for 1 ≤ i, j ≤ 2n − 1,
where Pf {1,2,i+2,j+2} is the Pfaffian of the 4×4 principal skew-symmetric submatrix of u obtained by selecting
the rows and columns of u indexed by 1, 2, i + 2 and j + 2. Since
Mij = ui+2,j+2 − (u1,i+2 · u2,j+2 − u1,j+2 · u2,i+2 )/u12
one can solve for ui+2,j+2 in terms of the entries of M, ~r, ~c and u12 in order to define the inverse of π, which
is therefore an isomorphism. We consider the (2n + 1) × (2n + 1) matrix
C = ( 0       1     u23/u12     u24/u12     · · ·    u2,2n+1/u12
      1/u12   0    −u13/u12    −u14/u12     · · ·   −u1,2n+1/u12
      0       0     1           0           · · ·    0
      0       0     0           1           · · ·    0
      ⋮       ⋮     ⋮           ⋮           ⋱        ⋮
      0       0     0           0           · · ·    1 )
Writing ~0 for zero row/column vectors of size (2n − 1), we have (see also [JP79, Lemma 1.1])
C^t · u · C = ( 0   −1   ~0
                1    0   ~0
                ~0   ~0   M )
Since rank(u) = rank(C^t · u · C) = rank(M) + 2, it follows that π sends U ∩ Zn onto C∗ × C^{2n−1} × C^{2n−1} × Zn−1,
so it restricts to the desired isomorphism.
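The analogous experiment for Lemma 2.4 (again a sketch of ours, assuming numpy): for a random skew-symmetric u with u12 ≠ 0, the matrix M = (Pf_{1,2,i+2,j+2}/u12) is skew-symmetric and rank(u) = rank(M) + 2.

import numpy as np

rng = np.random.default_rng(1)
n = 3
N = 2 * n + 1
a = rng.standard_normal((N, N))
u = a - a.T                       # generic skew-symmetric, rank 2n
assert abs(u[0, 1]) > 1e-8

# M_ij = u_{i+2,j+2} - (u_{1,i+2} u_{2,j+2} - u_{1,j+2} u_{2,i+2}) / u_{12}
M = u[2:, 2:] - (np.outer(u[0, 2:], u[1, 2:]) - np.outer(u[1, 2:], u[0, 2:])) / u[0, 1]
print(np.allclose(M, -M.T))                                      # True
print(np.linalg.matrix_rank(u) == np.linalg.matrix_rank(M) + 2)  # True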
2.5. The b-function of an affine scheme. In this section we review the results and definitions from
[BMS06] that are most relevant for our calculations. Let X = A^N be the N-dimensional affine space, and write S = C[x1, · · · , xN] for the coordinate ring of X, and DX = C[x1, · · · , xN, ∂1, · · · , ∂N] (∂i = ∂/∂xi) for the corresponding Weyl algebra of differential operators. For a collection f = (f1, · · · , fr) of non-zero
polynomials in S, we consider a set of independent commuting variables s1 , · · · , sr , one for each fi . We form
the DX [s1 , · · · , sr ]-module
B_f^s = S_{f1···fr}[s1, · · · , sr] · f^s,    (2.25)
where S_{f1···fr} denotes the localization of S at the product of the fi's, and f^s stands for the formal product f1^{s1} · · · fr^{sr}. B_f^s is a free rank one S_{f1···fr}[s1, · · · , sr]-module with generator f^s, which admits a natural action of DX: the partial derivatives ∂i act on the generator f^s via
∂i · f^s = ∑_{j=1}^{r} (sj · (∂i • fj)/fj) · f^s.
Writing s = s1 + · · · + sr , the Bernstein–Sato polynomial (or b-function) bf (s) is the monic polynomial of
the lowest degree in s for which bf (s) · f s belongs to the DX [s1 , · · · , sr ]-submodule of Bsf generated by all
expressions
ĉ_s · ∏_{i=1}^{r} fi^{si+ci},
where ĉ = (c1, · · · , cr) runs over the r-tuples in Z^r with c1 + · · · + cr = 1 (for short |ĉ| = 1), and
ĉ_s = ∏_{ci<0} si · (si − 1) · · · (si + ci + 1).    (2.26)
Equivalently, bf(s) is the monic polynomial of lowest degree for which there exist a finite set of tuples ĉ ∈ Z^r with |ĉ| = 1, and corresponding operators Pĉ ∈ DX[s1, · · · , sr] such that
∑_{ĉ} Pĉ · ĉ_s · ∏_{i=1}^{r} fi^{si+ci} = bf(s) · f^s.    (2.27)
Just as in the case r = 1 (of a single hypersurface), bf (s) exists and is a polynomial whose roots are
negative rational numbers. Moreover, bf (s) only depends on the ideal I generated by f1 , · · · , fr , which is
why we’ll often write bI (s) instead of bf (s). Furthermore, if we let Z ⊂ X denote the subscheme defined by
f1 , · · · , fr , and if we define
bZ(s) = bI(s − codim_X(Z))    (2.28)
then bZ (s) only depends on the affine scheme Z and not on its embedding in an affine space. The polynomial
bZ (s) is called the Bernstein–Sato polynomial of Z (or the b-function of Z), and is meant as a measure of the
singularities of Z: the higher the degree of bZ(s), the worse are the singularities of Z. For instance, one has that Z is smooth if and only if bZ(s) = s. Moreover, it follows from [BMS06, Theorem 5] that for any Z and any smooth T we have
bZ×T(s) = bZ(s).    (2.29)
It will be important to note also that if Z is irreducible and Z = Z1 ∪ · · · ∪ Zk is an open cover of Z then
bZ(s) = lcm{bZi(s) : i = 1, · · · , k}.    (2.30)
A modification of the above formula is shown in [BMS06] to hold even when Z is reducible, and in fact
can be used to define a b-function for not necessarily affine or irreducible schemes Z: this generality is not
relevant for this article so we won’t discuss it further. Combining (2.29) and (2.30) with the results and
notation from Section 2.4, we conclude that
bZ_{m−1,n−1}(s) divides bZ_{m,n}(s), and bZ_{n−1}(s) divides bZ_{n}(s).    (2.31)
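To make the renormalization (2.28) concrete in the determinantal case, here it is unwound using the classical fact that codim_X(Zm,n) = m − n + 1 together with the formula for bI(s) from Theorem 4.1 (this is only our illustration of how the statement in the Introduction follows):
\[
b_{Z_{m,n}}(s) \;=\; b_I\bigl(s-(m-n+1)\bigr) \;=\; \prod_{i=m-n+1}^{m}\bigl(s-(m-n+1)+i\bigr) \;=\; \prod_{j=0}^{n-1}(s+j).
\]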
3. Bounding the b-function
In this section we discuss some methods for bounding the b-function from above and below. As a consequence we obtain formulas for the b-function of the ideal of maximal minors of the generic (n + 1) × n
matrix, and for the b-function of the ideal of sub-maximal Pfaffians of a generic skew-symmetric matrix of
odd size.
3.1. Lower bounds. In order to obtain lower bounds for a b-function, it is important to be able to identify
certain factors of the b-function which are easier to compute. One instance of this is given in equation (2.30):
the b-function of Z is divisible by the b-function of any affine open subscheme. In this section we note that
sometimes it is possible to identify roots of the b-function (i.e. linear factors) by showing an appropriate
inclusion of D-modules. As before f = (f1 , · · · , fr ) ∈ S r , and I ⊂ S is the ideal generated by the fi ’s.
For α ∈ Z we define Fα to be the DX -submodule of Sf1 ···fr generated by
fα =
r
Y
fiαi , where α = (α1 , · · · , αr ) ∈ Zr , α1 + · · · + αr = α.
i=1
It is clear that Fα+1 ⊆ Fα for every α ∈ Z. We have moreover:
Proposition 3.1. If α ∈ Z and if there is a strict inclusion Fα+1 ( Fα then α is a root of bf (s).
Proof. By the definition of bf (s), there exist tuples ĉ and operators Pĉ ∈ DX [s1 , · · · , sr ] such that (2.27) holds.
Assume now that Fα+1 ( Fα for some α ∈ Z, and consider any integers α1 , · · · , αr with α1 + · · · + αr = α.
There is a natural DX -module homomorphism
π : Sf1 ···fr [s1 , · · · , sr ] · f s −→ Sf1 ···fr , defined by π(si ) = αi .
fα
(3.1)
∈ Fα+1 . If bf (α) 6= 0 then we can divide by bf (α) and obtain
Applying π to (2.27) we find that bf (α) ·
α
that f ∈ Fα+1 for all α with |α| = α. Since the elements f α generate Fα it follows that Fα ⊆ Fα+1 which
is a contradiction. We conclude that bf (α) = 0, i.e. that α is a root of bf (s).
We write H_I^•(S) for the local cohomology groups of S with support in the ideal I. Proposition 3.1 combined with non-vanishing results for local cohomology can sometimes be used to determine roots of the b-function as follows:
Corollary 3.2. If bI(s) has no integral root α with α < −r, and if H_I^r(S) ≠ 0 then bI(−r) = 0.
Proof. For every α ∈ Z, α < −r, and every α = (α1, · · · , αr) with α = α1 + · · · + αr, we can apply the specialization map (3.1) to the equation (2.27) to conclude that bI(α) · f^α ∈ F_{α+1}. Since bI(α) ≠ 0 by assumption, we conclude that f^α ∈ F_{α+1} for all such α, and therefore Fα = F_{α+1}. It follows that
F_{−r} = F_{−r−1} = F_{−r−2} = · · · = S_{f1···fr},
since the localization S_{f1···fr} is the union of all Fα, α ≤ −r.
By Proposition 3.1, in order to show that bI(−r) = 0, it is enough to show that F_{−r+1} ⊊ F_{−r}, which by the above is equivalent to proving that F_{−r+1} does not coincide with the localization S_{f1···fr}. Consider any generator f^α of F_{−r+1}, corresponding to a tuple α ∈ Z^r with α1 + · · · + αr = −r + 1. At least one of the αi's has to be nonnegative, so that f^α belongs to S_{f1···f̂i···fr}, the localization of S at a product of all but one of the generators fi. This shows that
F_{−r+1} ⊆ ∑_{i=1}^{r} S_{f1···f̂i···fr}.    (3.2)
Using the Čech complex description of local cohomology, and the assumption that H_I^r(S) ≠ 0, we conclude that there is a strict inclusion
∑_{i=1}^{r} S_{f1···f̂i···fr} ⊊ S_{f1···fr}.
Combining this with (3.2) we conclude that F_{−r+1} ⊊ F_{−r} = S_{f1···fr}, as desired.
3.2. Upper bounds. Obtaining upper bounds for b-functions is in general a difficult problem, since most
of the time it involves determining the operators Pĉ in (2.27). In the presence of a large group of symmetries,
invariant differential operators are natural candidates for such operators, and the problem becomes more
tractable. As in Section 2.2, G is a connected reductive linear algebraic group, and g is its Lie algebra.
Definition 3.3. A tuple f = (f1, · · · , fr) ∈ S^r is said to be multiplicity-free (for the G-action) if
(a) For every nonnegative integer α, the polynomials
f^α = f1^{α1} · · · fr^{αr}, for α = (α1, · · · , αr) ∈ Z^r_{≥0} satisfying α1 + · · · + αr = α,
span an irreducible G-subrepresentation Vα ⊂ S.
(b) For every α ∈ Z_{≥0}, the multiplicity of Vα inside S is equal to one.
A typical example of a multiplicity-free tuple arises in the case r = 1 from a semi-invariant on a prehomogeneous vector space. In this case the computations for the Bernstein-Sato polynomials have been pursued
thoroughly (see for example [Kim82, Kim03]). Our definition gives a natural generalization to tuples with
r > 1 entries. We have the following:
Proposition 3.4. Consider a multiplicity-free tuple f = (f1, · · · , fr) for some G, and a G-invariant differential operator Df = ∑_{i=1}^{r} gi · fi, where gi ∈ DX. If we let s = s1 + · · · + sr then there exists a polynomial Pf(s) ∈ C[s] such that
Df · f^s = Pf(s) · f^s,
and moreover we have that bf(s) divides Pf(s).
Proof. Since the action of Df preserves B_f^s, there exists an element Q ∈ S_{f1···fr}[s1, · · · , sr] with the property
Df · f^s = Q · f^s.
The goal is to show that, as a polynomial in s1, · · · , sr, Q = Q(s1, · · · , sr) has coefficients in C, and moreover that it can be expressed as a polynomial only in s = s1 + · · · + sr. For this, it suffices to check that:
(a) Q(α1, · · · , αr) ∈ C for every α1, · · · , αr ∈ Z_{≥0}.
(b) For αi as in (a), Q(α1, · · · , αr) only depends on α = α1 + · · · + αr.
Let α1, · · · , αr be arbitrary non-negative integers, and write α = α1 + · · · + αr. Since Vα is irreducible, ⟨Vα, S⟩ = 1, and Df is G-invariant, it follows from Schur's Lemma that Df acts on Vα by multiplication by a scalar, i.e. Q(α1, · · · , αr) ∈ C is a scalar that only depends on α, so conditions (a) and (b) are satisfied.
To see that bf(s) divides Pf(s), it suffices to note that Df · f^s = Pf(s) · f^s can be rewritten in the form (2.27), where the sum is over tuples ĉ = (0, · · · , 0, 1, 0, · · · , 0) with ci = 1, cj = 0 for j ≠ i, with corresponding operator Pĉ = gi. Since bf(s) is the lowest degree polynomial for which (2.27) holds, it follows that bf(s) divides Pf(s).
3.3. Maximal minors. In this section X = Xm,n is the vector space of m × n matrices, m ≥ n. The group
G = GLm × GLn acts on X via row and column operations. The coordinate ring of X is S = C[xij ], and we
consider the tuple d = (dK)_{K ∈ ([m] choose n)} of maximal minors defined in (2.9). The tuple d is multiplicity-free for the G-action, where for α ∈ Z_{≥0}, the corresponding representation Vα in Definition 3.3 is S(α^n) C^m ⊗ S(α^n) C^n from (2.1) (see for instance [dCEP80, Thm. 6.1]). We associate to d the invariant differential operator Dd in (2.11) and by Proposition 3.4 there exists a polynomial Pd(s) with
Dd · d^s = Pd(s) · d^s.    (3.3)
Theorem 3.5. With the notation above, we have that
Pd(s) = ∏_{i=m−n+1}^{m} (s + i).    (3.4)
Proof. In order to compute Pd(s), it suffices to understand the action of Dd on dL^s for some fixed L ∈ ([m] choose n) (this corresponds to letting sK = 0 for K ≠ L in (3.3)). We consider instead the action of the operator D∂ in (2.11), and note that by Cayley's identity [CSS13, (1.1)] one has
∂K · dL^s = 0 for K ≠ L,    ∂L · dL^s = (∏_{i=0}^{n−1} (s + i)) · dL^{s−1},
which implies
D∂ · dL^s = (∏_{i=0}^{n−1} (s + i)) · dL^s.    (3.5)
Let F : DX −→ DX denote the (usual) Fourier transform, defined by F(xij) = ∂ij, F(∂ij) = −xij, and note that Dd = (−1)^n · F(D∂). We will obtain Pd(s) by applying the Fourier transform to (3.5).
For i, j ∈ [n], we consider the polarization operators
Eij = ∑_{k=1}^{m} xki · ∂kj.
The action of the Lie algebra gln ⊂ glm ⊕ gln on X induces a map τ : U (gln ) → DX as in (2.3), sending
τ (Eij ) = Eij for all i, j. The Fourier transform sends
F(Eij) = −Eji for i ≠ j,    F(Eii) = −Eii − m,
so using the notation in Section 2.3 we obtain a commutative diagram
U(gln) --Fm--> U(gln)
   |τ             |τ
   v              v
  DX  ----F--->  DX
Since D∂ is in τ (ZU (gln )) (it is in fact equal to τ (Cn ) by [HU91, (11.1.9)]), it follows from (3.5), from the
commutativity of the above diagram and from Lemma 2.1 with r = n and u = m that
Dd · dK^s = (−1)^n (∏_{i=0}^{n−1} (−s − m + i)) · dK^s = (∏_{i=m−n+1}^{m} (s + i)) · dK^s,
which concludes the proof of our theorem.
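Cayley's identity, the computational engine of this proof, can be checked symbolically in the 2 × 2 case; the sketch below (ours, assuming sympy) confirms det(∂ij) · det(xij)^s = s(s + 1) · det(xij)^{s−1}.

import sympy as sp

s = sp.Symbol('s')
x11, x12, x21, x22 = sp.symbols('x11 x12 x21 x22')
d = x11 * x22 - x12 * x21

# apply det(partial_ij) = d11 d22 - d12 d21 to d**s
lhs = sp.diff(d**s, x11, x22) - sp.diff(d**s, x12, x21)
print(sp.simplify(lhs / d**(s - 1)))  # equals s*(s + 1)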
Remark 3.6. A more direct way to prove (3.4) is to use for instance [CSS09, Prop. 1.2] in order to obtain a
determinantal representation for the operator Dd , namely
Dd = col-det( E11 + m    E12            · · ·   E1n
              E21        E22 + m − 1    · · ·   E2n
              ⋮          ⋮              ⋱       ⋮
              En1        En2            · · ·   Enn + m − n + 1 ),
from which the conclusion follows easily. The advantage of our proof of Theorem 3.5 is that it applies equally
to the case of sub-maximal Pfaffians in Section 3.4, where we are not aware of a more direct approach.
Almost square matrices. In the case of (n + 1) × n matrices, we can show that the lower and upper
bounds obtained by the techniques described above agree, and we obtain the following special instance of
the Theorem on Maximal Minors described in the Introduction:
Theorem 3.7. If d is the tuple of maximal minors of the generic (n + 1) × n matrix then its b-function is
bd(s) = ∏_{i=2}^{n+1} (s + i).
Proof. We have by Proposition 3.4 and Theorem 3.5 that bd (s) divides the product (s + 2) · · · (s + n + 1).
If we write Zn+1,n for the variety of (n + 1) × n matrices of rank smaller than n as in Section 2.4 then the
defining ideal of Zn+1,n is generated by the entries of d. Since Zn+1,n has codimension two inside Xn+1,n ,
bZn+1,n (s) = bd (s − 2) by (2.28), and thus it suffices to show that
bZ_{n+1,n}(s) is divisible by ∏_{i=0}^{n−1} (s + i).    (3.6)
By induction on n, we may assume that bZ_{n,n−1}(s) = ∏_{i=0}^{n−2} (s + i). Taking into account (2.31) we are left with
proving that (−n + 1) is a root of bZn+1,n (s), or equivalently that (−n − 1) is a root of bd (s). To do this we
apply Corollary 3.2 with r = n + 1, and I the defining ideal of Z_{n+1,n}. It follows from [Wit12, Thm. 5.10] or [RWW14, Thm. 4.5] that H_I^{n+1}(S) ≠ 0, so the Corollary applies and concludes our proof.
Remark 3.8. An alternative approach to proving Theorem 3.7 goes by first computing the b-function of
several variables associated to d1 , · · · , dn+1 (see [Lőr13, Lemma 1.9]). The space Xn+1,n is prehomogeneous
under the action of the smaller group (C∗ )n+1 × GLn (C). We will use freely some notions from [Lőr13]. The
maximal minors d1 , · · · , dn+1 can be viewed as semi-invariants for the following quiver with n + 2 vertices
and dimension vector
[Quiver diagram: a star-shaped quiver with n + 1 source vertices, each carrying dimension 1, and arrows from all of them into a central sink vertex carrying dimension n.]
The dimension vector is preinjective, hence by [Lőr13, Proposition 5.4(b)] we can compute the b-function
of several variables using reflection functors [Lőr13, Theorem 5.3]:
bd(s) = [s]^{1,1,··· ,1}_{n−1,n} · [s]^{1,0,··· ,0}_1 · [s]^{0,1,··· ,0}_1 · · · [s]^{0,0,··· ,1}_1.
This means that we have formulas
d∗i · di · ds = (si + 1)(s + 2)(s + 3) · · · (s + n) · ds ,
which, together with Lemma 4.4 below gives readily the Bernstein-Sato polynomial of the ideal. Such relations between b-functions of several variables and Bernstein-Sato polynomials of ideals have been investigated
in [Lőr15].
3.4. Sub-maximal Pfaffians. In this section X = Xn is the vector space of (2n + 1) × (2n + 1) skew-symmetric matrices, with the natural action of G = GL2n+1. The coordinate ring of X is S = C[xij] with 1 ≤ i < j ≤ 2n + 1. We consider the tuple d = (d1, d2, · · · , d2n+1), where di is the Pfaffian of the skew-symmetric matrix obtained by removing the i-th row and column of the generic skew-symmetric matrix (xij)_{i,j∈[2n+1]} (with the convention xji = −xij and xii = 0). The tuple d is multiplicity-free for the G-action, where for α ∈ Z_{≥0}, the corresponding representation Vα in Definition 3.3 is S(α^{2n}) C^{2n+1} from (2.2) (see for instance [ADF80, Thm. 4.1]). We associate to d the invariant differential operator
Dd = ∑_{i=1}^{2n+1} d*_i · di,
and by Proposition 3.4 there exists a polynomial Pd(s) with
Dd · d^s = Pd(s) · d^s.    (3.7)
Theorem 3.9. If d is the tuple of sub-maximal Pfaffians of the generic (2n + 1) × (2n + 1) skew-symmetric
matrix, then
bd(s) = Pd(s) = ∏_{i=0}^{n−1} (s + 2i + 3).    (3.8)
Proof. We begin by showing, using the strategy from the proof of Theorem 3.5, that Pd(s) = ∏_{i=0}^{n−1} (s + 2i + 3). We have a commutative diagram
U(gl_{2n+1}) --F_{2n}--> U(gl_{2n+1})
     |τ                      |τ
     v                       v
    DX  --------F------->   DX
If we let Dd* = ∑_{i=1}^{2n+1} di · d*_i then Dd = (−1)^n · F(Dd*). It follows from [CSS13, Thm. 2.3] that
d*_i · d0^s = 0 for i ≠ 0,    d*_0 · d0^s = (∏_{i=0}^{n−1} (s + 2i)) · d0^{s−1},
from which we obtain
Dd* · d0^s = (∏_{i=0}^{n−1} (s + 2i)) · d0^s.
Since Dd* is in τ(ZU(gl_{2n+1})) by [HU91, Cor. 11.3.19], it follows from Lemma 2.2 with r = 2n + 1 that
Dd · d0^s = (−1)^n · (∏_{i=0}^{n−1} (−s − 2n − 1 + 2i)) · d0^s,
from which we obtain
Pd(s) = ∏_{i=0}^{n−1} (s + 2i + 3).    (3.9)
Using the notation in Section 2.4 we have that bd(s) = bZn(s + 3) since Zn has codimension three in Xn, so (3.8) is equivalent to bZn(s) = ∏_{i=0}^{n−1} (s + 2i). By induction on n we have bZ_{n−1}(s) = ∏_{i=0}^{n−2} (s + 2i), which divides bZn(s) by (2.31). This shows that −3, −5, · · · , −2n + 1 are roots of bd(s), and since bd(s) divides
Pd (s), it follows from (3.9) that the only other possible root is −2n − 1. Using [RWW14, Thm. 5.5] and
Corollary 3.2 with r = 2n + 1 and I being the ideal generated by the di ’s, it follows that −2n − 1 is indeed
a root of bd (s), hence (3.8) holds.
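For n = 1 the whole computation can be seen by hand or in a few lines of sympy (a toy check of ours, not part of the proof): the sub-maximal Pfaffians of a 3 × 3 skew-symmetric matrix are (x23, x13, x12), and Dd acts on x12^s by the scalar Pd(s) = s + 3.

import sympy as sp

s = sp.Symbol('s')
x12, x13, x23 = sp.symbols('x12 x13 x23')
d = [x23, x13, x12]          # Pfaffian left after deleting row/column i = 1, 2, 3
f = x12**s                   # d^s with all the auxiliary variables but one specialized

Dd_f = sum(sp.diff(di * f, xi) for di, xi in zip(d, [x23, x13, x12]))
print(sp.simplify(Dd_f / f))  # prints s + 3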
Remark 3.10. The method described in Remark 3.8 can be used in this case as well. Using the decomposition
(2.2) and the Littlewood-Richardson rule, we see that d∗i · S((α+1)2n ) C2n+1 ⊂ S(α2n ) C2n+1 for α ∈ Z≥0 .
Moreover, under the action of diagonal matrices the weights of d1 , · · · , d2n+1 are linearly independent. Hence
the tuple d = (d1 , d2 , · · · , d2n+1 ) has a b-function of several variables, and as in the proof of [Lőr13, Theorem
5.1] we obtain the formulas
d∗i · di · ds = (si + 1)(s + 3)(s + 5) · · · (s + 2n − 1) · ds .
Together with the analogue of Lemma 4.4 below, this gives the Bernstein-Sato polynomial of the ideal.
4. Bernstein–Sato polynomials for maximal minors
In this section we generalize Theorem 3.7 to arbitrary m × n matrices. We use the notation from Sections 2.2 and 3.3: in particular d = (dK)_{K ∈ ([m] choose n)} is the tuple of maximal minors as in (2.9).
Theorem 4.1. The Bernstein–Sato polynomial of the tuple of maximal minors of the generic m × n matrix is
bd(s) = ∏_{i=m−n+1}^{m} (s + i).
We know by Proposition 3.4 and Theorem 3.5 that bd(s) divides ∏_{i=m−n+1}^{m} (s + i). By induction, we also know from (2.31) that bd(s) is divisible by ∏_{i=m−n+1}^{m−1} (s + i), so we would be done if we can show that −m is a root of bd(s). This would follow from Proposition 3.1 if we could prove the following:
Conjecture 4.2. If we associate as in Section 3.1 the D-modules Fα, α ∈ Z, to the tuple d of maximal minors of the generic m × n matrix, then there exists a strict inclusion F_{−m+1} ⊊ F_{−m}.
We weren’t able to verify this conjecture when m > n + 1, so we take a different approach. We consider
the (1 + n · (m − n))-tuple
p = (p0, pij) ∈ S^{1+n·(m−n)},
as in (2.12) and associate to p0 a variable s0 , and to each pij a variable sij . We write s = (s0 , sij ) and
consider Bsp as defined in (2.25). Inside Bsp , we consider the C[s]-submodule
A_p^s = ∑_{ĉ} C[s] · p^{s+ĉ}, where p^{s+ĉ} = p0^{s0+c0} · ∏_{i∈[n], j∈[m]\[n]} pij^{sij+cij} and ĉ = (c0, cij) ranges over Z^{1+n·(m−n)}.    (4.1)
A more invariant way of describing A_p^s follows from the discussion in Section 2.2:
A_p^s consists precisely of the sln-invariants inside the DX-module B_p^s.    (4.2)
It follows that A_p^s is in fact a D_X^{sln}-module. Since ∂K ∈ D_X^{sln} for every K ∈ ([m] choose n), we can make the following:
Definition 4.3. We let s = s0 + ∑_{i,j} sij and define ap(s) to be the monic polynomial of the lowest degree in s for which ap(s) · p^s belongs to
∑ { C[s] · ∂K · p^{s+ĉ} : K ∈ ([m] choose n), |ĉ| = 1 }.
With Pd (s) as computed in Theorem 3.5 we will prove that
ap(s) divides bd(s), and    (4.3)
Pd(s) divides ap(s).    (4.4)
Combining (4.3) with (4.4), and with the fact that bd (s) divides Pd (s), concludes the proof of Theorem 4.1.
It follows from (2.14) that the elements p^{s+ĉ} in (4.1) in fact give a basis of A_p^s as a C[s]-module. We have
A_p^s = ⊕_{α∈Z} A_p^s(α),
which we can think of as a weight space decomposition, where
A_p^s(α) = { C[s] · p^{s+ĉ} : |ĉ| = α }    (4.5)
is the set of elements in A_p^s on which g ∈ gln acts by multiplication by tr(g) · (s + α), and in particular each A_p^s(α) is preserved by D_X^{gln}. Using (2.13) we obtain that multiplication by dK sends A_p^s(α) into A_p^s(α + 1). Since dK · ∂K ∈ D_X^{gln}, it then follows that multiplication by ∂K sends A_p^s(α + 1) into A_p^s(α). We obtain:
Lemma 4.4. The polynomial ap(s) is the monic polynomial of lowest degree for which there exist a finite collection of tuples ĉ ∈ Z^{1+n·(m−n)} with |ĉ| > 0 and corresponding operators Qĉ ∈ DX[s] such that
∑_{ĉ} Qĉ · p^{s+ĉ} = ap(s) · p^s.    (4.6)
Proof. Using the fact that p^{s+ĉ} and ap(s) · p^s are sln-invariants, we may assume that Qĉ ∈ D_X^{sln}[s]. Since every element in D_X^{sln} can be expressed as a linear combination of products Q1 · Q2 · Q3, where Q1 is a product of ∂K's, Q2 is a product of dK's, and Q3 ∈ D_X^{gln}, the conclusion follows from the observation that D_X^{gln} preserves each weight space, dK increases the weight by one, while ∂K decreases the weight by one.
We are now ready to prove that ap (s) divides bd (s):
Proof of (4.3). Using (2.27) with s = (sK)_{K ∈ ([m] choose n)} we can find a finite collection of tuples ĉ ∈ Z^{([m] choose n)} with |ĉ| = 1, and corresponding operators Pĉ ∈ DX[s] such that we have an equality inside B_d^s:
∑_{ĉ} ĉ_s · Pĉ · d^{s+ĉ} = bd(s) · d^s.    (4.7)
Note that by (2.26), setting sK = 0 makes ĉs = 0 whenever ĉ is such that cK < 0. We apply to (4.7) the
specialization
sK = 0 whenever |K ∩ [n]| ≤ n − 2,    s[n] = s0,    and s_{([n]\{i})∪{j}} = sij for i ∈ [n], j ∈ [m] \ [n].    (4.8)
We then use the equalities p0 = d[n], pij = d_{([n]\{i})∪{j}} and (2.13), and regroup the terms to obtain (with an abuse of notation) a finite collection of tuples ĉ = (c0, cij) ∈ Z^{1+n·(m−n)} with |ĉ| = 1, and corresponding operators Qĉ ∈ DX[s], where s denotes now the tuple of variables (s0, sij), such that the following equality holds in B_p^s:
∑_{ĉ} Qĉ · p^{s+ĉ} = bd(s) · p^s.
Using Lemma 4.4 it follows that ap (s) divides bd (s) as desired.
We conclude by proving (4.4), but before that we establish a preliminary result. For |ĉ| = 1 we observe that p^{s+ĉ} ∈ A_p^s(1), thus ∂K · p^{s+ĉ} can be expressed as a C[s]-linear combination of the basis elements of A_p^s(0). We define Q_{K,ĉ} ∈ C[s] to be the coefficient of p^s in this expression, and write ê = (1, 0^{n·(m−n)}).
Lemma 4.5. Write Q^0_{K,ĉ} ∈ C[s0] for the result of the specialization sij = −1 for all i ∈ [n], j ∈ [m] \ [n], applied to Q_{K,ĉ}. We have that Q^0_{K,ĉ} = 0 unless K = [n] and ĉ = ê.
Proof. Since the specialization map commutes with the action of DX, we have that
Q^0_{K,ĉ} is the coefficient of p0^{s0} / ∏_{i,j} pij inside ∂K · p0^{s0+c0} · ∏_{i,j} pij^{cij−1}.
Suppose first that ĉ is a tuple with some entry c_{i0j0} ≥ 1: we show that for any K, Q^0_{K,ĉ} = 0. To see this, note that applying any sequence of partial derivatives to p0^{s0+c0} · ∏_{i,j} pij^{cij−1} won't turn the exponent of p_{i0j0} negative. Since ∂K ∈ D_X^{sln}, we may then assume that
∂K · p0^{s0+c0} · ∏_{i,j} pij^{cij−1} = p0^{s0+d0} · ∏_{i,j} pij^{dij} · F,    (4.9)
where d0, dij ∈ Z, d_{i0j0} = 0, and F ∈ S^{sln}[s0] is a polynomial in s0 whose coefficients are sln-invariant. Since S^{sln} is generated by the maximal minors dK, we can apply (2.13) to rewrite the right hand side of (4.9) as a C[s0]-linear combination of p0^{s0+e0} · ∏_{i,j} pij^{eij} where e0, eij ∈ Z and e_{i0j0} ≥ 0. We conclude that Q^0_{K,ĉ} = 0.
From now on we assume that ĉ has all cij ≤ 0. Since |ĉ| = 1, we must have c0 ≥ 1. We look at weights under the action of the subalgebra
{ Tt = ( t·In  0 ; 0  0 ) : t ∈ C } ⊂ glm,
and note that
Tt · (p0^{s0+c0} · ∏_{i,j} pij^{cij−1}) = t · ((s0 + c0) · n + (n − 1) ∑_{i,j} (cij − 1)) · p0^{s0+c0} · ∏_{i,j} pij^{cij−1},
Tt • ∂K = −t · |K ∩ [n]| · ∂K, using notation (2.4), and
Tt · (p0^{s0} / ∏_{i,j} pij) = t · (s0 · n + (n − 1) ∑_{i,j} (−1)) · (p0^{s0} / ∏_{i,j} pij).
It follows that Q^0_{K,ĉ} can be non-zero only when
(s0 + c0) · n + (n − 1) ∑_{i,j} (cij − 1) − |K ∩ [n]| = s0 · n + (n − 1) ∑_{i,j} (−1),
which using the fact that c0 + ∑_{i,j} cij = 1 is equivalent to c0 + (n − 1) = |K ∩ [n]|. Since c0 ≥ 1 this equality can only hold when c0 = 1 (which then forces all cij = 0), and K = [n].
Proof of (4.4). Using Definition 4.3, we can find finitely many tuples ĉ ∈ Z^{1+n·(m−n)} with |ĉ| = 1, and polynomials P_{K,ĉ} ∈ C[s] for K ∈ ([m] choose n) such that
∑_{K,ĉ} P_{K,ĉ} · ∂K · p^{s+ĉ} = ap(s) · p^s.    (4.10)
Using the definition of QK,ĉ, we obtain
∑_{K,ĉ} P_{K,ĉ} · Q_{K,ĉ} = ap(s).
Applying the specialization sij = −1 for all i ∈ [n], j ∈ [m]\[n], it follows from Lemma 4.5 that
P^0_{[n],ê} · Q^0_{[n],ê} = ∑_{K,ĉ} P^0_{K,ĉ} · Q^0_{K,ĉ} = ap(s0 − n · (m − n)),
where P^0_{K,ĉ} ∈ C[s0] is (just as Q^0_{K,ĉ}) the specialization of P_{K,ĉ}. We will show that Q^0_{[n],ê} = Pd(s0 − n · (m − n)),
from which it follows that Pd (s0 − n · (m − n)) divides ap (s0 − n · (m − n)). Making the change of variable
s = s0 − n · (m − n) proves that Pd (s) divides ap (s), as desired.
To see that Q^0_{[n],ê} = Pd(s0 − n · (m − n)), we consider the action of Dd on p^s: using (3.4), Theorem 3.5, and applying the specialization (4.8) as before, we obtain
∑_{K ∈ ([m] choose n)} ∂K · dK · p^s = Dd · p^s = Pd(s) · p^s.
Using (2.13), we can rewrite the above equality as
∂[n] · p^{s+ê} + ∑_{K ≠ [n], |ĉ|=1} R_{K,ĉ} · ∂K · p^{s+ĉ} = Pd(s) · p^s,
for some R_{K,ĉ} ∈ C[s]. We now apply the same argument as we did to (4.10): we consider the further specialization sij = 0 and use Lemma 4.5 to obtain Q^0_{[n],ê} = Pd(s0 − n · (m − n)), which concludes our proof.
5. The Strong Monodromy Conjecture for maximal minors and sub-maximal Pfaffians
Let X = CN and Y ⊂ X a closed subscheme with defining ideal I. Consider a log resolution f : X ′ → X
of the ideal I (or of the pair (X, Y ); see for instance [Laz04, Sec. 9.1.B]), i.e. a proper birational morphism
f : X ′ → X such that IOX ′ defines an effective Cartier divisor E, f induces an isomorphism f : X ′ \ E →
X \ Y , and the divisor KX ′ /X + E has simple normal crossings support. Write Ej , j ∈ J , for the irreducible
components of the support of E, and express
X
E = ∑_{j∈J} aj · Ej,    K_{X′/X} = ∑_{j∈J} kj · Ej.
X
Y
1
,
(5.1)
ZI (s) =
χ(EI◦ ) ·
ai · s + ki + 1
I⊆J
i∈I
T
S
where χ denotes the Euler characteristic and EI◦ = ( i∈I Ei ) \ ( i∈I
/ Ei ). The topological zeta function is
independent of the log resolution, and the Strong Monodromy Conjecture asserts that the poles of ZI (s) are
roots of bI (s), and in an even stronger form that
bI(s) · ZI(s) is a polynomial.    (5.2)
We verify (5.2) for maximal minors and sub-maximal Pfaffians as a consequence of Theorems 3.9 and 4.1, by
taking advantage of the well-studied resolutions given by complete collineations in the case of determinantal
varieties, and complete skew forms in the case of Pfaffian varieties [Vai84, Tha99, Joh03].
5.1. Maximal minors. Let m ≥ n and X = Xm,n denote the vector space of m × n matrices as before.
Denote by Y the subvariety of matrices of rank at most n − 1, and let I be the ideal of maximal minors
defining Y . It follows from [Joh03, Cor. 4.5 and Cor. 4.6] that I has a log resolution with J = {0, · · · , n − 1}
and
E = ∑_{i=0}^{n−1} (n − i) · Ei,    K_{X′/X} = ∑_{i=0}^{n−1} ((m − i)(n − i) − 1) · Ei.
It follows that ki + 1 = (m − i)(n − i), and ai = n − i for i = 0, · · · , n − 1, and therefore by our Theorem 4.1
the denominator of every term in (5.1) divides bI (s). This is enough to conclude (5.2).
5.2. Sub-maximal Pfaffians. Let X = Xn be the vector space of (2n + 1) × (2n + 1) skew-symmetric
matrices. Denote by Y the subvariety of matrices of rank at most 2(n − 1) and let I denote the ideal of
sub-maximal Pfaffians defining Y . As shown below, there is a log resolution of I with J = {0, · · · , n−1} and
E = ∑_{i=0}^{n−1} (n − i) · Ei,    K_{X′/X} = ∑_{i=0}^{n−1} (2(n − i)² + (n − i) − 1) · Ei.    (5.3)
It follows that (ki + 1)/ai = 2(n − i) + 1 for i = 0, · · · , n − 1, and thus our Theorem 3.9 implies (5.2).
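The pole/root matching asserted in these two subsections reduces to the arithmetic identities (ki + 1)/ai = m − i for minors and (ki + 1)/ai = 2(n − i) + 1 for Pfaffians; the sketch below (plain Python, our own cross-check) confirms that every candidate pole of ZI(s) is among the roots of bI(s) for a range of sizes.

from fractions import Fraction

def minors_ok(m, n):
    roots = {-i for i in range(m - n + 1, m + 1)}                    # Theorem 4.1
    poles = {-Fraction((m - i) * (n - i), n - i) for i in range(n)}  # -(k_i+1)/a_i
    return poles <= roots

def pfaffians_ok(n):
    roots = {-(2 * i + 3) for i in range(n)}                                # Theorem 3.9
    poles = {-Fraction(2 * (n - i)**2 + (n - i), n - i) for i in range(n)}  # -(k_i+1)/a_i
    return poles <= roots

print(all(minors_ok(m, n) for n in range(1, 6) for m in range(n, 9)))  # True
print(all(pfaffians_ok(n) for n in range(1, 8)))                       # True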
We sketch the construction of the log resolution, based on the strategy in [Joh03, Chapter 4]: this is
perhaps well-known, but we weren’t able to locate (5.3) explicitly in the literature. We write Yi ⊂ X for the
subvariety of (2n + 1) × (2n + 1) skew-symmetric matrices of rank at most 2i. We define the sequence of
transformations πi : X i+1 → X i , fi = π0 ◦ π1 ◦ · · · ◦ πi : X i+1 → X 0 , where X 0 = X, X 1 is the blow-up of
X 0 at Y0 , and in general X i+1 is the blow-up of X i at the strict transform Y i of Yi along fi−1 . The desired
log resolution is obtained by letting X ′ = X n and f = fn−1 : X ′ → X. Each Y i is smooth (as we’ll see
shortly), so the same is true about the exceptional divisor Ei of the blow-up πi . We abuse notation and
write Ei also for each of its transforms along the blow-ups πi+1 , · · · , πn−1 . It follows from the construction
below that the Ei ’s are defined locally by the vanishing of distinct coordinate functions, so f : X ′ → X is
indeed a log resolution.
We show by induction on i = n, n − 1, · · · that X n−i admits an affine open cover where each open set V
in the cover has a filtration V = Vi ⊃ Vi−1 ⊃ · · · ⊃ V0 , isomorphic to
(Y_i^i ⊃ Y_{i−1}^i ⊃ · · · ⊃ Y_0^i) × C^{4i+3} × · · · × C^{4(n−1)−1} × C^{4n−1},    (5.4)
where Y_i^n = Yi and more generally Y_j^i is the variety of (2i + 1) × (2i + 1) skew-symmetric matrices of rank at most 2j.
The key property of the filtration (5.4) is that for each j = 0, · · · , i, Vj is obtained by intersecting V with
the strict transform of Yn−i+j along fn−i−1 . In particular V0 = V ∩ Y n−i is (on the affine patch V ) the
center of blow-up for πi . Since Y00 is just a point, V0 is an affine space and hence smooth.
When i = n, X n−i = X, so we can take V = X and (5.4) to be the filtration X = Yn ⊃ Yn−1 ⊃ · · · ⊃ Y0 .
We discuss the first blow-up (i = n − 1) and the associated filtration, while for i < n − 1 the conclusion
follows from an easy iteration of our argument. We write xij (resp. yij ), 1 ≤ i < j ≤ 2n + 1 for the
coordinate functions on X (resp. on PX, the projectivization of X). X 1 is defined inside X × PX by the
equations xij ykl = xkl yij , and we choose V ⊂ X 1 to be the affine patch where y12 6= 0 (similar reasoning
applies on each of the affine patches yij 6= 0). The coordinate functions on V are t0 = x12 and uij = yij /y12
for (i, j) 6= (1, 2). Setting u12 = 1, we get that the map π0 : V → X 0 corresponds to a ring homomorphism
π0∗ : C[xij ] −→ C[t0 , uij ] given by xij 7→ t0 · uij ,
and E0 ∩ V is defined by the equation t0 = 0. With the usual conventions uji = −uij , uii = 0, we write
Mij = Pf {1,2,i+2,j+2} for the Pfaffian of the 4 × 4 principal skew-symmetric submatrix of (uij ) obtained by
selecting the rows and columns of u indexed by 1, 2, i + 2 and j + 2, 1 ≤ i, j ≤ 2n − 1. Using the calculation
in the proof of Lemma 2.4, we obtain that {Mij : 1 ≤ i < j ≤ 2n − 1} ∪ {t0 } ∪ {u1i , u2i : i = 3, · · · , 2n + 1}
is a system of coordinate functions on V , and moreover
π0*(I_{p+1}(xij)) = t0^{p+1} · I_p(Mij), for p = 1, · · · , n,    (5.5)
where Ip (aij ) denotes the ideal generated by the 2p × 2p Pfaffians of the skew-symmetric matrix (aij ).
Thinking of {t0} ∪ {u1i, u2i : i = 3, · · · , 2n + 1} as the coordinate functions on C^{4n−1}, and of {Mij} as the coordinate functions on X_{n−1} = Y_{n−1}^{n−1}, we identify Y_{p−1}^{n−1} with the zero locus of I_p(Mij) for p = 1, · · · , n, and note that by (5.5) it is the strict transform of Yp, which is the variety defined by I_{p+1}(xij). This yields the
filtration (5.4) for i = n − 1. By letting p = n − 1 in (5.5) and noting that I = In (xij ), we obtain that the
inverse image π0−1 (I) = IOX 1 vanishes with multiplicity n along E0 . Iterating this, we obtain the formula
(5.3) for the exceptional divisor E. Pulling back the standard volume form dx = dx12 ∧ · · · ∧ dx_{2n,2n+1} on X along π0, we obtain (on the affine patch V)
π0*(dx) = t0^{2n²+n−1} · dt0 ∧ du13 ∧ · · · ∧ du_{2n,2n+1},
which vanishes with multiplicity 2n² + n − 1 along E0. Iterating this, we obtain formula (5.3) for K_{X′/X}.
Acknowledgments
We are grateful to Nero Budur for many interesting conversations and helpful suggestions. Experiments
with the computer algebra software Macaulay2 [GS] have provided numerous valuable insights. Raicu
acknowledges the support of the National Science Foundation under grant DMS-1458715. Walther acknowledges the support of the National Science Foundation under grant DMS-1401392. Weyman acknowledges
the support of the Alexander von Humboldt Foundation, and of the National Science Foundation under
grant DMS-1400740.
References
[ADF80] S. Abeasis and A. Del Fra, Young diagrams and ideals of Pfaffians, Adv. in Math. 35 (1980), no. 2, 158–178, DOI
10.1016/0001-8708(80)90046-8. MR560133 (83f:14040)
[Ber72] I. N. Bernšteı̆n, Analytic continuation of generalized functions with respect to a parameter, Funkcional. Anal. i Priložen.
6 (1972), no. 4, 26–40. MR0320735 (47 #9269)
[Bud13] Nero Budur, Bernstein-Sato polynomials (2013). Lecture notes: Summer School on Algebra, Algorithms, and Algebraic
Analysis, Rolduc Abbey, Netherlands.
[Bud15] ——, Bernstein-Sato polynomials and generalizations, available at https://perswww.kuleuven.be/~u0089821/Barcelona/BarcelonaNotes.pdf (2015). Lecture notes, UPC Barcelona.
[BMS06] Nero Budur, Mircea Mustaţǎ, and Morihiko Saito, Bernstein-Sato polynomials of arbitrary varieties, Compos. Math.
142 (2006), no. 3, 779–797, DOI 10.1112/S0010437X06002193. MR2231202 (2007c:32036)
[CSS09] Sergio Caracciolo, Alan D. Sokal, and Andrea Sportiello, Noncommutative determinants, Cauchy-Binet formulae,
and Capelli-type identities. I. Generalizations of the Capelli and Turnbull identities, Electron. J. Combin. 16 (2009),
no. 1, Research Paper 103, 43. MR2529812 (2010g:15003)
[CSS13] ——, Algebraic/combinatorial proofs of Cayley-type identities for derivatives of determinants and Pfaffians, Adv. in Appl. Math. 50 (2013), no. 4, 474–594, DOI 10.1016/j.aam.2012.12.001. MR3032306
[dCEP80] C. de Concini, David Eisenbud, and C. Procesi, Young diagrams and determinantal varieties, Invent. Math. 56 (1980),
no. 2, 129–165, DOI 10.1007/BF01392548. MR558865 (81m:14034)
[DL92] J. Denef and F. Loeser, Caractéristiques d’Euler-Poincaré, fonctions zêta locales et modifications analytiques, J. Amer.
Math. Soc. 5 (1992), no. 4, 705–720, DOI 10.2307/2152708 (French). MR1151541 (93g:11118)
[DL98] Jan Denef and François Loeser, Motivic Igusa zeta functions, J. Algebraic Geom. 7 (1998), no. 3, 505–537. MR1618144
(99j:14021)
[Doc13] Roi Docampo, Arcs on determinantal varieties, Trans. Amer. Math. Soc. 365 (2013), no. 5, 2241–2269, DOI
10.1090/S0002-9947-2012-05564-4. MR3020097
[HU91] Roger Howe and Tōru Umeda, The Capelli identity, the double commutant theorem, and multiplicity-free actions,
Math. Ann. 290 (1991), no. 3, 565–619, DOI 10.1007/BF01459261. MR1116239 (92j:17004)
[GS] Daniel R. Grayson and Michael E. Stillman, Macaulay 2, a software system for research in algebraic geometry,
Available at http://www.math.uiuc.edu/Macaulay2/.
[Joh03] Amanda Ann Johnson, Multiplier ideals of determinantal ideals, ProQuest LLC, Ann Arbor, MI, 2003. Thesis (Ph.D.)–
University of Michigan. MR2704808
[JP79] Tadeusz Józefiak and Piotr Pragacz, Ideals generated by Pfaffians, J. Algebra 61 (1979), no. 1, 189–198, DOI
10.1016/0021-8693(79)90313-2. MR554859 (81e:13005)
[Kim82] Tatsuo Kimura, The b-functions and holonomy diagrams of irreducible regular prehomogeneous vector spaces, Nagoya
Math. J. 85 (1982), 1–80. MR648417 (84j:32017)
[Kim03] ——, Introduction to prehomogeneous vector spaces, Translations of Mathematical Monographs, vol. 215, American Mathematical Society, Providence, RI, 2003. Translated from the 1998 Japanese original by Makoto Nagura and Tsuyoshi Niitani and revised by the author. MR1944442 (2003k:11180)
[Laz04] Robert Lazarsfeld, Positivity in algebraic geometry. II, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge.
A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of
Modern Surveys in Mathematics], vol. 49, Springer-Verlag, Berlin, 2004. Positivity for vector bundles, and multiplier
ideals. MR2095472 (2005k:14001b)
[Lőr13] András C. Lőrincz, The b-functions of quiver semi-invariants, arXiv 1310.3691 (2013).
[Lőr15] ——, Singularities of zero sets of semi-invariants for quivers, arXiv 1509.04170 (2015).
[Rai15] Claudiu Raicu, Characters of equivariant D-modules on spaces of matrices, arXiv 1507.06621 (2015). To appear in
Compos. Math.
[RWW14] Claudiu Raicu, Jerzy Weyman, and Emily E. Witt, Local cohomology with support in ideals of maximal minors and
sub-maximal Pfaffians, Adv. Math. 250 (2014), 596–610, DOI 10.1016/j.aim.2013.10.005. MR3122178
[SS74] Mikio Sato and Takuro Shintani, On zeta functions associated with prehomogeneous vector spaces, Ann. of Math. (2)
100 (1974), 131–170. MR0344230 (49 #8969)
BERNSTEIN–SATO POLYNOMIALS FOR MAXIMAL MINORS AND SUB–MAXIMAL PFAFFIANS
21
[Tha99] Michael Thaddeus, Complete collineations revisited, Math. Ann. 315 (1999), no. 3, 469–495, DOI
10.1007/s002080050324. MR1725990 (2000j:14081)
[Vai84] Israel Vainsencher, Complete collineations and blowing up determinantal ideals, Math. Ann. 267 (1984), no. 3, 417–
432, DOI 10.1007/BF01456098. MR738261 (85f:14053)
[Vey06] Willem Veys, Arc spaces, motivic integration and stringy invariants, Singularity theory and its applications, Adv.
Stud. Pure Math., vol. 43, Math. Soc. Japan, Tokyo, 2006, pp. 529–572. MR2325153 (2008g:14023)
[Wey03] Jerzy Weyman, Cohomology of vector bundles and syzygies, Cambridge Tracts in Mathematics, vol. 149, Cambridge
University Press, Cambridge, 2003. MR1988690 (2004d:13020)
[Wit12] Emily E. Witt, Local cohomology with support in ideals of maximal minors, Adv. Math. 231 (2012), no. 3-4, 1998–
2012, DOI 10.1016/j.aim.2012.07.001. MR2964631
Department of Mathematics, University of Connecticut, Storrs, CT 06269
E-mail address: andras.lorincz@uconn.edu
Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556
Institute of Mathematics “Simion Stoilow” of the Romanian Academy
E-mail address: craicu@nd.edu
Department of Mathematics, Purdue University, West Lafayette, IN 47907
E-mail address: walther@math.purdue.edu
Department of Mathematics, University of Connecticut, Storrs, CT 06269
E-mail address: jerzy.weyman@uconn.edu
| 0 |
Market Coupling as the Universal Algorithm
to Assess Zonal Divisions
Grzegorz Oryńczak, Marcin Jakubek, Karol Wawrzyniak, Michał Kłos
National Centre for Nuclear Research, Świerk Computing Centre
Otwock-Świerk, Poland
Karol.Wawrzyniak@fuw.edu.pl
Abstract— Adopting a zonal structure of an electricity market requires specification of the zones’ borders. In this paper we use social welfare as the measure to assess the quality of various zonal divisions. The social welfare is calculated by a Market Coupling algorithm. The analyzed divisions are found using the extended Locational Marginal Prices (LMP) methodology presented in paper [1], which takes into account variable weather conditions. The offered method of assessing a proposed division of the market into zones is, however, not limited to the LMP approach, but can evaluate the social welfare of divisions obtained by any methodology.
Index Terms— Flow Based Market Coupling, Power System
Economics, Social Welfare
I.
INTRODUCTION
The energy market in Europe is undergoing a process of
transformation aimed at integration of national markets and
making better use of renewable generation sources. The
market structure used in many countries, mostly due to
historical reasons, is the uniform pricing, in which there is a
single price of energy set on a national market for each hour
of a day. In spite of its apparent simplicity, such an approach
has serious disadvantages. The equilibrium set on the market
does not take into account safety requirements of the grid.
Hence, (i) the single-price equilibrium set on the market
(energy exchange) is frequently unfeasible, (ii) the system
operator has to perform costly readjustments, (iii) costs of
supplying the energy differ between locations, but they are
not covered where they arise.1 With introduction of other
forms of market, congestion costs are mitigated and the price
on the market reflects the true costs of supplying energy to
different locations in a more adequate way.
Hitherto, the exact design of the future pan-European energy market remains an open question, as the Third Energy Package does not specify it precisely, but the two most popular approaches towards which national markets evolve are nodal and zonal pricing. The nodal pricing model is currently used in, among others, the US and Russia. Zonal pricing has been introduced in the Nordic countries as well as in Great Britain. This type of pricing is gaining in popularity across Europe, and will be the main subject of this paper.
A zonal market, which can be thought of as a compromise between the simplicity of the uniform structure and the accuracy of the nodal one, introduces differentiation of prices between regions with distinct costs of supplying energy, but it maintains the transparency which the nodal market lacks. The grid is divided into geographical regions (zones), each having a separate market for the energy with a possibly different price. A Market Coupling (MC) algorithm is used to control inter-zonal power flows and to calculate prices in zones given those flows. This way, under the presumption that the zones were chosen so that frequently congested lines are on their borders, the equilibrium on zonal markets will take into account transfer limits on those critical lines. The need for additional congestion management is thus minimized, with most of the task being performed by the MC mechanism. Of course, small adjustments of the equilibrium to satisfy limits on intra-zonal lines might be necessary, but they are expected to be less costly than the adjustments on a corresponding uniform market.
1
For example, in Poland in 2011 the cost of the balancing market readjustments amounted to more than 3% (>250 million EUR) of the overall costs of production (source: URE/ARE S.A.).
This work was supported by the EU and MSHE grant nr POIG.02.03.00-00013/09. The use of the CIS computer cluster at NCBJ is gratefully acknowledged.
Still, there is no consensus in the literature with respect to the methodology for identifying the optimal number of zones and their borders. The existing methods are mostly based on a two-stage approach – assignment of some specific values to each of the nodes, followed by division of the system into regions by clustering the nodes over those parameters. Among existing methods, we can distinguish two popular approaches for choosing the values characterizing nodes.
The first approach is based on nodal prices, also called Locational Marginal Prices (LMPs) [2,3]. A nodal price represents the local value of energy, i.e. the cost of supplying an extra 1 MW of energy to a particular node – a physical point in
the transmission system, where the energy can be injected by
generators or withdrawn by loads. This price consists of the
cost of producing energy used at a given node and the cost of
delivering it there taking into account congestion. Therefore
LMPs separate locations into higher and lower price areas if
congestion occurs between them. The second approach is
based on Power Transfer Distribution Factors (PTDFs). The
procedure starts from identification of congested lines, for
which PTDFs are then calculated. The distribution factors
reflect the influence of unit nodal injections on power flow
along transmission lines, thus grouping the nodes
characterized by similar factors into one zone defines a region
of desirably similar sensitivity to congestions [4].
The main issue that has to be addressed concerning the
ultimate choice of zonal configuration is the inconsistency in
determinants used to separate the nodes and in the criteria
used to evaluate constructed partitions. The two
aforementioned methods derive divisions from a reasonable
assumption that congested lines should become inter-zonal
links, however, the actual shape of borderlines is based on
several different premises used to (i) label each of nodes with
a value (e.g. nodal price) or a set of values (e.g. PTDFs) and
(ii) to group the nodes into geographic areas using a
clustering technique.
Since the methods of division assign the values based on data derived from a nodal market structure (e.g. LMPs), there is a justified need for an assessment which evaluates a newly defined zonal architecture. We can broadly divide evaluation criteria into those based on safety of the network, which include, for example, predictability of the flows on intra-zonal lines, and those based on economic (market) measures, like market liquidity, concentration or social welfare (suggested by [7] among others). In this article we calculate and compare social welfare (SW), defined as the sum of producers’ and consumers’ surplus in the equilibrium of supply and demand on the zonal market. Specifically, we use the mechanism of Flow Based Market Coupling (FB MC) to determine the equilibria on each of the zonal markets, taking into account the limits of power flows between them.2
As a test case of divisions, we use the results of the LMP methodology presented in [1], which takes into account variable weather conditions and is applied to a simplified model of the Polish grid, and derive the appropriate welfare measures. The exact methodology of Market Coupling and the calculation of social welfare is presented in the next section.
In Sec. III we describe the model of network used as the test
case. The results derived on it are presented and discussed in
Sec. IV. In Sec. V we conclude and present directions for
future work.
II.
THE METHODOLOGY
The zonal energy market can be represented as a set of
energy exchanges, each governing the trade between
generators and consumers of energy located in a particular
geographic area. Energy transfers between zones are allowed
and are governed by MC mechanism, which takes into
account the constraints of the inter-zonal transmission lines.
In order to determine safe supply-demand equilibria on each
of energy exchanges, the MC mechanism must determine
how the realization of buy/sell bids translates into (i) power
injections/withdrawals in the nodes of the grid and into (ii)
flows on inter-zonal lines. In doing so, the aim of the MC
algorithm is to maximize the social welfare (consumer
surplus + producer surplus + congestion rent) while keeping
the flows in safety limits.
The mechanism of keeping the inter-zonal flows in safety
limits can be of varying level of complexity. We use in our
approach Flow Based MC, which takes advantage of the
Power Transfer Distribution Factors (PTDF) matrix to
determine how a particular injection/withdrawal influences
power flows on inter-zonal lines. This approach is analogous
to a Direct Current Power Flow (DC PF) model of the flows
2 We use the term “Market Coupling” specifically as the mechanism
governing the exchange between market zones which are managed by one
system operator (for example, the case of a national market divided into
zones, or a common zonal market spanning across more than one country).
Governance of the flows on the borders of two (or more) not integrated
markets is not studied in this paper.
in the grid. As such, FB MC is a significant step-up in
robustness compared to simpler MC mechanisms, for
example the Available Transfer Capacity (ATC) MC, which
limits only the aggregated flow between the zones, without
calculating explicitly the flows on each of the inter-zonal
lines.
We based our implementation of FB MC on the
description of the COSMOS/Euphemia algorithm [5,8],
which was derived for the CWE (Central Western Europe)
energy market.
A. Input of Offers
The main input data for the MC algorithm are the buy/sell offers of the energy consumers/generators. An n-th offer, n = 1,…, N, is characterized by its affiliation to a zone j, j = 1,…, J, which we denote by n ∈ Zj and which is derived from the physical location of the node where the energy will be injected/withdrawn, and by the volume qn to be traded, which is coded as a positive amount for a sell bid and negative for a buy bid. The algorithm allows partial realization of an offer and also the use of “triangle” offers, in which the offer price either decreases (for a buy offer) or increases (for a sell offer) linearly on the interval [0, |qn|]; thus the offer is characterized by two prices, Pn0 and Pn1, with Pn0 ≥ Pn1 for a buy offer and Pn0 ≤ Pn1 for a sell offer. In the equilibrium found by the algorithm, each of the offers can be either accepted, accepted partially or not accepted at all, which is coded by a coefficient An ∈ [0,1]. Thus, an offer n accepted in percentage An is connected with an injection/withdrawal of power in the amount of Anqn, with the highest price in the case of a “triangle” sell bid (lowest in the case of a buy bid) denoted by P̃n = Pn0 + (Pn1 − Pn0)An.
B. Flow Calculation
To calculate how the realization of offers affects inter-zonal flows, for a given vector of zonal injections Q = (Q1,…,QJ)T (the aggregated injections/withdrawals representing accepted offers in all nodes in a given zone), whose coordinates are given by

    Qj = Σ_{n∈Zj} An qn ,        (1)

we use the nodal PTDF matrix (nPTDF), derived from the model of the grid, and the Generation Shift Key (GSK) matrix [6] to obtain the flows along the K inter-zonal lines (vector Q̃ = (Q̃1,…,Q̃K)T) as Q̃ = nPTDF · GSK · Q.

To construct the GSK matrix we first run the MC algorithm with the flows calculated without the use of GSK as Q̃ = R · nPTDF · Q, where the matrix R selects the flows along the K inter-zonal lines. The load/generation equilibrium found for such constraints is used to calculate the GSK matrix, which is then treated as input to the “proper” FB MC algorithm’s run.
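To make the flow computation concrete, the following is a minimal sketch in Python/NumPy; the matrix shapes and values are illustrative assumptions, not data from the paper.

```python
# Sketch of the flow calculation Q~ = nPTDF . GSK . Q.
# Shapes (assumed for illustration): 1 inter-zonal line, 3 nodes, 2 zones.
import numpy as np

nPTDF = np.array([[0.5, -0.3, 0.1]])   # K x nodes: nodal sensitivities of line flows
GSK   = np.array([[1.0, 0.0],          # nodes x zones: how a zonal net position
                  [0.0, 0.7],          # is distributed over the zone's nodes
                  [0.0, 0.3]])
Q     = np.array([120.0, -120.0])      # zonal net injections [MW]

Q_tilde = nPTDF @ GSK @ Q              # flows on the K inter-zonal lines [MW]
print(Q_tilde)                         # -> [81.6]
```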
C. Objective Function and Constraints
Maximization of the social welfare is equivalent to the optimization problem

    max_{An ∈ [0,1]}  − Σ_{n=1}^{N} An qn (Pn0 + P̃n)/2 .        (2)

The safety limits on the K inter-zonal lines are described by their capacities Ck, k = 1,…, K, with vector C defined as C = (C1,…,CK)T. For the vector of zonal injections Q given by (1), the safety constraints on flows in inter-zonal lines are characterized by the condition |Q̃| ≤ C, which translates to

    |nPTDF · GSK · Q| ≤ C .        (3)

Lastly, we add the balance condition, which states that the sum of energy bought on the market of zone Zj and imported to the zone Zj must be equal to the energy sold on this market and exported from zone Zj, that is,

    Σ_{n∈Zj} An qn = Σ_{l∈from(Zj)} Q̃l − Σ_{l∈to(Zj)} Q̃l ,        (4)

where by from(Zj) and to(Zj) we denote the sets of the inter-zonal lines along which the energy flows are, respectively, withdrawing power from and injecting power into zone Zj. The optimization problem described by objective function (2) and constraints (3) and (4) is then input to the IBM CPLEX mixed integer quadratic solver, and the vector of offer acceptance levels A = (A1,…,AN) is derived.
D. Market Clearing Prices
When the acceptance levels A are found, we can identify the Market Clearing Prices (MCPs) for each of the zones Zj, namely MCPj, j = 1,…, J. In general, a clearing price on a market is defined as any price for which the aggregated supply and demand (taking into account the import/export flows) is in equilibrium. Since such a price might not be determined exactly (there might exist a range of prices satisfying the equilibrium condition), we define MCPj in the following way, chosen to accommodate the non-elastic demand assumption (cf. Section III): (i) if demand in zone Zj was satisfied (for all buy offers in zone Zj we have An = 1), then we take as MCPj the highest price of a sell offer accepted at or imported to Zj; (ii) if demand in zone Zj was not satisfied completely (there exists a buy offer in zone Zj such that An < 1), then we take as MCPj the common price of buy offers, P̄, as defined in Section III.
E. Social Welfare
The social welfare in the market equilibrium found by the above procedure is equal to

    SW = Σ_{j=1}^{J} Σ_{n∈Zj} An qn (MCPj − (Pn0 + P̃n)/2) + Σ_{j=1}^{J} Σ_{k=1}^{J} (MCPj − MCPk) fjk ,        (5)

where the second double sum is the overall congestion rent of the system operator (fjk indicates the power transfer between adjacent zones j and k).
F. Redispatch Costs
Since the MC mechanism controls the congestion of only the inter-zonal lines, there is a possibility that the system operator will have to correct the generation profile set on the market in order to avoid congestion on intra-zonal lines. This process of redispatch generates additional costs, since some of the generation has to be shifted from cheaper to more expensive producers. Thus, in order to better reflect the true costs of supplying energy in our social welfare measure, we correct the amount (5) by an estimator of the redispatch costs. To this end, on the load/generation profile acquired as a solution from MC, we run a Power Flow algorithm to calculate flows on intra-zonal lines and we compare them with the lines’ capacities. If a line l in zone Zj is congested by an amount of ol MW, we add to the cost of redispatch the amount ol · Pjmax, where Pjmax is the highest cost of generation in zone Zj, to obtain an upper bound on the redispatch costs.
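A minimal sketch of the resulting optimization follows, using SciPy instead of the paper's CPLEX solver, and simplifying the "triangle" offers of Section II.A to constant-price offers so the objective becomes linear. All offer data, the zonal PTDF matrix and the capacity are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

q = np.array([100.0, 80.0, -90.0, -70.0])     # >0 sell offers, <0 buy offers [MW]
p = np.array([ 20.0, 35.0, 2000.0, 2000.0])   # offer prices [PLN/MWh]
zone = np.array([0, 1, 0, 1])                 # zone of each offer
zPTDF = np.array([[0.6, -0.6]])               # nPTDF.GSK collapsed to zones (assumed)
C = np.array([30.0])                          # inter-zonal line capacity [MW]

N = len(q)
Z = np.zeros((2, N))
Z[zone, np.arange(N)] = q                     # zonal net positions = Z @ A

# Maximizing welfare -sum_n A_n q_n p_n == minimizing sum_n A_n q_n p_n.
c = q * p
A_ub = np.vstack([zPTDF @ Z, -(zPTDF @ Z)])   # |flow| <= C, both directions
b_ub = np.concatenate([C, C])
A_eq = q.reshape(1, -1)                       # energy bought = energy sold
b_eq = np.array([0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * N)
print("acceptance levels A:", res.x.round(3))
print("inter-zonal flow   :", (zPTDF @ Z @ res.x).round(2), "MW")
```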
III.
THE TEST CASE
To test our approach on an exemplary case which can be
treated as a relatively close representation of a real energy
network, we used the data on the Polish grid based on the
case file included in MATPOWER distribution [6]. This case
represents the Polish 400, 220 and 110 kV network during
winter 1999-2000 peak conditions.
The system consists of 327 power generators, 2383 buses,
and 2896 interconnecting branches. The quality of the data is
unknown but the fact that the assumed costs of generation are
linear gives a premise that these data are of rather poor
quality. Hence, the analysis should be treated as exemplary,
with no constructive conclusions for the Polish system. We
used this specific case since this is the only available one in
which congestion exists under base case load. Additionally,
we decreased capacity limits of two specific branches in order
to obtain more pronounced influence of congestion on the
MC solutions. A more detailed description of the data can be
found in [1].
We divided the grid taking into account variable wind generation by using the method presented in [1]. We used three variations of our methodology, which reflect division for (i) no wind output, (ii) maximal wind output registered in the period between years 2007–12, and (iii) so-called consensus clustering, which reflects aggregation of 722 different divisions made from various weather conditions into one. In each of the three variants we obtained divisions into 2, 3, 4 and 5 clusters, resulting in twelve divisions to be evaluated by the MC algorithm. Each of these 12 divisions was then tested in conditions reflecting average wind conditions in the period between years 2007–12.

Specifically, for each division we constructed offers of wind generators taking the estimated power output for average wind levels and offering it for a price equal to zero in order to secure the “sell” of the wind energy, which has priority over the conventional generation in the system. For conventional generators the energy available for sale, qn = pmax, where pmax is the maximal output of the power plant, is offered at a constant price (Pn0 = Pn1), since the costs of generation in the used case file are linear. That is, we assume that the generators bid the available amount of energy at the marginal cost of production.

From the consumers’ side, since only constant loads at each bus are available in the data, we assume that demand is perfectly inelastic, namely, that the loads at each bus are expected to be covered at any cost. To input such buy offers to the algorithm, for each nodal load we use an offer with qn equal to the negative of the load, and with the price set at a level Pn0 = Pn1 = P̄ common for all demand offers. In the calculations presented below we have arbitrarily chosen P̄ = 2000 PLN, which is greater than any production cost in the data. Since we are interested not in the absolute level of social welfare for each proposed zonal division, but in the comparison of the social welfares obtained across the 12 different divisions, the choice of the demand bid price has no influence on the results, as long as this price is equal in every division case and is higher than all the sell offers’ prices.

IV.
RESULTS & DISCUSSION

Since, as yet, there is no widely accepted methodology for choosing the exact number of zones into which a market should be divided (although there have been some attempts, cf. [11]), in our study we applied each of the three aforementioned variants of divisions (no wind, maximal wind, consensus) to, respectively, 2, 3, 4 and 5 clusters, which yielded 12 division cases. Then we analyzed each of them separately and compared the results of social welfare in those 12 cases. In Tables 1 & 2 we characterize the results quantitatively, while Figure 1 delineates the geographical placement for divisions into 3 and 4 zones. Tab. 1 shows differences between SW, redispatch costs and SW corrected by redispatch levels calculated for zonal divisions with respect to the single-zone market as a reference point.3 In the case of the single-zone market, SW is calculated from the MC solution in which there is only one zone, thus no congestion management is embedded in the market mechanism and all of it is done by redispatch. SW in such a case amounts to 47 470 970 PLN, redispatch costs: 116 589 PLN, SW corrected by redispatch: 47 354 381 PLN, marginal clearing price: 147 PLN.

3 Namely, the values in Table 1 show the differences between the appropriate levels for the solutions for the market divided into 2, 3, 4, 5 zones and the levels for the single-zone solution.

Table 1. Social welfare, redispatch costs and corrected SW in relation to the single-zone market for grid divisions into two, three, four and five zones done in 3 variants – no wind / consensus / max wind. [The numerical columns of Table 1 were not recoverable from the source layout.]

The results summarized in Tab. 1 show that, as was expected, the (uncorrected) SW in the case of a single-zone market turned out to be the highest, since no congestion constraints are then put on the market solution. However, the high redispatch costs associated with correction of this solution lead to the lowest corrected SW for the single-zone market. The best divisions (with the highest SW corrected by redispatch) are related to the ‘max wind’ variant for 3 clusters. In turn, the worst results are obtained for the 2-cluster divisions in the ‘consensus’ and ‘max wind’ variants. One can notice that the corrected SW rises while redispatch costs drop with an increasing number of clusters up to 4 (for the ‘no wind’ and ‘consensus’ variants) or 3 (for ‘max wind’), and then both become relatively stable. This can be interpreted as a result of the most congested lines being taken into account as inter-zonal constraints in the MC mechanism (instead of the costly redispatch) when the market is divided into 4 (or 3 in the case of ‘max wind’) zones. Further increasing the number of zones does not lead to significant improvement.

Table 2. Qualitative data for division of the Polish power grid into 2, 3, 4 zones: per zone, the number of nodes, total output [GW], number of generators, total power demand [GW], and Market Clearing Price [PLN/MWh]. Each cell includes values for the no-wind / consensus / max-wind variant in that vertical order. The colors (grey, green, yellow, purple) reflect the areas depicted on Fig. 1. [The numerical cells of Table 2 were not recoverable from the source layout.]

Looking at Tab. 2, one can notice that usually in a bigger zone the price is lower, contrary to a smaller zone, where the final price is higher than the one for the single-zone market (147 PLN). Hence, when the bigger zone gets a lower price, the total SW increases. (However, one must keep in mind that we do not include the redispatch costs in the prices calculated as in Section II.D, thus the link between the MCPs and the SW corrected by the redispatch costs, which can be treated as the best estimator of market efficiency, is not straightforward.) Another reason for the SW divergence between the divisions is that demand in tiny zones, especially those not having their own generation, is sometimes not fully satisfied (thus MCPs there are often equal to P̄ = 2000 PLN). In sum, and in relation to the SW levels in Tab. 1, one can see that the division with the highest corrected SW (‘max wind’, 3 zones) is the one which has the smallest MCP in the biggest zone, while the demand is fully satisfied in all of the zones.
V.
FUTURE WORK
Among the issues that ought to be tackled in the future
research we can distinguish two main subjects.
First, we acknowledge the need for improvement of the
LMP-based clustering algorithm and development of other
partitioning techniques that result in all of the zones being
equipped with their own generation. The partitioning methods
which produce bigger bidding areas are expected to eliminate
the problem of exclusively external supply of energy. Also,
the case of zones which overlap due to interfusion of the
corridors formed along different types of transmission lines
(e.g. 220 kV and 400 kV, cf. top right configuration on
Fig. 1) remains unsolved.
Second, in the zonal approach all nodes in a specific zone
are aggregated into one node. The influence of the power
injected into this zone via transfer through the branches is
estimated by zonal PTDF (zlPTDF) matrices [6]. The calculation of zlPTDFs requires some assumptions about the ratios between generations/loads and net export, which are expressed by the GSK matrix. Thus, MC has to work with inflexible constraints given by the GSK, in which certain proportions between loads and generation are held constant.
In other words, the GSK has to be given as an input to MC. Thus, its value has to be guessed a priori, before the MC optimization starts. Then, in the optimization process, MC can select a different combination of generations/loads than the one assumed in the GSK, which subsequently forces an incorrect evaluation of constraints.
As a consequence of this rough estimation of the GSK matrices, the MC algorithm operates on unsound premises. Both under- and overestimation of power flows along the transmission lines lead to suboptimal use of the infrastructure [6]. Thus, the process of deriving reliable GSK/zonal PTDFs is the central task for enhancement of zonal market stability and efficiency.
Figure 1. Polish grid division for three (left) and four (right) zones. Results
for no-wind, consensus and maximal wind division variants are shown in the
top, middle and bottom row, respectively. Arrows indicate direction and
magnitude of energy transfers between zones in gigawatts.
REFERENCES
[1]
K. Wawrzyniak et al., “Division of the Energy Market into Zones in
Variable Weather Conditions using Locational Marginal Prices,”
Proceedings of the 39th IECON 2013, pp. 2027–2032, ISSN: 1553-572X
[http://arxiv.org/abs/1310.5022].
[2]
B. Burstedde, “From Nodal to Zonal Pricing – A Bottom-Up Approach to the Second-Best,” Proceedings of the 9th EEM, May 2012, pp. 885–892.
[3]
J. Bialek and M. Imran, “Effectiveness of Zonal Congestion
Management in the European Electricity Market,” IEEE 2nd
International Power and Energy Conference, PECon, 2008.
[4]
C. Q. Kang et al., “Zonal marginal pricing approach based on sequential network partition and congestion contribution identification,” Electrical Power and Energy Systems, vol. 51, pp. 321–328, 2013.
[5]
R. Djabali, J. Hoeksema, Y. Langer, “COSMOS description: CWE
Market Coupling algorithm,” Belgian Power Exchange (BELPEX)
documentation, 2011.
[6]
M. Kłos et al., “The Scheme of Novel Methodology Based on Power
Transfer Distribution Factors and used to Zonal Division of Electrical
Grid,” submitted to PSCC14, [http://arxiv.org/abs/1401.8192].
[7]
ENTSO-E, “Network Code on Capacity Allocation and Congestion
Management – Purpose and Objectives”, 2012.
[8]
“EUPHEMIA Public Description,” EPEX Spot, APX, Belpex, Nord
Pool Spot, OMIE, Mercatoelettrico, OTE, 2013.
| 5 |
An Incremental Slicing Method for Functional Programs
PRASANNA KUMAR K., IIT Bombay
AMITABHA SANYAL, IIT Bombay
AMEY KARKARE, IIT Kanpur
arXiv:1709.08016v1 [] 23 Sep 2017
Several applications of slicing require a program to be sliced with respect to more than one slicing criterion. Program specialization,
parallelization and cohesion measurement are examples of such applications. These applications can benefit from an incremental static
slicing method in which a significant extent of the computations for slicing with respect to one criterion could be reused for another.
In this paper, we consider the problem of incremental slicing of functional programs.
We first present a non-incremental version of the slicing algorithm which does a polyvariant analysis1 of functions. Since polyvariant
analyses tend to be costly, we compute a compact context-independent summary of each function and then use this summary at the
call sites of the function. The construction of the function summary is non-trivial and helps in the development of the incremental
version. The incremental method on the other hand consists of a one-time pre-computation step that uses the non-incremental version
to slice the program with respect to a fixed default slicing criterion and processes the results further to a canonical form. Presented
with an actual slicing criterion, the incremental step involves a low-cost computation that uses the results of the pre-computation to
obtain the slice.
We have implemented a prototype of the slicer for a pure subset of Scheme, with pairs and lists as the only algebraic data types.
Our experiments show that the incremental step of the slicer runs orders of magnitude faster than the non-incremental version. We
have also proved the correctness of our incremental algorithm with respect to the non-incremental version.
ACM Reference format:
Prasanna Kumar K., Amitabha Sanyal, and Amey Karkare. 2017. An Incremental Slicing Method for Functional Programs. 1, 1,
Article 1 (September 2017), 18 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
1
INTRODUCTION
Program slicing refers to the class of techniques that delete parts of a given program while preserving certain desired
behaviors, for example, memory, state or parts of output. These behaviors are called slicing criteria. Applications of
slicing include debugging (root-cause analysis), program specialization, parallelization and cohesion measurement.
However, in some of the above applications, a program has to be sliced more than once, each time with a different
slicing criterion. In such situations, the existing techniques [5, 8, 15, 18, 20, 23] are inefficient as they typically analyze
the program multiple times. Each round of analysis involves a fixed point computation on the program text or some
intermediate form of the program, typically SDG in the case of imperative languages. We thus require an incremental
1 In a polyvariant analysis [21], the definition of a function in some form is re-analyzed multiple times with respect to different application contexts.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not
made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components
of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to
redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2017 ACM. Manuscript submitted to ACM
(a) Program to compute the number of lines and characters in a string:

(define (lcc str lc cc)
  (if (null? str)
      (return (cons lc cc))
      (if (eq? (car str) nl)
          (return (lcc (cdr str) (+ lc 1) (+ cc 1)))
          (return (lcc (cdr str) π1:lc π2:(+ cc 1))))))
(define (main)
  (return (lcc . . . 0 0)))

(b) Slice of program in (a) to compute the number of lines only:

(define (lcc str lc □)
  (if (null? str)
      (return (cons lc □))
      (if (eq? (car str) nl)
          (return (lcc (cdr str) (+ lc 1) □))
          (return (lcc (cdr str) π1:lc π2:□)))))
(define (main)
  (return (lcc . . . 0 □)))

(c) Slice of program in (a) to compute the number of characters only:

(define (lcc str □ cc)
  (if (null? str)
      (return (cons □ cc))
      (if (eq? (car str) nl)
          (return (lcc (cdr str) □ (+ cc 1)))
          (return (lcc (cdr str) π1:□ π2:(+ cc 1))))))
(define (main)
  (return (lcc . . . □ 0)))

Fig. 1. A program in Scheme-like language and its slices. The parts that are sliced away are denoted by □.
approach to slicing which can avoid repeated fixpoint computation by reusing some of the information obtained while
slicing the same program earlier with a different criterion.
The example from [15] shown in Figure 1 motivates the need for incremental slicing. It shows a simple program in a Scheme-like language. It takes a string as input and returns a pair consisting of the number of lines and characters in the string. Figure 1b shows the program when it is sliced with respect to the first component of the output pair, namely the number of lines in the string (lc). All references to the count of characters (cc) and the expressions responsible for computing cc alone have been sliced away (denoted □). The same program can also be sliced to produce only the character count, and the resulting program is shown in Figure 1c.
The example illustrates several important aspects for an effective slicing procedure. We need the ability to specify a
rich set of slicing criteria to select different parts of a possibly complex output structure (first and second component of
the output pair in the example, or say, every even element in an output list). Also notice that to compute some part of
an output structure, all prefixes of the structure have to be computed. Thus, slicing criteria have to be prefix-closed.
Finally, it seems likely from the example that certain parts of the program will be present in any slice, irrespective of the specific slicing criterion2. Thus, when multiple slices of the same program are required, a slicing procedure should
strive for efficiency by minimizing re-computations related to the common parts.
In this paper, we consider the problem of incremental slicing for functional programs. We restrict ourselves to tuples
and lists as the only algebraic data types. We represent our slicing criteria as regular grammars that represent sets of
prefix-closed strings of the selectors car and cdr. The slicing criterion represents the part of the output of the program
in which we are interested, and we view it as being a demand on the program. We first present a non-incremental
slicing method, which propagates the demand represented by the slicing criterion into the program. In this our method
resembles the projection function based methods of [8, 15]. However, unlike these methods, we do a context-sensitive
analysis of functions calls. This makes our method precise by avoiding analysis over infeasible interprocedural paths. To
avoid the inefficiency of analyzing a function once for each calling context, we create a compact context-independent
2 The trivial null slicing criterion, where the whole program is sliced away, is an exception, but it can be treated separately.
summary for each function. This summary is then used to step over function calls. As we shall see, it is this context-independent summary that also makes the incremental version possible in our approach.

p ∈ Prog ::= d1 . . . dn emain — program
d ∈ Fdef ::= (define (f x1 . . . xn) e) — function definition
e ∈ Expr ::= (if x e1 e2) — conditional
          | (let x ← s in e) — let binding
          | (return x) — return from function
s ∈ App ::= k | nil — constants
          | (cons x1 x2) — constructor
          | (car x) | (cdr x) — selectors
          | (null? x) | (+ x1 x2) — tester / generic arithmetic
          | (f x1 . . . xn) — function application

Fig. 2. The syntax of our language
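The following is a sketch (our own encoding, not the authors' implementation) of how the abstract syntax of Fig. 2 could be represented; all class and field names are ours.

```python
# ANF syntax of Fig. 2 as Python dataclasses; a label pi can be stored alongside.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class If:
    cond: str              # variable x
    then_branch: 'Expr'
    else_branch: 'Expr'

@dataclass
class Let:
    var: str               # x in (let x <- s in e)
    app: 'App'
    body: 'Expr'

@dataclass
class Return:
    var: str

@dataclass
class Cons:
    x: str
    y: str

@dataclass
class Call:
    fname: str
    args: List[str]

# car/cdr/null?/+/constants would be encoded analogously
Expr = Union[If, Let, Return]
App = Union[Cons, Call]
```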
The incremental version has a one-time pre-computation step in which the program is sliced with respect to a default criterion that is the same for all programs. The result of this step is converted to a set of automata, one for each expression in the program. This completes the pre-computation step. To decide whether an expression is in the slice for a given slicing criterion, we simply intersect the slicing criterion with the automaton corresponding to the expression. If the result is the empty set, the expression can be removed from the slice.
The main contributions of this paper are as follows:
(1) We propose a view of the slicing criterion in terms of a notion called demand (Section 3) and formulate the problem
of slicing as one of propagating the demand on the main expression to all the sub-expressions of the program. The
analysis for this is precise because it keeps the information at the calling context separate. However it attempts
to reduce the attendant inefficiency through the use of function summaries. The difficulty of creating function
summaries in a polyvariant analysis, especially when the domain of analysis is unbounded, has been pointed out in
[15].
(2) Our formulation (Section 4) allows us to derive an incremental version of the slicing algorithm that factors out
computations common to all slicing criteria (Section 5) and re-uses these computations. To the best of our
knowledge, the incremental version of slicing in this form has not been attempted before.
(3) We have proven the correctness of the incremental slicing algorithm with respect to the non-incremental version
(Section 5.2).
(4) We have implemented a prototype slicer for a first-order version of Scheme (Section 7). We have also extended the
implementation to higher-order programs (Section 6) by converting such programs to first-order using firstification
techniques [9], slicing the firstified programs using our slicer, and then mapping the sliced program back to
the higher-order version. The implementation demonstrates the expected benefits of incremental slicing: the
incremental step is one to four orders of magnitude faster than the non-incremental version.
2
THE TARGET LANGUAGE—SYNTAX AND SEMANTICS
Figure 2 shows the syntax of our language. For ease of presentation, we restrict the language to Administrative Normal
Form (ANF) [4]. In this form, the arguments to functions can only be variables. To avoid dealing with scope-shadowing,
we assume that all variables in a program are distinct. Neither of these two restrictions affects the expressibility of our
language. In fact, it is a simple matter to transform the pure subset of first order Scheme to our language, and map the
sliced program back to Scheme. To refer to an expression e, we may annotate it with a label π as π :e; however the label
is not part of the language. To keep the description simple, we shall assume that each program has its own unique set
of labels. In other words, a label identifies both the program point and the program that contains it.
A program in our language is a collection of function definitions followed by a main expression denoted as e main .
Applications (denoted by the syntactic category App) consist of functions or operators applied to variables. Expressions
(Expr) are either an if expression, a let expression that evaluates an application and binds the result to a variable, or a
return expression. The return keyword is used to mark the end of a function so as to initiate appropriate semantic
actions during execution. The distinction between expressions and applications will become important while specifying
the semantics of programs.
2.1
Semantics
We now present the operational semantics for our language. This is largely borrowed from [1, 7] and we include it here
for completeness. We start with the domains used by the semantics:
v ∈ Val  = N + {nil} + Loc — values
ρ ∈ Env  = Var → Val — environments
H ∈ Heap = Loc → (Val × Val + {empty}) — heaps
A value in our language is either a number, or the empty list denoted by nil, or a location in the heap. The heap
maps each location to a pair of values denoting a cons cell. Heap locations can also be empty. Finally, an environment
is a mapping from variables to values.
The dynamic aspects of the semantics, shown in Figure 3, are specified as a state transition system. The semantics of applications s are given by the judgement form ρ, H, s ⇓ H′, v, and those for expressions e by the form ρ, S, H, e → ρ′, S′, H′, e′. Here S is a stack consisting of continuation frames of the form (ρ, x, e). The frame (ρ, x, e) signifies that if the current function returns a value v, the next expression to be evaluated is e, and the environment for this evaluation is ρ updated with the variable x bound to v. The start state is ({}ρ, []S, {}H, emain), where {}ρ is the empty environment, []S is the empty stack, and {}H is the empty heap. The program terminates successfully with result value ρ(x) on reaching the halt state (ρ, []S, H, (return x)). We use the notation ρ[x ↦ v] to denote the environment obtained by updating ρ with the value for x as v. We also use [x⃗ ↦ v⃗] to denote an environment in which each xi has the value vi.

Premise ⟹ Transition (rule name):
— ⟹ ρ, H, k ⇓ H, k (const)
ρ(x) ∈ N, ρ(y) ∈ N ⟹ ρ, H, (+ x y) ⇓ H, ρ(x) + ρ(y) (prim)
H(ρ(x)) = (v1, v2) ⟹ ρ, H, (car x) ⇓ H, v1 (car)
H(ρ(x)) = (v1, v2) ⟹ ρ, H, (cdr x) ⇓ H, v2 (cdr)
ℓ ∉ dom(H) is a fresh location ⟹ ρ, H, (cons x y) ⇓ H[ℓ ↦ (ρ(x), ρ(y))], ℓ (cons)
ρ(x) ∈ N \ {0} ⟹ ρ, S, H, (if x e1 e2) → ρ, S, H, e1 (if-true)
ρ(x) = 0 ⟹ ρ, S, H, (if x e1 e2) → ρ, S, H, e2 (if-false)
ρ(x) = nil ⟹ ρ, H, (null? x) ⇓ H, 1 (null-true)
ρ(x) ≠ nil ⟹ ρ, H, (null? x) ⇓ H, 0 (null-false)
s is (f y1 . . . yn), f is (define (f z1 . . . zn) ef) ⟹ ρ, S, H, (let x ← s in e) → [z⃗ ↦ ρ(y⃗)], (ρ, x, e) • S, H, ef (let-fncall)
s is not (f y1 . . . yn), ρ, H, s ⇓ H′, v ⟹ ρ, S, H, (let x ← s in e) → ρ[x ↦ v], S, H′, e (let-nonfn)
— ⟹ ρ, (ρ′, x′, e′) • S, H, (return x) → ρ′[x′ ↦ ρ(x)], S, H, e′ (return)

Fig. 3. The semantics of our language
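As an illustration of the two stack-manipulating rules (let-fncall and return), here is a small sketch of ours, assuming a tuple encoding of frames and expressions; it is not the authors' code.

```python
# Frames on the continuation stack S are triples (env, var, expr): after the
# callee returns, evaluation resumes at expr with var bound to the result.
def step_let_fncall(env, stack, heap, let_expr, fundefs):
    # let_expr encodes (let var <- (f a1 ... an) in body)
    var, (fname, args), body = let_expr
    params, fbody = fundefs[fname]
    callee_env = dict(zip(params, (env[a] for a in args)))  # [z -> rho(y)]
    return callee_env, [(env, var, body)] + stack, heap, fbody

def step_return(env, stack, heap, ret_var):
    # pop the top frame and bind the returned value in the caller's environment
    (caller_env, var, expr), rest = stack[0], stack[1:]
    return {**caller_env, var: env[ret_var]}, rest, heap, expr
```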
3
DEMAND
We now connect slicing with a notion called demand. A demand on an expression represents the set of paths that the
context of the expression may explore of the value of the expression. A demand is represented by a prefix-closed set of
strings over (0 + 1)∗ . Each string in the demand, called an access path, represents a traversal over the heap. 0 stands for
a single-step traversal over the heap by dereferencing the car field of a cons cell. Similarly, 1 denotes the dereferencing
of the cdr field of a cons cell.
As an example, a demand of {ϵ, 1, 10} on the expression (cons x y) means its context may need to visit the car
field of y in the heap (corresponding to the string 10 in the demand). The example also illustrates why demands are
prefix-closed—the car field of y cannot be visited without visiting first the cons cell resulting from the evaluation of
(cons x y) (represented by ϵ) and then the cell corresponding to y (represented by 1). The absence of 0 in the demand
also indicates that x is definitely not visited. Notice that to meet the demand {ϵ, 1, 10} on (cons x y), the access paths
{ϵ, 0} has to be visited starting from y. Thus we can think of (cons x y) as a demand transformer transforming the
demand {ϵ, 1, 10} to the demand {ϵ, 0} on y and the empty demand (represented by ∅) on x.
The slicing problem is now modeled as follows. Viewing the slicing criterion (also a set of strings over (0 + 1)∗ ) as a
demand3 on the main expression e main , we compute the demand on each expression in the program. If the demand on
a expression turns out to be ∅, the expression does not contribute to the demand on e main and can be removed from the
slice. Thus the solution of the slicing problem lies in computing a demand transformer that, given a demand on e main ,
computes a demand environment—a mapping of each expression (represented by its program point π ) to its demand.
We formulate this computation as an analysis called demand analysis.
We use σ to represent demands and α to represent access paths. Given two access paths α1 and α2, we use the juxtaposition α1α2 to denote their concatenation. We extend this notation to the concatenation of a pair of demands and even to the concatenation of a symbol with a demand: σ1σ2 denotes the demand {α1α2 | α1 ∈ σ1 and α2 ∈ σ2}, and 0σ is a shorthand for {0α | α ∈ σ}.
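To make the transformer view concrete, here is a small sketch of ours (not from the paper) on explicit, finite demands, with access paths encoded as Python strings over '0'/'1'.

```python
# Demands are prefix-closed sets of strings over {'0','1'}; '' plays eps.
def demand_car(sigma):
    # the context dereferences the car field first: eps plus 0-prefixed paths
    return {''} | {'0' + a for a in sigma} if sigma else set()

def demand_cdr(sigma):
    return {''} | {'1' + a for a in sigma} if sigma else set()

def demand_cons(sigma):
    # paths into the car (cdr) argument are the 0- (1-) suffixes of sigma
    on_x = {a[1:] for a in sigma if a.startswith('0')}
    on_y = {a[1:] for a in sigma if a.startswith('1')}
    return on_x, on_y

sigma = {'', '1', '10'}        # the demand used in the text
print(demand_cons(sigma))      # (set(), {'', '0'}): nothing from x,
                               # {eps, 0} from y, as explained above
```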
3.1
Demand Analysis
Figure 4 shows the analysis. Given an application s and a demand σ , A returns a demand environment that maps
expressions of s to their demands. The third parameter to A, denoted DS, represents context-independent summaries
of the functions in the program, and will be explained shortly.
Consider the rule for the selector car. If the demand σ on (car x) is ∅, then no part of the value of (car x) is visited
and the demand on x is also ∅. However, if σ is non-empty, the context of (car x) has to first dereference the value of x
using the car field and then traverse the paths represented by σ . In this case, the demand on x is the set consisting of ϵ
(start at the root of x) and 0σ (dereference using car and then visit the paths in σ ). On the other hand, the rule for the
3 supplied by a context that is external to the program
A(π:κ, σ, DS) = {π ↦ σ}, for constants κ including nil
A(π:(null? π1:x), σ, DS) = {π1 ↦ if σ ≠ ∅ then {ϵ} else ∅, π ↦ σ}
A(π:(+ π1:x π2:y), σ, DS) = {π1 ↦ if σ ≠ ∅ then {ϵ} else ∅, π2 ↦ if σ ≠ ∅ then {ϵ} else ∅, π ↦ σ}
A(π:(car π1:x), σ, DS) = {π1 ↦ if σ ≠ ∅ then {ϵ} ∪ 0σ else ∅, π ↦ σ}
A(π:(cdr π1:x), σ, DS) = {π1 ↦ if σ ≠ ∅ then {ϵ} ∪ 1σ else ∅, π ↦ σ}
A(π:(cons π1:x π2:y), σ, DS) = {π1 ↦ {α | 0α ∈ σ}, π2 ↦ {α | 1α ∈ σ}, π ↦ σ}
A(π:(f π1:y1 · · · πn:yn), σ, DS) = ⋃_{i=1..n} {πi ↦ DSif(σ)} ∪ {π ↦ σ}

D(π:(return π1:x), σ, DS) = {π1 ↦ σ, π ↦ σ}
D(π:(if π1:x e1 e2), σ, DS) = D(e1, σ, DS) ∪ D(e2, σ, DS) ∪ {π1 ↦ if σ ≠ ∅ then {ϵ} else ∅, π ↦ σ}
D(π:(let π1:x ← s in e), σ, DS) = A(s, ⋃_{π′∈Π} DE(π′), DS) ∪ {π ↦ σ},
    where DE = D(e, σ, DS), and Π represents all occurrences of x in e

∀f, ∀i, ∀σ: D(ef, σ, DS) = DE, DSif(σ) = ⋃_{π′∈Π} DE(π′)
———————————————————————————————————————— (demand-summary)
df1 . . . dfk ⊢ DS

where (define (f z1 . . . zn) ef) is one of df1 . . . dfk, 1 ≤ i ≤ n, and Π represents all occurrences of zi in ef

Fig. 4. Demand Analysis
constructor cons works as follows: To traverse the path 0α (alternately 1α) starting from the root of (cons x y), one has
to traverse the path α starting from x (or y).
Since (null? x) only visits the root of x to examine the constructor, a non-null demand on (null? x) translates to the
demand ϵ on x. A similar reasoning also explains the rule for (+ x y). Since, both x and y evaluate to integers in a well
typed program, a non-null demand on (+ x y) translates to the demand ϵ on both x and y.
The rule for a function call uses a third parameter DS that represents the summaries of all functions in the program.
DS is a set of context-independent summaries, one for each (function, parameter) pair in the program. DSif represents
a transformation that describes how any demand σ on a call to f is transformed into the demand on its ith parameter.
DS is specified by the inference rule demand-summary. This rule gives a fixed-point property to be satisfied by DS,
namely, the demand transformation assumed for each function in the program should be the same as the demand
transformation calculated from the body of the function. Given DS, the rule for the function call is obvious. Notice that
the demand environment for each application s also includes the demand on s itself apart from its sub-expressions.
Operationally, the rule demand-summary is converted into a grammar (Section 4) that is parameterized with respect to
a placeholder terminal representing a symbolic demand. The language generated by this grammar is the least solution
satisfying the rule. The least solution corresponds to the most precise slice.
We finally discuss the rules for expressions given by D. The rules for return and if are obvious. The rule for
(let x ← s in e) first uses σ to calculate the demand environment DE of the let-body e. The demand on s is the union of
the demands on all occurrences of x in e. It is easy to see by examining the rules that the analysis results in demands
that are prefix-closed. More formally, let DEσ be the demand environment resulting from the analysis of a program for
a demand σ . Then, for an expression π:e in the program, DEσ (π ) is prefix closed.
4
COMPUTING CONTEXT-INDEPENDENT FUNCTION SUMMARIES
A slicing method used for, say, debugging needs to be as precise as possible to avoid false errors. We therefore choose
to analyze each function call separately with respect to its calling context. We now show how to obtain a context-independent summary for each function definition from the rule demand-summary. Recall that this summary is a
function that transforms any demand on the result of a call to demands on the arguments. A convenient way of doing
this is to express how a symbolic demand is transformed by the body of a function. Summarizing the function in this
way has two benefits. It helps us to propagate a demand across several calls to a function without analyzing its body
each time. Even more importantly, it is the key to our incremental slicing method.
However, notice that the rules of demand analysis require us to do operations that cannot be done on a symbolic demand. The cons rule, for example, is defined in terms of the set {α | 0α ∈ σ}. Clearly this requires us to know the strings in σ. Similarly, the if rule requires us to know whether σ is ∅. The way out is to treat these operations also symbolically. For this we introduce three new symbols 0̄, 1̄ and 2 to capture the intended operations. If 0 represents selection using car, 0̄ is intended to represent a use as the left argument of cons. Thus 0̄0 should reduce to the empty string ϵ. Similarly, 2 represents the symbolic transformation of any non-null demand to ϵ and of the null demand to itself. These transformations are defined, and also made deterministic, through the simplification function S.
S({ϵ}) = {ϵ}
S(0σ) = 0S(σ)
S(1σ) = 1S(σ)
S(0̄σ) = {α | 0α ∈ S(σ)}
S(1̄σ) = {α | 1α ∈ S(σ)}
S(2σ) = ∅ if S(σ) = ∅, and {ϵ} otherwise
S(σ1 ∪ σ2) = S(σ1) ∪ S(σ2)
Notice that 0̄ strips the leading 0 from the string following it, as required by the rule for cons. Similarly, 2 examines the
string following it and replaces it by ∅ or {ϵ }; this is required by several rules. The A rules for cons and car in terms of
the new symbols are:
A(π:(cons π1:x π2:y), σ, DS) = {π1 ↦ 0̄σ, π2 ↦ 1̄σ, π ↦ σ}
A(π:(car π1:x), σ, DS) = {π1 ↦ 2σ ∪ 0σ, π ↦ σ}
and the D rule for if is:
D(π:(if π1:x e1 e2), σ, DS) = D(e1, σ, DS) ∪ D(e2, σ, DS) ∪ {π1 ↦ 2σ, π ↦ σ}
The rules for cdr, + and null? are also modified similarly. Now the demand summaries can be obtained symbolically
with the new symbols as markers indicating the operations that should be performed on the string following them. When the final demand environments are obtained, with the given slicing criterion acting as a concrete demand for the main expression emain, the symbols 0̄, 1̄ and 2 are eliminated using the simplification function S.
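Here is a sketch of ours of the simplification function S on finite demands, encoding strings over {0, 1, 0̄, 1̄, 2} with the ASCII letters 'a' for 0̄, 'b' for 1̄ and 't' for 2.

```python
def simp(s):
    # returns S({s}) as a set of strings over {'0','1'}
    if s == '':
        return {''}
    head, sub = s[0], simp(s[1:])
    if head in '01':
        return {head + a for a in sub}
    if head == 'a':                       # S(0-bar s) = {a | 0a in S(s)}
        return {a[1:] for a in sub if a.startswith('0')}
    if head == 'b':                       # S(1-bar s) = {a | 1a in S(s)}
        return {a[1:] for a in sub if a.startswith('1')}
    return set() if not sub else {''}     # 't': null stays null, else eps

def simplify(sigma):
    out = set()
    for s in sigma:
        out |= simp(s)
    return out

print(simplify({'a0'}))   # {''}  : 0-bar cancels the following 0
print(simplify({'a1'}))   # set(): no string of the form 0...
print(simplify({'t1'}))   # {''}  : a non-null demand collapses to eps
```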
4.1
Finding closed-forms for the summaries DS
Recall that DSif is a function that describes how the demand on a call to f translates to its ith argument. A straightforward translation of the demand-summary rule to obtain DSif is as follows: for a symbolic demand σ, compute the demand environment in ef, the body of f. From this calculate the demand on the ith argument of f, say x. This is the union of the demands of all occurrences of x in the body of f. The demand on the ith argument is equated to DSif(σ). Since the body may contain other calls, the demand analysis within ef makes use of DS in turn. Thus our equations may be recursive. On the whole, DS corresponds to a set of equations, one for each argument of each function. The reader can verify that DS2lcc(σ) in our running example is:

DS2lcc(σ) = 0̄σ ∪ 2 DS2lcc(σ)

As noted in [15], the main difficulty in obtaining a convenient function summary is to find a closed-form description of DS2lcc(σ) instead of the recursive specification. Our solution to the problem lies in the following observation: since we know that the demand rules always prefix symbols to the argument demand σ, we can write DSif(σ) as DSif σ, where DSif is a set of strings over the alphabet {0, 1, 0̄, 1̄, 2}. The modified equation after doing this substitution will be

DS2lcc σ = 0̄σ ∪ 2 DS2lcc σ

Thus, we have

DS2lcc(σ) = DS2lcc σ, where DS2lcc = {0̄} ∪ 2 DS2lcc

4.2
Computing the demand environment for the function bodies
The demand environment for a function body e f is calculated with respect to a concrete demand. To start with, we
consider the main expression emain as being the body of a function main. The demand on emain is the given slicing
criterion. Further, the concrete demand on a function f , denoted σf , is the union of the demands at all call-sites of
f. The demand environment of a function body ef is calculated using σf. If there is a call to g inside ef, the demand summary DSg is used to propagate the demand across the call. Continuing with our example, the union of the demands
on the three calls to lcc is the slicing criterion. Therefore the demand on the expression at program point π 1 is given by
Dπ1 = DS2lcc σlcc
DS2lcc = {0̄} ∪ 2 DS2lcc        (1)
σlcc = slicing criterion
At the end of this step, we shall have (i) A set of equations defining the demand summaries DSif for each argument of
each function, (ii) Equations specifying the demand Dπ at each program point π , and (iii) an equation for each concrete
demand σf on the body of each function f .
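For the running example, an illustrative computation of ours: the summary DS2lcc = {0̄} ∪ 2·DS2lcc denotes the regular language 2*0̄, so the demand at π1 is 2*·0̄·σlcc. Applying the simplification rules (here with 'a' for 0̄ and 't' for 2, approximating 2* by bounded powers) shows that the demand is non-empty for σ = {ϵ, 0} and empty for σ = {ϵ, 1}, matching the slices of Fig. 1.

```python
def simplify_string(s):            # the same rules as the sketch of S above
    if s == '':
        return {''}
    head, sub = s[0], simplify_string(s[1:])
    if head in '01':
        return {head + a for a in sub}
    if head == 'a':                # 0-bar
        return {a[1:] for a in sub if a.startswith('0')}
    if head == 'b':                # 1-bar
        return {a[1:] for a in sub if a.startswith('1')}
    return set() if not sub else {''}   # 't' (the symbol 2)

def demand_pi1(sigma, depth=5):
    # strings t^k . a . alpha for k <= depth approximate 2* 0-bar sigma
    out = set()
    for alpha in sigma:
        for k in range(depth):
            out |= simplify_string('t' * k + 'a' + alpha)
    return out

print(demand_pi1({'', '0'}))   # non-empty: pi1 stays in the slice
print(demand_pi1({'', '1'}))   # empty set: pi1 can be sliced away
```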
4.3
Converting analysis equations to grammars
Notice that the equations for DS2lcc are still recursive. However, Equation 1 can also be viewed as a grammar with
{0, 1, 1̄, 0̄, 2} as terminal symbols and DS2lcc , Dπ1 and σlcc as non-terminals. Thus finding the solution to the set of
equations generated by the demand analysis reduces to finding the language generated by the corresponding grammar.
The original equations can now be re-written as grammar rules as shown below:
Dπ1 → DS2lcc σlcc
DS2lcc → 0̄ | 2 DS2lcc        (2)
σlcc → slicing criterion
Thus the question whether the expression at π1 can be sliced for the slicing criterion σlcc is equivalent to asking whether the language S(L(Dπ1)) is empty. In fact, the simplification process S itself can be captured by adding the following set of five unrestricted productions, named unrestricted, and adding the production D′π1 → Dπ1 $ to the grammar generated earlier.

0̄0 → ϵ    1̄1 → ϵ    2$ → ϵ    20 → 2    21 → 2

The set of five unrestricted productions shown is independent of the program being sliced and of the slicing criterion. The symbol $ marks the end of a sentence and is required to capture the 2 rule correctly.
We now generalize: assume that π is the program point associated with an expression e. Given a slicing criterion σ, let Gπσ denote the grammar (N, T, Pπσ ∪ unrestricted ∪ {D′π → Dπ$}, D′π). Here T is the set of terminals {0, 1, 0̄, 1̄, 2, $}, Pπσ is the set of context-free productions defining Dπ, the demand on e (as illustrated by example 2). N contains the non-terminals of Pπσ and additionally includes the special non-terminal D′π. As mentioned earlier, given a slicing criterion σ, the question of whether the expression e can be sliced out of the containing program is equivalent to asking whether the language L(Gπσ) is empty. We shall now show that this problem is undecidable.
Theorem 4.1. Given a program point π and slicing criterion σ, the problem whether L(G^σ_π) is empty is undecidable.

Proof Outline. Recollect that the set of demands on an expression, as obtained by our analysis, is prefix closed. Since the grammar always includes the production {D^0_π → D_π $}, L(G^σ_π) is non-empty if and only if it contains $ (i.e. the empty string followed by the $ symbol). We therefore have to show that the equivalent problem of whether $ belongs to L(G^σ_π) is undecidable.
Given a Turing machine and a string α ∈ (0+1)∗, the proof involves construction of a grammar G = (N ∪ {S, S₀}, T, P ∪ unrestricted ∪ {S₀ → S$}, S₀) with the property that the Turing machine halts on α if and only if G accepts $. Notice that P is a set of context-free productions over the terminal set T and may not necessarily be obtainable from demand analysis of a program. However, G can be used to construct a program whose demand analysis results in a grammar G′ that can be used instead of G to replay the earlier proof. The details can be found in Lemmas B.2 and B.3 of [7].
To get around the problem of undecidability, we use the technique of Mohri-Nederhof [10] to over-approximate P^σ_π by a strongly regular grammar. The NFA corresponding to this grammar is denoted M^σ_π. The simplification rules can be applied on M^σ_π without any loss of precision. The details of the simplification process are in [6].
Fig. 5. (a) & (b) show the simplification of the automaton M^σ_π1 for the slicing criteria σ = {ϵ, 0} and σ = {ϵ, 1} respectively. (c) shows the canonical automaton A_π1 and the corresponding completing automaton Ā_π1.
For our running example, the grammar after demand analysis is already regular, and thus remains unchanged by the Mohri-Nederhof transformation. The automata in Figures 5(a) and 5(b) correspond to the two slicing criteria σ_lcc = {ϵ, 0} and σ_lcc = {ϵ, 1} and illustrate the simplification of the corresponding Mohri-Nederhof automata M^σlcc_π1. It can be seen that, when the slicing criterion is {ϵ, 1}, the language of D_π1 is empty and hence e can be sliced away. A drawback of the method outlined above is that with a change in the slicing criterion, the entire process of grammar generation, Mohri-Nederhof approximation and simplification has to be repeated. This is likely to be inefficient for large programs.
5 INCREMENTAL SLICING
We now present an incremental algorithm which avoids the repetition of computation when the same program is sliced
with different criteria. This can be done by pre-computing the part of the slice computation that is independent of the
slicing criterion. The pre-computed part can then be used efficiently to slice the program for a given slicing criterion.
In general, the pre-computation consists of three steps: (i) computing the demand at each expression π:e for the fixed slicing criterion {ϵ} and applying the Mohri-Nederhof procedure to yield the automaton M^{ϵ}_π, (ii) a step called canonicalization which applies the simplification rules on M^{ϵ}_π until the 0̄ and 1̄ symbols in the strings accepted by the resulting automaton occur only at the end, and, from this, (iii) constructing an automaton called the completing automaton.
For the running example, the canonicalized and the completing automata are shown in Figure 5(c). We explain these steps now.
As stated earlier, the automaton M^{ϵ}_π1, after some simplifications, gives the first automaton (the canonicalized automaton) shown in Figure 5(c), which we shall denote A_π1. It is clear that if A_π1 is concatenated with a slicing criterion that starts with the symbol 0, the result, after simplification, will be non-empty. We call a string that starts with 0 a completing string for A_π1. In this case, detecting a completing string was easy because all strings accepted by A_π1 end with 0̄. Now consider the second automaton in Figure 5(c), called the completing automaton, which recognizes
the language 0(0 + 1)∗ . This automaton recognizes all completing strings for Aπ1 and nothing else. Thus for an arbitrary
slicing criterion σ , it suffices to intersect σ with the completing automaton to decide whether the expression at π1 will
be in the slice. In fact, it is enough for the completing automaton to recognize just the language {0} instead of 0(0 + 1)∗ .
The reason is that any slicing criterion, say σ , is prefix closed, and therefore σ ∩ {0} is empty if and only if σ ∩ 0(0 + 1)∗
is empty. Our incremental algorithm generalizes this reasoning.
5.1 Completing Automaton and Slicing
For constructing the completing automaton for an expression e, we saw that it would be convenient to simplify the automaton M^{ϵ}_e to an extent that all accepted strings, after simplification, have 0̄ and 1̄ symbols only at the end. We now give a set of rules, denoted by C, that captures this simplification.
C({ϵ}) = {ϵ}
C(0σ) = 0C(σ)
C(1σ) = 1C(σ)
C(0̄σ) = {0̄ | {ϵ} = C(σ)} ∪ {α | 0α ∈ C(σ)} ∪ {0̄1̄α | 1̄α ∈ C(σ)} ∪ {0̄0̄α | 0̄α ∈ C(σ)}
C(1̄σ) = {1̄ | {ϵ} = C(σ)} ∪ {α | 1α ∈ C(σ)} ∪ {1̄1̄α | 1̄α ∈ C(σ)} ∪ {1̄0̄α | 0̄α ∈ C(σ)}
C(2σ) = 2C(σ)
C(σ1 ∪ σ2) = C(σ1) ∪ C(σ2)
C differs from S in that it accumulates a continuous run of 0̄ and 1̄ symbols at the end of a string. Notice that C, like S, simplifies its input string from the right. Here is an example of C simplification:
120̄00201̄1̄10̄ →C 120̄00201̄0̄ →C 120201̄0̄

In contrast, the simplification of the same string using S gives:

120̄00201̄1̄10̄ →S 120̄00201̄1̄1∅ →S 120̄00201̄0̄∅ →S ⋯ →S ∅
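The example above can be reproduced by a per-string reading of the C equations (our own sketch; the paper applies C to automata). Scanning the demand right to left, a bar either cancels against its matching selector, accumulates in the trailing run of bars, or yields the empty demand, encoded here as None; "0#"/"1#" again stand for 0̄/1̄.

BAR = {"0#": "0", "1#": "1"}

def canon(s):
    out = []                            # canonical form of the suffix seen so far
    for sym in reversed(s):
        if sym not in BAR:              # 0, 1 and 2 pass through under C
            out.insert(0, sym)
        elif not out or out[0] in BAR:  # C(suffix) is ϵ or starts with a bar:
            out.insert(0, sym)          #   the bar accumulates
        elif out[0] == BAR[sym]:        # bar meets its own selector: cancel
            out.pop(0)
        else:                           # bar meets 2 or the other selector: ∅
            return None
    return out

demo = ["1", "2", "0#", "0", "0", "2", "0", "1#", "1#", "1", "0#"]
print(canon(demo))   # ['1', '2', '0', '2', '0', '1#', '0#'], i.e. 120201̄0̄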
C satisfies two important properties:
Property 1. The result of C always has the form (0 + 1 + 2)∗ (0̄ + 1̄)∗ . Further, if σ ⊆ (0 + 1 + 2)∗ , then C(σ ) = σ .
Property 2. S subsumes C, i.e., S(C(σ1 )C(σ2 )) = S(σ1 σ2 ).
Note that while we have defined canonicalization over a language, the actual canonicalization takes place over an automaton, specifically the automaton M^{ϵ}_π obtained after the Mohri-Nederhof transformation. The function createCompletingAutomaton in Algorithm 1 takes A_π, the canonicalized Mohri-Nederhof automaton for the slicing criterion {ϵ}, as input, and constructs the completing automaton, denoted Ā_π.
Recollect that the strings recognized by A_π are of the form (0 + 1 + 2)∗(0̄ + 1̄)∗. The algorithm first computes the set of states reachable from the start state using only edges with labels in {0, 1, 2}. This set is called the frontier set. It then reverses the "bar" transitions, directions as well as labels, and drops all edges with {0, 1, 2} labels. Finally, all states in the frontier set are marked as final states.
Function createCompletingAutomaton(A)
  Data: the canonicalized automaton A = ⟨Q, {0, 1, 0̄, 1̄, 2}, δ, q0, F⟩
  Result: Ā, the completing automaton for A
  F′ ← {q_fr | q_fr ∈ Q, hasBarFreeTransition(q0, q_fr, δ)}
  /* Reverse the "bar" transitions: directions as well as labels */
  foreach transition δ(q, 0̄) → q′ do
    add transition δ′(q′, 0) → q
  foreach transition δ(q, 1̄) → q′ do
    add transition δ′(q′, 1) → q
  q′0 ← new state   /* start state of Ā */
  foreach state q ∈ F do
    add transition δ′(q′0, ϵ) → q
  return ⟨Q ∪ {q′0}, {0, 1}, δ′, q′0, F′⟩

Function inSlice(e, σ)
  Data: expression e, slicing criterion σ
  Result: decides whether e should be retained in the slice
  return (L(Āe) ∩ σ ≠ ∅)

Algorithm 1: Functions to create the completing automaton and the slicing function.
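A minimal executable sketch of Algorithm 1 follows, assuming an NFA encoded as (states, delta, q0, finals) with delta mapping (state, symbol) to a set of successor states, "" as the epsilon label, and "0#"/"1#" encoding the barred symbols; all function and variable names are ours, not the paper's.

def create_completing_automaton(states, delta, q0, finals):
    # frontier set: states reachable from q0 via bar-free edges only
    frontier, stack = {q0}, [q0]
    while stack:
        q = stack.pop()
        for a in ("0", "1", "2"):
            for r in delta.get((q, a), set()):
                if r not in frontier:
                    frontier.add(r)
                    stack.append(r)
    # reverse the bar transitions: directions as well as labels
    bars = {"0#": "0", "1#": "1"}
    delta_c = {}
    for (q, a), succs in delta.items():
        if a in bars:
            for r in succs:
                delta_c.setdefault((r, bars[a]), set()).add(q)
    q0_c = object()                    # fresh start state of the completing NFA
    delta_c[(q0_c, "")] = set(finals)  # epsilon-moves to the old final states
    return states | {q0_c}, delta_c, q0_c, frontier

def accepts(nfa, word):
    states, delta, q0, finals = nfa
    def close(S):                      # epsilon closure
        S, stack = set(S), list(S)
        while stack:
            for r in delta.get((stack.pop(), ""), set()):
                if r not in S:
                    S.add(r)
                    stack.append(r)
        return S
    cur = close({q0})
    for a in word:
        cur = close({r for q in cur for r in delta.get((q, a), set())})
    return bool(cur & finals)

def in_slice(completing_nfa, sigma):
    # sigma: the finite, prefix-closed slicing criterion (tuples of symbols)
    return any(accepts(completing_nfa, w) for w in sigma)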
Since A_π is independent of the slicing criterion, the completing automaton is also independent of the slicing criterion and needs to be computed only once. It can be stored and re-used whenever the program needs to be sliced. To decide whether π:e can be sliced out, the function inSlice described in Algorithm 1 simply checks whether the intersection of the slicing criterion with L(Ā_π) is non-empty.
5.2 Correctness of Incremental Slicing
We now show that the incremental algorithm to compute incremental slices is correct. Recall that we use the following notations: (i) G^σ_π is the grammar generated by demand analysis (Figure 4) for an expression π:e in the program of interest, when the slicing criterion is σ, (ii) A_π is the automaton corresponding to G^{ϵ}_π after Mohri-Nederhof transformation and canonicalization, and (iii) Ā_π is the completing automaton for e. We first show that the result of the demand analysis for an arbitrary slicing criterion σ can be decomposed as the concatenation of the demand analysis obtained for the fixed slicing criterion {ϵ} and σ itself.
Lemma 5.1. For all expressions e and slicing criteria σ, L(G^σ_π) = L(G^{ϵ}_π)σ.

Proof. The proof is by induction on the structure of e. Observe that all the rules of the demand analysis (Figure 4) add symbols only as prefixes to the incoming demand. Hence, the slicing criterion will always appear as a suffix of any string that is produced by the grammar. Thus, any language L(G^σ_π) can be decomposed as σ′σ for some language σ′. Substituting {ϵ} for σ, we get L(G^{ϵ}_π) = σ′. Thus L(G^σ_π) = L(G^{ϵ}_π)σ.
Given a string s over (0̄ + 1̄)∗, we use the notation s̄ to stand for the reverse of s in which all occurrences of 0̄ are replaced by 0 and 1̄ by 1. Clearly, S({s s̄}) = {ϵ}.
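For completeness, a one-line sketch of the s ↦ s̄ operation under the same token encoding used above (defined only for strings over the barred symbols):

def unbar_reverse(s):
    return tuple({"0#": "0", "1#": "1"}[sym] for sym in reversed(s))

print(unbar_reverse(("0#", "1#")))   # ('1', '0'); and S({s + s̄}) = {ϵ}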
We next prove the completeness and minimality of Ā_π.

Lemma 5.2. {s | S(L(M^{s}_π)) ≠ ∅} = L(Ā_π)(0 + 1)∗
Proof. We first prove LHS ⊆ RHS. Let the string s ∈ S(L(M^{s}_π)). Then by Lemma 5.1, s ∈ S(L(M^{ϵ}_π){s}). By Property 2, this also means that s ∈ S(C(L(M^{ϵ}_π)){s}). Since strings in C(L(M^{ϵ}_π)) are of the form (0 + 1 + 2)∗(0̄ + 1̄)∗ (Property 1), this means that there is a string p1p2 such that p1 ∈ (0 + 1 + 2)∗ and p2 ∈ (0̄ + 1̄)∗, and S({p2}{s}) ⊆ (0 + 1)∗. Thus s can be split into two strings s1 and s2, such that S({p2}{s1}) = {ϵ}. Therefore s1 = p̄2. From the construction of Ā_π we have p̄2 ∈ L(Ā_π) and s2 ∈ (0 + 1)∗. Thus, s ∈ L(Ā_π)(0 + 1)∗.

Conversely, for the proof of RHS ⊆ LHS, we assume that a string s ∈ L(Ā_π)(0 + 1)∗. From the construction of Ā_π we have strings p1, p2, s′ such that p1p2 ∈ C(L(M^{ϵ}_π)), p1 ∈ (0 + 1 + 2)∗, p2 ∈ (0̄ + 1̄)∗, s is p̄2s′ and s′ ∈ (0 + 1)∗. Thus, S(L(M^{s}_π)) = S(L(M^{ϵ}_π){s}) = S(C(L(M^{ϵ}_π)){s}) = S({p1p2p̄2s′}) = {p1s′}. Thus, S(L(M^{s}_π)) is non-empty and s ∈ LHS.
We now prove our main result: our slicing algorithm, represented by inSlice (Algorithm 1), returns true if and only if S(L(A_π)σ) is non-empty.

Theorem 5.3. S(L(M^σ_π)) ≠ ∅ ↔ inSlice(e, σ)

Proof. We first prove the forward implication. Let s ∈ S(L(M^σ_π)). From Lemma 5.1, s ∈ S(L(M^{ϵ}_π)σ). From Property 2, s ∈ S(C(L(M^{ϵ}_π))σ). Thus, there are strings p1, p2 such that p1 ∈ C(L(M^{ϵ}_π)), p2 ∈ σ, s = S({p1p2}). Further, p1 in turn can be decomposed as p3p4 such that p3 ∈ (0 + 1 + 2)∗ and p4 ∈ (0̄ + 1̄)∗. We also have S({p4p2}) ⊆ (0 + 1)∗. Thus p̄4 is a prefix of p2.

From the construction of Ā_π, we know p̄4 ∈ L(Ā_π). Further, p̄4 is a prefix of p2 and p2 ∈ σ; from the prefix-closed property of σ we have p̄4 ∈ σ. This implies L(Ā_π) ∩ σ ≠ ∅ and thus inSlice(e, σ) returns true.

Conversely, if inSlice(e, σ) is true, then ∃s : s ∈ L(Ā_π) ∩ σ. In particular, s ∈ L(Ā_π). Thus, from Lemma 5.2 we have S(L(M^{s}_π)) ≠ ∅. Further, since s ∈ σ we have S(L(M^σ_π)) ≠ ∅.
6 EXTENSION TO HIGHER ORDER FUNCTIONS
We now describe how our method can also be used to slice higher order programs. This section has been included
mainly for completeness, and we do not make claims of novelty. We handle all forms of higher-order functions except
the cases of functions being returned as a result, and functions being stored in data structures—in our case lists. Even
with these limitations, one can write a number of useful and interesting higher-order programs in our language.
Consider the program in Figure 6(a). It contains a higher order function hof which applies its first argument f on its
second argument l. The function main creates a list lst1 and a function value g (through partial application) and uses
these in the two calls to hof. Finally, main returns the result of these calls in a pair. The program exhibits higher order
functions that take as actual arguments both manifest functions and partial applications.
For our first-order method to work on higher order functions, we borrow from a technique called firstification [9, 16]. Firstification transforms a higher-order program to a first-order program without altering its semantics. Our version of firstification repeatedly (i) finds for each higher-order function the bindings of each of its functional parameters, (ii) replaces the function by a specialized version for each of the bindings, and (iii) replaces each application of the function by its specialized version. These steps are repeated till we are left with a program containing first order functions only.
In the example being considered, we first discover that f in foldr has a single binding to fun and the f of hof has a binding to car. Specialization gives the functions foldr_fun and hof_car. We now see that f of hof has a second binding to the partial application (foldr fun). This gives rise to a second specialization of hof called hof_g.
(a) A program with higher order functions:

(define (hof f l)
  (return π:(f l)))
(define (foldr f id l)
  (if (null? l) (return id)
      (return (f (car l) (foldr f id (cdr l))))))
(define (fun x y)
  (return (+ y 1)))
(define (main)
  (let lst1 ← (cons a (cons b nil)) in
  (let g ← (foldr fun 0) in
  (return (cons (hof car lst1) (hof g lst1))))))

(b) Program in (a) after specialization:

(define (hof_g l)
  (return π_f:(foldr_fun 0 l)))
(define (hof_car l)
  (return π_c:(car l)))
(define (foldr_fun id l)
  (if (null? l) (return id)
      (return (fun (car l) (foldr_fun id (cdr l))))))
(define (fun x y)
  (return (+ y 1)))
(define (main)
  (let lst1 ← (cons a (cons b nil)) in
  (let g ← (foldr fun 0) in
  (return (cons (hof_car lst1) (hof_g lst1))))))

(c) Slice of the program in (a) with slicing criterion {ϵ, 0}:

(define (main)
  (let lst1 ← (cons a ) in
  (let g ← ) in
  (return (cons (hof car lst1) ))))

Fig. 6. An example higher order program
The program after firstification is shown in Figure 6(b). This program is subjected to demand analysis and the results are reflected back into the higher-order program. Inside a higher order function that has been specialized, the demand on an expression is a union of the demands on the specialized versions of the expression. Thus, the demand on π is given by the union of the demands on π_c and π_f. Where the higher order function is applied, the demand on its arguments is derived from the demand transformer of its specialized version. As an example, the demand on lst1 in (hof car lst1) is obtained from the demand transformers of hof_car. For the slicing criterion {ϵ, 0}, the demand on the second argument of (cons (hof car lst1) (hof g lst1)) is null and thus this argument and the binding of g can both be sliced away. The slice for {ϵ, 0} is shown in Figure 6(c).
Note that our simple firstifier requires us to statically find all bindings of a functional parameter. This is not possible if we allow functions to be returned as results or store functions in data-structures. As an example we can consider a function f that, depending on a calculated value n, returns a function g iterated n times (i.e. g ∘ g ∘ ⋯ ∘ g, n times). A higher-order function receiving this value as a parameter cannot be specialized using the techniques described, for example, in [9]. A similar thing can happen if we allow functions in lists.
7 EXPERIMENTS AND RESULTS
In this section, we present the results from our experiments on the implementations of both versions of slicing. In the absence of the details of implementations of other slicing methods, we have compared the incremental step of our method with the non-incremental version. Our experiments show that the incremental slicing algorithm gives benefits even when the overhead of creating the completing automata is amortized over even a few slicing criteria.

Our benchmarks consist of first order programs derived from the nofib suite [11]. The higher order programs have been handcrafted to bring out the issues related to higher order slicing. The program named parser includes most of the higher order parser combinators required for parsing. fold corresponds to the example in Figure 6. Table 1 shows the time required for slicing with different slicing criteria.
Table 1. Statistics for incremental and non-incremental slicing. For each slicing criterion we report the non-incremental slicing time (NonInc) and the time of the incremental step (inc), both in ms, and the number of expressions in the slice (#expr). Pre-computation time is also in ms.

Program        Precomp.  #exprs      Slicing with {ϵ}       Slicing with {ϵ, 0}     Slicing with {ϵ, 1}
               time      in prog.  NonInc    inc   #expr   NonInc    inc   #expr   NonInc    inc   #expr
First-order Programs
treejoin        6900.0     581      6163.2   2.4    536     5577.2   2.8    538     5861.4   4.6    538
deriv            399.6     389       268.0   1.6    241      311.2   1.6    249      333.2   2.3    266
paraffins       3252.8    1152      2287.3   5.2   1067     2529.2   5.1   1067     2658.7   5.1   1067
nqueens          395.4     350       309.9   1.5    350      324.6   1.5    350      328.1   1.6    350
minmaxpos         27.9     182        18.1   0.9    147       19.5   0.8    149       20.5   0.9    149
nperm            943.1     590       627.4   2.1    206      698.4  11.2    381      664.0  11.8    242
linecharcount     11.7      91         7.0   0.5     69        7.5   0.5     78        7.4   0.5     82
studentinfo     1120.6     305       858.2   1.2     96      854.6   1.3    101     1043.3   7.5     98
knightstour     2926.5     630      2188.1   2.8    436     2580.6  12.2    436     2492.8   7.4    436
takl              71.6     151        46.1   0.7     99       49.5   0.8    105       48.5   0.7     99
lambda          4012.9     721      3089.0   2.7     26     3377.4  13.2    705     2719.8   5.3     33
Higher-order Programs
parser         60088.2     820     46066.8   2.3    203    45599.0   2.3    209    61929.2   4.1    209
maptail           22.1      96         5.5   0.5     51       15.4   0.6     67       17.4   0.6     56
fold              21.4     114        13.3   0.4     17       14.4   0.5     76       16.9   0.6     33
For each benchmark, we first show the pre-computation time, i.e. the time required to construct the completing automata. We then consider three different slicing criteria, and for each slicing criterion, present the times for non-incremental slicing and the incremental step. The results in Table 1 show that for all benchmarks, the time required to compute the completing automata is comparable to the time taken for computing the slice non-incrementally. Since computing completing automata is a one time activity, incremental slicing is very efficient even when a program is sliced only twice. As seen in Table 1, the time taken for the incremental step is orders of magnitude smaller than for non-incremental slicing, thus confirming the benefits of reusing the completing automata.
We also show the number of expressions in the original program and in the slice produced to demonstrate the effectiveness of the slicing process itself. Here are some of the interesting cases. It can be seen that the slice for nqueens for any slicing criterion includes the entire program. This is because finding out whether a solution exists for nqueens requires the entire program to be executed. On the other hand, the program lambda is a λ-expression evaluator that returns a tuple consisting of an atomic value and a list. The criterion {ϵ, 0} requires a majority of the expressions in the program to be present in the slice to compute the atomic value. On the other hand, the criteria {ϵ} and {ϵ, 1} do not require any value to be computed; only expressions which compute the constructor are kept in the slice, and hence our algorithm is able to discard most of the expressions. This behavior can be clearly seen in the higher-order example fold where the slicing criterion {ϵ, 0} selects an expression which only uses the first element of lst1, thus allowing our slicing algorithm to discard most of the expressions that construct lst1. After examining the nature of the benchmark programs, the slicing criteria and the slices, we conclude that slicing is most effective when the slicing criterion selects parts of a bounded structure, such as a tuple, and the components of the tuple are produced by parts of the program that are largely disjoint.
8 RELATED WORK
Program slicing has been an active area of research. However, most of the efforts in slicing have been for imperative
programs. The surveys [2, 19, 22] give good overviews of the variants of the slicing problem and their solution techniques.
The discussion in this section will be centered mainly around static and backward slicing of functional programs.
In the context of imperative programs, a slicing criterion is a pair consisting of a program point, and a set of
variables. The slicing problem is to determine those parts of the program that decide the values of the variables at
the program point [23]. A natural solution to the slicing problem is through the use of data and control dependences
between statements. Thus the program to be sliced is transformed into a graph called the program dependence graph
(PDG) [5, 13], in which nodes represent individual statements and edges represent dependences between them. The slice
consists of the nodes in the PDG that are reachable through a backward traversal starting from the node representing
the slicing criterion. Horwitz, Reps and Binkley [5] extend PDGs to handle interprocedural slicing. They show that a
naive extension could lead to imprecision in the computed slice due to the incorrect tracking of the calling context.
Their solution is to construct a context-independent summary of each function through a linkage grammar, and then
use this summary to step across function calls. The resulting graph is called a system dependence graph (SDG). Our
method generalizes SDGs to additionally keep track of the construction of algebraic data types (cons), selection of
components of data types (car and cdr) and their interaction, which may span across functions.
Silva, Tamarit and Tomás [20] adapt SDGs for functional languages, in particular Erlang. The adaptation is straightforward except that they additionally handle dependences that arise out of pattern matching. Because of the use of SDGs, they can manage calling contexts precisely. However, as pointed out by the authors themselves, when given the Erlang program {main() -> x = {1,2}, {y,z} = x, y}, their method produces the imprecise slice {main() -> x = {1,2}, {y,_} = x, y} when sliced on the variable y. Notice that the slice retains the constant 2, and this is because of inadequate handling of the interaction between cons and cdr. For the equivalent program (let x ← (cons 1 2) in (let y ← (car x) in y)) with the slicing criterion ϵ, our method would correctly compute the demand on the constant 2 as 1̄(ϵ ∪ 0). This simplifies to the demand ∅, and 2 would thus not be in the slice. Another issue is that while the paper mentions the need to handle higher order functions, it does not provide details regarding how this is actually done. This would have been interesting considering that the language considered allows lambda expressions.
The slicing technique that is closest to ours is due to Reps and Turnidge [15]. They use projection functions, represented as certain kinds of tree grammars, as slicing criteria. This is the same as our use of prefix-closed regular expressions. Given a program P and a projection function ψ, their goal is to produce a program which behaves like ψ ∘ P. The analysis consists of propagating the projection function backwards to all subexpressions of the program. After propagation, any expression with the projection function ⊥ (corresponding to our ∅ demand) is sliced out of the program. Liu and Stoller [8] also use a method that is very similar to [15], but more extensive in scope.
These techniques differ from ours in two respects. First, these methods, unlike ours, do not derive context-independent summaries of functions. This results in a loss of information due to merging of contexts and affects the precision of the slice. Moreover, the computation of function summaries using symbolic demands enables the incremental version of our slicing method. Consider, as an example, the program fragment π:(cons π1:x π2:y) representing the body of a function. Demand analysis with the symbolic demand σ gives the demand environment {π ↦ σ, π1 ↦ 0̄σ, π2 ↦ 1̄σ}. Notice that the demands at π1 and π2 are in terms of the symbols 0̄ and 1̄. This is a result of our decision to work with symbolic demands, and, as a consequence, also handle the constructor-selector interaction symbolically. If we now slice with the default criterion ϵ and then canonicalize (instead of simplify), we are left with the demand environment {π ↦ ϵ, π1 ↦ 0̄, π2 ↦ 1̄}.
(define (mapsq l)
  (if (null? l) (return l)
      (return (cons (sq (car l))
                    (mapsq (cdr l))))))

Fig. 7. Example to illustrate the imprecision due to Mohri-Nederhof approximation
Notice that there is enough information in the demand environment to deduce, through the construction of the completing automaton, that π1 (respectively π2) will be in the slice only if the slicing criterion includes 0 (respectively 1). Since the methods in [15] and [8] deal with demands in their concrete forms, it is difficult to see the incremental version being replayed with their methods.
There are other less related approaches to slicing. A graph based approach has also been used by Rodrigues and Barbosa [17] for component identification in Haskell programs. Given the intended use, the nodes of the graph represent coarser structures such as modules, functions and data type definitions, and the edges represent relations such as containment (e.g. a module containing a function definition). On a completely different note, Rodrigues and Barbosa [18] use program calculation in the Bird-Meertens formalism for obtaining a slice. Given a program P and a projection function ψ, they calculate a program which is equivalent to ψ ∘ P. However the method is not automated. Finally, dynamic slicing techniques have been explored for functional programs by Perera et al. [14], Ochoa et al. [12] and Biswas [3].
9 CONCLUSIONS AND FUTURE WORK
We have presented a demand-based algorithm for incremental slicing of functional programs. The slicing criterion is a prefix-closed regular language and represents parts of the output of the program that may be of interest to a user of our slicing method. We view the slicing criterion as a demand, and the non-incremental version of the slicer does a demand analysis to propagate this demand through the program. The slice consists of parts of the program with non-empty demands after the propagation. A key idea in this analysis is the use of symbolic demands in demand analysis. Apart from better handling of calling contexts, which improves the precision of the analysis, this also helps in building the incremental version.
The incremental version builds on the non-incremental version. A per program pre-computation step slices the
program with the default criterion ϵ. This step factors out the computation that is common to slicing with any criterion.
The result, reduced to a canonical form, can now be used to find the slice for a given criterion with minimal computation.
We have proven the correctness of the incremental algorithm with respect to the non-incremental version. And finally,
we have extended our approach to higher-order programs through firstification. Experiments with our implementation
confirm the benefits of incremental slicing.
There are however two areas of concern, one related to efficiency and the other to precision. To be useful, the slicer should be able to slice large programs quickly. While our incremental slicer is fast enough, the pre-computation step is slow, primarily because of the canonicalization step. In addition, the firstification process may create a large number of specialized first-order programs. As an example, our experiments with functional parsers show that the higher-order parser combinators such as the or-parser and the and-parser are called often, and the arguments to these calls are in turn calls to higher order functions, for instance the Kleene closure and the positive closure parsers.
The other concern is that while our polyvariant approach through computation of function summaries improves precision, the resulting analysis leads to an undecidable problem. The workaround involves an approximation that could lead to imprecision. As an example, consider the function mapsq shown in Figure 7. The reader can verify that the function summary for mapsq would be given as: DS^1_mapsq(σ) = DS^1_mapsq σ, where DS^1_mapsq is the language ϵ | 1^n 1̄^n | 1^n 0 2 0̄ 1̄^n, for n ≥ 0. Now, given a slicing criterion σ = {ϵ, 1, 11, 110} standing for the path to the third element of a list, it is easy to see that DS^1_mapsq(σ) after simplification would give back σ itself, and this is the most precise slice. However, due to the Mohri-Nederhof approximation DS^1_mapsq would be approximated by ϵ | 1^n 1̄^m | 1^k 0 2 0̄ 1̄^l, for n, m, k, l ≥ 0. In this case, DS^1_mapsq(σ) would be (0 + 1)∗, keeping all the elements of the input list l in the slice.
REFERENCES
[1] Rahul Asati, Amitabha Sanyal, Amey Karkare, and Alan Mycroft. 2014. Liveness-Based Garbage Collection. In Compiler Construction - 23rd International Conference (CC 2014).
[2] David Binkley and Mark Harman. 2004. A survey of empirical results on program slicing. Advances in Computers 62 (2004).
[3] Sandip Kumar Biswas. 1997. Dynamic Slicing in Higher-order Programming Languages. Ph.D. Dissertation. University of Pennsylvania, Philadelphia,
PA, USA.
[4] Manuel M. T. Chakravarty, Gabriele Keller, and Patryk Zadarnowski. 2003. A Functional Perspective on SSA Optimisation Algorithms. In COCV,
2003.
[5] Susan Horwitz, Thomas W. Reps, and David Binkley. 1988. Interprocedural Slicing Using Dependence Graphs. In Proceedings of the ACM SIGPLAN'88 Conference on Programming Language Design and Implementation (PLDI).
[6] Amey Karkare, Uday Khedker, and Amitabha Sanyal. 2007. Liveness of Heap Data for Functional Programs. In Heap Analysis and Verification, HAV
2007.
[7] Prasanna Kumar, Amitabha Sanyal, and Amey Karkare. 2016. Liveness-based Garbage Collection for Lazy Languages. In International Symposium on Memory Management (ISMM 2016).
[8] Yanhong A. Liu and Scott D. Stoller. 2003. Eliminating Dead Code on Recursive Data. Sci. Comput. Program. 47 (2003).
[9] Neil Mitchell and Colin Runciman. 2009. Losing Functions Without Gaining Data: Another Look at Defunctionalisation. In Proceedings of the 2nd
ACM SIGPLAN Symposium on Haskell.
[10] Mehryar Mohri and Mark-Jan Nederhof. 2000. Regular Approximation of Context-Free Grammars through Transformation. In Robustness in
Language and Speech Technology. Kluwer Academic Publishers.
[11] NoFib. 2017. Haskell Benchmark Suite. http://git.haskell.org/nofib.git. (Last accessed Feb 2017).
[12] Claudio Ochoa, Josep Silva, and Germán Vidal. 2008. Dynamic Slicing of Lazy Functional Programs Based on Redex Trails. Higher Order Symbol.
Comput. 21 (2008).
[13] Karl J. Ottenstein and Linda M. Ottenstein. 1984. The program dependence graph in a software development environment. ACM SIGPLAN Notices
19 (1984).
[14] Roly Perera, Umut A. Acar, James Cheney, and Paul Blain Levy. 2012. Functional programs that explain their work. In ACM SIGPLAN International Conference on Functional Programming (ICFP 2012).
[15] Thomas W. Reps and Todd Turnidge. 1996. Program Specialization via Program Slicing. In Partial Evaluation, International Seminar, Dagstuhl Castle,
Germany.
[16] John C. Reynolds. 1998. Definitional Interpreters for Higher-Order Programming Languages. Higher-Order and Symbolic Computation 11, 4 (1998).
[17] Nuno F. Rodrigues and Luís S. Barbosa. 2006. Component Identification Through Program Slicing. Electronic Notes in Theoretical Computer Science 160 (2006).
[18] Nuno F. Rodrigues and Luís S. Barbosa. 2006. Program Slicing by Calculation. Journal of Universal Computer Science (2006).
[19] Josep Silva. 2012. A Vocabulary of Program Slicing-based Techniques. ACM Comput. Surv. (2012).
[20] Josep Silva, Salvador Tamarit, and César Tomás. 2012. System Dependence Graphs in Sequential Erlang. In Proceedings of the 15th International
Conference on Fundamental Approaches to Software Engineering (FASE’12).
[21] Scott F. Smith and Tiejun Wang. 2000. Polyvariant Flow Analysis with Constrained Types. In Proceedings of the 9th European Symposium on
Programming Languages and Systems (ESOP ’00).
[22] Frank Tip. 1995. A Survey of Program Slicing Techniques. Journal of Programming Languages 3 (1995).
[23] Mark Weiser. 1984. Program Slicing. IEEE Trans. Software Eng. 10 (1984).
The Dispersion Bias
arXiv:1711.05360v4 [stat.ME] 15 Feb 2018

Lisa Goldberg∗, Alex Papanicolaou†, Alex Shkolnik‡

November 4, 2017
This Version: February 16, 2018§
Abstract
Estimation error has plagued quantitative finance since Harry Markowitz
launched modern portfolio theory in 1952. Using random matrix theory,
we characterize a source of bias in the sample eigenvectors of financial
covariance matrices. Unchecked, the bias distorts weights of minimum
variance portfolios and leads to risk forecasts that are severely biased
downward. To address these issues, we develop an eigenvector bias correction. Our approach is distinct from the regularization and eigenvalue
shrinkage methods found in the literature. We provide theoretical guarantees on the improvement our correction provides as well as estimation
methods for computing the optimal correction from data.
∗ Departments of Economics and Statistics and Consortium for Data Analytics in Risk, University of California, Berkeley, CA 94720 and Aperio Group, lrg@berkeley.edu.
† Consortium for Data Analytics in Risk, University of California, Berkeley, CA 94720, apapanicolaou@berkeley.edu.
‡ Consortium for Data Analytics in Risk, University of California, Berkeley, CA 94720, ads2@berkeley.edu.
§ We thank the Center for Risk Management Research, the Consortium for Data Analytics in Risk, and the Coleman Fung Chair for financial support. We thank Marco Avellaneda, Bob Anderson, Kay Giesecke, Nick Gunther, Guy Miller, George Papanicolaou, Yu-Ting Tai, participants at the 3rd Annual CDAR Symposium in Berkeley, participants at the Swissquote Conference 2017 on FinTech, and participants at the UC Santa Barbara Seminar in Statistics and Applied Probability for discussion and comments. We are grateful to Stephen Bianchi, whose incisive experiment showing that it is errors in eigenvectors, and not in eigenvalues, that corrupt large minimum variance portfolios, pointed us in a good direction.
1 Introduction
Harry Markowitz transformed finance in 1952 by framing portfolio construction as a tradeoff between mean and variance of return. This application of
mean-variance optimization is the basis of theoretical breakthroughs as fundamental as the Capital Asset Pricing Model (CAPM) and Arbitrage Pricing Theory (APT), as well as practical innovations as impactful as Exchange Traded
Funds.1 Still, all financial applications of mean-variance optimization suffer
from estimation error in covariance matrices, and we highlight two difficulties.
First, a portfolio that is optimized using an estimated covariance matrix is
never the true Markowitz portfolio. Second, in current practice, the forecasted
risk of the optimized portfolio is typically too low, sometimes by a wide margin.
Thus, investors end up with the wrong portfolio, one that is riskier, perhaps a
lot riskier, than anticipated.
In this article, we address these difficulties by correcting a systematic bias
in the first eigenvector of a sample covariance matrix. Our setting is that of a
typical factor model,2 but our statistical setup differs from most recent literature. In the last two decades, theoretical and empirical emphasis has been on
the case when the number of assets N and number of observations T are both
large. In this regime, consistency of principal component analysis (PCA) estimates may be established (Bai & Ng 2008). Motivated by many applications,
we consider the setting of relatively few observations (in asymptotic theory: N
grows and T is fixed). Indeed, an investor often has a portfolio of thousands of
securities but only hundreds of observations.3 PCA is applied in this environment in the early, pioneering work by Connor & Korajczyk (1986) and Connor
& Korajczyk (1988), but also very recently (Wang & Fan 2017). In this high
dimension, low sample-size regime, PCA factor estimates necessarily carry a
finite-sample bias. This bias is further amplified by the optimization procedure
that is required to compute a Markowitz portfolio.
An elementary simulation experiment reveals that in a large minimum
variance portfolio, errors in portfolio weights are driven by the first principal
component, not its variance.4 The fact that the eigenvalues of the sample covariance matrix are not important requires some nontrivial analysis, which we
carry out. In particular, we show (in our asymptotic regime) that the bias
in the dominant sample eigenvalue does not affect the performance of the estimated minimum variance portfolio. Only the bias in the dominant sample
eigenvector needs to be addressed. We measure portfolio performance using
1 The seminal paper is Markowitz (1952). See Treynor (1962) and Sharpe (1964) for the Capital Asset Pricing Model and Ross (1976) for the Arbitrage Pricing Theory.
2 More precisely, the eigenvalues of the covariance matrix corresponding to the factors grow linearly in the dimension. This is not the traditional random matrix theory setting in which all eigenvalues are bounded, nor that of "weak" factors, e.g., Onatski (2012).
3 While high frequency data are available in some markets, many securities are observed only at a daily horizon or less frequently. Moreover, markets are non-stationary, so even when there is a long history of data available, its relevance to some problems is questionable.
4 This experiment was first communicated to us by Stephen Bianchi.
two well-established metrics. Tracking error, the workhorse of financial practitioners, measures deviations in weights between the estimated (optimized) and
optimal portfolios. We use the variance forecast ratio, familiar to both academics and practitioners, to measure the accuracy of the risk forecast of the
portfolio, however right or wrong that portfolio may be.
To develop some intuition for the results to come, consider a simplistic
world where all security exposures to the dominant (market) factor are identical. With probability one, a PCA estimate of our idealized, dominant factor
will have higher dispersion (variation in its entries). Decreasing this dispersion,
obviously, mitigates the estimation error. We denote our idealized, dominant
factor by z. We prove that the same argument applies to any other dominant
factor along the direction of z with high probability for N large. Thus moving our PCA estimate towards z, by some amount, is very likely to decrease
estimation error. In the limit (N ↑ ∞), the estimation error is reduced with
probability one. The larger the component of the true dominant factor along
z is, the more we can decrease the estimation error.
While a careful proof of our result relies on some recent theory on sample
eigenvector asymptotics, rule of thumb versions have been known to practitioners since the 1970s (see footnote 10 and the corresponding discussion). Indeed,
the dominant risk factor consistently found in the US and many other developed public equity markets has most (if not all) individual equities positively
exposed to it. In other words, empirically, the dominant risk factor has a significant component in z. Our characterization of the dispersion bias may then
be viewed as a formalization of standard operating procedure.
The remainder of the introduction discusses our contributions and the
related literature. Section 2 describes the problem and fundamental results
around the sample covariance matrix and PCA. In Section 3, we present our
main results on producing a bias corrected covariance estimate. Section 4 discusses the implementation of our correction for obtaining data-driven estimates.
Finally, in Section 5 we present numerical results illustrating the performance
of our method in improving the estimated portfolio and risk forecasts.
1.1 Our contributions
We contribute to the literature by providing a method that significantly improves the performance of PCA-estimated minimum-variance portfolios. Our
approach and perspective appear to be new. We summarize some of the main
points.
Several authors (see above) have noted that sample eigenvectors carry a
bias in the statistical and model setting we adopt. We contribute in this direction by, first, recognizing that it is the bias in the first sample eigenvector that
drives the performance of PCA-based, minimum-variance portfolios. Second,
we show that this bias may in fact be corrected to some degree (cf. discussion
below (3.7) in Wang & Fan (2017)). In our domain of application this degree
is material. We point out that eigenvalue bias, which has been the focus in
most literature, does not have a material impact on minimum-variance portfolio performance. This motivates lines of research into more general Markowitz
optimization problems. Finally, our correction can be framed geometrically
in terms of the spherical law of cosines. This perspective illuminates possible
extensions of our work. We discuss this further in our concluding remarks.
We also develop a bias correction and show that it outperforms standard
PCA. Minimum variance portfolios constructed with our corrected covariance
matrix are materially closer to optimal, and their risk forecasts are materially more accurate. In an idealized one-factor setting, we provide theoretical
guarantees for the size of the improvement. Our theory also identifies some
limitations. We demonstrate the efficacy of the method with an entirely data-driven correction. In an empirically calibrated simulation, its performance is
far closer to the theoretically optimal than to standard PCA.
1.2 Related literature
The impact of estimation error on optimized portfolios has been investigated thoroughly in simulation and empirical settings. For example, see Jobson & Korkie (1980), Britten-Jones (1999), Bianchi, Goldberg & Rosenberg (2017) and the references therein. DeMiguel, Garlappi & Uppal (2007) compare a variety of methods for mitigating estimation error, benchmarking against the equally weighted portfolio in out-of-sample tests. They conclude that unreasonably long estimation windows are required for current methods to consistently outperform the benchmark. We review some methods most closely related to our approach.5
Early work on estimation error and the Markowitz problem was focused on
Bayesian approaches. Vasicek (1973) and Frost & Savarino (1986) were perhaps
the first to impose informative priors on the model parameters.6 More realistic priors incorporating multi-factor modeling are analyzed in Pástor (2000)
(sample mean) and Gillen (2014) (sample covariance). Formulae for Bayes’ estimates of the return mean and covariance matrix based on normal and inverted
Wishart priors may be found in Lai & Xing (2008, Chapter 4, Section 4.4.1).
A related approach to the Bayesian framework is that of shrinkage or regularization of the sample covariance matrix.7 Shrinkage methods have been proposed in contexts where little underlying structure is present (Bickel & Levina 2008) as well as those in which a factor or other correlation structure is presumed to exist (e.g. Ledoit & Wolf (2003), Ledoit & Wolf (2004), Fan, Liao & Mincheva (2013) and Bun, Bouchaud & Potters (2016)). Perhaps surprisingly, shrinkage methods turn out to be related to placing constraints on the portfolio weights in the Markowitz optimization. Jagannathan & Ma (2003) show that imposing a positivity constraint typically shrinks the large entries of the sample covariance downward.8

5 The literature on this topic is extensive. We briefly mention a few important references that do not overlap at all with our work. Michaud & Michaud (2008) recommends the use of bootstrap resampling. Lai, Xing & Chen (2011) reformulate the Markowitz problem as one of stochastic optimization with unknown moments. Goldfarb & Iyengar (2003) develop a robust optimization procedure for the Markowitz problem by embedding a factor structure in the constraint set.
6 Preceding work analyzed diffuse priors and was shown to be inefficient (Frost & Savarino 1986). The latter, instead, presumes all stocks are identical and have the same correlations. Vasicek (1973) specified a normal prior on the cross-sectional market betas (dominant factor).
7 In the Bayesian setup, sample estimates are "shrunk" toward the prior (Lai & Xing 2008).
As already mentioned, factor analysis and PCA in particular play a prominent role in the literature. It appears that while eigenvector bias is acknowledged, direct9 bias corrections are made only to the eigenvalues corresponding
to the principal components (e.g. Ledoit & Péché (2011) and Wang & Fan
(2017)). Some work on characterizing the behavior of sample eigenvectors may
be found in Paul (2007) and Shen, Shen, Zhu & Marron (2016). In the setting
of Markowitz portfolios, the impact of eigenvalue bias and optimal corrections are investigated in El Karoui et al. (2010) and El Karoui (2013).
Our approach also builds upon several profound contributions in the literature on portfolio composition. In an influential paper, Green & Hollifield
(1992) observe the importance of the structure of the dominant factor to the
composition of minimum variance portfolios. In particular, the “dispersion”
of the dominant factor exposures drives the extreme positions in the portfolio composition. This dispersion is further amplified by estimation error, as
pointed out in earlier work by Blume (1975) (see also Vasicek (1973)). These
early efforts have led to a number of heuristics10 to correct the sample bias of
dominant factor estimates.
2 Problem formulation
We address the impact of estimation error on Markowitz portfolio optimization. To streamline the exposition, we restrict our focus to the minimum variance portfolio. In particular, given a covariance matrix Σ̂ estimated from T observations of returns to N securities, we consider the optimization problem,

min_{w ∈ ℝ^N} w^⊤ Σ̂ w   subject to   w^⊤ 1_N = 1.   (1)
We denote by ŵ the solution to (1), the estimated minimum variance portfolio. Throughout, 1_N is the N-vector of all ones. It is well-known that the portfolio weights, ŵ, are extremely sensitive to errors in the estimated model, and risk forecasts for the optimized portfolio tend to be too low.11 We aim to address these issues in a high-dimension, low sample-size regime, i.e., T ≪ N.

8 This is generalized and analyzed further in DeMiguel, Garlappi, Nogales & Uppal (2009).
9 Several approaches to alter the sample eigenvectors indirectly (e.g. shrinking the sample towards some structured covariance) do exist. However, the analysis of these approaches is not focused on characterizing the bias inherent to the sample eigenvectors themselves.
10 For example, the Blume and Vasicek (beta) adjustments. See the discussion of Exhibit 3 and footnote 7 in Clarke, De Silva & Thorley (2011).
We are also interested in the equally weighted portfolio where w_e = 1_N/N, a very simple non-optimized portfolio frequently employed as a benchmark. We use this benchmark to test whether the corrections we make for improving the minimum variance portfolio are not offset by degraded performance elsewhere.
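For concreteness, the closed-form solution of (1) for a positive definite Σ̂, together with the equally weighted benchmark, can be sketched in a few lines of numpy (our own illustration; the function names are not from the paper):

import numpy as np

def min_var_weights(sigma_hat):
    ones = np.ones(sigma_hat.shape[0])
    x = np.linalg.solve(sigma_hat, ones)   # computes Σ̂^{-1} 1_N
    return x / (ones @ x)                  # normalize so the weights sum to one

def equal_weights(n):
    return np.ones(n) / n                  # the benchmark portfolio w_e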
2.1 Model specification & assumptions
We consider a one-factor, linear model for returns to N ∈ ℕ securities. Here, a generating process for the N-vector of excess returns R takes the form

R = φβ + ϵ   (2)

where φ is the return to the factor, β = (β_1, …, β_N) is the vector of factor exposures, and ϵ = (ϵ_1, …, ϵ_N) is the vector of diversifiable specific returns. While the returns (φ, ϵ) ∈ ℝ × ℝ^N are random, we treat each exposure β_n ∈ ℝ as a constant to be estimated. Assuming φ and the {ϵ_n} are mean zero and pairwise uncorrelated, the N × N covariance matrix of R can be expressed as

Σ = σ² ββ^⊤ + Δ.   (3)

Here, σ² is the variance of φ and Δ a diagonal matrix with nth entry Δ_nn = δ_n², the variance of ϵ_n. Estimation of Σ is central to numerous applications.
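A small simulation sketch of the model (2)-(3) may help fix ideas. The parameter choices below are purely illustrative and are ours, with the factor variance set to N to reflect the growth imposed by Assumption 2.2 below:

import numpy as np

def simulate_returns(beta, sigma, delta, T, rng):
    # beta: exposure vector; sigma: factor vol; delta: specific vol
    N = len(beta)
    phi = rng.normal(0.0, sigma, size=T)          # factor returns
    eps = rng.normal(0.0, delta, size=(T, N))     # specific returns
    return phi[:, None] * beta[None, :] + eps     # T x N matrix of R_t

rng = np.random.default_rng(0)
N, T = 500, 50
beta = np.full(N, 1.0 / np.sqrt(N))               # Condition 2.1: ||beta|| = 1
R = simulate_returns(beta, sigma=np.sqrt(N), delta=1.0, T=T, rng=rng)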
We consider a setting in which T observations {R_t}_{t=1}^T of the vector R are generated by a latent time-series {φ_t, ϵ_t}_{t=1}^T of (φ, ϵ). It is standard to assume the observations are i.i.d. and we do so throughout. Finite-sample error distorts measurement of the parameters (σ², β, δ²) leading to the estimate,

Σ̂ = σ̂² β̂β̂^⊤ + Δ̂,   (4)

which approximates (3) by using the estimated model (σ̂², β̂, δ̂²).
Without loss of generality we assume the following condition throughout.
Condition 2.1. Both β and any estimate β̂ are normalized as ‖β‖ = ‖β̂‖ = 1.
We require further statistical and regularity assumptions on our model for our technical results. These conditions stem from our use of recent work on spiked covariance models (Wang & Fan 2017). Some may be relaxed in various ways. Our numerical results (Section 5) investigate a much more general setup.
Assumption 2.2. The factor variance σ² = σ²_N satisfies N/(Tσ²_N) → c₁ as N ↑ ∞ for fixed integer T ≥ 1 and c₁ ∈ (0, ∞). Also, Δ = δ²I for fixed δ ∈ (0, ∞).
11 Extreme sensitivity of portfolio weights to estimation error and the downward bias of risk forecasts are also found in the optimized portfolios constructed by asset managers. Portfolio specific corrections of the dispersion bias discussed in this article are useful in addressing these practical problems. The focus on the global minimum variance portfolio in this article highlights the essential logic of our analysis in the simplest possible setting.
Assumption 2.3. The returns {R_t}_{t=1}^T are i.i.d. with R_1 ∼ N(0, Σ).
Assumption 2.4. For z = 1_N/√N we have sup_N γ_{β,z} < 1 and sup_N γ_{β̂,z} < 1. Also, β and β̂ are oriented in such a way that β^⊤z and β̂^⊤z are nonnegative.
The requirement in Assumption 2.2 that the factor variance grow in dimension while the specific variance stays bounded is pervasive in the factor modeling literature (Bai & Ng 2008). The extra requirement that the specific risk is homogeneous (i.e. Δ is a scalar matrix) is restrictive but shortens the technical proofs significantly. It is also commonplace in the spiked covariance model literature.12 We discuss the adjustments required for heterogeneous specific risk (i.e. Δ diagonal) in Section 4.2. The distributional Assumption 2.3 facilitates several steps in the proofs. In particular, it allows for an elegant characterization of the systematic bias in PCA, i.e., the bias in the first eigenvector of the sample covariance matrix of the returns {R_t} (see Section 2.2). Assumption 2.4 is not much of a restriction. First, all results can be easily extended to the case β = z, which is simply a point of singularity and thus requires a separate treatment. The orientation requirement is essentially without loss of generality. We will see (in Section 3) that the vector z plays a special role in our bias correction procedure. And if β^⊤z < 0, we would simply consider −z.
2.2 PCA bias characterization
We consider PCA as the starting point for our analysis as its use for risk factor identification is widespread in the literature. It is appropriate when σ² is much larger than ‖Δ‖ (e.g., Assumption 2.2). Assembling a data matrix R = (R_1, …, R_T), we will denote the (data) sample covariance matrix by S = T^{−1}RR^⊤. PCA identifies σ̂² with the largest eigenvalue of S and β̂ with the corresponding eigenvector (the first principal component). The diagonal matrix of specific risks Δ is estimated as Δ̂ = diag(S − σ̂²β̂β̂^⊤), which corresponds to least-squares regression of R onto the estimated factors. Finally, the estimate Σ̂ of the covariance Σ is assembled as in (4).
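The estimator just described is straightforward to express in code; the following numpy sketch (ours, not the paper's) returns (σ̂², β̂, Σ̂), orienting β̂ as in Assumption 2.4. Here the data matrix is stored as T × N, so S = R^⊤R/T.

import numpy as np

def pca_estimate(R):
    T = R.shape[0]
    S = R.T @ R / T                          # sample covariance matrix
    evals, evecs = np.linalg.eigh(S)         # eigenvalues in ascending order
    sigma2_hat, beta_hat = evals[-1], evecs[:, -1]
    if beta_hat.sum() < 0:                   # orient so that beta_hat^T z >= 0
        beta_hat = -beta_hat
    delta_hat = np.diag(S - sigma2_hat * np.outer(beta_hat, beta_hat))
    Sigma_hat = sigma2_hat * np.outer(beta_hat, beta_hat) + np.diag(delta_hat)
    return sigma2_hat, beta_hat, Sigma_hat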
Bias in the basic PCA estimator above arises from the use of the sample13 covariance matrix S. We focus on the high-dimension, low sample size regime which is most appropriate for the practical applications we consider. Asymptotically, T is fixed and N ↑ ∞, a regime in which the PCA estimates of σ² and β are not consistent. We summarize some recent results from the literature below. We also state our characterization of PCA bias as it pertains to its principal components. Our result is novel in that it suggests a remedy.
12 In particular, our need for this condition stems from our use of results from Shen et al. (2016). The assumption may be relaxed by imposing regularity conditions on the entries of Δ at the expense of a more cumbersome exposition. It may also be removed entirely if we consider the regime N, T ↑ ∞ as is more common in the literature. We do not pursue this because many important practical applications are restricted to only a small number of observations.
13 If Σ replaces S, sample bias vanishes and the estimator is asymptotically exact as N ↑ ∞.
Figure 1: A unit sphere with example vectors β, β̂ and z = 1_N/√N. (a) β̂ lies near the cone defined by the Ψ in (5) w.h.p. for large N; there is no reference frame to detect the bias since β is unknown. (b) θ_{β̂,z} > θ_{β,z} w.h.p. for large N (their difference is shaded); PCA estimates exhibit a systematic (dispersion) bias relative to the vector z. The vector z provides a reference frame in which PCA bias may be identified. Note, the angle θ_{x,y} is also the arc-length between points x and y on the unit sphere.
Let θ_{β̂,β} denote the angle between β and its PCA estimate β̂. Shen et al. (2016) showed, under Assumption 2.2 and mild conditions on the moments of {R_t}, that there exist random variables Ψ > 1 and ξ ≥ 0 such that14

cos θ_{β̂,β} →^{a.s.} Ψ^{−1}   (5)
σ̂²/σ² →^{a.s.} Ψ²ξ   (6)

as N ↑ ∞ where ξ and Ψ depend on T (fixed). The pair (ξ, Ψ) characterizes the error in the PCA estimated model asymptotically. As the sample size T increases, both ξ and Ψ approach one almost surely, i.e., PCA supplies consistent estimates. Since Ψ > 1, the estimate σ̂² tends to be biased upward (whenever ξ fluctuates around one). Under Assumption 2.3, ξ = χ²_T/T where χ²_T has the chi-square distribution with T degrees of freedom. Here, ξ is concentrated around one with high probability (w.h.p.) for even moderate values of T.
Identifying a (systematic) bias in PCA estimates of β requires a more subtle analysis. Observe from (5) that Ψ defines a cone near which the estimate β̂ will lie with high probability for large N (see panel (a) of Figure 1). However, this does not point to a systematic error that can be corrected.15 Indeed, it is not a priori clear where on the cone the vector β̂ resides as (5) provides information only about the angle θ_{β̂,β} away from the unknown vector β. We provide a result (see Theorem 2.5 below) that sheds light on this problem.

14 More precisely, Ψ² = 1 + δ²c₁/ξ where c₁ is identified in Assumption 2.2.
15 Recently, Wang & Fan (2017) provided CLT-type results for the asymptotic distribution of sample eigenvectors (Theorem 3.2). They remark on the near "impossibility" of correcting sample eigenvector bias. This follows from their choice of coordinate system (cf. Figure 1).
Recall the vector z = 1_N/√N. We consider, not the angle θ_{β̂,β} between β and β̂, but their respective angles θ_{β,z} and θ_{β̂,z} to this reference vector.
Theorem 2.5 (PCA bias). Suppose that Assumptions 2.2, 2.3 and 2.4 hold and let β̂ be a PCA estimate of β. Then, cos θ_{β,z} ∼^{a.s.} Ψ cos θ_{β̂,z} as N ↑ ∞ and, in particular, we have that θ_{β̂,z} exceeds θ_{β,z} with high probability for large N.
The proof of this result is deferred to Appendix C. It applies the spiked
covariance model results of Shen et al. (2016) and Paul (2007) on sample eigenvectors using a decomposition in terms of the reference vector z.
2.3 Errors in optimized portfolios
Estimation error causes two types of difficulties in optimized portfolios. It distorts portfolio weights, and it biases the risk of optimized portfolios downward. Both effects are present for the minimum variance portfolio ŵ, constructed as the solution to (1) using some estimate Σ̂ of the returns covariance Σ. We now define the metrics for assessing the magnitude of these two errors.
We denote by w∗ the optimal portfolio, i.e., the solution of (1) with Σ̂ replaced by Σ. Since the latter is positive definite, the optimal portfolio weights w∗ may be given explicitly.^16 We define

Tŵ² = (w∗ − ŵ)ᵀ Σ (w∗ − ŵ),   (8)

the (squared) tracking error of ŵ. Here, Tŵ² measures the distance between the optimal and estimated portfolios, w∗ and ŵ. Specifically, it is the square of the width of the distribution of return differences w∗ − ŵ.
The variance of portfolio ŵ is given by ŵᵀΣ̂ŵ and its true variance is ŵᵀΣŵ. We define

Rŵ = ŵᵀΣ̂ŵ / ŵᵀΣŵ,   (9)

the variance forecast ratio. Ratio (9) is less than one when the risk of the portfolio ŵ is underforecast.^17
Metrics (8) and (9) quantify the errors in portfolio weights and risk forecasts induced by estimation error.^18 We analyze Tŵ and Rŵ asymptotically.

^16 Indeed, the Sherman-Morrison-Woodbury formula yields an explicit solution even for a multi-factor model and with a guaranteed mean return (El Karoui 2008). In our setting,

w∗ ∝ Δ⁻¹(1N βMV − β),   βMV = (1 + σ² Σ_{k=1}^N βk²/δk²) / (σ² Σ_{k=1}^N βk/δk²).   (7)

^17 With respect to the equally weighted portfolio we (the tracking error of which is zero), we only consider the variance forecast ratio.
^18 For a relationship to more standard error norms see Wang & Fan (2017).
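As a concrete companion to definitions (8) and (9), the following sketch (our own helper code, not the authors') evaluates both metrics for any pair (Σ, Σ̂); it assumes (1) is the fully invested minimum variance program, whose solution is w ∝ Σ⁻¹1N:

```python
import numpy as np

def min_var_weights(Sigma):
    """Assumed solution of (1): w proportional to Sigma^{-1} 1, normalized to sum to one."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

def tracking_error_sq(Sigma, Sigma_hat):
    """Squared tracking error (8) of the estimated minimum variance portfolio."""
    d = min_var_weights(Sigma) - min_var_weights(Sigma_hat)
    return d @ Sigma @ d

def variance_forecast_ratio(Sigma, Sigma_hat):
    """Forecast ratio (9): forecast risk over true risk of the estimated portfolio."""
    w_hat = min_var_weights(Sigma_hat)
    return (w_hat @ Sigma_hat @ w_hat) / (w_hat @ Sigma @ w_hat)
```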
Again recall z = 1N/√N and let γx,y = xᵀy. Note, γx,y = cos θx,y whenever x and y lie on the surface of a unit sphere as in Figure 1. Define

E = (γβ,z − γβ,β̂ γβ̂,z) / sin²θβ̂,z.   (10)

The variable E drives the asymptotics of our error metrics. Note that E = 0 when β̂ = β. Indeed, it is not difficult to show that for any estimates (σ̂, β̂, δ̂) satisfying Assumption 2.4, if E is bounded away from zero (i.e., infN E > 0),

Tŵ² ∼ µ²E²,   Rŵ ∼ N⁻¹δ̂² / (µ²E² sin²θβ̂,z)   (N ↑ ∞),   (11)

where µ² = σ²/N, which is bounded in N. This result (with all our assumptions above relaxed) is given by Proposition D.1 of Appendix D.
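Definition (10) is a one-liner in code; the helper below is our own transcription (unit-vector inputs assumed):

```python
import numpy as np

def error_driver(beta, beta_hat, z):
    """E in (10): (gamma_{beta,z} - gamma_{beta,beta_hat} * gamma_{beta_hat,z}) / sin^2(theta_{beta_hat,z})."""
    g_bz, g_bbh, g_bhz = beta @ z, beta @ beta_hat, beta_hat @ z
    return (g_bz - g_bbh * g_bhz) / (1.0 - g_bhz**2)   # sin^2 = 1 - cos^2
```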
Corollary 2.6 (PCA performance). Suppose Assumptions 2.2, 2.3 and 2.4 hold. For the PCA-estimated minimum variance portfolio ŵ, the squared tracking error Tŵ² is bounded away from zero and the variance forecast ratio Rŵ tends to zero as N ↑ ∞. In particular, for the PCA estimator, the error E in (10) satisfies

E ∼ γβ,z (1 − Ψ⁻²) / (1 − Ψ⁻²γ²β,z) a.s.   (12)

as N ↑ ∞, where Ψ > 1 is the random variable in (5).

Proof. Expression (12) follows from (5), Theorem 2.5 (i.e., γβ,z ∼ Ψγβ̂,z a.s.) and the identity 1 − γ²β̂,z = sin²θβ̂,z. The remaining claims follow from (11).
The result states that the variance forecast ratio for the minimum variance
portfolio wb that uses the PCA estimator of Section 2.2 is asymptotically 0.
The estimated risk will be increasingly optimistic as N → ∞. This is entirely
due to estimation error between the sample eigenvector and the population
eigenvector. As N grows, the forecast risk becomes negligible relative to the true
risk, rendering the PCA estimated minimum variance portfolio worse and worse.
The tracking error is also driven by the error of the sample eigenvector and for
increasing dimension, its proximity to the true minimum variance portfolio as
measured by tracking error is asymptotically bounded below (away from zero).
3 Bias correction
Our Theorem 2.5 characterizes the bias of the PCA estimator in terms of the vector z. This is the unique (up to negation) dispersionless vector on the unit sphere, i.e., its entries do not vary. Of course, when z = β, the PCA estimate β̂ will have higher dispersion with probability one. The argument works along the projection of any β along z, given by γβ,z. Our PCA bias characterization implies that γβ,z > γβ̂,z, or equivalently, θβ̂,z > θβ,z, with high probability (for large N). Figure 1b illustrates this systematic PCA bias and clearly suggests a correction: a shrinkage of the PCA estimate β̂ towards z.
3.1 Intuition for the correction
Given an estimate β̂, we analyze a parametrized correction of the form

β̂(ρ) = (β̂ + ρz) / ‖β̂ + ρz‖   (13)

for ρ ∈ R. The curve β̂(ρ) represents a geodesic between −z and z that passes through β̂ as ρ passes the origin. We select the optimal "shrinkage" parameter ρ∗ as the ρ that minimizes the error in our metrics Tŵ² and Rŵ asymptotically.
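The correction (13) maps directly to code; the sketch below is our own helper for the shrinkage family:

```python
import numpy as np

def corrected_beta(beta_hat, z, rho):
    """The geodesic family (13): shrink beta_hat toward z (rho > 0) or away (rho < 0)."""
    v = beta_hat + rho * z
    return v / np.linalg.norm(v)   # re-normalize to the unit sphere
```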
With S^{N−1} denoting the unit N-sphere, we define the space

Sβ = {x ∈ S^{N−1} : γβ,z − γx,z γx,β = 0}.   (14)
Lemma 3.1. Let β ≠ z. There is a ρ∗ such that β̂(ρ∗) ∈ Sβ and β̂(ρ∗) ≠ z.

Proof. By the spherical law of cosines, we obtain

γβ,z − γx,z γx,β = sin θx,z sin θx,β cos κ   (15)

where κ is the angle (in S^{N−1}) between the geodesic from β̂ to z and the one from β̂ to β (see Figure 2a). Write x = β̂(ρ) and κ = κρ. Then, cos κρ = 0 when ρ = ρ∗, for which the geodesic between β̂(ρ∗) and β is perpendicular to that between β̂ and z.^19 By construction, β̂(ρ∗) is not z and is in Sβ.
We observe that β̂(ρ∗) of Lemma 3.1 minimizes the asymptotic error of our metrics. In particular, replacing β̂ with β̂(ρ∗) ensures that E is zero. Note that β̂(ρ∗) ≠ z (unless β = z), i.e., we do not shrink β̂ all the way to z. While z ∈ Sβ, when β ≠ z, the asymptotic error E explodes as β̂(ρ) → z.
In a setting where β̂ is the PCA estimator, observe that our selection of β̂(ρ∗) does more than just correct PCA bias. Theorem 2.5, under the proper assumptions, states that γβ,z ∼ Ψγβ̂,z a.s. for a random variable Ψ > 1. This suggests taking ρ such that γβ,z equals γβ̂(ρ),z. This choice is not optimal, however, as it lies on the contour {x ∈ S^{N−1} : γx,z = γβ,z} (see Figure 2b). It is not in Sβ unless β = z, and its asymptotic error E is bounded away from zero.

^19 If β̂ and β do not lie on the same side of the sphere we must amend (14) by replacing β̂ with −β̂. Note, γβ,z − γ·,z γ·,β has the same value for x and −x.
Figure 2: Illustration of the spherical law of cosines and the geodesic β̂(ρ) between the points β̂ and z along which we shrink the PCA estimate β̂. Panel (a): The spherical law of cosines states γβ,z − γβ̂,z γβ̂,β = sin θβ̂,z sin θβ̂,β cos κ, where κ is the angle of the corner opposite θβ,z, the arc-length between β and z. Panel (b): Setting the angle κ = π/2 corresponds to setting the error-driving term E = 0; thus, β̂(ρ∗) ∈ Sβ.
3.2 Statement of the main theorem
As noted above, we can improve tracking error and variance forecast ratio by reducing the angle between the estimated eigenvector and the true underlying eigenvector β, equivalently, by replacing β̂ with an appropriate choice of β̂(ρ). We find an optimal value ρ∗N for a particular N. We present our method and its impact on tracking error and variance forecast ratio for a minimum variance portfolio below. We restrict to the case of homogeneous specific risk where Δ = δ²I for expositional purposes, but consider the full-fledged case in the empirical results of Section 5.
In the following theorem, we also provide a correction to the sample eigenvalue. Our bias correction for the sample eigenvector introduces a bias in the variance forecast ratio for the equally weighted portfolio. We shrink the sample eigenvalue, treated as the variance of the estimated factor, to debias the variance forecast ratio for the equally weighted portfolio.
Theorem 3.2. Suppose Assumptions 2.2, 2.3, 2.4 hold and denote by δ̂ any estimator of the specific risk δ. Define the oracle corrected estimate by β̂ρ∗ := β̂(ρ∗N), where the finite sample (fixed N and T) optimal value ρ∗N solves the equation 0 = ∇ρ γβ,β̂(ρ) (or equivalently, maximizes γβ,β̂(ρ)).

i. The finite sample optimal oracle ρ∗N is given by

ρ∗N = (γβ,z − γβ,β̂ γβ̂,z) / (γβ,β̂ − γβ,z γβ̂,z).   (16)

ii. For the oracle value ρ∗N, the tracking error and forecast variance ratio for the minimum variance portfolio for Σ̂ρ∗N = σ̂²β̂ρ∗β̂ρ∗ᵀ + δ̂²I satisfy

Tŵ² ∼ (1/N) δ² (γ²β̂ρ∗,z − γ²β,z) / ((1 − γ²β̂ρ∗,z)(1 − γ²β,z)) a.s.   (17)
Rŵ ∼ δ̂²/δ² a.s.   (18)

That is, after the optimal correction, the forecast variance ratio for the minimum variance portfolio no longer converges to 0 while the tracking error to the true minimum variance portfolio does.

iii. For T fixed, N → ∞, we have ρ∗N ∼ ρ̄N a.s., where

ρ̄N = γβ,z / (1 − γ²β,z) · (Ψ − Ψ⁻¹),   (19)

where Ψ² = 1 + δ²c1/ξ, ξ = χ²T/T, and χ²T has the chi-squared distribution with T degrees of freedom. Also, ρ̄N > 0 almost surely if γβ,z > 0. And the asymptotic improvement of the optimal angles θβ,β̂ρ∗N and θβ,β̂ρ̄N over the original angle θβ,β̂ as N → ∞ is

sin²θβ,β̂ρ∗N / sin²θβ,β̂ ∼ sin²θβ,β̂ρ̄N / sin²θβ,β̂ = (1 − γ²β,z) / (1 − (γβ,z/Ψ)²) a.s.
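Formula (16) translates directly into code. Note that the oracle value requires the unknown β, so the sketch below (our own helper) serves only as a simulation benchmark:

```python
import numpy as np

def oracle_rho(beta, beta_hat, z):
    """Oracle rho*_N of (16); needs the true beta, so it is an oracle benchmark only."""
    g_bz, g_bbh, g_bhz = beta @ z, beta @ beta_hat, beta_hat @ z
    return (g_bz - g_bbh * g_bhz) / (g_bbh - g_bz * g_bhz)
```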
Remark 3.3. Geometrically, there are two views of β̂ρ∗. One is that β̂ρ∗ is the projection of β̂ onto Sβ. The other is that β̂ρ∗ is the projection of β onto the geodesic defined by β̂(ρ). In either case, our goal is to find the intersection of the geodesic and the space Sβ.
Remark 3.4. While we consider a specific target, z, in principle the target does not matter. It is possible that these kinds of factor corrections can be applied beyond the first factor, given enough information to create a reasonable prior.
The first takeaway from this result is that in the high dimensional limit, it is always possible to improve on the PCA estimate by moving along the geodesic between β̂ and z. As γβ,z approaches 0 or for larger T, the optimal correction approaches 0. Conversely, as γβ,z approaches 1 or for smaller T, the magnitude of the correction is larger. For γβ,z = 1, the proper choice is naturally to choose z, since β and z are aligned in that case.
The improvement in the angle as measured by the ratio of squared sines is bounded in the interval (1 − γ²β,z, 1). As γβ,z approaches 0 or for larger T, the improvement diminishes and the ratio approaches 1. Conversely, for large values of c1, the improvement approaches 1 − γ²β,z, indicating that improvement is naturally constrained by how close β is to z in the first place.
In the application to the minimum variance portfolio, the initial idea is to
correct the sample eigenvector so that we reduce the angle to the population
eigenvector. However, it is not immediately clear that this should have a dramatic effect. Even more surprising is that underestimation of risk has a large
component due to sample eigenvector bias and not any sample eigenvalue bias.
While an improved estimate β̂ρ has the potential to greatly improve forecast
risk, this represents only a single dimension on which to evaluate the performance of a portfolio. We could be sacrificing small tracking error to the true
long-short minimum variance portfolio in exchange for better forecasting. That
however is not the case here.
Since ξ and Ψ are unobservable non-degenerate random variables, determining their realized values, even with asymptotic estimates, is an impossible
task. Hence perfect corrections to kill off the driving term of underestimation of
risk are not possible. However, it is possible to make corrections that materially
improve risk forecasts.
3.3 Eigenvalue corrections
Our bias correction, based on formula (14), adjusts the dominant eigenvector
of the sample covariance matrix S. It does not involve standard corrections to
the eigenvalues of S, which are well known to be biased. This distinguishes our
results from the existing literature (see Section 1.2).
For the purpose of improving accuracy of minimum variance portfolios,
there is no need to adjust the dominant eigenvalue. As shown in formulas
(11) of Section 2.3, the main drivers of T 2 and R for large minimum variance
portfolios do not depend on the dominant eigenvalue of S when returns follow
a one-factor model. This is a particular feature of minimum variance, where
the dominant factor is effectively hedged.
Since our correction removes a systematic form of bias, it can be used to improve the accuracy of other portfolios. If these portfolios have substantial factor exposure, however, a compatibility adjustment to the dominant eigenvalue may be required. Like our eigenvector adjustment, the compatibility adjustment to the eigenvalue is distinct from the eigenvalue corrections in the literature.
Here, we provide some discussion of compatible eigenvalue corrections for "simple" portfolios, i.e., those that do not depend on the realized matrix of returns R. Note that for these weights, the tracking error T is zero, so we treat the risk forecast ratio R only.
Under assumptions on our one-factor model that hold for most cases of interest, one can write, for a simple portfolio w, that

Rw ∼ (σ̂²N/σ²N) C(w, β, β̂)²   (20)

where C(w, β, β̂) = (wᵀβ)/(wᵀβ̂). Our correction addresses only the quantity C, but asymptotic formula (20) reveals that for simple portfolios, sample eigenvalues play a material role. Another difference between simple portfolios and minimum variance is that the estimate δ̂² of δ² does not play any role; the factor risk is all that matters. From the discussion in Section 2.2 we know that σ̂²N tends to be larger than σ²N for large N, but it is not a priori clear that an eigenvalue correction should aim to lower σ̂²N. This would depend on the behavior of the coefficient C(w, β, β̂) given the estimate β̂ and the simple portfolio w at hand. Moreover, a correction that decreases σ̂²N will adjust risk forecast ratios downward, potentially leading to unintended underforecasts. Thus, sample eigenvalue corrections should be coupled with those for the sample eigenvectors to balance their respective terms in (20).
We state a sharp result in this direction for the equally weighted portfolio, a widely used simple portfolio.

Proposition 3.5. Suppose Assumptions 2.2, 2.3, 2.4 hold. Let we = 1N/N and define the corrected eigenvalue via

σ̂²ρ∗N = (γβ̂,z / γβ̂ρ∗,z)² σ̂²N

where β̂ρ∗ is our corrected PCA estimate β̂ of Theorem 3.2. Then, the forecast variance ratio satisfies Rw ∼ χ²T/T a.s. as N → ∞, where χ²T has a chi-squared distribution with T degrees of freedom.

Note that we adjust σ̂²N downward since, by design, γβ̂,z ≤ γβ̂ρ∗,z.
4 Algorithm and extensions
For the precise statement of our algorithm see Appendix A. In what follows we address data-driven corrections for the case Δ = δ²I of Theorem 3.2 as well as the extension to heterogeneous specific risk.
4.1 Data-driven estimator for homogeneous specific risk
We introduce a procedure for constructing an estimator ρ̂ for the asymptotic oracle correction parameter ρ̄N given in (19). It is based on estimates of the specific variance δ² and of c1 from Assumption 2.2. From Yata & Aoshima (2012), we have the estimator

δ̂² = (Tr(S) − λ̂1) / (N − 1 − N/T),   (21)

where S is the sample covariance matrix for the data matrix R and λ̂1 = σ̂² is the first eigenvalue of S. A natural estimate for the true eigenvalue λ1(Σ) is λ̂ˢ1 = max{λ̂1 − δ̂²N/T, 0}. For λ̂1 sufficiently large, the estimate of c1 is given by

ĉ1 = N / (Tλ̂1 − Nδ̂²).   (22)

Given the estimates of δ² and c1, we need a precise value for ξ, as well as Ψ, in order to have a data-driven estimator. We approximate ξ by its expectation E[ξ] = 1 to obtain a completely data-driven correction parameter estimate ρ̂,

ρ̂ = Ψ̂γβ̂,z / (1 − (Ψ̂γβ̂,z)²) · (Ψ̂ − Ψ̂⁻¹),   (23)

where Ψ̂² = 1 + δ̂²ĉ1.
We compute the factor variance as

σ̂²ρ̂ = Φ²ρ̂ σ̂²,   Φρ̂ = γβ̂,z / γβ̂ρ̂,z,

where σ̂² = λ̂1 is the first sample eigenvalue of S.
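Putting (21)-(23) together, a sketch of the data-driven correction under homogeneous specific risk might look as follows (our own rendering; the function name and the use of the top eigenvector to read off λ̂1 are our choices):

```python
import numpy as np

def data_driven_rho(S, beta_hat, z, T):
    """Estimate rho-hat of (23) from the sample covariance S (N x N)."""
    N = S.shape[0]
    lam1 = float(beta_hat @ S @ beta_hat)            # lambda_1-hat = sigma^2-hat
    delta2 = (np.trace(S) - lam1) / (N - 1 - N / T)  # (21)
    c1 = N / (T * lam1 - N * delta2)                 # (22)
    Psi = np.sqrt(1.0 + delta2 * c1)                 # Psi-hat, using xi ~ E[xi] = 1
    g = Psi * float(beta_hat @ z)
    return g / (1.0 - g**2) * (Psi - 1.0 / Psi)      # (23)
```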
4.2 An extension to heterogeneous specific risk
Our analysis thus far has rested on the simplifying assumption that security return specific variances have a common value. Empirically, this is not the case, and the numerical experiments discussed below in Section 5 allow for the more complex and realistic case of heterogeneous specific variances. To address the issue, we modify both the oracle estimator and the data-driven estimator by rescaling betas by specific variance.
Under heterogeneous specific variance, the oracle value ρ∗N is given by the formula

ρ∗N = (γ^Δ̂β,z − γ^Δ̂β,β̂ γ^Δ̂β̂,z) / (γ^Δ̂β,β̂ − γ^Δ̂β,z γ^Δ̂β̂,z),   (24)

where γ^Δ̂x,y = xᵀΔ̂⁻¹y / √(γ^Δ̂x,x γ^Δ̂y,y) is a weighted inner product. Furthermore, the risk adjusted returns RΔ^{−1/2} have covariance Σ̃ given by

Σ̃ = σ²Δ^{−1/2}ββᵀΔ^{−1/2} + I.
For the risk adjusted returns, Theorem 3.2 holds. The oracle formula coupled with the risk adjusted returns suggests we use R̃ = RΔ̂^{−1/2} as the data matrix, where Δ̂ is the specific risk estimate from the standard PCA method. Why should we expect this to work? The purpose of the scaling RΔ^{−1/2} is to make the specific return distribution isotropic, and R̃ approximates that. Since we are only trying to obtain an estimator ρ̂ that is close to ρ∗N, this approximation ends up fine. For ellipses specified by Δ with relatively low eccentricity, the estimator in (23) actually works in practice, since the distribution is relatively close to isotropic. For larger eccentricity, we require the following adjustment just to get the data closer to an isotropic specific return distribution.
The updated formulas for the heterogeneous specific risk correction estimators are given below, and we use them in our numerical experiments. For an initial estimate of specific risk Δ̂, the modified quantities are

β̃ = Δ̂^{−1/2}β̂ / ‖Δ̂^{−1/2}β̂‖₂,   z̃ = Δ̂^{−1/2}z / ‖Δ̂^{−1/2}z‖₂,   (25)
S̃ = Δ̂^{−1/2}SΔ̂^{−1/2},   λ̃1 = λ̂1‖Δ̂^{−1/2}β̂‖₂²,   (26)
δ̃² = (Tr(S̃) − λ̃1) / (N − 1 − N/T),   ĉ1 = N / (Tλ̃1 − Nδ̃²),   (27)
ρ̂ = Ψ̂γβ̃,z̃ / (1 − (Ψ̂γβ̃,z̃)²) · (Ψ̂ − Ψ̂⁻¹).   (28)

We use the PCA estimate of the specific risk, Δ̂ = diag(S − σ̂²β̂β̂ᵀ), as the initial estimator.
Once we have the estimated ρ̂, we return to the original data matrix R
and apply the correction as before to β̂, the first eigenvector of the sample
covariance matrix. The method for correcting the sample eigenvalue remains
the same and we opt to recompute the specific variances using the corrected
factor exposures and variance.
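A compact sketch of this heterogeneous-risk pipeline (ours, not the authors' code; it assumes the initial PCA estimates σ̂² and β̂ have already been computed from S, and that the diagonal of S − σ̂²β̂β̂ᵀ is positive):

```python
import numpy as np

def heterogeneous_rho(S, beta_hat, sigma2_hat, z, T):
    """Estimate rho-hat via the rescaled quantities (25)-(28)."""
    Delta = np.diag(S) - sigma2_hat * beta_hat**2       # diag(S - sigma^2 beta beta'), as a vector
    d = 1.0 / np.sqrt(Delta)
    S_t = S * np.outer(d, d)                            # Delta^{-1/2} S Delta^{-1/2}
    b_t = d * beta_hat; b_t /= np.linalg.norm(b_t)      # (25)
    z_t = d * z;        z_t /= np.linalg.norm(z_t)
    lam1_t = sigma2_hat * np.sum((d * beta_hat)**2)     # (26)
    N = S.shape[0]
    delta2_t = (np.trace(S_t) - lam1_t) / (N - 1 - N / T)   # (27)
    c1_t = N / (T * lam1_t - N * delta2_t)
    Psi = np.sqrt(1.0 + delta2_t * c1_t)
    g = Psi * float(b_t @ z_t)
    return g / (1.0 - g**2) * (Psi - 1.0 / Psi)         # (28)
```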
5 Numerical study
We use simulation to quantify the dispersion bias and its mitigation in minimum variance and equally weighted portfolios. We design our simulations around a return generating process that is more realistic than the single-factor, homogeneous-specific-risk model featured in Corollary 2.6 and Theorem 3.2.^20 Returns to N ∈ N securities are generated by a multi-factor model with heterogeneous specific risk,

R = φB + ε,   (29)

where φ is the vector of factor returns, B is the matrix of factor exposures, and ε = (ε1, ε2, . . . , εN) is the vector of diversifiable specific returns.
Our examples are based on a model with K = 4 factors. For consistency with Sections 2 and 3, we continue to adopt the mathematically convenient convention of scaling exposure vectors to have L2 norm 1, and we denote the vector of exposures to the first factor (the first column of the exposure matrix B) by β. The recipe for constructing β with a target value γβ,z is to generate a random vector, rescale the vector to length 1, and then modify the component in the z direction to have the correct magnitude (while maintaining length 1). A similar approach would be to construct a random vector β′ with average equal to 1 and variance equal to τ², where τ² is the dispersion of the "market beta," and then rescale β′ to length 1 to obtain β. The parameter τ² would control the concentration of the market betas, just like γβ,z, and tends to be greater in calmer regimes. The connection between τ² and the dispersion parameter γβ,z is given by γ²β,z = 1/(1 + τ²).

^20 The simplistic setting for our theoretical results showcases the main theoretical tools used in their proofs without the distraction of regularity conditions required for generalization. Global equity risk models used by investors typically include more than 100 factors.
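A sketch of this recipe (our own; it builds the target projection exactly rather than by iterative adjustment):

```python
import numpy as np

def make_beta(N, gamma, rng):
    """Unit vector beta with prescribed projection gamma = beta' z on z = 1/sqrt(N)."""
    z = np.ones(N) / np.sqrt(N)
    v = rng.standard_normal(N)
    v -= (v @ z) * z                   # component orthogonal to z
    v /= np.linalg.norm(v)
    return gamma * z + np.sqrt(1.0 - gamma**2) * v   # unit norm, beta' z = gamma
```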
The three remaining factors are fashioned from equity styles such as volatility, earnings yield and size.21 We draw the exposure of each security to each
factor from a mean 0, variance 0.75 normal distribution and again normalize
each vector of factor exposures to have L2 length 1.
We calibrate the risk of the market factor in accordance with Clarke et al.
(2011) and Goldberg, Leshem & Geddes (2014); both report the annualized
volatility of the US market to be roughly 16%, based on estimates that rely
on decades of data. We calibrate the risk of factors 2, 3 and 4 in accordance
with Morozov, Wang, Borda & Menchero (2012, Table 4.3), by setting their
annualized volatilities to be 8%, 4% and 4%. We assume that the returns to
the factors are pairwise uncorrelated.
We draw annualized specific volatilities {δn2 } from a uniform distribution
on [32%, 64%]. This range is somewhat broader than the estimated range in
Clarke et al. (2011).22
In each experiment, we simulate a year’s worth of daily returns, T = 250,
to N securities. From this data set, we construct a sample covariance matrix,
S, from which we extract three estimators of the factor covariance matrix Σ.
The first is the data-driven estimator, the implementation specifics of which are
discussed in Section 4.2 and precisely summarized in Appendix A. The second is
the oracle estimator which is the same as the data-driven estimator but with the
true value of γβ,z , the projection of the true factor onto the z-vector, supplied.23
Our third estimator is classical PCA, which is specified in detail in Section 2.2.
We use the three estimated covariance matrices to construct minimum variance
portfolios and to forecast their risk. We also use these covariance matrices to
forecast the risk of an equally weighted portfolio.
In the experiments described below, we vary the number of securities, N ,
and the concentration of the market factor, γβ,z . We run 50 simulations for
each pair, (N, γβ,z ). Results are shown in Figure 3 for a fixed concentration
and varying number of securities, and in Figures 4 and 5 for a fixed number
of securities and varying concentration. Panels (a) and (b) in each figure show
^21 Seminal references on the importance of volatility (or beta), earnings yield and size in explaining cross-sectional correlations are Black, Jensen & Scholes (1972), Basu (1983) and Banz (1981).
^22 For example, the empirically observed range in Clarke et al. (2011) is [25%, 65%].
^23 For the data-driven estimator, this quantity is estimated from the observed data.
annualized tracking error and volatility for a minimum variance portfolio. Panels (c) and (d) show variance forecast ratio for a minimum variance portfolio
and an equally weighted portfolio.
Figure 3: Performance statistics for 50 simulated data sets of T = 250 observations as N varies and γβ,z = 0.9. Panel (a): Annualized tracking error for
minimum variance portfolios. Panel (b): Annualized volatility for minimum
variance portfolios. Panel (c): Variance forecast ratio for minimum variance
portfolios. Panel (d): Variance forecast ratio for equally weighted portfolios.
In Figure 3, the concentration of the market factor is γβ,z = 0.9. Panels (a),
(b) and (c) show that for minimum variance portfolios optimized with dispersion bias-corrected PCA models, tracking error and volatility decline materially
as N grows from 500 to 3000, ranges of outcomes compress, and variance forecast ratios are near 1 for all N considered. These desirable effects are less
pronounced, or even absent, in a PCA model without dispersion bias mitigation. A comparison of panels (c) and (d) highlights the difference in accuracy
of risk forecasts between a minimum variance portfolio and an equally weighted
portfolio. Dispersion bias mitigation materially improves variance forecast ratio
for the former and has no discernible impact for the latter.
Figure 4: Performance statistics for 50 simulated data sets of T = 250 observations as γβ,z varies and N = 500. Panel (a): Annualized tracking error for
minimum variance portfolios. Panel (b): Annualized volatility for minimum
variance portfolios. Panel (c): Variance forecast ratio for minimum variance
portfolios. Panel (d): Variance forecast ratio for equally weighted portfolios.
In Figure 4, the number of securities is N = 500. Panels (a), (b) and
(c) show that for minimum variance portfolios optimized with PCA models,
tracking error and volatility increase materially as γβ,z grows from 0.5 to 0.9,
variance forecast ratio diminishes, and the ranges of outcomes expand. These
undesirable effects are diminished when the dispersion bias is mitigated.
Results for values γβ,z ∈ [0.9, 1.0] are shown separately in Figure 5. The
severe decline in performance for all risk metrics for γβ,z in this range is a
consequence of the sea change in the composition of a minimum variance portfolio that can occur when the dominant eigenvalue of the covariance matrix
is sufficiently concentrated. Here, the true minimum variance portfolio tends
to lose its short positions.24 Errors in estimation of the dominant factor lead
to long-short optimized portfolios approximating long-only optimal portfolios.
The market factor is hedged in the former but not in the latter, and this discrepancy propagates to the error metrics.
A comparison of panels (c) and (d) in Figures 4 and 5 highlights, again,
the difference in accuracy of risk forecasts between a minimum variance portfolio and an equally weighted portfolio. Dispersion bias mitigation materially
improves variance forecast ratio for the former and has no discernible impact
on the latter.
^24 If specific variances are all equal, the minimum variance portfolio becomes equally weighted when the dominant eigenvector is dispersionless.
Figure 5: Performance statistics for 50 simulated data sets of T = 250 observations as γβ,z varies and N = 500. Panel (a): Annualized tracking error for
minimum variance portfolios. Panel (b): Annualized volatility for minimum
variance portfolios. Panel (c): Variance forecast ratio for minimum variance
portfolios. Panel (d): Variance forecast ratio for equally weighted portfolios.
A casual inspection of the figures suggests that the data-driven estimator performs nearly as well as the oracle. However, there can be substantial differences between the two estimators in some cases, as shown by the tracking error and variance forecast ratio of minimum variance portfolios in Figure 5 when γβ,z ∈ [0.9, 1.0]. The origin of the differences can be seen in Lemma 3.1. In order for the tracking error and variance forecast ratio to have good asymptotic properties, the estimator β̂ must lie in the null space Sβ. This condition is guaranteed for the oracle estimators but will not generally be satisfied for data-driven estimators.
6 Summary
In this article, we develop a correction for bias in PCA-based covariance matrix
estimators. The bias is excess dispersion in a dominant eigenvector, and the
form of the correction is suggested by formulas for estimation error metrics
applied to minimum variance portfolios.
We identify an oracle correction that optimally shrinks the sample eigenvector along a spherical geodesic toward the distinguished zero-dispersion vector, and we provide asymptotic guarantees that oracle shrinkage reduces both
types of error. These findings are especially relevant to equity return covariance matrices, which feature a dominant factor whose overwhelmingly positive
exposures tend to be overly dispersed by PCA estimators.
Our results fit into two streams of academic literature. The first is the
large-N -fixed-T branch of random matrix theory, which belongs to statistics.
The second is empirical finance, which features results about Bayesian adjustments to estimated betas.
To enable practitioners to use our results, we develop a data-driven estimator of the oracle. Simulation experiments support the practical value of our
correction, but much work remains to be done. That includes the development
of estimates of the size and likelihood of the exposure bias in finite samples,
the identification and correction of biases in other risk factors, and empirical
studies. Explicit formulas for error metrics in combination with the geometric
perspective in this article provide a way to potentially improve construction
and risk forecasts of investable optimized portfolios.
A Algorithm

Our corrected covariance matrix algorithm is given in Algorithm 1, where the input is a data matrix of returns R.
Algorithm 1: 1-Factor Bias Corrected PCA Covariance Estimator

Require: R = (R1, . . . , RT)
Require: z = 1N/√N
1: procedure BiasCorrectedPCACovariance(R)
2:   S ← (1/T) R Rᵀ
3:   Û ← [û1, . . . , ûN], Λ̂ ← Diag(λ̂1, . . . , λ̂N)   ▷ Eigendecomposition of S
4:   β̂ ← sign(û1ᵀ z) û1   ▷ Orient such that β̂ᵀz > 0
5:   σ̂² ← λ̂1
6:   L̂ ← σ̂² β̂ β̂ᵀ, Δ̂ ← Diag(S − L̂)   ▷ Initial PCA estimate
7:   S̃ ← Δ̂^{−1/2} S Δ̂^{−1/2},  λ̃1 ← λ̂1 ‖Δ̂^{−1/2}β̂‖₂²,  β̃ ← Δ̂^{−1/2}β̂/‖Δ̂^{−1/2}β̂‖₂,  z̃ ← Δ̂^{−1/2}z/‖Δ̂^{−1/2}z‖₂
8:   δ̃² ← (Tr(S̃) − λ̃1)/(N − 1 − N/T),  ĉ1 ← N/(Tλ̃1 − Nδ̃²),  Ψ̂² ← 1 + δ̃² ĉ1
9:   ρ̂ ← [Ψ̂ γβ̃,z̃ / (1 − (Ψ̂ γβ̃,z̃)²)] (Ψ̂ − Ψ̂⁻¹)
10:  β̂ρ̂ ← (β̂ + ρ̂z)/√(1 + 2ρ̂ γβ̂,z + ρ̂²),  σ̂²ρ̂ ← (γ²β̂,z / γ²β̂ρ̂,z) σ̂²
11:  L̂ρ̂ ← σ̂²ρ̂ β̂ρ̂ β̂ρ̂ᵀ,  Δ̂ρ̂ ← Diag(S − L̂ρ̂)
12:  return Σ̂ ← L̂ρ̂ + Δ̂ρ̂   ▷ The 1-factor bias corrected PCA estimator
13: end procedure
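For convenience, here is a Python rendering of Algorithm 1 (a sketch under the paper's stated assumptions, not the authors' code; R is the N × T matrix of returns):

```python
import numpy as np

def bias_corrected_pca_covariance(R):
    """1-factor bias corrected PCA covariance estimator (Algorithm 1 sketch)."""
    N, T = R.shape
    z = np.ones(N) / np.sqrt(N)
    S = R @ R.T / T                                        # step 2
    eigvals, eigvecs = np.linalg.eigh(S)                   # step 3
    beta = eigvecs[:, -1] * np.sign(eigvecs[:, -1] @ z)    # step 4: orient beta' z > 0
    sigma2 = eigvals[-1]                                   # step 5
    Delta = np.diag(S) - sigma2 * beta**2                  # step 6: initial PCA estimate
    d = 1.0 / np.sqrt(Delta)                               # step 7: risk-adjusted quantities
    S_t = S * np.outer(d, d)
    lam1_t = sigma2 * np.sum((d * beta)**2)
    b_t = d * beta / np.linalg.norm(d * beta)
    z_t = d * z / np.linalg.norm(d * z)
    delta2_t = (np.trace(S_t) - lam1_t) / (N - 1 - N / T)  # step 8
    c1_t = N / (T * lam1_t - N * delta2_t)
    Psi = np.sqrt(1.0 + delta2_t * c1_t)
    g = Psi * (b_t @ z_t)
    rho = g / (1.0 - g**2) * (Psi - 1.0 / Psi)             # step 9
    g_bz = beta @ z                                        # step 10: corrected exposures
    beta_rho = (beta + rho * z) / np.sqrt(1.0 + 2.0 * rho * g_bz + rho**2)
    sigma2_rho = (g_bz / (beta_rho @ z))**2 * sigma2
    L_rho = sigma2_rho * np.outer(beta_rho, beta_rho)      # step 11
    Delta_rho = np.diag(np.diag(S - L_rho))
    return L_rho + Delta_rho                               # step 12

# Usage: Sigma_hat = bias_corrected_pca_covariance(R)
```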
B Tables
N            500    1000   1500   2000   2500   3000
PCA          2.35%  2.18%  2.06%  2.01%  2.03%  2.00%
Oracle       1.64%  1.21%  0.97%  0.83%  0.77%  0.70%
Data-Driven  1.66%  1.21%  0.98%  0.84%  0.77%  0.71%
(a) Annualized Tracking Error: Minimum Variance

N            500    1000   1500   2000   2500   3000
PCA          5.10%  3.92%  3.34%  3.03%  2.91%  2.75%
Oracle       4.82%  3.47%  2.80%  2.42%  2.22%  2.01%
Data-Driven  4.82%  3.47%  2.81%  2.42%  2.23%  2.02%
(b) Annualized Volatility: Minimum Variance

N            500    1000   1500   2000   2500   3000
PCA          0.695  0.604  0.541  0.492  0.452  0.413
Oracle       0.963  0.962  0.960  0.961  0.963  0.961
Data-Driven  0.951  0.946  0.943  0.945  0.953  0.947
(c) Forecast Ratio: Minimum Variance

N            500    1000   1500   2000   2500   3000
PCA          0.983  0.991  1.014  1.011  1.001  1.006
Oracle       0.983  0.991  1.015  1.011  1.001  1.006
Data-Driven  0.983  0.991  1.015  1.011  1.001  1.006
(d) Forecast Ratio: Equal Weighted

Table 1: Median values derived from Figure 3. Performance statistics for 50 simulated data sets of T = 250 observations as N varies and γβ,z = 0.9. Panel (a): Annualized tracking error for minimum variance portfolios. Panel (b): Annualized volatility for minimum variance portfolios. Panel (c): Variance forecast ratio for minimum variance portfolios. Panel (d): Variance forecast ratio for equally weighted portfolios.
γβ,z         0.50   0.60   0.70   0.80   0.90   0.95   1.00
PCA          0.51%  0.59%  0.88%  1.26%  2.35%  3.72%  9.11%
Oracle       0.41%  0.42%  0.67%  0.86%  1.64%  2.91%  0.63%
Data-Driven  0.43%  0.45%  0.70%  0.89%  1.66%  2.94%  2.18%
(a) Annualized Tracking Error: Minimum Variance

γβ,z         0.50   0.60   0.70   0.80   0.90   0.95   1.00
PCA          2.42%  2.57%  2.99%  3.62%  5.10%  7.02%  20.95%
Oracle       2.40%  2.54%  2.94%  3.50%  4.82%  6.63%  18.88%
Data-Driven  2.41%  2.54%  2.95%  3.51%  4.82%  6.65%  18.99%
(b) Annualized Volatility: Minimum Variance

γβ,z         0.50   0.60   0.70   0.80   0.90   0.95   1.00
PCA          0.915  0.904  0.860  0.816  0.695  0.579  0.264
Oracle       0.958  0.965  0.949  0.959  0.963  0.967  1.001
Data-Driven  0.955  0.959  0.945  0.949  0.951  0.928  0.967
(c) Forecast Ratio: Minimum Variance

γβ,z         0.50   0.60   0.70   0.80   0.90   0.95   1.00
PCA          1.019  1.015  1.035  0.984  0.983  1.04   1.002
Oracle       1.020  1.015  1.035  0.984  0.983  1.04   1.002
Data-Driven  1.020  1.015  1.035  0.984  0.983  1.04   1.002
(d) Forecast Ratio: Equal Weighted

Table 2: Median values derived from Figures 4 and 5. Performance statistics for 50 simulated data sets of T = 250 observations as γβ,z varies and N = 500. Panel (a): Annualized tracking error for minimum variance portfolios. Panel (b): Annualized volatility for minimum variance portfolios. Panel (c): Variance forecast ratio for minimum variance portfolios. Panel (d): Variance forecast ratio for equally weighted portfolios.
C Proof of main results
We start off with some foundational asymptotic results from the literature. Let X ∼ N(0, Λ), where Λ = diag(λ1, λ2, . . . , λ2) is a diagonal matrix with λ1 satisfying Assumption C1 in Shen et al. (2016). Let SX = (1/T) X Xᵀ be the sample covariance for X with eigendecomposition SX = V̂ Λ̂ V̂ᵀ. Further let v̂1 = [v̂11, . . . , v̂N1]ᵀ be the first sample eigenvector, and define ṽ = [v̂21, . . . , v̂N1]ᵀ. By Paul (2007, Theorem 6), ṽ ∼ Unif(B(N − 2)), where B(N − 2) is a unit N − 2 sphere.
Via a simple scaling by λ2, by Shen et al. (2016, Theorem 6.3) we have e1ᵀv̂1 = v̂11 → Ψ⁻¹ a.s., where Ψ² = 1 + λ2c1/ξ and ξ = χ²T/T.
We introduce the following lemma, which we will use in the proofs of our results. We leave its proof until the end.

Lemma C.1. If Xn ∼ Unif(B(n − 1)) and Yn ∈ B(n − 1) is a sequence independent of Xn, then XnᵀYn → 0 a.s. as n → ∞.
Proof of Theorem 2.5. Let X = UᵀR, where U is the matrix of eigenvectors (β, u2, . . . , uN) of Σ, so that Cov(X) = Λ as introduced at the beginning of this section. Also as before, let SX = (1/T) X Xᵀ = V̂ Λ̂ V̂ᵀ be the sample covariance of X and its eigendecomposition. Then S = (1/T) R Rᵀ = U V̂ Λ̂ V̂ᵀ Uᵀ, so that the first sample eigenvector of S, β̂, is given by

β̂ = v̂11 β + Σ_{j=2}^N v̂j1 uj.

Then we have γβ̂,β = v̂11 and

γβ̂,z = v̂11 γβ,z + ṽᵀωN,

where ωN = [u2ᵀz, . . . , uNᵀz]ᵀ. As noted before, by Shen et al. (2016, Theorem 6.3), v̂11 → Ψ⁻¹ a.s. We know that both ‖ṽ‖2 and ‖ωN‖2 are bounded, as

‖ṽ‖2 ≤ 1,   1 = ‖Uᵀz‖₂² = (βᵀz)² + Σ_j (ujᵀz)² = γ²β,z + ‖ωN‖₂².

Therefore, from Lemma C.1 with XN = ṽ/‖ṽ‖2 and YN = ωN/‖ωN‖2, we have that ṽᵀωN converges almost surely to 0, so we conclude the result.
Proof of Theorem 3.2.
i. It is easy to verify that ρ∗N solves γβ,z − γβ̂(ρ),z γβ̂(ρ),β = 0 and maximizes γβ,β̂(ρ).
ii. For the oracle value ρ∗N, β̂ρ∗N ∈ Sβ, so the result follows by Lemma 3.1.
iii. Convergence of ρ∗N to ρ̄N stems from Theorem 2.5. It is also clear that ρ̄N > 0 if γβ,z > 0, since Ψ − Ψ⁻¹ > 0.
For the asymptotic improvement due to shrinkage, we rely on Björck & Golub (1973, Theorem 1), which shows that the principal angle can be derived from the singular value decomposition of

βᵀ [ z   (β̂ − γβ̂,z z)/√(1 − γ²β̂,z) ] = [ γβ,z   (γβ,β̂ − γβ̂,z γβ,z)/√(1 − γ²β̂,z) ].

By maximizing γβ,β̂(ρ) (or equivalently minimizing the angle between β and β̂(ρ)), we are directly choosing ρ∗N such that β̂ρ∗N is the principal vector with corresponding principal angle to β̂. Finding the vector in terms of the correction quantity is easier through direct maximization of γβ,β̂(ρ), and finding the improvement is easier through the principal angle computation, despite the results being equivalent.
From the above product, the squared cosine of the principal angle and its asymptotic value is

γ²β,β̂ρ∗N = γ²β,z + (γβ,β̂ − γβ̂,z γβ,z)² / (1 − γ²β̂,z) ∼ γ²β,z + Ψ⁻²(1 − γ²β,z)² / (1 − (γβ,z/Ψ)²) = γ²β,β̂ρ̄N a.s.

Since sin²θβ,β̂ = 1 − γ²β,β̂ → 1 − Ψ⁻² a.s., we see clearly that

sin²θβ,β̂ρ∗N / sin²θβ,β̂ ∼ sin²θβ,β̂ρ̄N / sin²θβ,β̂ = (1 − γ²β,z) / (1 − (γβ,z/Ψ)²) a.s.
Proof of Lemma C.1. By orthogonal invariance of Xn and the independence of Yn,

XnᵀYn = XnᵀQnᵀQnYn =D (QnXn)1 = Xn1,

where Qn is an orthogonal matrix such that QnYn = e1, e1 is the first canonical vector, and Xn1 is the first entry of Xn. We know from Muller (1959) that

Xn1 =D Z1 / √(Z1² + χ²_{n−1}),

where Z1 ∼ N(0, 1) and is independent of χ²_{n−1}. We have

P(|Xn1| ≥ ε) ≤ E[Xn1⁴]/ε⁴ ≤ (1/(ε⁴n²)) E[Z1⁴] E[(n/χ²_{n−1})²] ≤ C/n²,

where the inverse chi-squared distribution has finite moment for n large enough and C is some constant related to the moments of the standard normal distribution and the inverse chi-squared distribution. By the Borel-Cantelli lemma, we conclude the result.
Proof of Proposition 3.5. For the equally weighted portfolio w = (1/N)1N, using β̂ρ∗N we get

Rw = (σ̂²ρ∗N γ²β̂ρ∗,z (1/N) + δ̂² (1/N)) / (σ² γ²β,z (1/N) + δ² (1/N)) ∼ σ̂²ρ∗N γ²β̂ρ∗,z / (σ² γ²β,z) a.s.,

where σ̂²ρ∗N = (γβ̂,z/γβ̂ρ∗,z)² σ̂² is the corrected factor variance. Using Theorem 2.5,

Rw ∼ σ̂² γ²β̂,z / (σ² γ²β,z) ∼ σ² ξΨ² Ψ⁻²γ²β,z / (σ² γ²β,z) ∼ ξ a.s.
D Asymptotic estimates

Throughout, we assume Assumption 2.2. The estimates (σ̂, β̂, δ̂) are general (but with ‖β̂‖ = 1) and not necessarily from PCA. In our setting, formula (7) for the minimum variance portfolio simplifies to (cf. Clarke et al. (2011))

w∗ ∝ βMV z − β,   where βMV = (σ² + δ²) / (σ²γβ,z).

Analogously, ŵ and β̂MV are defined in terms of the estimates (σ̂, β̂, δ̂). The normalizing factor for the portfolio weights that ensures 1Nᵀw∗ = 1 is given by

W = √N (βMV − γβ,z).   (30)

We develop (N ↑ ∞)-asymptotics for the tracking error T and variance ratio R in equations (8)-(9) for our simple one-factor model. It is easy to see that

Rŵ = (σ̂²(β̂ᵀŵ)² + δ̂²‖ŵ‖₂²) / (σ²(βᵀŵ)² + δ²‖ŵ‖₂²),   (31)
Tŵ² = σ²(βᵀŵ − βᵀw∗)² + δ²‖w∗ − ŵ‖₂².   (32)

Note that all quantities involved depend on N. We suppress this dependence to ease the notation. Recall E in (10), defined as

E = (γβ,z − γβ,β̂ γβ̂,z) / (1 − γ²β̂,z).

Proposition D.1 (Asymptotics). Suppose Assumption 2.2 holds. Let µ² = σ²/N and µ̂² = σ̂²/N, and assume that (µ̂, δ̂) are bounded in N.

(i) Suppose that supN γβ,z < 1 and supN γβ̂,z < 1. Then,

Rŵ ∼ δ̂² / (δ² + µ²E²N sin²θβ̂,z),   Tŵ² ∼ µ²E² + δ²N⁻¹ (γ²β̂,z − γ²β,z) / ((1 − γ²β,z)(1 − γ²β̂,z))   (N ↑ ∞).   (33)

(ii) Suppose that supN γβ,z < 1 and γβ̂,z = 1 eventually in N. Then,

Rŵ ∼ µ̂² / (µ²γ²β,z),   Tŵ² ∼ µ²γ²β,z   (N ↑ ∞).   (34)

(iii) Suppose that γβ,z = 1 eventually in N and supN γβ̂,z < 1. Then,

Rŵ ∼ N⁻¹ (δ̂²/µ²) / sin²θβ̂,z,   Tŵ² = N⁻¹ δ²γ²β̂,z / sin²θβ̂,z   (N ↑ ∞).   (35)

(iv) Suppose that both γβ,z = 1 and γβ̂,z = 1 eventually in N. Then, Tŵ² = 0 eventually in N and Rŵ ∼ σ̂²/σ² as N ↑ ∞.

Proof. All claims follow from the collection of Lemmas below.
It is not difficult to show the following identities in our setting:

σ²(βᵀw∗)² = µ² (δ²γβ,z / (δ² + σ²(1 − γ²β,z)))²
σ²(βᵀŵ)² = µ² ((γβ,z δ̂² + σ̂²(γβ,z − γβ,β̂ γβ̂,z)) / (δ̂² + σ̂²(1 − γ²β̂,z)))²
w∗ᵀw∗ = ((σ² + δ²)² − (σ² + 2δ²)σ²γ²β,z) / (N(δ² + σ²(1 − γ²β,z))²)   (36)
ŵᵀw∗ = (δ² + σ² − σ²γβ,z E) / ((δ² + σ²(1 − γ²β,z))N),   E = ((δ̂² + σ̂²)γβ,z − σ̂²γβ̂,z γβ̂,β) / (δ̂² + σ̂²(1 − γ²β̂,z)).

These are sufficient to prove the following Lemmas. As in Proposition D.1, we set µ² = σ²/N and µ̂² = σ̂²/N and assume that (µ̂, δ̂) are bounded in N.

Lemma D.2 (True portfolio variance due to factor). When supN γβ,z < 1 and supN γβ̂,z < 1, the factor components of the true variance of w∗ and ŵ satisfy

σ²(βᵀw∗)² ∼ (δ⁴/(µ²N²)) (γβ,z / (1 − γ²β,z))²,   σ²(βᵀŵ)² ∼ µ²E²   (N ↑ ∞).   (37)

Lemma D.3 (Portfolio weights). Suppose supN γβ,z < 1 and supN γβ̂,z < 1. Then,

w∗ᵀw∗ ∼ N⁻¹ · 1/(1 − γ²β,z),   ŵᵀw∗ ∼ N⁻¹ (1 − γβ,z E)/(1 − γ²β,z)   (N ↑ ∞).   (38)
References
Bai, J. & Ng, S. (2008), 'Large dimensional factor analysis', Foundations and Trends in Econometrics 3(2), 89–163.
Banz, R. W. (1981), ‘The relationship between return and market value of
common stock’, Journal of Financial Economics 9, 3–18.
Basu, S. (1983), 'The relationship between earnings' yield, market value and return for NYSE common stocks', Journal of Financial Economics 12, 129–156.
Bianchi, S. W., Goldberg, L. R. & Rosenberg, A. (2017), ‘The impact of estimation error on latent factor models of portfolio risk’, The Journal of
Portfolio Management 43(5), 145–156.
Bickel, P. J. & Levina, E. (2008), ‘Covariance regularization by thresholding’,
The Annals of Statistics pp. 2577–2604.
Björck, Å. & Golub, G. H. (1973), 'Numerical methods for computing angles between linear subspaces', Mathematics of Computation 27(123), 579–594.
Black, F., Jensen, M. C. & Scholes, M. (1972), The capital asset pricing model:
some empirical tests, in M. C. Jensen, ed., ‘Studies in the Theory of Capital
Markets’, Praeger.
Blume, M. E. (1975), ‘Betas and their regression tendencies’, The Journal of
Finance 30(3), 785–795.
Britten-Jones, M. (1999), ‘The sampling error in estimates of mean-variance
efficient portfolio weights’, The Journal of Finance 54(2), 655–671.
Bun, J., Bouchaud, J. & Potters, M. (2016), ‘Cleaning correlation matrices’,
Risk Magazine .
Clarke, R., De Silva, R. & Thorley, S. (2011), ‘Minimum-variance portfolio
composition’, Journal of Portfolio Management 2(37), 31–45.
Connor, G. & Korajczyk, R. A. (1986), ‘Performance measurement with the arbitrage pricing theory: A new framework for analysis’, Journal of financial
economics 15, 373–394.
Connor, G. & Korajczyk, R. A. (1988), ‘Risk and return in equilibrium apt:
Application of a new test methodology’, Journal of financial economics
21, 255–289.
DeMiguel, V., Garlappi, L., Nogales, F. J. & Uppal, R. (2009), ‘A generalized
approach to portfolio optimization: Improving performance by constraining portfolio norms’, Management Science 55(5), 798–812.
DeMiguel, V., Garlappi, L. & Uppal, R. (2007), ‘Optimal versus naive diversification: How inefficient is the 1/n portfolio strategy?’, The review of
Financial studies 22(5), 1915–1953.
El Karoui, N. (2013), ‘On the realized risk of high-dimensional markowitz portfolios’, SIAM Journal on Financial Mathematics 4(1), 737–783.
El Karoui, N. E. (2008), ‘Spectrum estimation for large dimensional covariance
matrices using random matrix theory’, The Annals of Statistics pp. 2757–
2790.
El Karoui, N. et al. (2010), ‘High-dimensionality effects in the markowitz problem and other quadratic programs with linear constraints: risk underestimation’, The Annals of Statistics 38(6), 3487–3566.
Fan, J., Liao, Y. & Mincheva, M. (2013), ‘Large covariance estimation by
thresholding principal orthogonal complements’, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 75(4), 603–680.
Frost, P. A. & Savarino, J. E. (1986), ‘An empirical bayes approach to efficient portfolio selection’, Journal of Financial and Quantitative Analysis
21(3), 293–305.
Gillen, B. J. (2014), ‘An empirical bayesian approach to stein-optimal covariance matrix estimation’, Journal of Empirical Finance 29, 402–420.
Goldberg, L., Leshem, R. & Geddes, P. (2014), ‘Restoring value to minimum
variance’, Journal of Investment Management 12(2), 32–39.
Goldfarb, D. & Iyengar, G. (2003), ‘Robust portfolio selection problems’, Mathematics of operations research 28(1), 1–38.
Green, R. C. & Hollifield, B. (1992), ‘When will mean-variance efficient portfolios be well diversified?’, The Journal of Finance 47(5), 1785–1809.
Jagannathan, R. & Ma, T. (2003), ‘Risk reduction in large portfolios: Why imposing the wrong constraints helps’, The Journal of Finance 58(4), 1651–
1683.
Jobson, J. D. & Korkie, B. (1980), ‘Estimation for markowitz efficient portfolios’, Journal of the American Statistical Association 75(371), 544–554.
Lai, T. L. & Xing, H. (2008), Statistical models and methods for financial markets, Springer.
Lai, T. L., Xing, H. & Chen, Z. (2011), ‘Mean-variance portfolio optimization when means and covariances are unknown’, The Annals of Applied
Statistics pp. 798–823.
Ledoit, O. & Péché, S. (2011), 'Eigenvectors of some large sample covariance matrix ensembles', Probability Theory and Related Fields 151(1–2), 233–264.
Ledoit, O. & Wolf, M. (2003), ‘Improved estimation of the covariance matrix
of stock returns with an application to portfolio selection’, Journal of
empirical finance 10(5), 603–621.
Ledoit, O. & Wolf, M. (2004), ‘Honey, i shrunk the sample covariance matrix’,
The Journal of Portfolio Management 30, 110–119.
Markowitz, H. (1952), ‘Portfolio selection’, The Journal of Finance 7(1), 77–91.
Michaud, R. O. & Michaud, R. O. (2008), Efficient asset management: a practical guide to stock portfolio optimization and asset allocation, Oxford University Press.
Morozov, A., Wang, J., Borda, L. & Menchero, J. (2012), The barra global
equity model (gem3). MSCI Barra model documentation.
Muller, M. E. (1959), ‘A note on a method for generating points uniformly on
n-dimensional spheres’, Communications of the ACM 2(4), 19–20.
Onatski, A. (2012), ‘Asymptotics of the principal components estimator of large
factor models with weakly influential factors’, Journal of Econometrics
168(2), 244–258.
Pástor, L. (2000), ‘Portfolio selection and asset pricing models’, The Journal
of Finance 55(1), 179–223.
Paul, D. (2007), ‘Asymptotics of sample eigenstructure for a large dimensional
spiked covariance model’, Statistica Sinica pp. 1617–1642.
Ross, S. A. (1976), ‘The arbitrage theory of capital asset pricing’, Journal of
economic theory 13(3), 341–360.
Sharpe, W. F. (1964), ‘Capital asset prices: A theory of market equilibrium
under conditions of risk’, The Journal of Finance 19(3), 425–442.
Shen, D., Shen, H., Zhu, H. & Marron, S. (2016), 'The statistics and mathematics of high dimensional low sample size asymptotics', Statistica Sinica 26(4), 1747–1770.
Treynor, J. L. (1962), Toward a theory of market value of risky assets. Presented
to the MIT Finance Faculty Seminar.
Vasicek, O. A. (1973), ‘A note on using cross-sectional information in bayesian
estimation of security betas’, The Journal of Finance 28(5), 1233–1239.
Wang, W. & Fan, J. (2017), ‘Asymptotics of empirical eigenstructure for high
dimensional spiked covariance’, The Annals of Statistics 45(3), 1342–1374.
Yata, K. & Aoshima, M. (2012), ‘Effective pca for high-dimension, low-samplesize data with noise reduction via geometric representations’, Journal of
multivariate analysis 105(1), 193–215.
Optimal Beam Sweeping and Communication
in Mobile Millimeter-Wave Networks
arXiv:1801.09306v1 [] 28 Jan 2018
Nicolò Michelusi, Muddassar Hussain
Abstract
Millimeter-wave (mm-wave) communications incur a high beam alignment cost in mobile scenarios
such as vehicular networks. Therefore, an efficient beam alignment mechanism is required to mitigate
the resulting overhead. In this paper, a one-dimensional mobility model is proposed where a mobile
user (MU), such as a vehicle, moves along a straight road with time-varying and random speed, and
communicates with base stations (BSs) located on the roadside over the mm-wave band. To compensate
for location uncertainty, the BS widens its transmission beam and, when a critical beamwidth is achieved,
it performs beam-sweeping to refine the MU position estimate, followed by data communication over
a narrow beam. The average rate and average transmission power are computed in closed form and the
optimal beamwidth for communication, number of sweeping beams, and transmission power allocation
are derived so as to maximize the average rate under an average power constraint. Structural properties
of the optimal design are proved, and a bisection algorithm to determine the optimal sweeping –
communication parameters is designed. It is shown numerically that an adaptation of the IEEE 802.11ad
standard to the proposed model exhibits up to 90% degradation in spectral efficiency compared to the
proposed scheme.
I. INTRODUCTION
Millimeter-wave (mm-wave) technology has emerged as a promising solution to enable multi-Gbps communication, thanks to the abundant bandwidth available [1]. Mm-wave will be key
to supporting autonomous transportation systems by allowing vehicles to extend their sensing
range and make more informed decisions by exchanging rich sensory information [2]. It will also
enable a wide range of infotainment services such as digital maps, cloud computing, ultra-high
definition video streaming, etc. However, signal propagation at these frequencies poses several
N. Michelusi and M. Hussain are with the School of Electrical and Computer Engineering at Purdue University. Email:
{michelus,hussai13}@purdue.edu.
This research has been funded by the National Science Foundation under grant CNS-1642982.
challenges to the design of future communication systems supporting high throughput and high
mobility, such as high isotropic path loss and sensitivity to blockages [3]. Mm-wave systems
are expected to leverage narrow-beam communications to counteract the propagation loss [4] by
using large antenna arrays at both base stations (BSs) and mobile users (MUs).
However, sharp beams are susceptible to beam mis-alignment due to mobility or blockage,
necessitating frequent re-alignment. This task can be challenging, especially in mobile scenarios.
The beam alignment protocol may consume time, frequency, and energy resources, thus potentially offsetting the benefits of mm-wave directionality. Therefore, it is imperative to design
schemes to mitigate its overhead.
In this paper, we investigate the trade-off between beam alignment and data communication in
mobile mm-wave networks. We propose a beam-sweeping – data communication protocol that
accounts for the uncertainty on the location and speed of the MU and for the temporal overhead
of beam-sweeping. Specifically, the BS associated with the MU widens its transmission beam to
compensate for the increasing uncertainty on the MU location and, when a critical beamwidth is
achieved, it performs beam-sweeping to refine the MU’s position estimate and achieve a narrow
communication beam. We compute the performance in closed-form, and investigate the design of
the optimal beamwidth for communication, number of sweeping beams, and transmission power
so as to maximize the average rate under average power constraint. We find structural properties
and propose a bisection method to determine the optimal design. We show numerically that
an adaptation of IEEE 802.11ad to our model exhibits a performance degradation up to 90%
compared to our design.
Beam alignment in mm-wave has been a subject of intensive research due to its importance in
mm-wave communication systems. The research in this area can be categorized into beam-sweeping [5]–[8]; AoA/AoD estimation [9], [10]; and data-assisted schemes [2], [11]–[13].
Beam-sweeping based schemes require scanning of regions of uncertainty of AoA/AoD. The
simplest and yet most popular form of beam-sweeping is the so-called exhaustive search [5],
which sequentially scans through all possible beam pairs from the BS and MU codebooks and
selects the one with maximum signal power. This approach has been adopted in existing mm-wave standards including IEEE 802.15.3c [14] and IEEE 802.11ad [15]. The other popular
scheme is a hierarchical form of scanning called iterative search [6], where beam-sweeping is
first performed using wider beams followed by refinement with narrow beams. In our previous
work [8], we derived an energy-efficient scheme termed fractional search, which minimizes the
Fig. 1: System model.
energy consumption subject to a rate constraint: in each slot, the BS adaptively scans a fraction
of the uncertainty region of the AoD, function of the slot index, rate requirement, probabilities
of false-alarm and mis-detection, bandwidth, path loss, and noise power spectral density.
AoA/AoD estimation aims to reduce the number of measurements required by beam-sweeping
by leveraging the sparsity of mm-wave channels, e.g., via compressive sensing as in [9]. The
paper [10] derived an approximate maximum likelihood estimator for the channel by directly
exploiting the structure of the mm-wave channel. Data-aided schemes utilize information from
radar [11], lower frequencies [12], or positional information [2], [13] to reduce the cost of beam-sweeping. Based on this idea, the authors of [2] proposed a beamwidth optimization algorithm
that maximizes the data rate for non-overlapping beams. In contrast to [2], we propose an
analytical framework for the joint optimization of beamwidth, communication power and beam-sweeping to maximize the communication performance. To the best of our knowledge, we are
the first to propose an analytical framework for the optimization of the beam-sweeping and
communication parameters in mobile mm-wave networks.
The paper is organized as follows: in Sec. II, we present the system model and optimization
problem; in Sec. III, we present the analysis, followed by numerical results in Sec. IV; finally,
in Sec. V, we conclude with some remarks.
II. SYSTEM MODEL
We consider a dense cell deployment, as depicted in Fig. 1. The MU is associated with its
closest BS, at distance d. We assume that the BS points its beam perpendicularly to the motion of
the MU (a good approximation in dense cell deployments). A macro-cell unit controls functions
such as handover among cells. The time-scale of this task is larger than the beam-sweeping –
data communication cycle, and thus we neglect it. We neglect the additional overhead due to
channel estimation, Doppler correction, and the impact of beamwidth on Doppler spread (see
[16]).
A. User mobility model
The MU moves along a line (e.g., a vehicle along a road). Let (pt, vt) ∈ R² be its position and speed at time t. We assume that vt ∈ [vmin, vmax], where vmin < vmax (possibly, negative), and we let vdrift = (vmin + vmax)/2 be the drift velocity and φ ≜ vmax − vmin be the speed uncertainty. vt is time-varying and random, with arbitrary distribution in [vmin, vmax]. The speed parameters vdrift, φ are assumed to be known, and can be estimated from GPS information collected at the macro-cell (e.g., via lower frequency dedicated short range communication channels [17]). Herein, we assume that vdrift = 0, since a known non-zero drift can be incorporated by appropriate beam steering. Thus, it follows that vt ∈ [−φ/2, φ/2] and, given p0 at a reference time 0,

pt = p0 + ∫₀ᵗ vτ dτ ∈ [p0 − φt/2, p0 + φt/2].   (1)
In this paper, the uncertainty on the location of the MU at time t is denoted by the uncertainty interval Ut ≡[p̂t −ut /2, p̂t +ut /2], where p̂t is the median estimated position and ut is the
uncertainty width, so that pt ∈ Ut . From the mobility model (1), if no beam-sweeping is done
in the time interval [t, τ ], the uncertainty width augments at rate φ, i.e.,
uτ = ut + φ(τ − t), τ ≥ t,
(2)
and is reduced via beam-sweeping, as discussed in Sec. II-B.
The communication between BS and MU follows a beam-sweeping – data communication
cycle of duration T . We now describe the entire cycle, starting from the reference time t=0.
B. Beam Sweeping
When, at the reference time t = 0, the uncertainty width reaches a critical value u0 = uth , the
BS currently associated with the MU sweeps the entire uncertainty interval U0 using η ≥ 2, η ∈ N
beams, transmitted sequentially over η microslots, each of duration δS . During this interval, the
uncertainty width increases over time due to MU mobility. In order to compensate for it, the
BS scans wider regions over successive microslots, as detailed below. Thus, we let ωi be the
beamwidth of the ith beam, where i = 1, 2, . . . , η.
At the end of the beam-sweeping interval of duration ηδS , the MU processes the signals, and
feeds back to the BS the ID of the strongest signal (e.g., via a lower frequency control channel).
The BS uses such strongest beam to communicate with the MU in the data communication
phase, as detailed in Sec. II-C. We neglect the time to send this feedback signal.
{ωi , i = 1, 2, . . . , η} are designed with the following requirements: R1 – By the end of the
beam-sweeping phase, the entire uncertainty interval U0 must be scanned, plus the additional uncertainty resulting from the MU mobility during the beam-sweeping phase; R2 – the beamwidth
at the beginning of the data communication phase, uηδS , must be independent of the strongest
beam selected.
To guarantee R2, note that, if the ith beam, i = 1, 2, . . . , η is the strongest one detected (with
beamwidth ωi ), the uncertainty width at the end of the beam-sweeping phase becomes1
uηδS = dωi + (η + 1 − i)δS φ,
(3)
due to the MU mobility in the subsequent (η + 1 − i) microslots until the end of beam-sweeping.
Hence, R2 requires
ωi = ω1 + (i − 1) δSφ/d,   ∀i = 1, 2, . . . , η,   (4)

so that, at the end of beam-sweeping, the uncertainty width becomes

uηδS = dω1 + ηδSφ, ∀i.   (5)
We now discuss how to design ω1 (and ωi via (4)) so as to guarantee R1. At the reference
time 0, the uncertainty interval is [0, uth ]. In the first microslot, the BS scans the interval [0, dω1 ]
^1 Herein, we assume that ωi ≪ 2π, so that the length of the interval scanned in the ith microslot is 2d tan(ωi/2) ≈ dωi, see Fig. 1 (the beam is approximated as being pointed perpendicularly to the motion of the MU).
using a beam with beamwidth ω1. If the MU is within this interval, at the end of the beam-sweeping phase it will detect the ID of the strongest beam as #1, and the uncertainty width will
thus be given by (5). Otherwise (if the MU is outside of this interval), after the first microslot
the MU may be in the interval [dω1 − δS φ/2, uth + δS φ/2], which accounts for the additional
uncertainty due to the MU mobility in the time interval [0, δS ]. Thus, in the second microslot,
the BS scans the interval [dω1 − δS φ/2, dω1 + dω2 − δS φ/2] using a beam with beamwidth ω2 .
If the MU is within this interval, at the end of the beam sweeping phase it will detect the ID
of the strongest beam as #2, and the uncertainty width will thus be given by (5). Otherwise (if
the MU is outside of this interval), after the second microslot the MU may be in the interval
[dω1 + dω2 − δS φ, uth + δS φ], which accounts for the additional uncertainty due to the MU
mobility in the time interval [δS , 2δS ]. Thus, in the third microslot, the BS scans the interval
[dω1 + dω2 − δS φ, dω1 + dω2 + dω3 − δS φ] with a beam with beamwidth equal to ω3 , and so on.
By induction, at the beginning of the ith microslot, where i = 1, 2, . . . , η, i − 1 beams have been scanned. If the MU was located within one of the previous i − 1 beams (say the jth, j ≤ i − 1), it will detect the ID of the strongest beam as #j at the end of the beam-sweeping phase, and the uncertainty width will thus be given by (5). Otherwise (if the MU is located within one of the next beams i, i + 1, . . . , η), the MU may be in the interval [d Σ_{k=1}^{i−1} ωk − (i − 1)δS φ/2, uth + (i − 1)δS φ/2] at the beginning of the ith microslot, which accounts for the additional uncertainty due to the MU mobility in the time interval [0, (i − 1)δS]. Thus, in the ith microslot, the BS scans the interval [d Σ_{k=1}^{i−1} ωk − (i − 1)δS φ/2, d Σ_{k=1}^{i} ωk − (i − 1)δS φ/2] using a beam with beamwidth
ωi . If the MU is within this interval, it will detect the ID of the strongest beam as #i at the end
of the beam-sweeping period, and the uncertainty width will thus be given by (5). Otherwise
(if the MU is outside of this interval), at the end of the ith microslot the MU may be in the interval [d Σ_{k=1}^{i} ωk − iδS φ/2, uth + iδS φ/2], which accounts for the additional uncertainty due
to the MU mobility in the time interval [(i − 1)δS , iδS ].
Using a similar argument, in the last microslot (the ηth one), if the MU was not located within one of the previous η − 1 beams, then the MU will be located in the interval [d Σ_{k=1}^{η−1} ωk − (η − 1)δS φ/2, uth + (η − 1)δS φ/2] of width uth + (η − 1)δS φ − d Σ_{k=1}^{η−1} ωk. This must be scanned exhaustively with a beam of width ωη, hence

dωη = uth + (η − 1)δS φ − d Σ_{k=1}^{η−1} ωk.   (6)
By combining (6) with (4) we obtain, ∀i = 1, 2, . . . , η,

ωi = uth/(dη) − [(η − 1)(η − 2)/(2η)] δS φ/d + (i − 1) δS φ/d.   (7)
At the end of beam-sweeping, data communication begins and the new uncertainty width is given by (5), yielding

ucomm(uth, η) ≜ uηδS = uth/η + ηδS φ − δS φ (η − 1)(η − 2)/(2η),   (8)

which evolves over the data communication interval according to (2).
Note that a feasible beam is such that ωk ≥ 0, ∀k = 1, 2, . . . , η. Additionally, beam-sweeping must reduce the uncertainty width, i.e., ucomm(uth, η) ≤ uth. These two conditions together yield

uth ≥ δS φ max{ (η²/2 + 3η/2 − 1)/(η − 1), (η − 1)(η − 2)/2 }.   (9)
Herein, we assume that the correct sector is detected with no error by the MU (this requires
proper beam design to achieve small false-alarm and misdetection probabilities, see [18]).
C. Data Communication
Immediately after beam-sweeping, at time t = ηδS, the data communication phase begins, and the uncertainty width is uηδS = ucomm(uth, η). The uncertainty width ut increases over time due to the mobility of the MU, according to (2). The data communication period, and the beam-sweeping – data communication cycle, terminate at time T such that uT = uth, at which time a new cycle begins. From (2) we obtain

T = (η − 1)uth/(φη) + δS (η − 1)(η − 2)/(2η).   (10)
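As a concrete illustration, the following Python sketch (ours, not part of the paper) evaluates the closed-form design in (7), (8) and (10) and checks consistency with (5); the values of φ and uth below are assumed placeholders, while d and δS follow Table I.

import math

d, delta_S = 10.0, 10e-6      # BS-MU distance and microslot duration (Table I)
phi = 0.1                     # uncertainty growth rate [rad/s]; assumed value
u_th = 0.05                   # sweeping threshold [rad]; must satisfy Eq. (9)
eta = 4                       # number of sweeping beams, eta >= 2

def omega(i):
    """Eq. (7): beamwidth of the i-th sweeping beam, i = 1, ..., eta."""
    return (u_th / (d * eta)
            - (eta - 1) * (eta - 2) / (2 * eta) * delta_S * phi / d
            + (i - 1) * delta_S * phi / d)

# Eq. (8): uncertainty width at the start of data communication
u_comm = u_th / eta + eta * delta_S * phi \
         - delta_S * phi * (eta - 1) * (eta - 2) / (2 * eta)

# Eq. (10): duration of one beam-sweeping / data-communication cycle
T = (eta - 1) * u_th / (phi * eta) + delta_S * (eta - 1) * (eta - 2) / (2 * eta)

# Consistency with Eq. (5): d*omega_1 + eta*delta_S*phi equals u_comm
assert abs(d * omega(1) + eta * delta_S * phi - u_comm) < 1e-12
print([omega(i) for i in range(1, eta + 1)], u_comm, T)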
In the time interval [ηδS , T ], the transmission beam of the BS associated with the MU is chosen
so as to support reliable communication over the entire uncertainty interval. Its beamwidth is
thus chosen as ωt ≃ ut/d [rad].²
Remark 1. Note that in our model the beamwidth is varied continuously within a continuous set ω ∈ [ucomm(uth, η)/d, uth/d] for analytical tractability. This approach is a continuous approximation of a practical deployment where the system may operate at discrete times using a discrete codebook to generate transmission beams with different beamwidths [19].

² Note that we assume that ut/d ≪ 2π, so that we can approximate the beamwidth as ωt = 2 arctan(ut/(2d)) ≃ ut/d, see Fig. 1.
Let Pt be the transmission power per Hz at time t to communicate reliably. Assuming isotropic reception at the MU [7], [8], the instantaneous transmission rate is given by

Rt = Wtot log2(1 + γ Pt/ωt),   (11)

where Wtot is the bandwidth, γ ≜ λ²ξ/(8πd²N0Wtot) is the SNR scaling factor, λ is the wavelength, N0 is the noise power spectral density, and ξ is the antenna efficiency. Note that Pt is spread evenly across the angular directions covered by the transmission beams, so that Pt/ωt is the power per radian delivered to the receiver.
D. Performance Metrics and Optimization Problem
The optimal choice of the beam-sweeping and communication parameters reflects a trade-off
between locating the MU with high accuracy so as to achieve narrow-beam communication, and
mitigating the overhead in terms of sweeping time. This is the goal of our design.
Let η ≥ 2, η ∈ N, uth satisfying (9), and P : [ηδS, T] → R+ be the transmit power function in the data communication phase. We define the time-average communication rate and transmission power, defined over one beam-sweeping – data communication cycle [0, T], as

R̄(η, uth, P) = (Wtot/T) ∫_{ηδS}^{T} log2( 1 + dγPt/(ucomm(uth, η) + φt) ) dt,   (12)

P̄(η, uth, P) = (1/T) ∫_{ηδS}^{T} Pt dt.   (13)
The goal is to determine the optimal design of the joint data communication and beam-sweeping parameters (η, uth, P) so as to maximize the average rate under average power constraint Pmax > 0, i.e.,

P1:  (η, uth, P)* = arg max_{(η, uth, P)} R̄(η, uth, P),   (14)
      s.t. P̄(η, uth, P) ≤ Pmax.   (15)
The analysis is carried out in the next section.
III. ANALYSIS
Due to the concavity of the log2 function, Jensen’s inequality yields the following result.
Lemma 1. The optimal power allocation function P : [ηδS, T] → R+ is given by the water-filling scheme

Pt = [ρ − ut/(dγ)]⁺, ∀t ∈ [ηδS, T],   (16)

where ρ ≥ ucomm(uth, η)/(dγ) is a parameter to optimize.
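As a small illustration (ours), the water-filling rule (16) in Python; ρ, d, γ and the sampled values of ut are assumed for the example.

def waterfilling_power(u_t, rho, d, gamma):
    """Eq. (16): P_t = [rho - u_t/(d*gamma)]^+ for uncertainty width u_t."""
    return max(rho - u_t / (d * gamma), 0.0)

# The uncertainty width grows during the data phase, so the allocated power
# decreases over time and may reach zero ("idle" time).
d, gamma, rho = 10.0, 1.0, 0.004
for u_t in (0.02, 0.03, 0.04, 0.05):
    print(u_t, waterfilling_power(u_t, rho, d, gamma))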
Under the water-filling power allocation, the design space is simplified to (η, uth, ρ), where η ≥ 2, η ∈ N, uth satisfies (9), and ρ ≥ ucomm(uth, η)/(dγ). The average rate and average transmission power can be computed in closed form and are given by³
R̄(η, uth, ρ) = [Wtot/(ln(2)φT)] [ (uth − ucomm(uth, η))(1 + ln(dγρ)) − uth ln(uth) + ucomm(uth, η) ln(ucomm(uth, η)) + χ(dγρ ≤ uth)( uth ln(uth/(dγρ)) + dγρ − uth ) ],   (17)

P̄(η, uth, ρ) = χ(dγρ ≤ uth) (uth − dγρ)²/(2dφγT) + [(uth − ucomm(uth, η))/(2dφγT)] (2dγρ − uth − ucomm(uth, η)),   (18)
where χ(·) denotes the indicator function.
It is useful to define the following change of variables:

υ ≜ uth/(δS φ),   (19)

ζ ≜ dγρ/(δS φυ) − 1 ≥ ucomm(uth, η)/(δS φυ) − 1.   (20)

³ We replace the dependence on the power allocation function P with the parameter ρ.
The performance metrics (17)-(18) can thus be expressed as

ûcomm(υ, η) ≜ ucomm(uth, η)/(δS φ) = υ/η + η/2 + 3/2 − 1/η,   (21)

R̂(η, υ, ζ) ≜ (ln(2)/Wtot) R̄(η, uth, ρ) = [η/((η − 1)(υ + η/2 − 1))] × [ (υ − ûcomm(υ, η))(1 + ln(1 + ζ)) − ûcomm(υ, η) ln(υ/ûcomm(υ, η)) + χ(ζ < 0) υ (ζ − ln(1 + ζ)) ],   (22)

P̂(η, υ, ζ) ≜ (dγ/(δS φ)) P̄(η, uth, ρ) = η υ² ζ² χ(ζ < 0)/(2(η − 1)(υ + η/2 − 1)) + [η(υ − ûcomm(υ, η))/(2(η − 1)(υ + η/2 − 1))] (2υ(1 + ζ) − υ − ûcomm(υ, η)),   (23)
where (9) and ρ ≥ ucomm(uth, η)/(dγ) yield the feasible set

Fη ≡ { (υ, ζ) : υ ≥ υmin(η), ζ ≥ ûcomm(υ, η)/υ − 1 },

and we have defined

υmin(η) ≜ max{ (η² + 3η − 2)/(2(η − 1)), (η − 1)(η − 2)/2 }.   (24)
Note that we have normalized the average rate and transmission power, so that they no longer
depend on the system parameters Wtot , φ, d, γ, δS . This is beneficial since it unifies the structure
of the optimal design in a wide range of scenarios.
The optimization problem thus becomes

P2:  (η, υ, ζ)* = arg max_{η ≥ 2, η ∈ N, (υ, ζ) ∈ Fη} R̂(η, υ, ζ)   (25)
      s.t. P̂(η, υ, ζ) ≤ P̂max,   (26)

where P̂max = (dγ/(δS φ)) Pmax.
This optimization problem is non-convex. We have the following
structural result.
Theorem 1. ζ < 0 is suboptimal.
Proof. See Appendix A.
The intuition behind Theorem 1 is that, if ζ < 0, then the water-filling power allocation is
such that Pt = 0 during a portion of the data communication phase. This is suboptimal: it is
more energy-efficient to reduce the beam-sweeping threshold uth and increase ζ so as to reduce
the "idle" time interval in the communication phase.
Thus, in the following we focus on the case ζ ≥ 0. Note that P̂(η, υ, ζ) needs to satisfy the power constraint. Since it is an increasing function of ζ, we must have P̂(η, υ, 0) ≤ P̂max to obtain a feasible solution, yielding

υ ≤ (η² + 3η − 2)/(2(η − 1)) + [η P̂max/(η − 1)] (1 + √(1 + 2η/P̂max)) ≜ υmax(η).   (27)
Note that υ must also satisfy the constraint υ ≥ υmin(η), hence we must have υmax(η) ≥ υmin(η). If η ≤ 4, then (η² + 3η − 2)/(2(η − 1)) > (η − 1)(η − 2)/2 and any υ satisfying (27) also satisfies υ ≥ υmin(η). On the other hand, if η ≥ 5 then (η² + 3η − 2)/(2(η − 1)) < (η − 1)(η − 2)/2 and υmax(η) ≥ υmin(η) is equivalent to

P̂max ≥ (1/2)(η² − 5η + 2)²/(η² − 4η + 2), for η ≥ 5.   (28)
Since the right hand side is an increasing function of η ≥ 5, we conclude that there exists
4 ≤ ηmax < ∞ such that the problem is feasible for all 2 ≤ η ≤ ηmax (indeed, the problem is
always feasible for η ∈ {2, 3, 4} since υmax (η) ≥ υmin (η) in this case). We thus define the new
feasibility set as
F ≡ {(υ, η) : 2 ≤ η ≤ ηmax , η ∈ N, υmin (η) ≤ υ ≤ υmax (η)} .
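These feasibility bounds are straightforward to evaluate; the following Python sketch (an illustration, with P̂max as an assumed input) computes υmin from (24), υmax from (27), and finds ηmax by scanning η.

import math

def upsilon_min(eta):
    """Eq. (24)."""
    return max((eta**2 + 3*eta - 2) / (2*(eta - 1)), (eta - 1)*(eta - 2)/2)

def upsilon_max(eta, P_hat_max):
    """Eq. (27)."""
    return ((eta**2 + 3*eta - 2) / (2*(eta - 1))
            + eta * P_hat_max / (eta - 1) * (1 + math.sqrt(1 + 2*eta/P_hat_max)))

def eta_max(P_hat_max):
    """Largest eta for which the problem stays feasible (always >= 4)."""
    eta = 4
    while upsilon_max(eta + 1, P_hat_max) >= upsilon_min(eta + 1):
        eta += 1
    return eta

print(eta_max(10.0))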
Let (υ, η) ∈ F. Under such a pair, P̂(η, υ, ζ) and R̂(η, υ, ζ) are increasing functions of ζ ≥ 0, hence the optimal ζ is such that the power constraint is attained with equality. We thus obtain ζ as a function of (υ, η) as

ζ(υ, η) ≜ [(η − 1)(υ + η/2 − 1)/(ηυ[υ − ûcomm(υ, η)])] (P̂max − P̂(η, υ, 0)).   (29)
Since the power constraint is satisfied with equality for (υ, η) ∈ F and ζ = ζ(υ, η), the optimization problem becomes unconstrained, yielding

P3:  (η, υ)* = arg max_{(υ, η) ∈ F} R̂(η, υ, ζ(υ, η)),   (30)
and ζ* = ζ(υ*, η*), where

R̂(η, υ, ζ(υ, η)) = [η/((η − 1)(υ + η/2 − 1))] × [ (υ − ûcomm(υ, η))(1 + ln(1 + ζ(υ, η))) − ûcomm(υ, η) ln(υ/ûcomm(υ, η)) ].   (31)-(32)
We solve the optimization problem as follows: for each 2 ≤ η ≤ ηmax, we solve

υ*(η) = arg max_{υmin(η) ≤ υ ≤ υmax(η)} R̂(η, υ, ζ(υ, η)).   (33)
Then, the optimal η* and υ* are found by optimizing η via exhaustive search over the finite discrete set {2, 3, . . . , ηmax},

η* = arg max_{η ∈ {2, 3, ..., ηmax}} R̂(η, υ*(η), ζ(υ*(η), η)),   (34)

and υ* = υ*(η*).
A. Solution of (33) given η ∈ {2, 3, . . . , ηmax }
In this section, we investigate how to compute υ ∗ (η) given η ∈ {2, 3, . . . , ηmax }. We have the
following theorem.
Theorem 2. Given η ∈ {2, 3, . . . , ηmax}, the optimal υ*(η) is given by

υ*(η) = max{ (η − 1)(η − 2)/2, υ̂ },   (35)

where υ̂ is the unique solution in ( (η² + 3η − 2)/(2(η − 1)), υmax(η) ) of fη(υ) = 0, where

fη(υ) ≜ − [(υ − ûcomm(υ, η))/(υ(1 + ζ(υ, η)))] [((η − 1)(υ + η/2 − 1) + 2η)/(2η)]
       − ζ(υ, η)(η − 1)(υ + η/2 − 1)/(η(1 + ζ(υ, η)))
       + η ln(1 + ζ(υ, η)) + (η/2 + 1) ln(υ/ûcomm(υ, η)).   (36)
Proof. See Appendix B.
The function fη(υ) is proportional to the derivative of R̂(η, υ, ζ(υ, η)) with respect to υ, up to a positive multiplicative factor. Note that υ̂ can be determined using the bisection method. In fact, fη(υ) is a decreasing function of υ (see proof of the theorem in [20]), with

lim_{υ → (η² + 3η − 2)/(2(η − 1))} fη(υ) = ∞ and fη(υmax(η)) < 0.   (37)
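A minimal Python sketch of this bisection search follows; fη is passed in as a callable, the bracket endpoints come from (37), and this is only an illustration, not the authors' implementation [20].

def find_upsilon_hat(f_eta, eta, upsilon_max_eta, tol=1e-9):
    """Bisection on the bracket of (37); f_eta is strictly decreasing there."""
    lo = (eta**2 + 3*eta - 2) / (2*(eta - 1))   # f_eta -> +inf as v -> lo
    hi = upsilon_max_eta                        # f_eta(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_eta(mid) > 0:
            lo = mid   # f_eta still positive: upsilon_hat lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)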
IV. NUMERICAL RESULTS
In this section, we present numerical results to demonstrate the performance of the proposed beam-sweeping – data communication protocol. We compare our proposed scheme with an adaptation of IEEE 802.11ad to our model, in which partially overlapping beams of 7° beamwidth are employed such that adjacent beams share half of the beam area. Moreover, to evaluate this scheme we assume a worst-case scenario where the vehicle moves with either speed vmax or vmin = −vmax. Therefore, with IEEE 802.11ad, beam alignment is required after each r/vmax [s] (the time required for the MU to move to the edge of the beam), where r = d tan(7°/2). Once the edge of the beam is reached (thus, the MU is located in either position p ∈ {−r, r}), the BS scans the two beams covering the intervals [−2r, 0] and [0, 2r], each with 7° beamwidth, so that the time overhead of beam sweeping is 2δS. Immediately after, the strongest beam is detected and data communication proceeds. Then, the fraction of time spent in data communication is given as

fcomm = (r/vmax)/(r/vmax + 2δS),   (38)

and the average throughput of IEEE 802.11ad is given as

R̄11ad = Wtot log2(1 + γ Pt/(7π/180)) × fcomm,   (39)
TABLE I: Simulation parameters

Parameter             Symbol   Value
Carrier frequency     fc       60 GHz
Bandwidth             Wtot     1.76 GHz
Noise PSD             N0       −174 dBm/Hz
Microslot duration    δS       10 µs
Distance BS-MU        d        10 m
Antenna efficiency    ξ        1
Fig. 2: Average spectral efficiency versus average power.
and the average power as P̄11ad = Pt × fcomm . The common parameters of the simulation are
given in Table I.
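For reference, a short Python sketch of the baseline computation (38)-(39); vmax, Pt and γ are assumed values for illustration, while the remaining parameters follow Table I.

import math

d, delta_S, W_tot = 10.0, 10e-6, 1.76e9   # from Table I
v_max, P_t, gamma = 30.0, 1e-9, 1e6       # assumed values for illustration

beamwidth = 7 * math.pi / 180                       # fixed 7-degree beams
r = d * math.tan(beamwidth / 2)                     # edge of the beam
f_comm = (r / v_max) / (r / v_max + 2 * delta_S)    # Eq. (38)
R_11ad = W_tot * math.log2(1 + gamma * P_t / beamwidth) * f_comm  # Eq. (39)
P_11ad = P_t * f_comm                               # average power
print(R_11ad / W_tot, f_comm)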
In Fig. 2, we plot the average spectral efficiency R̄/Wtot versus the average power consumption
P̄ . A monotonic trend between the spectral efficiency and the average power is observed.
Moreover, the performance of the system deteriorates as we increase the speed, due to the
increasing overhead of beam alignment. Additionally, we observe that IEEE 802.11ad performs poorly since it uses fixed 7° beams which are not optimized to the specific mobile scenario, with degradation up to 90% compared to our proposed scheme.
In Fig. 3, we plot the effect of speed on the spectral efficiency for two different values of the
average power P̄ . It can be seen that the spectral efficiency of the proposed scheme degrades
monotonically as the speed vmax is increased. Moreover, the performance improves with higher
value of P̄ as observed also in Fig. 2. It can be noticed that the curves corresponding to IEEE
802.11ad do not show significant degradation as the speed is increased. This is due to the
Fig. 3: Average spectral efficiency versus speed.
relatively wide beam used in IEEE 802.11ad, so that beam alignment is relatively infrequent.
However, the performance of IEEE 802.11ad is poor compared to our proposed scheme.
V. CONCLUSION
In this paper, we propose a one-dimensional mobility model where a vehicle moves along a
straight road with time-varying and random speed and communicates with base stations located
on the roadside over the mm-wave band. We propose a beam-sweeping – data communication
protocol and study its performance in closed form. We derive structural properties of the optimal
design, based on which we design a bisection algorithm. We compare numerically our proposed
design to an adaptation of IEEE 802.11ad to our model, which exhibits performance degradation
up to 90%.
REFERENCES
[1] J. Choi, V. Va, N. Gonzalez-Prelcic, R. Daniels, C. R. Bhat, and R. W. Heath, “Millimeter-wave vehicular communication
to support massive automotive sensing,” IEEE Communications Magazine, vol. 54, no. 12, pp. 160–167, December 2016.
[2] V. Va, T. Shimizu, G. Bansal, and R. W. Heath, "Beam design for beam switching based millimeter wave vehicle-to-infrastructure communications," in 2016 IEEE International Conference on Communications (ICC), May 2016, pp. 1–6.
[3] T. S. Rappaport, Wireless communications: principles and practice. Prentice Hall PTR, 2002.
[4] M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter Wave Channel
Modeling and Cellular Capacity Evaluation,” IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp.
1164–1179, June 2014.
[5] C. Jeong, J. Park, and H. Yu, “Random access in millimeter-wave beamforming cellular networks: issues and approaches,”
IEEE Communications Magazine, vol. 53, no. 1, pp. 180–185, January 2015.
[6] V. Desai, L. Krzymien, P. Sartori, W. Xiao, A. Soong, and A. Alkhateeb, "Initial beamforming for mmWave communications," in 48th Asilomar Conference on Signals, Systems and Computers, Nov 2014, pp. 1926–1930.
[7] M. Hussain and N. Michelusi, “Throughput optimal beam alignment in millimeter wave networks,” in Information Theory
and Applications Workshop (ITA), Feb 2017, pp. 1–6.
[8] ——, “Energy efficient beam alignment in millimeter wave networks,” in 2017 Asilomar Conference on Signals, Systems,
and Computers, 2017, to appear.
[9] A. Alkhateeb, O. E. Ayach, G. Leus, and R. W. Heath, “Channel estimation and hybrid precoding for millimeter wave
cellular systems,” IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 5, pp. 831–846, Oct 2014.
[10] Z. Marzi, D. Ramasamy, and U. Madhow, “Compressive channel estimation and tracking for large arrays in mm-wave
picocells,” IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 3, pp. 514–527, April 2016.
[11] N. González-Prelcic, R. Méndez-Rial, and R. W. Heath, “Radar aided beam alignment in mmwave v2i communications
supporting antenna diversity,” in Information Theory and Applications Workshop (ITA), Jan 2016, pp. 1–7.
[12] T. Nitsche, A. B. Flores, E. W. Knightly, and J. Widmer, “Steering with eyes closed: Mm-wave beam steering without
in-band measurement,” in 2015 IEEE Conference on Computer Communications (INFOCOM), April 2015, pp. 2416–2424.
[13] V. Va, J. Choi, T. Shimizu, G. Bansal, and R. W. Heath, “Inverse Multipath Fingerprinting for Millimeter Wave V2I Beam
Alignment,” IEEE Transactions on Vehicular Technology, vol. PP, no. 99, pp. 1–1, 2017.
[14] “IEEE Std 802.15.3c-2009,” IEEE Standard, pp. 1–200, Oct 2009.
[15] “IEEE Std 802.11ad-2012,” IEEE Standard, pp. 1–628, Dec 2012.
[16] V. Va, J. Choi, and R. W. Heath, “The Impact of Beamwidth on Temporal Channel Variation in Vehicular Channels and
Its Implications,” IEEE Transactions on Vehicular Technology, vol. 66, no. 6, pp. 5014–5029, June 2017.
[17] J. B. Kenney, “Dedicated short-range communications (dsrc) standards in the united states,” Proceedings of the IEEE,
vol. 99, no. 7, pp. 1162–1182, July 2011.
[18] M. Hussain, D. J. Love, and N. Michelusi, "Neyman-Pearson Codebook Design for Beam Alignment in Millimeter-Wave Networks," in Proceedings of the 1st ACM Workshop on Millimeter-Wave Networks and Sensing Systems. New York, NY, USA: ACM, 2017, pp. 17–22.
[19] S. Noh, M. D. Zoltowski, and D. J. Love, “Multi-Resolution Codebook and Adaptive Beamforming Sequence Design for
Millimeter Wave Beam Alignment,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 5689–5701, Sept
2017.
[20] N. Michelusi and M. Hussain, “Optimal Beam Sweeping and Communication in Mobile Millimeter-wave Networks,”
Purdue University, Tech. Rep., 2017, https://engineering.purdue.edu/~michelus/ICC2018.pdf.
APPENDIX A
PROOF OF THEOREM 1
Proof. First, note that if ζ = ûcomm(υ, η)/υ − 1, then P̂(η, υ, ζ) = 0 < P̂max and R̂(η, υ, ζ) = 0. This configuration is clearly suboptimal since a non-zero rate can be achieved by increasing ζ.

Now, let ûcomm(υ, η)/υ − 1 < ζ < 0 and assume this configuration is optimal. Note that this implies υ > ûcomm(υ, η), or equivalently υ > (η² + 3η − 2)/(2(η − 1)).

We have two cases: 1) υ > (η − 1)(η − 2)/2 and 2) υ = (η − 1)(η − 2)/2 (and consequently η ≥ 5, since we must also have υ > (η² + 3η − 2)/(2(η − 1))).
a) υ > (η − 1)(η − 2)/2: We show that, by increasing ζ and decreasing υ so as to preserve the power consumption, the rate strictly increases, and thus we achieve a contradiction. From (22) and (23) with ζ < 0 we obtain

R̂(η, υ, ζ) = [η ûcomm(υ, η)/((η − 1)(υ + η/2 − 1))] × [ (υ/ûcomm(υ, η))(1 + ζ) − 1 − ln( (υ/ûcomm(υ, η))(1 + ζ) ) ],   (40)

P̂(η, υ, ζ) = [η/(2(η − 1)(υ + η/2 − 1))] (υ(1 + ζ) − ûcomm(υ, η))².   (41)
We increase ζ by h > 0 (arbitrarily small) and decrease υ by a function g(h) > 0, so as to maintain the power consumption unaltered, i.e.,

P̂(η, υ, ζ) = P̂(η, υ − g(h), ζ + h).   (42)

In the limit h → 0 we must have

dP̂(η, υ, ζ)/dζ − g′(0) dP̂(η, υ, ζ)/dυ = 0,   (43)
where g′(0) is the derivative of g(h) at zero, which must be positive since g(h) > 0 for arbitrarily small h. To show this, note that

dP̂(η, υ, ζ)/dζ = ηυ(υ(1 + ζ) − ûcomm(υ, η))/((η − 1)(υ + η/2 − 1)) > 0,   (44)

dP̂(η, υ, ζ)/dυ = [η(υ(1 + ζ) − ûcomm(υ, η))/(2(η − 1)(υ + η/2 − 1)²)] × [ (1 + ζ)(υ + η − 2) − υ/η + η/2 + 1/2 + 1/η ] > 0,   (45)

where the last inequality follows from the fact that ζ > ûcomm(υ, η)/υ − 1. Hence, it follows that indeed g′(0) > 0.
We now show that, for arbitrarily small h,

R̂(η, υ, ζ) < R̂(η, υ − g(h), ζ + h).   (46)

Equivalently, in the limit h → 0, we must have

dR̂(η, υ, ζ)/dζ − g′(0) dR̂(η, υ, ζ)/dυ > 0.   (47)
Note that

dR̂(η, υ, ζ)/dζ = [1/(υ(1 + ζ))] dP̂(η, υ, ζ)/dζ,   (48)

dR̂(η, υ, ζ)/dυ = [η(η/2 − 1)/((η − 1)υ(υ + η/2 − 1)²)] (υ(1 + ζ) − ûcomm(υ, η)) + [η(η/2 + 1)/((η − 1)(υ + η/2 − 1)²)] ln( (υ/ûcomm(υ, η))(1 + ζ) ),   (49)
and thus replacing (48) in (47), and using (43) and the fact that g′(0) > 0, we obtain the equivalent condition

dP̂(η, υ, ζ)/dυ − υ(1 + ζ) dR̂(η, υ, ζ)/dυ > 0,   (50)
iff

g(ζ) ≜ [1 − ûcomm(υ, η)/(υ(1 + ζ))] [ (1 + ζ)υ − υ/η + η/2 + 1/2 + 1/η ] − (η + 2) ln( (υ/ûcomm(υ, η))(1 + ζ) ) > 0,   (51)
which we are now going to prove. The derivative with respect to ζ is given by

dg(ζ)/dζ ∝ η(υ(1 + ζ) − ûcomm(υ, η))² + (2υ + η − 2)(υ(1 + ζ) − ûcomm(υ, η)) > 0,   (52)

where the inequality follows from the fact that ζ > ûcomm(υ, η)/υ − 1. It follows that g(ζ) is an increasing function of ζ, minimized at ζ = ûcomm(υ, η)/υ − 1, thus proving the inequality.
b) υ = (η − 1)(η − 2)/2 and η ≥ 5: In this case, we cannot decrease υ any further. Using a similar approach as in the previous case, we show that a strictly larger rate can be obtained by decreasing both η and ζ, while preserving the power consumption. From (40) and (41) with υ = (η − 1)(η − 2)/2, we obtain

R̂(η, υ, ζ) = 1 + ζ − [2η/((η − 1)(η − 2))] [ 1 + ln( (η − 1)(η − 2)(1 + ζ)/(2η) ) ],   (53)

P̂(η, υ, ζ) = [υ(1 + ζ) − ûcomm(υ, η)]²/((η − 1)(η − 2)),   (54)

where (−η² + 5η − 2)/((η − 1)(η − 2)) < ζ < 0.
Now, we decrease η by one unit, while keeping υ as before, and we choose the new ζ, denoted as ζ̂, in such a way as to preserve the power consumption. Note that υ ≥ υmin(η − 1), hence the constraint on υ is still satisfied, since υmin(η − 1) ≤ υmin(η).

From (41) with υ = (η − 1)(η − 2)/2 we obtain

P̂(η − 1, υ, ζ̂) = (η − 1)[υ(1 + ζ̂) − ûcomm(υ, η − 1)]²/((η − 2)(η² − 2η − 1)),   (55)

where

ûcomm(υ, η − 1) = η − 1/(η − 1) < ûcomm(υ, η) = η.   (56)
ζ̂ is chosen so that P̂(η − 1, υ, ζ̂) = P̂(η, υ, ζ), yielding

υ(1 + ζ̂) = ûcomm(υ, η − 1) + √(η² − 2η − 1) (υ(1 + ζ) − ûcomm(υ, η))/(η − 1).   (57)
Thus, it follows that υ(1 + ζ̂) > ûcomm(υ, η − 1). Additionally, using (56) and the fact that υ(1 + ζ) − ûcomm(υ, η) > 0, it follows that υ(1 + ζ̂) < υ(1 + ζ). Therefore

ûcomm(υ, η − 1) < υ(1 + ζ̂) < υ(1 + ζ) < υ,   (58)

since ζ < 0, hence ûcomm(υ, η − 1)/υ − 1 < ζ̂ < 0. We now show that this new configuration
strictly increases the throughput. From (40) and using the expression of ζ̂ and of ûcomm(υ, η − 1) we obtain

R̂(η − 1, υ, ζ̂) = 2(υ(1 + ζ) − η)/((η − 2)√(η² − 2η − 1)) − [2(η² − η − 1)/((η − 2)(η² − 2η − 1))] ln( υ(1 + ζ̂)/ûcomm(υ, η − 1) ),   (59)
and therefore

h(ζ) ≜ R̂(η − 1, υ, ζ̂) − R̂(η, υ, ζ)   (60)
 = 4(υ(1 + ζ) − η)/[(η − 1)(η − 2)√(η² − 2η − 1)(η − 1 + √(η² − 2η − 1))] + [2η/((η − 1)(η − 2))] ln(υ(1 + ζ)/η) − [2(η² − η − 1)/((η − 2)(η² − 2η − 1))] ln( υ(1 + ζ̂)/ûcomm(υ, η − 1) ).
The derivative of h(ζ) with respect to ζ is given by

dh(ζ)/dζ ∝ υ(1 + ζ) − ûcomm(υ, η) + 2(υ(1 + ζ) − η)²/(η − 1 + √(η² − 2η − 1)) > 0.   (61)
Therefore, h(ζ) is an increasing function of ζ, minimized at ζ = ûcomm(υ, η)/υ − 1, yielding

R̂(η − 1, υ, ζ̂) − R̂(η, υ, ζ) > 0.   (62)
The Theorem is thus proved.
APPENDIX B
PROOF OF THEOREM 2
Proof. To study the optimization problem P3, we study the derivative of R̂(η, υ, ζ(υ, η)) with respect to υ. We have that

dR̂(η, υ, ζ(υ, η))/dυ = dR̂(η, υ, ζ)/dυ + [dR̂(η, υ, ζ)/dζ] dζ(υ, η)/dυ ∝ fη(υ),   (63)
where ∝ denotes proportionality up to a positive multiplicative factor, with fη(υ) given by (36). Therefore, R̂(η, υ, ζ(υ, η)) is an increasing function of υ iff fη(υ) > 0. We now show that fη(υ) is a strictly decreasing function of υ, with limits given by (37).
Note that υ̂ can be determined using the bisection method. In fact, fη(υ) is a decreasing function of υ (see proof of the theorem), with

lim_{υ → (η² + 3η − 2)/(2(η − 1))} fη(υ) = ∞   (64)

and

fη(υmax(η)) < 0   (65)

(see second part of the proof). Therefore, there exists a unique υ̂ ∈ ( (η² + 3η − 2)/(2(η − 1)), υmax(η) ) such that fη(υ̂) = 0, and dR̂(η, υ, ζ(υ, η))/dυ > 0 for υ < υ̂ and dR̂(η, υ, ζ(υ, η))/dυ < 0 for υ > υ̂. Therefore, the maximum of R̂(η, υ, ζ(υ, η)) with respect to υ ∈ ( (η² + 3η − 2)/(2(η − 1)), υmax(η) ) is attained at υ = υ̂. By combining this result with the constraint υ ≥ υmin(η), we obtain (35).
Thus, we now show that fη(υ) is a decreasing function of υ. We have

dfη(υ)/dυ = (1 + ζ(υ, η))⁻² { −(η − 1)[((η − 1)(υ + η/2 − 1) + 2η)/(2υη²)](1 + ζ(υ, η))
 + [υ − ûcomm(υ, η)][((η − 1)(η/2 − 1) + 2η)/(2υ²η)](1 + ζ(υ, η))
 + [υ − ûcomm(υ, η)][((η − 1)(υ + η/2 − 1) + 2η)/(2υη)] ζ′(υ, η)
 − [(η − 1)/η] ζ(υ, η)(1 + ζ(υ, η)) − [(η − 1)(υ + η/2 − 1)/η] ζ′(υ, η)
 + η ζ′(υ, η)(1 + ζ(υ, η))
 + [1/υ − 1/(η ûcomm(υ, η))](η/2 + 1)(1 + ζ(υ, η))² },   (66)

where ζ′(υ, η) = dζ(υ, η)/dυ.
By simplifying and reorganizing the expression, we obtain

(1 + ζ(υ, η))² dfη(υ)/dυ
 = − (1/υ)[υ − ûcomm(υ, η)]² [ (η + 2)/(2ηυ ûcomm(υ, η)) + (υ − ûcomm(υ, η))/(4υ(υ + η/2 − 1)) ]   (67)
 − ζ(υ, η) ([υ − ûcomm(υ, η)]/υ²) [ (η²/2 + 3η/2 − 1)/(υ − ûcomm(υ, η)) ] × [ η³/((υ + η/2 − 1)(η − 1)) + (η + 2)υ/(η ûcomm(υ, η)) ]
 − ζ(υ, η) ([υ − ûcomm(υ, η)]²/υ²) [ 1 + (η² + 3η − 2)/(2(υ + η/2 − 1)(η − 1)) ]
 − ζ(υ, η)² (1/υ) [ (υ/ûcomm(υ, η) − 1)(1/2 + 1/η)( η²/((υ + η/2 − 1)(η − 1)) + (η²/2 + 3η/2 − 1)/(υ − ûcomm(υ, η)) ) + ((υ − ûcomm(υ, η) + 2η²)/((υ + η/2 − 1)(η − 1)))(η + 1 + υ − ûcomm(υ, η)) ] < 0,   (68)

where the inequality holds since ζ(υ, η) ≥ 0 and υ > ûcomm(υ, η). This proves that fη(υ) is strictly decreasing in υ.
Now, note that, in the limit υ → (η² + 3η − 2)/(2(η − 1)), we obtain υ → ûcomm(υ, η) and ζ(υ, η) → ∞, yielding (64). On the other hand, when υ = υmax(η), by letting x ≜ P̂max(η − 1)[1 + √(1 + 2η/P̂max)] > 0 we obtain

fη(υmax(η)) = −(η − 1) x(η + 2 + x)/(η² + 3η − 2 + 2ηx)   (69)
 + (η/2 + 1) ln( 1 + (η − 1)x/(η²/2 + 3η/2 − 1 + x) ) ≜ g(x).   (70)
The derivative of the above expression with respect to x satisfies

dg(x)/dx ∝ −[η²/2 + 3η/2 − 1][2η + 2x + xη] − x²η < 0,   (71)

where the inequality holds since η ≥ 2, hence g(x) is maximized at x = 0, yielding

fη(υmax(η)) = g(x) < g(0) = 0.   (72)
The Theorem is thus proved.
Mathematical Execution:
A Unified Approach for Testing Numerical Code
Zhoulai Fu
Zhendong Su
arXiv:1610.01133v1 [] 4 Oct 2016
Department of Computer Science, University of California, Davis, USA
zhoulai.fu@gmail.com
su@cs.ucdavis.edu
Abstract
This paper presents Mathematical Execution (ME), a new, unified
approach for testing numerical code. The key idea is to (1) capture
the desired testing objective via a representing function and (2) transform the automated testing problem to the minimization problem of
the representing function. The minimization problem is to be solved
via mathematical optimization. The main feature of ME is that it
directs input space exploration by only executing the representing
function, thus avoiding static or symbolic reasoning about the program semantics, which is particularly challenging for numerical
code. To illustrate this feature, we develop an ME-based algorithm
for coverage-based testing of numerical code. We also show the
potential of applying and adapting ME to other related problems,
including path reachability testing, boundary value analysis, and
satisfiability checking.
To demonstrate ME’s practical benefits, we have implemented
CoverMe, a proof-of-concept realization for branch coverage based
testing, and evaluated it on Sun’s C math library (used in, for
example, Android, Matlab, Java and JavaScript). We have compared
CoverMe with random testing and Austin, a publicly available
branch coverage based testing tool that supports numerical code
(Austin combines symbolic execution and search-based heuristics).
Our experimental results show that CoverMe achieves near-optimal
and substantially higher coverage ratios than random testing on all
tested programs, across all evaluated coverage metrics. Compared
with Austin, CoverMe improves branch coverage from 43% to 91%,
with significantly less time (6.9 vs. 6058.4 seconds on average).
1. Introduction
Testing has been a predominant approach for improving software
quality. Manual testing is notoriously tedious [13]; automated testing
has been an active research topic, drawing on a rich body of
techniques, such as symbolic execution [12, 14, 17, 36], random
testing [1, 11, 30] and search-based strategies [9, 38, 42, 49].
Automated testing is about producing program failures [51]. Let
FOO be a program and dom(FOO) be its input domain. An automated
testing problem is to systematically find x ∈ dom(FOO) such that
FOO(x) ⇓ wrong, where FOO(x) ⇓ wrong denotes “FOO goes wrong if
executed on input x.” It is difficult to specify FOO(x) ⇓ wrong, which
is known as the test oracle problem [50, 69]. This paper assumes
that an algorithm for checking FOO(x) ⇓ wrong is given.
An important problem in testing is the testing of numerical code,
i.e., programs with floating-point arithmetic, non-linear variable
relations, or external function calls (such as logarithmic and trigonometric functions). These programs are pervasive in safety-critical
systems, but ensuring their quality remains difficult. Numerical code
presents two specific challenges for existing automated testing techniques: (1) Random testing is easy to employ and fast, but ineffective
in finding deep semantic issues and handling large input spaces; and
(2) symbolic execution and its variants can perform systematic path
exploration, but suffer from path explosion and are weak in dealing
with complex program logic involving numerical constraints.
Our Approach. This paper introduces a new, unified approach
for automatically testing numerical code. It proceeds as follows:
We derive from the program under test FOO another program FOO_R,
called representing function, which represents how far an input x ∈
dom(FOO) is from reaching the set {x | FOO(x) ⇓ wrong}. We require
that the representing function returns a non-negative value for all x,
which diminishes when x gets close to the set and vanishes when x
goes inside. Intuitively, this representing function is similar to a sort
of distance. It allows to approach the automated testing problem,
i.e., the problem of finding an element in {x | FOO(x) ⇓ wrong}, as
the problem of minimizing FOO_R. This approach can be justified
with a strong guarantee:
FOO(x) ⇓ wrong
⇔ x minimizes FOO_R,
(1)
assuming that there exists at least one x such that FOO(x) ⇓ wrong
(details in Sect. 4). Therefore, the essence of our approach is to
transform the automated testing problem to a minimization problem.
Minimization problems are well studied in the field of Mathematical
Optimization (MO) [53]. MO works by executing its objective
function only (see Sect. 2). That is to say, our approach does not need
to analyze the semantics of the tested programs. Instead, it directs
input space exploration by only executing the representing function.
We call this approach Mathematical Execution (abbreviated as ME).
Note that mathematical optimization by itself does not necessarily provide a panacea for automated testing because many MO
problems are themselves intractable. However, efficient algorithms
have been successfully applied to difficult mathematical optimization problems. A classic example is the NP-hard traveling salesman
problem, which has been nicely handled by simulated annealing [37],
a stochastic MO technique. Another example is Monte Carlo Markov
Chain [8], which has been effectively adapted to testing and verification [28, 29, 34, 63]. A major finding of this work is that using
mathematical optimization for testing numerical code is a powerful
approach. If we carefully design the representing function so that
certain conditions are respected, we can come up with mathematical
optimization problems that can be efficiently solved by off-the-shelf
MO tools.
To demonstrate the feasibility of our approach, we have applied
ME on coverage-based testing [54] of floating-point code, a fundamental problem in testing. The experimental results show that
our implemented tool, CoverMe, is highly effective. Fig. 1 gives a
small program from our benchmark suite Fdlibm [3]. The program
operates on two double input parameters. It first takes |x|’s high
word by bit twiddling, including a bitwise AND (&), a pointer reference (&) and a dereference (*) operator. The bit twiddling result is
stored in integer variable ix (Line 3), followed by four conditional
statements that examine ix (Lines 4–15). The tool CoverMe yields:
1  #define __HI(x) *(1+(int*)&x)
2  double __kernel_cos(double x, double y){
3    ix = __HI(x)&0x7fffffff;      /* ix = |x|'s high word */
4    if(ix<0x3e400000) {           /* if |x| < 2**(-27) */
5      if(((int)x)==0) return ...; /* generate inexact */
6    }
7    ...;
8    if(ix < 0x3FD33333)           /* if |x| < 0.3 */
9      return ...;
10   else {
11     if(ix > 0x3fe90000) {       /* if |x| > 0.78125 */
12       ...;
13     } else {
14       ...;
15     }
16     return ...;
17   }
18 }
Figure 1: The benchmark program __kernel_cos taken from the Fdlibm [3]
library (http://www.netlib.org/fdlibm/k_cos.c).
line coverage      100%
branch coverage    87.5%
When investigating why CoverMe fails to achieve full branch
coverage, we find that one out of the eight branches in the program
cannot be reached. The condition if ((int) x) == 0 (Line 5)
always holds because it is nested within the |x| < 2−27 branch (Line
4). 1 Therefore, the 87.5% branch coverage is, in fact, optimal. We
have compared CoverMe with Austin [43], a publicly available, stateof-the-art coverage-based testing tool that can handle floating-point
code. Austin achieves 37.5% branch coverage in 1885.1 seconds,
whereas CoverMe achieves the optimal coverage in 15.4 seconds
(see Sect. 5).
Contributions. Our contributions follow:
• We introduce Mathematical Execution, a new general approach
for testing numerical code;
• We develop an effective coverage-based testing algorithm using
the ME approach;
• We demonstrate that ME is a unified approach by showing how
to apply ME to several important testing problems; and
• We implement the coverage-based testing tool CoverMe and
show its effectiveness on real-world numerical library code.
Paper Outline. Sect. 2 gives the background on mathematical
optimization. Sect. 3 illustrates ME by studying the case of branch
coverage based testing. We define the problem, demonstrate the
ME solution, and give the algorithmic details. Sect. 4 lays out the
theoretical foundation for ME and demonstrates ME with several
additional examples. Sect. 5 presents an implementation overview of
CoverMe and describes our experimental results. Sect. 6 discusses
the current limitations of ME. Finally, Sect. 7 surveys related work
and Sect. 8 concludes. For completeness, Appendix A lists the
benchmark programs in Fdlibm that CoverMe does not support and
their reasons, and Appendix B gives implementation details.
Notation. The sets of real and integer numbers are denoted by R and Z respectively. For two real numbers a and b, the usage aEb means a ∗ 10^b. In this presentation, we do not distinguish a mathematical expression, such as x² + |y|, and its implementation, such as x*x + abs(y).
1 Sun’s developers decided to use this redundant check to trigger the
inexact exception of floating-point as a side effect. From the program
semantics perspective, no input of __kernel_cos can trigger the false branch
of if (((int) x) == 0).
Figure 2: (a) Local optimization with the curve of λx. x ≤ 1 ? 0 : (x − 1)². The illustrated technique uses tangents of the curve to converge quickly to a minimum point; (b) Global optimization with the curve of λx. x ≤ 1 ? ((x + 1)² − 4)² : (x² − 4)². The MCMC method starts from p0, converges to local minimum p1, performs a Monte-Carlo move to p2 and converges to p3. Then it moves to p4 and finally converges to p5.
Similarly, we use a lambda expression to mean either a mathematical function or its implementation. For example, an implementation λx.x² may refer to the code double f (double x) {return x*x;}. We use the C-like syntax A ? v1 : v2 to mean an implementation that returns v1 if A holds, or v2 otherwise.
2. Background
We begin with some preliminaries on mathematical optimization
following the exposition of [28]. A complete treatment of either is
beyond the scope of this paper. See [8, 53, 71] for more details.
A Mathematical Optimization (MO) problem is usually formulated as:

minimize f(x)  subject to x ∈ S   (2)

where f is called the objective function, and S the search space. In
general, mathematical optimization problems can be divided into
two categories. One focuses on how functions are shaped at local
regions and where a local minimum can be found near a given
input. This local optimization is classic, usually involving standard
techniques such as Newton’s or the steepest descent methods. Local
optimization not only provides the minimum value of a function
within a neighborhood of the given input points, but also aids global
optimization, which determines the function minimum over the
entire search space.
Local Optimization. Let f be a function defined over a Euclidean
space with distance function d. We call x∗ a local minimum point
if there exists a neighborhood of x∗ , namely {x | d(x, x∗ ) < δ } for
some δ > 0, so that all x in the neighborhood satisfy f (x) ≥ f (x∗ ).
The value f (x∗ ) is called a local minimum of f .
Local optimization problems can usually be efficiently solved if
the objective function is smooth (such as continuous or differentiable
to some degree) [56]. Fig. 2(a) shows a common local optimization
method with the objective function λ x.x ≤ 1 ? 0 : (x − 1)2 . It uses
tangents of the curve to quickly converge to a minimum point. The
smoothness of the curve makes it possible to deduce the function’s
behavior in the neighborhood of a particular point x by using
information at x only.
Global Optimization and MCMC. If f (x∗ ) ≤ f (x) for all x in
the search space, we call x∗ a global minimum point (or minimum
point for short), and f (x∗ ) the global minimum (or minimum for
short) of the function f . In this presentation, if we say “x∗ minimizes
the function f ”, we mean x∗ is a global minimum point of f .
We use the Monte Carlo Markov Chain (MCMC) sampling to
solve global optimization problems. A fundamental fact regarding
MCMC is that it follows the target distribution asymptotically.
For simplicity, we give the results [8] with the discrete-valued
probability.
Lemma 2.1. Let x be a random variable, A be an enumerable set
of the possible values of x. Let f be a target probability distribution
for x, i.e., the probability of x taking value a is f (a). Then, for an
MCMC sampling sequence x1 , . . . , xn . . . and a probability density
function P(xn = a) for each xn , we have P(xn = a) → f (a).
For example, consider the target distribution of coin tossing
with 0.5 probability for having the head. An MCMC sampling is a
sequence of random variables x1 ,. . . , xn , . . ., such that the probability
of xn being “head”, denoted by Pn , converges to 0.5.
MCMC provides multiple advantages in practice. Because
such sampling can simulate an arbitrary distribution (Lem. 2.1),
MCMC backends can sample for a target distribution in the form of
λ x. exp− f (x) where f is the function to minimize, which allows its
sampling process to attain the minimum points more frequently than
the other points. Also, MCMC has many sophisticated techniques
that integrate with classic local search techniques, such as the Basinhopping algorithm [45] mentioned above. Some variants of MCMC
can even handle high dimensional problems [60], or non-smooth
objective functions [25]. Fig. 2(b) illustrates a typical MCMC cycle.
Steps p0 → p1 , p2 → p3 , and p4 → p5 are the local optimization;
Steps p1 → p2 and p3 → p4 aim to prevent the MCMC sampling
from getting trapped in the local minima.
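To make the discussion concrete, the following Python sketch (ours, not from the paper's artifact) applies an off-the-shelf Basinhopping implementation to the curve of Fig. 2(b); scipy.optimize.basinhopping is assumed to be available.

import numpy as np
from scipy.optimize import basinhopping

def f(x):
    """The curve of Fig. 2(b); global minimum points at x = -3, 1, 2."""
    x = float(np.atleast_1d(x)[0])
    return ((x + 1)**2 - 4)**2 if x <= 1 else (x**2 - 4)**2

result = basinhopping(f, x0=5.0, niter=100, seed=0)
print(result.x, result.fun)   # lands on one of the global minimum points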
3. Branch Coverage Based Testing

This section shows a detailed ME procedure in solving branch coverage based testing for numerical code.

3.1 Problem Statement
Definition 3.1. Let FOO be the program under test with N conditional statements, labeled by l0 ,. . . ,lN−1 . Each li has a true branch
iT and a false branch iF . The problem of branch coverage based
testing aims to find a set of inputs X ⊆ dom(FOO) that covers all
2 ∗ N branches of FOO. Here, we say a branch is “covered” by X if it
is passed through by the path of executing FOO with an x ∈ X. We
scope the problem with three assumptions:
(a) The inputs of FOO are floating-point numbers.
(b) Each Boolean condition in FOO is an arithmetic comparison in the form of a op b, where op ∈ {==, ≤, <, ≠, ≥, >}, and a and b are floating-point variables or constants.
(c) Each branch of FOO is feasible, i.e., it is covered by dom(FOO).
Assumptions (a) and (b) are set for modeling numerical code. Assumption (c) is set to simplify our presentation. Our implementation
will partially relax these assumptions (details in Appendix B).
We introduce the concept of a saturated branch and use it to
reformulate Def. 3.1.
Definition 3.2. Let X be a set of inputs generated during the process of testing. We say a branch is saturated by X if the branch itself and all its descendant branches, if any, are covered by X. Here, a branch b′ is called a descendant branch of b if there exists a segment of control flow path from b to b′. Given X ⊆ dom(FOO), we write

Saturate(X)   (3)

for the set of branches saturated by X.
To illustrate Def. 3.2, suppose that an input set X covers {0T, 0F, 1F} in the control-flow graph on the right, then Saturate(X) = {0F, 1F}; 1T is not saturated because it is not covered, and 0T is not saturated because its descendant branch 1T is not covered.
Figure 3: Branch coverage based testing via ME. The goal is to saturate (therefore cover) all branches of FOO, i.e., {0T, 0F, 1T, 1F}. (The figure shows the three-step pipeline: Step 1 instruments FOO, the program under test in any LLVM-supported language, with pen (.cpp) via an LLVM pass to produce FOO_I, an instrumented program (.bc); Step 2 links FOO_I with a loader (.cpp) into the representing function FOO_R (libr.so); Step 3 runs the MCMC minimization procedure (.py), basinhopping(func, sp, n_iter, callback), to generate the test inputs X, a set of FOO_R's global minimum points, which saturates (therefore covers) all branches of FOO.)

Observe that if all FOO's branches are covered by an input set X, they are also saturated by X, and vice versa. This observation allows us to reformulate the branch coverage based testing problem following the lemma below.

Lemma 3.3. Let FOO be a program under test with the assumptions Def. 3.1(a-c) satisfied. The goal of branch coverage based testing can be stated as to find a small set of inputs X ⊆ dom(FOO) that saturates all FOO's branches.

Remark 3.4. In Lem. 3.3, we expect the generated input set X to be "small", because otherwise we can use X = dom(FOO) which already saturates all branches under the assumption Def. 3.1(c).
3.2 An Example
We use a simple program FOO in Fig. 3 to illustrate our approach.
The program has two conditional statements l0 and l1 , and their true
and false branches are denoted by 0T , 0F and 1T , 1F respectively.
The objective is to find an input set that saturates all branches. Our
approach proceeds in three steps:
Table 1: A scenario of how our approach saturates all branches of FOO by
repeatedly minimizing FOO_R. Column “Saturate”: Branches that have been
saturated. Column “FOO_R”: The representing function and its plot. Column
“x∗ ”: The point where FOO_R attains the minimum. Column “X”: Generated
test inputs.
#  Saturate            FOO_R                                      x∗     X
1  ∅                   λx. 0                                      0.7    {0.7}
2  {1F}                λx. x ≤ 1 ? ((x + 1)² − 4)² : (x² − 4)²    1.0    {0.7, 1.0}
3  {0T, 1T, 1F}        λx. x > 1 ? 0 : (x − 1)² + ε               1.1    {0.7, 1.0, 1.1}
4  {0T, 1T, 0F, 1F}    λx. 1                                      −5.2   {0.7, 1.0, 1.1, −5.2}
Step 1. We inject a global variable r in FOO, and, immediately
before each control point li , we inject an assignment (Fig. 3)
r = pen,   (4)
where pen invokes a code segment with parameters specific to li .
The idea of pen is to capture the distance of the program input from
saturating a branch that has not yet been saturated. As illustrated in
Fig. 3, this distance returns different values depending on whether
the branches at li are saturated or not.
We denote the instrumented program by FOO_I. The key in Step 1
is to design pen to meet certain conditions that allow us to approach
the problem defined in Lem. 3.3 as a mathematical optimization
problem. We will specify the conditions in the next step.
Step 2. This step constructs the representing function that we have
mentioned in Sect. 1. The representing function is the driver program
FOO_R shown in Fig. 3. It initializes r to 1, invokes FOO_I and then
returns r as the output of FOO_R. That is, FOO_R(x) for a given input
x calculates the value of r at the end of executing FOO_I(x).
Our approach requires two conditions on FOO_R:
C1. FOO_R(x) ≥ 0 for all x, and
C2. FOO_R(x) = 0 if and only if x saturates a new branch. In other
words, a branch that has not been saturated by the generated
input set X becomes saturated with X ∪ {x}, i.e., Saturate(X) ≠ Saturate(X ∪ {x}).
Imagine that we have designed pen so that FOO_R meets both C1 and
C2. Ideally, we can then saturate all branches of FOO by repeatedly
minimizing FOO_R as shown in the step below.
Step 3. In this step, we use MCMC to calculate the minimum
points of FOO_R. Other mathematical optimization techniques, e.g.,
genetic programming [40], may also be applicable, which we leave
for future investigation.
We start with an input set X = ∅ and Saturate(X) = ∅. We
minimize FOO_R and obtain a minimum point x∗ which necessarily
saturates a new branch by condition C2. Then we have X = {x∗ }
and we minimize FOO_R again which gives another input x∗∗ and
{x∗ , x∗∗ } saturates a branch that is not saturated by {x∗ }. We
continue this process until all branches are saturated. When the
algorithm terminates, FOO_R(x) must be strictly positive for any
input x, due to C1 and C2.
Tab. 1 illustrates a scenario of how our approach saturates all
branches of FOO. Each “#n” below corresponds to one line in the
table. We use pen0 and pen1 to denote pen injected at l0 and l1
respectively. (#1) Initially, Saturate = 0.
/ Any input saturates a new
branch. Both pen0 and pen1 set r = 0, and FOO_R = λ x.0 (Fig. 3).
Suppose x∗ = 0.7 is found as the minimum point. (#2) The branch
1F is now saturated and 1T is not. Thus, pen1 sets r = (y − 4)2 .
Minimizing FOO_R gives x∗ = −3.0, 1.0, or 2.0. We have illustrated
this MCMC procedure in Fig. 2(b). Suppose x∗ = 1.0 is found. (#3)
Both 1T and 1F , as well as 0T , are saturated by the generated inputs
{0.7, 1.0}. Thus, pen1 returns the previous r. Then, FOO_R amounts
to pen0 , returning 0 if x > 1, or (x − 1)2 + ε otherwise, where ε is a
small positive constant. Suppose x∗ = 1.1 is found as the minimum
point. (#4) All branches have been saturated. In this case, both
pen0 and pen1 return the previous r. Then, FOO_R becomes λ x.1,
understanding that FOO_R initializes r as 1. The minimum point, e.g.,
x∗ = −5.2, necessarily satisfies FOO_R(x∗ ) > 0, which terminates
the algorithm.
Remark 3.5. Given an input x, the value of FOO_R(x) may change
during the minimization process. In fact, FOO_R is constructed with
injected pen which returns different values at li depending on
whether the branches iT and iF have been saturated. Thus, the minimization step in our algorithm differs from existing mathematical
optimization techniques where the objective function is fixed [71].
3.3 Algorithm
We provide details corresponding to the three steps in Sect. 3.2. The
algorithm is summarized in Algo. 1.
Algorithm for Step 1. The outcome of this step is the instrumented program FOO_I. As explained in Sect. 3.2, the essence is
to inject the variable r and the assignment r = pen before each
conditional statement (Algo. 1, Lines 1-4).
To define pen, we first introduce a set of helper functions that
are sometimes known as branch distance. There are many different
forms of branch distance in the literature [38, 49]. We define ours
with respect to an arithmetic condition a op b.
Definition 3.6. Let a, b ∈ R, op ∈ {==, ≤, <, ≠, ≥, >}, ε ∈ R>0. We define branch distance dε(op, a, b) as follows:

dε(==, a, b) ≜ (a − b)²   (5)
dε(≤, a, b) ≜ (a ≤ b) ? 0 : (a − b)²   (6)
dε(<, a, b) ≜ (a < b) ? 0 : (a − b)² + ε   (7)
dε(≠, a, b) ≜ (a ≠ b) ? 0 : ε   (8)

and dε(≥, a, b) ≜ dε(≤, b, a), dε(>, a, b) ≜ dε(<, b, a). Usually,
to ε when using the branch distance.
The intention of d(op, a, b) is to quantify how far a and b are
from attaining a op b. For example, d(==, a, b) is strictly positive
when a 6= b, becomes smaller when a and b go closer, and vanishes
when a == b. The following property holds:
d(op, a, b) ≥ 0 and d(op, a, b) = 0 ⇔ a op b.
Algorithm 1: Branch coverage based testing via ME.
Program under test
Number of starting points
LM
Local optimization used in MCMC
n_iter
Number of iterations for MCMC
Output: X
Generated input set
Input:
(9)
As an analogue, we set pen to quantify how far an input is from
saturating a new branch. We define pen following Algo. 1, Lines 1423.
Definition 3.7. For branch coverage based testing, the function pen
has four parameters, namely, the label of the conditional statement
li , and op, a and b from the arithmetic condition a op b.
(a) If neither of the two branches at li is saturated, we let pen return
0 because any input saturates a new branch (Lines 16-17).
(b) If one branch at li is saturated but the other is not, we set r to be
the distance to the unsaturated branch (Lines 18-21).
(c) If both branches at li have already been saturated, pen returns
the previous value of the global variable r (Lines 22-23).
For example, the two instances of pen at l0 and l1 are invoked as pen(l0, ≤, x, 1) and pen(l1, ==, y, 4) respectively in Fig. 3.
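Continuing the sketch, pen from Def. 3.7 can be written as follows, reusing branch_distance from above; the bookkeeping via a global Saturate set and the helper OPPOSITE are our assumptions for illustration.

OPPOSITE = {"==": "!=", "!=": "==", "<": ">=", ">=": "<", "<=": ">", ">": "<="}
Saturate = set()   # saturated branches, e.g. {(0, True), (1, False)}
r = 1.0            # stands for the injected global variable r

def pen(i, op, a, b):
    t_sat, f_sat = (i, True) in Saturate, (i, False) in Saturate
    if not t_sat and not f_sat:
        return 0.0                         # Def. 3.7(a): any input saturates a new branch
    if not t_sat:
        return branch_distance(op, a, b)   # Def. 3.7(b): aim at the true branch
    if not f_sat:
        return branch_distance(OPPOSITE[op], a, b)  # Def. 3.7(b): aim at the false branch
    return r                               # Def. 3.7(c): both saturated, keep previous r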
Algorithm for Step 2. This step constructs the representing function FOO_R (Algo. 1, Line 5). Its input domain is the same as that
of FOO_I and FOO, and its output domain is double, so to simulate a
real-valued mathematical function which can then be processed by
the mathematical optimization backend.
FOO_R initializes r to 1. This is essential for the correctness of
the algorithm because we expect FOO_R returns a non-negative value
when all branches are saturated (Sect. 3.2, Step 2). FOO_R then calls
FOO_I(x) and records the value of r at the end of executing FOO_I(x).
This r is the returned value of FOO_R.
As mentioned in Sect. 3.2, it is important to ensure that FOO_R
meets conditions C1 and C2. The condition C1 holds true since
FOO_R returns the value of the instrumented r, which is never
assigned a negative quantity. The lemma below states FOO_R also
satisfies C2.
Lemma 3.8. Let FOO_R be the program constructed in Algo. 1,
and S the branches that have been saturated. Then, for any input
x ∈ dom(FOO), FOO_R(x) = 0 ⇔ x saturates a branch that does not
belong to S.
Proof. We first prove the ⇒ direction. Take an arbitrary x such that
FOO_R(x) = 0. Let τ = [l0 , . . . ln ] be the path in FOO passed through
by executing FOO(x). We know, from Lines 2-4 of the algorithm, that
each li is preceded by an invocation of pen in FOO_R. We write peni
for the one injected before li and divide {peni | i ∈ [1, n]} into three
groups. For the given input x, we let P1, P2 and P3 denote the groups
of peni that are defined in Def. 3.7(a), (b) and (c), respectively. Then,
we can always have a prefix path of τ = [l0 , . . . lm ], with 0 ≤ m ≤ n
such that each peni for i ∈ [m + 1, n] belongs to P3, and each peni
for i ∈ [0, m] belongs to either P1 or P2. Here, we can guarantee
the existence of such an m because, otherwise, all peni belong in
P3, and FOO_R becomes λ x.1. The latter contradicts the assumption
that FOO_R(x) = 0. Because each peni for i > m does nothing but
performs r = r, we know that FOO_R(x) equals to the exact value of
r that penm assigns. Now consider two disjunctive cases on penm .
If penm is in P1, we immediately conclude that x saturates a new
branch. Otherwise, if penm is in P2, we obtain the same from
Eq. (9). Thus, we have established the ⇒ direction of the lemma.
To prove the ⇐ direction, we use the same notation as above,
and let x be the input that saturates a new branch, and [l0 , . . . , ln ] be
the exercised path. Assume that lm where 0 ≤ m ≤ n corresponds to
the newly saturated branch. We know from the algorithm that (1)
penm updates r to 0, and (2) each peni such that i > m maintains the
value of r because their descendant branches have been saturated.
We have thus proven the ⇐ direction of the lemma.
FOO
n_start
/* Step 1
1
2
3
4
/* Step 2
5
*/
Inject global variable r in FOO
for conditional statement li in FOO do
Let the Boolean condition at li be a op b where
op ∈ {≤, <, =, >, ≥, 6=}
Insert assignment r = pen(li , op, a, b) before li
*/
Let FOO_I be the newly instrumented program, and FOO_R be:
double FOO_R(double x) {r = 1; FOO_I(x); return r;}
/* Step 3
12
Let Saturate = 0/
Let X = 0/
for k = 1 to n_start do
Randomly take a starting point x
Let x∗ = MCMC(FOO_R, x)
if FOO_R(x∗ ) = 0 then X = X ∪ {x∗ }
Update Saturate
13
return X
6
7
8
9
10
11
14
15
16
17
18
19
20
21
22
23
24
25
Function pen(li , op, a, b)
Let iT and iF be the true and the false branches at li
if iT 6∈ Saturate and iF 6∈ Saturate then
return 0
else if iT 6∈ Saturate and iF ∈ Saturate then
return d(op, a, b) /* d: Branch distance
else if iT ∈ Saturate and iF 6∈ Saturate then
return d(op, a, b) /* op: the opposite of op
else /* iT ∈ Saturate and iF ∈ Saturate
return r
27
28
29
30
31
32
33
34
*/
*/
*/
Function MCMC( f , x)
xL = LM( f , x)
/* Local minimization
26
*/
*/
for k = 1 to n_iter do
Let δ be a random perturbation generation from a
predefined distribution
Let xeL = LM( f , xL + δ )
if f (xeL ) < f (xL ) then accept = true
else
Let m be a random number generated from the
uniform distribution on [0, 1]
Let accept be the Boolean m < exp( f (xL ) − f (xeL ))
if accept then xL = xeL
return xL
Algorithm for Step 3. The main loop (Algo. 1, Lines 8-12) relies
on an existing MCMC engine. It takes an objective function and
a starting point and outputs an x∗ that it regards as a minimum point.
Each iteration of the loop launches MCMC from a randomly selected
starting point (Line 9). From each starting point, MCMC computes
the minimum point x∗ (Line 10). If FOO_R(x∗) = 0, x∗ is added to
the set of generated inputs X (Line 11). Lem. 3.8 ensures that
x∗ saturates a new branch whenever FOO_R(x∗) = 0. Therefore,
in theory, we only need to set n_start = 2 ∗ N, where N denotes the
number of conditional statements, so as to saturate all 2 ∗ N branches.
In practice, however, we set n_start > 2 ∗ N because MCMC cannot
guarantee that its output is a true global minimum point.
The MCMC procedure (Algo. 1, Lines 24-34) is also known
as the Basinhopping algorithm [45]. It performs MCMC sampling
over the space of local minimum points [46]. The random
starting point x is first updated to a local minimum point xL
(Line 25). Each iteration (Lines 26-33) is composed of the two
phases that are classic in the Metropolis-Hastings family
of MCMC algorithms [16]. In the first phase (Lines 27-28), the algorithm
proposes a sample x̃L from the current sample xL. The sample x̃L is
obtained with a perturbation δ followed by a local minimization, i.e.,
x̃L = LM(f, xL + δ) (Line 28), where LM denotes the local minimization
in Basinhopping, and f is the objective function. The second phase
(Lines 29-33) decides whether the proposed x̃L should be accepted
as the next sampling point. If f(x̃L) < f(xL), the proposed x̃L will
be sampled; otherwise, x̃L may still be sampled, but only with
probability exp((f(xL) − f(x̃L))/T), in which T (called the
annealing temperature [37]) is set to 1 in Algo. 1 for simplicity.
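The control flow of Lines 24-34 can be rendered as the following Python sketch; the Powell-based LM, the Gaussian perturbation, and the parameter values are our placeholder assumptions rather than CoverMe's exact choices:

import math
import random

import numpy as np
from scipy.optimize import minimize

def lm(f, x):
    # Local minimization (LM in Algo. 1), here via Powell's method.
    return minimize(f, x, method="Powell").x

def mcmc(f, x, n_iter=5, stepsize=0.5):
    # Sketch of Algo. 1, Lines 24-34 (Basinhopping-style sampling).
    x_l = lm(f, x)                       # Line 25
    for _ in range(n_iter):              # Lines 26-33
        delta = np.random.normal(scale=stepsize, size=np.shape(x_l))
        x_new = lm(f, x_l + delta)       # Lines 27-28: propose a neighbor minimum
        if f(x_new) < f(x_l):            # Line 29: always accept a descent
            accept = True
        else:                            # Lines 30-32: Metropolis acceptance, T = 1
            m = random.uniform(0.0, 1.0)
            accept = m < math.exp(f(x_l) - f(x_new))
        if accept:                       # Line 33
            x_l = x_new
    return x_l                           # Line 34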
4. Mathematical Execution

Predicate as function is a common concept in mathematics. As
Gödel stated in his 1931 work [22, 31]:

    There shall correspond to each relation R (R ⊆ ℤⁿ) a representing function φ(x1, . . . , xn) = 0 if R(x1, . . . , xn) and φ(x1, . . . , xn) = 1 if ¬R(x1, . . . , xn).

The traditional representing function is Boolean in nature: it is a predicate/set indicator with two states, essentially true or false. For
example, even(N), which decides whether an integer N is even, can be
represented by the function λx.(x mod 2 == 0).
In this section, we present Mathematical Execution (ME), which
extends the Boolean-valued representing function to a real-valued
calculus in order to address a spectrum of automated testing problems for
numerical code, all of which we unify under the category of the search
problem.
4.1 Search Problem
Definition 4.1. The search problem with regard to a set X aims to
(a) find an x ∈ X if X ≠ ∅, and
(b) report "not found" if X = ∅.

Usually, we have a search space U, and X is specified implicitly as a
subset of U. We denote the search problem by (X, U). In this paper,
we deal with numerical code, and thus, we assume that X is a subset
of ℝᴺ. We also assume that X is decidable, so that we can check
whether an x ∈ U is an element of X.
Example 4.2. A search problem can be any computational task that
attempts to find an element from a set.
(a) As per the notation used in Sect. 1, an automated testing
problem of program FOO is a search problem (X,U) where
X = {x | FOO(x) ⇓ wrong} and U = dom(FOO).
(b) Another search problem is satisfiability checking, where X is
the set of the models of a constraint, and U is the value domain
to which the variables can be assigned.
4.2 Representing Function
Definition 4.3. A function R is said to be a representing function for
the search problem (X, U) if with any x ∈ U there is an associated
real value R(x), such that
(a) R(x) ≥ 0;
(b) every root of the representing function is a solution of the
search problem, i.e., R(x) = 0 =⇒ x ∈ X; and
(c) the roots of R include all solutions to the search problem, i.e.,
x ∈ X =⇒ R(x) = 0.
Example 4.4. Let (X, U) be a search problem.
(a) A trivial representing function is λx.(x ∈ X) ? 0 : 1.
(b) A generic representing function is the point-set distance. Imagine
that the search problem is embedded in a metric space [62] with
a distance dist : U × U → ℝ. As a standard practice, we can lift
dist to dist_X defined as λx. inf{dist(x, x′) | x′ ∈ X}, where inf
refers to the greatest lower bound, or infimum. Intuitively, dist_X
measures the distance between a point x ∈ U and the set X. It
can be shown that dist_X satisfies conditions (a)-(c) of Def. 4.3 and is
therefore a representing function.
(c) The representing function used in branch coverage based testing
is the FOO_R constructed in Sect. 3, where X is the set of inputs that
saturate a new branch and U is the input domain. We have
proved that FOO_R is a representing function in Lem. 3.8.
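As a concrete illustration of the point-set distance in Example 4.4(b), the following Python snippet lifts a distance on the reals to dist_X; restricting to a finite X (so that inf becomes min) is our simplification to keep the sketch executable:

def dist(x, y):
    # A metric on the search space, here the usual distance on the reals.
    return abs(x - y)

def lift(X):
    # The lifted point-set distance dist_X of Example 4.4(b).
    return lambda x: min(dist(x, x0) for x0 in X)

dist_X = lift({2.0, 5.0, 9.0})
assert dist_X(5.0) == 0.0  # the roots of dist_X are exactly the points of X
assert dist_X(4.0) == 1.0  # conditions (a)-(c) of Def. 4.3 hold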
The theorem below allows us to approach a search problem by
minimizing its representing function.
Theorem 4.5. Let R be the representing function for the search
problem (X, U), and let R∗ be the global minimum of R.
(a) Deciding the emptiness of X is equivalent to checking the sign
of R∗, i.e., X = ∅ ⇔ R∗ > 0.
(b) Assume X ≠ ∅. Then, for all x ∈ U, x minimizes R ⇔ x ∈ X.
Proof. Proof of (a): Suppose X ≠ ∅. Let x0 be an element of X. We
have R∗ ≥ 0 by Def. 4.3(a). In addition, we have R∗ ≤ R(x0) since
R∗ is the minimum. Then we have R∗ ≤ 0 because R(x0) = 0 due
to Def. 4.3(c). Thus R∗ = 0. Conversely, R∗ = 0 implies that there
exists an x∗ ∈ U s.t. R(x∗) = 0. By Def. 4.3(b), x∗ ∈ X. Thus X ≠ ∅.

Proof of (b): Let 0_R denote the set of the roots of R, and M_R
the set of the minimum points of R. It suffices to show that
0_R ⊆ M_R ⊆ X ⊆ 0_R under the condition X ≠ ∅. We have 0_R ⊆ M_R
because a root of R is necessarily a minimum point. We have M_R ⊆ X
because X ≠ ∅ implies R∗ = 0 by Thm. 4.5(a); taking an arbitrary
x∗ ∈ M_R, R(x∗) = R∗ = 0 holds, and therefore x∗ ∈ X by Def. 4.3(b).
Finally, we have X ⊆ 0_R from Def. 4.3(c).
Remark 4.6. An instance of Thm. 4.5(a) is shown in Tab. 1, Line 4.
There, FOO_R attains the minimum 1, which means that all branches
have been saturated (namely, X = ∅). An instance of Thm. 4.5(b) is
shown in Eq. (1). Note that Thm. 4.5(b) does not hold if we drop the
assumption X ≠ ∅. In fact, any R such that R(x) > 0 for all x is a
representing function for the search problem (∅, U), but its minimum
point, if any, can never be an element of the empty set X.
4.3 The ME Procedure
The representing function paves the way toward a generic solution
to the search problem. The key component of the ME procedure is
mathematical optimization.
Definition 4.7. Let U be a set and µ a mathematical optimization
algorithm that attempts to calculate a minimum point of an objective
function defined over U. We define the Mathematical Execution
procedure as follows:
Input: A search problem (X,U)
Output: An element of X, or “not found”
M1. Construct the representing function R.
M2. Minimize R. Let x∗ be the minimum point obtained by µ.
M3. Return x∗ if x∗ ∈ X. Otherwise return “not found”.
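The three steps admit a direct Python rendering; in the sketch below, scipy's basinhopping stands in for the backend µ, and the representing function R, the membership test, and the starting point are hypothetical parameters supplied by the caller:

from scipy.optimize import basinhopping

def me_procedure(R, member, x0, n_iter=100):
    # M1 is assumed done: R is the representing function of the problem.
    result = basinhopping(R, x0, niter=n_iter)  # M2: minimize R with backend mu
    x_star = result.x
    if member(x_star):                          # M3: check x* against X
        return x_star
    return "not found"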
The following corollary states that the ME procedure solves
the search problem, under the condition that the ME procedure is
equipped with an ideal mathematical optimization backend. Its proof
(omitted) follows from Thm. 4.5.
Figure 4: Path reachability testing via ME. The goal of this example is to find a test input that triggers the path [0T, 1T] of the program FOO.
Corollary 4.8. Let (X, U) be a search problem. Assume that µ
yields a true global minimum point in M2 of the ME procedure.
Then,
(a) the ME procedure returns an x∗ ∈ X if X ≠ ∅, and
(b) the ME procedure returns "not found" if X = ∅.
Remark 4.9. A known issue with testing is its incompleteness, i.e.,
"Testing shows the presence of bugs, but not its absence" [24]. In the
context of Mathematical Execution, this well-known remark
corresponds to the fact that, in practice, if the ME procedure returns
"not found" for the search problem (X, U), X may still be non-empty.
We call this phenomenon practical incompleteness.
The practical incompleteness occurs when the MO backend fails
to yield an accurate minimum point. To clarify, we write x∗ for an
exact global minimum point and x̂∗ for the point calculated
by the MO backend µ. We then consider four disjoint cases:
(a) R(x̂∗) = 0 and R(x∗) = 0;
(b) R(x̂∗) > 0 and R(x∗) > 0;
(c) R(x̂∗) = 0 and R(x∗) > 0;
(d) R(x̂∗) > 0 and R(x∗) = 0.
The ME procedure remains correct in both (a) and (b). Case
(c) cannot happen because R(x∗) ≤ R(x) for all x. The practical
incompleteness occurs in (d), where the ME procedure returns "not
found" even though X ≠ ∅. Sect. 6 further discusses this incompleteness.
4.4 Additional Examples

This subsection aims to show that ME is a unified approach, by
applying it to several other important search problems besides
coverage-based testing. In each example, we illustrate ME with a
different representing function.
4.4.1 Path Reachability Testing
Given a path τ of program FOO, we call path reachability testing the
search problem (X,U) with
X = {x | x triggers the path τ},U = dom(FOO).
(10)
The path reachability problem has been studied as an independent
research topic [52], or more commonly, as a subproblem in other
testing problems [30, 38, 42].
Consider the program FOO in Fig. 4 (left), which is the same as
that in Fig. 3. Suppose that we want to trigger the path τ = [0T, 1T]
(we denote a path by a sequence of branches). Our approach to this
example problem is similar to the three steps explained in Sect. 3.2,
except that we design a different representing function here. We
illustrate the ME approach in Fig. 4.

Step 1. We inject a global variable r in FOO and the assignment

r = r + d(op, a, b)    (11)
before the conditional statements, where d is the branch distance
defined in Def. 3.6. The instrumented program is shown as FOO_I in
Fig. 4. The assignment is to measure how far the input has attained
the desired path.
Step 2. The value of r is then retrieved through a driver program
FOO_R (Fig. 4), which initializes r to 0 (unlike in Sect. 3.2, where r
is initialized to 1), calls FOO_I and then returns r.
Step 3. A global minimum point of FOO_R is calculated by an
MO algorithm. As shown in the graph of FOO_R (Fig. 4), there
are three local minimum points, {−3, 1, 2}. Two of them, {−3, 1},
attain the global minimum 0, and either of them solves the path reachability
testing problem. Note that the representing function is discontinuous
at x = 1, but the MCMC procedure can easily find it (similar to
Fig. 2(b)).
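The construction can be replayed in a few lines of Python; the program below mirrors the FOO of Fig. 4 (with square(x) = x * x), and the quadratic branch distance is our assumption in the spirit of Def. 3.6:

def d(op, a, b):
    # Assumed quadratic branch distance: 0 iff `a op b` holds.
    if op == "<=":
        return 0.0 if a <= b else (a - b) ** 2
    if op == "==":
        return 0.0 if a == b else (a - b) ** 2
    raise ValueError(op)

r = 0.0

def foo_i(x):
    # FOO with the Step 1 instrumentation r = r + d(op, a, b).
    global r
    r += d("<=", x, 1.0)   # distance to taking the true branch at l0
    if x <= 1.0:
        x += 1.0
    y = x * x              # square(x)
    r += d("==", y, 4.0)   # distance to taking the true branch at l1
    if y == 4.0:
        pass               # the path [0T, 1T] is triggered when r == 0

def foo_r(x):
    # Step 2: the representing function, with r initialized to 0.
    global r
    r = 0.0
    foo_i(x)
    return r

assert foo_r(-3.0) == 0.0 and foo_r(1.0) == 0.0  # global minimum points
assert foo_r(2.0) > 0.0                          # a local minimum only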
Below we prove the correctness of the ME solution.
Corollary 4.10. An input x triggers τ iff x minimizes FOO_R.
Proof. First, it can be shown that the constructed FOO_R is a representing function for the path reachability problem; thus we can
apply Thm. 4.5. By Thm. 4.5(a), X ≠ ∅, since FOO_R attains the
global minimum 0. We then conclude from Thm. 4.5(b).
4.4.2 Boundary Value Analysis
In testing, test inputs that explore “boundary conditions” usually
have a higher payoff than those that do not [70]. The problem of
generating such inputs is expressed abstractly as boundary value
analysis [39, 54, 58], which can be seen as the search problem with
X = {x | x triggers a boundary condition},U = dom(FOO). (12)
Consider again the program FOO in Fig. 4. The boundary value
analysis is to find test inputs that trigger a boundary condition (a)
x = 1 at l0 , or (b) y = 4 at l1 . With manual reasoning, condition (a)
can only be triggered by input 1. Condition (b) can be triggered
if x = 2 or x = −2 before l1 . For each case, we reason backward
across the two branches at l0 and then merge the results. Eventually
we have {−3, 1, 2} as the test inputs to generate.
Figure 5: Boundary value analysis via ME. The goal is to find a test input that triggers a boundary condition, namely, (a) x = 1 at l0 or (b) y = 4 at l1 of the program FOO in Fig. 4.
Our ME solution is as follows. As before, we introduce a global
variable r to estimate how far a program input is from triggering a
boundary condition. We inject the assignment
r = r ∗ d(==, a, b)
(13)
before each condition a op b, where function d is the branch distance
defined in Def. 3.6. Fig. 5 illustrates FOO_I and FOO_R. Then we can
solve the boundary value analysis problem via minimizing FOO_R.
The correctness of this procedure follows:
Corollary 4.11. An input x triggers a boundary condition if and
only if x minimizes FOO_R.
Proof. It can be shown that (1) r is always assigned to a non-negative
value; (2) if r = 0 at the end of the program execution, then one
of the boundary conditions must hold; and (3) r = 0 at the end of
the program execution if at least one of the boundary conditions
holds. Thus, the constructed FOO_R satisfies the conditions for being
a representing function (Def. 4.3). We can then prove the corollary
following Thm. 4.5 (in the same way as we prove Cor. 4.10).
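The same program can be instrumented for boundary value analysis in the hedged style of the path reachability sketch above; again the quadratic d(==, a, b) is our assumption for Def. 3.6:

def d_eq(a, b):
    # Assumed branch distance for ==: 0 iff a == b.
    return 0.0 if a == b else (a - b) ** 2

r = 1.0

def foo_i(x):
    # FOO with the boundary-value instrumentation r = r * d(==, a, b).
    global r
    r *= d_eq(x, 1.0)   # boundary (a): x = 1 at l0
    if x <= 1.0:
        x += 1.0
    y = x * x
    r *= d_eq(y, 4.0)   # boundary (b): y = 4 at l1
    if y == 4.0:
        pass

def foo_r(x):
    # The representing function, with r initialized to r0 = 1.
    global r
    r = 1.0
    foo_i(x)
    return r

assert all(foo_r(x) == 0.0 for x in (-3.0, 1.0, 2.0))  # the inputs from Sect. 4.4.2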
4.4.3 Satisfiability Checking
Consider the satisfiability problem with the constraint

π = 2^x ≤ 5 ∧ x^2 ≥ 5 ∧ x ≥ 0,    (14)

where x ∈ ℝ. We write x∗ |= π to mean that π becomes a tautology
by substituting its free variable x with x∗. This satisfiability
problem can then be expressed as the search problem ({x∗ | x∗ |= π}, ℝ).
Its representing function can be defined as (Fig. 6):

R = λx. ((2^x ≤ 5) ? 0 : (2^x − 5)^2) + ((x^2 ≥ 5) ? 0 : (x^2 − 5)^2) + ((x ≥ 0) ? 0 : x^2).    (15)

Then we can solve the satisfiability checking problem of the constraint
π in Eq. (14) by minimizing its representing function R in Eq. (15).
The ME procedure can easily locate the minimum points, which lie
between √5 (≈ 2.24) and log2 5 (≈ 2.32). Each of the minimum points is
necessarily a model of π following Thm. 4.5. The correctness of
such an approach follows:
Corollary 4.12. x∗ |= π ⇔ x∗ minimizes R.
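For this small constraint, the entire ME pipeline fits in a few lines of Python; the sketch below minimizes the R of Eq. (15) with scipy's basinhopping (the same backend CoverMe uses), though the starting point and the iteration count are arbitrary choices of ours:

import numpy as np
from scipy.optimize import basinhopping

def R(x):
    # Representing function of Eq. (15) for pi = 2^x <= 5 and x^2 >= 5 and x >= 0.
    x = float(np.atleast_1d(x)[0])
    two_x = float(np.exp2(x))
    r = 0.0 if two_x <= 5.0 else (two_x - 5.0) ** 2
    r += 0.0 if x * x >= 5.0 else (x * x - 5.0) ** 2
    r += 0.0 if x >= 0.0 else x * x
    return r

res = basinhopping(R, x0=1.0, niter=200)
x_star = float(res.x[0])
# x_star should land in [sqrt(5), log2(5)], roughly [2.236, 2.322], where R
# vanishes; by Thm. 4.5(b) such a point is a model of pi.
print(x_star, R(x_star))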
Remark 4.13. The representing function in Eq. (15) is defined on
ℝ. It illustrates the concept and allows us to ignore issues of
floating-point inaccuracy. A more realistic representing function for
satisfiability checking has been shown in a recent work [28], where
the representing function is constructed based on the binary forms
of floating-point numbers to avoid floating-point arithmetic in the
first place.

Figure 6: Satisfiability checking via ME. The goal of this example is to find a model of π defined in Eq. (14). The curve depicts the representing function defined in Eq. (15).
5. Evaluation

5.1 The ME-powered System CoverMe
We have implemented CoverMe, a proof-of-concept realization for
branch coverage based testing. CoverMe has a modular design and
can be adapted to other automated testing problems, such as path
reachability and boundary value analysis (Sect. 4.4). This section
presents the architecture of CoverMe (Fig. 7), in which we identify
three layers and their corresponding use cases. Implementation
details are given in Appendix B.
Client. This layer provides the program under test FOO and specifies the set X of inputs to generate. The search problem involved will be
(X, dom(FOO)). X is usually implicit, e.g., specified via a path to
trigger as in path reachability testing (Sect. 4.4.1), and checking
x ∈ X has to be feasible following Def. 4.1. FOO is a piece of code
in the LLVM intermediate representation [44] or in any language
that can be transformed into it (such as code in Ada, the C/C++ language family, or Julia). The current CoverMe has been tested on
C, and we require dom(FOO) ⊆ ℝᴺ, as the system backend outputs
floating-point test data by default (see the ME Kernel layer below).
Researcher. This layer sets two parameters. One is the initial
value of the representing function, denoted by r0 . It is usually
either 0 or 1 from our experience. For example, r0 is set to 0 in
path reachability testing (Sect. 4.4.1), and 1 in both coverage-based
testing (Sect. 3.2) and boundary value analysis (Sect. 4.4.2). The
other parameter is the assignment to inject before each conditional
statement in FOO. In practice, the Researcher specifies this code
segment in the pen procedure and injects r = pen, as shown in
Sect. 3.
ME Kernel. This layer takes as inputs the program FOO provided
by the Client, r0 and pen set by the Researcher, and operates the
three steps described in Sect. 3.2. It (1) uses Clang [2] to inject r
and pen into FOO to construct FOO_I, (2) initializes r to r0 , invokes
FOO_I and returns r in FOO_R, and (3) minimizes FOO_R with MO.
As mentioned in Sect. 3.3, CoverMe uses Basinhopping, an off-the-shelf implementation from the SciPy optimization package [7], as the
MO backend. Basinhopping is then launched from different starting
points as shown in Algo. 1, Lines 8-12. These starting points are
randomly generated from the Hypothesis library [5].
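The main loop of the ME kernel can be pictured with the following Python sketch, where foo_r stands for the compiled representing function and the uniform starting points play the role of those drawn from Hypothesis; the domain bounds are placeholder assumptions:

import random

from scipy.optimize import basinhopping

def me_kernel(foo_r, n_start=500, n_iter=5, lo=-1e6, hi=1e6):
    # Sketch of Algo. 1, Lines 8-13: repeated Basinhopping from random starts.
    X = set()
    for _ in range(n_start):
        sp = random.uniform(lo, hi)              # Line 9
        res = basinhopping(foo_r, sp, niter=n_iter,
                           minimizer_kwargs={"method": "Powell"})
        x_star = float(res.x[0])                 # Line 10
        if foo_r(x_star) == 0.0:                 # Line 11
            X.add(x_star)
    return X                                     # Line 13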
5.2 Experimental Setup
This section discusses the benchmarks, the tools for comparison, the
settings and hardware, and two evaluation objectives.
Figure 7: Architecture of CoverMe. Each layer is associated with typical use cases. FOO: program under test; X: test inputs to generate; pen: procedure that updates r; r0: initial value of the representing function; FOO_I: instrumented program; FOO_R: representing function.

Benchmarks. We use the C math library Fdlibm 5.3 [3] as our
benchmark. These programs are developed by Sun (now Oracle).
They are real-world programs, rich in floating-point operations, and
have been used in Matlab, Java, JavaScript and Android.
Fdlibm includes 80 programs. Each has one or multiple entry
functions. In total, Fdlibm has 92 entry functions. Among them, we
exclude (1) 36 functions that do not have branches, (2) 11 functions
involving non-floating-point input parameters, and (3) 5 static C
functions. Our benchmark suite includes all remaining 40 functions
in Fdlibm. For completeness, we list the untested functions and the reasons
why they are not selected in Appendix A.

Compared Tools. We have considered the tools publicly available
to us, including Austin [43], Pex [68], and Klee [15] with its
two variants, namely Klee-Multisolver [57] and Klee-FP [18]. We
tried Klee-Multisolver with Z3 [23] as the SMT backend but
found that the expression language of Klee does not support
floating-point constraints. Besides, some common operations in our
benchmark programs, such as pointer reference and dereference,
type casting, and external function calls, are not supported by Z3
or any other backend solvers compatible with Klee-Multisolver.
Klee-FP supports symbolic reasoning on the equivalence between
floating-point values but does not support coverage testing [18]. Pex,
unfortunately, can only run on .NET programs on Windows, whereas
Fdlibm is in C and our testing platform is Unix-based.
To the best of our knowledge, Austin [43] is the only publicly available
tool supporting coverage testing for Fdlibm. Austin combines
symbolic execution and search-based heuristics, and has been
thoroughly tested for branch coverage based testing on floating-point
programs [41]. For empirical comparison, we have also implemented
a random sampling tool, which samples inputs from the function's
input domain using a standard pseudo-random number generator.
We refer to this tool as Rand.

Settings and Hardware. CoverMe supports the following command-line options while generating test inputs for floating-point
programs: (1) the number of Monte Carlo iterations n_iter, (2) the
local optimization algorithm LM, and (3) the number of starting points
n_start. These options correspond to the three input parameters in
Algo. 1. We set n_iter = 5, n_start = 500, and LM = "powell", which
refers to Powell's local optimization algorithm [59].
For both Austin and CoverMe, we use the default settings when
running the benchmarks. All experiments were performed on a
laptop with a 2.6 GHz Intel Core i7 and 4 GB RAM running an Ubuntu
14.04 virtual machine.

Evaluation Objectives. There are two specific evaluation objectives. (1) Coverage: We use the standard GNU coverage tool Gcov [4]
to analyze the coverage. Gcov generates four metrics for source
code coverage analysis, listed as "Lines", "Conditions",
"Branches" and "Calls" in Col. 1 of Tab. 2; Col. 2-4 give the corresponding
Gcov report message, the description, and the general
metrics for coverage testing. (2) Efficiency: We measure the wall
time reported by the standard Unix command "time". The timeout
limit for the compared tools is 48 hours.

Table 2: Gcov metrics and explanations

Metrics    | Gcov message                 | Description                                                      | Note
Line%      | Lines executed               | Covered source lines over total source lines                     | a.k.a. line or statement coverage
Condition% | Branches executed            | Covered conditional statements over total conditional statements |
Branch%    | Branches taken at least once | Covered branches over total branches                             | a.k.a. branch coverage
Call%      | Calls executed               | Covered calls over total calls                                   |

5.3 Quantitative Results

This subsection presents two sets of experimental results. The first
validates our approach by comparing it against random testing. The
second compares CoverMe with Austin [43], an open-source tool
that combines symbolic execution and search-based strategies.

5.3.1 CoverMe versus Random Testing

We have compared CoverMe with Rand by running them on the
benchmarks described in Sect. 5.2. In Tab. 3, we sort all benchmark
programs (Col. 1) and entry functions (Col. 2) by their names and give
the numbers of source lines, branches and invoked functions (Col. 3-5).
All coverage results are given by Gcov. It can be seen that the
programs are rich in branches. The largest number of branches is
114 (ieee754_pow). Notably, some tested functions have more
branches than lines. For example, ieee754_atan2 has 44 branches
but only 39 lines; ceil has 30 branches but only 29 lines.
The times used by CoverMe are given in Col. 6 of Tab. 3.
Observe that they vary considerably across the entry functions,
from 0.1 to 22.1 seconds, but all stay under half a
minute. We find that the numbers of lines (Col. 3) and branches (Col. 4)
are not correlated with the running times (Col. 6). CoverMe takes
1.1 seconds to run the function expm1 (with 42 branches and 56
lines) and 10.1 seconds to run the function floor (with 30 branches
and 30 lines). This shows potential for real-world program testing,
since CoverMe may not be very sensitive to the number of lines or
branches. We set the timeout limit of Rand to 600 seconds, since
Rand does not terminate by itself and 600 seconds is already larger
than the times spent by CoverMe by orders of magnitude.
Col. 7-14 of Tab. 3 show the coverage results of Rand and
CoverMe. We use Gcov to compute four metrics: lines, conditions,
branches and calls (as defined in Tab. 2). All coverage results
reported by Gcov have been included in Tab. 3 for completeness.
The columns "Line" and "Branch" refer to the commonly used
line/statement coverage and branch coverage, respectively. The
average values of the coverage are shown in the last row of the
table. All values in Col. 8, 10, 12 and 14 are larger than or equal
to the corresponding values in Col. 7, 9, 11 and 13. This means that
CoverMe achieves higher coverage than Rand for every benchmark
program and every metric. For the line and condition coverage,
CoverMe achieves full coverage for almost all benchmarks, with
97.0% line coverage and 98.2% condition coverage on average,
whereas Rand achieves 54.2% and 61.2% for the two metrics. For
the most important branch coverage (since it is a primary target
of this paper), CoverMe achieves 100% coverage for 11 out of 40
benchmarks with an average of 90.8% coverage, while Rand does
not achieve any 100% coverage and attains only 38.0% coverage on
average. CoverMe achieves 100% call coverage on average, whereas
Rand achieves 39.5%.
Table 3: Comparing Random testing and CoverMe. The benchmark programs are taken from Fdlibm [3]. We calculate the coverage percentage using Gcov [4]. The metrics of Gcov are explained in Tab. 2. Coverage entries are given as Rand/CoverMe pairs; "n/a" in the call coverage column indicates that no function call exists in the program, and entries with "n/a" are excluded when calculating the mean of that column.

Program       | Entry function                    | #Line | #Branch | #Call | Time (s) | Line%       | Condition%  | Branch%     | Call%
e_acos.c      | ieee754_acos(double)              | 33    | 12      | 0     | 7.8      | 18.2/100.0  | 33.3/100.0  | 16.7/100.0  | n/a
e_acosh.c     | ieee754_acosh(double)             | 15    | 10      | 2     | 2.3      | 46.7/93.3   | 60.0/100.0  | 40.0/90.0   | 50.0/100.0
e_asin.c      | ieee754_asin(double)              | 31    | 14      | 0     | 8.0      | 19.4/100.0  | 28.6/100.0  | 14.3/92.9   | n/a
e_atan2.c     | ieee754_atan2(double, double)     | 39    | 44      | 0     | 17.4     | 59.0/79.5   | 54.6/84.1   | 34.1/63.6   | n/a
e_atanh.c     | ieee754_atanh(double)             | 15    | 12      | 0     | 8.1      | 40.0/100.0  | 16.7/100.0  | 8.8/91.7    | n/a
e_cosh.c      | ieee754_cosh(double)              | 20    | 16      | 3     | 8.2      | 50.0/100.0  | 75.0/100.0  | 37.5/93.8   | 0.0/100.0
e_exp.c       | ieee754_exp(double)               | 31    | 24      | 0     | 8.4      | 25.8/96.8   | 33.3/100.0  | 20.8/96.7   | n/a
e_fmod.c      | ieee754_fmod(double, double)      | 70    | 60      | 0     | 22.1     | 54.3/77.1   | 66.7/80.0   | 48.3/70.0   | n/a
e_hypot.c     | ieee754_hypot(double, double)     | 50    | 22      | 0     | 15.6     | 66.0/100.0  | 63.6/100.0  | 40.9/90.9   | n/a
e_j0.c        | ieee754_j0(double)                | 29    | 18      | 2     | 9.0      | 55.2/100.0  | 55.6/100.0  | 33.3/94.4   | 0.0/100.0
e_j0.c        | ieee754_y0(double)                | 26    | 16      | 5     | 0.7      | 69.2/100.0  | 87.5/100.0  | 56.3/100.0  | 0.0/100.0
e_j1.c        | ieee754_j1(double)                | 26    | 16      | 2     | 10.2     | 65.4/100.0  | 75.0/100.0  | 50.0/93.8   | 0.0/100.0
e_j1.c        | ieee754_y1(double)                | 26    | 16      | 4     | 0.7      | 69.2/100.0  | 87.5/100.0  | 56.3/100.0  | 0.0/100.0
e_log.c       | ieee754_log(double)               | 39    | 22      | 0     | 3.4      | 87.7/100.0  | 90.9/100.0  | 59.1/90.9   | n/a
e_log10.c     | ieee754_log10(double)             | 18    | 8       | 1     | 1.1      | 83.3/100.0  | 100.0/100.0 | 62.5/87.5   | 100.0/100.0
e_pow.c       | ieee754_pow(double, double)       | 139   | 114     | 0     | 18.8     | 15.8/92.7   | 28.1/92.7   | 15.8/81.6   | n/a
e_rem_pio2.c  | ieee754_rem_pio2(double, double*) | 64    | 30      | 1     | 1.1      | 29.7/92.2   | 46.7/100.0  | 33.3/93.3   | 100.0/100.0
e_remainder.c | ieee754_remainder(double, double) | 27    | 22      | 1     | 2.2      | 77.8/100.0  | 72.7/100.0  | 45.5/100.0  | 100.0/100.0
e_scalb.c     | ieee754_scalb(double, double)     | 9     | 14      | 0     | 8.5      | 66.7/100.0  | 85.7/100.0  | 50.0/92.9   | n/a
e_sinh.c      | ieee754_sinh(double)              | 19    | 20      | 2     | 0.6      | 57.9/100.0  | 60.0/100.0  | 35.0/95.0   | 0.0/100.0
e_sqrt.c      | ieee754_sqrt(double)              | 68    | 46      | 0     | 15.6     | 85.3/94.1   | 87.0/92.7   | 69.6/82.6   | n/a
k_cos.c       | kernel_cos(double, double)        | 15    | 8       | 0     | 15.4     | 73.3/100.0  | 75.0/100.0  | 37.5/87.5   | n/a
s_asinh.c     | asinh(double)                     | 14    | 12      | 2     | 8.4      | 57.1/100.0  | 66.7/100.0  | 41.7/91.7   | 50.0/100.0
s_atan.c      | atan(double)                      | 28    | 26      | 0     | 8.5      | 25.0/96.4   | 30.8/100.0  | 19.2/88.5   | n/a
s_cbrt.c      | cbrt(double)                      | 24    | 6       | 0     | 0.4      | 87.5/91.7   | 100.0/100.0 | 50.0/83.3   | n/a
s_ceil.c      | ceil(double)                      | 29    | 30      | 0     | 8.8      | 27.6/100.0  | 20.0/100.0  | 10.0/83.3   | n/a
s_cos.c       | cos(double)                       | 12    | 8       | 6     | 0.4      | 100.0/100.0 | 100.0/100.0 | 75.0/100.0  | 83.3/100.0
s_erf.c       | erf(double)                       | 38    | 20      | 2     | 9.0      | 21.1/100.0  | 50.0/100.0  | 30.0/100.0  | 0.0/100.0
s_erf.c       | erfc(double)                      | 43    | 24      | 2     | 0.1      | 18.6/100.0  | 41.7/100.0  | 25.0/100.0  | 0.0/100.0
s_expm1.c     | expm1(double)                     | 56    | 42      | 0     | 1.1      | 21.4/100.0  | 33.3/100.0  | 21.4/97.6   | n/a
s_floor.c     | floor(double)                     | 30    | 30      | 0     | 10.1     | 26.7/100.0  | 20.0/100.0  | 10.0/83.3   | n/a
s_ilogb.c     | ilogb(double)                     | 12    | 12      | 0     | 8.3      | 33.3/91.7   | 33.3/83.3   | 16.7/75.0   | n/a
s_log1p.c     | log1p(double)                     | 46    | 36      | 0     | 9.9      | 71.7/100.0  | 61.1/100.0  | 38.9/88.9   | n/a
s_logb.c      | logb(double)                      | 8     | 6       | 0     | 0.3      | 87.5/87.5   | 100.0/100.0 | 50.0/83.3   | n/a
s_modf.c      | modf(double, double*)             | 32    | 10      | 0     | 3.5      | 31.2/100.0  | 46.7/100.0  | 33.3/100.0  | n/a
s_nextafter.c | nextafter(double, double)         | 36    | 44      | 0     | 17.5     | 72.2/88.9   | 81.8/95.5   | 59.1/79.6   | n/a
s_rint.c      | rint(double)                      | 34    | 20      | 0     | 3.0      | 26.5/100.0  | 30.0/100.0  | 15.0/90.0   | n/a
s_sin.c       | sin(double)                       | 12    | 8       | 6     | 0.3      | 100.0/100.0 | 100.0/100.0 | 75.0/100.0  | 83.3/100.0
s_tan.c       | tan(double)                       | 6     | 4       | 3     | 0.3      | 100.0/100.0 | 100.0/100.0 | 50.0/100.0  | 66.7/100.0
s_tanh.c      | tanh(double)                      | 16    | 12      | 0     | 0.7      | 43.8/100.0  | 50.0/100.0  | 33.3/100.0  | n/a
MEAN          |                                   |       |         |       | 6.9      | 54.2/97.0   | 61.2/98.2   | 38.0/90.8   | 39.6/100.0

5.3.2 CoverMe versus Austin

Tab. 4 reports the testing results of Austin and CoverMe. It uses
the same set of benchmarks as Tab. 3 (Col. 1-2). We use the time
(Col. 3-4) and the branch coverage metric (Col. 5-6) to evaluate the
efficiency and the coverage. We choose branch coverage instead of
the other three metrics in Gcov since branch coverage is the major
concern in this paper. Besides, Gcov needs to have access to the
generated test inputs to report the coverage, but currently, there is
no viable way to access the test inputs generated by Austin. Unlike
Rand, the branch coverage percentage of Austin (Col. 5) is provided
by Austin itself, rather than by Gcov.
Austin also shows large performance variances over different
benchmarks, from 667.1 seconds (the program sin) to hours. As
shown in the last row of Tab. 4, Austin needs 6058.4 seconds on
average for the testing. It should be mentioned that the average time
does not include the benchmarks where Austin crashes or times out.
Compared with Austin, CoverMe is faster (Tab. 4, Col. 4) with 6.9
seconds on average (the results are also shown in Tab. 3, Col. 6).
CoverMe achieves a much higher branch coverage (90.8%) than
Austin (42.8%). We also compare across Tab. 4 and Tab. 3. On
average, Austin provides slightly better branch coverage (42.8%)
than Rand (38.0%).
Col. 7-8 are the improvement metrics of CoverMe over Austin.
The speedup (Col. 7) is calculated as the ratio of the time spent by
Austin to the time spent by CoverMe. The coverage improvement
(Col. 8) is calculated as the difference between the branch coverage
of CoverMe and that of Austin. We observe that CoverMe provides
3,868X speedup and 48.9% coverage improvement on average.
Table 4: Comparison of Austin [43] and CoverMe. The benchmark programs are taken from the Fdlibm library [3]. "n/a" indicates that Austin crashed or timed out on the benchmark; such rows are excluded from the mean values.

Program       | Entry function                    | Time (s) Austin | Time (s) CoverMe | Branch% Austin | Branch% CoverMe | Speedup | Coverage improvement (%)
e_acos.c      | ieee754_acos(double)              | 6058.8  | 7.8  | 16.7 | 100.0 | 776.4   | 83.3
e_acosh.c     | ieee754_acosh(double)             | 2016.4  | 2.3  | 40.0 | 90.0  | 887.5   | 50.0
e_asin.c      | ieee754_asin(double)              | 6935.6  | 8.0  | 14.3 | 92.9  | 867.0   | 78.6
e_atan2.c     | ieee754_atan2(double, double)     | 14456.0 | 17.4 | 34.1 | 63.6  | 831.2   | 29.6
e_atanh.c     | ieee754_atanh(double)             | 4033.8  | 8.1  | 8.3  | 91.7  | 495.4   | 83.3
e_cosh.c      | ieee754_cosh(double)              | 27334.5 | 8.2  | 37.5 | 93.8  | 3327.7  | 56.3
e_exp.c       | ieee754_exp(double)               | 2952.1  | 8.4  | 75.0 | 96.7  | 349.7   | 21.7
e_fmod.c      | ieee754_fmod(double, double)      | timeout | 22.1 | n/a  | 70.0  | n/a     | n/a
e_hypot.c     | ieee754_hypot(double, double)     | 5456.8  | 15.6 | 36.4 | 90.9  | 350.9   | 54.6
e_j0.c        | ieee754_j0(double)                | 6973.0  | 9.0  | 33.3 | 94.4  | 776.5   | 61.1
e_j0.c        | ieee754_y0(double)                | 5838.3  | 0.7  | 56.3 | 100.0 | 8243.5  | 43.8
e_j1.c        | ieee754_j1(double)                | 4131.6  | 10.2 | 50.0 | 93.8  | 403.9   | 43.8
e_j1.c        | ieee754_y1(double)                | 5701.7  | 0.7  | 56.3 | 100.0 | 8411.0  | 43.8
e_log.c       | ieee754_log(double)               | 5109.0  | 3.4  | 59.1 | 90.9  | 1481.9  | 31.8
e_log10.c     | ieee754_log10(double)             | 1175.5  | 1.1  | 62.5 | 87.5  | 1061.3  | 25.0
e_pow.c       | ieee754_pow(double, double)       | timeout | 18.8 | n/a  | 81.6  | n/a     | n/a
e_rem_pio2.c  | ieee754_rem_pio2(double, double*) | timeout | 1.1  | n/a  | 93.3  | n/a     | n/a
e_remainder.c | ieee754_remainder(double, double) | 4629.0  | 2.2  | 45.5 | 100.0 | 2146.5  | 54.6
e_scalb.c     | ieee754_scalb(double, double)     | 1989.8  | 8.5  | 57.1 | 92.9  | 233.8   | 35.7
e_sinh.c      | ieee754_sinh(double)              | 5534.8  | 0.6  | 35.0 | 95.0  | 9695.9  | 60.0
e_sqrt.c      | ieee754_sqrt(double)              | crash   | 15.6 | n/a  | 82.6  | n/a     | n/a
k_cos.c       | kernel_cos(double, double)        | 1885.1  | 15.4 | 37.5 | 87.5  | 122.6   | 50.0
s_asinh.c     | asinh(double)                     | 2439.1  | 8.4  | 41.7 | 91.7  | 290.8   | 50.0
s_atan.c      | atan(double)                      | 7584.7  | 8.5  | 26.9 | 88.5  | 890.6   | 61.6
s_cbrt.c      | cbrt(double)                      | 3583.4  | 0.4  | 50.0 | 83.3  | 9109.4  | 33.3
s_ceil.c      | ceil(double)                      | 7166.3  | 8.8  | 36.7 | 83.3  | 812.3   | 46.7
s_cos.c       | cos(double)                       | 669.4   | 0.4  | 75.0 | 100.0 | 1601.6  | 25.0
s_erf.c       | erf(double)                       | 28419.8 | 9.0  | 30.0 | 100.0 | 3166.8  | 70.0
s_erf.c       | erfc(double)                      | 6611.8  | 0.1  | 25.0 | 100.0 | 62020.9 | 75.0
s_expm1.c     | expm1(double)                     | timeout | 1.1  | n/a  | 97.6  | n/a     | n/a
s_floor.c     | floor(double)                     | 7620.6  | 10.1 | 36.7 | 83.3  | 757.8   | 46.7
s_ilogb.c     | ilogb(double)                     | 3654.7  | 8.3  | 16.7 | 75.0  | 438.7   | 58.3
s_log1p.c     | log1p(double)                     | 11913.7 | 9.9  | 61.1 | 88.9  | 1205.7  | 27.8
s_logb.c      | logb(double)                      | 1064.4  | 0.3  | 50.0 | 83.3  | 3131.8  | 33.3
s_modf.c      | modf(double, double*)             | 1795.1  | 3.5  | 50.0 | 100.0 | 507.0   | 50.0
s_nextafter.c | nextafter(double, double)         | 7777.3  | 17.5 | 50.0 | 79.6  | 445.4   | 29.6
s_rint.c      | rint(double)                      | 5355.8  | 3.0  | 35.0 | 90.0  | 1808.3  | 55.0
s_sin.c       | sin(double)                       | 667.1   | 0.3  | 75.0 | 100.0 | 1951.4  | 25.0
s_tan.c       | tan(double)                       | 704.2   | 0.3  | 50.0 | 100.0 | 2701.9  | 50.0
s_tanh.c      | tanh(double)                      | 2805.5  | 0.7  | 33.3 | 100.0 | 4075.0  | 66.7
MEAN          |                                   | 6058.4  | 6.9  | 42.8 | 90.8  | 3868.0  | 48.9

Remark 5.1. Our evaluation shows that CoverMe has achieved
high code coverage on most tested programs. One may wonder
whether the generated inputs have triggered any latent bugs. Note
that when no specifications are given, program crashes have been
frequently used as an oracle for finding bugs in integer programs.
Floating-point programs, on the other hand, can silently produce
wrong results without crashing. Thus, program crashes cannot be
used as a simple, readily available oracle as for integer programs.
Our experiments, therefore, have focused on assessing the efficiency
of ME in solving the problem defined in Def. 3.1 or Lem. 3.3, and
do not evaluate its effectiveness in finding bugs, which is orthogonal
and interesting future work.
6. Incompleteness
The development of ME toward a general solution to the search
problem, as illustrated in this presentation, has been grounded in the
concept of representing function and mathematical optimization.
While our theory guarantees that the ME procedure solves the
search problem correctly (Cor. 4.8), the practical incompleteness
(Remark 4.9) remains a challenge in applying ME to real-world
automated testing problems. Below we discuss three sources of
practical incompleteness and how it can be mitigated.
Well-behaved Representing Functions. The well-behavedness
of the representing function is essential for ME to work.
Well-behavedness is a common term in the MO literature indicating
that the MO backend under discussion can efficiently handle the
objective function. A common requirement, for example, is that the
objective function (or the representing function for ME) should be
smooth to some degree.
Consider again the path reachability problem in Sect. 4.4.1. If
FOO_I is instrumented as in Fig. 8 (left), the corresponding FOO_R
also satisfies the conditions in Def. 4.3, but this FOO_R is discontinuous at {−3, 1, 2} (Fig. 8 (right)): the representing function changes
its value abruptly at its minimum points. These minimum points
cannot be calculated by MO tools. The example also shows that
the conditions in Def. 4.3(a-c) can be insufficient for avoiding ill-behaved representing functions. However, it is unclear to us whether
there exists a set of conditions that are both easy to verify and
sufficient for excluding ill-behaved functions.
void FOO_I(double x) {
    r = r + ((x <= 1) ? 0 : 1);
l0: if (x <= 1) x++;
    double y = square(x);
    r = r + ((y == 4) ? 0 : 1);
l1: if (y == 4) ...;
}

Figure 8: An example of an ill-behaved representing function for the path
reachability problem in Sect. 4.4.1. The left is the instrumented FOO_I; the
right is the graph of the representing function FOO_R.
Reliance on Program Execution. We have seen that the ME
procedure generates test inputs of program FOO by minimizing, and
therefore, running, another program FOO_R. This execution-reliant
feature has both benefits and risks. It allows us to generate test inputs
of complex programs without analyzing their semantics; it also
means that ME can give different results with different compilers or
machines.
In our experiments, we found that this feature becomes disadvantageous if inaccuracy occurs in the program execution. Consider
the satisfiability testing problem with the constraint π = x ≥ 1E-200.
Suppose we use the representing function Rπ(x) = x ≥ 1E-200 ? 0 :
(x − 1E-200)^2 and implement Rπ as a double-precision floating-point program. We can verify that Rπ is a representing function
in the sense of Def. 4.3, but it evaluates to 0 not only when
x ≥ 1E-200 but also when 0 ≤ x < 1E-200, because the squared
difference drops below the smallest machine-representable double,
which is on the order of 1E-324 [32]. Then
the ME procedure may return x∗ = 0 in step M2 and then return
"not found" in step M3 because x∗ = 0 is not a model of π.
The issue described above may be mitigated using arbitrary-precision arithmetic [10]. Another option, as demonstrated in the
XSat solver [28], is to construct the representing function based on
ULP, the unit in the last place [32], which avoids floating-point
inaccuracy by using the binary representation of floating-point numbers.
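A hedged sketch of the ULP idea: reinterpret a double's bits as a sign-adjusted integer, so that the integer difference counts how many floating-point numbers lie between two values. The helper below is our illustration of the principle, not XSat's actual construction:

import struct

def to_ordered_int(x):
    # Map a double to an integer so that float order matches integer order.
    (i,) = struct.unpack("<q", struct.pack("<d", x))
    return i if i >= 0 else -(i & 0x7FFFFFFFFFFFFFFF)

def ulp_distance(a, b):
    # Number of representable doubles between a and b.
    return abs(to_ordered_int(a) - to_ordered_int(b))

# Unlike the squared difference, which can underflow to 0, the ULP distance
# separates 0.0 from a tiny positive constant sharply:
print(ulp_distance(0.0, 1e-200))  # a huge integer rather than a tiny float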
Efficient MO Backend. The inherent intractability of global optimization is another source of the incompleteness of ME. Even
if the representing function is well-behaved and free from computational inaccuracy, it is possible that the MO backend returns
sub-optimal or local minimum points due to a weak implementation,
high-dimensional or intrinsically difficult problems, etc. That said,
MO is still in active development, and our ME approach has the
flexibility to leverage the state of the art, such as the Basinhopping
algorithm used in our experiments.
7. Related Work
Automated Testing. Many automated testing approaches adopt
symbolic execution [12, 14, 17, 36]. They repeatedly select a target
path τ and gather the conjunction of logic conditions along the
path, denoted by Φτ (called path condition [36]). They then use
SMT solvers to calculate a model of Φτ . These approaches are both
sound and complete in the sense of FOO(x) ⇓ wrong ⇔ x |= Φτ .
Symbolic execution and its variants have seen much progress since
the breakthrough of SAT/SMT [55, 65], but still have difficulties
in handling numerical code. There are two well-known efficiency
issues. First, path explosion makes it time-consuming to select a
large number of τ and gather Φτ . Second, for numerical code, each
gathered Φτ involves numerical constraints that can quickly go
beyond the capabilities of modern SAT/SMT solvers.
A large number of search-based heuristics have been proposed
to mitigate these issues with symbolic execution. They use fitness functions to capture path conditions and use numerical optimization to
minimize/maximize the fitness functions. A well-developed search-based testing tool is Austin [43], which combines symbolic execution and
search-based heuristics. Austin has been compared with Cute [64],
a dynamic symbolic execution tool, and is shown to be significantly
more efficient. These search-based solutions, however, as McMinn
points out in his survey paper [49], "are not standalone algorithms
in themselves, but rather strategies ready for adaption to specific
problems."
Mathematical Optimization in Testing and Verification. In the
seminal work of Miller and Spooner [52], optimization methods are already
used in generating test data for numerical code. These methods were
taken up again in the 1990s by Korel [26, 38] and have found their
way into many mature implementations [42, 66, 68]. Mathematical
optimization has also been employed in program verification [19,
61]. Liberti et al. [33, 47] have proposed to calculate the invariant
of an integer program as the mathematical optimization problem of
mixed integer nonlinear programming (MINLP) [67]. Recently, a
floating-point satisfiability solver XSat [28] has been developed.
It constructs a floating-point program Rπ from a formula π in
conjunctive normal form, and then decides the satisfiability of π by
checking the sign of the minimum of Rπ . This decision procedure
is an application of Thm. 4.5(a), and it calculates the models of
π using Thm. 4.5(b). Compared with XSat, our work lays out the
theoretical foundation for a precise, systematic approach to testing
numerical code; XSat is an instance of our proposed ME procedure
(see Example 4.2(b) and Sect. 4.4.3).
Two Paradigms for Program Correctness. Constructing an axiomatic system [27, 35] is of central importance in ensuring program
correctness. Let FOO be a program. If we write [|FOO|] for the set of
FOO's possible execution paths (a.k.a. trace semantics [48]) and ℰ
for the unsafe paths, then the correctness of FOO is to ensure

[|FOO|] ∩ ℰ = ∅.    (16)

The problem is known to be undecidable, and approximate solutions
have been extensively studied. One is abstract interpretation [20, 21],
which systematically constructs [|FOO|]♯ ⊇ [|FOO|] and proves Eq. (16)
by proving [|FOO|]♯ ∩ ℰ = ∅.² Another category of approximation
is automated testing. It attempts to disprove Eq. (16) by generating
inputs x such that [|FOO|](x) ∈ ℰ, where [|FOO|](x) denotes the path
of executing FOO with input x.

² The relationship between the abstract and concrete semantics is more
commonly formalized with a Galois connection [20]; we use [|FOO|]♯ ⊇ [|FOO|]
as a simplified case.

8. Conclusion
This paper introduces Mathematical Execution (ME), a new, unified
approach for testing numerical code. Our insight is to (1) use a
representing function to precisely and uniformly capture the desired
testing objective, and (2) use mathematical optimization to direct
the exploration of the input and program space. What ME provides, through
the accompanying representing function, is an approach that can handle
a variety of automated testing problems, including coverage-based testing,
path reachability testing, boundary value analysis, and satisfiability
checking. We have implemented a branch coverage testing tool as a
proof-of-concept demonstration of ME's potential. Evaluated on a
collection of programs from Sun's math library Fdlibm (used in Java,
JavaScript, Matlab, and Android), our tool CoverMe achieves substantially
better coverage results (near-optimal coverage on all tested programs)
than random testing and Austin (a coverage-based testing tool that
combines symbolic execution and search-based strategies).
References
[1] American fuzzy lop. http://lcamtuf.coredump.cx/afl/. Accessed:
25 June, 2016.
[2] Clang: A C language family frontend for LLVM. http://clang.llvm.
org/. Accessed: 25 June 2016.
[3] Freely distributed math library, Fdlibm. http://www.netlib.org/
fdlibm/. Accessed: 25 June, 2016.
[4] GNU compiler collection tool, Gcov.
https://gcc.gnu.org/
onlinedocs/gcc/Gcov.html/. Accessed: 25 June 2016.
[5] Python testing library, hypothesis. https://github.com/DRMacIver/
hypothesis. Accessed: 25 June 2016.
[6] LLVM: Pass class reference. http://llvm.org/docs/doxygen/html/
classllvm_1_1Pass.html. Accessed: 25 June 2016.
[7] Scipy optimization package. http://docs.scipy.org/doc/scipydev/reference/optimize.html. Accessed: 25 June 2016.
[8] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction
to MCMC for machine learning. Machine Learning, 50(1-2):5–43,
2003.
[9] A. Baars, M. Harman, Y. Hassoun, K. Lakhotia, P. McMinn, P. Tonella,
and T. Vos. Symbolic search-based testing. In Proceedings of the
26th IEEE/ACM International Conference on Automated Software
Engineering, ASE ’11, pages 53–62, Washington, DC, USA, 2011.
[10] D. H. Bailey. High-precision floating-point arithmetic in scientific
computation. Computing in Science and Engg., 7(3):54–61, May 2005.
[11] D. L. Bird and C. U. Munoz. Automatic generation of random selfchecking test cases. IBM Syst. J., 22(3):229–245, Sept. 1983.
[12] R. S. Boyer, B. Elspas, and K. N. Levitt. A formal system for testing
and debugging programs by symbolic execution. In Proceedings of the
International Conference on Reliable Software, pages 234–245, New
York, NY, USA, 1975.
[13] F. P. Brooks, Jr. The Mythical Man-month (Anniversary Ed.). AddisonWesley Longman Publishing Co., Inc., Boston, MA, USA, 1995.
[14] C. Cadar and K. Sen. Symbolic execution for software testing: Three
decades later. Commun. ACM, 56(2):82–90, 2013.
[15] C. Cadar, D. Dunbar, and D. Engler. Klee: Unassisted and automatic
generation of high-coverage tests for complex systems programs. In
Proceedings of the 8th USENIX Conference on Operating Systems
Design and Implementation, OSDI’08, pages 209–224, Berkeley, CA,
USA, 2008.
[16] S. Chib and E. Greenberg. Understanding the Metropolis-Hastings
algorithm. The American Statistician, 49(4):327–335, Nov. 1995.
[17] L. A. Clarke. A system to generate test data and symbolically execute
programs. IEEE Trans. Softw. Eng., 2(3):215–222, May 1976.
[18] P. Collingbourne, C. Cadar, and P. H. Kelly. Symbolic crosschecking of
floating-point and SIMD code. In Proceedings of the sixth conference
on Computer systems, pages 315–328, 2011.
[19] P. Cousot. Proving program invariance and termination by parametric
abstraction, lagrangian relaxation and semidefinite programming. In
International Workshop on Verification, Model Checking, and Abstract
Interpretation, pages 1–24. Springer, 2005.
[20] P. Cousot and R. Cousot. Abstract interpretation: A unified lattice
model for static analysis of programs by construction or approximation
of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN
Symposium on Principles of Programming Languages, POPL’77, pages
238–252, 1977.
[21] P. Cousot and R. Cousot. Systematic design of program analysis
frameworks. In Proceedings of the 6th ACM SIGACT-SIGPLAN
Symposium on Principles of Programming Languages, POPL’79, pages
269–282, New York, NY, USA, 1979.
[22] M. Davis. The undecidable: Basic papers on undecidable propositions,
unsolvable problems and computable functions. Dover Publications,
Incorporated, 2004.
[23] L. De Moura and N. Bjørner. Z3: An efficient SMT solver. In
Proceedings of the Theory and Practice of Software, 14th International
Conference on Tools and Algorithms for the Construction and Analysis
of Systems, TACAS’08/ETAPS’08, pages 337–340, Berlin, Heidelberg,
2008.
[24] E. W. Dijkstra. Notes on structured programming. Apr. 1970.
[25] J. Eckstein and D. P. Bertsekas. On the Douglas—Rachford splitting
method and the proximal point algorithm for maximal monotone
operators. Mathematical Programming, 55(1-3):293–318, 1992.
[26] R. Ferguson and B. Korel. The chaining approach for software test data
generation. ACM Trans. Softw. Eng. Methodol., 5(1):63–86, Jan. 1996.
[27] R. W. Floyd. Assigning meanings to programs. Mathematical aspects
of computer science, 19(19-32):1, 1967.
[28] Z. Fu and Z. Su. XSat: A fast floating-point satisfiability solver. In
Proceedings of the 28th International Conference on Computer Aided
Verification, CAV’16, Toronto, Ontario, Canada, 2016.
[29] Z. Fu, Z. Bai, and Z. Su. Automated backward error analysis for
numerical code. In Proceedings of the ACM SIGPLAN International
Conference on Object-Oriented Programming, Systems, Languages,
and Applications, OOPSLA’15, pages 639–654, Pittsburgh, PA, USA,
2015.
[30] P. Godefroid, N. Klarlund, and K. Sen. DART: Directed automated
random testing. In Proceedings of the ACM SIGPLAN 2005 Conference
on Programming Language Design and Implementation, Chicago, IL,
USA, pages 213–223, 2005.
[31] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica
und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38
(1):173–198, 1931.
[32] D. Goldberg. What every computer scientist should know about
floating-point arithmetic. ACM Computing Surveys (CSUR), 23(1):
5–48, 1991.
[33] E. Goubault, S. Le Roux, J. Leconte, L. Liberti, and F. Marinelli.
Static analysis by abstract interpretation: A mathematical programming
approach. Electronic notes in theoretical computer science, 267(1):
73–87, 2010.
[34] S. Heule, M. Sridharan, and S. Chandra. Mimic: Computing models for
opaque code. In Proceedings of the 10th Joint Meeting on Foundations
of Software Engineering, pages 710–720, 2015.
[35] C. A. R. Hoare. An axiomatic basis for computer programming.
Commun. ACM, 12(10):576–580, Oct. 1969.
[36] J. C. King. Symbolic execution and program testing. Commun. ACM,
19(7):385–394, July 1976.
[37] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by
simulated annealing. SCIENCE, 220(4598):671–680, 1983.
[38] B. Korel. Automated software test data generation. IEEE Trans. Softw.
Eng., 16(8):870–879, Aug. 1990.
[39] N. Kosmatov, B. Legeard, F. Peureux, and M. Utting. Boundary coverage criteria for test generation from formal models. In Proceedings of
the 15th International Symposium on Software Reliability Engineering,
ISSRE ’04, pages 139–150, Washington, DC, USA, 2004.
[40] J. R. Koza. Genetic programming: On the programming of computers
by means of natural selection, volume 1. MIT press, 1992.
[41] K. Lakhotia, P. McMinn, and M. Harman. An empirical investigation
into branch coverage for C programs using CUTE and AUSTIN.
Journal of Systems and Software, 83(12):2379–2391, 2010.
[42] K. Lakhotia, N. Tillmann, M. Harman, and J. De Halleux. FloPSy:
Search-based floating point constraint solving for symbolic execution.
In Proceedings of the 22Nd IFIP WG 6.1 International Conference
on Testing Software and Systems, ICTSS’10, pages 142–157, Berlin,
Heidelberg, 2010.
[43] K. Lakhotia, M. Harman, and H. Gross. AUSTIN: An open source
tool for search based software testing of C programs. Information and
Software Technology, 55(1):112–125, 2013.
[44] C. Lattner and V. Adve. LLVM: A compilation framework for lifelong
program analysis & transformation. In Proceedings of the International
Symposium on Code Generation and Optimization: Feedback-directed
and Runtime Optimization, CGO ’04, pages 75–86, Washington, DC,
USA, 2004.
[45] D. Leitner, C. Chakravarty, R. Hinde, and D. Wales. Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms. Phys. Rev. E, 56:363, 1997.
[46] Z. Li and H. A. Scheraga. Monte Carlo minimization approach to
the multiple-minima problem in protein folding. Proceedings of the
National Academy of Sciences of the United States of America, 84(19):
6611–6615, 1987.
[47] L. Liberti, S. Le Roux, J. Leconte, and F. Marinelli. Mathematical programming based debugging. Electronic Notes in Discrete Mathematics,
36:1311–1318, 2010.
[48] L. Mauborgne and X. Rival. Trace partitioning in abstract interpretation
based static analyzers. In Programming Languages and Systems,
14th European Symposium on Programming, ESOP 2005, pages 5–
20, Edinburgh, UK, 2005.
[49] P. McMinn. Search-based software test data generation: A survey.
Softw. Test. Verif. Reliab., 14(2):105–156, June 2004.
[50] A. M. Memon, I. Banerjee, and A. Nagarajan. What test oracle should I
use for effective GUI testing? In 18th IEEE International Conference on
Automated Software Engineering, ASE 2003, pages 164–173, Montreal,
Canada, 2003.
[51] B. Meyer. Seven principles of software testing. IEEE Computer, 41(8):
99–101, 2008.
[52] W. Miller and D. L. Spooner. Automatic generation of floating-point
test data. IEEE Trans. Softw. Eng., 2(3):223–226, May 1976.
[53] M. Minoux. Mathematical programming: Theory and algorithms.
Wiley, New York, 1986.
[54] G. J. Myers. The art of software testing. pages I–XV, 1–234, 2004.
[55] R. Nieuwenhuis, A. Oliveras, and C. Tinelli. Solving SAT and
SAT modulo theories: From an abstract Davis–Putnam–Logemann–
Loveland procedure to DPLL(T). J. ACM, 53(6):937–977, 2006.
[56] J. Nocedal and S. J. Wright. Numerical optimization. Springer Science
& Business Media, 2006.
[57] H. Palikareva and C. Cadar. Multi-solver support in symbolic execution.
In Proceedings of the 25th International Conference on Computer
Aided Verification, CAV’13, pages 53–68, Berlin, Heidelberg, 2013.
[58] R. Pandita, T. Xie, N. Tillmann, and J. de Halleux. Guided test generation for coverage criteria. In 2010 IEEE International Conference on
Software Maintenance, ICSM’10, pages 1–10, 2010.
[59] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical recipes 3rd edition: The art of scientific computing. Cambridge
University Press, New York, NY, USA, 2007.
[60] H. Robbins and S. Monro. A stochastic approximation method. The
annals of mathematical statistics, pages 400–407, 1951.
[61] M. Roozbehani, A. Megretski, and E. Feron. Convex optimization
proves software correctness. In Proceedings of American Control
Conference, pages 1395–1400, 2005.
[62] W. Rudin. Principles of mathematical analysis. McGraw-Hill, New
York, 3 edition, 1976.
[63] E. Schkufza, R. Sharma, and A. Aiken. Stochastic optimization of
floating-point programs with tunable precision. In Proceedings of the
35th ACM SIGPLAN Conference on Programming Language Design
and Implementation, PLDI ’14, pages 53–64, New York, NY, USA,
2014.
[64] K. Sen, D. Marinov, and G. Agha. CUTE: A concolic unit testing
engine for C. In Proceedings of the 10th European Software Engineering Conference Held Jointly with 13th ACM SIGSOFT International
Symposium on Foundations of Software Engineering, ESEC/FSE-13,
pages 263–272, New York, NY, USA, 2005.
[65] J. P. M. Silva and K. A. Sakallah. GRASP: A new search algorithm
for satisfiability. In Proceedings of the 1996 IEEE/ACM International
Conference on Computer-aided Design, ICCAD ’96, pages 220–227,
Washington, DC, USA, 1996.
[66] M. Souza, M. Borges, M. d’Amorim, and C. S. Păsăreanu. CORAL:
Solving complex constraints for symbolic pathfinder. In NASA Formal
Methods Symposium, pages 359–374, 2011.
[67] M. Tawarmalani and N. V. Sahinidis. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study.
Mathematical Programming, 99(3):563–591, 2004.
[68] N. Tillmann and J. De Halleux. Pex: White box test generation for
.NET. In Proceedings of the 2nd International Conference on Tests
and Proofs, TAP'08, pages 134–153, Berlin, Heidelberg, 2008.
[69] E. J. Weyuker. On testing non-testable programs. Comput. J., 25(4):
465–470, 1982.
[70] L. J. White and E. I. Cohen. A domain strategy for computer program
testing. IEEE Trans. Softw. Eng., 6(3):247–257, May 1980.
[71] G. Zoutendijk. Mathematical programming methods. North-Holland,
Amsterdam, 1976.
A. Untested Programs in Fdlibm
The programs from the freely distributed math library Fdlibm 5.3 [3] are used as our benchmarks. Tab. 5 lists all untested programs and functions and explains why they are not selected. Three types of functions are excluded from our evaluation: (1) functions without any branch, (2) functions involving non-floating-point input parameters, and (3) static C functions.
B. Implementation Details
As a proof-of-concept demonstration, we have implemented Algo. 1
in the tool CoverMe. This section presents the implementation and
technical details omitted from the main body of the paper.
B.1 Frontend of CoverMe
The frontend implements Step 1 and Step 2 of Algo. 1. CoverMe
compiles the program under test FOO to LLVM IR with Clang [2].
Then it uses an LLVM pass [6] to inject assignments in FOO. The program under test can be in any LLVM-supported language, e.g., Ada,
the C/C++ language family, or Julia. Our current implementation
accepts C code only.
Fig. 9 illustrates FOO as a function of signature type_t FOO
(type_t1 x1, type_t2 x2, ...). The return type (output) of the
function, type_t, can be any type supported by C, whereas
the types of the input parameters, type_t1, type_t2, ..., are
restricted to double or double*. We have explained the signature of
pen in Def. 3.7. Note that CoverMe does not inject pen itself into
FOO, but instead injects assignments that invoke pen. We implement
pen in a separate C++ file.
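To make the role of pen concrete, the following is a minimal Python sketch of one plausible penalty semantics; the actual pen is the C++ implementation described above and follows Def. 3.7, so the op encoding and the exact distance formulas here are our own assumptions:

def pen(i, op, lhs, rhs):
    # Hypothetical branch-distance penalty (a sketch, not Def. 3.7 itself):
    # returns 0.0 when the branch we still want to cover would be taken,
    # and a positive distance that shrinks as the operands approach it.
    if op == "<=":
        return 0.0 if lhs <= rhs else (lhs - rhs) ** 2
    if op == "<":
        return 0.0 if lhs < rhs else (lhs - rhs) ** 2 + 1.0
    if op == "==":
        return 0.0 if lhs == rhs else (lhs - rhs) ** 2
    raise ValueError("unsupported comparison operator")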
The frontend then links FOO_I and FOO_R with a simple program loader into libr.so, the outcome of the frontend, which stores the representing function in the form of a shared object (.so) file.
B.2 Backend of CoverMe
The backend implements Step 3 of Algo. 1. It invokes the representing function via libr.so. The kernel of the backend is an
external MCMC engine. It uses the off-the-shelf implementation
known as Basinhopping from the Scipy Optimization package [7].
Basinhopping takes a range of input parameters. Fig. 9 shows
the important ones for our implementation basinhopping(f, sp,
n_iter, call_back), where f refers to the representing function
from libr.so, sp is a starting point as a Python Numpy array, n_iter
is the iteration number used in Algo. 1 and call_back is a client-defined procedure. Basinhopping invokes call_back at the end of
each iteration (Algo. 1, Lines 24-34). The call_back procedure allows CoverMe to terminate if it saturates all branches. In this way,
CoverMe does not need to wait until passing all n_start iterations
(Algo. 1, Lines 8-12).
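For concreteness, the following Python sketch shows one way such a backend loop can be organized around Basinhopping; f stands for the representing function loaded from libr.so, and is_saturated() is a hypothetical helper reporting whether all branches are covered:

import numpy as np
from scipy.optimize import basinhopping

def run_backend(f, dim, n_start, n_iter, is_saturated):
    # Repeatedly minimize the representing function f from random
    # starting points; stop as soon as every branch is saturated.
    for _ in range(n_start):
        sp = np.random.uniform(-1e6, 1e6, size=dim)
        # scipy stops basinhopping early when the callback returns True
        basinhopping(f, sp, niter=n_iter,
                     callback=lambda x, fval, accepted: is_saturated())
        if is_saturated():
            return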
Table 5: Untested programs and functions in benchmark suite Fdlibm and the corresponding explanations.

Program          Entry function                                            Explanation
e_gamma_r.c      ieee754_gamma_r(double)                                   no branch
e_gamma.c        ieee754_gamma(double)                                     no branch
e_j0.c           pzero(double)                                             static C function
e_j0.c           qzero(double)                                             static C function
e_j1.c           pone(double)                                              static C function
e_j1.c           qone(double)                                              static C function
e_jn.c           ieee754_jn(int, double)                                   unsupported input type
e_jn.c           ieee754_yn(int, double)                                   unsupported input type
e_lgamma_r.c     sin_pi(double)                                            static C function
e_lgamma_r.c     ieee754_lgamma_r(double, int*)                            unsupported input type
e_lgamma.c       ieee754_lgamma(double)                                    no branch
k_rem_pio2.c     kernel_rem_pio2(double*, double*, int, int, const int*)   unsupported input type
k_sin.c          kernel_sin(double, double, int)                           unsupported input type
k_standard.c     kernel_standard(double, double, int)                      unsupported input type
k_tan.c          kernel_tan(double, double, int)                           unsupported input type
s_copysign.c     copysign(double)                                          no branch
s_fabs.c         fabs(double)                                              no branch
s_finite.c       finite(double)                                            no branch
s_frexp.c        frexp(double, int*)                                       unsupported input type
s_isnan.c        isnan(double)                                             no branch
s_ldexp.c        ldexp(double, int)                                        unsupported input type
s_lib_version.c  lib_version(double)                                       no branch
s_matherr.c      matherr(struct exception*)                                unsupported input type
s_scalbn.c       scalbn(double, int)                                       unsupported input type
s_signgam.c      signgam(double)                                           no branch
s_significand.c  significand(double)                                       no branch
w_acos.c         acos(double)                                              no branch
w_acosh.c        acosh(double)                                             no branch
w_asin.c         asin(double)                                              no branch
w_atan2.c        atan2(double, double)                                     no branch
w_atanh.c        atanh(double)                                             no branch
w_cosh.c         cosh(double)                                              no branch
w_exp.c          exp(double)                                               no branch
w_fmod.c         fmod(double, double)                                      no branch
w_gamma_r.c      gamma_r(double, int*)                                     no branch
w_gamma.c        gamma(double, int*)                                       no branch
w_hypot.c        hypot(double, double)                                     no branch
w_j0.c           j0(double)                                                no branch
w_j0.c           y0(double)                                                no branch
w_j1.c           j1(double)                                                no branch
w_j1.c           y1(double)                                                no branch
w_jn.c           jn(double)                                                no branch
w_jn.c           yn(double)                                                no branch
w_lgamma_r.c     lgamma_r(double, int*)                                    no branch
w_lgamma.c       lgamma(double)                                            no branch
w_log.c          log(double)                                               no branch
w_log10.c        log10(double)                                             no branch
w_pow.c          pow(double, double)                                       no branch
w_remainder.c    remainder(double, double)                                 no branch
w_scalb.c        scalb(double, double)                                     no branch
w_sinh.c         sinh(double)                                              no branch
w_sqrt.c         sqrt(double)                                              no branch
[Figure 9: CoverMe Implementation. Front-end: construct the representing function. The program under test FOO (type_t FOO (double x1, double x2, ...), in any LLVM-supported language), together with pen (.cpp) (double pen (int i, int op, double lhs, double rhs)), is instrumented by an LLVM pass into FOO_I, the instrumented program (.bc), and linked with a loader (.cpp) (void LOADER (double* P)) into libr.so, which exposes the representing function void FOO_R (double* P). Back-end: minimize the representing function. An MCMC minimization procedure (.py) calls basinhopping(func, sp, n_iter, callback) on libr.so and outputs the generated test inputs X: a set of FOO_R's global minimum points.]

B.3 Technical Details

Sect. 3 assumes Def. 3.1(a-c) for the sake of simplification. This section discusses how CoverMe relaxes the assumptions when handling real-world floating-point code. We also show how CoverMe handles function calls at the end of this section.

Dealing with Pointers (Relaxing Def. 3.1(a)). We consider only pointers to floating-point numbers. They may occur (1) in an input parameter, (2) in a conditional statement, or (3) in the code body but not in the conditional statement.

CoverMe inherently handles case (3) because it is execution-based and does not need to analyze pointers and their effects. CoverMe currently does not handle case (2) and simply ignores these conditional statements by not injecting pen before them.

Below we explain how CoverMe deals with case (1). A branch coverage testing problem for a program whose inputs are pointers to doubles can be regarded as the same problem with a simplified program under test. For instance, finding test inputs to cover branches of the program void FOO(double* p) {if (*p <= 1)... } can be reduced to testing the program void FOO_with_no_pointer(double x) {if (x <= 1)... }. CoverMe transforms program FOO to FOO_with_no_pointer if one of FOO's input parameters is a floating-point pointer.

Dealing with Comparison between Non-floating-point Expressions (Relaxing Def. 3.1(b)). We have encountered situations where a conditional statement invokes a comparison between non-floating-point numbers. CoverMe handles these situations by first promoting the non-floating-point numbers to floating-point numbers and then injecting pen as described in Algo. 1. For example, before a conditional statement like if (x op y) where x and y are integers, CoverMe injects r = pen(i, op, (double) x, (double) y);. Note that such an approach does not allow us to handle data types that are incompatible with floating-point types, e.g., conditions like (p != NULL), which CoverMe has to ignore.

Dealing with Infeasible Branches (Relaxing Def. 3.1(c)). Infeasible branches are branches that cannot be covered by any input. Attempts to cover infeasible branches are useless and time-consuming. Detecting infeasible branches is a difficult problem in general. CoverMe uses a simple heuristic to detect and ignore infeasible branches. When CoverMe finds a minimum that is not zero, that is, FOO_R(x*) > 0, CoverMe deems the unvisited branch of the last conditional to be infeasible and adds it to Saturate, the set of unvisited and deemed-to-be infeasible branches.

Imagine that we modify l1 of the program FOO in Fig. 3 to the conditional statement if (y == -1). Then the branch 1T becomes infeasible. We rewrite this modified program below and illustrate how we deal with infeasible branches.

l0: if (x <= 1) {x++;}
    y = square(x);
l1: if (y == -1) {...}

where we omit the concrete implementation of square.

Let FOO_R denote the representing function constructed for the program. In the minimization process, whenever CoverMe obtains x* such that FOO_R(x*) > 0, CoverMe selects a branch that it regards as infeasible. CoverMe selects the branch as follows: Suppose x* exercises a path τ whose last conditional statement is denoted by lz, and, without loss of generality, suppose zT is passed through by τ; then CoverMe regards zF as an infeasible branch.

In the modified program above, if 1F has been saturated, the representing function evaluates to (y + 1)^2 or (y + 1)^2 + 1, where y equals the non-negative square(x). Thus, the minimum point x* must satisfy FOO_R(x*) > 0 and its triggered path ends with branch 1F. CoverMe then regards 1T as an infeasible branch.

CoverMe then regards the infeasible branches as already saturated. That is, in line 12 of Algo. 1, CoverMe updates Saturate with saturated branches and infeasible branches (more precisely, branches that CoverMe regards as infeasible).

The presented heuristic works well in practice (see Sect. 5), but we do not claim that our heuristic always correctly detects infeasible branches.

Dealing with Function Calls. By default, CoverMe injects pen only in the entry function to test. If the entry function invokes other external functions, they will not be transformed. For example, in the program FOO of Fig. 3, we do not transform square(x). In this way, CoverMe only attempts to saturate all branches for a single function at a time.

However, CoverMe can also easily handle functions invoked by its entry function. As a simple example, consider:

void FOO(double x) { GOO(x); }
void GOO(double x) { if (sin(x) <= 0.99) ... }

If CoverMe aims to saturate FOO and GOO but not sin, and it sets FOO as the entry function, then it instruments both FOO and GOO. Only GOO has a conditional statement, and CoverMe injects an assignment on r in GOO.
| 6 |
A Framework in CRM Customer Lifecycle: Identify Downward
Trend and Potential Issues Detection
Kun Hu^1, Zhe Li^2, Ying Liu^1, Luyin Cheng^2, Qi Yang^2, and Yan Li^2

Abstract
Customer retention is one of the primary goals in the area of customer relationship
management. A mass of work exists in which machine learning models or business rules are
established to predict churn. However, targeting users at an early stage when they start to
show a downward trend is a better strategy. In downward trend prediction, the reasons why
customers show a downward trend are of great interest in the industry, as they help the business
to understand the pain points that customers suffer and to take early action to prevent them
from churning. A commonly used method is to collect feedback from customers by either
aggressively reaching out to them or by passively hearing from them. However, it is believed
that there are a large number of customers who have unpleasant experiences and never speak
out. In the literature, there is limited research work that provides a comprehensive and
scientific approach to identify these “silent sufferers”.
In this study, we propose a novel two-part framework: developing the downward prediction
process and establishing the methodology to identify the reasons why customers are in the
downward trend. In the first prediction part, we focus on predicting the downward trend,
which is an earlier stage of the customer lifecycle compared to churn. In the second part, we
propose an approach to figuring out the cause (of the downward trend) based on a causal
inference method and semi-supervised learning. The proposed approach is capable of
identifying potential silent sufferers. We take bad shopping experiences as inputs to develop
the framework and validate it via a marketing A/B test in the real world. The test readout
demonstrates the effectiveness of the framework by driving 88.5% incremental lift in purchase
volume.
Keywords
Customer Relationship Management, Semi-supervised learning, Customer Retention,
Downward Prediction
^1 eBay Research & Engineering, Shanghai, China
^2 eBay e-Commerce, Shanghai, China
Email Address: kunhu@ebay.com (Kun Hu), zheli4@ebay.com (Zhe Li), yliu26@ebay.com (Ying Liu), lncheng@ebay.com (Luyin Cheng), qyang3@ebay.com (Qi Yang), yanli4@ebay.com (Yan Li)
1. Introduction
Customer retention is always one of the primary goals of the customer relationship
management (CRM) field, because it brings direct value to the business (Chen and Popovich,
2003). However, in the customer lifecycle, some users churn for various reasons. It is reasonable to try to “rescue” these churned customers, since it is expensive to acquire new customers. An ideal time to save customers is in the early stage, when they first show a downward trend. To achieve this goal, two steps should be considered. The first one is to
predict the propensity of the downward trend of a customer after a target length of time.
Next, for a customer with a high propensity of a downward trend, we need to understand the
cause of the issue. Depending on the cause, it is possible to execute personalized “rescue”
plans – for instance, sending marketing messages, or providing incentives and relevant
merchandise recommendations via multiple marketing channels.
Some existing work uses machine-learning methods to predict churned customers, i.e., those who stop doing business with a company for a certain period of time
(Kamalraj and Malathi, 2013; Almana and Aksoy et al., 2014). Support vector machines are
applied in the churn prediction of newspaper subscribers (Coussement and Van den Poel,
2008). Churn prediction is also widely used in telecommunication industry (Wei and Chiu,
2002; Lee and Lee et al., 2017), e-commerce retailers (Subramanya and Somani, 2017), social
networks (Óskarsdóttir and Bravo et al., 2017) and online new media (Yuan and Bai et al., 2017).
In addition, methods are proposed to improve prediction accuracy; for instance, handling of
imbalanced data, which usually happens in the customer relationship management field due
to lack of positive cases (Burez and Van den Poel, 2009; Xie and Li et al., 2009). Although a
large amount of research exists which concentrates on user churn, customer downward trend
prediction is quite different as the customers are still active. Currently, no widely accepted
definition exists that identifies the targeted customers with a downward trend. Lack of existing
research design drives the necessity to develop a method for exploring the cause of a downward trend.
In this work, we propose a framework to find the target population and causes, making it possible for a company to address the issues implicitly or explicitly in their
communication. We propose a scientific framework leveraging machine-learning
methodology to manage the downward trend. To illustrate and demonstrate the
methodology, we take an example of an e-commerce company.
The framework consists of three parts. First, a methodology is proposed to identify
customers who are in the downward trend. Second, supervised learning models are built to
identify the customers who are in a downward trend. Finally, we leverage semi-supervised
machine learning to learn the potential causes, which later we call “levers” of the downward
trend, and find silent sufferers. We will use one common lever - bad customer experience
(BCE) to develop and demonstrate the method. Traditionally, if there are enough samples
labeled as BCE sufferers and non-BCE sufferers, the issue is easy to handle with supervised
learning method. However, not all of the customers are willing to express their BCE and thus
they are silent sufferers. Meanwhile, BCE counts can be high if the user has more activities,
and it is hard to establish the correlation between BCE and downward trend. Proposing a
2
correct solution is crucial to rescuing these customers. In addition, we perform an A/B test to
verify the performance of the model.
2. Methodology
2.1 Definition of Downward Trend
In this section, we will introduce the definition of a downward trend of the customer
lifecycle. A customer typically shows a downward trend in several aspects. In an e-commerce
company, gross merchandise volume (GMV), bought item count (BI) and purchase days (PD)
all are meaningful metrics to track customer status. For each of these metrics, we define the
downward flag respectively by the norm box method.
Fig 1. Downward Definition
Take the GMV as an example. Since consumption varies among customers, it is more
appropriate to observe the GMV trend per user, rather than observing the GMV trend of the
all customers. Hence, for each user, we first compute the average of the GMV 𝜇 in the past
12 months along with its standard deviation 𝑠 before a specified target time when we plan
to communicate with the customer. In the next month, if the GMV of the customer is under
the lower bound,
𝜇 − 𝛼𝑠
then we flag the user as being in a downward trend. 𝛼 determines the sensitivity of the
definition, i.e. a large 𝛼 indicates that we are less likely to classify a user as in the downward
trend; 𝛼 should be tuned to match the business case and it is reasonable to use different 𝛼
values for different customer groups. Figure 1 illustrates the method used to define the
downward flag. The dashed line is the average GMV of a customer in the past 12 months. The
orange box is the norm box corresponding to the upper bound 𝜇 + 𝛼𝑠 and the lower bound
𝜇 − 𝛼𝑠. If the user’s GMV in the 13th month is lower than the lower bound, then the user is in
a downward trend.
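As a minimal Python sketch (assuming a 13-entry vector of per-user monthly GMV, with the 13th entry being the target month), the rule can be written as:

import numpy as np

def downward_flag(monthly_gmv, alpha):
    hist = np.asarray(monthly_gmv[:12], dtype=float)   # 12-month window
    mu, s = hist.mean(), hist.std()
    # flag the user if the target month falls below the norm-box lower bound
    return monthly_gmv[12] < mu - alpha * s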
2.2 Modelling the Downward Propensity
Gradient Boosting Machine (GBM) is a widely used machine learning method for both
regression and classification problems (Friedman, 2001). It shows strong performance in
solving a wide range of practical applications (Natekin and Knoll, 2013). In this work, we
leverage GBM to build binary classifiers to predict the downward trend of customers in the
next month. If the user is in a downward trend per our definition, the class label is 1 (positive
case); otherwise it is 0 (negative case). As mentioned in the previous section, three models
are built for different downward detection goals, including GMV, BI and PD. The
methodology is similar for the three metrics. Based on the purchase pattern of the customers,
we divide the customers into frequent buyers (FB) and infrequent buyers (IB). We choose
different norm box parameters 𝛼 to define the downward trend for FB and IB. The event rate is the percentage of positive cases and is determined by 𝛼. Generally, a proper 𝛼 should be chosen to make the event rate fit the specific business case.
Table 1. Model Parameters

Model   Event Rate   𝛼 (FB)   𝛼 (IB)
GMV     9.65%        1        0.75
BI      5.45%        1.5      1
PD      7.32%        1.25     1
We have around 200 candidate predictors belonging to multiple categories. We list some
of them here:
Transaction: purchase date, item price, category seasonality, condition, quantity, shipping
methods.
Bad buying experience history.
Contact frequency and engagement
Site behaviors: browse, bid, offer, watch, messages, cart, search, and dwell time.
Demographics, acquisition channel
We first train rough GBM models using 200 trees and a max depth of 5 for the 3 aspects. Based on the variable importance, we finalize the GBM model with 13 variables for each model in order to reduce the computational cost of the features while retaining acceptable prediction power. Tables 2-4 show the variables selected in each model.
Table 2. Variable Importance of GMV

Variable Description                                                          Importance
Whether last month’s GMV is less than 0.75 std of previous 11 months’ GMV     1
Recent 30 days’ GMV ratio to past year                                        0.4221
Purchase days during past year                                                0.3543
Recent 180 days’ GMV ratio to past year                                       0.1746
Average of gap between purchase days in last month                            0.0972
Last 30 days’ buying transactions ratio to past year                          0.0961
Last 90 days’ GMV ratio to past year                                          0.0937
Median of gap between purchase days                                           0.0928
Last 60 days’ GMV ratio to past year                                          0.0845
Standard deviation of gap between purchase days                               0.0549
Whether last 3 months’ GMV is less than 0.75 std of previous 9 months’ GMV    0.0517
User country                                                                  0.0506
Whether last 2 months’ GMV is less than 0.75 std of previous 10 months’ GMV   0.0361
Table 3. Variable Importance of BI

Variable Description                                                                      Importance
Whether last month’s bought items is less than 1 std of previous 11 months’ bought items  1
Average of gap between purchase days in last month                                        0.4387
Median of gap between purchase days                                                       0.3253
Standard deviation of gap between purchase days                                           0.2705
Last 180 days’ items purchased ratio to past year                                         0.2588
Purchase days during past year                                                            0.2407
Last 30 days’ items purchased ratio to past year                                          0.2164
Last 90 days’ items purchased ratio to past year                                          0.204
Last year’s items purchased count                                                         0.1049
Whether last 3 months’ bought items is less than 1 std of previous 9 months’ bought items 0.0776
Last 30 days’ buying transactions ratio to past year                                      0.0757
Last 60 days’ items purchased ratio to past year                                          0.0708
User country                                                                              0.0431
Table 4. Variable Importance of PD

Variable Description                                                                        Importance
Purchase days during past year                                                              1
Median of gap between purchase days                                                         0.64
Whether last month’s purchase days is less than 1 std of previous 11 months’ purchase days  0.6221
Average of gap between purchase days in last month                                          0.3376
Standard deviation of gap between purchase days                                             0.1687
Last 30 days’ buying transactions ratio to past year                                        0.1175
Median of purchase days difference in last month                                            0.1003
User country                                                                                0.1001
Standard deviation of purchase days difference                                              0.0921
Whether last month’s bought items is less than 1 std of previous 11 months’ bought items    0.0908
Last 180 days’ buying transactions ratio to past year                                       0.0837
Average of purchase days difference                                                         0.0572
Last 90 days’ buying transactions ratio to past year                                        0.0386
With the selected variables, we build GBM models for the three aspects using depths of 5 and 10. The detailed performance of the models can be found in Table 5. The performance is decent: the AUC is beyond 0.90 for all of these models. Figure 2 shows the log loss of the GMV model along with the number of trees. 150 trees are sufficient for prediction, as the log loss no longer decreases dramatically. Comparing the 5-depth and 10-depth models, an obvious over-fitting issue can be observed in Figure 2 as the gap between the training and validation groups. Meanwhile, there is no significant improvement with the 10-depth model. Thus, the depth-5 model is selected as the final model. The situation is similar for the BI and PD models. Figure 3 shows the ROC curve and Precision-Recall (PR) curve of the three final models.
Figure 2. Log Loss of the GMV Model: (a) depth 5; (b) depth 10.
In order to use the output of each model in practice, it is suggested to split the users into
buckets according to the deciles or percentiles of the prediction score rather than set a fixed
cut-off to assign positive or negative flags to the population. If bucket information is available,
then it is feasible to combine these three models into one ensemble model to predict the
downward trend. Each customer gets three buckets from the models’ outputs respectively, say y_GMV, y_BI, y_PD. If the goal is to find as many downward customers as possible, then max(y_GMV, y_BI, y_PD) can be selected as the final bucket of that customer, as sketched below.
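A small Python sketch of this bucketing and max-combination; the score arrays below are random stand-ins for the three models' outputs:

import numpy as np

def to_deciles(scores):
    # map raw scores to decile buckets 1..10
    edges = np.quantile(scores, np.linspace(0.1, 0.9, 9))
    return np.digitize(scores, edges) + 1

s_gmv, s_bi, s_pd = (np.random.rand(1000) for _ in range(3))  # stand-in scores
y_final = np.maximum.reduce([to_deciles(s_gmv), to_deciles(s_bi), to_deciles(s_pd)])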
Table 5. Model Performance Metrics

Model   # Trees   Depth   AUC    Max F1   Max min per-class accuracy
GMV     150       5       0.90   0.47     0.82
GMV     150       10      0.90   0.47     0.82
BI      150       5       0.91   0.36     0.83
BI      150       10      0.91   0.37     0.83
PD      150       5       0.90   0.42     0.82
PD      150       10      0.91   0.42     0.82
Figure 3. Propensity Model ROC Curve and PR Curve: (a) ROC curve; (b) Precision-Recall curve.
2.3 Issue Detection
In this section, we will propose a methodology to identify one or more downward levers,
which result in a downward trend. Generally, multiple causes lead a customer to a downward
trend. To illustrate the method, we mainly focus on the BCE issue as an example, but it is
convenient to apply the methodology to address other issues.
There are difficulties when considering some of the levers. Indeed, BCE issues are not
negatively correlated with a downward trend in the whole population. The reason is that active
customers tend to meet more BCE issues. The more transactions, the more chance a customer
will have a BCE. Moreover, some of the customers are not bothered by the reported BCE
because either the seller or customer service resolved the issue. Therefore, it is challenging
but also crucial to find the real sufferers who behave downward due to a BCE. In addition,
apart from customers who are willing to report their issues, there is a large group of BCE
sufferers who choose not to speak out. Traditional business rules are not able to target them.
Fig 3. General Procedure of the Lever Detection Framework
In order to accomplish the goal, we need to resolve two questions. The first question is
how to find the genuine sufferers from the BCE reporters. We name the genuine sufferer
group as the golden set. Causal inference is applied to establish the golden set. The second
question is how to detect the silent sufferers given the golden set; we use a semi-supervised
learning method to deal with this. Figure 3 shows the general procedure of the proposed
approach.
2.3.1 Causal Inference and Golden Set
To solve the first problem, we need to find a subset of the downward model population.
The downward trend (GMV/BI/PD) of the customers in this subset is due to a BCE. Causal
Inference is the theory of judging the relationship between two continuous variables. It helps
to identify the cause or causes of a phenomenon (Shaughnessy and Zechmeister, 1985). For
example, the method is used to detect causal signals within collections of static images
(Lopez-Paz and Nishihara et al., 2016). In this work, we use the method to decide whether
the BCE is the cause of the downward trend.
First, per business knowledge and sense, we start from a small customer sample who are
in downward score decile 7-10 and had a BCE on their last purchase day; we assume that
they are real sufferers.
Second, we try to demonstrate causality. Let X be the cumulative BCE count in the past;
Y be the model decile (higher decile, higher likelihood to be downward). We do causal
inference separately corresponding to the customer type, as the transactional behaviors of FB
and IB are quite different. It is expected to observe asymmetry where X can cause Y but Y
cannot cause X.
Table 6. Causal Inference

Frequent Buyer
Relationship      Coefficient   With CL 𝛂 = 0.001
𝑌 ← 𝑓(𝑋) + 𝑒      0.008         Pass
𝑋 ← 𝑔(𝑌) + 𝑒      0.53          Fail

Infrequent Buyer
Relationship      Coefficient   With CL 𝛂 = 0.001
𝑌 ← 𝑓(𝑋) + 𝑒      0.03          Pass
𝑋 ← 𝑔(𝑌) + 𝑒      0.007         Fail
We list the linear regression results in Table 6. They indicate that the asymmetry exists and that the initial sample can be considered as the Golden Set. For FB, at the confidence level 0.001, the causal inference suggests that the BCE is the cause of the downward trend, but the BCE reports are not a consequence of the downward trend. A similar conclusion can be drawn for IB. A sketch of this direction test is given below.
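One plausible Python implementation of the check with ordinary least squares; X (cumulative BCE count) and Y (model decile) are synthetic stand-ins here:

import numpy as np
import statsmodels.api as sm

def direction(cause, effect):
    fit = sm.OLS(effect, sm.add_constant(cause)).fit()
    return fit.params[1], fit.pvalues[1]   # slope and its p-value

X = np.random.poisson(2.0, 500).astype(float)            # stand-in BCE counts
Y = np.clip(7 + 0.5 * X + np.random.randn(500), 1, 10)   # stand-in deciles
coef_xy, p_xy = direction(X, Y)   # tests Y <- f(X) + e
coef_yx, p_yx = direction(Y, X)   # tests X <- g(Y) + e
# a direction "passes" at confidence level 0.001 if its p-value < 0.001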
2.3.2 Semi-supervised Learning and Look-like Model
We conclude that the customers in the Golden Set are the ones who suffered from BCE
by causal inference. However, there must be additional customers who suffered from BCE but
they did not report the issue. We here name them silent sufferers. In this section, semi-supervised learning theory is applied to find the silent sufferers. In some cases, the labeled data is not enough to build a supervised learning model, and semi-supervised learning is a potential solution for handling this situation (Zhu, 2005). For our case, we have a limited golden
set; the rest are unknown labeled customers. An expectation–maximization (EM) algorithm is
used in this work (Liu and Lee et al., 2002).
The label of the Golden Set in a supervised learning fashion can be set to positive (i.e., 1).
However, for the remainder of the users, we do not know their labels. Some of them are silent
sufferers who should be also labeled as 1, while others are “normal” users and their labels
should be 0. In order to label these unknown customers, a semi-supervised learning technique
is used in this section to solve the problem.
Fig 4. Initialize the Semi-supervised Learning
Algorithm 1
Initialize parameters. Set the label 𝑦𝑖 of the golden set 𝐺 = {(𝑥𝑖 , 𝑦𝑖 )} to 1 and of the mixed data set 𝑀 = {(𝑥𝑖 , 𝑦𝑖 )} to 0. Set the stop criterion 𝜃, the change rate 𝛽 = 1, and the maximum iteration count 𝑁.
While 𝛽 > 𝜃 and 𝑘 < 𝑁:
1. Randomly pick a subset 𝐺′ ⊂ 𝐺 as spies.
2. Build a binary classifier 𝑓𝑘 (𝑥) on the dataset 𝑀′ = 𝐺′ ∪ 𝑀.
3. Score the combined dataset 𝑀′ and update the labels on 𝑀′.
4. Send the spies back to the golden set and re-label them as 1.
5. 𝑀 = 𝑀′ \ 𝐺′.
6. Compute the label change rate 𝛽 on 𝑀.
7. 𝑘 = 𝑘 + 1.
Figure 4 illustrates the one iteration step of the learning procedure. In step 1, the Golden
Set and the remainder of the unknown dataset are mixed without labels. We then set the
initial labels of the unlabeled customers as all 0s. In step 2, we randomly select part of the
positive samples as spies and combine with the mixed part. It is now feasible to train a
supervised learning model to get a binary classifier, although it is biased due to the unknown
labels of the mixed dataset. In step 3, send all the spies back to the Golden Set. For the
remaining mixed part, use the binary classifier to re-label all the samples using a specified cut-off. Notice that after re-labeling, some of the samples change their labels. The overall label change rate serves as the stop criterion: when the change rate is lower than the threshold, the iteration procedure stops. The detailed algorithm is organized in Algorithm 1 above.
To build the binary classifier in step 3, the variables are from the following aspects: BCE
reports, customer service (call, online chat, email), help & FAQ page behaviors, buyer’s
feedback to sellers, survey data, and behavioral features. The cut-off is selected by the max
𝐹1 scores in Step 3. F-score is used to trade off the precision and recall. It is possible to
choose other cut-offs such as 𝐹0.5 or 𝐹2 to fit a different application (Li and Wang et al.,
2008).
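A compact Python sketch of Algorithm 1, with scikit-learn's gradient boosting as the base classifier and a fixed 0.5 cut-off standing in for the max-F1 cut-off described above:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def spy_em(golden_X, mixed_X, spy_frac=0.15, theta=0.01, max_iter=20):
    y_mixed = np.zeros(len(mixed_X))               # unknown customers start at 0
    for _ in range(max_iter):
        n_spy = max(1, int(spy_frac * len(golden_X)))
        spy = np.random.choice(len(golden_X), n_spy, replace=False)  # step 1
        X_train = np.vstack([golden_X[spy], mixed_X])                # M' = G' U M
        y_train = np.concatenate([np.ones(n_spy), y_mixed])
        clf = GradientBoostingClassifier().fit(X_train, y_train)     # step 2
        scores = clf.predict_proba(mixed_X)[:, 1]                    # step 3
        new_y = (scores >= 0.5).astype(float)      # fixed cut-off for this sketch
        beta = np.mean(new_y != y_mixed)           # step 6: label change rate
        y_mixed = new_y                            # spies return to the golden set
        if beta <= theta:
            break
    return y_mixed                                 # 1 -> candidate silent sufferer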
2.3.3 Evaluation
It is difficult to tell the model performance on the mixed dataset since the true labels are
all unknown. Nonetheless, the performance on the Golden Set still can be observed as a
reasonable reference. In an ideal case, the model should classify all customers in the golden
set as positive cases. We divide the golden set into two parts: one part is combined with the
mixed set as a training set; the other is a holdout validation set. We check the accuracy on
the holdout validation set via the binary classifier built in each iteration. We consider the
accuracy as recall since it is similar to the original meaning of the concept. It is expected that
the recall improves along with iteration. In addition, the label change rate on the mixed
dataset should be stable after a sufficient number of iterations.
Fig 6. Iteration Procedure of Trust Lever: (a) label change rate of the unknown dataset; (b) recall of the validation set at max F1 of each iterative model.
Figure 6 shows the iteration of the algorithm. As the recall tends to be stable and the
label change rate is near zero, we can conclude that iteration 11 is an ideal choice as the
final model. In Table 7, we list the top five variables of the selected binary classifier.
Table 7. Top five variables in the Trust Lever Semi-supervised Learning Model

Variable Name                                       Variable Type
# of days since last defect                         BCE
Defect rate of pre 7 days (without late delivery)   BCE
Defect rate of pre 1 year (without late delivery)   BCE
BCE count of pre 1 year                             BCE
# of days since last purchase                       Transaction
3. Model Performance
In this section, we set up a test to verify the correctness of the population selected by our framework and the influence of the model in a real business case. As the model targets the
downward users caused by BCE, it is reasonable to reach them with an apology message
about the customer experience. We launched an A/B campaign test in the US and UK in Sep.
2017. Using the labels from model prediction, we chose 1,012 customers who were in the
Golden Set or Silent Sufferers set. Next, we randomly separated those into test and control
groups. For the test group, we sent each one an email to apologize. Details of the population
can be found in Table 8.
Table 8. Settings of the Campaign

                     Control   Test   Grand Total
# Golden Set         143       572    715
# Silent Sufferers   55        242    297
Grand Total          198       814    1,012
After the campaign run date, the lift of GMV in the test group is 88.5% compared with the control group. A two-sample t-test, which is used to determine whether the means of two groups are equal (Jones, 1994), suggests that the improvement of the test group over the control group is significant at the confidence level α = 0.05. The decent lift indicates that we targeted the right population and got their positive feedback.
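The readout check amounts to the following Python sketch; the per-customer post-campaign GMV arrays are synthetic stand-ins:

import numpy as np
from scipy.stats import ttest_ind

test_gmv = np.random.gamma(2.0, 50.0, 814)      # stand-in outcomes, test group
control_gmv = np.random.gamma(2.0, 40.0, 198)   # stand-in outcomes, control group
t_stat, p_value = ttest_ind(test_gmv, control_gmv, equal_var=False)
significant = p_value < 0.05    # reject equal means at alpha = 0.05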
4. Discussion
In this work, we propose a scientific framework to focus on the downward trend
customers in the business, leveraging multiple machine-learning techniques. We first propose
a method to define and detect the propensity of a downward trend of customers, which becomes the foundation for the following steps. Next, causal inference and semi-supervised learning are applied to find the golden set and the silent sufferers. In an A/B campaign test, we verify the performance of the model and confirm its effectiveness.
The causal inference and semi-supervised learning parts can be adapted to other stages of the customer lifecycle as well, including churn, early churn and one-and-only-one-transaction cases. For lever detection in this study, we build and test the framework on the BCE lever. However, other levers such as spend capacity are also worth exploring. Moreover, we introduce the methodology in an e-commerce business background, but it can easily be applied in other business areas. For instance, in the telecommunication industry, we can use variables such as call quality, device type, and count of text messages sent, among others, to develop the framework. We will focus on improving performance and extending to other CRM application scenarios.
References:
Almana, A. M. and M. S. Aksoy, et al. (2014). "A survey on data mining techniques in customer churn
analysis for telecom industry." Journal of Engineering Research and Applications 4 (5): 165-171.
Burez, J. and D. Van den Poel (2009). "Handling class imbalance in customer churn prediction." Expert
Systems with Applications 36 (3): 4626-4636.
Chen, I. J. and K. Popovich (2003). "Understanding customer relationship management (CRM) People,
process and technology." Business process management journal 9 (5): 672-688.
Coussement, K. and D. Van den Poel (2008). "Churn prediction in subscription services: An application
of support vector machines while comparing two parameter-selection techniques." Expert systems with
applications 34 (1): 313-327.
Friedman, J. H. (2001). "Greedy function approximation: a gradient boosting machine." Annals of
statistics: 1189-1232.
Kamalraj, N. and A. Malathi (2013). "A survey on churn prediction techniques in communication sector."
International Journal of Computer Applications 64 (5).
Lee, E. and E. Lee, et al. (2017). "Predicting customer churn in mobile industry using data mining
technology." Industrial Management & Data Systems 117 (1): 90-109.
Li, X. and Y. Wang, et al. (2008). Learning query intent from regularized click graphs. Proceedings of
the 31st annual international ACM SIGIR conference on Research and development in information
retrieval, ACM.
Liu, B. and W. S. Lee, et al. (2002). Partially supervised classification of text documents. ICML.
Lopez-Paz, D. and R. Nishihara, et al. (2016). "Discovering causal signals in images." arXiv preprint
arXiv:1605.08179.
Natekin, A. and A. Knoll (2013). "Gradient boosting machines, a tutorial." Frontiers in neurorobotics 7.
Óskarsdóttir, M. and C. Bravo, et al. (2017). "Social network analytics for churn prediction in telco:
Model building, evaluation and network architecture." Expert Systems with Applications 85: 204-220.
Shaughnessy, J. J. and E. B. Zechmeister (1985). Research methods in psychology., Alfred A. Knopf.
Subramanya, K. B. and A. Somani (2017). Enhanced feature mining and classifier models to predict
customer churn for an E-retailer. Cloud Computing, Data Science & Engineering-Confluence, 2017 7th
International Conference on, IEEE.
Wei, C. and I. Chiu (2002). "Turning telecommunications call details to churn prediction: a data mining
approach." Expert Systems with Applications 23 (2): 103-112.
Xie, Y. and X. Li, et al. (2009). "Customer churn prediction using improved balanced random forests."
Expert Systems with Applications 36 (3): 5445-5449.
Yuan, S. and S. Bai, et al. (2017). Customer Churn Prediction in the Online New Media Platform: A
Case Study on Juzi Entertainment. Platform Technology and Service (PlatCon), 2017 International
Conference on, IEEE.
Zhu, X. (2005). "Semi-supervised learning literature survey."
| 2 |
A Model Predictive Control Approach for
Low-Complexity Electric Vehicle Charging
Scheduling: Optimality and Scalability
Wanrong Tang, Student Member, IEEE, and Ying Jun (Angela) Zhang, Senior Member, IEEE
arXiv:1502.01456v3 [math.OC] 1 Apr 2016
Department of Information Engineering, The Chinese University of Hong Kong
Shatin, New Territories, Hong Kong
Abstract—With the increasing adoption of plug-in electric vehicles (PEVs), it is critical to develop efficient charging coordination
mechanisms that minimize the cost and impact of PEV integration to the power grid. In this paper, we consider the optimal PEV
charging scheduling, where the non-causal information about
future PEV arrivals is not known in advance, but its statistical
information can be estimated. This leads to an “online” charging
scheduling problem that is naturally formulated as a finite-horizon dynamic programming problem with continuous state space and
action space. To avoid the prohibitively high complexity of solving
such a dynamic programming problem, we provide a Model
Predictive Control (MPC) based algorithm with computational
complexity O(T^3), where T is the total number of time stages.
We rigorously analyze the performance gap between the near-optimal solution of the MPC-based approach and the optimal
solution for any distributions of exogenous random variables.
Furthermore, our rigorous analysis shows that when the random
process describing the arrival of charging demands is first-order
periodic, the complexity of the proposed algorithm can be reduced
to O(1), which is independent of T . Extensive simulations show
that the proposed online algorithm performs very closely to the
optimal online algorithm. The performance gap is smaller than
0.4% in most cases.
I. I NTRODUCTION
A. Background and Contributions
The massive deployment of PEVs imposes great challenges
to smart power grid, such as voltage deviation, increased power
losses, and higher peak load demands. It is critical to design
PEV charging mechanisms that minimize the cost and impact
of PEV integration. Previously, PEV charging coordination has
been extensively studied to minimize power loss, minimize
load variance, or minimize charging cost, etc [3]–[7]. Ideally,
the load demand can be flattened as much as possible if
the information about future charging demand is known non-causally when calculating the charging schedule. However, in
practice, a PEV charging station only knows the load demand
This work was presented in part as [1]. This work was supported in part
by the National Basic Research Program (973 Program, Program number
2013CB336701), and three grants from the Research Grants Council of
Hong Kong under General Research Funding (Project number 2150828 and
2150876) and Theme-Based Research Scheme (Project number T23-407/13N).
W. Tang is with the Department of Information Engineering, The Chinese
University of Hong Kong, Shatin, New Territories, Hong Kong (Email:
twr011@ie.cuhk.edu.hk).
Y. J. Zhang is with the Department of Information Engineering, The Chinese
University of Hong Kong. She is also with Shenzhen Research Institute, The
Chinese University of Hong Kong (Email: yjzhang@ie.cuhk.edu.hk).
of the PEVs that have arrived, but not that of the PEVs
coming in the future. Fortunately, the statistical information
of the future charging demands can often be acquired through
historic data, which benefits the control of the PEV charging
scheduling in practical scenarios.
In this paper, we consider the optimal PEV charging
scheduling, assuming that the future charging demand is not
known a priori, but its statistical information can be estimated.
In particular, we define the cost of PEV charging as a general
strictly convex increasing function of the instantaneous load
demand. Minimizing such a cost leads to a flattened load demand, which is highly desirable for many reasons [3]–[7]. The
online PEV charging scheduling problem is formulated as a
finite-horizon dynamic programming problem with continuous
state space and action space. To avoid the prohibitively high
complexity of solving such a dynamic programming problem,
we provide a Model Predictive Control (MPC) approach to
obtain a near-optimal solution. Instead of adopting the generic
convex optimization algorithms to solve the problem, we
propose an algorithm with computational complexity O(T^3)
by exploring the load flattening feature of the solution, where
T is the total number of time stages. We rigorously analyze
the performance gap between the near-optimal solution of the
MPC-based approach and the optimal solution, and the result
applies to any distributions of exogenous random variables.
Specially, the performance gap is evaluated by the Value
of the Stochastic Solution (VSS), which represents the gap
between the solution of the approximate approach and that
of dynamic programming problem [8]–[10]. Furthermore, our
analysis shows that when the random process describing
the arrival of charging demands is first-order periodic, the
complexity of the proposed algorithm can be reduced to O(1),
which is independent of T . Extensive simulations show that
the proposed algorithm performs very closely to the optimal
solution. The performance gap is smaller than 0.4% in most
cases. As such, the proposed algorithm is very appealing for
practical implementation due to its scalable computational
complexity and close to optimal performance.
The rest of the paper is organized as follows. A review of the
related work on the PEV charging scheduling with uncertain
load demand is presented in Section I-B. We introduce the
problem formulations of both offline and online PEV charging
problem in Section II. In Section III, we propose a MPC based
online algorithm and analyze its performance gap. The O(1)-
complexity algorithm is given when the arrival process is first-order periodic in Section IV. Simulation results are presented
in Section V. Finally, the paper is concluded in Section VI.
B. Related Work
The works on the PEV charging scheduling with uncertain
PEV load demand include both simulation-based evaluations
[11], [12] and theoretical performance guarantees [13]–[17].
Meanwhile, MPC is one of the most common approaches and has been widely adopted in recent studies [12]–[15].
[12] leverages the MPC based method to design a dynamic
charging and driving cost control scheme. Both [13] and [14]
apply MPC algorithms to minimize the load variation. [15]
proposes a plug and play MPC approach to minimize the
voltage fluctuations by assuming that the load demand is time-periodic. Compared to [12]–[15], in this paper we analyze
the performance gap between the solution of MPC approach
and the optimal solution regardless of the distribution of the
load demand. Besides, we provide a more scalable algorithm
with O(1) complexity as well as the optimality analysis
for the case when the load demand is first-order periodic.
Additionally, the objective functions in [12]–[15] are quadratic
forms of load demand. Whereas in this paper, the objective
function is a general strictly convex increasing function which
reflects both the charging cost and the load variance.
As to the amount of information needed, the EV charging
scheduling algorithms in [16] and [17] require the probability
distribution of the random PEV arrival process. In contrast,
the proposed algorithm in this paper only requires the first-order moment, i.e., the expected values of the random demand
patterns. In practice, it is a lot easier to obtain the expected
values of random process than to obtain the probability distribution of a random process. Convex cost functions are also
considered in [18] and [19]. Both of them devise the online
algorithms for battery charging control problems, where there
is no charging deadline for the battery. The PEV charging
scheduling problem in this paper differs from stationary battery
charging in that each PEV has a demand to be satisfied before
a certain deadline.
II. P ROBLEM F ORMULATION
We consider the PEV charging scheduling problem, where
PEVs arrive at the charging station at random instants with
random charging demands that must be fulfilled before a
random departure time.
A. Optimal Offline PEV Charging Problem
For the ease of understanding, we first introduce an ideal
case, where all the non-causal information of base load and
the PEVs, including the arrival times, departure times and
charging demands are known to the charging station before
the system time. The entire system time is divided into T
equal-length time slots. Let N denote the set of PEVs that
arrive during the system time. Notice that for a given time
slot number T , N is itself a random set due to the random
arrival of PEVs. We denote by I(t) the set of PEVs that
are in the charging station during slot $t$. Denote by $t_i^{(s)}$ and $t_i^{(e)}$ the arrival time slot and the departure time slot of PEV $i$, respectively. $d_i$ denotes the charging demand that PEV $i$ requires. The charging station needs to decide the charging rates $x_{it}, \forall i \in \mathcal{I}(t)$. To satisfy the demand $d_i$, the $x_{it}$ must satisfy $\sum_{t=t_i^{(s)}}^{t_i^{(e)}} x_{it} = d_i$. Let $s_t$ be the total charging rate of time slot $t$, i.e.,

$$s_t = \sum_{i \in \mathcal{I}(t)} x_{it}, \quad \forall t = 1, 2, \cdots, T, \qquad \text{(1)}$$
which is also called charging load at time t. The total load
consists of both the charging load and the inelastic base load in
the same location. The base load, denoted by lt , represents the
load of other electricity consumptions at time $t$ except for PEV charging. Then, the total load at time $t$ is given by $\sum_{i \in \mathcal{I}(t)} x_{it} + l_t$. Suppose that the charging cost at time $t$ is a strictly convex
increasing function of the total load, denoted by f (st + lt ).
The convexity and increasing property of f (st + lt ) reflects
the fact that each unit of additional power demand becomes
more expensive to obtain and make available to the consumer.
For example, in the wholesale market, the instantaneous cost
can be modeled as an increasing quadratic function of the
instant load [4]–[6]. On the other hand, the convexity of f (st +
lt ) also captures the intent of reducing the load fluctuation
over time [7]. Then the total cost over time T is computed
PT
(s) (e)
as t=1 f (st + lt ). In the ideal case, assume that lt , ti , ti ,
and di for all t = 1, · · · , T, i ∈ N are known non-causally at
the beginning of the system time. Then, the charging station
can solve (2) and obtain the optimal charging rate, denoted by
x∗it for all time t and the optimal total cost, denoted by Ψ1 .
Such a solution is referred to as an “optimal offline solution”.
Ψ1 = \min_{x_{it}} \sum_{t=1}^{T} f\Big( \sum_{i \in I(t)} x_{it} + l_t \Big)   (2a)

s.t. \; \sum_{t=t_i^{(s)}}^{t_i^{(e)}} x_{it} = d_i, \quad \forall i \in N,   (2b)

x_{it} \geq 0, \quad \forall t = t_i^{(s)}, \cdots, t_i^{(e)}, \; \forall i \in N.   (2c)
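For concreteness, the offline problem (2) can be prototyped with an off-the-shelf convex solver. The sketch below uses CVXPY and a quadratic cost as an example; the solver choice, the toy data, and the variable names are our assumptions, not the paper's setup.

```python
# A sketch of the offline problem (2) with a quadratic cost; data are toy values.
import cvxpy as cp
import numpy as np

T = 24
arr = [0, 3, 5]            # arrival slots t_i^(s) (0-indexed)
dep = [10, 12, 20]         # departure slots t_i^(e)
d = [30.0, 28.0, 33.0]     # demands d_i
l = 50.0 * np.ones(T)      # base load l_t

x = cp.Variable((len(d), T), nonneg=True)        # x_it >= 0, constraint (2c)
cons = []
for i in range(len(d)):
    outside = [t for t in range(T) if not (arr[i] <= t <= dep[i])]
    cons.append(x[i, outside] == 0)              # no charging while away
    cons.append(cp.sum(x[i, arr[i]:dep[i] + 1]) == d[i])   # demand (2b)

s = cp.sum(x, axis=0)                            # total charging rate, cf. (1)
prob = cp.Problem(cp.Minimize(cp.sum(cp.square(s + l))), cons)
prob.solve()
print("Psi_1 =", prob.value)
```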
In particular, the optimal total charging rate, denoted by s_t^*, is defined as s_t^* = \sum_{i \in I(t)} x_{it}^*. Note that there are in total O(T |I(t)|) variables in (2), where |I(t)| denotes the cardinality of the set I(t). This number can be quite large when the number of cars present at each time slot, |I(t)|, is large. Next, we propose an equivalent transformation of (2) that drastically reduces the number of variables. In particular, the following Theorem 1 shows that as long as we find the optimal s_t^* ∀t, the optimal x_{it}^* ∀i, t can be obtained by earliest deadline first (EDF) scheduling.
Theorem 1: If a set of s_t's satisfies the following inequality for all n = 1, \cdots, T:

\sum_{t=1}^{n} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i \;\leq\; \sum_{t=1}^{n} s_t \;\leq\; \sum_{t=1}^{n} \sum_{i \in \{i \mid t_i^{(s)} = t\}} d_i,   (3)

then there exists at least one set of x_it's that is feasible to (2).
One such set of x_it's can be obtained by EDF scheduling, which charges the PEV i ∈ I(t) with the earliest deadline at rate s_t at each time t. Moreover, when s_t = s_t^*, the set of x_it's obtained by EDF scheduling is the optimal solution, x_{it}^*, to (2).
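Before turning to the proof, the EDF construction can be sketched in code; the array layout and tolerance below are our own assumptions.

```python
# A sketch of EDF scheduling: given rates s_t satisfying (3), allocate x_it by
# always serving the parked PEV with the earliest deadline first.
def edf_schedule(s, arr, dep, d, eps=1e-9):
    N, T = len(d), len(s)
    x = [[0.0] * T for _ in range(N)]
    rem = list(d)                                 # remaining demands
    for t in range(T):
        cap = s[t]                                # rate available in slot t
        parked = [i for i in range(N)
                  if arr[i] <= t <= dep[i] and rem[i] > eps]
        for i in sorted(parked, key=lambda i: dep[i]):   # earliest deadline first
            amt = min(cap, rem[i])
            x[i][t] += amt
            rem[i] -= amt
            cap -= amt
            if cap <= eps:
                break
    assert all(r <= 1e-6 for r in rem), "condition (3) must hold"
    return x
```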
Proof: Please see the detailed proof in Appendix A.
To see Theorem 1, note that (3) implies that the total energy charged by any time slot n is no less than the total charging demand that must be satisfied by time n, and no more than the total charging demand of the PEVs which have arrived up to time n. On the other hand, by EDF scheduling, PEVs with earlier deadlines must be fully charged before those with later deadlines can be charged. Thus, (3) guarantees the fulfillment of the charging demand of each individual PEV. With Theorem 1, we can transform (2) into the following equivalent problem with T variables.
Ψ1 = \min_{s_t} \sum_{t=1}^{T} f(s_t + l_t)   (4a)

s.t. \; \sum_{t=1}^{n} s_t \geq \sum_{j=1}^{n} \sum_{i \in \{i \mid t_i^{(e)} = j\}} d_i, \quad \forall n = 1, \cdots, T,   (4b)

\sum_{t=1}^{n} s_t \leq \sum_{j=1}^{n} \sum_{i \in \{i \mid t_i^{(s)} = j\}} d_i, \quad \forall n = 1, \cdots, T.   (4c)

The optimal solution s_t^* to (4) has an interesting feature: it does not change with the cost function f(s_t + l_t), as long as f is strictly convex. Moreover, s_t^* also minimizes the variance of the total load subject to (4b) and (4c), where the variance of the total load is defined as \sum_{t=1}^{T} \big( s_t + l_t - \frac{1}{T}\sum_{t=1}^{T}(s_t + l_t) \big)^2 [13], [14]. This is proved in Theorem 2.

Theorem 2: The optimal solution s_t^* to (4) does not change with the cost function f(·), as long as f(·) is strictly convex. Moreover, s_t^* is a load flattening solution that minimizes the variance of the total load.

Proof: Please see the detailed proof in Appendix B.

Remark 1: In practice, a capacity constraint on s_t + l_t is present for each t due to hardware limitations and security concerns. The constraint is omitted in our formulation for the following reason. Theorem 2 indicates that the optimal solution s_t^* to (4) minimizes the variance of the total load. That is, any other scheduling solution would have a higher peak load, and therefore is more likely to violate the capacity constraint. In this sense, the optimal solution s_t^* to (4) is “capacity optimal”: if it violates the capacity constraint, then no other scheduling solution to Problem (4) (or equivalently Problem (2)) satisfies the capacity constraint.

B. Online PEV Charging Problem

For the online PEV charging problem, the charging schedule depends only on the statistical information of the future load demand, the current base load, and the remaining charging demands and deadlines of the PEVs that have arrived so far. In contrast to the ideal case in the last subsection, in practice the charging station only knows the remaining charging demands and departure deadlines of the PEVs that have already arrived, as well as the current base load. The online charging scheduling algorithm computes the charging rate s_k at each time slot k based on the known causal information and the statistical information of the unknown future demands. The charging rate s_k, once determined, cannot be changed in the future. Specifically, the remaining charging demand of PEV i at time k is given by \hat{d}_i^k = d_i - \sum_{t=t_i^{(s)}}^{k-1} x_{it}. Note that \hat{d}_i^k = d_i for all PEVs that have not yet arrived by time k − 1.

A close look at (4) suggests that the charging schedule s_t only depends on the total charging demand that needs to be finished before a certain time, but not on the demand due to individual PEVs. Thus, for notational simplicity, we define \tilde{d}_t^k = \sum_{i \in \{i \mid t_i^{(e)} = t\}} \hat{d}_i^k, ∀t = k, \cdots, T, as the total unfinished charging demand at time k that must be completed by time t. With this, we define the state of the system at time t as

D_t = [l_t, \tilde{d}_t^t, \tilde{d}_{t+1}^t, \cdots, \tilde{d}_T^t],   (5)

where l_t is the base load at time t and \tilde{d}_{t'}^t is the total unfinished charging demand at time t that must be completed by time t'. Let ξ_t represent the random arrival events at time t. ξ_t is defined as

\xi_t = [\iota_t, \eta_t^t, \eta_{t+1}^t, \cdots, \eta_{e_t}^t],   (6)

where ι_t is the base load at time t, \eta_{t'}^t is the total charging demand that arrives at time t and must be fulfilled by time t', and e_t is the latest deadline among the PEVs that arrive at time t. Then, the state transition, defined as

D_{t+1} := g(s_t, D_t, \xi_{t+1}),   (7)
is calculated as follows:
l_{t+1} = \iota_{t+1}   (8)

and

\tilde{d}_{t'}^{t+1} = \Big[ \tilde{d}_{t'}^{t} - \Big[ s_t - \sum_{j=t}^{t'-1} \tilde{d}_j^{t} \Big]^+ \Big]^+ + \eta_{t'}^{t+1}, \quad \forall t' = t+1, \cdots, T.   (9)

Here, [x]^+ = \max\{x, 0\}. With the above definitions of system
state and state transition, we are now ready to rewrite (4) into
the following finite-horizon dynamic programming problem.
Q_k(D_k) = \min_{s_k} \; f(s_k + l_k) + \mathbb{E}_{\xi_{k+1}}\big[ Q_{k+1}\big( g(s_k, D_k, \xi_{k+1}) \big) \big]   (10a)

s.t. \; \tilde{d}_k^k \leq s_k \leq \sum_{t=k}^{T} \tilde{d}_t^k,   (10b)
where Q_k(D_k) is the optimal value of the dynamic program at time k. The left side of (10b) ensures that all charging demands are satisfied before their deadlines. The right side of (10b) implies that the total charging power up to a certain time cannot exceed the total demand that has arrived up to that time. By a slight abuse of notation, in the rest of the paper we denote the optimal solutions to both the online and offline problems as s_k^* when no confusion arises; the actual meaning of s_k^* will be clear from the context. Suppose that s_k^* is the optimal solution to (10) at stage k. Then, the total cost at the end of the system time, denoted by Ψ2, is given by

Ψ2 = \sum_{k=1}^{T} f(s_k^* + l_k).   (11)
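The recursion (7)–(9) is straightforward to implement. A minimal sketch follows, with our own indexing convention (dtilde[j] stores the unfinished demand due at slot t + j); it is an illustration, not the paper's code.

```python
# A sketch of the state transition g in (7)-(9); indexing is illustrative.
def transition(s_t, dtilde, iota_next, eta_next):
    """dtilde[j]: unfinished demand at time t due by slot t+j (j = 0, ..., T-t)."""
    new_dtilde = []
    served_before = 0.0                     # running sum of dtilde_j for j < t'
    for j, dem in enumerate(dtilde):
        if j > 0:
            slack = max(s_t - served_before, 0.0)       # inner [.]^+ in (9)
            carried = max(dem - slack, 0.0)             # outer [.]^+ in (9)
            new_dtilde.append(carried + eta_next.get(j - 1, 0.0))
        served_before += dem
    return iota_next, new_dtilde            # l_{t+1} = iota_{t+1}, cf. (8)
```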
Note that (10a) comprises nested expectations with respect to the random PEV arrivals in future time stages. Except for a few special cases, it is hard to obtain a closed-form optimal solution to (10). On the other hand, (10) can be solved by standard numerical methods, such as backward induction and the sample average approximation (SAA) based on Monte Carlo sampling techniques [8]–[10], [22]. These algorithms typically incur a computational complexity that grows exponentially with both the time span T and the dimensions of the state and decision spaces. Note that (10) involves continuous state and decision spaces. Discretization of these spaces leads to a curse of dimensionality, rendering the computational complexity prohibitively high.
III. MPC-BASED ONLINE CHARGING ALGORITHM

In view of the extremely high complexity of standard numerical methods, we are motivated to obtain a near-optimal solution by solving a much simpler problem, which replaces all exogenous random variables by their expected values. This is referred to as the expected value problem [8]–[10] or the MPC approach [12]–[15] in the literature. Notice that the first-order moment, i.e., the expectation, of a random process is much easier to estimate than other statistics, e.g., the variance or the probability distribution. Thus, the assumption of knowing the expected values is weaker than the assumptions in other EV-charging algorithms [20], which assume that the probability distributions of the random processes are known.

Instead of solving the problem using generic convex optimization approaches, we propose a low-complexity online Expected Load Flattening (ELF) algorithm by exploiting the load flattening feature of the optimal solution to the expected value problem, as shown in Section III-A. Section III-B provides a theoretical analysis of the performance gap between the optimal solution to the expected value problem and the optimal solution to (10).
A. Algorithm Description

Denote the expectation of ξ_t as µ_t = [ν_t, µ_t^t, \cdots, µ_T^t], where ν_t = E[ι_t] and µ_{t'}^t = E[η_{t'}^t], ∀t' = t, \cdots, T. Replacing ξ_t in (10) with µ_t, we obtain the following deterministic problem:

\min_{s_k} \; f(s_k + l_k) + \sum_{t=k+1}^{T} f(s_t + \nu_t)   (12a)

s.t. \; \sum_{t=k}^{j} s_t \geq \sum_{t=k}^{j} \tilde{d}_t^k + \sum_{m=k+1}^{j} \sum_{n=m}^{j} \mu_n^m, \quad \forall j = k, \cdots, T,   (12b)

\sum_{t=k}^{j} s_t \leq \sum_{t=k}^{T} \tilde{d}_t^k + \sum_{m=k+1}^{j} \sum_{n=m}^{e_m} \mu_n^m, \quad \forall j = k, \cdots, T.   (12c)

At each time k, we solve problem (12) and obtain the optimal charging solution s_k^*. Then, problem (12) is re-solved with the updated \tilde{d}_t^k according to the realization of the PEVs that arrive in the next time slot. Proceeding in this manner, we obtain the optimal charging solution s_k^* for time stages k = 2, \cdots, T. The total cost at the end of the system time, denoted by Ψ3, is defined as

Ψ3 = \sum_{k=1}^{T} f(s_k^* + l_k),   (13)

where s_k^* is the optimal solution to (12) at time stage k. The solution to (12) is always feasible to (10) in the sense that it always guarantees fulfilling the charging demands of the currently parked PEVs before their departures. This is because the constraints on s_k in (10) are included in (12).

Due to the convexity of f(·), the optimal solution is the one that flattens the total load as much as possible. By exploiting the load flattening feature of the solution, we present in Algorithm 1 the online ELF algorithm that solves (12) with complexity O(T^3). The online ELF algorithm has a lower computational complexity than generic convex optimization algorithms, such as the interior point method, which has a complexity of O(T^{3.5}) [21]. Notice that similar algorithms have been proposed in the literature on speed scaling problems [23], [24] and PEV charging problems [5]. The optimality and the complexity of the algorithm have been proved therein, and are hence omitted here. The algorithm presented here, however, paves the way for further complexity reduction to O(1) in Section IV. For notational brevity, we denote in the online ELF algorithm

\bar{d}_{t''}^{t'} = \begin{cases} \tilde{d}_{t''}^{k}, & \text{for } t'' = k, \cdots, T, \; t' = k, \\ \mu_{t''}^{t'}, & \text{for } t'' = t', \cdots, T, \; t' = k+1, \cdots, T. \end{cases}   (14)

The key idea of the online ELF algorithm is to balance the charging load among all time slots k, \cdots, T. Specifically, steps 3–5 search for the time interval [i^*, j^*] that has the maximum load density among the current time slots and record the maximum load density. The time slots with maximum load density are then deleted, and the process is repeated until the current time slot k belongs to the maximum-density interval, i.e., i^* = k.

Algorithm 1: Online ELF Algorithm
input: D_k, µ_t, t = k+1, \cdots, T
output: s_k
1: initialization i = 0, j = 0;
2: repeat
3:   For all time slots i = k, \cdots, T, j = i, \cdots, T, compute
       i^*, j^* = \arg\max_{k \le i \le j \le T} \Big\{ \frac{\sum_{t'=i}^{j} \big( \sum_{t''=t'}^{j} \bar{d}_{t''}^{t'} + \nu_{t'} \big)}{j - i + 1} \Big\}.   (15)
4:   Set
       y^* = \frac{\sum_{t'=i^*}^{j^*} \big( \sum_{t''=t'}^{j^*} \bar{d}_{t''}^{t'} + \nu_{t'} \big)}{j^* - i^* + 1}.   (16)
5:   Delete time slots i^*, \cdots, j^* and relabel the existing time slots t > j^* as t − j^* + i^* − 1.
6: until i^* = k;
7: Set s_k = y^* − l_k.
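A compact (and deliberately unoptimized) rendering of Algorithm 1 is sketched below. The container layout follows (14), and the deletion/relabeling of step 5 is simplified to removing slots from an active list, which we assume captures the intent of the pseudocode.

```python
# A sketch of the online ELF algorithm, built around the search in (15)-(16).
def elf_rate(k, T, dbar, nu, l_k):
    # dbar[tp][tpp]: demand term of (14) appearing at slot tp with deadline tpp
    slots = list(range(k, T + 1))                 # active (undeleted) slots
    while True:
        best_rho, a_star, b_star = -1.0, 0, 0
        for a in range(len(slots)):               # step 3: max-density search (15)
            for b in range(a, len(slots)):
                win = slots[a:b + 1]
                num = sum(nu[tp] for tp in win)
                num += sum(v for tp in win
                           for tpp, v in dbar[tp].items() if tpp <= win[-1])
                rho = num / (b - a + 1)
                if rho > best_rho:
                    best_rho, a_star, b_star = rho, a, b
        if slots[a_star] == k:                    # step 6: stop when i* = k
            return best_rho - l_k                 # step 7: s_k = y* - l_k, cf. (16)
        del slots[a_star:b_star + 1]              # step 5: delete [i*, j*]
```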
B. Optimality Analysis
In this subsection, we analyze the optimality of the solution to (12). Notice that MPC approximates the non-causal
random variables by their expected values regardless of their
distribution functions. As a result, such approximation may
lead to unacceptably large performance loss, depending on the
distribution of the random variables. Therefore, the MPC approximation is not always justifiable. A well-accepted metric,
Value of the Stochastic Solution (VSS) is adopted to evaluate
optimality gap between the optimal online solution and the
solution to the expected value problem [8]–[10]. Previous
work, e.g., [9], [10], mainly evaluates VSS using numerical
simulations. Whereas in our analysis, we show that VSS is
always bounded regardless of the distribution of the future
EV charging demands. This provides a strong theoretical
justification of adopting MPC approximation to solve our
problem.
Let Ξ denote a scenario, which is defined as a possible
realization of the sequence of random load demand [22],
Ξ = [ξ2 , ξ3 , · · · , ξT ].
(17)
Here, we treat ξ_1 as deterministic information, since the demand of the PEVs arriving at the first stage is known by the scheduler. Let Φ1, Φ2, and Φ3 be the expectations of the optimal values of the offline problem (4), the online problem (10), and the expected value problem (12), respectively, where the expectation is taken over the random scenarios. That is,
Φ1 = EΞ [Ψ1 (Ξ)] , Φ2 = EΞ [Ψ2 (Ξ)] , Φ3 = EΞ [Ψ3 (Ξ)] .
(18)
It has been proved previously [8], [9] that
Φ1 ≤ Φ2 ≤ Φ3 .
(19)
To assess the benefit of knowing and using the distributions
of the future outcomes, the VSS is defined as
VSS = Φ3 − Φ2 .
(20)
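In our analysis the VSS is bounded analytically, but it can also be estimated empirically. A minimal Monte Carlo sketch, assuming solvers for (10) and (12) are available as black boxes:

```python
# A sketch of a Monte Carlo estimate of VSS = Phi_3 - Phi_2, cf. (18)-(20).
import numpy as np

def estimate_vss(sample_scenario, solve_online, solve_evp, n_runs=1000):
    psi2, psi3 = [], []
    for _ in range(n_runs):
        xi = sample_scenario()           # one realization of Xi in (17)
        psi2.append(solve_online(xi))    # Psi_2(Xi): cost of the online optimum
        psi3.append(solve_evp(xi))       # Psi_3(Xi): cost of the MPC solution
    return np.mean(psi3) - np.mean(psi2)
```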
To show that the online ELF algorithm yields a bounded VSS, we need to bound Φ2 and Φ3. Generally, it is hard to calculate Φ2 or analyze its lower bound directly [9], [10]. Thus, we choose to analyze the lower bound of Φ1 instead, since (19) shows that a lower bound of Φ1 is also a lower bound of Φ2. In what follows, we show a lower bound of Φ1 in Proposition 1 and an upper bound of Φ3 in Proposition 2.
Proposition 1:

Φ1 ≥ T f\Big( \frac{\sum_{t=1}^{e_1} \tilde{d}_t^1 + \sum_{t=2}^{T} \sum_{j=t}^{e_t} \mu_j^t + \sum_{t=1}^{T} \nu_t}{T} \Big).   (21)
Proof: Please see the detailed proof in Appendix C.
Let O(t) be the set O(t) = \{(m, n) \mid e_m \geq t, \; m = 1, \cdots, t, \; n = t, \cdots, e_m\}. Then, we show that Φ3 is bounded in Proposition 2.

Proposition 2: For any distribution of ξ_t, t = 1, \cdots, T,

Φ3 ≤ E\Big[ \sum_{t=1}^{T} f\Big( \sum_{(m,n) \in O(t)} \eta_n^m + \iota_t \Big) \Big].   (22)
Proof: Please see the detailed proof in Appendix D.
Now, we are ready to present Theorem 3, which states that the VSS is bounded for any distribution of the random variables.

Theorem 3: For any distribution of the random vectors ξ_t, t = 1, \cdots, T,

VSS ≤ E\Big[ \sum_{t=1}^{T} f\Big( \sum_{(m,n) \in O(t)} \eta_n^m + \iota_t \Big) \Big] − T f\Big( \frac{Γ}{T} \Big),   (23)

where Γ = \sum_{t=1}^{e_1} \tilde{d}_t^1 + \sum_{t=2}^{T} \sum_{j=t}^{e_t} \mu_j^t + \sum_{t=1}^{T} \nu_t.

Theorem 3 follows directly from Propositions 1 and 2. In practice, the performance gap between the online ELF algorithm and the optimal online algorithm is often much smaller than the bound on the VSS. This will be elaborated in the numerical results in Section V.
IV. ONLINE ELF ALGORITHM UNDER FIRST-ORDER PERIODIC RANDOM PROCESSES

Notice that the O(T^3) complexity of the online ELF algorithm mainly comes from step 3, which exhaustively searches for the maximum-density period [i^*, j^*] over all subintervals within [k, T]. When the random arrival process is stochastically first-order periodic¹, we argue that the search scope can be reduced from the whole system time T to one period. Thus, the complexity of step 3 is limited by the length of a period instead of T. As a result, the complexity of the algorithm reduces from O(T^3) to O(1), implying that it does not increase with the system time T, and thus the algorithm is perfectly scalable. In practice, the arrival process of the charging demands is usually stochastically periodic. For example, the arrival of charging demands at a particular location is statistically identical at the same time every day during weekdays (or weekends). The National Household Travel Survey (NHTS) 2009 gathered information about the daily travel patterns of different types of households in 2009, and shows that the daily travel statistics (e.g., Average Vehicle Trip Length, Average Time Spent Driving, Person Trips, Person Miles of Travel) are very similar for each weekday or weekend day, but different between weekdays and weekends [28]. In Section IV-A, we investigate the case when the random load demand process is first-order periodic. In Section IV-B, we provide a closed-form solution to (12) for a special case when the load demand process is first-order stationary.
A. First-Order Periodic Process
In this subsection, we consider the case when the arrival process is first-order periodic. Specifically, a first-order periodic process is one whose first-order moment (i.e., mean) is periodic. That is, at the current time stage k, for all t = k+1, \cdots, T, we have

µ_t = E[ξ_t] = µ_{t+p},   (24)

where ξ_t is the random arrival event at time t, µ_t is the expectation of ξ_t, and p is the length of the period. Then, instead

¹The first-order periodic stochastic process is defined as a stochastic process whose first-order moment, i.e., mean, is periodic. That is, the mean of the random arrival events, µ_t, is periodic. However, the actual realizations of the arrival events ξ_t are uncertain and not periodic.
of considering µ_t for t = k+1, \cdots, T, we only need to consider µ_t for one period, i.e., for t = k+1, \cdots, k+p:

µ_{k+1} = [ν_{k+1}, µ_{k+1}^{k+1}, µ_{k+2}^{k+1}, \cdots, µ_{k+e_1}^{k+1}, 0, \cdots, 0],
    ⋮                                                        (25)
µ_{k+p} = [ν_{k+p}, µ_{k+p}^{k+p}, µ_{k+p+1}^{k+p}, \cdots, µ_{k+e_p}^{k+p}, 0, \cdots, 0].

Here, e_n ≤ T, n = 1, \cdots, p, is the maximum parking time for the PEVs arriving at time k+n. Specially, we define ê as ê = max\{e_1, e_2, \cdots, e_p\}. We decompose the search region \{i, j \mid i = k, \cdots, T, j = i, \cdots, T\} into three subregions, defined as Π_1 = \{i, j \mid i = k, j = k, \cdots, k+ê\}, Π_2 = \{i, j \mid i = k, j = k+ê+1, \cdots, T\}, and Π_3 = \{i, j \mid i = k+1, \cdots, T, j = i, \cdots, T\}, respectively. We denote by X̂, Ŷ, Ẑ the maximum densities of regions Π_1, Π_2, Π_3, respectively.
Indeed, the largest of X̂, Ŷ , Ẑ is the maximum density of the
interval [i∗ , j ∗ ] ⊆ [k, T ] over all possible pairs i, j ∈ {i =
k, · · · , T, j = i, · · · , T }. Let [î1 , ĵ1 ], [î2 , ĵ2 ], [î3 , ĵ3 ] be the
intervals with the maximum density over region Π1 , Π2 , Π3 ,
respectively. By definition, î1 = î2 = k. Similar to the
stationary case, X̂ can be calculated by searching ĵ1 over
{k, · · · , k + ê}. That is,
X̂ = \max_{k \le t \le k+\hat{e}} \frac{\sum_{n=k}^{t} (\tilde{d}_n^k + \nu_n) + \sum_{n=k}^{t} \sum_{m=n}^{t} \mu_m^n}{t - k + 1}.   (26)
Moreover, Lemma 1 shows that Ŷ and Ẑ can be calculated once the maximum density of the interval [k+1, k+ê] has been obtained. First, we introduce some definitions which help to state Lemma 1. Let Π_4 be a subset of Π_3, where Π_4 is defined as Π_4 = \{i, j \mid i = k+1, \cdots, k+ê, j = i, \cdots, k+ê\}, and let [ī, j̄] be the interval with the maximum density of region Π_4, i.e.,

ī, j̄ = \arg\max_{k+1 \le i \le j \le k+\hat{e}} \frac{\sum_{n=i}^{j} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{j - i + 1}.

Lemma 1: The maximum densities of Π_2 and Π_3 are calculated by

Ŷ = \frac{\sum_{n=k}^{\hat{j}_2} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\hat{j}_2 - k + 1}, \quad Ẑ = \frac{\sum_{n=\hat{i}_3}^{\hat{j}_3} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\hat{j}_3 - \hat{i}_3 + 1},   (27)

respectively, where î_3 = ī,

\hat{j}_2 = \begin{cases} \max\{\bar{j}, k+\hat{e}+1\}, & \text{if } \bar{j} < \bar{i} + p, \\ \bar{j} + (r-1)p, & \text{otherwise}, \end{cases}   (28)

and

\hat{j}_3 = \begin{cases} \bar{j}, & \text{if } \bar{j} < \bar{i} + p, \\ \bar{j} + (r-1)p, & \text{otherwise}. \end{cases}   (29)
Proof: Please see the detailed proof in Appendix E.
Based on Lemma 1, we can modify the search region of step 3 of the online ELF algorithm as follows (a code sketch follows this list):
• If j̄ < ī + p, the interval with the maximum density during time stages [k+1, T] is [ī, j̄]. Then, in step 3 of the online ELF algorithm, the search region of i, j is reduced from \{i, j \mid i = k, \cdots, T, j = i, \cdots, T\} to \{i, j \mid i = k, \cdots, ī, j = i, \cdots, ī, j̄\}.
• If j̄ ≥ ī + p, the interval with the maximum density during time stages [k+1, T] is [ī, j̄+(r−1)p]. Then, in step 3 of the online ELF algorithm, the search region of i, j can be reduced from \{i, j \mid i = k, \cdots, T, j = i, \cdots, T\} to \{i, j \mid i = k, \cdots, ī, j = i, \cdots, ī, j̄+(r−1)p\}.
As a result, the search region of the online ELF algorithm only depends on [k+1, k+ê] instead of T. Thus, the computational complexity of the online ELF algorithm is O(1) instead of O(T^3) under a first-order periodic process.
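The reduction can be sketched as follows; this is our reading of the two bullets above, with density(i, j) assumed to evaluate the load density of [i, j] as in (15).

```python
# A sketch of the reduced step-3 search region under first-order periodicity.
def reduced_pairs(k, e_hat, p, r, density):
    best, i_bar, j_bar = -1.0, k + 1, k + 1
    for i in range(k + 1, k + e_hat + 1):         # search Pi_4: one period wide
        for j in range(i, k + e_hat + 1):
            if density(i, j) > best:
                best, i_bar, j_bar = density(i, j), i, j
    # Lemma 1: extend j_bar by whole periods when [i_bar, j_bar] wraps a period
    j_tail = j_bar if j_bar < i_bar + p else j_bar + (r - 1) * p
    # candidate (i, j) pairs for step 3, independent of the horizon T
    return [(i, j) for i in range(k, i_bar + 1)
                   for j in list(range(i, i_bar + 1)) + [j_tail]]
```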
B. First-Order Stationary Process

In this subsection, we show that the optimal solution to (12) can be calculated in closed form if the arrival process is first-order stationary. Due to the page limit, we only provide the main results here. By first-order stationary, we mean that the statistical means of ξ_t, i.e., ν_t and µ_{t'}^t, t' = t, \cdots, T, only depend on the relative time difference τ = t' − t, but not on the absolute value of t. We can then replace ν_t by ν and replace µ_{t'}^t by µ_τ, where τ = t' − t. Then, µ_t is no longer a function of t, and can be represented as

µ = [ν, µ_1, µ_2, \cdots, µ_{\bar{e}}, 0, \cdots, 0],   (30)

where ē is the maximum parking time of a PEV. We denote by X, Y, Z the maximum densities of regions Π_1, Π_2, Π_3, respectively. Then, X is calculated by

X = \max_{k \le n \le k+\bar{e}} \left\{ \frac{\sum_{t=k}^{n} \tilde{d}_t^k + \sum_{j=1}^{n-k} (n-k-j+1)\mu_j + l_k - \nu}{n - k + 1} + \nu \right\},   (31)
and Y, Z are provided in Lemma 2.
Lemma 2: The maximum densities of Π_2 and Π_3 are achieved by setting i_2 = k, j_2 = T, i_3 = k+1, j_3 = T, and are calculated by

Y = \frac{\sum_{t=k}^{k+\bar{e}} \tilde{d}_t^k + \sum_{j=1}^{\bar{e}} (T-k-j+1)\mu_j + l_k - \nu}{T - k + 1} + \nu,   (32a)

Z = \frac{\sum_{j=1}^{\bar{e}} (T-k-j+1)\mu_j}{T - k} + \nu.   (32b)
Proof: Please see the detailed proof in Appendix F.
The largest of X, Y, and Z is the maximum density of the interval [i^*, j^*] ⊆ [k, T] over all possible pairs i, j ∈ \{i = k, \cdots, T, j = i, \cdots, T\}. Specially, if X or Y is the largest, then k is already contained in the maximum-density interval, and thus X or Y determines the optimal charging rate at time k. On the other hand, if Z is the largest, then the maximum-density interval, i.e., [k+1, T], does not include k. Following Algorithm 1, we delete the maximum-density interval and repeat the process. Now, time slot k is the only remaining time slot after the deletion. This implies that all charging demands that have arrived by time slot k should be fulfilled during time slot k. These arguments are summarized in Proposition 3, which provides the closed-form solution to (12).
Proposition 3: When the random load demand process is first-order stationary, the optimal charging schedule to (12) is given by the following closed form:

s_k^* = \begin{cases} X - l_k, & \text{if } X = \max\{X, Y, Z\}, & (33a) \\ Y - l_k, & \text{if } Y = \max\{X, Y, Z\}, & (33b) \\ \sum_{t=k}^{k+\bar{e}} \tilde{d}_t^k, & \text{otherwise.} & (33c) \end{cases}
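In code, the stationary rule (33) is a constant-time lookup once X, Y, Z from (31)–(32) are known; a minimal sketch:

```python
# A sketch of the closed-form rule (33); X, Y, Z come from (31)-(32).
def stationary_rate(X, Y, Z, l_k, dtilde_k):
    if X >= Y and X >= Z:
        return X - l_k                  # (33a): slot k lies in the densest interval
    if Y >= Z:
        return Y - l_k                  # (33b)
    return sum(dtilde_k)                # (33c): clear all demand known at slot k
```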
V. SIMULATIONS

In this section, we investigate the performance of the proposed online ELF algorithm through numerical simulations. All the computations are carried out in MATLAB on a computer with an Intel Core i3-2120 3.30GHz CPU and 8 GB of memory. For comparison, we also plot the optimal solution to (10), which is obtained by the SAA method [22], and a heuristic solution by the online AVG algorithm [6], which is obtained by charging each PEV at a fixed rate, i.e., its charging demand divided by its parking time. Let the expected cost of the AVG algorithm be denoted by Φ4. Define the relative performance losses of ELF and AVG compared with the optimal online solution as (Φ3 − Φ2)/Φ2 and (Φ4 − Φ2)/Φ2, respectively. Similar to [5], [6], we adopt an increasing quadratic cost function in the simulations, i.e., f(s_t + l_t) = (s_t + l_t)^2. Note that this cost function is increasing and strictly convex, since the load s_t + l_t is always non-negative.
A. Average Performance Evaluation

TABLE I
PARAMETER SETTINGS OF THE PEV TRAFFIC PATTERNS

Time of Day      Arrival Rate (PEVs/hour)      Mean Parking
                 S. 1     S. 2     S. 3        Time (hour)
08:00-10:00      7        7        7           10
10:00-12:00      5        5        5           1/2
12:00-14:00      10       35       60          2
14:00-18:00      5        5        5           1/2
18:00-20:00      10       35       60          2
20:00-24:00      5        5        5           10
24:00-08:00      0        0        0           0

TABLE II
AVERAGE PERFORMANCE COMPARISON UNDER THREE TRAFFIC PATTERNS

Scenario    VSS       (Φ3−Φ2)/Φ2    (Φ4−Φ2)/Φ2
1           0.1178    0.19%         3.50%
2           0.1319    0.28%         4.46%
3           0.1536    0.38%         5.82%

In this subsection, we evaluate the average performance of the online ELF algorithm under three different traffic patterns, i.e., light, moderate, and heavy traffic. In particular, the system time is set to 24 hours, and each time slot lasts 10 minutes. The PEV arrivals follow a Poisson distribution and the parking time of each PEV follows an exponential distribution [25]–[27]. The mean arrival rates and parking durations of the three traffic patterns are listed in Table I. The main difference lies in the arrival rates at the two peak hours, i.e., 12:00 to 14:00 and 18:00 to 20:00. The settings of the peak hours match the realistic vehicle trips in the National Household Travel Survey (NHTS) 2009 [28]. Specially, the average numbers of total PEVs simulated in scenarios 1, 2, and 3 are 104, 204, and 304, respectively. We choose the base load profile of one day in the service area of South California Edison from [7]. Each PEV's charging demand is uniformly chosen from [25, 35] kWh. Each point in Fig. 1, Fig. 2, and Table II is an average of 10^5 independent instances of the scenarios listed in Table I. In Fig. 2, the total loads s_k^* are plotted over time. We notice that the total load of the online ELF algorithm follows closely that of the optimal online algorithm, whereas that of the AVG algorithm has a larger gap from the optimal online algorithm. The average costs normalized by that of the optimal online algorithm are plotted in Fig. 1. Moreover, the VSS and the relative performance losses are listed in Table II. Both the figure and the table show that ELF performs very close to the optimal online algorithm. The VSS and the relative performance loss are no more than 0.1536 and 0.38%, respectively. In contrast, the relative performance loss of the AVG algorithm is up to 5.82%,
which is more than 15 times that of the ELF algorithm. The relative performance loss of an approximate online algorithm reflects the percentage of extra cost compared with the optimal online algorithm; the smaller the loss, the better. For example, from the report of the Rocky Mountain Institute, the average electricity rate is 11.2 cents/kWh, and the average load for a charging station is 100 kW [29]. Then, the expected electricity cost of a charging station for one year is $967,680. A 6% relative performance loss means that the AVG algorithm leads to $58,060 of extra cost, while the proposed online algorithm with a 0.38% relative performance loss leads to $3,677 of extra cost, i.e., an annual saving of $54,383 compared with the AVG algorithm.
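The dollar figures above follow directly from the quoted percentages; a quick check (rounded as in the text):

```python
# A quick check of the savings arithmetic quoted above.
annual_cost = 967680.0                    # expected yearly electricity cost ($)
print(0.06 * annual_cost)                 # ~58060.8  -> "$58,060" extra for AVG
print(0.0038 * annual_cost)               # ~3677.2   -> "$3,677" extra for ELF
print((0.06 - 0.0038) * annual_cost)      # ~54383.6  -> "$54,383" saved
```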
Fig. 1. Normalized costs of the three algorithms in the three scenarios (optimal online: 1 in all scenarios; online ELF: 1.0019, 1.0028, 1.0038; online AVG: 1.035, 1.0446, 1.0582).

Fig. 2. Base load and total load of the three algorithms in the three scenarios: (a) Scenario 1: light traffic; (b) Scenario 2: moderate traffic; (c) Scenario 3: heavy traffic.

Fig. 3. CPU computational time over the system time.

B. Complexity of the Online ELF Algorithm

In this subsection, we verify the computational complexity of the online ELF algorithm and compare it with that of the optimal online algorithm and the online AVG algorithm. We still adopt the SAA method as the optimal online algorithm. Since the complexity of SAA is too high, truncation is often adopted in SAA to reduce the complexity at the cost of some performance loss [8]. As such, we also simulate the complexity of truncated SAA with a truncation period of 3 hours. Each PEV's charging demand is uniformly chosen from [25, 35] kWh, the arrivals follow a Poisson distribution, and the parking time of each
PEV follows an exponential distribution, where the arrival rates and mean parking durations are the same as those in the peak hours of scenario 2 in Section V.A. We simulate 10 cases in total, where the system times are set to 1, 2, \cdots, 10 hours. For each algorithm, we record the CPU computational time at each time stage and calculate the average CPU computational time as the sum of the CPU computational times of all time stages divided by the number of time stages. Each point in Fig. 3 is an average of 100 independent instances. Fig. 3 shows that the CPU computational times of both the online algorithms ELF and AVG grow almost linearly with the system time. In contrast, for the SAA-based online algorithm with or without truncation, the average CPU computational time grows very quickly as the system time increases. We notice that when the system time increases to 4 hours, the optimal online algorithm without truncation consumes more than 2 hours and the optimal online algorithm with truncation consumes more than 30 minutes, while the proposed online algorithm ELF only takes several seconds. It is foreseeable that the computational cost of the optimal online algorithm will become extremely expensive as we further increase the system time.
C. Performance Comparison with Online Algorithm ORCHARD [5]
In this section, we compare the proposed online algorithm ELF with the online algorithm ORCHARD proposed in reference [5] in terms of optimality and computational complexity.

Fig. 4. Load comparison of ORCHARD and ELF.

First, we evaluate the average performance of the proposed online algorithm and the online algorithm ORCHARD. To facilitate the comparison, we also adopt the average performance of the optimal online algorithm as a benchmark. For the system parameters, we use the default settings of scenario 1 in Section V.A. We simulate 10^5 cases and plot the total load (the sum of the base load and the PEV charging load) over time for the three online algorithms in Fig. 4. In addition, the average performance ratios normalized against the optimal online solution are shown in Fig. 5. Fig. 4 shows that, compared with the online algorithm ELF, the online algorithm ORCHARD always produces a larger gap from the optimal online solution. From Fig. 5, we can see that the proposed online algorithm ELF achieves a much lower expected cost than the online algorithm ORCHARD, which indicates that the online algorithm ELF has a better average performance than the online algorithm ORCHARD.
To compare the computational complexity of the online algorithms ORCHARD and ELF, we adopt a case study similar to that in Section V.D of reference [5]. Specifically, we simulate the CPU computational time of the online algorithms by varying the arrival rates of the PEVs during one day. For the system parameters, we use the same settings as scenario 1 in Section V.A except for the arrival rates, which are assumed to be the same during 8:00−18:00 and 0 after 18:00. We vary the arrival rate in 8:00−18:00 from 10 to 50 (PEVs/hour), so that the average number of total PEVs during one day varies from 100 to 500. For each specified average number of PEVs, we simulate the average performance of 10^8 independent instances for the online ELF algorithm, the optimal online algorithm, and the online ORCHARD algorithm, respectively, and record the average CPU computational times of each case for the three algorithms. The results are plotted in Fig. 6. Fig. 6 shows that the average CPU computational time of the optimal online algorithm grows quickly as the number of total PEVs increases, while the average CPU computational times of the online ELF algorithm and the online ORCHARD algorithm grow slowly as the number of total PEVs increases. When the number of PEVs is 200, the average CPU computational time of the optimal online algorithm without truncation is more than 24 hours. Even for the optimal online algorithm with truncation, the average CPU computational time is about 100 minutes, whereas the proposed online algorithm ELF only takes about 4 minutes. Fig. 6 also indicates that the computational times of online ELF and online ORCHARD remain similar as the number of PEVs increases.

As a conclusion, the case study shows that the online algorithm ELF has a better average performance than the online algorithm ORCHARD, and that the CPU computational times of online ELF and online ORCHARD are similar.

Fig. 5. Average performance ratios of online algorithms ORCHARD and ELF, normalized against the optimal online solution (optimal online: 1; online ELF: 1.0019; online ORCHARD: 1.73).

Fig. 6. CPU computational time over the number of PEVs.

VI. CONCLUSIONS

In this paper, we formulate the optimal PEV charging scheduling problem as a finite-horizon dynamic programming problem. Instead of adopting standard numerical methods with high complexity, we provide an MPC-based online algorithm with O(T^3) complexity. We rigorously analyze the performance gap between the solution of the MPC-based approach and the optimal solution for any distribution of the exogenous random variables. Moreover, we show that the proposed algorithm can be made scalable under a first-order periodic process of load demand. Our analyses are validated through extensive simulations.

APPENDIX

A. Proof of Theorem 1:

We use an inductive argument to show that, through EDF scheduling, all the PEVs can be fully charged before their deadlines. For n = 1, (3) becomes

\sum_{i \in \{i \mid t_i^{(s)} = 1\}} d_i \geq s_1 \geq \sum_{i \in \{i \mid t_i^{(e)} = 1\}} d_i.   (34)

Thus, by EDF scheduling, we can first satisfy the demand of the PEVs whose deadline is at time stage 1. That is, for any PEV i ∈ \{i \mid t_i^{(e)} = 1\}, we set

x_{i1} = d_i.   (35)
Assuming that for all time stages m, EDF scheduling can fully charge all the PEVs which depart at or before time stage m, i.e., there exists at least one set of x_it's that satisfies

\sum_{t=t_i^{(s)}}^{t_i^{(e)}} x_{it} = d_i, \quad \forall i \in \{i \mid t_i^{(e)} \leq m\},   (36a)

x_{it} \geq 0, \quad \forall t = t_i^{(s)}, \cdots, t_i^{(e)}, \; \forall i \in \{i \mid t_i^{(e)} \leq m\}.   (36b)

Since

\sum_{t=1}^{m} s_t \geq \sum_{t=1}^{m} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i,   (37)

the quantity \sum_{t=1}^{m} s_t - \sum_{t=1}^{m} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i represents the amount of power output by the charging station during time stages 1, \cdots, m and charged to the PEVs with deadlines after time stage m. By EDF scheduling, once the PEVs which depart at time m have been fully charged, we first charge the PEVs which depart at time stage m+1. Thus, if

\sum_{t=1}^{m} s_t - \sum_{t=1}^{m} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i \geq \sum_{i \in \{i \mid t_i^{(e)} = m+1\}} d_i,   (38)

we finish charging the PEVs with deadline m+1, and then go on to charge the PEVs with deadline m+2. If

\sum_{t=1}^{m} s_t - \sum_{t=1}^{m} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i < \sum_{i \in \{i \mid t_i^{(e)} = m+1\}} d_i,   (39)

then the PEVs with deadline m+1 have been charged with the amount of power \sum_{t=1}^{m} s_t - \sum_{t=1}^{m} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i by time stage m. Since

\sum_{t=1}^{m+1} s_t \geq \sum_{t=1}^{m+1} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i,   (40)

then

s_{m+1} \geq \sum_{i \in \{i \mid t_i^{(e)} = m+1\}} d_i - \Big( \sum_{t=1}^{m} s_t - \sum_{t=1}^{m} \sum_{i \in \{i \mid t_i^{(e)} = t\}} d_i \Big),   (41)

which means that all the PEVs with deadline m+1 can be fully charged. This is because we charge the PEVs with deadline m+1 first under EDF scheduling. Thus, there exists at least one set of x_it's that satisfies

\sum_{t=t_i^{(s)}}^{t_i^{(e)}} x_{it} = d_i, \quad \forall i \in \{i \mid t_i^{(e)} = m+1\},   (42a)

x_{i,m+1} \geq 0, \quad \forall i \in \{i \mid t_i^{(e)} = m+1\}.   (42b)

Combining (36) and (42), we get that all the PEVs whose deadlines are at or before stage m+1 can be fully charged, i.e., there exists at least one set of x_it's that satisfies

\sum_{t=t_i^{(s)}}^{t_i^{(e)}} x_{it} = d_i, \quad \forall i \in \{i \mid t_i^{(e)} \leq m+1\},   (43a)

x_{it} \geq 0, \quad \forall t = t_i^{(s)}, \cdots, t_i^{(e)}, \; \forall i \in \{i \mid t_i^{(e)} \leq m+1\}.   (43b)

Therefore, we can conclude that by EDF scheduling, there always exists at least one set of x_it's that is feasible to (2). This completes the proof.

B. Proof of Theorem 2:

First, we show that if there exists a PEV parking in the station at both times t_1 and t_2, i.e.,

t_1, t_2 \in \{t_i^{(s)}, \cdots, t_i^{(e)}\},   (44)

and

x_{it_1}^* \geq 0, \quad x_{it_2}^* > 0,   (45)

then the optimal total loads at times t_1 and t_2 must satisfy

s_{t_1}^* + l_{t_1} \geq s_{t_2}^* + l_{t_2}.   (46)

The Karush-Kuhn-Tucker (KKT) conditions of the convex problem (2) are

f'\Big( \sum_{i \in I(t)} x_{it} + l_t \Big) - \lambda_i - \omega_{it} = 0, \quad i \in N, \; t = t_i^{(s)}, \cdots, t_i^{(e)},   (47a)

\lambda_i \Big( d_i - \sum_{t=t_i^{(s)}}^{t_i^{(e)}} x_{it} \Big) = 0, \quad i \in N,   (47b)

\omega_{it} x_{it} = 0, \quad i \in N, \; t = t_i^{(s)}, \cdots, t_i^{(e)},   (47c)

where λ, ω are the non-negative optimal Lagrangian multipliers corresponding to (2b) and (2c), respectively. We separate our analysis into the following two cases:

1) If x_{it_1}^* = 0 for a particular PEV i at a time slot t_1 ∈ \{t_i^{(s)}, \cdots, t_i^{(e)}\}, then, by complementary slackness, we have ω_{it_1} > 0. From (47a),

f'(s_{t_1} + l_{t_1}) = \lambda_i + \omega_{it_1}.   (48)

2) If x_{it_2}^* > 0 for PEV i during a time slot t_2 ∈ \{t_i^{(s)}, \cdots, t_i^{(e)}\}, we can infer from (47c) that ω_{it_2} = 0. Then,

f'(s_{t_2} + l_{t_2}) = \lambda_i.   (49)

On the other hand, since f(s_t + l_t) is a strictly convex function of s_t + l_t, f'(s_t + l_t) is an increasing function. From the above discussion, we get the following two conclusions:

1) If x_{it_1}^* > 0 and x_{it_2}^* > 0, then by (49),

f'(s_{t_1} + l_{t_1}) = f'(s_{t_2} + l_{t_2}) = \lambda_i.   (50)

Due to the monotonicity of f', we have s_{t_1}^* + l_{t_1} = s_{t_2}^* + l_{t_2}.

2) If x_{it_1}^* = 0 and x_{it_2}^* > 0, then by (48) and (49),

f'(s_{t_1} + l_{t_1}) = \lambda_i + \omega_{it_1} > f'(s_{t_2} + l_{t_2}) = \lambda_i.   (51)

Since f' is an increasing function, we have s_{t_1}^* + l_{t_1} \geq s_{t_2}^* + l_{t_2}.

Consider two functions \hat{f}(s_t + l_t) and \bar{f}(s_t + l_t). Let \hat{x}_{it}^* and \bar{x}_{it}^* denote the optimal solutions to (2) with f(s_t + l_t) replaced by \hat{f}(s_t + l_t) and \bar{f}(s_t + l_t), respectively. Define \hat{s}_t^*, \bar{s}_t^* as

\hat{s}_t^* = \sum_{i \in I(t)} \hat{x}_{it}^*, \quad \bar{s}_t^* = \sum_{i \in I(t)} \bar{x}_{it}^*, \quad t = 1, \cdots, T,   (52)

respectively. Suppose that there exists a time slot t_1 such that

\hat{s}_{t_1}^* < \bar{s}_{t_1}^*.   (53)

Since

\sum_{t=1}^{T} \hat{s}_t^* = \sum_{t=1}^{T} \bar{s}_t^* = \sum_{i \in N} d_i,   (54)

there must exist another time slot t_2 such that

\hat{s}_{t_2}^* > \bar{s}_{t_2}^*   (55)

and

\hat{s}_{t_1}^* + \hat{s}_{t_2}^* = \bar{s}_{t_1}^* + \bar{s}_{t_2}^*.   (56)

Thus, we can find a PEV i ∈ N such that

\hat{x}_{it_1}^* < \bar{x}_{it_1}^*, \quad \hat{x}_{it_2}^* > \bar{x}_{it_2}^*.   (57)

As a result,

\hat{x}_{it_2}^* > 0   (58)

and \bar{x}_{it_2}^* \geq 0. Based on (46), there is

\hat{s}_{t_2}^* + l_{t_2} \leq \hat{s}_{t_1}^* + l_{t_1}.   (59)

Combining (53), (56), and (59), we get

\bar{s}_{t_2}^* + l_{t_2} < \hat{s}_{t_2}^* + l_{t_2} \leq \hat{s}_{t_1}^* + l_{t_1} < \bar{s}_{t_1}^* + l_{t_1}.   (60)

Since \bar{f}(s_t + l_t) is a strictly convex function of s_t + l_t, then, based on (56) and (60), we have

\bar{f}(\bar{s}_{t_1}^* + l_{t_1}) + \bar{f}(\bar{s}_{t_2}^* + l_{t_2}) > \bar{f}(\hat{s}_{t_1}^* + l_{t_1}) + \bar{f}(\hat{s}_{t_2}^* + l_{t_2}).   (61)

This contradicts the fact that \bar{s}_t^* is the optimal total charging rate for the objective function \bar{f}(s_t + l_t). Therefore, the optimal charging solution s_t^* is the same for any strictly convex function f(s_t + l_t).

Next, we show that the optimal solution s_t^* is a load flattening solution that minimizes \sum_{t=1}^{T} \big( s_t + l_t - \frac{1}{T}\sum_{t=1}^{T}(s_t + l_t) \big)^2 subject to (4b) and (4c). Based on the argument that s_t^* is the same for any strictly convex function f(s_t + l_t), it is equivalent to show that \sum_{t=1}^{T} \big( s_t + l_t - \frac{1}{T}\sum_{t=1}^{T}(s_t + l_t) \big)^2 is a strictly convex function of s_t + l_t. Since

\frac{\sum_{t=1}^{T} (s_t + l_t)}{T} = \frac{\sum_{i \in N} d_i + \sum_{t=1}^{T} l_t}{T},   (62)

the term \frac{1}{T}\sum_{t=1}^{T}(s_t + l_t) is a constant. Then, we see that \sum_{t=1}^{T} \big( s_t + l_t - \frac{1}{T}\sum_{t=1}^{T}(s_t + l_t) \big)^2 is a strictly convex function of s_t + l_t. This completes the proof.

C. Proof of Proposition 1:

First, we show that Ψ1(Ξ) is a convex function of Ξ. For any Ξ, define s_t^*(Ξ) as the optimal solution that minimizes Ψ1(Ξ) subject to (4b)-(4c). For an arbitrary pair of Ξ' and Ξ'', let Ξ''' = λΞ' + (1−λ)Ξ'' for all λ ∈ [0, 1]. Then, the charging schedule s_t(Ξ''') such that s_t(Ξ''') = λ s_t^*(Ξ') + (1−λ) s_t^*(Ξ'') still satisfies (4b)-(4c) with Ξ = Ξ''' due to the linearity of the constraints. Based on the convexity of f(s_t + l_t), we have

\sum_{t=1}^{T} f(s_t(Ξ''') + l_t) \leq \lambda \sum_{t=1}^{T} f(s_t^*(Ξ') + l_t) + (1-\lambda) \sum_{t=1}^{T} f(s_t^*(Ξ'') + l_t)   (63)

for all λ ∈ [0, 1]. On the other hand, let s_t^*(Ξ''') be the optimal solution that minimizes \sum_{t=1}^{T} f(s_t + l_t) subject to (4b)-(4c) with Ξ = Ξ'''. Then,

\sum_{t=1}^{T} f(s_t^*(Ξ''') + l_t) \leq \sum_{t=1}^{T} f(s_t(Ξ''') + l_t).   (64)

Combining (63) and (64), we have

\sum_{t=1}^{T} f(s_t^*(Ξ''') + l_t) \leq \lambda \sum_{t=1}^{T} f(s_t^*(Ξ') + l_t) + (1-\lambda) \sum_{t=1}^{T} f(s_t^*(Ξ'') + l_t).   (65)

Thus, we have established the convexity of Ψ1(Ξ) over the set of Ξ. Therefore, we have

E[Ψ1(Ξ)] ≥ Ψ1(E[Ξ]).   (66)

On the other hand, based on the definitions of s_t, \tilde{d}_t^i, and \mu_t^j, i = 1, \cdots, e_1, j = t, \cdots, e_t, we have \sum_{t=1}^{T} s_t = \sum_{t=1}^{e_1} \tilde{d}_t^1 + \sum_{t=2}^{T} \sum_{j=t}^{e_t} \mu_j^t. Then, by Jensen's inequality [30],

Ψ1(E[Ξ]) ≥ T f\Big( \frac{\sum_{t=1}^{T} s_t + \sum_{t=1}^{T} \nu_t}{T} \Big)   (67a)

= T f\Big( \frac{\sum_{t=1}^{e_1} \tilde{d}_t^1 + \sum_{t=2}^{T} \sum_{j=t}^{e_t} \mu_j^t + \sum_{t=1}^{T} \nu_t}{T} \Big).   (67b)

This completes the proof.

D. Proof of Proposition 2:

From the right side of (10b), s_t \leq \sum_{n=t}^{T} \tilde{d}_n^t holds for all stages t. On the other hand, we have

\sum_{n=t}^{T} \tilde{d}_n^t \leq \sum_{n=t-1}^{T} \tilde{d}_n^{t-1} - \tilde{d}_{t-1}^{t-1} + \sum_{n=t}^{e_t} \eta_n^t \leq \cdots = \sum_{m \in M} \sum_{n=t}^{e_m} \eta_n^m,   (68)

where the first inequality holds since s_{t-1} \geq \tilde{d}_{t-1}^{t-1}, and M = \{m \mid e_m \geq t, m = 1, \cdots, t\}. Thus, we have s_t \leq \sum_{(m,n) \in O(t)} \eta_n^m, where O(t) is a bounded set for t = 1, \cdots, T. Therefore, E\big[ \sum_{t=1}^{T} f\big( \sum_{(m,n) \in O(t)} \eta_n^m + \iota_t \big) \big] is an upper bound of Φ3. This completes the proof.

E. Proof of Lemma 1:

We provide the proof by discussing the following two cases:

1) If j̄ ≥ ī + p, which means that [ī, j̄] and [ī+p, j̄+p] overlap with each other, then the density of interval [ī, j̄+p] is higher than that of [ī, j̄], and the density of [ī, j̄+2p] is higher than that of [ī, j̄+p], and so on. Finally, we see that the interval [ī, j̄+(r−1)p] has the maximum density over region Π_3. Thus, we have î_3 = ī, ĵ_3 = j̄+(r−1)p, and

Ẑ = \frac{\sum_{n=\bar{i}}^{\bar{j}+(r-1)p} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\bar{j} + (r-1)p - \bar{i} + 1}.   (69)

Likewise, for the region \{i, j \mid i = k, j = k+ê+1, \cdots, T\}, we have ĵ_2 = j̄+(r−1)p, and

Ŷ = \frac{\sum_{n=k}^{\bar{j}+(r-1)p} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\bar{j} + (r-1)p - k + 1}.   (70)

2) If j̄ < ī + p, then the density of interval [ī, j̄] is higher than that of [ī, j̄+p], and the density of [ī, j̄+p] is higher than that of [ī, j̄+2p], and so on. Finally, we see that the interval [ī, j̄] has the maximum density over region Π_3. Thus, we have î_3 = ī, ĵ_3 = j̄, and the corresponding maximum density is

Ẑ = \frac{\sum_{n=\bar{i}}^{\bar{j}} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\bar{j} - \bar{i} + 1}.   (71)

For the region Π_2, if j̄ ≤ k+ê+1, then

Ŷ = \frac{\sum_{n=k}^{k+\hat{e}+1} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\hat{e} + 2}.   (72)

If j̄ > k+ê+1, then

Ŷ = \frac{\sum_{n=k}^{\bar{j}} \big( \sum_{m=n}^{k+e_n} \mu_m^n + \nu_n \big)}{\bar{j} - k + 1}.   (73)

This completes the proof.

F. Proof of Lemma 2:

Let ρ(i, j) denote the density of the interval [i, j]. For any i = k and j = k+ē+1, \cdots, T, the density of interval [i, j] is given by

ρ(i, j) = \frac{\sum_{t=1}^{j-k} (j-k+1-t)\mu_t + \sum_{t=k}^{k+\bar{e}} \tilde{d}_t^k + l_k - \nu}{j - k + 1} + \nu.   (74)

To prove that the maximum density is achieved by setting j = T, we only need to show that ρ(i, j) is a non-decreasing function of j for each given i, i.e.,

ρ(i, j) ≤ ρ(i, j+1), \quad \forall k+\bar{e}+1 \leq j \leq T-1.   (75)

Since

\frac{\sum_{t=1}^{j-k} (j-k+1-t)\mu_t + \sum_{t=k}^{k+\bar{e}} \tilde{d}_t^k}{j-k} \leq \sum_{t=1}^{j-k+1} \mu_t,   (76)

we have

ρ(i, j+1) = \frac{\sum_{t=1}^{j-k} (j-k+1-t)\mu_t + \sum_{t=k}^{k+\bar{e}} \tilde{d}_t^k + \sum_{t=1}^{j-k+1} \mu_t + l_k - \nu}{j-k+1} + \nu \geq \frac{\sum_{t=1}^{j-k} (j-k+1-t)\mu_t + \sum_{t=k}^{k+\bar{e}} \tilde{d}_t^k + l_k - \nu}{j-k} + \nu = ρ(i, j),   (77)

which implies (75). Hence, Y is the maximum density of [k, j], j = k+ē+1, \cdots, T. Next, we show that Z is the maximum density of [k+1, T]. For any k+1 ≤ i ≤ j ≤ T, the density of interval [i, j] is given by

ρ(i, j) = \frac{\sum_{m=1}^{j-i+1} \sum_{t=1}^{j-i+2-m} \mu_t}{j-i+1} + \nu = \frac{\sum_{t=1}^{j-i+1} (j-i+2-t)\mu_t}{j-i+1} + \nu.   (78)

To prove that the maximum density is achieved by setting i = k+1 and j = T, we only need to show that ρ(i, j) is a non-decreasing function of j for each given i, i.e.,

ρ(i, j) ≤ ρ(i, j+1), \quad \forall k+1 \leq i \leq j \leq T-1,   (79)

and a non-increasing function of i for each given j, i.e.,

ρ(i, j) ≥ ρ(i+1, j), \quad \forall k+1 \leq i+1 \leq j \leq T.   (80)

On one hand, since

\frac{\sum_{t=1}^{j-i+1} (j-i+2-t)\mu_t}{j-i+1} \leq \sum_{t=1}^{j-i+2} \mu_t, \quad \forall k+1 \leq i \leq j \leq T,   (81)

we have

ρ(i, j+1) = \frac{\sum_{t=1}^{j-i+1} (j-i+2-t)\mu_t + \sum_{t=1}^{j-i+2} \mu_t}{j-i+2} + \nu \geq \frac{\sum_{t=1}^{j-i+1} (j-i+2-t)\mu_t}{j-i+1} + \nu = ρ(i, j),   (82)

which implies (79). On the other hand, as

\frac{\sum_{t=1}^{j-i} (j-i+1-t)\mu_t}{j-i} \leq \sum_{t=1}^{j-i+1} \mu_t, \quad \forall k+1 \leq i \leq j \leq T,   (83)

then

ρ(i+1, j) = \frac{\sum_{t=1}^{j-i} (j-i+1-t)\mu_t}{j-i} + \nu \leq \frac{\sum_{t=1}^{j-i} (j-i+1-t)\mu_t + \sum_{t=1}^{j-i+1} \mu_t}{j-i+1} + \nu = ρ(i, j),   (84)

which implies (80). This completes the proof.

REFERENCES
[1] W. Tang and Y. J. Zhang, “Online electric vehicle charging control
with multistage stochastic programming,” in IEEE 48th Annu. Conf.
Information Sciences and Systems (CISS), pp. 1-6, Mar. 2014.
[2] J. A. P. Lopes, F. J. Soares, and P. M. R. Almeida, “Integration of electric
vehicles in the electric power system,” Proc. of the IEEE, vol. 99, no. 1,
pp. 168-183, 2011.
[3] E. Sortomme, M. M. Hindi, S. D. J. MacPherson, and S. S. Venkata,
“Coordinated charging of plug-in hybrid electric vehicles to minimize
distribution system losses,” IEEE Trans. on Smart Grid, vol.2, no.1, pp.
198-205, 2011.
[4] Z. Ma, D. Callaway, and I. Hiskens, “Decentralized charging control of
large populations of plug-in electric vehicles,” IEEE Trans. on Control
Systems Technology, vol. 21, no. 1, pp. 67-78, 2013.
[5] W. Tang, S. Bi, and Y. J. Zhang, “Online coordinated charging decision
algorithm for electric vehicles without future information,” IEEE Trans.
on Smart Grid, vol. 5, no. 6, pp. 2810 - 2824, 2014.
[6] Y. He, B. Venkatesh, and L. Guan, “Optimal scheduling for charging and
discharging of electric vehicles,” IEEE Trans. on Smart Grid, vol. 3, no.
3, pp. 1095-1105, 2012.
[7] L. Gan, U. Topcu, and S. H. Low, “Optimal decentralized protocol for
electric vehicle charging,” IEEE Trans. on Power System, vol.28, iss. 2,
pp. 940-951, 2012.
[8] J. R. Birge and F. Louveaux, Introduction to Stochastic Programming,
New York: Springer, 1997.
[9] B. Defourny, D. Ernst, and L. Wehenkel, “Multistage stochastic programming: a scenario tree based approach to planning under uncertainty,” Decision Theory Models for Applications in Artificial Intelligence: Concepts
and Solutions, 2011.
[10] F. Maggioni and S. Wallace, “Analyzing the quality of the expected value
solution in stochastic programming,” Annals of Operations Research, pp.
37-54, 2012.
[11] R. Leou, C. Su, and C. Lu, “Stochastic analyses of electric vehicle
charging impacts on distribution network,” IEEE Trans. on Power System,
vol. 29, no. 3, pp. 1055-1063, May 2014.
[12] L. Rao and J. Yao, “SmartCar: smart charging and driving control
for electric vehicles in the smart grid,” IEEE Global Communications
Conference (GLOBECOM), pp. 2709-2714, Dec. 2014.
[13] N. Chen, L. Gan, S. H. Low, and A. Wierman, “Distributional analysis
for model predictive deferrable load control,” 53rd IEEE Conference on
Decision and Control (CDC), pp. 6433-6438, Dec. 2014.
[14] L. Gan, A. Wierman, U. Topcu, N. Chen, and S. H. Low, “Real-time deferrable load control: handling the uncertainties of renewable generation,”
in Proceedings of the fourth international conference on Future energy
systems (ACM e-Energy) , pp. 113-124, May 2013.
[15] S. Bansal, M. N. Zeilinger, and C. J. Tomlin, “Plug-and-play model
predictive control for electric vehicle charging and voltage control in
smart grids,” 53rd IEEE Conference on Decision and Control (CDC), pp.
5894-5900, 2014.
[16] G. Li and X. Zhang, “Modeling of Plug-in Hybrid Electric Vehicle
Charging Demand in Probabilistic Power Flow Calculations,” IEEE Trans.
on Vehicular Technology, vol. 63, no. 6, pp. 2600-2612, 2014.
[17] T. Zhang, W. Chen, Z. Han, and Z. Cao, “Charging scheduling of electric
vehicles with local renewable energy under uncertain electric vehicle
arrival and grid power price,” IEEE Trans. on Smart Grid, vol. 3, no.
1, pp. 492-499, 2012.
[18] I. Koutsopoulos, V. Hatzi, and L. Tassiulas, “Optimal energy storage
control policies for the smart power grid,” in Proc. of IEEE International
Conference on Smart Grid Communications (SmartGridComm), pp. 475-480, 2011.
[19] L. Huang, J. Walrand, and K. Ramchandran, “Optimal demand response with energy storage management,” http://arxiv.org/pdf/1205.4297.pdf, 2012.
[20] W. Feller, An introduction to probability theory and its applications,
John Wiley & Sons, 2008.
[21] Y. Ye, Interior Point Algorithms: Theory and Analysis, Wiley-Interscience Press, 1997.
[22] A. Shapiro, D. Dentcheva, and A. Ruszczynski, Lectures on Stochastic
Programming: Modeling and Theory, MPS-SIAM, Philadelphia, 2009.
[23] F. Yao, A. Demers, and S. Shenker, “A scheduling model for reduced
cpu energy,” in Proc. IEEE Symp. Foundations of Computer Science, pp.
374-382, 1995.
[24] N. Bansal, T. Kimbrel, and K. Pruhs, “Speed scaling to manage energy
and temperature,” Journal of the ACM (JACM), vol. 54, no. 1, pp. 1-39,
2007.
[25] M. Alizadeh, A. Scaglione, J. Davies, and K. S. Kurani, “A scalable
stochastic model for the electricity demand of electric and plug-in hybrid
vehicles,” IEEE Trans. on Smart Grid, vol. 5, no. 2, pp. 848-860, 2014.
[26] X. Zhang and S. Grijalva, “An Advanced Data Driven Model for
Residential Electric Vehicle Charging Demand,” technique report, Georgia
Institute of Technology, 2015.
[27] S. Chen and L. Tong, “iEMS for large scale charging of electric vehicles
architecture and optimal online scheduling,” in Proc. IEEE Int. Conf.
Smart Grid Commun. (SmartGridComm), pp. 629-634, Nov. 2012.
[28] A. Santos, A. N. McGuckin, H. Y. Nakamoto, D. Gray, and S. Lis,
Summary of travel trends: 2009 national household travel survey, Federal
Highway Administration, Washington, DC, 2011.
[29] James W. May and Matt Mattila, “Plugging In: A Stakeholder Investment Guide for Public Electric-Vehicle Charging Infrastructure,” Rocky
Mountain Institute, 2009.
[30] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
Constructions cachées en algèbre abstraite (3)
Dimension de Krull, Going Up, Going Down
Thierry Coquand (∗) Henri Lombardi (†),
arXiv:1712.04728v1 [] 13 Dec 2017
janvier 2001
Résumé
Nous rappelons des versions constructives de la théorie de la dimension de Krull dans
les anneaux commutatifs et dans les treillis distributifs, dont les bases ont été posées par
Joyal, Espanõl et les deux auteurs. Nous montrons sur les exemples de la dimension des
algèbres de présentation finie, du Going Up, du Going Down, . . .que cela nous permet de
donner une version constructive des grands théorèmes classiques, et par conséquent de
récupérer un contenu calculatoire explicite lorsque ces théorèmes abstraits sont utilisés
pour démontrer l’existence d’objets concrets. Nous pensons ainsi mettre en oeuvre une
réalisation partielle du programme de Hilbert pour l’algèbre abstraite classique.
MSC 2000 : 13C15, 03F65, 13A15, 13E05
Mots clés : Dimension de Krull, Going Up, Going Down, Mathématiques constructives.
Key words : Krull dimension, Going Up, Going Down, Constructive Mathematics.
∗
Chalmers, University of Göteborg, Sweden, email : coquand@cs.chalmers.se
Equipe de Mathématiques, CNRS UMR 6623, UFR des Sciences et Techniques, Université de FrancheComté, 25 030 BESANCON cedex, FRANCE, email : lombardi@math.univ-fcomte.fr
†
1
Table des matières
Introduction
3
1 Définition constructive de la dimension de Krull
1.1 Chaı̂nes idéales . . . . . . . . . . . . . . . . . . .
1.2 Collapsus simultanés . . . . . . . . . . . . . . . .
1.3 Suites pseudo régulières et dimension de Krull . .
1.4 Dimension de Krull et principe local-global . . . .
2 Treillis distributifs, relations implicatives et
2.1 Treillis distributifs, idéaux, filtres et spectre
2.2 Treillis distributifs et relations implicatives
2.3 Dimension de Krull des treillis distributifs .
des anneaux commutatifs
5
. . . . . . . . . . . . . . . . . 5
. . . . . . . . . . . . . . . . . 7
. . . . . . . . . . . . . . . . . 11
. . . . . . . . . . . . . . . . . 14
dimension
. . . . . . .
. . . . . . .
. . . . . . .
de
. .
. .
. .
Krull
16
. . . . . . . . . . . 16
. . . . . . . . . . . 20
. . . . . . . . . . . 22
3 Treillis de Zariski et de Krull dans un anneau commutatif
32
4 Going Up et Going Down
34
4.1 Dimension de Krull relative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2 Going Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Going Down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Bibliographie
45
2
Introduction
Nous rappelons des versions constructives de la théorie de la dimension de Krull dans les
anneaux commutatifs et dans les treillis distributifs, dont les bases ont été posées par Joyal,
Espanõl, et les deux auteurs ([7, 6, 12]). Nous montrons sur les exemples du Going Up, du
Going Down, de la dimension des variétés algébriques, que cela nous permet de donner une
version constructive des grands théorèmes classiques, et surtout de récupérer un contenu calculatoire explicite lorsque ces théorèmes abstraits sont utilisés pour démontrer l’existence d’objets
concrets. Nous pensons ainsi mettre en oeuvre une réalisation partielle du programme de Hilbert
pour l’algèbre abstraite classique.
Notre exposé est dans le style des mathématiques constructives à la Bishop (cf. en algèbre
[18]). Nous avons réduit au minimum l’appel aux notions de logique de manière à obtenir un
texte dont la forme ne rebute pas les algébristes. Lorsque nous disons que nous avons une
version constructive d’un théorème d’algèbre abstraite, c’est que nous avons un théorème dont
la preuve est constructive, dont la signification calculatoire est claire, et à partir duquel nous
pouvons récupérer le théorème classique correspondant par une utilisation immédiate d’un
principe non constructif bien répertorié. Un théorème abstrait classique peut avoir plusieurs
versions constructives intéressantes.
Dans le cas des théorèmes d’algèbre abstraite classique, un principe non constructif qui
permet de faire le travail en question est en général le théorème de complétude de Gödel, qui
permet de construire un modèle pour une théorie formelle cohérente. Lorsqu’il s’applique à
des structures algébriques de présentation énumérable (en un sens convenable) le théorème de
complétude de Gödel n’est autre qu’une reformulation du LLPO de Bishop (tout nombre réel
est ≥ 0 ou ≤ 0).
Notre volonté de ne pas utiliser le théorème de complétude de Gödel n’est pas au premier
chef un choix philosophique mais un choix pratique. L’usage du théorème de complétude de
Gödel conduit en effet à remplacer des preuves directes (cachées) par des preuves indirectes qui
ne sont au fond qu’un double renversement par l’absurde de la preuve directe, d’où la perte de
son contenu calculatoire. Par exemple dans la preuve abstraite du 17ème problème de Hilbert
on dit : si le polynome P n’était pas une somme de carrés de fractions rationnelles, on aurait
un corps K dans lequel on décèlerait une absurdité en lisant la preuve (constructive) que le
polynome est partout positif ou nul. La remise sur pied de cette preuve abstraite est de dire :
partons de la preuve (constructive) que le polynome est partout positif ou nul, et montrons
(par les arguments explicites présents dans les preuves classiques) que la tentative de construire
K échoue à coup sûr. Cela nous donne la somme de carrés que nous cherchions. Entretemps il
a fallu remplacer le théorème abstrait non constructif : “tout corps réel peut être ordonné” par
le théorème constructif : “dans un corps qui refuse toute tentative d’être ordonné, −1 est une
somme de carrés”. Et on passe du second au premier par le théorème de complétude de Gödel, tandis que la preuve constructive du second est cachée dans les manipulations algébriques
contenues dans la preuve classique du premier (cf. [11]).
Here is a quick outline of the paper.

Pseudo-regular sequences, Krull dimension of commutative rings

In section 1 we give "more readable" proofs of certain results contained in [12], which were proved there using the notion of dynamical algebraic structure. The central notion put to work is that of a partial specification of a chain of prime ideals. The abstract constructions of chains of prime ideals have their constructive counterpart in the form of a simultaneous collapse theorem (Theorem 1.14). We develop the notion of a pseudo-regular sequence (a slightly weakened variant of the notion of regular sequence), which makes it possible to characterize the Krull dimension constructively. We thus show that the notion of Krull dimension has an explicit computational content in terms of the existence (or not) of certain kinds of algebraic identities. This confirms the adage according to which commutative algebra admits a computational interpretation as a machinery for producing algebraic identities (some of the most famous ones are called Nullstellensätze). Finally we give a constructive version of the theorem asserting that the Krull dimension is the supremum of the dimensions of the localizations at the maximal ideals.
Distributive lattices, entailment relations and Krull dimension

In section 2 we develop the constructive theory of the Krull dimension of distributive lattices, on the one hand in the style of section 1, on the other hand in the style of Joyal's theory, and we connect the two points of view. We also connect the first point of view with the developments brought by Español to Joyal's theory. A very great simplification of the proofs and computations is obtained by systematically putting forward the notion of entailment relation, which has its origin in the cut rule of Gentzen's sequent calculus, with the fundamental theorem 2.10.
Zariski and Krull lattices of a commutative ring

In section 3 we introduce the Zariski lattice of a commutative ring (its elements are the radicals of finitely generated ideals), which is the constructive counterpart of the Zariski spectrum: the points of the latter are the prime ideals of the Zariski lattice, and the constructible subsets of the Zariski spectrum are the elements of the Boolean lattice generated by the Zariski lattice. Joyal's idea is to define the Krull dimension of a commutative ring as that of its Zariski lattice, which avoids any recourse to prime ideals. We then establish the equivalence between Joyal's (constructive) point of view and the (constructive) point of view given in section 1 for the Krull dimension of a commutative ring.
Going Up and Going Down

Section 4 develops the constructive theory of the famous going-up and going-down theorems for integral extensions, as well as that of going down for flat extensions.

We show that these apparently very abstract theorems (which assert the existence of certain prime ideals) have an entirely concrete meaning in terms of machinery for constructing algebraic identities. This concrete version implies the abstract version by a simple use of Gödel's completeness theorem. Most importantly, the proof of the concrete machinery is already at work, in a hidden way, in the abstract proof of the abstract theorem. For this deciphering we need to develop a constructive analogue of the relative Krull dimension of a ring extension. This constructive analogue, which at first sight seems a little strange (Definition 4.1), was not pulled out of a hat: it was supplied as a tool that imposes itself in the deciphering of the classical proofs.

Note that the constructive theorems established in this article concerning the dimension of polynomial rings, the dimension of finitely presented algebras over a field, going up and going down are new (they could not be obtained within Joyal's theory as long as it confined itself to speaking of the Zariski lattice, without looking more closely at the computations at stake in terms of algebraic identities).
Conclusion

This article constitutes a confirmation of the concrete possibility of carrying out Hilbert's program for large portions of abstract algebra (cf. [1, 4, 8, 9, 10, 11, 12, 13, 14, 15]). The general idea is to replace the ideal abstract structures by partial specifications of these structures. The pretty and very short abstract proof which uses the ideal objects has a purely computational counterpart at the level of the partial specifications of these ideal objects. Most theorems of abstract algebra, in whose proofs the axiom of choice and the principle of excluded middle seem to present a serious obstacle to an explicit interpretation in terms of algebraic computations, would thus have a constructive version, from which the use of Gödel's completeness theorem would supply the abstract classical version. More importantly still, the ideal abstract proof would always contain, in a more or less hidden way, the constructive proof of the corresponding constructive theorem.

Finally, let us point out that a constructive treatment of Krull's principal ideal theorem is given in [3].
1 Constructive definition of the Krull dimension of commutative rings

Given a commutative ring A, we write ⟨J⟩, or ⟨J⟩_A if there is any ambiguity, for the ideal of A generated by the subset J ⊆ A. We write M(U) for the monoid (1) generated by the subset U ⊆ A.
1.1 Idealistic chains
Definition 1.1 In a commutative ring A:
— A partial specification for a prime ideal (which we abbreviate to idealistic prime) is a pair P = (J, U) of subsets of A.
— An idealistic prime P = (J, U) is said to be complete if J is an ideal, U is a monoid, and J + U = U.
— Let P1 = (J1, U1) and P2 = (J2, U2) be two idealistic primes. We say that P1 is contained in P2, written P1 ⊆ P2, if J1 ⊆ J2 and U2 ⊆ U1. We say that P2 is a refinement of P1, written P1 ≤ P2, if J1 ⊆ J2 and U1 ⊆ U2.
— A partial specification for a chain of prime ideals (which we abbreviate to idealistic chain) is defined as follows. An idealistic chain of length ℓ is a list of ℓ + 1 idealistic primes: C = (P0, ..., P_ℓ) (P_i = (J_i, U_i)). We write C^(i) for P_i. The idealistic chain is said to be finite if all the subsets J_i and U_i are finite.
— An idealistic chain C = (P0, ..., P_ℓ) is said to be complete if the idealistic primes P_i are complete and the inclusions P_i ⊆ P_{i+1} (i = 0, ..., ℓ − 1) hold.
— Given two idealistic chains of length ℓ, C = (P0, ..., P_ℓ) and C′ = (P0′, ..., P_ℓ′), we say that C′ is a refinement of C, written C ≤ C′, if P_i ≤ P_i′ (i = 0, ..., ℓ).

We view an idealistic chain C of length ℓ in A as a partial specification for an increasing chain of prime ideals (in the usual sense) P0, ..., P_ℓ satisfying C ≤ (Q0, ..., Q_ℓ), where Q_i = (P_i, A \ P_i) (i = 0, ..., ℓ).
Fact 1.2 Every idealistic chain C = ((J0, U0), ..., (J_ℓ, U_ℓ)) generates a minimal complete idealistic chain

C′ = ((I0, V0), ..., (I_ℓ, V_ℓ))

defined by I0 = ⟨J0⟩, I1 = ⟨J0 ∪ J1⟩, ..., I_ℓ = ⟨J0 ∪ ⋯ ∪ J_ℓ⟩, U_i′ = M(U_i) (i = 0, ..., ℓ), and V_ℓ = U_ℓ′ + I_ℓ, V_{ℓ−1} = U_{ℓ−1}′ V_ℓ + I_{ℓ−1}, ..., V0 = U0′ V1 + I0 = U0′ (U1′ (⋯(U_ℓ′ + I_ℓ) + ⋯) + I1) + I0.

Moreover, every element of V0 can be rewritten in the form

u0 · (u1 · (⋯(u_ℓ + j_ℓ) + ⋯) + j1) + j0 = u0 ⋯ u_ℓ + u0 ⋯ u_{ℓ−1} · j_ℓ + ⋯ + u0 · j1 + j0

with j_i ∈ ⟨J_i⟩ and u_i ∈ M(U_i).

1. A monoid will always be a multiplicative monoid.
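To see this rewriting concretely, here is a small symbolic check (Python with sympy; both the tool and the helper name `nested` are our own illustrative choices, not the paper's) that the nested expression of Fact 1.2 expands to the stated sum, for ℓ = 3.

```python
import sympy as sp

def nested(us, js):
    """u0*(u1*(...(u_l + j_l)...) + j1) + j0, as in Fact 1.2."""
    acc = us[-1] + js[-1]                    # u_l + j_l
    for u, j in zip(reversed(us[:-1]), reversed(js[:-1])):
        acc = u * acc + j                    # u_i*(...) + j_i
    return acc

l = 3
us = sp.symbols(f'u0:{l + 1}')
js = sp.symbols(f'j0:{l + 1}')

# expanded form: u0...u_l + sum over i of u0...u_{i-1} * j_i
expanded = sp.prod(us) + sum(sp.prod(us[:i]) * js[i] for i in range(l + 1))
assert sp.expand(nested(us, js) - expanded) == 0
```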
Definition 1.3 An ideal I and a monoid S are said to be conjugate when one has:

(s · a ∈ I, s ∈ S) ⟹ a ∈ I
a^n ∈ I (n ∈ ℕ, n > 0) ⟹ a ∈ I
(j ∈ I, s ∈ S) ⟹ s + j ∈ S
s1 · s2 ∈ S ⟹ s1 ∈ S

In this case we also say that the idealistic prime (I, S) is saturated.

For example, a detachable (2) prime ideal and the complementary monoid are conjugate. When an ideal I and a monoid S are conjugate, we have

1 ∈ I ⟺ 0 ∈ S ⟺ (I, S) = (A, A)
Definition 1.4 (Collapse) Let C = ((J0, U0), ..., (J_ℓ, U_ℓ)) be an idealistic chain and C′ = ((I0, V0), ..., (I_ℓ, V_ℓ)) the complete idealistic chain it generates.
— We say that the idealistic chain C collapses if 0 ∈ V0. Equivalently: there exist j_i ∈ ⟨J_i⟩ and u_i ∈ M(U_i) (i = 0, ..., ℓ) satisfying the equality

u0 · (u1 · (⋯(u_ℓ + j_ℓ) + ⋯) + j1) + j0 = 0

that is,

u0 ⋯ u_ℓ + u0 ⋯ u_{ℓ−1} j_ℓ + ⋯ + u0 j1 + j0 = 0

— An idealistic chain is said to be saturated if it is complete and the idealistic primes (J_i, U_i) are saturated.
— The idealistic chain ((A, A), ..., (A, A)) is said to be trivial: a saturated chain that collapses is trivial.

Note that the idealistic prime (0, 1) collapses if and only if 1 =_A 0.
The following lemma is immediate.

Fact 1.5 An idealistic chain C = ((J0, U0), ..., (J_ℓ, U_ℓ)) in which U_h ∩ J_h ≠ ∅ for some index h collapses. More generally, if an idealistic chain C′ extracted from an idealistic chain C collapses, then C collapses. Likewise, if C collapses, then every refinement of C collapses.

The proofs of the following properties are instructive.

Fact 1.6 Let C1 = (P0, ..., P_ℓ) and C2 = (P_{ℓ+1}, ..., P_{ℓ+r}) be two idealistic chains of a ring A, and let C = C1 • C2 = (P0, ..., P_{ℓ+r}).

(1) Suppose C1 is saturated. Then C collapses in A if and only if P_ℓ • C2 collapses in A, if and only if C2 collapses in the quotient A/J_ℓ (where P_ℓ = (J_ℓ, U_ℓ)).
(2) Suppose C2 is complete. Then C collapses in A if and only if C1 • P_{ℓ+1} collapses in A, if and only if C1 collapses in the localization A_{U_{ℓ+1}} (where P_{ℓ+1} = (J_{ℓ+1}, U_{ℓ+1})).
(3) Suppose C1 is saturated and C2 is complete. Then C collapses in A if and only if (P_ℓ, P_{ℓ+1}) collapses in A, if and only if J_ℓ ∩ U_{ℓ+1} ≠ ∅.

Proof.
Left to the reader. □

2. Recall that a subset of a set is said to be detachable when one has a test for membership in the subset. For instance, the finitely generated ideals of a polynomial ring with integer coefficients are detachable.
1.2 Simultaneous collapses
Notation and definition 1.7 In the sequel we use several notations for an idealistic prime or an idealistic chain obtained by refining another one. If P = (J, U) and C = (P0, ..., Pn), we write:
— P & {x ∈ P}, or (J, x; U), for (J ∪ {x}, U)
— P & {x ∉ P}, or (J; x, U), for (J, U ∪ {x})
— P & {I ⊆ P} for (J ∪ I, U)
— P & {V ⊆ A \ P} for (J, U ∪ V)
— C & x ∈ C^(i) for (P0, ..., P_i & {x ∈ P_i}, ..., Pn)
— C & x ∉ C^(i) for (P0, ..., P_i & {x ∉ P_i}, ..., Pn)
— (a1, ..., am; v1, ..., vp) for ({a1, ..., am}, {v1, ..., vp})

An idealistic prime Q = P & {I ⊆ P} & {V ⊆ A \ P} is called a finite refinement of P if I and V are finite.
Simultaneous collapse for idealistic primes

Theorem 1.8 (Simultaneous collapse for idealistic primes)
Let P = (J, U) be an idealistic prime in a commutative ring A.
(1) Let x ∈ A. If the idealistic primes P & {x ∈ P} and P & {x ∉ P} both collapse, then P collapses as well.
(2) The idealistic prime P generates a minimum saturated idealistic prime. The latter is obtained by adding to U (resp. J) every element x ∈ A such that the idealistic prime P & {x ∈ P} (resp. P & {x ∉ P}) collapses.
Proof.
The proof of point (1) is Rabinovitch's trick. From two equalities u1 + j1 + ax = 0 and u2 x^m + j2 = 0 (with u_i ∈ M(U), j_i ∈ ⟨J⟩, a ∈ A, m ∈ ℕ) one manufactures a third, u3 + j3 = 0, by eliminating x: one obtains u2 (u1 + j1)^m + (−a)^m j2 = 0, with u3 = u2 u1^m.

Point (2) follows from point (1) as follows. Let P′ = (⟨J⟩, ⟨J⟩ + M(U)) be the complete idealistic prime generated by P. Let P″ = (I′, S′) be a saturated idealistic prime refining P. Let P1 = (K, S) be the idealistic prime described in (2). One easily has P ≤ P′ ≤ P1 ≤ P″. It therefore remains to check that P1 is a saturated idealistic prime. We shall see that this follows from point (1) with no further computation. Let us show for instance that K + K ⊆ K. Let x and y be in K, that is, such that (I; x, U) and (I; y, U) collapse. We want to show that (I; x + y, U) collapses. For this, by point (1), it suffices to show that P2 = (I, x; x + y, U) and P3 = (I; x + y, x, U) collapse. For P3 this holds by hypothesis. If we complete P2, we obtain y = (x + y) − x in the monoid, so it is a refinement of (I; y, U) and it collapses by hypothesis.

The other verifications are either immediate or proceed by arguments of the same style, with no new computation. □
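The computational content of point (1) is easy to check symbolically. Under the two hypotheses u1 + j1 + a·x = 0 and u2·x^m + j2 = 0, the combination below no longer involves x. A sketch in Python with sympy (our choice of tool; the exponent is fixed for the test):

```python
import sympy as sp

u1, u2, a, x = sp.symbols('u1 u2 a x')
m = 5  # an arbitrary fixed exponent for the check

j1 = -u1 - a * x       # encodes the hypothesis u1 + j1 + a*x = 0
j2 = -u2 * x**m        # encodes the hypothesis u2*x**m + j2 = 0

# Rabinovitch's trick: x has been eliminated from this combination
assert sp.expand(u2 * (u1 + j1)**m + (-a)**m * j2) == 0
```

Read with j1 and j2 as indeterminates, the left-hand side has the form u3 + j3 with u3 = u2·u1^m, exactly as in the proof.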
Note that the saturation of (0, 1) is (N, A^×), where N is the nilradical of A and A^× is the group of units.
Corollary 1.9 (Krull's theorem, or formal Hilbert Nullstellensatz)
Let P = (J, U) be an idealistic prime in a commutative ring A. Gödel's completeness theorem implies that the following properties are equivalent:
— For all j ∈ ⟨J⟩ and u ∈ M(U), we have u ≠ j.
— There exists a detachable prime ideal Q such that P ≤ Q, that is, such that J ⊆ Q and U ∩ Q = ∅.
— There exists a homomorphism ψ from A to an integral domain B satisfying ψ(J) = 0 and 0 ∉ ψ(U).

Proof.
See the proof of Theorem 1.17 (which is more general). □
From this one immediately deduces:

Corollary 1.10 Let P = (J, U) be an idealistic prime in a commutative ring A and (I, V) the saturation of P. Gödel's completeness theorem implies that I is the intersection of the prime ideals containing J and not meeting U, while V is the union of the complements of these same prime ideals.
Examples of constructive versions

We now give examples of constructive versions of theorems of usual classical mathematics concerning prime ideals. We first introduce some notation.

Notation 1.11 Let P = (I, U) be an idealistic prime of a ring A, x ∈ A, and let P′ = (I′, U′) be the saturation of P. We write
— x ∈! P to mean x ∈ I′,
— x ∉! P to mean x ∈ U′, and
— x ∉ P to mean ¬(x ∈! P).
If X is a subset of A we write
— X ⊆! P to mean that every x ∈ X satisfies x ∈! P, and
— X outside! P to mean that every x ∈ X satisfies x ∉! P.
Note that the collapse of P is thus written 1 ∈! P, and its non-collapse is written 1 ∉ P.
Our first example concerns arithmetic rings, that is, rings whose finitely generated ideals are locally principal. For instance a Prüfer ring or a Dedekind ring is an arithmetic ring. The constructive definition of an arithmetic ring A is the following: for all x, y ∈ A one can find an element s ∈ A such that in A_s one has ⟨x, y⟩ = ⟨x⟩ and in A_{1−s} one has ⟨x, y⟩ = ⟨y⟩. This amounts to saying that one can find u, v, w such that ux = vy and (1 − u)y = wx.

Lemma 1.12 Classical statement. In an arithmetic ring, two incomparable prime ideals are comaximal.
Constructive statement. In an arithmetic ring, two incomparable idealistic primes are comaximal.
Proof.
The constructive substitute for two incomparable prime ideals is given by two idealistic primes P = (I, U) and Q = (J, V) which are incomparable in the following sense: we know two elements x and y of A that witness that P and Q cannot be refined into comparable prime ideals. In other words, x ∈! P and x ∉! Q, y ∈! Q and y ∉! P. In still other words, if P′ = (I′, U′) and Q′ = (J′, V′) are the saturations of P and Q, we have x ∈ I′ ∩ V′ and y ∈ J′ ∩ U′. The conclusion is that I′ + J′ = ⟨1⟩. The proof is immediate: take u, v, w as above (constructive definition of an arithmetic ring); since x ∈ I′, y ∈ U′, (1 − u)y = wx and P′ is saturated, we have 1 − u ∈ I′, and analogously u ∈ J′. □
It is clear that the classical theorem follows from its constructive version, and that, within classical mathematics, the classical theorem implies the constructive version. But the constructive version has no need of prime ideals, and it has a clear computational meaning.
Our second example is the celebrated prime avoidance lemma. A natural constructive version (which implies the classical statement) is Lemma 1.13 below. Another constructive version can be found in [18]. We shall see further on that, in a coherent, strongly discrete Noetherian ring, if P is one of the idealistic primes obtained by saturating a finite idealistic chain, then P satisfies the decidability hypothesis required of the idealistic primes in Lemma 1.13.

Lemma 1.13 (Prime avoidance lemma) Let I = ⟨y1, ..., ym⟩ be a finitely generated ideal and P1, ..., Pn idealistic primes of a ring A. Suppose we can test the collapse of the finite refinements of the P_k. Then we can construct finite refinements Q1, ..., Qn of P1, ..., Pn such that either I ⊆! Q_k for some k, or ∃x ∈ I ∀k x ∉! Q_k.

Proof.
It suffices to copy out the classical proof from [5]. □
Simultaneous collapse for idealistic chains

Theorem 1.14 (Simultaneous collapse for idealistic chains)
Let C = ((J0, U0), ..., (J_ℓ, U_ℓ)) be an idealistic chain in a commutative ring A.
(1) Let x ∈ A and i ∈ {0, ..., ℓ}. If the idealistic chains C & x ∈ C^(i) and C & x ∉ C^(i) both collapse, then C collapses as well.
(2) The idealistic chain C generates a minimum saturated idealistic chain. The latter is obtained by adding to U_i (resp. J_i) every element x ∈ A such that the idealistic chain C & x ∈ C^(i) (resp. C & x ∉ C^(i)) collapses.
Proof.
Write (1)_ℓ and (2)_ℓ for the statements with ℓ fixed. Note that (1)_0 and (2)_0 were the object of Theorem 1.8. We argue by induction on ℓ. We may assume the idealistic chain C is complete (this does not affect the existence of the collapses).

That (1)_ℓ ⟹ (2)_ℓ presents no difficulty; one argues as in Theorem 1.8. It therefore remains to show ((1)_{ℓ−1} and (2)_{ℓ−1}) ⟹ (1)_ℓ (for ℓ > 0).

If i < ℓ we use Fact 1.6, which reduces us to idealistic chains of length i in the localized ring A_{U_{i+1}}, and we apply the induction hypothesis.

If i = ℓ we consider the idealistic chain of length ℓ − 1, ((K0, S0), ..., (K_{ℓ−1}, S_{ℓ−1})), obtained by saturating ((J0, U0), ..., (J_{ℓ−1}, U_{ℓ−1})) (we apply (2)_{ℓ−1}). For arbitrary j_i ∈ ⟨J_i⟩ and u_i ∈ M(U_i) (0 ≤ i ≤ ℓ), consider the following assertions:

u0 · (u1 · (⋯(u_{ℓ−1} · (u_ℓ + j_ℓ) + j_{ℓ−1}) + ⋯) + j1) + j0 = 0   (α)

(u_ℓ + j_ℓ) ∈ K_{ℓ−1}   (β)

∃n ∈ ℕ   u0 · (u1 · (⋯(u_{ℓ−1} · (u_ℓ + j_ℓ)^n + j_{ℓ−1}) + ⋯) + j1) + j0 = 0   (γ)

We have (α) ⟹ (β) ⟹ (γ). Hence the following properties are equivalent. First: the idealistic chain C collapses in A (certified by an equality of type (α) or of type (γ)). Second: the idealistic prime (J_ℓ, U_ℓ) collapses in A/K_{ℓ−1} (certified by an equality of type (β)). We are thus reduced (over the ring A/K_{ℓ−1}) to the case (1)_0 treated in Theorem 1.8. □
Note that, in the last part of the argument, we did not rely on Fact 1.6, which is not strong enough in that situation. In the corollary below we give some simple consequences of Theorem 1.14. Note that the second item allows an improvement in the use of Fact 1.6.

Corollary 1.15
— An idealistic chain C collapses if and only if every saturated idealistic chain refining C is trivial.
— The collapse of an idealistic chain C is unchanged if one replaces an extracted chain by its saturation C′, or by any idealistic chain C″ such that C ≤ C″ ≤ C′.
— Let x1, ..., xk ∈ A. If the idealistic chains ((J0, U0), ..., (J_i ∪ {(x_h)_{h∈H}}, U_i ∪ {(x_h)_{h∈H′}}), ..., (J_ℓ, U_ℓ)) collapse for every pair of complementary subsets (H, H′) of {1, ..., k}, then C collapses as well.
— Let X, Y ⊆ A, let C = (P0, ..., P_ℓ) be an idealistic chain and C′ = (Q0, ..., Q_ℓ) its saturation. If P_k = (I_k, U_k) and Q_k = (J_k, V_k), then the idealistic chain (P0, ..., (I_k, X; Y, U_k), ..., P_ℓ) collapses if and only if the idealistic prime (J_k, X; Y, V_k) collapses.
— Let x ∈ A and C = (P0, ..., P_ℓ). If each of the idealistic chains C & {x ∈ P0}, C & {x ∈ P1} & {x ∉ P0}, ..., C & {x ∈ P_ℓ} & {x ∉ P_{ℓ−1}} and C & {x ∉ P_ℓ} collapses, then C collapses.
Definition 1.16
— Two idealistic chains which generate the same saturated idealistic chain are said to be equivalent.
— An idealistic chain equivalent to a finite idealistic chain is called an idealistic chain of finite type.
— An idealistic chain is said to be strict if V_i ∩ I_{i+1} ≠ ∅ (i = 0, ..., ℓ − 1) holds in the saturated idealistic chain ((I0, V0), ..., (I_ℓ, V_ℓ)) it generates.
— A saturated idealistic chain C = ((J0, U0), ..., (J_ℓ, U_ℓ)) is said to be frozen if it does not collapse and if J_i ∪ U_i = A for every i = 0, ..., ℓ. An idealistic chain is said to be frozen if its saturation is frozen. To give a (strict) frozen idealistic chain thus amounts to giving a (strictly) increasing chain of detachable prime ideals.

Our idea is that Theorem 1.14 reveals a computational content "hidden" in the classical proofs concerning increasing chains of prime ideals. This idea is illustrated by the following theorem, which, in classical mathematics, gives a concrete characterization of the idealistic chains that incompletely specify increasing chains of prime ideals.
Theorem 1.17 (Formal Nullstellensatz for chains of prime ideals in a commutative ring) Let A be a ring and ((J0, U0), ..., (J_ℓ, U_ℓ)) an idealistic chain in A. Gödel's completeness theorem implies that the following properties are equivalent:
(a) There exist ℓ + 1 detachable prime ideals P0 ⊆ ⋯ ⊆ P_ℓ such that J_i ⊆ P_i, U_i ∩ P_i = ∅ (i = 0, ..., ℓ).
(b) For all j_i ∈ ⟨J_i⟩ and u_i ∈ M(U_i) (i = 0, ..., ℓ),

u0 · (u1 · (⋯(u_ℓ + j_ℓ) + ⋯) + j1) + j0 ≠ 0
Proof.
Only (b) ⟹ (a) poses a problem. Let us begin with a proof that relies not on Gödel's completeness theorem, but on the principle of excluded middle and Zorn's lemma. Consider an idealistic chain C1 = ((P0, S0), ..., (P_ℓ, S_ℓ)) that is maximal (for the refinement relation) among the idealistic chains which refine C and do not collapse. First of all, it is clear that C1 is complete, since completing it does not change the collapse. If it were not a chain of prime ideals with their complements, we would have S_i ∪ P_i ≠ A for some index i. In that case, let x ∈ A \ (S_i ∪ P_i). Then ((P0, S0), ..., (P_i ∪ {x}, S_i), ..., (P_ℓ, S_ℓ)) must collapse (by maximality). The same goes for ((P0, S0), ..., (P_i, S_i ∪ {x}), ..., (P_ℓ, S_ℓ)). Theorem 1.14 then tells us that the idealistic chain ((P0, S0), ..., (P_ℓ, S_ℓ)) collapses, which is absurd.

Let us now see a proof in which only Gödel's completeness theorem intervenes, which restricts the use of the principle of excluded middle compared with the preceding proof. Consider the formal theory describing a commutative ring with an increasing chain of prime ideals of length ℓ. In this theory we take predicates for x ∈ P_i and x ∉ P_i. The axioms are those of commutative rings, those of complete idealistic chains in commutative rings, and, for each index i, the two axioms saying that the predicates x ∈ P_i and x ∉ P_i are opposite predicates:

∀a ∈ A (a ∈ P_i ∨ a ∉ P_i)   and   ∀a ∈ A ((a ∈ P_i ∧ a ∉ P_i) ⟹ False)

This condition forces the idealistic chain to consist of detachable prime ideals. Add as constants the elements of A, and as axioms, on the one hand the equalities (of the type a + b = c or ab = d) true in A, and on the other hand the axioms expressing J_i ⊆ P_i and U_i ∩ P_i = ∅. By Theorem 1.14 the resulting theory is consistent. By Gödel's completeness theorem, it has a model. This model yields a homomorphism ϕ : A → B and an increasing chain of prime ideals Q0 ⊆ ⋯ ⊆ Q_ℓ in B with ϕ(J_i) ⊆ Q_i and ϕ(U_i) ∩ Q_i = ∅. We take P_i = ϕ^{−1}(Q_i) and we are done. □
This second proof is somewhat elliptical, indeed downright daring. There is a cut-elimination theorem hidden inside it. For more details, see [12]. Note also that Theorem 1.17 implies (in two lines) the simultaneous collapse Theorem 1.14. The latter may therefore legitimately be regarded as the constructive version of the former.

A corollary of Theorem 1.17 would be a characterization of the saturated idealistic chain generated by an idealistic chain C by means of the family of chains of prime ideals that are refinements of C, as in Corollary 1.10.
1.3 Pseudo-regular sequences and Krull dimension

In a constructive setting, it is sometimes preferable to consider an inequality relation x ≠ 0 which is not merely the impossibility of x = 0. For instance, a real number is said to be ≠ 0 when it is invertible, that is, clearly nonzero. Whenever we mention an inequality relation x ≠ 0 in a ring, we therefore always implicitly assume that this relation has been defined beforehand in the ring under consideration. We require this relation to be a standard inequality, that is, one that can be proved equivalent to ¬(x = 0) using the principle of excluded middle. We further require that one have, constructively, (x ≠ 0, y = 0) ⟹ x + y ≠ 0, xy ≠ 0 ⟹ x ≠ 0, and ¬(0 ≠ 0). Finally, x ≠ y is defined as x − y ≠ 0. In the absence of further details concerning x ≠ 0, one may always take it to be the relation ¬(x = 0). When the ring is a discrete set, that is, when it has a test for equality to zero, we always choose the inequality ¬(x = 0). Nevertheless, it would be a serious error of principle to consider that commutative algebra should only work with discrete sets.
Definition 1.18 Let (x1, ..., x_ℓ) be a sequence of length ℓ in a commutative ring A.
— The idealistic chain ((0; x1), (x1; x2), ..., (x_{ℓ−1}; x_ℓ), (x_ℓ; 1)) is called an elementary idealistic chain. We say it is associated with the sequence (x1, ..., x_ℓ), and we denote it by (x1, ..., x_ℓ).
— The sequence (x1, ..., x_ℓ) is called a pseudo-singular sequence when the associated elementary idealistic chain (x1, ..., x_ℓ) collapses. Precisely, this means that there exist a1, ..., a_ℓ ∈ A and m1, ..., m_ℓ ∈ ℕ such that

x1^{m1} (x2^{m2} (⋯(x_ℓ^{m_ℓ} (1 + a_ℓ x_ℓ) + ⋯) + a2 x2) + a1 x1) = 0

or again that there exist a1, ..., a_ℓ ∈ A and m ∈ ℕ such that

(x1 x2 ⋯ x_ℓ)^m + a_ℓ (x1 ⋯ x_{ℓ−1})^m x_ℓ^{m+1} + a_{ℓ−1} (x1 ⋯ x_{ℓ−2})^m x_{ℓ−1}^{m+1} + ⋯ + a1 x1^{m+1} = 0

— The sequence (x1, ..., x_ℓ) is called a pseudo-regular sequence when the associated elementary idealistic chain does not collapse. Precisely: for all a1, ..., a_ℓ ∈ A and all m1, ..., m_ℓ ∈ ℕ,

x1^{m1} (x2^{m2} (⋯(x_ℓ^{m_ℓ} (1 + a_ℓ x_ℓ) + ⋯) + a2 x2) + a1 x1) ≠ 0

Note that the length of the elementary idealistic chain associated with a sequence is equal to the number of elements of the sequence.
The connection with regular sequences is given by the following proposition, which is immediate.

Proposition 1.19 In a commutative ring A, every regular sequence is pseudo-regular.

The following lemma is sometimes useful.

Lemma 1.20 Let (x1, ..., x_ℓ) and (y1, ..., y_ℓ) be sequences in a commutative ring A. Suppose that, for each j, x_j divides a power of y_j and y_j divides a power of x_j. Then the sequence (x1, ..., x_ℓ) is pseudo-singular if and only if the sequence (y1, ..., y_ℓ) is pseudo-singular.

Proof.
Indeed, if x divides a power of y and y divides a power of x, one establishes the following refinement relations:

(a; x)(x; b) ≤ (a; x, y)(x, y; b) ≤ the saturation of the chain (a; x)(x; b)

One adds the first y by noting that yc = x^k (so y lies in the saturation of the monoid generated by x), and the second by noting that y^m = dx (so y lies in the radical of the ideal generated by x). By symmetry one deduces that (a; x)(x; b) and (a; y)(y; b) have the same saturation. □
An immediate corollary of Theorem 1.17 is the following theorem.

Theorem 1.21 (Pseudo-regular sequences and increasing chains of prime ideals) Gödel's completeness theorem implies the following result. In a ring A, a sequence (x1, ..., x_ℓ) is pseudo-regular if and only if there exist ℓ + 1 prime ideals P0 ⊆ ⋯ ⊆ P_ℓ with x1 ∈ P1 \ P0, x2 ∈ P2 \ P1, ..., x_ℓ ∈ P_ℓ \ P_{ℓ−1}.

This leads to the following definition, which gives an explicit constructive content to the notion of the Krull dimension of a ring.
Definition 1.22 (Krull dimension of a ring)
— A ring A is said to be of dimension −1 when 1 =_A 0. It is said to be of dimension ≥ 0 if 1 ≠_A 0, of dimension > −1 if ¬(1 =_A 0), and of dimension < 0 if ¬(1 ≠_A 0).
Now let ℓ ≥ 1.
— The ring is said to be of dimension ≤ ℓ − 1 if every elementary idealistic chain of length ℓ collapses.
— The ring is said to be of dimension ≥ ℓ if there exists a pseudo-regular sequence of length ℓ.
— The ring is said to be of dimension ℓ if it is both of dimension ≥ ℓ and of dimension ≤ ℓ.
— It is said to be of dimension < ℓ when it is impossible for it to be of dimension ≥ ℓ.
— It is said to be of dimension > ℓ when it is impossible for it to be of dimension ≤ ℓ (3).

Note that some of these definitions use the inequality relation ≠_A of the ring A.

A ring is thus of (Krull) dimension ≤ ℓ − 1 if, for every sequence (x1, ..., x_ℓ) in A, one can find a1, ..., a_ℓ ∈ A and m1, ..., m_ℓ ∈ ℕ such that

x1^{m1} (⋯(x_ℓ^{m_ℓ} (1 + a_ℓ x_ℓ) + ⋯) + a1 x1) = 0

In particular, a ring is of dimension ≤ 0 if and only if for every x ∈ A there exist n ∈ ℕ and a ∈ A such that x^n = a x^{n+1}. It is of dimension < 1 if and only if it is absurd that one can find x ∈ A with x^n ≠ a x^{n+1} for every n ∈ ℕ and every a ∈ A.
Note that ℝ is a local ring of dimension < 1, but one cannot prove constructively that it is of dimension ≤ 0. Note also that a ring is local and zero-dimensional if and only if

∀x ∈ A, x is invertible or nilpotent
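This "invertible or nilpotent" characterization can likewise be tested directly; the sketch below (Python, our own illustration) recovers the fact that Z/nZ is local and zero-dimensional exactly when n is a prime power.

```python
from math import gcd

def local_zero_dimensional(n):
    """Check: every x in Z/nZ is invertible or nilpotent."""
    def invertible(x):
        return gcd(x, n) == 1
    def nilpotent(x):
        return any(pow(x, e, n) == 0 for e in range(1, n + 1))
    return all(invertible(x) or nilpotent(x) for x in range(n))

print([n for n in range(2, 30) if local_zero_dimensional(n)])
# prints the prime powers: [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29]
```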
Krull dimension of a polynomial ring over a discrete field

We first have:

Proposition 1.23 Let K be a discrete field, A a commutative K-algebra, and x1, ..., x_ℓ elements of A that are algebraically dependent over K. Then the sequence (x1, ..., x_ℓ) is pseudo-singular.

Proof.
Let Q(x1, ..., x_ℓ) = 0 be an algebraic dependence relation over K. Order the monomials of Q with nonzero coefficient lexicographically. We may assume without loss of generality that the coefficient of the first monomial (for this order) is equal to 1. If x1^{m1} x2^{m2} ⋯ x_ℓ^{m_ℓ} is this monomial, it is clear that Q can be written in the form

Q = x1^{m1} ⋯ x_ℓ^{m_ℓ} + x1^{m1} ⋯ x_{ℓ−1}^{m_{ℓ−1}} x_ℓ^{1+m_ℓ} R_ℓ + x1^{m1} ⋯ x_{ℓ−2}^{m_{ℓ−2}} x_{ℓ−1}^{1+m_{ℓ−1}} R_{ℓ−1} + ⋯ + x1^{m1} x2^{1+m2} R2 + x1^{1+m1} R1

which is the desired collapse. □

3. In fact, there exists one and only one elementary idealistic chain of length 0, namely (0, 1), so there was no need to begin with a special definition for rings of dimension −1. In this setting one recovers the distinctions between dimension ≥ 0 and dimension > −1, as well as between dimension ≤ −1 and dimension < 0.
From this we deduce:

Theorem 1.24 Let K be a discrete field. The Krull dimension of the ring K[X1, ..., X_ℓ] is equal to ℓ.

Proof.
In view of Proposition 1.23 it suffices to check that the sequence (X1, ..., X_ℓ) is pseudo-regular. But it is regular. □

Note that we obtained this basic result with a minimum of effort, and that our proof is obviously valid in classical mathematics (with the classical definition of the Krull dimension, as soon as one accepts Gödel's completeness theorem). This belies the commonly held opinion that constructive proofs are necessarily more complicated than classical ones.
1.4 Krull dimension and the local-global principle

Comaximal monoids

Definition 1.25
(1) Monoids S1, ..., Sn of a ring A are said to be comaximal if every ideal of A that meets each of the S_i contains 1, in other words if:

∀s1 ∈ S1 ⋯ ∀sn ∈ Sn, ∃a1, ..., an ∈ A, ∑_{i=1}^{n} a_i s_i = 1

(2) We say that the monoids S1, ..., Sn of the ring A cover the monoid S if S is contained in the S_i and if every ideal of A that meets each of the S_i also meets S, in other words if:

∀s1 ∈ S1 ⋯ ∀sn ∈ Sn, ∃a1, ..., an ∈ A, ∑_{i=1}^{n} a_i s_i ∈ S
Notation 1.26 If (I; U) is an idealistic prime of A, we write S(I; U) for the monoid M(U) + ⟨I⟩ of the idealistic prime obtained by completing (I; U).

The fundamental example of comaximal monoids is the following: when s1, ..., sn ∈ A satisfy ⟨s1, ..., sn⟩ = ⟨1⟩, the monoids M(s_i) are comaximal.
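Comaximality of the M(s_i) asks, for arbitrary powers s_i^k, for coefficients b_i with ∑ b_i s_i^k = 1; these can be produced explicitly by expanding a suitable power of ∑ a_i s_i and sorting the monomials by pigeonhole. A sketch (Python with sympy; the function name is ours):

```python
import sympy as sp

def power_cocombination(s, a, k):
    """Return b with sum(b_i * s_i**k) == (sum(a_i * s_i))**(n*(k-1)+1)
    as polynomials; so sum(b_i * s_i**k) = 1 whenever sum(a_i * s_i) = 1."""
    n = len(s)
    e = sp.expand(sum(ai * si for ai, si in zip(a, s)) ** (n * (k - 1) + 1))
    b = [sp.Integer(0)] * n
    for term in sp.Add.make_args(e):
        # each monomial has total s-degree n*(k-1)+1, so by pigeonhole
        # some s_i occurs with exponent >= k; file the term under that i
        i = next(i for i, si in enumerate(s) if sp.degree(term, si) >= k)
        b[i] += sp.cancel(term / s[i] ** k)
    return b

s, a = sp.symbols('s1 s2 s3'), sp.symbols('a1 a2 a3')
b = power_cocombination(s, a, 2)
lhs = sum(bi * si**2 for bi, si in zip(b, s))
assert sp.expand(lhs - sum(ai * si for ai, si in zip(a, s))**4) == 0
```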
The following two lemmas are also very useful for constructing comaximal monoids.

Lemma 1.27 (immediate computations)
(1) (associativity) If the monoids S1, ..., Sn of the ring A cover the monoid S and if each S_ℓ is covered by monoids S_{ℓ,1}, ..., S_{ℓ,m_ℓ}, then the S_{ℓ,j} cover S.
(2) (transitivity) Let S be a monoid of the ring A and S1, ..., Sn comaximal monoids of the localized ring A_S. For ℓ = 1, ..., n let V_ℓ be the monoid of A consisting of the numerators of the elements of S_ℓ. Then the monoids V1, ..., Vn cover S.

Lemma 1.28 Let U and I be subsets of the ring A and a ∈ A. Then the monoids S(I, a; U) and S(I; a, U) cover the monoid S(I; U).
Proof.
For x ∈ S(I; a, U) and y ∈ S(I, a; U) we must find a linear combination x1 x + y1 y ∈ S(I; U) (x1, y1 ∈ A). Write x = u1 a^k + j1 and y = (u2 + j2) − az, with u1, u2 ∈ M(U), j1, j2 ∈ ⟨I⟩, z ∈ A. The fundamental identity c^k − d^k = (c − d) × ⋯ provides y2 ∈ A such that y2 y = (u2 + j2)^k − (az)^k = (u3 + j3) − (az)^k, and one writes z^k x + u1 y2 y = u1 u3 + u1 j3 + j1 z^k = u4 + j4. □
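The algebraic identity at the heart of this proof is easy to check symbolically. A sketch (Python with sympy, for a fixed exponent; all names are ours):

```python
import sympy as sp

u1, u2, j1, j2, a, z = sp.symbols('u1 u2 j1 j2 a z')
k = 4  # an arbitrary fixed exponent for the check

x = u1 * a**k + j1       # a generic element of S(I; a, U)
y = (u2 + j2) - a * z    # a generic element of S(I, a; U)

# y2 from the identity c^k - d^k = (c - d)(c^(k-1) + c^(k-2) d + ... + d^(k-1))
c, d = u2 + j2, a * z
y2 = sum(c**(k - 1 - i) * d**i for i in range(k))

# the combination z^k x + u1 y2 y equals u1 (u2 + j2)^k + j1 z^k,
# whose monoid part is u1 u2^k and whose remaining terms lie in <I>
assert sp.expand(z**k * x + u1 * y2 * y - (u1 * (u2 + j2)**k + j1 * z**k)) == 0
```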
Corollary 1.29 Let a1, ..., an ∈ A. Set S_k = S((a_i)_{i>k}; a_k) (k = 1, ..., n) and S0 = S((a_i)_{i=1,...,n}; 1). Then the monoids S0, S1, ..., Sn are comaximal.

Comaximal monoids constitute a constructive tool which, in general, makes it possible to replace abstract local-global arguments by explicit computations. If S1, ..., Sn are comaximal monoids of the ring A, the product of the localized rings A_{S_i} is a faithfully flat A-algebra. Hence a great many properties hold for A if and only if they hold for each of the A_{S_i}. We verify this for the Krull dimension in the next paragraph.
Local character of the Krull dimension

The following proposition is immediate.

Proposition 1.30 Let A be a ring. Its Krull dimension is always greater than or equal to that of any quotient or localization of A. More precisely, every elementary idealistic chain that collapses in A collapses in every quotient and in every localization of A, and every elementary idealistic chain in a localization of A is equivalent (in the localization) to an elementary idealistic chain written in A. Finally, if an elementary idealistic chain (a1, ..., a_ℓ) of A collapses in a localization A_S, there exists m ∈ S such that (a1, ..., a_ℓ) collapses in A[1/m].

Proposition 1.31 Let S1, ..., Sn be comaximal monoids of the ring A and C an idealistic chain in A. Then C collapses in A if and only if it collapses in each of the A_{S_i}. In particular, the Krull dimension of A is ≤ ℓ if and only if the Krull dimension of each of the A_{S_i} is ≤ ℓ.
Proof.
We must show that an idealistic chain C collapses in A if it collapses in each of the A_{S_i}. For simplicity take a chain of length 2: ((J0, U0), (J1, U1), (J2, U2)), with ideals J_k and monoids U_k. In each A_{S_i} we have an equality

u_{0,i} u_{1,i} u_{2,i} + u_{0,i} u_{1,i} j_{2,i} + u_{0,i} j_{1,i} + j_{0,i} = 0

with u_{k,i} ∈ U_k and j_{k,i} ∈ J_k A_{S_i}. After clearing denominators and multiplying by a suitable element of S_i, we obtain an equality in A of the following type:

s_i u_{0,i} u_{1,i} u_{2,i} + u_{0,i} u_{1,i} j′_{2,i} + u_{0,i} j′_{1,i} + j′_{0,i} = 0

with s_i ∈ S_i, u_{k,i} ∈ U_k and j′_{k,i} ∈ J_k. Set u_k = ∏_i u_{k,i}. Multiplying the preceding equality by a suitable product, we obtain an equality

s_i u_0 u_1 u_2 + u_0 u_1 j″_{2,i} + u_0 j″_{1,i} + j″_{0,i} = 0

with s_i ∈ S_i, u_k ∈ U_k and j″_{k,i} ∈ J_k. It remains to write ∑_i a_i s_i = 1, to multiply the equality corresponding to S_i by a_i, and to take the sum. □
Exemple d’application
En mathématiques classiques la dimension de Krull d’un anneau est la borne supérieure
des dimensions de Krull des localisés en tous les idéaux maximaux. Cela résulte facilement (en
mathématiques classiques) des propositions 1.30 et 1.31.
La proposition 1.31 devrait permettre d’obtenir constructivement les mêmes conséquences
concrètes (que celles qu’on obtient non constructivement en mathématiques classiques en appliquant la propriété ci-dessus) même lorsqu’on n’a pas accès aux idéaux premiers.
Nous nous contentons ici d’un exemple simple, dans lequel on a accès aux idéaux premiers. Supposons que nous ayions une preuve constructive simple que la dimension de Krull de
Z(p) [x1 , . . . , x` ] est ≤ ` + 1 (p est un nombre premier arbitraire, et Z(p) est le localisé de Z en
pZ). Alors nous pouvons en déduire le même résultat pour A = Z[x1 , . . . , x` ] en appliquant le
principe local-global précédent.
Soit en effet une liste (a1 , . . . , a`+2 ) dans A. Le collapsus de la chaı̂ne idéale élémentaire
(a1 , . . . , a`+2 ) dans Z(2) [x1 , . . . , x` ] se relit comme un collapsus dans Z[1/m0 ][x1 , . . . , x` ] pour
un certain m0 impair. Pour chacun des diviseurs premiers pi de m (i = 1, . . . , k), le collapsus de la chaı̂ne idéale élémentaire (a1 , . . . , a`+2 ) dans Z(pi ) se relit comme un collapsus dans
Z[1/mi ][x1 , . . . , x` ] pour un certain mi étranger à pi . Les entiers mi (i = 0, . . . , k) engendrent
l’idéal h1i, donc les monoı̈des M(mi ) sont comaximaux et on peut appliquer la proposition
1.31.
2 Distributive lattices, entailment relations and Krull dimension

2.1 Distributive lattices, ideals, filters and spectrum

A distributive lattice is an ordered set with finite sups and infs, a minimum element (written 0) and a maximum element (written 1). The laws sup and inf are required to be distributive with respect to each other (either distributivity implies the other). These laws are written ∨ and ∧. The relation a ≤ b can be defined by a ∨ b = b. The theory of distributive lattices is then purely equational; there are therefore distributive lattices defined by generators and relations.

A particularly important rule, called cut, is the following:

(((x ∧ a) ≤ b) & (a ≤ (x ∨ b))) ⟹ a ≤ b

To prove it, write x ∧ a ∧ b = x ∧ a and a = a ∧ (x ∨ b), so that

a = (a ∧ x) ∨ (a ∧ b) = (a ∧ x ∧ b) ∨ (a ∧ b) = a ∧ b

A totally ordered set is a distributive lattice as soon as it has a maximum and a minimum. We write n for a totally ordered set with n elements (it is a distributive lattice when n ≠ 0). Every product of distributive lattices is a distributive lattice. The natural numbers, equipped with the divisibility relation, form a distributive lattice (but the absolute minimum is 1 and the absolute maximum is 0). If T and T′ are two distributive lattices, the set Hom(T, T′) of homomorphisms (maps preserving sup, inf, 0 and 1) from T to T′ carries a natural order structure given by

ϕ ≤ ψ ⟺(def) ∀x ∈ T, ϕ(x) ≤ ψ(x)

A map between two totally ordered lattices T and S is a homomorphism if and only if it is nondecreasing and 0_T and 1_T have images 0_S and 1_S.

The following proposition is easy.
Proposition 2.1 Let T be a distributive lattice and J a subset of T. Consider the distributive lattice T′ generated by T and the relations x = 0 for x ∈ J (T′ is a quotient of T). Then:
— the equivalence class of 0 is the set of those a satisfying, for at least one finite subset J0 of J:

a ≤ ⋁_{x∈J0} x   in T

— the equivalence class of 1 is the set of those b satisfying, for at least one finite subset J0 of J:

1 = b ∨ ⋁_{x∈J0} x   in T

— More generally, a ≤_{T′} b if and only if there exists a finite subset J0 of J such that, in T:

a ≤ b ∨ ⋁_{x∈J0} x

In the proposition above, the equivalence class of 0 is called an ideal of the lattice; it is the ideal generated by J, written ⟨J⟩_T. One checks without difficulty that an ideal I is subject only to the following constraints:

0 ∈ I
x, y ∈ I ⟹ x ∨ y ∈ I
x ∈ I, z ∈ T ⟹ x ∧ z ∈ I

(the last one can be rewritten (x ∈ I, y ≤ x) ⟹ y ∈ I). Moreover, for every homomorphism of distributive lattices ϕ : T1 → T2, ϕ^{−1}(0) is an ideal of T1.

A principal ideal is an ideal generated by a single element a. We have ⟨a⟩_T = {x ∈ T ; x ≤ a}. Every finitely generated ideal is principal.
The notion dual to that of an ideal is the notion of a filter. A filter F is the inverse image of 1 under a homomorphism of distributive lattices. It is subject only to the constraints

1 ∈ F
x, y ∈ F ⟹ x ∧ y ∈ F
x ∈ F, z ∈ T ⟹ x ∨ z ∈ F
Notation 2.2 We write Pf(X) for the set of finite subsets of the set X. If A is a finite subset of a distributive lattice T, we write

⋁A := ⋁_{x∈A} x   and   ⋀A := ⋀_{x∈A} x

We write A ⊢ B, or A ⊢_T B, for the relation defined as follows on the set Pf(T) of finite subsets of a distributive lattice T:

A ⊢ B ⟺(def) ⋀A ≤ ⋁B

Note that the relation A ⊢ B is well defined on the set of finite subsets because the laws ∧ and ∨ are associative, commutative and idempotent. Note that ∅ ⊢ {x} ⟹ x = 1 and {y} ⊢ ∅ ⟹ y = 0. This relation satisfies the following axioms, in which we write x for {x} and A, B for A ∪ B:

a ⊢ a   (R)
A ⊢ B ⟹ A, A′ ⊢ B, B′   (M)
(A, x ⊢ B) & (A ⊢ B, x) ⟹ A ⊢ B   (T)

We say that the relation is reflexive, monotone and transitive. The third rule (transitivity) is also called the cut rule. Let us also point out the following two "distributivity" rules:

(A, x ⊢ B) & (A, y ⊢ B) ⟺ A, x ∨ y ⊢ B
(A ⊢ B, x) & (A ⊢ B, y) ⟺ A ⊢ B, x ∧ y
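For concreteness, here is a small executable model (Python; our own illustration) of the relation ⊢ in the distributive lattice of divisors of 60 ordered by divisibility, together with a brute-force verification of the cut rule (T) on small subsets.

```python
from itertools import combinations
from math import gcd

DIVS = [d for d in range(1, 61) if 60 % d == 0]  # a <= b iff a divides b

def lcm2(a, b):
    return a * b // gcd(a, b)

def inf(A):  # meet = gcd; the empty meet is the top element 60
    m = 60
    for x in A:
        m = gcd(m, x)
    return m

def sup(B):  # join = lcm; the empty join is the bottom element 1
    j = 1
    for x in B:
        j = lcm2(j, x)
    return j

def entails(A, B):  # A |- B  iff  inf(A) <= sup(B)  iff  gcd(A) | lcm(B)
    return sup(B) % inf(A) == 0

# cut rule (T): (A,x |- B) and (A |- B,x) together imply A |- B
pairs = list(combinations(DIVS, 2))
assert all(entails(A, B)
           for A in pairs for B in pairs for x in DIVS
           if entails(A + (x,), B) and entails(A, B + (x,)))
```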
The following proposition is a corollary of Proposition 2.1.

Proposition 2.3 Let T be a distributive lattice and (J, U) a pair of subsets of T. Consider the distributive lattice T′ generated by T and the relations x = 0 for x ∈ J and y = 1 for y ∈ U (T′ is a quotient of T). Then:
— the equivalence class of 0 is the set of those a satisfying:

∃J0 ∈ Pf(J), U0 ∈ Pf(U),   a, U0 ⊢_T J0

— the equivalence class of 1 is the set of those b satisfying:

∃J0 ∈ Pf(J), U0 ∈ Pf(U),   U0 ⊢_T b, J0

— More generally, a ≤_{T′} b if and only if there exist a finite subset J0 of J and a finite subset U0 of U such that, in T:

a, U0 ⊢_T b, J0

We write T/(J = 0, U = 1) for the quotient lattice T′ described in Proposition 2.3. Let ψ : T → T′ be the canonical projection. If I is the ideal ψ^{−1}(0) and F the filter ψ^{−1}(1), we say that the ideal I and the filter F are conjugate. By the preceding proposition, an ideal I and a filter F are conjugate if and only if:

[x ∈ T, I0 ∈ Pf(I), F0 ∈ Pf(F), (x, F0 ⊢ I0)] ⟹ x ∈ I   and
[x ∈ T, I0 ∈ Pf(I), F0 ∈ Pf(F), (F0 ⊢ x, I0)] ⟹ x ∈ F

This can be reformulated as follows:

(f ∈ F, x ∧ f ∈ I) ⟹ x ∈ I   and   (j ∈ I, x ∨ j ∈ F) ⟹ x ∈ F

When an ideal I and a filter F are conjugate, we have

1 ∈ I ⟺ 0 ∈ F ⟺ (I, F) = (T, T)

We also write T′ = T/(J = 0, U = 1) as T/(I, F). By Proposition 2.3, every homomorphism ϕ from T to another distributive lattice T1 satisfying ϕ(J) = {0} and ϕ(U) = {1} factors uniquely through the quotient lattice T′.

The example of totally ordered sets shows that a quotient structure of a distributive lattice is not, in general, characterized by the equivalence classes of 0 and 1.
Classically, a prime ideal I of a lattice T ≠ 1 is an ideal whose complement F is a filter (which is then a prime filter). In classical mathematics this is the same as saying that

1 ∉ I   and   (x ∧ y) ∈ I ⟹ (x ∈ I or y ∈ I)   (∗)

or as saying that I is the kernel of a homomorphism from T to the two-element lattice, written 2. In constructive mathematics it seems logical to choose definition (∗), which is less constraining, while giving its full constructive meaning to the "or". The notion of a prime filter is defined dually.

We call the spectrum of the distributive lattice T, written Spec(T), the ordered set Hom(T, 2). When T ≠ 1 this is the same as considering the set of detachable prime ideals of T; the order relation then corresponds to reversed inclusion of prime ideals. We have Spec(2) ≅ 1, Spec(3) ≅ 2, Spec(4) ≅ 3, etc.
Definition 2.4 Let T be a distributive lattice.
— An idealistic prime in T is given by a pair (J, U) of subsets of T. We view it as an incomplete specification for a prime ideal P satisfying J ⊆ P and U ∩ P = ∅. It is called finite if J and U are finite, trivial if J = U = T.
— An idealistic prime (J, U) is called saturated if J is an ideal, U a filter, and J and U are conjugate. Every idealistic prime generates a saturated idealistic prime (I, F), described in Proposition 2.3.
— We say that the idealistic prime (J, U) collapses if the saturated idealistic prime (I, F) it generates is trivial. In other words, the quotient lattice T′ = T/(J = 0, U = 1) is reduced to a point, that is, 1 ≤_{T′} 0, which means that there exist a finite subset J0 of J and a finite subset U0 of U such that

U0 ⊢ J0
We have the following theorem, analogous to Theorem 1.8.

Theorem 2.5 (Simultaneous collapse for idealistic primes) Let (J, U) be an idealistic prime and x an element of a distributive lattice T.
(1) If the idealistic primes (J ∪ {x}, U) and (J, U ∪ {x}) both collapse, then (J, U) collapses as well.
(2) The idealistic prime (J, U) generates a minimum saturated idealistic prime. The latter is obtained by adding to U (resp. J) every element x ∈ T such that the idealistic prime (J ∪ {x}, U) (resp. (J, U ∪ {x})) collapses.

Proof.
Let us see point (1). We have finite subsets J0, J1 of J and finite subsets U0, U1 of U such that

x, U0 ⊢ J0   and   U1 ⊢ x, J1

hence

x, U0, U1 ⊢ J0, J1   and   U0, U1 ⊢ x, J0, J1

Hence, by cut,

U0, U1 ⊢ J0, J1

Point (2) was already proved (in a slightly different formulation) in Proposition 2.3. □
Note the decisive role of the cut rule. One deduces:

Proposition 2.6 Gödel's completeness theorem implies the following result. If (J, U) is an idealistic prime that does not collapse, there exists ϕ ∈ Spec(T) such that J ⊆ ϕ^{−1}(0) and U ⊆ ϕ^{−1}(1). In particular, if a ≰ b, there exists ϕ ∈ Spec(T) such that ϕ(a) = 1 and ϕ(b) = 0. Also, if T ≠ 1, then Spec(T) is nonempty.
A corollary is the following representation theorem (Birkhoff's theorem).

Theorem 2.7 (Representation theorem) Gödel's completeness theorem implies the following result. The map θ_T : T → P(Spec(T)) defined by a ↦ {ϕ ∈ Spec(T) ; ϕ(a) = 1} is an injective homomorphism of distributive lattices. In other words, every distributive lattice can be represented as a lattice of subsets of a set.

In a nearby form one can say: every distributive lattice can be seen as a distributive sublattice of a lattice of subsets of a set. Another corollary is the following.
Proposition 2.8 Gödel's completeness theorem implies the following result. Let ϕ : T → T′ be a homomorphism of distributive lattices. Then ϕ is injective if and only if Spec(ϕ) : Spec(T′) → Spec(T) is surjective.

Proof.
We have the equivalences

a ≠ b ⟺ a ∧ b ≠ a ∨ b ⟺ a ∨ b ≰ a ∧ b

Suppose Spec(ϕ) is surjective. If a ≠ b in T, let a′ = ϕ(a), b′ = ϕ(b), and let ψ ∈ Spec(T) be such that ψ(a ∨ b) = 1 and ψ(a ∧ b) = 0. Since Spec(ϕ) is surjective, there exists ψ′ ∈ Spec(T′) such that ψ = ψ′ϕ, hence ψ′(a′ ∨ b′) = 1 and ψ′(a′ ∧ b′) = 0, hence a′ ∨ b′ ≰ a′ ∧ b′ and a′ ≠ b′.

Suppose ϕ is injective. We identify T with a distributive sublattice of T′. If ψ ∈ Spec(T), let I = ψ^{−1}(0) and F = ψ^{−1}(1). Then (I, F) cannot collapse in T′, for that would make it collapse in T. Hence there exists ψ′ ∈ Spec(T′) such that ψ′(I) = 0 and ψ′(F) = 1, which means ψ = ψ′ϕ. □
Naturally, it is difficult to attribute a computational content (other than Theorem 2.5) to the three preceding results. An intuitive solution is to say that one surely runs no risk of error in proceeding as if a given distributive lattice T were a distributive sublattice of a lattice of subsets of a set. The aim of Hilbert's program is to give a precise content to this intuitive assertion: what exactly should be understood by "surely running no risk of error" and by "proceeding as if"?
2.2 Distributive lattices and entailment relations

An interesting way to approach the question of distributive lattices defined by generators and relations is to consider the relation A ⊢ B defined on the set Pf(T) of finite subsets of a distributive lattice T. Indeed, if S ⊆ T generates T as a lattice, then knowing the relation ⊢ on Pf(S) suffices to characterize the lattice T unambiguously, since every formula over S can be rewritten, at will, in conjunctive normal form (an inf of sups in S) or in disjunctive normal form (a sup of infs in S). Hence, to compare two elements of the lattice generated by S, one writes the first in disjunctive normal form and the second in conjunctive normal form, and remarks that

⋁_{i∈I} ⋀A_i ≤ ⋀_{j∈J} ⋁B_j ⟺ &_{(i,j)∈I×J} (A_i ⊢ B_j)
Definition 2.9 For an arbitrary set S, a relation on Pf(S) which is reflexive, monotone and transitive (see the rules (R), (M), (T) above) is called an entailment relation.

The origin of entailment relations lies in Gentzen's sequent calculus, which first put the emphasis on the rule (T) (the cut). The connection with distributive lattices has been brought out in [2, 4]. The following theorem (cf. [2]) is fundamental. It says that the three properties of entailment relations are exactly what is needed for the interpretation as a distributive lattice to be adequate.
Theorem 2.10 (Fundamental theorem of entailment relations) Let S be a set with an entailment relation ⊢_S on Pf(S). Consider the distributive lattice T defined by generators and relations as follows: the generators are the elements of S and the relations are the

A ⊢_T B

whenever A ⊢_S B. Then, for all finite subsets A and B of S, we have

A ⊢_T B ⟹ A ⊢_S B

Proof.
We give one possible explicit description of the distributive lattice T. The elements of T are represented by finite sets of finite sets of elements of S,

X = {A1, ..., An}

(intuitively, X represents ⋀A1 ∨ ⋯ ∨ ⋀An). We then define inductively the relation A ≺ Y for A ∈ Pf(S) and Y ∈ T (intuitively, ⋀A ≤ ⋁_{C∈Y} (⋀C)) as follows:
— if B ∈ Y and B ⊆ A, then A ≺ Y;
— if A ⊢_S y1, ..., ym and A, y_j ≺ Y for j = 1, ..., m, then A ≺ Y.

One shows easily that if A ≺ Y and A ⊆ A′ then A′ ≺ Y. One deduces that A ≺ Z if A ≺ Y and B ≺ Z for every B ∈ Y. One can then define X ≤ Y by: A ≺ Y for every A ∈ X, and one checks that T is then a distributive lattice (4) for the operations

0 = ∅,   1 = {∅},   X ∨ Y = X ∪ Y,   X ∧ Y = {A ∪ B | A ∈ X, B ∈ Y}

For this one shows that if C ≺ X and C ≺ Y then C ≺ X ∧ Y, by induction on the proofs of C ≺ X and C ≺ Y.

One then remarks that if A ⊢_S y1, ..., ym and A, y_j ⊢_S B for every j, then A ⊢_S B, using the cut rule m times. It follows that if A ⊢_T B, that is, A ≺ {{b} | b ∈ B}, then A ⊢_S B. □

4. T is in fact the quotient of Pf(Pf(S)) by the equivalence relation: X ≤ Y and Y ≤ X.
As a first application, we may cite the description of the Boolean lattice generated by a distributive lattice. Recall that a Boolean lattice is a distributive lattice equipped with a law x ↦ x̄ satisfying, for every x, x ∧ x̄ = 0 and x ∨ x̄ = 1. The map x ↦ x̄ is then an isomorphism of the lattice onto its dual.

Proposition 2.11 Let T be a distributive lattice (≠ 1). There exists a Boolean lattice generated by T. It can be described as the distributive lattice generated by the set T1 = T ∪ T̄ (5), equipped with the entailment relation ⊢_{T1} defined as follows: if A, B, A′, B′ are four finite subsets of T, then

A, B̄ ⊢_{T1} A′, B̄′ ⟺(def) A, B′ ⊢ A′, B in T

(where B̄ denotes the corresponding finite subset of the copy T̄). If we write T_Bool for this lattice (which is Boolean), then T1 injects naturally into T_Bool, and the entailment relation associated with T_Bool induces on T1 the relation ⊢_{T1}.

Proof.
See [2]. □

Note that by Theorem 2.10 we have x ⊢_T y if and only if x ⊢_{T1} y, so the canonical homomorphism T → T_Bool is injective and T can be identified with a sublattice of T_Bool.

5. T̄ is a copy of T disjoint from T.
2.3 Krull dimension of distributive lattices

To develop a constructive theory of the Krull dimension of a distributive lattice T, one must find a constructive substitute for increasing chains of prime ideals. This can be done along the same lines as what was developed for commutative rings in section 1, or else by using an idea of Joyal: construct a universal lattice Kr_ℓ(T) attached to T such that the points of Spec(Kr_ℓ(T)) are (in a natural way) the chains of prime ideals of length ℓ. We shall present these two points of view and show that they are isomorphic.

Partially specified chains of prime ideals
Definition 2.12 In a distributive lattice T:
— A partial specification for a chain of prime ideals (which we abbreviate to idealistic chain) is defined as follows. An idealistic chain of length ℓ is a list of ℓ + 1 idealistic primes of T: C = ((J0, U0), ..., (J_ℓ, U_ℓ)). The idealistic chain is called finite if all the subsets are finite. An idealistic chain of length 0 is nothing other than an idealistic prime.
— An idealistic chain is called saturated if the (J_i, U_i) are pairs of conjugate ideals and filters, and if the relations J_i ⊆ J_{i+1} and U_{i+1} ⊆ U_i (i = 0, ..., ℓ − 1) hold.
— An idealistic chain C′ = ((J0′, U0′), ..., (J_ℓ′, U_ℓ′)) is called a refinement of the idealistic chain C = ((J0, U0), ..., (J_ℓ, U_ℓ)) if J_k ⊆ J_k′ and U_k ⊆ U_k′.
— We say that an idealistic chain C collapses if the only saturated idealistic chain refining C is the trivial idealistic chain ((T, T), ..., (T, T)).

Lemma 2.13 An idealistic chain C = ((J0, U0), ..., (J_ℓ, U_ℓ)) in which U_h′ ⊢ J_h′ holds for some index h, with U_h′ ∈ Pf(U_h) and J_h′ ∈ Pf(J_h) (in particular, if U_h ∩ J_h ≠ ∅), collapses.

Proof.
Let ((I0, F0), ..., (I_ℓ, F_ℓ)) be a saturated idealistic chain refining C. Since the idealistic prime (I_h, F_h) collapses and I_h and F_h are conjugate, we have 1 ∈ I_h and 0 ∈ F_h. For every index j > h we thus have 1 ∈ I_j, hence 0 ∈ F_j. Similarly, for every index j < h we have 0 ∈ F_j, hence 1 ∈ I_j. □
In the following theorem, points (3) and (2) are analogous to (1) and (2) in Theorem 1.14.

Theorem 2.14 (Simultaneous collapse for idealistic chains in distributive lattices)
Let C = ((J0, U0), ..., (J_ℓ, U_ℓ)) be an idealistic chain in a distributive lattice T.
(1) The idealistic chain C collapses if and only if there exist x1, ..., x_ℓ ∈ T and a finite idealistic chain C′ = ((J0′, U0′), ..., (J_ℓ′, U_ℓ′)) of which C is a refinement, with the following relations in T (where ⊢ is the entailment relation of T):

x1, U0′ ⊢ J0′
x2, U1′ ⊢ J1′, x1
⋮
x_ℓ, U′_{ℓ−1} ⊢ J′_{ℓ−1}, x_{ℓ−1}
U_ℓ′ ⊢ J_ℓ′, x_ℓ

(2) The idealistic chain C generates a minimum saturated idealistic chain. The latter is obtained by adding to U_i (resp. J_i) every element a ∈ T such that the idealistic chain ((J0, U0), ..., (J_i ∪ {a}, U_i), ..., (J_ℓ, U_ℓ)) (resp. ((J0, U0), ..., (J_i, U_i ∪ {a}), ..., (J_ℓ, U_ℓ))) collapses.
(3) Let x ∈ T. If the idealistic chains ((J0, U0), ..., (J_i ∪ {x}, U_i), ..., (J_ℓ, U_ℓ)) and ((J0, U0), ..., (J_i, U_i ∪ {x}), ..., (J_ℓ, U_ℓ)) both collapse, then C collapses as well.
Preuve.
Voyons d’abord les deux premiers points. Il n’est pas vraiment restrictif de supposer que la
chaı̂ne idéale C est finie, car une fois le résultat établi dans ce cas, on passe au cas général en
regardant la chaı̂ne idéale donnée comme limite inductive des chaı̂nes idéales finies dont elle
est un raffinement. Nous supposons donc C finie. Nous remarquons alors que nous pouvons
systématiquement remplacer Ui0 par Ui et Ji0 par Ji . Soit C1 = ((I0 , F0 ), . . . , (I` , F` )) la chaı̂ne
idéale construite en (2). On va démontrer
(α) Si C vérifie les inégalités données en (1) toute chaı̂ne idéale saturée qui raffine C est
triviale (i.e., C collapse).
(β) La chaı̂ne idéale C1 est bien saturée.
(γ) Toute chaı̂ne idéale saturée qui raffine C raffine également C1 .
(δ) Si C1 est triviale, C vérifie les inégalités données en (1).
Cela suffira bien à montrer (1) et (2).
(α) Soit ((I00 , F00 ), . . . , (I`0 , F`0 )) une chaı̂ne idéale saturée qui raffine C. Considérons les inégalités
données dans (1)
x1 , U0 ` J0
x2 , U1 ` J1 , x1
..
..
..
.
.
.
x` , U`−1
U`
` J`−1 , x`−1
` J` , x`
Puisque I00 et F00 sont conjugués, la première inégalité donne x1 ∈ I00 . Donc x1 ∈ I10 , Donc la
deuxième inégalité donne x2 ∈ I10 . En poursuivant, on obtient à la dernière inégalité U` ` J` , x` .
avec x` ∈ I`0 , d’où le collapsus (lemme 2.13).
(β) We give the proof for ℓ = 3. Let us first show that the Ij are ideals; we do it for j = 1. To show that 0 ∈ I1 we take x1 = 0, x2 = x3 = 1. Likewise, to show that J1 ⊆ I1 we take x ∈ J1 and x1 = 0, x2 = x3 = 1. That x ∈ I1 and y ≤ x imply y ∈ I1 is immediate: keep the same xi. Now suppose x, y ∈ I1 and let us show that x ∨ y ∈ I1. By hypothesis we have elements xi and yi satisfying the following inequalities:

x1, U0 ⊢ J0                y1, U0 ⊢ J0
x2, U1, x ⊢ J1, x1         y2, U1, y ⊢ J1, y1
x3, U2 ⊢ J2, x2            y3, U2 ⊢ J2, y2
U3 ⊢ J3, x3                U3 ⊢ J3, y3

Using distributivity, we deduce

(x1 ∨ y1), U0 ⊢ J0
(x2 ∧ y2), U1, (x ∨ y) ⊢ J1, (x1 ∨ y1)
(x3 ∧ y3), U2 ⊢ J2, (x2 ∧ y2)
U3 ⊢ J3, (x3 ∧ y3)
Let us now show that the ideals and filters are conjugate, for instance I1 and F1. We assume x ∧ y ∈ I1 and y ∈ F1, and we show x ∈ I1. By hypothesis we have elements xi and yi satisfying the following inequalities:

x1, U0 ⊢ J0                  y1, U0 ⊢ J0
x2, U1, (x ∧ y) ⊢ J1, x1     y2, U1 ⊢ J1, y1, y
x3, U2 ⊢ J2, x2              y3, U2 ⊢ J2, y2
U3 ⊢ J3, x3                  U3 ⊢ J3, y3

Using distributivity, we deduce

(x1 ∨ y1), U0 ⊢ J0
(x2 ∧ y2), U1, x, y ⊢ J1, (x1 ∨ y1)
(x2 ∧ y2), U1, x ⊢ J1, (x1 ∨ y1), y
(x3 ∧ y3), U2 ⊢ J2, (x2 ∧ y2)
U3 ⊢ J3, (x3 ∧ y3)

Inequalities no. 2 and 3 give, by cut,

(x2 ∧ y2), U1, x ⊢ J1, (x1 ∨ y1)

and the proof is complete.
(γ) We give the proof for ℓ = 3. Let ((I0′, F0′), …, (I3′, F3′)) be a saturated ideal chain refining C. Let us show for instance that I1 ⊆ I1′. Let x ∈ I1; we thus have

x1, U0 ⊢ J0
x, x2, U1 ⊢ J1, x1
x3, U2 ⊢ J2, x2
U3 ⊢ J3, x3

From this we deduce successively x1 ∈ I0′ ⊆ I1′, x3 ∈ F3′ ⊆ F2′, x2 ∈ F2′ ⊆ F1′, and finally x ∈ I1′. Note that the proof of item (α) can be seen as a particular case of that of item (γ).
(δ) is immediate.
Finally let us prove (3). We have x ∈ Ii and x ∈ Fi, so C1 collapses (Lemma 2.13). Hence C collapses.
□
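As an illustration of (1), take ℓ = 1 and, for an arbitrary a ∈ T, the ideal chain C = (({a}, ∅), (∅, {a})), which partially specifies primes P0 ⊆ P1 with a ∈ P0 and a ∉ P1. With C′ = C and x1 = a, the two required relations both read a ⊢ a, so C collapses, as expected.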
Definition 2.15
— Two ideal chains generating the same saturated ideal chain are said to be equivalent.
— An ideal chain equivalent to a finite ideal chain is called an ideal chain of finite type.
— An ideal chain is said to be strict if one has Fi ∩ Ii+1 ≠ ∅ (i = 0, …, ℓ−1) in the saturated ideal chain ((I0, F0), …, (Iℓ, Fℓ)) that it generates.
— A saturated ideal chain C = ((J0, U0), …, (Jℓ, Uℓ)) is said to be frozen if it does not collapse and if Ji ∪ Ui = T for i = 0, …, ℓ. An ideal chain is said to be frozen if its saturation is frozen. Giving a (strict) frozen ideal chain thus amounts to giving a (strictly) increasing chain of detachable prime ideals.
We view an ideal chain of length ℓ as a partial specification for an increasing chain of prime ideals P0, …, Pℓ satisfying Ji ⊆ Pi, Ui ∩ Pi = ∅ (i = 0, …, ℓ). From the simultaneous collapse theorem one deduces the following result, which justifies this idea of partial specification.
Theorem 2.16 (Formal Nullstellensatz for chains of prime ideals in a distributive lattice) Gödel's completeness theorem implies the following result. Let T be a distributive lattice and ((J0, U0), …, (Jℓ, Uℓ)) an ideal chain in T. The following properties are equivalent:
(a) There exist ℓ+1 prime ideals P0 ⊆ ⋯ ⊆ Pℓ such that Ji ⊆ Pi, Ui ∩ Pi = ∅ (i = 0, …, ℓ).
(b) The ideal chain does not collapse.
The proof is identical to that of Theorem 1.17.
Joyal's theory
Joyal's idea is to construct a universal lattice Krℓ(T) attached to T such that the points of Spec(Krℓ(T)) are (in a natural way) the increasing chains of prime ideals of length ℓ.
Giving such a chain amounts to giving homomorphisms µ0 ≥ µ1 ≥ ⋯ ≥ µℓ from T to 2.
So if we have a distributive lattice K and ℓ+1 homomorphisms ϕ0 ≥ ϕ1 ≥ ⋯ ≥ ϕℓ from T to K such that, for every lattice T′ and all ψ0 ≥ ψ1 ≥ ⋯ ≥ ψℓ ∈ Hom(T, T′), there is a unique homomorphism η : K → T′ satisfying ηϕ0 = ψ0, ηϕ1 = ψ1, …, ηϕℓ = ψℓ, then the elements of Spec(K) are naturally identified with the chains of prime ideals of length ℓ in T.
The advantage is that K is always an object that can be constructed explicitly from T, unlike prime ideals and spectra.
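For instance, for ℓ = 0 this universal problem is solved by T itself with ϕ0 = IdT: any ψ0 ∈ Hom(T, T′) factors as ψ0 ∘ IdT, so Kr0(T) ≃ T. The construction is non-trivial only for ℓ ≥ 1.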
The fact that a universal object Krℓ(T) always exists constructively follows from general considerations of universal algebra. The explicit description of Krℓ(T) was simplified by the consideration of entailment relations ([2]). More precisely, we have the following theorem.
Theorem 2.17 Let T be a distributive lattice. Consider the following universal problem, which we call the “Krull problem”: find a distributive lattice K and ℓ+1 homomorphisms ϕ0 ≥ ϕ1 ≥ ⋯ ≥ ϕℓ from T to K such that, for every lattice T′ and all ψ0 ≥ ψ1 ≥ ⋯ ≥ ψℓ ∈ Hom(T, T′), there is a unique homomorphism η : K → T′ satisfying ηϕ0 = ψ0, ηϕ1 = ψ1, …, ηϕℓ = ψℓ. This universal problem has a solution (unique up to unique isomorphism). We write Krℓ(T) for the corresponding distributive lattice. It can be described as the lattice generated by the disjoint union S of ℓ+1 copies of T (we write ϕi for the bijection from T onto the copy indexed by i), equipped with the entailment relation ⊢S defined as follows. If Ui and Ji (i = 0, …, ℓ) are finite subsets of T, we have

ϕ0(U0), …, ϕℓ(Uℓ) ⊢S ϕ0(J0), …, ϕℓ(Jℓ)

if and only if there exist x1, …, xℓ ∈ T with the following relations in T (where ⊢ is the entailment relation of T):

x1, U0 ⊢ J0
x2, U1 ⊢ J1, x1
⋮
xℓ, Uℓ−1 ⊢ Jℓ−1, xℓ−1
Uℓ ⊢ Jℓ, xℓ
Proof.
We first show that the relation ⊢S defined on Pf(S) in the theorem is indeed an entailment relation. The only delicate point is the cut rule. To simplify notation we take ℓ = 3. There are then 3 possible cases; we analyse one of them, where X, ϕ1(z) ⊢S Y and X ⊢S Y, ϕ1(z), the other cases being similar. By hypothesis we have x1, x2, x3, y1, y2, y3 such that

x1, U0 ⊢ J0               y1, U0 ⊢ J0
x2, U1, z ⊢ J1, x1        y2, U1 ⊢ J1, y1, z
x3, U2 ⊢ J2, x2           y3, U2 ⊢ J2, y2
U3 ⊢ J3, x3               U3 ⊢ J3, y3

The two entailment relations on the second line give

x2, y2, U1, z ⊢ J1, x1, y1
x2, y2, U1 ⊢ J1, x1, y1, z

hence, by cut,

x2, y2, U1 ⊢ J1, x1, y1

that is,

x2 ∧ y2, U1 ⊢ J1, x1 ∨ y1.

Finally, using distributivity,

(x1 ∨ y1), U0 ⊢ J0
(x2 ∧ y2), U1 ⊢ J1, (x1 ∨ y1)
(x3 ∧ y3), U2 ⊢ J2, (x2 ∧ y2)
U3 ⊢ J3, (x3 ∧ y3)

and therefore ϕ0(U0), …, ϕ3(U3) ⊢S ϕ0(J0), …, ϕ3(J3).
It remains to see that the lattice Krℓ(T) defined from (S, ⊢S) does satisfy the desired universal property. The solution of the stated universal problem exists for general reasons of universal algebra, and we know that Krℓ(T) is necessarily generated by S. It therefore suffices to define the least constraining entailment relation possible (with the condition that the ϕi form a monotone sequence of homomorphisms ϕ0 ≥ ϕ1 ≥ ⋯ ≥ ϕℓ). Now, the relation that has been defined is clearly a necessary condition. Since it is an entailment relation, it is indeed the least constraining one possible.
□
Note that the homomorphisms ϕi are injective: one easily sees that, for a, b ∈ T, the relation ϕi(a) ⊢S ϕi(b) implies a ⊢ b, so ϕi(a) = ϕi(b) implies a = b.
Comparison of the two points of view
One is struck by the resemblance between the proofs of Theorems 2.14 and 2.17. In fact, these two theorems put together show that an ideal chain C = ((J0, U0), …, (Jℓ, Uℓ)) collapses in T if and only if the prime pair P = (ϕ0(J0), …, ϕℓ(Jℓ); ϕ0(U0), …, ϕℓ(Uℓ)) collapses in Krℓ(T). This is no accident: given the universal property defining Krℓ(T), giving a detachable prime ideal of Krℓ(T) amounts to the same thing as giving an increasing chain of detachable prime ideals of T (of length ℓ). One may therefore consider that one proof too many has been given.
In classical mathematics, one could organize things as follows. One would define a priori the collapse of a prime pair (resp. of an ideal chain) as meaning the impossibility of refining the prime pair into a prime ideal (resp. the ideal chain into an increasing chain of prime ideals). The simultaneous collapse theorems (Theorems 2.5 (1) and 2.14 (3)) are immediate in this setting. Moreover, the algebraic characterization of the collapse of a prime pair (J, U) (namely U0 ⊢ J0 for some finite subsets U0 ⊆ U and J0 ⊆ J) is easy to establish. The description of Krℓ(T) given in Theorem 2.17 therefore implies (taking into account the algebraic characterization of the collapse of a prime pair) the algebraic characterization of the collapse of an ideal chain, i.e., item (1) of Theorem 2.14.
In constructive mathematics, we defined the collapse of a prime pair (resp. of an ideal chain) as meaning the impossibility of refining the prime pair into a non-trivial saturated prime pair (6) (resp. the ideal chain into a non-trivial saturated ideal chain). In order to derive the algebraic characterization of the collapse of an ideal chain from the algebraic characterization of the collapse of a prime pair and from the description of Krℓ(T) (which would avoid doing “the same proof twice”), it suffices to explain how the data of a saturated ideal chain ((I0, F0), …, (Iℓ, Fℓ)) of T produces a sequence ψ0 ≥ ⋯ ≥ ψℓ of homomorphisms from T to a distributive lattice with ψk⁻¹(0) = Ik and ψk⁻¹(1) = Fk (k = 0, …, ℓ). For this it suffices to apply the following lemma.
Lemma 2.18 Let C = ((I0, F0), …, (Iℓ, Fℓ)) be a saturated ideal chain in a distributive lattice T. Let TC be the quotient distributive lattice of Krℓ(T) by ϕ0(I0) = ⋯ = ϕℓ(Iℓ) = 0, ϕ0(F0) = ⋯ = ϕℓ(Fℓ) = 1. Let π be the canonical projection of Krℓ(T) onto TC and ψk = π ∘ ϕk. Then ψk⁻¹(0) = Ik and ψk⁻¹(1) = Fk (k = 0, …, ℓ).
Proof.
For instance ψk⁻¹(0) = {x ∈ T ; ϕk(x) =TC 0} is, by Proposition 2.3, equal to

{x ∈ T ; ϕk(x), ϕ0(F0), …, ϕℓ(Fℓ) ⊢Krℓ(T) ϕ0(I0), …, ϕℓ(Iℓ)},

that is, the set of x such that there exist x1, …, xℓ with (below, ⊢ is the entailment relation of T)

x1, F0 ⊢ I0
x2, F1 ⊢ I1, x1
⋮
x, xk+1, Fk ⊢ Ik, xk
⋮
xℓ, Fℓ−1 ⊢ Iℓ−1, xℓ−1
Fℓ ⊢ Iℓ, xℓ

Since the ideal chain C is saturated, we obtain step by step x1 ∈ I1 ⊆ I2, x2 ∈ I2, …, xk ∈ Ik, and xℓ ∈ Fℓ, …, xk+1 ∈ Fk+1 ⊆ Fk, whence finally x ∈ Ik.
□
6. Except that the double negation (…impossibility …non-trivial) in this sentence is, naturally, read in the form of an explicit affirmation.
A constructive definition of the dimension of a distributive lattice
Since an ideal chain C = ((J0, U0), …, (Jℓ, Uℓ)) collapses in T if and only if the prime pair P = (ϕ0(J0), …, ϕℓ(Jℓ); ϕ0(U0), …, ϕℓ(Uℓ)) collapses in Krℓ(T), the two variants in the definition below of the dimension of a distributive lattice are indeed equivalent.
Definition 2.19 1) An elementary ideal chain in a distributive lattice T is an ideal chain of the following form

((0, x1), (x1, x2), …, (xℓ, 1))

(with the xi in T).
2) A distributive lattice T is said to be of dimension ≤ ℓ−1 if it satisfies one of the following equivalent conditions:
— Every elementary ideal chain of length ℓ collapses.
— For every sequence x1, …, xℓ ∈ T one has

ϕ0(x1), …, ϕℓ−1(xℓ) ⊢ ϕ1(x1), …, ϕℓ(xℓ)

in Krℓ(T).
The condition in (2) says: for all x1, …, xℓ ∈ T there exist a1, …, aℓ ∈ T such that

a1, x1 ⊢ 0
a2, x2 ⊢ a1, x1
⋮
aℓ, xℓ ⊢ aℓ−1, xℓ−1
1 ⊢ aℓ, xℓ

In particular, the distributive lattice T is of dimension ≤ −1 if and only if 1 = 0 in T, and of dimension ≤ 0 if and only if T is a Boolean algebra (every element of T has a complement).
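Indeed, for ℓ = 1 the condition above asks, for each x1 ∈ T, for an a1 ∈ T with a1, x1 ⊢ 0 and 1 ⊢ a1, x1, i.e., a1 ∧ x1 = 0 and a1 ∨ x1 = 1: such an a1 is precisely a complement of x1.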
In the setting of distributive lattices we shall give no definition of dim(T) < ℓ nor of dim(T) ≥ ℓ, and we content ourselves with saying that dim(T) > ℓ means the impossibility of dim(T) ≤ ℓ. One may introduce the same refinements as in Section 1.3 when T carries an inequality relation that is not mere impossibility of equality.
The second variant is rather pleasant in that it easily yields the following simpler characterization.
Lemma 2.20 A distributive lattice T generated by a subset S is of dimension ≤ ℓ−1 if and only if for every sequence x1, …, xℓ ∈ S

ϕ0(x1), …, ϕℓ−1(xℓ) ⊢ ϕ1(x1), …, ϕℓ(xℓ)

in Krℓ(T).
Indeed, the distributivity rules allow one, for instance, to deduce a ∨ a′, A ⊢ b ∨ b′, B from a, A ⊢ b, B and a′, A ⊢ b′, B. Moreover, every element of T is an inf of sups of elements of S.
Note the analogy between the formulation of this condition and the definition of a pseudo-regular sequence 1.18.
Connections with Joyal's definition
Let T be a distributive lattice. Joyal [6] gives the following definition of dim(T) ≤ ℓ−1. Let ϕ^ℓ_i : T → Krℓ(T) be the ℓ+1 universal morphisms. By the universality of Krℓ+1(T), we obtain ℓ+1 morphisms σi : Krℓ+1(T) → Krℓ(T) such that σi ∘ ϕ^{ℓ+1}_j = ϕ^ℓ_j if j ≤ i and σi ∘ ϕ^{ℓ+1}_j = ϕ^ℓ_{j−1} if j > i. Joyal then defines dim(T) ≤ ℓ by the fact that (σ0, …, σℓ) : Krℓ+1(T) → Krℓ(T)^{ℓ+1} is injective. This definition can be motivated by Proposition 2.8: the elements in the image of Sp(σi) are the chains of prime ideals (α0, …, αℓ) with αi = αi+1, and Sp(σ0, …, σℓ) is surjective if and only if for every chain (α0, …, αℓ) there exists i < ℓ such that αi = αi+1. This says exactly that there are no non-trivial chains of prime ideals of length ℓ+1. Using Gödel's completeness theorem, one thus sees the equivalence with Definition 2.19. Here we shall prove this equivalence with Definition 2.19 directly.
Theorem 2.21 We have dim(T) ≤ ℓ, in the sense of Definition 2.19, if and only if (σ0, …, σℓ) : Krℓ+1(T) → Krℓ(T)^{ℓ+1} is injective.
Proof.
To simplify notation we restrict to the case ℓ = 2, and we write ϕi for ϕ²_i and ψi for ϕ³_i. Note that the maps σi : Kr³(T) → Kr²(T) correspond, in terms of prime ideals, to the maps

(α1, α2, α3) ↦ (α1, α1, α2, α3), (α1, α2, α2, α3), (α1, α2, α3, α3)

and that, classically, this map is surjective onto the increasing chains of prime ideals iff dim(T) ≤ 2.
If (σ0, σ1, σ2) is injective, one shows that

ψ0(x1), ψ1(x2), ψ2(x3) ⊢ ψ1(x1), ψ2(x2), ψ3(x3)

for every sequence x1, x2, x3 by observing that for each i one has

σiψ0(x1), σiψ1(x2), σiψ2(x3) ⊢ σiψ1(x1), σiψ2(x2), σiψ3(x3)

since σiψi+1(xi+1) = ϕi(xi+1) = σiψi(xi+1).
Conversely, suppose that

ψ0(x1), ψ1(x2), ψ2(x3) ⊢ ψ1(x1), ψ2(x2), ψ3(x3)

for every sequence x1, x2, x3, and let us show that (σ0, σ1, σ2) is injective. We must show that X ⊢ Y holds in Kr³(T) iff σi(X) ⊢ σi(Y) holds in Kr²(T) for each i. Since Kr³(T) is generated by ∪ψi(T), it suffices in fact to show that if

σiψ0(a0), …, σiψ3(a3) ⊢ σiψ0(b0), …, σiψ3(b3)

for each i, then

ψ0(a0), …, ψ3(a3) ⊢ ψ0(b0), …, ψ3(b3).

For this, note that by hypothesis we have

ψ0(a1), …, ψ2(a3) ⊢ ψ1(a1), …, ψ3(a3)

and hence it suffices to show

ψ0(a0), …, ψi−1(ai−1), ψi+1(ai ∧ ai+1), …, ψ3(a3) ⊢ ψ0(b0), …, ψ3(b3)

for each i = 1, 2, 3. Let us for instance treat the case i = 1, namely

ψ0(a0), ψ2(a1 ∧ a2), ψ3(a3) ⊢ ψ0(b0), …, ψ3(b3).

By hypothesis we have

σ1ψ0(a0), …, σ1ψ3(a3) ⊢ σ1ψ0(b0), …, σ1ψ3(b3)

which reads

ϕ0(a0), ϕ1(a1), ϕ1(a2), ϕ2(a3) ⊢ ϕ0(b0), ϕ1(b1), ϕ1(b2), ϕ2(b3)

that is,

ϕ0(a0), ϕ1(a1 ∧ a2), ϕ2(a3) ⊢ ϕ0(b0), ϕ1(b1 ∨ b2), ϕ2(b3).

By the universality of ϕ0 ≥ ϕ1 ≥ ϕ2 applied to ψ0 ≥ ψ2 ≥ ψ3, this yields

ψ0(a0), ψ2(a1 ∧ a2), ψ3(a3) ⊢ ψ0(b0), ψ2(b1 ∨ b2), ψ3(b3)

and since ψ2(b1) ⊢ ψ1(b1) this indeed gives

ψ0(a0), ψ2(a1 ∧ a2), ψ3(a3) ⊢ ψ0(b0), ψ1(b1), ψ2(b2), ψ3(b3).
□
Connections with Español's work
Let T be a distributive lattice. Español [6] gives an elegant characterization of dim(T) ≤ ℓ−1 in terms of the Boolean algebra generated by T. The aim of this section is to present this characterization and to prove its equivalence with Definition 2.19.
Definition 2.19 says that dim(T) ≤ ℓ−1 if and only if every elementary ideal chain (x1, …, xℓ) collapses. The following lemma says that one may restrict to elementary ideal chains (x1, …, xℓ) with x1 ≥ x2 ≥ ⋯ ≥ xℓ.
Lemma 2.22 If for every sequence x1 ≥ x2 ≥ ⋯ ≥ xℓ the elementary ideal chain (x1, …, xℓ) collapses, then dim(T) ≤ ℓ−1.
Proof.
HUM: I still believe that left and right must be swapped here, and that a few more details should be given; in any case I have not understood this.
Let y1, …, yℓ be an arbitrary sequence. Consider x1 = y1, x2 = y1 ∧ y2, … By hypothesis we have a1, …, aℓ such that 1 = x1 ∨ a1, 0 = xℓ ∧ aℓ and xi ∧ ai ≤ xi+1 ∨ ai+1. If we take b1 = a1, b2 = a2 ∧ y1, …, we then have 1 = y1 ∨ b1, 0 = yℓ ∧ bℓ and yi ∧ bi ≤ yi+1 ∨ bi+1, so the elementary ideal chain (y1, …, yℓ) collapses.
□
We now describe the characterization presented in [6]. Let B be the Boolean algebra generated by T. Recall [16] that every element of B can be described as a finite union ⋁ (ai − bi) of formal differences, with ai+1 ≤ bi ≤ ai ∈ T. In general, nothing can be said about the minimal length of such a sequence.
Español [6] gives the following characterization of dim(T) ≤ ℓ (in Joyal's sense).
We have dim(T) ≤ 2k+1 if and only if every element of B can be written ⋁_{1≤i≤k} (ai − bi), and dim(T) ≤ 2k if and only if every element of B can be written a ∨ ⋁_{1≤i≤k−1} (ai − bi).
To simplify notation, if a1, …, aℓ ∈ T we shall write

(a1 − a2) ∨ (a3 − a4) ∨ ⋯

for

(a1 − a2) ∨ (a3 − a4) ∨ ⋯ ∨ (a2k−1 − a2k)

if ℓ = 2k, and for

(a1 − a2) ∨ (a3 − a4) ∨ ⋯ ∨ (a2k−1 − a2k) ∨ a2k+1

if ℓ = 2k+1.
With this notation, Español's condition becomes: dim(T) ≤ ℓ if and only if every element of B can be written

(a1 − a2) ∨ ⋯

for some sequence a1, …, aℓ in T. To prove the equivalence between these two characterizations, we shall use the following facts.
Lemma 2.23 If x1 ≥ … ≥ xℓ and a1 ≥ … ≥ aℓ satisfy

(1 − x1) ∨ (x2 − x3) ∨ ⋯ = (a1 − a2) ∨ (a3 − a4) ∨ ⋯

then

1 = x1 ∨ a1,  0 = xℓ ∧ aℓ,  xi ∧ ai ≤ xi+1 ∨ ai+1.

Proof.
Straightforward verification.
□
Lemma 2.24 If x1 ≥ x2 ≥ … and a1, a2, … satisfy

1 = x1 ∨ a1,  0 = xℓ ∧ aℓ,  xi ∧ ai ≤ xi+1 ∨ ai+1

and we set b1 = a1 ∨ x2, b2 = ((a1 ∧ a2) ∧ x1) ∨ x3, …, then

1 = x1 ∨ b1,  0 = xℓ ∧ bℓ,  xi ∧ bi ≤ xi+1 ∨ bi+1

and b1 ≥ b2 ≥ …
Proof.
Straightforward verification. Note that indeed b1 ≥ b2 ≥ …
□
Theorem 2.25 We have dim(T) ≤ ℓ−1, in the sense of Definition 2.19, if and only if every element of B can be written

(a1 − a2) ∨ ⋯

for some sequence a1 ≥ ⋯ ≥ aℓ in T.
Proof.
Suppose dim(T) ≤ ℓ−1 in the sense of Definition 2.19. In general, if x1 ≥ x2 ≥ … we have

(x1 − x2) ∨ (x3 − x4) ∨ … = x1 − ((x2 − x3) ∨ …)

and it suffices to show that for every sequence x0 ≥ x1 ≥ ⋯ ≥ xℓ one can find a1 ≥ … ≥ aℓ such that

(x0 − x1) ∨ (x2 − x3) ∨ … = (a1 − a2) ∨ …

Since (x0 − x1) ∨ (x2 − x3) ∨ … = x0 ∧ ((1 − x1) ∨ (x2 − x3) ∨ …), we reduce to the case x0 = 1, which follows from Lemma 2.24.
The converse follows directly from Lemma 2.23.
□
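As a consistency check, for ℓ = 1 the expression (a1 − a2) ∨ ⋯ reduces to the single element a1, so the theorem says that dim(T) ≤ 0 if and only if every element of B already lies in T, i.e., B = T; and B = T holds exactly when every element of T has a complement, in agreement with the remark following Definition 2.19.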
3  Zariski and Krull lattices of a commutative ring
The Zariski lattice of a commutative ring
In a commutative ring A, the Zariski lattice Zar(A) has as its elements the radicals of finitely generated ideals (the order relation is inclusion). It is well defined as a lattice. In other words, √I1 = √J1 and √I2 = √J2 imply √(I1I2) = √(J1J2) (which defines √I1 ∧ √I2) and √(I1 + I2) = √(J1 + J2) (which defines √I1 ∨ √I2). The Zariski lattice of A is always a distributive lattice, but in general equality in it is not testable. Nevertheless, an inclusion √I1 ⊆ √I2 can be certified in a finite way if the ring A is discrete. This lattice contains all the information needed to develop, from the constructive point of view, what corresponds to the abstract, non-constructive theory of the Zariski spectrum.
We write ã for √⟨a⟩. For a subset S of A we write S̃ for the subset of Zar(A) formed by the s̃ for s ∈ S. We have ã1 ∨ ⋯ ∨ ãm = √⟨a1, …, am⟩ and ã1 ∧ ⋯ ∧ ãm = (a1 ⋯ am)~.
Let U and J be two finite families in A. We have

U ⊢Zar(A) J  ⟺  ∏_{u∈U} u ∈ √⟨J⟩  ⟺  M(U) ∩ ⟨J⟩ ≠ ∅,

that is,

(J, U) collapses in A  ⟺  (J̃, Ũ) collapses in Zar(A).

This suffices to describe this lattice. More precisely, we have:
Proposition 3.1 The lattice Zar(A) of a commutative ring A is (up to isomorphism) the lattice generated by (A, ⊢), where ⊢ is the smallest entailment relation satisfying

0A ⊢        ⊢ 1A
x, y ⊢ xy    xy ⊢ x
x + y ⊢ x, y

Proof.
It is clear that the relation U ⊢ J defined by “M(U) meets ⟨J⟩” satisfies these axioms. It is also clear that the entailment relation generated by these axioms contains this relation. Let us therefore show that this relation is an entailment relation. Only the cut rule is not immediate. Suppose that M(U, a) meets ⟨J⟩ and that M(U) meets ⟨J, a⟩. We can then find m1, m2 ∈ M(U) and k, x such that a^k m1 ∈ ⟨J⟩ and m2 + ax ∈ ⟨J⟩. Eliminating a, this implies that M(U) meets ⟨J⟩: indeed m2 ≡ −ax modulo ⟨J⟩, so m1 m2^k ≡ (−x)^k · a^k m1 ≡ 0 modulo ⟨J⟩, and m1 m2^k ∈ M(U).
□
Note that the canonical map from A to Zar(A) is a ↦ ã, and that ã = b̃ if and only if a divides a power of b and b divides a power of a.
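For instance, in Z, taking a = 12 and b = 18 we get ã = b̃: indeed 18² = 12 · 27 and 12² = 18 · 8, so each of a and b divides a power of the other; both define the element √⟨6⟩ of Zar(Z).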
Proposition 3.2 In a commutative ring A, giving an ideal of the lattice Zar(A) is the same as giving a radical ideal of A. If I is a radical ideal of A, we associate to it the ideal

I = {J ∈ Zar(A) | J ⊆ I}

of Zar(A). Conversely, if I is an ideal of Zar(A), we associate to it the ideal

I = ⋃_{J∈I} J = {x ∈ A | x̃ ∈ I},

which is a radical ideal of A. Under this bijection, prime ideals correspond to prime ideals.
Proof.
We explain only the last assertion. If I is a prime ideal of A, if J, J′ ∈ Zar(A) and J ∧ J′ ∈ I, let a1, …, an ∈ A be “generators” of J (that is, J = √⟨a1, …, an⟩) and b1, …, bm ∈ A generators of J′. We then have aibj ∈ I, whence ai ∈ I or bj ∈ I for all i, j. It follows that ai ∈ I for all i, or bj ∈ I for all j. Hence J ∈ I or J′ ∈ I, and I is a prime ideal of Zar(A).
Conversely, if I is a prime ideal of Zar(A) and (xy)~ ∈ I, then x̃ ∧ ỹ ∈ I, hence x̃ ∈ I or ỹ ∈ I, which shows that {x ∈ A | x̃ ∈ I} is a prime ideal of A.
□
Definition 3.3 We define Kruℓ(A) := Krℓ(Zar(A)) and call it the Krull lattice of order ℓ of the ring A.
Theorem 3.4 Let C = ((J0, U0), …, (Jℓ, Uℓ)) be an ideal chain in a commutative ring A. It collapses if and only if the ideal chain ((J̃0, Ũ0), …, (J̃ℓ, Ũℓ)) collapses in Zar(A). For instance, if C is finite, the following properties are equivalent:
1. there exist ji ∈ ⟨Ji⟩ and ui ∈ M(Ui) (i = 0, …, ℓ) satisfying the equality

u0 · (u1 · (⋯ (uℓ + jℓ) + ⋯) + j1) + j0 = 0

2. there exist x1, …, xℓ ∈ Zar(A) with the following relations in Zar(A):

x1, Ũ0 ⊢ J̃0
x2, Ũ1 ⊢ J̃1, x1
⋮
xℓ, Ũℓ−1 ⊢ J̃ℓ−1, xℓ−1
Ũℓ ⊢ J̃ℓ, xℓ

3. the same, but with x1, …, xℓ ∈ Ã.
Proof.
It is clear that 1 implies 3: simply take

vℓ = uℓ + jℓ,  vℓ−1 = vℓ uℓ−1 + jℓ−1,  …,  v0 = v1 u0 + j0,  and xi = ṽi,

and it is clear that 3 implies 2. The fact that 2 implies 1 can be seen by reformulating 2 as follows. Consider the ideal chain C1 = ((K0, V0), …, (Kℓ, Vℓ)) obtained by saturating the ideal chain C. Define the ℓ+1 radical ideals I0, …, Iℓ of A by
— I0 = {x ∈ A | M(x, U0) ∩ ⟨J0⟩ ≠ ∅}
— I1 = {x ∈ A | M(x, U1) ∩ (⟨J1⟩ + I0) ≠ ∅}
— ⋮
— Iℓ−1 = {x ∈ A | M(x, Uℓ−1) ∩ (⟨Jℓ−1⟩ + Iℓ−2) ≠ ∅}
— Iℓ = ⟨Jℓ⟩ + Iℓ−1
It is clear that Ii ⊆ Ki (i = 0, …, ℓ). Under the correspondence given in 3.2 these ideals correspond to the following ideals of Zar(A):
— I0 = {u ∈ Zar(A) | u, Ũ0 ⊢ J̃0}
— I1 = {u ∈ Zar(A) | (∃v ∈ I0) u, Ũ1 ⊢ J̃1, v}
— ⋮
— Iℓ−1 = {u ∈ Zar(A) | (∃v ∈ Iℓ−2) u, Ũℓ−1 ⊢ J̃ℓ−1, v}
Condition 2 then says that Ũℓ ⊢ J̃ℓ, v for some v ∈ Iℓ−1. In other words, M(Uℓ) meets Iℓ; but Iℓ ⊆ Kℓ. Hence C1 collapses, and so C collapses.
Let us also give a direct proof that (2) implies (3). We rewrite the entailment relations of (2) as follows. Each Ũi may be replaced by a single ũi with ui ∈ A, each J̃i may be replaced by the radical of a finitely generated ideal Ii of A, and we write Li in place of xi to remember that it is the radical of a finitely generated ideal. For ℓ = 3 we obtain:

L1, ũ0 ⊢ I0
L2, ũ1 ⊢ I1, L1
L3, ũ2 ⊢ I2, L2
ũ3 ⊢ I3, L3

The last line says that M(u3) meets I3 + L3, or again I3 + ⟨y3⟩ for some element y3 of L3, and hence also that ũ3 ⊢ I3, ỹ3. Since ỹ3 ≤ L3 in Zar(A), the next-to-last line implies ỹ3, ũ2 ⊢ I2, L2. We have thus replaced L3 by ỹ3 in its two occurrences. Reasoning as before, one sees that the two occurrences of L2 can be replaced by a suitable ỹ2, and then the two occurrences of L1 by a suitable ỹ1. This indeed yields (3).
□
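As an illustration, in A = Z/4Z the elementary ideal chain ((0, 2), (2, 1)) collapses: property 1 holds with u0 = 2² ∈ M(2), u1 = 1, j1 = 0 ∈ ⟨2⟩ and j0 = 0, since u0 · (u1 + j1) + j0 = 4 = 0 in A. This is in line with the fact that Z/4Z is zero-dimensional.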
Corollary 3.5 The Krull dimension of a commutative ring A is ≤ ℓ if and only if the Krull dimension of the Zariski lattice Zar(A) is ≤ ℓ.
Proof.
Apply the preceding theorem and Lemma 2.20.
□
Did we give one proof too many when comparing the two constructive points of view on the Krull dimension of a commutative ring?
HUM: To be rewritten. I remain convinced that there was some truth in fact factCompar1 and that it would allow us to see how proofs and computations could be saved in our presentation (which does not mean that I want to shorten it; I only want to understand better the relations between the two points of view). Henri
From the three statements 1.14, factCompar1 and 3.4 (the equivalence of 1 and 2), one can deduce the first from the last two, or the last from the first two. It would be instructive to spell this remark out and, for instance, to explain how the computations given to justify the last two imply a computation underlying the first.
4  Going Up and Going Down
4.1  Relative Krull dimension
Generalities on relative Krull dimension
We develop here a constructive analogue for increasing chains of prime ideals that all meet a given subring in one and the same prime ideal. This paragraph is in fact general enough to work in the setting of an arbitrary distributive lattice (here it is Zar(A)), with the obvious adaptations. There are no real computations, only a little combinatorics.
Definition 4.1 Let A ⊆ B be commutative rings and C = ((J0, U0), …, (Jℓ, Uℓ)) an ideal chain in B.
— We say that the ideal chain C collapses over A if there exist a1, …, ak ∈ A such that for every pair of complementary subsets (H, H′) of {1, …, k} the ideal chain

(({ah}h∈H ∪ J0, U0), (J1, U1), …, (Jℓ, Uℓ ∪ {ah}h∈H′))

collapses.
— We say that the (relative) Krull dimension of the extension B/A is ≤ ℓ−1 if every elementary ideal chain ((0, x1), (x1, x2), …, (xℓ, 1)) collapses over A.
— We say that the (relative) Krull dimension of the extension B/A is ≥ ℓ if there exist x1, …, xℓ in B such that the elementary ideal chain ((0, x1), (x1, x2), …, (xℓ, 1)) does not collapse (7) over A.
— We say that the (relative) Krull dimension of the extension B/A is < ℓ if it is impossible for it to be ≥ ℓ.
— We say that the (relative) Krull dimension of the extension B/A is > ℓ if it is impossible for it to be ≤ ℓ.
7. From the constructive point of view one says precisely: for every integer k and all a1, …, ak ∈ A there exists a pair of complementary subsets (H, H′) of {1, …, k} such that the ideal chain (({ah}h∈H; x1), (x1, x2), …, (xℓ; {ah}h∈H′)) “does not collapse” in the sense of the inequality relation defined on A (cf. the explanation at the beginning of Section 1.3).
One may consider the case of a more general ring extension: a homomorphism A → B, not necessarily injective. The preceding definition can then be adapted by replacing A by its image in B.
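For instance, for k = 1, the collapse of C over A with the single element a1 ∈ A means that the two ideal chains (({a1} ∪ J0, U0), (J1, U1), …, (Jℓ, Uℓ)) and ((J0, U0), …, (Jℓ, Uℓ ∪ {a1})) both collapse outright: whether a1 is placed in the bottom ideal or in the top complement, a collapse ensues.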
We have a relative simultaneous collapse.
Theorem 4.2 (Relative simultaneous collapse for ideal chains) Let A ⊆ B be commutative rings and C an ideal chain of length ℓ in B.
(1) Let x ∈ B and i ∈ {0, …, ℓ}. If the ideal chains C & x ∈ C(i) and C & x ∉ C(i) both collapse over A, then C also collapses over A.
(2) Let x ∈ A. If the ideal chains C & x ∈ C(0) and C & x ∉ C(ℓ) both collapse over A, then C also collapses over A.
This is an easy consequence of the (non-relative) Theorem 1.14, which we leave to the reader. From it one deduces (in classical mathematics) a characterization of the ideal chains that collapse relatively.
Theorem 4.3 (Formal Nullstellensatz for chains of prime ideals in an extension of commutative rings) Gödel's completeness theorem implies the following result. Let A ⊆ B be commutative rings and C = ((J0, U0), …, (Jℓ, Uℓ)) an ideal chain in B. The following properties are equivalent:
(a) There exist a detachable prime ideal P of A and ℓ+1 detachable prime ideals P0 ⊆ ⋯ ⊆ Pℓ of B such that Ji ⊆ Pi, Ui ∩ Pi = ∅ and Pi ∩ A = P (i = 0, …, ℓ).
(b) The ideal chain C does not collapse over A.
Proof.
Obviously (a) ⇒ (b). For (b) ⇒ (a) we give the (easier) proof relying on the principle of excluded middle and Zorn's lemma. Consider an ideal chain C1 = ((P0, S0), …, (Pℓ, Sℓ)) that is maximal (for the extension relation) among the ideal chains refining C that do not collapse over A. In view of relative simultaneous collapse, the same proof as for Theorem 1.17 shows that it is an increasing chain of prime ideals (with their complements). It remains to see that the Pi ∩ A are all equal, which amounts to saying that S0 ∩ Pℓ ∩ A = ∅. If this were not the case, let x ∈ S0 ∩ Pℓ ∩ A. Then ((P0 ∪ {x}, S0), …, (Pℓ, Sℓ)) and ((P0, S0), …, (Pℓ, Sℓ ∪ {x})) collapse (outright), so C1 collapses over A (with the finite list {x}). This is absurd.
□
We have the following constructive result.
Theorem 4.4 Let A ⊆ B be commutative rings.
(1) If the Krull dimension of A is ≤ m and the Krull dimension of the extension B/A is ≤ n, then the Krull dimension of B is ≤ (m+1)(n+1) − 1.
(2) Suppose A and B are equipped with the predicate ≠ 0 defined as the negation of = 0. Suppose the Krull dimension of the extension B/A is ≤ n and that we have a test for the collapse of elementary ideal chains in A. Given a pseudo-regular sequence of length (m+1)(n+1) in B, one can construct a pseudo-regular sequence of length m+1 in A.
Proof.
In classical mathematics the proof is immediate. Consider a strictly increasing chain in B with (m+1)(n+1) + 1 terms. Since n+2 consecutive terms cannot all have the same intersection with A, this yields a strictly increasing chain of m+2 ideals of A, which is absurd. In constructive mathematics one can mimic this proof and obtain the result in constructive form, which gives us genuine algorithmic information. Here is how it goes.
We first prove item (1).
First we treat the case where A is zero-dimensional, with relative dimension equal to n. We want to show that every elementary ideal chain of length n in B collapses. Let C = ((0, x1), …, (xn, 1)) be such an elementary ideal chain. We know that it collapses over A. Let F = {a1, …, ak} be the corresponding finite subset of A. If F is empty, we are done. If F is non-empty, we show that one element can be removed from F, and we then conclude by induction. So write F = H ∪ {ak}. Now let G and G′ be two complementary subsets of H. We have the (outright) collapses of the following ideal chains, the first by collapse over A, the second because it contains ((0, ak), (ak, 1)) and A has dimension 0:

((G, x1), …, (xn; G′, ak))      hence also      ((G; x1, ak), …, (xn; G′, ak))
((0; x1, ak), …, (ak, xn; 1))   hence also      ((G; x1, ak), …, (ak, xn; G′))

By simultaneous collapse, the two right-hand chains give the collapse of ((G; x1, ak), …, (xn; G′)). Since we also have, because {ak} ∪ G ∪ G′ = F, the (outright) collapse of the ideal chain ((ak, G; x1), …, (xn; G′)), a simultaneous collapse gives that of the ideal chain ((G; x1), …, (xn; G′)). And we are done.
We pass to the general case. We treat the example m = 2, n = 3, which, we hope, is sufficiently illuminating. So we must consider a sequence of 3 × 4 = 12 elements xi of B and show that the following elementary ideal chain collapses:

((0, x1), (x1, x2), …, (x11, x12), (x12, 1))

By hypothesis the ideal chains

((0, x1), …, (x4, x5)),   ((x4, x5), …, (x8, x9))   and   ((x8, x9), …, (x12, 1))

collapse over A, which provides us with three finite lists F1, F2, F3 of elements of A and the corresponding collapses in B. We can show that the ideal chain C collapses by a patching process analogous to the one used when m = 0. We begin by remarking that every ideal chain of the type

((0; x1, b1), …, (b1, x4; x5, b2), …, (b2, x8; x9, b3), …, (b3, x12; 1))

with the bi ∈ A collapses, because it “contains” the ideal chain

((0; b1), (b1; b2), (b2; b3), (b3; 1)).

We shall establish that every ideal chain of the type

((0; x1), …, (x4; x5, b2), …, (b2, x8; x9, b3), …, (b3, x12; 1))

with b2, b3 ∈ A collapses as well. One can then start over (removing b2, then b3). One sees that the case already treated (A zero-dimensional) can be reproduced verbatim, by appending everywhere the tail …; x5, b2), (x5, x6), …, (b2, x8; x9, b3), …, (b3, x12; 1)).
We now prove item (2).
One must go through the proof of item (1) again (carrying out a kind of mock reasoning by contradiction) and look at the places where it may cease to work. It is based on collapses of elementary ideal chains in A and on collapses of elementary ideal chains of B over A. The only places where things may fail are the collapses of elementary ideal chains in A. But we assumed precisely that we have a test for those collapses. So at least one of them fails, explicitly, and this provides the pseudo-regular sequence we are looking for.
□
The case of integral extensions
In the following proposition, item (1) is the constructive version of the “incomparability theorem” (see Theorem 13.33 in Sharp's book [19]).
Proposition 4.5 Let A ⊆ B be commutative rings.
(1) If B is integral over A, the relative Krull dimension of the extension B/A is zero.
(2) More generally, the same result holds if every element of B is a zero of a polynomial of A[X] having some coefficient equal to 1. For instance, if A is a Prüfer domain, this applies to any intermediate ring between A and its field of fractions.
(3) In particular, applying Theorem 4.4, if dim(A) ≤ n then dim(B) ≤ n.
Proof.
We prove (2). We want to show that for every x ∈ B the ideal chain ((0, x), (x, 1)) collapses over A. The finite list in A is the one given by the coefficients of the polynomial annihilating x. Suppose x^k = Σ_{i≠k, i≤r} ai x^i. Let G, G′ be two complementary subsets of {ai ; i ≠ k}. The collapse of ((G, x), (x, G′)) is given by an equality x^m (g′ + bx) = g with g ∈ ⟨G⟩B, g′ ∈ M(G′), b ∈ B; in fact we take g in the ideal generated by G in A[x], and b ∈ A[x]. If G′ is empty we take m = k, g′ = 1. Otherwise let h be the smallest index i such that ai ∈ G′. All the aj with j < h are in G. If h < k we take m = h, g′ = ah. If h > k we take m = k, g′ = 1.
NB: observe that the disjunction bears on r cases and not on 2^r cases:
— a0 ∈ G′, or
— a0 ∈ G, a1 ∈ G′, or
— a0, a1 ∈ G, a2 ∈ G′, or
— ⋮
□
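To illustrate the recipe, suppose for instance that x ∈ B satisfies x² = a1x + a0 (so k = 2 and the finite list is {a0, a1}). If a0 ∈ G′ we take m = 0, g′ = a0, and x⁰(a0 + (a1 − x)x) = 0 ∈ ⟨G⟩; if a0 ∈ G and a1 ∈ G′ we take m = 1, g′ = a1, and x(a1 + (−1)x) = −a0 ∈ ⟨G⟩; and if G′ = ∅ we take m = k = 2, g′ = 1, and x² · 1 = a1x + a0 ∈ ⟨G⟩.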
HUM: Give an example of a domain A with A ⊂ B ⊂ Frac(A) and dim(B) > dim(A).
Relative Krull dimension of polynomial rings
We give the constructive version of the classical theorem on the relative Krull dimension of the extension A[x1, …, xn]/A.
We shall need an elementary linear-algebra lemma.
Lemma 4.6 Let V1, …, Vn+1 be vectors of A^n.
— If A is a discrete field, there exists an index k ∈ {1, …, n+1} such that Vk is a linear combination of the vectors that follow it (for k = n+1 this means that Vn+1 = 0).
— If A is a commutative ring, let V denote the matrix whose column vectors are the Vi. Let µ1, …, µℓ (with ℓ = 2^n − 1) be the list of minors of V extracted on the last n, or n−1, or …, or 1 columns, arranged in order of decreasing size. Set µℓ+1 = 1 (the minor corresponding to the empty extracted matrix). For each k ∈ {1, …, ℓ+1} set Ik = ⟨(µi)i<k⟩ and Sk = S(Ik; µk). If the minor µk has order j, then in the ring (A/Ik)Sk the vector Vn+1−j is equal to a linear combination of the vectors that follow it.
Proof.
For the second item, apply Cramer's formulas.
□
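For instance, for n = 1 we have two scalars V1 = (v1), V2 = (v2) in A¹, and the list of minors reduces to µ1 = v2, with µ2 = 1 and ℓ = 2¹ − 1 = 1. For k = 1 we get I1 = 0 and S1 = S(0; v2); in (A/I1)S1 the element v2 is invertible and V1 = (v1 v2⁻¹) V2. For k = 2 we get I2 = ⟨v2⟩, and in (A/I2)S2 the vector V2 is itself zero (an empty linear combination of the vectors that follow it).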
Proposition 4.7 Let B = A[X1, …, Xn] be a polynomial ring. The relative Krull dimension of the extension B/A is equal to n. Hence if the Krull dimension of A is ≤ r, that of B is ≤ r + n + rn. Moreover, if the Krull dimension of B is ≤ r + n, that of A is ≤ r.
Proof.
The last assertion follows from the fact that if the sequence (a1, …, ar, X1, …, Xn) is pseudo-singular in B, then the sequence (a1, …, ar) is pseudo-singular in A: indeed, taking for simplicity r = n = 2, we have an equality in B of the form

a1^{m1} a2^{m2} X1^{p1} X2^{p2} + a1^{m1} a2^{m2} X1^{p1} X2^{p2+1} R4 + a1^{m1} a2^{m2} X1^{p1+1} R3 + a1^{m1} a2^{m2+1} R2 + a1^{m1+1} R1 = 0.

Looking at the coefficient of X1^{p1} X2^{p2} in the polynomial on the left-hand side, we find

a1^{m1} a2^{m2} + a1^{m1} a2^{m2+1} r2 + a1^{m1+1} r1 = 0,

which gives the collapse of (a1, a2) in A.
The second assertion follows from the first (cf. Theorem 1.24 (1)).
The proof of the first assertion in classical mathematics relies directly on the case of fields. We give a constructive proof that also relies on the case of (discrete) fields. We take up the proof of Proposition 1.23 and reread it (the field K being replaced by a ring A) in a way that makes the definition of collapse over A work. Let (y1, …, yn+1) be in A[X1, …, Xn]. Consider a simple proof (that is, one written directly as a linear-algebra proof) of the fact that the yi are algebraically dependent over A when A is a discrete field. For instance, if the yi are polynomials of degree ≤ d, the polynomials y1^{m1} ⋯ yn+1^{m_{n+1}} with Σi mi ≤ m lie in the vector space of polynomials of degree ≤ dm, which has dimension ≤ (dm+n choose n), while there are (m+n+1 choose n+1) of them. For an explicit value of m we have (m+n+1 choose n+1) > (dm+n choose n) (since a polynomial of degree n+1 eventually beats a polynomial of degree n). From now on we consider m fixed at this value. Arrange the corresponding “vectors” y1^{m1} ⋯ yn+1^{m_{n+1}} (that is, those with Σi mi ≤ m) in lexicographic order of (m1, …, mn+1). We may restrict to (dm+n choose n) + 1 vectors. Applying Lemma 4.6, we obtain that in each of the rings (A/Ik)Sk some vector y1^{m1} ⋯ yn+1^{m_{n+1}} is equal to a linear combination of the vectors that follow it. As in the proof of Proposition 1.23 this yields a collapse, but this time we must add, at the beginning and at the end of the elementary ideal chain (y1, …, yn+1), the “extra hypotheses”: it is therefore the ideal chain

(((µi)i<k, y1; y2), (y2; y3), …, (yn−1; yn), (yn; yn+1, µk))

that collapses (for each k). Indeed, for the collapse of an ideal chain one may always pass to the quotient by the first of the ideals (or by a smaller ideal) and localize at the last of the monoids (or at a smaller monoid).
Finally, all these collapses together provide the collapse of (y1, …, yn+1) over A, using the finite family (µi).
□
4.2  Going Up
If A is a subring of B, an ideal chain of A that collapses in A collapses in B, and the trace on A of a saturated prime pair of B is a saturated prime pair of A. Among other things, we shall see in this section that in the case of integral extensions the converse implications also hold.
Lemma 4.8 Let A ⊆ B be commutative rings with B integral over A. Let I be an ideal of A and x ∈ A. Then

x ∈ √I ⟺ x ∈ √(IB).

Proof.
Suppose x ∈ √(IB), i.e., x^n = Σ ji bi with ji ∈ I, bi ∈ B. The bi together with 1 generate a faithful finitely generated A-submodule M of B, and x^n is expressed as a linear combination with coefficients in I of a generating system of M. The characteristic polynomial of the matrix of multiplication by x^n (expressed on this generating system) therefore has all its coefficients (except the leading one) in I; since this polynomial annihilates x^n and M is faithful, some power of x lies in I, i.e., x ∈ √I.
□
Definition 4.9 Let A ⊆ B be commutative rings, P = (J, V) a prime pair of B and C = (P1, …, Pn) an ideal chain of B. We say that (J ∩ A, V ∩ A) is the trace of P on A; we denote this prime pair of A by P|A. We say that (P1|A, …, Pn|A) is the trace of C on A; we denote this ideal chain of A by C|A.
It is clear that the trace of a complete (resp. saturated) ideal chain is complete (resp. saturated).
Corollary 4.10 (Lying over) Let A ⊆ B be commutative rings with B integral over A.
— Let P be a prime pair in A.
(1) If P collapses in B, it collapses in A.
(2) If Q is the saturation of P in B, then Q|A is the saturation of P in A.
— Gödel's completeness theorem implies the following result: every prime ideal of A is the trace on A of a prime ideal of B.
Proof.
We prove item (1); the rest follows easily. If P = (I, U) collapses in B, some element of M(U) lies in the radical of ⟨I⟩B = ⟨I⟩A·B, hence, by the preceding lemma, in the radical of ⟨I⟩A.
□
Theorem 4.11 (Going Up) Let A ⊆ B be commutative rings with B integral over A. Let C1 be a saturated ideal chain of B and C2 an ideal chain of A.
(1) The ideal chain C = C1 • C2 collapses in B if and only if the ideal chain C1|A • C2 collapses in A.
(2) Let C′ be the saturation of C in B. The trace of C′ on A is the saturation of C1|A • C2 in A.
In particular, every ideal chain of A that collapses in B collapses in A, and the trace on A of the saturation in B of an ideal chain of A equals its saturation in A.
Proof.
Write C1 = ((J1, V1), …, (Jℓ, Vℓ)) (in B), C1|A = ((I1, U1), …, (Iℓ, Uℓ)) for its trace on A, and C2 = ((Iℓ+1, Uℓ+1), …, (Iℓ+r, Uℓ+r)). Write (1)ℓ,r and (2)ℓ,r for the statements with ℓ and r specified. These two statements are in fact immediately equivalent, in view of the characterization of the saturation of an ideal chain in terms of collapse. Note also that (1)0,1 and (2)0,1 are Lying over.
Let us show (2)0,r ⇒ (1)ℓ,r. The ideal chain C1|A is saturated in A. Consider the quotient rings A′ = A/Iℓ ⊆ B′ = B/Jℓ. Then B′ is again integral over A′. Applying (2)0,r to these quotients gives (1)ℓ,r, using Fact 1.6 (1).
It now suffices to show (2)1,r ⇒ (1)0,r+1. Let (P1, …, Pr+1) be an ideal chain in A that collapses in B. Let Q1 be the saturation of P1 in B. The trace of Q1 on A is P1 by Lying over. We then apply (2)1,r with C1 = Q1 and C2 = (P2, …, Pr+1).
□
Corollary 4.12 (Going Up, classical version) Gödel's completeness theorem implies the following result. Let A ⊆ B be commutative rings with B integral over A. Let Q1 ⊆ ⋯ ⊆ Qℓ be prime ideals in B, Pi = Qi ∩ A (i = 1, …, ℓ), and Pℓ+1 ⊆ ⋯ ⊆ Pℓ+r prime ideals in A with Pℓ ⊆ Pℓ+1. There exist prime ideals Qℓ+1, …, Qℓ+r in B satisfying Qℓ ⊆ Qℓ+1 ⊆ ⋯ ⊆ Qℓ+r and Qℓ+j ∩ A = Pℓ+j for j = 1, …, r.
Proof.
Consider the ideal chains C1 (in B) and C2 (in A) associated to the chains given in the statement. By hypothesis, C1|A • C2 does not collapse in A, so by the constructive Going Up, C = C1 • C2 does not collapse in B. We therefore consider a chain of prime ideals of B refining the ideal chain C (Theorem 1.17). Since C1 is frozen, it does not move in the process (on pain of collapse). As for the tail Qℓ+1, …, Qℓ+r of the chain, its trace on A is frozen, hence can only be equal to Pℓ+1, …, Pℓ+r (on pain of collapse).
□
Note that it seems difficult to prove Theorem 4.11 directly in classical mathematics from Corollary 4.12, even using Theorem 1.17, which relates ideal chains and chains of prime ideals.
Corollary 4.13 (Krull dimension of an integral extension) Let A ⊆ B be commutative rings with B integral over A.
(1) The Krull dimension of A is ≤ n if and only if the Krull dimension of B is ≤ n.
(2) A pseudo-regular sequence in A is pseudo-regular in B.
(3) Suppose we have a test for the collapse of elementary ideal chains in A. From a pseudo-regular sequence in B one can construct a pseudo-regular sequence of the same length in A.
NB: in items (2) and (3) the rings are assumed to be equipped with the inequality ¬(x = 0).
Proof.
(1) Proposition 4.5 (3) gives dim(A) ≤ n ⇒ dim(B) ≤ n; the last assertion of Theorem 4.11 gives the converse.
(2) is obtained by contraposition from the last assertion of Theorem 4.11.
(3) Since the relative Krull dimension of B over A is zero, apply Theorem 4.4 (2).
□
A corollary of the preceding result and of Theorem 1.24 is the following theorem, which tells us that the Krull dimension of a finitely presented algebra over a discrete field is indeed the one given by Noether position.
Theorem 4.14 Let K be a discrete field, I a finitely generated ideal of the ring K[X1, …, Xℓ], and A the quotient algebra. Putting the ideal I in Noether position provides an integer r and elements y1, …, yr of A that are algebraically independent over K and such that A is a finitely generated module over K[y1, …, yr]. The Krull dimension of A is equal to r.
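For instance, with ℓ = 2 and I = ⟨X1X2 − 1⟩, the change of variables y1 = X1 + X2 puts I in Noether position: in A the class x1 of X1 satisfies x1² − y1x1 + 1 = 0, so A is a finitely generated module over K[y1] (generated by 1 and x1), and y1 is algebraically independent over K; hence r = 1 and dim(A) = 1.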
4.3  Going Down
This section is modelled on the treatment given in Sharp's book. Since one cannot in general compute the minimal polynomial of an element algebraic over a field, the first lemma is somewhat less simple than the corresponding lemma in classical mathematics, and the proof of Going Down introduces an algorithm that searches for a polynomial “acting as a minimal polynomial for the computation at hand”.
Lemma 4.15 Let A ⊆ B be integral domains with B integral over A and A integrally closed. Let I be a radical ideal of A and x ∈ IB. There exists a monic polynomial S(X) whose non-leading coefficients are in I and which annihilates x. Let P be another polynomial of A[X] annihilating x. Let K be the field of fractions of A and Q the monic gcd of P and S in K[X]. Then Q has its non-leading coefficients in I, it annihilates x, and it divides S and P in A[X].
Proof.
The existence of S is easy: write x = Σ ak bk with the ak in I and the bk in B, and consider the unital sub-A-algebra C of B generated by the bk. It is a faithful finitely generated A-module. Write the matrix of multiplication by x on a generating system of the A-module C; it has coefficients in I, and we take its characteristic polynomial.
Let L be the field of fractions of B. In view of the Bezout relation UP + VS = Q, we have Q(x) = 0 in K[x] ⊆ L, hence in B. In view of the relation QS1 = S, and since A is integrally closed, Q and S1 have their non-leading coefficients in √I = I. Finally, the quotient P1 = P/Q can be computed in A[X] by Euclidean division.
□
NB: this lemma describes precisely the computational content of the classical lemma asserting that the monic minimal polynomial of x has its non-leading coefficients in I.
Proposition 4.16 (One-step Going Down) Let A ⊆ B be integral domains with B integral over A and A integrally closed. Let Q1 be a saturated prime pair of B, P1 = (I1, U1) its trace on A, and P0 = (I0, U0) a saturated prime pair of A with I0 ⊆ I1. If the ideal chain (P0, Q1) collapses in B, then the prime pair Q1 collapses in B (and a fortiori the prime pair P1 collapses in A).
Proof.
Write Q1 = (J1, V1). Since Q1 is complete, the collapse can be written in the form

u0 v1 = j0   with u0 ∈ U0, v1 ∈ V1, j0 ∈ I0B.

We know that j0 annihilates a monic polynomial R with non-leading coefficients in I0,

R(X) = X^k + Σ_{i<k} ai X^i   with ai ∈ I0,

so v1 annihilates R(u0 X):

0 = R(u0 v1) = u0^k v1^k + Σ_{i<k} (ai u0^i) v1^i.

We also know that v1 annihilates a monic polynomial S with coefficients in A, of degree d.
First case: u0^d S(X) = R(u0 X). Then the non-leading coefficients of S, namely the bi = ai/u0^{k−i}, are in I0, since P0 is saturated in A. Hence v1^k ∈ I0B, so v1 ∈ √(I0B) ⊆ √(I1B) ⊆ J1 (as I0 ⊆ I1 ⊆ J1 and J1 is a radical ideal of B). Hence v1 ∈ V1 ∩ J1.
Second case: we are not so lucky, and we reduce to the first case. Apply the preceding lemma to v1 and the ideal A. We obtain that v1 annihilates the polynomial S1, the monic gcd of R(u0 X) and S(X), with coefficients in A. Let d1 be the degree of S1. Consider the monic polynomial with coefficients in A, R1(X) = u0^{d1} S1(X/u0). We have R1(j0) = 0 and R1(u0 X) = u0^{d1} S1(X); moreover R1(X) is the monic gcd of R(X) and u0^d S(X/u0). Apply the preceding lemma to j0 and the ideal I0. We obtain that R1 has its non-leading coefficients in I0. With R1 and S1 we are thus reduced to the first case.
□
Note that the beginning of the proof (before the examination of the second case, which Sharp avoids by considering the minimal polynomial of j0) is copied almost word for word from the proof in Sharp's book. Sharp does not use the word collapse; he puts Q1 into the hypothesis as given by a genuine prime ideal, and he proves that an equality u0v1 = j0 would be absurd, since it would lead to v1 ∈ V1 ∩ J1, contrary to his hypothesis. Here one sees confirmed that our work essentially consists in bringing to light algorithms that already exist, in hidden form, inside the classical proofs. One also finds here a systematic salient feature of classical proofs: they unconsciously invert the negative and the positive through the introduction of abstract ideal objects. The collapse, which is a perfectly concrete fact, is seen in a negative form (it would be absurd, since the hypothesis contains a genuine prime ideal), while the non-collapse, which a priori requires infinitely many verifications and is therefore negative in essence, is felt as a phenomenon of positive nature, guaranteeing the existence of an abstract object. The price to pay for the apparent comfort provided by the abstract ideal objects is to transform constructive proofs into nonconstructive ones, unconsciously, by a double reversal of the (direct constructive) proof of P ⇒ Q into a proof by contradiction of ¬Q ⇒ ¬P, that is, a proof of ¬¬P ⇒ ¬¬Q.
Theorem 4.17 (Going Down) Let A ⊆ B be integral domains with B integral over A and A integrally closed. Let C1 be a non-empty saturated ideal chain of A and C2 a non-empty saturated ideal chain of B. Let Iℓ be the last of the ideals in the ideal chain C1 and Iℓ+1 the first of the ideals in the ideal chain C2|A. Suppose Iℓ ⊆ Iℓ+1. If the ideal chain C1 • C2 collapses in B, then the ideal chain C2 collapses in B.
Proof.
If ℓ ≥ 1 and r ≥ 1 are the numbers of prime pairs in the ideal chains C1 and C2, write GDℓ,r for the property to be proved. We have already established GD1,1 in the one-step Going Down.
Since the ideal chains C2 and C2|A are saturated, in view of Fact 1.6 (2) only the first of the prime pairs of the second ideal chain matters for collapse. It therefore suffices to prove GDℓ,1, i.e., the case where C2 contains a single prime pair. We argue by induction on ℓ. Let C1 = (P1, …, Pℓ) be a saturated ideal chain in A (with Pk = (Ik, Uk)) and Q a saturated prime pair in B. Suppose C1 • Q collapses in B. Let C = (P2, …, Pℓ, Q) and let C′ be its saturation in B. If Q2 = (J2, V2) is the first of the prime pairs in C′, we have I1 ⊆ I2 ⊆ (J2 ∩ A), so we may apply the one-step Going Down (or more precisely GD1,ℓ): C′ collapses in B. Hence C collapses in B. We can now apply the induction hypothesis.
□
Corollary 4.18 (Going Down, classical version) Gödel's completeness theorem implies the following result. Let A ⊆ B be integral domains with B integral over A and A integrally closed. Let Qℓ+1 ⊆ ⋯ ⊆ Qℓ+r be prime ideals in B, Pi = Qi ∩ A (i = ℓ+1, …, ℓ+r), and P1 ⊆ ⋯ ⊆ Pℓ prime ideals in A with Pℓ ⊆ Pℓ+1. There exist prime ideals Q1, …, Qℓ in B satisfying Q1 ⊆ ⋯ ⊆ Qℓ ⊆ Qℓ+1 and Qj ∩ A = Pj for j = 1, …, ℓ.
Proof.
As for the proof of Corollary 4.12.
□
We finish with a Going Down for flat extensions (cf. [17]).
Theorem 4.19 (Going Down for flat extensions) Let A ⊆ B be rings with B flat over A.
(1) Let Q1 = (J1, V1) be a saturated prime pair of B, P1 = (I1, U1) its trace on A, and P0 = (I0, U0) a saturated prime pair of A with I0 ⊆ I1. If the ideal chain (P0, Q1) collapses in B, then the prime pair Q1 collapses in B (and a fortiori the prime pair P1 collapses in A).
(2) Let C1 be a non-empty saturated ideal chain of A and C2 a non-empty saturated ideal chain of B. Let Iℓ be the last of the ideals in the ideal chain C1 and Iℓ+1 the first of the ideals in the ideal chain C2|A. Suppose Iℓ ⊆ Iℓ+1. If the ideal chain C1 • C2 collapses in B, then the ideal chain C2 collapses in B.
Proof.
It suffices to prove item (1), since item (2) is then proved as in Theorem 4.17. Let J0 = I0B. If (P0, Q1) collapses in B, we have v1 ∈ V1, u0 ∈ U0 and j0 ∈ J0 with v1u0 + j0 = 0. Write j0 = i1b1 + ⋯ + irbr with the ik ∈ I0 and the bk ∈ B. We thus obtain a linear dependence relation over A among the elements v1, b1, …, br of B:

(u0, i1, …, ir) · ᵗ(v1, b1, …, br) = 0.

Since B is flat over A, this linear dependence relation is explained in the following form:

(u0, i1, …, ir) M = (0, …, 0)   and   ᵗ(v1, b1, …, br) = M · ᵗ(b1′, …, bs′),

where M = (mk,l) ∈ A^{(r+1)×s} and the bk′ are in B. Each relation

u0 m0,l + i1 m1,l + ⋯ + ir mr,l = 0

implies that m0,l ∈ I0, since P0 is saturated. A fortiori m0,l ∈ J1. Hence the relation

v1 = m0,1 b1′ + ⋯ + m0,s bs′

is a collapse of Q1 in B.
□
For this proof, which imposed itself, we did not try to decipher the very abstract proof given by Matsumura for the following corollary.
Corollary 4.20 (Going Down for flat extensions, classical version) Gödel's completeness theorem implies the following result. Let A ⊆ B be rings with B flat over A. Let Qℓ+1 ⊆ ⋯ ⊆ Qℓ+r be prime ideals in B, Pi = Qi ∩ A (i = ℓ+1, …, ℓ+r), and P1 ⊆ ⋯ ⊆ Pℓ prime ideals in A with Pℓ ⊆ Pℓ+1. There exist prime ideals Q1, …, Qℓ in B satisfying Q1 ⊆ ⋯ ⊆ Qℓ ⊆ Qℓ+1 and Qj ∩ A = Pj for j = 1, …, ℓ.
Références
[1] Coste M., Lombardi H., Roy M.-F. Dynamical method in algebra: effective Nullstellensätze. To appear in Annals of Pure and Applied Logic.
[2] Cederquist J., Coquand T. Entailment relations and distributive lattices. Logic Colloquium '98 (Prague), 127–139, Lect. Notes Log., 13. Assoc. Symbol. Logic, Urbana, 2000.
[3] Coquand T., Lombardi H. The principal ideal theorem. In preparation.
[4] Coquand T., Persson H. Valuations and Dedekind's Prague theorem. To appear in the Journal of Pure and Applied Algebra.
[5] Eisenbud D. Commutative algebra. With a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag, New York, 1995.
[6] Español L. Constructive Krull dimension of lattices. Rev. Acad. Cienc. Zaragoza (2) 37 (1982), 5–9.
[7] Joyal A. Le théorème de Chevalley-Tarski. Cahiers de Topologie et Géométrie Différentielle, 1975.
[8] Kuhlmann F.-V., Lombardi H. Construction du hensélisé d'un corps valué. Journal of Algebra 228 (2000), 624–632.
[9] Lombardi H. Un nouveau positivstellensatz effectif pour les corps valués. Séminaire "Structures Ordonnées" (Paris 6-7), 18 pages, appeared January 1996 in the 94–95 issue. Editors: F. Delon, A. Dickmann, D. Gondard.
[10] Lombardi H. Le contenu constructif d'un principe local-global avec une application à la structure d'un module projectif de type fini. Publications Mathématiques de Besançon. Théorie des nombres. (1997). Fascicule 94–95 & 95–96.
[11] Lombardi H. Relecture constructive de la théorie d'Artin-Schreier. Annals of Pure and Applied Logic 91 (1998), 59–92.
[12] Lombardi H. Dimension de Krull, Nullstellensätze et évaluation dynamique. To appear in Math. Zeitschrift.
[13] Lombardi H. Platitude, localisation et anneaux de Prüfer : une approche constructive. Preprint (1999).
[14] Lombardi H. Constructions cachées en algèbre abstraite (1) Relations de dépendance intégrale. Preprint (1999).
[15] Lombardi H., Quitté C. Constructions cachées en algèbre abstraite (2) Théorème de Horrocks, du local au global. Preprint (1999).
[16] MacNeille H. M. Partially ordered sets. Trans. Amer. Math. Soc. 42 (1937), no. 3, 416–460.
[17] Matsumura H. Commutative ring theory. Cambridge Studies in Advanced Mathematics no. 8. Cambridge University Press, 1989.
[18] Mines R., Richman F., Ruitenburg W. A Course in Constructive Algebra. Universitext. Springer-Verlag, 1988.
[19] Sharp R. Y. Steps in Commutative Algebra. L.M.S. Student Texts 19. Cambridge University Press.
A Critical Analysis of String APIs:
the Case of Pharo
Damien Polleta , Stéphane Ducassea
a RMoD — Inria & Université Lille, France
arXiv:1711.10713v1 [] 29 Nov 2017
Abstract
Most programming languages, besides C, provide a native abstraction for character strings, but string APIs vary widely in size,
expressiveness, and subjective convenience across languages. In Pharo, while at first glance the API of the String class seems rich, it
often feels cumbersome in practice; to improve its usability, we faced the challenge of assessing its design. However, we found
hardly any guideline about design forces and how they structure the design space, and no comprehensive analysis of the expected
string operations and their different variations. In this article, we first analyse the Pharo 4 String library, then contrast it with its
Haskell, Java, Python, Ruby, and Rust counterparts. We harvest criteria to describe a string API, and reflect on features and design
tensions. This analysis should help language designers in understanding the design space of strings, and will serve as a basis for a
future redesign of the string library in Pharo.
Keywords: Strings, API, Library, Design, Style
1. Introduction
While strings are among the basic types available in most
programming languages, we are not aware of design guidelines,
nor of a systematic, structured analysis of the string API design
space in the literature. Instead, features tend to accrete through
ad-hoc extension mechanisms, without the desirable coherence.
However, the set of characteristics that good APIs exhibit is
generally accepted [1]; a good API:
• is easy to learn and memorize,
• leads to reusable code,
• is hard to misuse,
• is easy to extend,
• is complete.
To evolve an understandable API, the maintainer should assess
it against these goals. Note that while orthogonality, regularity
and consistency are omitted, they arise from the ease to learn
and extend the existing set of operations. In the case of strings,
however, these characteristics are particularly hard to reach, due
to the following design constraints.
For a single data type, strings tend to have a large API: in
Ruby, the String class provides more than 100 methods, in Java
more than 60, and Python’s str around 40. In Pharo (numbers from Pharo 4; the situation in Pharo 3 is very similar), the String class alone understands 319 distinct messages, not counting inherited methods. While a large API is not always a problem per
se, it shows that strings have many use cases, from concatenation
and printing to search-and-replace, parsing, natural or domainspecific languages. Unfortunately, strings are often abused to
eschew proper modeling of structured data, resulting in inadequate serialized representations which encourage a procedural
code style2 . This problem is further compounded by overlapping
design tensions:
Mutability: Strings as values, or as mutable sequences.
Abstraction: Access high-level contents (words, lines, patterns),
as opposed to representation (indices in a sequence of characters, or even bytes and encodings).
Orthogonality: Combining variations of abstract operations;
for instance, substituting one/several/all occurrences corresponding to an index/character/sequence/pattern, in a
case-sensitive/insensitive way.
In previous work, empirical studies focused on detecting nonobvious usability issues with APIs [2–4]; for practical advice on
how to design better APIs, other works cite guideline inventories
built from experience [5, 6]. Joshua Bloch’s talk [7] lists a
number of interesting rules of thumb, but it does not really
bridge the gap between abstract methodological advice (e.g. API
design is an art, not a science) and well-known best practices
(e.g. Avoid long parameter lists). Besides the examples set
by particular implementations in existing languages like Ruby,
Python, or Icon [8], we are not, to the best of our knowledge,
aware of string-specific analyses of existing APIs or libraries
and their structuring principles.
2 Much like with Anemic Domain Models, except the string API is complex:
http://www.martinfowler.com/bliki/AnemicDomainModel.html
In this paper, we are not in a position to make definitive, normative design recommendations for a string library; instead, we
adopt a descriptive approach and survey the design space to
spark discussion around its complexity and towards more understandable, reusable, and robust APIs. To this end, we study the
string libraries of a selection of programming languages, most
object-oriented for a comparison basis with Pharo, with Haskell
and Rust thrown in for some contrast due to their strong design
intents. We consider these languages to be general purpose and
high-level enough that readability, expressivity, and usability are
common goals. However, a caveat: each language comes with
its own culture, priorities, and compromises; we thus have to
keep a critical eye and put our findings in the perspective both of
the design intent of the studied language, and of our own goals
in Pharo. Similarly, we focus the study on the API of the String
class or its equivalent only, and we limit the discussion of related
abstractions to their interactions in the string API. Extending
the study to the APIs of other text processing abstractions like
streams, regular expressions, or parser combinators at the same
level of detail as strings would only make the paper longer.
Section 2 shows the problems we face using the current
Pharo 4 string library. In Sections 3 and 4, we identify idioms
and smells among the methods provided by Pharo’s String class.
Section 5 examines the relevant parts of the ANSI Smalltalk
standard. We survey the features expected of a String API in
Section 6, then existing implementations in several generalpurpose languages such as Java, Haskell, Python, Ruby, and
Rust in Section 7. Finally, we highlight a few design concerns
and takeaways in Section 8, before concluding the paper.
the package of String. That large number of methods makes it
difficult to explore the code, check for redundancies, or ensure
completeness of idioms.
Using the code browser, the developer can group the methods
of a class into protocols. However, since a method can only
belong to one protocol, the resulting classification is not always
helpful to the user. For example, it is difficult to know at first
sight if a method is related to character case, because there is no
dedicated protocol; instead, the case conversion methods are all
part of a larger converting protocol which bundles conversions
to non-string types, representation or encoding conversions, extracting or adding prefixes.
Multiple intertwined behaviors. Strings provide a complex set
of operations for which it is difficult to identify a simple taxonomy. Consider the interaction between features: a single operation can be applied to one or multiple elements or the whole
string, and can use or return an index, an element, a subset or a
subsequence of elements:
Operations: insertion, removal, substitution, concatenation or
splitting
Scope: element, pattern occurrence, anchored subsequence
Positions: explicit indices, intervals, matching queries
Occurrences: first, last, all, starting from a given one
In Pharo we can replace all occurrences of one character by another one using the replaceAll:with: message inherited from SequenceableCollection, or all occurrences of one character by a subsequence (copyReplaceAll:with:). Like these two messages, some operations copy the receiver, while others change it in place. This highlights that strings are really mutable collections of characters, rather than pieces of text, and that changing the size of the string requires copying it. Finally, replacing only one occurrence requires yet another cumbersome message, replaceFrom:to:with:startingAt:.
2. Pharo: Symptoms of Organic API Growth
As an open-source programming environment whose development branched off from Squeak, Pharo inherits many design
decisions from the original Smalltalk-80 library. However, since
the 1980’s, that library has grown, and its technical constraints
have evolved. In particular, since Squeak historically focused
more on creative and didactic experimentation than software
engineering and industrial use, the library has evolved organically more than it was deliberately curated towards a simple and
coherent design.
Even though we restrict the scope of the analysis to the String
class, we face several challenges to identify recurring structures
and idioms among its methods, and to understand and classify
the underlying design decisions.
’aaca’ replaceAll: $a with: $b → ’bbcb’
’aaca’ copyReplaceAll: ’a’ with: ’bz’ → ’bzbzcbz’
’aaca’ replaceFrom: 2 to: 3 with: ’bxyz’ startingAt: 2 → ’axya’
Lack of coherence and completeness. Besides its inherent complexity, intertwining of behaviors means that, despite the large
number of methods, there is still no guarantee that all useful
combinations are provided. Some features are surprisingly absent from, or unexploited in, the basic String class. For instance,
string splitting and regular expressions, which are core features
in Ruby or Python, have long been third-party extensions in
Pharo. They were only recently integrated, so some methods
like lines, substrings:, or findTokens: still rely on ad-hoc implementations. This reveals refactoring opportunities towards better
composition of independent parts.
Moreover, some methods with related behavior and similar
names constrain their arguments differently. For instance, findTokens: expects a collection of delimiter characters, but also
accepts a single character; however, findTokens:keep: lacks that
Large number of responsibilities. As explained in Section 1,
strings propose a wide, complex range of features. For example,
Pharo’s String defines a dozen class variables for character and
encoding properties.
Large number of methods. The current Pharo String class alone
has 319 methods, excluding inherited methods. However, Pharo
supports open-classes: a package can define extension methods
on classes that belong to another package [9, 10]; we therefore
exclude extension methods, since they are not part of the core
behavior of strings. Still, this leaves 180 methods defined in
2
special case. Perhaps more confusingly, some methods with
similar behavior use dissimilar wording: compare the predicates
isAllDigits and onlyLetters, or the conversion methods asUppercase and asLowercase but withFirstCharacterDownshifted.
of the string. A first layer of convenience methods eliminates
the need for two explicit predicates, either by passing the same
one for both ends, or by passing one that disables trimming
at one end (trimBoth:, trimLeft:, and trimRight:). A second layer
of convenience methods passes the default predicate that trims
whitespace (trimLeft, trimBoth, and trimRight). Finally, two additional methods provide concise verbs for the most common case:
whitespace, both ends (trim and trimmed, which are synonymous
despite the naming).
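To make the layering concrete, here is a minimal sketch of how such convenience chains can be written in Pharo; the selectors are those of Figure 1, but the method bodies are illustrative guesses, not copied from the Pharo 4 source.

trimBoth: aBlock
	"Convenience: trim both ends using the same predicate."
	^ self trimLeft: aBlock right: aBlock

trimBoth
	"Convenience: default predicate, trimming whitespace at both ends."
	^ self trimBoth: [ :char | char isSeparator ]

trim
	"Concise verb for the most common case."
	^ self trimBoth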
Convenience methods can also change the result type; the
following list shows a few examples of convenience predicates
wrapping indexing methods.
Impact of immutability. In some languages such as Java and
Python, strings are immutable objects, and their API is designed
accordingly. In Smalltalk, strings historically belong in the
collections hierarchy, and therefore are mutable.
In practice, many methods produce a modified copy of their
receiver to avoid modifying it in place, but either there is no
immediate way to know, or the distinction is made by explicit
naming. For instance, replaceAll:with: works in-place, while
copyReplaceAll:with: does not change its receiver. Moreover,
the VisualWorks implementation supports object immutability,
which poses the question of how well the historic API works in
the presence of immutable strings.
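A small example makes the distinction concrete (results per our understanding of the Pharo 4 semantics; note that we copy the literal to avoid mutating it):

| s |
s := 'aaca' copy.
s replaceAll: $a with: $b.	"in place: s is now 'bbcb'"
s copyReplaceAll: 'c' with: 'zz'.	"→ 'bbzzb'; s itself is left unchanged"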
Trimming ends trim, trimmed, trimLeft:right:,
trimBoth, trimBoth:, trimLeft, trimLeft:, trimRight, trimRight:
Index of character indexOf:, indexOf:startingAt:,
indexOf:startingAt:ifAbsent:
Duplicated or irrelevant code. A few methods exhibit code
duplication that should be factored out. For instance, withBlanksCondensed and withSeparatorsCompacted both deal with repeated whitespace, and findTokens: and findTokens:keep: closely
duplicate their search algorithm.
Similarly, some methods have no senders in the base image,
or provide ad-hoc behavior of dubious utility. For instance, the
method comment of findWordStart:startingAt: mentions “HyperCard style searching” and implements a particular pattern match
that is subsumed by a simple regular expression.
Index of substring findString:, findString:startingAt:,
findString:startingAt:caseSensitive:, and related predicates
includesSubstring:, includesSubstring:caseSensitive:
Macro expansion expandMacros, expandMacrosWith: etc., expandMacrosWithArguments:
Sort order compare:, compare:caseSensitive:,
compare:with:collated:, and predicates sameAs:, caseInsensitiveLessOrEqual:, and caseSensitiveLessOrEqual:
Spelling correction correctAgainst:, correctAgainst:continuedFrom:, correctAgainstDictionary:continuedFrom:, correctAgainstEnumerator:continuedFrom:

3. Recurring Patterns
We list here the most prominent patterns or idioms we found
among the analyzed methods. Although these patterns are not
followed systematically, many of them are actually known idioms that apply to general Smalltalk code, and are clearly related
to the ones described by Kent Beck [5]. This list is meant more
as a support for discussion than a series of precepts to follow.
Lines lines, lineCount, lineNumber:, lineCorrespondingToIndex:,
linesDo:, lineIndicesDo:
Missed opportunity substrings does not delegate to substrings:
This idiom allows concise code when there is a convention
or an appropriate default, without giving up control in other
cases. However, its induced complexity depends on the argument
combinations necessary; it then becomes difficult to check all
related methods for consistency and completeness.
We propose to broaden and clarify the use of this idiom wherever possible, as it is an indicator of how flexible the canonical
methods are, and promotes well-factored convenience methods.
There are several missed opportunities for applying this idiom
in String: for instance copyFrom:to: could have copyFrom: (up to
the end) and copyTo: (from the start) convenience methods.
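For instance, hypothetical convenience methods (our sketch; these selectors do not exist in Pharo 4) could be defined as:

copyFrom: startIndex
	"Convenience: copy from startIndex to the end of the receiver."
	^ self copyFrom: startIndex to: self size

copyTo: stopIndex
	"Convenience: copy from the start of the receiver up to stopIndex."
	^ self copyFrom: 1 to: stopIndex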
Layers of convenience. One of the clearest instances in this
study is the group of methods for trimming (Figure 1). Trimming
a string is removing unwanted characters (usually whitespace)
from one or both of its extremities.
The library provides a single canonical implementation that
requires two predicates to identify characters to trim at each end
Pluggable sentinel case. When iterating over a collection, it is
common for the canonical method to expect a block to evaluate
for degenerate cases. This leads to methods that are more akin
to control flow, and that let the caller define domain computation
in a more general and flexible way.
Methods that follow this idiom typically include either ifNone:
or ifAbsent: in their selector. For context, in a typical Pharo
image as a whole, there are 47 instances of the ifNone: pattern,
and 266 instances of ifAbsent:.
Figure 1: Chains of convenience methods delegating to a single canonical
behavior: trimming at one or both ends.
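For illustration, here is a hedged sketch contrasting the two styles, using indexOf: and indexOf:ifAbsent: from the collection protocols (results per our understanding of Pharo 4):

'hello' indexOf: $x.	"sentinel style → 0, which the caller must remember to check"
'hello' indexOf: $x ifAbsent: [ 0 ].	"pluggable: the caller chooses the fallback value"
'hello' indexOf: $x ifAbsent: [ self error: 'character not found' ].	"or an exception"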
Index lookup indexOf:startingAt:ifAbsent:, indexOfSubCollection:startingAt:ifAbsent:
Conversion or manipulation. String provides 24 methods whose
selector follows the asSomething naming idiom, indicating a
change of representation of the value. Conversely, past participle
selectors, e.g. negated for numbers, denote a transformation of
the value itself, therefore simply returning another value of the
same type. However, this is not strictly followed, leading to
naming inconsistencies such as asUppercase vs. capitalized.
We promote this idiom in all cases where there isn’t a clear-cut
choice of how to react to degenerate cases. Indeed, forcing either
a sentinel value, a Null Object [11], or an exception on user code
forces it to check the result value or catch the exception, then
branch to handle special cases. Instead, by hiding the check, the
pluggable sentinel case enables a more confident, direct coding
style. Of course, it is always possible to fall back to either a
sentinel, null, or exception, via convenience methods.
Type conversions asByteArray, asByteString, asDate, asDateAndTime, asDuration, asInteger, asOctetString, asSignedInteger,
asString, asStringOrText, asSymbol, asTime, asUnsignedInteger, asWideString
Sentinel index value. When they fail, many index lookup methods return an out-of-bounds index; methods like copyFrom:to:
handle these sentinel values gracefully. However, indices resulting from a lookup have two possible conflicting interpretations:
either place of the last match or last place examined. In the
former case, a failed lookup should return zero (since Smalltalk
indices are one-based); in the latter case, one past the last valid
index signifies that the whole string has been examined. Unfortunately, both versions coexist:
’abc’ findString: ’x’ startingAt: 1 → 0
’abc’ findAnySubStr: #(’x’ ’y’) startingAt: 1 → 4
Value transformation or escapement asCamelCase,
asComment, asFourCode, asHTMLString, asHex, asLegalSelector,
asLowercase, asPluralBasedOn:, asUncommentedCode,
asUppercase
Past participles read more fluidly, but they do not always make
sense, e.g. commented suggests adding a comment to the receiver, instead of converting it to one. Conversely, adopting
asSomething naming in all cases would be at the price of some
contorted English (asCapitalized instead of capitalized).
We thus prefer the pluggable sentinel, leaving the choice to user
code, possibly via convenience methods.
4. Inconsistencies and Smells
Zero index findSubstring:in:startingAt:matchTable:, findLastOccurrenceOfString:startingAt:, findWordStart:startingAt:, indexOf:startingAt:, indexOfFirstUppercaseCharacter, indexOfWideCharacterFrom:to:, lastSpacePosition, indexOfSubCollection:
Here we report on the strange things we found that could be fixed or improved in the short term.
Redundant specializations. Some methods express a very similar intent, but with slightly differing parameters, constraints,
or results. When possible, user code should be rewritten in
terms of a more general approach; for example, many of the
pattern-finding methods could be expressed as regular expression matching.
Past the end findAnySubStr:startingAt:, findCloseParenthesisFor:,
findDelimiters:startingAt:
Iteration or collection. Some methods generate a number of
separate results, accumulating and returning them as a collection.
This results in allocating and building an intermediate collection,
which is often unnecessary since the calling code needs to iterate
them immediately. A more general approach is to factor out the
iteration as a separate method, and to accumulate the results as
a special case only. A nice example is the group of line-related
methods that rely on lineIndicesDo:; some even flatten the result
to a single value rather than a collection.
Substring lookup findAnySubStr:startingAt: and findDelimiters:startingAt: are synonymous if their first argument is a collection of single-character delimiters; the difference is that
the former also accepts string delimiters.
Character lookup indexOfFirstUppercaseCharacter is redundant
with SequenceableCollection»findFirst: with very little performance benefit.
Collection lines, allRangesOfSubstring:, findTokens:, findTokens:keep:, findTokens:escapedBy:, substrings, substrings:
Ad-hoc behavior. Ad-hoc methods simply provide convenience
behavior that is both specific and little used. Often, the redundant-specialization smell applies as well.
Iteration linesDo:, lineIndicesDo:
In our opinion, this idiom reveals a wider problem with
Smalltalk’s iteration methods in general, which do not decouple the iteration per se from the choice of result to build —
in fact, collections define a few optimized methods like select:thenCollect: to avoid allocating an intermediate collection.
There are many different approaches dealing with abstraction
and composability in the domain of iteration: push or pull values, internal or external iteration, generators, and more recently
transducers [12, 13].
Numeric suffix numericSuffix has only one sender in the base
Pharo image; conversely, it is the only user of stemAndNumericSuffix and endsWithDigit; similarly, endsWithAColon
has only one sender.
Finding text findLastOccurrenceOfString:startingAt: has only one
sender, related to code loading; findWordStart:startingAt: has
no senders.
4
Find tokens findTokens:escapedBy: has no senders besides tests;
findTokens:includes: has only one sender, related to email
address detection; findTokens:keep: only has two senders.
Replace tokens copyReplaceTokens:with: has no senders and is
convenience for copyReplaceAll:with:asTokens:; redundant
with regular expression replacement.
Figure 2: Inheritance of the ANSI Smalltalk protocols.
Miscellaneous lineCorrespondingToIndex:
Mispackaged or misclassified methods. There are a couple of methods that do not really belong to String:
SequencedReadableCollection. The sequencedReadableCollection protocol conforms to the collection protocol; it provides
behavior for reading an ordered collection of objects whose elements can be accessed using external integer keys between one
and the number of elements in the collection. It specifies that
conforming objects should support the following messages — we add
some of the argument names for clarity:
• asHex concatenates the literal notation for each character
(e.g., 16r6F) without any separation, producing an ambiguous result; it could be redefined using flatCollect:.
• indexOfSubCollection: should be defined in SequenceableCollection; also, it is eventually implemented in terms of findString:, which handles case, so it is not a simple subsequence lookup.
Concatenation: , tail (the comma binary message)
Equality: = other
Element access: at: index, at: index ifAbsent: block, first, last,
before: element, after:, findFirst: block, findLast:
Many ad-hoc or dubious-looking methods with few senders seem
to come from the completion engine; the multiple versions and
forks of this package have a history of maintenance problems,
and it seems that methods that should have been extensions have
been included in the core packages.
Subsequence access: from: startIndex to: stopIndex do: block
Transforming: reverse
Substitution: copyReplaceAll: elements with: replacingElements,
copyReplaceFrom: startIndex to: stopIndex with: replacingElements, copyReplacing: targetElement withObject: replacingElement, copyReplaceFrom: startIndex to: stopIndex withObject:
replacingElement
Misleading names. Some conversion-like methods are actually
encoding or escaping methods: they return another string whose
contents match the receiver’s, albeit in a different representation
(uppercase, lowercase, escaped for comments, as HTML. . . ).
Index of element(s): indexOf: element, indexOf:ifAbsent:,
indexOfSubCollection:startingAt:, indexOfSubCollection:startingAt:ifAbsent:
Duplicated code. Substring testing methods beginsWithEmpty:caseSensitive: and occursInWithEmpty:caseSensitive: are
clearly duplicated: they only differ by a comparison operator.
They are also redundant with the generic beginsWith:, except for
case-sensitivity. Moreover, the –WithEmpty: part of their selector
is confusing; it suggests that the argument is supposed to be empty, which makes no sense. Finally, their uses hint that they were probably
defined for the completion engine and should be packaged there.
Copy: copyFrom: startIndex to: lastIndex, copyWith: element, copyWithout:
Iteration: do:, from:to:keysAndValuesDo:, keysAndValuesDo:, reverseDo:, with:do:
Many operations require explicit indices that have to be obtained
first, making the API not very fluid in practice. Moreover, the
naming is often obscure: for example, copyWith: copies the
receiver, and appends its argument to it.
5. The ANSI Smalltalk Standard
ReadableString. This protocol provides messages for string
operations such as copying, comparing, replacing, converting,
indexing, and matching. All objects that conform to the readableString protocol are comparable. The copying messages inherited from the sequencedReadableCollection protocol keep the
same behavior. Here is the list of messages:
The ANSI standard defines some elements of the Smalltalk
language [14]. It gives the definition “String literals define
objects that represent sequences of characters.” However, there
are few guidelines helpful for designing a string API.
The ANSI standard defines the readableString protocol as conforming to the magnitude protocol (which supports the comparison of entities) and to the sequencedReadableCollection protocol,
as shown in Figure 2 [14, section 5.7.10]. We present briefly the
protocol sequencedReadableCollection.
Concatenation: , (comma)
Comparing: <, <=, >, >=
Converting: asLowercase, asString, asSymbol, asUppercase
Testing. Strings provide many predicates, most importantly determining emptiness, or inclusion of a particular substring, prefix
or suffix. Other predicates range from representation concerns,
like determining if all characters belong to the ASCII subset,
to others of a more ad-hoc nature, like checking if the string is all
uppercase or parses as an identifier.
Substituting: copyReplaceAll:with:,
copyReplaceFrom:to:with:,
copyReplacing:withObject:, copyWith:
Subsequence access: subStrings: separatorCharacters
Testing: sameAs:
Iterating. Strings are often treated as collections of items. In
Pharo a string is a collection of characters and as such it inherits
all the high-level iterators defined in SequenceableCollection and
subclasses. Similarly, Haskell’s Data.String is quite terse (just 4
or so functions), but since strings are Lists, the whole panoply of
higher-level list functions is available: foldr, map, etc.
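For example, the generic iterators apply directly to Pharo strings (results per our understanding):

'hello world' select: [ :char | char isVowel ].	"→ 'eoo'"
'hello world' collect: [ :char | char asUppercase ].	"→ 'HELLO WORLD'"
'hello world' count: [ :char | char isSeparator ].	"→ 1"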
Analysis and ANSI Compliance. Indices are omnipresent, and
very few names are specific to strings as opposed to collections,
which makes the protocol feel shallow, low-level, and implementation-revealing. In particular, because the underlying design is
stateful, the copyReplace* messages have to explicitly reveal that
they do not modify their receiver through cumbersome names. In
a better design, naming would encourage using safe operations
over unsafe ones.
We believe that the value added by complying with the ANSI
standard is shallow. Indeed, the standard has not been updated to
account for evolutions such as immutability, and it does not help
building a fluent, modern library. ANSI should not be followed
for the design of a modern String library.
Endogenous conversion. Strings can be transformed into other
strings according to domain-specific rules: this covers encoding and escaping, case transpositions, pretty-printing, natural
language inflexion, etc.
Exogenous conversion. Since strings serve as a human-readable
representation or serialization format, they can be parsed back
into non-string types such as numbers, URLs, or file paths.
Mutating vs copying. Strings may be considered as collections
and provide methods to modify their contents in-place, as opposed to returning a new string with different contents from the
original. Note that this point is orthogonal to the other ones, but
influences the design of the whole library.
Mutating strings is dangerous, because strings are often used
as value objects, and it is not clear at first sight if a method
has side-effects or not. For example, in translateToUppercase,
the imperative form hints that it is an in-place modification,
but not in trim. Also, safe transformations often rely on their
side-effect counterpart: for instance, the safe asUppercase sends
translateToUppercase to a copy of its receiver.
In the case of strings, we believe methods with side effects
should be clearly labeled as low-level or private, and their use
discouraged; moreover, a clear and systematic naming convention indicating the mutable behavior of a method would be a
real plus. Finally, future developments of the Pharo VM include
the Spur object format, which supports immutable instances;
this is an opportunity to make literal strings safe4 , and to reduce
copying by sharing character data between strings.
6. An Overview of Expected String Features
Different languages do not provide the exact same feature
set3 , or the same level of convenience or generality. However,
comparing various programming languages, we can identify
the main behavioral aspects of strings. Note that these aspects
overlap: for instance, transposing a string to upper-case involves
substitution, and can be performed in place or return a new
string; splitting requires locating separators and extracting parts
as smaller strings, and is a form of parsing.
Extracting. Locating or extracting parts of a string can be supported by specifying either explicit indices, or by matching
contents with various levels of expressiveness: ad-hoc pattern,
character ranges, regular expressions.
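As a sketch in Pharo terms, the same substring can be located either by explicit indices or by matching; asRegex and matchesIn: are, to the best of our knowledge, part of the Regex package shipped with Pharo, not of String itself:

'hello world' copyFrom: 7 to: 11.	"explicit indices → 'world'"
('\w+' asRegex matchesIn: 'hello world') last.	"pattern matching → 'world'"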
Splitting. Splitting strings into chunks is the basis of simple
parsing and string manipulation techniques, like counting words
or lines in text. To be useful, splitting often needs to account
for representation idiosyncrasies like which characters count as
word separators or the different carriage return conventions.
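For instance, in Pharo (results per our understanding of the Pharo 4 behavior):

'one two  three' substrings.	"→ #('one' 'two' 'three'), treating runs of separators as one"
'1,2,,3' findTokens: $,.	"→ tokens '1', '2', '3'; empty fields are silently dropped"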
7. Strings in Other Languages
Merging. The reverse of splitting is merging several strings
into one, either by concatenation of two strings, or by joining a
collection of strings one after another, possibly with separators.
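In Pharo, both operations look as follows; join: exists on strings in recent Pharo versions, to the best of our knowledge:

'Hello' , ', ' , 'world'.	"binary concatenation → 'Hello, world'"
', ' join: #('red' 'green' 'blue').	"→ 'red, green, blue'"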
To support the analysis and redesign of the current string
libraries in Pharo, we analysed the situation in several other
languages. We took two criteria into account to select the languages below: mainstream object-oriented languages but also
new languages showing alternative designs. Indeed, our study
is about the design of the API at the level of features, how they
compose together and in relation with other types, and how they
are organized in terms of individual methods or functions. In
Substituting. The popularity of Perl was built on its powerful
pattern-matching and substitution features. The difficulty with
substitution is how the API conveys whether one, many, or all
occurrences are replaced, and whether a sequence of elements
or a single element is replaced.
3 They can even rely on specific syntax, like Ruby’s string interpolation.
4 While clever uses for mutable literals have been demonstrated in the past, we think it is a surprising feature and should not be enabled by default.
that light, we believe that the underlying programming paradigm
is just one of many factors that influence the API design. For instance, it would be possible and probably desirable to have fewer
side-effects and more declarative method names in Pharo’s string
API, resulting in a style much closer to functional programming
than the current string implementation; Haskell, with its own
limits, provides a worthwhile reference point in that direction.
We will present the key characteristics of the design of strings
in Haskell, Java, Python, Ruby, and Rust. Then we will discuss
some of the design choices these libraries illustrate.
7.1. Haskell
In Haskell, the default string implementation Data.String is
actually a linked list of characters. This was a design choice
to reuse the existing pattern matching and list manipulation
functions with virtually no string-specific code; but it is also
known to have a huge space overhead and bad performance
characteristics for usual string use. However, if we look further
than the core libraries that come with GHC, the Haskell Platform
distribution also provides Data.Text, an implementation of strings
based on a packed array of UTF-16 codepoints. The same
package also includes a lazy variant of that data structure.
In terms of interfaces, Data.List5 and Data.Text6 are of similar sizes (respectively 116 and 94 functions), but share 60 functions in common, including Data.Text.append and Data.Text.index, which are defined as the (++) and (!!) operators in Data.List (see Table 1). This is because many list functions do not apply to lists of characters: lookup expects an association list, the functions and and or expect lists of booleans, sum expects a list of numbers, etc. Conversely, Data.Text defines additional functions that are related to formatting (center, justifyLeft, toLower, toTitle), cleaning up (dropAround, strip), or parsing text (breakOn, split).
Haskell— 60 functions common to both Data.List and Data.Text: (!!) (index), (++) (append), all, any, break, concat, concatMap, drop, dropWhile, dropWhileEnd, filter, find, findIndex, foldl, foldl', foldl1, foldl1', foldr, foldr1, group, groupBy, head, init, inits, intercalate, intersperse, isInfixOf, isPrefixOf, isSuffixOf, last, length, lines, map, mapAccumL, mapAccumR, maximum, minimum, null, partition, replicate, reverse, scanl, scanl1, scanr, scanr1, span, splitAt, stripPrefix, tail, tails, take, takeWhile, transpose, uncons, unfoldr, unlines, unwords, words, zip, zipWith.

56 functions specific to Data.List: (\\), and, cycle, delete, deleteBy, deleteFirstsBy, elem, elemIndex, elemIndices, findIndices, genericDrop, genericIndex, genericLength, genericReplicate, genericSplitAt, genericTake, insert, insertBy, intersect, intersectBy, isSubsequenceOf, iterate, lookup, maximumBy, minimumBy, notElem, nub, nubBy, or, permutations, product, repeat, scanl', sort, sortBy, sortOn, subsequences, sum, union, unionBy, unzip, unzip3, unzip4, unzip5, unzip6, unzip7, zip3, zip4, zip5, zip6, zip7, zipWith3, zipWith4, zipWith5, zipWith6, zipWith7.

34 functions specific to Data.Text: breakOn, breakOnAll, breakOnEnd, center, chunksOf, commonPrefixes, compareLength, cons, copy, count, dropAround, dropEnd, empty, justifyLeft, justifyRight, pack, replace, singleton, snoc, split, splitOn, strip, stripEnd, stripStart, stripSuffix, takeEnd, takeWhileEnd, toCaseFold, toLower, toTitle, toUpper, unfoldrN, unpack, unpackCString#.
Table 1: Functions defined by Haskell modules Data.List and Data.Text
7.3. Python
7.2. Java
Python’s string type is str8 , an immutable sequence of Unicode codepoints, whose methods are listed in Table 3. Besides
those methods, it also inherits special methods that implement
the behavior for the sequence-related expressions (index-based
access, count and presence of elements). A few additional functions are defined in module string9 , most notably printf-style
formatting, and Python also provides io.StringIO, a stream-like
object to compose large strings efficiently, but this provides a
limited API similar to a file stream, unlike Java’s StringBuilder
which supports insertion and replace operations.
The general impression is that the API is pretty terse, especially since there are some symmetric sets of methods, i.e., strip, lstrip, rstrip. Some methods seem too specialized to be present in such a small API (e.g., swapcase, title, istitle).
Finally, since Python, like Ruby, does not have an individual
character type, some character-specific behavior is reported on
strings: out of 11 predicates, only two really apply specifically
In Java, instances of the String class are immutable (See Table 2). This means that strings can be shared, but also that
concatenating them allocates and copies memory. To build complex strings while limiting memory churn, the standard library
provides StringBuilder and StringBuffer; both have the exact same
interface, except the latter is thread-safe. Finally, CharSequence
is an interface which groups a few methods for simple read-only
access to string-like objects; it seems like it has a similar purpose as Rust’s slices, but Java strings do not appear to share
their underlying character data: subSequence() is the same as
substring(), which copies the required range of characters.
Third-party libraries such as Apache Commons7 provide additional string-related methods in utility classes such as StringUtils.
However, since those classes only define static methods, they do
not lend themselves to late binding and polymorphism.
5 https://hackage.haskell.org/package/base-4.9.0.0/docs/Data-List.html
6 https://hackage.haskell.org/package/text-1.2.2.1/docs/Data-Text.html
8 https://docs.python.org/3.6/library/stdtypes.html#textseq
7 https://commons.apache.org
9 https://docs.python.org/3.6/library/string.html
Java— 35 methods in String: charAt, codePointAt, codePointBefore, codePointCount, compareTo, compareToIgnoreCase, concat, contains, contentEquals, endsWith, equals, equalsIgnoreCase, getBytes, getChars, hashCode, indexOf, intern, isEmpty, lastIndexOf, length, matches, offsetByCodePoints, regionMatches, replace, replaceAll, replaceFirst, split, startsWith, subSequence, substring, toCharArray, toLowerCase, toString, toUpperCase, trim.

24 methods in StringBuffer/StringBuilder: append, appendCodePoint, capacity, charAt, codePointAt, codePointBefore, codePointCount, delete, deleteCharAt, ensureCapacity, getChars, indexOf, insert, lastIndexOf, length, offsetByCodePoints, replace, reverse, setCharAt, setLength, subSequence, substring, toString, trimToSize.
Table 2: Methods defined in Java on string-like classes
Python— 42 methods in str: capitalize, casefold, center, count, encode, endswith, expandtabs, find, format, format_map, index, isalnum, isalpha, isdecimal, isdigit, isidentifier, islower, isnumeric, isprintable, isspace, istitle, isupper, join, ljust, lower, lstrip, partition, replace, rfind, rindex, rjust, rpartition, rstrip, split, splitlines, startswith, strip, swapcase, title, translate, upper, zfill.

Table 3: Methods defined in Python on the str text sequence type
Ruby— 116 methods in String: %, *, +, -, <<, <=>, ==, ===, =~, [], []=, ascii_only?, b, bytes, bytesize, byteslice, capitalize (!), casecmp, center, chars, chomp (!), chop (!), chr, clear, codepoints, concat, count, crypt, delete (!), downcase (!), dump, each_byte, each_char, each_codepoint, each_line, empty?, encode (!), encoding, end_with?, eql?, force_encoding, freeze, getbyte, gsub (!), hash, hex, include?, index, initialize, insert, inspect, intern, length, lines, ljust, lstrip (!), match, next (!), oct, ord, partition, prepend, replace, reverse (!), rindex, rjust, rpartition, rstrip (!), scan, scrub (!), setbyte, size, slice (!), split, squeeze (!), start_with?, strip (!), sub (!), succ (!), sum, swapcase (!), to_c, to_f, to_i, to_r, to_s, to_str, to_sym, tr (!), tr_s (!), unpack, upcase (!), upto, valid_encoding?
Table 4: Methods defined in Ruby’s String class. Methods marked with (!)
have an associated in-place version following the Ruby naming convention; e.g.
upcase returns an uppercased copy while upcase! modifies the receiver in-place.
• a range object, locating the substring by start/end bounds
instead of by its length,
to strings (isidentifier and istitle), the other 9 being universally
quantified character predicates. Encoding and decoding between bytes and Unicode strings is done via the str.encode() and
bytes.decode() methods, which rely on another package: codecs;
here again, character-specific or encoding-specific behavior does
not seem to exist as first-class objects, as codecs are specified
by name (strings).
• a regular expression, optionally with a capture group specifying which part of the matched substring to return,
• another string, returning it if it occurs in the receiver.
Note also that indices can be negative, in which case they are
relative to the end of the string.
Another widely adopted naming convention in Ruby is that
methods with names terminated by an exclamation point modify
their receiver in-place instead of returning a modified copy;
strings are a nice example of this pattern, as more than a third of
the methods belong to such copy/in-place pairs.
7.4. Ruby
Ruby’s strings are mutable sequences of bytes10; however, each
String instance knows its own encoding. Ruby’s message send
syntax is quite expressive, and many of its APIs make extensive
use of optional parameters and runtime type cases to provide
behavior variants.
A first example is the convention that iteration methods
each_byte, each_char, each_codepoint, and each_line either behave as an internal iterator (i.e., a higher-order function) when
passed a block, or return an enumerator object when the block
is omitted (external iteration).
A second example is the [ ] method, which implements the
square bracket notation for array access; on strings, this is used
for substring extraction, and accepts a number of parameter
patterns:
7.5. Rust
Rust has two main types for character strings: string slices,
represented by the pointer type &str11 , and the boxed type String12
(Table 5). Both types store their contents as UTF-8 bytes; however, while String is an independent object that owns its data,
allocates it on the heap and grows it as needed, &str is a view over
a range of UTF-8 data that it does not own itself. Literal strings
in Rust code are immutable &str slices over statically-allocated
character data.
Making a String from a &str slice thus requires allocating a
new object and copying the character data, while the reverse
operation is cheap. In fact, the compiler will implicitly cast
a String into a &str as needed, which means that in practice,
• a single index, returning a substring of length one (Ruby
does not have an individual character type),
• a start index and an explicit length,
11 https://doc.rust-lang.org/std/primitive.str.html
10 http://www.rubydoc.info/stdlib/core/String
12 https://doc.rust-lang.org/std/string/struct.String.html
Rust— 43 methods defined on string slices &str: as_bytes, as_ptr, bytes, char_indices, chars, contains, encode_utf16, ends_with, escape_debug, escape_default, escape_unicode, find, into_string, is_char_boundary, is_empty, len, lines, match_indices, matches, parse, replace, replacen, rfind, rmatch_indices, rmatches, rsplit, rsplit_terminator, rsplitn, split, split_at, split_at_mut, split_terminator, split_whitespace, splitn, starts_with, to_lowercase, to_uppercase, trim, trim_left, trim_left_matches, trim_matches, trim_right, trim_right_matches.

26 methods defined on the boxed String type: as_bytes, as_mut_str, as_str, capacity, clear, drain, from_utf16, from_utf16_lossy, from_utf8, from_utf8_lossy, insert, insert_str, into_boxed_str, into_bytes, is_empty, len, new, pop, push, push_str, remove, reserve, reserve_exact, shrink_to_fit, truncate, with_capacity.

Table 5: Methods defined in Rust on strings and string slices

Python:
ord(’a’) ⇒ 97
ord(’abc’) ⇒ TypeError: ord() expected a character, but string of length 3 found
ord(’’) ⇒ TypeError: ord() expected a character, but string of length 0 found

Ruby:
?a.class ⇒ String
?a.ord ⇒ 97
’a’.ord ⇒ 97
’abc’.ord ⇒ 97
’’.ord ⇒ ArgumentError: empty string

Table 6: Character / string confusion in Python and Ruby. Both languages use degenerate strings in place of characters; Ruby does have a literal character syntax, but it still represents a one-character string.
First-class characters or codepoints. In Ruby or Python, characters are strings of length one, which has strange implications
on some methods, as shown in table 6. Was that choice made
because the concept of character or codepoint was deemed useless? If so, is it due to lack of need in concrete use-cases, or
due to early technical simplifications, technical debt and lack
of incentives to change? If not, is it undertaken by separate
encoding-related code, or by strings, even though it will be often used on degenerate single-character instances? There is a
consensus nowadays around Unicode, which makes encoding
conversions a less pressing issue; however, Unicode comes with
enough complexities of its own —without even considering
typography— that it seems a dedicated character/codepoint type
would be useful. For instance, JavaScript implements strings
as arrays of 16-bit integers to be interpreted as UTF-16, but
without taking surrogate sequences into account, which means
that the length method is not guaranteed to always return the
actual number of characters in a string.
all methods of slices are also available on boxed strings, and
String only adds methods that are concerned with the concrete
implementation.
A surprising design decision in Rust is that strings do not implement the array-like indexing operator. Instead, to access the contents of a string, the library requires explicit use of iterators. This is motivated by the tension between the need, as a
systems programming language, to have precise control of memory operations, and the fact that practical, modern encodings
(be it UTF-8 or UTF-16) encode characters into a varying number of bytes. Variable-length encoding makes indexed access
to individual characters via dead-reckoning impossible: since
the byte index depends on the space occupied by all preceding
characters, one has to iterate from the start of the string. The implications are two-fold: first, this design upholds the convention
that array-like indexing is a constant-time operation returning
values of fixed size. Second, multiple iterators are provided on
equal footing (methods bytes(), chars(), lines(), or split()), each
of them revealing a different abstraction level, with no intrinsic
or default meaning for what the n-th element of a string is; this
also makes the interface more uniform.
Sharing character data. Second, there is a compromise between
expressivity and control over side effects, data copying, and
memory allocation. Many applications with heavy reliance on
strings (e.g., parsers, web servers) benefit from sharing character
data across several string instances, because of gains both in
memory space and in throughput; however, this requires that
the shared data does not change. In this regard, Rust’s string
slices are interesting because they provide substrings of constant
size and creation time without adding complexity to the API.
Conversely, Haskell’s lazy string compositions, or data structures
like ropes, provide the equivalent for concatenation, without a
distinct interface like Java’s StringBuilder.
8. Reflection on String APIs
It is difficult to form an opinion on the design of an API before
getting feedback from at least one implementation attempt. Still,
at this stage, we can raise some high level points that future
implementors may consider. We start by discussing some issues
raised in the analysis of the previous languages, then we sketch
some proposals for a future implementation.
Matching and regular patterns. Regular patterns, in languages
where they are readily available, are highly effective at analyzing strings. We did not discuss them here, because while they
are a sister feature of strings, they really are a domain-specific
language for working on strings that can be modularized independently, much like full-fledged parsers. A way to do that
is to make regular expressions polymorphic with other string-accessing types such as indices, ranges, or strings as patterns;
Ruby does this by accepting various types as argument of its indexing/substring methods, and Rust by defining a proper abstract
8.1. Various APIs in Perspective
While proper assessment of API designs would be more suited
for a publication in cognitive sciences, putting a few languages
in perspective during this cursory examination of the string API
raised a few questions.
type Pattern that regular patterns implement.
Consistency and cleanups. Finally, we would like to consolidate
close methods into consistently named groups or even chains of
methods whenever possible. Immutable strings would favor a
declarative naming style.
The current implementation suffers from the presence of many
ad-hoc convenience methods, many of which do not belong in
the core API of strings and should be extracted or removed.
Several methods are related to converting between strings
and other kinds of objects or values. These conversion methods
come in a limited set that is neither generic nor complete; instead
we would prefer a clear, generic, but moldable API for parsing
instances of arbitrary classes out of their string representations.
8.2. Concerns for a New String Implementation
For an API to provide rich behavior without incurring too
much cognitive load, it has to be regular and composable.
Strings and characters are different concepts. The distinction
between character and string types distributes functionality in
adequate abstractions. Characters or codepoints can offer behavior related to their encoding, or even typographic or linguistic
information such as which alphabet they belong to.
Note that the implementation does not have to be naive and
use full-fledged character objects everywhere. In Pharo, String is
implemented as a byte or word array in a low-level encoding,
and Character instances are only created on demand. Most importantly, a character is not a mere limited-range integer. In this
regard, the design of Rust validates that design choice.
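A few examples of character-level behavior in Pharo illustrate that characters are objects with their own protocol (results per our understanding):

$a isVowel.	"→ true"
$a asUppercase.	"→ $A"
$a value.	"→ 97, the underlying code point"
'abc' first class.	"→ Character"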
9. Discussion and Perspectives
In this paper, we assess the design of character strings in
Pharo. While strings are simple data structures, their interface is
surprisingly large. Indeed, strings are not simple collections of
elements; they can be seen both as explicit sequences of characters, and as simple but very expressive values from the domain
of a language or syntax. In both cases, strings have to provide
a spectrum of operations with many intertwined characteristics:
abstraction or specialization, flexibility or convenience. We analyze the domain and the current implementation to identify
recurring idioms and smells.
The idioms and smells we list here deal with code readability and reusability at the level of messages and methods; they
fall in the same scope as Kent Beck’s list [5]. While the paper
focuses on strings, the idioms we identify are not specific to
strings, but to collections, iteration, or parameter passing; modulo differences in syntax and style usages, they apply to other
libraries or object-oriented programming languages. To identify the idioms and smells, we rely mostly on code reading and
the usual tools provided by the Smalltalk environment. This is
necessary in the discovery stage, but it raises several questions:
Strings are sequences, but not collections. Strings differ from
usual lists or arrays in that containing a specific element does
not really matter per se; instead, their contents have to be interpreted or parsed. We think this is why their iteration interface
is both rich and ad hoc, and follows many arbitrary contextual
conventions like character classes or capitalization. From this
perspective, we should probably reconsider the historical design
choice to have String be a Collection subclass.
Iterations. Strings represent complex data which can be queried,
navigated, iterated in multiple ways (bytes, characters, words,
lines, regular expression matches. . . ).
Iteration based on higher-order functions is an obvious step
in this direction; Smalltalk dialects use internal iterators as the
iconic style to express and compose iterations, but this seems to
have discouraged the appearance of an expressive set of streaming or lazy abstractions like Ruby’s enumerators or Rust’s iterators. Therefore, external iterators should be investigated, under
the assumption that extracting the control flow may lead to better composability. Of course, co-design between strings and
collection/stream libraries would be beneficial.
• How to document groups of methods that participate in a
given idiom? As we say in Section 2, method protocols are
not suitable: they partition methods by feature or theme,
but idioms are overlapping patterns of code factorization
and object interaction.
Encodings. It is misguided to assume that characters always
directly map to bytes, or that any sequence of bytes can be
viewed as characters. To bridge bytes and characters, encodings
are required; the API should take them into account explicitly,
including provisions for impossible conversions and probably
for iteration of string contents simultaneously as characters and
as encoded data.
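As a sketch, the Zinc encoders available in recent Pharo versions already make this bridging explicit; ZnUTF8Encoder and utf8Encoded are assumptions about that library, not part of the String API analyzed here:

'héllo' utf8Encoded.	"String → ByteArray of UTF-8 bytes"
ZnUTF8Encoder new decodeBytes: #[104 195 169 108 108 111].	"→ 'héllo'"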
• How to specify, detect, check, and enforce idioms in the
code? This is related to architecture conformance techniques [15].
String buffers and value strings. Pharo strings currently have
a single mutable implementation which is used in two distinct
roles: as a value for querying and composing, and as a buffer
for in-place operations. Streams can assemble large strings
efficiently, but more complex editing operations rely on fast data
copying because of the underlying array representation.
Distinguishing these two roles would allow for internal representations more suited to each job and for a more focused API.
In particular, the guarantees offered by immutable strings and
views like Rust’s slices open many possibilities for reducing
data copies and temporary object allocations.
References
[1] J. Blanchette, The little manual of API design, http://www4.in.tum.de/
~blanchet/api-design.pdf (Jun. 2008).
[2] J. Stylos, S. Clarke, B. Myers, Comparing API design choices with usability studies: A case study and future directions, in: P. Romero, J. Good,
E. A. Chaparro, S. Bryant (Eds.), 18th Workshop of the Psychology of
Programming Interest Group, University of Sussex, 2006, pp. 131–139.
doi:10.1.1.102.8525.
[3] J. Stylos, B. Myers, Mapping the space of API design decisions, in: IEEE
Symposium on Visual Languages and Human-Centric Computing, 2007,
pp. 50–57. doi:10.1109/VLHCC.2007.44.
Extracting
Methods returning particular substrings.
[4] M. Piccioni, C. A. Furia, B. Meyer, An empirical study of API usability, in: IEEE/ACM Symposium on Empirical Software Engineering and
Measurement, 2013. doi:10.1109/ESEM.2013.14.
[5] K. Beck, Smalltalk Best Practice Patterns, Prentice-Hall, 1997.
URL
http://stephane.ducasse.free.fr/FreeBooks/
BestSmalltalkPractices/Draft-Smalltalk%20Best%20Practice%
20Patterns%20Kent%20Beck.pdf
[6] K. Cwalina, B. Abrams, Framework Design Guidelines: Conventions,
Idioms, and Patterns for Reusable .Net Libraries, 1st Edition, AddisonWesley Professional, 2005.
[7] J. Bloch, How to design a good api and why it matters, in: Companion to
the 21st ACM SIGPLAN Symposium on Object-oriented Programming
Systems, Languages, and Applications, OOPSLA ’06, ACM, 2006, pp.
506–507. doi:10.1145/1176617.1176622.
URL http://doi.acm.org/10.1145/1176617.1176622
[8] R. E. Griswold, M. T. Griswold, The Icon Programming Language, Peerto-Peer Communications, 1996.
URL http://www.peer-to-peer.com/catalog/language/icon.html
[9] A. Bergel, S. Ducasse, O. Nierstrasz, R. Wuyts, Classboxes:
Controlling visibility of class extensions, Journal of Computer
Languages, Systems and Structures 31 (3-4) (2005) 107–126.
doi:10.1016/j.cl.2004.11.002.
URL
http://rmod.inria.fr/archives/papers/
Berg05a-CompLangESUG04-classboxesJournal.pdf
[10] C. Clifton, G. T. Leavens, C. Chambers, T. Millstein, MultiJava: Modular
open classes and symmetric multiple dispatch for Java, in: OOPSLA 2000
Conference on Object-Oriented Programming, Systems, Languages, and
Applications, 2000, pp. 130–145.
[11] B. Woolf, Null object, in: R. Martin, D. Riehle, F. Buschmann (Eds.),
Pattern Languages of Program Design 3, Addison Wesley, 1998, pp. 5–18.
[12] S. Murer, S. Omohundro, D. Stoutamire, C. Szyperski, Iteration abstraction
in sather, ACM Transactions on Programming Languages and Systems
18 (1) (1996) 1–15. doi:10.1145/225540.225541.
[13] R. Hickey, Clojure transducers, http://clojure.org/transducers.
[14] ANSI, New York, American National Standard for Information Systems –
Programming Languages – Smalltalk, ANSI/INCITS 319-1998, http://wiki.
squeak.org/squeak/uploads/172/standard_v1_9-indexed.pdf (1998).
[15] S. Ducasse, D. Pollet, Software architecture reconstruction: A processoriented taxonomy, IEEE Transactions on Software Engineering 35 (4)
(2009) 573–591. doi:10.1109/TSE.2009.19.
URL
http://rmod.inria.fr/archives/papers/
Duca09c-TSE-SOAArchitectureExtraction.pdf
Appendix — Classifying the Pharo String API

Extracting
Methods returning particular substrings.
    wordBefore:
    findSelector (mispackaged, specific to code browser)
    findTokens:
    findTokens:escapedBy: (no senders, besides tests)
    findTokens:includes: (one sender)
    findTokens:keep:
    lineCorrespondingToIndex:
    squeezeOutNumber (ugly parser, one sender)
    splitInteger (what is the use-case?)
    stemAndNumericSuffix (duplicates previous method)

Splitting
Methods returning a collection of substrings.
    lines
    subStrings:
    substrings (not a call to previous one, why?)
    findBetweenSubStrs:
    keywords (adhoc, assumes receiver is a selector)

Enumerating
    linesDo:
    lineIndicesDo:
    tabDelimitedFieldsDo:

Conversion to other objects
Many core classes such as time, date and duration that have a compact and meaningful textual description extend the class String to offer conversion from a string to their objects. Most of them could be packaged with the classes they refer to, but splitting a tiny core into even smaller pieces does not make a lot of sense, and there are legitimate circular dependencies in the core: a string implementation cannot work without integers, for example. Therefore, most of these methods are part of the string API from the core language point of view:
    asDate
    asTime
    asDuration
    asDateAndTime
    asTimeStamp
    asNumber
    asInteger
    asSignedInteger
    asString
    asSymbol
    asStringOrText
    asByteArray
Some other methods are not as essential:
    asFourCode
    romanNumber
    string
    stringhash

Finding
Methods returning places in the string (indices, ranges).
    findString:
    findString:startingAt:
    findString:startingAt:caseSensitive:
    findLastOccurrenceOfString:startingAt:
    allRangesOfSubString:
    findAnySubStr:startingAt:
    findCloseParenthesisFor:
    findDelimiters:startingAt:
    findWordStart:startingAt: (no senders)
    findIn:startingAt:matchTable: (auxiliary method)
    findSubstring:in:startingAt:matchTable: (auxiliary method)
    findSubstringViaPrimitive:in:startingAt:matchTable: (one sender)
    indexOf:
    indexOf:startingAt:
    indexOf:startingAt:ifAbsent:
    indexOfSubCollection: (mispackaged)
    indexOfSubCollection:startingAt:ifAbsent:
    indexOfFirstUppercaseCharacter (redundant, one sender)
    indexOfWideCharacterFrom:to:
    lastSpacePosition
    lastIndexOfPKSignature:
    skipAnySubStr:startingAt:
    skipDelimiters:startingAt:

Conversion between strings
A different set of conversion operations occurs between strings themselves.
    • typography and natural language: asLowercase, asUppercase, capitalized, asCamelCase, withFirstCharacterDownshifted, asPluralBasedOn:, translated, translatedIfCorresponds, translatedTo:
    • content formatting: asHTMLString, asHex, asSmalltalkComment, asUncommentedSmalltalkCode (adhoc or mispackaged)
    • internal representation: asByteString, asWideString, asOctetString
    • line endings: withLineEndings:, withUnixLineEndings, withSqueakLineEndings, withInternetLineEndings (convenience, used a lot), withCRs (ad-hoc, should be an extension)

Encoding
    convertFromEncoding:
    convertToEncoding:
    convertToSystemString
    convertFromWithConverter:
    convertToWithConverter:

Streaming
    printOn:
    putOn:
    storeOn:
    encodeDoublingQuoteOn:

Comparing
    compare:
    compare:caseSensitive:
    compare:with:collated:
    caseSensitiveLessOrEqual:
    caseInsensitiveLessOrEqual:
    sameAs:

Matching
    alike:, howManyMatch: (similarity metrics)
    charactersExactlyMatching: (bad name: common prefix length)
    match:
    startingAt:match:startingAt: (inconsistent name)

Testing
    endsWith:
    endsWithAnyOf:
    endsWithAColon
    startsWithDigit
    endsWithDigit
    hasContentsInExplorer
    includesSubstring:
    includesSubstring:caseSensitive:
    includesUnifiedCharacter
    hasWideCharacterFrom:to:
    isAllDigits
    isAllAlphaNumerics
    isAllSeparators
    onlyLetters
    isString
    isAsciiString
    isByteString
    isOctetString
    isWideString
    isLiteral (bad name, duplicate)
    isLiteralSymbol (bad name, mispackaged)
    beginsWithEmpty:caseSensitive:
    occursInWithEmpty:caseSensitive: (duplicates the two previous)

Querying
    lineCount
    lineNumber:
    lineNumberCorrespondingToIndex:
    indentationIfBlank:
    initialIntegerOrNil
    numericSuffix
    numArgs (selector-related)
    parseLiterals (contents of a literal array syntax)

Low-Level Internals
    hash
    typeTable
    byteSize
    byteAt:
    byteAt:put:
    writeLeadingCharRunsOn:
    leadingCharRunLengthAt:

Substituting
    copyReplaceAll:with:asTokens:
    copyReplaceTokens:with:
    expandMacros
    expandMacrosWithArguments:
    expandMacrosWith:
    expandMacrosWith:with:
    expandMacrosWith:with:with:
    expandMacrosWith:with:with:with:
    format:
    replaceFrom:to:with:startingAt: (primitive)
    translateWith:
    translateFrom:to:table:
    translateToLowercase
    translateToUppercase

Correcting
    correctAgainst:
    correctAgainst:continuedFrom:
    correctAgainstDictionary:continuedFrom:
    correctAgainstEnumerator:continuedFrom:

Operations
    contractTo:
    truncateTo:
    truncateWithElipsisTo:
    encompassLine:
    encompassParagraph:
    withNoLineLongerThan:
    withSeparatorsCompacted
    withBlanksCondensed
    withoutQuoting
    withoutLeadingDigits
    withoutTrailingDigits
    withoutPeriodSuffix
    withoutTrailingNewlines
    padLeftTo:
    padLeftTo:with:
    padRightTo:
    padRightTo:with:
    padded:to:with:
    surroundedBy:
    surroundedBySingleQuotes
    trim
    trimmed
    trimBoth
    trimBoth:
    trimLeft
    trimLeft:
    trimLeft:right:
    trimRight
    trimRight:
    asLegalSelector

Candidates for removal
While performing this analysis we identified some possibly obsolete methods.
    asPathName
    asIdentifier:
    do:toFieldNumber:
    indexOfFirstUppercaseCharacter
| 6 |
SemTK: An Ontology-first, Open Source Semantic Toolkit for Managing
and Querying Knowledge Graphs
PAUL CUDDIHY, GE Global Research, cuddihy@ge.com
JUSTIN MCHUGH, GE Global Research, mchugh@ge.com
JENNY WEISENBERG WILLIAMS, GE Global Research, weisenje@ge.com
VARISH MULWAD, GE Global Research, varish.mulwad@ge.com
KAREEM S. AGGOUR, GE Global Research, aggour@ge.com
ABSTRACT: The relatively recent adoption of Knowledge Graphs as an enabling technology in multiple high-profile
artificial intelligence and cognitive applications has led to growing interest in the Semantic Web technology stack. Many
semantics-related tools, however, are focused on serving experts with a deep understanding of semantic technologies. For
example, triplification of relational data is available but there is no open source tool that allows a user unfamiliar with
OWL/RDF to import data into a semantic triple store in an intuitive manner. Further, many tools require users to have a
working understanding of SPARQL to query data. Casual users interested in benefiting from the power of Knowledge
Graphs have few tools available for exploring, querying, and managing semantic data. We present SemTK, the Semantics
Toolkit, a user-friendly suite of tools that allow both expert and non-expert semantics users convenient ingestion of relational
data, simplified query generation, and more. The exploration of ontologies and instance data is performed through
SPARQLgraph, an intuitive web-based user interface in SemTK that is understandable and navigable by a lay user. The open
source version of SemTK is available at http://semtk.research.ge.com.
KEYWORDS
Visual SPARQL query generation, data triplification, data ingestion, semantic data management
1 INTRODUCTION
With the success of several commercial artificial intelligence and cognitive applications such as Siri, Cortana,
Google Now and Watson, knowledge graphs have been rapidly gaining traction. However, the Semantic Web
technology stack, which provides a foundation to maintain and query knowledge graphs, poses a significant
barrier to their adoption by non-semantic subject matter experts in scientific and industrial communities. Tools
such as Protégé [1] and SADL [2] (http://sadl.sourceforge.net/) have made rapid strides in reducing barriers for ontology design and creation.
However, there exist very few tools with the same level of maturity to explore, query and manage semantic data
in knowledge graphs, and to the best of our knowledge, none provide a seamless integrated experience to
perform all three tasks within a single tool.
In this paper, we present the Semantics Toolkit (SemTK), an open source project designed to make semantics
accessible to both expert and non-expert users in a user-friendly manner. SemTK allows users to explore, query
and manage semantic data through its SPARQLgraph user interface. While existing approaches for natural
language querying over RDF data and techniques for data triplification tend to abstract and hide the semantic
model from users, SemTK leverages the domain ontology coupled with the domain expertise of subject matter
experts to simplify these tasks. Chiefly through SPARQLgraph, it allows users to explore and search complex
ontologies and construct a subgraph of interest both for query generation and data ingestion. SemTK has been
developed in the context of the needs of a large industrial business, General Electric (GE), but with applicability
to a much wider audience well outside of the industrial domains in which GE operates. Given that the toolkit
was initially developed in a relatively controlled industrial environment rich with subject matter experts with
immense knowledge of their respective domains, we chose to develop SemTK with an “ontology first”
mentality.
This paper introduces SemTK’s novel capabilities to save semantic queries and execute them with runtime
constraints in a manner similar to SQL stored procedures. We focus on the power and specificity of the
SPARQL query language as opposed to higher level abstractions, and have sought to harness its power by
making SPARQL easy to author for both data ingestion and retrieval by developing a toolkit designed to work
with SPARQL 1.1 compliant data stores such as OpenLink's Virtuoso (https://virtuoso.openlinksw.com). In this paper, we lay out the basic
architecture of SemTK and use its SPARQLgraph user interface to describe its underlying concepts of
connections and nodegroups (a subgraph of interest to the user) and how we leverage them for query generation,
data triplification and ingestion. We explore local pathfinding and the use of automatically-generated SPARQL
queries to suggest contents of VALUES clauses, and how these innovations greatly enhance a user’s ability to
quickly generate SPARQL. We discuss issues and optimization strategies for different query types, and lay out a
novel approach for generating INSERT queries that ingest tabular data. The use of query storage with runtime
constraints constitutes a stored procedure-like capability that facilitates the use of semantics within higher-level
applications. An External Data Connectivity (EDC) service provides Ontology-Based Data Access to data
residing outside of the semantic store. Overall, we show how these research contributions combine to create a
powerful open source toolkit that accelerates the process of ingesting and exploring data, and provides a service
layer on which other knowledge-driven applications can be built.
2 SEMANTICS TOOLKIT
SemTK is comprised of a suite of Java REST services that work with SPARQL 1.1 compliant triple stores. A
web-based tool called SPARQLgraph has been built on top of those microservices. SPARQLgraph has been
designed to both highlight and make the features of SemTK easy to use. The REST services layer is designed to
be deployed in standard cloud environments such as Amazon Web Services, and to be used directly by a wide
range of semantics-enabled applications. For the purposes of this paper, however, most functionality will be
described in terms of the SPARQLgraph interface. We begin by describing fundamental concepts associated
with SemTK and SPARQLgraph.
2.1 Ontology Connection and Ontology Info
A SPARQLgraph session first requires the specification of triple store endpoint connections, one to ontology
information and another to instance data. An ontology connection contains the “model domain” which defines
the base URI of classes and attributes contained in a model. The ontology may reference imported entities such
as XMLSchema number, string, and date primitives.
Ontology information is loaded via SPARQL queries into a local cache for fast access. This cache is called
“Ontology Info”, and includes classes, subclass relationships, class properties and types, and permitted values
for enumeration classes. This information is displayed to the user as a hierarchical list in the ontology panel (the
left-hand pane of SPARQLgraph) as shown in Figure 1. This panel allows users to explore the loaded domain
ontologies in preparation for querying or data ingestion. The hierarchical list conveys the subclass relationship
between classes. On the expansion of each class, it also displays the associated datatype and object properties
along with their respective ranges. To deal with the challenge of ontologies with 100s to 1000s of classes and
properties, this panel provides a search control enabling users to quickly drill down and highlight any part of the
ontology matching the search string. Users begin constructing a subgraph by selecting classes from this panel.
Figure 1: The SPARQLgraph Interface displays the ontology in the left panel and the subgraph of interest on the
right
2.2 Query Construction and Generation
With the ontology information loaded and searchable, the user can begin drag-and-drop query generation and
query execution against the second type of connection: the data endpoints. SemTK supports the generation of
the following subset of SPARQL query types available in SPARQL 1.1 and SPARQL Update: select, construct,
ask, delete, and insert. Query generation with SemTK allows for several advanced features, including regex,
“values” clauses, counting, limits and optional values.
Query generation begins with the construction of a nodegroup. A nodegroup is a SemTK-specific
representation of a subgraph of interest. It is built by dragging and dropping classes from the ontology panel to
the visualization window. A simple nodegroup is shown in Figure 2.
Figure 2: Visual representation of the nodegroup showing multiple instances of the same class (Cell) connecting to
different object properties (cell1, cell2)
Each node in the nodegroup represents a class variable in a query, and has a unique name and a list of
properties. The properties are split such that those outside the ontology domain are listed first. These properties
are often primitive OWL datatype properties, but may also be properties linked to objects outside the domain
(e.g. objects in DBpedia). When nodes are linked together into a subgraph as shown, the nodegroup is a
powerful representation crucial to almost every SemTK function.
2.2.1 Pathfinding. It is often the case that a user may want to create a query incorporating entities separated by many links in a large ontology. This would require a complex SPARQL query consisting of many clauses representing the chain of relationships between the two entities. Further, there may be more than one possible path connecting the two entities. SPARQLgraph and SemTK simplify the construction of SPARQL queries over such ontologies with a feature known as pathfinding. Pathfinding allows users to drop two classes on the canvas and select the desired connecting path. Pathfinding greatly enhances the ease of building nodegroups: as a new class is dragged from the ontology panel onto the nodegroup canvas, SemTK uses a slight variation of the well-known A* algorithm (https://en.wikipedia.org/wiki/A*_search_algorithm) to suggest various paths by which the new class might connect to the nodes already in the nodegroup. SemTK uses object property ranges to identify possible classes in a path.
To accomplish this, the A* algorithm has been modified with stopping logic. The search is limited to a
maximum path length (typically 10 links) and/or search time (typically 30 seconds). Further, search path lengths
can be limited such that once a path of length n is found, searching ends when all paths of length (n+m) have
been explored. This search, restricted to local paths instead of a theoretically complete set, provides an
indispensable feature for the efficient assembly of nodegroups. The combination of pathfinding with the
performance enabled by the ontology cache allows users to quickly build complex nodegroups, and
subsequently auto-generate SPARQL.
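The stopping logic can be pictured with the following Python sketch; it illustrates the bounds described above and is not SemTK's actual implementation, and the graph encoding is hypothetical.

    from collections import deque

    def suggest_paths(edges, start, goal, max_len=10, slack=2):
        # Breadth-first enumeration of simple paths; queue order guarantees
        # nondecreasing path length, so the n + m cutoff below is sound.
        paths, shortest = [], None
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            length = len(path) - 1
            if shortest is not None and length > shortest + slack:
                break                    # all paths up to length n + m explored
            if length > max_len:
                continue                 # maximum path length bound
            if path[-1] == goal:
                paths.append(path)
                shortest = length if shortest is None else shortest
                continue
            for nxt in edges.get(path[-1], []):
                if nxt not in path:      # keep paths simple (no cycles)
                    queue.append(path + [nxt])
        return paths

    edges = {"DuraBattery": ["Cell"], "Cell": ["Color"]}
    print(suggest_paths(edges, "DuraBattery", "Color"))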
2.2.2 Property Selections and Filters. Once the nodegroup is constructed, the user may click to select the
properties of interest to return from each of the nodes. Further, SemTK provides a user-friendly way to add
SPARQL FILTER and VALUES clauses to constrain any property in the query. Users can easily apply regular
expression filters by hand-editing a simple FILTER statement template pre-populated by SemTK for the given
property. For example, such a statement for a battery id property might look like this: FILTER regex(?batteryId,
"AABB"), with the user typing in the text between the quotes. Alternatively, SemTK can suggest valid values
for a property based on the contents of the triple store, which the user may then select for inclusion in a
VALUES clause. In this case, SemTK queries under the hood for all possible values, and then presents them in
list form to the user. The queries used to populate this list are SELECT DISTINCT queries with two
modifications: only the target variable is returned, and all filters or values clauses are removed from that
variable. Only those values that satisfy the constraints and relations specified in the nodegroup are returned. A
search control helps users narrow down values to select in cases where there are a large number of possible
values.
Figure 3: Nodegroup for values clauses on ?BatteryID and ?Color
For example, the nodegroup in Figure 3 represents all DuraBattery instances that have a Cell object property
on cell1 where the Cell has a Color. When the user opens the dialog to build a values clause on ?Color, the
system uses the algorithm above to execute the query shown in Figure 4 in the background.
Figure 4: Example of automatically generated query
The results of this query are a list of cell colors in the position cell1 in any DuraBattery. When the user
selects one or more colors from the list, the system automatically builds a values clause similar to Figure 5.
Figure 5: Generated values clause
This functionality becomes more powerful when it is chained together. For example, if the user were now to
ask for possible values for ?DuraBattery’s batteryId, the same process would result in a list of only batteryId
values that are attached to ?DuraBattery instances that have cell1 objects with colors of blue or red. Using this
iterative method of building values clauses, a user can quickly down-select and navigate through complex data.
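The shape of the generated clause is simple; the helper below is an illustrative sketch, not a SemTK API, of how selected values become a VALUES clause like the one in Figure 5.

    def values_clause(var, picked):
        # e.g. values_clause("Color", ["blue", "red"])
        #   -> 'VALUES ?Color { "blue" "red" }'
        body = " ".join(f'"{v}"' for v in picked)
        return f"VALUES ?{var} {{ {body} }}"

    print(values_clause("Color", ["blue", "red"]))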
2.2.3 Query Generation. Once a nodegroup is constructed, SemTK can automatically generate SPARQL to
perform various operations (e.g. select, delete) using the nodegroup. To generate a SPARQL select statement,
SemTK walks the nodegroup and generates clauses for each node, including clauses for class type, relationships
to other nodes, and relationships to properties. Filter and values clauses are generated where the user has
specified them, and items marked for return are listed at the top of the SELECT DISTINCT statement. Consider the nodegroup in Figure 2. If ?DuraBattery's batteryId was selected for return, then SemTK would generate the SPARQL shown in Figure 6.
Figure 6: Generated SPARQL query based on nodegroup shown in Figure 3
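As a rough Python sketch of that walk (SemTK's real generator also handles optionals, filters, nesting and prefixes, so this is illustrative only):

    def generate_select(nodes, returns):
        # Returned variables first, then one type clause and one clause per
        # property relationship for each node in the nodegroup.
        lines = ["SELECT DISTINCT " + " ".join("?" + r for r in returns) + " WHERE {"]
        for node in nodes:
            lines.append(f"  ?{node['var']} a <{node['cls']}> .")
            for prop, target in node.get("props", []):
                lines.append(f"  ?{node['var']} <{prop}> ?{target} .")
        lines.append("}")
        return "\n".join(lines)

    nodes = [{"var": "DuraBattery", "cls": "http://ex#DuraBattery",
              "props": [("http://ex#batteryId", "batteryId")]}]
    print(generate_select(nodes, ["batteryId"]))

Wrapping the same generated text in "SELECT (COUNT(*) as ?count)" gives the count variant described below.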
2.2.4 Optional. When constructing a query, a user may mark a class or property as optional, indicating that
they wish to retrieve the requested data whether the marked class or property is present or not. In this case,
SemTK query generation encloses all nodes downstream (or upstream) of the connecting arrow inside a
SPARQL OPTIONAL clause. Clauses are nested as the SPARQL is generated.
2.2.5 Additional Clauses. The user interface enables specification of a maximum number of results to return,
which SemTK implements by adding a LIMIT clause to the query. Further, users can select the “count” query
type to return the number of results rather than the result contents. In this case, SemTK transforms the original
select query into a count query by wrapping it using “SELECT (COUNT(*) as ?count)”.
2.2.6 Delete Queries. A user may construct a nodegroup representing data to be deleted from the triple
store. SemTK generates DELETE queries from such nodegroups. Deletion is straightforward for class attributes
(including links between nodes), and consists of simply removing any instance of the given attribute from
instances of the given class, taking account any surrounding constraints. In contrast, in the case of deleting
instances of classes, the user may choose from several modes to control the scope of the deletion. The simplest
mode removes triples of the specified class. Other modes provide the option to delete all triples with the node
variable in the subject or object, or to limit the deletion to predicates in the loaded model, or predicates in the
nodegroup. Together, these delete modes provide the user with a wide range of options to support practical
applications.
2.3 SPARQL Query Optimization
SemTK’s SPARQL query generation required proactive optimization to maintain acceptable performance. In
early testing, the benchmark triple stores (OpenLink Virtuoso, Jena TDB (https://jena.apache.org/documentation/tdb/), and Fuseki (https://jena.apache.org/documentation/serving_data/)) showed poor
performance when attempting to execute generated queries. These early queries were generated by naively
stepping through the contents of the nodegroup without regard for the impact of clause ordering on SPARQL
query performance.
To optimize query performance, the SemTK strategy is to order clauses from those expected to be most
specific to least specific. The system assumes that the relationships described in the ontology are directional.
Any relationship not stated explicitly as being bi-directional is assumed to be outgoing from the subject to the
object. It is also assumed that requested instances of classes with no incoming relationships are, in many cases,
more tightly specified than other instances. This is because in the sort of use cases for which the nodegroup is
most useful, these instances often have outgoing connections. These instances are placed first in the generated
queries, followed immediately by their outgoing connections, which act as constraints. This ordering has the effect of decreasing the search space the SPARQL engine must examine to
fulfill the later clauses. By the time the least specified instances are bound, the potential space has been
decreased because the outgoing connections from the more specified entries have limited their scope.
The use of the above technique produced significant improvements over the original naïve implementation.
In the case of Virtuoso, this led to greatly improved query execution times, making SPARQLgraph usable as a
practical, interactive web-based tool. In the case of Fuseki and Jena TDB, it resulted in rapid responses, instead
of queries which ran for multiple minutes before failing due to timeout errors. Further improvements to the
query optimization are in progress and will be discussed in the future direction section.
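The heuristic amounts to a reordering pass over the nodegroup before clause emission; a toy version, with a hypothetical incoming-edge map, might look like this:

    def order_nodes(nodes, incoming):
        # Instances with no incoming relationships are assumed to be the most
        # tightly specified, so their clauses (and outgoing connections) come
        # first and prune the search space for everything bound later.
        roots = [n for n in nodes if not incoming.get(n)]
        rest = [n for n in nodes if incoming.get(n)]
        return roots + rest

    nodes = ["Cell", "Color", "DuraBattery"]
    incoming = {"Cell": ["DuraBattery"], "Color": ["Cell"]}
    print(order_nodes(nodes, incoming))  # -> ['DuraBattery', 'Cell', 'Color']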
2.4 Data Triplification and Ingestion
SemTK also provides capabilities to convert CSV data into RDF triples and ingest them into a triple store.
Using the same ontology-based interaction model as the rest of the system, data triplification and ingestion are intuitive and provide advanced features for checking the incoming data. The triplification and ingestion of data using SemTK take place in three steps. The first step is constructing a nodegroup in SPARQLgraph that
will be used to define the structure of the triples to be generated. The second step is the drag-and-drop mapping
of columns from a CSV (or table) to the constructed nodegroup. The third step is running the ingestion process
itself, applying the mapping to the input data, generating triples, and inserting them into the triple store.
Figure 7: Example nodegroup used for ingestion
2.4.1 Building a Nodegroup for Ingestion. Continuing with the DuraBattery example, consider the case of
ingesting a table containing ids for batteries and their four cells, cell colors, and battery assembly date and
description. The first step of ingestion is building a nodegroup to represent the subgraph describing the instance
data, as shown in Figure 7.
Figure 8: Mapping of properties in nodegroup from Figure 7 (left) to columns from a csv file (right)
2.4.2 Creating the Ingestion Mapping. Next, SPARQLgraph provides drag-and-drop generation of
ingestion mappings. The mappings associate one or more columns in the CSV data (shown in green at right) to
one or more attributes in the nodegroup (shown on the left). Additionally, transformations are available,
allowing the column values to be altered before being assigned. Free text values can be added to the mapping as
well. Figure 8 shows the mapping, which in this case is composed primarily of one-to-one mappings of table
columns to properties. The URI values are slightly more complex. The value of the DuraBattery URI will be the
text “BATT_” prepended to the value from the table’s BATTERY_ID column. The BATTERY_ID value will
also be processed by a user-defined text transformation named "ALPHANUM" which removes non-alphanumeric characters in order to form a legal URI. This type of user-specified URI is particularly useful for
linking data when instance data is ingested from multiple sources and mappings. The user is currently
responsible for managing uniqueness and legality of URIs and matching them across multiple ingestion
mappings. On the other hand, Cell URIs have been left blank, so they will be automatically generated by the
ingestion process.
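Concretely, the URI mapping for DuraBattery in Figure 8 behaves like the small sketch below (helper names are illustrative, not SemTK identifiers):

    import re

    def alphanum(value):
        # ALPHANUM-style transform: drop anything that is not alphanumeric.
        return re.sub(r"[^A-Za-z0-9]", "", value)

    def battery_uri(row):
        # Prepend the fixed text "BATT_" to the transformed BATTERY_ID column.
        return "BATT_" + alphanum(row["BATTERY_ID"])

    print(battery_uri({"BATTERY_ID": "b-17/03"}))  # -> BATT_b1703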
2.4.3 Ontology-based Data Checking. SemTK’s ingestion uses the ontology to validate ingestion attempts
before proceeding. This validation occurs via two mechanisms. The first is a check of the nodegroup-based
mapping. The second is a per-record check of the incoming data to guarantee conformance. This primary check
compares the mapping and its nodegroup against the current version of the ontology. The entire ontology need
not match, but all classes used in the mapping must still exist. Further, the domains and ranges of all properties
used in the mapping must still match those in the ontology. If this check passes, the mapping is valid and
ingestion proceeds.
The secondary check is more exhaustive and relies on the results of the first check. For a consistent ingestion
that respects the type and structural constraints imposed by the ontology, each incoming record from the input
data is checked and each class instance relationship must be valid.
Using the nodegroup to give structure to the ingested records enforces the correctness and validity of the
class instance relationships. This allows SemTK to avoid having to perform this check for each incoming record.
Consolidating these actions into a single check improves performance dramatically over sequential solutions.
The checking of the actual data is straightforward, and consists of confirming that each incoming record
conforms to the type specified by the ontology. If each value in a record is valid, the record is considered
conformant. In the case of URIs, the outgoing value must be a proper URI.
Finally, the ingestion tool handles blank values in incoming records by pruning any path in the nodegroup
that has no non-blank values. For example, if a record has neither an id nor a color for cell3, then no triples will
be generated for ?Cell3 or ?Color_3.
2.4.4 Preflight data checking mode. Ingestion using SemTK offers two modes. Preflight mode examines
all data in the input set prior to ingestion, and checks for per-record failures which would be encountered during
ingestion. If any failures are found, the insertion to the triple store does not occur and an error report is
generated. This report lists each failing record, its original record number in the set and the first reason
encountered that would have caused a failure for that record. If no problems are encountered, the ingestion is run
normally and data is inserted into the triple store. This is the default mode.
An alternative mode ingests records without checking the entire set first. In this mode, failures are treated as
independent and acceptable. This mode is treated as an advanced user feature and is less commonly used than
preflight mode, as records in an input set are often not independent.
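The contract between the two modes can be summarised in a few lines of Python; check and insert stand in for SemTK's per-record validation and triple insertion, so this is a sketch of the behaviour rather than the service code:

    def ingest(records, check, insert, preflight=True):
        if preflight:
            # Validate everything first; on any failure, report and insert nothing.
            errors = []
            for i, rec in enumerate(records):
                msg = check(rec)          # None means the record is valid
                if msg is not None:
                    errors.append((i, msg))
            if errors:
                return {"inserted": 0, "errors": errors}
        inserted = 0
        for rec in records:               # alternative mode: failures independent
            if check(rec) is None:
                insert(rec)
                inserted += 1
        return {"inserted": inserted, "errors": []}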
2.4.5 Ingestion Optimizations. SemTK employs optimizations that accelerate the ingestion process,
reducing the amount of work required to validate the inserted triples. This is achieved by taking advantage of the
nodegroup structure and the ontology itself. Using the nodegroup as its structural underpinning allows the
ingestion process to check the single structure once, complete with the expected mappings and data types,
regardless of the number of records that are to be ingested. A second optimization treats class instance relations
as implicit and allows these associations to also be validated exactly once. Optimizations to reduce technical
overhead are also employed, but are highly implementation-specific and thus not described here.
2.4.6 Graph Management Operations. SPARQLgraph provides a collection of operations used to manage
data in the graphs storing both the ontology and instance data. These features are intended to facilitate
operations that cannot be performed using the nodegroup-based generated queries. These include:
• Upload OWL: uploads an OWL file to the target graph
• Clear prefix: removes all triples which have URIs containing the given prefix
• Clear graph: removes all the triples from the target graph
2.5 Stored Procedure Support
Central to SemTK is the mission of making a subset of Semantic Web technologies readily accessible to non-experts. A critical step is making ingestion and query support available to existing workloads in a way easily
understood by users. To this end, SemTK includes support for nodegroup-derived stored procedures. In
relational database systems, a stored procedure is a subroutine available to simplify the execution of a complex
query or operation. Typically, these are accessed by name and allow the caller to pass a collection of parameters.
SemTK adopts this concept by allowing a user to store a nodegroup for subsequent execution. The stored
nodegroup acts as the basis of a stored procedure but, unlike in SQL, the stored nodegroups can perform
multiple tasks based on the selected query type. SemTK’s stored procedure support allows the definition of
parameters to be specified at runtime. These parameters are defined when the nodegroup is created. They are
specifically intended to be overridden at runtime, as opposed to filter constraints which are permanently fixed in
stored nodegroups. This dichotomy allows the creator of the nodegroup to divide the constraints into those that
are meaningful to the nature of the workload from those that can be used to alter results. Runtime parameters are
intended to be convenient to use by users unfamiliar with SPARQL. To this end, inserting arbitrary SPARQL is
not permitted. Rather, the user must provide the parameter name, operation type, and collection of values. For
each data type supported (Date/Time, String, URI, numeric), the number of operations supported is tightly
controlled. SemTK provides features to store and execute stored nodegroups as well as to set, examine and apply
runtime parameters. Together, these features form basic stored procedure support ready to be integrated into
traditional workloads.
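The flavour of the mechanism is captured by this sketch (names and the template store are hypothetical; a real stored nodegroup is executed by the SemTK services rather than by string formatting):

    STORE = {}

    def store_nodegroup(name, template, runtime_params):
        # Only parameters declared at creation time may be overridden later.
        STORE[name] = (template, set(runtime_params))

    def run_stored(name, **params):
        template, allowed = STORE[name]
        unknown = set(params) - allowed
        if unknown:
            raise ValueError("not runtime parameters: %s" % unknown)
        return template % params

    store_nodegroup("batteryByColor",
                    'SELECT ?batteryId WHERE { ... VALUES ?Color { "%(color)s" } }',
                    ["color"])
    print(run_stored("batteryByColor", color="blue"))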
2.6 External Data Connection
While the triple store is an effective storage mechanism for many datasets, it is not suitable for many types of
data, particularly those binary in nature (e.g., image data, files) or those requiring high overhead to store in a
triple store (e.g. time series data). To address this, SemTK includes extensive capabilities for Ontology-Based
Data Access (OBDA) [3], enabling data stored outside of the triple store to be linked, browsed, and queried in
domain-relevant terms as if it were part of the triple store. This functionality is achieved by storing metadata in
the triple store that describes data stored elsewhere. SemTK’s OBDA features are referred to as External Data
Connection, and are closed-source at present. While this feature is not a focus of this paper, a further discussion of this capability can be found in [4].
3 CONCLUSION AND FUTURE WORK
We presented SemTK, a Semantics Toolkit that enables expert and non-expert users alike to triplify, query
and manage semantic data in an easy, user-friendly and intuitive manner. Using SPARQLgraph, we detailed
how users can connect to existing triple stores to retrieve their domain ontologies and explore them. We also
described how users can easily generate SPARQL queries via a drag and drop web-based user interface. The
process of constructing a query is highly simplified with features such as pathfinding, which helps users connect
two arbitrary classes without deep knowledge of the semantic model. We also described how SemTK can be
used to not only generate different types of queries but also for generating mappings to translate CSV data into
triples and ingesting them into a triple store. Finally, we detailed how users can save and re-use SPARQL
queries via a SQL stored procedure-like capability with runtime constraints. Demos are available at
http://semtk.research.ge.com, and the source code is provided under the MIT license at https://github.com/ge-semtk/semtk.
There are several exciting future directions for SemTK. We plan to focus on further optimizing the generated
SPARQL queries by intelligently reordering the clauses. For instance, by keeping a count of the number of
instances of each class present in the triple store, SemTK could automatically rearrange SPARQL queries,
placing the clauses involving the fewest number of instances toward the top of the query. SemTK's ontology-first approach can be expanded by inferring an approximate semantic model from instance data, thereby
allowing users to query against arbitrary SPARQL endpoints in Linked Open Data. Finally, SemTK’s
triplification and ingestion process can be improved by reconciling and linking data cells in a CSV to existing
instances and objects in the triple store.
ACKNOWLEDGMENTS
The authors acknowledge the technical contributions of Ravi Palla and Christina Leber, and program support
from Steven Gustafson, Matthew C. Nielsen, Arvind Menon, Tim Healy, David Hack, Eric Pool, Parag Goradia
and Ryan Oattes.
REFERENCES
[1] Mark A. Musen. 2015. The Protégé Project: A Look Back and A Look Forward. AI Matters 1, 4 (2015), 4–12.
[2] Andrew Crapo and Abha Moitra. 2013. Toward a Unified English-Like Representation of Semantic Models, Data, and Graph Patterns for Subject Matter Experts. International Journal of Semantic Computing 7, 03 (2013), 215–236.
[3] Holger Wache, Thomas Voegele, Ubbo Visser, Heiner Stuckenschmidt, Gerhard Schuster, Holger Neumann, and Sebastian Hübner. 2001. Ontology-based integration of information: a survey of existing approaches. In IJCAI-01 Workshop: Ontologies and Information Sharing, Vol. 2001. Seattle, USA, 108–117.
[4] Jenny Weisenberg Williams, Paul Cuddihy, Justin McHugh, Kareem S. Aggour, Arvind Menon, Steven M. Gustafson, and Timothy Healy. 2015. Semantics for Big Data access & integration: Improving industrial equipment design through increased data usability. In Proceedings of the IEEE International Conference on Big Data. 1103–1112. https://doi.org/10.1109/BigData.2015.7363864
| 2 |
Uniform Proofs of Normalisation and
Approximation for Intersection Types
Kentaro Kikuchi
RIEC, Tohoku University
Katahira 2-1-1, Aoba-ku, Sendai 980-8577, Japan
kentaro@nue.riec.tohoku.ac.jp
We present intersection type systems in the style of sequent calculus, modifying the systems that
Valentini introduced to prove normalisation properties without using the reducibility method. Our
systems are more natural than Valentini’s ones and equivalent to the usual natural deduction style
systems. We prove the characterisation theorems of strong and weak normalisation through the proposed systems, and, moreover, the approximation theorem by means of direct inductive arguments.
This provides in a uniform way proofs of the normalisation and approximation theorems via type
systems in sequent calculus style.
1 Introduction
A traditional way of proving strong normalisation for typed λ -terms is the reducibility method [20],
which uses set-theoretic comprehension. Other methods without using reducibility have also been studied in the literature (see, e.g. Section 5 of [19] for a review of those methods). Some of them use an
inductive characterisation of strongly normalising λ -terms given by van Raamsdonk and Severi [18].
In [21], Valentini introduced, instead of using the inductive characterisation, an intersection type system that is closed under the rules of the original system, and proved strong normalisation by a simple
induction on the typing derivation.
In this paper we develop Valentini’s approach further providing an improvement on his system and its
extensions with an axiom for the type constant ω . These systems are in the style of sequent calculus and
equivalent to the original intersection type systems in natural deduction style. Using the new systems, we
prove the characterisation theorems of strong and weak normalisation, which are well-known properties
of intersection type systems [17, 8].
Another important point in our approach is that we design new systems that derive the same sequents
as the original natural deduction style systems do, so that we can prove various other properties than
normalisation by simple inductions on the typing derivation (cf. [15]). In the present paper we illustrate
that by showing the approximation theorem for the type system with ω , which is usually proved using
reducibility predicates over a typing context and a type (see, e.g. [11, 4]).
The difference between the systems in [21] and ours is the following. First, some rules of the systems
in [21] have restrictions requiring some types to be type variables. Also, the rule for abstraction takes a form that
implies the η -rule. On the other hand, our systems do not have the restrictions on types, and our rule
for abstraction is the usual one. In this natural setting, we show that our system is closed under the rules
of the original natural deduction style system. This part of the proof of strong normalisation is much
shorter than that in [21]. Secondly, the system characterising weakly normalising λ -terms in [21] does
not have the type constant ω , and is not related to the original natural deduction style system. In this
paper, we introduce new systems with an axiom for the type constant ω , and prove weak normalisation
of λ -terms that are typable with ω -free types in the original system. The closure under the rules of the
original system is shown by almost the same argument as that in the case of the system without ω .
In [21], only normalisation properties are discussed, and other properties than normalisation are not
proved using the sequent calculus style systems. Some other papers [18, 16, 9, 1] have studied strong
normalisation for terms typable with intersection types without using reducibility. Each of them uses
an inductive characterisation of strongly normalising terms, but any other properties than normalisation
have not been treated. So the present paper seems to be the first to apply a proof method for normalisation
without reducibility to other properties of intersection type systems.
There is also an attempt in [3] to give uniform proofs of the characterisation theorems of normalisation and the approximation theorem. The method goes through strong normalisation of a reduction on
typing derivations. However, it uses reducibility predicates to prove the strong normalisation, and the
proof seems more complicated than ours.
The organisation of the paper is as follows. In Section 2 we introduce two kinds of intersection type
systems. In Section 3 we prove the characterisation theorem of strong normalisation through the new
type system. In Section 4 we introduce type systems with ω , and prove the characterisation theorem of
weak normalisation. In Section 5 we prove the approximation theorem using one of the new systems
with ω .
2
Intersection type systems
In this section we introduce two intersection type systems: one is in the ordinary natural deduction style
and the other in sequent calculus style. They prove to be equivalent, and both characterise strongly
normalising λ -terms.
First we introduce some basic notions on the λ -calculus [5]. The set Λ of λ -terms is defined by
the grammar: M ::= x | MM | λ x.M where x ranges over a denumerable set of variables. We use letters
x, y, z, . . . for variables and M, N, P, . . . for λ -terms. The notions of free and bound variables are defined
as usual. The set of free variables occurring in a λ-term M is denoted by FV(M). We identify α-convertible λ-terms, and use ≡ to denote syntactic equality modulo α-conversion. M[x := N] denotes the usual capture-free substitution.
The β -rule is stated as (λ x.M)N → M[x := N], and β -reduction is the contextual closure of the β rule. We use −→β for one-step reduction, and −→∗β for its reflexive transitive closure. A λ -term M is
said to be strongly (weakly) normalising if all (some, respectively) β -reduction sequences starting from
M terminate. The set of strongly (weakly) normalising λ -terms is denoted by SNβ (WNβ , respectively).
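A standard example separating the two notions: writing Ω ≡ (λx.xx)(λx.xx), the term (λz.y)Ω is weakly normalising, since contracting the outer redex gives the normal form y, but not strongly normalising, since Ω −→β Ω −→β ⋯ yields an infinite reduction inside the argument.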
The set of types is defined by the grammar: σ ::= ϕ | σ → σ | σ ∩ σ, where ϕ ranges over a denumerable set of type variables. We use letters σ, τ, ρ, … for arbitrary types. The type assignment systems λ∩ and λ∩s are defined by the rules in Figures 1 and 2, respectively.

    Γ, x : σ ⊢ x : σ   (Ax)

    Γ, x : σ ⊢ M : τ
    ------------------   (→ I)   where x ∉ Γ
    Γ ⊢ λx.M : σ → τ

    Γ ⊢ M : σ → τ    Γ ⊢ N : σ
    ---------------------------   (→ E)
    Γ ⊢ MN : τ

    Γ ⊢ M : σ    Γ ⊢ M : τ
    -----------------------   (∩ I)
    Γ ⊢ M : σ ∩ τ

    Γ ⊢ M : σ ∩ τ               Γ ⊢ M : σ ∩ τ
    --------------   (∩ E)      --------------   (∩ E)
    Γ ⊢ M : σ                   Γ ⊢ M : τ

Figure 1: Natural deduction style system λ∩

    Γ, x : σ ⊢s x : σ   (Ax)

    Γ ⊢s M[x := N]N1 … Nn : σ    Γ ⊢s N : τ
    ----------------------------------------   (Beta)s
    Γ ⊢s (λx.M)NN1 … Nn : σ

    Γ ⊢s N : σ1    Γ, y : σ2 ⊢s yN1 … Nn : τ
    -----------------------------------------   (L →)   where y ∉ FV(N1) ∪ ⋯ ∪ FV(Nn) and y ∉ Γ
    Γ, x : σ1 → σ2 ⊢s xNN1 … Nn : τ

    Γ, x : σ1, x : σ2 ⊢s xN1 … Nn : τ
    ----------------------------------   (L ∩)
    Γ, x : σ1 ∩ σ2 ⊢s xN1 … Nn : τ

    Γ, x : σ ⊢s M : τ
    -------------------   (R →)   where x ∉ Γ
    Γ ⊢s λx.M : σ → τ

    Γ ⊢s M : σ    Γ ⊢s M : τ
    -------------------------   (R ∩)
    Γ ⊢s M : σ ∩ τ

Figure 2: Sequent calculus style system λ∩s

A typing context is defined as a finite set of pairs {x1 : σ1, …, xn : σn} where the variables are pairwise distinct in the system λ∩ while they may be the same in the system λ∩s. A variable with different types is intended to have the type of intersection of all of them. The typing context Γ, x : σ denotes the union Γ ∪ {x : σ}, and x ∉ Γ means that x does not appear in Γ, i.e., for no type σ, x : σ ∈ Γ. Note that x : σ ∈ Γ is possible in the typing context Γ, x : σ. In particular, the premisses of the rule (L →) may have x : σ1 → σ2 in Γ. In that case, x : σ1 → σ2 is introduced by the rule (L →) with implicit contraction.
The system in [21] has the restriction in λ∩s that the type σ in the rules (Ax) and (Beta)s and the type τ in the rules (L →) and (L ∩) must be type variables. Also, the rule (R →) takes the following form:

    Γ, x : σ ⊢s Mx : τ
    -------------------   where x ∉ Γ and x ∉ FV(M),
    Γ ⊢s M : σ → τ

so that the system includes the η-rule and is not equivalent to the system λ∩. (For example, ⊢s λx.x : (σ → τ) → ((ρ ∩ σ) → τ) is derivable in the system of [21], but ⊢ λx.x : (σ → τ) → ((ρ ∩ σ) → τ) is not derivable in λ∩.)
Example 2.1. Self-application can now be typed naturally in λ∩s, as follows (cf. [21, pp. 478–479]).

    x : σ ⊢s x : σ    x : σ, y : τ ⊢s y : τ
    ---------------------------------------- (L →)
    x : σ, x : σ → τ ⊢s xx : τ
    ---------------------------------------- (L ∩)
    x : σ ∩ (σ → τ) ⊢s xx : τ
    ---------------------------------------- (R →)
    ⊢s λx.xx : (σ ∩ (σ → τ)) → τ
The (Beta)s -free part of the system λ∩s types exactly the terms in β -normal form, and any β -redex in
a typed term must be constructed through the rule (Beta)s . So it is immediately seen that the terms that
are not head-normalising (e.g. (λx.xx)(λx.xx)) cannot be typed in the system λ∩s .
Proposition 2.2. Γ , x : σ1 ∩ σ2 ⊢s M : τ if and only if Γ , x : σ1 , x : σ2 ⊢s M : τ .
Proof. By induction on the derivations.
Henceforth we write Γ∩ for the typing context in which each variable has the type of intersection of
all the types that the variable has in Γ .
3 Characterisation of strongly normalising λ-terms
If one tries to prove strong normalisation for terms typed in the system λ∩ directly by induction on
derivations, a difficulty arises in the case of the rule (→ E). One way of overcoming this difficulty is
to use reducibility predicates [20]. Here we use the sequent calculus style system λ∩s instead. For the
system λ∩s , we can prove strong normalisation for typed terms directly by induction on derivations.
Theorem 3.1. If Γ ⊢s M : σ then M ∈ SNβ .
Proof. By induction on the derivation of Γ ⊢s M : σ in λ∩s. The only problematic case is where the last rule applied is (Beta)s. In that case, by the induction hypothesis, we have M[x := N]N1 … Nn ∈ SNβ and N ∈ SNβ. From the former we have M, N1, …, Nn ∈ SNβ. Then any infinite reduction sequence starting from (λx.M)NN1 … Nn must have the form

    (λx.M)NN1 … Nn  −→∗β  (λx.M′)N′N1′ … Nn′  −→β  M′[x := N′]N1′ … Nn′  −→β  ⋯

where M −→∗β M′, N −→∗β N′ and Ni −→∗β Ni′ for i ∈ {1, …, n}. But then there is an infinite reduction sequence

    M[x := N]N1 … Nn  −→∗β  M′[x := N′]N1′ … Nn′  −→β  ⋯

contradicting the hypothesis. Hence (λx.M)NN1 … Nn ∈ SNβ.
To complete a proof of strong normalisation for terms typed in the system λ∩ , what remains to be
shown is that if M is typable in λ∩ then it is typable in λ∩s . This is proved using several lemmas below.
First we show that λ∩s is closed under the weakening rule.
Lemma 3.2. If Γ ⊢s M : τ then Γ , x : σ ⊢s M : τ .
Proof. By induction on the derivation of Γ ⊢s M : τ .
The next two lemmas are the essential difference from the proof of [21]. These are used in the proof
of Lemma 3.5 below. The simply typed counterpart of Lemma 3.3 is found in the second proof of strong
normalisation for the simply typed λ -calculus in [12].
Lemma 3.3. If Γ ⊢s M : σ → τ and x ∉ Γ then Γ, x : σ ⊢s Mx : τ.
Proof. By induction on the derivation of Γ ⊢s M : σ → τ . Here we show a few cases.
• Suppose the derivation is the axiom (Ax): Γ, y : σ → τ ⊢s y : σ → τ. In this case we take two axioms Γ, x : σ ⊢s x : σ and Γ, x : σ, z : τ ⊢s z : τ, and obtain Γ, x : σ, y : σ → τ ⊢s yx : τ by an instance of the (L →) rule.
• Suppose the last rule is (Beta)s, deriving Γ ⊢s (λy.M)NN1 … Nn : σ → τ from Γ ⊢s M[y := N]N1 … Nn : σ → τ and Γ ⊢s N : ρ. By the induction hypothesis, we have Γ, x : σ ⊢s M[y := N]N1 … Nn x : τ, and by Lemma 3.2, we have Γ, x : σ ⊢s N : ρ. From these, we obtain Γ, x : σ ⊢s (λy.M)NN1 … Nn x : τ by an instance of the (Beta)s rule.
• Suppose the last rule is (R →), deriving Γ ⊢s λy.M : σ → τ from Γ, y : σ ⊢s M : τ, where y ∉ Γ. From Γ, y : σ ⊢s M : τ, we have Γ, x : σ ⊢s M[y := x] : τ. From this and the axiom Γ, x : σ ⊢s x : σ, we obtain Γ, x : σ ⊢s (λy.M)x : τ by an instance of the (Beta)s rule.
Lemma 3.4. If Γ ⊢s M : σ ∩ τ then Γ ⊢s M : σ and Γ ⊢s M : τ .
Proof. By induction on the derivation of Γ ⊢s M : σ ∩ τ .
Now we are in a position to prove the following important lemma.
Lemma 3.5. λ∩s is closed under substitution, i.e., if Γ, x : σ1, …, x : σm ⊢s P : τ where x ∉ Γ, m ≥ 0 and σi ≠ σj for i ≠ j, and, for any i ∈ {1, …, m}, Γ ⊢s N : σi, then Γ ⊢s P[x := N] : τ.
Proof. The proof is by main induction on the number of ‘→’ and ‘∩’ occurring in σ1 , . . . , σm and subinduction on the length of the derivation of Γ , x : σ1 , . . . , x : σm ⊢s P : τ . We proceed by case analysis
according to the last rule used in the derivation of Γ , x : σ1 , . . . , x : σm ⊢s P : τ . Here we consider a few
cases.
• Suppose the last rule in the derivation is (Beta)s, deriving Γ, x : σ ⊢s (λy.M)QN1 … Nn : τ from Γ, x : σ ⊢s M[y := Q]N1 … Nn : τ and Γ, x : σ ⊢s Q : ρ, where x : σ abbreviates x : σ1, …, x : σm. By the subinduction hypothesis, we obtain both
Γ ⊢s M[y := Q][x := N]N1 [x := N] . . . Nn [x := N] : τ
and
Γ ⊢s Q[x := N] : ρ
Since y is a bound variable, we can assume that it does not occur in N. Hence the first judgement
is
Γ ⊢s M[x := N][y := Q[x := N]]N1 [x := N] . . . Nn [x := N] : τ
From this and Γ ⊢s Q[x := N] : ρ , we obtain
Γ ⊢s (λ y.M[x := N])Q[x := N]N1 [x := N] . . . Nn [x := N] : τ
by an instance of the (Beta)s rule.
• Suppose the last rule in the derivation is (L →), deriving

    Γ, x : σ, x : ρ1 → ρ2 ⊢s xMN1 … Nn : τ

from the premisses Γ, x : σ ⊢s M : ρ1 and Γ, x : σ, y : ρ2 ⊢s yN1 … Nn : τ, where {x : σ, x : ρ1 → ρ2} = {x : σ1, …, x : σm}, y ∉ FV(N1) ∪ ⋯ ∪ FV(Nn) and y ∉ (Γ, x : σ). By the subinduction hypothesis, we obtain both

    Γ ⊢s M[x := N] : ρ1    (1)

and

    Γ, y : ρ2 ⊢s (yN1 … Nn)[x := N] : τ    (2)
Now consider the assumption Γ ⊢s N : ρ1 → ρ2 and a fresh variable z. Then by Lemma 3.3, we
have Γ , z : ρ1 ⊢s Nz : ρ2 . From this and (1), we have Γ ⊢s NM[x := N] : ρ2 by the main induction
hypothesis. Then, again by the main induction hypothesis, we obtain
Γ ⊢s NM[x := N]N1 [x := N] . . . Nn [x := N] : τ
from (2) and Γ ⊢s NM[x := N] : ρ2 .
• Suppose the last rule in the derivation is (L ∩), deriving Γ, x : σ, x : ρ1 ∩ ρ2 ⊢s xN1 … Nn : τ from Γ, x : σ, x : ρ1, x : ρ2 ⊢s xN1 … Nn : τ,
where {x : σ , x : ρ1 ∩ ρ2 } = {x : σ1 , . . . , x : σm }. Then, applying Proposition 2.2 to the conclusion,
we have Γ , (x : σ )′ , x : ρ1 , x : ρ2 ⊢s xN1 . . . Nn : τ where (x : σ )′ = x : σ \ {x : ρ1 ∩ ρ2 }. Now, from
the assumption Γ ⊢s N : ρ1 ∩ ρ2 , we have Γ ⊢s N : ρ1 and Γ ⊢s N : ρ2 by Lemma 3.4. Hence, by
the main induction hypothesis, we obtain Γ ⊢s NN1 [x := N] . . . Nn [x := N] : τ .
Now we can show that the system λ∩s is closed under the (→ E) rule.
Lemma 3.6. If Γ ⊢s M : σ → τ and Γ ⊢s N : σ then Γ ⊢s MN : τ .
Proof. By Lemma 3.3, we have Γ , x : σ ⊢s Mx : τ for any fresh variable x. Hence by the previous lemma,
we obtain Γ ⊢s (Mx)[x := N] ≡ MN : τ .
Now we can prove the announced theorem.
Theorem 3.7. If Γ ⊢ M : σ then Γ ⊢s M : σ .
Proof. By induction on the derivation of Γ ⊢ M : σ in λ∩ , using Lemmas 3.4 and 3.6.
The converse of this theorem also holds when typing contexts are restricted to those of λ∩ . To prove
it, we need some lemmas on properties of the system λ∩ .
Lemma 3.8. If Γ ⊢ M : τ and z ∉ Γ then Γ, z : σ ⊢ M : τ.
Proof. By induction on the derivation of Γ ⊢ M : τ .
Lemma 3.9. λ∩ is closed under substitution, i.e., if Γ, x : σ ⊢ P : τ where x ∉ Γ and Γ ⊢ N : σ, then Γ ⊢ P[x := N] : τ.
Γ ⊢ P[x := N] : τ .
Proof. By induction on the derivation of Γ , x : σ ⊢ P : τ .
Next we prove a Generation Lemma. For its statement we define a preorder on types.
Definition 3.10. The relation ≤ on types is defined by the following axioms and rules:
1. σ ≤ σ
2. σ ∩ τ ≤ σ, σ ∩ τ ≤ τ
3. σ ≤ τ, τ ≤ ρ ⇒ σ ≤ ρ
4. σ ≤ τ, σ ≤ ρ ⇒ σ ≤ τ ∩ ρ
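For instance, the axioms in 2 give σ ∩ τ ≤ τ and σ ∩ τ ≤ σ, whence σ ∩ τ ≤ τ ∩ σ by rule 4; intersection is thus commutative up to ≤.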
Lemma 3.11. If Γ ⊢ M : σ and σ ≤ τ then Γ ⊢ M : τ .
Proof. By induction on the definition of σ ≤ τ .
Lemma 3.12 (Generation Lemma).
1. Γ ⊢ MN : σ if and only if there exist σ1 , . . . , σn , τ1 , . . . , τn (n ≥ 1) such that σ1 ∩ · · · ∩ σn ≤ σ and,
for all i ∈ {1, . . . , n}, Γ ⊢ M : τi → σi and Γ ⊢ N : τi .
2. Γ ⊢ λ x.M : σ if and only if there exist τ1 , . . . , τn , ρ1 , . . . , ρn (n ≥ 1) such that (τ1 → ρ1 )∩ · · ·∩ (τn →
ρn ) ≤ σ and, for all i ∈ {1, . . . , n}, Γ , x : τi ⊢ M : ρi .
Proof. The implications from right to left are immediate by the typing rules and Lemma 3.11. The
converses are shown by induction on the derivations.
Now we can prove a crucial lemma about type-checking in the system λ∩ .
Lemma 3.13. If Γ ⊢ M[x := N] : σ and Γ ⊢ N : τ where x ∉ Γ, then there exists a type ρ such that Γ, x : ρ ⊢ M : σ and Γ ⊢ N : ρ.
Proof. By induction on the structure of M, using Lemma 3.12.
We are now ready to prove the equivalence between the systems λ∩s and λ∩ .
Theorem 3.14. Γ ⊢s M : σ if and only if Γ∩ ⊢ M : σ .
Proof. The implication from right to left follows from Theorem 3.7 and Proposition 2.2. The converse
is shown by induction on the derivation of Γ ⊢s M : σ . If the last applied rule is (Beta)s , we use Lemmas 3.12 and 3.13.
Finally we show that all strongly normalising terms are typable in λ∩s .
Theorem 3.15. If M ∈ SNβ then there exist a typing context Γ and a type σ such that Γ ⊢s M : σ .
Proof. The proof is by main induction on the maximal length of all β -reduction sequences starting from
M and subinduction on the structure of M. We analyse the possible cases according to the shape of the
term M.
• M ≡ x for some variable x. In this case we just have to take x : σ ⊢s x : σ , which is an axiom.
• M ≡ xN1 . . . Nn . By the subinduction hypothesis, for any i ∈ {1, . . . , n}, there exist a typing context
Γi and a type σi such that Γi ⊢s Ni : σi . Then consider the following derivation (recall that λ∩s is
closed under the weakening rule):
starting from ∪Γi ⊢s Nn : σn and the axiom ∪Γi, yn : τ ⊢s yn : τ, an instance of (L →) yields ∪Γi, yn−1 : σn → τ ⊢s yn−1Nn : τ; continuing downwards with Nn−1, …, N2, one obtains ∪Γi, y1 : σ2 → ⋯ → σn → τ ⊢s y1N2 … Nn : τ, and a final instance of (L →) with ∪Γi ⊢s N1 : σ1 yields ∪Γi, x : σ1 → ⋯ → σn → τ ⊢s xN1 … Nn : τ.
• M ≡ λx.P. By the subinduction hypothesis, there exist a typing context Γ and a type σ such that Γ, x : σ1, …, x : σn ⊢s P : σ where x ∉ Γ and n ≥ 0. Then we have Γ ⊢s λx.P : σ1 ∩ ⋯ ∩ σn → σ by the (L ∩) and (R →) rules. (We use a weakening rule instead of (L ∩) when n = 0.)
• M ≡ (λ x.P)NN1 . . . Nn . By the main induction hypothesis, there exist a typing context Γ1 and a
type σ1 such that Γ1 ⊢s P[x := N]N1 . . . Nn : σ1 , and, by the subinduction hypothesis, there exist a
typing context Γ2 and a type σ2 such that Γ2 ⊢s N : σ2 . Then, by the weakening and (Beta)s rules,
we obtain Γ1 , Γ2 ⊢s (λ x.P)NN1 . . . Nn : σ1 .
It is interesting to note that in the above proof we do not use the (R ∩) rule at all, so it is redundant
for characterising the strongly normalising λ -terms. The absence of the (R ∩) rule leads to a restriction
on types that is similar to those investigated in [2].
The results in this section are summarised as follows.
Corollary 3.16. For any λ -term M, the following are equivalent.
1. M is typable in λ∩ .
2. M is typable in λ∩s .
3. M is strongly normalising.
4. M is typable in λ∩s without using the (R ∩) rule.
Proof. (1 ⇒ 2) This follows from Theorem 3.7.
(2 ⇒ 3) This follows from Theorem 3.1.
(3 ⇒ 4) This follows from the proof of Theorem 3.15.
(4 ⇒ 2) This is trivial.
(2 ⇒ 1) This follows from Theorem 3.14.
4 Characterisation of weakly normalising λ -terms
In this section we are concerned with weak normalisation and some type systems obtained by extending
the systems λ∩ and λ∩s . The main goal of this section is to prove the characterisation theorem of weak
normalisation in a similar way to that of strong normalisation in the previous section.
The extended systems are listed in Figure 3. First we introduce a new rule (Beta)l , which is a
general form of the rule considered in [21] (σ is restricted to type variables in [21]). Then the system λ∩l
is obtained from λ∩s by replacing the (Beta)s rule by the (Beta)l rule. The systems λ∩ω , λ∩s ω and λ∩l ω
are obtained from λ∩ , λ∩s and λ∩l , respectively, by adding the type constant ω and the (ω ) rule. In order
to distinguish the judgements of the systems, we use the symbols ⊢l , ⊢ω , ⊢sω and ⊢lω .
For the system λ∩l , we have the following theorem.
Theorem 4.1. If Γ ⊢l M : σ then M ∈ WNβ .
Proof. By induction on the derivation of Γ ⊢l M : σ .
The new rules are

Γ ⊢ M[x := N]N1 . . . Nn : σ
───────────────────────────── (Beta)l
Γ ⊢ (λ x.M)NN1 . . . Nn : σ

─────────── (ω )
Γ ⊢ M : ω

and the systems and their judgement notations are

λ∩l := λ∩s − (Beta)s + (Beta)l      Γ ⊢l M : σ
λ∩ω := λ∩ + (ω )                    Γ ⊢ω M : σ
λ∩sω := λ∩s + (ω )                  Γ ⊢sω M : σ
λ∩lω := λ∩l + (ω )                  Γ ⊢lω M : σ

Figure 3: Systems extended with ω
For characterisation of weak normalisation in terms of typability in the extended systems, it is necessary to clarify the relationship among them. First we show that the terms typable in the ordinary natural
deduction style system λ∩ω are typable in λ∩s ω , in almost the same way as in the previous section.
Theorem 4.2. If Γ ⊢ω M : σ then Γ ⊢sω M : σ .
Proof. It is easy to see that Lemmas 3.2 through 3.6 hold for λ∩s ω instead of λ∩s . Then the theorem
follows by induction on the derivation of Γ ⊢ω M : σ in λ∩ω .
Next we relate the systems λ∩s ω , λ∩l ω and λ∩l . This completes one direction of the characterisation
theorem of weak normalisation.
Lemma 4.3. Γ ⊢sω M : σ if and only if Γ ⊢lω M : σ .
Proof. The implication from left to right is immediate by forgetting the right premiss of (Beta)s . For
the converse, observe that the (Beta)l rule is derivable in λ∩s ω using the rules (Beta)s and (ω ).
Lemma 4.4. Suppose σ and all types in Γ are ω -free. Then Γ ⊢lω M : σ if and only if Γ ⊢l M : σ .
Proof. The implication from right to left is trivial. For the converse, observe that every type occurring
in the derivation of Γ ⊢lω M : σ also occurs in Γ or σ .
Corollary 4.5. If Γ ⊢ω M : σ where σ and all types in Γ are ω -free, then M ∈ WNβ .
Proof. By Theorem 4.2, Lemmas 4.3 and 4.4, and Theorem 4.1.
Conversely, if a λ -term M is weakly normalising, then there exist a typing context Γ and a type σ ,
both ω -free, such that Γ ⊢ω M : σ . To prove this, we need the following lemmas on properties of the
system λ∩ω . These are shown in similar ways to the proofs of Lemmas 3.8 through 3.12.
Lemma 4.6. If Γ ⊢ω M : τ and z ∉ Γ then Γ , z : σ ⊢ω M : τ .
Lemma 4.7. λ∩ω is closed under substitution, i.e., if Γ , x : σ ⊢ω P : τ where x ∉ Γ and Γ ⊢ω N : σ then Γ ⊢ω P[x := N] : τ .
Definition 4.8. The relation ≤ω on types is defined by the axioms and rules in Definition 3.10 together
with the axiom σ ≤ω ω .
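Under the same component-inclusion reading used in the sketch after Definition 3.10, the new axiom simply makes ω-components on the right-hand side vacuous; a hypothetical extension of that sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Omega:         # the type constant omega (constructor name is ours)
    pass

def leq_omega(s, t):
    """<=omega of Definition 4.8: as leq, plus sigma <=omega omega for every sigma."""
    return all(isinstance(c, Omega) or c in flatten(s) for c in flatten(t))
```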
Lemma 4.9. If Γ ⊢ω M : σ and σ ≤ω τ then Γ ⊢ω M : τ .
Lemma 4.10 (Generation Lemma). Let σ be any type with ω ≰ω σ . Then
1. Γ ⊢ω MN : σ if and only if there exist σ1 , . . . , σn , τ1 , . . . , τn (n ≥ 1) such that σ1 ∩ · · · ∩ σn ≤ω σ
and, for all i ∈ {1, . . . , n}, Γ ⊢ω M : τi → σi and Γ ⊢ω N : τi .
2. Γ ⊢ω λ x.M : σ if and only if there exist τ1 , . . . , τn , ρ1 , . . . , ρn (n ≥ 1) such that (τ1 → ρ1 ) ∩ · · · ∩
(τn → ρn ) ≤ω σ and, for all i ∈ {1, . . . , n}, Γ , x : τi ⊢ω M : ρi .
Now we can prove a crucial lemma about type-checking in the system λ∩ω .
Lemma 4.11. If Γ ⊢ω M[x := N] : σ where x ∉ Γ , then there exists a type ρ such that Γ , x : ρ ⊢ω M : σ and Γ ⊢ω N : ρ .
Proof. By induction on the structure of M, using Lemma 4.10. If M ≡ y (≢ x) or ω ≤ω σ , then we take ρ = ω .
We can now prove that in the system λ∩ω , types are preserved under the inverse of β -reduction.
Lemma 4.12. If Γ ⊢ω N : σ and M −→β N then Γ ⊢ω M : σ .
Proof. By induction on the structure of M, using Lemma 4.10. If M is the β -redex then we use
Lemma 4.11.
Now we can prove the announced theorem.
Theorem 4.13. If M ∈ WNβ then there exist a typing context Γ and a type σ such that Γ ⊢ω M : σ and
both Γ and σ are ω -free.
Proof. Let M ′ be a normal form of M. By Theorem 3.15, every normal form is typable in λ∩s , and hence in λ∩ by Theorem 3.14; since λ∩ -derivations involve no ω , there exist a typing context Γ and a type σ , both ω -free, such that Γ ⊢ω M ′ : σ . Hence, by Lemma 4.12, we have Γ ⊢ω M : σ .
We can also prove the equivalence of the systems λ∩ω , λ∩s ω and λ∩l ω .
Theorem 4.14. For any typing context Γ , any λ -term M and any type σ , the following are equivalent.
1. Γ ⊢ω M : σ .
2. Γ ⊢sω M : σ .
3. Γ ⊢lω M : σ .
Proof. (1 ⇒ 2) This follows from Theorem 4.2 and Proposition 2.2 with ⊢sω instead of ⊢s .
(2 ⇒ 3) This follows from Lemma 4.3.
(3 ⇒ 1) This follows by induction on the length of the derivation of Γ ⊢lω M : σ . If the last applied rule
is (Beta)l , we use Lemmas 4.10 and 4.11.
The results in this section are summarised as follows.
Corollary 4.15. For any λ -term M, the following are equivalent.
1. Γ ⊢ω M : σ for some typing context Γ and type σ , both ω -free.
2. Γ ⊢sω M : σ for some typing context Γ and type σ , both ω -free.
3. Γ ⊢lω M : σ for some typing context Γ and type σ , both ω -free.
4. Γ ⊢l M : σ for some typing context Γ and type σ .
5. M is weakly normalising.
Proof. (1 ⇒ 2) This follows from Theorem 4.2.
(2 ⇒ 3) This follows from Lemma 4.3.
(3 ⇒ 4) This follows from Lemma 4.4.
(4 ⇒ 5) This follows from Theorem 4.1.
(5 ⇒ 1) This follows from Theorem 4.13.
5 Application to other properties
The sequent calculus style systems we introduced in the previous sections are very useful for proving
properties of intersection type systems. In this section we illustrate this by giving a simple proof of the
(logical) approximation theorem, a property that is usually proved using reducibility predicates parametrised by typing contexts (see, e.g. [11, 4]). Proofs of some other properties through the sequent
calculus style systems are found in [15], which also makes a comparison between general conditions for
applying the reducibility method and our approach.
For the statement of the approximation theorem, we introduce some preliminary definitions. The
set of λ ⊥-terms [5] is obtained by adding the constant ⊥ to the formation rules of λ -terms. The type
systems in the previous section are extended to those for λ ⊥-terms, where any λ ⊥-term containing ⊥ is
typable by the (ω ) rule.
Definition 5.1. The approximation mapping α from λ -terms to λ ⊥-terms is defined inductively by
α (λ x1 . . . xn .xN1 . . . Nm ) := λ x1 . . . xn .xα (N1 ) . . . α (Nm )
α (λ x1 . . . xn .(λ x.M)NN1 . . . Nm ) := λ x1 . . . xn .⊥
where n, m ≥ 0.
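Concretely, α peels the leading abstractions and inspects the head of the application spine. A small Python sketch of Definition 5.1 (the AST constructors are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

class Bot:                       # the constant ⊥ of λ⊥-terms
    pass

def approx(t):
    """The approximation mapping α of Definition 5.1 (a sketch)."""
    if isinstance(t, Lam):                       # α keeps the leading λ's
        return Lam(t.var, approx(t.body))
    head, args = t, []
    while isinstance(head, App):                 # unwind the spine t = h N1 ... Nm
        args.insert(0, head.arg)
        head = head.fun
    if isinstance(head, Var):                    # head variable: recurse on arguments
        out = head
        for a in args:
            out = App(out, approx(a))
        return out
    return Bot()                                 # head is a λ, i.e. a β-redex: map to ⊥
```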
Lemma 5.2.
1. If Γ ⊢lω α (M) : σ and M−→∗β N then Γ ⊢lω α (N) : σ .
2. Let M−→β∗ N, M−→∗β N ′ , Γ ⊢lω α (N) : σ and Γ ⊢lω α (N ′ ) : τ . Then there exists N ′′ such that
M−→∗β N ′′ and Γ ⊢lω α (N ′′ ) : σ ∩ τ .
Proof. The first part is proved by induction on the derivation of Γ ⊢lω α (M) : σ . For the second part,
we use confluence of β -reduction.
Now the logical approximation theorem can be formulated as follows.
Theorem 5.3. Γ ⊢ω M : σ if and only if there exists M ′ such that M−→β∗ M ′ and Γ ⊢ω α (M ′ ) : σ .
Proof. (⇒) By Theorem 4.14, it suffices to show that if Γ ⊢lω M : σ then there exists M ′ such that
M−→β∗ M ′ and Γ ⊢lω α (M ′ ) : σ . The proof is by induction on the derivation of Γ ⊢lω M : σ . Here we
consider some cases.
• The last applied rule is

Γ ⊢lω M[x := N]N1 . . . Nn : σ
─────────────────────────────── (Beta)l
Γ ⊢lω (λ x.M)NN1 . . . Nn : σ

By the induction hypothesis, there exists M ′ such that M[x := N]N1 . . . Nn −→∗β M ′ and Γ ⊢lω α (M ′ ) : σ . This M ′ also satisfies (λ x.M)NN1 . . . Nn −→∗β M ′ .
• The last applied rule is

Γ ⊢lω N : σ1    Γ , y : σ2 ⊢lω yN1 . . . Nn : τ
───────────────────────────────────────────────── (L →)
Γ , x : σ1 → σ2 ⊢lω xNN1 . . . Nn : τ

where y ∉ FV(N1 ) ∪ · · · ∪ FV(Nn ) and y ∉ Γ . By the induction hypothesis, there exist N ′ , N1′ , . . . , Nn′ such that N −→∗β N ′ , Ni −→∗β Ni′ , Γ ⊢lω α (N ′ ) : σ1 and Γ , y : σ2 ⊢lω yα (N1′ ) . . . α (Nn′ ) : τ . Hence, by an instance of the (L →) rule, we obtain Γ , x : σ1 → σ2 ⊢lω xα (N ′ )α (N1′ ) . . . α (Nn′ ) : τ . So we take xN ′ N1′ . . . Nn′ as M ′ .
• The last applied rule is

Γ , x : σ ⊢lω N : τ
────────────────────── (R →)
Γ ⊢lω λ x.N : σ → τ

where x ∉ Γ . By the induction hypothesis, there exists N ′ such that N −→∗β N ′ and Γ , x : σ ⊢lω α (N ′ ) : τ . By an instance of the (R →) rule, we obtain Γ ⊢lω λ x.α (N ′ ) : σ → τ . Since α (λ x.N ′ ) ≡ λ x.α (N ′ ), we take λ x.N ′ as M ′ .
• The last applied rule is

Γ ⊢lω M : σ    Γ ⊢lω M : τ
──────────────────────────── (R ∩)
Γ ⊢lω M : σ ∩ τ

By the induction hypothesis, there exist M1 , M2 such that M −→∗β M1 , M −→∗β M2 , Γ ⊢lω α (M1 ) : σ and Γ ⊢lω α (M2 ) : τ . Then by Lemma 5.2(2), there exists M ′ such that M −→∗β M ′ and Γ ⊢lω α (M ′ ) : σ ∩ τ .
(⇐) We can show by induction on the derivation that if Γ ⊢ω α (M ′ ) : σ then Γ ⊢ω M ′ : σ . Hence, by
Lemma 4.12, we have Γ ⊢ω M : σ .
Thus our method has been successfully applied to proving the approximation theorem for the mapping α and the system λ∩ω . It is work in progress to give similar proofs of the approximation theorems
for the η -approximation mapping αη , which maps λ x.⊥ directly to ⊥, and type systems with various
preorders as discussed in [10, 11, 4].
6 Conclusion
We have presented uniform proofs of the characterisation theorems of normalisation properties and the
approximation theorem. The proofs have been given via intersection type systems in sequent calculus
style. As investigated in [15], our method can be considered to have embedded certain conditions for
applying reducibility directly into the typing rules of the sequent calculus style systems. (See [13] for a
recent survey of general conditions for applying the reducibility method.)
As mentioned in the introduction, there are some proofs [18, 16, 9, 1] of strong normalisation for
terms typable with intersection types without using reducibility, but they do not consider properties other than normalisation. Other syntactic proofs of strong normalisation for terms typable with
intersection types are found in [14, 6], where the problem is reduced to that of weak normalisation with
respect to another calculus or to another notion of reduction. The proofs of [18, 21] and ours are different
from those of [14, 6] in that strong normalisation is proved directly rather than inferring it from weak
normalisation. Yet another syntactic proof [7] uses a translation from terms typable with intersection
types into simply typed λ -terms.
There are many directions for future work. In addition to the one indicated in the last paragraph of Section 5, it would be worth investigating the type inference and inhabitation problems for intersection types by means of our sequent calculus style systems.
Acknowledgements I would like to thank Katsumasa Ishii for drawing my attention to Valentini’s paper
and pointing out that the system includes the η -rule. I also thank the anonymous reviewers of ITRS
2014 workshop for valuable comments. The figures of the derivations have been produced with Makoto
Tatsuta’s proof.sty macros.
References
[1] Andreas Abel (2007): Syntactical strong normalization for intersection types with term rewriting rules. In:
Proceedings of HOR’07, pp. 5–12.
[2] Steffen van Bakel (1992): Complete restrictions of the intersection type discipline. Theoretical Computer
Science 102, pp. 135–163, doi:10.1016/0304-3975(92)90297-S.
[3] Steffen van Bakel (2004): Cut-elimination in the strict intersection type assignment system is strongly normalizing. Notre Dame Journal of Formal Logic 45, pp. 35–63, doi:10.1305/ndjfl/1094155278.
[4] Henk Barendregt, Wil Dekkers & Richard Statman (2013): Lambda Calculus with Types. Cambridge University Press, doi:10.1017/CBO9781139032636.
[5] Henk P. Barendregt (1984): The Lambda Calculus: Its Syntax and Semantics, revised edition. North-Holland,
Amsterdam.
[6] Gerard Boudol (2003): On strong normalization in the intersection type discipline. In: Proceedings of TLCA’03, Lecture Notes in Computer Science 2701, Springer-Verlag, pp. 60–74, doi:10.1007/3-540-44904-3_5.
[7] Antonio Bucciarelli, Adolfo Piperno & Ivano Salvo (2003): Intersection types and λ -definability. Mathematical Structures in Computer Science 13, pp. 15–53, doi:10.1017/S0960129502003833.
[8] Mario Coppo, Mariangiola Dezani-Ciancaglini & Betti Venneri (1981): Functional characters of solvable
terms. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 27, pp. 45–58, doi:10.1002/
malq.19810270205.
[9] René David (2001): Normalization without reducibility. Annals of Pure and Applied Logic 107, pp. 121–130,
doi:10.1016/S0168-0072(00)00030-0.
[10] Mariangiola Dezani-Ciancaglini, Elio Giovannetti & Ugo de’Liguoro (1998): Intersection types, λ -models,
and Böhm trees. In: Theories of Types and Proofs, MSJ Memoirs 2, Mathematical Society of Japan, Tokyo,
pp. 45–97.
[11] Mariangiola Dezani-Ciancaglini, Furio Honsell & Yoko Motohama (2001): Approximation theorems for intersection type systems. Journal of Logic and Computation 11, pp. 395–417, doi:10.1093/logcom/11.3.395.
[12] Felix Joachimski & Ralph Matthes (2003): Short proofs of normalization for the simply-typed λ -calculus, permutative conversions and Gödel’s T. Archive for Mathematical Logic 42, pp. 59–87, doi:10.1007/s00153-002-0156-9.
[13] Fairouz Kamareddine, Vincent Rahli & Joe B. Wells (2012): Reducibility proofs in the λ -calculus. Fundamenta Informaticae 121, pp. 121–152, doi:10.3233/FI-2012-773.
[14] Assaf J. Kfoury & Joe B. Wells (1995): New notions of reduction and non-semantic proofs of strong β -normalization in typed λ -calculi. In: Proceedings of LICS’95, IEEE Computer Society Press, pp. 311–321,
doi:10.1109/LICS.1995.523266.
[15] Kentaro Kikuchi (2009): On general methods for proving reduction properties of typed lambda terms. In:
Proof theoretical study of the structure of logic and computation, RIMS Kôkyûroku 1635, pp. 33–50. Available at http://hdl.handle.net/2433/140464. (Unrefereed proceedings).
[16] Ralph Matthes (2000): Characterizing strongly normalizing terms of a λ -calculus with generalized applications via intersection types. In: Proceedings of ICALP Satellite Workshops 2000, Carleton Scientific, pp.
339–354.
[17] Garrel Pottinger (1980): A type assignment for the strongly normalizable λ -terms. In: To H. B. Curry: Essays
on Combinatory Logic, Lambda Calculus and Formalism, Academic Press, London, pp. 561–577.
[18] Femke van Raamsdonk & Paula Severi (1995): On normalisation. Technical Report CS-R9545, CWI.
[19] Femke van Raamsdonk, Paula Severi, Morten Heine B. Sørensen & Hongwei Xi (1999): Perpetual reductions
in λ -calculus. Information and Computation 149, pp. 173–225, doi:10.1006/inco.1998.2750.
K. Kikuchi
23
[20] William W. Tait (1967): Intensional interpretations of functionals of finite type I. The Journal of Symbolic
Logic 32, pp. 198–212, doi:10.2307/2271658.
[21] Silvio Valentini (2001): An elementary proof of strong normalization for intersection types. Archive for
Mathematical Logic 40, pp. 475–488, doi:10.1007/s001530000070.
Attack Analysis and Resilient Control Design for
Discrete-time Distributed Multi-agent Systems
arXiv:1801.00870v1 [] 3 Jan 2018
Aquib Mustafa, Student Member, IEEE and Hamidreza Modares, Member, IEEE
Abstract—This work presents a rigorous analysis of the adverse effects of cyber-physical attacks on discrete-time distributed multi-agent systems, and proposes a mitigation approach for attacks on sensors and actuators. First, we show how an attack on a compromised agent can propagate and affect intact agents that are reachable from it. That is, an attack on a single node snowballs into a network-wide attack and can even destabilize the entire system. Moreover, we show that the attacker can bypass the robust H∞ control protocol and make it entirely ineffective in attenuating the effect of the adversarial input on the system performance. Finally, to overcome the adversarial effects of attacks on sensors and actuators, a distributed adaptive attack compensator is designed based on a virtual estimator. The adaptive attack compensator is augmented with the controller to achieve resilient control. The developed controller achieves secure consensus in the presence of the attacks. It provides notable benefits over a class of existing resilient controllers: it places no restriction on the number of agents or neighbors under the effect of the adversarial input, and it recovers compromised agents from the effect of the attack rather than discarding them. The effectiveness of the proposed controller and analysis is validated through simulations.
Index Terms—Resilient control, Attack analysis, Discrete-time,
Distributed multi-agent systems, Distributed adaptive attack
compensator.
I. I NTRODUCTION
Distributed control of multi-agent systems [1]-[4] has gained remarkable attention due to its potential applications in growing fields such as robotics, power systems and transportation. Despite their numerous advantages, distributed multi-agent systems (DMAS) are prone to cyber-physical attacks. For instance, in multi-vehicle systems, the GPS sensor can be spoofed, corrupting the sensory data that the vehicle receives [5]. Corruption of sensory data or manipulation of actuator inputs can severely and adversely affect the performance of the system. Therefore, the design of robust, resilient and secure architectures is required to successfully achieve the desired coordinated goal in the presence of attacks.
Considerable results have been presented for the detection
[6]-[12] and mitigation of attacks in DMAS. There are generally two approaches in designing mitigation techniques for
DMAS. In the first approach, a monitor is designed to detect
the attack on neighbors and then remove compromised agents,
once identified [13]-[20]. In these approaches, each normal
agent either uses an observer for each of its neighbors to detect
abnormality [15] or discards neighbors’ information based on the discrepancy between actual and malicious agents using an iterative strategy [13]-[14]. The former approach requires a model for each of its neighbors and is therefore not scalable. The latter requires meeting the F-total or the F-local condition. That is, there should be an upper bound F either on the total number of adversarial agents, called F-total, or on the number of compromised agents in the neighborhood of each intact agent, called F-local. Although these approaches can counteract a variety of attacks, including attacks on sensors and actuators as well as attacks on the communication network, they can harm the network connectivity by blindly rejecting neighbors’ information. This is because they cannot distinguish between a change in a neighbor’s behavior due to an attack and a legitimate change in the system. For example, in a leader-follower synchronization problem, a legitimate change in the leader’s state can be flagged by neighbors as a change due to an adversarial input. Moreover, these approaches treat all types of attacks the same by discarding the compromised agent. However, as shown in this paper, attacks on sensors and actuators can be compensated for, and the compromised agent can be brought back to the network without making any restrictive assumption on network connectivity. This avoids unnecessary harm to the network connectivity.
In the second approach, local resilient control protocols are designed to mitigate a class of attacks without isolating the compromised agent. A reputation-based resilient control protocol is presented in [21] for the leader-follower problem under certain conditions. Game theory-based resilient control architectures [22]-[25] have been presented to minimize the effects of the adversarial input. Under the assumption of partial knowledge of the attacker, resilient receding horizon-based control protocols are discussed in [26]-[28] for mitigation of the replay attack. Secure state estimation and control under sensor attacks have been considered in [29]-[30]. A resilient control protocol is presented in [31] for single- and double-integrator systems based on a local state emulator. Then, in [32], an adaptive resilient control protocol is presented for attacks on the sensors and actuators of the system. Most of these results are presented for continuous-time systems. However, in real-time applications, agents communicate and broadcast their information at discrete instants.
To design a resilient control protocol, one needs to identify the adverse effects of the attack on the system performance from the attacker’s perspective. However, to the best of our knowledge, there is no rigorous attack analysis for discrete-time DMAS. In this paper, we first illustrate how an attack on a compromised agent spreads across the network and affects the intact agents that are reachable from it. Then, we show that the attacker can design a stealthy attack which has a common mode with the system dynamics and launch it on a single root node to destabilize the entire system. We call this the internal model principle for the attacker in discrete-time DMAS. The attacker does not need to know the graph topology or the agents’ dynamics to design its attack signal, and can eavesdrop on some sensory information to identify one eigenvalue of the consensus dynamics. We also show that the attacker can entirely disable robust techniques such as H∞ , which are used for attenuating the effects of adversarial inputs on the performance of the system.
To mitigate the effect of the adversarial input, this work presents a distributed adaptive resilient controller. First, a virtual estimator is designed for each node to predict the expected normal behavior of the system. Then, the virtual estimator is used to design an adaptive attack compensator, which is augmented with the controller for the mitigation of the attack. Moreover, we show uniform boundedness of the system trajectories under the proposed controller in the presence of the attack. The proposed adaptive resilient control protocol places no restriction on the graph topology, in contrast to the existing approaches [13]-[20]. The proposed controller preserves the network connectivity and mitigates the effect of adversarial inputs on the actuators and/or sensors of agents in discrete-time DMAS.
II. N OTATIONS AND P RELIMINARIES
In this section, the preliminaries of graph theory and
standard distributed consensus of multi-agent systems are
provided.
A. Graph Theory
A directed graph G consists of a pair (V , E ) in which the set of nodes and the set of edges are represented by V = {v1 , . . . , vN } and E ⊂ V × V , respectively. The adjacency matrix is defined as A = [ai j ], with ai j > 0 if (v j , vi ) ∈ E . The set of nodes v j with edges incoming to node vi is called the neighbors of node vi , namely Ni = {v j : (v j , vi ) ∈ E }. The graph Laplacian matrix is defined as L = H − A , where H = diag(hi ) is known as the in-degree matrix, with hi = ∑ j∈Ni ai j the weighted in-degree of node i. A directed tree is a connected digraph in which every node except one, known as the root node, has in-degree equal to one. A graph is said to have a spanning tree if a subset of the edges forms a directed tree. Ker(β ) and eig(α) represent the null space of β and the set of eigenvalues of the matrix α, respectively. ∆|α| and α ad j denote the determinant and the adjoint of the matrix α, respectively. Furthermore, λmax (α) and λmin (α) represent the maximum and minimum eigenvalues of the matrix α, respectively.
Assumption 1. The communication digraph G contains a
spanning tree.
B. Standard Distributed Consensus in MAS
This subsection presents the standard distributed control
protocol for the consensus of discrete-time multi-agent systems.
Consider N agents with identical system dynamics represented by
xi (k + 1) = Axi (k) + Bui (k), i = 1, . . . , N
(1)
where xi (k) ∈ Rn and ui (k) ∈ Rm are the state and the control
input of agent i, respectively. A and B are the system and input matrices, respectively. (A, B) is assumed to be
stabilizable.
Define the local neighborhood tracking error for agent i as

εi (k) = (1 + hi )−1 ∑Nj=1 ai j (x j (k) − xi (k))      (2)
in which ai j is the (i, j)-th value of the adjacency matrix.
Consider the distributed control law for each node i as in [33]

ui,n (k) = cKεi (k), i = 1, . . . , N      (3)
where c is a positive coupling constant, and K ∈ Rm×n is a
design feedback control gain matrix. Define the global state
vector as x(k) = [x1T (k), x2T (k), . . . , xNT (k)]T ∈ RnN . Using (1)–(3), the global dynamics of the DMAS can be expressed as
x(k + 1) = [IN ⊗ A − c(I + H)−1 L ⊗ BK]x(k)
(4)
The normalized graph Laplacian matrix L̂ is defined as [33]
L̂ = (I + H)−1 L
(5)
Let the eigenvalues of the normalized graph Laplacian matrix
L̂ be λi , ∀ i = 1, . . . , N. Then, λi lies inside the unit circle centered at 1 + j0 for i = 2, . . . , N, and λ1 = 0 [34].
Using (4), the solution of the global dynamics is given by

x(k) = [IN ⊗ A − cL̂ ⊗ BK]k x(0) ≜ Akc x(0)      (6)
where Ac is the closed-loop matrix defined as
Ac = (IN ⊗ A − cL̂ ⊗ BK)
(7)
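As a concrete illustration of (5)-(7), the following Python/NumPy sketch builds L̂ and Ac for a small example digraph and checks the Schur condition eigenvalue by eigenvalue. The graph, the trial gain K and all variable names are our assumptions, not taken from the paper:

```python
import numpy as np

N = 5
Adj = np.array([[0, 0, 0, 0, 0],    # an example adjacency matrix (ours)
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0]], dtype=float)
H = np.diag(Adj.sum(axis=1))                 # in-degree matrix H = diag(h_i)
L = H - Adj                                  # graph Laplacian L = H - A
L_hat = np.linalg.solve(np.eye(N) + H, L)    # normalized Laplacian (5)

A = np.array([[0.0, -1.0],                   # a marginally stable agent dynamics
              [1.0,  0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[0.4, 0.8]])                   # trial gain; the paper designs K via (62)-(63)
c = 1.0

Ac = np.kron(np.eye(N), A) - c * np.kron(L_hat, B @ K)   # closed loop (7)

# Consensus (Theorem 1) needs A - c*lambda_i*B*K Schur for every nonzero
# eigenvalue lambda_i of L_hat; lambda_1 = 0 carries the consensus mode.
for lam in sorted(np.linalg.eigvals(L_hat), key=abs)[1:]:
    rho = max(abs(np.linalg.eigvals(A - c * lam * (B @ K))))
    print(lam, rho)    # consensus requires rho < 1 for each of these
```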
Lemma 1. [34] Let R ⊂ V be the set of root nodes and r =
[p1 , . . . , pN ]T be the left eigenvector of the normalized graph
Laplacian matrix L̂ for λ1 = 0. Then, pi > 0 if i ∈ R and pi = 0
if i ∉ R.
Theorem 1. [33]-[34] Let the feedback gain K be designed such that A − cλi BK is Schur stable for i = 2, . . . , N. Then, according to Lemma 1, the final consensus value for the DMAS can be written as

xi (k) → (rT ⊗ Ak )[x1 (0)T , . . . , xN (0)T ]T , i = 1, . . . , N, as k → ∞      (8)
Assumption 2. The system matrix A in (1) is marginally
stable, with all eigenvalues on or outside the unit circle
centered at origin.
III. ATTACK A NALYSIS FOR D ISCRETE - TIME
D ISTRIBUTED MAS
This section presents the attack modeling and analyzes the adverse effects of attacks on the standard control protocol. The internal model principle for the attacker is presented to show how a single compromised agent can destabilize the entire system. Then, the effect of the attack on the local neighborhood tracking error is analyzed to show the ineffectiveness of the standard robust H∞ control protocol (a well-known disturbance attenuation technique) in the presence of a stealthy attack.
The attack on the actuators of agent i can be modeled as

udi = ui,n + γi uai      (9)

where ui,n is the control law given in (3), uai represents the attacker’s signal injected into the actuator of agent i, udi is the distorted control law applied to (1), and the scalar γi is 1 when there is an attack on the actuator of agent i and 0 otherwise.
The attack on the sensors of agent i can be modeled as

xid = xi + δi xia      (10)

where xi represents the state of agent i, xia is the attacker’s signal injected into the sensor of agent i, xid is the distorted state, and the scalar δi is 1 when there is an attack on the sensor of agent i and 0 otherwise.
Using the distributed control law (3) with (9) and (10) in (1), one can express the DMAS dynamics for an agent i as

xi (k + 1) = Axi (k) + Bui (k) + B fi (k), i = 1, . . . , N      (11)

where fi (k) represents the overall attack signal injected into agent i, given by

fi (k) = c(1 + hi )−1 K ∑Nj=1 ai j (δ j xaj (k) − δi xia (k)) + γi uai      (12)

Defining xa = [(x1a )T , . . . , (xNa )T ]T and ua = [(ua1 )T , . . . , (uaN )T ]T as the vectors of signals injected to the sensors and actuators, respectively, the global dynamics of the DMAS (11) under the effect of the attack can be written as

x(k + 1) = Ac x(k) + (IN ⊗ B) f (k)      (13)

where the injected global attack signal f (k) is

f (k) = −c(L̂ ⊗ K)(δ ⊗ IN )xa + (γ ⊗ IN )ua      (14)

with γ = diag(γ1 , . . . , γN ) and δ = diag(δ1 , . . . , δN ).
This subsection analyzes the effects of the attack on the
standard discrete-time DMAS (1). Theorem 2 investigates how
an attack can propagate across the network.
Definition 1: In a graph, agent i is reachable from agent j if
there is a directed path of any length from node j to Node i.
Theorem 2. Consider the discrete-time DMAS (11) under the
attack fi (k). Let the control protocol be designed as (3) such
that the closed loop matrix Ac in (7) is Schur. Then,
1) All agents reach consensus if fi (k) = 0, ∀i = 1, . . . , N.
2) The intact agent deviates from the desired consensus
value if it is reachable from a compromised agent.
3) The deviation of the network from the desired behavior
depends on the number of compromised agents, their
attack signal magnitude and the number of agents reachable from them.
Proof. It is shown in [34] that if c and K are designed so
that Ac in (7) is Schur, then all agents reach consensus. This
completes the proof of part 1.
To prove part 2, define xa = [(x1a )T , (x2a )T , . . . , (xNa )T ]T and
a
u = [(ua1 )T , (ua2 )T , . . . , (uaN )T ]T as a vector of signals injected
(15)
with k ≥ p. At the steady state, one has
k−1
∑ (IN ⊗ A)k−p−1 (IN ⊗ IN
x(k) →
p=0
−1
k−p−1
−cL̂ ⊗ A BK)
(16)
(IN ⊗ B) f (p)
For a positive integer n, binomial theorem for matrices can
be expressed as
n
(x + y)n =
∑ Ckn xn−k yk
(17)
k=0
n!
(n−k)!k!
with Ckn =
becomes
if x and y is commutative. Using (17), (16)
k−1
x(k) →
∑ (IN ⊗ A)k−p−1
p=0
(18)
k−p−1
∗
∑
Cmk−p−1 (−cL̂ ⊗ A−1 BK)m (IN ⊗ B) f (p)
m=0
Using (15) and (18), the state of the agent i yields
N k−1 k−p−1
A. Effects of Attack on Standard Distributed MAS
∑ (Ac )k−p−1 (IN ⊗ B) f (p)
p=0
where fi (k) represents the overall attack signal injected into
the agent i, which is given by
fi (k) = c(1 + hi )−1 K( ∑ ai j (δ j xaj (k) − δi xia (k))
(14)
with γ = diag(γ1 , . . . , γN ) and δ = diag(δ1 , . . . , δN ). If f 6= 0 ,
then the solution of (13) is given by
(10)
where xi represents the state of agent i, xia is the attacker’s
signal injected into the sensor of agent i, xid is the distorted
state and the scalar δi is 1 when there is an attack on sensor
of agent i and 0, otherwise.
Using the distributed control law (3) with (9) and (10) in
(1), one can express the DMAS dynamics for an agent i as
(13)
xi (k) → ∑
∑ ∑
−1
m
Ak−p−1Cmk−p−1 (−1)m cm l m
i j (A BK) B f j (p)
j=1 p=0 m=0
(19)
where limj , [(I + H)−1 L]m
and
[
]
denotes
the
element
(i,j)
ij
ij
of a matrix. m represents the length of shortest directed path
from j to i [35]. Assume now that the agent j is under direct
attack, but agent i is intact, i.e. fi (k) = 0 and f j (k) 6= 0. If
the intact agent i is reachable from the compromised agent j,
since limj 6= 0 for some 0 < m < N − 1, one can infer from (19)
that the agent state xi (k) at steady state in (19) is non-zero
which deduces that intact agent i is deviated from the desired
consensus behavior. This completes the proof of part 2.
For the proof of part 3, taking the norm from both sides of
(15), and using
k−1
∑ (Ac )k−p−1 f (p)
p=0
6
k f (k)k
|λmin (Ac )|
(20)
yields
kx(k)k 6 N f
kBk b f
|λmin (Ac )|
(21)
at steady state, where, N f is the number of agents for which fi
is non-zero, b f is bound on adversarial input fi . It was shown
in part 2 that if agent i is reachable from the disrupted agent j,
then its deviation from the desired behavior is non-zero. That
is, for the intact agent i that is reachable from a compromised
agent, kxi (k)k is non zero and as can be seen from (21),
its deviation bound depends on the number of compromised
agents N f and bound on adversarial input b f . This completes
the proof.
B. Internal Model Principle Approach for the Attacker
In order for a control system to reject a disturbance or
follow a reference trajectory, one needs to incorporate their
dynamics in the control system. This is called the internal
model principle (IMP). We show in the following Theorem 3
that the attacker can also leverage the IMP and incorporate
some eigenvalues of the consensus dynamics in its attack
design to destabilize the entire network.
We now take the role of the attacker and show how it
can maximize the damage and cause a catastrophe. Conditions
under which the attacker achieves this objective are provided.
Let the dynamics of the attacker on a node i be defined as
f (k + 1) = W f (k)
(22)
where W ∈ Rm×m . Consider the set of eigenvalues of W as
L̂. The left and the right eigenvectors of L̂ corresponding to
zero eigenvalue of the normalized graph Laplacian matrix are
r and 1N , respectively [34]. Define
M = [1 M1 ],
M −1 = [rT M2 ]T
where M1 ∈ RN×(N−1) and M2 ∈ R(N−1)×N . Using (27) with
L̂ = MΛM −1 , one has
[INn + cMΛM −1 ⊗ G(z)K]x(z) = (IN ⊗ G(z)) f (z)
As MM −1 = IN , one can write (28) as
(M ⊗ In )[InN + cΛ ⊗ G(z)K](M −1 ⊗ In )x(z) = G(z) f (z) (29)
Define state transformation as
x̂(z) = (M −1 ⊗ In )x(z)
(23)
(30)
and premultiplying (29) with (M −1 ⊗ In ). Then, one has
x̂(z) = [INn + cΛ ⊗ G(z)K]−1 (M −1 ⊗ G(z)) f (z)
(31)
Let assume for simplicity that all the Jordan blocks are simple,
M −1 = [pi j ] and M = [mi j ]. For the agent i, using (30) and (31),
one has
N
ΛW = [λW1 , . . . , λWm ]
(28)
xi (z) =
N
∑ mih [In + cKGi (z)λi ]−1 Gi (z)∑ j=1 pi j fi (z)
(32)
h=1
and the set of eigenvalues of the system matrix A
ΛA = [λA1 , . . . , λAn ],
(24)
respectively.
Theorem 3. Consider the discrete-time DMAS (11) under the
attack fi with the control protocol (3). Let fi (k) be designed
as (22). Then,
1) The attacker destabilizes the complete network, if
∑Nj=1 p1 j fi (k) ≠ 0 and the sets defined in (23) and (24)
have at least one common eigenvalue.
2) The attacker does not affect the stability of MAS
(1), but causes deviation from the consensus behavior, if
∑Nj=1 p1 j fi (k) = 0 or the sets defined in (23) and (24)
have no common eigenvalues.
Proof. The transfer function for the DMAS (1), from xi (z) to
ui (z) in z-domain can be written as
Gi (z) =
xi (z)
= (zI − A)−1 B
ui (z)
The first eigenvalue of the normalized graph Laplacian matrix
L̂ is zero and its corresponding right eigenvector is 1N i.e.
mi1 = 1. Using this fact with (32), one has
N
xi (z) = Gi (z) ∑ p1 j f j (z)+
j=1
N
−1
∑ mih [In + cλh Gi (z)K]
h=2
N
(33)
Gi (z) ∑ ph j f j (z)
j=1
Now, if we show that [In + cKGi (z)λh ]−1 is Schur, then the
second term of (33) is bounded, even in the presence of attack.
Since (A − cλh BK), ∀h = 2, . . . , N is Schur, therefore if
we show that the roots of the characteristic polynomial
(A − cλh BK) are identical to the poles of [In + cKGi (z)λh ]−1 ,
then one can say [In + cKGi (z)λh ]−1 is also Schur. To this end,
using (25), one has
∆|(zIn − (A − cλh BK))| = ∆|(zIn − A + cλh BK)|
(25)
Using (3), the global control law under the influence of the
attack can be expressed as
= ∆|zIn − A|(In + cλh (zIn − A)−1 BK)
∆|zIn − A|[(∆|zIn − A| + cλh (zIn − A)ad j BK)]
=
∆|zIn − A|
(34)
with u(z) = [uT1 , . . . , uTN ]T , x(z) = [x1T , . . . , xNT ]T and f (z) =
[ f1T , . . . , fNT ]T . Using (25) and (26), the system state in the
global form can be written as
Hence, this proves that the roots of the characteristic polynomial (A − cλh BK) are identical to the poles of [In +
cKGi (z)λh ]−1 using [36]. Therefore, [In + cKGi (z)λh ]−1 is
Schur. Thus, it concludes that the second term of (33) is
bounded and has no contribution in destabilizing the system.
x(z) = (IN ⊗ G(z))u(z) = (IN ⊗ G(z))(−(cL̂ ⊗ K)x(z) + f (z))
(27)
where G(z) = diag(Gi (z)) with dimension RNxN . Let M be a
non-singular matrix such that L̂ = MΛM −1 , with Λ be the Jordan canonical form of the normalized graph Laplacian matrix
According to Lemma 1, P1 j in (33) is zero for non-root
nodes and ∑Nj=1 p1 j fi (k) 6= 0, if the attack is launched on root
nodes. Assume that the attack is on a root node with a signal
having common mode with the system dynamics. This means
that there is at least a common eigenvalue λAl between sets
u(z) = −(cL̂ ⊗ K)x(z) + f (z)
(26)
defined in (23) and (24). Using the transfer function (25) and
the attack signal defined in (22), one can write (33) as
N
xi (z) =
∑ pi j
j=1
(zIn − A)ad j B(zIn −W )ad j fi (0)
2
(z2 + λA2l ) {
n
∏
i=1,i6=l
2
(z2 + λA2i )(z2 + λW2 i ) }
N
N
h=2
j=1
+
Lemma 2. Consider the normalized graph Laplacian matrix L̂
defined in (5). Then, [L̂T L̂ − 2L̂] is negative semidefinite.
Proof. Let λk be the eigenvalue of the normalized graph
Laplacian matrix L̂. So, the eigenvalue of [L̂T L̂ − 2L̂] can be
written as
eig[L̂T L̂ − 2L̂] = λk2 − 2λk
∑ mih [1 + cKGi (z)λh ]−1 Gi (z) ∑ ph j f j (z)
= (λk − 1)2 − 1
(38)
(35)
The first term of (35) shows that the pole λAl lies on the
unit circle centered at the origin and has multiplicity greater
than 1. Thus, the system states tend to infinity in the discretetime domain as k → ∞. Therefore, the attack on the root node
destabilizes the entire network. This completes the proof of
part 1.
If the attack is on a non-root node, then ∑Nj=1 p1 j fi (k) = 0.
So, (33) can be expressed as
xi (z) =
N
N
h=2
j=1
∑ mih [1 + cKGi (z)λh ]−1 Gi (z) ∑ ph j f j (z)
(36)
Then, according to (34), [In + cKGi (z)λh ]−1 is Schur stable.
Therefore, the system states are bounded, even in the presence
of the attack. Moreover, the agents that are reachable from
the attacker shows stable behavior, but deviation from the
desired consensus value. If ΛA ∩ΛW 6= φ which implies that the
multiplicity of poles lie on the unit is one. Therefore according
to (33), the system states remain bounded and shows deviation
from the desired consensus behavior due to the adverse effect
of the attacker. This completes the proof.
Remark 2. Note that the attacker does not need to know the system matrix A; it can get access to the eigenvalues of the dynamics by eavesdropping on the sensory information. Then, the attacker can identify a root node and destabilize the entire system.
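To make this remark concrete: for the rotation dynamics A used in the sketches above, an attack generator sharing A's unit-circle modes is simply W = A, launched at a root node. A sketch (the root index and all names are ours):

```python
ROOT = 0                                   # index of an assumed root node
W = np.array([[0.0, -1.0],                 # attacker dynamics (22); here W = A,
              [1.0,  0.0]])                # so it shares A's unit-circle modes
z0 = np.array([1.0, 0.0])                  # attacker's internal state

def imp_actuator_attack(i, k):
    """Scalar IMP attack signal: a mode common with A, injected at the root."""
    if i != ROOT:
        return 0.0
    return float((np.linalg.matrix_power(W, k) @ z0)[0])
```

Feeding this into attacked_step with gamma[ROOT] = 1 reproduces the snowball effect of Theorem 2 and the divergence predicted by Theorem 3.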
Now, we present the analysis of the effects of the attack on
the local neighborhood tracking error (2). This analysis shows
that although attacks on sensors and actuators can be modeled
as disturbances, existing disturbance attenuation techniques do
not work for attack attenuation.
Disturbance attenuation approaches focus on minimizing the
effects of disturbance on the local neighborhood tracking error
[37]-[38]. More specifically, the H∞ approach for DMAS (1)
in presence of disturbance wi (k) designs a distributed control
protocol as in (3), such that the desired consensus is achieved
as in (8), if disturbance wi (k) = 0 and the bounded L2 -gain
condition is fulfilled for any disturbance wi (k) ∈ L2 [0, ∞)
∞
∞
∑ ε T (k)M̄ε(k) 6 γ 2 ∑ wT (k)N̄w(k)
k=0
(37)
k=0
where γ > 0 is attenuation constant, M̄ and N̄ are positive
definite weight matrices.
We present the following rigorous analysis of the effects
of the attack on the local neighborhood tracking error in
following Theorem 4 and show that how an attacker can
bypass existing H∞ disturbance attenuation approaches and
make them entirely ineffective.
Since all the eigenvalues of matrix L̂ lie inside unit circle
centered at 1 + j0, except λ1 = 0 [34], therefore (λk − 1)2 − 1
is less than or equal to zero for k = 1, . . . , N. This shows that
[L̂T L̂ − 2L̂] is negative semidefinite.
For the sake of simplicity, we consider the scalar system in
the following theorem.
Theorem 4. Consider the discrete-time DMAS with single
integrator dynamics (11) and the control protocol (3). Assume
that the system is under a constant attack signal f (k). Then,
the local neighborhood tracking error for intact agents is zero
while agents do not reach the desired consensus.
Proof. Consider the Lyapunov function for the discrete-time
DMAS as
V (x(k), f (k)) = (−L̂x(k) + f (k))T (−L̂x(k) + f (k))
(39)
The difference equation of the Lyapunov function (39) can be
written as
∆V (x(k), f (k)) = V (x(k + 1), f (k + 1)) −V (x(k), f (k))
= (−L̂x(k + 1) + f (k + 1))T (−L̂x(k + 1) + f (k + 1))
−(−L̂x(k) + f (k))T (−L̂x(k) + f (k))
(40)
For the constant attack signal i.e. f (k + 1) = f (k), one can
write (40) as
= (−L̂x(k + 1) + f (k))T (−L̂x(k + 1) + f (k))
−(−L̂x(k) + f (k))T (−L̂x(k) + f (k))
or equivalently,
= (−L̂x(k + 1))T (−L̂x(k + 1)) − (−L̂x(k))T (−L̂x(k))
−2 f (k)T L̂(x(k + 1) − x(k))
(41)
Using the system dynamics (1) in (41), one has
= (−L̂[x(k) + u(k)])T (−L̂[x(k) + u(k)])
−(−L̂x(k))T (−L̂x(k)) − 2 f (k)T L̂u(k)
(42)
Furthermore, One can express (42) as
= (−L̂x(k) + f (k))T [L̂T L̂ − 2L̂](−L̂x(k) + f (k))
(43)
Using Lemma 2, one has
∆V (x(k), f (k)) = (−L̂x(k) + f (k))T [L̂T L̂
−2L̂](−L̂x(k) + f (k)) 6 0
(44)
Then, using Lasalle’s invariance principle [39], the trajectories
(x(k), f (k)) converge to a set that satisfy ∆V (x(k), f (k)) = 0.
Based on (44), this yields
(−L̂x(k) + f (k)) ∈ ker(L̂T L̂ − 2L̂)
(45)
or
(−L̂x(k) + f (k)) = 0
(46)
From (45), one has (−L̂x(k) + f (k)) = c̄1N . According to
this, the single integrator system dynamics becomes xi (k +
1) = xi (k) + c̄, which shows that it destabilizes the system.
Therefore, xi (k) → ∞ as k → ∞ ∀i = 1, . . . , N and thus the local
neighborhood tracking error goes to zero for all agents. Note that, based on Theorem 3, (45) is the possible case when the
attack is on a root node. On the other hand, for an attack on a
non-root node agent, from (46), one has (−L̂x(k) + f (k)) = 0.
Since for the intact agent i, fi (k) = 0, therefore, the local
neighborhood tracking error for the intact agents converge to
zero, even in the presence of the attack.
We now show that the intact agents do not reach the desired
consensus, despite the fact the local neighborhood tracking
error is zero. From (46), one has
L̂x(k) = f (k)
(47)
which can be written for agent i as
N
(1 + hi )−1 ∑ ai j (x j (k) − xi (k)) = fi (k)
(48)
j=1
For a compromised agent i, since fi (k) 6= 0, then, one has
xi (k) 6= x j (k) for some i, j.
Now let assume that agent i is intact. Then, one has
−1
(1 + hi )
∑ x j (k)
j∈N
i
where xavg = |N
, which is not equal to xi (k) due to
i|
incoming information from a comprised agent xic (k). From
(51), one can infer that the deviation of the intact agent from
the desired consensus value depends on the number of the inneighbors and deviation of the compromised agent ic from the
desired consensus value which depends on the magnitude of
the injected attack signal. Moreover, the closer the agent is to
the source of the attack, the more its value will be deviated
from the desired consensus.
Corollary 1. Let the attacker design its attack signal using
the internal model principle approach described in Theorem
3. Then, it bypasses the H∞ control protocol.
Proof. In the absence of the attack, minimizing the local neighborhood tracking error results in minimizing the consensus
error. Therefore, the H∞ control in (37) is used to attenuate the
effect of adversarial input on the local neighborhood tracking
error. However, according to Theorem 4, in the presence of
IMP attack, by making the local neighborhood tracking error
go to zero, agents do not reach consensus. This completes the
proof.
In existing approaches, the local neighborhood tracking
error is one of the important measures for the performance of
the DMAS. Theorem 4 and following analysis highlight that
while the local neighborhood tracking error is zero, the agents
might not reach consensus. Define a performance function
Γi (k) as
2
Γi (k) = ∑ xi (k) − x j (k)
(52)
j∈Ni
Define the set of intact agents as
N
∑ ai j (x j (k) − xi (k)) = 0
(49)
Nint = Ni − Nc ∀i = 1, . . . , N
(53)
j=1
Consider the intact agent i as an immediate neighbor of the
compromised agent ic . Let assume by contradiction that only
the compromised agent does not reach the desired consensus
but all the intact agents reach the desired consensus. Using
(49), one can write
(1 + hi )−1
∑ ai j (x j − xi )+aiic (xic − xi ) = 0
(50)
j∈Ni
Assuming that the intact agents reach consensus, xi (k) =
x j (k) ∀ j ∈ Ni . However, (50) cannot be satisfied if xi (k) =
x j (k) ∀ j ∈ Ni because xic (k) 6= xi (k) and this contradict the
assumption. Therefore, this shows that the intact agent i is
deviated from the desired consensus value. Similarly, one can
use the same argument to show that all reachable agents from
the compromised agent will deviate from the desired consensus
value. This completes the proof.
Remark 3. If an intact agent i is an immediate neighbor of a
compromised agent ic , then using (2), one can write the local
neighborhood tracking error εi (k) with ai j = 1 as
εi (k) = (1 + hi )−1
∑ (x j (k) − xi (k))
j∈Ni
= |Ni | (1 + hi )−1 (
1
|Ni |
∑ x j (k) − xi )
j∈Ni
= |Ni | (1 + hi )−1 (xavg − xi )
(51)
where Ni represents set of all agents and Nc represents set of
compromised agents of the network.
Corollary 2. Consider the performance function Γi (k) and the
local neighborhood tracking error εi (k) defined in (52) and (3),
respectively. Then,
1) Γi (k) and εi (k) ∀i = 1, . . . , N converges to zero, if
there is no attack. Moreover, agents achieve the desired
consensus.
2) If the attacker designs an IMP-based attack on the nonroot node, then εi (k) ∀i ∈ Nint converges to zero, but
Γi (k) ∀i = 1, . . . , N does not converge to zero. That is,
agents do not reach the desired consensus, while the
local neighborhood tracking error is zero.
3) If the attacker designs an IMP-based attack on the root
node, then εi (k) and Γi (k) ∀i = 1, . . . , N goes to zero,
despite agents do not achieve the desired consensus and
the entire system get destabilized.
Proof. According to Theorem 1, the system achieves the
desired consensus if there is no adversarial input in the system
and this proves part 1 of corollary. If the attacker injects a
signal having common mode with the system dynamics into
a non-root node of the DMAS, then based on Theorems 3
and 4, one can infer εi (k) → 0. However, as shown in Theorem
4, xi (k) − x j (k) 6→ 0, so Γi (k) 6→ 0 and this proves part 2.
Based on Theorem 3, if the attacker injects a signal having
common mode with the system dynamics into the root node
of the DMAS, then xi (k) − x j (k) → 0. However, the system
gets destabilized as xi (k) → ∞ as k → ∞, while εi (k) and Γi (k)
∀i = 1, . . . , N converges to zero. This completes the proof.
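Both diagnostics of Corollary 2 are cheap to monitor in simulation. A sketch, reusing N, Adj and H from the earlier consensus sketch (the function names are ours):

```python
def local_tracking_error(x):
    """eps_i(k) of (2) for all agents, stacked."""
    return np.array([sum(Adj[i, j] * (x[j] - x[i]) for j in range(N))
                     / (1 + H[i, i]) for i in range(N)])

def performance(x):
    """Gamma_i(k) of (52): squared disagreement with in-neighbors."""
    return np.array([sum(np.linalg.norm(x[i] - x[j]) ** 2
                         for j in range(N) if Adj[i, j] > 0)
                     for i in range(N)])
```

Under an IMP-based attack on a non-root node, the tracking errors of the intact agents decay while the performance values do not; the gap between the two signals is precisely the attack signature described in Corollary 2.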
and λ1 = 0. Therefore, the virtual estimator states achieve the desired consensus value, which can be written as

x̂i (k) → (rT ⊗ Ak )[x̂1 (0)T , . . . , x̂N (0)T ]T , i = 1, . . . , N, as k → ∞      (60)
IV. R ESILIENT D ISTRIBUTED C ONTROL P ROTOCOL FOR
ATTACKS ON S ENSOR AND ACTUATOR : AN A DAPTIVE
A PPROACH
Although attacks on the actuators and/or sensors can adversely affect the agents’ dynamics, they cannot affect the dynamics of the distributed virtual estimator (54), unless they entirely compromise the agent, which is much harder for the attacker to do.
The deviation of the agent’s behavior from the normal
behavior is estimated by distributed virtual estimator. Then,
an adaptive attack compensator is developed using a virtual
estimator. The designed adaptive compensator is augmented
with the controller for the mitigation of the adversarial input.
In contrast to existing isolation approaches [13]-[20], which
require a strong network connectivity, the developed resilient
distributed controller preserves network topology and achieves
the desired consensus without any restrictions on the number
of agents under sensor and actuator attack. Attacks on communication links i.e. denial of service (DoS) attack can be mitigated by incorporating existing attack detection/identification
methodologies [13]-[20] with the proposed approach. Therefore, agents under the influence of the adversarial input on
sensors and actuators can be recovered using the proposed
resilient controller and be brought back to the network in intact
mode without being isolated.
This section presents the design of a resilient distributed
control protocol for the mitigation of the adverse effect of the
attack on sensor and actuator of an agent in the discrete-time
DMAS. Regardless of the magnitude of the attack f (k) on the sensors and actuators of an agent and of its reachability from intact agents, the developed distributed adaptive compensator is resilient against the attack and avoids catastrophic effects. To this end, first a distributed virtual estimator is designed for each agent to estimate the expected normal behavior of the MAS; then a distributed adaptive compensator is designed using the virtual estimator.
Consider the estimated state for agent i as x̂i (k). The
distributed virtual estimator is designed as
x̂i (k + 1) = Ax̂i (k) + cBK(1 + hi )−1 ∑Nj=1 ai j (x̂ j − x̂i ), x̂i (0) = xi (0)      (54)
where the gain k and the coupling coefficient c are to be
designed such that to ensure Ac in (7) is Schur. The global
virtual estimator state vector for (54) can be written as
x̂(k) = [x̂1T (k), x̂2T (k), . . . , x̂NT (k)]T ∈ RnN .
Lemma 3. Consider the N virtual estimators given in (54). Let
the feedback gain K and the coupling coefficient c be designed
to ensure Ac in (7) is Schur. Then, the virtual estimator state
x̂(k) converges to the desired consensus value.
Proof. The designed virtual estimator in (54) can be expressed
as
x̂i (k + 1) = Ax̂i (k) + Bûi (k), x̂i (0) = xi (0),
(55)
(56)
with local neighborhood tracking error ε̂ as
(57)
j=1
One can write the global virtual estimator state dynamics as
x̂(k + 1) = Ac x̂(k) ∈ RnN
(58)
where, ui,n (k) represents standard control protocol defined in
(3) and ui,comp (k) represents the distributed adaptive compensator protocol responsible for rejection of the adversarial input.
Consider the feedback gain K in the control protocol (3)
given as
T
K = (R1 + BT P1 B)−1 BT P1 A = R̄−1
1 B P1 A
AT P1 A − P1 − AT P1 B(R1 + BT P1 B)−1 BT P1 A = −Q1
ui (k) = cKεi (k) − di (k)
(62)
(63)
(59)
As A − cλi BK is Schur stable, with λi be the eigenvalues
of the normalized graph Laplacian matrix L̂ for i = 2, . . . , N
(64)
where di (k) is the estimated response of the adaptive compensator and K is the gain given by (62) and (63). The update for
the distributed adaptive compensator is designed as
di (k + 1) = θ cK(ε̂i − εi ) + θ di (k)
which yields
x̂(k) = Akc x̂(0) ∈ RnN
(61)
with a positive definite matrix Q1 .
The designed distributed control protocol is given by
N
ε̂i (k) = (1 + hi )−1 ∑ ai j (x̂ j − x̂i ))
ui (k) = ui,n (k) + ui,comp (k)
where R1 is a positive definite design matrix, and P1 is solution
of
where
ûi (k) = c(1 + hi )−1 K ε̂i (k)
We now design a distributed resilient control protocol as
(65)
where θ > 0 is a design parameter, and εi and ε̂i are defined
in (3) and (57). Define Q2 = QT2 > 0 as
Q2 = cR2 (I + H)−1 L = cR2 L̂
(66)
with some positive definite R2 . Let the minimum eigenvalue
of graph matrix L̂ be λm .
The design of adaptive compensator using virtual estimator
is provided in the following theorem.
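Before stating the theorem, the complete resilient loop (54), (62)-(65) can be put together in a few lines. The following sketch computes K from the discrete algebraic Riccati equation with SciPy and reuses N, c, A, B, Adj, H, local_tracking_error and imp_actuator_attack from the earlier sketches; theta, the horizon, the initial states and the attacked node are our choices, not the paper's:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

n, m = A.shape[0], B.shape[1]
Q1, R1 = np.eye(n), np.eye(m)
P1 = solve_discrete_are(A, B, Q1, R1)                    # Riccati equation (63)
K = np.linalg.solve(R1 + B.T @ P1 @ B, B.T @ P1 @ A)     # feedback gain (62)

theta = 0.9                                              # compensator parameter
gamma = np.zeros(N); gamma[ROOT] = 1.0                   # actuator attack on the root
x0 = np.random.default_rng(0).standard_normal((N, n))
x, x_hat = x0.copy(), x0.copy()                          # x_hat_i(0) = x_i(0), per (54)
d = np.zeros((N, m))

for k in range(200):
    eps, eps_hat = local_tracking_error(x), local_tracking_error(x_hat)
    for i in range(N):
        u = c * (K @ eps[i]) - d[i]                                    # control (64)
        d[i] = theta * c * (K @ (eps_hat[i] - eps[i])) + theta * d[i]  # compensator (65)
        x[i] = A @ x[i] + B @ (u + gamma[i] * imp_actuator_attack(i, k))
    x_hat = np.array([A @ x_hat[i] + B @ (c * (K @ eps_hat[i]))
                      for i in range(N)])                              # estimator (54)
```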
Theorem 5. Consider the effect of attacks on the sensors and actuators of agents in the DMAS (11). Let the control protocol be developed as in (64)-(66). Then, the agents’ consensus errors are bounded, and the bound can be made arbitrarily small, despite the attack.
Proof. According to Lemma 3, the virtual estimator state
converges to the desired consensus value. Therefore, the
consensus of discrete-time DMAS can be achieved by showing
the convergence of the agent state xi (k) to the virtual estimator
state x̂i (k). Define
x̃(k) = x(k) − x̂(k)
(67)
Then, with (11) and (55), one can write x̃(k + 1) as
˜
x̃(k + 1) = (IN ⊗ A − cL̂ ⊗ BK)x̃(k) − (IN ⊗ B)d(k)
(68)
where R̄1 = (R1 + BT P1 B) is a positive definite matrix. Using
(69), one can express (75) as
1 T
[d (k + 1)(R2 ⊗ R̄1 )d(k + 1) − 2d T (k + 1)(R2 ⊗ R̄1 ) f (k + 1)
θ2
˜
+ f T (k + 1)(R2 ⊗ R̄1 )( f (k + 1) − d˜T (k)(R2 ⊗ R̄1 )d(k)]
(76)
Using the dynamics of the distributed adaptive compensator
in (70) with (76), one has
= x̃T (k)(cL̂T Q2 ⊗ K T BT P1 A)x̃(k) + 2d˜T (k)(Q2 ⊗ BT P1 A)x̃(k)
+2 f T (k)(Q2 ⊗ BT P1 A)x̃(k) − 2θ −1 f T (k + 1)(Q2 ⊗ BT P1 A)x̃(k)
˜ + 2 f T (k)(R2 ⊗ R̄1 )d(k)
˜
+(1 − θ −2 )d˜T (k)(R2 ⊗ R̄1 )d(k)
˜ + f T (k)(R2 ⊗ R̄1 ) f (k)
−2θ −1 f T (k + 1)(R2 ⊗ R̄1 )d(k)
−2θ −1 f T (k + 1)(R2 ⊗ R̄1 ) f (k)
where
˜ = d(k) − f (k)
d(k)
[d1T (k), d2T (k), . . . , dNT (k)]T
where T = K T BT P1 BK.
We now consider the part 2 of the difference equation of
the Lyapunov candidate function in (72)
˜ + 1) − θ −2 d˜T (k)(R2 ⊗ R̄1 )d(k)
˜
θ −2 d˜T (k + 1)(R2 ⊗ R̄1 )d(k
(75)
(69)
+θ −2 f T (k + 1)(R2 ⊗ R̄1 )( f (k + 1)
RnN
with d(k) =
∈
as the global
adaptive compensator vector and the dynamics of the attack
f (k) is defined in (22).
Using (65), the global dynamics of the adaptive compensator can written as
T
d(k + 1) = θ cL̂ ⊗ R̄−1
1 B P1 Ax̃(k) + θ d(k)
(70)
R̄1 = R1 + BT P1 B.
where
Define the Lyapunov candidate function function as
(77)
With the dynamics of the attack in (22), one can write (77) as
= x̃T (k)[cL̂T Q2 ⊗ K T BT P1 A]x̃(k) + 2d˜T (k)(Q2 ⊗ BT P1 A)x̃(k)
˜
+2 f T (k)(I − θ −1W T )(Q2 ⊗ BT P1 A)x̃(k) + d˜T (k)(R2 ⊗ R̄1 )d(k)
T
−1
T
−2 ˜T
˜ + 2 f (k)(I − θ W )(R2 ⊗ R̄1 )d(k)
˜
−θ d (k)(R2 ⊗ R̄1 )d(k)
+ f T (k)[(R2 ⊗ R̄1 ) − 2θ −1W T (R2 ⊗ R̄1 )
+θ −2W T (R2 ⊗ R̄1 )W ] f (k)
˜
(71)
V (k) = x̃T (k)(Q2 ⊗ P1 )x̃(k) + θ −2 d˜T (k)(R2 ⊗ R̄1 )d(k)
The difference equation of the Lyapunov candidate function
can be written as
∆V (k) = V (k + 1) −V (k)
(78)
Using Young’s inequality, one can simplify (78) as
3
≤ x̃T (k)(cQ2 L̂ ⊗ AT P1 BK)x̃(k) + 2d˜T (k)(Q2 ⊗ BT P1 A)x̃(k)
2
˜
+(2 − θ −2 )d˜T (k)(R2 ⊗ R̄1 )d(k)
+4(I − θ −1 λmin (W ))2 f T (k)(R2 ⊗ R̄1 ) f (k)
T
= x̃ (k + 1)(Q2 ⊗ P1 )x̃(k + 1) − x̃T (k)(Q2 ⊗ P1 )x̃(k)
{z
}
|
(79)
part 1
˜ + 1) − θ −1 d˜T (k)(R2 ⊗ R1 )d(k)
˜
+ θ −1 d˜T (k + 1)(R2 ⊗ R1 )d(k
|
{z
}
part 2
(72)
Using (68), part 1 of the difference equation of the Lyapunov
candidate function (72) can expressed as
= x̃T (k)(Q2 ⊗ AT P1 A − 2cQ2 L̂ ⊗ AT P1 BK
+c2 L̂T Q2 L̂ ⊗ (BK)T P1 BK − (Q2 ⊗ P1 ))x̃(k)
˜
−2x̃ (k)[Q2 ⊗ A P1 B − cL̂ Q2 ⊗ (BK) P1 B]d(k)
˜
+d˜T (k)(Q2 ⊗ BT P1 B)d(k)
T
T
T
T
(73)
Integrating equation (74) and (79), one can express the difference equation of the Lyapunov candidate function as
∆V (k) = V (k + 1) −V (k) 6 −x̃T (k)(Q2 ⊗ Q1 )x̃(k)
1
−x̃T (k)(−Q2 + Q2 L̂) ⊗ AT P1 BK)x̃(k)
2
T
+2c2 λmin (L̂T L̂)λmin (T Q−1
1 ))x̃ (k)(Q2 ⊗ Q1 )x̃(k)
˜ + (2 − θ −2 )d˜T (k)(R2 ⊗ R̄1 )d(k)
˜
+2d˜T (k)(Q2 ⊗ BT P1 B)d(k)
+4(I − θ −1 λmin (W ))2 f T (k)(R2 ⊗ R̄1 ) f (k)
(80)
Further, simplify and write (80) as
Using Young’s inequality, one can further simplify and express
(73) as
6 −x̃T (k)(Q2 ⊗ Q1 )x̃(k) − x̃T (k)(−Q2 + 2cQ2 L̂) ⊗ AT P1 BK)x̃(k)
T
+2c2 λmin (c2 L̂T L̂λmin (T Q−1
1 ))x̃ (k)(Q2 ⊗ Q1 )x̃(k)
˜ + 2d˜T (k)(Q2 ⊗ BT P1 B)d(k)
˜
−2x̃T (k)(Q2 ⊗ AT P1 B)d(k)
∆V 6 −x̃T (k)(Q2 ⊗ Q1 )x̃(k)
1
−x̃T (k)(−Q2 + cQ2 L̂) ⊗ AT P1 BK)x̃(k)
2
T
(81)
+2c2 λmin (L̂T L̂)λmin (T Q−1
1 ))x̃ (k)(Q2 ⊗ Q1 )x̃(k)
˜
− (θ −2 − 2 − 2λmin (cL̂BT P1 BR̄−1 )d˜T (k)(R2 ⊗ R̄1 )d(k)
1
(74)
+4(I − θ −1 λmin (W ))2 f T (k)(R2 ⊗ R̄1 ) f (k)
One can infer that ∆V ≤ 0 if the following conditions are satisfied:

‖d̃(k)‖2 > [ 4(I − θ −1 λmin (W ))2 / (θ −2 − 2 − 2λmin (cL̂BT P1 BR̄1−1 )) ] ‖ f (k)‖2 and λm /2 < c < (1/λm ) √( 1 / (2λmin (T Q1−1 )) ).
This shows that the agents consensus error is
bounded. Therefore, the actual agents x(k) achieve the desired
consensus behavior with a bound. This completes the proof.
Remark 4. As presented in Theorem 4 and Corollary 2, existing H∞ approaches minimize the local neighborhood tracking
error of the system εi and are not capable of attenuating
the sophisticated attacks. In contrast, the designed distributed
resilient control can successfully attenuate the adverse effects
of the attacks using the distributed adaptive compensator. The
developed compensator di (k) in (70) minimizes the deviation
of the local neighborhood tracking error of the system εi from
the local neighborhood tracking error of the virtual estimator
ε̂i . We can also infer that, although the proposed controller is
designed for leader-less multi-agent systems, it can be used
for the special case of the leader-follower systems and the
containment control systems.
V. S IMULATION R ESULTS
This section presents the simulation results to evaluate the
effectiveness of the presented work. The effect of the attack
on both non-root node and root node is analyzed, and then the
efficiency of the designed resilient controller is shown.
Fig. 1: Graph topology
Consider a graph topology of DMAS shown in Fig. 1 for 5
agents communicating with each other, with the discrete-time DMAS dynamics

xi (k + 1) = [ 0 −1 ; 1 0 ] xi (k) + [ 0 ; 1 ] ui (k), i = 1, . . . , 5      (82)
Fig. 2: The DMAS response under the effect of IMP-based attack on agent 3 (non-root
node) without adaptive compensator. (a) The agent’s state (b) The local neighborhood
tracking error
2
Agent 1
Agent 2
Agent 3
Agent 4
Agent 5
1
0
-1
-2
1
Agent 1
Agent 2
Agent 3
Agent 4
Agent 5
0.5
0
-0.5
10
20
30
40
50
60
70
80
-1
10
20
(a)
30
40
50
60
70
80
(b)
Fig. 3: The DMAS response under the effect of IMP-based attack on agent 3 (nonroot node) with adaptive compensator. (a) The agent’s state (b) The local neighborhood
tracking error
adaptive attack compensator in (65) is applied. Fig. 3 shows
the response of the system under actuator attack using the
proposed controller. The system states achieve the desired
consensus behavior and the local neighborhood tracking error
goes to zero, even in the presence of the attack on non-root
node 3. This demonstrates the mitigation of attack using the
developed resilient controller.
B. Attacks on Root Node
In this subsection, the effect of the attack on a root node is
analyzed with the IMP-based attack signal.
Consider the effect of attack on actuator of Agent 2 by IMPbased attack signal i.e. ua2 (k) = sin(k). Fig. 4(a) shows that the
compromised agent destabilizes the entire network. All agents
of the DMAS deviate from the desired consensus behavior.
The simulation results verify Theorem 2 and Theorem 3. Let
Q1 and R1 be identity matrix in (62) and (63), respectively.
Now, the proposed resilient control protocol in (64) with
adaptive attack compensator in (65) is incorporated and Fig.
4(b) shows the response of the system under actuator attack on
root node 2. The system states achieve the desired consensus
behavior, even in the presence of the attack on root node 3.
This illustrates the mitigation of sophisticated attack using the
designed resilient controller.
100
2
Agent 1
Agent 2
Agent 3
Agent 4
Agent 5
50
A. Attacks on Non-root Node
0
This subsection presents the results for the effect of the
attack on a non-root node.
Consider an IMP-based attack signal which has a common
mode with the system dynamics is launched on actuator of
Agent 3 (non-root node) i.e. ua3 (k) = sin(k). Let Q1 and R1 be
identity matrix in (62) and (63), respectively. Fig. 2 shows that
Agents 4 and 5 which are reachable from the compromised
Agent 3 do not converge to the desired consensus value and
the local neighborhood tracking error goes to zero for intact
agents. These results comply with Theorem 3 and Theorem 4.
Then, the proposed resilient control protocol in (64) with
Agent 1
Agent 2
Agent 3
Agent 4
Agent 5
1
-50
-100
Agent 1
Agent 2
Agent 3
Agent 4
Agent 5
1
0
-1
10
20
30
(a)
40
50
60
70
80
-2
10
20
30
40
50
60
70
80
(b)
Fig. 4: The DMAS response under the effect of IMP-based attack on agent 2 (root node)
with adaptive compensator. (a) The agent’s state without adaptive compensator (b) The
agent’s state with adaptive compensator
VI. CONCLUSION
This paper presents a rigorous analysis of the effects of attacks on leaderless discrete-time DMAS and designs a resilient distributed control protocol for their mitigation. It is shown that an attack on a compromised agent can propagate through the entire network and affect intact agents that are reachable from it. Then, the IMP for the attacker shows that an attack on a single root node can destabilize the entire network; the attacker does not need knowledge of the communication graph or the system dynamics. Furthermore, the ineffectiveness of existing robust approaches against sophisticated attacks is discussed. To overcome the effects of attacks on the sensors and actuators of agents in discrete-time DMAS, a resilient controller is developed based on a virtual estimator. The presented controller shows that sensor and actuator attacks can be mitigated without compromising the connectivity of the network, and the desired consensus is achieved. Although we have considered a general leaderless consensus problem, the developed controller can also be used for other DMAS problems such as leader-follower and containment control. The analysis and effectiveness of the presented work have been shown in simulation results.
A New Hybrid Half-Duplex/Full-Duplex
Relaying System with Antenna Diversity
Cheng Li, Bin Xia, Zhiyong Chen
Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai,
China
Emails: {lichengg, bxia, zhiyongchen}@sjtu.edu.cn
Abstract
The hybrid half-duplex/full-duplex (HD/FD) relaying scheme is an effective paradigm to overcome
the negative effects of the self-interference incurred by the full-duplex (FD) mode. However, traditional
hybrid HD/FD scheme does not consider the diversity gain incurred by the multiple antennas of the
FD node when the system works in the HD mode, leading to the waste of the system resources. In
this paper, we propose a new hybrid HD/FD relaying scheme, which utilizes both the antennas of the
FD relay node for reception and transmission when the system works in the HD mode. With multiple
antennas, the maximum ratio combining/maximum ratio transmission is adopted to process the signals
at the relay node. Based on this scheme, we derive the exact closed-form system outage probability and
conduct various numerical simulations. The results show that the proposed scheme remarkably improves
the system outage performance over the traditional scheme, and demonstrate that the proposed scheme
can more effectively alleviate the adverse effects of the residual self-interference.
I. INTRODUCTION
The full-duplex (FD) communications are receiving more and more interest from both industry
and academia due to the ultimate utilization of radio resources [1], [2]. Compared to the half-duplex (HD) mode, the FD mode bears the capability of reception and transmission on the same time and frequency resource [3]. The FD mode is usually achieved with two antennas, one for reception and the other for transmission [4]. The signal leakage from the transmit antenna to the local receive antenna, which is called self-interference, would severely limit the
This work was supported in part by the National Key Science and Technology Specific Program under Grant 2016ZX03001015,
in part by the National Nature Science Foundation of China under Grant 61531009, and in part by the National High Technology
Research and Development Program of China under 863 5G Grant 2014AA01A704.
system performance. Although the self-interference can be greatly cancelled with various methods
[5], the performance gain of the FD mode over the HD mode is limited by the residual self-interference (RSI).
On the other hand, relay stations are usually deployed in remote areas to extend the
coverage of the cellular networks. Integrating the FD mode into the relay communication is an
effective way to improve the rate of the cell edge users [6]. The performance of the FD relay
systems has been investigated in [7]–[12]. Specifically, in [7]–[9], the authors have analyzed the
achievable rate and system outage probabilities of the FD relay networks. The diversity gain of
the FD relay system with direct link is investigated by the authors in [10]. In addition to the
performance analysis of the FD relay systems, the research has also shown that FD systems
could achieve better performance compared to their HD counterparts when the RSI is below a
certain threshold [11], [12].
To alleviate the adverse effects of the RSI, the hybrid half-duplex/full-duplex (HD/FD) relaying
scheme was proposed [13]. With the hybrid HD/FD relaying scheme, the system can dynamically
change between the HD mode and the FD mode. When the RSI is high, the system switches to the HD mode; thus, the RSI is inherently eliminated. However, in previous works, although
the authors considered the hybrid HD/FD relaying scheme, the system resources have not been
fully utilized. Specifically, the authors only used one antenna to receive and transmit signals when
the system works in the HD mode even if the FD relay node is equipped with two antennas
[13]. This leads to a certain diversity loss, resource waste and performance degradation.
In this paper, we propose a new hybrid HD/FD relaying scheme. In this scheme, the MRC/MRT,
which utilizes the antenna diversity to combat the channel fading, is adopted to process signals
at the FD relay node when the system works in the HD mode. For performance evaluation,
we derive the exact closed-form system outage probability, which is based on the FD outage
probability and the conditional HD outage probability of the proposed HD/FD relaying scheme.
In addition, we compare the proposed scheme with the traditional scheme proposed in [13]
in terms of the outage performance. The results show that the proposed scheme improves the
system outage performance over the traditional scheme and alleviates the adverse effects of the RSI. Finally, numerical simulations corroborate the theoretical results.
II. SYSTEM MODEL
In this section, we elaborate on the channel mode, the proposed hybrid HD/FD relaying scheme
and the specific signaling process.
Fig. 1. The hybrid HD/FD relay system with FD mode.
Fig. 2. The hybrid HD/FD relay system with HD mode. (a) Sub-time slot 1. (b) Sub-time slot 2.
A. Channel Model
In this paper, we consider the one-way decode-and-forward FD relay system, which consists of
a source node S, a destination node D and a relay node R. The source node S and the destination
node D are both equipped with one antenna and can only work in the HD mode. However, the
relay node R is equipped with two antennas Ant-1 and Ant-2, and each antenna is equipped
with a Rx module and a Tx module. Thus, the relay node has the capability to work in either
the FD mode or the HD mode. We assume that the direct link between the source node S and
the destination node D does not exist due to the deep fading and strong shadowing effects [11].
The channel coefficients h1,1, h1,2, h2,1 and h2,2 are shown in Fig. 1. Since the antennas Ant-1 and Ant-2 naturally achieve 40 dB of isolation, h1,1 and h1,2, as well as h2,1 and h2,2, are assumed to be independent of each other [13]. We model the links from the source S to the relay R and from the relay R to the destination D as complex Gaussian channels. Then, the envelopes of the channels follow the Rayleigh distribution. Let E{|hi,j|²} = Ωi,j, i, j ∈ {1, 2}, denote the channel parameters. In this work, we assume that
all the links are block fading wireless channels.
B. The Proposed Hybrid HD/FD Relaying Scheme
The proposed hybrid HD/FD scheme is elaborated on in the following.
The Full-Duplex Mode: The source node S transmits signals to the relay node, and the relay
node R forwards the previously decoded signals to the destination node D simultaneously as
depicted in Fig. 1. In the FD mode, we assume that the antenna Ant-1 is connected to the Rx
module to receive signals from the source node S and the antenna Ant-2 is connected to the
Tx module to forward the signals to the destination node D. Hence, the simultaneous reception
and transmission is achieved. It is worth noting that in the FD mode, although several self-interference cancellation techniques can be adopted to cancel the self-interference, the system is
harassed by the RSI [13].
The Half-Duplex Mode: The source node S transmits signals to the relay node, and the relay
node R forwards the decoded signals to the destination node D in the next time slot as depicted
in Fig. 2. In the HD mode, the system is divided into two phases: reception phase and relaying
phase. During the reception phase, both the two antennas Ant-1 and Ant-2 of the relay node
are connected to the Rx modules to receive signals from the source node S. Within the relaying
phase, both the two antennas Ant-1 and Ant-2 are connected to the Tx modules to forward the
recoded signals to the destination node D. In order to reap the diversity gain, we adopt the
maximum ratio combining (MRC) to combine the signals received at the two antennas and the
maximum ratio transmission (MRT) to forward the signals to the node D.
The Hybrid Scheme: In our proposed hybrid HD/FD scheme, the system works either in the
FD mode or the HD mode depending on the instantaneous channel capacity. If the instantaneous
capacity of the FD mode surpasses that of the HD mode, i.e., Cf d > Chd , the system chooses
to work in the FD mode. The benefits of the FD mode are two-fold: i) The FD mode could
inherently achieve higher spectrum efficiency; ii) The latency at the relay node could be greatly
reduced. On the other side, if the instantaneous capacity of the HD mode is larger than that of
the FD mode, i.e., Chd > Cf d , the system converts to the HD mode. The benefits of the HD
mode are also two-fold: i) The system can inherently eliminate the self-interference perfectly;
ii) The MRC/MRT can be adopted to combat the channel fading.
C. The Specific Signaling Process
The FD mode: In this mode, the signals received at the relay node and the destination node
can be expressed as
yr = h1,1 xs + hsi xr + nr ,
(1)
yd = h2,2 xr + nd ,
(2)
respectively, where xs and xr are the signals transmitted by the source S and the relay R
with power PS = E{|xs |2 } and PR = E{|xr |2 }, respectively. nr and nd denote the thermal
noise over the relay R and the destination D, respectively, with zero mean and variance σ 2 ,
i.e., nr, nd ∼ CN(0, σ²). The relay R receives the signals from the source S as well as the
self-interference signals from the antenna Ant-2 to Ant-1. hsi denotes the self-interference
channel coefficient. In this paper, we assume that the relay node could apply the self-interference
cancellation techniques in [5] to cancel hsi xr. Hence, the system is harassed by the RSI after self-interference cancellation. We denote the RSI as h̃si x̃r. Whatever the specific distributions of h̃si and x̃r are, due to the imperfect estimation of the self-interference channel and the distortion of the self-interference signals during the cancellation process, we assume that the effects of h̃si x̃r are characterized by the Gaussian distribution [13], i.e., h̃si x̃r ∼ CN(0, σ²_RSI), where σ²_RSI = Kr PR. Kr indicates the self-interference capability of the relay node. Thus, the received signal at the relay node after self-interference cancellation can be expressed as

ỹr = h1,1 xs + h̃si x̃r + nr,   (3)
Hence, the signal-to-interference-plus-noise ratios (SINRs) at the relay node and the destination node can be expressed as

γf,r = |h1,1|² PS / (Kr PR + σ²),   γf,d = |h2,2|² PR / σ²,   (4)

where γf,r and γf,d denote the SINRs at the relay node R and the destination node D under the FD mode, respectively.
The HD Mode: In this mode, the time slot is divided into two sub-time slots. In the first
sub-time slot, the relay node R receives signals from the source node S
yr = H1 xs + nr
(5)
where vector yr = [y1,r , y2,r ]T denotes the received signals at the antennas Ant-1 and Ant-2.
H1 = [h1,1 , h1,2 ]T denotes the estimated channel vector between the source S and the relay R.
nr = [n1,r , n2,r ]T denotes the Gaussian noises over the antennas Ant-1 and Ant-2 and we assume
n1,r , n2,r ∼ CN (0, σ 2). In order to maximize the SINR at the relay R, we adopt the MRC to
combine the received signals at the antennas Ant-1 and Ant-2. The combined signals can be
expressed as
y′r = W1ᴴ H1 xs + W1ᴴ nr,   (6)

where (·)ᴴ denotes the conjugate transpose, W1 = H1/‖H1‖F denotes the processing matrix of the MRC, and ‖·‖F denotes the Frobenius norm.
In the second sub-time slot, the relay node R uses the MRT technique to pre-process the
signals and then forwards the signals to the destination node D. The received signals at the
destination node can be expressed as
yd = W2ᴴ H2 xr + nd,   (7)

where W2 = H2/‖H2‖F is the processing matrix of the MRT and H2 = [h2,1, h2,2]ᵀ is the estimated channel vector from the antennas at the relay node R to the destination node D.
Based on the MRC/MRT, the SINRs at the relay node R and the destination node D can be expressed as

γh,r = (|h1,1|² + |h1,2|²) PS / σ²,   (8)
γh,d = (|h2,1|² + |h2,2|²) PR / σ²,   (9)

where γh,r and γh,d denote the SINRs at the relay node and the destination node under the HD mode, respectively.
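As a numerical illustration of (4), (8) and (9), the sketch below draws one Rayleigh channel realization, evaluates the per-mode SINRs and the corresponding capacities (with the 1/2 pre-factor for the HD mode), and selects the better mode as the hybrid scheme does. All parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
P_S, P_R, sigma2, K_r = 10.0, 10.0, 1.0, 0.1   # placeholder values

# |h|^2 of a Rayleigh-faded link is exponential with mean Omega (here 1).
g = {(i, j): rng.exponential(1.0) for i in (1, 2) for j in (1, 2)}

# FD mode, eq. (4): the relay suffers residual self-interference K_r * P_R.
gamma_f_r = g[(1, 1)] * P_S / (K_r * P_R + sigma2)
gamma_f_d = g[(2, 2)] * P_R / sigma2

# HD mode with MRC/MRT, eqs. (8)-(9): two-branch diversity, no RSI.
gamma_h_r = (g[(1, 1)] + g[(1, 2)]) * P_S / sigma2
gamma_h_d = (g[(2, 1)] + g[(2, 2)]) * P_R / sigma2

C_fd = np.log2(1 + min(gamma_f_r, gamma_f_d))
C_hd = 0.5 * np.log2(1 + min(gamma_h_r, gamma_h_d))
print("FD capacity %.2f, HD capacity %.2f -> use %s"
      % (C_fd, C_hd, "FD" if C_fd > C_hd else "HD"))
```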
III. OUTAGE PERFORMANCE ANALYSIS
In this section, we analyze the system outage performance of the considered one-way decode-and-forward FD relay system under the proposed hybrid HD/FD relaying scheme.
The system outage probabilities under the FD mode and the HD mode can be defined as

P^fd_out = Pr{Cfd = log₂(1 + min{γf,r, γf,d}) < R0},   (10)
P^hd_out = Pr{Chd = (1/2) log₂(1 + min{γh,r, γh,d}) < R0},   (11)
respectively, where Cf d and Chd denote the system capacity of the FD mode and the HD
mode, respectively. P r{x} denotes the probability of the event x. We can note that due to the
simultaneous transmission, the pre-factor 1/2 disappears under the FD mode, which indicates
that the FD mode could effectively recover the spectrum efficiency.
With the proposed hybrid HD/FD scheme, the system outage probability can be calculated as

P^sys_out = Pr{Cfd < R0, Chd < R0} = Pr{Cfd < R0} Pr{Chd < R0 | Cfd < R0}.   (12)
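A minimal Monte Carlo check of the hybrid outage definition (12) can be written as follows; the powers, RSI coefficient and rate threshold are arbitrary placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)
P_S = P_R = 10.0; sigma2 = 1.0; K_r = 0.1; R0 = 1.0   # placeholders
trials = 200_000
# Columns: |h11|^2, |h12|^2, |h21|^2, |h22|^2 (unit-mean exponentials).
h = rng.exponential(1.0, size=(trials, 4))

C_fd = np.log2(1 + np.minimum(h[:, 0] * P_S / (K_r * P_R + sigma2),
                              h[:, 3] * P_R / sigma2))
C_hd = 0.5 * np.log2(1 + np.minimum((h[:, 0] + h[:, 1]) * P_S / sigma2,
                                    (h[:, 2] + h[:, 3]) * P_R / sigma2))
# Eq. (12): the hybrid scheme is in outage only if both modes fail.
p_sys = np.mean((C_fd < R0) & (C_hd < R0))
print("Monte Carlo system outage:", p_sys)
```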
According to the Total Probability Theorem, the system outage probability of the FD mode can be divided into three mutually exclusive events A, B and C as follows:

P^fd_out = Pr{C^sr_fd < R0, C^rd_fd > R0}  (event A)
         + Pr{C^sr_fd > R0, C^rd_fd < R0}  (event B)
         + Pr{C^sr_fd < R0, C^rd_fd < R0}  (event C),   (13)
where C^sr_fd = log₂(1 + γf,r) and C^rd_fd = log₂(1 + γf,d) denote the channel capacities of the links
from the source S to the relay R and from the relay R to the destination D, respectively. Event
A denotes that the link from the source S to the relay R is in the outage state but the link from
the relay R to the destination D is not. Event B denotes that the link from the source S to the
relay R is not in the outage state whereas the link from the relay R to the destination D is.
Event C denotes that both the links from the source S to the relay R and from the relay R to
the destination D are in the outage state.
Applying the Total Probability Theorem again, the system outage probability can be expressed as

P^sys_out = Pr{A} Pr{Chd < R0 | A} + Pr{B} Pr{Chd < R0 | B} + Pr{C} Pr{Chd < R0 | C}.   (14)
Next, we will derive the system outage probabilities under the conditions of events A, B and
C, respectively.
A. System Outage Probability Under the Event A
Recall that the complex Gaussian channels have the parameters E{|hi,j|²} = Ωi,j, i, j ∈ {1, 2}; hence each random variable |hi,j|² follows the exponential distribution. The probability density function (pdf) of |hi,j|² is expressed as

f_{|hi,j|²}(x) = λi,j e^{−λi,j x},   (15)

where λi,j = 1/Ωi,j, i, j ∈ {1, 2}.
Thus, we have

Pr{A} = Pr{C^sr_fd < R0, C^rd_fd > R0}
      = Pr{log₂(1 + γf,r) < R0, log₂(1 + γf,d) > R0}
      = Pr{|h1,1|² PS / (KR PR + σ²) < t1, |h2,2|² PR / σ² > t1}
      = ∫₀^{t1(KR PR + σ²)/PS} ∫_{t1σ²/PR}^{+∞} f_{|h1,1|²}(x) f_{|h2,2|²}(y) dy dx
      = (1 − e^{−λ1,1 t1 (KR PR + σ²)/PS}) e^{−λ2,2 t1 σ²/PR},   (16)

where t1 = 2^{R0} − 1.

Under the condition of event A, the outage probability of the HD mode can be calculated as

Pr{Chd < R0 | A} = Pr{C^sr_hd < R0 | A} + Pr{C^rd_hd < R0 | A} − Pr{C^sr_hd < R0, C^rd_hd < R0 | A},   (17)

where C^sr_hd and C^rd_hd denote the channel capacities of the links from the source S to the relay R and from the relay R to the destination D, respectively.

We first calculate Pr{C^sr_hd < R0 | A}:

Pr{C^sr_hd < R0 | A} = Pr{(1/2) log₂(1 + (|h1,1|² + |h1,2|²)PS/σ²) < R0 | A}
  = Pr{(|h1,1|² + |h1,2|²)PS/σ² < t2 | |h1,1|² < t1(KR PR + σ²)/PS}
  = ∫₀^{min{m1, m2}} ∫₀^{t2σ²/PS − x} [f_{|h1,1|²}(x)/P^A_{|h1,1|²}(m1)] f_{|h1,2|²}(y) dy dx
  = (1/P^A_{|h1,1|²}(m1)) [(1 − e^{−λ1,1 min{m1, m2}}) − (λ1,1 e^{−λ1,2 m2}/(λ1,2 − λ1,1))(e^{(λ1,2 − λ1,1) min{m1, m2}} − 1)],   (18)

where t2 = 4^{R0} − 1, m1 = t1(KR PR + σ²)/PS and m2 = t2σ²/PS; P^A_{|h1,1|²}(m1) = 1 − e^{−λ1,1 m1} denotes the probability of |h1,1|² < m1 under event A.

In a similar way, we can derive Pr{C^rd_hd < R0 | A}:

Pr{C^rd_hd < R0 | A} = ∫_{m3}^{m4} ∫₀^{m4 − y} f_{(|h2,2|² | A)}(y) f_{|h2,1|²}(x) dx dy
  = (1/P^A_{|h2,2|²}(m3)) [(e^{−λ2,2 m3} − e^{−λ2,2 m4}) − (λ2,2 e^{−λ2,1 m4}/(λ2,1 − λ2,2))(e^{(λ2,1 − λ2,2) m4} − e^{(λ2,1 − λ2,2) m3})],   (19)

where m3 = t1σ²/PR, m4 = t2σ²/PR, and P^A_{|h2,2|²}(m3) = e^{−λ2,2 m3} denotes the probability of |h2,2|² > m3 under event A.

Next, we derive the probability that both the links from the source S to the relay R and from the relay R to the destination D are in the outage region under the HD mode, i.e.,

Pr{C^sr_hd < R0, C^rd_hd < R0 | A}
  = (1/P(A)) ∫₀^{mmin} ∫₀^{m2 − x1} ∫_{m3}^{m4} ∫₀^{m4 − y2} f(x1) f(y1) f(x2) f(y2) dx2 dy2 dy1 dx1
  = Pr{C^sr_hd < R0 | A} × Pr{C^rd_hd < R0 | A},   (20)

where mmin = min{m1, m2} and P(A) = P^A_{|h1,1|²}(m1) × P^A_{|h2,2|²}(m3) denotes the probability of the whole event A. The last equality follows from the independence of the links from the source S to the relay R and from the relay R to the destination D; f(x1) = f_{|h1,1|²}(x1), f(y1) = f_{|h1,2|²}(y1), f(x2) = f_{|h2,1|²}(x2) and f(y2) = f_{|h2,2|²}(y2) are the pdfs of the corresponding channels.

Substituting (18), (19) and (20) into (17), we obtain the system outage probability under the condition of event A. The probabilities under events B and C can be derived in a similar manner; due to the length limit of this paper, the details of those derivations are omitted. Then, according to (14), we obtain the overall system outage probability of the proposed hybrid HD/FD relaying scheme.
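Before moving to the simulations, closed forms such as (16) are easy to sanity-check numerically. The sketch below compares (16) against a direct Monte Carlo estimate of event A; all parameter values are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
lam11 = lam22 = 1.0                 # rate parameters 1/Omega_{i,j}
P_S = P_R = 10.0; sigma2 = 1.0; K_R = 0.1; R0 = 1.0
t1 = 2 ** R0 - 1

# Closed form (16).
closed_form = (1 - np.exp(-lam11 * t1 * (K_R * P_R + sigma2) / P_S)) \
              * np.exp(-lam22 * t1 * sigma2 / P_R)

# Monte Carlo estimate of Pr{A}.
n = 500_000
h11 = rng.exponential(1 / lam11, n)
h22 = rng.exponential(1 / lam22, n)
event_A = (h11 * P_S / (K_R * P_R + sigma2) < t1) & (h22 * P_R / sigma2 > t1)
print(closed_form, event_A.mean())  # the two estimates should agree closely
```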
IV. SIMULATION RESULTS
In this section, we investigate the proposed hybrid HD/FD relaying scheme via numerical simulations.
Fig. 3. The comparison with regard to the transmit power PR .
In Fig. 3, we compare the traditional hybrid HD/FD relaying scheme in [13] with the proposed HD/FD relaying scheme as the transmit power PR varies. The transmit power of the source node PS is set to 30 dBm (dB in the following). The variances of the RSI and the Gaussian noises are set to 1. The minimum data rate threshold is set to 3 bps/Hz. The results show that within the whole regime of the transmit power PR, the proposed hybrid HD/FD relaying scheme outperforms the traditional one. Especially in the high transmit power regime of PR, the proposed scheme dramatically improves the system outage performance compared to the traditional HD/FD relaying scheme. The theoretical
Fig. 4. The comparison with regard to the minimum data rate R0 .
evaluations of both the traditional and the proposed HD/FD scheme are validated by the Monte
Carlo (M-C in the figure) simulations.
In Fig. 4, we compare the system outage probability of the traditional HD/FD relaying scheme proposed in [13] with that of the proposed HD/FD relaying scheme as the minimum data rate threshold R0 varies. The transmit powers PS and PR are both set to 30 dB. The simulation results demonstrate the superiority of the proposed hybrid HD/FD relaying scheme over the traditional one. Especially in the low data threshold R0 regime, the system outage performance is improved by two to three orders of magnitude. These insights reveal that when all the antennas of the FD node are utilized in the HD mode, the system performance can be greatly improved by the incurred diversity gain.
In Fig. 5, we plot the system outage probabilities of the proposed hybrid HD/FD relaying scheme, the FD mode and the HD mode against the transmit power PS. From the simulation results we can observe that in the low transmit power regime of PS, the FD mode outperforms the HD mode. However, by utilizing the two antennas, the outage
probabilities of the HD mode decrease faster than that of the FD mode with the increase of
the transmit power PS . In the high transmit power regime, the HD mode outperforms the FD
mode. Nevertheless, the proposed hybrid HD/FD relaying scheme can always achieve better
performance than the FD mode and the HD mode in the low transmit power and high transmit
power regimes of PS .
Fig. 5. The system outage probability of the proposed scheme vs. PS .
Fig. 6. The system outage probability of the proposed scheme vs. σ²_RSI.
In Fig. 6, we plot the outage probabilities of the FD mode, the HD mode and the proposed
hybrid HD/FD relaying scheme with the RSI. The results show that the RSI would severely limit
the system outage performance of the FD mode whereas the outage probabilities of the HD mode
are independent of the RSI. In addition, it is noted that when the self-interference can be suppressed
to a certain threshold, the FD mode is superior to the HD mode. However, the proposed hybrid
HD/FD relaying scheme always achieves a better performance than the FD mode and HD mode
within the whole RSI regime. This indicates that the proposed hybrid HD/FD relaying scheme
can restrict the severe adverse effects of the RSI.
V. CONCLUSION
In this paper, we proposed a new hybrid HD/FD relaying scheme, in which the two antennas
of the FD relay node could be fully utilized when the system worked in the HD mode. In
addition, we adopted the MRC/MRT technique to combine and transmit the signals to reap the
diversity gain. The simulation results showed that the proposed hybrid HD/FD relaying scheme
could dramatically improve the system outage performance compared to the traditional hybrid
HD/FD relaying scheme. Moreover, the proposed hybrid HD/FD relaying scheme could more
efficiently alleviate the adverse effects of the RSI incurred by the FD mode.
REFERENCES
[1] S. Goyal, P. Liu, S. S. Panwar, R. A. Difazio, R. Yang, and E. Bala, “Full duplex cellular systems: will doubling interference
prevent doubling capacity?” IEEE Commun. Mag., vol. 53, no. 5, pp. 121–127, May 2015.
[2] C. Yang, Y. Yao, Z. Chen, and B. Xia, “Analysis on cache-enabled wireless heterogeneous networks,” IEEE Trans. Wireless
Commun., vol. 15, no. 1, pp. 131–145, Jan. 2016.
[3] L. Song, Y. Li, and Z. Han, “Resource allocation in full-duplex communications for future wireless networks,” IEEE
Wireless Commun., vol. 22, no. 4, pp. 88–96, Aug. 2015.
[4] M. Duarte, C. Dick, and A. Sabharwal, “Experiment-driven characterization of full-duplex wireless systems,” IEEE Trans.
Wireless Commun., vol. 11, no. 12, pp. 4296–4307, Dec. 2012.
[5] T. Riihonen, S. Werner, and R. Wichman, “Mitigation of loopback self-interference in full-duplex MIMO relays,” IEEE
Trans. Signal Process., vol. 59, no. 12, pp. 5983–5993, Dec. 2011.
[6] A. Nosratinia, T. E. Hunter, and A. Hedayat, “Cooperative communication in wireless networks,” IEEE Commun. Mag.,
vol. 42, no. 10, pp. 74–80, Oct. 2004.
[7] Z. Zhang, Z. Ma, Z. Ding, M. Xiao, and G. K. Karagiannidis, “Full-duplex two-way and one-way relaying: Average rate,
outage probability, and tradeoffs,” IEEE Trans. Wireless Commun., vol. 15, no. 6, pp. 3920–3933, Jun. 2016.
[8] I. Krikidis, H. A. Suraweera, P. J. Smith, and C. Yuen, “Full-duplex relay selection for amplify-and-forward cooperative
networks,” IEEE Trans. Wireless Commun., vol. 11, no. 12, pp. 4381–4393, Dec. 2012.
[9] B. Xia, C. Li, and Q. Jiang, “Outage performance analysis of multi-user selection for two-way full-duplex relay systems,”
IEEE Commun. Lett., vol. PP, no. 99, pp. 1–1, 2016.
[10] L. J. Rodríguez, N. H. Tran, and T. Le-Ngoc, “Performance of full-duplex AF relaying in the presence of residual self-interference,” IEEE J. Sel. Areas Commun., vol. 32, no. 9, pp. 1752–1764, Sep. 2014.
[11] H. Cui, M. Ma, L. Song, and B. Jiao, “Relay selection for two-way full duplex relay networks with amplify-and-forward
protocol,” IEEE Trans. Wireless Commun., vol. 13, no. 7, pp. 3768–3777, Jul. 2014.
[12] T. Kwon, S. Lim, S. Choi, and D. Hong, “Optimal duplex mode for DF relay in terms of the outage probability,” IEEE
Trans.Veh. Technol., vol. 59, no. 7, pp. 3628–3634, Sep. 2010.
[13] T. Riihonen, S. Werner, and R. Wichman, “Hybrid full-duplex/half-duplex relaying with transmit power adaptation,” IEEE
Trans. Wireless Commun., vol. 10, no. 9, pp. 3074–3085, Sep. 2011.
Image matting with normalized weight and semi-supervised learning
Ping Li, Tingyan Duan, Yongfeng Cao*
Big data and computer science school
Guizhou Normal University
Guizhou China
1924678362@qq.com, 1181268816@qq.com, cyfeis@whu.edu.cn
Abstract—Image matting is an important vision problem. The mainstream methods for it combine sampling-based methods and propagation-based methods. In this paper, we deal with the combination with a normalized weighting parameter, which can well control the relative relationship between information from sampling and from propagation. A reasonable value range for this parameter is given based on statistics from the standard benchmark dataset [1]. The matting is further improved by introducing semi-supervised learning iterations, which automatically refine the trimap without user interaction. This is especially beneficial when the trimap is coarse. The experimental results on the standard benchmark dataset show that both the normalized weighting parameter and the semi-supervised learning iterations can significantly improve the matting performance.

Keywords—image matting; sampling-based method; propagation-based method; semi-supervised learning
I. INTRODUCTION
A. Background and related work
Matting is an important image processing technology for
accurately estimating foreground objects in images and videos.
It is often used in image processing software, virtual studio,
film post-production and so on[2]. Mathematically, for a given
image I , any pixel can be expressed as a linear combination
of foreground color F and background color B :
I_z = α_z F + (1 − α_z) B,   (1)

where z = (x, y) represents the pixel coordinates in the image, and α_z ∈ [0, 1] is the foreground opacity of the pixel at z [3]. If α_z = 1, the pixel is foreground; if α_z = 0, the pixel is background; when 0 < α_z < 1, the pixel is a mixed pixel, which means it is affected by the foreground and the background at the same time. Usually, most pixels of a natural image are pure foreground or background, and only a small number of them are mixed pixels [2]. Most matting algorithms need a trimap as input [2]. In a trimap, there are three areas: the foreground area F (α_z = 1), the background area B (α_z = 0) and the unknown area U (0 < α_z < 1). The
in the unknown area.
Recently, many deep learning methods have been used for image matting [4, 5]. Xu et al. [4] first predicted the initial alpha matte with a deep convolutional encoder-decoder neural network, and then refined it with a small convolutional neural network. Shen et al. [5] proposed an automatic matting method for portraits using a convolutional neural network.
Apart from the above deep-learning-based ones, most image matting methods can be categorized into sampling-based methods [3, 6], propagation-based methods [7] and combinations of the two [8, 9]. Sampling-based methods collect sample pairs similar to the unknown pixels from the foreground area and the background area based on pixel color similarity. If the input image has no clearly distinct foreground and background colors, or has highly textured regions, these methods are less effective. The latest work [6] makes up for this shortcoming by applying local smoothing as a post-processing step to further improve the quality of the alpha matte. Propagation-based methods propagate the alpha values of known pixels to unknown pixels through local smoothing. They work on textured images, but still not on images with complex structures.
More effective methods are those combining sampling and propagation. The robust matting method [8] selects highly confident samples to estimate alpha values and propagates the alpha values of known pixels to unknown pixels within a local window. The local and nonlocal smooth priors method [9] adds nonlocal smoothing information on top of the information from sampling and propagation. These methods have achieved good matting effects, but they did not give a principled way to balance the data term from sampling and the local smooth term from propagation, setting only an empirical weight on the data term.
B. The main contributions of this paper
There are two points accounting for the main contributions of this paper:

• A normalized weight parameter is used to well control the relative roles of the data term and the local smooth term in matting, and a reasonable value range for setting the parameter is given based on experiments on the standard data set [1].

• Semi-supervised learning iterations are introduced into matting to incrementally increase the number of labeled pixels in the trimap. This can improve the matting result without increasing the trimap-making burden on users.

C. Paper content arrangement

The contents of this paper are arranged as follows: The first section introduces the research background, related works and the main contributions of this paper. The second section is the method part of this paper. In the third section, the method is evaluated and analyzed on the standard data set [1]. The fourth section summarizes this paper and points out future research directions.

II. MATTING METHOD

Our method mainly includes three steps, as in Fig. 1. First, the Laplacian matrix L is constructed by combining the data term W and the local smooth term W^lap with a normalized weight parameter. Second, the alpha matte is obtained by spectral clustering optimization based on L. Third, based on the current alpha matte, semi-supervised learning is used to refine the trimap. This makes a loop in the process and enables our method to iterate several times to achieve a good resulting matte.

Fig. 1. Method flow chart (original image and trimap → data term W and local smooth term W^lap → Laplacian matrix L = λW + (1−λ)W^lap → spectral clustering optimization → semi-supervised learning → resulting alpha matte)

A. Normalized weight parameter

In order to well control the relative relationship between the data term and the local smooth term in the matting process, this paper constructs the Laplacian matrix with a normalized weight parameter λ ∈ [0, 1] as follows:

L = λW + (1 − λ)W^lap,   (2)

where W is the data term contributed by sampling-based methods, and W^lap is the local smooth term contributed by propagation-based methods. We calculate these two terms as in [8]. In contrast to the Laplacian construction formula L = W + W^lap of the robust matting method [8], our normalized parameter can more clearly control the relative weight between the two terms. The experiment section will suggest a range for setting λ.

B. Optimization method

Image matting can be treated as a graph partition problem which aims at optimally dividing a weighted graph into two or more sub-graphs. The spectral clustering method [10] solves this problem as:

arg min_q (1/2) qᵀ L q,   (3)

where L is the Laplacian matrix constructed in Section II-A and the vector q records the alpha values of all pixels in the image (the foreground is 1, the background is 0, and the others are to be solved). Rewrite the matrix L and the vector q as

L = [ Lk  R ; Rᵀ  Lu ],   (4)
q = [ qk ; qu ],   (5)
where Lk is the Laplacian matrix of the known region
(foreground area and background area), Lu is the Laplacian
matrix of the unknown region, qk is the alpha vector of the
known region and qu is the alpha vector of the unknown region.
By substituting (4) and (5) into (3) and expanding, it can be seen that qu can be obtained by solving the following linear system:

Lu qu = −Rᵀ qk.   (6)

In this paper, we use the conjugate gradient method to obtain qu.
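As an illustration, a sparse conjugate gradient solve of (6) might look as follows; the matrices here are random stand-ins, since building the true L_u and Rᵀ from the data and smoothness terms is beyond this snippet.

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# Toy stand-in for the matting system (6); the real L_u comes from
# L = lambda*W + (1 - lambda)*W_lap restricted to the unknown pixels.
n_u, n_k = 400, 100
M = sprandom(n_u, n_u, density=0.02, random_state=0)
L_u = (M @ M.T) + 1e-2 * identity(n_u)        # symmetric positive definite
R_T = sprandom(n_u, n_k, density=0.05, random_state=1)  # stands for R^T
q_k = rng.integers(0, 2, n_k).astype(float)   # known alphas: 0 or 1

rhs = -(R_T @ q_k)
q_u, info = cg(L_u, rhs, maxiter=1000)        # conjugate gradient solve
print("converged:", info == 0, "first unknowns:", q_u[:4])
```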
C. Semi-supervised learning
In the trimap given by users, the more detailed the delineation of the foreground and background, the better the matting result is. However, users want to spend as little effort as possible, so we usually obtain a very rough trimap, where a large
number of pixels are unknown. In this section, we will introduce semi-supervised learning[11] to automatically increase
the number of pixels with known labels in the trimap. The specific process is shown in Fig.2. Firstly, based on the current
alpha matte, some pixels with high confidence are chosen from
the unknown area and labeled automatically. For example, if
the alpha value of a selected pixel is close to zero, it will be
labeled as the background; if the alpha value is close to one, it
will be labeled as the foreground. Then we update the trimap
with these newly-labeled pixels and do the matting process
again to produce a new alpha matte. Because there are more
pixels with known labels in the updated trimap, the resulting
alpha matte will be improved this time. To get the maximum improvement, this semi-supervised learning process can be iterated several times. It does not increase the workload of users, only that of computers.

Fig. 2. Semi-supervised learning for matting (the alpha matte drives label prediction for high-confidence pixels in the unknown area, which updates the trimap for the next matting round)

Fig. 3. Example images from the online benchmark for image matting. (a) original image; (b) trimap of coarse level 1; (c) trimap of coarse level 2. In the trimaps, the foreground is white, the background is black, and the unknown area is gray.
Wrongly labeled pixels will bring erroneous information into the data term and the local smooth term, and thus make the alpha
matte go wrong. Considering this error, we select pixels from
the unknown area with three strict constraints, trying to make
sure the predicted labels of these pixels are correct. Namely,
for each pixel x whose label will be updated in the trimap by semi-supervised learning, it needs to meet the following conditions at the same time.

Space constraint:

x ∈ U and ∃ y ∈ K such that x ∈ N_y,   (7)

where U is the set of unknown pixels in the current trimap, K is the set of foreground and background pixels in the current trimap, and N_y is the spatial neighborhood of pixel y (in this paper, we choose the 4-connected pixels).

Confidence constraint:

|α_x − 0.5| ≥ t_α,   (8)

where α_x is the alpha value of pixel x and t_α is a threshold.

Proportion constraint:

x ∈ U_{t_percent},   (9)

where t_percent is a proportion threshold (in this paper, we use 10%), and U_{t_percent} is the set formed by the top t_percent pixels of a list obtained by sorting all pixels in the current unknown area in descending order according to |α_x − 0.5|.
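A sketch of one labeling round implementing constraints (7)-(9) is given below. The confidence threshold value and the exact tie-breaking are assumptions, as the text does not fix them; the trimap encoding (0 = background, 1 = foreground, −1 = unknown) is also a placeholder convention.

```python
import numpy as np

def select_confident_pixels(alpha, trimap, t_alpha=0.45, t_percent=0.10):
    """One semi-supervised labeling round (sketch).

    trimap: 0 = background, 1 = foreground, -1 = unknown.
    """
    known = trimap != -1
    # (7) space constraint: a 4-connected neighbor is already known.
    pad = np.pad(known, 1, constant_values=False)
    touch = pad[:-2, 1:-1] | pad[2:, 1:-1] | pad[1:-1, :-2] | pad[1:-1, 2:]
    # (8) confidence constraint: alpha far from 0.5.
    conf = np.abs(alpha - 0.5)
    cand = (trimap == -1) & touch & (conf >= t_alpha)
    # (9) proportion constraint: keep only the top fraction of unknowns.
    if cand.any():
        k = max(1, int(t_percent * (trimap == -1).sum()))
        thresh = np.sort(conf[cand])[::-1][:k].min()
        cand &= conf >= thresh
    new_trimap = trimap.copy()
    new_trimap[cand & (alpha >= 0.5)] = 1   # label as foreground
    new_trimap[cand & (alpha < 0.5)] = 0    # label as background
    return new_trimap
```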
III. EXPERIMENT AND ANALYSIS
The online benchmark for image matting on
www.alphamatting.com is used here to evaluate our methods.
We choose the low resolution image set which includes 27
input images of size from 490×800 to 719×800, their ground
truth and two sets of trimaps of different coarse levels. Fig. 3
shows sample images in the online benchmark.
The mean square error (MSE) is used as the indicator of matting performance in all experiments. The local window used for calculating the local smooth term is set to 3×3.

Fig. 4. The MSE of the proposed matting method (λ = 0.001) and robust matting.
A. Normalized weight parameter
Fig. 4 shows the MSE of our proposed matting method with the normalized parameter λ = 0.001 and of the robust matting method [8] with its weight parameter set to 0.1. It can be seen that our method outperforms the robust matting method on almost all test images.

It is important to know how this normalized weight parameter affects matting performance. Fig. 5 shows how the MSE indicator changes with the parameter λ. It can be seen that the MSE indicator is quite stable when λ is bigger than 0.05, and all the best MSEs for the testing images are obtained with λ in [0, 0.01]. The rule that the smaller λ is, the better the matting performance holds for all values of λ except near zero, where the matting performance degrades as λ approaches zero. So we suggest the value range [0.001, 0.01] for setting the parameter λ in practice.

Fig. 5. MSE indicator changes with parameter λ. (a) using trimaps of coarse level 1. (b) using trimaps of coarse level 2.
B. Semi-supervised learning
In this section, we analyze how semi-supervised learning affects the matting performance and how to choose a good number of iterations. In all experiments, the normalized weight parameter λ is set to 0.001. Because semi-supervised learning is especially worthwhile when labeled pixels are scarce, for the experiments in this section we use trimaps of coarse level 2 (see Fig. 3 for an example).
Fig. 6 shows the MSE of our proposed matting method without and with semi-supervised learning (iterating 4 times). It can be seen that the semi-supervised iterations could improve the matting results.

Fig. 6. The MSE of the proposed matting method with and without semi-supervised learning.

Fig. 7 shows the percentage increase of matting performance (PIMP) on each image brought by the semi-supervised learning iterations. The specific formula defining PIMP is

PIMP = 1 − MUSL/MWUSL if this quantity is nonnegative, and PIMP = 0 otherwise,   (10)

where MUSL is the MSE when using semi-supervised learning and MWUSL is the MSE without using it. It can be seen that, with the increase of iterations, the PIMP of the matting results increases first and then begins to decrease after a certain number of iterations.

Fig. 7. The percentage increase of matting performance (PIMP) changes with the number of iterations.

In the forepart of the iterations, the labels automatically given to originally unknown pixels are correct, so the matting results are improved by the additional information. When the number of iterations becomes big, the remaining unknown pixels are few and tend to be mixed pixels (near the boundary of foreground and background). This makes predicting their correct labels very difficult, and once some pixels are wrongly labeled, the matting results begin to go bad.

This observation suggests not using too many iterations; moreover, the best iteration number for an image is related not only to the coarseness of its trimap but also to some other characteristics (which we need to find out in future research).
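Under the reconstruction of (10) used above, the metric reduces to a small function; the clamping at zero for negative values is our reading of the piecewise definition.

```python
def pimp(mse_with_ssl, mse_without_ssl):
    # Eq. (10): relative MSE reduction from semi-supervised learning,
    # clamped at zero when the iterations hurt performance.
    value = 1.0 - mse_with_ssl / mse_without_ssl
    return max(0.0, value)

print(pimp(0.8, 1.0))  # 0.2 -> a 20% improvement
```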
IV. CONCLUSION

This paper proposed a matting method based on a normalized weight and semi-supervised learning.

The normalized parameter λ can well control the relative weight between the data term and the local smooth term in matting. An experimental value range has been suggested for setting this parameter. Semi-supervised learning iterations can significantly reduce the users' burden of delineating a refined trimap and obtain a good matting result from a coarse trimap. However, the best number of iterations depends on more than the roughness of the trimap, and so is not easy to set. Generally, the rougher the trimap, the more semi-supervised learning iterations can be taken.

Our future research will focus on adaptively selecting the optimal weight coefficient and the number of semi-supervised learning iterations.
ACKNOWLEDGMENT
This work was partially supported by NSF of China
(41161065), NSF of Guizhou (GZKJ [2017]1128) and the Educational Commission of Guizhou (KY[2016]027).
REFERENCES
[1]
[2]
http://www.alphamatting.com.
G. L. Yao, "A Survey on Pre-Processing in Image Matting," JCST, vol.
32, pp. 122-138, 2017.
[3]
[4]
[5]
[6]
[7]
L. Karacan, A. Erdem, and E. Erdem, "Alpha Matting with KLDivergence Based Sparse Sampling," in ICCV 2016, pp. 424-432.
N. Xu, B. Price, S. Cohen, and T. Huang, "Deep Image Matting," arXiv
preprint arXiv:1703.03872, 2017.
X. Shen, X. Tao, H. Gao, C. Zhou, and J. Jia, "Deep automatic portrait
matting," in ECCV, 2016, pp. 92-107.
J. Johnson, D. Rajan, and H. Cholakkal, "Sparse codes as alpha matte,"
in BMVC, 2014.
Y. Shi, O. C. Au, J. Pang, K. Tang, W. Sun, H. Zhang, et al., "Color clustering matting," in ICME, 2013, pp. 1-6.
[8]
J. Wang and M. F. Cohen, "Optimized color sampling for robust
matting," in CVPR, 2007, pp. 1-8.
[9] X. Chen, D. Zou, S. Zhiying Zhou, Q. Zhao, and P. Tan, "Image matting
with local and nonlocal smooth priors," in CVPR, 2013, pp. 1902-1907.
[10] D. R. DeFord and S. D. Pauls, "Spectral Clustering Methods for
Multiplex Networks," arXiv preprint arXiv:1703.05355, 2017.
[11] Z. Yu, Y. Lu, J. Zhang, J. You, H. S. Wong, Y. Wang, et al, "Progressive
Semisupervised Learning of Multiple Classifiers," IEEE Trans.Cybern,
in press.
Minimum energy control for complex
networks
Gustav Lindmark and Claudio Altafini∗
Division of Automatic Control, Dept. of Electrical Engineering,
Linköping University, SE-58183, Linköping, Sweden.
March 12, 2018
Abstract
The aim of this paper is to shed light on the problem of controlling
a complex network with minimal control energy. We show first that
the control energy depends on the time constant of the modes of the
network, and that the closer the eigenvalues are to the imaginary axis
of the complex plane, the less energy is required for complete controllability. In the limit case of networks having all purely imaginary
eigenvalues (e.g. networks of coupled harmonic oscillators), several
constructive algorithms for minimum control energy driver node selection are developed. A general heuristic principle valid for any directed
network is also proposed: the overall cost of controlling a network is
reduced when the controls are concentrated on the nodes with highest
ratio of weighted outdegree vs indegree.
Significance statement Controlling a complex network, i.e., steering the
state variables associated with the nodes of the network from one configuration
to another, can cost a lot of energy. The problem studied in the paper
is how to choose the controls so as to minimize the overall cost of a state
transfer. It turns out that the optimal strategy for minimum energy control
of a complex network consists in placing the control inputs on the nodes that
have the highest skewness in their degree distributions, i.e., the highest ratio
between their weighted outdegree and indegree.
∗Corresponding author: C. Altafini. Email: claudio.altafini@liu.se

1 Introduction
Understanding the basic principles that allow to control a complex network is
a key prerequisite in order to move from a passive observation of its functioning to the active enforcement of a desired behavior. Such an understanding
has grown considerably in recent years. For instance the authors of [22] have
used classical control-theoretical notions like structural controllability to determine a minimal number of driver nodes (i.e., nodes of the network which
must be endowed with control authority) that guarantee controllability of a
network. Several works have explored the topological properties underlying
such notions of controllability [8, 11, 21, 26, 27, 31], or have suggested to
use other alternative controllability conditions [10, 25, 47]. Several of these
approaches are constructive, in the sense that they provide receipts on how
to identify a subset of driver nodes that guarantees controllability. However, as observed for instance in [45], controllability is intrinsically a yes/no
concept that does not take into account the effort needed to control a network. A consequence is that even if a network is controllable with a certain
set of driver nodes, the control energy that those nodes require may result
unrealistically large. Achieving “controllability in practice” i.e., with a limited control effort, is a more difficult task, little understood in terms of the
underlying system dynamics of a network. In addition, in spite of the numerous attempts [4, 7, 20, 25, 28, 29, 37, 38, 41, 45, 46], no clear strategy
has yet emerged for the related problem of selecting the driver nodes so as
to minimize the control energy.
The aim of this paper is to tackle exactly these two issues, namely: i)
to shed light on what are the dynamical properties of a network that make
its controllability costly; and ii) to develop driver node placement strategies
requiring minimum control energy. We show in the paper that for linear dynamics the natural time constants of the modes of the system are key factors
in determining how much energy a control must use. Since the time constants
of a linear system are inversely proportional to the real part of its eigenvalues,
systems that have eigenvalues near the imaginary axis (i.e., nearly oscillatory behavior) are easier to control than systems having fast dynamics (i.e.,
eigenvalues with large real part). For networks of coupled harmonic oscillators, which have purely imaginary eigenvalues, we show that it is possible to
obtain explicit criteria for minimum energy driver node placement. One of
these criteria ranks nodes according to the ratio between weighted outdegree
and weighted indegree. We show that for any given network such a criterion
systematically outperforms a random driver node assignment, even by orders
of magnitude, regardless of the metric used to quantify the control energy.
2 Methods
Reachability vs. controllability to 0. A linear system

    ẋ = Ax + Bu     (1)
is controllable if there exists an input u(t) that transfers the n-dimensional
state vector x(t) from any point xo to any other point xf in Rn . The Kalman
rank condition for controllability, rank([B AB A²B … A^k B]) = n for k sufficiently large, only provides a yes/no answer but does not quantify the cost, in terms of input effort, of such a state transfer. A possible approach to investigate “controllability in practice” consists in quantifying the least energy that a control requires to accomplish the state transfer, i.e., in computing the u(t) mapping xo to xf in a certain time tf while minimizing E(tf) = ∫_0^{tf} ‖u(τ)‖² dτ. For linear systems like (1), a closed form solution to
this problem exists and the resulting cost is

    E(tf) = (xf − e^{A tf} xo)^T Wr^{-1}(tf) (xf − e^{A tf} xo),     (2)

where the matrix Wr(tf) = ∫_0^{tf} e^{Aτ} B B^T e^{A^T τ} dτ is called the reachability Gramian [2]. The control that achieves the state transfer xo → xf with minimal cost can be computed explicitly:

    u(t) = B^T e^{A^T(tf−t)} Wr^{-1}(tf) (xf − e^{A tf} xo),    t ∈ [0, tf].     (3)
Various metrics have been proposed to quantify the difficulty of the state
transfer based on the Gramian, like its minimum eigenvalue λmin (Wr ), its
trace tr(Wr), or the trace of its inverse tr(Wr^{-1}), see [24] and also the SI for a
more detailed description.
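As a concrete illustration, the following Python sketch (the matrices A and B below are our own toy assumptions, not the networks used in the paper) approximates Wr(tf) by a midpoint rule and evaluates the transfer cost (2) together with the three Gramian-based metrics just listed.

    import numpy as np
    from scipy.linalg import expm

    def reachability_gramian(A, B, tf, steps=2000):
        # Midpoint-rule approximation of Wr(tf) = int_0^tf e^{At} B B^T e^{A^T t} dt.
        dt = tf / steps
        W = np.zeros((A.shape[0], A.shape[0]))
        for t in (np.arange(steps) + 0.5) * dt:
            Phi = expm(A * t) @ B
            W += Phi @ Phi.T * dt
        return W

    A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # toy state matrix (assumption)
    B = np.array([[0.0], [1.0]])               # a single driver node
    tf = 5.0
    Wr = reachability_gramian(A, B, tf)

    xo, xf = np.array([1.0, 1.0]), np.array([0.0, 2.0])
    d = xf - expm(A * tf) @ xo
    E = d @ np.linalg.solve(Wr, d)             # minimum transfer energy, eq. (2)

    # The three Gramian-based difficulty metrics discussed in the text:
    print(np.linalg.eigvalsh(Wr).min(), np.trace(Wr), np.trace(np.linalg.inv(Wr)))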
We would like now to describe how (2) depends on the eigenvalues of A. In order to do that, one must observe that (2) is the sum of contributions originating from two distinct problems: 1) controllability-from-0 (or reachability, as it is normally called in control theory [2]) and 2) controllability-to-0. The first problem consists in choosing xo = 0, in which case (2) reduces to Er(tf) = xf^T Wr^{-1}(tf) xf, while in the second xf = 0 leads
to Ec(tf) = xo^T Wc^{-1}(tf) xo, where Wc(tf) = e^{−A^T tf} Wr(tf) e^{−A tf} is a second
Gramian, called the controllability Gramian. The two problems are characterized by different types of difficulties when doing a state transfer, all
related to the stability of the eigenvalues of A. For instance the reachability
problem is difficult along the stable eigendirections of A because the control
has to win the natural decay of the unforced system to 0, while the unstable
eigenvalues help the system escaping from 0 by amplifying any small input
on the unstable eigenspaces, see Fig. 1 for a graphical explanation. The surfaces of Er (tf ) shown in Fig. 1 (a) reflect these qualitative differences. On
the contrary, the influence of the eigenvalues of A is the opposite for the
controllability-to-0 problem shown in Fig. 1 (b). Hence if we want to evaluate the worst-case cost of a transfer between any xo and any xf (a problem sometimes referred to as complete controllability [35]), we have to combine the difficult cases of the two situations just described. This can be done by combining the two Gramians into a “mixed” Gramian Wm, obtained by splitting A into its stable and antistable parts and forming a reachability subGramian for the former and a controllability subGramian for the latter, see the SI for the details. Such a Gramian can be computed in closed form only when the time of the transfer tends to infinity. In the infinite time horizon, in fact, both Wr and Wc diverge, but their inverses are well-posed and depend only on the stable modes (the former) and the unstable modes (the latter). These are the parts constituting the inverse of Wm, see Fig. 1 (c). A finite-horizon version of Wm (and Wm^{-1}) can be deduced from the infinite-horizon ones.
Eigenvalues of random matrices. The so-called circular law states that a matrix A with entries a_ij/√n, where the a_ij are i.i.d. random variables with zero mean and unit variance, has a spectral distribution which converges to the uniform distribution on the unit disk as n → ∞, regardless of the probability distribution from which the a_ij are drawn [1]. A numerical example is shown in Fig. 2(a) (top left panel). By suitably changing the diagonal entries, the unit disk can be shifted horizontally at will, for instance rendering the entire spectrum stable (Fig. 2(a), top mid panel) or antistable (Fig. 2(a), top right panel). A random matrix is typically a full matrix, meaning that the underlying graph of interactions is fully connected. The circular law is however valid also for sparse matrices, for instance for Erdős-Rényi (ER) topologies. If p is the edge probability, then A = (a_ij)/√(p·n) still has eigenvalues distributed uniformly on the unit disk, see Fig. S1(a).
A generalization of the circular law is the elliptic law, in which the unit disk containing the eigenvalues of A is squeezed along one of the two axes. To do so, the pairs of entries {a_ij, a_ji} of A have to be drawn from a bivariate distribution with zero marginal means and a covariance matrix expressing the compression of one of the two axes, see [1]. Various examples of elliptic laws are shown in the lower panels of Fig. 2 (a). Elliptic laws also generalize to sparse matrices, see Fig. S1(a).
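For concreteness, matrices following these laws are easy to sample; in the sketch below the construction A = (X + γXᵀ)/√((1+γ²)n), which gives pair correlation 2γ/(1+γ²), is our own illustrative recipe, not necessarily the one used for the paper's figures.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Circular law: i.i.d. zero-mean unit-variance entries scaled by 1/sqrt(n).
    A_circ = rng.standard_normal((n, n)) / np.sqrt(n)

    # Shifting the diagonal moves the unit disk along the real axis,
    # e.g. making the whole spectrum stable:
    A_stable = A_circ - 1.5 * np.eye(n)

    # Elliptic law: correlate the pairs (a_ij, a_ji); here corr = 2g/(1+g^2).
    g = -0.5
    X = rng.standard_normal((n, n))
    A_ell = (X + g * X.T) / np.sqrt((1.0 + g**2) * n)

    # Sparse (ER) variant with edge probability p, rescaled by sqrt(p*n):
    p = 0.05
    mask = rng.random((n, n)) < p
    A_er = rng.standard_normal((n, n)) * mask / np.sqrt(p * n)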
3 Results
Control energy as a function of the real part of the eigenvalues of A.
In a driver node placement problem, each input affects a single node, hence
the columns of B are elementary vectors, i.e., vectors having one entry equal
to 1 and the rest equal to 0. When A is a random matrix, the underlying
graph is generically fully connected, hence issues like selection of the number
of driver nodes based on the connectivity of the graph become irrelevant.
Having disentangled the problem from topological aspects, the dependence
of the control effort on other factors, like the spectrum of A, becomes
more evident and easier to investigate. If for instance we place driver nodes at
random and use the mixed Gramian Wm to form the various energy measures
mentioned above for quantifying the control effort, then we have the results
shown in Fig. 2(b). As expected, all indicators improve with the number of
inputs. What is more interesting is that when we repeat the computation
for the various spectral distributions of Fig. 2(a), the result is that the cost
of controllability decreases when the (absolute value of the) real part of the
eigenvalues of A decreases. All measures give a unanimous answer on this
dependence, regardless of the number of inputs considered. In particular,
when A has eigenvalues which are very close to the imaginary axis (lower
right panel of Fig. 2(a) and cyan curves in Fig. 2(b)) then the worst-case
controllability direction is easiest to control (i.e., λmin (Wm ) is bigger), but
also the average energy needed for controllability on all directions decreases
(i.e., tr(Wm) increases and tr(Wm^{-1}) decreases).
Recall that in a linear unforced dynamical system the real part of the
eigenvalues of A determines how fast/slow a system converges to the origin (stable eigenvalues, when real part of λ(A) is negative) or diverges to
∞ (unstable eigenvalues, when real part of λ(A) is positive). Such convergence/divergence speed grows with the absolute value of the real part of λ(A).
In the complete controllability problem, both stable and unstable modes of
A are gathered together, and all have to be “dominated” by the controls to
achieve controllability. When the modes of the system are all slow, like when
they are very close to the imaginary axis, then the energy needed to dominate them all is lower than when some of them are fast (i.e., the eigenvalues
have large real part).
An identical result is valid also for sparse matrices. In particular, for ER
graphs with edge probability p = 0.05 and coefficients from a bivariate normal
distribution (yielding elliptic laws as in Fig. S1(a)), the various norms used
to quantify input energy are shown in Fig. S1(b). Their pattern is identical
to the full graph case of Fig. 2(b).
The computations shown in Fig. 2(b) are performed with the infinite-horizon mixed Gramian Wm described in the SI, because such a Wm can be easily computed in closed form. A finite-horizon Wm(tf) can be approximately obtained from it, but the arbitrariness of tf makes it hard to set up an unbiased comparison of the various spectral distributions of A of Fig. 2(a), which are characterized by widely different time constants (inversely correlated to the amplitude of the real part of λ(A)). Observe in Fig. S2 how the various measures of controllability computed with a finite-time Wm(tf) all tend to the infinite-time Wm, but with different speeds.
Driver node placement based on weighted connectivity. In the analysis carried out so far the driver nodes were chosen randomly. A topic that has raised remarkable interest in recent times (and which, to our knowledge, is still open) is devising driver node placement strategies that are efficient in terms of input energy [4, 7, 20, 28, 29, 37, 41, 46]. If we consider as
weighted indegree and outdegree at node i the sum of the weights in absolute value of all incoming or outgoing edges, i.e., w_in(i) = Σ_{j=1}^n |a_ij| and w_out(i) = Σ_{j=1}^n |a_ji| (a normalization factor such as √(p·n) can be neglected), then a strategy that systematically beats random input assignment consists in ranking the nodes according to the ratio rw(i) = w_out(i)/w_in(i) and placing inputs on the nodes with the highest rw. In Fig. 3(a) the λmin(Wm) of this
driver node placement strategy is compared with a random selection. If for
full graphs the improvement is minimal, as the graphs become sparser it
increases, and for ER networks with p = 0.01 the λmin (Wm ) obtained by
controlling nodes with high rw is more than twice that of the random choice
of controls, see Fig. 3(b). As can be seen in Fig. S3, all measures of input energy show a qualitatively similar improvement. When the topology of the network renders the values of rw more extreme, as when directed scale-free (SF) graphs are chosen, with indegree exponent bigger than outdegree exponent [5], see Fig. S4, the improvement in choosing driver nodes with high rw becomes much more substantial, even orders of magnitude bigger than for a random selection, see Fig. 3(b) and Fig. S5 for more details.
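The ranking itself is a one-liner; a sketch with the indexing convention used above (w_in(i) = Σ_j |a_ij| as a row sum, w_out(i) = Σ_j |a_ji| as a column sum) could be:

    import numpy as np

    def drivers_by_rw(A, m):
        # Weighted in/out degrees in absolute value, as defined in the text.
        w_in = np.abs(A).sum(axis=1)
        w_out = np.abs(A).sum(axis=0)
        rw = w_out / np.maximum(w_in, 1e-12)   # guard for isolated nodes (our addition)
        return np.argsort(-rw)[:m]             # m nodes with the highest r_w

    # Example: B = np.eye(n)[:, drivers_by_rw(A, m)] places one input per chosen node.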
What we deduce from such results is that once the technical issues associated with minimal controllability can be neglected, a general criterion for controlling a network with a limited input cost is to drive the nodes having the maximal imbalance between their weighted outdegree and indegree.
Notice that our computation of weighted out/indegrees considers the total
sum of weights in absolute value. When signs are taken into account in
computing win and wout , then no significant improvement over random input
placement is noticeable. This is connected to the quadratic nature of the
Gramian.
It is worth emphasizing that for a dynamical system the concept of driver
node is not intrinsic, but basis-dependent. In fact, just like the idea of
adjacency matrix of a graph is not invariant to a change of basis in state
space, so inputs associated with single nodes (i.e., with single state variables) in the original basis become scattered across all variables in another representation of the same system, see Fig. 4(a). If we take a special basis in which the modes are decoupled (for instance the Jordan basis), then the contribution of the nodes to the modes (i.e., the eigenvectors of A) provides useful information
for the investigation of minimum input energy problems. The topic is closely
related to the so-called participation factors analysis in power networks [30].
Also quantities like win and wout are basis-dependent and become nearly
equal for instance if in (1) we pass to a Jordan basis. On the contrary, the
eigenvalues of A are invariant to a change of basis. Hence as a general rule, the
control energy considerations that are a consequence of the time constants of
the system (like the dependence on the real part of the eigenvalues illustrated
in Fig. 2) are “more intrinsic” than those that follow from the particular basis
representation we have available for a network.
Real part of the eigenvalues and controllability with bounded controls. From what we have seen above, the control energy is least when the
real part of the eigenvalues tends to vanish. In the limit case of all eigenvalues of A being purely imaginary, we recover a known result from control theory affirming that controllability from any xo to any xf can be achieved in finite time by means of control inputs of bounded amplitude. As a matter of fact, an alternative approach used in control theory to take into account the control cost of a state transfer is to impose that the amplitude of the input stays bounded for all times (rather than the total energy), and to seek conditions that guarantee controllability with such bounded controls [6, 15, 18]. Assume u ∈ Ω, with Ω a compact set containing the origin,
for instance Ω = [−1, 1]m , where m is the number of control inputs. The
constraint u ∈ Ω guarantees that we are using at all times a control which
has an energy compatible with the physical constraints of our network. The
consequence is, however, that reaching any point in Rn may require a longer
time, or become unfeasible. In particular a necessary and sufficient condition
for any point to be reachable from 0 in finite time when u ∈ Ω is that no
eigenvalue of A has a negative real part, see SI. This is clearly connected with
our previous considerations on the reachability problem without bounds on
u: when all modes of A are unstable then the input energy required to reach
any state from 0 is low (Fig. 1(a)) and becomes negligible for sufficiently
long time horizons. On the contrary, transferring any state to 0 in finite
time with u ∈ Ω is possible if and only if no eigenvalue of A has a positive
real part. Also in this case the extra constraint on the input amplitude reflects the qualitative reasoning stated above and shown in Fig. 1(b). Also
in the bounded control case, considering a generic transfer from any state
xo to any other state xf means combining the two scenarios just described:
formally a system is completely controllable from any xo to any xf in finite
time and with bounded control amplitude u ∈ Ω if and only if all eigenvalues
of A have zero real part, see SI for the details. The findings discussed above
for u unbounded are completely coherent with this alternative approach to
“practical controllability”.
Systems with purely imaginary eigenvalues: the case of coupled
harmonic oscillators. A special case of linear system with purely imaginary eigenvalues is a network of n coupled harmonic oscillators, represented
by a system of second order ODEs
    M q̈ + Kq = Bu     (4)
where M = M T > 0 is the inertia matrix, K = K T > 0 is the stiffness
matrix, typically of the form K = Kd + L, with Kd > 0 diagonal and L a Laplacian matrix representing the couplings. In (4) the controls are forces, and the input matrix B contains elementary vectors corresponding to the controlled nodes. The state space representation of (4) is
    ẋ = Ao x + Bo u     (5)

with

    x = [ Mq ; M q̇ ] ∈ R^{2n},    Ao = [ 0  I ; −KM^{-1}  0 ],    Bo = [ 0 ; B ].
The system (5) has purely imaginary eigenvalues equal to ±iωj, j = 1, …, n, where the ωj are the natural frequencies of the oscillators. If Ω² = diag(ω1², …, ωn²) and Ψ = [ψ¹ … ψⁿ] is the matrix of corresponding eigenvectors, then in the so-called modal basis the oscillators are decoupled and one gets the state space representation

    ż = A1 z + B1 u = [ 0  I ; −Ω²  0 ] z + [ 0 ; Ψ^T M^{-1} B ] u,     (6)

where z = [ Ψ^{-1}  0 ; 0  Ψ^{-1} ] x. See the SI for the details. When a system has purely imaginary eigenvalues, the finite time Gramian diverges as tf → ∞.
However, in the modal basis (6) the Gramian is diagonally dominant and
linear in tf , hence as tf grows it can be approximated by a diagonal matrix
which can be computed explicitly [3]:

    Wz(tf) ≈ Σ_{j=1}^n (βj / 2Mj²) diag( (ψj¹)²/ω1², …, (ψjⁿ)²/ωn², (ψj¹)², …, (ψjⁿ)² ) tf,     (7)
where βj = 1 if the j-th input is present and 0 otherwise. Using (7), the three measures of control energy adopted in this paper give rise to simple strategies for minimum energy driver node placement, which in some cases can be computed exactly for any n (for instance for the metric tr(Wz), see the SI). Fig. 4 shows that such strategies always beat a random driver node placement, often by orders of magnitude.
The ratio wout/win is still a good heuristic for a driver node placement strategy. This can be understood by observing that the model (4) is symmetric, hence its in- and out-degrees are identical. However, since Ao has rows rescaled by M^{-1}, wout is affected directly: when the inertia Mii is big, the corresponding wout(i) = Σ_{j=1}^n Kji/Mii is small, and vice versa. No specific effect is instead induced on win. In the representation (5), selecting nodes according to wout/win means placing control inputs on the lighter masses, see Fig. 4 (d). When the harmonic oscillators are decoupled (L = 0), then m < n means controllability is lost, but nevertheless the least control energy of the m inputs is indeed obtained when driving the oscillators of least inertia. A weak (and sparse) coupling allows one to recover controllability, while least inertia as the optimal driver node strategy becomes suboptimal. When the coupling becomes
stronger (for instance when the coupling graph is more connected) then the
inertia of an oscillator is less significant as a criterion for selection of driver
nodes: the modes of the system are now spread throughout the network and
no longer localized on the individual nodes. As shown in Fig. S6, in a fully
connected network of harmonic oscillators, driver node strategies based on
wout /win and on tr(Wz ) tend to perform considerably worse for the other
measures (λmin(Wz) and tr(Wz^{-1})), while for a sparse graph (here ER graphs
with p = 0.05), of the three explicit optimal driver node placement strategies
available in this case, tr(Wz ) has a high overlap with wout /win , see Fig. 4
(b), while the other two tend to rank controls in somewhat different ways.
Given that in this case we have three strategies that are (near) optimal for
the chosen measure of control energy, the dissimilarity of the node rankings
of these three strategies means that the driver node placement problem is
heavily dependent on the way control energy is quantified.
Controlling vibrations of polyatomic molecules. Coupled harmonic oscillators are used in several applied contexts, for instance in the active control of mechanical vibrations [12, 19] or in that of flexible multi-degree-of-freedom structures (aircraft, spacecraft, see [17]), where our controllability results can be applied straightforwardly (and compared with analogous/alternative approaches described for instance in [3, 14, 17, 19, 32, 39, 42]). Another context in which they are used is in controlling molecular vibrations of polyatomic molecules [43, 48]. The assumption of harmonicity is valid in the
regime of small oscillations near equilibrium, in which the potential energy
is approximated well by the parabola V(q) = ½ q^T Kq (here q is the quantum
expectation value of the displacement from the equilibrium position and only
vibrational degrees of freedom are considered¹). The node-driving setting described by (5) corresponds here to controlling the vibrations of single bonds,
as in mode-selective chemistry [9, 36, 48]. In the modal basis (6), the free
evolution of the modes (i.e., A1 ) is decoupled, but not the input matrix B1 ,
see Fig. 4(a). The B1 terms in (6) specify how the energy exciting a specific
vibrational bond propagates through the entire molecule and affects all its
modal frequencies. The methods developed above for quantifying the control
energy are applicable also to this context. In particular the Gramian can be
used to estimate the energy spreading of a monochromatic input field: for
the j-th bond it is proportional to the j-th row of Ψ (i.e., it consists of the
j-th components of all eigenvectors ψ¹, …, ψⁿ).
The basic principle we have adopted so far (minimize the input energy,
here the fluence of the pumping field) implicitly favours the selection of inputs
having a broad “coverage” in the modal basis, or, said otherwise, favours the
intramolecular spreading of vibrational energy to the entire molecule. This
is clearly not the main task of selective-mode laser chemistry, which on the
contrary aims at keeping as much energy as possible concentrated on specific
bonds or modes. Given that the power of the laser field is not a limiting
factor, a control problem that can be formulated using the tools developed in
this paper is for instance to choose m monochromatic excitations selectively
tuned on the stiffness constants of m bonds (i.e. for us certain columns
of Bo ) so as i) to guarantee controllability; ii) to maximize the power at
a certain mode ωk (representing for example a conformational change that
one wants to achieve in the molecule [9]). For the diagonal Gramian (7), this amounts to choosing the indexes j1, …, jm ∈ {1, …, n} such that Σ_{ℓ=1}^m (ψ_{jℓ}^k)² is maximized, a problem which can be solved exactly once Ψ is known. Notice
that even when energy spreads through the bonds because of the couplings,
it is in principle possible to “refocus” it towards a single bond using the
dynamics of the system. In the linear regime, a formula like (3) can be used
to compute explicitly the controls needed to refocus it on a specific bond,
¹For a molecule of n atoms the number of independent degrees of freedom is 3n − 5, i.e., 3 for each atom, minus the coordinates of the center of mass. All masses and stiffness constants should be rescaled relative to it.
corresponding for instance to qf having a single non-zero component. This
does not require solving an optimal control problem, as proposed for instance in [33, 34].
Minimum energy control of power grids. In the linear regime, power
grids can be modeled as networks of weakly damped coupled harmonic oscillators [13]. The so-called swing equation corresponds in fact to the following
network of damped and coupled harmonic oscillators
    M q̈ + Dq̇ + Kq = Bu,     (8)

where D is the damping matrix, which we assume to be proportional, that is, such that in the modal basis D1 = Ψ^T M^{-1} D M^{-1} Ψ is diagonal. In the state space representation (5), one gets then

    ẋ = [ 0  I ; −KM^{-1}  −DM^{-1} ] x + [ 0 ; B ] u,     (9)

while in the modal basis

    ż = [ 0  I ; −Ω²  −D1 ] z + [ 0 ; Ψ^T M^{-1} B ] u.     (10)
For weak damping, the driver node selection strategies illustrated above can
be applied to the model (9) and so can the method based on wout /win . We
have investigated the minimum energy control of several power grids listed in
Table S1, varying the dampings across several orders of magnitude, see Fig. 5
(a). As expected, for all of them the energy required to achieve controllability
increases as the real part of the eigenvalues moves away from the imaginary
axis, see Fig. 5 (b) and Figs. S7-S10. All strategies still beat a random driver
node placement, even those based on the Gramian (7), formally valid only
for undamped dynamics.
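A minimal sketch of assembling the state matrix of (9) from given M, D, K (all assumed available, e.g. from the Table S1 topologies with sampled masses and dampings) and inspecting how damping moves the spectrum:

    import numpy as np

    def swing_state_matrix(M, D, K):
        # State matrix of (9), with state x = [M q; M dq/dt].
        n = M.shape[0]
        Minv = np.linalg.inv(M)
        return np.block([[np.zeros((n, n)), np.eye(n)],
                         [-K @ Minv,        -D @ Minv]])

    # Distance of the spectrum from the imaginary axis grows with the damping:
    # print(np.abs(np.linalg.eigvals(swing_state_matrix(M, D, K)).real).max())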
SUPPLEMENTARY MATERIAL

4 Methods

4.1 Control energy: finite-time horizon formulation
Consider a linear system
    ẋ = Ax + Bu     (S11)
where x ∈ Rn is the state vector, A ∈ Rn×n is the state update matrix,
B ∈ Rn×m is the input matrix, and u is the m-dimensional input vector. The
reachable set of (S11) in time tf from xo is the set
    Rtf(xo) = {x ∈ Rn s.t. ∃ u : [0, tf] → Ω s.t. φ(tf, u, xo) = x},
where φ(t, u, xo ) is the solution of (S11) at time t with input u and Ω is the
admissible set of control inputs, here Ω = Rm .
The system (S11) is reachable (or controllable from the origin [2]) in time
tf if any xf ∈ Rn can be reached from 0 by some control u ∈ Ω in time tf ,
i.e., if Rtf(0) = Rn. It is said to be controllable to the origin if any xo ∈ Rn can
be brought to 0 by some control u ∈ Ω in time tf . The system (S11) is said
completely controllable in time tf if Rtf (xo ) = Rn for any xo ∈ Rn .
Finite-time Gramians. The time-tf reachability (or controllability from 0) Gramian is the symmetric matrix

    Wr(tf) = ∫_0^{tf} e^{Aτ} B B^T e^{A^T τ} dτ,     (S12)

while the time-tf controllability to 0 Gramian (normally called the controllability Gramian, [2]) is

    Wc(tf) = ∫_0^{tf} e^{−Aτ} B B^T e^{−A^T τ} dτ.     (S13)
The two Gramians are positive definite whenever (A, B) is controllable, and are related by

    Wr(tf) = e^{A tf} Wc(tf) e^{A^T tf},

and similarly, for their inverses,

    Wr^{-1}(tf) = e^{−A^T tf} Wc^{-1}(tf) e^{−A tf}.
Finite-time control energy for state transfer. The transfer of the state
from any xo to any other xf in time tf can be accomplished by many controls. In order to quantify how costly a state transfer is for a system, one
can choose to consider the control input that minimizes the input energy,
i.e., the functional
    E(tf) = ∫_0^{tf} ‖u(τ)‖² dτ.     (S14)

Such a control can be computed explicitly [2] as

    u(t) = B^T e^{A^T(tf−t)} Wr^{-1}(tf) (xf − e^{A tf} xo),    t ∈ [0, tf],     (S15)

and the corresponding transfer cost is

    E(tf) = (xf − e^{A tf} xo)^T Wr^{-1}(tf) (xf − e^{A tf} xo).     (S16)
The various metrics that have been proposed in the recent and older literature to quantify the energy needed for state transfer between any two states xo and xf are in fact all based on the Gramian [24]:

1. λmin(Wr) = 1/λmax(Wr^{-1}): the minimum eigenvalue of the Gramian (equal to the inverse of the maximum eigenvalue of Wr^{-1}) is a worst-case metric, estimating the energy required to move along the direction which is most difficult to control.

2. tr(Wr): the trace of the Gramian is inversely proportional to the average energy required to control a system.

3. tr(Wr^{-1}): the trace of the inverse of the Gramian is proportional to the average energy needed to control the system.
Minimizing the control energy means maximizing the first and second
measure or minimizing the third. To be more precise about how the control energy is formed, we have to split the state transfer energy (S16) into subtasks:

1. xo = 0 (reachability problem) ⟹ Er(tf) = xf^T Wr^{-1}(tf) xf;

2. xf = 0 (controllability to 0 problem) ⟹ Ec(tf) = xo^T e^{A^T tf} Wr^{-1}(tf) e^{A tf} xo = xo^T Wc^{-1}(tf) xo.

Both Wr(tf) and Wc(tf) enter into the cost function: to quantify the amount of control energy of these problems we need to compute the inverses of Wr(tf) and Wc(tf).
Let us look at how the stability/instability of the eigenvalues influences
the two costs Er (tf ) and Ec (tf ).
• If A is stable (i.e., Re[λ(A)] < 0), then “escaping from 0” (i.e., the
reachability problem) requires more energy than transferring to 0 (i.e.,
the controllability to 0 problem) because the modes of A naturally tend
to converge to 0.
• If A is antistable (i.e., Re[λ(A)] > 0, −A is stable), then the opposite
considerations are valid: the modes of A tend to amplify the magnitude
of the state, simplifying the reachability problem but complicating the
controllability to 0 problem.
• If A has eigenvalues with both negative and positive real part, the two
situations coexist.
Hence computing a plausible measure of control energy for a generic state
transfer xo → xf requires taking into account the “difficult” directions of
both cases.
4.2 Control energy: infinite-time horizon formulation
When tf → ∞, E(tf) converges (or diverges) to the quantity

    E = ∫_0^∞ ‖u(τ)‖² dτ,     (S17)

and so do Er(tf) and Ec(tf).
When tf → ∞, both Gramians become infinite-time integrals, which may be convergent or divergent, depending on the modes of A. If A is stable, then

    Wr = ∫_0^∞ e^{Aτ} B B^T e^{A^T τ} dτ     (S18)

exists finite and is positive definite if (A, B) is controllable. If instead A is antistable, then it is

    Wc = ∫_0^∞ e^{−Aτ} B B^T e^{−A^T τ} dτ     (S19)

that exists finite and is positive definite when (A, B) is controllable. In the mixed-eigenvalue case the two expressions (S18) and (S19) both diverge.
Controllability to 0 in the infinite-time horizon. Let us observe what
happens for instance to the controllability to 0 problem according to the
eigenvalues of A.
• If A is stable, then with u = 0, lim_{t→∞} x(t) = 0 for all xo, meaning that the controllability to 0 problem can be solved with zero energy, Ec = 0. Furthermore, the integral (S12) converges to (S18), whose value can be computed by solving the following Lyapunov equation:

    A Wr + Wr A^T + B B^T = 0.     (S20)

Such a solution always exists, and Wr > 0 (positive definite) if the pair (A, B) is controllable.
• When instead A is antistable, the integral (S12) diverges as tf → ∞. Hence the solution with u = 0 is no longer feasible, as all modes are unstable (and diverge as soon as xo ≠ 0), meaning that to find a minimizer of (S17) we have to proceed in some other way. Since (A, B) is controllable, we can determine u(t) as if we were computing a stabilizing feedback law, i.e., expressing u(t) as a function of the state x(t) so that the resulting closed loop system converges to 0 asymptotically. Such a feedback law can be computed by solving for P an algebraic Riccati equation (ARE)

    P(−A) + (−A^T)P + P B B^T P = 0.     (S21)

Such an ARE admits a positive definite solution P, which can in turn be computed by solving for L the Lyapunov equation (in −A, which is stable, hence a solution L > 0 always exists)

    (−A)L + L(−A^T) + B B^T = 0,     (S22)

and then setting P = L^{-1}. It can be verified directly that the controllability Gramian Wc in (S19) is one such solution L. Correspondingly we obtain P = Wc^{-1}. From the theory of linear-quadratic regulators (in particular [40], Ch. 10), the feedback controller

    u = −B^T P x(t)     (S23)

guarantees stability of the closed-loop system ẋ = (A − B B^T P)x, i.e., A − B B^T P is a stable matrix. The feedback law (S23) also minimizes the input energy (S17), which is equal to Ec = xo^T Wc^{-1} xo. (A numerical sketch of both pure cases is given after this list.)
• When A has eigenvalues with both positive and negative real part and no purely imaginary eigenvalues, then the two situations described above occur simultaneously. Assume A is split into two diagonal blocks, one consisting only of eigenvalues with negative real part and the second only of eigenvalues with positive real part. This can always be achieved through a change of basis [49]. Split B and x(t) accordingly:

    x = [ x1 ; x2 ],   A = [ A1  0 ; 0  A2 ],   B = [ B1 ; B2 ],   Re[λ(A1)] < 0,  Re[λ(A2)] > 0.     (S24)

In the infinite time horizon, the u = 0 control steers optimally the x1 subvector, while for the x2 part a feedback controller provides the energy-minimizing solution. From (S21) we obtain that the ARE has solution

    P = [ 0  0 ; 0  P2 ],

where P2 solves the ARE for the (A2, B2) subsystem. Hence the control input

    u = −B^T P x(t) = −B^T [ 0  0 ; 0  W_{2,c}^{-1} ] x(t)

achieves a transfer to the origin with minimal energy cost equal to Ec = xo^T P xo = x_{2,o}^T W_{2,c}^{-1} x_{2,o}. Furthermore, combining (S20) and (S22), we have that for (S24) the following two Lyapunov equations must hold simultaneously:

    A1 W_{1,r} + W_{1,r} A1^T + B1 B1^T = 0     (S25a)
    (−A2) W_{2,c} + W_{2,c} (−A2^T) + B2 B2^T = 0     (S25b)
Reachability in the infinite-time horizon. Let us now consider the
reachability problem (i.e., controllability from 0). Now the roles of stable
and unstable eigenvalues are exchanged.
• If A is stable, reachability requires an active control (here a destabilizing state feedback law) in order to steer x(t) out of the origin. The energy-optimal solution consists in choosing u = −B^T P x(t) with P > 0 a solution of the ARE

    P A + A^T P + P B B^T P = 0     (S26)

or, equivalently, P = K^{-1} with K a solution of the Lyapunov equation

    A K + K A^T + B B^T = 0.     (S27)

A is stable, hence K > 0 solving (S27) and P > 0 solving (S26) always exist. The resulting closed loop matrix A − B B^T P must be antistable. From (S20) and (S27) it can also be seen that K = Wr, the reachability Gramian.

• If A is antistable, then u = 0 is the minimal energy controller (an infinitesimal amount of energy at t = 0 is enough to “kick” the system towards the right direction xf when initialized in xo = 0; this amount of energy is negligible in the infinite time horizon considered here). Since −A is stable, a Lyapunov equation like (S22) holds, with solution L = Wc.
• When A has eigenvalues with both positive and negative real part and no purely imaginary eigenvalues, then a decomposition like (S24) can be obtained through a change of basis. The complete ARE has now solution

    P = [ P1  0 ; 0  0 ] = [ W_{1,r}^{-1}  0 ; 0  0 ],

and the controller achieving the transfer with minimal energy is

    u = −B^T P x(t) = −B^T [ W_{1,r}^{-1}  0 ; 0  0 ] x(t),

for an amount of energy equal to

    Er = x_{1,f}^T W_{1,r}^{-1} x_{1,f}.

The decomposition (S24) also in this case induces a pair of Lyapunov equations identical to (S25).
4.3 Mixed Gramian in infinite and finite time horizon

In order to assemble the considerations of the previous sections, it is useful to introduce a third Gramian, which we call the mixed Gramian Wm, and which gathers the directions that are difficult to control in both the reachability and the controllability to 0 problems.
Infinite-time horizon mixed Gramian. Assume that the spectrum of A contains k, 0 ≤ k ≤ n, eigenvalues with negative real part, and n − k eigenvalues with positive real part (and no purely imaginary eigenvalues). Then, as already mentioned above, there exists a change of basis V bringing A into the form (S24):

    [ Ā1  0 ; 0  Ā2 ] = V A V^{-1}     (S28)

and, correspondingly,

    [ B̄1 ; B̄2 ] = V B,

with Re[λ(Ā1)] < 0 and Re[λ(Ā2)] > 0. In the new basis, the two Lyapunov equations (S25) hold, which can be rewritten as

    [ Ā1  0 ; 0  −Ā2 ] W̄ + W̄ [ Ā1  0 ; 0  −Ā2 ]^T + [ B̄1 B̄1^T  0 ; 0  B̄2 B̄2^T ] = 0     (S29)

with

    W̄m = [ W̄_{1,r}  0 ; 0  W̄_{2,c} ]

the mixed Gramian. Following [49], the expression of the mixed Gramian in the original basis is Wm = V^{-1} W̄m V^{-T}. By construction, the mixed Gramian matrix Wm always exists and summarizes the infinite-horizon contribution of the stable eigenvalues to the reachability problem and of the unstable eigenvalues to the controllability to 0 problem.
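A possible numerical route to Wm (our own implementation choice, assuming A has both stable and antistable eigenvalues and none on the imaginary axis) is to sort a real Schur form, remove the off-diagonal coupling with a Sylvester equation, and then solve the two Lyapunov equations (S25):

    import numpy as np
    from scipy.linalg import schur, solve_sylvester, solve_continuous_lyapunov

    def mixed_gramian(A, B):
        n = A.shape[0]
        T, Z, k = schur(A, sort='lhp')          # T = Z^T A Z, stable block leading
        A1, A12, A2 = T[:k, :k], T[:k, k:], T[k:, k:]
        X = solve_sylvester(A1, -A2, -A12)      # A1 X - X A2 + A12 = 0
        S = np.eye(n); S[:k, k:] = X            # S^{-1} T S = blkdiag(A1, A2)
        V = np.linalg.solve(S, Z.T)             # V A V^{-1} is block diagonal, cf. (S28)
        Bb = V @ B
        W1 = solve_continuous_lyapunov(A1, -Bb[:k] @ Bb[:k].T)    # (S25a)
        W2 = solve_continuous_lyapunov(-A2, -Bb[k:] @ Bb[k:].T)   # (S25b)
        Wm_bar = np.zeros((n, n)); Wm_bar[:k, :k] = W1; Wm_bar[k:, k:] = W2
        Vinv = np.linalg.inv(V)
        return Vinv @ Wm_bar @ Vinv.T           # Wm = V^{-1} Wm_bar V^{-T}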
Finite-time horizon mixed Gramian. Using the insight given by the previous arguments, it is possible to construct also a finite-time mixed Gramian, which weights only the modes that are difficult to control in the two state transfer problems. In the basis (S28) in which A is split into stable and antistable diagonal blocks, this is given by

    W̄m(tf) = [ W̄_{1,r}(tf)  0 ; 0  W̄_{2,c}(tf) ],

where W̄_{1,r}(tf) and W̄_{2,c}(tf) are the equivalent of (S12) and (S13) for the two subsystems (Ā1, B̄1) and (Ā2, B̄2). An equation like (S29) has no coupling terms between the two subsystems (i.e., terms of the form B̄1 B̄2^T). These terms disappear asymptotically, but transiently they give a contribution, hence the finite-time formulation of W̄m(tf) is only an approximation. In the original basis, Wm(tf) = V^{-1} W̄m(tf) V^{-T}, and the input energy is

    Em(tf) = [ x_{1,f}^T  x_{2,o}^T ] Wm^{-1}(tf) [ x_{1,f} ; x_{2,o} ].
Clearly a proxy for this quantity is obtained by simply flipping the sign of the real part of the unstable eigenvalues of A and considering only the reachability problem on the resulting stable system, or by shifting the eigenvalues of A by adding a diagonal term (see Fig. 2). Equivalently, all eigenvalues can be made unstable, and the controllability to 0 problem considered.
4.4 Controllability with bounded controls
Consider the system (S11). Assume u ∈ Ω, where Ω is a compact set of Rm
containing the origin in its interior. Assume (A, B) is controllable. Then we
have the following, see [18, 6] and [35], p. 122.
• A necessary and sufficient condition for the origin to be steered to any
point of Rn in finite time (i.e., the reachability problem) is that no
eigenvalue of A has negative real part.
• A necessary and sufficient condition for any point of Rn to be steered
to the origin in finite time (i.e., the controllability to 0 problem) is that
no eigenvalue of A has positive real part.
Combining the two:
• A necessary and sufficient condition for complete controllability (from
any point xo to any point xf ) in finite time is that all eigenvalues have
zero real part.
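These spectral conditions are immediate to test numerically; a tiny sketch (assuming (A, B) controllable, with a numerical tolerance of our own choosing):

    import numpy as np

    def bounded_control_feasibility(A, tol=1e-9):
        re = np.linalg.eigvals(A).real
        return {"reachable from 0": bool((re >= -tol).all()),
                "controllable to 0": bool((re <= tol).all()),
                "completely controllable": bool((np.abs(re) <= tol).all())}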
4.5 Control of coupled harmonic oscillators
A network of n coupled harmonic oscillators can be written as a system of
second order differential equations
    Mi q̈i + (ki + Σ_{j=1}^n kij) qi − Σ_{j=1}^n kij qj = βi ui,    i = 1, …, n,     (S30)
where Mi > 0 is the mass of the i-th oscillator, ki > 0 its stiffness, kij > 0
the coupling stiffness between the i-th and j-th oscillators, and βi ∈ {0, 1}
indicates the presence or absence of a forcing term in the i-th oscillator. In
matrix form, (S30) can be rewritten as
    M q̈ + Kq = Bu,     (S31)
where M = M T = diag(Mi ) > 0 is the mass matrix, K = K T > 0 the
stiffness matrix, and B is a n × m matrix whose columns are the elementary
vectors corresponding to the βi = 1. When u = 0, the solutions of (S31) have
the form q = φ e^{iωt} corresponding to the pairs (ωj, φ^j), j = 1, …, n, that are the solutions of the generalized eigenvalue/eigenvector equation
    (−ω² M + K) φ = 0.     (S32)
The ωj are called the natural frequencies of (S31). Denote by Φ = [φ¹ … φⁿ] the matrix of eigenvectors. Φ can be used to pass to a so-called modal basis,
in which the oscillators are decoupled. In fact, it can be verified directly that under the change of basis q1 = Φ^{-1} q, M1 = Φ^T M Φ and K1 = Φ^T K Φ are both diagonal matrices, hence

    M1 q̈1 + K1 q1 = Φ^T B u

has decoupled dynamics (but coupled inputs).
The state space representation of (S31) is 2n dimensional. If

    x = [ M  0 ; 0  M ] [ q ; q̇ ] = [ Mq ; M q̇ ],     (S33)

then

    ẋ = Ao x + Bo u = [ 0  I ; −KM^{-1}  0 ] x + [ 0 ; B ] u.     (S34)
In terms of the state space model (S34), the eigenvalues are λj = ±iωj, j = 1, …, n, with eigenvectors

    vj = [ ψ^j ; ωj ψ^j ],

where ψ^j = M φ^j, from which the purely oscillatory nature of Ao is evident.
If we denote

    Ω² = diag(ω1², ω2², …, ωn²),

then, from (S32), M^{-1} K Φ = Φ Ω², which implies

    Ω² = Φ^{-1} M^{-1} K Φ = Φ^{-1} M^{-1} Φ^{-T} Φ^T K Φ = M1^{-1} K1.
If Ψ = M Φ, the state space representation in the modal basis

    z = T x = [ Ψ^{-1}  0 ; 0  Ψ^{-1} ] x

is given by

    ż = A1 z + B1 u = [ 0  I ; −Ω²  0 ] z + [ 0 ; Ψ^{-1} B ] u.     (S35)
From M1 = Ψ^T M^{-1} Ψ, one gets Ψ^{-1} B = M1^{-1} Ψ^T M^{-1} B.
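A sketch of the modal construction with SciPy (eigh solves the generalized problem K φ = ω² M φ of (S32) directly; with its normalization Φ^T M Φ = I, one gets M1 = I):

    import numpy as np
    from scipy.linalg import eigh

    def modal_data(M, K):
        w2, Phi = eigh(K, M)        # natural frequencies squared, eigenvectors of (S32)
        Psi = M @ Phi               # psi^j = M phi^j
        M1 = Phi.T @ M @ Phi        # identity under SciPy's normalization
        K1 = Phi.T @ K @ Phi        # diagonal, with M1^{-1} K1 = Omega^2
        return np.sqrt(w2), Phi, Psi, M1, K1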
A key advantage of the modal representation is that the Gramian of the pair (A1, B1) can be computed explicitly. As a matter of fact, when the eigenvalues are on the imaginary axis, the integral (S12) (or (S13)) diverges, and hence the infinite-time Gramian cannot be computed. However, in the modal basis z, Wz(tf) (or more precisely Wz,r(tf)) is diagonally dominant, and for tf sufficiently long it can be approximated by its diagonal terms. These terms are computed explicitly in [3]:

    (Wz(tf))_{jj} = (M1^{-1} Ψ^T M^{-1} B B^T M^{-1} Ψ M1^{-T})_{jj} tf / (2ωj²)    for 1 ≤ j ≤ n,
    (Wz(tf))_{jj} = (M1^{-1} Ψ^T M^{-1} B B^T M^{-1} Ψ M1^{-T})_{(j−n)(j−n)} tf / 2    for n + 1 ≤ j ≤ 2n.
If we assume that the mass matrix M is diagonal, then it is always possible to choose Ψ so that M1 = I and K1 is diagonal, by suitably rescaling the eigenvectors ψ^j. In this case

    B1 B1^T = [ 0 ; Ψ^T M^{-1} B ] [ 0  B^T M^{-1} Ψ ] = [ 0  0 ; 0  Ψ^T M^{-1} B B^T M^{-1} Ψ ],

and the Gramian is determined by the lower diagonal block. When the columns of B are elementary vectors, as in our case, the product Ψ^T M^{-1} B B^T M^{-1} Ψ
can be written explicitly as a sum of rank-1 matrices:

    Ψ^T M^{-1} B B^T M^{-1} Ψ = Σ_{j=1}^n (βj / Mj²) [ ψj¹ ; … ; ψjⁿ ] [ ψj¹ … ψjⁿ ]

(only m of the n factors βj ∈ {0, 1} are nonzero), and its diagonal entries are

    diag(Ψ^T M^{-1} B B^T M^{-1} Ψ) = Σ_{j=1}^n (βj / Mj²) diag( (ψj¹)², …, (ψjⁿ)² ).     (S36)
Hence the expression for the Gramian in the modal basis is

    Wz(tf) ≈ diag( Σ_{j=1}^n βj (ψj¹)²/(2Mj²ω1²), …, Σ_{j=1}^n βj (ψjⁿ)²/(2Mj²ωn²), Σ_{j=1}^n βj (ψj¹)²/(2Mj²), …, Σ_{j=1}^n βj (ψjⁿ)²/(2Mj²) ) tf.

Notice the linearity in tf, meaning that all components diverge to ∞ with the same speed when tf → ∞. This expression can be used to compute the various measures of control energy we have adopted in the paper, and hence to optimize the driver node placement problem. For instance, selecting
inputs according to λmin(Wz) amounts to solving the following MILP max-min problem:

    max_{βj} min_i Σ_{j=1}^n (βj / Mj²) (ψj^i)²
    subject to Σ_{j=1}^n βj = m,  βj ∈ {0, 1},

which can be solved exactly only for systems of moderate size. However, efficient heuristics can be derived for it, such as Algorithm 1.
Algorithm 1 Driver node placement that maximizes λmin(Wz).

Input:
    y_i = (1/Mi²) [ (ψi¹)²/(2ω1²)  …  (ψiⁿ)²/(2ωn²)  (ψi¹)²/2  …  (ψiⁿ)²/2 ],   i = 1, …, n

1. Choose î = argmax_i (min_k y_k^i)
   • y_s = y_î
   • I = {1, 2, …, n} \ { î }
   • O = { î }

2. For c = 2, 3, …, m
   • compute ĵ = argmax_{j∈I} (min_k (y_s + y_j)_k)
   • y_s = y_s + y_ĵ
   • I = I \ { ĵ }
   • O = O ∪ { ĵ }

Output: O
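A direct Python transcription of this greedy heuristic (the array Y stacks the 2n-dimensional vectors y_i as rows) might read:

    import numpy as np

    def greedy_maxmin(Y, m):
        # Greedily pick m rows of Y whose sum has the largest minimum entry.
        chosen = [int(np.argmax(Y.min(axis=1)))]          # step 1
        y_s = Y[chosen[0]].copy()
        remaining = set(range(Y.shape[0])) - set(chosen)
        for _ in range(m - 1):                            # step 2
            j_hat = max(remaining, key=lambda j: (y_s + Y[j]).min())
            y_s += Y[j_hat]
            remaining.discard(j_hat)
            chosen.append(j_hat)
        return chosen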
If instead we choose to maximize tr(Wz), then we get

    max_{βj} Σ_{i=1}^n Σ_{j=1}^n βj (ψj^i)² / (2Mj²(1 + ωi²))
    subject to Σ_{j=1}^n βj = m,  βj ∈ {0, 1}.

Since tr(Wz) is linear in the βj, this is a linear optimization problem, hence solvable exactly and efficiently for any n. Finally, also for the minimization of tr(Wz^{-1}),

    min_{βj} tr(Wz^{-1})
    subject to Σ_{j=1}^n βj = m,  βj ∈ {0, 1},

an efficient heuristic can be set up, as outlined in Algorithm 2.
Algorithm 2 Driver node placement that minimizes tr(Wz^{-1}).

Input:
    y_i = (1/Mi²) [ (ψi¹)²/(2(1+ω1²))  …  (ψiⁿ)²/(2(1+ωn²)) ],   i = 1, …, n

1. Choose î = argmin_i Σ_{k=1}^n 1/y_k^i
   • y_s = y_î
   • I = {1, 2, …, n} \ { î }
   • O = { î }

2. For c = 2, 3, …, m
   • compute ĵ = argmin_{j∈I} Σ_{k=1}^n 1/(y_s + y_j)_k
   • y_s = y_s + y_ĵ
   • I = I \ { ĵ }
   • O = O ∪ { ĵ }

Output: O
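By contrast, the tr(Wz) problem needs no greedy search: since the objective is linear in the βj, the optimum is simply the m nodes with the largest scores. A sketch (Psi, Mdiag and omega2 as in the modal construction above; Algorithm 2 can be transcribed exactly like Algorithm 1, with the argmin of Σ_k 1/(·)_k as selection rule):

    import numpy as np

    def best_drivers_trace(Psi, Mdiag, omega2, m):
        # score_j = sum_i (psi_j^i)^2 / (2 Mj^2 (1 + omega_i^2)), cf. the objective above
        scores = (Psi**2 / (2.0 * (1.0 + omega2))).sum(axis=1) / Mdiag**2
        return np.argsort(-scores)[:m]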
Looking at an expression like (S36), it is possible to understand what kind of behavior yields good controllability properties for certain driver nodes. For instance, when measuring according to λmin(Wz), from (S36) the columns of Ψ^T express how nodes in the original basis are spread among the state variables in the modal basis. What is needed is an eigenbasis such that the j-th component of all eigenvectors has “support” on all the directions of the state space, i.e., all ψj¹, …, ψjⁿ are nonvanishing and possibly all as large as possible. Since Wz is approximated well by a diagonal matrix, the “coverage” effect of control inputs is additive, hence choosing a pair of controls i and j for which the sum of {ψi^k}_{k=1,…,n} and {ψj^k}_{k=1,…,n} has all components as large as possible guarantees an improvement in the control cost with respect to taking only one of the two inputs.

Notice that although M and K are both symmetric, KM^{-1} need not be, hence Ψ need not be an orthogonal matrix. It can be rendered orthogonal if a slightly different modal basis is chosen, see [12] for the details. In that case Ψ^T = Ψ^{-1}, i.e., the “coverage” discussed here is given by the left eigenvectors of (S32), a condition sometimes considered in the literature [14].
Another basis for the state space that can be used in place of (S33) is given by x̃ = [ q ; q̇ ]. With this choice, the state space realization is

    x̃˙ = Ão x̃ + B̃o u = [ 0  I ; −M^{-1}K  0 ] x̃ + [ 0 ; M^{-1}B ] u.     (S37)
It is straightforward to verify that in this basis the roles of win and wout
are exchanged, hence a criterion for driver node selection becomes ranking
according to win /wout (instead of wout /win ).
5 Datasets
The power networks used in the paper are listed in Table S1. All nodes are
treated equally, regardless of their function as generators or loads in the real
grid.
Network type          nodes   edges   source
North EU power grid     236     320   [23]
IEEE 300 test grid      300     409   https://www.ee.washington.edu/research/pstca/
French power grid      1888    2308   [16]
USA power grid         4941    6591   [44]

Table S1: Power grids used in this study.
References
[1] Stefano Allesina and Si Tang. The stability–complexity relationship at
age 40: a random matrix perspective. Population Ecology, 57(1):63–75,
2015.
[2] P.J. Antsaklis and A.N. Michel. Linear Systems. Birkhäuser Boston,
2005.
[3] Ami Arbel. Controllability measures and actuator placement in oscillatory systems. International Journal of Control, 33(3):565–574, 1981.
[4] N. Bof, G. Baggio, and S. Zampieri. On the Role of Network Centrality
in the Controllability of Complex Networks. ArXiv e-prints, September
2015.
[5] Béla Bollobás, Christian Borgs, Jennifer Chayes, and Oliver Riordan.
Directed scale-free graphs. In Proceedings of the Fourteenth Annual
ACM-SIAM Symposium on Discrete Algorithms, SODA ’03, pages 132–
139, Philadelphia, PA, USA, 2003. Society for Industrial and Applied
Mathematics.
[6] R. F. Brammer. Controllability in linear autonomous systems with positive controllers. SIAM J of Control, 10:339–353, 1972.
[7] Yu-Zhong Chen, Le-Zhi Wang, Wen-Xu Wang, and Ying-Cheng Lai.
Energy scaling and reduction in controlling complex networks. Royal
Society Open Science, 3(4), 2016.
[8] Sean P. Cornelius, William L. Kath, and Adilson E. Motter. Realistic
control of network dynamics. Nat Commun, 4, 06 2013.
[9] Brian C. Dian, Asier Longarte, and Timothy S. Zwier. Conformational
dynamics in a dipeptide after single-mode vibrational excitation. Science, 296(5577):2369–2373, 2002.
[10] Jin Ding, Yong-Zai Lu, and Jian Chu. Studies on controllability of
directed networks with extremal optimization. Physica A: Statistical
Mechanics and its Applications, 392(24):6603 – 6615, 2013.
[11] Jianxi Gao, Yang-Yu Liu, Raissa M D’Souza, and Albert-László
Barabási. Target control of complex networks. Nature communications,
5, 2014.
[12] Wodek K. Gawronski. Dynamics and Control of Structures. A Modal
Approach. Mechanical Engineering Series. Springer-Verlag, New York,
1998.
[13] L.L. Grigsby. Power System Stability and Control. The Electric Power Engineering Handbook, Second Edition. CRC Press, 2007.
[14] A. M.A. Hamdan and A. H. Nayfeh. Measures of modal controllability
and observability for first- and second-order linear systems. Journal of
Guidance, Control, and Dynamics, 12:421–428, 1989.
[15] D.H. Jacobson. Extensions of Linear-Quadratic Control, Optimization
and Matrix Theory, volume 133 of Mathematics in Science and Engineering. Academic Press, London, 1977.
[16] C. Josz, S. Fliscounakis, J. Maeght, and P. Panciatici. AC Power Flow
Data in MATPOWER and QCQP Format: iTesla, RTE Snapshots, and
PEGASE. ArXiv e-prints, March 2016.
[17] J.L. Junkins and Y. Kim. Introduction to Dynamics and Control of
Flexible Structures. AIAA education series. American Institute of Aeronautics & Astronautics, 1993.
[18] E.B. Lee and L. Markus. Foundations of Optimal Control Theory. R.E.
Krieger Publishing Company, 1986.
[19] S. Leleu, H. Abou-Kandil, and Y. Bonnassieux. Piezoelectric actuators and sensors location for active control of flexible structures. IEEE
Transactions on Instrumentation and Measurement, 50(6):1577–1582,
Dec 2001.
[20] Guoqi Li, Wuhua Hu, Gaoxi Xiao, Lei Deng, Pei Tang, Jing Pei, and
Luping Shi. Minimum-cost control of complex networks. New Journal
of Physics, 18(1):013012, 2016.
[21] Y.-Y. Liu and A.-L. Barabási. Control Principles of Complex Networks.
ArXiv e-prints, August 2015.
[22] Yang-Yu Liu, Jean-Jacques Slotine, and Albert-Laszlo Barabasi. Controllability of complex networks. Nature, 473(7346):167–173, 2011.
[23] Peter J. Menck, Jobst Heitzig, Jürgen Kurths, and Hans
Joachim Schellnhuber. How dead ends undermine power grid stability. Nat Commun, 5, 06 2014.
[24] P.C. Müller and H.I. Weber. Analysis and optimization of certain qualities of controllability and observability for linear dynamical systems.
Automatica, 8(3):237 – 246, 1972.
[25] Jose C. Nacher and Tatsuya Akutsu. Analysis of critical and redundant
nodes in controlling directed and undirected complex networks using
dominating sets. Journal of Complex Networks, 2(4):394–412, 2014.
[26] Tamás Nepusz and Tamás Vicsek. Controlling edge dynamics in complex networks. Nature Physics, 05 2012.
[27] A. Olshevsky. Minimal controllability problems. Control of Network
Systems, IEEE Transactions on, 1(3):249–258, Sept 2014.
[28] A. Olshevsky. Eigenvalue Clustering, Control Energy, and Logarithmic
Capacity. ArXiv e-prints, October 2015.
[29] F. Pasqualetti, S. Zampieri, and F. Bullo. Controllability metrics, limitations and algorithms for complex networks. IEEE Transactions on
Control of Network Systems, 1(1):40–52, March 2014.
[30] I. J. Pérez-Arriaga, G. C. Verghese, and F. C. Schweppe. Selective modal
analysis with applications to electric power systems, part i: Heuristic
introduction. IEEE Transactions on Power Apparatus and Systems,
PAS-101(9):3117–3125, Sept 1982.
[31] Justin Ruths and Derek Ruths. Control profiles of complex networks.
Science, 343(6177):1373–1376, 2014.
[32] Hamid Reza Shaker and Maryamsadat Tahavori. Optimal sensor and actuator location for unstable systems. Journal of Vibration and Control,
2012.
[33] Shenghua Shi and Herschel Rabitz. Optimal control of selective vibrational excitation of harmonic molecules: Analytic solution and restricted forms for the optimal fields. The Journal of Chemical Physics,
92(5):2927–2937, 1990.
[34] Shenghua Shi, Andrea Woody, and Herschel Rabitz. Optimal control of
selective vibrational excitation in harmonic linear chain molecules. The
Journal of Chemical Physics, 88(11):6870–6883, 1988.
[35] Eduardo D. Sontag. Mathematical Control Theory: Deterministic Finite
Dimensional Systems (2Nd Ed.). Springer-Verlag New York, Inc., New
York, NY, USA, 1998.
[36] Peter F. Staanum, Klaus Højbjerre, Peter S. Skyt, Anders K. Hansen,
and Michael Drewsen. Rotational laser cooling of vibrationally and
translationally cold molecular ions. Nat Phys, 6(4):271–274, 04 2010.
[37] T. H. Summers, F. L. Cortesi, and J. Lygeros. On submodularity and
controllability in complex dynamical networks. IEEE Transactions on
Control of Network Systems, 3(1):91–101, March 2016.
[38] Jie Sun and Adilson E. Motter. Controllability transition and nonlocality
in network control. Phys. Rev. Lett., 110:208701, May 2013.
[39] M. Tarokh. Measures for controllability, observability and fixed modes.
IEEE Transactions on Automatic Control, 37(8):1268–1273, Aug 1992.
[40] H. Trentelman, A.A. Stoorvogel, and M. Hautus. Control Theory for
Linear Systems. Communications and Control Engineering. Springer
London, 2012.
[41] V. Tzoumas, M. A. Rahimian, G. J. Pappas, and A. Jadbabaie. Minimal
actuator placement with bounds on control effort. IEEE Transactions
on Control of Network Systems, 3(1):67–78, March 2016.
[42] Marc van de Wal and Bram de Jager. A review of methods for input/output selection. Automatica, 37(4):487 – 510, 2001.
[43] W. S. Warren, H. Rabitz, and M. Dahleh. Coherent control of quantum
dynamics: The dream is alive. Science, 259:1581, 1993.
[44] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of small-world networks. Nature, 393(6684):440–442, 06 1998.
[45] Gang Yan, Jie Ren, Ying-Cheng Lai, Choy-Heng Lai, and Baowen Li.
Controlling complex networks: How much energy is needed? Phys. Rev.
Lett., 108:218703, May 2012.
[46] Gang Yan, Georgios Tsekenis, Baruch Barzel, Jean-Jacques Slotine,
Yang-Yu Liu, and Albert-Laszlo Barabasi. Spectrum of controlling and
observing complex networks. Nat Phys, 11(9):779–786, 09 2015.
[47] Zhengzhong Yuan, Chen Zhao, Zengru Di, Wen-Xu Wang, and YingCheng Lai. Exact controllability of complex networks. Nat Commun, 4,
09 2013.
[48] Ahmed H. Zewail. Laser selective chemistry: is it possible? Physics
Today, 33(11):27–33, 1980.
[49] Kemin Zhou, Gregory Salomon, and Eva Wu. Balanced realization and
model reduction for unstable systems. International Journal of Robust
and Nonlinear Control, 9(3):183–198, 1999.
Figure 1: Reachability and controllability-to-0 problems. (a): The reachability (or controllability-from-0) problem is difficult along the stable eigendirections of A (red curves in the leftmost panel) and easy along the unstable ones (blue). This is reflected in the surfaces of Er(tf) = xf^T Wr^{-1}(tf) xf shown in the 3 rightmost panels. In particular, the reachability problem requires limited control energy when A is antistable (rightmost panel). (b): The controllability-to-0 problem is difficult along the unstable eigendirections of A (red) and easy along the stable ones (blue). The input energy surfaces, Ec(tf) = xo^T Wc^{-1}(tf) xo, reflect these properties. The case of A stable requires the least control energy. (c): The problem studied in this paper is a mixture of the two cases, collecting the worst case of both. When the real part of the eigenvalues of A is squeezed towards the imaginary axis as in the lower right panels of Fig. 2(a), the input energy reduces accordingly.
Figure 2: (a): Circular law and eigenvalue location. For a random matrix, the circular law allows one to obtain state matrices A with eigenvalues in prescribed locations, for instance in the unit disk (blue) or in the shifted unit disk (red and yellow), by altering the diagonal of A. The elliptic law allows one to squeeze the eigenvalue location along one of the two axes of the complex plane (violet, green and cyan). (b): Control energy for various metrics when the number of (randomly chosen) inputs grows. The data show a mean over 100 realizations of dimension n = 1000 (for each realization 100 different edge weight assignments are considered). The color code is as in (a). For all three metrics used to measure the control energy (λmin(Wm) and tr(Wm), which should both be maximized, and tr(Wm^{-1}), which should be minimized), the performances are strictly a function of the position of the eigenvalues of A. The minimum of the control energy is achieved when the eigenvalues have very small real part (cyan) and worsens with growing real part, following the order: cyan, green, blue, violet.
Figure 3: Driver node placement strategy: ranking according to rw = wout/win. (a): When the driver nodes are chosen according to the ratio rw, all the control energy measures improve with respect to a random node selection. Here λmin(Wm) is shown; the other energy measures are in Fig. S3. Measures are means (and st. dev.) over 100 realizations of size n = 1000; for each realization 100 edge weight assignments are tested. (b): For ER networks, the improvement in λmin(Wm) increases with the sparsity of the graph (inset: zoomed comparison in linear scale). For other topologies, like SF directed graphs with indegree exponent γin = 3.14 and outdegree exponent γout = 2.87, the improvement is remarkably more significant (two orders of magnitude, violet curve, see also Fig. S5 for more details). (c): the ratio rw of ranked nodes is shown. For SF networks, the fraction of nodes having high rw is much bigger than that of ER networks, and this leads to the much better performances in terms of control energy. The shaded areas represent the values of m tested in our computations.
Figure 4: Driver node placement strategies for a network of coupled harmonic
oscillators. (a): The concept of driver node is basis dependent: when the basis
changes in state space (for instance we pass from (5) to (6)), the control inputs no
longer target a single node, but become spread across the entire state space (now
decoupled into non-interacting modes). (b): Comparison of different driver node
placement strategies for n = 1000 coupled harmonic oscillators. Shown are means
over 100 realizations (with 100 edge weights samples taken for each realization).
Red: driver node placement based on λmin (Wz ). Violet: placement based on
tr(Wz ). Green: placement based on tr(Wz−1 ). Cyan: placement based on wout /win .
Blue: random input assignment. All driver node placement strategies always beat
a random assignment, often by orders of magnitude. The green and red curves
give similar performances and so do the cyan and violet. Notice that for tr(Wz )
the violet curve gives the exact optimum. (c): Overlap in the node ranking of the
different driver node placement strategies. Color code is the same as in (b). The
only highly significant overlap is between wout /win and tr(Wz ), while λmin (Wz ) and
tr(Wz−1 ) correspond to different node ranking patterns. Notice that none of the
strategies orders nodes according to win /wout (mid panel). (d): Inverse correlation between Mi and wout /win (correlation coefficient around −0.75 on average).
[Plots for Figure 5, panels (a)–(b): eigenvalue locations for the four damping choices, and λmin(W) versus the number of inputs under λmin(W)-based, wout/win-based and random placement, with log-scale insets.]
Figure 5: Minimum energy control of power grids for varying damping coefficients.
(a): The eigenvalues of the state space system (9) for the North EU power grid
[23] with uniformly distributed masses (hMi i = 10) and damping coefficients that
vary across 4 orders of magnitude. (b): Control energy for the metric λmin (Wr )
when the driver nodes are placed according to λmin (Wr ) (left panel), wout /win
(mid panel), or randomly (right panel). The values of λmin (Wr ) corresponding
to the 4 choices of damping made in (a) are shown in solid lines (same color
code as in (a)), while in dotted lines the values of λmin (Wz ) are shown (suitably
normalized to eliminate the explicit dependence on tf , see (7)). The insets show
the same quantities in log scale. Values are all averages over 100 realizations.
For all three driver node placement strategies, the performance worsens as the
damping is increased. Comparing the three panels, wout /win performs similarly to
λmin (Wr ), and both outperform a random placement by orders of magnitude.
[Plots for Figure S1, panels (a)–(b): λmin(Wm), tr(Wm)/n and tr(Wm−1)/n versus the number of inputs for ER networks with p = 0.05.]
Figure S1: Analogue of Fig. 2, but for networks with an ER topology with edge probability p = 0.05. (a): Six different eigenvalue locations in the complex plane for ER graphs of size n = 1000 and random edge weights. The circular law and the elliptic law are still valid. (b): Control energy for various metrics when the number of (randomly chosen) inputs grows. The data show a mean over 100 realizations of dimension n = 1000 (for each realization 100 different edge weight assignments are considered). The color code is as in (a). For all three metrics used to measure the control energy (λmin(Wm) and tr(Wm), which should both be maximized, and tr(Wm−1), which should be minimized), the performance is strictly a function of the position of the eigenvalues of A. The minimum of the control energy is achieved when the eigenvalues have very small real part (cyan) and worsens with growing real part, following the order: cyan, green, blue, violet.
[Plots for Figure S2: λmin(Wm), tr(Wm)/n, tr(Wm−1)/n and λmax(Wm)/λmin(Wm) as functions of the horizon, for tf = 20, 100, 200 and ∞.]
Figure S2: Computing control energies in finite time and infinite time. For the various measures of control energy considered in the paper (λmin(Wm), tr(Wm) and tr(Wm−1)), the plots show the profile in time when computations are performed using Wm(tf), for various values of tf. The value for tf = ∞ is also shown for comparison. For this specific example (full network of size n = 1000 and m = 400 controls), some measures converge much faster than others. For instance tr(Wm−1) achieves its infinite-time value extremely quickly, while tr(Wm) converges very slowly. Also the condition number of Wm (i.e., λmax(Wm)/λmin(Wm), lower right panel) converges to its asymptotic value.
[Plots for Figure S3, panels (a)–(b): λmin(Wm), tr(Wm)/n, tr(Wm−1)/n and λmax(Wm)/λmin(Wm) versus the number of inputs, and the corresponding ratios between rw-based and random placement.]
Figure S3: Driver node placement for ER networks with p = 0.05. (a): Comparison between the value of the various measures of control energy obtained for driver
node placement strategies based on rw = wout /win (red) and the same measure
for random driver node assignments (blue). As can be seen on the ratios shown in
(b), all measures improve, especially when m is low. Notice that the condition number of Wm (i.e., λmax (Wm )/λmin (Wm ), lower right panel) also improves.
[Plots for Figure S4, panels (a)–(b): cumulative indegree and outdegree distributions for ER (p = 0.05, p = 0.01) and SF (γin = 3.14, γout = 2.87) networks, and the per-node ratio dout/din for the full, ER and SF networks.]
Figure S4: Degree distributions of the various networks used in this study. (a):
Average indegree and outdegree of the ER networks (with p = 0.05 and p = 0.01)
and SF networks. The SF networks are generated using the algorithm of [5],
with indegree exponent γin = 3.14 and outdegree exponent γout = 2.87. To avoid
problems with controllability, extra edges are added randomly among different
strongly connected components, until strong connectivity is achieved on the entire
graph. The difference in the in/out exponent is still clearly visible. (b): Histogram
of outdegree/indegree ratio for the various networks. The SF networks reflect our
choice of γout < γin .
[Plots for Figure S5, panels (a)–(b): λmin(Wm), tr(Wm)/n, tr(Wm−1)/n and λmax(Wm)/λmin(Wm) versus the number of inputs for SF networks, and the corresponding ratios between rw-based and random placement (log scale).]
Figure S5: Driver node placement for SF networks with γin = 3.14 and γout = 2.87.
(a): Comparison between the value of the various measures of control energy obtained for driver node placement strategies based on rw = wout /win (red) and
the same measure for random driver node assignments (blue). As the ratios in
(b) show, the improvement in all measures is typically of several orders of magnitude. The condition number of Wm (i.e., λmax (Wm )/λmin (Wm )) also improves
substantially.
[Plots for Figure S6, panels (a)–(b): λmin(Wz), tr(Wz)/n and tr(Wz−1)/n versus the number of inputs for the different placement strategies, and the percentage overlap of node rankings with wout/win, win/wout and λmin(Wz).]
Figure S6: Driver node placement for a network of n = 1000 coupled harmonic
oscillators. The figure is the analogue of Fig. 4, but now the coupling matrix
K is fully connected. (a): Shown are means over 50 realizations (with 100 edge
weight samples taken for each realization). Red: driver node placement based
on λmin (Wz ). Violet: placement based on tr(Wz ). Green: placement based on
tr(Wz−1 ). Cyan: placement based on wout /win . Blue: random input assignment.
All driver node placement strategies still beat a random assignment, though with worse performance than in Fig. 4. Of the four measures, λmin (Wz ) and
tr(Wz−1 ) tend to behave similarly and so do wout /win and tr(Wz ) (in the mid plot
they completely overlap, and both give the true optimum). (b): Overlap in the
node ranking of the different driver node placement strategies. Color code is the
same as in (a). The only highly significant overlap is still between wout /win and
tr(Wz ) (> 90%), while λmin (Wz ) and tr(Wz−1 ) correspond to different node ranking
patterns. None of the strategies orders nodes according to win /wout , as expected.
[Plots for Figure S7, panels (a)–(b): tr(W)/n and tr(W−1)/n versus the number of inputs under λmin(W)-based, wout/win-based and random placement.]
Figure S7: Minimum energy control of power grids with varying damping coefficients. North EU power grid. This Figure complements Fig. 5(b) of the paper.
(a): Control energy for the metric tr(Wr ) when the driver nodes are placed according to λmin (Wr ) (left panel), wout /win (mid panel), or randomly (right panel).
The color code is a function of the damping coefficients, with the same convention
as in Fig. 5(a) of the paper. The values of tr(Wr ) are shown in solid lines, while
in dotted lines the values of tr(Wz ) are shown (suitably normalized to eliminate
the explicit dependence on tf ). Values are averages over 100 realizations. (b):
Control energy for the metric tr(Wr−1 ), with the same conventions as in (a).
[Plots for Figure S8, panels (a)–(c): λmin(W), tr(W)/n and tr(W−1)/n versus the number of inputs under λmin(W)-based, wout/win-based and random placement, with log-scale insets.]
Figure S8: Minimum energy control of power grids with varying damping coefficients. IEEE 300 bus test power network. (a): Control energy for the metric
λmin (Wr ) when the driver nodes are placed according to λmin (Wr ) (left panel),
wout /win (mid panel), or randomly (right panel). The color code is a function of
the damping coefficients, using the same convention as in Fig. 5(a) of the paper.
The values of λmin (Wr ) are shown in solid lines, while in dotted lines the values
of λmin (Wz ) are shown (suitably normalized to eliminate the explicit dependence
on tf ). Values are averages over 100 realizations. Data are missing when the
Gramian Wr is too close to singular in too many trials. (b): Control energy for
the metric tr(Wr ) when the driver nodes are placed according to λmin (Wr ) (left
panel), wout /win (mid panel), or randomly (right panel). The values of tr(Wr ) are
shown in solid lines, while in dotted lines the values of tr(Wz ) are shown. (c):
Control energy for the metric tr(Wr−1 ), with the same conventions as in (a) and
(b).
[Plots for Figure S9, panels (a)–(c): λmin(W), tr(W)/n and tr(W−1)/n versus the number of inputs under λmin(W)-based, wout/win-based and random placement, with log-scale insets.]
Figure S9: Minimum energy control of power grids with varying damping coefficients. French high/mid voltage power grid. (a): Control energy for the metric
λmin (Wr ) when the driver nodes are placed according to λmin (Wr ) (left panel),
wout /win (mid panel), or randomly (right panel). The color code is a function of
the damping coefficients, using the same convention as in Fig. 5(a) of the paper.
The values of λmin (Wr ) are shown in solid lines, while in dotted lines the values
of λmin (Wz ) are shown (suitably normalized to eliminate the explicit dependence
on tf ). Data are missing when the Gramian Wr is too close to singular (mostly
when driver nodes are chosen randomly, right column). (b): Control energy for
the metric tr(Wr ) when the driver nodes are placed according to λmin (Wr ) (left
panel), wout /win (mid panel), or randomly (right panel). The values of tr(Wr ) are
shown in solid lines, while in dotted lines the values of tr(Wz ) are shown. (c):
Control energy for the metric tr(Wr−1 ), with the same conventions as in (a) and
(b).
[Plots for Figure S10, panels (a)–(c): λmin(W), tr(W)/n and tr(W−1)/n versus the number of inputs under λmin(W)-based, wout/win-based and random placement, with log-scale insets.]
Figure S10: Minimum energy control of power grids with varying damping coefficients. USA power grid. (a): Control energy for the metric λmin (Wr ) when the
driver nodes are placed according to λmin (Wr ) (left panel), wout /win (mid panel),
or randomly (right panel). The color code is a function of the damping coefficients,
using the same convention as in Fig. 5(a) of the paper. The values of λmin (Wr ) are
shown in solid lines, while in dotted lines the values of λmin (Wz ) are shown (suitably normalized to eliminate the explicit dependence on tf ). Data are missing
when the Gramian Wr is numerically too close to singular in too many trials. (b):
Control energy for the metric tr(Wr ) when the driver nodes are placed according
to λmin (Wr ) (left panel), wout /win (mid panel), or randomly (right panel). The
values of tr(Wr ) are shown in solid lines, while in dotted lines the values of tr(Wz )
are shown. (c): Control energy for the metric tr(Wr−1 ), with the same conventions
as in (a) and (b).
arXiv:cs/0703116v2 [] 28 Jun 2007
On the Design of Generic Static Analyzers
for Modern Imperative Languages
ROBERTO BAGNARA
Department of Mathematics, University of Parma, Italy
and
PATRICIA M. HILL
School of Computing, University of Leeds, UK
and
ANDREA PESCETTI, and ENEA ZAFFANELLA
Department of Mathematics, University of Parma, Italy
The design and implementation of precise static analyzers for significant fragments of modern
imperative languages like C, C++, Java and Python is a challenging problem. In this paper, we
consider a core imperative language that has several features found in mainstream languages, including recursive functions, run-time system and user-defined exceptions, and a realistic data and memory model. For this language we provide a concrete semantics —characterizing
both finite and infinite computations— and a generic abstract semantics that we prove sound with
respect to the concrete one. We say the abstract semantics is generic since it is designed to be
completely parametric on the analysis domains: in particular, it provides support for relational
domains (i.e., abstract domains that can capture the relationships between different data objects).
We also sketch how the proposed methodology can be extended to accommodate a larger language
that includes pointers, compound data objects and non-structured control flow mechanisms. The
approach, which is based on structured, big-step G∞ SOS operational semantics and on abstract
interpretation, is modular in that the overall static analyzer is naturally partitioned into components with clearly identified responsibilities and interfaces, something that greatly simplifies both
the proof of correctness and the implementation.
Categories and Subject Descriptors: F3.1 [Logics and Meanings of Programs]: Specifying
and Verifying and Reasoning about Programs.
General Terms: Languages, Verification.
Additional Key Words and Phrases: Abstract interpretation, structured operational semantics.
1. INTRODUCTION
The last few years have witnessed significant progress toward achieving the ideal of
the program verification grand challenge [Hoa03]. Still, the distance separating us
from that ideal can be measured by the substantial lack of available tools that are
able to verify the absence of relevant classes of run-time errors in code written in
(reasonably rich fragments of) mainstream imperative languages like C, C++, Java
and Python. True: there is a handful of commercial products that target generic
applications written in C, but little is known about them. In contrast, several
papers explain the essence of the techniques employed by the ASTRÉE analyzer
This work has been partly supported by MIUR project “AIDA — Abstract Interpretation: Design
and Applications” and by a Royal Society (UK) International Joint Project (ESEP) award.
to formally and automatically verify the absence of run-time errors in large safety-critical embedded control/command codes [BCC+ 02; BCC+ 03]; however, ASTRÉE
is specially targeted at a particular class of programs and program properties, so
that widening its scope of application is likely to require significant effort [Cou05]. It
is interesting to observe that, among the dozens of software development tools that
are freely available, there are hardly any that, by analyzing the program semantics,
are able to certify the absence of important classes of run-time hazards such as,
say, the widely known buffer overflows in C code.
The reason for the current, extreme scarcity of the resource “precise analyzers
for mainstream programming languages” is that the design and implementation of
such analyzers is a very challenging problem. The theory of abstract interpretation
[CC77a; CC92a] is crucial to the management of the complexity of this problem
and, in fact, both ASTRÉE and the existing commercial analyzers are (as far as
we know) based on it. Static analysis via abstract interpretation is conducted by
mimicking the execution of the analyzed programs on an abstract domain. This
is a set of computable representations of program properties equipped with all
the operations required to mirror, in an approximate though correct way, the real,
concrete executions of the program. Over the last decade, research and development
on the abstract domains has led to the availability of several implementations of
a wide range of abstract domains: from the most efficient though imprecise, to
the most precise though inefficient. Simplification and acceleration techniques have
also been developed to mitigate the effects of this complexity/precision trade-off.
So the lack of semantics-based static analyzers is not ascribable to a shortage of
abstract domains and their implementations. The point is that there is more to a
working analyzer than a collection of abstract domains:
(i) A concrete semantics must be selected for the analyzed language that models
all the aspects of executions that are relevant to the properties of interest. This
semantics must be recognizable as a sound characterization of the language
at the intended level of abstraction.
(ii) An abstract semantics must be selected and correlated to the concrete semantics. This requires a proof of correctness that, while greatly simplified
by abstract interpretation theory, can be a time-consuming task for highly
qualified individuals.
(iii) An algorithm to finitely and efficiently compute (approximations of) the abstract semantics must be selected.
(iv) For good results, the abstract domain needs to be an object that is both
complex and easily adaptable. So, instead of designing a new domain from
scratch, it is often better if one can be obtained by combining simpler, existing, abstract domains. Even though the theory of abstract interpretation
provides important conceptual instruments for the design of such a combination, a significant effort is still needed to achieve, as far as possible, the desired
precision and efficiency levels. Note that this point can have an impact on
points (ii) and (iii): a generic abstract semantics has the advantage of not
requiring an entirely new proof and a new algorithm each time the abstract
domain changes.
This paper, which is the first product of a long-term research plan that is meant to
deal with all of the points above, specifically addresses points (i) and (ii) and refers
to a slight generalization of existing techniques for point (iii).
1.1 Contribution
We build on ideas that have been around for quite some time but, as far as we
know, have never been sufficiently elaborated to be applied to the description and
analysis of realistic imperative languages. In extreme synthesis, the contribution
consists in filling a good portion of the gaps that have impeded the application of
these ideas to complex imperative programming languages such as C.1
More precisely, here we define the concrete and generic abstract semantics constructions for a language —called CPM— that incorporates all the features of
mainstream, single-threaded imperative programming languages that can be somewhat problematic from the point of view of static analysis. Most notably, the CPM
language features: a non-toy memory model; exceptions; run-time errors modeled
via exceptions (for instance, an exception is raised whenever a division by zero
is attempted, when a stack allocation request causes a stack overflow or when
other memory errors occur); array types; pointer types to both data objects and
functions; short-circuit evaluation of Boolean operators; user-defined (possibly recursive) functions; and non-structured control flow mechanisms.
For the description of the concrete dynamic semantics of the language we have
used a structured operational semantics (SOS) approach extended to deal with
infinite computations, mainly building on the work of Kahn, Plotkin and Cousot.
With respect to what can be found in the literature, we have added the treatment
of all non-structured control flow mechanisms of the C language. Of course, as
the ultimate goal of this research is to end up with practical analysis tools, the
concrete dynamic semantics has been defined in order to facilitate as much as
possible the subsequent abstraction phase. Still, our dynamic semantics retains all
the traditional good features: in particular, the concrete rule schemata are plainly
readable (assuming the reader becomes sufficiently familiar with the unavoidable
notational conventions) and fairly concise.
For the abstract semantics, we build on the work of Schmidt by providing the
concrete dynamic semantics rules with abstract counterparts. As far as we know,
this is the first time that Schmidt’s proposal is applied to the analysis of a realistic
programming language [D. Schmidt, personal communication, 2004]. A remarkable
feature of our abstract semantics is that it is truly generic in that it fully supports
relational abstract domains: the key step in this direction is the identification
and specification of a suitable set of operators on (concrete and abstract) memory
structures, that allow for domain-independent approximations but without inherent
limitations on the obtainable precision.
1 It is worth noticing that we improperly refer to the C language to actually mean some more
constrained language —like CIL, the C Intermediate Language described in [NMRW02]— where
all ambiguities have been removed, in addition to an ABI (Application Binary Interface) that
further defines its semantics. Similarly, by ‘Python’ we mean a tractable subset of the language,
such as the RPython subset being developed by the PyPy project (http://pypy.org/).
Schmidt’s proposal about the abstract interpretation of natural semantics has, in
our opinion, two important advantages: concrete and abstract rules can be made
executable and are easily correlated. We review these two aspects in turn.
Even though here we do not provide details in this respect, a prototype system
—called ECLAIR2 — has been developed in parallel with the writing of the present
paper. The Prolog implementation exploits nice features of a semantics construction based on SOS approach: the concrete semantics rule schemata can be directly
translated into Prolog clauses; and the resulting interpreter, with the help of a
C++ implementation of memory structures, is efficient enough to run non-trivial
programs. Similar considerations apply to the modules implementing the abstract
semantics: the abstract semantics rules are almost directly translated to generic
Prolog code that is interfaced with specialized libraries implementing several abstract domains, including accurate ones such as the ones provided by the Parma
Polyhedra Library [BHRZ05; BHZ05; BHZ06]. So, following this approach, the
distance between the expression of the concrete semantics and its executable realization is, as is well known, very little; but the same can be said about the distance
between the specification of the abstract semantics and the static analyzer that results from its implementation. This prototype system therefore gives us confidence
that both the concrete and abstract semantics are correctly modeled and that, in
this paper, no real difficulties have been overlooked.
For space reasons, only a subset of CPM is treated in full depth in the main
body of the paper (the extension of the design to the full language is only briefly
described even though all the important points are covered). For this subset, we
give a complete proof of correctness that relates the abstract semantics to the
concrete semantics. The proofs are not complicated and suggest (also because of
the way we present them) the possibility of their automatization. To summarize,
at this stage of the research work it does not seem unreasonable that we may
end up with: readable and executable representations of the concrete semantics
of mainstream programming languages; readable and executable representations of
program analyzers; correctness of the analyzers established by automatic specialized
theorem provers; and, at last, availability of sophisticated program analyzers for
such languages.
A final word is due to address the following concern: if the target languages are
“real” imperative programming languages, why choose CPM, an unreal one? The
reason is indeed quite simple: Java and Python miss some of the “hard” features of
C; C misses exceptions; C++ is too hard, for the time being. So, choosing any one of
these real languages would have been unlikely to provide us with the answer we were
looking for, which was about the adequacy of Schmidt’s approach with respect to
the above goals. Moreover, in its ECLAIR realization, the CPM language is being
extended so as to become a superset of C (i.e., with all the floating-point and integer
types, cast and bitwise operators and so forth). Once that code has stabilized, a C
and a Java subsystem will be forked.
2 The ‘Extended CLAIR’ system targets the analysis of mainstream programming languages by
building upon CLAIR, the ‘Combined Language and Abstract Interpretation Resource’, which
was initially developed and used in a teaching context (see http://www.cs.unipr.it/clair/).
1.2 Related Work
The literature on abstract interpretation proposes several frameworks for static
analysis, where the more general approaches put forward in foundational papers
are partially specialized according to a given criterion. For a few examples of specializations based on the programming paradigm, one can mention the frameworks
in [Bru91] and [GDL92] for the analysis of (constraint) logic programs; the approach
in [CC94] for the analysis of functional programs; and the so called “Marktoberdorf’98 generic static analyzer” specified in [Cou99] for the analysis of imperative
programs.
All of these frameworks are “generic” in that, while fixing some of the parameters of the considered problem, they are still characterized by several degrees of
freedom. It is therefore natural to reason on the similarities and differences between these approaches. However, independently from the programming paradigm
under analysis, direct comparisons between frameworks are extremely difficult in
that each proposal typically focuses on the solution of a subset of the relevant issues, while partially disregarding other important problems. For instance, both
[Bru91] and [GDL92] study the generic algebraic properties that allow for a clean
and safe separation between the abstract domains and the abstract interpreter; in
contrast, [Cou99] provides full details for a specific instance of the proposed framework, ranging from the parsing of literal constants to the explicit implementation
of the abstract operators for the abstract domain of intervals. On the other hand,
the frameworks mentioned above differ from the one presented in this paper in that
they allow for significant simplifications of the language analyzed. Here we briefly
discuss the main differences between the language considered in our proposal and
the one in [Cou99].
At the syntactic level, as already mentioned, the language CPM is much richer
than the simple imperative language adopted in [Cou99], which has no support
for functions, nesting of block statements, exceptions, non-structured control flows
and allows for a single data type (in particular, no pointers and arrays). These
syntactic differences are clearly mirrored at the semantics level. In particular, even
though the detection of initialization and arithmetic errors is considered by the
semantics in [Cou99], the actual process of error propagation is not modeled. In
contrast, the semantics construction we propose can easily accommodate the sophisticated exception propagation and handling mechanisms that can be found in
modern languages such as C++, Java and Python. Note that this choice has a
non-trivial impact on the specification of the other components of the semantic
construction. For example, the short-circuit evaluation of Boolean expressions cannot be normalized as proposed in [Cou99], because such a normalization process,
by influencing the order of evaluation of subexpressions, is unable to preserve the
concrete semantics as far as exceptional computation paths are concerned. A minor difference is in the modeling of integer variables and values: while [Cou99]
considers the case of possibly uninitialized variables taking values in a finite set
of machine-representable integers, for ease of presentation we have opted for definitely initialized variables storing arbitrary (i.e., unbounded) integer values. Since
the CPM language supports an extensible set of RTS exceptions, the specification
of a semantics modeling (the generation, propagation and handling of) uninitial-
ization errors is rather straightforward. An extension of the semantics to the case
of several sets of bounded and unbounded numerical types, with suitable type conversion functions, is under development. Another difference is in the generality
of the abstract semantics construction: following the approach described here, an
analyzer can take full advantage of the more accurate information provided by a
relational domain such as that of polyhedra. In contrast, the work in [Cou99] only
considers the simpler case of non-relational abstract domains. As mentioned above,
the semantics we propose also models the case of possibly recursive functions (with
a call-by-value parameter passing mechanism), which are not supported by the language syntax considered in [Cou99]. While both this paper and [Cou99] consider
the specification of a forward static analysis framework, [Cou99] also provides a
backward analysis for arithmetic expressions, to be used in reductive iterations so
as to improve precision losses that are usually incurred by non-relational approximations.
1.3 Plan of the Paper
The paper is organized as follows. Section 2 introduces the notation and terminology used throughout the paper; Section 3 defines the syntax of a subset of
the imperative language CPM, whereas Section 4 defines its static semantics; the
concrete dynamic semantics of this fragment is presented in Section 5, whereas
its abstract counterpart is defined in Section 6. The proof of correctness of the
abstract semantics is the subject of Section 7, while the computation of further approximations is treated in Section 8. The integration of the full CPM language in
the analysis framework presented in this paper is discussed in Section 9. Section 10
concludes.
2. PRELIMINARIES
Let S and T be sets. The notation S ⊆f T means that S is a finite subset of
T . We write S ⊎ T to denote the union S ∪ T , yet emphasizing the fact that
S ∩ T = ∅. The set of total (resp., partial) functions from S to T is denoted by S → T (resp., S ⇀ T). We denote by dom(f) the domain of a function f : S → T (resp., f : S ⇀ T), where dom(f) = S (resp., dom(f) ⊆ S). Let (S, ⊑) be a partial order and f : S → S be a function. An element x ∈ S such that x = f(x) (resp., x ⊑ f(x)) is called a fixpoint (resp., post-fixpoint) of f. The notation lfp(f) (resp., gfp(f)) stands, if it exists, for the least (resp., greatest) fixpoint of f. A complete lattice is a partial order (S, ⊑) such that lub T exists for each T ⊆ S. If f : S → S is monotonic over the complete lattice S, the Knaster-Tarski theorem ensures that the set of post-fixpoints of f is itself a complete lattice. The fixpoint coinduction proof principle follows: if f is monotonic over the complete lattice S then, in order to prove that x ⊑ gfp(f), it is sufficient to prove that x ⊑ f(x).
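In analyzers, least fixpoints of this kind are typically computed by Kleene iteration; a generic sketch (ours, illustrative only, not the ECLAIR code):

```ocaml
(* Least fixpoint of a monotonic f by Kleene iteration from the
   bottom element; [equal] decides stabilization.  On domains with
   infinite ascending chains, a widening operator must replace the
   plain iteration to guarantee termination. *)
let lfp ~(equal : 'a -> 'a -> bool) (f : 'a -> 'a) (bottom : 'a) : 'a =
  let rec iterate x =
    let x' = f x in
    if equal x x' then x else iterate x'
  in
  iterate bottom

(* Example: the least fixpoint of x |-> min (x + 1) 10, starting
   from 0, is 10. *)
let _ = assert (lfp ~equal:( = ) (fun x -> min (x + 1) 10) 0 = 10)
```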
Let S = {s1, . . . , sn} be a finite set of cardinality n ≥ 0. Then, the notation {s1 ↦ t1, . . . , sn ↦ tn}, where {t1, . . . , tn} ⊆ T, stands for the function f : S → T such that f(si) = ti, for each i = 1, . . . , n. Note that, assuming that the codomain T is clear from context, the empty set ∅ denotes the (nowhere defined) function f : ∅ → T.
When denoting the application of a function f : (S1 × · · · × Sn) → T we omit, as customary, the outer parentheses and write f(s1, . . . , sn) to mean f((s1, . . . , sn)).
Let f0 : S0 ⇀ T0 and f1 : S1 ⇀ T1 be partial functions. Then the function f0[f1] : (S0 ∪ S1) ⇀ (T0 ∪ T1) is defined, for each x ∈ dom(f0) ∪ dom(f1), by

f0[f1](x) def= f1(x), if x ∈ dom(f1);
f0[f1](x) def= f0(x), if x ∈ dom(f0) \ dom(f1).

(Note that, if f0 and f1 are total functions, then f0[f1] is total too.)
For a partial function f : S ⇀ T and a set S′ ⊆ S, f|S′ denotes the restriction of f to S′, i.e., the function f|S′ : S′ ⇀ T defined, for each x ∈ S′ ∩ dom(f), by f|S′(x) def= f(x). (Note that, if f is a total function, then f|S′ is total too.) With a minor abuse of notation, we will sometimes write f \ S′′ to denote f|S\S′′.
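Both operations are ordinary finite-map combinators; for instance, with OCaml's standard Map (an illustration of the notation only, using names of our choosing):

```ocaml
module IdMap = Map.Make (String)

(* f0[f1]: f1 takes precedence where both are defined. *)
let override (f0 : 'a IdMap.t) (f1 : 'a IdMap.t) : 'a IdMap.t =
  IdMap.union (fun _id _v0 v1 -> Some v1) f0 f1

(* f|s: restrict f to the identifiers listed in s. *)
let restrict (f : 'a IdMap.t) (s : string list) : 'a IdMap.t =
  IdMap.filter (fun id _v -> List.mem id s) f
```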
S ⋆ denotes the set of all finite, possibly empty strings of symbols taken from S.
The empty string is denoted by ǫ. If w, z ∈ S ∪ S ⋆ , the concatenation of w and z is
an element of S ⋆ denoted by wz or, to avoid ambiguities, by w · z. The length of a
string z is denoted by |z|.
The integer part function int : R → Z is given, for each x ∈ R, by int(x) def= ⌊x⌋ if x ≥ 0, and int(x) def= ⌈x⌉ if x < 0. The integer division and modulo operations ÷, mod : Z × (Z \ {0}) → Z are defined, for each x, y ∈ Z with y ≠ 0, by x ÷ y def= int(x/y) and x mod y def= x − (x ÷ y) · y, respectively.
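These conventions (truncation toward zero, remainder with the sign of the dividend) agree with, e.g., OCaml's built-in integer operators, so the definitions can be sanity-checked directly:

```ocaml
(* x / y truncates toward zero and x mod y = x - (x / y) * y,
   matching the definitions of ÷ and mod above. *)
let () =
  assert (7 / 2 = 3 && 7 mod 2 = 1);
  assert ((-7) / 2 = -3 && (-7) mod 2 = -1);
  assert (7 / (-2) = -3 && 7 mod (-2) = 1)
```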
We assume familiarity with the field of program analysis and verification via
abstract interpretation. The reader is referred to the literature for the theory
(e.g., [Cou81; CC76; CC77a; CC79; CC92a; CC92c]) and examples of applications
[DRS01; Hal93; SKS00].
3. THE LANGUAGE SYNTAX
The run-time support of CPM uses exceptions to communicate run-time errors.
The set of RTS exceptions is left open so that it can be extended if and when
needed. That said, the basic syntactic sets of the CPM language are:
Identifiers. id ∈ Id def= {main, x, x0, x1, . . .} ⊎ rId, where rId = {x̄, x̄0, x̄1, . . .};
Basic types. T ∈ Type def= {integer, boolean};
Integers. m ∈ Integer def= Z;
Booleans. t ∈ Bool def= {tt, ff};
RTS exceptions. χ ∈ RTSExcept def= {divbyzero, stkovflw, memerror, . . .}.
The identifiers in rId are “reserved” for the specification of the concrete semantics.
From the basic sets, a number of syntactic categories are defined, along with their
syntactic meta-variables, by means of the BNF rules:
Expressions.
Exp ∋ e ::= m | −e | e0 + e1 | e0 − e1 | e0 ∗ e1 | e0 / e1 | e0 % e1
| t | e0 = e1 | e0 ≠ e1 | e0 < e1 | e0 ≤ e1 | e0 ≥ e1 | e0 > e1
| not e | e0 and e1 | e0 or e1 | id
Sequences of expressions.
Exps ∋ es ::= | e, es
Storable types.
sType ∋ sT ::= T
Formal parameters.
formParams ∋ fps ::= | id : sT, fps
Function bodies.
Body ∋ body ::= let d in s result e | extern : sT
Global declarations.
Glob ∋ g ::= gvar id : sT = e | function id(fps) = body | rec g | g0 ; g1
Local declarations.
Decl ∋ d ::= nil | lvar id : sT = e | d0 ; d1
Catchable types.
cType ∋ cT ::= rts exception | sT
Exception declarations.
exceptDecl ∋ p ::= χ | cT | id : sT | any
Catch clauses.
Catch ∋ k ::= (p) s | k0 ; k1
Statements.
Stmt ∋ s ::= nop | id := e | id0 := id(es) | s0 ; s1 | d; s
| if e then s0 else s1 | while e do s
| throw χ | throw e | try s catch k | try s0 finally s1
Observe that there is no need of a separate syntactic category for programs: as we
will see, a CPM program is just a global declaration defining the special function
‘main’, like in C and C++.
It should be noted that some apparent limitations of the abstract syntax of CPM
are not real limitations. For instance: the use of function calls as expressions
can be avoided by introducing temporary variables; procedures can be rendered
by functions that return a dummy value; and so forth. More generally, a slight
elaboration of the abstract syntax presented here and extended in Section 9 is used
in the ECLAIR prototype to encode the C language almost in its entirety, plus the
basic exception handling mechanisms of C++ and Java.
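As a sketch of how such an abstract syntax is represented in an implementation (an illustrative OCaml rendering with names of our choosing, not the ECLAIR encoding), the core categories become mutually recursive algebraic datatypes:

```ocaml
type stype = Integer | Boolean

type exp =
  | Num of int | Lit of bool | Id of string
  | Neg of exp | Not of exp
  | Arith of aop * exp * exp            (* + − ∗ / %         *)
  | Rel of rop * exp * exp              (* = ≠ < ≤ ≥ >       *)
  | And of exp * exp | Or of exp * exp  (* short-circuiting  *)
and aop = Add | Sub | Mul | Div | Mod
and rop = Eq | Ne | Lt | Le | Ge | Gt

type ctype = RtsException | SType of stype

type except_decl =
  | Chi of string                        (* a named RTS exception *)
  | CType of ctype
  | Binder of string * stype             (* id : sT *)
  | Any

type stmt =
  | Nop
  | Assign of string * exp
  | Call of string * string * exp list   (* id0 := id(es) *)
  | Seq of stmt * stmt
  | Block of decl * stmt                 (* d; s *)
  | If of exp * stmt * stmt
  | While of exp * stmt
  | ThrowRts of string
  | Throw of exp
  | TryCatch of stmt * catch
  | TryFinally of stmt * stmt
and decl =
  | DNil
  | Lvar of string * stype * exp
  | DSeq of decl * decl
and catch =
  | Clause of except_decl * stmt
  | CSeq of catch * catch
```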
For notational convenience, we also define the syntactic categories of constants,
storable values3 and exceptions:
Constants.
Con ∋ con ::= m | t
3 The reason for a distinction between the roles of constants and storable values (as well as basic
types and storable types) will become clear when discussing language extensions in Section 9.
Storable values.
sVal ∋ sval ::= con
Exceptions.
Except ∋ ξ ::= χ | sval
The (partial) function type : sVal ⇀ sType, mapping a storable value to its type name ‘integer’ or ‘boolean’, is defined by:

type(sval) def= integer, if sval = m ∈ Integer;
type(sval) def= boolean, if sval = t ∈ Bool.
For ease of notation, we also define the overloadings type : Except ⇀ cType and type : exceptDecl ⇀ cType by

type(ξ) def= rts exception, if ξ = χ ∈ RTSExcept;
type(ξ) def= type(sval), if ξ = sval ∈ sVal;

type(p) def= rts exception, if p = χ ∈ RTSExcept;
type(p) def= cT, if p = cT ∈ cType;
type(p) def= sT, if p = id : sT and sT ∈ sType.
Note that such an overloading is consistent and the resulting function is not defined
on value any ∈ exceptDecl.
The helper function dom : cType → {Integer, Bool, RTSExcept}, which associates a catchable type name to the corresponding domain, is defined by
dom(cT) def= Integer, if cT = integer;
dom(cT) def= Bool, if cT = boolean;
dom(cT) def= RTSExcept, if cT = rts exception.
4. STATIC SEMANTICS
The static semantics of the CPM language establishes the conditions under which
a program is well typed. Only well-typed programs are given a dynamic semantics.
4.1 Defined and Free Identifiers
The set of identifiers defined by sequences of formal parameters, (global or local)
declarations or exception declarations is defined as follows:
DI() def= DI(nil) def= DI(body) def= DI(χ) def= DI(cT) def= DI(any) def= ∅;
DI(id : sT) def= DI(gvar id : sT = e) def= DI(lvar id : sT = e) def= DI(function id(fps) = body) def= {id};
DI(id : sT, fps) def= DI(id : sT) ∪ DI(fps);
DI(rec g) def= DI(g);
DI(g0 ; g1) def= DI(g0) ∪ DI(g1);
DI(d0 ; d1) def= DI(d0) ∪ DI(d1).
The set of identifiers that occur freely in (sequences of) expressions, (exception) declarations, statements and catch clauses is defined by:

FI(m) def= FI(t) def= FI(nop) def= FI() def= FI(id : sT) def= FI(nil) def= FI(χ) def= FI(cT) def= FI(any) def= FI(throw χ) def= FI(extern : sT) def= ∅;
FI(−e) def= FI(not e) def= FI(lvar id : sT = e) def= FI(gvar id : sT = e) def= FI(throw e) def= FI(e);
FI(e0 op e1) def= FI(e0) ∪ FI(e1), for op ∈ {+, . . . , %, =, . . . , >, and, or};
FI(id) def= {id};
FI(let d in s result e) def= FI(d) ∪ (FI(s) \ DI(d)) ∪ (FI(e) \ DI(d));
FI(function id(fps) = body) def= FI(body) \ DI(fps);
FI(rec g) def= FI(g) \ DI(g);
FI(g0 ; g1) def= FI(g0) ∪ (FI(g1) \ DI(g0));
FI(d0 ; d1) def= FI(d0) ∪ (FI(d1) \ DI(d0));
FI(id := e) def= {id} ∪ FI(e);
FI(e, es) def= FI(e) ∪ FI(es);
FI(id0 := id(es)) def= {id, id0} ∪ FI(es);
FI(d; s) def= FI(d) ∪ (FI(s) \ DI(d));
FI((p) s) def= FI(s) \ DI(p);
FI(k0 ; k1) def= FI(k0) ∪ FI(k1);
FI(s0 ; s1) def= FI(try s0 finally s1) def= FI(s0) ∪ FI(s1);
FI(if e then s0 else s1) def= FI(e) ∪ FI(s0) ∪ FI(s1);
FI(while e do s) def= FI(e) ∪ FI(s);
FI(try s catch k) def= FI(s) ∪ FI(k).
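Read as a program, the clauses for expressions become a one-case-per-constructor recursion over the datatype sketched in Section 3 (again illustrative; fi_exp is a hypothetical helper of ours):

```ocaml
module IdSet = Set.Make (String)

(* FI(e): the free identifiers of an expression, one case per
   clause above, over the exp type sketched in Section 3. *)
let rec fi_exp (e : exp) : IdSet.t =
  match e with
  | Num _ | Lit _ -> IdSet.empty
  | Id id -> IdSet.singleton id
  | Neg e | Not e -> fi_exp e
  | Arith (_, e0, e1) | Rel (_, e0, e1)
  | And (e0, e1) | Or (e0, e1) ->
      IdSet.union (fi_exp e0) (fi_exp e1)
```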
4.2 Type Environments
We start by defining the convenience syntactic category of
Denotable types.
dType ∋ dT ::= sT loc | fps → sT
A type environment associates a denotable type to each identifier of a given, finite
set of identifiers.
Definition 4.1. (TEnvI, TEnv.) For each I ⊆f Id, the set of type environments over I is TEnvI def= I → dType; the set of all type environments is given by TEnv def= ⋃_{I ⊆f Id} TEnvI. Type environments are denoted by β, β0, β1 and so forth. The notation β : I is a shorthand for β ∈ TEnvI.
4.3 Static Semantics Predicates
Let I ⊆f Id and β ∈ TEnvI . The well-typedness of program constructs whose
free identifiers are contained in I is encoded by the following predicates, here listed
along with their informal meaning:
β ⊢I e : sT    (e is well-formed and has type sT in β);
β ⊢I body : sT    (body is well-formed and has type sT in β);
β, fps ⊢I es    (es is compatible with fps and well formed in β);
fps : δ    (fps is well formed and yields the type environment δ);
β ⊢I g : δ    (g is well formed and yields the type environment δ in β);
β ⊢I d : δ    (d is well-formed and yields the type environment δ in β);
⊢I p : δ    (p is well-formed and yields the type environment δ);
β ⊢I k    (k is well-formed in β);
β ⊢I s    (s is well-formed in β).
These predicates are defined inductively on the abstract syntax by means of the
following rules.
Expressions.

β ⊢I m : integer        β ⊢I t : boolean

β ⊢I e : integer
β ⊢I −e : integer

β ⊢I e : boolean
β ⊢I not e : boolean

β ⊢I e0 : integer    β ⊢I e1 : integer
β ⊢I e0 ⊙ e1 : integer        if ⊙ ∈ {+, −, ∗, /, %}

β ⊢I e0 : integer    β ⊢I e1 : integer
β ⊢I e0 ∼ e1 : boolean        if ∼ ∈ {=, ≠, <, ≤, ≥, >}

β ⊢I e0 : boolean    β ⊢I e1 : boolean
β ⊢I e0 ⋄ e1 : boolean        if ⋄ ∈ {and, or}

β ⊢I id : sT        if β(id) = sT loc

Sequences of expressions.

β, ⊢I        (the empty sequence matches the empty formal parameters)

β ⊢I e : sT    β, fps ⊢I es
β, (id : sT, fps) ⊢I (e, es)
Sequences of formal parameters.

 : ∅        (the empty sequence yields the empty type environment)

fps : δ
(id : sT, fps) : {id ↦ sT loc} ∪ δ        if id ∉ DI(fps)
Function bodies.

β ⊢I d : β0    β[β0] ⊢I∪DI(d) s    β[β0] ⊢I∪DI(d) e : sT
β ⊢I (let d in s result e) : sT

β ⊢I (extern : sT) : sT
Declarations.

β ⊢I nil : ∅

β ⊢I e : sT
β ⊢I gvar id : sT = e : {id ↦ sT loc}

β ⊢I e : sT
β ⊢I lvar id : sT = e : {id ↦ sT loc}

fps : δ    β[δ] ⊢I∪DI(fps) body : sT
β ⊢I function id(fps) = body : {id ↦ (fps → sT)}

β[δ|J] ⊢I∪J g : δ
β ⊢I (rec g) : δ        (1)    if J = FI(g) ∩ DI(g) and ∀id, sT : (id ↦ sT loc) ∉ δ

β ⊢I g0 : β0    β[β0] ⊢I∪DI(g0) g1 : β1
β ⊢I g0 ; g1 : β0[β1]

β ⊢I d0 : β0    β[β0] ⊢I∪DI(d0) d1 : β1
β ⊢I d0 ; d1 : β0[β1]
Note that rule (1) seems to suggest that δ must be guessed. Indeed, this is not
the case, as it can be proved that the environment generated by a declaration g
only depends on g and not on the environment used to establish whether g is well
formed. While the right thing to do is to define two static semantics predicates for
declarations —one for the generated environments and the other for well-formedness
[Plo04]— we opted for a more concise presentation. Also notice that the side
condition in rule (1) explicitly forbids recursive declarations of variables.4
Exception declarations.
⊢I χ : ∅
⊢I cT : ∅
⊢I id : sT : {id ↦ sT loc}
⊢I any : ∅
Catch clauses.
⊢I p : δ    β[δ] ⊢I∪DI(p) s
β ⊢I (p) s

β ⊢I k0    β ⊢I k1
β ⊢I k0 ; k1

4 Namely, a recursive declaration such as rec gvar id : sT = e is not well-typed.
Statements.

β ⊢I nop

β ⊢I e : sT
β ⊢I id := e        if β(id) = sT loc

β, fps ⊢I es
β ⊢I id0 := id(es)        if β(id0) = sT loc and β(id) = fps → sT

β ⊢I s0    β ⊢I s1
β ⊢I s0 ; s1

β ⊢I d : β0    β[β0] ⊢I∪DI(d) s
β ⊢I d; s

β ⊢I e : boolean    β ⊢I s0    β ⊢I s1
β ⊢I if e then s0 else s1

β ⊢I e : boolean    β ⊢I s
β ⊢I while e do s

β ⊢I throw χ

β ⊢I e : sT
β ⊢I throw e

β ⊢I s    β ⊢I k
β ⊢I try s catch k

β ⊢I s0    β ⊢I s1
β ⊢I try s0 finally s1
A program g is said to be valid if and only if it does not contain any occurrence of a reserved identifier id ∈ rId, ∅ ⊢∅ g : β and β(main) = ( → integer), i.e., main is a parameterless function returning an integer.
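Since every rule above is syntax-directed, the expression fragment of these predicates is directly a type-checking function; a sketch over the earlier AST, reusing IdMap from the preliminaries (dtype and Ill_typed are our names):

```ocaml
type dtype = Loc of stype | Fun of (string * stype) list * stype
type tenv = dtype IdMap.t

exception Ill_typed

(* beta |- e : sT, following the rules for expressions. *)
let rec type_of (beta : tenv) (e : exp) : stype =
  match e with
  | Num _ -> Integer
  | Lit _ -> Boolean
  | Id id ->
      (match IdMap.find_opt id beta with
       | Some (Loc st) -> st
       | _ -> raise Ill_typed)
  | Neg e ->
      if type_of beta e = Integer then Integer else raise Ill_typed
  | Not e ->
      if type_of beta e = Boolean then Boolean else raise Ill_typed
  | Arith (_, e0, e1) ->
      if type_of beta e0 = Integer && type_of beta e1 = Integer
      then Integer else raise Ill_typed
  | Rel (_, e0, e1) ->
      if type_of beta e0 = Integer && type_of beta e1 = Integer
      then Boolean else raise Ill_typed
  | And (e0, e1) | Or (e0, e1) ->
      if type_of beta e0 = Boolean && type_of beta e1 = Boolean
      then Boolean else raise Ill_typed
```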
5. CONCRETE DYNAMIC SEMANTICS
For the specification of the concrete dynamic semantics for CPM, we adopt the
G∞ SOS approach of Cousot and Cousot [CC92c]. This generalizes with infinite
computations the natural semantics approach by Kahn [Kah87], which, in turn,
is a “big-step” operational semantics defined by structural induction on program
structures in the style of Plotkin [Plo04].
5.1 Absolute Locations and Indirect Locators
An absolute location (or, simply, location) is a unique identifier for a memory area
of unspecified size. The (possibly infinite) set of all locations is denoted by Loc,
while individual locations are denoted by l, l0, l1 and so forth. We also postulate the existence of a set Ind def= N of indirect (stack) locators such that Loc ∩ Ind = ∅. Indirect locators are denoted by i, i0, i1 and so forth. For notational convenience, we define the set of addresses as Addr def= Loc ⊎ Ind. Addresses are denoted by a, a0, a1 and so forth.
5.2 Concrete Execution Environments
The concrete dynamic aspect of declarations is captured by concrete execution
environments. These map a finite set of identifiers to concrete denotable values. In
the sequel we will simply write ‘environment’ to refer to execution environments.
Definition 5.1. (Abstract, dVal, EnvI.) We define

Abstract def= { λ fps . body | fps ∈ formParams, body ∈ Body }.

The set of concrete denotable values is

dVal def= (Addr × sType) ⊎ Abstract.

For I ⊆f Id, EnvI def= I → dVal is the set of concrete environments over I. The set of all environments is given by Env def= ⋃_{I ⊆f Id} EnvI. Environments in EnvI are denoted by ρ, ρ0, ρ1 and so forth. We write ρ : I as a shorthand for ρ ∈ EnvI. For ρ : I and β : I, we write ρ : β to signify that

∀id ∈ I : (∃(a, sT) ∈ Addr × sType . β(id) = sT loc ∧ ρ(id) = (a, sT))
  ∨ (∃abs = (λ fps . body) ∈ Abstract . β(id) = fps → sT ∧ β ⊢I body : sT ∧ ρ(id) = abs).

5.3 Memory Structures, Value States and Exception States
A memory structure uses a stack and suitable operators to allocate/deallocate,
organize, read and update the locations of an absolute memory map, which is a
partial function mapping a location and a storable type to a storable value. Memory
structures model all the memory areas that are used in the most common implementations of imperative programming languages: the data segment (for global
variables) and the stack segment (for local variables) are of interest for the language fragment we are considering; the text segment (where pointers to function
point to) and the heap segment (for dynamically allocated memory) are required
to deal with the extensions of Section 9. As it will be clear from the following
definition, our notion of memory structure is underspecified: while we define it and
its operations so that the semantics of programs is the expected one, we allow for
many possible implementations by leaving out many details that are inessential to
the achievement of that objective. It is for this same reason that we treat locations
as unique identifiers neglecting the mathematical structure they may or may not
have. More generally, what we call “concrete semantics” is indeed an abstraction of
an infinite number of machines and compilation schemes that could be used to execute our programs. Furthermore, since the considered fragment of CPM does not
support pointers, arrays, type casts and unions, we can here make the simplifying
assumption that there is no overlap between the storage cells associated to different
locations. In Section 9 we will hint at how these assumptions must be modified in
order to accommodate the full language.
Memory structures will be used to describe the outcome of computations whose
only observable behavior is given by their side effects. Computations yielding a
proper value will be described by a value state, which pairs the value computed
with a memory structure recording the side effects of the execution. Exceptional
behavior must, of course, be taken into proper account: thus, the result of an
exceptional computation path will be described by pairing the memory structure
with an exception, yielding what we call an exception state.
Definition 5.2. (Map, Stack, Mem, ValState, ExceptState.) The set of all absolute maps is the set of partial functions

Map def= (Loc × sType) ⇀ sVal.

Absolute maps are denoted by µ, µ0, µ1 and so forth. The absolute map update partial function

·[· := ·] : Map × (Loc × sType) × sVal ⇀ Map

is defined, for each µ ∈ Map, (l, sT) ∈ Loc × sType such that (l, sT) ∈ dom(µ) and sval ∈ sVal such that sT = type(sval), by

µ[(l, sT) := sval] def= µ′,

where µ′ ∈ Map is any absolute map satisfying the following conditions:
(i) dom(µ′) = dom(µ);
(ii) µ′(l, sT) = sval;
(iii) µ′(l′, sT′) = µ(l′, sT′), for each (l′, sT′) ∈ dom(µ) such that l′ ≠ l.

Let W def= (Loc ∪ {†, ‡})⋆. An element w ∈ W is a stack if and only if no location occurs more than once in it. The set of all stacks is denoted by Stack. ‘†’ is called stack marker and ‘‡’ is called frame marker. The top-most frame of w ∈ Stack, denoted by tf(w), is the longest suffix of w containing no frame marker; formally, tf(w) ∈ (Loc ∪ {†})⋆ satisfies either w = tf(w) or w = w′ ‡ tf(w). The partial infix operator @ : Stack × Ind ⇀ Loc maps, when defined, a stack w and an indirect locator i into an absolute location to be found in the top-most frame; formally, if i < n def= |tf(w)|, tf(w) = z0 · · · zn−1 and zi = l, then w @ i def= l.
A memory structure is an element of Mem def= Map × Stack. Memory structures are denoted by σ, σ0, σ1 and so forth.
A value state is an element of ValState def= sVal × Mem. Value states are denoted by υ, υ0, υ1 and so forth.
An exception state is an element of ExceptState def= Mem × Except. Exception states are denoted by ε, ε0, ε1 and so forth.
The overloading @ : Mem × Addr ⇀ Loc of the partial infix operator @ is defined, for each σ = (µ, w) and a ∈ Addr, as follows and under the following conditions:

σ @ a def= a, if a ∈ Loc;
σ @ a def= l, if a ∈ Ind and l = w @ a is defined.

The memory structure read and update operators

·[·, ·] : Mem × Addr × sType → (ValState ⊎ ExceptState),
·[· := ·] : Mem × (Addr × sType) × sVal → (Mem ⊎ ExceptState)

are respectively defined, for each σ = (µ, w) ∈ Mem, a ∈ Addr, sT ∈ sType and sval ∈ sVal, as follows: let d = (σ @ a, sT); then

σ[a, sT] def= (µ(d), σ), if d ∈ dom(µ);
σ[a, sT] def= (σ, memerror), otherwise;

σ[(a, sT) := sval] def= (µ[d := sval], w), if d ∈ dom(µ) and sT = type(sval);
σ[(a, sT) := sval] def= (σ, memerror), otherwise.
The data and stack memory allocation functions

newd : ValState → ((Mem × Loc) ⊎ ExceptState),
news : ValState → ((Mem × Ind) ⊎ ExceptState)

are defined, for each υ = (sval, σ) ∈ ValState, where σ = (µ, w), by

newd(υ) def= ((µ′, w), l), if the data segment of σ can be extended;
newd(υ) def= (σ, datovflw), otherwise;

news(υ) def= ((µ′, w′), i), if the stack segment of σ can be extended;
news(υ) def= (σ, stkovflw), otherwise;

where, in the case of news, w′ ∈ Stack and i ∈ Ind are such that:
(i) w′ = w · l;
(ii) i = |tf(w)|;
and, for both newd and news, µ′ ∈ Map and l ∈ Loc are such that:
(iii) for each sT ∈ sType, (l, sT) ∉ dom(µ);
(iv) for each (l′, sT′) ∈ dom(µ), µ′(l′, sT′) = µ(l′, sT′);
(v) µ′(l, type(sval)) = sval.

The memory structure data cleanup function cleanupd : ExceptState → ExceptState is given, for each ε = (σ, ξ) ∈ ExceptState, by

cleanupd(ε) def= ((∅, ǫ), ξ).

The stack mark function marks : Mem → Mem is given, for each σ ∈ Mem, by

marks(σ) def= (µ, w†), where σ = (µ, w).

The stack unmark partial function unmarks : Mem ⇀ Mem is given, for each σ ∈ Mem such that σ = (µ, w′†w′′) and w′′ ∈ Loc⋆, by

unmarks(µ, w′†w′′) def= (µ′, w′),

where the absolute map µ′ ∈ Map satisfies:
(i) dom(µ′) = { (l, sT) ∈ dom(µ) | l does not occur in w′′ };
(ii) µ′ = µ|dom(µ′).

The frame link partial function links : Mem ⇀ Mem is given, for each σ ∈ Mem such that σ = (µ, w′†w′′) and w′′ ∈ Loc⋆, by

links(µ, w′†w′′) def= (µ, w′‡w′′).

The frame unlink partial function unlinks : Mem ⇀ Mem is given, for each σ ∈ Mem such that σ = (µ, w′‡w′′) and w′′ ∈ Loc⋆, by

unlinks(µ, w′‡w′′) def= (µ, w′†w′′).
For ease of notation, the stack unmark and the frame unlink partial functions are lifted to also work on exception states. Namely, for each ε = (σ, ξ) ∈ ExceptState,

unmarks(σ, ξ) def= (unmarks(σ), ξ);
unlinks(σ, ξ) def= (unlinks(σ), ξ).
Intuitively, global variables are allocated in the data segment using newd and are
accessed through absolute locations; function cleanupd models their deallocation
due to an RTS exception thrown during the program start-up phase. The functions
marks and unmarks use the stack marker ‘†’ to implement the automatic allocation
(through news ) and deallocation of stack slots for storing local variables, return
values and actual arguments of function calls. The functions links and unlinks use
the frame marker ‘‡’ to partition the stack into activation frames, each frame corresponding to a function call. All accesses to the top-most frame can be expressed
in terms of indirect locators (i.e., offsets from the top-most frame marker), because
at each program point the layout of the current top-most frame is statically known.
As it will be clearer when considering the concrete rules for function calls, the frame
marker is used to move the return value and the actual arguments, which are allocated by the caller, from the activation frame of the caller to the activation frame
of the callee, and vice versa.
The memory structures and operations satisfy the following property: for each
pair of memory structures σ0 and σ1 such that σ1 has been obtained from σ0 by any
sequence of operations where each links is matched by a corresponding unlinks, and for
each indirect locator i ∈ Ind, if σ0@i and σ1@i are both defined, then σ0@i = σ1@i.

As anticipated, we profit from the lack of aliasing in the fragment of CPM considered
here: we assume there is no overlap between the storage cells associated with (l0, sT0)
and those associated with (l1, sT1), unless l0 = l1. Moreover, we need not specify the
relationship between µ(l, sT0) and µ(l, sT1) for the case where sT0 ≠ sT1. This also
implies that the absolute map update operator is underspecified, resulting in a
nondeterministic operator. Of course, any real implementation will be characterized by
a complete specification: for instance, a precise definition of the memory overflow
conditions will take the place of the informal conditions "if the data (resp., stack)
segment of σ can be extended" in the definitions of newd and news. As is clear from the
definition above, where memory is writable if and only if it is readable, we do not
attempt to model read-only memory. It is also worth observing that, in the sequel, the
"meaning" of variable identifiers will depend on unrestricted elements of Env × Mem. As
a consequence, we can have dangling references: a pair (ρ, σ) ∈ Env × Mem with ρ : I may
be such that there exists an identifier id ∈ I for which ρ(id) = (a, sT) and
σ[a, sT] = memerror.
5.4 Configurations
The dynamic semantics of CPM is expressed by means of an evaluation (or reduction)
relation, which specifies how a non-terminal configuration is reduced to a terminal
configuration. The sets of non-terminal configurations are parametric with respect to
a type environment associating every identifier with its type.
Definition 5.3. (Non-terminal configurations.) The sets of non-terminal
configurations for expressions, local and global declarations, statements, function
bodies and catch clauses are given, respectively and for each β ∈ TEnvI, by

    Γβe ≝ { ⟨e, σ⟩ ∈ Exp × Mem | ∃sT ∈ sType . β ⊢I e : sT },
    Γβd ≝ { ⟨d, σ⟩ ∈ Decl × Mem | ∃δ ∈ TEnv . β ⊢I d : δ },
    Γβg ≝ { ⟨g, σ⟩ ∈ Glob × Mem | ∃δ ∈ TEnv . β ⊢I g : δ },
    Γβs ≝ { ⟨s, σ⟩ ∈ Stmt × Mem | β ⊢I s },
    Γβb ≝ { ⟨body, σ⟩ ∈ Body × Mem | ∃sT ∈ sType . β ⊢I body : sT },
    Γβk ≝ { ⟨k, ε⟩ ∈ Catch × ExceptState | β ⊢I k }.
Each kind of terminal configuration has to allow for the possibility of both a
non-exceptional and an exceptional computation path.
Definition 5.4. (Terminal configurations.) The sets of terminal configurations
for expressions, local and global declarations, statements, function bodies and
catch clauses are given, respectively, by

    Te ≝ ValState ⊎ ExceptState,
    Td ≝ Tg ≝ (Env × Mem) ⊎ ExceptState,
    Ts ≝ Tb ≝ Mem ⊎ ExceptState,
    Tk ≝ ({caught} × Ts) ⊎ ({uncaught} × ExceptState).
Note that Te is defined as ValState ⊎ ExceptState; as will be apparent from the
concrete semantics, expressions never modify the memory structure, so Te could
have been defined as sVal ⊎ Except; but defining it as ValState ⊎ ExceptState
simplifies the approximation relations in Section 6.
In the following, we write N and η to denote a non-terminal and a terminal concrete
configuration, respectively. For clarity of notation, we often use angle brackets
to highlight that a tuple is indeed representing a configuration. Angle brackets are
not normally used for configurations made of a single element. Therefore, when
ε = (σ, ξ) ∈ ExceptState, we write ε ∈ Ts or ⟨σ, ξ⟩ ∈ Ts interchangeably, as well as
⟨caught, ε⟩ ∈ Tk or ⟨caught, (σ, ξ)⟩ ∈ Tk.
A few explanatory words are needed for Tk. When the evaluation of a non-terminal
configuration for catch clauses ⟨k, ε⟩ ∈ Γβk yields the terminal configuration
⟨caught, η⟩ ∈ Tk, then the exception ξ in ε = (σ, ξ) was caught inside k and η ∈ Ts
is the result of evaluating the corresponding exception handler statement; note that
η ∈ Ts may itself be another exception state, meaning that another exception was
thrown during the evaluation of the exception handler statement. In contrast, when
the resulting terminal configuration is ⟨uncaught, ε⟩ ∈ Tk, then the exception in ε
was not caught inside k and will be propagated to the outer context.⁵
⁵ Note that the names of the labels caught and uncaught have been chosen as such for
clarity, but provide no special meaning: they are only needed for a correct application
of the disjoint union construction, since we have Ts ∩ ExceptState ≠ ∅.
5.5 Concrete Evaluation Relations
For convenience, in order to represent function closures, we extend the syntactic
category of local declarations with (recursive) execution environments. These
syntactic constructs are meant to be only available in the dynamic semantics (in
non-terminal configurations): they cannot occur in the program text. Thus we have

    Decl ∋ d ::= . . . | ρ | rec ρ

Consequently, if ρ : I we define DI(ρ) ≝ DI(rec ρ) ≝ I,

    FI(ρ) ≝ ⋃_{id ∈ I} FI(ρ(id))    and    FI(rec ρ) ≝ FI(ρ) \ I,

where the function FI is defined on elements of dVal by FI(l, sT) ≝ FI(i, sT) ≝ ∅
and FI(λ fps . body) ≝ FI(body) \ DI(fps). The static semantics is extended by
adding the rules

    ρ : δ                        β ⊢I ρ : δ
    ──────────────────           ────────────────
    β[δ|J] ⊢I∪J ρ : δ            β ⊢I rec ρ : δ

if J = FI(ρ) ∩ DI(ρ) and ∀id : (id ↦ sT loc) ∉ δ.

The concrete evaluation relations that complete the definition of the concrete
semantics for CPM are defined, as usual, by structural induction from a set of rule
schemata. The evaluation relations are of the form ρ ⊢β N → η, where β ∈ TEnvI,
ρ ∈ EnvJ, ρ : β|J and, for some q ∈ {e, d, g, s, b, k}, N ∈ Γβq and η ∈ Tq.
5.5.1 Expressions

Constant.
                                                                 (2)
    ρ ⊢β ⟨con, σ⟩ → ⟨con, σ⟩

Identifier.
                                                                 (3)
    ρ ⊢β ⟨id, σ⟩ → σ[ρ(id)]

Unary minus.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────                                       (4)
    ρ ⊢β ⟨−e, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨m, σ0⟩
    ────────────────────────────                                 (5)
    ρ ⊢β ⟨−e, σ⟩ → ⟨−m, σ0⟩

Binary arithmetic operations. Let □ denote any abstract syntax operator in
{+, −, ∗, /, %} and ◦ ∈ {+, −, ·, ÷, mod} the corresponding arithmetic operation.
Then the rules for addition, subtraction, multiplication, division and remainder are
given by the following schemata:

    ρ ⊢β ⟨e0, σ⟩ → ε
    ──────────────────────────                                   (6)
    ρ ⊢β ⟨e0 □ e1, σ⟩ → ε

    ρ ⊢β ⟨e0, σ⟩ → ⟨m0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ε
    ──────────────────────────────────────────────               (7)
    ρ ⊢β ⟨e0 □ e1, σ⟩ → ε

    ρ ⊢β ⟨e0, σ⟩ → ⟨m0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨m1, σ1⟩
    ─────────────────────────────────────────────────────        (8)
    ρ ⊢β ⟨e0 □ e1, σ⟩ → ⟨m0 ◦ m1, σ1⟩

    if □ ∉ {/, %} or m1 ≠ 0;

    ρ ⊢β ⟨e0, σ⟩ → ⟨m0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨0, σ1⟩
    ─────────────────────────────────────────────────────        (9)
    ρ ⊢β ⟨e0 □ e1, σ⟩ → ⟨σ1, divbyzero⟩

    if □ ∈ {/, %}.

Arithmetic tests. Let ⋈ ∈ {=, ≠, <, ≤, ≥, >} be an abstract syntax operator and
denote with '≶' the corresponding test operation in Z × Z → Bool. The rules for
the arithmetic tests are then given by the following schemata:

    ρ ⊢β ⟨e0, σ⟩ → ε
    ──────────────────────────                                   (10)
    ρ ⊢β ⟨e0 ⋈ e1, σ⟩ → ε

    ρ ⊢β ⟨e0, σ⟩ → ⟨m0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ε
    ──────────────────────────────────────────────               (11)
    ρ ⊢β ⟨e0 ⋈ e1, σ⟩ → ε

    ρ ⊢β ⟨e0, σ⟩ → ⟨m0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨m1, σ1⟩
    ─────────────────────────────────────────────────────        (12)
    ρ ⊢β ⟨e0 ⋈ e1, σ⟩ → ⟨m0 ≶ m1, σ1⟩

Negation.

    ρ ⊢β ⟨b, σ⟩ → ε
    ──────────────────────────                                   (13)
    ρ ⊢β ⟨not b, σ⟩ → ε

    ρ ⊢β ⟨b, σ⟩ → ⟨t, σ0⟩
    ────────────────────────────────                             (14)
    ρ ⊢β ⟨not b, σ⟩ → ⟨¬t, σ0⟩

Conjunction.

    ρ ⊢β ⟨b0, σ⟩ → ε
    ───────────────────────────────                              (15)
    ρ ⊢β ⟨b0 and b1, σ⟩ → ε

    ρ ⊢β ⟨b0, σ⟩ → ⟨ff, σ0⟩
    ──────────────────────────────────────                       (16)
    ρ ⊢β ⟨b0 and b1, σ⟩ → ⟨ff, σ0⟩

    ρ ⊢β ⟨b0, σ⟩ → ⟨tt, σ0⟩    ρ ⊢β ⟨b1, σ0⟩ → η
    ──────────────────────────────────────────────               (17)
    ρ ⊢β ⟨b0 and b1, σ⟩ → η

Disjunction.

    ρ ⊢β ⟨b0, σ⟩ → ε
    ───────────────────────────────                              (18)
    ρ ⊢β ⟨b0 or b1, σ⟩ → ε

    ρ ⊢β ⟨b0, σ⟩ → ⟨tt, σ0⟩
    ──────────────────────────────────────                       (19)
    ρ ⊢β ⟨b0 or b1, σ⟩ → ⟨tt, σ0⟩

    ρ ⊢β ⟨b0, σ⟩ → ⟨ff, σ0⟩    ρ ⊢β ⟨b1, σ0⟩ → η
    ──────────────────────────────────────────────               (20)
    ρ ⊢β ⟨b0 or b1, σ⟩ → η
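Rules (2)–(20) are deterministic enough to be read off directly as a big-step interpreter. The following Python fragment, continuing the memory sketch above, is one such reading (the AST encoding and all names are our own illustrative choices): it threads the memory through subevaluations, propagates exception states, raises divbyzero as in rule (9), and short-circuits and/or as in rules (15)–(20); Python's floor division stands in for the concrete ÷.

    # Expressions are tuples: ("con", v), ("id", name), ("neg", e),
    # ("bin", op, e0, e1), ("not", b), ("and", b0, b1), ("or", b0, b1).

    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a // b,
           "%": lambda a, b: a % b}
    CMP = {"=": lambda a, b: a == b, "<>": lambda a, b: a != b,
           "<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
           ">=": lambda a, b: a >= b, ">": lambda a, b: a > b}

    def eval_exp(rho, e, sigma):
        """Return ("ok", sval, sigma') or ("exc", sigma', xi)."""
        tag = e[0]
        if tag == "con":                       # rule (2)
            return ("ok", e[1], sigma)
        if tag == "id":                        # rule (3): sigma[rho(id)]
            a, sT = rho[e[1]]
            return read(sigma, a, sT)
        if tag == "neg":                       # rules (4)-(5)
            r = eval_exp(rho, e[1], sigma)
            return r if r[0] == "exc" else ("ok", -r[1], r[2])
        if tag == "bin":                       # rules (6)-(12)
            op, e0, e1 = e[1], e[2], e[3]
            r0 = eval_exp(rho, e0, sigma)
            if r0[0] == "exc": return r0
            r1 = eval_exp(rho, e1, r0[2])
            if r1[0] == "exc": return r1
            if op in ("/", "%") and r1[1] == 0:
                return ("exc", r1[2], "divbyzero")   # rule (9)
            f = OPS.get(op) or CMP[op]
            return ("ok", f(r0[1], r1[1]), r1[2])
        if tag == "not":                       # rules (13)-(14)
            r = eval_exp(rho, e[1], sigma)
            return r if r[0] == "exc" else ("ok", not r[1], r[2])
        if tag in ("and", "or"):               # rules (15)-(20)
            r0 = eval_exp(rho, e[1], sigma)
            if r0[0] == "exc": return r0
            stop = r0[1] is (tag == "or")      # ff stops 'and', tt stops 'or'
            return r0 if stop else eval_exp(rho, e[2], r0[2])
        raise ValueError("unknown expression")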
5.5.2 Declarations
Nil.
                                                                 (21)
    ρ ⊢β ⟨nil, σ⟩ → ⟨∅, σ⟩

Environment.
                                                                 (22)
    ρ ⊢β ⟨ρ0, σ⟩ → ⟨ρ0, σ⟩

Recursive environment.
                                                                 (23)
    ρ ⊢β ⟨rec ρ0, σ⟩ → ⟨ρ1, σ⟩

    if ρ1 = { id ↦ ρ0(id) | ρ0(id) = λfps . extern : sT }
           ∪ { id ↦ abs1 | ∀i ∈ {0, 1} : absi = λfps . let di in s result e,
                           ρ0(id) = abs0, d1 = (rec ρ0 \ DI(fps)); d0 }.
Global variable declaration.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────────────────────────               (24)
    ρ ⊢β ⟨gvar id : sT = e, σ⟩ → cleanupd(ε)

    ρ ⊢β ⟨e, σ⟩ → υ
    ──────────────────────────────────────────────               (25)
    ρ ⊢β ⟨gvar id : sT = e, σ⟩ → cleanupd(ε)

    if newd(υ) = ε;

    ρ ⊢β ⟨e, σ⟩ → υ
    ──────────────────────────────────────────────               (26)
    ρ ⊢β ⟨gvar id : sT = e, σ⟩ → ⟨ρ1, σ1⟩

    if newd(υ) = (σ1, l) and ρ1 = { id ↦ (l, sT) }.

Local variable declaration.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────────────────────────               (27)
    ρ ⊢β ⟨lvar id : sT = e, σ⟩ → unmarks(ε)

    ρ ⊢β ⟨e, σ⟩ → υ
    ──────────────────────────────────────────────               (28)
    ρ ⊢β ⟨lvar id : sT = e, σ⟩ → unmarks(ε)

    if news(υ) = ε;

    ρ ⊢β ⟨e, σ⟩ → υ
    ──────────────────────────────────────────────               (29)
    ρ ⊢β ⟨lvar id : sT = e, σ⟩ → ⟨ρ1, σ1⟩

    if news(υ) = (σ1, i) and ρ1 = { id ↦ (i, sT) }.

Function declaration.
                                                                 (30)
    ρ ⊢β ⟨function id(fps) = body0, σ⟩ → ⟨ρ0, σ⟩

    if ρ0 = {id ↦ λ fps . body1} and either body0 = body1 = extern : sT or, for each
    i ∈ {0, 1}, bodyi = let di in s result e, I = FI(body0) \ DI(fps) and d1 = ρ|I ; d0.
Recursive declaration.

    (ρ \ J) ⊢β[β1] ⟨g, σ⟩ → ⟨ρ0, σ0⟩    ρ ⊢β ⟨rec ρ0, σ0⟩ → η
    ────────────────────────────────────────────────────────────  (31)
    ρ ⊢β ⟨rec g, σ⟩ → η

    if J = FI(g) ∩ DI(g), β ⊢FI(g) g : β0 and β1 = β0|J.

Global sequential composition.

    ρ ⊢β ⟨g0, σ⟩ → ε
    ──────────────────────────                                   (32)
    ρ ⊢β ⟨g0 ; g1, σ⟩ → ε

    ρ ⊢β ⟨g0, σ⟩ → ⟨ρ0, σ0⟩    ρ[ρ0] ⊢β[β0] ⟨g1, σ0⟩ → ε
    ──────────────────────────────────────────────────────       (33)
    ρ ⊢β ⟨g0 ; g1, σ⟩ → ε

    if β ⊢FI(g0) g0 : β0;

    ρ ⊢β ⟨g0, σ⟩ → ⟨ρ0, σ0⟩    ρ[ρ0] ⊢β[β0] ⟨g1, σ0⟩ → ⟨ρ1, σ1⟩
    ──────────────────────────────────────────────────────────────  (34)
    ρ ⊢β ⟨g0 ; g1, σ⟩ → ⟨ρ0[ρ1], σ1⟩

    if β ⊢FI(g0) g0 : β0.

Local sequential composition.

    ρ ⊢β ⟨d0, σ⟩ → ε
    ──────────────────────────                                   (35)
    ρ ⊢β ⟨d0 ; d1, σ⟩ → ε

    ρ ⊢β ⟨d0, σ⟩ → ⟨ρ0, σ0⟩    ρ[ρ0] ⊢β[β0] ⟨d1, σ0⟩ → ε
    ──────────────────────────────────────────────────────       (36)
    ρ ⊢β ⟨d0 ; d1, σ⟩ → ε

    if β ⊢FI(d0) d0 : β0;

    ρ ⊢β ⟨d0, σ⟩ → ⟨ρ0, σ0⟩    ρ[ρ0] ⊢β[β0] ⟨d1, σ0⟩ → ⟨ρ1, σ1⟩
    ──────────────────────────────────────────────────────────────  (37)
    ρ ⊢β ⟨d0 ; d1, σ⟩ → ⟨ρ0[ρ1], σ1⟩

    if β ⊢FI(d0) d0 : β0.
5.5.3 Statements

Nop.
                                                                 (38)
    ρ ⊢β ⟨nop, σ⟩ → σ

Assignment.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────                                   (39)
    ρ ⊢β ⟨id := e, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨sval, σ0⟩
    ──────────────────────────────────────────                   (40)
    ρ ⊢β ⟨id := e, σ⟩ → σ0[ρ(id) := sval]

Statement sequence.

    ρ ⊢β ⟨s0, σ⟩ → ε
    ──────────────────────────                                   (41)
    ρ ⊢β ⟨s0 ; s1, σ⟩ → ε

    ρ ⊢β ⟨s0, σ⟩ → σ0    ρ ⊢β ⟨s1, σ0⟩ → η
    ─────────────────────────────────────────                    (42)
    ρ ⊢β ⟨s0 ; s1, σ⟩ → η
Block.

    ρ ⊢β ⟨d, marks(σ)⟩ → ε
    ──────────────────────────                                   (43)
    ρ ⊢β ⟨d; s, σ⟩ → ε

    ρ ⊢β ⟨d, marks(σ)⟩ → ⟨ρ0, σ0⟩    ρ[ρ0] ⊢β[β0] ⟨s, σ0⟩ → η
    ────────────────────────────────────────────────────────────  (44)
    ρ ⊢β ⟨d; s, σ⟩ → unmarks(η)

    if β ⊢FI(d) d : β0.
Conditional.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────────────────────                   (45)
    ρ ⊢β ⟨if e then s0 else s1, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨tt, σ0⟩    ρ ⊢β ⟨s0, σ0⟩ → η
    ──────────────────────────────────────────────               (46)
    ρ ⊢β ⟨if e then s0 else s1, σ⟩ → η

    ρ ⊢β ⟨e, σ⟩ → ⟨ff, σ0⟩    ρ ⊢β ⟨s1, σ0⟩ → η
    ──────────────────────────────────────────────               (47)
    ρ ⊢β ⟨if e then s0 else s1, σ⟩ → η

While.

    ρ ⊢β ⟨e, σ⟩ → ε
    ───────────────────────────────                              (48)
    ρ ⊢β ⟨while e do s, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨ff, σ0⟩
    ───────────────────────────────                              (49)
    ρ ⊢β ⟨while e do s, σ⟩ → σ0

    ρ ⊢β ⟨e, σ⟩ → ⟨tt, σ0⟩    ρ ⊢β ⟨s, σ0⟩ → ε
    ─────────────────────────────────────────────                (50)
    ρ ⊢β ⟨while e do s, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨tt, σ0⟩    ρ ⊢β ⟨s, σ0⟩ → σ1    ρ ⊢β ⟨while e do s, σ1⟩ → η
    ──────────────────────────────────────────────────────────────────────────  (51)
    ρ ⊢β ⟨while e do s, σ⟩ → η
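As for expressions, the statement rules read directly as an executor. The following Python sketch, continuing the previous fragments, covers rules (38)–(51); rule (51)'s recursion on the while configuration is rendered as a loop, so divergence of the concrete rules shows up as nontermination of the sketch.

    def exec_stmt(rho, s, sigma):
        """Return ("ok", sigma') or ("exc", sigma', xi), matching
        Ts = Mem (+) ExceptState."""
        tag = s[0]
        if tag == "nop":                        # rule (38)
            return ("ok", sigma)
        if tag == "assign":                     # rules (39)-(40)
            _, ident, e = s
            r = eval_exp(rho, e, sigma)
            if r[0] == "exc": return r
            a, sT = rho[ident]
            return write(r[2], a, sT, r[1])
        if tag == "seq":                        # rules (41)-(42)
            r = exec_stmt(rho, s[1], sigma)
            return r if r[0] == "exc" else exec_stmt(rho, s[2], r[1])
        if tag == "if":                         # rules (45)-(47)
            _, e, s0, s1 = s
            r = eval_exp(rho, e, sigma)
            if r[0] == "exc": return r
            return exec_stmt(rho, s0 if r[1] else s1, r[2])
        if tag == "while":                      # rules (48)-(51)
            _, e, body = s
            while True:
                r = eval_exp(rho, e, sigma)
                if r[0] == "exc": return r      # rule (48)
                if not r[1]:                    # rule (49)
                    return ("ok", r[2])
                rb = exec_stmt(rho, body, r[2])
                if rb[0] == "exc": return rb    # rule (50)
                sigma = rb[1]                   # rule (51)
        raise ValueError("unknown statement")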
Throw.
                                                                 (52)
    ρ ⊢β ⟨throw χ, σ⟩ → ⟨σ, χ⟩

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────                                   (53)
    ρ ⊢β ⟨throw e, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨sval, σ0⟩
    ────────────────────────────────                             (54)
    ρ ⊢β ⟨throw e, σ⟩ → ⟨σ0, sval⟩

Try blocks.

    ρ ⊢β ⟨s, σ⟩ → σ0
    ──────────────────────────────────                           (55)
    ρ ⊢β ⟨try s catch k, σ⟩ → σ0

    ρ ⊢β ⟨s, σ⟩ → ε0    ρ ⊢β ⟨k, ε0⟩ → ⟨u, η⟩
    ────────────────────────────────────────────                 (56)
    ρ ⊢β ⟨try s catch k, σ⟩ → η

    if u ∈ {caught, uncaught};
    ρ ⊢β ⟨s0, σ⟩ → σ0    ρ ⊢β ⟨s1, σ0⟩ → η
    ─────────────────────────────────────────                    (57)
    ρ ⊢β ⟨try s0 finally s1, σ⟩ → η

    ρ ⊢β ⟨s0, σ⟩ → ⟨σ0, ξ0⟩    ρ ⊢β ⟨s1, σ0⟩ → σ1
    ────────────────────────────────────────────────             (58)
    ρ ⊢β ⟨try s0 finally s1, σ⟩ → ⟨σ1, ξ0⟩

    ρ ⊢β ⟨s0, σ⟩ → ⟨σ0, ξ0⟩    ρ ⊢β ⟨s1, σ0⟩ → ε
    ────────────────────────────────────────────────             (59)
    ρ ⊢β ⟨try s0 finally s1, σ⟩ → ε
Function call. Consider the following conditions:

    β(id) = (fps → sT0),
    ρ(id) = λ id1 : sT1, . . . , idn : sTn . body,                              (60)
    d = (lvar x0 : sT0 = id0 ; lvar x1 : sT1 = e1 ; . . . ; lvar xn : sTn = en);

    ρ1 = { x0 ↦ (0, sT0) } ∪ { idj ↦ (j, sTj) | j = 1, . . . , n },
    ρ0 : β0,  ρ1 : β1.                                                          (61)

Then the rule schemata for function calls are the following:

    ρ ⊢β ⟨d, marks(σ)⟩ → ε
    ──────────────────────────────────────────────               (62)
    ρ ⊢β ⟨id0 := id(e1, . . . , en), σ⟩ → ε

    if (60) holds;

    ρ ⊢β ⟨d, marks(σ)⟩ → ⟨ρ0, σ0⟩    ρ[ρ1] ⊢β[β1] ⟨body, links(σ0)⟩ → ε
    ──────────────────────────────────────────────────────────────────────  (63)
    ρ ⊢β ⟨id0 := id(e1, . . . , en), σ⟩ → unmarks(unlinks(ε))

    if (60) and (61) hold;

    ρ ⊢β ⟨d, marks(σ)⟩ → ⟨ρ0, σ0⟩
    ρ[ρ1] ⊢β[β1] ⟨body, links(σ0)⟩ → σ1
    ρ[ρ0] ⊢β[β0] ⟨id0 := x0, unlinks(σ1)⟩ → η2
    ──────────────────────────────────────────────────           (64)
    ρ ⊢β ⟨id0 := id(e1, . . . , en), σ⟩ → unmarks(η2)

    if (60) and (61) hold.
Note that parameter passing is implemented by using reserved identifiers that reference
the return value (x0) and the actual arguments (x1, . . . , xn). When evaluating the
function body (i.e., after linking a new activation frame), the callee can access the
return value and the arguments' values by using the indirect locators 0 and 1, . . . , n,
respectively; to this end, the callee uses the environment ρ1, where the reserved
identifier x0 is still mapped to the return value, whereas the arguments are accessible
using the formal parameters' names id1, . . . , idn.
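Rules (62)–(64) fix a precise choreography for the two markers, which the following Python fragment makes explicit, continuing the sketch above. Here decl_eval, body_eval and ret_assign are stand-ins for the three premises (our own abstraction, not the paper's formulation), each returning tagged results as in the earlier fragments.

    def call(decl_eval, body_eval, ret_assign, sigma):
        r = decl_eval(mark_stack(sigma))          # slots for x0, arguments
        if r[0] == "exc":
            return r                              # rule (62)
        rb = body_eval(link_frame(r[1]))          # callee's frame begins
        if rb[0] == "exc":                        # rule (63)
            return ("exc", unmark_stack(unlink_frame(rb[1])), rb[2])
        ra = ret_assign(unlink_frame(rb[1]))      # id0 := x0, rule (64)
        if ra[0] == "exc":
            return ("exc", unmark_stack(ra[1]), ra[2])
        return ("ok", unmark_stack(ra[1]))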
5.5.4 Function Bodies.

    ρ ⊢β ⟨d, marks(σ)⟩ → ε
    ───────────────────────────────────────                      (65)
    ρ ⊢β ⟨let d in s result e, σ⟩ → ε

    ρ ⊢β ⟨d, marks(σ)⟩ → ⟨ρ0, σ0⟩    ρ[ρ0] ⊢β[β0] ⟨s, σ0⟩ → ε
    ────────────────────────────────────────────────────────────  (66)
    ρ ⊢β ⟨let d in s result e, σ⟩ → unmarks(ε)

    if β ⊢FI(d) d : β0;

    ρ ⊢β ⟨d, marks(σ)⟩ → ⟨ρ0, σ0⟩
    ρ[ρ0] ⊢β[β0] ⟨s, σ0⟩ → σ1
    ρ[ρ0] ⊢β[β0] ⟨x0 := e, σ1⟩ → η0
    ────────────────────────────────────────────────             (67)
    ρ ⊢β ⟨let d in s result e, σ⟩ → unmarks(η0)

    if β ⊢FI(d) d : β0;

                                                                 (68)
    ρ ⊢β ⟨extern : sT, (µ, w)⟩ → η

    if ∃σ0 = (µ0, w) ∈ Mem, ξ ∈ Except . η = σ0 ∨ η = ⟨σ0, ξ⟩.
5.5.5 Catch Clauses

Catch.

    ρ ⊢β ⟨s, σ⟩ → η0
    ─────────────────────────────────────────                    (69)
    ρ ⊢β ⟨(p) s, (σ, ξ)⟩ → ⟨caught, η0⟩

    if p = ξ ∈ RTSExcept, or p = type(ξ), or p = any;

                                                                 (70)
    ρ ⊢β ⟨(id : sT) s, (σ, sval)⟩ → ⟨caught, unmarks(ε0)⟩

    if sT = type(sval) and ε0 = news(sval, marks(σ));

    ρ{id ↦ (i, sT)} ⊢β[{id↦sT loc}] ⟨s, σ0⟩ → η0
    ─────────────────────────────────────────────────────────    (71)
    ρ ⊢β ⟨(id : sT) s, (σ, sval)⟩ → ⟨caught, unmarks(η0)⟩

    if sT = type(sval) and (σ0, i) = news(sval, marks(σ));

                                                                 (72)
    ρ ⊢β ⟨(p) s, (σ, ξ)⟩ → ⟨uncaught, (σ, ξ)⟩

    if, letting cT = type(ξ), we have p ∉ {ξ, cT, any} and ∀id ∈ Id : p ≠ (id : cT).

Catch sequence.

    ρ ⊢β ⟨k0, ε⟩ → ⟨caught, η0⟩
    ──────────────────────────────────────                       (73)
    ρ ⊢β ⟨k0 ; k1, ε⟩ → ⟨caught, η0⟩

    ρ ⊢β ⟨k0, ε⟩ → ⟨uncaught, ε0⟩    ρ ⊢β ⟨k1, ε0⟩ → η
    ─────────────────────────────────────────────────────        (74)
    ρ ⊢β ⟨k0 ; k1, ε⟩ → η
5.6 Concrete Divergence Relation

In order to capture divergent computations, we follow the approach of Cousot and
Cousot [CC92c], also advocated by Schmidt [Sch98] and Leroy [Ler06]. This consists
in introducing a divergence relation by means of sequents of the form ρ ⊢β N →∞,
where N ∈ Γq and q ∈ {s, b, k}. Intuitively, a divergence sequent of the form,
say, ρ ⊢β ⟨s, σ⟩ →∞ means that, in the context given by ρ and σ, the execution of
statement s diverges. We now give a set of rules that (interpreted coinductively,
as we will see later) allow us to characterize the behavior of divergent computations.
For instance, the following rule schemata characterize the divergence behavior of
statement sequences (double bars mark rules to be interpreted coinductively):

    ρ ⊢β ⟨s0, σ⟩ →∞                ρ ⊢β ⟨s0, σ⟩ → σ0    ρ ⊢β ⟨s1, σ0⟩ →∞
    ═══════════════════════        ═══════════════════════════════════════
    ρ ⊢β ⟨s0 ; s1, σ⟩ →∞           ρ ⊢β ⟨s0 ; s1, σ⟩ →∞
Notice that, once the set of concrete rules characterizing finite computations is
known, the concrete rules modeling divergence can be specified systematically
(and thus implicitly). Namely, for each concrete rule

    P0  · · ·  Pi−1    ρi ⊢βi Ni → ηi    Pi+1  · · ·  Ph−1
    ──────────────────────────────────────────────────────  (side condition)   (75)
    ρ ⊢β N → η

such that 0 ≤ i < h and, for qi, q ∈ {s, b, k}, Ni ∈ Γβi qi and N ∈ Γβq, there is the
corresponding divergence rule where the i-th premise is diverging, i.e.,

    P0  · · ·  Pi−1    ρi ⊢βi Ni →∞
    ═══════════════════════════════  (side condition)
    ρ ⊢β N →∞

Therefore, there are two rules above modeling the divergence of statement
sequences, which can be obtained from rule (42). It is worth noting that a single
divergence rule schema can be obtained from more than one of the concrete rules
in Section 5.5.
We will use the terms negative and positive to distinguish the different kinds of
rules constructed in this and the previous section, respectively.

Definition 5.5. (Concrete semantics rules.) The set R+ (resp., R−) of
positive (resp., negative) concrete semantics rules is the infinite set obtained by
instantiating the rule schemata of Section 5.5 (resp., Section 5.6) in all possible
ways (respecting, of course, the side conditions). Moreover, R ≝ R+ ⊎ R−.
5.7 Concrete Semantics Trees

The concrete semantics of a program is a (possibly infinite) set of finite or infinite
trees. Such trees are defined in terms of the (infinite) set of instances of the rules
defined in the previous two sections.

Let S be the (infinite) set of sequents occurring in the premises and conclusions
of the rules in R. The concrete semantics universe, denoted by U, is the set of
finitely branching trees of at most ω-depth with labels in S.

Definition 5.6. (Concrete semantics universe.) A set P ⊆ N⋆ is prefix-closed
if, for each z ∈ N⋆ and each n ∈ N, zn ∈ P implies z ∈ P. A set P ⊆ N⋆ is
canonical if, for each z ∈ N⋆ there exists h ∈ N such that

    { n ∈ N | zn ∈ P } = {0, . . . , h − 1}.

An S-tree is a partial function θ : N⋆ ⇀ S such that dom(θ) is prefix-closed and
canonical. The concrete semantics universe U is the set of all S-trees.
For each p ∈ dom(θ), the tree θ[p] defined, for each z ∈ N⋆, by θ[p](z) ≝ θ(pz),
is called a subtree of θ; it is called a proper subtree if p ≠ ǫ. If dom(θ) = ∅,
then θ is the empty tree. If θ is not empty, then θ(ǫ) is the root of θ and, if
{0, . . . , h − 1} ⊆ dom(θ) and h ∉ dom(θ), then θ[0], . . . , θ[h−1] are its immediate
subtrees (note that h ∈ N may be zero); in this case θ can be denoted by

    θ[0]  · · ·  θ[h−1]
    ───────────────────
    θ(ǫ)
Definition 5.7. (Concrete semantics trees.) In the following we write
(θ0 · · · θh−1 / s) for the tree with root s and immediate subtrees θ0, . . . , θh−1.
Let F+ : ℘(U) → ℘(U) be the continuous function over the complete lattice
(℘(U), ⊆) given, for all U ∈ ℘(U), by

    F+(U) ≝ { (θ0 · · · θh−1 / s) | θ0, . . . , θh−1 ∈ U, (θ0(ǫ) · · · θh−1(ǫ) / s) ∈ R+ }.

The set of positive concrete semantics trees is Θ+ ≝ lfp⊆(F+). Consider now the
co-continuous function F− : ℘(U) → ℘(U) given, for each U ∈ ℘(U), by

    F−(U) ≝ { (θ0 · · · θh−1 / s) | θ0, . . . , θh−2 ∈ Θ+, θh−1 ∈ U,
                                    (θ0(ǫ) · · · θh−1(ǫ) / s) ∈ R− }.

The set of negative concrete semantics trees is Θ− ≝ gfp⊆(F−). The set of all
concrete semantics trees is Θ ≝ Θ+ ⊎ Θ−.
We now show that, for every concrete non-terminal configuration, there exists a
concrete semantics tree with that configuration in the root.

Proposition 5.8. For each β ∈ TEnv, ρ ∈ Env such that ρ : β and N ∈ Γβq,
where q ∈ {e, d, g, s, b, k}, there exists θ ∈ Θ such that

    θ(ǫ) ∈ { (ρ ⊢β N → η) | η ∈ Tq } ⊎ { (ρ ⊢β N →∞) }.

Proof. If q = e and η ∈ Te, we say that the sequent (ρ ⊢β N → η) is well-typed
if N = ⟨e, σ0⟩ and η = ⟨sval, σ1⟩ imply β ⊢ e : type(sval). For the proof, let

    S+(ρ, β, N) ≝ { s | s = (ρ ⊢β N → η), η ∈ Tq, (q = e =⇒ s is well-typed) }.

We now assume that N ∈ Γβq is a fixed but arbitrary non-terminal configuration.
It suffices to show there exists θ ∈ Θ such that θ(ǫ) ∈ S+(ρ, β, N) ⊎ { (ρ ⊢β N →∞) }.
Let R0 be the set of all rules in R+ whose conclusions are in S+(ρ, β, N). By
inspecting the concrete evaluation rule schemata in Section 5.5, R0 ≠ ∅. Let j ≥ 0
be the maximal value for which there exist finite trees θ0, . . . , θj−1 ∈ Θ+ where
P0 = θ0(ǫ), . . . , Pj−1 = θj−1(ǫ) are the first j premises of a rule in R0. Let Rj ⊆ R0
be the set of all rules in R0 with P0, . . . , Pj−1 as their first j premises; then Rj ≠ ∅.
By inspecting the rule schemata in Section 5.5, it can be seen that, if there exists
a rule (P0 · · · Pj−1 Pj′ · · · / s′) ∈ Rj for some Pj′ ∈ S+(ρj, βj, Nj) and s′ ∈ S+(ρ, β, N),
then⁶

    ∀Pj ∈ S+(ρj, βj, Nj) : ∃s ∈ S+(ρ, β, N) . (P0 · · · Pj−1 Pj · · · / s) ∈ Rj.    (76)

Suppose that q ∈ {e, d, g} so that we can also assume N = ⟨u, σ⟩. We show by
structural induction on u that there exists θ ∈ Θ+ such that θ(ǫ) ∈ S+(ρ, β, N).
By inspecting the rule schemata in Section 5.5, it can be seen that, if u is atomic,
the rules in R0 have no premises (so that j = 0) and hence, letting θ ∈ Θ+
be the singleton tree consisting of the conclusion of a rule in R0, we obtain that
θ(ǫ) ∈ S+(ρ, β, N). Otherwise, u is not atomic, and we show that each of the rules in
Rj has exactly j premises; to do this, we assume there exists a rule in Rj with a
(j + 1)-th premise Pj and derive a contradiction. Let Nj ∈ Γβj qj be the non-terminal
configuration in Pj. By inspecting the rule schemata in Section 5.5 in the case that
q ∈ {e, d, g}, it can be seen that:

(i) qj ∈ {e, d, g}, so that Nj has the form ⟨uj, σj⟩;
(ii) uj is a substructure of u unless Rj consists of instances of the schematic
rule (31) and j = 1.

If uj is a substructure of u, by property (i), we can apply structural induction to
obtain that there exists a finite tree θj ∈ Θ+ such that Pj = θj(ǫ) ∈ S+(ρj, βj, Nj);
hence, by property (76), there exists a rule in Rj having Pj as its (j + 1)-th premise,
contradicting the assumption that j was maximal. Otherwise, by property (ii), if
uj is not a substructure of u, the rules in R0 must be instances of rule schema (31)
and j = 1; in this case, rule schema (23), which has no premises, can be instantiated
with the second premise of a rule in Rj as its conclusion; and again we have a
contradiction. Thus, for any uj, all rules in Rj have exactly j premises. By
Definition 5.7, θ = (θ0 · · · θj−1 / s) ∈ Θ+ for some s ∈ S+(ρ, β, N). Therefore, since
Θ+ ⊆ Θ, the thesis holds when q ∈ {e, d, g}.
Suppose now that q ∈ {s, b, k}. We prove that, if there does not exist a tree
θ ∈ Θ+ such that θ(ǫ) ∈ S+(ρ, β, N), then, for all n ≥ 0, there exists a tree θ
such that θ(ǫ) = s∞ ≝ (ρ ⊢β N →∞) and θ ∈ F−ⁿ(U). To this end, we reason by
induction on n ≥ 0. By our assumption that there is no tree θ ∈ Θ+ such that
θ(ǫ) ∈ S+(ρ, β, N), there must exist a rule

    (P0 · · · Pj−1 Pj · · · / s) ∈ Rj

for some Pj ∈ S+(ρj, βj, Nj); let qj be such that Nj ∈ Γβj qj. By the maximality
of j, there is no tree in Θ+ whose root is Pj. We have already shown that, if
⁶ To help understand this property, we illustrate it in the case that q = e and the
non-terminal configuration is N = ⟨b0 and b1, σ⟩; hence the concrete rule schemata
(15)–(17) will apply. In all the rule instances, the first premise is of the form
P0 = (ρ ⊢β N0 → η0), where N0 = ⟨b0, σ⟩; as a consequence, we have
S+(ρ, β, N0) = { (ρ ⊢β N0 → η0) | η0 ∈ B }, where B ≝ ExceptState ⊎
{ ⟨t, σ0⟩ ∈ Te | t ∈ Bool, σ0 ∈ Mem }. Thus, for each terminal configuration η0 ∈ B,
there is a rule instance having η0 in its first premise: we instantiate rule (15) when
η0 = ε, rule (16) when η0 = ⟨ff, σ0⟩ and rule (17) when η0 = ⟨tt, σ0⟩. Thus property (76)
holds for j = 0. Moreover, although only rule (17) applies when j = 1, the terminal
configuration for the second premise (P1) is just any terminal configuration in Te.
Thus property (76) also holds for j = 1.
qj ∈ {e, d, g}, then there exists a tree θj ∈ Θ+ such that θj(ǫ) ∈ S+(ρj, βj, Nj);
thus, by property (76), there must be a rule in Rj whose (j + 1)-th premise is θj(ǫ),
contradicting the assumption that j ≥ 0 is maximal. Hence qj ∈ {s, b, k}. By
the definition of the negative concrete semantics rules in Section 5.6, there exists a
corresponding negative rule

    (P0 · · · Pj−1 P∞ / s∞) ∈ R−

such that P∞ = (ρj ⊢βj Nj →∞). Hence, by Definition 5.6, there exists a tree in
U = F−⁰(U) with root s∞, so that the inductive hypothesis holds for n = 0. Suppose
now that n > 0. By the inductive hypothesis, there exists a tree θ∞ ∈ F−ⁿ⁻¹(U)
such that θ∞(ǫ) = P∞. Hence, by Definition 5.7, (θ0 · · · θj−1 θ∞ / s∞) ∈ F−ⁿ(U).
Thus, for all n ≥ 0, there exists a tree in F−ⁿ(U) with root s∞ and hence, by
Definition 5.7, there exists a tree in Θ− with root s∞. Since Θ = Θ+ ⊎ Θ−, the
thesis holds when q ∈ {s, b, k}.
The concrete semantics of a valid program g with respect to the initial memory
structure σi ≝ (∅, ǫ) ∈ Mem is a set of concrete semantics trees. This set will
always include a tree θ0 ∈ Θ (which, by Proposition 5.8, must exist) such that

    θ0(ǫ) = (∅ ⊢∅ ⟨(g; gvar x : integer = 0), σi⟩ → η0).

If η0 = ε0, i.e., an RTS exception is thrown during the evaluation of g, then the
concrete semantics is {θ0}. If, instead, η0 = ⟨ρ0, σ0⟩, then the concrete semantics is

    {θ0} ∪ { θ ∈ Θ | θ(ǫ) = (ρ0 ⊢β N → η) or θ(ǫ) = (ρ0 ⊢β N →∞) },

where N = ⟨(x := main()), σ0⟩ ∈ Γβs and ∅ ⊢∅ (g; gvar x : integer = 0) : β.
The concrete semantics for CPM we have just presented, extended as indicated in
Section 9, allows us to reason about a number of interesting program safety properties
(such as the absence of division-by-zero and other run-time errors) as well as
termination and computational complexity. In the next section, we will see how the
usually non-computable concrete semantics can be given an abstract counterpart that is
amenable to effective computation.
6. ABSTRACT DYNAMIC SEMANTICS
For the specification of the abstract semantics, we mainly follow the approach
outlined in the works by Schmidt [Sch95; Sch97; Sch98]. The specification of the
abstract semantics requires that appropriate abstract domains are chosen to provide
correct approximations for the values that are involved in the concrete computation
[CC77a; CC79; CC92a; CC92c]. For the sake of generality and extensibility, we will
not target any specific abstraction, but rather consider arbitrary abstract domains
that satisfy a limited set of properties that are sufficient to provide the correctness
of the overall analysis without compromising its potential precision.
6.1 Abstract Semantic Domains

We adopt the framework proposed in [CC92a, Section 7], where the correspondence
between the concrete and the abstract domains is induced from a concrete approximation
relation and a concretization function. For the sole purpose of simplifying the
presentation, we will consider a particular instance of the framework by assuming a few
additional but non-essential domain properties. The resulting construction is adequate
for our purposes and still allows for algebraically weak abstract domains, such as the
domain of convex polyhedra [CH78].
A concrete domain is modeled as a complete lattice (C, ⊑, ⊥, ⊤, ⊓, ⊔) of semantic properties; as usual, the concrete approximation relation c1 ⊑ c2 holds if c1
is a stronger property than c2 (i.e., c2 approximates c1 ). An abstract domain is
modeled as a bounded join-semilattice (D♯ , ⊑♯ , ⊥♯ , ⊔♯ ), so that it has a bottom
element ⊥♯ and the least upper bound d♯1 ⊔♯ d♯2 exists for all d♯1 , d♯2 ∈ D♯ . When
the abstract domain is also provided with a top element ⊤♯ ∈ D♯ , we will write
(D♯ , ⊑♯ , ⊥♯ , ⊤♯ , ⊔♯ ). The abstract domain D♯ is related to C by a monotonic concretization function γ : D♯ → C: in words, C is approximated by D♯ through γ;
this approximation is said to be strict if γ is a strict function.⁷
In order to compute approximations for specific concrete objects, we assume the
existence of a partial abstraction function α : C ⇀ D♯ such that, for each c ∈ C,
if α(c) is defined then c ⊑ γ(α(c)). In particular, we assume that α(⊥) = ⊥♯ is
always defined; if an abstract top element exists, then α(⊤) = ⊤♯ is also defined.
When needed or useful, we will require a few additional properties.
Most of the concrete domains used in the concrete semantics construction are obtained
as the powerset lattice (℘(D), ⊆, ∅, D, ∩, ∪) of some set of concrete objects D. In
such a situation, for each concrete object d ∈ D and abstract element d♯ ∈ D♯ such
that the corresponding domains are related by the concretization function
γ : D♯ → ℘(D), we write d ∝ d♯ and d ̸∝ d♯ to denote the assertions d ∈ γ(d♯) and
d ∉ γ(d♯), respectively. For a lighter notation, we denote ⊑♯, ⊥♯, ⊤♯ and ⊔♯ by ⊑,
⊥, ⊤ and ⊔, respectively. We also overload the symbols ⊑, ⊥, ⊤, ⊔, γ and α: the
context will always make clear which incarnation has to be considered.

The approximations of composite concrete domains are typically obtained by suitably
combining the approximations already available for their basic components. For
i = 1, 2, let Di be a set of concrete objects and consider the corresponding powerset
lattice (℘(Di), ⊆, ∅, Di, ∩, ∪); let also Di♯ be an abstract domain related to ℘(Di)
by the concretization function γi : Di♯ → ℘(Di).
6.1.1 Approximation of Cartesian Products. Values of the Cartesian product D1 × D2
can be approximated by elements of the Cartesian product D1♯ × D2♯. Namely, the
component-wise ordered abstract domain (D1♯ × D2♯, ⊑, ⊥, ⊔) is related to the concrete
powerset lattice (℘(D1 × D2), ⊆, ∅, D1 × D2, ∩, ∪) by the concretization function
γ : (D1♯ × D2♯) → ℘(D1 × D2) defined, for each (d1♯, d2♯) ∈ D1♯ × D2♯, by

    γ(d1♯, d2♯) ≝ { (d1, d2) ∈ D1 × D2 | d1 ∈ γ1(d1♯), d2 ∈ γ2(d2♯) }.    (77)

Hence, (d1, d2) ∝ (d1♯, d2♯) holds if and only if d1 ∝ d1♯ and d2 ∝ d2♯.
If the underlying approximations D1♯ and D2♯ are both strict, then a better
approximation scheme can be obtained by adopting the strict product (also called
smash product) construction, which performs a simple form of reduction by collapsing
(d1♯, d2♯) to the bottom element whenever d1♯ = ⊥ or d2♯ = ⊥. Namely,

    D1♯ ⊗ D2♯ ≝ { (d1♯, d2♯) ∈ D1♯ × D2♯ | d1♯ = ⊥ if and only if d2♯ = ⊥ }.

The concretization function is defined exactly as in (77). The constructor function
· ⊗ · : (D1♯ × D2♯) → (D1♯ ⊗ D2♯) is defined by

    d1♯ ⊗ d2♯ ≝  (d1♯, d2♯),  if d1♯ ≠ ⊥ and d2♯ ≠ ⊥;
                 ⊥,           otherwise.

⁷ Let f : D1 × · · · × Dn → D0, where (Di, ⊑i, ⊥i, ⊔i) is a bounded join-semilattice
for each i = 0, . . . , n. Then, function f is strict on the i-th argument if di = ⊥i
implies f(d1, . . . , dn) = ⊥0.
6.1.2 Approximation of Disjoint Unions. In order to provide an abstract domain
approximating sets of concrete objects drawn from a disjoint union, we use the
following well-known construction several times.

Suppose that D1 ∩ D2 = ∅. Then, values of the disjoint union D = D1 ⊎ D2 can be
approximated by elements of the Cartesian product D♯ = D1♯ × D2♯. In this case, the
abstract domain D♯ is related to the concrete powerset lattice (℘(D), ⊆, ∅, D, ∩, ∪)
by means of the concretization function γ : (D1♯ × D2♯) → ℘(D1 ⊎ D2) defined, for
each (d1♯, d2♯) ∈ D1♯ × D2♯, by

    γ(d1♯, d2♯) ≝ γ1(d1♯) ⊎ γ2(d2♯).

Therefore, the approximation provided by D♯ is strict if both D1♯ and D2♯ are so.
In order to simplify notation, if d1♯ ∈ D1♯ then we will sometimes write d1♯ to also
denote the abstract element (d1♯, ⊥) ∈ D♯; similarly, d2♯ ∈ D2♯ also denotes the
abstract element (⊥, d2♯) ∈ D♯. As usual, for each i = 1, 2 and di ∈ Di, the
notation di ∝ (d1♯, d2♯) stands for the assertion di ∈ γ(d1♯, d2♯), which is
equivalent to di ∈ γi(di♯). For the sake of clarity, the abstract domain D♯ as
specified above will be denoted by D1♯ ⊎♯ D2♯. It is worth stressing that
D1♯ ⊎♯ D2♯ ≠ D1♯ ⊎ D2♯.
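The two combinators are easy to make concrete. The following Python sketch (illustrative names; abstract elements with finite concretizations) shows the concretization of the product per (77), the tagged concretization of the disjoint-union construction, and the smash-product constructor:

    def gamma_product(g1, g2, d):
        """(77): gamma(d1#, d2#) = gamma1(d1#) x gamma2(d2#)."""
        d1, d2 = d
        return {(x, y) for x in g1(d1) for y in g2(d2)}

    def gamma_union(g1, g2, d):
        """gamma(d1#, d2#) = gamma1(d1#) (+) gamma2(d2#); tagging keeps
        the union disjoint."""
        d1, d2 = d
        return {("inl", x) for x in g1(d1)} | {("inr", y) for y in g2(d2)}

    def smash(d1, d2, bot="bot"):
        """Strict-product constructor: collapse to bottom whenever either
        component is bottom."""
        return bot if d1 == bot or d2 == bot else (d1, d2)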
6.2 Approximation of Integers

The concrete domain of integers (℘(Integer), ⊆, ∅, Integer, ∩, ∪) is correctly
approximated by an abstract domain (Integer♯, ⊑, ⊥, ⊤, ⊔), where we assume that
γ is strict. Elements of Integer♯ are denoted by m♯, m0♯, m1♯ and so forth. We
assume that the partial abstraction function α : ℘(Integer) ⇀ Integer♯ is defined
on all singletons {m} ∈ ℘(Integer). We also assume that there are abstract binary
operations '⊕', '⊖', '⊙', '⊘' and '⊛' on Integer♯ that are strict on each argument
and sound with respect to the corresponding operations on ℘(Integer) which, in turn,
are the obvious pointwise extensions of addition, subtraction, multiplication,
division and remainder over the integers. More formally, we require

    γ(m0♯ ⊕ m1♯) ⊇ { m0 + m1 | m0 ∈ γ(m0♯), m1 ∈ γ(m1♯) }

for each m0♯, m1♯ ∈ Integer♯, to ensure that '⊕' is sound with respect to addition.
Likewise for '⊖' and '⊙' with respect to subtraction and multiplication,
respectively. For the '⊘' operation we require soundness with respect to integer
division, i.e., that, for each m0♯, m1♯ ∈ Integer♯,

    γ(m0♯ ⊘ m1♯) ⊇ { m0 ÷ m1 | m0 ∈ γ(m0♯), m1 ∈ γ(m1♯), m1 ≠ 0 }.

Likewise for '⊛' with respect to the 'mod' operation. We also assume there is a
unary abstract operation, denoted by '⊖', which is strict and sound with respect
to the unary minus concrete operation, that is, γ(⊖m♯) ⊇ { −m | m ∈ γ(m♯) }.
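A classic instance of Integer♯ is the interval domain. The following Python sketch (ours, and restricted to finite bounds to keep it short; a full implementation would also handle open bounds and a top element) shows sound, strict versions of a few of the operations; Python's floor division stands in for the concrete ÷.

    # Bottom is None; every other element is a pair (lo, hi) with lo <= hi.

    def alpha(m):
        """Abstraction of the singleton {m}."""
        return (m, m)

    def join(a, b):
        if a is None: return b
        if b is None: return a
        return (min(a[0], b[0]), max(a[1], b[1]))

    def add(a, b):
        """Sound abstract '+': every m0 + m1 with m0 in gamma(a) and
        m1 in gamma(b) lies in gamma(add(a, b)); strict on each argument."""
        if a is None or b is None: return None
        return (a[0] + b[0], a[1] + b[1])

    def neg(a):
        return None if a is None else (-a[1], -a[0])

    def mul(a, b):
        if a is None or b is None: return None
        ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(ps), max(ps))

    def div(a, b):
        """Sound abstract division: only nonzero divisors count, matching
        the soundness requirement above (enumeration: small ranges only)."""
        if a is None or b is None: return None
        ds = [d for d in range(b[0], b[1] + 1) if d != 0]
        if not ds: return None          # the divisor can only be zero
        qs = [m // d for m in (a[0], a[1]) for d in ds]
        return (min(qs), max(qs))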
6.3 Approximation of Booleans

We assume a complete lattice (Bool♯, ⊑, ⊥, ⊤, ⊓, ⊔) is given that is related to the
concrete domain of Booleans (℘(Bool), ⊆, ∅, Bool, ∩, ∪) by means of a Galois
connection where γ is strict. Elements of Bool♯ are denoted by t♯, t0♯, t1♯ and so
forth. We assume that there are abstract operations '¬♯', '∨♯' and '∧♯' on Bool♯
that are strict on each argument and sound with respect to the pointwise extensions
of Boolean negation, disjunction and conjunction over ℘(Bool). For instance, for the
operation '∨♯' to be sound with respect to disjunction on ℘(Bool), it is required
that

    γ(t0♯ ∨♯ t1♯) ⊇ { t0 ∨ t1 | t0 ∈ γ(t0♯), t1 ∈ γ(t1♯) }

for each t0♯ and t1♯ in Bool♯. Likewise for '∧♯'. For operation '¬♯' to be sound
with respect to negation on ℘(Bool), we require that, for each t♯ in Bool♯,
γ(¬♯ t♯) ⊇ { ¬t | t ∈ γ(t♯) }.

Furthermore, we assume that there are abstract operations '=♯', '≠♯', '<♯', '≤♯',
'≥♯' and '>♯' on Integer♯ that are strict on each argument and sound with respect to
the pointwise extensions over ℘(Integer) of the corresponding relational operators
'=', '≠', '<', '≤', '≥' and '>' over the integers, considered as functions taking
values in Bool. For instance, for the operation '=♯' to be sound with respect to
equality on ℘(Integer), we require that

    γ(m0♯ =♯ m1♯) ⊇ { m0 = m1 | m0 ∈ γ(m0♯), m1 ∈ γ(m1♯) }

for each m0♯, m1♯ ∈ Integer♯. Likewise for '≠♯', '<♯', '≤♯', '≥♯' and '>♯'.
6.4 Approximation of Storable Values

The concrete domain of storable values (℘(sVal), ⊆, ∅, sVal, ∩, ∪), including both
integers and Booleans, is abstracted by the domain sVal♯ ≝ Integer♯ ⊎♯ Bool♯. The
hypotheses on Integer♯ and Bool♯ imply that the approximation is strict.
6.5 Approximation of Exceptions

For the approximation of RTS exceptions, we assume that there is an abstract domain
(RTSExcept♯, ⊑, ⊥, ⊤, ⊔), which is related to the concrete powerset domain
(℘(RTSExcept), ⊆, ∅, RTSExcept, ∩, ∪) by a strict concretization function. The
partial abstraction function α : ℘(RTSExcept) ⇀ RTSExcept♯ is assumed to be defined
on all singletons. Elements of RTSExcept♯ are denoted by χ♯, χ0♯, χ1♯ and so forth.

Generic exceptions, including both RTS exceptions and user-defined exceptions, are
approximated by elements of the domain Except♯ ≝ RTSExcept♯ ⊎♯ sVal♯. The hypotheses
on its components imply that the approximation is strict. Elements of Except♯ are
denoted by ξ♯, ξ0♯, ξ1♯ and so forth.
6.6 Approximation of Memory Structures, Value States and Exception States

Here we differ from other published abstract semantics in that we explicitly cater for
relational abstract domains as well as for attribute-independent ones [CC79]. While
this complicates the presentation, it results in a truly generic abstract semantics.
Moreover, the approach presented here is, all things considered, quite simple and is
reflected in a modular, clean design of the analyzer.
Definition 6.1. (Mem♯, ValState♯, ExceptState♯.) We assume there exists an
abstract domain (Mem♯, ⊑, ⊥, ⊔) that is related, by means of a strict concretization
function, to the concrete powerset domain (℘(Mem), ⊆, ∅, Mem, ∩, ∪). Elements of
Mem♯ are denoted by σ♯, σ0♯, σ1♯ and so forth. We assume that, for each σ ∈ Mem,
there exists σ♯ ∈ Mem♯ such that σ ∝ σ♯.

The abstract domain of value states is ValState♯ ≝ sVal♯ ⊗ Mem♯. Elements of
ValState♯ will be denoted by υ♯, υ0♯, υ1♯ and so forth.

The abstract domain of exception states is ExceptState♯ ≝ Mem♯ ⊗ Except♯.
Elements of ExceptState♯ will be denoted by ε♯, ε0♯, ε1♯ and so forth. To improve
readability, none♯ will denote the bottom element ⊥ ∈ ExceptState♯, indicating that
no exception is possible.

The abstract memory structure read and update operators

    ·[·, ·] : (Mem♯ × Addr × sType) → (ValState♯ ⊎♯ ExceptState♯),
    ·[· :=♯ ·] : (Mem♯ × (Addr × sType) × sVal♯) → (Mem♯ ⊎♯ ExceptState♯)

are assumed to be such that, for each σ♯ ∈ Mem♯, a ∈ Addr, sT ∈ sType and
sval♯ ∈ sVal♯:

    γ(σ♯[a, sT]) ⊇ { σ[a, sT] | σ ∈ γ(σ♯) },
    γ(σ♯[(a, sT) :=♯ sval♯]) ⊇ { σ[(a, sT) := sval] | σ ∈ γ(σ♯), sval ∈ γ(sval♯) }.

The abstract data and stack memory allocation functions

    newd♯ : ValState♯ → ((Mem♯ × Loc) ⊎♯ ExceptState♯),
    news♯ : ValState♯ → ((Mem♯ × Ind) ⊎♯ ExceptState♯)

are assumed to be such that, for each υ ∈ ValState and υ♯ ∈ ValState♯ such that
υ ∈ γ(υ♯), and each h ∈ {d, s}: if newh(υ) = (σ, a) (resp., newh(υ) = ε) and
newh♯(υ♯) = ((σ♯, a′), ε♯), then σ ∈ γ(σ♯) and a = a′ (resp., ε ∈ γ(ε♯)).

The abstract memory structure data cleanup function

    cleanupd♯ : ExceptState♯ → ExceptState♯

is such that, for each ε♯ ∈ ExceptState♯, we have

    γ(cleanupd♯(ε♯)) ⊇ { cleanupd(ε) | ε ∈ γ(ε♯) }.

The abstract functions

    {mark♯s, unmark♯s, link♯s, unlink♯s} ⊆ Mem♯ → Mem♯

are defined to be such that, for each σ♯ ∈ Mem♯:

    γ(mark♯s(σ♯)) ⊇ { marks(σ) | σ ∈ γ(σ♯) },
    γ(unmark♯s(σ♯)) ⊇ { unmarks(σ) | σ ∈ γ(σ♯) and unmarks(σ) is defined },
    γ(link♯s(σ♯)) ⊇ { links(σ) | σ ∈ γ(σ♯) and links(σ) is defined },
    γ(unlink♯s(σ♯)) ⊇ { unlinks(σ) | σ ∈ γ(σ♯) and unlinks(σ) is defined }.

It is assumed that all the abstract operators mentioned above are strict on each of
their arguments taken from an abstract domain.
As done in the concrete, the abstract stack unmark and the abstract frame unlink
functions are lifted to also work on abstract exception states. Namely, for each
ε♯ = (σ♯, ξ♯) ∈ ExceptState♯,

    unmark♯s(σ♯, ξ♯) ≝ (unmark♯s(σ♯), ξ♯),
    unlink♯s(σ♯, ξ♯) ≝ (unlink♯s(σ♯), ξ♯).
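One simple non-relational instance of Mem♯ maps every visible cell to an interval. The Python sketch below (ours; it reuses the interval join above and glosses over the frame/marker operators, summarized cells, and weak updates, which a real analyzer must treat carefully) shows abstract read, update and join:

    class AbsMem:
        def __init__(self, cells=None):
            # cells: dict mapping (address, sType) to an interval, or
            # None for bottom; absent keys are treated as bottom here.
            self.cells = dict(cells or {})

        def read(self, a, sT):
            """Abstract sigma#[a, sT]: over-approximates all concrete
            reads of the cell (None also models a possible memerror)."""
            return self.cells.get((a, sT))

        def write(self, a, sT, ival):
            # strong update: sound only when (a, sT) denotes one cell
            new = dict(self.cells)
            new[(a, sT)] = ival
            return AbsMem(new)

        def join(self, other):
            keys = set(self.cells) | set(other.cells)
            return AbsMem({k: join(self.cells.get(k), other.cells.get(k))
                           for k in keys})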
Besides the abstract operators specified above, which closely mimic the concrete
operators related to concrete memory structures and exception states, other abstract operators will be used in the abstract semantics construction so as to enhance
its precision.
When dealing with Boolean guards during the abstract evaluation of conditional
and iteration statements, it might be the case that no definite information is available. In such a situation, the abstract execution can be made more precise if the
abstract memory structure is filtered according to the condition holding in the
considered computation branch.
Definition 6.2. (Memory structure filter.) An abstract memory structure filter is
any computable function φ : (Env × Mem♯ × Exp) → Mem♯ such that, for each e ∈ Exp,
each β : I with FI(e) ⊆ I and β ⊢I e : boolean, for each ρ ∈ Env with ρ : β and each
σ♯ ∈ Mem♯, if φ(ρ, σ♯, e) = σtt♯, then

    γ(σtt♯) ⊇ { σtt ∈ Mem | σ ∈ γ(σ♯), ρ ⊢β ⟨e, σ⟩ → ⟨tt, σtt⟩ }.
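Note that the identity on Mem♯ is a (maximally imprecise) filter. A more useful one for the interval instance above refines the cell read by a guard of the form id < k in the true branch; a minimal sketch with our own names:

    def filter_lt_const(absmem, rho, ident, k):
        """Refine absmem under the assumption "ident < k" (k a constant)."""
        a, sT = rho[ident]
        iv = absmem.read(a, sT)
        if iv is None:
            return absmem
        lo, hi = iv
        refined = None if lo > k - 1 else (lo, min(hi, k - 1))
        return absmem.write(a, sT, refined)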
Similarly, abstract exception states can be filtered according to whether or not they
can be caught by the guard of a catch clause.

Definition 6.3. (Exception state filters and selectors.) The abstract exception
state filters are computable functions

    φ+, φ− : (exceptDecl × ExceptState♯) → ExceptState♯

such that, for each p ∈ exceptDecl and each ε♯ ∈ ExceptState♯,

    γ(φ+(p, ε♯)) ⊇  γ(ε♯),                                   if p = any;
                    { (σ, ξ) ∈ γ(ε♯) | ξ = p },              if p ∈ RTSExcept;
                    { (σ, ξ) ∈ γ(ε♯) | ξ ∈ dom(type(p)) },   otherwise;

    γ(φ−(p, ε♯)) ⊇  ∅,                                       if p = any;
                    { (σ, ξ) ∈ γ(ε♯) | ξ ≠ p },              if p ∈ RTSExcept;
                    { (σ, ξ) ∈ γ(ε♯) | ξ ∉ dom(type(p)) },   otherwise.
The abstract memory structure and abstract exception selectors

    mem : ExceptState♯ → Mem♯,
    sel : (cType × ExceptState♯) → (RTSExcept♯ ⊎ Integer♯ ⊎ Bool♯)

are defined, for each ε♯ = (σ♯, (χ♯, (m♯, t♯))) ∈ ExceptState♯ and cT ∈ cType, by

    mem(ε♯) ≝ σ♯;

    sel(cT, ε♯) ≝  χ♯,  if cT = rts exception;
                   m♯,  if cT = integer;
                   t♯,  if cT = boolean.
To simplify notation, we will write cT(ε♯) to denote sel(cT, ε♯).

The generic specification provided above for abstract memory structures and the
corresponding abstract operators plays a central role in the modularity of the overall
construction. By exploiting this "black box" approach, we achieve orthogonality not
only from the specific abstract domains used to approximate (sets of tuples of)
storable values, but also from the critical design decisions that have to be taken
when approximating the concrete stack, which may be unbounded in size due to recursive
functions. Hence, while still staying within the boundaries of the current framework,
we can flexibly explore, combine, and finely tune the sophisticated proposals that have
been put forward in the literature, such as the work in [JS03; JS04], which encompasses
both the functional and the call-string approaches to interprocedural analysis
[CC77b; SP81].
6.7 Abstract Configurations

Terminal and non-terminal configurations of the abstract transition system are now
defined.

Definition 6.4. (Non-terminal abstract configurations.) The sets of non-terminal
abstract configurations for expressions, local and global declarations, statements,
function bodies and catch clauses are given, for each β ∈ TEnvI, respectively, by

    Γβ♯e ≝ { ⟨e, σ♯⟩ ∈ Exp × Mem♯ | ∃sT ∈ sType . β ⊢I e : sT },
    Γβ♯d ≝ { ⟨d, σ♯⟩ ∈ Decl × Mem♯ | ∃δ ∈ TEnv . β ⊢I d : δ },
    Γβ♯g ≝ { ⟨g, σ♯⟩ ∈ Glob × Mem♯ | ∃δ ∈ TEnv . β ⊢I g : δ },
    Γβ♯s ≝ { ⟨s, σ♯⟩ ∈ Stmt × Mem♯ | β ⊢I s },
    Γβ♯b ≝ { ⟨body, σ♯⟩ ∈ Body × Mem♯ | ∃sT ∈ sType . β ⊢I body : sT },
    Γβ♯k ≝ { ⟨k, ε♯⟩ ∈ Catch × ExceptState♯ | β ⊢I k }.

We write N♯ to denote a non-terminal abstract configuration.

The approximation relation between concrete and abstract non-terminal configurations
is defined as follows. For each q ∈ {e, d, g, s, b}, N = ⟨q1, σ⟩ ∈ Γβq and
N♯ = ⟨q2, σ♯⟩ ∈ Γβ♯q,

    N ∝ N♯ ⟺ (q1 = q2 ∧ σ ∝ σ♯).    (78)

For each N = ⟨k1, ε⟩ ∈ Γβk and N♯ = ⟨k2, ε♯⟩ ∈ Γβ♯k,

    N ∝ N♯ ⟺ (k1 = k2 ∧ ε ∝ ε♯).    (79)
Definition 6.5. (Terminal abstract configurations.) The sets of terminal abstract
configurations for expressions, local and global declarations, statements, function
bodies and catch clauses are given, respectively, by

    Te♯ ≝ ValState♯ ⊎♯ ExceptState♯,
    Td♯ ≝ Tg♯ ≝ (Env × Mem♯) ⊎♯ ExceptState♯,
    Ts♯ ≝ Tb♯ ≝ Mem♯ ⊎♯ ExceptState♯,
    Tk♯ ≝ Ts♯ ⊎♯ ExceptState♯.

We write η♯ to denote a terminal abstract configuration.

The approximation relation η ∝ η♯ between concrete and abstract terminal
configurations is defined as follows. For expressions,

    η ∝ ⟨υ♯, ε♯⟩ ⟺  υ ∝ υ♯,  if η = υ;
                     ε ∝ ε♯,  if η = ε.                       (80)

For local and global declarations,

    η ∝ ⟨(ρ2, σ♯), ε♯⟩ ⟺  (ρ1 = ρ2 ∧ σ ∝ σ♯),  if η = ⟨ρ1, σ⟩;
                           ε ∝ ε♯,              if η = ε.     (81)

For statements and function bodies,

    η ∝ ⟨σ♯, ε♯⟩ ⟺  σ ∝ σ♯,  if η = σ;
                     ε ∝ ε♯,  if η = ε.                       (82)

For catch sequences,

    η ∝ ⟨ηs♯, ε♯⟩ ⟺  ηs ∝ ηs♯,  if η = ⟨caught, ηs⟩;
                      ε ∝ ε♯,    if η = ⟨uncaught, ε⟩.        (83)
The approximation relation for sequents is trivially obtained from the approximation
relations defined above for configurations.

Definition 6.6. ('∝' on sequents.) The approximation relation between concrete
(positive and negative) sequents and abstract sequents is defined, for each β ∈ TEnvI,
for each ρ0, ρ1 ∈ EnvJ such that ρ0 : β|J and ρ1 : β|J, for each q ∈ {e, d, g, s, b, k},
N ∈ Γβq, η ∈ Tq, N♯ ∈ Γβ♯q and η♯ ∈ Tq♯, by

    (ρ0 ⊢β N → η) ∝ (ρ1 ⊢β N♯ → η♯) ⟺ (ρ0 = ρ1 ∧ N ∝ N♯ ∧ η ∝ η♯);    (84)
    (ρ0 ⊢β N →∞) ∝ (ρ1 ⊢β N♯ → η♯) ⟺ (ρ0 = ρ1 ∧ N ∝ N♯).              (85)
6.8 Supported Expressions, Declarations and Statements

Each abstract domain has to provide a relation saying which (abstract configurations
for) expressions, declarations and statements it directly supports, as well as an
abstract evaluation function providing safe approximations of any supported
expressions, declarations and statements.

Definition 6.7. (supported♯, eval♯.) For each q ∈ {e, d, g, s}, we assume there
exists a computable relation and a partial and computable operation,

    supported♯ ⊆ Env × Γβ♯q    and    eval♯ : (Env × Γβ♯q) ⇀ Tq♯,

such that whenever ρ : β and supported♯(ρ, N♯) holds, eval♯(ρ, N♯) is defined and
has value η♯ ∈ Tq♯ and, for each N ∈ Γβq and each η ∈ Tq such that N ∝ N♯ and
ρ ⊢β N → η, we have η ∝ η♯.
An appropriate use of 'supported♯' and 'eval♯' allows the design of the domain of
abstract memory structures to be decoupled from the design of the analyzer. In
particular, it enables the use of relational as well as non-relational domains. For
example, using the domain of convex polyhedra the proper way, one can easily implement
a safe evaluation function for (the non-terminal abstract configuration of) any affine
expression e. As a consequence, one can specify the support relation so that
supported♯(ρ, ⟨e, σ♯⟩) holds. Similarly, one can specify that
supported♯(ρ, ⟨id := e, σ♯⟩) holds for any affine assignment, i.e., an assignment
where e is an affine expression. Other implementation choices are possible. For
instance, besides supporting affine expressions, the implementer could specify that
supported♯(ρ, ⟨id1 ∗ id2, σ♯⟩) holds provided ρ : I, id1, id2 ∈ I and, for at least
one i ∈ {1, 2}, γ(σ♯[ρ(idi)]) = {m}, for some integer value m. Similarly, the design
can impose that supported♯(ρ, ⟨id ∗ id, σ♯⟩) always holds.
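Operationally, the hook amounts to a dispatch in the analyzer's evaluator. A minimal Python sketch (the domain object's interface, is_affine and generic_eval are our own stand-ins, not prescribed by Definition 6.7):

    def is_affine(e):
        """Example support test: constants, identifiers, sums, differences."""
        return (e[0] in ("con", "id")
                or (e[0] == "bin" and e[1] in ("+", "-")
                    and is_affine(e[2]) and is_affine(e[3])))

    def abstract_eval(domain, rho, e, mem_abs):
        if domain.supported(rho, e, mem_abs):         # supported#
            return domain.eval(rho, e, mem_abs)       # eval#: sound answer
        # otherwise fall back to the generic rules of Section 6.9
        return generic_eval(domain, rho, e, mem_abs)  # stand-in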
6.9 Abstract Evaluation Relations

The abstract evaluation relations that provide the first part of the specification of
the abstract interpreter for CPM are now defined. These relations are of the form

    ρ ⊢β N♯ → η♯,

where β ∈ TEnv, ρ : β and, for some q ∈ {e, d, g, s, b, k}, N♯ ∈ Γβ♯q and η♯ ∈ Tq♯.
The definition is again by structural induction from a set of rule schemata. In order
to allow for the arbitrary weakening of the abstract descriptions in the conclusion,
without having to introduce precondition strengthening and postcondition weakening
rules, and to save typing at the same time, we will use the notation

    P0 · · · Pℓ−1
    ─────────────────  (side condition)
    ρ ⊢β N♯ ⇝ η0♯

to denote

    P0 · · · Pℓ−1
    ─────────────────  (side condition) and η0♯ ⊑ η♯
    ρ ⊢β N♯ → η♯

where '⊑' is the natural ordering relation on the appropriate abstract lattice (i.e.,
one of the Tq♯, for q ∈ {e, d, g, s, b, k}).

Recalling the shorthand notation introduced in Section 6.1.2, when an abstract
storable value sval♯ is expected and we write an abstract integer m♯ or an abstract
Boolean t♯, we actually mean the abstract storable value (m♯, ⊥) or (⊥, t♯),
respectively; similarly, when an abstract exception ξ♯ is expected and we write an
abstract RTS exception χ♯ or an abstract storable value sval♯, we actually mean the
abstract exception (χ♯, ⊥) or (⊥, sval♯), respectively.
6.9.1 Unsupported Expressions. The following rules for the abstract evaluation of
expressions apply only if supported♯(ρ, ⟨e, σ♯⟩) does not hold, where e is the
expression being evaluated. This side condition will be left implicit in order not to
clutter the presentation.
Constant.
                                                                 (86)
    ρ ⊢β ⟨con, σ♯⟩ ⇝ (α({con}) ⊗ σ♯, none♯)

Identifier.
                                                                 (87)
    ρ ⊢β ⟨id, σ♯⟩ ⇝ σ♯[ρ(id)]

Unary minus.

    ρ ⊢β ⟨e, σ♯⟩ → ((m♯, σ0♯), ε♯)
    ─────────────────────────────────────                        (88)
    ρ ⊢β ⟨−e, σ♯⟩ ⇝ ((⊖m♯, σ0♯), ε♯)

Binary arithmetic operations. Let □ ∈ {+, −, ∗, /, %} be a syntactic operator
and ⊚ ∈ {⊕, ⊖, ⊙, ⊘, ⊛} denote the corresponding abstract operation. Then the
abstract rules for addition, subtraction, multiplication, division and remainder are
given by the following schemata:

    ρ ⊢β ⟨e0, σ♯⟩ → ((m0♯, σ0♯), ε0♯)    ρ ⊢β ⟨e1, σ0♯⟩ → ((m1♯, σ1♯), ε1♯)
    ──────────────────────────────────────────────────────────────────────  (89)
    ρ ⊢β ⟨e0 □ e1, σ♯⟩ ⇝ ((m0♯ ⊚ m1♯, σ1♯), ε0♯ ⊔ ε1♯)

    if □ ∉ {/, %} or 0 ̸∝ m1♯;

    ρ ⊢β ⟨e0, σ♯⟩ → ((m0♯, σ0♯), ε0♯)    ρ ⊢β ⟨e1, σ0♯⟩ → ((m1♯, σ1♯), ε1♯)
    ──────────────────────────────────────────────────────────────────────  (90)
    ρ ⊢β ⟨e0 □ e1, σ♯⟩ ⇝ ((m0♯ ⊚ m1♯, σ1♯), ε0♯ ⊔ ε1♯ ⊔ ε2♯)

    if □ ∈ {/, %}, 0 ∝ m1♯ and ε2♯ = σ1♯ ⊗ α({divbyzero}).

Arithmetic tests. Let ⋈ ∈ {=, ≠, <, ≤, ≥, >} be an abstract syntax operator and let
⋈♯ : (Integer♯ × Integer♯) → Bool♯ denote the corresponding abstract test operation
in {=♯, ≠♯, <♯, ≤♯, ≥♯, >♯}. Then the rules for the abstract arithmetic tests are
given by

    ρ ⊢β ⟨e0, σ♯⟩ → ((m0♯, σ0♯), ε0♯)    ρ ⊢β ⟨e1, σ0♯⟩ → ((m1♯, σ1♯), ε1♯)
    ──────────────────────────────────────────────────────────────────────  (91)
    ρ ⊢β ⟨e0 ⋈ e1, σ♯⟩ ⇝ ((m0♯ ⋈♯ m1♯, σ1♯), ε0♯ ⊔ ε1♯)

Negation.

    ρ ⊢β ⟨b, σ♯⟩ → ((t♯, σ0♯), ε♯)
    ──────────────────────────────────────────                   (92)
    ρ ⊢β ⟨not b, σ♯⟩ ⇝ ((¬♯ t♯, σ0♯), ε♯)

Conjunction.

    ρ ⊢β ⟨b0, σ♯⟩ → ⟨υ0♯, ε0♯⟩    ρ ⊢β ⟨b1, σtt♯⟩ → ⟨υ1♯, ε1♯⟩
    ────────────────────────────────────────────────────────────  (93)
    ρ ⊢β ⟨b0 and b1, σ♯⟩ ⇝ ⟨υff♯ ⊔ υ1♯, ε0♯ ⊔ ε1♯⟩

    if σtt♯ = φ(ρ, σ♯, b0), σff♯ = φ(ρ, σ♯, not b0) and υff♯ = α({ff}) ⊗ σff♯.

Disjunction.

    ρ ⊢β ⟨b0, σ♯⟩ → ⟨υ0♯, ε0♯⟩    ρ ⊢β ⟨b1, σff♯⟩ → ⟨υ1♯, ε1♯⟩
    ────────────────────────────────────────────────────────────  (94)
    ρ ⊢β ⟨b0 or b1, σ♯⟩ ⇝ ⟨υtt♯ ⊔ υ1♯, ε0♯ ⊔ ε1♯⟩

    if σtt♯ = φ(ρ, σ♯, b0), σff♯ = φ(ρ, σ♯, not b0) and υtt♯ = α({tt}) ⊗ σtt♯.
6.9.2 Unsupported Declarations. The following rules only apply if the condition
supported♯(ρ, ⟨q, σ♯⟩) does not hold, where q ∈ Decl ⊎ Glob is the declaration being
evaluated. Again, this side condition is left implicit.

Nil.
                                                                 (95)
    ρ ⊢β ⟨nil, σ♯⟩ ⇝ ((∅, σ♯), none♯)

Environment.
                                                                 (96)
    ρ ⊢β ⟨ρ0, σ♯⟩ ⇝ ((ρ0, σ♯), none♯)

Recursive environment.
                                                                 (97)
    ρ ⊢β ⟨rec ρ0, σ♯⟩ ⇝ ((ρ1, σ♯), none♯)

    if ρ1 = { id ↦ ρ0(id) | ρ0(id) = λfps . extern : sT }
           ∪ { id ↦ abs1 | ∀i ∈ {0, 1} : absi = λfps . let di in s result e,
                           ρ0(id) = abs0, d1 = (rec ρ0 \ DI(fps)); d0 }.

Global variable declaration.

    ρ ⊢β ⟨e, σ♯⟩ → ⟨υ♯, ε0♯⟩
    ─────────────────────────────────────────────────────────────  (98)
    ρ ⊢β ⟨gvar id : sT = e, σ♯⟩ ⇝ ((ρ1, σ1♯), cleanupd♯(ε0♯ ⊔ ε1♯))

    if newd♯(υ♯) = ((σ1♯, l), ε1♯) and ρ1 = { id ↦ (l, sT) }.

Local variable declaration.

    ρ ⊢β ⟨e, σ♯⟩ → ⟨υ♯, ε0♯⟩
    ─────────────────────────────────────────────────────────────  (99)
    ρ ⊢β ⟨lvar id : sT = e, σ♯⟩ ⇝ ((ρ1, σ1♯), unmark♯s(ε0♯ ⊔ ε1♯))

    if news♯(υ♯) = ((σ1♯, i), ε1♯) and ρ1 = { id ↦ (i, sT) }.

Function declaration.
                                                                 (100)
    ρ ⊢β ⟨function id(fps) = body0, σ♯⟩ ⇝ ((ρ0, σ♯), none♯)

    if ρ0 = {id ↦ λ fps . body1} and either body0 = body1 = extern : sT or, for each
    i ∈ {0, 1}, bodyi = let di in s result e, I = FI(body0) \ DI(fps) and d1 = ρ|I ; d0.

Recursive declaration.

    (ρ \ J) ⊢β[β1] ⟨g, σ♯⟩ → ((ρ0, σ0♯), none♯)    ρ ⊢β ⟨rec ρ0, σ0♯⟩ → η♯
    ──────────────────────────────────────────────────────────────────────  (101)
    ρ ⊢β ⟨rec g, σ♯⟩ ⇝ η♯

    if J = FI(g) ∩ DI(g), β ⊢FI(g) g : β0 and β1 = β0|J.

Global sequential composition.

    ρ ⊢β ⟨g0, σ♯⟩ → ((ρ0, σ0♯), ε0♯)    ρ[ρ0] ⊢β[β0] ⟨g1, σ0♯⟩ → ((ρ1, σ1♯), ε1♯)
    ─────────────────────────────────────────────────────────────────────────────  (102)
    ρ ⊢β ⟨g0 ; g1, σ♯⟩ ⇝ ((ρ0[ρ1], σ1♯), ε0♯ ⊔ ε1♯)

    if β ⊢I g0 : β0 and FI(g0) ⊆ I.

Local sequential composition.

    ρ ⊢β ⟨d0, σ♯⟩ → ((ρ0, σ0♯), ε0♯)    ρ[ρ0] ⊢β[β0] ⟨d1, σ0♯⟩ → ((ρ1, σ1♯), ε1♯)
    ─────────────────────────────────────────────────────────────────────────────  (103)
    ρ ⊢β ⟨d0 ; d1, σ♯⟩ ⇝ ((ρ0[ρ1], σ1♯), ε0♯ ⊔ ε1♯)

    if β ⊢I d0 : β0 and FI(d0) ⊆ I.
6.9.3 Unsupported Statements. The following rules only apply if the implicit side
condition supported♯(ρ, ⟨s, σ♯⟩) does not hold, where s is the statement being
evaluated.

Nop.
                                                                 (104)
    ρ ⊢β ⟨nop, σ♯⟩ ⇝ σ♯

Assignment.

    ρ ⊢β ⟨e, σ♯⟩ → ((sval♯, σ0♯), ε0♯)
    ────────────────────────────────────────                     (105)
    ρ ⊢β ⟨id := e, σ♯⟩ ⇝ ⟨σ1♯, ε0♯ ⊔ ε1♯⟩

    if σ0♯[ρ(id) :=♯ sval♯] = (σ1♯, ε1♯).

Statement sequence.

    ρ ⊢β ⟨s0, σ♯⟩ → ⟨σ0♯, ε0♯⟩    ρ ⊢β ⟨s1, σ0♯⟩ → ⟨σ1♯, ε1♯⟩
    ───────────────────────────────────────────────────────────  (106)
    ρ ⊢β ⟨s0 ; s1, σ♯⟩ ⇝ ⟨σ1♯, ε0♯ ⊔ ε1♯⟩

Block.

    ρ ⊢β ⟨d, mark♯s(σ♯)⟩ → ((ρ0, σ0♯), ε0♯)    ρ[ρ0] ⊢β[β0] ⟨s, σ0♯⟩ → ⟨σ1♯, ε1♯⟩
    ─────────────────────────────────────────────────────────────────────────────  (107)
    ρ ⊢β ⟨d; s, σ♯⟩ ⇝ ⟨unmark♯s(σ1♯), ε0♯ ⊔ unmark♯s(ε1♯)⟩

    if β ⊢FI(d) d : β0.

Conditional.

    ρ ⊢β ⟨e, σ♯⟩ → ⟨υ0♯, ε0♯⟩    ρ ⊢β ⟨s0, σtt♯⟩ → ⟨σ1♯, ε1♯⟩    ρ ⊢β ⟨s1, σff♯⟩ → ⟨σ2♯, ε2♯⟩
    ─────────────────────────────────────────────────────────────────────────────────────────  (108)
    ρ ⊢β ⟨if e then s0 else s1, σ♯⟩ ⇝ ⟨σ1♯ ⊔ σ2♯, ε0♯ ⊔ ε1♯ ⊔ ε2♯⟩

    if σtt♯ = φ(ρ, σ♯, e) and σff♯ = φ(ρ, σ♯, not e).

While.

    ρ ⊢β ⟨e, σ♯⟩ → ⟨υ0♯, ε0♯⟩    ρ ⊢β ⟨s, σtt♯⟩ → ⟨σ1♯, ε1♯⟩
    ρ ⊢β ⟨while e do s, σ1♯⟩ → ⟨σ2♯, ε2♯⟩
    ──────────────────────────────────────────────────────────────  (109)
    ρ ⊢β ⟨while e do s, σ♯⟩ ⇝ ⟨σff♯ ⊔ σ2♯, ε0♯ ⊔ ε1♯ ⊔ ε2♯⟩

    if σtt♯ = φ(ρ, σ♯, e) and σff♯ = φ(ρ, σ♯, not e).

Throw.
                                                                 (110)
    ρ ⊢β ⟨throw χ, σ♯⟩ ⇝ ⟨⊥, ε♯⟩

    if ε♯ = σ♯ ⊗ α({χ});

    ρ ⊢β ⟨e, σ♯⟩ → ((sval♯, σ0♯), ε0♯)
    ──────────────────────────────────────                       (111)
    ρ ⊢β ⟨throw e, σ♯⟩ ⇝ ⟨⊥, ε0♯ ⊔ ε1♯⟩

    if ε1♯ = σ0♯ ⊗ sval♯.

Try blocks.

    ρ ⊢β ⟨s, σ♯⟩ → ⟨σ0♯, ε0♯⟩    ρ ⊢β ⟨k, ε0♯⟩ → ((σ1♯, ε1♯), ε2♯)
    ─────────────────────────────────────────────────────────────────  (112)
    ρ ⊢β ⟨try s catch k, σ♯⟩ ⇝ ⟨σ0♯ ⊔ σ1♯, ε1♯ ⊔ ε2♯⟩

    ρ ⊢β ⟨s0, σ♯⟩ → ⟨σ0♯, (σ1♯, ξ1♯)⟩    ρ ⊢β ⟨s1, σ0♯⟩ → ⟨σ2♯, ε2♯⟩
    ρ ⊢β ⟨s1, σ1♯⟩ → ⟨σ3♯, ε3♯⟩
    ─────────────────────────────────────────────────────────────────  (113)
    ρ ⊢β ⟨try s0 finally s1, σ♯⟩ ⇝ ⟨σ2♯, ε2♯ ⊔ ε3♯ ⊔ (σ3♯ ⊗ ξ1♯)⟩

Function call. With reference to conditions (60) and (61) of the concrete rules
for function calls, the corresponding abstract rule schema is

    ρ ⊢β ⟨d, mark♯s(σ♯)⟩ → ((ρ0, σ0♯), ε0♯)
    ρ[ρ1] ⊢β[β1] ⟨body, link♯s(σ0♯)⟩ → ⟨σ1♯, ε1♯⟩
    ρ[ρ0] ⊢β[β0] ⟨id0 := x0, unlink♯s(σ1♯)⟩ → ⟨σ2♯, ε2♯⟩
    ──────────────────────────────────────────────────────────────  (114)
    ρ ⊢β ⟨id0 := id(e1, . . . , en), σ♯⟩ ⇝ ⟨unmark♯s(σ2♯), ε♯⟩

    if (60) and (61) hold and ε♯ = ε0♯ ⊔ unmark♯s(unlink♯s(ε1♯)) ⊔ unmark♯s(ε2♯).
6.9.4 Function Bodies.

    ρ ⊢β ⟨d, mark♯s(σ♯)⟩ → ((ρ0, σ0♯), ε0♯)
    ρ[ρ0] ⊢β[β0] ⟨s, σ0♯⟩ → ⟨σ1♯, ε1♯⟩
    ρ[ρ0] ⊢β[β0] ⟨x0 := e, σ1♯⟩ → ⟨σ2♯, ε2♯⟩
    ──────────────────────────────────────────────────────────────  (115)
    ρ ⊢β ⟨let d in s result e, σ♯⟩ ⇝ ⟨unmark♯s(σ2♯), ε3♯⟩

    if β ⊢FI(d) d : β0 and ε3♯ = ε0♯ ⊔ unmark♯s(ε1♯ ⊔ ε2♯);

                                                                 (116)
    ρ ⊢β ⟨extern : sT, σ♯⟩ ⇝ ⟨σ0♯, (σ0♯, ⊤)⟩

    if ∀σ, σ0 ∈ Mem : (σ = (µ, w) ∧ σ ∝ σ♯ ∧ σ0 = (µ0, w)) =⇒ σ0 ∝ σ0♯.
6.9.5 Catch Clauses

Catch.

    ρ ⊢β ⟨s, mem(ε0♯)⟩ → η1♯
    ──────────────────────────────                               (117)
    ρ ⊢β ⟨(p) s, ε♯⟩ ⇝ ⟨η1♯, ε1♯⟩

    if p = any or p = χ or p = cT, ε0♯ = φ+(p, ε♯) and ε1♯ = φ−(p, ε♯);

    ρ{id ↦ (i, sT)} ⊢β[{id↦sT loc}] ⟨s, σ2♯⟩ → ⟨σ3♯, ε3♯⟩
    ─────────────────────────────────────────────────────────    (118)
    ρ ⊢β ⟨(id : sT) s, ε♯⟩ ⇝ ((σ4♯, ε4♯), ε1♯)

    if ε0♯ = φ+(sT, ε♯), ε1♯ = φ−(sT, ε♯),
    news♯(sT(ε0♯), mark♯s(mem(ε0♯))) = ((σ2♯, i), ε2♯),
    σ4♯ = unmark♯s(σ3♯) and ε4♯ = unmark♯s(ε2♯) ⊔ unmark♯s(ε3♯).

Catch sequence.

    ρ ⊢β ⟨k0, ε♯⟩ → ((σ0♯, ε0♯), ε1♯)    ρ ⊢β ⟨k1, ε1♯⟩ → ((σ1♯, ε2♯), ε3♯)
    ──────────────────────────────────────────────────────────────────────  (119)
    ρ ⊢β ⟨k0 ; k1, ε♯⟩ ⇝ ((σ0♯ ⊔ σ1♯, ε0♯ ⊔ ε2♯), ε3♯)
6.9.6 Supported Expressions, Declarations and Statements. Let q ∈ {e, d, g, s}
and N♯ ∈ Γβ♯q. Then, whenever supported♯(ρ, N♯) holds, alternate versions of the
rules above apply. For each of the rules above,

    P0  · · ·  Pℓ−1
    ─────────────────  if (side condition) and not supported♯(ρ, N♯)
    ρ ⊢β N♯ ⇝ η♯

we also have the rule

    P0  · · ·  Pℓ−1
    ──────────────────────────  if (side condition) and supported♯(ρ, N♯)
    ρ ⊢β N♯ ⇝ eval♯(ρ, N♯)

Notice that even if eval♯(ρ, N♯) does not depend on the rule antecedents P0, . . . ,
Pℓ−1, these cannot be omitted, as omitting them would neglect the sub-computations
spawned by the unsupported evaluation of N♯.
6.10 Abstract Semantics Trees

We now define possibly infinite abstract semantics trees along the lines of what we
did in Section 5.7. Notice that the need to consider infinite abstract trees goes
beyond the need to observe infinite concrete computations. For instance, there is no
finite abstract tree corresponding to a program containing a while command, because
(109) is the only abstract rule for while and it recursively introduces a new while
node into the tree.

Definition 6.8. (Abstract semantics rules.) The set R♯ of abstract semantics rules
is the infinite set obtained by instantiating the rule schemata of Section 6.9 in all
possible ways (respecting the side conditions).

Let S♯ be the (infinite) set of sequents occurring in the premises and conclusions of
the rules in R♯. Matching Definition 5.6, the abstract semantics universe, denoted by
U♯, is the set of finitely branching trees of at most ω-depth with labels in S♯.

Definition 6.9. (Abstract semantics trees.) Let F♯ : ℘(U♯) → ℘(U♯) be given, for
each U♯ ∈ ℘(U♯), by

    F♯(U♯) ≝ { (θ0♯ · · · θℓ−1♯ / s♯) | {θ0♯, . . . , θℓ−1♯} ⊆ U♯,
                                        (θ0♯(ǫ) · · · θℓ−1♯(ǫ) / s♯) ∈ R♯ }.

The set of abstract semantics trees is Θ♯ ≝ gfp⊆(F♯).
We now show that, for every non-terminal abstract configuration, there exists an
abstract tree with that in the root.
Proposition 6.10. For each β ∈ TEnv, ρ ∈ Env such that ρ : β and N ♯ ∈ Γβ♯
q ,
where q ∈ {e, d, g, s, b, k}, there exists θ♯ ∈ Θ♯ such that,
θ♯ (ǫ) ∈ (ρ ⊢β N ♯ → η ♯ ) η ♯ ∈ Tq♯ .
Proof. For the proof, let8
def
♯
S+
(ρ, β, N ♯ ) = s♯ s♯ = (ρ ⊢β N ♯ → η ♯ ), (s ∝ s♯ =⇒ s is well-typed) .
We now assume that N ♯ ∈ Γβ♯
q is a fixed but arbitrary non-terminal abstract
configuration. Suppose that supported♯ (ρ, N ♯ ) does not hold. By inspecting the
abstract evaluation rules given in Section 6.9, it can be seen that there exists
ℓ ≥ 0 and a nonempty set of rules R0 ∈ R♯ with ℓ premises and a conclusion
♯
(ρ, β, N ♯ ). If, on the other hand, supported♯ (ρ, N ♯ ) does hold, then it follows
in S+
from Section 6.9.6 that, by Definition 6.7, eval♯ (ρ, N ♯ ) is defined and, for each
rule in R0 , there is a rule
set of premises but where the conclusion
with the same
♯
ρ ⊢β N ♯ → eval♯ (ρ, N ♯ ) is also in S+
(ρ, β, N ♯ ). Thus, in both cases, by definition
♯
(ρ, β, N ♯ ).
of U ♯ , there exists a tree in U ♯ with root in S+
We prove that, for any n ∈ N, there exists a tree θ♯ ∈ F♯n(U♯) such that θ♯(ǫ) ∈ S+♯(ρ, β, N♯). To this end, we reason by induction on n ≥ 0. In the case n = 0, F♯0(U♯) = U♯, so that the hypothesis holds.
We now suppose that n > 0. Let j ∈ {0, . . . , ℓ} be the maximal value for which there exist trees θ0♯, . . . , θj−1♯ ∈ F♯(n−1)(U♯) where P0 = θ0♯(ǫ), . . . , Pj−1 = θj−1♯(ǫ) are the first j premises of a rule in R0; let Rj ⊆ R0 be the set of all rules in R0 with P0, . . . , Pj−1 as their first j premises; then Rj ≠ ∅. We assume that j < ℓ and derive a contradiction. By inspecting the rule schemata in Section 6.9, it can be seen that, if there exists (P0 ··· Pj−1 Pj′ ···) / ś♯ ∈ Rj for some Pj′ ∈ S+♯(ρj, βj, Nj♯) and ś♯ ∈ S+♯(ρ, β, N♯), then

    ∀Pj ∈ S+♯(ρj, βj, Nj♯) : ∃s♯ ∈ S+♯(ρ, β, N♯) . (P0 ··· Pj−1 Pj ···) / s♯ ∈ Rj. (120)

By the inductive hypothesis, there exists θj♯ ∈ F♯(n−1)(U♯) such that Pj = θj♯(ǫ) ∈ S+♯(ρj, βj, Nj♯); hence, by (120), there must be a rule in Rj whose (j + 1)-th premise is Pj, contradicting the assumption that j < ℓ is maximal. Hence j = ℓ. Thus there exists a rule (P0 ··· Pℓ−1) / s♯ ∈ R0 for some s♯ ∈ S+♯(ρ, β, N♯); hence, by Definition 6.9, the tree (θ0♯ ··· θℓ−1♯) / s♯ ∈ F♯n(U♯). Therefore, since, by Definition 6.9, Θ♯ = gfp⊆(F♯), there exists a tree θ♯ in Θ♯ such that θ♯(ǫ) ∈ S+♯(ρ, β, N♯).
7. CORRECTNESS OF THE ABSTRACT SEMANTICS
In Section 6, we introduced the notion of sound approximation for configurations
and sequents in terms of the concretization function γ defined for each abstract
domain. We now proceed to define the notion of sound approximation for trees.
8 For the definition of a well-typed sequent, see the proof of Proposition 5.8.
Definition 7.1. (‘∝’ for trees.) Let ∝ : ℘(Θ × Θ♯) → ℘(Θ × Θ♯) be given, for each U ∈ ℘(Θ × Θ♯), by

    ∝(U) def= { (θ, θ♯) ∈ Θ × Θ♯ | θ(ǫ) ∝ θ♯(ǫ), ∀i ∈ dom(θ) ∩ N : ∃j ∈ dom(θ♯) ∩ N . (θ[i], θ♯[j]) ∈ U }.

Then θ ∝ θ♯ if and only if (θ, θ♯) ∈ gfp⊆(∝).
In words, θ ∝ θ♯ means that the root of θ is approximated by the root of θ♯ and every immediate subtree of θ is approximated by some immediate subtree of θ♯. Notice that one immediate subtree of θ♯ may be related by ‘∝’ to none, one, or more than one immediate subtree of θ.
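Because Θ and Θ♯ may contain infinite trees, ‘∝’ is defined coinductively; on finite trees, however, the greatest fixpoint coincides with the obvious structural recursion. The following Python sketch, which is not part of the formal development, illustrates the shape of the check; the covers method standing for ‘∝’ on sequents is a hypothetical interface assumed only for this illustration.

# A minimal sketch of Definition 7.1 restricted to finite trees; the
# sequent-level relation is provided by a hypothetical `covers` method.

class Tree:
    def __init__(self, label, children=()):
        self.label = label            # a concrete or abstract sequent
        self.children = list(children)

def approx(t, t_abs):
    """t ∝ t_abs: the roots are related, and every immediate subtree of t
    is approximated by some immediate subtree of t_abs."""
    if not t_abs.label.covers(t.label):   # hypothetical '∝' on sequents
        return False
    return all(any(approx(c, c_abs) for c_abs in t_abs.children)
               for c in t.children)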
The following result states that, for each concrete tree, there is always an abstract
tree that is generated from a corresponding non-terminal abstract configuration.
Theorem 7.2. Let θ ∈ Θ be a concrete tree such that θ(ǫ) = (ρ ⊢β N → η) or θ(ǫ) = (ρ ⊢β N −→∞). Then there exists θ♯ ∈ Θ♯ such that θ♯(ǫ) = (ρ ⊢β N♯ → η♯) and N ∝ N♯.
Proof. Suppose first that N = ⟨q, σ⟩ where q ∈ {e, d, g, s, b}. By Definition 6.1, we can always find σ♯ ∈ Mem♯ such that σ ∝ σ♯. Hence, letting N♯ = ⟨q, σ♯⟩, by (78) in Definition 6.4, we obtain N ∝ N♯. Next suppose N = ⟨k, ε⟩, where ε = (σ, ξ). As before, by Definition 6.1, we can always find σ♯ ∈ Mem♯ such that σ ∝ σ♯. Moreover, by the definition of the approximation for exceptions, we can always find ξ♯ ∈ Except♯ such that ξ ∝ ξ♯. Hence, letting N♯ = ⟨k, σ♯ ⊗ ξ♯⟩, by (79) in Definition 6.4, we again obtain N ∝ N♯. In both cases, by Proposition 6.10, there exists an abstract tree θ♯ such that θ♯(ǫ) = (ρ ⊢β N♯ → η♯) and N ∝ N♯.
The next result states that our abstract rules only generate abstract trees that
are correct approximations of their concrete counterparts (i.e., concrete trees rooted
with the same statement, the same environment and initial memory structure).
Theorem 7.3. Let θ ∈ Θ and θ♯ ∈ Θ♯ be such that θ(ǫ) = (ρ ⊢β N → η) or θ(ǫ) = (ρ ⊢β N −→∞), and θ♯(ǫ) = (ρ ⊢β N♯ → η♯), where N ∝ N♯. Then θ ∝ θ♯.
Theorem 7.3 is a trivial corollary of the following proposition.

Proposition 7.4. Let

    S def= { (θ, θ♯) ∈ Θ × Θ♯ | θ(ǫ) ∈ {ρ ⊢β N → η, ρ ⊢β N −→∞}, θ♯(ǫ) = (ρ ⊢β N♯ → η♯), N ∝ N♯ }. (121)

Then, for all (θ, θ♯) ∈ S, θ ∝ θ♯.
Proof. Let θ ∈ Θ and θ♯ ∈ Θ♯. We define:

    r def= (θ[0](ǫ) ··· θ[h−1](ǫ)) / θ(ǫ),        r♯ def= (θ♯[0](ǫ) ··· θ♯[ℓ−1](ǫ)) / θ♯(ǫ),

where, for some h, ℓ ≥ 0, {0, . . . , h − 1} ⊆ dom(θ), {0, . . . , ℓ − 1} ⊆ dom(θ♯), h ∉ dom(θ) and ℓ ∉ dom(θ♯). By Definitions 5.7 and 6.9, r ∈ R and r♯ ∈ R♯. Note
that, to simplify the proof, we will use the schematic concrete and abstract rules
given in Sections 5.5 and 6.9 to denote the actual rule instances r and r♯ .
Letting (θ, θ♯ ) ∈ S, we need to show that θ ∝ θ♯ ; by Definition 7.1, this is
equivalent to showing that (θ, θ♯ ) ∈ gfp⊆ (∝). To this end, by the principle of
fixpoint coinduction, we will show that (θ, θ♯ ) ∈ ∝(S).
By Definition 7.1, we need to show that the following properties hold:
(i) θ(ǫ) ∝ θ♯(ǫ);
(ii) for each i = 0, . . . , h − 1 there exists j ∈ {0, . . . , ℓ − 1} such that (θ[i], θ♯[j]) ∈ S.
The proof that properties (i) and (ii) hold is by (well-founded) induction on the
structure of the concrete tree θ. Observe that the “immediate subtree” relation
between trees in Θ+ is a well-founded partial ordering because, if θ ∈ Θ+ then,
by Definition 5.7, there are no infinite descending chains. We extend this ordering
relation to the immediate positive subtree relation between trees in Θ: θ′ is said to
be an immediate positive subtree of θ if and only if θ′ ∈ Θ+ and is an immediate
subtree of θ. Clearly, by Definition 5.7, the immediate positive subtree ordering on
trees in Θ is also well-founded.
We first note that it is not restrictive to only consider unsupported expressions,
declarations or statements: as noted in Section 6.9, the tree for any supported
expression (resp., declaration or statement) has the same structure as the tree for
the same expression (resp., declaration or statement) as if it were unsupported.
Hence, once correctness of the approximation for unsupported expressions, declarations or statements is proved, the correctness for their supported counterparts will
immediately follow from Definition 6.7.
Let

    θ(ǫ) = (ρ ⊢β N → η) or θ(ǫ) = (ρ ⊢β N −→∞),        θ♯(ǫ) = (ρ ⊢β N♯ → η♯).

By (121), N ∝ N♯. Therefore, by condition (85) of Definition 6.6, property (i) holds trivially whenever θ ∈ Θ− (i.e., when r is a negative concrete rule). In addition, to prove that property (i) holds for each θ ∈ Θ+ (i.e., when r is a positive concrete rule), by condition (84) of Definition 6.6, we just need to show η ∝ η♯.
Consider next property (ii). The base cases are when the concrete rule r has no premises (i.e., h = 0); and this property holds trivially in these cases. For the inductive steps (i.e., h > 0) suppose i ∈ {0, . . . , h − 1} and j ∈ {0, . . . , ℓ − 1} are such that (θ[i], θ♯[j]) ∈ S. If θ ∈ Θ+ then, by the inductive hypothesis, we can assume that (θ[i], θ♯[j]) ∈ ∝(S); similarly, if θ ∈ Θ− and i ≠ h − 1, by the inductive hypothesis, we can assume that (θ[i], θ♯[j]) ∈ ∝(S). Hence, in both cases, by Definition 7.1, θ[i](ǫ) ∝ θ♯[j](ǫ). Also, if θ ∈ Θ−, by Definition 5.7, θ[h−1](ǫ) is a divergent sequent so that, by Definitions 6.6 and 7.1, θ[h−1](ǫ) ∝ θ♯[j](ǫ). Thus, for all concrete trees θ ∈ Θ, we can safely assume the following:

    ∀i ∈ {0, . . . , h − 1}, j ∈ {0, . . . , ℓ − 1} : (θ[i], θ♯[j]) ∈ S =⇒ θ[i](ǫ) ∝ θ♯[j](ǫ). (122)
Moreover, we need only explicitly prove property (ii) for each of the positive rules since, by the definition of the concrete divergence (negative) rules, (122) and Definition 6.4, if property (ii) holds for any positive rule it also holds for the corresponding negative rules. Thus, in the detailed proofs of properties (i) and (ii) for the inductive steps, we only consider the positive rules.

Table I. Corresponding concrete and abstract rules and terminals for expressions. Its columns are: the syntactic form E; the concrete rule r; the abstract rule r♯; the concrete terminal ηe; and the components (sval♯a, σa♯) and ε♯a of ηe♯ = ⟨(sval♯a, σa♯), ε♯a⟩. The rows cover the forms con, id, −e, e0 ◦ e1, m0 ≶ m1, not b, b0 and b1, and b0 or b1, pairing the concrete rules (2)–(20) with the abstract rules (86)–(94).
To help the reader, Tables I, II, III, IV and V contain a summary of the conclusions of rules r and r♯. The first column, Q ∈ {E, D, G, S, B, K}, gives the syntactic forms in the first component of the non-terminal configurations N and N♯ (which, by Definition 6.4, must be the same); the second and third columns give a concrete rule r and abstract rule r♯, respectively, that apply to Q. Note that
we do not pair concrete rules with abstract rules that have mutually inconsistent
side conditions. Justification for the omission of any abstract rules for a particular
concrete rule r is given in the detailed proof for that case. The column headed ηq ,
where q ∈ {e, d, g, s, b, k} gives the concrete terminal configuration for r, while the
columns headed by ηq♯ give the components of the abstract terminal configuration
for r♯ . A blank entry in any table cell means that the value is exactly the same as
the value found in the same column of the previous row. To save space in Tables II,
III, IV and V, we have denoted the operations ‘cleanupd ’, ‘unmarks ’, ‘unlinks ’,
‘unmark♯s ’ and ‘unlink♯s ’ by ‘cud ’, ‘ums ’, ‘uls ’, ‘ums ♯ ’ and ‘uls ♯ ’, respectively. Note
that the premises and the side conditions for the rules are not provided in any of
the tables; reference must be made to the actual rules for this information.
7.1 Expressions
For this part of the proof, we use Table I. By (121), N ∝ N♯. Thus, letting N = ⟨E, σ⟩ and N♯ = ⟨E, σ♯⟩, by Definition 6.4, we have the implicit hypothesis σ ∝ σ♯. We show, using (80) in Definition 6.5, that ηe ∝ ηe♯.
Constant. Suppose r is an instance of (2). By definition of α : ℘(Integer) → Integer♯ and α : ℘(Bool) → Bool♯, we have con ∝ α({con}); by hypothesis, σ ∝ σ♯, so that ⟨con, σ⟩ ∝ α({con}) ⊗ σ♯. Hence ηe ∝ ηe♯.
Identifier. Suppose r is an instance of (3). Since, by hypothesis, σ ∝ σ♯, by Definition 6.1 we obtain σ[ρ(id)] ∝ σ♯[ρ(id)]. Hence, ηe ∝ ηe♯.
Unary Minus. Suppose r is an instance of (4) or (5). Then, by hypothesis, (θ[0], θ♯[0]) ∈ S and, hence, as h = 1, property (ii) holds. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). Thus, if r is an instance of (4), then ε ∝ ε♯; if r is an instance of (5), then m ∝ m♯ and σ0 ∝ σ0♯. In the latter case, by the soundness of ‘⊖’, −m ∝ ⊖ m♯. Hence, in both cases, ηe ∝ ηe♯.
Binary Arithmetic Operations. Suppose that r is an instance of one of the rules (6)–(9). Then, by hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). Note that, in the condition for abstract rule (90), ε♯2 = σ1♯ ⊗ α({divbyzero}).
If r is an instance of (6), then h = 1 so that property (ii) holds. The property θ[0](ǫ) ∝ θ♯[0](ǫ) implies ε ∝ ε♯0. Therefore,9 ηe ∝ ηe♯.
If r is an instance of (7), (8) or (9), then h = 2. Property θ[0](ǫ) ∝ θ♯[0](ǫ) implies σ0 ∝ σ0♯ and m0 ∝ m♯0; hence (θ[1], θ♯[1]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ).
If r is an instance of (7), then property θ[1](ǫ) ∝ θ♯[1](ǫ) implies ε ∝ ε♯1; thus ηe ∝ ηe♯. If r is an instance of (8), then property θ[1](ǫ) ∝ θ♯[1](ǫ) implies σ1 ∝ σ1♯ and m1 ∝ m♯1 so that, by the soundness of ‘⊚’, (m0 ◦ m1) ∝ (m♯0 ⊚ m♯1); and hence ηe ∝ ηe♯. If r is an instance of (9), then the condition θ[1](ǫ) ∝ θ♯[1](ǫ) implies σ1 ∝ σ1♯ and 0 ∝ m♯1. Hence, by the side conditions, r♯ must be an instance of (90); so that, as ⟨σ1, divbyzero⟩ ∝ σ1♯ ⊗ α({divbyzero}), we have ηe ∝ ηe♯.
Test Operators. Suppose r is an instance of one of rules (10)–(12). Then, by hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ).
If r is an instance of (10), then h = 1 and property (ii) holds. θ[0](ǫ) ∝ θ♯[0](ǫ) implies ε ∝ ε♯0. Hence ηe ∝ ηe♯.
If r is an instance of (11) or (12), then h = 2. θ[0](ǫ) ∝ θ♯[0](ǫ) implies σ0 ∝ σ0♯ and m0 ∝ m♯0. Thus (θ[1], θ♯[1]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ). If r is an instance of (11), then ε ∝ ε♯1; and if r is an instance of (12), σ1 ∝ σ1♯ and m1 ∝ m♯1 so that, by soundness of ‘⊲⊳’, (m0 ≶ m1) ∝ (m♯0 ⊲⊳ m♯1). Hence, for both concrete rules, ηe ∝ ηe♯.
Negation. The proof when r is an instance of (13) or (14) has the same structure as the proof for the unary minus case shown before.
Conjunction. Suppose r is an instance of one of rules (15)–(17). By hypothesis, (θ[0], θ♯[0]) ∈ S.
9 Here and in the following, whenever we need to prove ι ∝ ι♯0 ⊔ ι♯1, we just prove either ι ∝ ι♯0 or ι ∝ ι♯1 and implicitly use the monotonicity of γ.
Table II. Corresponding concrete and abstract rules and terminals for declarations. Its columns are: the syntactic form Q; the concrete rule r; the abstract rule r♯; the concrete terminal ηq; and the components (ρ♯a, σa♯) and ε♯a of ηq♯ = ⟨(ρ♯a, σa♯), ε♯a⟩. The rows cover the forms nil, ρ0, rec ρ0, gvar id : sT = e, lvar id : sT = e, function id(fps) : sT = e, rec g, g0 ; g1, and d0 ; d1, pairing the concrete rules (21)–(37) with the abstract rules (95)–(103).
If r is an instance of (15) or (16), then h = 1 and property (ii) holds. If r is an instance of (15), by (122), we have θ[0](ǫ) ∝ θ♯[0](ǫ), which implies ε ∝ ε♯0. If r is an instance of (16), by Definition 6.2, σ0 ∝ σff♯ = φ(ρ, σ♯, not b0). Thus, since ff ∝ α({ff}) holds by definition, we have ⟨ff, σ0⟩ ∝ υff♯. Hence, for both concrete rules, ηe ∝ ηe♯.
If r is an instance of (17), then h = 2. By Definition 6.2, σ0 ∝ σtt♯, so that (θ[1], θ♯[1]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that η ∝ ⟨υ1♯, ε♯1⟩. Hence, ηe ∝ ηe♯.
Disjunction. The proof when r is an instance of one of rules (18)–(20) is similar
to that for conjunction.
7.2 Declarations
In Table II, Q denotes a local declaration D or a global declaration G. Moreover, ηq ∈ {Td, Tg} and ηq♯ ∈ {Td♯, Tg♯}; the actual domains for ηq and ηq♯ depend on the context.
By (121) we have N ∝ N♯. Thus, letting N = ⟨Q, σ⟩ and N♯ = ⟨Q, σ♯⟩ for any Q ∈ {D, G}, by Definition 6.4, we have the implicit hypothesis σ ∝ σ♯. We show, using (81) in Definition 6.5, that ηq ∝ ηq♯.
Nil. If r is an instance of (21) then, by the hypothesis, ηq ∝ ηq♯ .
(Recursive) Environment. If r is an instance of (22) or (23) then, by the hypothesis, ηq ∝ ηq♯ .
Global Variable Declaration. If r is an instance of one of rules (24)–(26) then, by the hypothesis, (θ[0], θ♯[0]) ∈ S so that, as h = 1, property (ii) holds. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). If r is an instance of (24), then θ[0](ǫ) ∝ θ♯[0](ǫ) implies ε ∝ ε♯0; by Definition 6.1 and monotonicity of γ, we have cleanupd(ε) ∝ cleanupd♯(ε♯0 ⊔ ε♯1), i.e., ηq ∝ ηq♯. If r is an instance of (25) or (26), then θ[0](ǫ) ∝ θ♯[0](ǫ) implies υ ∝ υ♯. By Definition 6.1, newd(υ) ∝ newd♯(υ♯). By the side condition for abstract rule (98), newd♯(υ♯) = ((σ1♯, l), ε♯1). By the side conditions for (25) and (26), either newd(υ) = ε ∝ ε♯1 (and hence cleanupd(ε) ∝ cleanupd♯(ε♯0 ⊔ ε♯1) by Definition 6.1), or newd(υ) = (σ1, l) ∝ (σ1♯, l). Thus, in both cases, ηq ∝ ηq♯.
Local Variable Declaration. The proof for local variable declaration, when r is an instance of one of rules (27)–(29), is the same as that for global variable declaration, with the few necessary adjustments (i.e., using unmarks, unmark♯s, news, news♯ and i in place of cleanupd, cleanupd♯, newd, newd♯ and l).
Function Declaration. If r is an instance of (30) then, by the hypothesis, ηq ∝ ηq♯ .
Recursive Declaration. If r is an instance of (31), then h = 2 and, by the hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ), which implies that ρ0 denotes the same environment in both r and r♯ and σ0 ∝ σ0♯. Hence, (θ[1], θ♯[1]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ), which implies η ∝ η♯. Hence, ηq ∝ ηq♯.
Global Sequential Composition. If r is an instance of one of rules (32)–(34), then 1 ≤ h ≤ 2 and (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ).
If r is an instance of (32), then h = 1 and property (ii) holds. Also, θ[0](ǫ) ∝ θ♯[0](ǫ) implies ε ∝ ε♯0 and hence ηq ∝ ηq♯.
If r is an instance of (33) or (34), then h = 2 and, since σ0 ∝ σ0♯, (θ[1], θ♯[1]) ∈ S, so that property (ii) holds. By (122), we have θ[1](ǫ) ∝ θ♯[1](ǫ). If r is an instance of (33), then θ[1](ǫ) ∝ θ♯[1](ǫ) implies ε ∝ ε♯1, so that ηq ∝ ηq♯. If r is an instance of (34), then θ[0](ǫ) ∝ θ♯[0](ǫ) and θ[1](ǫ) ∝ θ♯[1](ǫ) imply that σ1 ∝ σ1♯ and that the two environments ρ0 and ρ1 are the same in both r and r♯. Hence, their composition ρ0[ρ1] is the same in both rules r and r♯, so that ηq ∝ ηq♯.
Local Sequential Composition. The proof when r is an instance of one of rules (35)–(37) is similar to that for global sequential composition.
7.3 Statements
For this part of the proof, we use Table III. By (121), N ∝ N♯. Thus, letting N = ⟨s, σ⟩ and N♯ = ⟨s, σ♯⟩, by Definition 6.4, we have the implicit hypothesis σ ∝ σ♯. We show, using (82) in Definition 6.5, that ηs ∝ ηs♯.
Nop. If r is an instance of (38) then, by the hypothesis, ηs ∝ ηs♯.
Assignment. Suppose r is an instance of (39) or (40). Then h = 1 and, by the hypothesis, (θ[0], θ♯[0]) ∈ S and hence property (ii) holds. By (122) we have θ[0](ǫ) ∝ θ♯[0](ǫ). If r is an instance of (39), ε ∝ ε♯0. Moreover, if r is an instance of (40), ⟨sval, σ0⟩ ∝ ⟨sval♯0, σ0♯⟩ so that, by Definition 6.1, σ0[ρ(id) := sval] ∝ σ0♯[ρ(id) := sval♯]; letting σ0♯[ρ(id) := sval♯] = (σ1♯, ε♯1), this means that either we have σ0[ρ(id) := sval] ∈ ExceptState, so that σ0[ρ(id) := sval] ∝ ε♯1, or we have σ0[ρ(id) := sval] ∈ Mem, so that σ0[ρ(id) := sval] ∝ σ1♯. In all cases, ηs ∝ ηs♯.
Table III. Corresponding concrete and abstract rules and terminals for statements. Its columns are: the syntactic form S; the concrete rule r; the abstract rule r♯; the concrete terminal ηs; and the components σa♯ and ε♯a of ηs♯ = ⟨σa♯, ε♯a⟩. The rows cover the forms nop, id := e, s0 ; s1, d; s, if e then s0 else s1, while e do s0, throw s, try s catch k, try s0 finally s1, and id := id0(e1, . . . , en), pairing the concrete rules (38)–(64) with the abstract rules (104)–(114).
Statement Sequence. Suppose r is an instance of (41) or (42). Then 1 ≤ h ≤ 2 and, by the hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). If r is an instance of rule (41), as h = 1, property (ii) holds and also ε ∝ ε♯0. If r is an instance of (42), then σ0 ∝ σ0♯ so that (θ[1], θ♯[1]) ∈ S; also, as h = 2, property (ii) holds; by (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that η ∝ ⟨σ1♯, ε♯1⟩. Hence, in both cases, ηs ∝ ηs♯.
Block. Suppose r is an instance of (43) or (44). Then 1 ≤ h ≤ 2 and, by the hypothesis and Definition 6.1, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). If r is an instance of (43), as h = 1, property (ii) holds and also ε ∝ ε♯0. If r is an instance of (44), then σ0 ∝ σ0♯ so that (θ[1], θ♯[1]) ∈ S; also, as h = 2, property (ii) holds; by (122), θ[1](ǫ) ∝ θ♯[1](ǫ), so that η ∝ ⟨σ1♯, ε♯1⟩ and therefore, by Definition 6.1, unmarks(η) ∝ (unmark♯s(σ1♯), unmark♯s(ε♯1)). Hence, in both cases, ηs ∝ ηs♯.
Conditional. Suppose r is an instance of one of rules (45)–(47). Then 1 ≤ h ≤ 2 and, by the hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ).
If r is an instance of (45), h = 1, property (ii) holds and, as ε ∝ ε♯0, ηs ∝ ηs♯.
If r is an instance of (46) or (47), then h = 2 and σ0 ∝ σ0♯. By the side conditions and Definition 6.2, if tt ∝ t♯, then ⟨tt, σ0⟩ ∝ ⟨t♯, σtt♯⟩ and, if ff ∝ t♯, then ⟨ff, σ0⟩ ∝ ⟨t♯, σff♯⟩. Hence, if (46) applies, θ[1](ǫ) ∝ θ♯[1](ǫ) so that η ∝ ⟨σ1♯, ε♯1⟩; and, if (47) applies, θ[1](ǫ) ∝ θ♯[2](ǫ) so that η ∝ ⟨σ2♯, ε♯2⟩. Hence, in both cases, ηs ∝ ηs♯.
While. Suppose r is an instance of one of rules (48)–(51). Then 1 ≤ h ≤ 3 and, by hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ).
If r is an instance of (48), h = 1, property (ii) holds and, as ε ∝ ε♯0, ηs ∝ ηs♯.
Suppose r is an instance of (49), (50) or (51). By the side conditions and Definition 6.2, if tt ∝ t♯, then ⟨tt, σ0⟩ ∝ ⟨t♯, σtt♯⟩ and, if ff ∝ t♯, then ⟨ff, σ0⟩ ∝ ⟨t♯, σff♯⟩.
If r is an instance of (49), then, as h = 1, property (ii) holds and hence ηs ∝ ηs♯.
If r is an instance of (50), then h = 2. Thus (θ[1], θ♯[1]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that ε ∝ ε♯1. Hence ηs ∝ ηs♯.
If r is an instance of (51), then h = 3. Thus (θ[1], θ♯[1]) ∈ S. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that σ1 ∝ σ1♯. Thus (θ[2], θ♯[2]) ∈ S and property (ii) holds. By (122), θ[2](ǫ) ∝ θ♯[2](ǫ) so that η ∝ ⟨σ2♯, ε♯2⟩. Hence ηs ∝ ηs♯.
Throw. Suppose r is an instance of (52). Then s = χ ∈ RTSExcept (so that rule (111) is not applicable). By definition of α : ℘(RTSExcept) → RTSExcept♯, χ ∝ α({χ}). Since, by hypothesis, σ ∝ σ♯, we have σ♯ ⊗ α({χ}) = (σ♯, α({χ})), so that, by the side condition for (110), (σ, χ) ∝ ε♯. Hence ηs ∝ ηs♯.
Suppose r is an instance of (53) or (54). Then s = e ∈ Exp (so that rule (110) is not applicable). By hypothesis, (θ[0], θ♯[0]) ∈ S and, as h = 1, property (ii) holds. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). If r is an instance of (53), then ε ∝ ε♯0, while, if r is an instance of (54), sval ∝ sval♯ and σ0 ∝ σ0♯. Hence, in both cases, ηs ∝ ηs♯.
Try Blocks. Suppose r is an instance of (55)–(59). By hypothesis, (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ). Note that if r is an instance of (55) or (56), only abstract rule (112) will be applicable, while if r is an instance of (57)–(59), only abstract rule (113) will be applicable.
If r is an instance of (55), h = 1, property (ii) holds and, as σ0 ∝ σ0♯, ηs ∝ ηs♯.
If r is an instance of (56), then ε0 ∝ ε♯0 so that (θ[1], θ♯[1]) ∈ S. Thus, as h = 2, property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that ⟨u, η⟩ ∝ ⟨(σ1♯, ε♯1), ε♯2⟩, where u ∈ {caught, uncaught}. By Definition 6.5, if u = caught, then η ∝ ⟨σ1♯, ε♯1⟩ and, if u = uncaught, then η ∝ ε♯2. Hence, in both cases, ηs ∝ ηs♯.
If r is an instance of rule (57), σ0 ∝ σ0♯; hence (θ[1], θ♯[1]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that η ∝ ⟨σ2♯, ε♯2⟩. Hence ηs ∝ ηs♯.
If r is an instance of (58) or (59), ⟨σ0, ξ0⟩ ∝ ⟨σ1♯, ξ1♯⟩; hence σ0 ∝ σ1♯ and ξ0 ∝ ξ1♯ so that (θ[1], θ♯[2]) ∈ S and property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[2](ǫ). Thus, if (58) applies, σ1 ∝ ⟨σ3♯, ε♯3⟩ so that ⟨σ1, ξ0⟩ ∝ (σ3♯ ⊗ ξ1♯); and, if (59) applies, ε ∝ ⟨σ3♯, ε♯3⟩ so that ε ∝ ε♯3. Hence, in both cases, ηs ∝ ηs♯.
Function call. If r is an instance of one of rules (62)–(64), then 1 ≤ h ≤ 3 and ℓ = 3. The conditions (60) and (61) are also conditions for abstract rule (114). By hypothesis and Definition 6.1, (θ[0], θ♯[0]) ∈ S; by (122), θ[0](ǫ) ∝ θ♯[0](ǫ).
Table IV. Corresponding concrete and abstract rules and terminals for function bodies. Its columns are: the syntactic form B; the concrete rule r; the abstract rule r♯; the concrete terminal ηb; and the components (sval♯a, σa♯) and ε♯a of ηb♯ = ⟨(sval♯a, σa♯), ε♯a⟩. The rows cover the forms let d in s result e and extern : sT, pairing the concrete rules (65)–(68) with the abstract rules (115) and (116).
If r is an instance of (62), then ε ∝ ε♯0, h = 1 and property (ii) holds. Hence ηs ∝ ηs♯.
If r is an instance of (63), then σ0 ∝ σ0♯ so that, by Definition 6.1, (θ[1], θ♯[1]) ∈ S; also, as h = 2, property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) and ε ∝ ε♯1; by Definition 6.1, unmarks(unlinks(ε)) ∝ unmark♯s(unlink♯s(ε♯1)). Hence ηs ∝ ηs♯.
If r is an instance of (64), then σ1 ∝ σ1♯ so that, by Definition 6.1, (θ[2], θ♯[2]) ∈ S; also, as h = 3, property (ii) holds. By (122), θ[2](ǫ) ∝ θ♯[2](ǫ) and η2 ∝ ⟨σ2♯, ε♯2⟩; by Definition 6.1, unmarks(η2) ∝ (unmark♯s(σ2♯), unmark♯s(ε♯2)). Hence ηs ∝ ηs♯.
7.4 Function Bodies
For this part of the proof, we use Table IV. By (121), N ∝ N♯. Thus, letting N = ⟨B, σ⟩ and N♯ = ⟨B, σ♯⟩, by Definition 6.4, we have the implicit hypothesis σ ∝ σ♯. We show, using (82) in Definition 6.5, that ηb ∝ ηb♯.
Suppose r is an instance of one of rules (65)–(67). By hypothesis and Definition 6.1, marks(σ) ∝ mark♯s(σ♯), so that (θ[0], θ♯[0]) ∈ S. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ).
If r is an instance of (65), ε ∝ ε♯0, h = 1 and property (ii) holds. Hence ηb ∝ ηb♯.
If r is an instance of (66), σ0 ∝ σ0♯; hence (θ[1], θ♯[1]) ∈ S and, as h = 2, property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ) so that ε ∝ ε♯1. By Definition 6.1, unmarks(ε) ∝ unmark♯s(ε♯1); hence ηb ∝ ηb♯.
If r is an instance of (67), σ0 ∝ σ0♯; hence (θ[1], θ♯[1]) ∈ S. By (122) we have θ[1](ǫ) ∝ θ♯[1](ǫ), so that σ1 ∝ σ1♯; hence (θ[2], θ♯[2]) ∈ S; as h = 3, property (ii) holds. Again, by (122), θ[2](ǫ) ∝ θ♯[2](ǫ); hence, η0 ∝ ⟨σ2♯, ε♯2⟩. By Definition 6.1, unmarks(η0) ∝ (unmark♯s(σ2♯), unmark♯s(ε♯2)); hence ηb ∝ ηb♯.
Suppose r is an instance of (68). Then σ = (µ, w) and σ0 = (µ0, w). By the hypothesis, σ ∝ σ♯; hence, by the side conditions, σ0 ∝ σ0♯; also, ξ ∝ ⊤, so that ηb ∝ ηb♯.
7.5 Catch Clauses
For this part of the proof, we use Table V. By (121), N ∝ N♯. Thus, letting N = ⟨K, ε⟩ and N♯ = ⟨K, ε♯⟩, by Definition 6.4, we have the implicit hypothesis ε ∝ ε♯. We show, using (83) in Definition 6.5, that ηk ∝ ηk♯.
Catch. Let K have the form (p) s for some exception declaration p.
Suppose r is an instance of one of rules (69)–(71). Then, by the hypothesis and
Definition 6.3, ε ∝ φ+ (p, ε♯ ); by the side conditions for the abstract rules, ε ∝ ε♯0 .
Table V. Corresponding concrete and abstract rules and terminals for catch clauses. Its columns are: the syntactic form K; the concrete rule r; the abstract rule r♯; the concrete terminal ηk; and the components ηa♯ and ε♯a of ηk♯ = ⟨ηa♯, ε♯a⟩. The rows cover the forms (any) s, (χ) s, (sT) s, (id : sT) s and k0 ; k1, pairing the concrete rules (69)–(74) with the abstract rules (117)–(119).
If r is an instance of (69) then ε = (σ, ξ); by Definition 6.1, σ ∝ mem(ε♯0). Hence (θ[0], θ♯[0]) ∈ S and, as h = 1, property (ii) holds. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ), which implies η0 ∝ η1♯, so that ηk ∝ ηk♯.
If r is an instance of (70) or (71), then ε = (σ, sval) and type(sval) = sT; by Definition 6.1, σ ∝ mem(ε♯0) and sval ∝ sT(ε♯0). Hence, by Definition 6.1,

    news(sval, marks(σ)) ∝ news♯(sT(ε♯0), mark♯s(mem(ε♯0))) = ((σ2♯, i), ε♯2). (123)

If (70) applies, then h = 0, so that property (ii) holds trivially, and, by the side condition, ε0 = news(sval, marks(σ)) so that, by (123), ε0 ∝ ε♯2; by Definition 6.1, unmarks(ε0) ∝ unmark♯s(ε♯2). If (71) applies, then, by the side condition, (σ0, i) = news(sval, marks(σ)) so that, by (123), σ0 ∝ σ2♯. Hence, (θ[0], θ♯[0]) ∈ S and, as h = 1, property (ii) holds. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ), which implies η0 ∝ ⟨σ3♯, ε♯3⟩. Thus, by Definition 6.1, unmarks(η0) ∝ (unmark♯s(σ3♯), unmark♯s(ε♯3)). Hence, in both cases, ηk ∝ ηk♯.
If r is an instance of (72), then h = 0, so that property (ii) holds trivially. We have ε = (σ, ξ) and, by the side condition, p ∉ {ξ, cT, any}, where cT = type(ξ). If p ∈ {χ, sT} then abstract rule (117) applies so that, by the hypothesis, the side conditions and Definition 6.3, (σ, ξ) ∝ φ−(p, ε♯) = ε♯1. Similarly, if p = id : sT and abstract rule (118) applies, (σ, ξ) ∝ φ−(sT, ε♯) = ε♯1. Hence, in both cases, ηk ∝ ηk♯.
Catch Sequence. If r is an instance of (73), then, as h = 1 and (θ[0], θ♯[0]) ∈ S, property (ii) holds. By (122), θ[0](ǫ) ∝ θ♯[0](ǫ), so that ⟨caught, η0⟩ ∝ ⟨(σ0♯, ε♯0), ε♯1⟩. By (83) in Definition 6.5, η0 ∝ (σ0♯, ε♯0), which implies ηk ∝ ηk♯.
If r is an instance of (74), then (θ[0], θ♯[0]) ∈ S and, by (122), θ[0](ǫ) ∝ θ♯[0](ǫ). Thus, ⟨uncaught, ε0⟩ ∝ ⟨(σ0♯, ε♯0), ε♯1⟩, so that, by (83) in Definition 6.5, ε0 ∝ ε♯1. Hence (θ[1], θ♯[1]) ∈ S and, as h = 2, property (ii) holds. By (122), θ[1](ǫ) ∝ θ♯[1](ǫ), so that η ∝ ⟨(σ1♯, ε♯2), ε♯3⟩, which implies ηk ∝ ηk♯.
A few observations regarding the precision of the proposed approximations are in order. Consider an abstract tree θ♯ ∈ Θ♯ such that θ♯(ǫ) = (ρ ⊢β N♯ → η♯), where N♯ ∈ Γβ♯s and η♯ ∈ Ts♯. If the concretization functions relating the concrete and abstract domains are strict, then the abstract tree above will encode the following definite information:
—non-terminating computations (i.e., unreachable code), if η ♯ = ⊥;
—non-exceptional computations, if η ♯ = hσ ♯ , none♯ i and σ ♯ 6= ⊥;
—exceptional computations, if η ♯ = h⊥, ε♯ i and ε♯ 6= none♯ .
Obviously, a precise propagation of this definite information requires that all of the
abstract domain operators are strict too. Hence, if θ♯(ǫ) = (ρ ⊢β ⟨s, ⊥⟩ → η♯),
we will also have η ♯ = ⊥. Similar properties hold when considering expressions,
declarations and catch clauses.
8. COMPUTING ABSTRACT TREES
The results of the previous section (Theorems 7.2 and 7.3) guarantee that each
concrete tree can be safely approximated by an abstract tree, provided the non-terminal configurations in the roots satisfy the approximation relation.
For expository purposes, suppose we are interested in a whole-program analysis.
For each (concrete and abstract) pair of initial memories satisfying σi ∝ σi♯ and each g0 = (g; gvar x : integer = 0), where g is a valid program, we obtain that any abstract tree θ0♯ ∈ Θ♯ such that θ0♯(ǫ) = (∅ ⊢∅ ⟨g0, σi♯⟩ → η0♯) correctly approximates each concrete tree θ0 ∈ Θ such that θ0(ǫ) = (∅ ⊢∅ ⟨g0, σi⟩ → η0). Notice that θ0♯ is a finite tree. Letting η0♯ = ((ρ0, σ0♯), ε♯0) and assuming η0 ∉ ExceptState, we obtain η0 = ⟨ρ0, σ0⟩, where σ0 ∝ σ0♯. Hence, letting s0 = (x := main()) and ρ0 : β, any abstract tree θ1♯ ∈ Θ♯ such that θ1♯(ǫ) = (ρ0 ⊢β ⟨s0, σ0♯⟩ → η1♯) correctly approximates each concrete tree θ1 ∈ Θ such that either θ1(ǫ) = (ρ0 ⊢β ⟨s0, σ0⟩ → η1) or θ1(ǫ) = (ρ0 ⊢β ⟨s0, σ0⟩ −→∞). We are thus left with the problem of computing (any) one of these abstract trees, which are usually infinite. In particular, we are interested in choosing θ1♯ in a subclass of trees admitting finite representations and, within this class, in maintaining a level of accuracy that is compatible with the complexity/precision trade-off dictated by the application.
A classical choice is to restrict attention to rational trees, that is, trees with only
finitely many subtrees: the algorithm sketched in [Sch95; Sch97; Sch98], which
assumes that the abstract domain is Noetherian (i.e., all of its ascending chains are
finite), guides the analysis toward the computation of a rational tree by forcing each
infinite path to contain a repetition node. Here below we describe a variation, also
working for abstract domains that admit infinite ascending chains, that exploits
widening operators [CC76; CC77a; CC92b].
Definition 8.1. (Widening operators.) Let (D♯, ⊑, ⊥, ⊔) be an abstract domain. The partial operator ∇ : D♯ × D♯ ⇀ D♯ is a widening if:
—for all x♯, y♯ ∈ D♯, y♯ ⊑ x♯ implies that y♯ ∇ x♯ is defined and x♯ ⊑ y♯ ∇ x♯;
—for all increasing chains x♯0 ⊑ x♯1 ⊑ · · ·, the increasing chain defined by y♯0 def= x♯0 and y♯i+1 def= y♯i ∇ (y♯i ⊔ x♯i+1), for i ∈ N, is not strictly increasing.
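As an illustration (not part of the formal development above), the classic widening on integer intervals satisfies both conditions of Definition 8.1: unstable bounds are pushed to infinity, so every chain built with it stabilizes after finitely many steps. The sketch below is in Python; the tuple representation of intervals is an assumption of the example.

# A sketch of the classic interval widening over integers extended
# with +/- infinity; an unstable bound is widened to infinity once,
# so iterated application stabilizes every increasing chain.

INF = float('inf')

def join(y, x):                      # least upper bound of two intervals
    (yl, yh), (xl, xh) = y, x
    return (min(yl, xl), max(yh, xh))

def widen(y, x):                     # y nabla x, assuming y is inside x
    (yl, yh), (xl, xh) = y, x
    lo = yl if xl >= yl else -INF    # unstable lower bound -> -infinity
    hi = yh if xh <= yh else INF     # unstable upper bound -> +infinity
    return (lo, hi)

# Example: the chain [0,0], [0,1], [0,2], ... stabilizes immediately:
y = (0, 0)
y = widen(y, join(y, (0, 1)))        # -> (0, inf)
y = widen(y, join(y, (0, 2)))        # -> (0, inf): the chain is stationary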
The algorithm works by recursively constructing a finite approximation for the abstract subtree rooted in the current node (initially, the root of the whole tree). Let n = (ρ ⊢β ⟨q, yn♯⟩ → rn) be the current node, where q is a uniquely labeled program phrase,10 yn♯ ∈ D♯ is either an abstract memory σ♯ ∈ Mem♯ or an abstract exception state ε♯ ∈ ExceptState♯, and rn is a placeholder for the “yet to be computed” conclusion. The node n is processed according to the following alternatives.
(i) If no ancestor of n is labeled by the program phrase q, the node has to be expanded using an applicable abstract rule instance. Namely, descendants of the premises of the rule are (recursively) processed, one at a time and from left to right. When the expansion of all the premises has been completed, including the case when the rule has no premise at all, the marker rn is replaced by an abstract value computed according to the conclusion of the rule.
(ii) If there exists an ancestor node m = (ρ ⊢β ⟨q, ym♯⟩ → rm) of n labeled by the same program phrase q and such that yn♯ ⊑ ym♯, i.e., if node n is subsumed by node m, then the node is not expanded further and the placeholder rn is replaced by the least fixpoint of the equation rn = fm(rn), where fm is the expression corresponding to the conclusion of the abstract rule that was used for the expansion of node m.11 Intuitively, an infinite subtree rooted in node m has been identified and the “repetition node” n is transformed to a back edge to the root m of this subtree.
(iii) Otherwise, there must be an ancestor node m = (ρ ⊢β ⟨q, ym♯⟩ → rm) of n labeled by the same program phrase q, but the subsumption condition yn♯ ⊑ ym♯ does not hold. Then, to ensure convergence, the abstract element yn♯ in node n is further approximated by ym♯ ∇ (ym♯ ⊔ yn♯) and we proceed as in case (i).
Termination of the algorithm can be proved thanks to the following observations:
an infinite abstract tree necessarily has infinite paths (since the tree is finitely
branching); each infinite path necessarily has an infinite number of nodes labeled by
the same program phrase (since the set of program phrases is finite); the application
of case (iii) leads to the computation, along each infinite path, of increasing chains
of abstract elements and, by Definition 8.1, these chains are necessarily finite; hence,
case (ii) is eventually applied to all infinite paths, leading to a finite representation
of the rational tree where all the infinite paths are expressed by using back edges.
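To make the effect of cases (i)–(iii) concrete, the following Python sketch collapses the tree construction for a single while loop into the familiar loop-head iteration: expansion corresponds to re-executing the body, case (iii) widens the loop-head value, and case (ii) detects subsumption and stops. The domain operations filter_true, exec_body, leq, join and widen are hypothetical parameters of the sketch, not definitions from this paper.

# A sketch of what cases (i)-(iii) achieve on `while e do s0`:
# widen at the loop head until the next value is subsumed.
def analyze_while(y0, filter_true, exec_body, leq, join, widen):
    y = y0                                 # abstract memory at the loop head
    while True:
        y_next = exec_body(filter_true(y)) # case (i): expand the body once
        if leq(y_next, y):                 # case (ii): subsumption reached
            return y                       # y is a loop-head invariant
        y = widen(y, join(y, y_next))      # case (iii): widen and re-expand

With a widening such as the interval one sketched after Definition 8.1, this iteration terminates even on abstract domains admitting infinite ascending chains.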
It should be stressed that, as far as efficiency is concerned, the algorithm outlined
above can be improved by the adoption of well studied memoization techniques;
as noted in [Sch97], by clearly separating design concerns from implementation
concerns, the adopted methodology produces simpler proofs of correctness. Also
note that the choice of the widening operator has a deep impact on the precision of
the results obtained and, moreover, even a precise widening can lead to inaccurate
results if applied too eagerly. However, precision problems can be mitigated by the
application of suitable “widening delay” techniques [CC92b; HPR97; BHRZ05].
10 Unique labels (e.g., given by the address of the root node for q in the program parse tree) ensure that different occurrences of the same syntax are not confused [Sch95]; this also means that, in each node n, the type and execution environments ρ and β are uniquely determined by q.
11 As explained in [Sch95; Sch97; Sch98], the computation of such a least fixpoint (in the context of a coinductive interpretation of the abstract rules) is justified by the fact that here we only need to approximate the conclusions produced by the terminating concrete computations, i.e., by the concrete rules that are interpreted inductively. Also note that the divergence rules have no conclusion at all.
9. EXTENSIONS
In this section we outline how the techniques presented in the first part of the
paper can be extended so as to encompass the C language and all the imperative
aspects of C++ (including, of course, exceptions): Section 9.1 shows how the set of
primitive types can be extended by discussing the introduction of bounded integer
and floating-point types; Section 9.2 provides a sketch of how C-like pointers, arrays
and records can be dealt with; dynamic memory allocation and deallocation is
treated in Section 9.3; and Section 9.4 illustrates how all the non-structured control
flow mechanisms of C and C++ can be accounted for.
Once an ABI (Application Binary Interface) has been fixed and its characteristics
have been reflected into concrete and abstract memory structures, C struct and
union compound types can be accommodated, even in presence of pointer casts
and unrestricted pointer arithmetics, by compiling down all their uses to memory
reads and writes performed through pointer dereferencing [Min06].
While we have not yet tried to incorporate object-oriented features (like classes, inheritance, method calls with dynamic binding and so forth), we do not see what, in the current design, would prevent such an extension.
9.1 Additional Arithmetic Types
The addition of more arithmetic types such as (signed and unsigned) finite integer
and floating-point types is fairly straightforward. It is assumed that a preprocessor
will add, as needed, a value cast operator that, for a given numeric type and constant
expression, ensures that either the returned value is in the domain of that type or
an appropriate exception is thrown. With this assumption, all the operations need
only to be specified for operands of the very same type.
9.1.1 Syntax. For floating-point numbers, we add a new basic type float that represents a fixed and finite subset of the reals together with a set of special values denoting infinities, the NaN (Not a Number) value and so forth. The exact format and range of a floating-point literal is unspecified. The addition of other floating-point types to represent double and extended precision numbers can be done in the same way. To exemplify the inclusion of signed and unsigned bounded integer types, we also add the signed char and unsigned char basic types.
Integer types. iT ∈ iType def= {integer, signed char, unsigned char, . . .};
Numeric types. nT ∈ nType def= iType ∪ {float, . . .};
Basic types. T ∈ Type def= nType ∪ {boolean};
Floating-point literals. fl ∈ Float;
Signed char literals. sc ∈ sChar;
Unsigned char literals. uc ∈ uChar.
Expressions and constants. Expressions are extended with floating-point constants,
bounded integer constants, and vcast, a value cast operator for converting values
from one basic type to another, when possible, or yielding an appropriate exception:
Exp ∋ e ::= . . . | fl | sc | uc | vcast(nT, e)
Con ∋ con ::= . . . | fl | sc | uc.
The functions dom : cType → {Integer, Bool, RTSExcept, Float, sChar, uChar} and type : sVal ⇀ sType are easily extended:

    dom(float) def= Float,                type(fl) def= float,
    dom(signed char) def= sChar,          type(sc) def= signed char,
    dom(unsigned char) def= uChar,        type(uc) def= unsigned char.
9.1.2 Static Semantics. The required adjustments to functions FI and DI are straightforward and thus omitted. Then, we add the following static semantic rules, where ◦ ∈ {+, −, ∗, /, %} and ≶ ∈ {=, ≠, <, ≤, ≥, >}:

Expressions.

    β ⊢I fl : float        β ⊢I sc : signed char        β ⊢I uc : unsigned char

    β ⊢I e : nT
    ─────────────
    β ⊢I −e : nT

    β ⊢I e0 : nT    β ⊢I e1 : nT
    ─────────────────────────────
    β ⊢I e0 ◦ e1 : nT

    β ⊢I e0 : nT    β ⊢I e1 : nT
    ─────────────────────────────
    β ⊢I e0 ≶ e1 : boolean

    β ⊢I e : T0
    ──────────────────────── if casting T0 to T1 is legal.
    β ⊢I vcast(T1, e) : T1
9.1.3 Concrete Dynamic Semantics. The added numeric types and the operations upon them bring in a considerable degree of complexity. Consider the C language, for example: unsigned bounded integers employ modular arithmetic; for signed bounded integers, overflow yields undefined behavior; the results of floating-point operations depend on the rounding mode in effect and on the settings that cause floating-point exceptions to be trapped or ignored; relational operators may or may not raise a floating-point exception when one or both arguments are NaN. In order to factor out these details and delegate them to the memory structure, we resort to a device like the one used to model supported and unsupported language elements in the abstract semantics. We thus postulate the existence of the partial functions

    evalvc : (nType × Con × Mem) ⇀ ValState ⊎ ExceptState,
    eval−1 : (Con × Mem) ⇀ ValState ⊎ ExceptState,
    eval◦ : (Con × Con × Mem) ⇀ ValState ⊎ ExceptState,
    eval≶ : (Con × Con × Mem) ⇀ ValState ⊎ ExceptState,

that model the cast operator, unary minus, the binary operators ◦ ∈ {+, −, ∗, /, %} and the relational operators ≶ ∈ {=, ≠, <, ≤, ≥, >}, respectively. Such functions need not be always defined: for example, there is no need to define eval+(con0, con1, σ) for the case type(con0) ≠ type(con1).
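For concreteness, here is one possible shape for evalvc on the two bounded integer types introduced above, written in Python. The wrap-around behavior for unsigned types and the particular exception value are assumptions standing in for whatever the chosen ABI and memory structure prescribe; they are not definitions taken from this paper.

# A sketch of evalvc for bounded integer types: unsigned types wrap
# modularly, while an out-of-range signed result yields an exception
# state (here, signed overflow is mapped to an RTS exception).

RANGES = {                                  # hypothetical ABI ranges
    'unsigned char': (0, 255),
    'signed char': (-128, 127),
}

def evalvc(nT, con, sigma):
    """Value cast of constant `con` to type `nT` in memory `sigma`."""
    lo, hi = RANGES[nT]
    if lo <= con <= hi:
        return ('value', con, sigma)        # a ValState
    if lo == 0:                             # unsigned: modular wrap
        return ('value', con % (hi + 1), sigma)
    return ('exception', 'overflow', sigma) # an ExceptState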
Value casts. The following concrete rule schemata use the corresponding evaluation function to specify the execution of the vcast operator.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────────
    ρ ⊢β ⟨vcast(nT, e), σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨con, σ0⟩
    ──────────────────────────────────────────────
    ρ ⊢β ⟨vcast(nT, e), σ⟩ → evalvc(nT, con, σ0)

Arithmetic evaluation. By using the evaluation functions, we can substitute rules (5), (8) and (9) with the following (note that they also capture the case when a divide-by-zero exception is thrown):

    ρ ⊢β ⟨e, σ⟩ → ⟨con, σ0⟩
    ───────────────────────────────
    ρ ⊢β ⟨−e, σ⟩ → eval−1(con, σ0)

    ρ ⊢β ⟨e0, σ⟩ → ⟨con0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨con1, σ1⟩
    ────────────────────────────────────────────────────────
    ρ ⊢β ⟨e0 ◦ e1, σ⟩ → eval◦(con0, con1, σ1)

Arithmetic tests. Similarly, rule (12) is replaced by the more general rule

    ρ ⊢β ⟨e0, σ⟩ → ⟨con0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨con1, σ1⟩
    ────────────────────────────────────────────────────────
    ρ ⊢β ⟨e0 ≶ e1, σ⟩ → eval≶(con0, con1, σ1)
9.2 C-like Pointers, Arrays and Records
9.2.1 Syntax. Recall that in Sections 3 and 4 we defined the set of storable types,
whose values can be read from and written to memory, and the set of denotable
types, that can occur in declarations. The introduction of pointer, array and record
types requires the adoption of a finer classification. The set of all memory types is
partitioned into object types and function types: the latter differ in that we cannot
read or update the “value” of a function; rather, we execute it. Object types
are further partitioned into elementary types (also called scalar types, including
basic types and pointer types) and aggregate types (arrays and records). All the
elementary types are storable, meaning that their values can be read directly from
or written directly to memory, as well as passed to and returned from functions.
Regarding aggregate types, the C language prescribes that record types are storable,
whereas array types are not. Pointer, array and record type derivations can be
applied repeatedly to obtain, e.g., multi-dimensional arrays.
Types.
eType ∋ eT ::= T | pT
oType ∋ oT ::= sT | aT
pType ∋ pT ::= mT∗
sType ∋ sT ::= eT | rT
fType ∋ fT ::= fps → sT
mType ∋ mT ::= oT | fT
aType ∋ aT ::= array m of oT
dType ∋ dT ::= mT loc
rType ∋ rT ::= record id of id1 : oT1 , . . . , idj : oTj
We assume, without loss of generality, that the field names of record types are
unique across the entire program (for example, id1 , . . . , idj could contain id as
some kind of special prefix).
Identifiers are no longer the only way to denote a memory structure location.
This can also be referred to by combining a pointer with the indirection operator
‘∗’, an array with the indexing operator, or a record with the field selection operator.
Hence, we introduce the concept of lvalue, which can be read as “location-valued
expression.”
Offsets and lvalues.
Offset ∋ o ::= ǫ | [e] · o | . id · o
LValue ∋ lval ::= id · o | (∗ e) · o
Consequently, the syntactic production for expressions generating identifiers, as
well as the productions for statements generating assignments and function calls,
are replaced by more general versions using lvalues; expressions and declarations
are also extended with the address-of operator, null pointers and array variables.
Expressions, declarations and statements.
Exp ∋ e ::= . . . | val lval | & lval | (pT) 0
Stmt ∋ s ::= . . . | lval := e | lval := e(es)
Glob ∋ g ::= . . . | gvar id : aT = e
Decl ∋ d ::= . . . | lvar id : aT = e
9.2.2 Static Semantics. The required adjustments to functions FI and DI are straightforward and thus omitted. The well-typedness of offsets and lvalues is encoded by the following predicates:

    β, dT0 ⊢I o : dT1,    meaning that o is compatible with dT0 and has type dT1 in β;
    β ⊢I lval : dT,       meaning that lval is well-formed and has type dT in β.
The static semantics is thus extended by the following rules.12 Note that the evaluation of an lvalue as an expression, val lval, causes a suitable type conversion, sometimes referred to as “type decay.” Pointer arithmetics can only be applied to object types. In function calls, the callee is specified via an expression having function pointer type (typically resulting from a type decay).
Offset.

    ──────────────────
    β, dT ⊢I ǫ : dT

    β ⊢I e : integer    β, oT loc ⊢I o : dT
    ─────────────────────────────────────────
    β, (array m of oT) loc ⊢I [e] · o : dT

    β, oTi loc ⊢I o : dT
    ───────────────────────────────────────────────────────────── if i ∈ {1, . . . , j}
    β, (record id of id1 : oT1 ; . . . ; idj : oTj) loc ⊢I . idi · o : dT
Lvalue.

    β, dT0 ⊢I o : dT1
    ──────────────────── if β(id) = dT0
    β ⊢I id · o : dT1

    β ⊢I e : mT∗    β, mT loc ⊢I o : dT
    ─────────────────────────────────────
    β ⊢I (∗ e) · o : dT
Null pointer and address-of operator.

    ─────────────────────
    β ⊢I (pT) 0 : pT

    β ⊢I lval : mT loc
    ──────────────────────
    β ⊢I & lval : mT∗

12 The previous rules for identifier, assignment and function call are no longer used.
Type decay.

    β ⊢I lval : sT loc
    ──────────────────────
    β ⊢I val lval : sT

    β ⊢I lval : (array m of oT) loc
    ─────────────────────────────────
    β ⊢I val lval : oT∗

    β ⊢I lval : fT loc
    ──────────────────────
    β ⊢I val lval : fT∗

Pointer arithmetics.

    β ⊢I e0 : oT∗    β ⊢I e1 : integer        β ⊢I e0 : integer    β ⊢I e1 : oT∗
    ──────────────────────────────────        ──────────────────────────────────
    β ⊢I e0 + e1 : oT∗                        β ⊢I e0 + e1 : oT∗

    β ⊢I e0 : oT∗    β ⊢I e1 : integer        β ⊢I e0 : oT∗    β ⊢I e1 : oT∗
    ──────────────────────────────────        ──────────────────────────────
    β ⊢I e0 − e1 : oT∗                        β ⊢I e0 − e1 : integer

Pointer comparison.

    β ⊢I e0 : pT    β ⊢I e1 : pT
    ─────────────────────────────
    β ⊢I e0 ≶ e1 : boolean

where ≶ ∈ {=, ≠, <, ≤, ≥, >}.

Assignment and function call.

    β ⊢I lval : sT loc    β ⊢I e : sT
    ───────────────────────────────────
    β ⊢I lval := e

    β ⊢I lval : sT loc    β ⊢I e : (fps → sT)∗    β, fps ⊢I es
    ─────────────────────────────────────────────────────────────
    β ⊢I lval := e(es)

(Multi-dimensional) Global array declaration.

    β ⊢I gvar id : oT = e : {id ↦ oT loc}
    ───────────────────────────────────────────────────────────── if m > 0
    β ⊢I gvar id : array m of oT = e : {id ↦ (array m of oT) loc}
The static semantics rule for a local array declaration is similar.
9.2.3 Concrete Dynamic Semantics. Concrete execution environments now map function identifiers to (properly typed) locations, rather than function abstracts: hence, we redefine dVal def= Addr × mType.
A proper handling of aggregate and function types in memory structures requires a few semantic adjustments and extensions. New memory functions allow the allocation of function abstracts in the text segment, as well as the contiguous allocation of a number of memory cells, so as to model (multi-dimensional) arrays:

    newt : (Abstract × Mem) → ((Mem × Loc) ⊎ ExceptState),
    newarrayd : (Integer × ValState) → ((Mem × Loc) ⊎ ExceptState),
    newarrays : (Integer × ValState) → ((Mem × Ind) ⊎ ExceptState).

It can be observed that the properties stated in Definition 5.2 still hold as long as we consider locations having non-aggregate type and properly extend the domain and codomain of the absolute memory map:

    Map def= (Loc × (eType ⊎ fType)) ⇀ (Con ⊎ Loc ⊎ Abstract).
These “elementary” memory maps need to be extended to read or update record values. To this end, we assume the existence of a couple of helper functions working on locations having aggregate type:

    locfield : (Id × Loc × rType) ⇀ (Loc × oType),
    locindex : (Integer × Loc × aType) ⇀ (Loc × oType).

Intuitively, when defined, these functions map a record (resp., array) typed location to the typed location of one of its record fields (resp., array elements). Hence, for each µ ∈ Map, the extension µ : (Loc × sType) ⇀ sVal can be recursively obtained, for each l ∈ Loc and rT = record id of id1 : oT1 ; . . . ; idj : oTj, as

    µ(l, rT) def= ⟨µ(locfield(id1, l, rT)), . . . , µ(locfield(idj, l, rT))⟩,

and, for each l ∈ Loc and aT = array m of oT ∈ aType,

    µ(l, aT) def= [µ(locindex(0, l, aT)), . . . , µ(locindex(m − 1, l, aT))].
A similar extension is required for the memory update operator. Note that we will
still use υ as a syntactic meta-variable for ValState = sVal × Mem, but now its first
component can be either a constant, or an absolute location, or a record value.
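As an illustration of the helper functions postulated above, the following Python sketch realizes locfield and locindex for a byte-addressed memory, under the assumption (not made by the paper) that sizes and field offsets are fixed by an ABI table; out-of-range requests are left undefined so that the caller can produce an RTS exception.

# A sketch of locindex/locfield assuming locations are byte addresses
# and element sizes / field offsets come from a hypothetical ABI table.

SIZEOF = {'integer': 4, 'boolean': 1}      # hypothetical ABI sizes

def sizeof(oT):
    return SIZEOF.get(oT, 1)

def locindex(i, l, aT):
    """Typed location of element i of the array-typed location l;
    aT is assumed to be represented as the pair (m, oT)."""
    m, oT = aT
    if not 0 <= i < m:
        return None                        # undefined: caller raises an exception
    return (l + i * sizeof(oT), oT)

def locfield(fid, l, rT):
    """Typed location of field fid of the record-typed location l;
    rT is assumed to map field names to (offset, oT) pairs."""
    if fid not in rT:
        return None
    off, oT = rT[fid]
    return (l + off, oT)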
Pointer and array indexing errors are modeled via RTS exceptions. It is assumed there exists a special location lnull ∈ Loc (the null pointer value) such that (lnull, mT) ∉ dom(σ) for all σ ∈ Mem and mT ∈ mType; this also implies that lnull cannot be returned by the memory allocation operators. Hence, any attempt to read from or write to memory through this location will result in an exception state. Suitable operators on memory structures are required to check the constraints regarding pointer arithmetics (e.g., out-of-bounds array accesses), pointer comparisons (where ≶ ranges over {=, ≠, <, ≤, ≥, >}) and to perform “array-to-pointer decay” conversions or record field selections:

    ptrmove : (Integer × Loc × Mem) → ValState ⊎ ExceptState,
    ptrdiff : (Loc × Loc × Mem) → ValState ⊎ ExceptState,
    ptrcmp≶ : (Loc × Loc × Mem) → ValState ⊎ ExceptState,
    firstof : (Loc × Mem) → ValState ⊎ ExceptState,
    field : (Id × Loc × Mem) → ValState ⊎ ExceptState.
Note that array indexing is semantically equivalent to a suitable combination of type decay, pointer arithmetics and pointer indirection. Nonetheless, for the sake of clarity and also to simplify the application of pointer and array dependence analyses [EGH94], we keep the distinction of the two constructs and, to simplify notation, we define13

    index : (Loc × ValState) ⇀ ValState ⊎ ExceptState

as follows:

    index(l, (m, σ)) def= ε,                    if firstof(l, σ) = ε;
    index(l, (m, σ)) def= ptrmove(m, l0, σ0),   if firstof(l, σ) = (l0, σ0).

13 Functions ‘field’ and ‘index’ are similar to ‘locfield’ and ‘locindex’, but they are also meant to check their arguments against the memory structure, possibly returning an RTS exception.
Non-terminal and terminal configurations are extended so as to allow for the syntactic categories of offsets and lvalues, whose non-exceptional evaluation leads to a location:

    Γβo def= { ⟨o, l, σ⟩ ∈ Offset × Loc × Mem | ∃dT0, dT1 ∈ dType . β, dT0 ⊢I o : dT1 },
    Γβl def= { ⟨lval, σ⟩ ∈ LValue × Mem | ∃dT ∈ dType . β ⊢I lval : dT },
    To def= Tl def= (Loc × Mem) ⊎ ExceptState.
The dynamic concrete semantics is extended with the following rule schemata.
Offset.

    ──────────────────────────
    ρ ⊢β ⟨ǫ, l, σ⟩ → ⟨l, σ⟩

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────
    ρ ⊢β ⟨[e] · o, l, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → υ
    ────────────────────────── if index(l, υ) = ε
    ρ ⊢β ⟨[e] · o, l, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → υ    ρ ⊢β ⟨o, l0, σ0⟩ → η
    ──────────────────────────────────────── if index(l, υ) = (l0, σ0)
    ρ ⊢β ⟨[e] · o, l, σ⟩ → η

    ────────────────────────────── if field(idi, l, σ) = ε
    ρ ⊢β ⟨. idi · o, l, σ⟩ → ε

    ρ ⊢β ⟨o, l0, σ0⟩ → η
    ────────────────────────────── if field(idi, l, σ) = (l0, σ0)
    ρ ⊢β ⟨. idi · o, l, σ⟩ → η

Lvalue.

    ρ ⊢β ⟨o, σ @ a, σ⟩ → η
    ────────────────────────── if ρ(id) = (a, mT)
    ρ ⊢β ⟨id · o, σ⟩ → η

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────
    ρ ⊢β ⟨(∗ e) · o, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → ⟨l0, σ0⟩    ρ ⊢β ⟨o, l0, σ0⟩ → η
    ────────────────────────────────────────────────
    ρ ⊢β ⟨(∗ e) · o, σ⟩ → η
Null pointer and address-of operator.

    ───────────────────────────────
    ρ ⊢β ⟨(pT) 0, σ⟩ → ⟨lnull, σ⟩

    ρ ⊢β ⟨lval, σ⟩ → η
    ──────────────────────────
    ρ ⊢β ⟨& lval, σ⟩ → η
Type decay.

    ρ ⊢β ⟨lval, σ⟩ → ε
    ──────────────────────────
    ρ ⊢β ⟨val lval, σ⟩ → ε

    ρ ⊢β ⟨lval, σ⟩ → ⟨l, σ0⟩
    ─────────────────────────────── if β ⊢FI(lval) lval : sT loc
    ρ ⊢β ⟨val lval, σ⟩ → σ0[l, sT]

    ρ ⊢β ⟨lval, σ⟩ → υ
    ──────────────────────────────── if β ⊢FI(lval) lval : aT loc
    ρ ⊢β ⟨val lval, σ⟩ → firstof(υ)

    ρ ⊢β ⟨lval, σ⟩ → υ
    ────────────────────────── if β ⊢FI(lval) lval : fT loc
    ρ ⊢β ⟨val lval, σ⟩ → υ
Pointer arithmetics. Let ⊕ denote a binary abstract syntax operator in {+, −}, as well as the corresponding unary operation on integers. Then, the following are added to rule schemata (6)–(9).

    ρ ⊢β ⟨e0, σ⟩ → ⟨l, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ε
    ────────────────────────────────────────────
    ρ ⊢β ⟨e0 ⊕ e1, σ⟩ → ε

    ρ ⊢β ⟨e0, σ⟩ → ⟨l, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨m, σ1⟩
    ─────────────────────────────────────────────────── if m0 = ⊕ m
    ρ ⊢β ⟨e0 ⊕ e1, σ⟩ → ptrmove(m0, l, σ1)

    ρ ⊢β ⟨e0, σ⟩ → ⟨m, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨l, σ1⟩
    ───────────────────────────────────────────────────
    ρ ⊢β ⟨e0 + e1, σ⟩ → ptrmove(m, l, σ1)

    ρ ⊢β ⟨e0, σ⟩ → ⟨l0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨l1, σ1⟩
    ─────────────────────────────────────────────────────
    ρ ⊢β ⟨e0 − e1, σ⟩ → ptrdiff(l0, l1, σ1)
Pointer comparison. Let ≶ denote a binary abstract syntax operator in the set {=, ≠, <, ≤, ≥, >}. Then, the following are added to rule schemata (10)–(12).

    ρ ⊢β ⟨e0, σ⟩ → ⟨l, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ε
    ────────────────────────────────────────────
    ρ ⊢β ⟨e0 ≶ e1, σ⟩ → ε

    ρ ⊢β ⟨e0, σ⟩ → ⟨l0, σ0⟩    ρ ⊢β ⟨e1, σ0⟩ → ⟨l1, σ1⟩
    ─────────────────────────────────────────────────────
    ρ ⊢β ⟨e0 ≶ e1, σ⟩ → ptrcmp≶(l0, l1, σ1)
Assignment.

    ρ ⊢β ⟨lval, σ⟩ → ε
    ──────────────────────────
    ρ ⊢β ⟨lval := e, σ⟩ → ε

    ρ ⊢β ⟨lval, σ⟩ → (l, σ0)    ρ ⊢β ⟨e, σ0⟩ → ε
    ──────────────────────────────────────────────
    ρ ⊢β ⟨lval := e, σ⟩ → ε

    ρ ⊢β ⟨lval, σ⟩ → (l, σ0)    ρ ⊢β ⟨e, σ0⟩ → ⟨sval, σ1⟩
    ──────────────────────────────────────────────────────── if β ⊢FI(e) e : sT
    ρ ⊢β ⟨lval := e, σ⟩ → σ1[(l, sT) := sval]
Similar changes are required for the case of a function call. First, the lvalue is
evaluated so as to obtain the target location where the result of the function call
will be stored; then, the function designator (an expression) is evaluated to obtain a
location having function type; this location is fed to the memory structure so as to
obtain the function abstract. All the other computation steps, including parameter
passing, are performed as before. On exit from the function call, the return value is
stored at the location computed in the first step. Exceptions are eventually detected
and propagated as usual. Also note that, thanks to the rules for type decay, arrays
and functions can be passed to and returned from function calls.
(Multi-dimensional) Global array declaration. In the following rule schemata, let n > 0, aT = array m1 of (. . . (array mn of sT) . . . ) and m = m1 × · · · × mn.

    ρ ⊢β ⟨e, σ⟩ → η
    ──────────────────────────────────────────── if either η = ε, or η = υ and newarrayd(m, υ) = ε;
    ρ ⊢β ⟨gvar id : aT = e, σ⟩ → cleanupd(ε)

    ρ ⊢β ⟨e, σ⟩ → υ
    ──────────────────────────────────────────── if newarrayd(m, υ) = (σ0, l) and ρ0 = {id ↦ (l, aT)}.
    ρ ⊢β ⟨gvar id : aT = e, σ⟩ → ⟨ρ0, σ0⟩

The rules for local array declaration are similar. Since function abstracts are now stored in memory structures, a few minor adaptations, omitted for space reasons, are also required for the rule of function declarations (which uses newt) and the rules for recursive environments and declarations.
9.3 Heap Memory Management
By adding a heap segment to memory structures, as well as suitable helper functions
(newh , deleteh and the corresponding array versions), it is possible to further extend
the language to embrace dynamic memory allocation and deallocation.
9.3.1 Syntax. We add an allocation expression and a deallocation statement:

    Exp ∋ e ::= . . . | new sT = e
    Stmt ∋ s ::= . . . | delete e
9.3.2 Static Semantics.

    β ⊢I e : sT
    ─────────────────────────
    β ⊢I new sT = e : sT∗

    β ⊢I e : sT∗
    ─────────────────
    β ⊢I delete e

9.3.3 Concrete Dynamic Semantics. This is extended with the schemata:
New expression.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────────
    ρ ⊢β ⟨new sT = e, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → υ
    ────────────────────────────── if newh(υ) = ε
    ρ ⊢β ⟨new sT = e, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → υ
    ────────────────────────────── if newh(υ) = (σ0, l)
    ρ ⊢β ⟨new sT = e, σ⟩ → ⟨l, σ0⟩

Delete operator.

    ρ ⊢β ⟨e, σ⟩ → ε
    ──────────────────────────
    ρ ⊢β ⟨delete e, σ⟩ → ε

    ρ ⊢β ⟨e, σ⟩ → υ
    ──────────────────────────────────
    ρ ⊢β ⟨delete e, σ⟩ → deleteh(υ)
Similar rules allow for allocation and deallocation of an array on the heap: note
that, contrary to the previous cases, the dimensions of the array can be specified
as expressions that will be evaluated dynamically.
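A possible concrete realization of the heap segment and of the helper functions newh and deleteh is sketched below in Python; the allocation policy, the representation of locations as integers and the particular exception values are assumptions of this illustration only, not definitions from the paper.

# A sketch of a heap segment supporting newh and deleteh; a stray or
# repeated delete yields an exception state, as prescribed above.

class Heap:
    def __init__(self):
        self.cells = {}              # location -> stored value
        self.next_loc = 1            # location 0 plays the role of l_null

    def newh(self, sval):
        """Allocate a fresh cell holding sval; never returns l_null."""
        l = self.next_loc
        self.next_loc += 1
        self.cells[l] = sval
        return ('mem', self, l)      # mirrors (Mem x Loc)

    def deleteh(self, l):
        """Deallocate l; an invalid delete yields an exception state."""
        if l not in self.cells:
            return ('exception', 'invalid-delete')
        del self.cells[l]
        return ('mem', self)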
Regarding the abstract semantics, the extensions concerning C-like pointers and
arrays as well as heap memory management can be obtained along the lines followed
in Section 6. In particular, the new memory structure operators described above
are provided with safe approximations and a new abstract domain Loc♯ for locationvalued expressions has to be defined. By generalizing the abstract memory read
and update operators so as to take as input an abstract location, we realize the
so-called weak read and weak update operators, so as to correctly deal with, e.g.,
assignments or function calls whose target is not statically known. In practice, no
fundamentally new issue has to be solved as far as the specification of the abstract
interpreter is concerned. This is not to say that these extensions are trivial; rather,
the real issues (e.g., the efficient and accurate tracking of aliasing information for
pointers [Ema93; EGH94] or the appropriate summarization techniques for large
arrays [GRS05] and heap-allocated data [GDD+ 04; SRW02]) are orthogonal to the
current approach and should be addressed elsewhere.
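As an illustration of the weak update just mentioned, consider the following small sketch; the interval domain, the join operation and all function names are our choices for the example, not operators defined in the paper.

```python
# Hedged sketch of weak update: when the abstract location of an
# assignment denotes more than one concrete location, the new abstract
# value must be joined with the old one instead of replacing it.

def weak_update(store, abs_locs, new_val, join):
    """store: dict location -> abstract value;
    abs_locs: the set of locations the abstract location may denote;
    join: least upper bound on abstract values."""
    if len(abs_locs) == 1:
        (l,) = abs_locs
        store[l] = new_val                  # strong update: target known
    else:
        for l in abs_locs:                  # weak update: keep old values
            store[l] = join(store[l], new_val)
    return store

# e.g., with an interval domain and join = interval hull:
hull = lambda a, b: (min(a[0], b[0]), max(a[1], b[1]))
store = {1: (0, 0), 2: (5, 5)}
weak_update(store, {1, 2}, (9, 9), hull)    # *p = 9 with p in {1, 2}
assert store == {1: (0, 9), 2: (5, 9)}
```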
9.4 Non-Structured Control Flow Mechanisms
It turns out that the approach we have chosen to model exceptional behavior of
programs can be easily generalized so as to capture all the non-structured control
flow mechanisms of languages such as C and C++. To exemplify such a generalization, the abstract syntax of commands is extended with branching and labeled
statements:
Label ∋ l ::= id | m | default
Stmt ∋ s ::= . . . | goto id | switch e in s | break | continue | return e | l : s
We assume that the static semantics ensures the labels used in a function body are
all distinct (if the language supports local labels, then a trivial renaming will be
required) and that every goto has access to a corresponding labeled statement, respecting the constraints imposed by the language (concerning, for instance, jumping
into and outside blocks).
The state of a computation is captured, besides the current program point, by
a control mode and a memory structure, which together constitute what we call
a control state. A control state is classified by the corresponding control mode as
either a plain execution state or an exception state; a plain execution state can be
further distinguished as either a normal execution state, or a branching state, or a
value state (for computations yielding a proper value), or an environment state (for
computations yielding an execution environment).
Definition 9.1. (GotoMode, SwitchMode, ValMode, EnvMode, ExceptMode,
CtrlMode, CtrlState.) The sets of goto, switch, value, environment, exception
and all control modes are given, respectively, by

    GotoMode   ≝ { goto(id) | id ∈ Id },
    SwitchMode ≝ { switch(sval) | sval ∈ sVal },
    ValMode    ≝ { value(sval) | sval ∈ sVal },
    EnvMode    ≝ { env(ρ) | ρ ∈ Env },
    ExceptMode ≝ { except(ξ) | ξ ∈ Except },
    CtrlMode   ≝ GotoMode ⊎ SwitchMode ⊎ ValMode ⊎ EnvMode ⊎ ExceptMode ⊎ {continue, break, return, exec},

where continue, break and return are the exit modes and exec is the plain execution
mode. Control modes are denoted by cm, cm0, cm1 and so forth.

A control state is an element of CtrlState ≝ CtrlMode × Mem. Control states
are denoted by cs, cs0, cs1 and so forth.
The concrete semantics of the goto statement can now be expressed by

    ρ ⊢β ⟨goto id, (cm, σ)⟩ → ⟨cm0, σ⟩
    if cm = exec and cm0 = goto(id), or cm ≠ exec and cm0 = cm.

The semantics of labeled statements is given by

    ρ ⊢β ⟨s, (cm0, σ)⟩ → η
    ─────────────────────────
    ρ ⊢β ⟨l : s, (cm, σ)⟩ → η

where cm0 = exec if cm = exec, or cm = goto(id) and l = id, or cm = switch(sval)
and l ∈ {default, sval}; otherwise cm0 = cm.
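A tiny executable rendering of these two rules may help fix ideas; the Python encoding of control modes is our own (the paper works with SOS rules, not code), and only the goto case is shown since the switch modes follow the same pattern.

```python
# Illustrative encoding of the goto and labeled-statement rules above.

from dataclasses import dataclass

EXEC = "exec"

@dataclass(frozen=True)
class Goto:          # the control mode goto(id)
    label: str

def step_goto(label, cm, mem):
    """goto id: start a jump when in plain execution; otherwise the
    incoming control mode is propagated untouched."""
    cm0 = Goto(label) if cm == EXEC else cm
    return cm0, mem

def step_labeled(label, step_s, cm, mem):
    """l : s -- resume plain execution when the mode is exec or a goto
    targeting this very label; otherwise propagate the mode into s."""
    if cm == EXEC or (isinstance(cm, Goto) and cm.label == label):
        cm0 = EXEC
    else:
        cm0 = cm
    return step_s(cm0, mem)

# A goto reaching its label re-enters plain execution:
cm, _ = step_goto("out", EXEC, {})
assert step_labeled("out", lambda c, m: (c, m), cm, {}) == (EXEC, {})
```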
Of course, the semantics of all statements must be suitably modified. For instance,
the assignment should behave like a nop unless the control mode is the normal
execution one. Statements with non-trivial control flow need more work. For
example, the semantics of the conditional statement can be captured by¹⁴

    ρ ⊢β ⟨e, (exec, σ)⟩ → ⟨cm0, σ0⟩
    ──────────────────────────────────────────────────
    ρ ⊢β ⟨if e then s0 else s1, (exec, σ)⟩ → ⟨cm0, σ0⟩
    if cm0 ∈ ExceptMode;

    ρ ⊢β ⟨e, (exec, σ)⟩ → ⟨value(tt), σ0⟩    ρ ⊢β ⟨s0, (exec, σ0)⟩ → ⟨cm1, σ1⟩    ρ ⊢β ⟨s1, (cm1, σ1)⟩ → η
    ──────────────────────────────────────────    (124)
    ρ ⊢β ⟨if e then s0 else s1, (exec, σ)⟩ → η
    if cm1 ∈ GotoMode;

    ρ ⊢β ⟨e, (exec, σ)⟩ → ⟨value(tt), σ0⟩    ρ ⊢β ⟨s0, (exec, σ0)⟩ → ⟨cm1, σ1⟩
    ──────────────────────────────────────────────────
    ρ ⊢β ⟨if e then s0 else s1, (exec, σ)⟩ → ⟨cm1, σ1⟩
    if cm1 ∉ GotoMode;

    ρ ⊢β ⟨e, (exec, σ)⟩ → ⟨value(ff), σ0⟩    ρ ⊢β ⟨s1, (exec, σ0)⟩ → η
    ──────────────────────────────────────────
    ρ ⊢β ⟨if e then s0 else s1, (exec, σ)⟩ → η

    ρ ⊢β ⟨s0, (cm, σ)⟩ → ⟨cm0, σ0⟩
    ────────────────────────────────────────────────
    ρ ⊢β ⟨if e then s0 else s1, (cm, σ)⟩ → ⟨cm0, σ0⟩
    if cm ∈ GotoMode ⊎ SwitchMode and cm0 ∉ GotoMode ⊎ SwitchMode;

    ρ ⊢β ⟨s0, (cm, σ)⟩ → ⟨cm0, σ0⟩    ρ ⊢β ⟨s1, (cm0, σ0)⟩ → η
    ──────────────────────────────────────────
    ρ ⊢β ⟨if e then s0 else s1, (cm, σ)⟩ → η
    if cm ∈ GotoMode ⊎ SwitchMode and cm0 ∈ GotoMode ⊎ SwitchMode;

    ρ ⊢β ⟨if e then s0 else s1, (cm, σ)⟩ → ⟨cm, σ⟩
    if cm ∉ GotoMode ⊎ SwitchMode ⊎ {exec}.

¹⁴ Recall that, in C, it is perfectly legal to jump into the "else branch" from the "then branch."
Likewise, the semantics of the switch statement can be captured by:

    ρ ⊢β ⟨e, (exec, σ)⟩ → ⟨cm0, σ0⟩
    ─────────────────────────────────────────────
    ρ ⊢β ⟨switch e in s, (exec, σ)⟩ → ⟨cm0, σ0⟩
    if cm0 ∈ ExceptMode;

    ρ ⊢β ⟨e, (exec, σ)⟩ → ⟨value(sval0), σ0⟩    ρ ⊢β ⟨s, (switch(sval0), σ0)⟩ → ⟨cm1, σ1⟩
    ─────────────────────────────────────────────
    ρ ⊢β ⟨switch e in s, (exec, σ)⟩ → ⟨cm2, σ1⟩
    where cm2 = exec if cm1 ∈ SwitchMode ⊎ {break}, and cm2 = cm1 otherwise;

    ρ ⊢β ⟨s, (goto(id), σ)⟩ → ⟨cm0, σ0⟩
    ───────────────────────────────────────────────
    ρ ⊢β ⟨switch e in s, (goto(id), σ)⟩ → ⟨cm1, σ0⟩
    where cm1 = exec if cm0 = break, and cm1 = cm0 otherwise;

    ρ ⊢β ⟨switch e in s, (cm, σ)⟩ → ⟨cm, σ⟩
    if cm ∉ GotoMode ⊎ {exec}.
While such a semantic treatment captures all forward jumps, for backward jumps
something more is required. One simple possibility (which is not the only one) is
to explicitly introduce a looping construct that is (only) available in the abstract
syntax. That is, we extend Stmt once again as
Stmt ∋ s ::= . . . | loop s
and assume that a set of such loops has been inserted so that all backward jumps
are enclosed in at least one loop (notice that at most one such loop per function
body suffices, but more can be used as a matter of optimization). For s ∈ Stmt,
let SL(s) denote the set of statement labels in s. The concrete semantics of this
looping construct is now given by
ρ ⊢β hs, csi → hcm, σi
ρ ⊢β hloop s, csi → hcm, σi
ρ ⊢β hs, csi → goto(id), σ
if cm 6= goto(id) for each id ∈ SL(s)
ρ ⊢β loop s, goto(id), σ
ρ ⊢β hloop s, csi → η
→η
if id ∈ SL(s)
Observe that the systematic use of the looping construct can make rule schema (124)
redundant.
Other rules are omitted for space reasons. However, there are no additional
difficulties besides the ones just addressed: the rules for break and continue
are straightforward; return e can be modeled as the assignment to the reserved
identifier x0 (see concrete rule (67)), followed by the setting of the control mode;
the rules for the while loop are a bit involved as they must support the ‘break’ and
‘continue’ control modes in addition to ‘goto’ and ‘switch’.
The proposed approach handles non-structured control flow mechanisms essentially by adding a sort of control register to the rule-based interpreter of the language. As far as the abstract semantics is concerned, a first choice to be made
concerns the approximation of the values that the control register can take. As
usual, there is a complexity/precision trade-off to be faced: the simple solution
is to approximate ℘(CtrlMode) by some (simple) abstract domain CtrlMode♯ and
then approximate CtrlState = CtrlMode × Mem by CtrlMode♯ ⊗ Mem♯ ; a more
precise solution is to approximate ℘(CtrlState) by an abstract domain CtrlState♯
that captures relational information connecting the control modes to the memory
structures they can be coupled with. The abstract rule schemata must of course
be modified to match the concrete world. For instance, the abstract rule for the
conditional statement becomes:

    ρ ⊢β ⟨e, cs♯_cond⟩ → cs♯_0    ρ ⊢β ⟨s0, cs♯_then⟩ → cs♯_1    ρ ⊢β ⟨s1, cs♯_else⟩ → cs♯_2
    ───────────────────────────────────────────
    ρ ⊢β ⟨if e then s0 else s1, cs♯⟩ → cs♯_3

where

    cs♯_cond = Φe(ρ, cs♯, tt),
    cs♯_then = Φe(ρ, cs♯, e) ⊔ Φm(cs♯, GotoMode ⊎ SwitchMode),
    cs♯_else = Φe(ρ, cs♯, not e) ⊔ Φm(cs♯_1, GotoMode) ⊔ cs♯_jump,
    cs♯_jump = ⊥, if Φm(cs♯, GotoMode ⊎ SwitchMode) = ⊥; Φm(cs♯_1, Cjump), otherwise,
    Cjump = GotoMode ∪ { cm ∈ CtrlMode | ∃σ ∈ Mem . (cm, σ) ∈ γ(cs♯) },
    cs♯_3 = Φm(cs♯, CtrlMode \ ({exec} ⊎ GotoMode ⊎ SwitchMode)) ⊔ Φm(cs♯_0, CtrlMode \ ValMode) ⊔ cs♯_1 ⊔ cs♯_2,

and the two computable filter functions Φe : (Env × CtrlState♯ × Exp) → CtrlState♯
and Φm : CtrlState♯ × ℘(CtrlMode) → CtrlState♯ are defined as follows, for each
ρ ∈ Env, cs♯ ∈ CtrlState♯, e ∈ Exp and C ⊆ CtrlMode such that, for some
β ∈ TEnv, β : I with FI(e) ⊆ I and β ⊢I e : boolean:

    γ(Φe(ρ, cs♯, e)) ⊇ { cs ∈ γ(cs♯) | ∃σ ∈ Mem . cs = (exec, σ), ∃σ′ ∈ Mem . ρ ⊢β ⟨e, cs⟩ → ⟨value(tt), σ′⟩ },
    γ(Φm(cs♯, C)) ⊇ { cs ∈ γ(cs♯) | ∃σ ∈ Mem . cs = (cm, σ), cm ∈ C }.
10. CONCLUSION
In this paper, we have confronted the problem of defining an analysis framework
for the specification and realization of precise static analyzers for mainstream
imperative programming languages, tools that are in very short supply and yet ought
to become part of the current programming practice. A proposal put forward by
Schmidt twelve years ago [Sch95] held, in our eyes, considerable promise, despite
the fact that it had not been fully developed and applied in realistic contexts. It was
therefore natural to question whether the promise could be fulfilled. To investigate Schmidt’s approach, which is based on structured operational semantics and
abstract interpretation, we have defined an imperative language, CPM, that embodies all the “problematic features” of single-threaded imperative languages now
in widespread use. We have presented a concrete semantics of CPM that is suitable for abstraction while retaining all the nice features of SOS descriptions. For
a subset of the language we have formally defined an abstract semantics that can
fully exploit the precision offered by relational abstract domains, and proved its
soundness with respect to the concrete one. We have also shown how approximations of the abstract semantics can be effectively computed. In order to provide
an experimental evaluation of the ideas presented in this paper, both the concrete
and the abstract semantics, instantiated over sophisticated numeric domains and
coupled with a suitable fixpoint computation engine, have been incorporated into
the ECLAIR system. This work allows us to conclude that the proposal of Schmidt
can play a crucial role in the development of reliable and precise analyzers. The
key features of this approach are:
—a fairly concise concrete semantics that experts can easily read (and modify as
needed) and that everyone can execute on non-trivial examples in order to check
its agreement with the applicable language standards;
—a fairly concise abstract semantics that is fully parametric with respect to the
abstract domain, that is not difficult to prove correct with respect to the concrete
one (to the point that automating the proof seems to be a reasonable goal),
and that directly leads to the implementation of static analyzers.
Of course, the story does not end here. For instance, our analysis framework is
parametric on abstract memory structures. While the literature seems to provide all
that is necessary to realize very sophisticated ones, it is not difficult to predict that,
among all the code out there waiting to be analyzed, some will greatly exacerbate
the complexity/precision trade-off. However, these are research problems for the
future: now that we have, as given here, a formal design on which analyzers can
be built, our next goal is to complete the build and make the technology described
here truly available and deployable.
ACKNOWLEDGMENTS
Anna Dolma Alonso, Irene Bacchi, Danilo Bonardi, Andrea Cimino, Enrico Franchi,
Davide Masi and Alessandro Vincenzi (all students of the course on “Analysis and
Verification of Software” taught by Roberto Bagnara at the University of Parma)
and Vajirapan Panumong (University of Leeds) collaborated on previous, much
more restricted versions of this work. We are also grateful to David Merchat (formerly at the University of Parma) and Katy Dobson (University of Leeds) for the
discussions we have had on the subject of this paper.
REFERENCES
B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux, and X. Rival,
Design and implementation of a special-purpose static program analyzer for safety-critical
real-time embedded software, The Essence of Computation, Complexity, Analysis, Transformation.
Essays Dedicated to Neil D. Jones [on occasion of his 60th birthday] (T. Æ. Mogensen, D. A.
Schmidt, and I. Hal Sudborough, eds.), Lecture Notes in Computer Science, vol. 2566, Springer-Verlag, Berlin, 2002, pp. 85–108.
, A static analyzer for large safety-critical software, Proceedings of the ACM SIGPLAN
2003 Conference on Programming Language Design and Implementation (PLDI’03) (San Diego,
California, USA), ACM Press, 2003, pp. 196–207.
R. Bagnara, P. M. Hill, E. Ricci, and E. Zaffanella, Precise widening operators for convex polyhedra, Science of Computer Programming 58 (2005), no. 1–2, 28–56.
R. Bagnara, P. M. Hill, and E. Zaffanella, Not necessarily closed convex polyhedra and the double
description method, Formal Aspects of Computing 17 (2005), no. 2, 222–257.
, The Parma Polyhedra Library: Toward a complete set of numerical abstractions for
the analysis and verification of hardware and software systems, Quaderno 457, Dipartimento
di Matematica, Università di Parma, Italy, 2006. Available at
http://www.cs.unipr.it/Publications/. Also published as arXiv:cs.MS/0612085, available from http://arxiv.org/.
M. Bruynooghe, A practical framework for the abstract interpretations of logic programs, Journal
of Logic Programming 10 (1991), 91–124.
P. Cousot and R. Cousot, Static determination of dynamic properties of programs, Proceedings
of the Second International Symposium on Programming (Paris, France) (B. Robinet, ed.),
Dunod, Paris, France, 1976, pp. 106–130.
, Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints, Proceedings of the Fourth Annual ACM Symposium
on Principles of Programming Languages (New York), ACM Press, 1977, pp. 238–252.
, Static determination of dynamic properties of recursive procedures, IFIP Conference
on Formal Description of Programming Concepts (E. J. Neuhold, ed.), North-Holland, 1977,
pp. 237–277.
, Systematic design of program analysis frameworks, Proceedings of the Sixth Annual
ACM Symposium on Principles of Programming Languages (New York), ACM Press, 1979,
pp. 269–282.
, Abstract interpretation frameworks, Journal of Logic and Computation 2 (1992), no. 4,
511–547.
, Comparing the Galois connection and widening/narrowing approaches to abstract interpretation, Proceedings of the 4th International Symposium on Programming Language Implementation and Logic Programming (Leuven, Belgium) (M. Bruynooghe and M. Wirsing, eds.),
Lecture Notes in Computer Science, vol. 631, Springer-Verlag, Berlin, 1992, pp. 269–295.
, Inductive definitions, semantics and abstract interpretation, Proceedings of the Nineteenth Annual ACM Symposium on Principles of Programming Languages (Albuquerque, New
Mexico, USA), ACM Press, 1992, pp. 83–94.
, Higher-order abstract interpretation (and application to comportment analysis generalizing strictness, termination, projection and PER analysis of functional languages), Proceedings of the IEEE Computer Society 1994 International Conference on Computer Languages
(Toulouse, France) (H. E. Bal, ed.), IEEE Computer Society Press, 1994, Invited paper, pp. 95–
112.
P. Cousot and N. Halbwachs, Automatic discovery of linear restraints among variables of a program, Conference Record of the Fifth Annual ACM Symposium on Principles of Programming
Languages (Tucson, Arizona), ACM Press, 1978, pp. 84–96.
P. Cousot, Semantic foundations of program analysis, Program Flow Analysis: Theory and Applications (S. S. Muchnick and N. D. Jones, eds.), Prentice Hall, Englewood Cliffs, NJ, USA,
1981, pp. 303–342.
, The calculational design of a generic abstract interpreter, Calculational System Design
(M. Broy and R. Steinbrüggen, eds.), NATO ASI Series F. IOS Press, Amsterdam, NL, 1999.
, The verification grand challenge and abstract interpretation, Verified Software: Theories,
Tools, Experiments (VSTTE) (ETH Zürich, Switzerland), 2005, Position paper.
N. Dor, M. Rodeh, and S. Sagiv, Cleanness checking of string manipulations in C programs
via integer analysis, Static Analysis: 8th International Symposium, SAS 2001 (Paris, France)
(P. Cousot, ed.), Lecture Notes in Computer Science, vol. 2126, Springer-Verlag, Berlin, 2001,
pp. 194–212.
M. Emami, R. Ghiya, and L. J. Hendren, Context-sensitive interprocedural points-to analysis in
the presence of function pointers, Proceedings of the ACM SIGPLAN’94 Conference on Programming Language Design and Implementation (Orlando, Florida), vol. 29, ACM SIGPLAN
Notices, no. 6, Association for Computing Machinery, 1994, pp. 242–256.
M. Emami, A practical inter-procedural alias analysis for an optimizing/paralleling C compiler,
Master’s thesis, School of Computer Science, McGill University, Montreal, Canada, August
1993.
D. Gopan, F. DiMaio, N. Dor, T. Reps, and M. Sagiv, Numeric domains with summarized dimensions, Tools and Algorithms for the Construction and Analysis of Systems, 10th International
Conference, TACAS 2004 (Barcelona, Spain) (K. Jensen and A. Podelski, eds.), Lecture Notes
in Computer Science, vol. 2988, Springer-Verlag, Berlin, 2004, pp. 512–529.
R. Giacobazzi, S. K. Debray, and G. Levi, A generalized semantics for constraint logic programs,
Proceedings of the International Conference on Fifth Generation Computer Systems (FGCS’92)
(Tokyo, Japan), ICOT, 1992, pp. 581–591.
D. Gopan, T. W. Reps, and M. Sagiv, A framework for numeric analysis of array operations,
Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming
Languages (Long Beach, California, USA), 2005, pp. 338–350.
N. Halbwachs, Delay analysis in synchronous programs, Computer Aided Verification: Proceedings
of the 5th International Conference (Elounda, Greece) (C. Courcoubetis, ed.), Lecture Notes
in Computer Science, vol. 697, Springer-Verlag, Berlin, 1993, pp. 333–346.
C. A. R. Hoare, The verifying compiler: A grand challenge for computing research, Journal of the
ACM 50 (2003), no. 1, 63–69.
N. Halbwachs, Y.-E. Proy, and P. Roumanoff, Verification of real-time systems using linear relation analysis, Formal Methods in System Design 11 (1997), no. 2, 157–185.
B. Jeannet and W. Serwe, Abstracting call-stacks for interprocedural verification of imperative
programs, Publication interne 1543, IRISA, Campus de Beaulieu, Rennes, France, 2003.
, Abstracting call-stacks for interprocedural verification of imperative programs, Proceedings of the 10th International Conference on Algebraic Methodology and Software Technology
(Stirling, Scotland, UK) (C. Rattray, S. Maharaj, and C. Shankland, eds.), Lecture Notes in
Computer Science, vol. 3116, Springer-Verlag, Berlin, 2004, pp. 258–273.
G. Kahn, Natural semantics, Proceedings of the 4th Annual Symposium on Theoretical Aspects of
Computer Science (Passau, Germany) (F.-J. Brandenburg, G. Vidal-Naquet, and M. Wirsing,
eds.), Lecture Notes in Computer Science, vol. 247, Springer-Verlag, Berlin, 1987, pp. 22–39.
X. Leroy, Coinductive big-step operational semantics, Programming Languages and Systems, Proceedings of the 14th European Symposium on Programming (Vienna, Austria) (P. Sestoft, ed.),
Lecture Notes in Computer Science, vol. 3924, Springer-Verlag, Berlin, 2006, pp. 54–68.
A. Miné, Field-sensitive value analysis of embedded C programs with union types and pointer
arithmetics, Proceedings of the 2006 ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (Ottawa, Ontario, Canada) (M. J. Irwin and K. De
Bosschere, eds.), ACM Press, 2006, pp. 54–63.
G. C. Necula, S. McPeak, S. P. Rahul, and W. Weimer, CIL: Intermediate language and tools for
analysis and transformation of C programs, Compiler Construction: Proceedings of the 11th
International Conference (CC 2002) (Grenoble, France) (R. N. Horspool, ed.), Lecture Notes
in Computer Science, vol. 2304, Springer-Verlag, Berlin, 2002, pp. 213–228.
G. D. Plotkin, A structural approach to operational semantics, Journal of Logic and Algebraic
Programming 60–61 (2004), 17–139.
D. A. Schmidt, Natural-semantics-based abstract interpretation (preliminary version), Static
Analysis: Proceedings of the 2nd International Symposium (Glasgow, UK) (A. Mycroft, ed.),
Lecture Notes in Computer Science, vol. 983, Springer-Verlag, Berlin, 1995, pp. 1–18.
, Abstract interpretation of small-step semantics, Analysis and Verification of Multiple-Agent Languages (M. Dam, ed.), Lecture Notes in Computer Science, vol. 1192, Springer-Verlag,
Berlin, 1997, 5th LOMAPS Workshop Stockholm, Sweden, June 24–26, 1996, Selected Papers,
pp. 76–99.
, Trace-based abstract interpretation of operational semantics, LISP and Symbolic Computation 10 (1998), no. 3, 237–271.
R. Shaham, E. K. Kolodner, and S. Sagiv, Automatic removal of array memory leaks in Java,
Proceedings of the 9th International Conference on Compiler Construction (CC 2000) (Berlin,
Germany) (D. A. Watt, ed.), Lecture Notes in Computer Science, vol. 1781, Springer-Verlag,
Berlin, 2000, pp. 50–66.
M. Sharir and A. Pnueli, Two approaches to interprocedural data flow analysis, Program Flow
Analysis: Theory and Applications (S. S. Muchnick and N. D. Jones, eds.), Prentice Hall,
Englewood Cliffs, NJ, USA, 1981, pp. 189–233.
S. Sagiv, T. W. Reps, and R. Wilhelm, Parametric shape analysis via 3-valued logic, ACM Transactions on Programming Languages and Systems 24 (2002), no. 3, 217–298.
Junction Tree Variational Autoencoder for Molecular Graph Generation
Wengong Jin 1 Regina Barzilay 1 Tommi Jaakkola 1
arXiv:1802.04364v2 [cs.LG] 19 Feb 2018
Abstract
We seek to automate the design of molecules
based on specific chemical properties. In computational terms, this task involves continuous
embedding and generation of molecular graphs.
Our primary contribution is the direct realization
of molecular graphs, a task previously approached
by generating linear SMILES strings instead of
graphs. Our junction tree variational autoencoder
generates molecular graphs in two phases, by first
generating a tree-structured scaffold over chemical substructures, and then combining them into a
molecule with a graph message passing network.
This approach allows us to incrementally expand
molecules while maintaining chemical validity
at every step. We evaluate our model on multiple tasks ranging from molecular generation to
optimization. Across these tasks, our model outperforms previous state-of-the-art baselines by a
significant margin.
1. Introduction
The key challenge of drug discovery is to find target
molecules with desired chemical properties. Currently, this
task takes years of development and exploration by expert
chemists and pharmacologists. Our ultimate goal is to automate this process. From a computational perspective, we
decompose the challenge into two complementary subtasks:
learning to represent molecules in a continuous manner that
facilitates the prediction and optimization of their properties
(encoding); and learning to map an optimized continuous
representation back into a molecular graph with improved
properties (decoding). While deep learning has been extensively investigated for molecular graph encoding (Duvenaud
et al., 2015; Kearnes et al., 2016; Gilmer et al., 2017), the
harder combinatorial task of molecular graph generation
from latent representation remains under-explored.
Prior work on drug design formulated the graph generation task as a string generation problem (Gómez-Bombarelli
et al., 2016; Kusner et al., 2017) in an attempt to side-step
1
MIT Computer Science & Artificial Intelligence Lab. Correspondence to: Wengong Jin <wengong@csail.mit.edu>.
Figure 1. Two almost identical molecules with markedly different
canonical SMILES in RDKit. The edit distance between two
strings is 22 (50.5% of the whole sequence).
direct generation of graphs. Specifically, these models start
by generating SMILES (Weininger, 1988), a linear string
notation used in chemistry to describe molecular structures.
SMILES strings can be translated into graphs via deterministic mappings (e.g., using RDKit (Landrum, 2006)).
However, this design has two critical limitations. First, the
SMILES representation is not designed to capture molecular similarity. For instance, two molecules with similar
chemical structures may be encoded into markedly different
SMILES strings (e.g., Figure 1). This prevents generative
models like variational autoencoders from learning smooth
molecular embeddings. Second, essential chemical properties such as molecule validity are easier to express on graphs
rather than linear SMILES representations. We hypothesize
that operating directly on graphs improves generative modeling of valid chemical structures.
Our primary contribution is a new generative model of
molecular graphs. While one could imagine solving the
problem in a standard manner – generating graphs node
by node – the approach is not ideal for molecules. This is
because creating molecules atom by atom would force the
model to generate chemically invalid intermediaries (see,
e.g., Figure 2), delaying validation until a complete graph
is generated. Instead, we propose to generate molecular
graphs in two phases by exploiting valid subgraphs as components. The overall generative approach, cast as a junction
tree variational autoencoder, first generates a tree structured object (a junction tree) whose role is to represent the
scaffold of subgraph components and their coarse relative
arrangements. The components are valid chemical substructures automatically extracted from the training set using tree
decomposition and are used as building blocks. In the second phase, the subgraphs (nodes in the tree) are assembled
together into a coherent molecular graph.
Figure 2. Comparison of two graph generation schemes: Structure
by structure approach is preferred as it avoids invalid intermediate
states (marked in red) encountered in node by node approach.
We evaluate our model on multiple tasks ranging from
molecular generation to optimization of a given molecule
according to desired properties. As baselines, we utilize
state-of-the-art SMILES-based generation approaches (Kusner et al., 2017; Dai et al., 2018). We demonstrate that
our model produces 100% valid molecules when sampled
from a prior distribution, outperforming the top performing baseline by a significant margin. In addition, we show
that our model excels in discovering molecules with desired
properties, yielding a 30% relative gain over the baselines.
2. Junction Tree Variational Autoencoder
Our approach extends the variational autoencoder (Kingma
& Welling, 2013) to molecular graphs by introducing a suitable encoder and a matching decoder. Deviating from previous work (Gómez-Bombarelli et al., 2016; Kusner et al.,
2017), we interpret each molecule as having been built from
subgraphs chosen out of a vocabulary of valid components.
These components are used as building blocks both when
encoding a molecule into a vector representation as well
as when decoding latent vectors back into valid molecular
graphs. The key advantage of this view is that the decoder
can realize a valid molecule piece by piece by utilizing the
collection of valid components and how they interact, rather
than trying to build the molecule atom by atom through
chemically invalid intermediaries (Figure 2). An aromatic
bond, for example, is chemically invalid on its own unless
the entire aromatic ring is present. It would be therefore
challenging to learn to build rings atom by atom rather than
by introducing rings as part of the basic vocabulary.
Our vocabulary of components, such as rings, bonds and
individual atoms, is chosen to be large enough so that a
given molecule can be covered by overlapping components
or clusters of atoms. The clusters serve the role analogous to
cliques in graphical models, as they are expressive enough
that a molecule can be covered by overlapping clusters without forming cluster cycles. In this sense, the clusters serve
as cliques in a (non-optimal) triangulation of the molecular
graph. We form a junction tree of such clusters and use it
as the tree representation of the molecule. Since our choice
of cliques is constrained a priori, we cannot guarantee that
a junction tree exists with such clusters for an arbitrary
molecule. However, our clusters are built on the basis of the
molecules in the training set to ensure that a corresponding junction tree can be
found. Empirically, our clusters cover most of the molecules in the test set.

Figure 3. Overview of our molecule generation paradigm: A molecular graph G is
first decomposed into its junction tree TG, where each colored node in the tree
represents a substructure in the molecule. We then encode both the tree and graph
into their latent embeddings zT and zG. To decode the molecule, we first
reconstruct the junction tree from zT, and then assemble the nodes in the tree
back into the original molecule, guided by zG.
The original molecular graph and its associated junction tree
offer two complementary representations of a molecule. We
therefore encode the molecule into a two-part latent representation z = [zT , zG ] where zT encodes the tree structure
and what the clusters are in the tree without fully capturing how exactly the clusters are mutually connected. zG
encodes the graph to capture the fine-grained connectivity.
Both parts are created by tree and graph encoders q(zT |T )
and q(zG |G). The latent representation is then decoded
back into a molecular graph in two stages. As illustrated in
Figure 3, we first reproduce the junction tree using a tree
decoder p(T |zT ) based on the information in zT . Second,
we predict the fine grain connectivity between the clusters
in the junction tree using a graph decoder p(G|T , zG ) to
realize the full molecular graph. The junction tree approach
allows us to maintain chemical feasibility during generation.
Notation A molecular graph is defined as G = (V, E)
where V is the set of atoms (vertices) and E the set of bonds
(edges). Let N(x) be the set of neighbors of x. We denote the sigmoid
function as σ(·) and the ReLU function as τ(·). We use i, j, k
for nodes in the tree and u, v, w for nodes in the graph.
2.1. Junction Tree
A tree decomposition maps a graph G into a junction tree
by contracting certain vertices into a single node so that G
becomes cycle-free. Formally, given a graph G, a junction
tree TG = (V, E, X ) is a connected labeled tree whose
node set is V = {C1 , · · · , Cn } and edge set is E. Each
node or cluster Ci = (Vi , Ei ) is an induced subgraph of G,
satisfying the following constraints:
1. The union of all clusters equals G. That is, ⋃_i V_i = V and ⋃_i E_i = E.
2. Running intersection: For all clusters Ci , Cj and Ck ,
Vi ∩ Vj ⊆ Vk if Ck is on the path from Ci to Cj .
Viewing induced subgraphs as cluster labels, junction trees
are labeled trees with label vocabulary X . By our molecule
tree decomposition, X contains only cycles (rings) and single edges. Thus the vocabulary size is limited (|X | = 780
for a standard dataset with 250K molecules).
Tree Decomposition of Molecules Here we present our
tree decomposition algorithm tailored for molecules, which
finds its root in chemistry (Rarey & Dixon, 1998). Our
cluster vocabulary X includes chemical structures such as
bonds and rings (Figure 3). Given a graph G, we first find all
its simple cycles, and its edges not belonging to any cycles.
Two simple rings are merged together if they have more than
two overlapping atoms, as they constitute a specific structure
called bridged compounds (Clayden et al., 2001). Each of
those cycles or edges is considered as a cluster. Next, a
cluster graph is constructed by adding edges between all
intersecting clusters. Finally, we select one of its spanning
trees as the junction tree of G (Figure 3). As a result of ring
merging, any two clusters in the junction tree have at most
two atoms in common, facilitating efficient inference in the
graph decoding phase. The detailed procedure is described
in the supplementary.
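For concreteness, a rough sketch of this cluster extraction is shown below using RDKit. It uses the SSSR rings RDKit exposes rather than all simple cycles, so it is a simplified reading of the procedure; the exact details are in the paper's supplementary.

```python
# Rough sketch of cluster extraction: simple rings (merged when they
# share more than two atoms) plus all non-ring bonds.
from rdkit import Chem

def clusters(smiles):
    mol = Chem.MolFromSmiles(smiles)
    # SSSR rings as candidate ring clusters
    rings = [set(r) for r in mol.GetRingInfo().AtomRings()]
    # merge rings sharing more than two atoms (bridged compounds)
    changed = True
    while changed:
        changed = False
        for i in range(len(rings)):
            for j in range(i + 1, len(rings)):
                if len(rings[i] & rings[j]) > 2:
                    rings[i] |= rings.pop(j)
                    changed = True
                    break
            if changed:
                break
    # every bond not in a ring is a two-atom cluster
    edges = [{b.GetBeginAtomIdx(), b.GetEndAtomIdx()}
             for b in mol.GetBonds() if not b.IsInRing()]
    return rings + edges

print(clusters("C1CCC1CC"))  # [{0, 1, 2, 3}, {3, 4}, {4, 5}]
```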
2.2. Graph Encoder
We first encode the latent representation of G by a graph
message passing network (Dai et al., 2016; Gilmer et al.,
2017). Each vertex v has a feature vector xv indicating the
atom type, valence, and other properties. Similarly, each
edge (u, v) ∈ E has a feature vector xuv indicating its
bond type, and two hidden vectors ν uv and ν vu denoting
the message from u to v and vice versa. Due to the loopy
structure of the graph, messages are exchanged in a loopy
belief propagation fashion:
X
g
g
g
ν (t)
ν (t−1)
uv = τ (W1 xu + W2 xuv + W3
wu ) (1)
w∈N (u)\v
(t)
where ν uv is the message computed in t-th iteration, initial(0)
ized with ν uv = 0. After T steps of iteration, we aggregate
those messages as the latent vector of each vertex, which
captures its local graphical structure:
X
)
hu = τ (Ug1 xu +
Ug2 ν (T
(2)
vu )
v∈N (u)
P
The final graph representation is hG = i hi /|V |. The
mean µG and log variance log σ G of the variational posterior approximation are computed from hG with two separate
affine layers. zG is sampled from a Gaussian N (µG , σ G ).
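The two equations translate almost line by line into code; the following numpy sketch uses random, untrained parameters purely to show the data flow (in the real model the W and U matrices are learned, and the features come from atom and bond vocabularies). It assumes xe contains both directed orders of each bond.

```python
# Minimal numpy rendering of Eqs. (1)-(2); parameters are placeholders.
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def encode_graph(x, xe, nbrs, T=3, d=8, seed=0):
    """x: {u: atom feature vec}, xe: {(u, v): bond feature vec},
    nbrs: {u: list of neighbours}. Returns h_G."""
    rng = np.random.default_rng(seed)
    W1, W2, W3, U1, U2 = (0.1 * rng.standard_normal((d, d)) for _ in range(5))
    nu = {(u, v): np.zeros(d) for u in nbrs for v in nbrs[u]}
    for _ in range(T):
        # Eq. (1): loopy-BP style updates over all directed edges
        nu = {(u, v): relu(W1 @ x[u] + W2 @ xe[u, v] + W3 @
                           sum((nu[w, u] for w in nbrs[u] if w != v),
                               np.zeros(d)))
              for (u, v) in nu}
    # Eq. (2): aggregate inward messages into per-vertex latent vectors
    h = {u: relu(U1 @ x[u] + sum((U2 @ nu[v, u] for v in nbrs[u]),
                                 np.zeros(d)))
         for u in nbrs}
    return sum(h.values()) / len(h)
```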
2.3. Tree Encoder
We similarly encode TG with a tree message passing network. Each cluster Ci is represented by a one-hot encoding
xi representing its label type. Each edge (Ci , Cj ) is associated with two message vectors mij and mji . We pick an
arbitrary leaf node as the root and propagate messages in
two phases. In the first bottom-up phase, messages are initiated from the leaf nodes and propagated iteratively towards
root. In the top-down phase, messages are propagated from
the root to all the leaf nodes. Message mij is updated as:
    m_ij = GRU(x_i, {m_ki}_{k∈N(i)\j})    (3)

where GRU is a Gated Recurrent Unit (Chung et al., 2014; Li et al., 2015) adapted
for tree message passing:

    s_ij = ∑_{k∈N(i)\j} m_ki    (4)
    z_ij = σ(W^z x_i + U^z s_ij + b^z)    (5)
    r_ki = σ(W^r x_i + U^r m_ki + b^r)    (6)
    m̃_ij = tanh(W x_i + U ∑_{k∈N(i)\j} r_ki ⊙ m_ki)    (7)
    m_ij = (1 − z_ij) ⊙ s_ij + z_ij ⊙ m̃_ij    (8)
The message passing follows the schedule where mij is
computed only when all its precursors {mki | k ∈ N (i)\j}
have been computed. This architectural design is motivated
by the belief propagation algorithm over trees and is thus
different from the graph encoder.
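A direct transcription of Eqs. (4)–(8), with ⊙ the elementwise product, looks as follows; the parameter matrices are placeholders for learned weights.

```python
# Numpy sketch of the tree GRU message update (Eqs. (4)-(8)).
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def tree_gru(x_i, inward, P):
    """x_i: label embedding of node i; inward: list of messages m_ki
    for k in N(i)\\{j}; P: dict of (learned) parameter arrays."""
    d = x_i.shape[0]
    s = sum(inward, np.zeros(d))                                  # Eq. (4)
    z = sigmoid(P["Wz"] @ x_i + P["Uz"] @ s + P["bz"])            # Eq. (5)
    gated = np.zeros(d)
    for m_ki in inward:
        r = sigmoid(P["Wr"] @ x_i + P["Ur"] @ m_ki + P["br"])     # Eq. (6)
        gated += r * m_ki
    m_tilde = np.tanh(P["W"] @ x_i + P["U"] @ gated)              # Eq. (7)
    return (1 - z) * s + z * m_tilde                              # Eq. (8)
```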
After the message passing, we obtain the latent representation of each node h_i by
aggregating its inward messages:

    h_i = τ(W^o x_i + ∑_{k∈N(i)} U^o m_ki)    (9)

The final tree representation is h_{T_G} = h_root, which encodes a rooted tree
(T, root). Unlike the graph encoder, we do not apply node average pooling, as that
would leave the tree decoder with no way to tell which node to generate first.
z_{T_G} is sampled in a similar way as in the graph encoder. For simplicity, we
abbreviate z_{T_G} as z_T from now on.
This tree encoder plays two roles in our framework. First, it is used to compute
z_T, which only requires the bottom-up phase of the network. Second, after a tree
T̂ is decoded from z_T, it is used to compute messages m̂_ij over the entire T̂,
to provide essential contexts of every node during graph decoding. This requires
both top-down and bottom-up phases. We will elaborate this in Section 2.5.

Algorithm 1 Tree decoding at sampling time
Require: Latent representation z_T
 1: Initialize: Tree T̂ ← ∅
 2: function SampleTree(i, t)
 3:   Set X_i ← all cluster labels that are chemically compatible with node i and its current neighbors.
 4:   Set d_t ← expand with probability p_t.            ▷ Eq. (11)
 5:   if d_t = expand and X_i ≠ ∅ then
 6:     Create a node j and add it to tree T̂.
 7:     Sample the label of node j from X_i.            ▷ Eq. (12)
 8:     SampleTree(j, t + 1)
 9:   end if
10: end function

Figure 4. Illustration of the tree decoding process. Nodes are labeled in the order
in which they are generated. 1) Node 2 expands child node 4 and predicts its label
with message h24. 2) As node 4 is a leaf node, decoder backtracks and computes
message h42. 3) Decoder continues to backtrack as node 2 has no more children.
4) Node 1 expands node 5 and predicts its label.
2.4. Tree Decoder
We decode a junction tree T from its encoding zT with a tree
structured decoder. The tree is constructed in a top-down
fashion by generating one node at a time. As illustrated
in Figure 4, our tree decoder traverses the entire tree from
the root, and generates nodes in their depth-first order. For
every visited node, the decoder first makes a topological
prediction: whether this node has children to be generated.
When a new child node is created, we predict its label and
recurse this process. Recall that cluster labels represent
subgraphs in a molecule. The decoder backtracks when a
node has no more children to generate.
At each time step, a node receives information from other
nodes in the current tree for making those predictions. The
information is propagated through message vectors hij
when trees are incrementally constructed. Formally, let
Ẽ = {(i_1, j_1), · · · , (i_m, j_m)} be the edges traversed in a depth-first
traversal over T = (V, E), where m = 2|E| as each edge is traversed in both
directions. The model visits node i_t at time t. Let Ẽ_t be the first t edges in
Ẽ. The message h_{i_t,j_t} is updated through previous messages:

    h_{i_t,j_t} = GRU(x_{i_t}, {h_{k,i_t} | (k, i_t) ∈ Ẽ_t, k ≠ j_t})    (10)
where GRU is the same recurrent unit as in the tree encoder.
Topological Prediction When the model visits node i_t, it makes a binary
prediction on whether it still has children to be generated. We compute this
probability by combining z_T, node features x_{i_t} and inward messages h_{k,i_t}
via a one hidden layer network followed by a sigmoid function:

    p_t = σ(u^d · τ(W_1^d x_{i_t} + W_2^d z_T + W_3^d ∑_{(k,i_t)∈Ẽ_t} h_{k,i_t}))    (11)
Label Prediction When a child node j is generated from its parent i, we predict
its node label with

    q_j = softmax(U^l τ(W_1^l z_T + W_2^l h_{ij}))    (12)

where q_j is a distribution over label vocabulary X. When j is a root node, its
parent i is a virtual node and h_{ij} = 0.
Learning The tree decoder aims to maximize the likelihood p(T | z_T). Let
p̂_t ∈ {0, 1} and q̂_j be the ground truth topological and label values; the decoder
minimizes the following cross entropy loss:¹

    L_c(T) = ∑_t L^d(p_t, p̂_t) + ∑_j L^l(q_j, q̂_j)    (13)
Similar to sequence generation, during training we perform
teacher forcing: after topological and label prediction at
each step, we replace them with their ground truth so that
the model makes predictions given correct histories.
Decoding & Feasibility Check Algorithm 1 shows how a
tree is sampled from zT . The tree is constructed recursively
guided by topological predictions without any external guidance used in training. To ensure the sampled tree could be
realized into a valid molecule, we define set Xi to be cluster
labels that are chemically compatible with node i and its
current neighbors. When a child node j is generated from
node i, we sample its label from Xi with a renormalized
distribution qj over Xi by masking out invalid labels.
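The masked, renormalized sampling step can be sketched as follows; the function name and the boolean-mask representation of X_i are our own.

```python
# Feasibility-masked label sampling: infeasible cluster labels get zero
# probability and q_j is renormalized over the compatible set X_i.
import numpy as np

def sample_label(q_j, compatible, rng=np.random.default_rng()):
    """q_j: softmax scores over the label vocabulary (Eq. (12));
    compatible: boolean mask marking the chemically feasible labels."""
    masked = np.where(compatible, q_j, 0.0)
    if masked.sum() == 0:     # no feasible expansion: caller backtracks
        return None
    masked = masked / masked.sum()
    return rng.choice(len(q_j), p=masked)
```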
2.5. Graph Decoder
The final step of our model is to reproduce a molecular graph Ĝ that underlies
the predicted junction tree T̂ = (V̂, Ê).

¹ The node ordering is not unique as the order within sibling nodes is ambiguous.
In this paper we train our model with one ordering and leave this issue for future
work.
Note that this step is not deterministic since there are potentially many molecules
that correspond to the same junction tree. The underlying degree of freedom
pertains to how neighboring clusters C_i and C_j are attached to each other as
subgraphs. Our goal here is to assemble the subgraphs (nodes in the tree) together
into the correct molecular graph.

Let G(T) be the set of graphs whose junction tree is T. Decoding graph Ĝ from
T̂ = (V̂, Ê) is a structured prediction:

    Ĝ = arg max_{G′ ∈ G(T̂)} f^a(G′)    (14)

where f^a is a scoring function over candidate graphs. We only consider scoring
functions that decompose across the clusters and their neighbors. In other words,
each term in the scoring function depends only on how a cluster C_i is attached to
its neighboring clusters C_j, j ∈ N_T̂(i), in the tree T̂. The problem of finding
the highest scoring graph Ĝ – the assembly task – could be cast as a graphical
model inference task in a model induced by the junction tree. However, for
efficiency reasons, we will assemble the molecular graph one neighborhood at a
time, following the order in which the tree itself was decoded. In other words, we
start by sampling the assembly of the root and its neighbors according to their
scores. Then we proceed to assemble the neighbors and their associated clusters
(removing the degrees of freedom set by the root assembly), and so on.

It remains to be specified how each neighborhood realization is scored. Let G_i be
the subgraph resulting from a particular merging of cluster C_i in the tree with
its neighbors C_j, j ∈ N_T̂(i). We score G_i as a candidate subgraph by first
deriving a vector representation h_{G_i} and then using f_i^a(G_i) = h_{G_i} · z_G
as the subgraph score. To this end, let u, v specify atoms in the candidate
subgraph G_i and let α_v = i if v ∈ C_i and α_v = j if v ∈ C_j \ C_i. The indices
α_v are used to mark the position of the atoms in the junction tree, and to
retrieve messages m̂_{i,j} summarizing the subtree under i along the edge (i, j)
obtained by running the tree encoding algorithm. The neural messages pertaining
to the atoms and bonds in subgraph G_i are obtained and aggregated into h_{G_i},
similarly to the encoding step, but with different (learned) parameters:

    µ_uv^{(t)} = τ(W_1^a x_u + W_2^a x_{uv} + W_3^a µ̃_uv^{(t−1)})    (15)

where

    µ̃_uv^{(t−1)} = ∑_{w∈N(u)\v} µ_wu^{(t−1)}                      if α_u = α_v,
    µ̃_uv^{(t−1)} = m̂_{α_u,α_v} + ∑_{w∈N(u)\v} µ_wu^{(t−1)}       if α_u ≠ α_v.

The major difference from Eq. (1) is that we augment the model with tree messages
m̂_{α_u,α_v} derived by running the tree encoder over the predicted tree T̂.
m̂_{α_u,α_v} provides a tree dependent positional context for bond (u, v)
(illustrated as subtree A in Figure 5).

Figure 5. Decode a molecule from a junction tree. 1) Ground truth molecule G.
2) Predicted junction tree T̂. 3) We enumerate different combinations between red
cluster C and its neighbors. Crossed arrows indicate combinations that lead to
chemically infeasible molecules. Note that if we discard tree structure during
enumeration (i.e., ignoring subtree A), the last two candidates will collapse into
the same molecule. 4) Rank subgraphs at each node. The final graph is decoded by
putting together all the predicted subgraphs (dashed box).
Learning The graph decoder parameters are learned to maximize the log-likelihood
of predicting the correct subgraphs G_i of the ground truth graph G at each tree
node:

    L_g(G) = ∑_i [ f^a(G_i) − log ∑_{G_i′ ∈ G_i} exp(f^a(G_i′)) ]    (16)

where G_i is the set of possible candidate subgraphs at tree node i. During
training, we again apply teacher forcing, i.e. we feed the graph decoder with
ground truth trees as input.
Complexity By our tree decomposition, any two clusters
share at most two atoms, so we only need to merge at most
two atoms or one bond. By pruning chemically invalid
subgraphs and merging isomorphic graphs, |Gi | ≈ 4 on
average when tested on a standard ZINC drug dataset. The
computational complexity of JT-VAE is therefore linear in
the number of clusters, scaling nicely to large graphs.
3. Experiments
Our evaluation efforts measure various aspects of molecular
generation. The first two evaluations follow previously
proposed tasks (Kusner et al., 2017). We also introduce a
third task — constrained molecule optimization.
Figure 6. Left: Random molecules sampled from prior distribution N (0, I). Right: Visualization of the local neighborhood of a molecule
in the center. Three molecules highlighted in red dashed box have the same tree structure as the center molecule, but with different graph
structure as their clusters are combined differently. The same phenomenon emerges in another group of molecules (blue dashed box).
• Molecule reconstruction and validity We test the VAE
models on the task of reconstructing input molecules from
their latent representations, and decoding valid molecules
when sampling from prior distribution. (Section 3.1)
• Bayesian optimization Moving beyond generating valid
molecules, we test how the model can produce novel
molecules with desired properties. To this end, we perform Bayesian optimization in the latent space to search
molecules with specified properties. (Section 3.2)
• Constrained molecule optimization The task is to modify given molecules to improve specified properties, while
constraining the degree of deviation from the original
molecule. This is a more realistic scenario in drug discovery, where development of new drugs usually starts with
known molecules such as existing drugs (Besnard et al.,
2012). Since it is a new task, we cannot compare to any
existing baselines. (Section 3.3)
Below we describe the data, baselines and model configuration that are shared across the tasks. Additional setup details
are provided in the task-specific sections.
Data We use the ZINC molecule dataset from Kusner et al.
(2017) for our experiments. It contains about 250K drug
molecules extracted from the ZINC database (Sterling &
Irwin, 2015). We follow the same training/testing split as
the previous work.
Baselines We compare our approach with SMILES-based
baselines: 1) Character VAE (CVAE) (Gómez-Bombarelli
et al., 2016) which generates SMILES strings character by
character; 2) Grammar VAE (GVAE) (Kusner et al., 2017)
that generates SMILES following syntactic constraints given
by a context-free grammar; 3) Syntax-directed VAE (SDVAE) (Dai et al., 2018) that incorporates both syntactic
and semantic constraints of SMILES via attribute grammar. For molecule generation task, we also compare with
GraphVAE (Simonovsky & Komodakis, 2018) that directly
generates atom labels and adjacency matrices of graphs.
Model Configuration To be comparable with the above
baselines, we set the latent space dimension as 56, i.e., the
tree and graph representation hT and hG have 28 dimensions each. Full training details and model configurations
are provided in the appendix.
3.1. Molecule Reconstruction and Validity
Setup The first task is to reconstruct and sample molecules
from latent space. Since both encoding and decoding process are stochastic, we estimate reconstruction accuracy by
Monte Carlo method used in (Kusner et al., 2017): Each
molecule is encoded 10 times and each encoding is decoded 10 times. We report the portion of the 100 decoded
molecules that are identical to the input molecule. For a fair
comparison, we define two molecules as identical if they
have the same SMILES string. At testing time, we convert
all generated graphs to SMILES using RDKit.
To compute validity, we sample 1000 latent vectors from the
prior distribution N (0, I), and decode each of these vectors
100 times. We report the percentage of decoded molecules
that are chemically valid (checked by RDKit).
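The validity check itself is simple: a sample counts as valid exactly when RDKit can parse the emitted SMILES back into a molecule. The helper below is a generic rendering of that metric, not the authors' evaluation script.

```python
# Fraction of decoded SMILES strings that RDKit accepts as valid.
from rdkit import Chem

def validity(smiles_samples):
    ok = sum(Chem.MolFromSmiles(s) is not None for s in smiles_samples)
    return ok / len(smiles_samples)
```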
Table 1. Reconstruction accuracy and prior validity results. Baseline results are
copied from Kusner et al. (2017); Dai et al. (2018); Simonovsky & Komodakis (2018).

Method      Reconstruction   Validity
CVAE        44.6%            0.7%
GVAE        53.7%            7.2%
SD-VAE²     76.2%            43.5%
GraphVAE    –                13.5%
JT-VAE      76.7%            100.0%
Results Table 1 shows that JT-VAE outperforms previous models in molecule
reconstruction, and always produces valid molecules when sampled from the prior
distribution. These sampled molecules have non-trivial structures rather than just
simple chains (Figure 6). We further sampled 5000 molecules from the prior and
found they are all distinct from the training set. Thus our model does not simply
memorize the training data.
Analysis We qualitatively examine the latent space of JT-VAE by visualizing the neighborhood of molecules. Given
a molecule, we follow the method in Kusner et al. (2017)
to construct a grid visualization of its neighborhood. For
comparison, we select the same molecule visualized in Dai
et al. (2018). Figure 6 shows the local neighborhood of
this molecule. Compared to the figure in Dai et al. (2018),
our neighborhood does not contain molecules with huge
rings (with more than 7 atoms), which rarely occur in the
dataset. We also highlight two groups of closely resembling
molecules that have identical tree structures but vary only
in how clusters are attached together. This demonstrates the
smoothness of learned molecular embeddings.
3.2. Bayesian Optimization
Setup The second task is to produce novel molecules with
desired properties. Following (Kusner et al., 2017), our
target chemical property y(·) is octanol-water partition coefficients (logP) penalized by the synthetic accessibility (SA)
score and number of long cycles.3 To perform Bayesian
optimization (BO), we first train a VAE and associate each
molecule with a latent vector, given by the mean of the variational encoding distribution. After the VAE is learned, we
train a sparse Gaussian process (SGP) to predict y(m) given
its latent representation. Then we perform five iterations of
batched BO using the expected improvement heuristic.
For comparison, we report 1) the predictive performance of
SGP trained on latent encodings learned by different VAEs,
measured by log-likelihood (LL) and root mean square error (RMSE) with 10-fold cross validation. 2) The top-3
molecules found by BO under different models.
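For reference, the expected-improvement acquisition used in such batched BO loops can be computed in closed form from the GP posterior mean and standard deviation. This snippet is a generic rendering of the heuristic, not the authors' exact setup.

```python
# Expected improvement for maximizing y at candidate latent points;
# mu and sigma come from the sparse GP posterior (not shown here).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y):
    sigma = np.maximum(sigma, 1e-9)       # avoid division by zero
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)
```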
² The SD-VAE result is copied from Table 1 in Dai et al. (2018).
³ y(m) = logP(m) − SA(m) − cycle(m), where cycle(m) counts the number of rings
that have more than six atoms.
Table 2. Best molecule property scores found by each method. Baseline results are
from Kusner et al. (2017); Dai et al. (2018).

Method    1st    2nd    3rd
CVAE      1.98   1.42   1.19
GVAE      2.94   2.89   2.80
SD-VAE    4.04   3.50   2.96
JT-VAE    5.30   4.93   4.49
Figure 7. Best three molecules and their property scores found by
JT-VAE using Bayesian optimization.
Results As shown in Table 2, JT-VAE finds molecules with
significantly better scores than previous methods. Figure 7
lists the top-3 best molecules found by JT-VAE. In fact,
JT-VAE finds over 50 molecules with scores over 3.50 (the
second best molecule proposed by SD-VAE). Moreover, the
SGP yields better predictive performance when trained on
JT-VAE embeddings (Table 3).
3.3. Constrained Optimization
Setup The third task is to perform molecule optimization
in a constrained scenario. Given a molecule m, the task is
to find a different molecule m0 that has the highest property
value with the molecular similarity sim(m, m0 ) ≥ δ for
some threshold δ. We use Tanimoto similarity with Morgan
fingerprint (Rogers & Hahn, 2010) as the similarity metric,
and penalized logP coefficient as our target chemical property. For this task, we jointly train a property predictor F
(parameterized by a feed-forward network) with JT-VAE to
predict y(m) from the latent embedding of m. To optimize
a molecule m, we start from its latent representation, and
apply gradient ascent in the latent space to improve the predicted score F (·), similar to (Mueller et al., 2017). After
applying K = 80 gradient steps, K molecules are decoded
from resulting latent trajectories, and we report the molecule
with the highest F (·) that satisfies the similarity constraint.
A modification succeeds if one of the decoded molecules
satisfies the constraint and is distinct from the original.
To provide the greatest challenge, we selected 800 molecules
with the lowest property score y(·) from the test set. We
report the success rate (how often a modification succeeds),
and among success cases the average improvement y(m0 ) −
y(m) and molecular similarity sim(m, m0 ) between the
original and modified molecules m and m0 .
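The optimization loop and the similarity constraint can be sketched as follows. The Tanimoto computation uses RDKit's Morgan fingerprints as the paper describes; F, grad_F and decode are stand-ins for the jointly trained property predictor and JT-VAE decoder, and the step size is an assumption of ours.

```python
# Sketch of constrained latent-space optimization with a Tanimoto
# similarity constraint (radius-2 Morgan fingerprints).
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def tanimoto(smiles_a, smiles_b):
    fa = AllChem.GetMorganFingerprint(Chem.MolFromSmiles(smiles_a), 2)
    fb = AllChem.GetMorganFingerprint(Chem.MolFromSmiles(smiles_b), 2)
    return DataStructs.TanimotoSimilarity(fa, fb)

def constrained_optimize(z0, F, grad_F, decode, m0, delta, K=80, lr=0.05):
    """F: property predictor on latent vectors; decode: latent -> SMILES."""
    z, best = z0, None
    for _ in range(K):
        z = z + lr * grad_F(z)             # ascend the predicted score
        m = decode(z)
        if m and m != m0 and tanimoto(m, m0) >= delta:
            if best is None or F(z) > best[0]:
                best = (F(z), m)
    return None if best is None else best[1]   # None means failure
```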
Table 3. Predictive performance of sparse Gaussian Processes trained on different
VAEs. Baseline results are copied from Kusner et al. (2017) and Dai et al. (2018).

Method    LL                RMSE
CVAE      −1.812 ± 0.004    1.504 ± 0.006
GVAE      −1.739 ± 0.004    1.404 ± 0.006
SD-VAE    −1.697 ± 0.015    1.366 ± 0.023
JT-VAE    −1.658 ± 0.023    1.290 ± 0.026
Table 4. Constrained optimization result of JT-VAE: mean and standard deviation of
property improvement, molecular similarity and success rate under constraints
sim(m, m′) ≥ δ with varied δ.

δ     Improvement    Similarity     Success
0.0   1.91 ± 2.04    0.28 ± 0.15    97.5%
0.2   1.68 ± 1.85    0.33 ± 0.13    97.1%
0.4   0.84 ± 1.45    0.51 ± 0.10    83.6%
0.6   0.21 ± 0.71    0.69 ± 0.06    46.4%
Results Our results are summarized in Table 4. The unconstrained scenario (δ = 0) has the best average improvement,
but often proposes dissimilar molecules. When we tighten
the constraint to δ = 0.4, about 80% of the time our model
finds similar molecules, with an average improvement 0.84.
This also demonstrates the smoothness of the learned latent
space. Figure 8 illustrates an effective modification resulting
in a similar molecule with great improvement.
4. Related Work
Molecule Generation Previous work on molecule generation mostly operates on SMILES strings. Gómez-Bombarelli et al. (2016); Segler et al. (2017) built generative models of SMILES strings with recurrent decoders.
Unfortunately, these models could generate invalid SMILES
that do not result in any molecules. To remedy this issue,
Kusner et al. (2017); Dai et al. (2018) complemented the
decoder with syntactic and semantic constraints of SMILES
by context free and attribute grammars, but these grammars
do not fully capture chemical validity. Other techniques
such as active learning (Janz et al., 2017) and reinforcement
learning (Guimaraes et al., 2017) encourage the model to
generate valid SMILES through additional training signal.
Very recently, Simonovsky & Komodakis (2018) proposed
to generate molecular graphs by predicting their adjacency
matrices, and Li et al. (2018) generated molecules node by
node. In comparison, our method enforces chemical validity
and is more efficient due to the coarse-to-fine generation.
Graph-structured Encoders The neural network formulation on graphs was first proposed by Gori et al. (2005);
Scarselli et al. (2009), and later enhanced by Li et al. (2015)
with gated recurrent units. For recurrent architectures over
Figure 8. A molecule modification that yields an improvement of
4.0 with molecular similarity 0.617 (modified part is in red).
graphs, Lei et al. (2017) designed Weisfeiler-Lehman kernel
network inspired by graph kernels. Dai et al. (2016) considered a different architecture where graphs were viewed as latent variable graphical models, and derived their model from
message passing algorithms. Our tree and graph encoder are
closely related to this graphical model perspective, and to
neural message passing networks (Gilmer et al., 2017). For
convolutional architectures, Duvenaud et al. (2015) introduced a convolution-like propagation on molecular graphs,
which was generalized to other domains by Niepert et al.
(2016). Bruna et al. (2013); Henaff et al. (2015) developed
graph convolution in spectral domain via graph Laplacian.
For applications, graph neural networks are used in semi-supervised classification (Kipf & Welling, 2016), computer
vision (Monti et al., 2016), and chemical domains (Kearnes
et al., 2016; Schütt et al., 2017; Jin et al., 2017).
Tree-structured Models Our tree encoder is related to recursive neural networks and tree-LSTM (Socher et al., 2013;
Tai et al., 2015; Zhu et al., 2015). These models encode
tree structures in which nodes are transformed bottom-up into vector representations. In contrast, our model
propagates information both bottom-up and top-down.
On the decoding side, tree generation naturally arises in
natural language parsing (Dyer et al., 2016; Kiperwasser &
Goldberg, 2016). Different from our approach, natural language parsers have access to input words and only predict
the topology of the tree. For general purpose tree generation,
Vinyals et al. (2015); Aharoni & Goldberg (2017) applied recurrent networks to generate linearized versions of trees, but
their architectures were entirely sequence-based. Dong &
Lapata (2016); Alvarez-Melis & Jaakkola (2016) proposed
tree-based architectures that construct trees top-down from
the root. Our model is most closely related to that of Alvarez-Melis & Jaakkola (2016), which disentangles topological prediction from label prediction, but we generate nodes in a depth-first order and have additional steps that propagate information bottom-up. This forward-backward propagation also appears in Parisotto et al. (2016), but their model is node-based whereas ours is based on message passing.
5. Conclusion
In this paper we present a junction tree variational autoencoder for generating molecular graphs. Our method significantly outperforms previous work in molecule generation and optimization. For future work, we will attempt to generalize our method to general low-treewidth graphs.
Acknowledgement
We thank Jonas Mueller, Chengtao Li, Tao Lei and MIT
NLP Group for their helpful comments. This work was
supported by the DARPA Make-It program under contract
ARO W911NF-16-2-0023.
References
Aharoni, Roee and Goldberg, Yoav. Towards string-to-tree neural machine translation. arXiv preprint arXiv:1704.04743, 2017.
Alvarez-Melis, David and Jaakkola, Tommi S. Tree-structured decoding with doubly-recurrent neural networks. 2016.
Besnard, Jérémy, Ruda, Gian Filippo, Setola, Vincent,
Abecassis, Keren, Rodriguiz, Ramona M, Huang, XiPing, Norval, Suzanne, Sassano, Maria F, Shin, Antony I,
Webster, Lauren A, et al. Automated design of ligands
to polypharmacological profiles. Nature, 492(7428):215–
220, 2012.
Bruna, Joan, Zaremba, Wojciech, Szlam, Arthur, and LeCun,
Yann. Spectral networks and locally connected networks
on graphs. arXiv preprint arXiv:1312.6203, 2013.
Chung, Junyoung, Gulcehre, Caglar, Cho, KyungHyun, and
Bengio, Yoshua. Empirical evaluation of gated recurrent
neural networks on sequence modeling. arXiv preprint
arXiv:1412.3555, 2014.
Clayden, Jonathan, Greeves, Nick, Warren, Stuart, and
Wothers, P. Organic Chemistry. Oxford University Press,
2001.
Dai, Hanjun, Dai, Bo, and Song, Le. Discriminative embeddings of latent variable models for structured data.
In International Conference on Machine Learning, pp.
2702–2711, 2016.
Dai, Hanjun, Tian, Yingtao, Dai, Bo, Skiena, Steven, and
Song, Le. Syntax-directed variational autoencoder for
structured data. International Conference on Learning
Representations, 2018. URL https://openreview.
net/forum?id=SyqShMZRb.
Dong, Li and Lapata, Mirella. Language to logical form
with neural attention. arXiv preprint arXiv:1601.01280,
2016.
Duvenaud, David K, Maclaurin, Dougal, Iparraguirre, Jorge,
Bombarell, Rafael, Hirzel, Timothy, Aspuru-Guzik, Alán,
and Adams, Ryan P. Convolutional networks on graphs
for learning molecular fingerprints. In Advances in neural
information processing systems, pp. 2224–2232, 2015.
Dyer, Chris, Kuncoro, Adhiguna, Ballesteros, Miguel, and
Smith, Noah A. Recurrent neural network grammars.
arXiv preprint arXiv:1602.07776, 2016.
Gilmer, Justin, Schoenholz, Samuel S, Riley, Patrick F,
Vinyals, Oriol, and Dahl, George E. Neural message passing for quantum chemistry. arXiv preprint
arXiv:1704.01212, 2017.
Gómez-Bombarelli, Rafael, Wei, Jennifer N, Duvenaud, David, Hernández-Lobato, José Miguel, Sánchez-Lengeling, Benjamín, Sheberla, Dennis, Aguilera-Iparraguirre, Jorge, Hirzel, Timothy D, Adams, Ryan P, and Aspuru-Guzik, Alán. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 2016. doi: 10.1021/acscentsci.7b00572.
Gori, Marco, Monfardini, Gabriele, and Scarselli, Franco.
A new model for learning in graph domains. In Neural
Networks, 2005. IJCNN’05. Proceedings. 2005 IEEE International Joint Conference on, volume 2, pp. 729–734.
IEEE, 2005.
Guimaraes, Gabriel Lima, Sanchez-Lengeling, Benjamin,
Farias, Pedro Luis Cunha, and Aspuru-Guzik, Alán.
Objective-reinforced generative adversarial networks (organ) for sequence generation models. arXiv preprint
arXiv:1705.10843, 2017.
Henaff, Mikael, Bruna, Joan, and LeCun, Yann. Deep
convolutional networks on graph-structured data. arXiv
preprint arXiv:1506.05163, 2015.
Janz, David, van der Westhuizen, Jos, and HernándezLobato, José Miguel. Actively learning what makes a
discrete sequence valid. arXiv preprint arXiv:1708.04465,
2017.
Jin, Wengong, Coley, Connor, Barzilay, Regina, and
Jaakkola, Tommi. Predicting organic reaction outcomes
with weisfeiler-lehman network. In Advances in Neural
Information Processing Systems, pp. 2604–2613, 2017.
Kearnes, Steven, McCloskey, Kevin, Berndl, Marc, Pande,
Vijay, and Riley, Patrick. Molecular graph convolutions:
moving beyond fingerprints. Journal of computer-aided
molecular design, 30(8):595–608, 2016.
Kingma, Diederik P and Welling, Max. Auto-encoding
variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Kiperwasser, Eliyahu and Goldberg, Yoav. Easy-first dependency parsing with hierarchical tree lstms. arXiv preprint
arXiv:1603.00375, 2016.
Kipf, Thomas N and Welling, Max. Semi-supervised classification with graph convolutional networks. arXiv
preprint arXiv:1609.02907, 2016.
Kusner, Matt J, Paige, Brooks, and Hernández-Lobato,
José Miguel. Grammar variational autoencoder. arXiv
preprint arXiv:1703.01925, 2017.
Landrum, Greg. RDKit: Open-source cheminformatics. http://www.rdkit.org, 2006.
Segler, Marwin HS, Kogej, Thierry, Tyrchan, Christian, and
Waller, Mark P. Generating focussed molecule libraries
for drug discovery with recurrent neural networks. arXiv
preprint arXiv:1701.01329, 2017.
Lei, Tao, Jin, Wengong, Barzilay, Regina, and Jaakkola,
Tommi. Deriving neural architectures from sequence and
graph kernels. arXiv preprint arXiv:1705.09037, 2017.
Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel,
Richard. Gated graph sequence neural networks. arXiv
preprint arXiv:1511.05493, 2015.
Li, Yujia, Vinyals, Oriol, Dyer, Chris, Pascanu, Razvan,
and Battaglia, Peter. Learning deep generative models of
graphs. 2018. URL https://openreview.net/
forum?id=Hy1d-ebAb.
Monti, Federico, Boscaini, Davide, Masci, Jonathan,
Rodolà, Emanuele, Svoboda, Jan, and Bronstein,
Michael M. Geometric deep learning on graphs and
manifolds using mixture model cnns. arXiv preprint
arXiv:1611.08402, 2016.
Mueller, Jonas, Gifford, David, and Jaakkola, Tommi. Sequence to better sequence: continuous revision of combinatorial structures. In International Conference on
Machine Learning, pp. 2536–2544, 2017.
Niepert, Mathias, Ahmed, Mohamed, and Kutzkov, Konstantin. Learning convolutional neural networks for
graphs. In International Conference on Machine Learning, pp. 2014–2023, 2016.
Parisotto, Emilio, Mohamed, Abdel-rahman, Singh,
Rishabh, Li, Lihong, Zhou, Dengyong, and Kohli, Pushmeet. Neuro-symbolic program synthesis. arXiv preprint
arXiv:1611.01855, 2016.
Rarey, Matthias and Dixon, J Scott. Feature trees: a new
molecular similarity measure based on tree matching.
Journal of computer-aided molecular design, 12(5):471–
490, 1998.
Rogers, David and Hahn, Mathew. Extended-connectivity
fingerprints. Journal of chemical information and modeling, 50(5):742–754, 2010.
Scarselli, Franco, Gori, Marco, Tsoi, Ah Chung, Hagenbuchner, Markus, and Monfardini, Gabriele. The graph
neural network model. IEEE Transactions on Neural
Networks, 20(1):61–80, 2009.
Schütt, Kristof, Kindermans, Pieter-Jan, Sauceda, Huziel Enoc, Chmiela, Stefan, Tkatchenko, Alexandre, and Müller, Klaus-Robert. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. In Advances in Neural Information Processing Systems, pp. 992–1002, 2017.
Simonovsky, Martin and Komodakis, Nikos. Graphvae:
Towards generation of small graphs using variational autoencoders. arXiv preprint arXiv:1802.03480, 2018.
Socher, Richard, Perelygin, Alex, Wu, Jean, Chuang, Jason, Manning, Christopher D, Ng, Andrew, and Potts,
Christopher. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings
of the 2013 conference on empirical methods in natural
language processing, pp. 1631–1642, 2013.
Sterling, Teague and Irwin, John J. Zinc 15–ligand discovery
for everyone. J. Chem. Inf. Model, 55(11):2324–2337,
2015.
Tai, Kai Sheng, Socher, Richard, and Manning, Christopher D. Improved semantic representations from treestructured long short-term memory networks. arXiv
preprint arXiv:1503.00075, 2015.
Vinyals, Oriol, Kaiser, Łukasz, Koo, Terry, Petrov, Slav,
Sutskever, Ilya, and Hinton, Geoffrey. Grammar as a
foreign language. In Advances in Neural Information
Processing Systems, pp. 2773–2781, 2015.
Weininger, David. Smiles, a chemical language and information system. 1. introduction to methodology and
encoding rules. Journal of chemical information and
computer sciences, 28(1):31–36, 1988.
Zhu, Xiaodan, Sobihani, Parinaz, and Guo, Hongyu. Long
short-term memory over recursive structures. In International Conference on Machine Learning, pp. 1604–1612,
2015.
Supplementary Material
A. Tree Decomposition
Algorithm 2 presents our tree decomposition of molecules. V1 and V2 contain non-ring bonds and simple rings, respectively. Simple rings are extracted via RDKit's GetSymmSSSR function. We then merge rings that share three or more atoms, as they form bridged compounds. We note that the junction tree of a molecule is not unique when its cluster graph contains cycles. This introduces additional uncertainty into our probabilistic modeling. To reduce such variation, whenever three (or more) clusters intersect, we add their shared atom as a cluster and remove the cycle connecting them in the cluster graph. Finally, we construct the junction tree as the maximum spanning tree of the cluster graph (V, E). Note that we assign a large weight to edges involving clusters in V0 to ensure that no edges in any cycles are selected into the junction tree.
Algorithm 2 Tree decomposition of molecule G = (V, E)
V1 ← the set of bonds (u, v) ∈ E that do not belong to any rings.
V2 ← the set of simple rings of G.
for r1 , r2 in V2 do
Merge rings r1 , r2 into one ring if they share more than two atoms (bridged rings).
end for
V0 ← atoms being the intersection of three or more clusters in V1 ∪ V2 .
V ← V0 ∪ V1 ∪ V2
E ← {(i, j, c) ∈ V × V × R | |i ∩ j| > 0}. Set c = ∞ if i ∈ V0 or j ∈ V0 , and c = 1 otherwise.
Return The maximum spanning tree over cluster graph (V, E).
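A compact sketch of Algorithm 2 is shown below, assuming RDKit and SciPy are available. The helper names and the large finite weight (standing in for the paper's infinite edge weight) are our own illustrative choices, not the released implementation; negating the weights lets SciPy's minimum-spanning-tree routine return a maximum spanning tree.

```python
import itertools
import numpy as np
from rdkit import Chem
from scipy.sparse.csgraph import minimum_spanning_tree

def tree_decompose(mol):
    # V1: non-ring bonds as two-atom clusters.
    clusters = [{b.GetBeginAtomIdx(), b.GetEndAtomIdx()}
                for b in mol.GetBonds() if not b.IsInRing()]
    # V2: simple rings; merge pairs sharing more than two atoms (bridged rings).
    rings = [set(r) for r in Chem.GetSymmSSSR(mol)]
    merged = True
    while merged:
        merged = False
        for i, j in itertools.combinations(range(len(rings)), 2):
            if len(rings[i] & rings[j]) > 2:
                rings[i] |= rings.pop(j)
                merged = True
                break
    clusters += rings
    # V0: atoms shared by three or more clusters become their own clusters.
    counts = [sum(a in c for c in clusters) for a in range(mol.GetNumAtoms())]
    v0 = [{a} for a, c in enumerate(counts) if c >= 3]
    clusters = v0 + clusters
    # Edge weight: large stand-in for "infinity" if an endpoint is in V0, else 1.
    n = len(clusters)
    w = np.zeros((n, n))
    for i, j in itertools.combinations(range(n), 2):
        if clusters[i] & clusters[j]:
            w[i, j] = 100 if (i < len(v0) or j < len(v0)) else 1
    # Maximum spanning tree = minimum spanning tree on negated weights.
    mst = minimum_spanning_tree(-w).toarray()
    edges = list(zip(*mst.nonzero()))
    return clusters, edges
```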
Figure 9. Illustration of tree decomposition and sample of cluster label vocabulary.
B. Stereochemistry
Though usually presented as two-dimensional graphs, molecules are three-dimensional objects; that is, a molecule is defined not only by its atom types and bond connections, but also by the spatial configuration of its atoms (chiral atoms and cis-trans isomerism). Stereoisomers are molecules that have the same 2D structure but differ in the 3D orientations of their atoms in space. We note that stereochemical feasibility cannot simply be encoded as context-free or attribute grammars.
Empirically, we found it more efficient to predict the stereochemical configuration separately from the molecule generation. Specifically, the JT-VAE first generates the 2D structure of a molecule m, following the same procedure described in Section 2. Then we generate all its stereoisomers S_m using RDKit's EnumerateStereoisomers function, which identifies atoms that could be chiral. For each isomer m′ ∈ S_m, we encode its graph representation h_{m′} with the graph encoder and compute the cosine similarity f^s(m′) = cos(h_{m′}, z_m) (note that z_m is stochastic). We reconstruct the
final 3D structure by picking the stereoisomer m̂ = arg max_{m′} f^s(m′). Since on average only a few atoms can have stereochemical variations, this post-ranking process is very efficient. Combining this with tree and graph generation, the molecule reconstruction loss L becomes

    L = L_c + L_g + L_s;    L_s = f^s(m) − log Σ_{m′ ∈ S_m} exp(f^s(m′))    (17)
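A minimal sketch of this post-ranking step; `encode_graph` is a hypothetical stand-in for the trained graph encoder, and 1-D tensors are assumed:

```python
import torch.nn.functional as F
from rdkit.Chem.EnumerateStereoisomers import EnumerateStereoisomers

def pick_stereoisomer(mol, z_m, encode_graph):
    """Rank the stereoisomers of a decoded 2D molecule against the
    (stochastic) latent code z_m and return the best-scoring isomer."""
    isomers = list(EnumerateStereoisomers(mol))
    scores = [F.cosine_similarity(encode_graph(iso), z_m, dim=0)
              for iso in isomers]
    best = max(range(len(isomers)), key=lambda i: scores[i])
    return isomers[best]
```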
C. Training Details
By applying tree decomposition over the 240K molecules in the ZINC dataset, we collected a vocabulary set X of size |X| = 780. The hidden state dimension is 450 for all modules in JT-VAE, and the latent bottleneck dimension is 56. For the graph encoder, the initial atom features include the atom type, degree, formal charge, and chiral configuration. The bond feature is a concatenation of the bond type, whether the bond is in a ring, and its cis-trans configuration. For our tree encoder, we represent each cluster with a neural embedding vector, similar to word embeddings for words. The tree and graph decoders use the same feature settings as the encoders. The graph encoder and decoder run three iterations of neural message passing. For a fair comparison to SMILES-based methods, we minimized feature engineering. We use PyTorch to implement all neural components and RDKit to process molecules.
D. More Experimental Results
Sampled Molecules Note that a degenerate model could also achieve 100% prior validity by always generating simple structures such as chains. To show that our model does not converge to such trivial solutions, we randomly sample and plot 250 molecules from the prior distribution N(0, I). As shown in Figure 10, our sampled molecules exhibit rich variety and structural complexity. This demonstrates that the improvement in prior validity is sound.
Neighborhood Visualization Given a molecule, we follow Kusner et al. (2017) to construct a grid visualization of its neighborhood. Specifically, we encode a molecule into the latent space and generate two random orthogonal unit vectors as the two axes of a grid. Moving in combinations of these directions yields a set of latent vectors, which we decode into the corresponding molecules. In Figures 11 and 12, we visualize the local neighborhoods of the two molecules presented in Dai et al. (2018). Figure 11 visualizes the same molecule as Figure 6, but with wider neighborhood ranges.
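The grid construction can be sketched in a few lines of NumPy; `steps` and `delta` are illustrative parameters, not the exact values used in the figures:

```python
import numpy as np

def latent_grid(z, steps=7, delta=1.0):
    """Grid of latent vectors around z along two random orthogonal
    unit directions (Gram-Schmidt on Gaussian samples)."""
    u = np.random.randn(z.size); u /= np.linalg.norm(u)
    v = np.random.randn(z.size); v -= v.dot(u) * u; v /= np.linalg.norm(v)
    offsets = delta * (np.arange(steps) - steps // 2)
    return [[z + a * u + b * v for b in offsets] for a in offsets]
```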
Bayesian Optimization We directly used the open-source implementation of Kusner et al. (2017) for Bayesian optimization (BO). Specifically, we train a sparse Gaussian process with 500 inducing points to predict properties of molecules. Five iterations of batch BO with the expected improvement heuristic are used to propose new latent vectors. In each iteration, 50 latent vectors are proposed, from which molecules are decoded and added to the training set for the next iteration. We perform 10 independent runs and aggregate the results. In Figure 13, we present the top 50 molecules found among the 10 runs using JT-VAE. Following Kusner et al.'s implementation, the reported scores are normalized to zero mean and unit variance using the mean and variance computed from the training set.
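Schematically, the outer BO loop looks as follows; `gp_factory`, `decode`, and `score` are hypothetical stand-ins for the sparse GP of Kusner et al. (2017), the JT-VAE decoder, and the property evaluator:

```python
import numpy as np

def latent_bo(Z_train, y_train, decode, score, gp_factory,
              iters=5, batch=50):
    """Outer loop of the batch BO procedure (illustrative sketch)."""
    for _ in range(iters):
        gp = gp_factory(Z_train, y_train)   # fit sparse GP on latent codes
        Z_new = gp.propose_batch(batch)     # e.g. expected-improvement batch
        y_new = np.array([score(decode(z)) for z in Z_new])
        Z_train = np.vstack([Z_train, Z_new])
        y_train = np.concatenate([y_train, y_new])
    return Z_train, y_train
```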
Constrained Optimization For this task, a property predictor F is trained jointly with the VAE to predict y(m) = logP(m) − SA(m) from the latent embedding of m. F is a feed-forward network with one hidden layer of dimension 450 followed by a tanh activation. To optimize a molecule m, we start from its mean encoding z_m^0 = µ_m and apply 80 gradient ascent steps: z_m^t = z_m^{t−1} + α ∂y/∂z with α = 2.0. The 80 molecules decoded from the latent vectors {z_m^i} have their properties calculated. Molecular similarity sim(m, m′) is calculated via Morgan fingerprints of radius 2 with Tanimoto similarity. For each molecule m, we report the best modified molecule m′ with sim(m, m′) ≥ δ for some threshold δ. In Figure 14, we present three groups of modification examples with δ = 0.2, 0.4, 0.6. For each group, we present the top three pairs that lead to the best improvement y(m′) − y(m), as well as one pair with decreased property (y(m′) < y(m)); the decrease is caused by inaccurate property prediction. From Figure 14, we can see that a tighter similarity constraint forces the model to preserve the original structure.
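A minimal PyTorch sketch of the gradient ascent step, with `predictor` standing in for the trained property network F (assumed to return a scalar estimate):

```python
import torch

def optimize_molecule(mu_m, predictor, steps=80, alpha=2.0):
    """Gradient ascent in latent space: z^t = z^{t-1} + alpha * dy/dz."""
    z = mu_m.clone().detach().requires_grad_(True)
    trajectory = []
    for _ in range(steps):
        y = predictor(z)                      # scalar property estimate
        grad = torch.autograd.grad(y, z)[0]   # dy/dz
        z = (z + alpha * grad).detach().requires_grad_(True)
        trajectory.append(z.detach().clone())
    return trajectory                          # decode each code downstream
```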
[Figure 10: 250 molecules sampled from the prior distribution N(0, I).]
[Figure 11: Neighborhood visualization of molecule C[C@H]1CC(Nc2cncc(-c3nncn3C)c2)C[C@H](C)C1.]
[Figure 12: Neighborhood visualization of molecule COc1cc(OC)cc([C@H]2CC[NH+](CCC(F)(F)F)C2)c1.]
[Figure 13: Top 50 molecules found by Bayesian optimization using JT-VAE, with their normalized property scores.]
[Figure 14: Rows 1–3 show molecule modification results under the similarity constraints sim(m, m′) ≥ 0.2, 0.4, 0.6. For each group, the top three pairs with actual property improvement and one pair with decreased property are plotted; tighter similarity constraints force the model to preserve the original structure.]
Generation and analysis of lamplighter programs
Carlos Martin
arXiv:1707.02652v1 [cs.DM] 9 Jul 2017
Abstract
We consider a programming language based on the lamplighter group that uses only composition and iteration as control structures. We derive generating functions and counting formulas
for this language with the goal of avoiding repetition of equivalent programs.
1 Introduction
The structured program theorem states that every computable function can be implemented in a
programming language that uses only composition and iteration as control structures [1][2]. In this
paper, we consider a programming language that manipulates an infinite sequence of bits with the
following instructions:
t      toggle current bit
r      move right
l      move left
[E]    repeat E while current bit is 1
We can combine these primitive instructions to create more complex ones. For example, [t] sets the current bit to 0, [t]t sets it to 1, and []t[] loops forever. t[tEt]t repeats E while the current bit is 0, inverting the loop condition. The state of the current bit is preserved because each t preceding a check of the loop condition is immediately cancelled by a subsequent t.
We can swap two bits without additional memory using the xor swap algorithm:
[r^n t l^n] · r^n [l^n t r^n] l^n · [r^n t l^n]

where the three factors implement x_n := x_n ⊕ x_0, then x_0 := x_0 ⊕ x_n, then x_n := x_n ⊕ x_0.
A more complex instruction is [tr[r]r[r]trt[l]l[l]r], which doubles a string of 1s using two buffers
as described below. The loop stops when the left buffer has been exhausted.
t           remove 1 tally mark from left buffer
r[r]r[r]    move to right buffer
trt         add 2 tally marks to right buffer
[l]l[l]r    move to left buffer
The programming language is generated by the grammar
E = (t + r + l + [E])∗
Therefore, its generating function is the solution to
E(z) = (z + z + z + z^2 E(z))^* = 1 / (1 − (3z + z^2 E(z)))

which is

E(z) = 2 / (1 − 3z + √((1 − z)(1 − 5z))) = 1 + 3z + 10z^2 + 36z^3 + 137z^4 + 543z^5 + …
where the nth coefficient, denoted by [z^n]E(z), is the number of expressions of length n. Since E(z) has branch points at z = 1 and z = 1/5, its radius of convergence is 1/5. Therefore [4]

lim_{n→∞} ([z^n]E(z))^{1/n} = 5
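The coefficients can also be computed exactly: rewriting E(z) = 1/(1 − (3z + z^2 E(z))) as E = 1 + 3zE + z^2 E^2 gives the recurrence E_n = 3E_{n−1} + Σ_{i+j=n−2} E_i E_j. A short Python sketch (our illustration, not from the paper):

```python
def count_programs(n):
    """Number of expressions of each length up to n, from
    E = 1 + 3zE + z^2 E^2, i.e. E_k = 3 E_{k-1} + sum_{i+j=k-2} E_i E_j."""
    E = [1] + [0] * n
    for k in range(1, n + 1):
        E[k] = 3 * E[k - 1]
        if k >= 2:
            E[k] += sum(E[i] * E[k - 2 - i] for i in range(k - 1))
    return E

print(count_programs(5))  # [1, 3, 10, 36, 137, 543]
```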
Suppose

f(z) = (A(z) + B(z)(1 − z/r)^{−β}) / z^α

where r is the radius of convergence of f, the radius of convergence of A and B is greater than r, and B(r) ≠ 0. Then an asymptotic expression for [z^n]f(z) is given by [4]

[z^n]f(z) ∼ (B(r) / (r^α Γ(β))) n^{β−1} r^{−n}
For E(z) we have r = 1/5, α = 2, β = −1/2, and

A(z) = (1 − 3z)/2,    B(z) = −(1 − z)^{1/2}/2

Thus an asymptotic expression for [z^n]E(z) is [6]

[z^n]E(z) ∼ 5^{n+3/2} / (2√(πn^3))
Notice that the instructions t, r, and l generate a group under composition. This group is called
the lamplighter group, and is described in the next section.
2 The lamplighter group
The lamplighter group is the restricted wreath product
L = (Z/2Z) ≀ Z = (Z/2Z)^{⊕Z} ⋊ Z

with presentation

⟨t, r | t^2, [t, r^k t r^{−k}] for k ≥ 1⟩

where r^{−1} = l. The group can be viewed as the action of a lamplighter on an infinite sequence of lamps: t toggles the lamp above the lamplighter, r moves the lamplighter right, and l moves the lamplighter left. Each group element is an ordered pair (a, b) where a ∈ (Z/2Z)^{⊕Z} ≅ P_finite(Z) and b ∈ Z. The group operation is

(a_1, b_1)(a_2, b_2) = (a_1 + σ^{b_1} a_2, b_1 + b_2)

and the group inverse is

(a, b)^{−1} = (−σ^{−b} a, −b)
where σ is the shift operator, defined by (σa)(i) = a(i + 1). For example, the element (δ_{−2} + δ_1, 3) can be pictured as a row of lamps in which an arrow points to the origin, black squares indicate toggled lamps (here at positions −2 and 1), and an underscore indicates the final position of the lamplighter (here 3). The group identity is (0, 0) and the group generators are

t = (δ_0, 0),    r = (0, 1),    l = (0, −1)
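These formulas translate directly into code. The sketch below (our illustration) represents a ∈ (Z/2Z)^{⊕Z} by its finite support, so addition mod 2 becomes symmetric difference:

```python
class Lamplighter:
    """Element (a, b): `lamps` is supp(a), `shift` is b."""
    def __init__(self, lamps=(), shift=0):
        self.lamps, self.shift = frozenset(lamps), shift

    def __mul__(self, other):
        # (a1, b1)(a2, b2) = (a1 + sigma^{b1} a2, b1 + b2); with
        # (sigma a)(i) = a(i + 1), sigma^{b} shifts supp(a) by -b.
        shifted = frozenset(p - self.shift for p in other.lamps)
        return Lamplighter(self.lamps ^ shifted, self.shift + other.shift)

    def inverse(self):
        # (a, b)^{-1} = (-sigma^{-b} a, -b); note -a = a mod 2.
        return Lamplighter(frozenset(p + self.shift for p in self.lamps),
                           -self.shift)

    def __eq__(self, other):
        return self.lamps == other.lamps and self.shift == other.shift

    def __repr__(self):
        return f"({sorted(self.lamps)}, {self.shift})"

t, r, l = Lamplighter({0}), Lamplighter((), 1), Lamplighter((), -1)
e = Lamplighter()
assert t * t == e and r * l == e
assert (t * r).inverse() * (t * r) == e
```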
2.1 Word norm
A word is a representation of a group element as a product of group generators. The word norm of
a group element is the length of the shortest word that represents it. For instance, the norm of the
group identity is zero, since it can be expressed as the empty product. The norm of a lamplighter
group element (a, b) under the generating set {t, r, l} is [3]
|(a, b)| = ‖a‖_1 + (a↑ − a↓) + { |a↓| + |a↑ − b|  if b ≥ 0;  |a↑| + |a↓ − b|  otherwise }

where ‖·‖_1 is the 1-norm and

a↓ = 0 if a = 0, and (min ∘ supp)(a) otherwise;
a↑ = 0 if a = 0, and (max ∘ supp)(a) otherwise.
This can be seen by dividing the movement of the lamplighter into three stages, as shown in the
following diagram, which marks the positions a↓, 0, a↑, and b along the number line:

    a↓ ——— 0 ——— a↑ ——— b
The shortest word for a lamplighter group element is a solution to the traveling salesman problem
in one dimension, where the vertices are the locations of the toggled lamps. If b ≥ 0, the lamplighter
visits the leftmost toggled lamp, the rightmost toggled lamp, and b in that order. Similarly, if b < 0,
the lamplighter visits the rightmost toggled lamp, the leftmost toggled lamp, and b in that order.
The set of all lamplighter group elements with b ≥ 0 is given by
(l^i (g − 1) r (gr)^{i−1})^{[i>0]} · g(rg)^j · (r^k (g − 1) l (gl)^{k−1})^{[k>0]}

where the first, second, and third factors correspond to stages 1, 2, and 3, respectively,
for all i, j, k ∈ N, where g = ε + t and [·] is the Iverson bracket (in this context). Similarly, the
set of all elements with b < 0 is given by the expression above with l and r switching places. Notice
that this representation of each lamplighter group element is of minimal length.
The word distance between two group elements is the smallest number of generators needed to
change one into the other. For a symmetric generating set, it can be expressed as
d(g, h) = |g^{−1}h|
Notice we also have the inequality
|ab| ≤ |a| + |b|
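The norm formula is easy to evaluate in code. The sketch below (ours) takes an element as its support set and shift:

```python
def norm(lamps, b):
    """Word norm of (a, b) under {t, r, l}; `lamps` is supp(a)."""
    if not lamps:
        lo = hi = 0
    else:
        lo, hi = min(lamps), max(lamps)
    n = len(lamps) + hi - lo          # ||a||_1 + (a_up - a_down)
    if b >= 0:
        return n + abs(lo) + abs(hi - b)
    return n + abs(hi) + abs(lo - b)

assert norm(set(), 0) == 0      # identity
assert norm({0}, 0) == 1        # t
assert norm({-2, 1}, 3) == 9    # the example element (delta_{-2}+delta_1, 3)
```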
2.2 Growth function
The growth function G(z) of a group G is a formal power series whose nth coefficient is the number
of group elements with a norm of n. For example,
(Z/nZ)(z) = Σ_{k∈[0,n)} z^k = (1 − z^n)/(1 − z)
with respect to the singleton generating set, and
Z(z) = Σ_{k∈Z} z^{|k|} = (1 + z)/(1 − z)
with respect to the standard generators {1, −1}. The growth function of G ≀ Z can be expressed in terms of the growth function of G as follows [5]:

(G ≀ Z)(z) = (1 + z^2 (G(z) − 1)(z^2 G(z))^*)^2 G(z) Z(zG(z))
           = ((1 − z^2)/(1 − z^2 G(z)))^2 G(z) (1 + zG(z))/(1 − zG(z))
Letting G = Z/2Z yields

L(z) = ((1 − z^2)/(1 − z^2 − z^3))^2 (1 + z) (1 + z + z^2)/(1 − z − z^2)

which has a radius of convergence of ϕ^{−1}, where ϕ is the golden ratio:

ϕ = (1 + √5)/2 = 1.6180…

Hence

lim_{n→∞} ([z^n]L(z))^{1/n} = ϕ
Since

L(z) = (A(z) + B(z)(1 − z/r)^{−β}) / z^α

where r = ϕ^{−1}, α = 0, β = 1, and

A(z) = 0,    B(z) = ϕ(1 − z)^2 (1 + z)^3 (1 + z + z^2) / ((ϕ + z)(1 − z^2 − z^3)^2)

the lamplighter group grows asymptotically as

[z^n]L(z) ∼ ((15 + 7√5)/5) ϕ^n
2.3 Lamplighter programs
Recall the grammar that generates our programming language:
E = (G + [E])^*

where G = t + r + l are our generators. Since (a + b)^* = a^*(ba^*)^*, we have

E = (G + [E])^* = G^*([E]G^*)^*

To avoid counting equivalent generator sequences, we would like to represent each lamplighter group element by its shortest word. Hence we replace G^* with L:

E = L([E]L)^*
Thus the new generating function is given by
E(z) = L(z)(zE(z)zL(z))^* = L(z) / (1 − z^2 E(z)L(z))

Solving for E(z) yields

E(z) = 2L(z) / (1 + √(1 − 4z^2 L(z)^2))
Its radius of convergence is the smallest root of the polynomial

−1 + 3z + 7z^2 − 11z^4 − 9z^5 + 2z^6 + 7z^7 + 3z^8

which is approximately 0.2256. Hence

lim_{n→∞} ([z^n]E(z))^{1/n} ≈ 4.432

which is smaller than the exponential growth rate of 5 we had previously. Note that [z^n]E(z) ≥ [z^n]L(z), since there must be at least one program for every lamplighter group element. Therefore, the exponential growth rate of E will always be bounded below by ϕ ≈ 1.618.
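Both L(z) and the reduced E(z) can be expanded numerically. The sketch below (ours) computes the coefficients of L(z) by power-series division and then iterates the fixed-point form E = L + z^2·L·E^2, obtained by clearing the denominator in E = L/(1 − z^2 E L):

```python
def poly_mul(p, q, n):
    """Product of coefficient lists p and q, truncated at degree n."""
    r = [0] * (n + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= n:
                r[i + j] += a * b
    return r

def series_div(num, den, n):
    """Power series of num/den up to degree n (requires den[0] == 1)."""
    out = [0] * (n + 1)
    for k in range(n + 1):
        s = num[k] if k < len(num) else 0
        for j in range(1, min(k, len(den) - 1) + 1):
            s -= den[j] * out[k - j]
        out[k] = s
    return out

def growth_L(n):
    """Coefficients of L(z) = (1-z^2)^2 (1+z)(1+z+z^2) /
    ((1-z^2-z^3)^2 (1-z-z^2)) up to degree n."""
    num = poly_mul(poly_mul([1, 0, -1], [1, 0, -1], n),
                   poly_mul([1, 1], [1, 1, 1], n), n)
    den = poly_mul(poly_mul([1, 0, -1, -1], [1, 0, -1, -1], n),
                   [1, -1, -1], n)
    return series_div(num, den, n)

def count_reduced(n):
    """Coefficients of E(z) via fixed-point iteration on E = L + z^2 L E^2;
    coefficients up to z^n stabilize after at most n+1 rounds."""
    L = growth_L(n)
    E = [0] * (n + 1)
    for _ in range(n + 1):
        LE2 = poly_mul(L, poly_mul(E, E, n), n)
        E = [L[k] + (LE2[k - 2] if k >= 2 else 0) for k in range(n + 1)]
    return E

print(growth_L(5))       # growth series of the lamplighter group
print(count_reduced(5))  # programs counted with shortest-word normal form
```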
2.4 k-shift subsets
Consider the subset of G ≀ Z containing all elements with a shift of k:

(G ≀ Z)_k = G^{⊕Z} × {k}

The growth function of this subset is

(G ≀ Z)_k(z) = ((1 − z^2)/(1 − z^2 G(z)))^2 G(z)(zG(z))^{|k|}
Notice that (G ≀ Z)_0 ≅ G^{⊕Z} is a normal subgroup of G ≀ Z:

(a_1, b_1)(a_2, 0)(a_1, b_1)^{−1} = (a_1 + σ^{b_1} a_2, b_1)(a_1, b_1)^{−1}
                                  = (a_1 + σ^{b_1} a_2, b_1)(−σ^{−b_1} a_1, −b_1)
                                  = (a_1 + σ^{b_1} a_2 − σ^{b_1} σ^{−b_1} a_1, b_1 − b_1)
                                  = (a_1 + σ^{b_1} a_2 − a_1, 0)
                                  = (σ^{b_1} a_2, 0)
In the lamplighter group, we have

L_k(z) = ((1 − z^2)/(1 − z^2 − z^3))^2 (1 + z)(z + z^2)^{|k|}

The radius of convergence of this function is ρ^{−1}, where ρ is the plastic number:

ρ = (∛(108 + 12√69) + ∛(108 − 12√69)) / 6 = 1.3247…
Thus ρ is the exponential growth rate of L_k. Finally, let L_{k,1} and L_{k,0} denote the sets of k-shift elements which do and do not, respectively, toggle the final bit:

L_{k,0}(z) = ((1 − z^2)/(1 − z^2 − z^3))^2 (z + z^2)^{|k|}

L_{k,1}(z) = L_{k,0}(z) · z
3 Dead loops
3.1 After a loop
In this section, we show that certain loops are never entered and can be eliminated from a program
without changing its behavior. For example, the current bit is always 0 immediately after any loop.
Hence we can eliminate the second of two consecutive loops:
[a][b] = [a]
More generally, let b ∈ L0,0 be a 0-shift element that does not toggle the central bit. Then
b ∈ L0,0 =⇒ [a]b[c] = [a]b
Since [a]b[c] is equivalent to a shorter program (which will be generated earlier in the generation
process), we can exclude it. Hence, we exclude all expressions of the form [E]L0,0 [E] by taking
E = L([E]L)∗
= L(ε + ([E]L)∗ [E]L)
= L + L([E]L)∗ [E]L
and remove all Ls between loops that are in L0,0 :
E = L + L([E](L − L0,0 ))∗ [E]L
Therefore
E(z) = L(z) + L(z)(z^2 E(z)(L(z) − L0,0(z)))^* z^2 E(z)L(z)
     = L(z) + z^2 E(z)L(z)^2 / (1 − z^2 E(z)(L(z) − L0,0(z)))

which yields

E(z) = 2L(z) / (1 − z^2 L(z)L0,0(z) + √((1 − 2zL(z) + z^2 L(z)L0,0(z))(1 + 2zL(z) + z^2 L(z)L0,0(z))))

The radius of convergence of this function is approximately 0.2409. Hence

lim_{n→∞} ([z^n]E(z))^{1/n} ≈ 4.150
3.2 Inside a loop
Let a ∈ L0,1 be a 0-shift lamplighter group element that toggles the central bit. Then
a ∈ L0,1 =⇒ [a[b]c] = [ac]
since the current bit is always 1 at the beginning of a loop body. Thus we exclude all expressions
of the form [L0,1 [E]E] by creating a new nonterminal symbol Y for loops:
E = L + L(Y (L − L0,0 ))∗ Y L
Y = [E]
Expanding the expression inside the loop:
E = L + L(Y (L − L0,0 ))∗ Y L
Y = [L + L(Y (L − L0,0 ))∗ Y L]
and replacing the inner L with L − L0,1 :
E = L + L(Y (L − L0,0 ))∗ Y L
Y = [L + (L − L0,1 )(Y (L − L0,0 ))∗ Y L]
Hence
E(z) = L(z) + L(z)^2 Y(z) / (1 − Y(z)(L(z) − L0,0(z)))

Y(z) = z^2 (L(z) + L(z)Y(z)(L(z) − L0,1(z)) / (1 − Y(z)(L(z) − L0,0(z))))

When solved, the radius of convergence of E(z) is approximately 0.2443. Hence

lim_{n→∞} ([z^n]E(z))^{1/n} ≈ 4.093
The following graph illustrates the growth sequences we have derived so far:

[Figure: plot of the program-counting sequences derived so far.]
4 Loop unrolling
We can turn a doubly-nested loop into a singly-nested loop:
[[a]] = [a]
This is because the inner loop is entered whenever the outer loop is entered, and the outer loop
ends whenever the inner loop ends. Thus we exclude expressions of the form [[E]] = [Y ]:
E = L + L(Y (L − L0,0 ))∗ Y L
Y = [L + (L − L0,1 )(Y (L − L0,0 ))∗ Y L − Y ]
In general, we can unroll the first iteration of a loop when we know that the loop will be executed
at least once. By inverting the loop elimination conditions from the previous section, we obtain the
following loop unrolling conditions:
b ∈ L0,1 =⇒ [a]b[c] = [a]bc[c]
a ∈ L0,0 =⇒ [a[b]c] = [ab[b]c]
This might seem to only make programs longer. However, if we know that the loop can only be
executed at most once, we can eliminate the rest of the loop and thus inline the loop body:
[a]b[c] = [a]bc[c] = [a]bc
[a[b]c] = [ab[b]c] = [abc]
A loop that is executed at most once is called a transient loop. This occurs when the current
bit is always 0 at the end of the loop body, preventing another iteration. For example,
a ∈ L0,1 =⇒ [a] is transient
c ∈ L0,0 =⇒ [a[b]c] is transient
Hence we have the following reductions:
b ∈ L0,1 , c ∈ L0,1 =⇒ [a]b[c] = [a]bc[c] = [a]bc
b ∈ L0,1 , e ∈ L0,0 =⇒ [a]b[c[d]e] = [a]bc[d]e[c[d]e] = [a]bc[d]e
a ∈ L0,0 , b ∈ L0,1 =⇒ [a[b]c] = [ab[b]c] = [abc]
a ∈ L0,0 , d ∈ L0,0 =⇒ [a[b[c]d]e] = [ab[c]d[b[c]d]e] = [ab[c]de]
noting that L0,a L0,b = L0,a+b mod 2 . Since they are provably equivalent to smaller expressions,
we can exclude expressions of the following forms from the generation process:
                    After a loop              Inside a loop
Transient type 1    [E]L0,1[L0,1]             [L0,0[L0,1]E]
Transient type 2    [E]L0,1[E[E]L0,0]         [L0,0[E[E]L0,0]E]

5 Infinite loops
Because the halting problem is undecidable, no algorithm can determine, for all possible programs, whether a program terminates. Nonetheless, we can prove that certain programs will (or will not)
terminate. This is the goal of termination analysis.
For example, if a loop body consists of a 0-shift lamplighter group element that does not toggle
the central bit, the loop will never terminate if entered. This is because the current bit, which is 1
at the beginning of the body, will remain 1 at the end of it. Since the loop condition is always true
at the end of the loop body, the loop cannot be escaped.
a ∈ L0,0 =⇒ [a] is an infinite loop
Similarly, suppose there is a 0-shift toggling element that follows an inner loop at the end of a
loop body. Either the inner loop does not terminate, which a fortiori results in an infinite loop, or
it terminates when the current bit becomes 0. The current bit is toggled from 0 to 1 as the end of
the loop is reached, resulting in an infinite loop:
c ∈ L0,1 =⇒ [a[b]c] is an infinite loop
Since they are extensionally equivalent, we can exclude all expressions of the previous two forms
from the generation process except for the smallest infinite loop, namely the empty loop [].
Suppose we use ⊥ to denote a point in a program which, if reached, guarantees non-termination. Any instructions before ⊥ are absorbed by it, since either they do not terminate, which results in ⊥, or they do terminate, allowing ⊥ to be reached. Similarly, any instructions after ⊥ do not affect the non-termination outcome and are also absorbed by it:

a⊥b = ⊥
From the previous examples of infinite loops, we have

a ∈ L0,0 =⇒ [a] = [⊥]
c ∈ L0,1 =⇒ [a[b]c] = [⊥]

Finally, from the loop unrolling conditions of the previous section we have

a ∈ L0,0 =⇒ [a[⊥]b] = [⊥]
b ∈ L0,1 =⇒ [a]b[⊥] = ⊥
These rules allow us to propagate the detection of non-termination through a program.
6 Fixed-shift programs
Let Ei be defined as follows:
E_i = L_i + Σ_{j∈Z} L_j [E_0] E_{i−j}
E_i is a set of programs which always shift by i bits. Each element of E_i is equivalent to a partial function (Z/2Z)^B → (Z/2Z)^B_⊥, where B ∈ P_finite(Z) is the finite set of bits that are read or toggled, which can be determined statically.
A machine configuration consists of its current memory, given by the infinite bit sequence, and a program counter indicating which instruction is being executed. The set of configurations assumed by a machine executing an expression in E_i is finite, since B is finite. Therefore, it is easy to detect non-termination by checking whether the machine configuration repeats.
For any set of bits B and any partial function over those bits (Z/2Z)^B → (Z/2Z)^B_⊥, we can find the shortest representative program and exclude all longer, equivalent programs.
Let L_i^{(k)} be the subset of L_i that does not toggle bit k. Then we can define E_i^{(k)} as a subset of E_i that never toggles bit k:

E_i^{(k)} = L_i^{(k)} + Σ_{j∈Z} L_j^{(k)} [E_0^{(k−j)}] E_{i−j}^{(k−j)}
This allows us to extend some of our previous results. For example, we have

b ∈ E_0^{(0)} =⇒ [a]b[c] = [a]b

since E_0^{(0)} returns to the same bit without ever toggling it. Thus, for example,

n ≠ 0 =⇒ [a]r^n[t]l^n[b] = [a]r^n[t]l^n

In general, we can replace the role of L0,0 with E_0^{(0)}.
7 Conclusion

We have constructed a programming language based on P′′ that manipulates an infinite sequence of bits using function composition and iteration. We derived combinatorial formulas for this language
and for subsets of it that collapse certain classes of equivalent programs. Our analysis of equivalent
programs included dead loops, nested loops, loop unrolling, and infinite loops.
This analysis can be applied to program search. The goal of program search is to search efficiently through the space of possible programs and find one which satisfies a desired specification or input-output behavior. This can be used for automatic program synthesis.
We hope that this combinatorial analysis will be extended to more complex classes of equivalent
programs, yielding further reductions in the search space. In particular, we would like to find tighter
upper and lower bounds on lim_{n→∞} ([z^n]E(z))^{1/n} for the set E of non-equivalent programs.
References
[1] Corrado Böhm. On a family of Turing machines and the related programming language. International Computation Centre Bulletin, 3:185–194, July 1964.
[2] Corrado Böhm and Giuseppe Jacopini. Flow diagrams, Turing machines and languages with
only two formation rules. Communications of the ACM, 9(5):366–371, 1966.
[3] Sean Cleary and Jennifer Taback. Dead end words in lamplighter groups and other wreath
products. The Quarterly Journal of Mathematics, 56(2):165–178, 2005.
[4] Philippe Flajolet and Robert Sedgewick. Analytic Combinatorics. Cambridge University Press,
Princeton NJ, 2009. Chapter 4.
[5] Walter Parry. Growth series of some wreath products. Transactions of the American Mathematical Society, 331(2):751–759, 1992.
[6] Neil Sloane. The encyclopedia of integer sequences. Sequence A002212.
IMPARTIAL AVOIDANCE AND ACHIEVEMENT GAMES FOR
GENERATING SYMMETRIC AND ALTERNATING GROUPS
arXiv:1508.03419v2 [math.CO] 14 Jul 2016
BRET J. BENESH, DANA C. ERNST, AND NÁNDOR SIEBEN
Abstract. Anderson and Harary introduced two impartial games on finite groups. Both
games are played by two players who alternately select previously-unselected elements of a finite
group. The first player who builds a generating set from the jointly-selected elements wins the
first game. The first player who cannot select an element without building a generating set
loses the second game. We determine the nim-numbers, and therefore the outcomes, of these
games for symmetric and alternating groups.
1. Introduction
Anderson and Harary [3] introduced two impartial games Generate and Do Not Generate
in which two players alternately take turns selecting previously-unselected elements of a finite
group G. The first player who builds a generating set for the group from the jointly-selected
elements wins the achievement game GEN(G). The first player who cannot select an element
without building a generating set loses the avoidance game DNG(G). The outcomes of both
games were studied for some of the more familiar finite groups, including abelian, dihedral,
symmetric, and alternating groups in [3, 4], although the analysis was incomplete for alternating
groups.
Brandenburg studies a similar game in [6]. This variation of the game is played on a finitely
generated abelian group G. Players alternate replacing G by the quotient G/⟨g⟩ for a choice of
a nontrivial element g. The player with the last possible move wins.
A fundamental problem in the theory of impartial combinatorial games is finding the nim-number of a game. The nim-number determines the outcome of the game and also allows for
the easy calculation of the nim-numbers of game sums. The last two authors [11] developed
tools for studying the nim-numbers of both games, which they applied in the context of certain
finite groups including cyclic, abelian, and dihedral. In [5], we developed a complete set of
criteria for determining the nim-numbers of avoidance games and calculated the corresponding
nim-numbers for several families of groups, including symmetric groups.
The previous work in [5, 11] left open the determination of the nim-numbers of DNG(An ),
GEN(Sn ), and GEN(An ), except for a handful of cases. In this paper, we provide a complete
classification of the nim-numbers of the games involving the symmetric and alternating groups.
Since nim-numbers determine the outcome, our classification completes the analysis from [4].
In particular, under optimal play, the first player will win the achievement game and lose the
avoidance game for most symmetric and alternating groups. Moreover, building on the work
of [5], we lay the foundation for how one might ascertain the nim-numbers for finite simple
groups.
Date: January 6, 2018.
2000 Mathematics Subject Classification. 91A46, 20B30, 20D06, 20D30.
Key words and phrases. impartial game, maximal subgroup, symmetric group, alternating group.
This work was conducted during the third author’s visit to DIMACS partially enabled through support from
the National Science Foundation under grant number #CCF-1445755.
The paper is organized as follows. In Section 2, we recall the basic terminology of impartial
games, establish our notation, and review several necessary results from [11]. The following
section describes how to compute GEN(G) for G in a family called Γ1 , which includes the
nonabelian alternating and symmetric groups, as well as all other finite nonabelian simple
groups. In Section 4, we review the known results for DNG(Sn ) and compute the nim-numbers
for GEN(Sn ), and then we find the nim-numbers for DNG(An ) and GEN(An ) in Section 5. We
close with some open questions.
The authors thank the referee for making suggestions that improved the quality of this paper.
2. Preliminaries
2.1. Impartial Games. In this section, we briefly recall the basic terminology of impartial
games and establish our notation. For a comprehensive treatment of impartial games, see [2, 19].
An impartial game is a finite set X of positions together with a starting position and a
collection {Opt(P ) ⊆ X | P ∈ X}, where Opt(P ) is the set of possible options for a position
P . Two players take turns replacing the current position P with one of the available options
in Opt(P ). A position P with Opt(P ) = ∅ is called terminal. The player who moves into a
terminal position wins. All games must come to an end in finitely many turns.
The minimum excludant mex(A) of a set A of ordinals is the smallest ordinal not contained
in the set. The nim-number of a position P is the minimum excludant
nim(P ) := mex{nim(Q) | Q ∈ Opt(P )}
of the set of nim-numbers of the options of P . Since the minimum excludant of the empty set
is 0, the terminal positions of a game have nim-number 0. The nim-number of a game is the
nim-number of its starting position. The nim-number of a game determines the outcome since
the second player has a winning strategy if and only if the nim-number for a game is 0. The
winning strategy is to always pick an option that has nim-number 0. This puts the opponent
into a losing position at every turn.
The sum of the games P and R is the game P + R whose set of options is
Opt(P + R) := {Q + R | Q ∈ Opt(P )} ∪ {P + S | S ∈ Opt(R)}.
We write P = R if the second player of the game P + R has a winning strategy.
The one-pile NIM game with n stones is denoted by the nimber ∗n. The set of options of ∗n is
Opt(∗n) = {∗0, . . . , ∗(n − 1)}. The Sprague–Grundy Theorem [2, 19] states that P = ∗ nim(P )
for every impartial game P .
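For small games, nim-numbers can be computed directly from this definition. A generic memoized sketch (ours, not from the paper), with positions assumed hashable:

```python
from functools import lru_cache

def mex(s):
    """Minimum excludant: least non-negative integer not in s."""
    n = 0
    while n in s:
        n += 1
    return n

def nim_number(position, options):
    """Nim-number of an impartial game position; `options` maps a
    position to an iterable of its option positions."""
    @lru_cache(maxsize=None)
    def nim(p):
        return mex({nim(q) for q in options(p)})
    return nim(position)

# One-pile NIM: options of *n are *0, ..., *(n-1); nim-number is n.
assert nim_number(5, lambda n: range(n)) == 5
```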
2.2. Achievement and Avoidance Games for Generating Groups. We now provide a
more precise description of the achievement and avoidance games for generating a finite group
G. For additional details, see [3, 4, 11]. For both games, the starting position is the empty set.
The players take turns choosing previously-unselected elements to create jointly-selected sets
of elements, which are the positions of the game.
In the avoidance game DNG(G), the first player chooses x1 ∈ G such that ⟨x1⟩ ≠ G, and at the kth turn, the designated player selects xk ∈ G \ {x1, . . . , xk−1} such that ⟨x1, . . . , xk⟩ ≠ G to create the position {x1, . . . , xk}. Notice that the positions of DNG(G) are exactly the non-generating subsets of G. A position Q is an option of P if Q = P ∪ {g} for some g ∈ G \ P, where ⟨Q⟩ ≠ G. The player who cannot select an element without building a generating set is the loser. We note that there is no avoidance game for the trivial group since the empty set generates the whole group.
In the achievement game GEN(G), the first player chooses any x1 ∈ G and at the kth turn, the designated player selects xk ∈ G \ {x1, . . . , xk−1} to create the position {x1, . . . , xk}. A player wins on the nth turn if ⟨x1, . . . , xn⟩ = G. In this case, the positions of GEN(G) are subsets of terminal positions, which are certain generating sets of G. Note that the second player has a winning strategy if G is trivial since ⟨∅⟩ = G, so the first player has no legal opening move. In particular, GEN(S1) = GEN(A1) = GEN(A2) = ∗0.
Maximal subgroups play an important role because a subset S of a finite group is a generating
set if and only if S is not contained in any maximal subgroup. Let M be the set of maximal
subgroups of G. Following [11], the set of intersection subgroups is defined to be the set
I := {∩N | ∅ ≠ N ⊆ M}
containing all possible intersections of maximal subgroups. Note that the elements of I are in
fact subgroups of G. The smallest intersection subgroup is the Frattini subgroup Φ(G) of G,
and we say that an intersection subgroup I is non-Frattini if I ≠ Φ(G). Not every subgroup of
G need be an intersection subgroup. For instance, no proper subgroup of a nontrivial Frattini
subgroup is an intersection subgroup. Even a subgroup that contains the Frattini subgroup
need not be an intersection subgroup, as shown in [5, Example 1].
The set I of intersection subgroups is partially ordered by inclusion. For each I ∈ I, we let
XI := P(I) \ ∪{P(J) | J ∈ I, J < I}
be the collection of those subsets of I that are not contained in any other intersection subgroup
smaller than I. We define X := {XI | I ∈ I} and call an element of X a structure class. The
starting position ∅ is in XΦ(G) for both games.
The set X of structure classes is a partition of the set of game positions of DNG(G). The partition X is compatible with the option relationship between game positions [11, Corollary 3.11]:
if XI, XJ ∈ X and P, Q ∈ XI ≠ XJ, then Opt(P) ∩ XJ ≠ ∅ if and only if Opt(Q) ∩ XJ ≠ ∅. We say that XJ is an option of XI and write XJ ∈ Opt(XI) if Opt(I) ∩ XJ ≠ ∅.
For the achievement game GEN(G), we must include an additional structure class XG containing terminal positions. A subset S ⊆ G belongs to XG whenever S generates G while S \{s}
does not for some s ∈ S. Note that we are abusing notation since XG may not contain G. The
positions of GEN(G) are the positions of DNG(G) together with the elements of XG . The set
Y := X ∪ {XG } is a partition of the set of game positions of GEN(G). As in the avoidance case,
the partition Y is compatible with the option relationship between positions [11, Corollary 4.3].
For any position {g} of either DNG(G) or GEN(G), we define ⌈g⌉ to be the unique element
of I ∪ {G} such that {g} ∈ X⌈g⌉ . For example, if e is the identity of G, then ⌈e⌉ = Φ(G).
We define pty(S) of a set S to be 0 if |S| is even and 1 if |S| is odd, and we say that S is
even if pty(S) = 0 and odd if pty(S) = 1. The type of the structure class XI is the triple
type(XI ) := (pty(I), nim(P ), nim(Q)),
where P, Q ∈ XI with pty(P) = 0 and pty(Q) = 1; by [11, Proposition 3.15], this is well-defined. The type can be calculated as in the following example: if an odd XI has options of
type (0, a, b) and (1, c, d), then type(XI ) = (1, mex{b, d, x}, x), where x = mex{a, c}. In the
achievement case, the type of the structure class XG is defined to be type(XG ) = (pty(G), 0, 0).
The option type of XI is the set
otype(XI ) := {type(XK ) | XK ∈ Opt(XI )}.
For both the avoidance and achievement games, the nim-number of the game is the same as
the nim-number of the initial position ∅, which is an even subset of Φ(G). Because of this, the
nim-number of the game is the second component of type(XΦ(G) ).
2.3. Achievement and Avoidance Game Strategy and Intuition. Let G be a nontrivial
finite group. In general, describing strategies in terms of nim-numbers is difficult; however, we are able to provide a full description of the relationship between the strategy and the nim-number of DNG(G). A complete characterization is elusive for GEN(G).
After playing the avoidance game, the two players can notice that they spent the entire game
choosing elements from a single maximal subgroup M. The first player will have won if M
is odd, and the second player will have won if M is even. These facts immediately yield the
winning strategies for the two players. The second player should attempt to choose any legal
element of even order since that will ensure a final maximal subgroup of even order, while the
first player should try to select an element g ∈ G such that hg, ti = G for any element t ∈ G
of even order. Of course, only one of these strategies can be successful. Note that a winning
strategy for the first player is equivalent to choosing an element g ∈ G that is not contained
in any even maximal subgroup. The nim-numbers of the avoidance game were characterized in
the following theorem.
Theorem 2.1. [5, Theorem 6.3] Let G be a nontrivial finite group.
(1) If |G| = 2 or G is odd, then DNG(G) = ∗1.
(2) If G is cyclic with |G| ≡4 0 or the set of even maximal subgroups covers G, then
DNG(G) = ∗0.
(3) Otherwise, DNG(G) = ∗3.
It follows that a nim-number of 0 indicates that there is always an element of even order for
the second player to choose, regardless of the first player’s initial choice. Notice that G only
has odd maximal subgroups if |G| = 2 or G is odd, so a nim-number of 1 indicates that the
first player cannot lose, independent of strategy. A nim-number of 3 indicates that there is an
element not contained in any even maximal subgroup, and choosing this element on the first
move is sufficient to guarantee victory for the first player. However, the first player may lose by
failing to choose this element. Thus, the difference between DNG(G) = ∗1 and DNG(G) = ∗3
is that the first player needs to make a wise first move to ensure a win only in the latter case.
Now let G be Sn or An for n ≥ 5. We provide a description of the strategy for GEN(G),
but different positive nim-numbers for GEN(G) do not seem to translate to any obvious group-theoretic description. The group G has the property that for every nontrivial g ∈ G, there is
an x ∈ G such that ⟨g, x⟩ = G. The first player can win by choosing the identity with the first
selection and an x such that ⟨g, x⟩ = G for the third move after the second player chooses a
nonidentity element g.
3. Achievement Games for Γ1 Groups
In this section we develop some general results about GEN(G) for a special class of groups
containing all of the nonabelian alternating groups and all but one of the nonabelian symmetric
groups.
Following [13, Definition 1.1], two elements x and y of a group G are called mates if ⟨x, y⟩ = G.
We say that a finite nonabelian group is in Γ1 if every nontrivial element of the group has a
mate (see [7, Definition 1.01] for a general definition of Γr ). It is known that Sn ∈ Γ1 for
n ∉ {1, 2, 4} [16] and An ∈ Γ1 for n ∉ {1, 2, 3} [8]. Moreover, Guralnick and Kantor [14] proved
that every finite nonabelian simple group is in Γ1 .
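For very small groups, membership in Γ1 can be checked by brute force. The sketch below is our own illustration (assuming SymPy's permutation-group API, not part of the paper); it simply searches for a mate for every nontrivial element.

from sympy.combinatorics.named_groups import AlternatingGroup, SymmetricGroup
from sympy.combinatorics.perm_groups import PermutationGroup

def in_gamma1(G):
    # G is in Gamma_1 if it is nonabelian and every nontrivial
    # element g has a mate x, i.e. <g, x> = G.
    if G.is_abelian:
        return False
    elements, order = list(G.elements), G.order()
    return all(any(PermutationGroup([g, x]).order() == order for x in elements)
               for g in elements if g != G.identity)

print(in_gamma1(SymmetricGroup(4)))    # False, consistent with S4 not in Gamma_1
print(in_gamma1(AlternatingGroup(5)))  # True, consistent with A5 in Gamma_1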
For the remainder of this paper, we will rely on Γ1 groups having trivial Frattini subgroups.
Proposition 3.1. If G ∈ Γ1 , then Φ(G) = {e}.
Proof. For sake of a contradiction suppose that x is a nontrivial element of Φ(G). Since G ∈ Γ1 ,
x has a mate y. Then G = hx, yi ≤ hΦ(G), yi = hyi, where the last equality holds because
Φ(G) is the set of all nongenerators of G (see [15, Problem 2.7], for instance). This implies G
is cyclic and hence abelian, which contradicts the assumption that G ∈ Γ1 .
The following general results about Γ1 groups will be used to determine the nim-numbers of
GEN(Sn ) and GEN(An ). We define T := {t0 , . . . , t4 }, where the types ti are defined in Table 1.
t0 := (pty(G), 0, 0),  t1 := (1, 1, 2),  t2 := (1, 2, 1),  t3 := (1, 4, 3),  t4 := (0, 1, 2)

otype(XI )           type(XI )    otype(XI )               type(XI )
{t0 }                t2 or t4     {t0 , t4 }               t1 or t4
{t0 , t1 }           t1           {t0 , t1 , t4 }          t1
{t0 , t2 }           t2           {t0 , t2 , t4 }          t3
{t0 , t3 }           t2           {t0 , t3 , t4 }          t1
{t0 , t1 , t2 }      t3           {t0 , t1 , t2 , t4 }     t3
{t0 , t1 , t3 }      t1           {t0 , t1 , t3 , t4 }     t1
{t0 , t2 , t3 }      t2           {t0 , t2 , t3 , t4 }     t3
{t0 , t1 , t2 , t3 } t3           {t0 , t1 , t2 , t3 , t4 } t3

Table 1. Complete list of possible option types for a nonterminal structure
class XI in GEN(G) with I ≠ Φ(G) when G is a Γ1 group. Note that pty(I) and
otype(XI ) determine type(XI ).
Proposition 3.2. If I is a non-Frattini intersection subgroup of a Γ1 group G, then type(XI )
in GEN(G) satisfies

type(XI ) = t1 ,  if I is odd, otype(XI ) ∩ {t1 , t4 } ≠ ∅, and t2 ∉ otype(XI );
            t2 ,  if I is odd and otype(XI ) ∩ {t1 , t4 } = ∅;
            t3 ,  if I is odd, otype(XI ) ∩ {t1 , t4 } ≠ ∅, and t2 ∈ otype(XI );
            t4 ,  if I is even.
Proof. The type of a terminal structure class must be t0 . The option type of any nonterminal
structure class XI of a Γ1 group with I ≠ Φ(G) contains t0 . Structural induction and the
calculations in Table 1 show that otype(XI ) ⊆ T implies type(XI ) ∈ T . That is, no types
outside of T arise. Note that a structure class of type t4 can never have an odd option by
Lagrange's Theorem, so t4 only appears if otype(XI ) is a subset of {t0 , t4 }.
For odd G, only some of the calculations from Table 1 are needed since the only possible
option types are {t0 } and {t0 , t2 }. This observation is sufficient to determine GEN(G) for odd
G.
Corollary 3.3. If I is a non-Frattini intersection subgroup of an odd Γ1 group G, then type(XI )
in GEN(G) is t2 .
Proposition 3.4. If G is an odd Γ1 group, then GEN(G) = ∗2.
Proof. Every option of Φ(G) has type t2 by Corollary 3.3, so otype(XΦ(G) ) = {t2 }. Then
type(XΦ(G) ) = (1, mex{0, 1}, mex{2}) = (1, 2, 0),
so GEN(G) = ∗2.
Example 3.5. Since H := Z7 ⋊Z3 is in Γ1 , GEN(H) = ∗2 by Proposition 3.4. Note that H is the
smallest odd Γ1 group. The smallest odd nonabelian group that is not Γ1 is K := (Z3 ×Z3 )⋊Z3 ,
yet GEN(K) = ∗2, too. It is possible to get nim-numbers other than ∗2 for odd groups; in fact,
GEN(Z3 × Z3 × Z3 ) = ∗1. These three examples agree with [11, Corollary 4.8], which states
that GEN(G) ∈ {∗1, ∗2} for odd nontrivial G.
Proposition 3.6. Assume G ∈ Γ1 and I is an odd non-Frattini intersection subgroup of
G. If I is only contained in odd (respectively, even) maximal subgroups, then type(XI ) is t2
(respectively, t1 ).
Proof. Since I ≠ Φ(G) and G ∈ Γ1 , t0 ∈ otype(XI ). In both cases of the proof, we use structural
induction on the structure classes.
First, we assume that I is only contained in odd maximal subgroups. If XJ is a nonterminal
option of XI , then J is also only contained in odd maximal subgroups. So type(XJ ) = t2 by
induction. Hence {t0 } ⊆ otype(XI ) ⊆ {t0 , t2 }, which implies type(XI ) = t2 using Proposition 3.2.
Next, we assume that I is only contained in even maximal subgroups. Since I is odd, there
is an involution t ∉ I such that I ∪ {t} is contained in an even maximal subgroup. The
structure class XJ containing I ∪ {t} is a type t4 option of XI . So we may conclude that
{t0 , t4 } ⊆ otype(XI ). Let XJ be a nonterminal option of XI . If XJ is even, then type(XJ ) = t4
by Table 1. If XJ is odd, then type(XJ ) = t1 by induction, since J is contained only in even
maximal subgroups. Hence {t0 , t4 } ⊆ otype(XI ) ⊆ {t0 , t1 , t4 }, which implies type(XI ) = t1
using Proposition 3.2.
For the rest of the paper we will only consider even groups.
Proposition 3.7. If G is an even Γ1 group, then

GEN(G) = ∗1,  if t2 ∉ otype(XΦ(G) );
         ∗3,  if t2 ∈ otype(XΦ(G) ) and t3 ∉ otype(XΦ(G) );
         ∗4,  if t2 , t3 ∈ otype(XΦ(G) ).
Proof. Since G ∈ Γ1 , XΦ(G) has no option of type t0 . Let t be an involution of G and XJ be
the structure class containing Φ(G) ∪ {t}. Then XJ is a type t4 option of XΦ(G) . Hence {t4 } ⊆
otype(XΦ(G) ) ⊆ {t1 , . . . , t4 }. We compute type(XΦ(G) ) for every possibility for otype(XΦ(G) )
in Table 2. The result follows from this calculation and the fact that GEN(G) is equal to the
second component of type(XΦ(G) ).
Recall that a subset C of the power set of a group G is a covering of G if ∪C = G; in this
case, we also say that C covers G or G is covered by C.
Corollary 3.8. If a Γ1 group G is covered by the set of even maximal subgroups of G, then
GEN(G) = ∗1.
Proof. We will demonstrate that t2 ∉ otype(XΦ(G) ), which will suffice by Proposition 3.7. Let
XI be an option of XΦ(G) . Then I = ⌈g⌉ for some g ∈ G. If ⌈g⌉ is even, then type(X⌈g⌉ ) = t4 ≠ t2
by Proposition 3.2. Now assume that ⌈g⌉ is odd. Since G is covered by the set of even maximal
subgroups, g is contained in an even maximal subgroup M. By Cauchy's Theorem, there is
an involution t in M. The structure class XJ containing {g, t} is a type t4 option of X⌈g⌉ by
Proposition 3.2. So type(X⌈g⌉ ) cannot be t2 , again by Proposition 3.2.

otype(XΦ(G) )    otype(XΦ(G) )          type(XΦ(G) )
{t4 }            {t1 , t4 }             (1, 1, 0)
{t2 , t4 }       {t1 , t2 , t4 }        (1, 3, 0)
{t3 , t4 }       {t1 , t3 , t4 }        (1, 1, 0)
{t2 , t3 , t4 }  {t1 , t2 , t3 , t4 }   (1, 4, 0)

Table 2. Spectrum of type(XΦ(G) ) for G ∈ Γ1 . Note that Φ(G) is the trivial
group in this case.
If a noncyclic group G has only even maximal subgroups, then the set of even maximal
subgroups covers G. The next corollary then follows immediately from Corollary 3.8.
Corollary 3.9. If G is a Γ1 group with only even maximal subgroups, then GEN(G) = ∗1.
4. Symmetric Groups
In light of Corollary 3.9, we need only a simple proposition to completely determine the
nim-numbers for the achievement and avoidance games for symmetric groups.
Proposition 4.1. If n ≥ 4, then every maximal subgroup of Sn has even order.
Proof. An odd subgroup cannot be maximal since it is contained in the even subgroup An .
The nim-numbers for the avoidance game for generating symmetric groups were computed
in [11] for n ∈ {2, 3, 4} and in [5] for n ≥ 5. We include the statement of the result here for
completeness. Note that S1 is trivial, so DNG(S1 ) does not exist.
Theorem 4.2. The values of DNG(Sn ) are

DNG(Sn ) = ∗1,  n = 2
           ∗3,  n = 3
           ∗0,  n ≥ 4.
We are now ready to determine the nim-numbers for GEN(Sn ). This result verifies the portion
of [11, Conjecture 9.1] on symmetric groups.
Theorem 4.3. The values of GEN(Sn ) are

GEN(Sn ) = ∗0,  n ∈ {1, 4}
           ∗2,  n = 2
           ∗3,  n = 3
           ∗1,  n ≥ 5.
Proof. The empty set is a generating set for the trivial group, so GEN(S1 ) = ∗0. The cases
where n ∈ {2, 3, 4} were done in [11], so assume n ≥ 5. By [16], Sn ∈ Γ1 . By Proposition 4.1,
every maximal subgroup of Sn has even order. Hence GEN(Sn ) = ∗1 by Corollary 3.9.
Theorems 4.2 and 4.3 immediately yield the following result.
Corollary 4.4. The first player has a winning strategy for
• DNG(Sn ) if and only if n ∈ {2, 3};
• GEN(Sn ) if and only if n ∉ {1, 4}.
5. Alternating Groups
Determining the nim-numbers of DNG(An ) and GEN(An ) requires much more background.
The following well-known proposition follows from Feit and Thompson’s Odd Order Theorem [12] and the fact that every group of order 2m for some odd m has a normal subgroup of
order m (see [15, Corollary 6.12] for example).
Proposition 5.1. If U is a nonabelian simple group, then 4 divides |U|.
Below, we will make use of a special case of the O'Nan–Scott Theorem (see [17] for instance).
Recall that the general affine group of degree n over a field of size p^k for a prime p is defined
to be the semidirect product AGL(n, p^k ) := V ⋊ GL(n, p^k ) of a vector space V of size p^{nk} (i.e.,
dimension n) and the general linear group, where the latter acts on V by linear transformations.
Theorem 5.2 (O’Nan–Scott). Let An act on a set Ω of size n. If M is a maximal subgroup of
An , then M must be one of the following:
(1) (Intransitive Case) M = (Sm × Sk ) ∩ An with n = m + k;
(2) (Imprimitive Case) M = (Sm ≀ Sk ) ∩ An with n = mk, m, k > 1;
(3) (Affine Case) M = AGL(k, p) ∩ An with n = p^k and p prime;
(4) (Diagonal Case) M = (U^k .(Out(U) × Sk )) ∩ An with U a nonabelian simple group,
k ≥ 2, and n = |U|^{k−1} ;
(5) (Wreath Case) M = (Sm ≀ Sk ) ∩ An with n = m^k , m ≥ 5, k > 1, and either k ≠ 2 or
m ≢4 2;
(6) (Almost Simple Case) U ⊳ M ≤ Aut(U), where U is a nonabelian simple group and M
acts primitively on Ω.
Remark 5.3. We will see that AGL(1, p) plays an important role in determining the nim-numbers for alternating groups. Note that GL(1, p) ≅ Zp^× , where Zp^× is the group of units
of Zp , which is cyclic of order p − 1. Then AGL(1, p) ≅ Zp ⋊ Zp^× , where the action is field
multiplication. It is also easy to check that every element of AGL(1, p) either has order p or
has order dividing p − 1.
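The last claim of Remark 5.3 is easy to check by brute force. In the following sketch (our own code, not part of the paper), an element of AGL(1, p) is represented by the pair (a, b) of the map x ↦ ax + b mod p, and its order is found by repeated composition:

def aglp_element_orders(p):
    # Orders of all elements of AGL(1, p); the current power of the map
    # x -> a*x + b is tracked as the pair (ca, cb).
    orders = set()
    for a in range(1, p):
        for b in range(p):
            ca, cb, n = a, b, 1
            while (ca, cb) != (1, 0):
                ca, cb = (a * ca) % p, (a * cb + b) % p
                n += 1
            orders.add(n)
    return orders

p = 7
assert all(n == p or (p - 1) % n == 0 for n in aglp_element_orders(p))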
Corollary 5.4. Let n ≥ 5. If An has an odd maximal subgroup, then n is prime, n ≡4 3,
and every odd maximal subgroup of An is isomorphic to AGL(1, n) ∩ An , and hence has order
(1/2)n(n − 1).
Proof. If H is a subgroup of Sn that is not contained in An , then |H ∩ An | = (1/2)|H|. Then
H ∩ An is even if 4 divides |H|. By inspection, Sm × Sk and Sm ≀ Sk have orders that are
divisible by 4 under the conditions specified in the Intransitive, Imprimitive, and Wreath Cases
of Theorem 5.2, so An cannot have an odd maximal subgroup in those cases. Similarly, any
U^k .(Out(U) × Sk ) and any almost simple M are divisible by 4 by Proposition 5.1, so An cannot
have an odd maximal subgroup in the Diagonal or Almost Simple Cases of Theorem 5.2.
This leaves only the Affine Case to be considered. Assume that n = p^k for some prime
p, and let M be a maximal subgroup of An in the Affine Case of Theorem 5.2. The order
of AGL(k, p) is p^k (p^k − 1)(p^k − p)(p^k − p^2 ) · · · (p^k − p^{k−1} ), which is divisible by 4 if k ≥ 2.
Then we may assume that k = 1, so p = n ≥ 5 and we conclude that p is odd. Then
M ≅ AGL(1, p) ∩ Ap ≅ (Zp ⋊ Zp^× ) ∩ Ap by Remark 5.3. Since Ap does not contain a (p − 1)-cycle, we conclude that |M| = (1/2)p(p − 1), which is odd if and only if p ≡4 3.
The next result follows directly from Remark 5.3 and Corollary 5.4.
Corollary 5.5. Let n ≥ 5. If An has an odd maximal subgroup M, then every nontrivial
element of M is either an n-cycle or a power of a product of two (1/2)(n − 1)-cycles.
Proposition 5.6. Let n ≥ 5. Then An has an odd maximal subgroup if and only if n is prime,
n ≡4 3, and n ∉ {7, 11, 23}.
Proof. Suppose that An has an odd maximal subgroup M. Then by Corollary 5.4, n is prime,
n ≡4 3, and M ≅ AGL(1, n) ∩ An . The subgroup AGL(1, n) ∩ An is not maximal if
n ∈ {7, 11, 23} by the main theorem from [17].
Thus, we may assume that n is prime, n ≡4 3, and n ∉ {7, 11, 23}. Again by the main
theorem from [17], we have that AGL(1, n) ∩ An is maximal in An . Its order is (1/2)n(n − 1) by
Corollary 5.4, which is odd because n ≡4 3.
Recall [10] that a prime p is a generalized repunit prime if there is a prime power q and an
integer n such that p = (q^n − 1)/(q − 1). The next definition will simplify the statement of the
proposition that follows.
Definition 5.7. A prime p is said to be a ζ-prime if all of the following conditions hold:
(1) p ≡4 3;
(2) p ∉ {11, 23};
(3) p is not a generalized repunit prime.
The ζ-primes that are less than 100 are 19, 43, 47, 59, 67, 71, 79, and 83 [1]. Note that
7 = 111 in base 2, i.e. 7 = (2^3 − 1)/(2 − 1), is a generalized repunit prime, so we did not have to explicitly exclude the prime 7 from
Condition (2) to match the set of exceptions from Proposition 5.6.
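The list of ζ-primes below 100 can be reproduced with a short brute-force search. The sketch below is our own code, not from the paper; the bound n < 10 in the repunit test is ad hoc but more than sufficient for p < 100.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_prime_power(n):
    return any(is_prime(p) and any(p**k == n for k in range(1, 8))
               for p in range(2, n + 1))

def is_generalized_repunit(p):
    # Is p = (q^n - 1)/(q - 1) for some prime power q and some n >= 2?
    for q in range(2, p):  # n = 2 already forces q = p - 1 < p
        if is_prime_power(q):
            for n in range(2, 10):
                r = (q**n - 1) // (q - 1)
                if r == p:
                    return True
                if r > p:
                    break
    return False

def is_zeta_prime(p):
    return (is_prime(p) and p % 4 == 3 and p not in (11, 23)
            and not is_generalized_repunit(p))

print([p for p in range(2, 100) if is_zeta_prime(p)])
# [19, 43, 47, 59, 67, 71, 79, 83]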
Proposition 5.8. If n ≥ 5, then the following are equivalent.
(1) The even maximal subgroups of An fail to cover An .
(2) There exists an n-cycle of An that is not in any even maximal subgroup.
(3) n is a ζ-prime.
Proof. First, we show that Items (1) and (2) are equivalent. If there exists an n-cycle of An
that is not in any even maximal subgroup, then the even maximal subgroups of An fail to cover
An by definition. Now suppose that the even maximal subgroups of An fail to cover An . Then
there must be an odd maximal subgroup of An , so n is equal to some prime p with p ≡4 3
and p ∉ {7, 11, 23} by Proposition 5.6. Let r be the integer such that p = 2r + 1. If M is an
odd maximal subgroup of Ap , then each element of M is either trivial, a p-cycle, or a power
of two disjoint r-cycles by Corollary 5.5. The identity and every power of two disjoint r-cycles
is contained in an even subgroup isomorphic to (Sr × Sr ) ∩ Ap , which is contained in some
even maximal subgroup. Therefore, it must be a p-cycle that is not contained in an even maximal
subgroup.
To finish, we show that Items (2) and (3) are equivalent. Let g be an n-cycle of An that is not
in any even maximal subgroup. Then g must be contained in an odd maximal subgroup because
An is not cyclic. Proposition 5.6 implies that n is a prime such that n ≡4 3 with n ∉ {7, 11, 23}.
Theorem 5.2 and the fact that n is prime imply that any maximal subgroup containing g must
be isomorphic to AGL(1, n) ∩ An or an almost simple group H that acts primitively on a set of
size n. The former has odd order, and [18, Table 3] lists all the possibilities for H. Therefore, g
will be contained in an even maximal subgroup H if and only if H appears in [18, Table 3]. The
only rows in this table where the second column (labeled by n) is equal to the fourth column
(labeled by p, which is equal to n in our case) correspond to the first PSL(d, q), Sz(q), M23 ,
and M11 . But the row for PSL(d, q) implies that n is a generalized repunit prime, the row for
Sz(q) implies that n ≡4 1, and the other two imply that n ∈ {7, 11, 23}. So the ζ-primes were
defined exactly to exclude the entries in this table. Therefore, if g is not contained in any even
maximal subgroup, then g is not in any H listed in [18, Table 3] and hence p is a ζ-prime.
Conversely, assume that n is a ζ-prime. Then n is a prime such that n ≡4 3 and n ∉ {7, 11, 23}.
So An has no subgroup H from [18, Table 3], and hence an n-cycle g is not contained in any
even maximal subgroup, proving Item (2).
5.1. Avoidance Games for Generating Alternating Groups. We will use the following
result to determine the nim-numbers of DNG(An ).
Proposition 5.9. [5, Corollary 6.4] Let G be a nontrivial finite group.
(1) If all maximal subgroups of G are odd, then DNG(G) = ∗1.
(2) If all maximal subgroups of G are even, then DNG(G) = ∗0.
(3) Assume G has both even and odd maximal subgroups.
(a) If the set of even maximal subgroups covers G, then DNG(G) = ∗0.
(b) If the set of even maximal subgroups does not cover G, then DNG(G) = ∗3.
Theorem 5.10. The values of DNG(An ) are

DNG(An ) = ∗3,  n ∈ {3, 4} or n is a ζ-prime
           ∗0,  otherwise.
Proof. The cases where n ∈ {3, 4} were done in [11], so assume n ≥ 5. If n is not a ζ-prime, then
the set of even maximal subgroups covers An by Proposition 5.8; in this case, DNG(An ) = ∗0 by
Proposition 5.9. If n is a ζ-prime, then the set of even maximal subgroups fails to cover An by
Proposition 5.8. This implies that An has an odd maximal subgroup. The group An contains
the proper subgroup h(1, 2)(3, 4)i of order 2, and so An must also contain an even maximal
subgroup. We may conclude that DNG(An ) = ∗3 if n is a ζ-prime by Proposition 5.9.
Just like for S1 , A1 and A2 are trivial, so DNG(A1 ) and DNG(A2 ) do not exist.
5.2. Achievement Games for Generating Alternating Groups. We will see that ζ-primes
play an important role in determining the nim-numbers of GEN(An ) as they did for DNG(An ).
The following theorem refutes the portion of [11, Conjecture 9.1] on alternating groups.
Theorem 5.11. The values of GEN(An ) are

GEN(An ) = ∗0,  n ∈ {1, 2}
           ∗2,  n = 3
           ∗3,  n = 4
           ∗4,  n is a ζ-prime
           ∗1,  otherwise.
Proof. The empty set is a generating set for the trivial group, so GEN(A1 ) = ∗0 = GEN(A2 ).
The cases where n ∈ {3, 4} were done in [11], so assume n ≥ 5. By [8], An ∈ Γ1 . If every
maximal subgroup of An has even order, then GEN(An ) = ∗1 by Corollary 3.9. We only need
to determine what happens in the case that An has an odd maximal subgroup. Then n = p for
some prime p ∉ {7, 11, 23} such that p ≡4 3 by Proposition 5.6. We may write p = 2r + 1 for
some odd r. If p is not a ζ-prime, then the even maximal subgroups cover Ap by Proposition 5.8,
so GEN(Ap ) = ∗1 by Corollary 3.8.
So assume that p is a ζ-prime. Then there is an odd maximal subgroup M of Ap ; we know
that M must be isomorphic to AGL(1, p) ∩ Ap by Corollary 5.4. Then M = ⟨g, x⟩ where g is a
p-cycle and x is the product of two r-cycles, and each element of M is either trivial, a p-cycle,
or a power of two disjoint r-cycles by Corollary 5.5. By Proposition 3.6, type(XM ) = t2 .
The element x is contained in an even subgroup isomorphic to (Sr × Sr ) ∩ Ap , so X⌈x⌉ has an
even option of type t4 . The structure class X⌈x⌉ has the type t2 option XM , since {x, g} ∈ XM .
Proposition 3.2 implies type(X⌈x⌉ ) = t3 .
Since p is a ζ-prime, there is a p-cycle y that is not contained in any even maximal subgroup by
Proposition 5.8. Since all p-cycles are conjugate in Sp , we conclude that g is also only contained
in odd maximal subgroups, so type(X⌈g⌉ ) = t2 by Proposition 3.6. Then t2 , t3 ∈ otype(XΦ(Ap ) ),
so GEN(Ap ) = ∗4 by Proposition 3.7.
5.3. Outcomes for Alternating Groups. Theorems 5.10 and 5.11 immediately yield the
following result.
Corollary 5.12. The first player has a winning strategy for
• DNG(An ) if and only if n ∈ {3, 4} or n is a ζ-prime;
• GEN(An ) if and only if n ∉ {1, 2}.
6. Further Questions
We conclude with a few open problems.
(1) Recall that every finite nonabelian simple group is in Γ1 by [14]. It is well-known that many finite
simple groups have the property that every maximal subgroup has even order (see [5], [9],
and [20]). For such G, DNG(G) = ∗0 by the main results from [5] and GEN(G) = ∗1 by
Corollary 3.9. Can we determine the nim-numbers for DNG(G) and GEN(G) for every
finite simple group G?
(2) Can we determine the nim-numbers for DNG(G) and GEN(G) if G is almost simple?
(3) Can the conditions from Proposition 3.7 be translated into group-theoretic conditions?
For instance, can one describe when t2 ∈ otype(XΦ(G) ) based on the subgroup structure
of G?
References
1. The On-Line Encyclopedia of Integer Sequences, published electronically at https://oeis.org (2010),
Sequence A270123.
2. M.H. Albert, R.J. Nowakowski, and D. Wolfe, Lessons in play: an introduction to combinatorial game
theory, AMC 10 (2007), 12.
3. M. Anderson and F. Harary, Achievement and avoidance games for generating abelian groups, Internat. J.
Game Theory 16 (1987), no. 4, 321–325.
4. F.W. Barnes, Some games of F. Harary, based on finite groups, Ars Combin. 25 (1988), no. A, 21–30,
Eleventh British Combinatorial Conference (London, 1987).
5. B.J. Benesh, D.C. Ernst, and N. Sieben, Impartial avoidance games for generating finite groups, North-Western European Journal of Mathematics 2 (2016), 83–102.
6. M. Brandenburg, Algebraic games, arXiv:1205.2884 (2012).
7. J.L. Brenner and J. Wiegold, Two-generator groups. I, Michigan Math. J. 22 (1975), 53–64.
8. N. Chigira et al., Generating alternating groups, Hokkaido Mathematical Journal 26 (1997), 435–438.
9. J.H. Conway, R.T. Curtis, S.P. Norton, R.A. Parker, and R.A. Wilson, Atlas of finite groups: maximal
subgroups and ordinary characters for simple groups, Oxford University Press Eynsham, 1985.
10. H. Dubner, Generalized repunit primes, Math. Comp. 61 (1993), no. 204, 927–930.
11. D.C. Ernst and N. Sieben, Impartial achievement and avoidance games for generating finite groups,
arXiv:1407.0784 (2014).
12. W. Feit and J.G. Thompson, Solvability of groups of odd order, Pacific J. Math. 13 (1963), 775–1029.
13. T. Foguel, Finite groups with a special 2-generator property, Pacific J. Math. 170 (1995), no. 2, 483–495.
14. R.M. Guralnick and W.M. Kantor, Probabilistic generation of finite simple groups, J. Algebra 234 (2000),
no. 2, 743–792, Special issue in honor of Helmut Wielandt.
15. I.M. Isaacs, Algebra: a graduate course, Graduate Studies in Mathematics, vol. 100, American Mathematical
Society, Providence, RI, 2009, Reprint of the 1994 original.
16. I.M. Isaacs and T. Zieschang, Generating symmetric groups, American Mathematical Monthly (1995), 734–
739.
17. M.W. Liebeck, C.E. Praeger, and J. Saxl, A classification of the maximal subgroups of the finite alternating
and symmetric groups, J. Algebra 111 (1987), no. 2, 365–383.
18. M.W. Liebeck and J. Saxl, Primitive permutation groups containing an element of large prime order, J.
Lond. Math. Soc. 2 (1985), no. 2, 237–249.
19. A.N. Siegel, Combinatorial game theory, Graduate Studies in Mathematics, vol. 146, American Mathematical Society, Providence, RI, 2013.
20. R.A. Wilson, The finite simple groups, Graduate Texts in Mathematics, vol. 251, Springer-Verlag London,
Ltd., London, 2009.
Bret Benesh, Department of Mathematics, College of Saint Benedict and Saint John’s
University, 37 College Avenue South, Saint Joseph, MN 56374-5011, USA
E-mail address: bbenesh@csbsju.edu
Dana Ernst and Nándor Sieben, Department of Mathematics and Statistics, Northern Arizona University, PO Box 5717, Flagstaff, AZ 86011-5717, USA
E-mail address: Dana.Ernst@nau.edu, Nandor.Sieben@nau.edu
Clustering with feature selection using
alternating minimization.
Application to computational biology
arXiv:1711.02974v2 [cs.LG] 5 Dec 2017
Cyprien Gilet, Marie Deprez, Jean-Baptiste Caillau and Michel Barlaud, Fellow, IEEE
Abstract—This paper deals with unsupervised clustering with
feature selection. The problem is to estimate both labels and a
sparse projection matrix of weights. To address this combinatorial non-convex problem while maintaining a strict control on the
sparsity of the matrix of weights, we propose an alternating
minimization of the Frobenius norm criterion. We provide a new
efficient algorithm named K-sparse which alternates k-means
with projection-gradient minimization. The projection-gradient
step is a method of splitting type, with exact projection on the
ℓ1 ball to promote sparsity. The convergence of the gradient-projection step is addressed, and a preliminary analysis of the
alternating minimization is made. The Frobenius norm criterion
converges as the number of iterates in Algorithm K-sparse goes
to infinity. Experiments on Single Cell RNA sequencing datasets
show that our method significantly improves the results of
PCA k-means, spectral clustering, SIMLR, and Sparcl
methods. The complexity of K-sparse is linear in the number
of samples (cells), so that the method scales up to large datasets.
Finally, we extend K-sparse to supervised classification.
I. INTRODUCTION
This paper deals with unsupervised clustering with feature
selection in high dimensional space. As an application, we
choose single-cell RNA-seq which is a new technology able
to measure the expression of thousands of genes (20,000
genes) in single cells. Characterization of diverse cell types
and their distinguishing features require robust and accurate
clustering methods. However, clustering in high dimension
suffers from the curse of dimensionality: as dimensions increase,
vectors become indiscernible and the predictive power of the
aforementioned methods is drastically reduced [1], [36]. In
order to overcome this issue, a popular approach for high-dimensional data is to perform Principal Component Analysis
(PCA) prior to clustering. This approach is however difficult to
justify in general [45]. An alternative approach proposed in [18],
[19] is to combine clustering and dimension reduction by means
of Linear Discriminant Analysis (LDA). The heuristic used
in [19] is based on alternating minimization, which consists
in iteratively computing a projection subspace by LDA, using
the labels y at the current iteration and then running k-means
on the projection of the data onto the subspace. Departing
from this work, the authors of [5] propose a convex relaxation
M. Barlaud and C. Gilet are with I3S, Univ. Côte d’Azur &
CNRS, F-06900 Sophia Antipolis. E-mail: barlaud@i3s.unice.fr,
gilet@i3s.unice.fr.
J.-B. Caillau is with LJAD, Univ. Côte d’Azur & CNRS/Inria, F-06108 Nice.
E-mail: caillau@unice.fr
M. Deprez is with IPMC, Univ. Côte d’Azur & CNRS, F-06560 Sophia
Antipolis. E-mail: deprez@ipmc.cnrs.fr
in terms of a suitable semi-definite program (SDP). Another
efficient approach is spectral clustering where the main tools
are graph Laplacian matrices [33], [43]. However, methods
such as PCA, LDA or, more recently SIMLR, do not provide
sparsity. A popular approach for selecting sparse features in
supervised classification or regression is the Least Absolute
Shrinkage and Selection Operator (LASSO) formulation [39].
The LASSO formulation uses the `1 norm instead of `0 [11],
[12], [20], [21] as an added penalty term. A hyperparameter,
which unfortunately does not have any simple interpretation,
is then used to tune the sparsity. The authors of [46] use a
lasso-type penalty to select the features and propose a sparse
k-means method. A main issue is that optimizing the values
of the Lagrangian parameter λ [24], [46] is computationally
expensive [30]. All these methods [5], [18], [19], [46] require a
k-means heuristic to retrieve the labels. The alternating scheme
we propose combines such a k-means step with dimension
reduction and feature selection using an `1 sparsity constraint.
II. CONSTRAINED UNSUPERVISED CLASSIFICATION
A. General Framework
Let X (≠ 0) be the m × d matrix made of m line samples x1 , . . . , xm belonging to the d-dimensional space of features. Let Y ∈ {0, 1}^(m×k) be the label matrix, where k ≥ 2 is the number of clusters. Each line of Y has exactly one nonzero element equal to one, yij = 1 indicating that the sample xi belongs to the j-th cluster. Let W ∈ R^(d×d̄) be the projection matrix, d̄ ≪ d, and let µ be the k × d̄ matrix of centroids in the projected space XW:

µ(j, :) := (1 / Σ_{i=1}^m yij) Σ_{i : yij = 1} (XW)(i, :).

The j-th centroid is the model for all samples xi belonging to the j-th cluster (yij = 1). The clustering criterion can be cast as the within-cluster sum of squares (WCSS, [37], [46]) in the projected space

(1/2) ‖Y µ − XW‖²_F → min,   (1)

where ‖·‖_F is the Frobenius norm induced by the Euclidean structure on m × d̄ matrices,

(A|B)_F := tr(AᵀB) = tr(ABᵀ),   ‖A‖_F := √((A|A)_F).

The matrix of labels is constrained according to

yij ∈ {0, 1},   i = 1, . . . , m,   j = 1, . . . , k,   (2)

Σ_{j=1}^k yij = 1,   i = 1, . . . , m,   (3)

Σ_{i=1}^m yij ≥ 1,   j = 1, . . . , k.   (4)

Note that (3) implies that each sample belongs to exactly one cluster while (4) ensures that each cluster is not empty (no fusion of clusters). This prevents trivial solutions consisting in k − 1 empty clusters and W = 0. In contrast with the Lagrangian LASSO formulation, we want to have a direct control on the value of the ℓ1 bound, so we constrain W according to

‖W‖1 ≤ η   (η > 0),   (5)

where ‖·‖1 is the ℓ1 norm of the vectorized d × d̄ matrix of weights:

‖W‖1 := ‖W(:)‖1 = Σ_{i=1}^d Σ_{j=1}^d̄ |wij|.

The problem is to estimate labels Y together with the sparse projection matrix W. As Y and W are bounded, the set of constraints is compact and existence holds.

Proposition 1 The minimization of the norm (1), jointly in Y and W under the constraints (2)-(5), has a solution.

To attack this difficult nonconvex problem, we propose an alternating (or Gauss-Seidel) scheme as in [18], [19], [46]. Another option would be to design a global convex relaxation to address the joint minimization in Y and W; see, e.g., [5], [23]. The first convex subproblem finds the best projection from dimension d to dimension d̄ for a given clustering.

Problem 1 For a fixed clustering Y (and a given η > 0),

(1/2) ‖Y µ − XW‖²_F → min

under the constraint (5) on W.

Given the matrix of weights W, the second subproblem is the standard k-means on the projected data.

Problem 2 For a fixed projection matrix W,

(1/2) ‖Y µ − XW‖²_F → min

under the constraints (2)-(4) on Y.

B. Exact gradient-projection splitting method

To solve Problem 1, we use a gradient-projection method. It belongs to the class of splitting methods [14], [15], [29], [32], [38] and is designed to solve minimization problems of the form

ϕ(W) → min,   W ∈ C,   (6)

using separately the convexity properties of the function ϕ on one hand, and of the convex set C on the other. We use the following forward-backward scheme to generate a sequence of iterates:

Vn := Wn − γn ∇ϕ(Wn),   (7)
Wn+1 := PC(Vn) + εn,   (8)

where PC denotes the projection on the convex set C (a subset of some Euclidean space). Under standard assumptions on the sequence of gradient steps (γn)n , and on the sequence of projection errors (εn)n , convergence holds (see, e.g., [6]).

Theorem 1 Assume that (6) has a solution. Assume that ϕ is convex, differentiable, and that ∇ϕ is β-Lipschitz, β > 0. Assume finally that C is convex and that

Σn |εn| < ∞,   infn γn > 0,   supn γn < 2/β.

Then the sequence of iterates of the forward-backward scheme (7-8) converges, whatever the initialization. If moreover (εn)n = 0 (exact projections), there exists a rank N and a positive constant K such that for n ≥ N

ϕ(Wn) − infC ϕ ≤ K/n.   (9)

In our case, ∇ϕ is Lipschitz since it is affine,

∇ϕ(W) = Xᵀ(XW − Y µ),   (10)

and we recall the estimation of its best Lipschitz constant.

Lemma 1 Let A be a d × d real matrix, acting linearly on the set of d × k real matrices by left multiplication, W ↦ AW. Then, its norm as a linear operator on this set endowed with the Frobenius norm is equal to its largest singular value, σmax(A).

Proof. The Frobenius norm is equal to the ℓ2 norm of the vectorized matrix,

‖W‖_F = ‖[W^1 ; . . . ; W^h]‖2 ,   ‖AW‖_F = ‖[AW^1 ; . . . ; AW^h]‖2 ,   (11)

where W^1 , . . . , W^h denote the h column vectors of the d × h matrix W. Accordingly, the operator norm is equal to the largest singular value of the kd × kd block-diagonal matrix whose diagonal is made of k matrix A blocks. Such a matrix readily has the same largest singular value as A.

As a byproduct of Theorem 1, we get

Corollary 1 For any fixed step γ ∈ (0, 2/σmax(X)²), the forward-backward scheme applied to Problem 1 with an exact projection on ℓ1 balls converges with a linear rate towards a solution, and the estimate (9) holds.

Proof. The ℓ1 ball being compact, existence holds. So does convergence, provided the condition on the step lengths is fulfilled. Now, according to the previous lemma, the best Lipschitz constant of the gradient of ϕ is σmax(XᵀX) = σmax(X)², hence the result.
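As a quick numerical sanity check of Lemma 1 (our own sketch on random data, not part of the paper), one can verify that the affine gradient (10) is indeed σmax(X)²-Lipschitz in the Frobenius norm:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 7))
W1 = rng.standard_normal((7, 3))
W2 = rng.standard_normal((7, 3))
# The mu-dependent term cancels in grad phi(W1) - grad phi(W2).
lhs = np.linalg.norm(X.T @ X @ (W1 - W2))          # Frobenius norm
rhs = np.linalg.norm(X, 2) ** 2 * np.linalg.norm(W1 - W2)
assert lhs <= rhs + 1e-9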
3
Algorithm 1 Exact gradient-projection algorithm
Input: X, Y, µ, W0 , N, γ, η
W ← W0
for n = 0, . . . , N do
V ← W − γX T (XW − Y µ)
W ← Pη1 (V )
end for
Output: W
Exact projection. In Algorithm 1, we denote by Pη1 (W) the (reshaped as a d × d̄ matrix) projection of the vectorized matrix W(:). An important asset of the method is that it takes advantage of the availability of efficient methods [16], [22] to compute the ℓ1 projection. For η > 0, denote by B¹(0, η) the closed ℓ1 ball of radius η in the space R^(dd̄) centered at the origin, and by ∆η the simplex {w ∈ R^(dd̄) | w1 + · · · + wdd̄ = η, w1 ≥ 0, . . . , wdd̄ ≥ 0}. Let w ∈ R^(dd̄), and let v denote the projection on ∆η of (|w1|, . . . , |wdd̄|). It is well known that the projection of w on B¹(0, η) is

(ε1 v1 , . . . , εdd̄ vdd̄),   εj := sign(wj),   j = 1, . . . , dd̄,   (12)

and the fast method described in [16] is used to compute v with complexity O(d × d̄).
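For illustration, here is a short NumPy sketch of the ℓ1-ball projection (our own code, using a simple sort-based variant of the simplex projection rather than the faster method of [16]; both return the same point):

import numpy as np

def project_l1_ball(w, eta):
    # Euclidean projection of the vector w onto the closed l1 ball of
    # radius eta, via projection of |w| onto the simplex.
    if np.abs(w).sum() <= eta:
        return w.copy()
    u = np.sort(np.abs(w))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, w.size + 1) > css - eta)[0][-1]
    theta = (css[rho] - eta) / (rho + 1.0)
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

# A d x d_bar weight matrix is vectorized, projected, and reshaped.
W = np.random.randn(6, 3)
Wp = project_l1_ball(W.ravel(), eta=2.0).reshape(W.shape)
assert abs(np.abs(Wp).sum() - 2.0) < 1e-9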
Fista implementation. A constant step of suitable size γ is
used in accordance with Corollary 1. In our setting, a useful
normalization of the design matrix X is obtained replacing X
by X/σmax (X). This sets the Lipschitz constant in Theorem 1
to one. The O(1/n) convergence rate of the algorithm can be speeded up to O(1/n²) using a FISTA step [7]. In practice we use a modified version [13] which ensures convergence of the iterates, see Algorithm 2. Note that for any fixed step γ ∈ (0, 1/σmax(X)²), the FISTA algorithm applied to Problem 1 with an exact projection on ℓ1 balls converges with a quadratic rate towards a solution, and the estimate (9) holds.
Algorithm 2 Exact gradient-projection algorithm with FISTA
Input: X, Y, µ, W0 , N, γ, η
W ← W0
t←1
for n = 0, . . . , N do
V ← W − γX T (XW − Y µ)
Wnew ← Pη1 (V )
tnew ← (n + 5)/4
λ ← 1 + (t − 1)/tnew
W ← (1 − λ)W + λWnew
t ← tnew
end for
Output: W
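In NumPy, Algorithm 2 can be transcribed almost line by line (our own sketch, reusing the project_l1_ball helper above; X is m × d, Y is m × k, µ is k × d̄):

def fista_step(X, Y, mu, W0, N, gamma, eta):
    # Projection-gradient with the FISTA relaxation of [13].
    W, t = W0.copy(), 1.0
    for n in range(N):
        V = W - gamma * X.T @ (X @ W - Y @ mu)
        W_new = project_l1_ball(V.ravel(), eta).reshape(V.shape)
        t_new = (n + 5) / 4.0
        lam = 1.0 + (t - 1.0) / t_new
        W = (1.0 - lam) * W + lam * W_new
        t = t_new
    return W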
C. Clustering algorithm

The resulting alternating minimization is described by Algorithm 3. (One can readily replace the gradient-projection step by the FISTA version described in Algorithm 2.) Labels Y are for instance initialized by spectral clustering on X, while the k-means computation relies on standard methods such as k-means++ [2].

Algorithm 3 Alternating minimization clustering.
Input: X, Y0, µ0, W0, L, N, k, γ, η
Y ← Y0
µ ← µ0
W ← W0
for l = 0, . . . , L do
  for n = 0, . . . , N do
    V ← W − γXᵀ(XW − Y µ)
    W ← Pη1 (V)
  end for
  Y ← kmeans(XW, k)
  µ ← centroids(Y, XW)
end for
Output: Y, W

Convergence of the algorithm. Similarly to the approaches advocated in [5], [18], [19], [46], our method involves nonconvex k-means optimization for which convergence towards local minimizers only can be proved [9], [37]. In practice, we use k-means++ with several replicates to improve each clustering step. We assume that the initial guess for labels Y and matrix of weights W is such that the associated k centroids are all different. We note for further research that there have been recent attempts to convexify k-means (see, e.g., [10], [17], [31], [35]). As each step of the alternating minimization scheme decreases the norm in (1), which is nonnegative, the following readily holds.

Proposition 2 The Frobenius norm ‖Y µ − XW‖_F converges as the number of iterates L in Algorithm 3 goes to infinity.

This property is illustrated in the next section on biological data. Further analysis of the convergence may build on recent results on proximal regularizations of the Gauss-Seidel alternating scheme for nonconvex problems [3], [8].

Gene selection. The issue of feature selection thanks to the sparsity-inducing ℓ1 constraint (5) is also addressed in this specific context. The projection Pη1 (W) aims to sparsify the W matrix so that the gene j will be selected if ‖W(j, :)‖ > 0. For a given constraint η, the practical stopping criterion of the alternating minimization algorithm involves the evolution of the number of the selected genes (see Fig. 4 in Section III). In the higher-level loop on the bound η itself, the evolution of a criterion such as accuracy versus η is analyzed. We also note that the extension to multi-label is obvious, as it suffices to allow several ones on each line of the matrix Y by relaxing constraint (3).

D. Supervised learning

As a final remark, we note that a straightforward modification of Algorithm 3 allows to address supervised classification. If the labels Y are available, the simpler goal is to compute the matrix of weights W as well as the resulting centroids in the
projected space. For the sake of completeness we include below the corresponding update of Algorithm 3. Experiments in the supervised case are out of the scope of this paper and will be reported elsewhere.
Algorithm 4 Supervised learning.
Input: X, Y, µ0 , W0 , L, N, k, γ, η
µ ← µ0
W ← W0
for l = 0, . . . , L do
for n = 0, . . . , N do
V ← W − γX T (XW − Y µ)
W ← Pη1 (V )
end for
µ ← centroids(Y, XW )
end for
Output: µ, W
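A compact NumPy/scikit-learn sketch of the alternating scheme is given below (our own code, assuming project_l1_ball above, X normalized so that σmax(X) = 1, and k-means++ initialization of the labels in place of the spectral initialization mentioned in Section II-C):

import numpy as np
from sklearn.cluster import KMeans

def k_sparse(X, k, d_bar, eta, L=10, N=100, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((X.shape[1], d_bar))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X @ W)
    for _ in range(L):
        Y = np.eye(k)[labels]                 # m x k one-hot label matrix
        # Centroids; assumes no cluster is empty (constraint (4)).
        mu = (Y.T @ (X @ W)) / Y.sum(axis=0)[:, None]
        for _ in range(N):                    # projection-gradient loop
            V = W - gamma * X.T @ (X @ W - Y @ mu)
            W = project_l1_ball(V.ravel(), eta).reshape(V.shape)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X @ W)
    return labels, W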
III. APPLICATION TO SINGLE-CELL RNA-SEQ CLUSTERING
A. Experimental settings
We normalize the features and use the FISTA implementation
with constant step γ = 1 in accordance with Corollary 1. The
problem of estimating the number of clusters is out of the
scope of this paper, and we refer to the popular Gap method
[40]. We compare the labels obtained from our clustering
with the true labels to compute the clustering accuracy. We
also report the popular Adjusted Rand Index (ARI) [28] and
Normalized Mutual Information (NMI) criteria. Processing
times are obtained on a 2.5 GHz Macbook Pro with an i7
processor. We give tsne results for visual evaluation [42] for
five different methods: PCA k-means, spectral clustering [43],
SIMLR [44], Sparcl (we have used the R software provided
by [46]), and our method.
B. Single cell datasets

Single-cell sequencing is a new technology elected "method of the year" in 2013 by Nature Methods. The widespread use of such methods has enabled the publication of many datasets with ground truth cell type (label) annotations [26]. We compare algorithms on four of those public single-cell RNA-seq datasets: the Patel dataset [34], the Klein dataset [27], the Zeisel dataset [47] and the Usoskin dataset [41].

Klein scRNA-seq dataset. Klein et al. [27] characterized the transcriptome of 2,717 cells (Mouse Embryonic Stem Cells, mESCs), across four culture conditions (control and with 2, 4 or 7 days after leukemia inhibitory factor, LIF, withdrawal) using InDrop sequencing. Gene expression was quantified with Unique Molecular Identifier (UMI) counts (essentially tags that identify individual molecules, allowing removal of amplification bias). The raw UMI counts and cell labels were downloaded from https://hemberg-lab.github.io/scRNA.seq.datasets/. After filtering out lowly expressed genes (10,322 genes remaining after removing genes that have less than 2 counts in 130 cells) and Count Per Million (CPM) normalization to reduce cell-to-cell variation in sequencing, we report clustering into four cell sub-populations, corresponding to the four culture conditions.

Zeisel scRNA-seq dataset. Zeisel et al. [26], [47] collected 3,005 mouse cells from the primary somatosensory cortex (S1) and the hippocampal CA1 region, using the Fluidigm C1 microfluidics cell-capture platform. Gene expression was quantified with UMI counts. The raw UMI counts and metadata (batch, sex, labels) were downloaded from http://linnarssonlab.org/cortex. We applied low-expressed gene filtering (7,364 remaining genes after removing genes that have less than 2 counts in 30 cells) and CPM normalization. We report clustering into the nine major classes identified in the study.

Usoskin scRNA-seq dataset. Usoskin et al. [41] collected 622 cells from the mouse dorsal root ganglion, using a robotic cell-picking setup, and sequenced them with a 5' single-cell tagged reverse transcription (STRT) method. Filtered (9,195 genes) and normalized data (expressed as Reads Per Million) were downloaded with full sample annotations from http://linnarssonlab.org/drg. We report clustering into four neuronal cell types.
Patel scRNA-seq dataset. To characterize intra-tumoral heterogeneity and redundant transcriptional patterns in glioblastoma tumors, Patel et al. [34] efficiently profiled 5,948 expressed genes of 430 cells from five dissociated human glioblastomas using the SMART-Seq protocol. The filtered and centered-normalized data along with the corresponding cell labels were downloaded from https://hemberg-lab.github.io/scRNA.seq.datasets/. As described in this study, we report clustering into five clusters corresponding to the five different dissociated tumors from which cells were extracted. We did not perform any other normalization or gene selection on this dataset.

Fig. 1: The decay of the Frobenius norm for the four data sets versus the number of loops of the alternating minimization scheme emphasizes the fast and smooth convergence of our algorithm.

C. Experimental conclusions and comparison with advanced clustering methods (SIMLR and Sparcl)
Accuracy, ARI and NMI. K-sparse significantly improves the results of Sparcl and SIMLR in terms of accuracy, ARI and NMI.
Fig. 2: We report the detailed evolution of the Frobenius norm after the splitting loop (l = 0.5, 1.5, . . .) and after k-means++ (l = 1, 2, . . .) versus the number of loops. It shows how both the projection-gradient and k-means++ steps contribute to iteratively minimize the Frobenius norm.
Fig. 4: The evolution of the number of selected genes versus the
number of loops shows the fast and smooth convergence of our
algorithm.
Fig. 3: Evolution of ‖W‖0 on the Usoskin database versus the number of iterations. The number of nonzero entries of the sparse matrix W depends on the sharpness of the ℓ1 constraint (5) defined by η, and on the iteration n. (As n ranges from 0 to N, sparsity is increased rapidly in the first loop.)
TABLE I: Comparison between methods (Patel dataset): 5 clusters, 430 cells, 5,948 genes, d̄opt = k + 8, ηopt = 700. K-sparse selects 217 genes, and outperforms PCA k-means and spectral clustering by 21% and 17%, respectively. K-sparse has similar accuracy but better ARI than SIMLR. K-sparse is 100 times faster than Sparcl.
Patel dataset   PCA     Spectral   SIMLR   Sparcl   K-sparse
Accuracy (%)    76.04   80.46      97.21   94.18    98.37
ARI (%)         84.21   86.93      93.89   93.8     96.3
NMI             0.59    0.65       0.91    0.85     0.95
Time (s)        0.81    0.46       8.0     1,027    10.0

Fig. 5: Evolution of accuracy as a function of the dimension of projection d̄. In order to allow comparisons for several databases, accuracy is plotted against d̄ minus the number of clusters, k. For the Usoskin database, e.g., the optimal d̄ is k + 2.
TABLE II: Comparison between methods (Usoskin dataset): 4 clusters, 622 cells, 9,195 genes, d̄opt = k + 4, ηopt = 3000. K-sparse selected 788 genes. K-sparse outperforms the other methods by 20%.

Usoskin dataset   PCA     Spectral   SIMLR   Sparcl   K-sparse
Accuracy (%)      54.82   60.13      76.37   57.24    95.98
ARI (%)           22.33   26.46      67.19   31.30    92.75
NMI               0.29    0.33       0.75    0.39     0.88
Time (s)          1.06    0.91       15.67   1,830    53.61
Fig. 6: The evolution of the number of selected genes versus the constraint is a smooth monotone function. The bound η for the ℓ1 constraint is thus easily tuned.
Feature selection. K-sparse and Sparcl have built-in feature selection, while SIMLR requires supplementary and noise-sensitive processing such as the Laplacian score [25].

Convergence and scalability. K-sparse converges within around L = 10 loops.
TABLE III: Comparison between methods (Klein dataset): 4 clusters, 2,717 cells, 10,322 genes after preprocessing, d̄opt = k + 8, ηopt = 14000. K-sparse selects 3,636 genes. K-sparse and SIMLR have similar Accuracy, ARI and NMI performances. K-sparse is 5 times faster than SIMLR and 100 times faster than Sparcl. A main issue of Sparcl is that optimizing the values of the Lagrangian parameter using permutations is computationally expensive. Computing the kernel for SIMLR is also computationally expensive. The complexity of K-sparse is linear in the number of samples, thus it scales up to large databases. However, the main advantage of K-sparse over Spectral and SIMLR is that it provides selected genes.
Fig. 7: Selection of the optimal ℓ1 bound, ηopt. A typical behaviour on biological applications is the existence of a plateau-like zone: if η is too big, too many genes are selected (including ones irrelevant for the clustering due to the presence of technical and biological noise in their expression), which reduces accuracy. Conversely, for η too small, not enough information is available and the accuracy of clustering is also reduced.
Klein dataset   PCA     Spectral   SIMLR    Sparcl   K-sparse
Accuracy (%)    68.50   63.31      99.12    65.11    99.26
ARI (%)         44.82   38.91      98.34    45.11    98.64
NMI             0.55    0.54       0.96     0.56     0.97
Time (s)        10.91   20.81      511.49   30,384   101.40
TABLE IV: Comparison between methods (Zeisel dataset): 9 clusters, 3,005 cells, 7,364 genes after preprocessing, d̄opt = k + 8, ηopt = 16000. K-sparse selected 2,572 genes. PCA k-means has poor clustering performance. Spectral and Sparcl have similar performance. K-sparse outperforms the other methods. K-sparse is 7 times faster than SIMLR and 100 times faster than Sparcl. Note that all algorithms fail to discover small clusters (less than 30 cells) and over-segment large clusters, which reflects one of the main challenges in biology: to identify rare events / cell types with few discriminative characteristics (cell-type-specific gene expression patterns lost to technical and non-relevant biological noise).
Fig. 8: Accuracy versus number of genes. These results show that a minimum number of genes is required to get the best possible clustering accuracy. Such genes are involved in the most relevant biological processes necessary to distinguish cell types. On the one hand, on the Patel and Klein datasets, an increasing number of genes used for clustering between conditions will only add repetitive signal to the minimum number of genes necessary, and will neither increase nor decrease the clustering accuracy. On the other hand, on the Zeisel and Usoskin datasets, adding too many genes results in a decrease in clustering accuracy, implying that the additional genes are noisy due to technical and biological variations with little relevance to distinguish between cell types.
Zeisel dataset   PCA     Spectral   SIMLR   Sparcl   K-sparse
Accuracy (%)     39.60   59.30      71.85   65.23    83.42
ARI (%)          34.67   50.55      64.64   59.06    75.66
NMI              0.54    0.68       0.75    0.69     0.76
Time (s)         11      23         464     28,980   74
TABLE V: Comparison between methods: accuracy (%). K-sparse significantly improves the results of Sparcl and SIMLR in terms of accuracy.

Methods                       PCA     SIMLR   Sparcl   K-sparse
Patel (430 cells, k = 5)      76.04   97.21   94.18    98.37
Klein (2,717 cells, k = 4)    68.50   99.12   65.11    99.26
Zeisel (3,005 cells, k = 9)   39.60   71.85   65.23    83.42
Usoskin (622 cells, k = 4)    54.82   76.37   57.24    95.98
TABLE VI: Comparison between methods: time (s). K-sparse outperforms SIMLR on large databases.

Methods                       PCA    SIMLR   K-sparse
Patel (430 cells, k = 5)      0.81   8       10
Usoskin (622 cells, k = 4)    1.06   15      53
Klein (2,717 cells, k = 4)    11     511     101
Zeisel (3,005 cells, k = 9)   11     464     74

Fig. 9: Ranked weight ‖W(j, :)‖ of selected genes.

The complexity of the inner iteration of K-sparse is O(d × d̄ × dn(η)) for the gradient part (sparse matrix multiplication XᵀXW), plus O(d × d̄) for the projection part, where dn(η) is the average number of nonzero entries of the sparse matrix W. This number depends on the sharpness of the ℓ1 constraint (5) defined by η, and on the iteration n. (As n ranges from 0 to N, sparsity is
increased as illustrated by the numerical simulations.) One must then add the cost of k-means, that is expected to be O(m × d̄) on average. This allows K-sparse to scale up to large or very large databases. In contrast, optimizing the values of the Lagrangian parameter using permutations in Sparcl is computationally expensive, with complexity O(m² × d). A naive implementation of the kernel methods in SIMLR results in O(m²) complexity. Although the computational cost can be reduced to O(p² × m) [4], where p is the low rank of the approximation, the computational cost remains expensive for large data sets, whence limitations for large databases.
Fig. 10: Usoskin dataset: The heatmap of XW shows the efficiency of the projection.

Fig. 11: Usoskin dataset: The heatmap of W using a dendrogram shows that K-sparse selects linear combinations of genes.

IV. CONCLUSION
In this paper we focus on unsupervised clustering. We provide
a new efficient algorithm based on alternating minimization
that achieves feature selection by introducing an ℓ1 constraint in the gradient-projection step. This step, of splitting type, uses an exact projection on the ℓ1 ball to promote sparsity, and is alternated with k-means. Convergence of the projection-gradient method is established. Each iterative step of our algorithm necessarily lowers the cost, which is thus monotonically decreasing. The experiments on single-cell RNA-seq datasets in Section III demonstrate that our method is very promising compared to other algorithms in the field. Note that our algorithm can be straightforwardly applied for clustering any high-dimensional database (in imaging, social networks, customer relationship management...). Ongoing developments concern the application to very large datasets.

REFERENCES
[1] C. Aggarwal. On k-anonymity and the curse of dimensionality.
Proceedings of the 31st VLDB Conference, Trondheim, Norway, 2005.
[2] D. Arthur and S. Vassilvitski. k-means++: The advantages of careful
seeding. Proceedings of the eighteenth annual ACM-SIAM symposium
on Discrete algorithms, 2007.
[3] H. Attouch, J. Bolte, P. Redont, and A. Soubeyran. Proximal alternating
minimization and projection methods for nonconvex problems: an
approach based on the kurdyka-lojasiewicz inequality. Mathematics
of Operations Research, 2010.
[4] F. R. Bach. Sharp analysis of low-rank kernel matrix approximations.
International Conference on Learning Theory (COLT)., 2013.
[5] F. R. Bach and Z. Harchaoui. Diffrac: a discriminative and flexible
framework for clustering. In J. C. Platt, D. Koller, Y. Singer, and S. T.
Roweis, editors, Advances in Neural Information Processing Systems 20,
pages 49–56. Curran Associates, Inc., 2008.
[6] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone
Operator Theory in Hilbert Spaces. Springer, New York, 2011.
[7] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding
algorithm for linear inverse problems. SIAM journal on imaging sciences,
2(1):183–202, 2009.
[8] J. Bolte, S. Sabach, and M. Teboulle. Proximal alternating linearized
minimization for nonconvex and nonsmooth problems. Mathematical
Programming, 146(1):459–494, Aug 2014.
[9] L. Bottou and Y. Bengio. Convergence properties of the k-means
algorithms. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors,
Advances in Neural Information Processing Systems 7, pages 585–592.
MIT Press, 1995.
[10] F. Bunea, C. Giraud, M. Royer, and N. Verzelen. PECOK: A convex
optimization approach to variable clustering. (1606.05100), 2016.
[11] E. J. Candès. The restricted isometry property and its implications for
compressed sensing. Comptes Rendus Acad Sciences Paris, 346(1):589–
592, 2008.
[12] E. J. Candès, M. B. Wakin, and S. P. Boyd. Enhancing sparsity by
reweighted `1 minimization. Journal of Fourier analysis and applications,
14(5-6):877–905, 2008.
[13] A. Chambolle and C. Dossal. On the convergence of the iterates of "fista".
Journal of Optimization Theory and Applications, Springer Verlag, (166),
2015.
[14] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal
processing. In Fixed-point algorithms for inverse problems in science
and engineering, pages 185–212. Springer, 2011.
[15] P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation, 4(4):1168–1200,
2005.
[16] L. Condat. Fast projection onto the simplex and the l1 ball. Mathematical
Programming Series A, 158(1):575–585, 2016.
[17] L. Condat. A convex approach to k-means clustering and image
segmentation. HAL, (01504799), 2017.
[18] F. de la Torre and T. Kanade. Discriminative cluster analysis. ICML 06
Proceedings of the 23rd international conference on Machine learning,
Pittsburgh, Pennsylvania, USA, 2006.
[19] C. Ding and T. Li. Adaptive dimension reduction using discriminant
analysis and k-means clustering. In Proceedings of the 24th International
Conference on Machine Learning, ICML ’07, pages 521–528, New York,
NY, USA, 2007. ACM.
[20] D. L. Donoho and M. Elad. Optimally sparse representation in general
(nonorthogonal) dictionaries via `1 minimization. Proceedings of the
National Academy of Sciences, 100(5):2197–2202, 2003.
[21] D. L. Donoho and B. F. Logan. Signal recovery and the large sieve.
SIAM Journal on Applied Mathematics, 52(2):577–591, 1992.
[22] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient
projections onto the l 1-ball for learning in high dimensions. In
Proceedings of the 25th international conference on Machine learning,
pages 272–279. ACM, 2008.
[23] N. Flammarion, B. Palaniappan, and F. R. Bach. Robust discriminative
clustering with sparse regularizers. arXiv preprint arXiv:1608.08052,
2016.
[24] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization
path for the support vector machine. Journal of Machine Learning
Research, 5:1391–1415, 2004.
[25] X. He, D. Cai, and P. Niyogi. Laplacian score for feature selection. In
Y. Weiss, P. B. Schölkopf, and J. C. Platt, editors, Advances in Neural
Information Processing Systems 18, pages 507–514. MIT Press, 2006.
[26] V. Y. Kiselev et al. SC3: consensus clustering of single-cell RNA-seq data. Nature Methods, 2017.
[27] A. M. Klein et al. Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. Cell, 2015.
[28] H. Lawrence and A. Phipps. Comparing partitions. Journal of
Classification, 1985.
[29] P.-L. Lions and B. Mercier. Splitting algorithms for the sum of two
nonlinear operators. SIAM Journal on Numerical Analysis, 16(6):964–979,
1979.
[30] J. Mairal and B. Yu. Complexity analysis of the lasso regularization
path. In Proceedings of the 29th International Conference on Machine
Learning (ICML-12), pages 353–360, 2012.
[31] D. G. Mixon, S. Villar, and R. Ward. Clustering subgaussian mixtures
with k-means. Information and inference, 00:1–27, 2017.
[32] S. Mosci, L. Rosasco, M. Santoro, A. Verri, and S. Villa. Solving
structured sparsity regularization with proximal methods. In Machine
Learning and Knowledge Discovery in Databases, pages 418–433.
Springer, 2010.
[33] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis
and an algorithm. In T. G. Dietterich, S. Becker, and Z. Ghahramani,
editors, Advances in Neural Information Processing Systems 14, pages
849–856. MIT Press, 2002.
[34] A. P. Patel et al. Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma. Science, 344, 2014.
[35] J. Peng and Y. Wei. Approximating k-means-type clustering via
semidefinite programming. SIAM J. Optim., 18:186–205, 2017.
[36] M. Radovanovic, A. Nanopoulos, and M. Ivanovic. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11:2487–2531, 2010.
[37] S. Z. Selim and M. A. Ismail. K-means-type algorithms: A generalized
convergence theorem and characterization of local optimality. IEEE
Trans. Patt. An. Machine Intel., PAMI-6:81–87, 1984.
[38] S. Sra, S. Nowozin, and S. J. Wright. Optimization for Machine Learning.
MIT Press, 2012.
[39] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of
the Royal Statistical Society. Series B (Methodological), pages 267–288,
1996.
[40] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of
clusters in a data set via the gap statistic. Journal of the Royal Statistical
Society. Series B (Methodological), pages 411–423, 2001.
[41] D. Usoskin et al. Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing. Nature Neuroscience, 2014.
[42] L. J. P. van der Maaten and G. E. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
[43] U. Von Luxburg. A tutorial on spectral clustering. Statistics and
computing, 2007.
[44] B. Wang, J. Zhu, E. Pierson, D. Ramazzotti, and S. Batzoglou. Visualization and analysis of single-cell RNA-seq data by kernel-based similarity learning. Nature Methods, (14), 2017.
[45] W.-C. Chang. On using principal components before separating a mixture of two multivariate normal distributions. Journal of the Royal Statistical Society, Series C, 32(3), 1983.
[46] D. M Witten and R. Tibshirani. A framework for feature selection in
clustering. Journal of the American Statistical Association, 105(490):713–
726, 2010.
[47] A. Zeisel et al. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science, 2015.
Fig. 12: Comparison of 2D visualizations using t-SNE [42]. Each point represents a cell. Misclassified cells, in black, are reported for 3 datasets: Patel, Klein and Usoskin. K-sparse visually improves on the results of Sparcl and SIMLR (note that SIMLR fails to discover a class on Usoskin). The figure shows small, ball-shaped clusters for the K-sparse and SIMLR methods.
The two-edge connectivity survivable-network design problem in
planar graphs
arXiv:1302.2184v2 [] 30 Sep 2015
Glencora Borradaile
Oregon State University
Philip Klein
Brown University
October 1, 2015
Abstract
Consider the following problem: given a graph with edge costs and a subset Q of vertices,
find a minimum-cost subgraph in which there are two edge-disjoint paths connecting every
pair of vertices in Q. The problem is a failure-resilient analog of the Steiner tree problem
arising, for example, in telecommunications applications. We study a more general mixed-connectivity formulation, also employed in telecommunications optimization. Given a number
(or requirement) r(v) ∈ {0, 1, 2} for each vertex v in the graph, find a minimum-cost subgraph
in which there are min{r(u), r(v)} edge-disjoint u-to-v paths for every pair u, v of vertices.
We address the problem in planar graphs, considering a popular relaxation in which the
solution is allowed to use multiple copies of the input-graph edges (paying separately for each
copy). The problem is max SNP-hard in general graphs and strongly NP-hard in planar graphs.
We give the first polynomial-time approximation scheme in planar graphs. The running time is
O(n log n).
Under the additional restriction that the requirements are only non-zero for vertices on the
boundary of a single face of a planar graph, we give a polynomial-time algorithm to find the
optimal solution.
1 Introduction
In the field of telecommunications network design, an important requirement of networks is resilience to link failures [24]. The goal of the survivable network problem is to find a graph that
provides multiple routes between pairs of terminals. In this work we address a problem concerning
edge-disjoint paths. For a set S of non-negative integers, an instance of the S-edge connectivity
design problem is a pair (G, r) where G = (V, E) is an undirected graph with edge costs c : E → ℝ+
and connectivity requirements r : V → S. The goal is to find a minimum-cost subgraph of G that,
for each pair u, v of vertices, contains at least min{r(u), r(v)} edge-disjoint u-to-v paths.
In telecommunication-network design, failures are rare; for this reason, there has been much
research on low-connectivity network design problems, in which the maximum connectivity requirement is two. Resende and Pardalos [24] survey the literature, which includes heuristics, structural
results, polyhedral results, computational results using cutting planes, and approximation algorithms. This work focuses on {0, 1, 2}-edge connectivity problems in planar graphs.
We consider the previously studied variant wherein the solution subgraph is allowed to contain
multiple copies of each edge of the input graph (a multi-subgraph); the costs of the edges in the
solution are counted according to multiplicity. For {0, 1, 2}-connectivity, at most two copies of an
1
edge are needed. We call this the relaxed version of the problem and use the term strict to refer to
the version of the problem in which multiple copies of edges of the input graph are disallowed.
A polynomial-time approximation scheme (PTAS) for an optimization problem is an algorithm
that, given a fixed constant ε > 0, runs in polynomial time and returns a solution within 1 + ε of optimal. The algorithm's running time need not be polynomial in 1/ε. The PTAS is efficient if the running time is bounded by a polynomial whose degree is independent of ε. In this paper, we focus
on designing a PTAS for {0, 1, 2}-edge connectivity.
Two edge-connected spanning subgraphs A special case that has received much attention
is the problem of finding a minimum-cost subgraph of G in which every pair of vertices is two
edge-connected. Formally this is the strict {2}-edge connectivity design problem. This problem
is NP-hard [12] (even in planar graphs, by a reduction from Hamiltonian cycle) and max-SNP
hard [10] in general graphs. In general graphs, Frederickson and JáJá [13] gave an approximation
ratio of 3 which was later improved to 2 (and 1.5 for unit-cost graphs) by Khuller and Vishkin [17].
In planar graphs, Berger et al. [4] gave a polynomial-time approximation scheme (PTAS) for the
relaxed {1, 2}-edge connectivity design problem and Berger and Grigni [5] gave a PTAS for the strict
{2}-edge connectivity design problem. Neither of these algorithms is efficient; the degree of the
polynomial bounding the running time grows with 1/ε. For the relaxed version of spanning planar
2-edge connectivity, the techniques of Klein [19] can be used to obtain a linear-time approximation
scheme.
Beyond spanning When a vertex can be assigned a requirement of zero, edge-connectivity design
problems include the Steiner tree problem: given a graph with edge costs and given a subset of
vertices (called the terminals), find a minimum-cost connected subgraph that includes all vertices
in the subset. More generally, we refer to any vertex with a non-zero connectivity requirement
as a terminal. For {0, 2}-edge connectivity design problem, in general graphs, Ravi [23] showed
that Frederickson and JáJá’s approach could be generalized to give a 3-approximation algorithm
(in general graphs). Klein and Ravi [20] gave a 2-approximation for the {0, 1, 2}-edge connectivity
design problem. (In fact, they solve the even more general version in which requirements r(u, v)
are specified for pairs u, v of vertices.) This result was generalized to connectivity requirements
higher than two by Williamson et al. [27], Goemans et al. [14], and Jain [16]. These algorithms
each handle the strict version of the problem.
In their recent paper on the spanning case [5], Berger and Grigni raise the question of whether
there is a PTAS for the {0, 2}-edge connectivity design problem in planar graphs. In this paper,
we answer that question in the affirmative for the relaxed version. The question in the case of the
strict version is still open.
1.1 Summary of new results
Our main result is a PTAS for the relaxed {0, 1, 2}-edge connectivity problem in planar graphs:
Theorem 1.1. For any ε > 0, there is an O(n log n) algorithm that, given a planar instance (G, r) of relaxed {0, 1, 2}-edge connectivity, finds a solution whose cost is at most 1 + ε times optimal.
This result builds on the work of Borradaile, Klein and Mathieu [7, 8, 9] which gives a PTAS for
the Steiner tree (i.e. {0, 1}-edge connectivity) problem. This is the first PTAS for a non-spanning
two-edge connectivity problem in planar graphs.
Additionally, we give an exact, polynomial-time algorithm for the special case where all the vertices with non-zero requirement are on the boundary of a common face:
Theorem 1.2. There is an O(k³n)-time algorithm that finds an optimal solution to any planar instance (G, r) of relaxed {0, 1, 2}-edge-connectivity in which only k vertices are assigned nonzero requirements and all of them are on the boundary of a single face. For instances of relaxed {0, 2}-edge connectivity (i.e. all requirements are 0 or 2), the algorithm runs in linear time.
1.2 Organization
We start by proving Theorem 1.2 in Section 3. The proof of this result is less involved and provides
a good warm-up for the proof of Theorem 1.1. The algorithm uses the linear-time shortest-path
algorithm for planar graphs [15] and a polynomial-time algorithm for the equivalent boundary
Steiner-tree problem [11] as black boxes.
In order to prove Theorem 1.1, we need to review the framework developed for the Steiner tree
problem in planar graphs. We give an overview of this framework in Section 4 and show how to use
it to solve the relaxed {0, 1, 2}-edge connectivity problem. The correctness of the PTAS relies on a
Structure Theorem (Theorem 4.6) which bounds the number of interactions of a solution between
different regions of the graph while paying only a small relative penalty in cost. We prove this
Structure Theorem in Section 5. The algorithm itself requires a dynamic program; we give the
details for this in Section 6.
2 Basics
We consider graphs and multi-subgraphs. A multi-subgraph is a subgraph where edges may be
included with multiplicity. In proving the Structure Theorem we will replace subgraphs of a solution
with other subgraphs. In doing so, two of the newly introduced subgraphs may share an edge.
For a subgraph H of a graph G, we use V (H) to denote the set of vertices in H. For a graph
G and set of edges E, G/E denotes the graph obtained by contracting the edges E.
For a path P, P[x, y] denotes the x-to-y subpath of P for vertices x and y of P; start(P) and end(P) denote the first and last vertices of P; rev(P) denotes the reverse of path P. For paths A
and B, A ◦ B denotes the concatenation of A and B. See Figure 1 for an illustration of the notion
of paths crossing. A cycle is non-self-crossing if every pair of subpaths of the cycle do not cross.
We employ the usual definitions of planar embedded graphs. For a face f , the cycle of edges
making up the boundary of f is denoted ∂f . We assume the planar graph G is connected and is
embedded in the plane, so there is a single infinite face, and we denote its boundary by ∂G.
For a cycle C in a planar embedded graph, C[x, y] denotes an x-to-y path in C for vertices x and
y of C. There are two such paths and the choice between the two possibilities will be disambiguated
by always choosing the subpath in the clockwise direction. A cycle C is said to enclose the faces
that are embedded inside it. C encloses an edge/vertex if the edge/vertex is embedded inside it or
on it. In the former case, C strictly encloses the edge/vertex. For non-crossing x-to-y paths P and
Q, P is said to be left of Q if P ◦ rev(Q) is a clockwise cycle.
We will use the following as a subroutine:
Figure 1: (a) P crosses Q. (b) P and Q are noncrossing; Q is left of P. (c) A self-crossing cycle. (d) A non-self-crossing cycle (non-self-crossing allows for repeated vertices, i.e. v).
Theorem 2.1 (Erickson, Monma and Veinott [11]). Let G be a planar embedded graph with edge costs and let Q be a set of k terminals that all lie on the boundary of a single face. Then there is an algorithm to find a minimum-cost Steiner tree of G spanning Q in O(nk³ + (n log n)k²) time, or O(nk³) time using the algorithm of [15].
2.1 Edge-connectivity basics
Since we are only interested in connectivity up to and including two-edge connectivity, we define
the following: For a graph H and vertices x, y, let
cH (x, y) = min{2, maximum number of edge-disjoint x-to-y paths in H}.
For two multi-subgraphs H and H′ of a common graph G and for a subset S of the vertices of G, we say H′ achieves the two-connectivity of H for S if cH′(x, y) ≥ cH(x, y) for every x, y ∈ S. We say H′ achieves the boundary two-connectivity of H if it achieves the two-connectivity of H for S = V(∂G).
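For concreteness, both notions are directly computable with standard tools. The sketch below is ours (not part of the paper's algorithm); it uses networkx and assumes a simple graph — for a multi-subgraph, first subdivide each parallel edge.

import networkx as nx
from itertools import combinations

def c(H: nx.Graph, x, y) -> int:
    # c_H(x, y) = min{2, maximum number of edge-disjoint x-to-y paths in H};
    # by Menger's theorem this is the local edge connectivity, capped at 2.
    return min(2, nx.edge_connectivity(H, x, y))

def achieves_two_connectivity(H2: nx.Graph, H: nx.Graph, S) -> bool:
    # H2 achieves the two-connectivity of H for S if
    # c_{H2}(x, y) >= c_H(x, y) for every pair x, y in S.
    return all(c(H2, x, y) >= c(H, x, y) for x, y in combinations(S, 2))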
Several of the results in the paper build on observations of the structural property of two-edge
connected graphs. The first is a well-known property:
Lemma 2.2 (Transitivity). For any graph H and vertices u, v, w ∈ V(H), cH(u, w) ≥ min{cH(u, v), cH(v, w)}.
Note that in the following, we can replace “strictly encloses no/strictly enclosed” with “strictly
encloses all/strictly not enclosed” without loss of generality (by viewing a face enclosed by C as
the infinite face).
Lemma 2.3 (Empty Cycle). Let H be a (multi-)subgraph of G and let C be a non-self-crossing cycle of H that strictly encloses no terminals. Let H′ be the subgraph of H obtained by removing the edges of H that are strictly enclosed by C. Then H′ achieves the two-connectivity of H.
Proof. See Figure 2(b). Without loss of generality, view C as a clockwise cycle. Consider two
terminals x and y. We show that there are cH (x, y) edge-disjoint x-to-y paths in H that do not
use edges strictly enclosed by C. There are two nontrivial cases:
cH(x, y) = 1: Let P be an x-to-y path in H. If P intersects C, let xP be the first vertex of P that is in C and let yP be the last vertex of P that is in C. Let P′ = P[x, xP] ◦ C[xP, yP] ◦ P[yP, y]. If P does not intersect C, let P′ = P. P′ is an x-to-y path in H that has no edge strictly enclosed by C.
cH(x, y) = 2: Let P and Q be edge-disjoint x-to-y paths in H. If Q does not intersect C, then P′ and Q are edge-disjoint paths, neither of which has an edge strictly enclosed by C (where P′ is as defined above). Suppose that both P and Q intersect C. Define xQ and yQ as for P. Suppose these vertices are ordered xP, xQ, yQ, yP around C. Then P[x, xP] ◦ C[xP, yQ] ◦ Q[yQ, y] and Q[x, xQ] ◦ rev(C[yP, xQ]) ◦ P[yP, y] are edge-disjoint x-to-y paths that do not use any edges enclosed by C. This case is illustrated in Figure 2; other cases (for other orderings of {xP, xQ, yQ, yP} along C) follow similarly.
Figure 2: An illustration of the proof of Lemma 2.3: there are edge-disjoint x-to-y paths (grey)
that do not use edges enclosed by C.
We have shown that we can achieve the boundary two-connectivity of H without using any edges
strictly enclosed by a cycle of H. The lemma follows.
2.2 Vertex-connectivity basics
The observations in this section do not involve planarity. Although our results are for edge connectivity, we use vertex connectivity in Section 5 to simplify our proofs.
Vertices x and y are biconnected (a.k.a. two-vertex-connected) in a graph H if H contains two
x-to-y paths that do not share any internal vertices, or, equivalently, if there is a simple cycle in H
that contains both x and y. For a subset S of vertices of H, we say H is S-biconnected if for every
pair x, y of vertices of S, H contains a simple cycle through x and y. We refer to the vertices of S
as terminals.
Lemma 2.4. A minimal S-biconnected graph is biconnected.
Proof. Suppose H has a cut-vertex v: H = H1 ∪ H2 where H1 ∩ H2 = {v} and Hi ≠ {v} for i = 1, 2. If H1 and H2 both have terminals then H does not biconnect every pair of terminals. If,
say, H2 does not have a terminal then H − (H2 − {v}) is a smaller subgraph that biconnects the
terminals.
For the next two proofs, we use the notion of an (open) ear decomposition. An ear decomposition of a graph G is a partition of the edges into a cycle C and a sequence of paths P1, P2, . . . , Pk such that the endpoints of Pi are in C ∪ (∪j<i Pj). The ear decomposition is open if the endpoints of Pi are distinct. A graph is biconnected iff it has an open ear decomposition [25]. Ear decompositions can be built greedily starting with any cycle.
Theorem 2.5. Let H be a minimal S-biconnected graph. Every cycle in H contains a vertex of S.
Proof. Assume for a contradiction that H contains a cycle C that does not contain any terminals. By Lemma 2.4, H is biconnected and so has an open ear decomposition starting with C; let C, P1, P2, . . . , Pk be an open ear decomposition of H. Define Ei to be the subgraph composed of C and the first i ears: C ∪ (∪j≤i Pj). We construct another open ear decomposition C′, P′2, . . . , P′k of H with one fewer ear as follows (note there is no ear P′1) and use E′i to denote C′ ∪ (∪j≤i P′j).
Let x and y be the endpoints of P1. Let C′ = C[y, x] ◦ P1. Let Q′1 = C[x, y] be the portion of C that is not used in C′. We will maintain the invariant:
Q′i contains at least one edge and Q′i = Ei − E′i.
Clearly this invariant holds for i = 1. For i ≥ 2, we define P′i using Pi and Q′i−1. Note that one or more of the endpoints of Pi may be in Q′i−1 and so E′i−1 ∪ Pi is not necessarily a valid ear decomposition. However, by the invariant, Pi's endpoints are in Q′i−1 ∪ E′i−1, allowing us to define a new valid ear P′i by extending Pi along Q′i−1 to reach E′i−1 as follows: P′i is the minimal path of Pi ∪ Q′i−1 ∪ E′i−1 whose endpoints are in E′i−1 such that Pi is a subpath of P′i. Define Q′i = Q′i−1 − (P′i − Pi). Since Pi has distinct endpoints, P′i does not contain all the edges of Q′i−1, thus maintaining the invariant.
By construction, C′, P′2, . . . , P′k is an open ear decomposition of H − Q′k and so H − Q′k is biconnected. Since Q′k ⊂ C and C does not contain any terminals, H − Q′k is S-biconnected and since Q′k contains at least one edge, H − Q′k contradicts the minimality of H.
Theorem 2.6. Let H be a minimal S-biconnected graph. For any cycle C in H, every C-to-C
path contains a vertex of S.
Proof. Let C be any cycle. By Theorem 2.5, C contains a terminal. We consider an ear decomposition C, P1 , P2 , . . . of H built as follows. Consider s ∈ S not spanned by C ∪ P1 ∪ · · · ∪ Pi−1 .
Then there are vertex-disjoint paths from s to C ∪ P1 ∪ · · · ∪ Pi−1 since s and the terminal on, for
example, C are biconnected. Let Pi be the ear formed by these vertex-disjoint paths. Observe that
by this construction each ear Pi contains a terminal for every i.
Suppose every path in ∪i≤k Pi with two endpoints in C strictly contains a vertex of S. We prove
that this is then the case for ∪i≤k+1 Pi . Since Pk+1 is an ear, its endpoints are in ∪i≤k Pi and so
any C-to-C path that uses an edge of Pk+1 would have to contain the entirety of Pk+1 ; therefore
Pk+1 cannot introduce a terminal-free path.
3 An exact algorithm for boundary {0, 1, 2}-edge connectivity
Our algorithm for the boundary case of {0, 1, 2}-edge connectivity, as formalized in Theorem 1.2, is based on the observation that there is an optimal solution to the problem that is the union of Steiner trees whose terminals are boundary vertices, allowing us to employ the boundary-Steiner-tree algorithm of Theorem 2.1.
When terminals are restricted to the boundary of the graph, no cycle can strictly enclose a
terminal. By Lemma 2.3, we get:
Corollary 3.1. Let H be a subgraph of G and let H′ be a minimal subgraph of H that achieves the boundary two-connectivity of H. Then in H′ every cycle C strictly encloses no edges.
In the following we will assume that the boundary of the graph G is a simple cycle; that is,
a vertex appears at most once along ∂G. Let us see why this is a safe assumption. Suppose the
boundary of G is not simple: there is a vertex v that appears at least twice along ∂G. Partition G into two graphs G1 and G2 such that G1 ∩ G2 = {v}, v appears exactly once along ∂G1, and E(∂G) = E(∂G1) ∪ E(∂G2). Let x be a vertex of ∂G1 and let y be a vertex of ∂G2. Then
cG (x, y) = min{cG1 (x, v), cG2 (v, y)}, allowing us to define new connectivity requirements and solve
the problem separately for G1 and G2 .
Lemma 3.2. Let P and Q be leftmost non-self-crossing xP -to-yP and xQ -to-yQ paths, respectively,
where xP , yP , xQ , and yQ are vertices in clockwise order on ∂G. Then P does not cross Q.
Proof. For a contradiction, assume that Q crosses P . Refer to Figure 3(a). Let C (interior shaded)
be the cycle P ◦ rev(∂G[xP , yP ]). C strictly encloses neither xQ nor yQ . If Q crosses P , there must
be a subpath of Q enclosed by C. Let x be the first vertex of Q in P and let y be the last. There
are two cases:
x ∈ P [y, yP ] : Refer to Figure 3(a). In this case, rev(P [y, x]) is left of Q[x, y] and so Q[xQ , x] ◦
rev(P [y, x]) ◦ Q[y, yQ ] (grey path) is left of Q, contradicting the leftmostness of Q.
x ∈ P [xP , y] : Refer to Figure 3(b). In this case, Q[xQ , x] ◦ rev(P [xP , x]) ◦ ∂G[xP , xQ ] (shaded
interior) is a cycle that strictly encloses y and does not enclose yQ . Since y is the last vertex
of Q on P , Q must cross itself, a contradiction.
Figure 3: Illustration of Lemma 3.2: there exist leftmost paths that do not cross.
Lemma 3.3. Let H be a subgraph of G. Let S be a subset of V (∂G) such that, for every x, y ∈ S,
cH (x, y) = 2. Then there is a non-self-crossing cycle C in H such that S ⊆ V (C) and the order
that C visits the vertices in S is the same as their order along ∂G.
Proof. Assume that the vertices of S are in the clockwise order s0 , s1 , . . . , sk−1 along ∂G.
Let Pi be the leftmost non-self-crossing si−1 -to-si path in H, taking the indices modulo k. Let
C = P1 ◦ P2 ◦ · · · ◦ Pk−1 . Certainly C visits each of the vertices s0 , s1 , . . . in order. By Lemma 3.2,
Pi does not cross Pj for all i 6= j. Therefore, C is non-self-crossing, proving the lemma.
We now give an algorithm for the following problem: given a planar graph G with edge
costs and an assignment r of requirements such that r(v) > 0 only for vertices v of ∂G, find a
minimum-cost multi-subgraph H of G that satisfies the requirements (i.e. such that there are at
least min{r(x), r(y)} edge-disjoint x-to-y paths in H).
Boundary2EC(G, r)
1. Let q1, q2, . . . be the cyclic ordering of vertices {v ∈ V(∂G) : r(v) = 2}.
2. For i = 1, . . ., let Xi = {qi} ∪ {v ∈ V(∂G[qi, qi+1]) : r(v) = 1} ∪ {qi+1}.
3. For i = 1, . . ., let Ti be the minimum-cost Steiner tree spanning Xi.
4. Return the disjoint union ∪i Ti.
We show that Boundary2EC correctly finds the minimum-cost multi-subgraph of G satisfying
the requirements. Let OPT denote an optimal solution. By Lemma 3.3, OPT contains a non-self-crossing cycle C that visits q1, q2, . . . (as defined in Boundary2EC). By Corollary 3.1, C strictly
encloses no edges of OPT. Let Pi be the leftmost qi -to-qi+1 path in C. The vertices in Xi are
connected in OPT, by the input requirements. Let Si be the subgraph of OPT that connects
Xi . This subgraph is enclosed by ∂G[qi , qi+1 ] ◦ C[qi , qi+1 ]. Replacing Si by Ti achieves the same
connectivity among vertices v with r(v) > 0 without increasing the cost.
We will use the following lemma to give an efficient implementation of Boundary2EC.
Lemma 3.4. Let a, b and c be vertices ordered along the clockwise boundary ∂G of a planar graph
G. Let Ta be the shortest-path tree rooted at a (using edge costs for lengths). Then for any set of
terminals Q in ∂G[b, c], there is a minimum-cost Steiner tree connecting them that is enclosed by the cycle ∂G[b, c] ◦ Ta[c, b].
Proof. Refer to Figure 4. Let C = ∂G[b, c] ◦ Ta[c, b]. Let T be a minimum-cost Steiner tree in G connecting Q. Suppose some part of T is not enclosed by C. Let T′ be a maximal subtree of T not enclosed by C. The leaves of T′ are on Ta[c, b]. Let P be the minimum subpath of Ta[c, b] that spans these leaves. Let P′ be the start(P)-to-end(P) path in T′.
We consider the case when start(P′) is a vertex of Ta[a, b] and end(P′) is a vertex of Ta[a, c] (the other cases, when start(P′) and end(P′) are either both vertices of Ta[a, b] or both vertices of Ta[a, c], are simpler). Then P′ must cross Ta[a, x] where x is the last vertex common to Ta[a, b] and Ta[a, c] (i.e. the lowest common ancestor in Ta of b and c). Let y be a vertex of P′ ∩ Ta[a, x]. Since Ta is a shortest-path tree in an undirected graph, every subpath of Ta[a, z] and Ta[z, a], for any vertex z, is a shortest path. We have that:
c(P′) = c(P′[start(P′), y]) + c(P′[y, end(P′)])
≥ c(Ta[start(P′), y]) + c(Ta[y, end(P′)])
≥ c(Ta[start(P′), end(P′)])
≥ c(P)
Let T̂ = (T − T′) ∪ P. By construction, T̂ spans Q. Using that c(P′) ≥ c(P), we have that c(T̂) = c(T) − c(T′) + c(P) ≤ c(T) − c(T′) + c(P′) ≤ c(T) since P′ is a subpath of T′.
Repeating this process for every subtree of T not enclosed by C results in a tree enclosed by C spanning Q that costs no more than T.
Figure 4: There is a tree T̂ that is just as cheap as T (dotted) and spans the terminals between b and c but is enclosed by C (whose interior is shaded). T̂ is composed of the portion of T enclosed by C plus P, the thick grey path.
We describe an O(k³n)-time implementation of Boundary2EC (where k is the number of terminals). Compute a shortest-path tree T rooted at terminal q1 in linear time. For each i, consider the graph Gi enclosed by Ci = ∂G[qi, qi+1] ◦ T[qi+1, qi]. Compute the minimum Steiner tree Ti spanning Xi in Gi. By Lemma 3.4, Ti has the same cost as the minimum Steiner tree spanning Xi in G. Since each edge of G appears in at most two subgraphs Gi and Gj, the trees Ti can be computed in O(k³n) time (by Theorem 2.1).
Note: if the requirements are such that r(v) ∈ {0, 2} for every vertex v on the boundary of G,
then the sets Xi have cardinality 2. Instead of computing Steiner trees in Step 3, we need only
compute shortest paths. The running time for this special case is therefore linear.
This completes the proof of Theorem 1.2.
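For reference, the algorithm can be rendered compactly as follows. This is a sketch under our own interface: the boundary cycle is passed explicitly, and boundary_steiner_tree is a placeholder name for the exact black-box subroutine of Theorem 2.1, not a library call.

import networkx as nx

def boundary_2ec(G, r, boundary):
    # `boundary` lists V(∂G) in clockwise order; r maps vertices to {0, 1, 2}.
    # Assumes at least two vertices have requirement 2.
    n = len(boundary)
    pos = {v: i for i, v in enumerate(boundary)}
    q = [v for v in boundary if r.get(v, 0) == 2]          # step 1
    solution = nx.MultiGraph()
    for i in range(len(q)):
        a, b = q[i], q[(i + 1) % len(q)]
        X = {a, b}                                         # step 2: collect the
        j = (pos[a] + 1) % n                               # requirement-1 vertices
        while boundary[j] != b:                            # between a and b
            if r.get(boundary[j], 0) == 1:
                X.add(boundary[j])
            j = (j + 1) % n
        T = boundary_steiner_tree(G, X)                    # step 3: black box (Theorem 2.1)
        solution.add_edges_from(T.edges(data=True))        # step 4: union with multiplicity
    return solution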
4 A PTAS framework for connectivity problems in planar graphs
In this section, we review the approach used in [9] to give a PTAS framework for the Steiner
tree problem in planar graphs. While the contents of this section are largely a summary of the
framework, we generalize where necessary for the survivable network design problem, but refer the
reader to the original paper [9] for proofs and construction details that are not unique to the focus
of this article.
Herein, denote the set of terminals by Q. OPT denotes an optimal solution to the survivable
network design problem. We overload this notation to also represent the cost of the optimal solution.
4.1 Mortar graph and bricks
The framework relies on an algorithm for finding a subgraph MG of G, called the mortar graph [9].
The mortar graph spans Q and has total cost no more than 9ε−1 times the cost of a minimum Steiner tree in G spanning Q (Lemma 6.9 of [9]). Since a solution to the survivable network problem necessarily spans Q, the mortar graph has cost at most
9ε−1 · OPT. (1)
The algorithm for computing MG first computes a 2-approximate Steiner tree [21, 26, 28] and then
augments this subgraph with short paths. The resulting graph is a grid-like subgraph (the bold
edges in Figure 5(a)) many of whose subpaths are ε-short:
Definition 4.1. A path P in a graph G is ε-short if for every pair of vertices x and y on P, distP(x, y) ≤ (1 + ε) distG(x, y).
That is, the distance from x to y along P is at most (1 + ε) times the distance from x to y in G.
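The definition can be checked directly for a path given as a vertex list in a weighted graph. A brute-force sketch (ours — the algorithm itself never needs this check explicitly):

import networkx as nx

def is_eps_short(G: nx.Graph, P: list, eps: float) -> bool:
    # prefix[i] = distance from P[0] to P[i] along P
    prefix = [0.0]
    for u, v in zip(P, P[1:]):
        prefix.append(prefix[-1] + G[u][v]["weight"])
    for i in range(len(P)):
        # single-source shortest-path distances in G from P[i]
        dist = nx.single_source_dijkstra_path_length(G, P[i], weight="weight")
        for j in range(i + 1, len(P)):
            if prefix[j] - prefix[i] > (1 + eps) * dist[P[j]]:
                return False
    return True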
For each face f of the mortar graph, the subgraph of G enclosed by that face (including the
edges and vertices of the face boundary) is called a brick (Figure 5(b)), and the brick’s boundary
is defined to be f . The boundary of a brick B is written ∂B. The interior of B is defined to be
the subgraph of edges of B not belonging to ∂B. The interior of B is written int(B).
Bricks satisfy the following:
Lemma 4.2 (Lemma 6.10 [9]). The boundary of a brick B, in counterclockwise order, is the
concatenation of four paths WB , SB , EB , NB (west, south, east, north) such that:
1. The set of edges B − ∂B is nonempty.
2. Every vertex of Q ∩ B is in NB or in SB .
3. NB is 0-short in B, and every proper subpath of SB is ε-short in B.
4. There exists a number t ≤ κ(ε) and vertices s0, s1, s2, . . . , st ordered from west to east along SB such that, for any vertex x of SB[si, si+1), the distance from x to si along SB is less than ε times the distance from x to NB in B: distSB(x, si) < ε · distB(x, NB).
The number κ(ε) is given by:
κ(ε) = 4ε−2(1 + ε−1). (2)
The mortar graph has some additional properties. Let B1 be a brick, and suppose B1 ’s eastern
boundary EB1 contains at least one edge. Then there is another brick B2 whose western boundary
WB2 exactly coincides with EB1 . Similarly, if B2 is a brick whose western boundary contains at least
one edge then there is a brick B1 whose eastern boundary coincides with B2 ’s western boundary.
The paths forming eastern and western boundaries of bricks are called supercolumns.
Lemma 4.3 (Lemma 6.6 [9]). The sum of the costs of the edges in supercolumns is at most ε · OPT.
The mortar graph and the bricks are building blocks of the structural properties required for
designing an approximation scheme. Borradaile, Klein and Mathieu demonstrated that there is
a near-optimal Steiner tree whose interaction with the mortar graph is “simple” [9]. We prove a
similar theorem in Section 5. In order to formalize the notion of “simple”, we select a subset of
vertices on the boundary of each brick, called portals, and define a portal-connected graph.
4.2 Portals and simple connections
We define a subset of θ evenly spaced vertices along the boundary of every brick. The value of θ depends polynomially on the precision ε and on α, a parameter that represents how complex the solution can be within a single brick (α will be defined precisely in Equation (6)):
θ(ε) is O(ε−2 α(ε)). (3)
The portals are selected to satisfy the following:
Lemma 4.4 (Lemma 7.1 [9]). For any vertex x on ∂B, there is a portal y such that the cost of the
x-to-y subpath of ∂B is at most 1/θ times the cost of ∂B.
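One greedy selection achieving this guarantee walks the brick boundary and opens a new portal whenever the cost accumulated since the last portal reaches c(∂B)/θ. A sketch under our own interface (edge costs keyed by ordered vertex pairs):

def select_portals(boundary_cycle, edge_cost, theta):
    # boundary_cycle: vertices v0, ..., v_{k-1} of the closed walk ∂B;
    # picks at most theta + 1 portals so that every vertex of ∂B is within
    # cost c(∂B)/theta of a portal along the boundary.
    k = len(boundary_cycle)
    total = sum(edge_cost[(boundary_cycle[i], boundary_cycle[(i + 1) % k])]
                for i in range(k))
    spacing = total / theta
    portals, acc = [boundary_cycle[0]], 0.0
    for i in range(k - 1):
        acc += edge_cost[(boundary_cycle[i], boundary_cycle[i + 1])]
        if acc >= spacing:                  # too far from the last portal
            portals.append(boundary_cycle[i + 1])
            acc = 0.0
    return portals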
Recall that, for each face f of the mortar graph M G, there is a corresponding brick B, and
that B includes the vertices and edges comprising the boundary of f . The next graph construction
starts with the disjoint union of the mortar graph M G with all the bricks. Each edge e of the
mortar graph is represented by three edges in the disjoint union: one in M G (the mortar-graph
copy of e) and one on each of two brick boundaries (the brick copies of e). Similarly, each vertex
on the boundary of a brick occurs several times in the disjoint union.
The portal-connected graph, denoted B + (M G), is obtained from the disjoint union as follows: a
copy of each brick B of M G is embedded in the interior of the corresponding face of M G, and each
portal p of B is connected by an artificial edge to the corresponding vertex of M G. The construction
is illustrated in Figure 5(c). The artificial edges are called portal edges, and are assigned zero cost.
Noting that each vertex v on the boundary of a brick occurs several times in B + (M G), we
identify the original vertex v of G with that duplicate in B + (M G) that belongs to M G. In particular, each terminal (vertex in Q) is considered to appear exactly once in B + (M G), namely in M G.
Thus the original instance gives rise to an instance in B + (M G): the goal is to compute the optimal
solution w.r.t. the terminals on M G in B + (M G) and then map the edges of this solution to G.
Since G can be obtained from B + (M G) by contracting portal edges and identifying all duplicates
of each edge and all duplicates of each vertex, we infer:
Lemma 4.5. Let H be a subgraph of B + (M G) that, for each pair of terminals u, v, contains at
least min{r(u), r(v)} edge-disjoint u-to-v paths. Then the subgraph of G consisting of edges of H
that are in G has the same property.
The graph B ÷ (M G), which we call the brick-contracted graph, is obtained by contracting each
brick in B + (M G) to a single vertex, called a brick vertex, as illustrated in Figure 5(d). This graph
will be used in designing the dynamic program in Section 6.
4.3 Structure Theorem
Lemma 4.5 implies that, to find an approximately optimal solution in G, it suffices to find a solution
in B + (M G) whose cost is not much more than the cost of the optimal solution in G. The following
theorem, which we prove in Section 5, suggests that this goal is achievable. An equivalent theorem
was proven for the Steiner tree problem [9].
Theorem 4.6 (Structure Theorem). For any ε > 0 and any planar instance (G, r) of the {0, 1, 2}-edge connectivity problem, there exists a feasible solution S to the corresponding instance (B+(MG), r) such that
• the cost of S is at most (1 + cε)OPT where c is an absolute constant, and
• the intersection of S with any brick B is the union of a set of non-crossing trees whose leaves
are portals.
4.4 Approximation scheme
Figure 5: (a) The mortar graph in bold, (b) the set of bricks, (c) the portal-connected graph B+(MG), and (d) the brick-contracted graph B÷(MG).
We assume that the input graph G has degree at most three. This can be achieved using a well-known embedding-preserving transformation in which each vertex of degree d > 3 is replaced with a cycle of d degree-three vertices, as shown in Figure 6. Making this assumption simplifies the dynamic program used in Step 5 below.
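A sketch of this transformation (ours): cyclic_order must list each vertex's neighbors in their embedding order around the vertex so the result stays planar, and the new cycle edges get zero cost so solution costs are unchanged.

import itertools
import networkx as nx

def reduce_degrees(G: nx.Graph, cyclic_order: dict) -> nx.Graph:
    # Replace each vertex of degree > 3 by a cycle of degree-three vertices.
    H = nx.Graph()
    fresh = itertools.count()
    port = {}  # (v, neighbor) -> vertex of H carrying that incidence
    for v in G.nodes():
        nbrs = cyclic_order[v]
        if len(nbrs) <= 3:
            H.add_node(v)
            for u in nbrs:
                port[(v, u)] = v
        else:
            ring = [(v, next(fresh)) for _ in nbrs]
            for u, rv in zip(nbrs, ring):
                port[(v, u)] = rv
            for a, b in zip(ring, ring[1:] + ring[:1]):
                H.add_edge(a, b, weight=0.0)      # zero-cost cycle edges
    for u, v, data in G.edges(data=True):
        H.add_edge(port[(u, v)], port[(v, u)], **data)
    return H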
The approximation scheme consists of the following steps.
Step 1: Find the mortar graph MG.
Step 2: Decompose MG into “parcels”, subgraphs with the following properties:
(a) The parcels partition the faces of MG. Since each edge of MG belongs to the boundaries
of exactly two faces, it follows that each edge belongs to at most two parcels.
(b) The cost of all boundary edges (those edges belonging to two parcels) is at most (1/η) c(MG). We choose η so that this bound is (ε/2) c(OPT):
η = η(ε) = ⌈20ε−2⌉ (4)
(c) The planar dual of each parcel has a spanning tree of depth at most η + 1.
Each parcel P corresponds to a subgraph of G, namely the subgraph consisting of the bricks
corresponding to the faces making up P . Let us refer to this subgraph as the filled-in version
of P .
Step 3: Select a set of “artificial” terminals on the boundaries of parcels to achieve the following:
Figure 6: Each vertex of degree greater than 3 is replaced with a cycle of degree-three vertices
• for each filled-in parcel, there is a solution that is feasible with respect to original and
artificial terminals whose cost is at most that of the parcel’s boundary plus the cost
of the intersection of OPT with the filled-in parcel, and
• the union over all parcels of such feasible solutions is a feasible solution for the original
graph.
Step 4: Designate portals on the boundary of each brick.
Step 5: For each filled-in parcel P, find an optimal solution in the portal-connected graph, B+(P).
Output the union of these solutions.
Step 1 can be carried out in O(n log n) time [9]. Step 2 can be done in linear time via breadth-first search in the planar dual of MG, and then applying a "shifting" technique in the tradition of
Baker [1]. Step 3 uses the fact that each parcel’s boundary consists of edge-disjoint, noncrossing
cycles. If such a cycle separates terminals, a vertex v on the cycle is designated an artificial terminal.
We set r(v) = 2 if the cycle separates terminals with requirement 2 and r(v) = 1 otherwise. Under
this condition, any feasible solution for the original graph must cross the cycle; by adding the
edges of the cycle, we get a feasible solution that also spans the artificial terminal. Step 3 can be
trivially implemented in linear time. Step 5 is achieved in linear time using dynamic programming
(Section 6).
5 Proof of the Structure Theorem
We are now ready to prove the Structure Theorem for {0, 1, 2}-edge connectivity, Theorem 4.6. In
order to formalize the notion of connectivity across the boundary of a brick, we use the following
definition:
Definition 5.1 (Joining vertex). Let H be a subgraph of G and P be a subpath of ∂G. A joining
vertex of H with P is a vertex of P that is the endpoint of an edge of H − P .
We will use the following structural lemmas in simplifying OPT. The first two were used in
proving a Structure Theorem for the Steiner tree PTAS [9]; in these, T is a tree and P is an ε-short path on the boundary of the graph in which T and P are embedded. The third is in fact a generalization of the second lemma that we require for maintaining two-connectivity.
Lemma 5.2 (Simplifying a tree with one root, Lemma 10.4 [9]). Let r be a vertex of T. There is another tree T̂ that spans r and the vertices of T ∩ P such that c(T̂) ≤ (1 + 4ε)c(T) and T̂ has at most 11 · ε−1.45 joining vertices with P.
Lemma 5.3 (Simplifying a tree with two roots, Lemma 10.6 [9]). Let p and q be two vertices of T. There is another tree T̂ that spans p and q and the vertices of T ∩ P such that c(T̂) ≤ (1 + c1ε)c(T) and T̂ has at most c2 · ε−2.5 joining vertices with P, where c1 and c2 are constants.
Lemma 5.4. Let F be a set of non-crossing trees whose leaves are vertices of ε-short boundary paths P and Q and such that each tree in the forest has leaves on both these paths. There is a cycle or empty set Ĉ, a set F̂ of trees, and a mapping φ : F → F̂ ∪ {Ĉ} with the following properties:
• For every tree T in F, φ(T) spans T's leaves.
• For two trees T1 and T2 in F, if φ(Ti) ≠ Ĉ for at least one of i = 1, 2, then φ(T1) and φ(T2) are edge-disjoint (taking into account edge multiplicities).
• The subgraph (∪F̂) ∪ Ĉ has o(ε−2.5) joining vertices with P ∪ Q.
• c(Ĉ) + Σ{c(T) : T ∈ F̂} ≤ 3c(Q) + (1 + d · ε) Σ{c(T) : T ∈ F}, where d is an absolute constant.
Proof. View the embedding of the boundary such that P is on top and Q is at the bottom. Let T1, . . . , Tk be the trees of F ordered according to the order of their leaves from left to right. There are two cases.
Case 1) k > 1/ε. In this case, we reduce the number of trees by incorporating a cycle Ĉ. Let a be the smallest index such that c(Ta) ≤ εc(F) and let b be the largest index such that c(Tb) ≤ εc(F). We will replace trees Ta, Ta+1, . . . , Tb with a cycle. Let Q′ be the minimal subpath of Q that spans the leaves of Ta ∪ · · · ∪ Tb on Q. We likewise define P′. Let L be the leftmost Q-to-P path in Ta and let R be the rightmost Q-to-P path in Tb. Since P is ε-short,
c(P′) ≤ (1 + ε)c(L ∪ Q′ ∪ R). (5)
To obtain F̂ from F, we replace the trees Ta, . . . , Tb with the cycle Ĉ = P′ ∪ L ∪ Q′ ∪ R and set φ(Ta), . . . , φ(Tb) to Ĉ. By construction Ĉ spans the leaves of Ta ∪ · · · ∪ Tb.
Case 2) k ≤ 1/ε. In this case, the number of trees is already bounded. We set a = 2, b = 1 so as to not eliminate any trees, and we set Ĉ to be the empty set.
In both cases, for each remaining tree Ti (i ≠ a, a + 1, . . . , b) we do the following. Let T′i be a minimal subtree of Ti that spans all the leaves of Ti on P and exactly one vertex r of Q. Let Q′i be the minimal subpath of Q that spans the leaves of Ti on Q. We replace Ti with the tree T̂i that is the union of Q′i and the tree guaranteed by Lemma 5.2 for tree T′i with root r and ε-short path P. By construction T̂i spans the leaves of Ti. We set φ(Ti) = T̂i for i ≠ a, . . . , b.
Ĉ has at most four joining vertices with P ∪ Q. Each tree T̂i has one joining vertex with Q and, by Lemma 5.2, o(ε−1.5) joining vertices with P. By the choice of a and b, there are at most 2/ε trees in the second part of the construction. This yields the bound on joining vertices.
The total cost of the replacement cycle is:
c(Ĉ) ≤ c(P′) + c(L) + c(Q′) + c(R)
≤ (2 + ε)(c(L) + c(Q′) + c(R))   by Equation (5)
≤ (2 + ε)(c(Ta) + c(Q′) + c(Tb))   since L and R are paths in Ta and Tb
≤ (2 + ε)(2εc(F) + c(Q′))   by the choice of a and b
≤ (4ε + 2ε²)c(F) + (2 + ε)c(Q′)
The total cost of the replacement trees is:
Σi c(T̂i) ≤ Σi (c(Q′i) + (1 + 4ε)c(T′i))   by Lemma 5.2
≤ Σi (c(Q′i) + (1 + 4ε)c(Ti))   since T′i is a subtree of Ti
where the sums are over i = 1, . . . , a − 1, b + 1, . . . , k. By the ordering of the trees and the fact that they are non-crossing, Q′ and the Q′i's are disjoint. Combining the above gives the bound on cost.
5.1 Construction of a new solution
We start with a brief overview of the steps used to prove the Structure Theorem, beginning with an edge multiset forming an optimal solution, OPT. Each step modifies either the input graph G or a subgraph thereof while simultaneously modifying the solution. The graphs and edge multisets
resulting from these steps are denoted by subscripts. Details are given in subsequent sections.
Augment We add two copies of each supercolumn, obtaining GA and OPTA . We consider the
two copies to be interior to the two adjacent bricks. This step allows us, in the restructure
step, to concern ourselves only with connectivity between the north and south boundaries of
a brick.
Cleave Cleaving a vertex refers to splitting it into two vertices and adding an artificial edge between
the two vertices. In the cleave step, we modify GA (and so in turn modify M GA and OPTA )
to create GC (and M GC and OPTC ) by cleaving certain vertices while maintaining a planar
embedding. Let JC be the set of artificial edges introduced. Note that XC /JC = XA . The
artificial edges are assigned zero cost so the metric between vertices is preserved. The artificial
edges are added to the solution, possibly in multiplicity, so connectivity is preserved.
Flatten In this step, for each brick B, we consider the intersection of the solution with int(B); we
replace some of the connected components of the intersection with subpaths of the boundary
of B. We denote the resulting solution by OPTF .
Map We map the edges of OPTF to B + (M G) creating OPTM . This step temporarily disconnects
the solution.
Restructure In this step, we modify the solution OPTM . For each brick B in M GC , the part of
OPTM strictly interior to B is replaced with another subgraph that has few joining vertices
with ∂B. We denote the resulting solution by OPTS .
Rejoin In order to re-establish connections broken in the Map step, we add edges to OPTS. Next, we contract the artificial edges added in the cleave step. We denote the resulting solution by ÔPT.
Note that the solutions OPT, OPTA , and so on are multisets; an edge can occur more than
once. We now describe these steps in greater detail.
5.1.1 Augment
Recall that a supercolumn is the eastern boundary of one brick and the western boundary of
another, and that the sum of costs of all supercolumns is small. In the Augment step, for each
supercolumn P , we modify the graph as shown in Figure 7:
• Add to the graph two copies of P , called P1 and P2 , creating two new faces, one bounded by
P1 and P and the other bounded by P and P2 .
• Add P1 and P2 to OPT.
The resulting graph is denoted GA, and the resulting solution is denoted OPT′A. We consider P1 and P2 to be internal to the two bricks. Thus P remains part of the boundary of each of the bricks, and MG contains P but not P1 or P2. Since P1 and P2 share no internal vertices with P, the joining vertices of OPT′A ∩ B with ∂B belong to NB and SB.
(a)
(b)
Figure 7: Adding the column between two adjacent bricks (solid) in the augment step. The dotted
edges represent OPT in (a) and OPTA in (b).
We perform one more step, a minimality-achieving step:
• We remove edges from OPT′A until it is a minimal set of edges achieving the desired connectivity between terminals.
Let OPTA be the resulting set. We get:
Lemma 5.5. For every brick B, the joining vertices of OPTA ∩ B with ∂B belong to NB and SB .
5.1.2 Cleave
We define a graph operation, cleave. Given a vertex v and a bipartition A, B of the edges incident
to v, v is cleaved by
• splitting v into two vertices, vA and vB ,
• mapping the endpoint v of edges in A to vA ,
• mapping the endpoint v of edges in B to vB, and
• introducing a zero-cost edge ev = vA vB .
This operation is illustrated in Figure 8(a) and (b). If the bipartition A, B is non-interleaving
with respect to the embedding’s cycle of edges around v then the construction maintains a planar
embedding.
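In code, a cleave is a purely local split. A sketch (ours): the bipartition is given as the set A of incidences to move to vA, the graph is a multigraph without self-loops at v, and planarity is preserved only when A is non-interleaving around v.

import networkx as nx

def cleave(G: nx.MultiGraph, v, A):
    # Split v into vA and vB: incidences listed in A (as (neighbor, key) pairs)
    # are reattached to vA, all others to vB, and the artificial zero-cost
    # edge ev = vA-vB is added.
    vA, vB = (v, "A"), (v, "B")
    incident = list(G.edges(v, keys=True, data=True))
    G.remove_node(v)
    for _, u, key, data in incident:
        side = vA if (u, key) in A else vB
        G.add_edge(side, u, **data)
    G.add_edge(vA, vB, weight=0.0, artificial=True)
    return vA, vB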
Figure 8: Cleavings illustrated. The bipartition of the edges incident to v is given by the dashed
edges A and solid edges B. (a) Starting with this bipartition, (b) the result of cleaving vertex
v according to this bipartition. A simplifying cleaving of vertex v with respect to a cycle (bold)
before (c) and after (d). A lengthening cleaving of a cycle (e) before and (f) after.
We use two types of cleavings:
Simplifying cleavings Refer to Figures 8(c) and (d). Let C be a clockwise non-self-crossing,
non-simple cycle that visits vertex v twice. Define a bipartition A, B of the edges incident
to v as follows: given the clockwise embedding of the edges incident to v, let A start and
end with consecutive edges of C and contain only two edges of C. Such a bipartition exists
because C is non-self-crossing.
Lengthening cleavings Refer to Figures 8(e) and (f). Let C be a cycle, let v be a vertex on C with two edges eA and eB adjacent to v embedded strictly inside C, and let e′A and e′B be consecutive edges of C adjacent to v such that the following bipartition is non-crossing with respect to the embedding: A, B is a bipartition of the edges adjacent to v such that eA, e′A ∈ A and eB, e′B ∈ B.
We perform simplifying cleavings for non-simple cycles of OPTA until every cycle is simple; the
artificial edges introduced are not included in OPT. The following lemma does not use planarity
and shows that (since cycles get mapped to cycles in this type of cleaving) simplifying cleavings
preserve two-edge connectivity.
Lemma 5.6. Let e be an edge in a graph H. Let Ĥ be the graph obtained from H by a simplifying cleaving. Then e is a cut-edge in H iff it is a cut-edge in Ĥ.
Proof. Let u1, u2 be the endpoints of e and let C be the cycle w.r.t. which a simplifying cleaving was performed. If H contains an e-avoiding ui-to-C path for i = 1, 2 then e is not a cut-edge in H, and similarly for Ĥ. Suppose therefore that removing e separates ui from C in H. Then the same is true in Ĥ, and conversely.
Corollary 5.7. For k = 1, 2, if two vertices are k-edge connected in H then any of their copies
are k-edge connected in HC .
Moreover, after all the simplifying cleavings, every cycle is simple, so:
Lemma 5.8. Vertices that are two-edge-connected in OPTC are biconnected.
Next we perform lengthening cleavings w.r.t. the boundary of a brick and edges eA and eB
of OPTC ; we include in OPTC all the artificial zero-cost edges introduced. Lengthening cleavings
clearly maintain connectivity. Suppose that vertices x and y are biconnected in OPTC , and consider
performing a lengthening cleaving on a vertex v. Since there are two internally vertex-disjoint x-to-y paths in OPTC, v cannot appear on both of them. It follows that there remain two internally
vertex-disjoint x-to-y paths after the cleaving. We obtain the following lemma.
Lemma 5.9. Lengthening cleavings maintain biconnectivity.
Lengthening cleavings are performed while there are still multiple edges of the solution embedded in a brick that are incident to a common boundary vertex. Let JC be the set of artificial
edges that are introduced by simplifying and lengthening cleavings. We denote the resulting graph
by GC , we denote the resulting mortar graph by M GC , and we denote the resulting solution by
OPTC .
As a result of the cleavings, we get the following:
Lemma 5.10. Let B be a brick in GC with respect to M GC . The intersection OPTC ∩ int(B) is
a forest whose joining vertices with ∂B are the leaves of the forest.
Proof. Let H be a connected component of OPTC ∩int(B). As a result of the lengthening cleavings,
the joining vertices of H with ∂B have degree 1 in H. Suppose otherwise; then there is a vertex
v of H ∩ ∂B that has degree > 1 in H. Hence v is a candidate for a lengthening cleaving, a
contradiction.
By Theorem 2.5 and the minimality-achieving step of the Augment step, any cycle in H must include a terminal u with r(u) = 2. Since there are no terminals strictly enclosed
by bricks, u must be a vertex of ∂B. However, that would make u a joining vertex of H with ∂B.
As argued above, such vertices are leaves of H, a contradiction to the fact that u is a vertex of a
cycle in H. Therefore H is acyclic.
Furthermore, leaves of H are vertices of ∂B since OPTC is minimal with respect to edge inclusion
and terminals are not strictly internal to bricks.
Lemma 5.11. Let C be a cycle in OPTC . Let B be a brick. Distinct connected components of
C ∩ int(B) belong to distinct components of OPTC ∩ int(B).
Proof. Assume the lemma does not hold. Then there is a C-to-C path P in int(B). Each vertex
of P that is strictly interior to B is not a terminal. A vertex of P that was on ∂(B) would be
a candidate for a lengthening cleaving, a contradiction. Therefore P includes no terminals. This
contradicts Theorem 2.6.
5.1.3 Flatten
For each brick B, consider the edges of OPTC that are strictly interior to B. By Lemma 5.10,
the connected components are trees. By Lemma 5.5, for each such tree T , every leaf is either on
B’s northern boundary NB or on B’s southern boundary SB . For each such tree T whose leaves
are purely in NB , replace T with the minimal subpath of NB that contains all the leaves of T .
Similarly, for each such tree T whose leaves are purely in SB , replace T with the minimal subpath
of SB that contains all the leaves of T .
Let OPTF be the resulting solution. Note that OPTF is a multiset. An edge of the mortar
graph can appear with multiplicity greater than one.
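Once a boundary path is given as a vertex list, the replacement performed by the flatten step is immediate (a sketch, ours):

def flatten_tree(boundary_path, leaves):
    # Replace a tree whose leaves all lie on N_B (or all on S_B) by the
    # minimal subpath of that boundary path spanning the leaves.
    idx = {v: i for i, v in enumerate(boundary_path)}
    lo = min(idx[v] for v in leaves)
    hi = max(idx[v] for v in leaves)
    return boundary_path[lo:hi + 1]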
5.1.4 Map
This step is illustrated in Figures 9(a) and (b). In this step, the multiset OPTF of edges resulting
from the flatten step is used to select a set OPTM of edges of B + (M GC ). Recall that every edge e
of M GC corresponds in B + (M GC ) to three edges: two brick copies (one in each of two bricks) and
one mortar-graph copy. In this step, for every edge e of M GC , we include the mortar-graph copy
of e in OPTM with multiplicity equal to the multiplicity of e in OPTF . At this point, none of the
brick copies are represented in OPTM .
Next, recall that in the augment step, for each supercolumn P , we created two new paths, P1
and P2 , and added them to OPT. The edges of these two paths were not considered part of the
mortar graph, so mortar-graph copies were not included in OPTM for these edges. Instead, for
each such edge e, we include the brick copy of e in OPTM with multiplicity equal to the multiplicity
of e in OPTF .
Finally, for each edge e interior to a brick, we include e in OPTM with the same multiplicity
as it has in OPTF .
Figure 9: Map (a) The intersection of OPTC with a brick, dashed. (b) The same brick in the
portal connected graph with portal edges (double-lines) connecting the brick to the corresponding
face (outer boundary) of the mortar graph.
5.1.5 Restructure
Let B be a brick. For simplicity, we write the boundary paths of B as N, E, S, W . Let F be the
multiset of edges of OPTM that are in the interior of B. F is a forest (Lemma 5.10). As a result of
the flatten step, each component of F connects S to N. We will replace F with another subgraph F̂ and map each component T of F to a subgraph φ(T) of F̂ where φ(T) spans the leaves of T and is a tree or a cycle. Distinct components of F are mapped by φ to edge-disjoint subgraphs (taking
into account multiplicities).
Refer to Figure 10. We inductively define S-to-N paths P0 , P1 , . . . and corresponding integers
k0 , k1 , . . .. Let s0 , . . . , st be the vertices of S guaranteed by Lemma 4.2 (where s0 is the vertex
common to S and W and st is the vertex common to S and E). Let P0 be the easternmost path
in F from S to N. Let k0 be the integer such that start(P0) is in S[sk0, sk0+1). Inductively, for i ≥ 0, let Pi+1 be the easternmost path in F from S[s0, ski) to N that is vertex-disjoint from Pi. Let ki be the integer such that start(Pi) ∈ S[ski, ski+1). This completes the inductive definition of P0, P1, . . .. Note that the number of paths is at most t, which in turn is at most κ(ε) as defined in Equation (2).
We use these paths to decompose F , as illustrated in Figure 10. Let Fi be the set of edges of
F − Pi+1 enclosed by the cycle formed by Pi , Pi+1 , N and S. Clearly F = ∪i Fi . If Pi is connected
to Pi+1 , they share at most one vertex, wi . If they are not connected, we say wi is undefined.
There are two cases: either Fi is connected or not.
Figure 10: Paths used to decompose F . The brick boundary is given by the rectangle. The paths
P0 , P1 , . . . are bold. Fi is given by the shaded background.
Connected case:
There are two subcases. Either Fi spans vertices of S[·, ski ) or not.
Suppose Fi spans vertices of S[·, ski ). Let TS be a minimal subtree of Fi that spans Fi ∩ S and
let TN be a minimal subtree of Fi that spans Fi ∩ N . Let rN be the first vertex of Pi in TN and let
rS be the last vertex of Pi in TS . A path in Fi from TS to N that does not go through rS contradicts
the choice of Pi+1 as there would be a path in Fi from S[·, ski ) to N that is disjoint from Pi . It
follows that TS and TN are edge disjoint: if they intersect they may only do so at rS = rN . If wi
is defined, then there is a path Q from wi to Pi ; Q intersects Pi between rN and rS , for otherwise
there would be a superpath of Q that contradicts the choice of Pi+1 .
If wi−1 is defined and wi−1 ∈ TN , then we replace TN with the tree guaranteed by Lemma 5.3
with roots rN and wi−1 . Otherwise, we replace TN with the tree guaranteed by Lemma 5.2 with
root rN . We do the same for TS .
Suppose Fi does not span vertices of S[·, ski ). Let TN be a minimal connected subgraph of
Fi ∪ S[ski , start(Pi )] that spans Fi ∩ N . Let rN be the first vertex of Pi in TN . If wi is defined, then
there is a path Q from wi to Pi ∪ S[ski , start(Pi )] and Q’s intersection with Pi belongs to Pi [·, rN ],
for otherwise there would be a superpath of Q that contradicts the choice of Pi+1 . If wi−1 is defined
and wi−1 ∈ TN , then we replace Fi with the tree guaranteed by Lemma 5.3 with roots rN and wi−1
along with Q, Pi [·, rN ], and S[ski , start(Pi )]. Otherwise we replace Fi with the tree guaranteed by
Lemma 5.2 with root rN along with Q, Pi [·, rN ], and S[ski , start(Pi )].
In both cases, we define φ′(Fi) to be the resulting tree that replaces Fi. By construction, φ′(Fi) spans the leaves of Fi and wi−1 and wi (if defined).
Disconnected case: In this case, by the definition of Pi+1, Fi ∩ S is a subset of the vertices of S[ski, start(Pi)], for otherwise there would be a path to the right of Pi+1 that connects to N and is disjoint from Pi.
If Fi is connected to Fi+1, then the westernmost tree TW is a tree with root wi and leaves on S and does not connect to N as that would contradict the choice of Pi+1; if this is the case, let T̂W be the tree guaranteed by Lemma 5.2 and define φ′(TW) = T̂W.
If Fi is connected to Fi−1, let S′ be the subpath of S that spans the easternmost tree TE's leaves on S. Let T̂E be the tree guaranteed by Lemma 5.3 that spans the easternmost tree's leaves on N and roots wi−1 and start(Pi) and define φ′(TE) = T̂E.
Let F be the set of remaining trees, let P = N, and let Q = S[ski, start(Pi)] in Lemma 5.4. Let Ĉ, F̂, and φ be the cycle (or empty set), set of trees, and mapping that satisfy the properties stated in the lemma.
We define F̂i to consist of the trees of F̂ and the cycle Ĉ (and T̂W, S′ and T̂E if defined).
We replace every Fi with F̂i, as described above, for every brick, creating OPTS. This is illustrated in Figure 11(a). Now we define φ in terms of φ′. A component T of F is partitioned into adjacent trees in this restructuring: namely T1, . . . , Tk, k ≥ 1. T1 and Tk may be restructured via the disconnected case and all others are restructured via the connected case. Define φ(T) = ∪ki=1 φ′(Ti). If k > 1, then consecutive trees Ti and Ti+1 share a vertex wi and by construction φ′(Ti) and φ′(Ti+1) also share this vertex. Since each φ′(Ti) spans the leaves of Ti, we get that φ(T) spans the leaves of T, as desired. Also by construction, the submapping of φ′ of trees to trees (and not cycles) is bijective; the same holds for φ.
Number of joining vertices In both the connected and disconnected case, the number of leaves is the result of a constant number of trees resulting from Lemmas 5.2, 5.3 and 5.4. Therefore, F̂i has o(ε−2.5) joining vertices with N and S. Since i ≤ κ(ε) = O(ε−3), OPTS has o(ε−5.5) joining vertices with the boundary of each brick. This is the number of connections required to allow a solution to be nearly optimal and affects the number of portals required in Equation (3):
α(ε) is o(ε−5.5). (6)
This will allow us to prove the second part of the Structure Theorem.
Figure 11: Continuing from Figure 9, restructure and rejoin: (a) Restructured version (dark grey) of the intersection of OPTM with the brick (dashed). (b) Connecting the restructured solution inside the brick to the mortar graph through portals (via dark grey edges).
5.1.6  Rejoin
In this step, we make inter-brick connections for parts that were disconnected in the mapping step.
Since joining vertices represent the ends of all disconnected parts, it suffices to connect joining
vertices of OPTS with ∂B to their mortar-graph counterparts via portal edges.
This is illustrated in Figure 11(b): we first move the edges of OPTS ∩ ∂B to MG for every brick B (such edges may have been introduced in the restructure step), and then, for every brick B, we connect OPTS ∩ B to the mortar graph. For every joining vertex v of OPTS ∩ B, we find the nearest portal pv, add the subpath of ∂B and MG connecting v and pv, and add the portal edge corresponding to pv. (We need at most two copies of each portal edge.) Finally we contract the edges introduced in the cleaving step. This produces a solution ÔPT of B+(MG).
5.2  Analysis of connectivity
In the augment step, because the added paths P1 and P2 form a cycle, this transformation preserves
two-connectivity and connectivity between terminals. Cleaving clearly preserves connectivity and,
by Lemmas 5.8 and 5.9, terminals that require two-connectivity are biconnected in OPTC . Therefore, for terminals x and y requiring connectivity, there is a path PC in OPTC connecting them. If
x and y require two-connectivity, there is a simple cycle CC in OPTC connecting them. We follow
PC and CC through the remaining steps.
Flatten Consider a tree T that is replaced by a subpath Q of a northern or southern boundary
of a brick that spans T ’s leaves. Q spans any terminals that T spanned, since there are no
terminals internal to bricks.
PC ∩ T is therefore a (set of) leaf-to-leaf paths and so (PC − T ) ∪ Q contains an x-to-y path.
It follows that there is an x-to-y path PF in OPTF .
By Lemma 5.11, CC ∩ T is a single path and, by the above reasoning, is a leaf-to-leaf path.
Therefore (CC − T ) ∪ Q contains a cycle through x and y. It follows that there is a cycle CF
through x and y in OPTF.

Map  PF (CF) gets mapped to a sequence PM = (PM^1, PM^2, . . .) (a cyclic sequence CM = (CM^1, CM^2, . . .)) of paths of OPTM in B+(MGC) such that each path either consists completely of mortar-graph edges or consists completely of brick-copy edges. The last vertex of one path and the
first vertex of the next path are copies of the same vertex of GC , and that vertex belongs to
a north or south boundary of M GC . By Lemma 5.10, each path in PM or CM that consists
of brick-copy edges starts and ends at the northern or southern boundary of a brick.
Restructure We define a mapping φ̂, based in part on the map φ defined in Section 5.1.5. For
a path Q in PM or CM that uses mortar-copy edges, define φ̂(Q) = Q. For a path Q in PM
or CM that uses brick-copy edges, let T be the tree in OPTM that contains Q and define
φ̂(Q) = φ(T ).
Let CS be the cyclic sequence of trees and cycles to which CM maps by φ̂. Since φ(T) spans the leaves (joining vertices) of T, consecutive trees/cycles in CS contain copies of the same vertex. By Lemma 5.11, and the fact that, within a brick, the tree-to-tree submapping of φ is bijective, the trees of CS are edge disjoint. (The cycles may be repeated.)
Likewise we define PS having the same properties except for the fact that the sequence is not
cyclic.
Rejoin This step reconnects PS and CS .
Consider a tree T in either of these sequences that contains brick-copy edges. The rejoin step
first moves any edge of T that is in a brick boundary to the mortar copy. It then connects
joining vertices to the mortar by way of detours to portals and portal edges. Therefore, T is
mapped to a tree TJ that connects the mortar copies of T ’s leaves.
Consider a cycle C in either PS or CS. C contains subpaths of N and S whose edges are moved to their mortar copies and two N-to-S paths whose joining vertices are connected to their mortar copies. Therefore C is mapped to a cycle CJ through the mortar copies of the boundary vertices of C.
Let the resulting sequence and cyclic sequence of trees and cycles be PJ and CJ . Since
consecutive trees/cycles in CS contain copies of the same vertex, in CJ consecutive trees/cycles
contain common mortar vertices. We have that CJ is a cyclic sequence of trees and cycles,
through x and y, the trees of which are edge-disjoint. Therefore the union of these contains
a cycle through x and y.
Similarly, we argue that the union of the trees and cycles in PJ contains an x-to-y path.
5.3  Analysis of cost increase
By Lemma 4.3, the total cost of all the east and west boundaries of the bricks is an ε fraction of OPT, so we have

c(OPTA) ≤ (1 + 2ε) c(OPT).     (7)
The cleaving step only introduces edges of zero cost, so
c(OPTC ) = c(OPTA ).
(8)
The flatten step replaces trees by ε-short paths, and so can only increase the cost by an ε fraction, giving:

c(OPTF) ≤ (1 + ε) c(OPTC).     (9)
The mapping step does not introduce any new edges, so
c(OPTM ) = c(OPTF ).
(10)
The restructure step involves replacing disjoint parts of OPTM ∩ B for each brick by applying Lemmas 5.2, 5.3, and 5.4. This increases the cost of the solution by at most an O(ε) fraction. Further, we add subpaths of S[ski, start(Pi)] where OPTM contains disjoint subpaths Pi from start(Pi) to N, and by the brick properties, c(S[ski, start(Pi)]) ≤ ε c(Pi). This increases the cost of the solution by at most another O(ε) fraction. We get, for some constant c:

c(OPTS) ≤ (1 + cε) c(OPTM).     (11)
In the rejoin step, we add two paths connecting a joining vertex to its nearest portal: along the mortar graph and along the boundary of the brick. The cost added per joining vertex is at most twice the interportal cost: at most 2c(∂B)/θ for a joining vertex with the boundary of brick B, by Lemma 4.4. Since each mortar graph edge appears as a brick-boundary edge at most twice, the sum of the costs of the boundaries of the bricks is at most 18 ε^{-1} OPT (Equation (1)). Since there are α(ε) joining vertices of OPTS with each brick, the total cost added due to portal connections is at most 36 (α(ε)/θ(ε)) ε^{-1} OPT. Replacing α and θ via Equations (6) and (3) gives us that the total cost added due to portal connections is

O(ε c(OPT))     (12)
Combining Equations (7) through (12), c(ÔPT) ≤ (1 + c′ε) c(OPT) for some constant c′, proving Theorem 4.6.
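For concreteness, the chaining of Equations (7) through (12) can be written out as follows (a sketch; the final constant c′ simply absorbs all lower-order cross terms):

    \begin{align*}
    c(\widehat{\mathrm{OPT}})
      &\le c(\mathrm{OPT}_S) + O(\epsilon)\,c(\mathrm{OPT})                           && \text{(12)}\\
      &\le (1+c\epsilon)\,c(\mathrm{OPT}_M) + O(\epsilon)\,c(\mathrm{OPT})            && \text{(11)}\\
      &= (1+c\epsilon)\,c(\mathrm{OPT}_F) + O(\epsilon)\,c(\mathrm{OPT})              && \text{(10)}\\
      &\le (1+c\epsilon)(1+\epsilon)\,c(\mathrm{OPT}_C) + O(\epsilon)\,c(\mathrm{OPT}) && \text{(9)}\\
      &\le (1+c\epsilon)(1+\epsilon)(1+2\epsilon)\,c(\mathrm{OPT}) + O(\epsilon)\,c(\mathrm{OPT}) && \text{(7), (8)}\\
      &\le (1+c'\epsilon)\,c(\mathrm{OPT}).
    \end{align*}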
6  Dynamic Program
In this section we show that there is a dynamic program that finds an approximately optimal
solution in the portal-connected graph B + (P ).
6.1  Structure of solution
We start by showing that we can restrict our attention to solutions whose intersection with each
brick is a small collection of trees whose leaves are portals.
Lemma 6.1. Let S be a minimal solution to the instance (B + (M G), r) of the {0, 1, 2}-edge connectivity problem. Let B be a brick, and suppose that the intersection of S with B is the union of
a set of non-crossing trees whose leaves are portals. Then the number of such trees is at most three
times the number of portals.
Proof. Let the portals be v1 , . . . , vk in counterclockwise order. Each tree induces a subpartition on
the set of pairs {(i, i + 1) : i ∈ {1, . . . , k − 1}} as follows: if the leaves of the tree are vi1 , vi2 , . . . , vip
where i1 < i2 < · · · < ip , then the parts are
{(i1 , i1 + 1), . . . , (i2 − 1, i2 )}, {(i2 , i2 + 1), . . . , (i3 − 1, i3 )}, . . . , {(ip−1 , ip−1 + 1), . . . , (ip − 1, ip )}
Because the trees are non-crossing, the corresponding subpartitions are non-crossing as well: for
each pair T1 , T2 of trees, either each part of T2 is a subset of some part of T1 , or vice versa.
Therefore the trees themselves form a rooted tree T according to the nesting relation of the
corresponding subpartitions. Furthermore, by minimality of S, no three trees have the same sets
of leaves. This shows that a node of T with only one child has a parent with at least two children.
The number of leaves of T is at most the number of pairs (i, i + 1), which is at most k − 1. This
shows that the number of trees is at most 3(k − 1).
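The subpartition construction in this proof is easy to make concrete; here is a small sketch in Python (the input format, each tree as its sorted list of leaf portal indices, is our own assumption):

    def gap_parts(leaves):
        # Leaves i_1 < ... < i_p of one tree; returns the parts of the
        # induced subpartition of the gap pairs (i, i+1) from the proof.
        return [[(i, i + 1) for i in range(a, b)]
                for a, b in zip(leaves, leaves[1:])]

    # A tree with leaves v2, v4, v5 among portals v1..v6:
    print(gap_parts([2, 4, 5]))   # [[(2, 3), (3, 4)], [(4, 5)]]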
Corollary 6.2. There exists a function f that maps each brick B to a cardinality-at-most-3θ subpartition f(B) of the portals of B with the following property: for each brick B and each part P of f(B), let TP be any minimum-cost tree in B whose leaves are P, and let HB be the union ∪_{P∈f(B)} TP. Then there is a feasible solution S′ to the instance (B+(MG), r) such that
• the cost of S′ is at most (1 + cε)OPT where c is an absolute constant, and
• the intersection of S′ with any brick B is HB.
Proof. Theorem 4.6 states that there is a feasible solution S of cost at most (1 + cε)OPT such that the intersection of S with any brick B is the multi-set union of a family TB of non-crossing trees whose leaves are portals. We assume that S is a minimal feasible solution. Lemma 6.1 shows that |TB| ≤ 3θ.
Now we construct the multi-subgraph S′ from S. For each brick B, we replace each tree in TB with a minimum-cost tree having the same leaves. Clearly the cost of S′ is at most that of S. It remains to show that S′ is a feasible solution.
Let u and v be two terminals. Let P or C be a minimal u-to-v path or a minimal cycle containing u and v in B+(MG) using edges of S. We obtain a u-to-v path P′ in S′ or a cycle C′ in S′ containing u and v as follows. For each brick B, the intersection of P or C with B is a set of paths P1, . . . , Pk through the trees in TB such that each path Pi starts and ends on a portal. For each path Pi, the tree in TB connecting the endpoints of Pi is replaced in S′ by another tree that includes the same endpoints, so that tree contains a path Pi′ with the same endpoints. We replace each path Pi with the path Pi′. Let P′ and C′ be the subgraphs obtained by performing these transformations for each brick B. The transformations ensure that P′ and C′ are in S′. The path P′ shows that u and v are connected in S′.
For a cycle C in S, we still need to show that C′ is a cycle. In particular, we need to show that, for each brick B, the paths P1, . . . , Pk forming the intersection of C with B all belong to different trees in TB. Assume for a contradiction that Pi and Pj belong to a single tree T ∈ TB. By the assumption that the degree is at most three, Pi and Pj cannot share a vertex. Therefore there is a Pi-to-Pj path in T containing at least one edge. However, since C is a cycle containing Pi and Pj, an edge of the Pi-to-Pj path can be removed while preserving feasibility, a contradiction.
Figure 12: The spanning tree of the dual of the parcel is transformed to be a spanning tree of
the planar dual of B ÷ (P ). For each brick, all but one of the portal edges is included in the new
spanning tree.
6.2  The tree used to guide the dynamic program
Recall that B÷(P) is the brick-contracted graph, in which each brick of B+(P) is contracted to a vertex. Recall that a parcel P is a subgraph of MG and defines a set of bricks contained by the faces of P. The planar dual of the parcel has a spanning tree T of depth η + 1. The algorithm transforms T into a spanning tree T′ of the planar dual of B÷(P) as follows. Each brick is a vertex of T; replace that vertex with a cycle minus one edge consisting of the duals of the portal edges, as shown in Figure 12.
Since each brick has at most θ portal edges, it follows that the spanning tree T′ of the dual of B÷(P) has depth at most (η + 1)θ. Next, the algorithm defines T̂ to be the set of edges of B÷(P) whose duals are not in T′. A classical result on planar graphs implies that T̂ is a spanning tree of B÷(P). The construction ensures that each vertex of B÷(P) that corresponds to a brick has only one incident edge in T̂. By our assumption that each vertex in the input graph has degree at most three, every vertex of B÷(P) that appears in G has degree at most three in T̂. Thus T̂ has degree at most three. The bound on the depth of T′ ensures that, for each vertex v of T̂, the graph B÷(P) contains at most 2(η + 1)(θ + 1) + 1 edges between descendants of v and non-descendants.
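The interdigitation fact used here, namely that the primal edges whose duals lie outside T′ form a spanning tree, can be checked directly. A minimal sketch in Python, under the assumption that each primal edge of B÷(P) is listed together with a flag saying whether its dual is in T′ (the input format is hypothetical):

    class DSU:
        # Union-find, to certify that the complementary edge set is acyclic
        # and spanning.
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return False
            self.parent[ra] = rb
            return True

    def complement_tree(n, edges, dual_in_T1):
        # edges: list of (u, v) pairs of B÷(P); dual_in_T1[i] flags whether
        # edge i's dual belongs to the dual spanning tree T'.
        dsu = DSU(n)
        t_hat = [e for i, e in enumerate(edges) if not dual_in_T1[i]]
        assert all(dsu.union(u, v) for u, v in t_hat), "complement has a cycle"
        assert len(t_hat) == n - 1, "complement does not span"
        return t_hat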
6.3  The dynamic programming table
The algorithm considers the spanning tree T̂ of B÷(P) as rooted at an arbitrary leaf. By our assumption that the input graph has degree three, each vertex v of T̂ has at most two children. Let T̂(v) denote the subtree of T̂ rooted at v. For each vertex v of T̂, define

    f(v) = B if v is the result of contracting a brick B, and f(v) = v otherwise,

and define W(v) to be the subgraph of B+(P) induced by ∪ {f(w) : w ∈ T̂(v)}. Let δ(S) be the subset of edges with exactly one endpoint in the subgraph S (i.e. a cut). It follows that the cut δ(W(v)) in B+(P) is equal to the cut δ(T̂(v)) in B÷(P) and so has O(θη) edges. The dynamic programming table will be indexed over configurations, defined below.
6.3.1  Configurations
Let L = δ(H) for a connected, vertex-induced subgraph H of B+(P). We define a configuration KL corresponding to L, illustrated in Figure 13. First, for each edge e ∈ L, we define two edges e¹ and e² to reflect the fact that an edge can be used twice. Let L̂ be the resulting set of edges, L̂ = ∪_{e∈L} {e¹, e²}. A configuration is a forest with no degree-2 vertices whose leaf edges are a subset of L̂, and such that, for each edge e ∈ L, if the forest contains both e¹ and e² then these two edges are incident to the same vertex in the forest, together with a {1, 2} labeling of the internal vertices. We denote the set of all configurations on edge set L by KL.
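As a concrete data structure, a configuration might be stored as follows; a sketch in Python, with all names our own invention:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Configuration:
        # Leaf edges: pairs (e, 1) / (e, 2), the two copies of a cut edge e.
        leaf_edges: frozenset
        # Forest edges between internal vertices (no degree-2 vertices).
        forest_edges: frozenset
        # (vertex, label) pairs giving the {1, 2} label of each internal vertex.
        labels: tuple = ()

        def copies_of(self, e):
            # Which copies of cut edge e the configuration uses.
            return {c for (f, c) in self.leaf_edges if f == e}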
Figure 13: (a) The cut edges (black) of a subgraph H (shaded). (b) A configuration (dotted forest) for L = δ(H). (c) A subgraph (bold) that meets the configuration.
Lemma 6.3. The number of configurations for H where n = |δ(H)| is at most 16^n (2n)^{2n−2}, and they can be computed in O(16^n (2n)^{2n−2}) time.
Proof. A configuration can be selected as follows. First, for each of the n edges, decide whether edge e¹ is to be included and whether e² is to be included. (There are 4^n choices.) Let n′ be the number of edges e for which either e¹ or e² is to be included. Next, select a tree with the selected edges as leaf edges. It follows from Cayley's formula that the number of such trees is (2n′)^{2n′−2}, which is at most (2n)^{2n−2}. Next, delete some subset of non-leaf edges of the tree. There are at most 2^n ways of selecting such a subset. Finally, select a {1, 2} labeling of the internal vertices. There are at most 2^n such labelings.
The set of trees can be computed in O(1) amortized time per tree [22].
Connecting A configuration KL ∈ KL is connecting if, for each internal vertex v of KL , if v has
label c then KL contains c paths from v to leaves.
Compatibility  Configurations KA ∈ KA and KB ∈ KB are compatible if for every edge e ∈ Â ∩ B̂, either e ∈ KA ∩ KB or e ∉ KA ∪ KB.
Compressed bridge-block forest For a graph G, a block is a maximal subgraph such that, for
every two vertices u and v in the subgraph, there are two edge-disjoint u-to-v paths. Contracting
each block of G to a single vertex yields the bridge-block forest of G. It is easy to see that it is
indeed a forest. An edge e of G is an edge of this forest if e is a bridge of G, i.e. if the endpoints
of e are in different connected components of G − e.
We define the compressed bridge-block forest to be the forest obtained from the bridge-block forest by substituting an edge for each maximal path of internal degree two. We denote the compressed bridge-block forest of G by ẼC(G).
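The bridges, and from them the bridge-block forest, can be found in linear time with the classical low-point DFS; a compact Python sketch:

    import sys

    def bridges(n, adj):
        # Bridges of an undirected (multi)graph via Tarjan's low-point DFS.
        # adj[u] is a list of (neighbor, edge_id) pairs; parallel edges get
        # distinct ids, so a doubled edge is (correctly) never a bridge.
        sys.setrecursionlimit(max(10_000, 2 * n + 10))
        disc = [-1] * n
        low = [0] * n
        out, timer = set(), [0]

        def dfs(u, parent_eid):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            for v, eid in adj[u]:
                if eid == parent_eid:
                    continue
                if disc[v] == -1:
                    dfs(v, eid)
                    low[u] = min(low[u], low[v])
                    if low[v] > disc[u]:   # nothing in v's subtree reaches above u
                        out.add(eid)
                else:
                    low[u] = min(low[u], disc[v])

        for s in range(n):
            if disc[s] == -1:
                dfs(s, None)
        return out

Contracting the connected components that remain after deleting these bridges yields the bridge-block forest; suppressing maximal internal degree-2 paths then gives ẼC(G).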
Consistency  We say a configuration KA is consistent with a set of mutually compatible configurations {KA1, KA2, . . .} if
• there is an isomorphism between KA and ẼC(∪i KAi) that preserves the identity of leaf edges of KA, and
• for each vertex x of ∪i KAi that is labeled 2, x is in a block of ∪i KAi that corresponds to a vertex in KA that is labeled 2.
Meeting Let H be a connected, vertex-induced subgraph of G. Let M be a minimal solution
to an instance (G, r). Let MH be the graph obtained from M as follows. Remove edges not in
H ∪ δ(H). Next, for each vertex v outside of H, if v has k incident edges, replace v with k copies
of v, one incident to each of these edges.
We say M meets a configuration Kδ(H) if ẼC(MH) = Kδ(H) and if, for each terminal x in H, MH either contains r(x) edge-disjoint paths to vertices outside of H or contains min{r(x), r(y)} edge-disjoint paths to every other terminal y.
DP table entry  The dynamic program constructs a table DPv for each vertex v of T̂. The table DPv is indexed by the configurations of δ(W(v)). We showed in Section 6.1 that we can restrict our attention to solutions with the following property:
the intersection with each brick is a cardinality-at-most-3θ collection of minimum-cost
trees whose leaves are portals.
For each configuration K of δ(W (v)), the entry DPv [K] is the minimum cost of a subgraph of
W (v) that meets configuration K and has the above property. We do not count the cost of edges
of δ(W (v)).
6.3.2  The filling procedure
If u is not a leaf of T̂, then we populate the entries of DPu with the procedure fill. We use the shorthand K for Kδ({u}) and Ki for Kδ(W(ui)). The cuts are with respect to the graph B+(P).

fill(DPu):
    Initialize each entry of DPu to ∞.
    Let u1, . . . , us be the children of u.
    For every set of connecting, mutually compatible configurations K, K1, . . . , Ks:
        For every connecting configuration K0 that is consistent with K, K1, . . . , Ks:
            cost ← c(K ∩ (∪_{i=1}^{s} Ki)) + (c(K1 ∩ K2) if s = 2 else 0) + Σ_{i=1}^{s} DPui[Ki]
            DPu[K0] ← min{DPu[K0], cost}
Since T̂ has degree at most three, the number s of children of each vertex u in T̂ is at most two.
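A direct transcription of fill into Python (a sketch: the predicates connecting, compatible and consistent, and the local cost term, are assumed to be supplied as callables implementing the definitions above):

    import itertools
    from math import inf

    def fill(DP, u, children, star_configs, cut_configs,
             connecting, compatible, consistent, local_cost):
        # star_configs: configurations of the cut around {u};
        # cut_configs[x]: configurations of the cut around W(x);
        # the DP tables of the children are already filled (bottom-up order).
        DP[u] = {K0: inf for K0 in cut_configs[u]}
        for K in star_configs:
            for Ks in itertools.product(*(cut_configs[c] for c in children)):
                group = (K,) + Ks
                if not all(connecting(x) for x in group):
                    continue
                if not all(compatible(a, b)
                           for a, b in itertools.combinations(group, 2)):
                    continue
                # local_cost supplies c(K ∩ (∪ Ki)), plus c(K1 ∩ K2) if s = 2
                total = local_cost(K, Ks) + sum(DP[c][Ki]
                                                for c, Ki in zip(children, Ks))
                for K0 in cut_configs[u]:
                    if connecting(K0) and consistent(K0, group):
                        DP[u][K0] = min(DP[u][K0], total)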
If u is a leaf of T̂ and u does not correspond to a brick (i.e. f(u) = u), the problem is trivial. Each configuration K is a star: u is the center vertex, and the edges of δ({u}) are the edges of K. Since the cost of any subgraph that is induced by {u} is zero, the value of DPu[K] is zero for every K.
Suppose u is a leaf of T̂ and f(u) is a brick B. Recall that we restrict our attention to solutions whose intersection with B is a collection of at most 3θ minimum-cost trees whose leaves are portals.
The algorithm populates the table DPu as follows. Initialize each entry of DPu to ∞. Next, iterate
over cardinality-at-most-3θ families of subsets of the portals. For each family F,
• define a subgraph HF to be the multi-set union over subsets P in F of a minimum-cost tree
spanning P ,
• find the configuration K corresponding to HF , and
• set DPu [K] ← min{DPu [K], cost of HF }.
The minimum-cost tree spanning a subset of portals can be computed in O(θ^3 n) time using the algorithm of Theorem 2.1.
6.3.3  Running time
Consider the time required to populate DPu for all the leaves u of T̂. We need only consider non-crossing partitions of subsets of δ(W(v)), since HK is the union of non-crossing trees. The number of non-crossing partitions of an n-element ordered set is the nth Catalan number, which is at most 4^n/(n + 1). Therefore, the number of non-crossing sub-partitions is at most 4^n. It follows that the time to populate DPv for v a brick vertex is O(θ^4 4^θ |B|), which is O(2^{poly(1/ε)} |B|) since θ depends polynomially on 1/ε. Since a vertex appears at most twice in the set of bricks, the time needed to solve all the base cases is O(2^{poly(1/ε)} n), where n is the number of vertices in the parcel.
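The Catalan-number bound used above is easy to sanity-check numerically; a small Python sketch:

    from math import comb

    def catalan(n):
        # n-th Catalan number = number of non-crossing partitions of an
        # ordered n-element set.
        return comb(2 * n, n) // (n + 1)

    for n in range(1, 9):
        # Catalan(n) <= 4^n / (n + 1), since C(2n, n) <= 4^n
        assert catalan(n) * (n + 1) <= 4 ** n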
Consider the time required to populate DPu for all the internal vertices u of T̂. The number of edges in δ(W(v)) in B+(P) is O(θη). By Lemma 6.3, it follows that the corresponding number of configurations is O(2^{poly(1/ε)}) since θ and η each depend polynomially on 1/ε. There are O(n) vertices of the recursion tree, and so the time required for the dynamic program, not including the base cases, is O(2^{poly(1/ε)} n).
The total running time of the dynamic program is O(2^{poly(1/ε)} n), where n is the number of vertices in the parcel.
6.4  Correctness
The connecting property guarantees that the final solution is feasible (satisfying the connectivity requirements). The definitions of compatible and consistent guarantee the inductive hypothesis.
We show that the procedure fill correctly computes the cost of a minimum-cost subgraph Hu of W(u) that meets the configuration K0. We have shown that this is true for the leaves of the recursion tree. Since K is the configuration corresponding to the cut δ({u}), K is a star. Therefore c(K) is the cost of the edges of δ({u}): K is both the configuration and a minimum-cost subgraph that meets that configuration. Further, c(K ∩ (∪_{i=1}^{s} Ki)) is the cost of the edges of K that are in some Ki (for i = 1, . . . , s), and c(∩_{i=1}^{s} Ki) is equal to the cost of the edges common to K1 and K2 if s = 2 and zero otherwise. By the inductive hypothesis, the cost computed is that of Hu: a minimum-cost subgraph of W(u) that meets this configuration.
Consider the entries of DPr where r is the root of T̂. Since δ(W(r)) is empty, there is only one configuration corresponding to this subproblem: the trivial configuration. Therefore, the dynamic program finds the optimal solution in B+(P).
As argued in Section 4.4, combining parcel solutions forms a valid solution in our input graph.
We need to compare the cost of the output to the cost of an optimal solution.
Recall that new terminals are added at the parcel boundaries to guarantee connectivity between
the parcels; let r + denote the requirements including these new terminals. Let S(G, r) denote the
optimal solution in graph G with requirements r.
For each parcel P , there is a (possibly empty) solution SP in B + (P ) for the original and new
terminals in P consisting of edges of S(B + (MG), r) ∪ ∂H (where H is the set of parcels and ∂H is
the set of boundary edges of all parcels). We have:
c(S(B + (MG), r) ∩ B + (P )) ≤ c(SP ) = c(SP − ∂H) + c(SP ∩ ∂H).
Every edge of S(B+(MG), r) not in ∂H appears in SP for exactly one parcel P, and so

Σ_{P∈H} c(SP − ∂H) ≤ c(S(B+(MG), r)).
Every edge of ∂H appears in at most two parcels, and so

Σ_{P∈H} c(SP ∩ ∂H) ≤ 2 c(∂H).
Since a feasible solution for the original and new terminals in B+(MG) can be obtained by adding a subset of the edges of ∂H to S(B+(MG), r), the cost of the output of our algorithm is at most

c(∂H) + Σ_{P∈H} c(S(B+(P), r+)) ≤ c(S(B+(MG), r)) + 3 c(∂H).
Combining the cost of the parcel boundaries, the definition of η, and the cost of the mortar graph, we obtain c(∂H) ≤ (ε/2) c(S(G, r)) = (ε/2) OPT. Finally, by Theorem 4.6, the cost of the output is at most (1 + cε) OPT. This gives:
Theorem 6.4. There is an approximation scheme for solving the {0, 1, 2}-edge connectivity problem (allowing duplication of edges) in planar graphs. The running time is O(2^{poly(1/ε)} n + n log n).
Comments The PTAS framework used is potentially applicable to problems where (i) the input
consists of a planar graph G with edge-costs and a subset Q of the vertices of G (we call Q the set of
terminals), and where (ii) the output spans the terminals. Steiner tree and two-edge-connectivity
have been solved using this framework. The PTAS for the subset tour problem [18] (which was the
inspiration for this framework) can be reframed using this technique. Since the extended abstract of
this work first appeared, Borradaile, Demaine and Tazari have also used this framework to give PTASes for the same set of problems in graphs of bounded genus [6], Bateni, Hajiaghayi and Marx [3] have extended the framework to the Steiner forest problem, and Bateni et al. [2] have extended the framework to prize-collecting problems.
Acknowledgements  The authors thank David Pritchard for comments on early versions of this work, and Baigong Zheng for discussions regarding Theorems 2.5 and 2.6. This material is based upon work supported by the National Science Foundation under Grant Nos. CCF-0964037, CCF-0963921, and CCF-14-09520, and by a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.
References
[1] B. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of
the ACM, 41(1):153–180, 1994.
[2] M. Bateni, C. Chekuri, A. Ene, M. Hajiaghayi, N. Korula, and D. Marx. Prize-collecting
Steiner problems on planar graphs. In Proceedings of the 22nd Annual ACM-SIAM Symposium
on Discrete Algorithms, pages 1028–1049, 2011.
[3] M. Bateni, M. Hajiaghayi, and D. Marx. Approximation schemes for Steiner forest on planar
graphs and graphs of bounded treewidth. J. ACM, 58(5):21, 2011.
[4] A. Berger, A. Czumaj, M. Grigni, and H. Zhao. Approximation schemes for minimum 2-connected spanning subgraphs in weighted planar graphs. In Proceedings of the 13th European Symposium on Algorithms, volume 3669 of Lecture Notes in Computer Science, pages 472–483, 2005.
[5] A. Berger and M. Grigni. Minimum weight 2-edge-connected spanning subgraphs in planar
graphs. In Proceedings of the 34th International Colloquium on Automata, Languages and
Programming, volume 4596 of Lecture Notes in Computer Science, pages 90–101, 2007.
[6] G. Borradaile, E. Demaine, and S. Tazari. Polynomial-time approximation schemes for subset-connectivity problems in bounded-genus graphs. Algorithmica, 2012. Online.
[7] G. Borradaile, C. Kenyon-Mathieu, and P. Klein. A polynomial-time approximation scheme
for Steiner tree in planar graphs. In Proceedings of the 18th Annual ACM-SIAM Symposium
on Discrete Algorithms, pages 1285–1294, 2007.
[8] G. Borradaile, P. Klein, and C. Mathieu. Steiner tree in planar graphs: An O(n log n) approximation scheme with singly exponential dependence on epsilon. In Proceedings of the 10th
International Workshop on Algorithms and Data Structures, volume 4619 of Lecture Notes in
Computer Science, pages 275–286, 2007.
[9] G. Borradaile, P. Klein, and C. Mathieu. An O(n log n) approximation scheme for Steiner tree
in planar graphs. ACM Transactions on Algorithms, 5(3):1–31, 2009.
[10] A. Czumaj and A. Lingas. On approximability of the minimum cost k-connected spanning
subgraph problem. In Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete
Algorithms, pages 281–290, 1999.
[11] R. Erickson, C. Monma, and A. Veinott. Send-and-split method for minimum-concave-cost
network flows. Mathematics of Operations Research, 12:634–664, 1987.
[12] K. Eswaran and R. Tarjan. Augmentation problems. SIAM Journal on Computing, 5(4):653–
665, 1976.
[13] G. Frederickson and J. Jájá. Approximation algorithms for several graph augmentation problems. SIAM Journal on Computing, 10(2):270–283, 1981.
[14] M. Goemans, A. Goldberg, S. Plotkin, D. Shmoys, É. Tardos, and D. Williamson. Improved
approximation algorithms for network design problems. In Proceedings of the 5th Annual
ACM-SIAM Symposium on Discrete Algorithms, pages 223–232, 1994.
[15] M. Henzinger, P. Klein, S. Rao, and S. Subramanian. Faster shortest-path algorithms for
planar graphs. Journal of Computer and System Sciences, 55(1):3–23, 1997.
[16] K. Jain. A factor 2 approximation algorithm for the generalized Steiner network problem. Combinatorica, 21(1):39–60, 2001.
[17] S. Khuller and U. Vishkin. Biconnectivity approximations and graph carvings. Journal of the
ACM, 41(2):214–235, 1994.
[18] P. Klein. A subset spanner for planar graphs, with application to subset TSP. In Proceedings
of the 38th Annual ACM Symposium on Theory of Computing, pages 749–756, 2006.
[19] P. Klein. A linear-time approximation scheme for TSP in undirected planar graphs with
edge-weights. SIAM Journal on Computing, 37(6):1926–1952, 2008.
[20] P. Klein and R. Ravi. When cycles collapse: A general approximation technique for constrained two-connectivity problems. In Proceedings of the 3rd International Conference on Integer Programming and Combinatorial Optimization, pages 39–55, 1993.
[21] K. Mehlhorn. A faster approximation algorithm for the Steiner problem in graphs. Information
Processing Letters, 27(3):125–128, 1988.
[22] S. Nakano and T. Uno. Efficient generation of rooted trees. Technical Report NII-2003-005E,
National Institute of Informatics, 2003.
[23] R. Ravi. Approximation algorithms for Steiner augmentations for two-connectivity. Technical
Report TR-CS-92-21, Brown University, 1992.
[24] M. Resende and P. Pardalos, editors. Handbook of Optimization in Telecommunications.
Springer, 2006.
[25] H. Whitney. Non-separable and planar graphs. Trans. Amer. Math. Soc., 34:339–362, 1932.
[26] P. Widmayer. A fast approximation algorithm for Steiner's problem in graphs. In Graph-Theoretic Concepts in Computer Science, volume 246 of Lecture Notes in Computer Science, pages 17–28. Springer Verlag, 1986.
[27] D. Williamson, M. Goemans, M. Mihail, and V. Vazirani. A primal-dual approximation algorithm for generalized Steiner network problems. In Proceedings of the 25th Annual ACM
Symposium on Theory of Computing, pages 708–717, 1993.
[28] Y. Wu, P. Widmayer, and C. Wong. A faster approximation algorithm for the Steiner problem in graphs. Acta Informatica, 23(2):223–229, 1986.
arXiv:1704.04000v1 [] 13 Apr 2017
Dempster-Shafer Belief Function - A New Interpretation
Mieczyslaw A. Klopotek
Institute of Computer Science, Polish Academy of Sciences
PL 01-237 Warsaw, 21 Ordona St., e-mail: klopotek@ipipan.waw.pl
1  Introduction
Dempster-Shafer theory of evidence has been found very attractive by many researchers as a way of modeling reasoning behaviour under uncertainty stemming from ignorance. It provides a framework for representing the certainty of a logical formula without the necessity of expressing commitment to any of its consequences. E.g. we can express our 100 % belief in the fact that Tweedy's wife is either Mary or Jane and at the same time express our total ignorance as to which of them is actually his wife (zero belief attached to the statement "Mary is Tweedy's wife" and zero belief in "Jane is Tweedy's wife").
If a theory is to become of practical importance in expert systems applications - as a foundation for knowledge representation and reasoning - at least the following conditions must be fulfilled:
• there must exist an efficient method for reasoning within this framework
• there must exist a clear correspondence between the contents of the
knowledge base and the real world
• there must be a clear correspondence between the reasoning method
and some real world process
• there must exist a clear correspondence between the results of the reasoning process and the results of the real world process corresponding
to the reasoning process.
Only under such circumstances can we say that the expert system is helpful, as it allows us either to predict or to follow retrospectively real world processes.
Dempster initiated the theory of evidence in his paper [4] and other works, and Shafer developed this theory in his book [21] and other publications. Though it became obvious that the DST (Dempster-Shafer Theory) captures many intuitions behind human dealing with uncertainty (e.g. as mentioned above), it did not become a foundation for implementation of expert systems with uncertainty due to its claimed high computational complexity [9]. In recent years, however, a number of efficient methods for dealing with DS reasoning have been developed - see e.g. [23] and citations therein. So the first of the above mentioned conditions is met. Meeting the other conditions proved to be more complicated.
Smets [26] and also initially Shafer [21] insisted on Bels (measures of uncertainty in the DST) not being connected to any empirical measure (frequency, probability etc.), considering the domain of DST applications as the one where "we are ignorant of the existence of probabilities", and warned that the DST is "not a model for poorly known probabilities" ([26], p.324). The question may be raised, however, what practically useful can be obtained from a computer reasoning on the basis of such a DST. It would have to be demonstrated that humans indeed reason as the DST prescribes. Then the computer, if fed with our knowledge, would be capable of predicting our conclusions on a given subject. However, to my knowledge, no experiment confirming that humans actually use DST internally for reasoning under uncertainty has been carried out. Under these circumstances a computer reasoning with DST would tell us what we have to think and not what we think. Hence, from the point of view of computer implementation, the position of Smets and Shafer is not acceptable.
The other category of DST interpretations, described by Smets as approaches assuming the existence of an underlying probability distribution which is only approximated by the Bels (called by him PXMY models), is represented by early works of Dempster [4], papers of Kyburg [12], Fagin [7, 8], Halpern [10], Skowron [24], Grzymala-Busse [9] and others. Both Smets [26] and Shafer [22] consider such approaches as inadequate, as most of them give rise to contradictions and counter-intuitiveness. As Smets states, "Far
too often, authors concentrate on the static component (how beliefs are allocated?) and discover many relations between TBM (transferable belief
model of Smets) and ULP (upper lower probability) models, inner and outer
measures (Fagin and Halpern [6]), random sets (Nguyen [16]), probabilities
of provability (Pearl [17]), probabilities of necessity (Ruspini [19]) etc. But
these authors usually do not explain or justify the dynamic component (how
are beliefs updated?), that is, how updating (conditioning) is to be handled
(except in some cases by defining conditioning as a special case of combination). So I (that is Smets) feel that these partial comparisons are incomplete,
especially as all these interpretations lead to different updating rules.” ([26],
pp. 324-325). Smets attributes this failure to the very nature of attempts
of assigning a probabilistic interpretation. We disagree with Smets and will
show in this paper that creation of a probabilistic interpretation of the DST
incorporating the Dempster rule of combination is actually possible. However, this new interpretation indicates the need for a drastic change in viewing
the Dempster rule: it does not accommodate evidence, but prejudices. How
this statement is to be understood will become clear later. Nonetheless our interpretation allows for the assignment of an experimentally verifiable numerical meaning to a DS knowledge base, assigns a numerical meaning to the reasoning process (the DS rule of combination), and yields agreement between the numerical empirical interpretation of the results of DS reasoning and the results of a real world process. This means that we have an interpretation fitting the formal apparatus of the DS theory to the largest extent ever achieved.
Smets ([26], p.327) subdivided the DST into two categories: a closed world category (as if excluding the possibility of contradictions in the "evidence") and an open world category of DST (as if allowing for them). Let us assume that two independent experts elicited their beliefs concerning the event A: both assigned beliefs of 0.7 to the event A, and beliefs of 0.3 to the event ¬A. The open world DST will lead us to a combined belief in A of 0.49 and in ¬A of 0.09. The closed world assumption, on the other hand, will assign a combined belief in A of 0.49/0.58 ≈ 0.84 and in ¬A of 0.09/0.58 ≈ 0.16. I find it a dismaying property of a theory if collecting agreeing information from independent experts should lower my belief in the opinions of both experts. Hence only closed world theories are the subject of this paper.
We first recall the formal definition of the DS-Theory, then introduce some
notation used throughout the rest of the paper. Subsequently we develop our
interpretation of the joint belief distribution and of evidential updating. We
conclude with a brief comparison of our interpretation with other attempts.
2  Formal Definition of the Dempster-Shafer Theory of Evidence
Let us make the remark that if an object is described by a set of discrete attributes X1, X2, ..., Xn taking values from their respective domains Ξ1, Ξ2, ..., Ξn, then we can think of it as being described by a complex attribute X having vector values, that is, the domain Ξ of X equals

Ξ = {(x1, x2, ..., xn) | xi ∈ Ξi, i = 1, ..., n}.

So unless specified otherwise let us assume that we are talking of objects described by a single attribute X taking its values from the domain Ξ. We say that Ξ, the domain of X, is our space of discourse spanned by the attribute X. We shall also briefly say that X is our space of discourse instead.
For the purpose of this paper we define the Bel-function as follows (compare also [10], [26], [22]):

Definition 1  The Belief Function in the sense of the DS-Theory is defined as Bel: 2^Ξ → [0, 1], with Ξ = Ξ1 × Ξ2 × ... × Ξn being the space spanned by the attribute X = X1 × X2 × ... × Xn, such that

∀A⊆Ξ   Bel(A) = Σ_{B⊆A} m(B)

where m is a Mass Function in the sense of the DS-Theory (see Def. 2 below).
The function m is defined as:

Definition 2  The Mass Function in the sense of the DS-Theory is defined as m: 2^Ξ → [0, 1] with

m(∅) = 0,   Σ_{A∈2^Ξ} m(A) = 1,   and   ∀A∈2^Ξ  m(A) ≥ 0.
Definition 3 Whenever m(A) > 0, we say that A is the focal point of the
Bel-Function.
Let us also introduce the Pl-Function (Plausibility) as:

Definition 4  The Plausibility Function in the sense of the DS-Theory is defined as

∀A⊆Ξ   Pl(A) = 1 − Bel(Ξ − A).
Beside the above definition, a characteristic feature of the DS-Theory is the so-called DS-rule of combination of independent evidence:

Definition 5  Let BelE1 and BelE2 represent independent information over the same space of discourse. Then:

BelE1,E2 = BelE1 ⊕ BelE2

defined as:

mE1,E2(A) = c · Σ_{B,C; A=B∩C} mE1(B) · mE2(C)

(c - a normalizing constant) represents the Combined Belief-Function of Two Independent Beliefs.
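In code, the closed-world combination rule of Definition 5 reads as follows; a minimal Python sketch, with mass functions represented as dicts from frozenset focal elements to masses (the 0.6/0.4 source in the usage example is an invented illustration):

    from itertools import product

    def dempster_combine(m1, m2):
        # Dempster's rule of combination (Definition 5), closed world:
        # mass falling on the empty set is discarded and the rest is
        # renormalized by the constant c = 1 / (1 - conflict).
        raw = {}
        for (B, mB), (C, mC) in product(m1.items(), m2.items()):
            A = B & C
            if A:
                raw[A] = raw.get(A, 0.0) + mB * mC
        total = sum(raw.values())
        assert total > 0, "total conflict: combination undefined"
        return {A: v / total for A, v in raw.items()}

    # Tweedy's-wife example from the introduction: full commitment to
    # {Mary, Jane}, combined with a source slightly favouring Mary.
    MARY = frozenset({"Mary"})
    EITHER = frozenset({"Mary", "Jane"})
    print(dempster_combine({EITHER: 1.0}, {MARY: 0.6, EITHER: 0.4}))
    # -> {frozenset({'Mary'}): 0.6, frozenset({'Mary', 'Jane'}): 0.4}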
3  Denotation
F. Bacchus in his paper [2] on the axiomatization of probability theory and first order logic shows that probability should be considered as a quantifier binding free variables in first order logic expressions, just like the universal and existential quantifiers do. So if e.g. α(x) is an open expression with a free variable x, then [α(x)]_x means the probability of truth of the expression α(x). (The quantifier [·]_x binds the free variable x and yields a numerical value ranging from 0 to 1 and meeting all the Kolmogoroff axioms.) Within the expression [α(x)]_x the variable x is bound. See [2] for the justification of why other types of integration of probability theory and first order logic or propositional logic fail, and for the justification of rejecting the traditional view of probability as a function over sets. While sharing Bacchus' view, we find his notation a bit cumbersome, so we change it to be similar to the universal and existential quantifiers throughout this paper. Furthermore, Morgan [14] insisted that probabilities always be considered in close connection with the population they refer to. Bacchus' expression [α(x)]_x we rewrite as:
Prob^{P(x)}_x α(x) - the probability of α(x) being true within the population P.

The P (population) is a unary predicate, with P(x)=TRUE indicating that the object x (∈ Ω, that is, an element of a universe of objects) belongs to the population under consideration. If P and P′ are populations such that ∀x P′(x) → P(x) (that is, membership in P′ implies membership in P, or in other words: P′ is a subpopulation of P), then we distinguish two cases:

case 1: (Prob^{P(x)}_x P′(x)) = 0 (that is, the probability of membership in P′ with respect to P is equal to 0) - then (according to [14]) for any expression α(x) in free variable x the following holds for the population P′: (Prob^{P′(x)}_x α(x)) = 1.

case 2: (Prob^{P(x)}_x P′(x)) > 0 - then (according to [14]) for any expression α(x) in free variable x the following holds for the population P′:

(Prob^{P′(x)}_x α(x)) = (Prob^{P(x)}_x (α(x) ∧ P′(x))) / (Prob^{P(x)}_x P′(x))
We also use the following (now traditional) mathematical symbols:

∀x α(x) - always α(x) (universal quantifier)
∃x α(x) - there exists an x such that α(x) (existential quantifier)
α ∧ β - logical AND of expressions
⋀_B α(B) - logical AND over all instantiations of the expression α(B) in free variable B
α ∨ β - logical OR of expressions
⋁_B α(B) - logical OR over all instantiations of the expression α(B) in free variable B
¬ - logical negation
P ∩ Q - intersection of two sets
P ∪ Q - union of two sets
4  A New Interpretation of Belief Functions
The empirical meaning of a new interpretation of the DS Belief function will
be explained by means of the following example:
Example 1  Let us consider a daily-life example. Buying a bottle of hair shampoo is not a trivial task, from the side of both the consumer and the manufacturer. If the consumer comes to realize that shampoos may fall into one of four categories: high quality products (excellent for maintaining cleanness and health of the consumer) (H), moderate quality products (keeping just all Polish industry standards) (M), suspicious products (violating some industry standards) (S), and products dangerous for health and life (containing bacteria or fungi or other microbes causing infectious or invasive diseases, containing cancerogenous or poisonous substances etc.) (D), he has a hard time upon leaving his house for shopping. Clearly,
precise chemical, biochemical and medical tests exist which may precisely
place the product into one of those obviously exclusive categories. But Citizen¹ Coot² usually has neither a private chemical laboratory nor enough money to make use of the required services. Hence Citizen Coot coins a personal set of "quality" tests M1 mapping the pair (bottle of shampoo, quality) into the set {TRUE, FALSE} (the letter O - object - stands for bottle of shampoo; H, M, S, D indicate quality classes: high, moderate, suspicious, dangerous):
1. If the shampoo is heavily advertised on TV then it is of high quality (M1(O, {H}) = TRUE) and otherwise not (M1(O, {H}) = FALSE).

2. If the name of the shampoo was never heard on TV, but the bottle looks fine (pretty colours, aesthetic shape of the bottle), then the shampoo must be of moderate quality (M1(O, {M}) = TRUE) and otherwise not (M1(O, {M}) = FALSE).

3. If the packaging is not fine or the date of production is not readable on the bottle or the product is out of date, but the shampoo smells acceptably otherwise, then it is suspicious (M1(O, {S}) = TRUE) and otherwise not (M1(O, {S}) = FALSE).

4. If either the packaging is not fine or the date of production is not readable on the bottle or the product is out of date, and at the same time the shampoo smells awfully, then it is dangerous (M1(O, {D}) = TRUE) and otherwise not (M1(O, {D}) = FALSE).

¹ The term "Citizen" was a fine socialist-time descriptor allowing one to avoid the cumbersome usage of words like "Mr.", "Mrs." and "Miss".
² This family name was coined as an abbreviation for "Citizen Of Our Town".
Notice that the criteria are partially rational: a not-fine-looking bottle may in fact indicate some decaying processing of the shampoo, or at least that the product has remained on the shelf for a longer time already. Bad smell is usually caused by the development of some bacteria dangerous for human health. Notice also that the tests for high and moderate quality are enthusiastic, while the other two are more cautious.
Notice that the two latter tests are more difficult to carry out in a shop than the leading two (the shop assistant would hardly allow one to open a bottle before buying). Also, there may be no time to check whether the shampoo was actually advertised on TV or not (as the son who carefully watches all the running advertisements stayed home to do his lessons). Hence some simplified tests may be quite helpful:
• M1(O, {S, D}): If the packaging is not fine or the product is out of date or the production date is not readable, then the product is either suspicious or dangerous (M1(O, {S, D}) = TRUE) and otherwise not (M1(O, {S, D}) = FALSE).

• M1(O, {H, M}): If the packaging looks fine, then the product is either of high or moderate quality (M1(O, {H, M}) = TRUE) and otherwise not (M1(O, {H, M}) = FALSE).
Clearly these tests are far from being precise ones, but for Citizen Coot no better tests will ever be available. What is more, they are not exclusive: if one visits a dubious shop at a later hour, one may buy a product meeting both M1(O, {H}) and M1(O, {D}) as defined above!
Let us assume we have two types of shops in our town: good ones (G) and bad ones (B). Let M2: Ω × 2^{{G,B}} → {TRUE, FALSE} indicate for each shampoo in which shop type it was available. Further, let M3: Ω × 2^{{H,M,S,D}×{G,B}} → {TRUE, FALSE} indicate for each shampoo both its quality and the type of shop it was available from. Clearly M1(O, Quality) ∧ M2(O, Shop) = M3(O, Quality × Shop).
The good shops are those with new furniture and well-clothed shop assistants. Bad ones are those with an always dirty floor, or old furniture, or badly clothed shop assistants. Clearly, again, both shop categories may be considered (nearly) exclusive, as well-clothed shop assistants seldom fail to care for the floors.
Let us assume we have obtained the statistics of shampoo sales in our town
presented in Table 1:
Table 1: Sold shampoos statistics

Quality true for    B     G     B,G   Total
H                   20    100   70    190
M                   80    100   110   290
S                   50    5     15    70
D                   10    1     3     14
H,S                 15    10    14    39
M,S                 30    20    25    75
H,D                 8     2     3     13
M,D                 15    7     10    32
total               228   245   250   723

Rows and columns are marked with those singleton tests which were passed (e.g. in the upper left corner there are 20 shampoo bottles sold in an undoubtedly bad shop and having exclusively high quality, that is, for all those bottles O: M1(O, {H}) = TRUE, M1(O, {M}) = FALSE, M1(O, {S}) = FALSE, M1(O, {D}) = FALSE, and M2(O, {B}) = TRUE, M2(O, {G}) = FALSE). The measurement of M1(O, {H}) would yield TRUE for 190+39+13 = 242 bottles and FALSE for the remaining 481 bottles; the measurement of M1(O, {D}) would yield TRUE for 14+13+32 = 59 bottles and FALSE for the remaining 664 bottles. The measurement M1(O, {S, D}) will turn TRUE in 70+14+39+75+13+32 = 243 cases and FALSE in the remaining 480 cases. ✸
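These counts can be reproduced mechanically from Table 1; a small Python sketch (the dictionary below holds the Total column of the table, keyed by the set of passed singleton quality tests):

    # The Total column of Table 1, keyed by the set of singleton quality
    # tests that passed for the bottles counted in that row.
    table1 = {
        frozenset({"H"}): 190, frozenset({"M"}): 290,
        frozenset({"S"}): 70,  frozenset({"D"}): 14,
        frozenset({"H", "S"}): 39, frozenset({"M", "S"}): 75,
        frozenset({"H", "D"}): 13, frozenset({"M", "D"}): 32,
    }

    def count_true(A):
        # M1(O, A) is TRUE exactly when at least one singleton test inside A
        # passed (subset consistency, cf. Definition 6 below).
        return sum(n for row, n in table1.items() if row & frozenset(A))

    print(count_true({"H"}))        # 190 + 39 + 13 = 242
    print(count_true({"D"}))        # 14 + 13 + 32 = 59
    print(count_true({"S", "D"}))   # 70 + 14 + 39 + 75 + 13 + 32 = 243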
In general, let us assume that we know that objects of a population can be described by an intrinsic attribute X taking exclusively one of the n discrete values from its domain Ξ = {v1, v2, ..., vn}. Let us assume furthermore that to obtain knowledge of the actual value taken by an object we must apply a measurement method (a system of tests) M.
Definition 6  Let X be a set-valued attribute taking as its values non-empty subsets of a finite domain Ξ. By a measurement method of the value of the attribute X we understand a function

M: Ω × 2^Ξ → {TRUE, FALSE},

where Ω is the set of objects (or population of objects), such that

• ∀ω∈Ω  M(ω, Ξ) = TRUE (X takes at least one of the values from Ξ),

• ∀ω∈Ω  M(ω, ∅) = FALSE,

• whenever M(ω, A) = TRUE for ω ∈ Ω, A ⊆ Ξ, then for any B such that A ⊂ B, M(ω, B) = TRUE holds,

• whenever M(ω, A) = TRUE for ω ∈ Ω, A ⊆ Ξ and card(A) > 1, then there exists B, B ⊂ A, such that M(ω, B) = TRUE holds,

• for every ω and every A, either M(ω, A) = TRUE or M(ω, A) = FALSE (but never both).
M(ω, A) tells us whether or not any of the elements of the set A belong to the actual value of the attribute X for the object ω. The measuring function M(O, A), if it takes the value TRUE, states for an object O and a set A of values from the domain of X that X takes for this object (at least) one of the values in A.
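The conditions of Definition 6 can be checked for a concrete test table; a Python sketch, under the assumed representation that one object's measurement is the family of all sets A with M(object, A) = TRUE:

    from itertools import combinations

    def subsets(domain):
        xs = list(domain)
        return [frozenset(c) for r in range(len(xs) + 1)
                for c in combinations(xs, r)]

    def is_valid_measurement(true_sets, domain):
        # true_sets: the family {A : M(object, A) = TRUE} for one object;
        # every other set is implicitly FALSE (so "never both" holds by
        # construction).
        all_sets = subsets(domain)
        if frozenset(domain) not in true_sets or frozenset() in true_sets:
            return False
        for A in true_sets:
            # superset consistency: every superset of A must also test TRUE
            if any(A < B and B not in true_sets for B in all_sets):
                return False
            # subset consistency: some proper subset must also test TRUE
            if len(A) > 1 and not any(B < A for B in true_sets):
                return False
        return True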
It makes sense to talk of such a measuring function assigning truth values to sets of values of an attribute if it is possibly cheaper to measure M(O,A) than to measure M(O,B) whenever B ⊂ A, and we are interested in avoiding measuring M(O,B) whenever possible, that is, whenever measuring M(O,A) suffices. For example, measuring pH-value with a pH-meter may turn out to be more expensive than with litmus paper, at the advantage of a higher precision.
The above definition assumes that this measurement method is superset- and subset-consistent, that is: whenever M(object, A) = TRUE, then

∀B; A⊂B   M(object, B) = TRUE

holds, and if card(A) > 1 then

∃B; B⊂A   M(object, B) = TRUE
holds. The superset consistency means that if a test for a larger set of values indicates FALSE, then it is not necessary to test its subsets, as they will not contribute to our knowledge of the value of X (cost savings). The subset consistency means that if the M-test for a given value set gives TRUE, then in the end at least one of its singleton subsets would yield TRUE for the respective M-test. It is clearly a matter of convention: we assume that we can always provide the answer YES or NO, and whenever we are in doubt we still answer YES.
Such a convention is not an unusual one: in various legal systems "anything that is not forbidden by law is permitted"; in default logics, if a default statement cannot be proven wrong, it is assumed correct.
In any case, this means that from the universe of all possible objects, a concrete measurement method selects a population for which its assumptions are satisfied. E.g. if we have a measurement method for measuring pH-values, we surely consider an aqueous sodium solution as a member of our universal population, but never a car (because for a car the pH-value has no meaning at all).
Furthermore, let us consider this measurement method a stable one, that is, whenever the same object is presented, the results are the same. However, let us assume that the measurement method is not completely reliable: it measures only quantities related to the quantity X and not X itself. So it is conceivable that e.g. M(object, {v1}) = TRUE and at the same time M(object, {v2}) = TRUE, though both values of X are deemed to be exclusive. For practical reasons, however, it may not bother us at all, as either the
true value of X may not be accessible at all (e.g. the true event of killing or not killing a person by the suspect belongs to the past and can never be recalled as such), may be too expensive to access (e.g. the most reliable method of checking whether a match can inflame or not is to inflame it, but thereafter it would be useless, so we check only its color, dryness etc.), or it may be prohibitive to access it for other reasons, e.g. social ones (sex may be treated with extremely high precision as an exclusive attribute taking values male, female, but we would reluctantly check the primary features before deciding to call someone Mr, Miss or Mrs). Beside this, it may prove too expensive to check all the elementary hypotheses (e.g. in medical diagnosis), so that after stating M(object, {v1}) = TRUE we do not bother about other alternatives, that is, about the degree of imprecision of the relationship between the measured quantities and the real values of X. We assume that the approximations of X achieved by the measurement method are in most cases sufficient for our decision making (whatever its nature), so we do not insist on closer knowledge of X itself.
So though we wish X to take singleton values only, we actually live with
the fact that for our practical purposes X is possibly set-valued.
Let us make at this point some remarks on practical relevance.
Example 2  If we are making statistical tests on equality or non-equality of two quantities (means, variances, distributions), we can purely logically say that the quantities are either equal or not equal, but never both. However, the available indirect measurement method (by sampling) may lead to a statement that there is neither evidence to reject equality nor to reject non-equality. So we say that in those cases both equality and inequality hold. We still enjoy statistical inference because in sufficiently many other cases statistics provides us with more precise results. ✸
Example 3  Similarly, if we consider the components of a chemical substance, the measurement methods for the absence and the presence of a component may differ from one another, depending on whether we should be more sensitive to its presence or its absence; hence in some cases applying both may lead to apparently contradicting results. ✸
Let us furthermore assume that with each application of the measurement
procedure some costs are connected, increasing roughly with the decreasing
size of the tested set A so that we are ready to accept results of previous
measurements in the form of pre-labeling of the population. So
Definition 7  A label L of an object ω ∈ Ω is a subset of the domain Ξ of the attribute X.
A labeling under the measurement method M is a function l: Ω → 2^Ξ such that for any object ω ∈ Ω, either l(ω) = ∅ or M(ω, l(ω)) = TRUE.
Each labelled object (under the labeling l) consists of a pair (Oj, Lj), with Oj the j-th object and Lj = l(Oj) its label.
By a population under the labeling l we understand the predicate P: Ω → {TRUE, FALSE} of the form P(ω) = TRUE iff l(ω) ≠ ∅ (or alternatively, the set of objects for which this predicate is true).
If for every object of the population the label is equal to Ξ, then we talk of an unlabeled population (under the labeling l), otherwise of a pre-labelled one.
Let us assume that in practice we apply a modified measurement method Ml, being a function:

Definition 8  Let l be a labeling under the measurement method M. Let us consider the population under this labeling. The modified measurement method

Ml: Ω × 2^Ξ → {TRUE, FALSE},

where Ω is the set of objects, is defined as

Ml(ω, A) = M(ω, A ∩ l(ω)).

(Notice that Ml(ω, A) = FALSE whenever A ∩ l(ω) = ∅.)
For a labeled object (Oj, Lj) (Oj - proper object, Lj - its label) and a set A of values from the domain of X, the modified measurement method tells us that X takes one of the values in A if and only if it takes in fact a value from the intersection of A and Lj. Expressed differently, we discard a priori any attribute value not in the label.
Please pay attention also to the fact that, given a population P for which the measurement method M is defined, the labeling l (according to its definition) selects a subset of this population, possibly a proper subset, namely the population P′ under this labeling: P′(ω) = P(ω) ∧ M(ω, l(ω)). Hence also Ml is possibly defined for a "smaller" population P′ than M is.
Example 4  In practice, we frequently have to do with pre-labelled populations. Statistics of illnesses based on polyclinical data are based on a population pre-labelled by financial status (whether or not people are ready to visit a physician with a less serious disease due to their economic background), educational background (whether or not they estimate properly the seriousness of the disease, whether or not they care about symptoms) etc. Similarly, in chemical analysis, knowledge of the substrates pre-labels the tests on the composition of the product (irrelevant measurements are a priori discarded) etc. ✸
Example 5  To continue the Citizen Coot example, we may believe that in good shops only moderate and high quality products are available; that is, we assign to every shampoo ω the label l(ω) = ∅ (we discard it from our register) if ω denies our belief that there are no suspicious or dangerous products in a good shop, l(ω) = {H, M} if it is a moderate or high quality product in a good shop, and l(ω) = Ξ for all the other products. After this rejection of shampoos not fitting our beliefs we have to do with the (a bit smaller) sold-shampoos population from Table 2:

Table 2: Modified sold shampoos statistics

Quality true for    B     G     B,G   Total
H                   20    112   70    202
M                   80    127   110   317
S                   65    0     0     65
D                   13    0     0     13
H,S                 15    0     14    29
M,S                 30    0     25    55
H,D                 8     0     3     11
M,D                 15    0     10    25
total               246   239   232   717

Please notice the following changes: suspicious and dangerous products encountered in good shops were totally dropped from the statistics (their
Please notice the following changes: Suspicious and dangerous products encountered in good shops were totally dropped from the statistics (their existence was not revealed to the public). Suspicious and dangerous products from shops with unclear classification (good/bad shops) were declared to come from bad shops. Products from good shops which obtained both the label high quality and dangerous were simply moved into the category of high quality products (the bad smell was just concealed), etc. This is frequently the sense in which our beliefs have an impact on our attitude towards real facts, and we will see below that the Dempster-Shafer Theory reflects such a view of beliefs. ✸
Let us now define the following function:

Definition 9

Bel^M_P(A) = Prob^{P(O)}_O (¬M(O, Ξ − A))

which is the probability that the test M, while being true for A, rejects every hypothesis of the form X = v_i for every v_i not in A, for the population P. We shall call this function "the belief exactly in the result of measurement".
Let us define also the function:

Definition 10

Pl^M_P(A) = Prob^{P(O)}_O (M(O, A))

which is the probability of the test M holding for A for the population P. Let us refer to this function as the "Plausibility of taking any value from the set A".
Last but not least, let us define the function:

Definition 11

m^M_P(A) = Prob^{P(O)}_O ( ⋀_{B; B={v_i}⊆A} M(O, B) ∧ ⋀_{B; B={v_i}⊆Ξ−A} ¬M(O, B) )

which is the probability that all the tests for the singleton subsets of A are true and those outside of A are false, for the population P.
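Definitions 9-11 admit a direct frequentist reading. Here is a small sketch, continuing the toy example above and identifying probability with relative frequency (the functions and data are illustrative assumptions of ours, not the paper's prescription):

```python
# A sketch of Definitions 9-11 over a finite population; M and Xi are as in
# the previous sketch, and probability is relative frequency.

def bel(pop, M, Xi, A):
    # Bel^M_P(A): fraction of objects for which M rejects Xi - A
    return sum(not M(o, Xi - set(A)) for o in pop) / len(pop)

def pl(pop, M, Xi, A):
    # Pl^M_P(A): fraction of objects for which M holds for A
    return sum(M(o, A) for o in pop) / len(pop)

def mass(pop, M, Xi, A):
    # m^M_P(A): every singleton test inside A succeeds, every one outside fails
    def exactly(o):
        return all(M(o, {v}) for v in A) and not any(M(o, {v}) for v in Xi - set(A))
    return sum(exactly(o) for o in pop) / len(pop)

pop = [0, 1, 2, 3]
A = {'H', 'S'}
print(mass(pop, M, Xi, A))   # 0.25 -- only object 2 fits A exactly
print(bel(pop, M, Xi, A))    # 0.5  -- equals the sum of m(B) over B ⊆ A
print(pl(pop, M, Xi, A))     # 0.5  -- equals 1 - Bel(Ξ - A), cf. Theorem 3 below
```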
Let us illustrate the above concepts with the Citizen Coot example:

Example 6 For the belief function for the sold-bottles-population and the measurement function M^3, if we identify probability with relative frequency, we have the focal points given in Table 3. ✸
It is easily seen that:

THEOREM 1 m^M_P is a Mass Function in the sense of DS-Theory.
PROOF: We shall recall the definition and construction of the DNF (Disjunctive Normal Form). Given an object O of a population P under the measurement method M, let us look at the expression

expr(A) = ⋀_{B; B={v_i}⊆A} M(O, B) ∧ ⋀_{B; B={v_i}⊆Ξ−A} ¬M(O, B)

For two different sets A_1, A_2 ⊆ Ξ, clearly expr(A_1) ∧ expr(A_2) is never true: the truth of the one excludes the truth of the other.
Table 3: Mass and Belief Function under Measurement Method M^3

  Set                           m^{M^3}_P    Bel^{M^3}_P
  {(H,B)}                         20/723        20/723
  {(H,G)}                        100/723       100/723
  {(H,B),(H,G)}                   70/723       190/723
  {(M,B)}                         80/723        80/723
  {(M,G)}                        100/723       100/723
  {(M,B),(M,G)}                  110/723       290/723
  {(S,B)}                         50/723        50/723
  {(S,G)}                          5/723         5/723
  {(S,B),(S,G)}                   15/723        70/723
  {(D,B)}                         10/723        10/723
  {(D,G)}                          1/723         1/723
  {(D,B),(D,G)}                    3/723        14/723
  {(H,B),(S,B)}                   15/723        85/723
  {(H,G),(S,G)}                   10/723       115/723
  {(H,B),(S,B),(H,G),(S,G)}       14/723       299/723
  {(M,B),(S,B)}                   30/723       160/723
  {(M,G),(S,G)}                   20/723       125/723
  {(M,B),(S,B),(M,G),(S,G)}       25/723       435/723
  {(H,B),(D,B)}                    8/723        38/723
  {(H,G),(D,G)}                    2/723       103/723
  {(H,B),(D,B),(H,G),(D,G)}        3/723       217/723
  {(M,B),(D,B)}                   15/723       105/723
  {(M,G),(D,G)}                    7/723       108/723
  {(M,B),(D,B),(M,G),(D,G)}       10/723       336/723
They represent mutually exclusive events in the sense of probability theory. On the other hand:

⋁_{A; A⊆Ξ} expr(A) = TRUE

hence:

Prob^{P(O)}_O ( ⋁_{A; A⊆Ξ} expr(A) ) = Prob^{P(O)}_O (TRUE) = 1

and due to mutual exclusiveness:

∑_{A; A⊆Ξ} Prob^{P(O)}_O (expr(A)) = 1

which means:

∑_{A; A⊆Ξ} m^M_P(A) = 1
Hence the first condition of Def. 2 is satisfied. Due to the second condition of Def. 6 we have

Prob^{P(O)}_O (expr(∅)) = 1 − Prob^{P(O)}_O (M(O, Ξ)) = 1 − Prob^{P(O)}_O (TRUE) = 1 − 1 = 0

Hence

m^M_P(∅) = 0

The last condition is satisfied due to the very nature of probability: probability is never negative. So we can state that m^M_P is really a Mass Function in the sense of the DS-Theory.

Q.e.d.✷
THEOREM 2 Bel^M_P is a Belief Function in the sense of DS-Theory corresponding to m^M_P.
PROOF: Let A be a non-empty set. By definition

M(O, Ξ − A) = ⋁_{C; C={v_i}⊆Ξ−A} M(O, C)

hence by de Morgan's law:

¬M(O, Ξ − A) = ⋀_{C; C={v_i}⊆Ξ−A} ¬M(O, C)

On the other hand, ¬M(O, Ξ − A) implies M(O, A). But:

M(O, A) = ⋁_{B⊆A} ( ⋀_{C; C={v_i}⊆B} M(O, C) ∧ ⋀_{C; C={v_i}⊆A−B} ¬M(O, C) )

So:

¬M(O, Ξ − A) = ¬M(O, Ξ − A) ∧ M(O, A) =

= ⋀_{C; C={v_i}⊆Ξ−A} ¬M(O, C) ∧ M(O, A) =

= ⋀_{C; C={v_i}⊆Ξ−A} ¬M(O, C) ∧ ⋁_{B⊆A} ( ⋀_{C; C={v_i}⊆B} M(O, C) ∧ ⋀_{C; C={v_i}⊆A−B} ¬M(O, C) ) =

= ⋁_{B⊆A} ( ⋀_{C; C={v_i}⊆B} M(O, C) ∧ ⋀_{C; C={v_i}⊆Ξ−A} ¬M(O, C) ∧ ⋀_{C; C={v_i}⊆A−B} ¬M(O, C) ) =

= ⋁_{B⊆A} ( ⋀_{C; C={v_i}⊆B} M(O, C) ∧ ⋀_{C; C={v_i}⊆Ξ−B} ¬M(O, C) )

Hence

¬M(O, Ξ − A) = ⋁_{B⊆A} expr(B)

and therefore:

Prob^{P(O)}_O (¬M(O, Ξ − A)) = Prob^{P(O)}_O ( ⋁_{B⊆A} expr(B) )

expr(A) being defined as in the previous proof. As we have shown in the proof of the previous theorem, the expressions under the probabilities on the right hand side are exclusive events, and therefore:

Prob^{P(O)}_O (¬M(O, Ξ − A)) = ∑_{B⊆A} Prob^{P(O)}_O (expr(B))

that is:

Bel^M_P(A) = ∑_{B⊆A} m^M_P(B) for every A ∈ 2^Ξ

As the previous theorem shows that m^M_P is a DS Theory Mass Function, it suffices to show the above.

Q.e.d.✷
THEOREM 3 Pl^M_P is a Plausibility Function in the sense of DS-Theory and it is the Plausibility Function corresponding to Bel^M_P.
PROOF: By definition:

Pl^M_P(A) = Prob^{P(O)}_O (M(O, A))

hence

Pl^M_P(A) = 1 − Prob^{P(O)}_O (¬M(O, A))

But by definition:

Prob^{P(O)}_O (¬M(O, A)) = Prob^{P(O)}_O (¬M(O, Ξ − (Ξ − A))) = Bel^M_P(Ξ − A)

hence

Pl^M_P(A) = 1 − Bel^M_P(Ξ − A)

Q.e.d.✷
Two important remarks must be made concerning this particular interpretation:
• Bel and Pl are both defined, contrary to many traditional approaches, as THE probabilities and NOT as lower or upper bounds of any probability.

• It is Pl(A) (and not Bel(A), as assumed traditionally) that expresses the probability of A, while Bel(A) refers to the probability of the complementary set A^C.
Of course, a complementary measurement function is conceivable to revert the latter effect, but the intuition behind such a measurement needs some elaboration. We shall not discuss this issue in this paper.
Let us also define the following functions referred to as labelled Belief,
labelled Plausibility and labelled Mass Functions respectively for the labeled
population P:
Definition 12 Let P be a population and l its labeling. Then

Bel^{M_l}_P(A) = Prob^{P(ω)}_ω (¬M_l(ω, Ξ − A))

Pl^{M_l}_P(A) = Prob^{P(ω)}_ω (M_l(ω, A))

m^{M_l}_P(A) = Prob^{P(ω)}_ω ( ⋀_{B; B={v_i}⊆A} M_l(ω, B) ∧ ⋀_{B; B={v_i}⊆Ξ−A} ¬M_l(ω, B) )
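Since Definition 12 merely substitutes M_l for M and the population under the labeling for P, the earlier sketch carries over unchanged; continuing the toy example (illustrative names as before):

```python
# Definition 12 in the sketch: the same functionals, applied to M_l and to
# the population under the labeling (objects with a non-empty label).
pop_l = [o for o in pop if labeling[o]]     # object 2 (empty label) drops out
print(mass(pop_l, M_l, Xi, {'H'}))          # labelled Mass:        1/3
print(bel(pop_l, M_l, Xi, {'H', 'M'}))      # labelled Belief:      2/3
print(pl(pop_l, M_l, Xi, {'H', 'M'}))       # labelled Plausibility: 2/3
```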
Let us illustrate the above concepts with the Citizen Coot example:

Example 7 For the belief function for the sold-bottles-population P and the measurement function M^3, let us assume the following labeling:

l(ω) = {(H,G), (H,B), (M,G), (M,B), (S,B), (D,B)}

for every ω ∈ Ω, which means that we are convinced that only high and moderate quality products are sold in good shops. For the population P′ under this labeling, if we identify probability with relative frequency, we have the focal points given in Table 4. ✸
It is easily seen that:

THEOREM 4 m^{M_l}_P is a Mass Function in the sense of DS-Theory.
PROOF: To show this it suffices to show that the modified measurement method M_l possesses the same properties as the measurement method M.

Let us consider a labeling l and a population P under this labeling. Let O be an object and L its label under the labeling l (L = l(O)). Always M_l(O, Ξ) = TRUE, because by definition M_l(O, Ξ) = M(O, Ξ ∩ L) = M(O, L), and by the definition of a labeled population, for the object O with label L we have M(O, L) = TRUE.

Second, the superset consistency is satisfied: if A ⊂ B and M_l(O, A) = TRUE, then M_l(O, A) = M(O, A ∩ L) = TRUE; but because A ∩ L ⊆ B ∩ L, also M(O, B ∩ L) = TRUE; and by definition M(O, B ∩ L) = M_l(O, B) = TRUE, and thus it was shown that M_l(O, A) = TRUE implies M_l(O, B) = TRUE for any superset B of the set A.

Finally, also the subset consistency holds: if M(O, L ∩ A) = TRUE, then there exists a proper subset B of L ∩ A such that M(O, B) = TRUE. But in this case B = L ∩ B, so we can formally write: M(O, L ∩ B) = TRUE.
Table 4: Mass and Belief Function under Modified Measurement Method M^3_l

  Set                      m^{M^3_l}_{P′}   Bel^{M^3_l}_{P′}
  {(H,B)}                        20/717          20/717
  {(H,G)}                       112/717         112/717
  {(H,B),(H,G)}                  70/717         202/717
  {(M,B)}                        80/717          80/717
  {(M,G)}                       127/717         127/717
  {(M,B),(M,G)}                 110/717         317/717
  {(S,B)}                        65/717          65/717
  {(D,B)}                        13/717          13/717
  {(H,B),(S,B)}                  15/717         100/717
  {(H,B),(S,B),(H,G)}            14/717         184/717
  {(M,B),(S,B)}                  30/717         175/717
  {(M,B),(S,B),(M,G)}            25/717         387/717
  {(H,B),(D,B)}                   8/717          41/717
  {(H,B),(D,B),(H,G)}             3/717         114/717
  {(M,B),(D,B)}                  15/717         108/717
  {(M,B),(D,B),(M,G)}            10/717         228/717
Hence we see that M_l(O, A) = TRUE implies the existence of a proper subset B of the set A such that M_l(O, B) = TRUE.

Hence, considering the analogies between the definitions of m^M_P and m^{M_l}_P, as well as between the respective Theorems, we see immediately that this Theorem is valid.

Q.e.d.✷
THEOREM 5 Bel^{M_l}_P is a Belief Function in the sense of DS-Theory corresponding to m^{M_l}_P.
PROOF: As m^{M_l}_P has been shown to be a DS Theory Mass Function, and considering the analogies between the definitions of Bel^M_P and Bel^{M_l}_P as well as between the respective Theorems, we see immediately that this Theorem is valid.

Q.e.d.✷
THEOREM 6 Pl^{M_l}_P is a Plausibility Function in the sense of DS-Theory and it is the Plausibility Function corresponding to Bel^{M_l}_P.

PROOF: As m^{M_l}_P has been shown to be a DS Theory Mass Function, and considering the analogies between the definitions of Pl^M_P and Pl^{M_l}_P as well as between the respective Theorems, we see immediately that this Theorem is valid.

Q.e.d.✷
This does not yet complete the interpretation. Let us now assume we run a "(re-)labelling process" on the (pre-labelled or unlabeled) population P.
Definition 13 Let M be a measurement method, l be a labeling under this measurement method, and P be a population under this labeling (note that the population may also be unlabeled). The (simple) labelling process on the population P is defined as a functional LP : 2^Ξ × Γ → Γ, where Γ is the set of all possible labelings under M, such that for the given labeling l and a given nonempty set of attribute values L (L ⊆ Ξ), it delivers a new labeling l′ (l′ = LP(L, l)) such that for every object ω ∈ Ω:

1. if M_l(ω, L) = FALSE then l′(ω) = ∅ (that is, l′ discards the labeled object (ω, l(ω)) if M_l(ω, L) = FALSE),

2. otherwise l′(ω) = l(ω) ∩ L (that is, l′ labels the object with l(ω) ∩ L).
Remark: It is immediately obvious that the population obtained in this way fulfills the requirements of the definition of a labeled population. A small sketch of one application of the process is given below.
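The following sketch continues the toy example from the earlier listings (the function name and data are our illustrative assumptions):

```python
# A sketch of the simple labelling process of Definition 13: LP takes a set
# of attribute values L and a labeling, and returns the new labeling l'.

def labelling_process(M, L, labeling):
    def l_prime(omega):
        old = labeling[omega]
        if not M(omega, set(L) & old):   # M_l(omega, L) = FALSE ...
            return set()                 # ... so the object is discarded
        return old & set(L)              # otherwise relabel with l(omega) ∩ L
    return {omega: l_prime(omega) for omega in labeling}

L = {'H', 'M', 'S'}
labeling2 = labelling_process(M, L, labeling)
print(labeling2)  # object 3 (attribute {'D'}) is discarded: M_l(3, L) is FALSE
```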
The labeling process clearly induces from P another population P′ (a population under the labeling l′), being a subset of P (hence perhaps "smaller" than P), labelled a bit differently. Clearly, if we retain the primary measurement method M, then a new modified measurement method M_{l′} is induced by the new labeling. The (re-)labelling process may be imagined as the diagnosis process made by a physician. A patient "labelled" with symptoms observed by himself (many symptoms remain hidden from the physician, like the body temperature curve over the last few days) is relabeled by the physician as being ill (labelled with the diseases suspected) or rejected (declared healthy due to symptoms not matching the physician's diagnostic procedure).
Let us define the following
Definition 14 ”labelling process function” mLP ;L : 2Ξ → [0, 1]: is defined
as:
mLP ;L (L) = 1
∀B;B∈2Ξ ,B6=L mLP ;L (B) = 0
It is immediately obvious that:

THEOREM 7 m^{LP;L} is a Mass Function in the sense of DS-Theory.

Let Bel^{LP;L} be the Belief and Pl^{LP;L} be the Plausibility corresponding to m^{LP;L}. Now let us pose the question: what is the relationship between Bel^{M_{l′}}_{P′}, Bel^{M_l}_P, and Bel^{LP;L}? It is easy to show that
THEOREM 8 Let M be a measurement function, l a labeling, P a population under this labeling. Let L be a subset of Ξ. Let LP be a labeling process and let l′ = LP(L, l). Let P′ be a population under the labeling l′. Then Bel^{M_{l′}}_{P′} is a combination via the DS Combination rule of Bel^{M_l}_P and Bel^{LP;L}, that is:

Bel^{M_{l′}}_{P′} = Bel^{M_l}_P ⊕ Bel^{LP;L}
PROOF: Let us consider a labeled object (O_j, L_j) from the population P (before re-labeling, that is L_j = l(O_j)) which passed the relabeling and became (O_j, L_j ∩ L), that is L_j ∩ L = l′(O_j). Let us define expr_B (before relabeling) and expr_A (after relabeling) as:

expr_B((O_j, L_j), A) = ⋀_{B; B={v_i}⊆A} M_l(O_j, B) ∧ ⋀_{B; B={v_i}⊆Ξ−A} ¬M_l(O_j, B)

and

expr_A((O_j, L_j), A) = ⋀_{B; B={v_i}⊆A} M_{l′}(O_j, B) ∧ ⋀_{B; B={v_i}⊆Ξ−A} ¬M_{l′}(O_j, B)

Let expr_B((O_j, L_j), C) = TRUE and expr_A((O_j, L_j), D) = TRUE for some C and some D. Obviously, then for no other C and no other D are the respective expressions valid. It holds also that:

expr_B((O_j, L_j), C) = ⋀_{B; B={v_i}⊆C} M(O_j, L_j ∩ B) ∧ ⋀_{B; B={v_i}⊆Ξ−C} ¬M(O_j, L_j ∩ B)

and

expr_A((O_j, L_j), D) = ⋀_{B; B={v_i}⊆D} M(O_j, L_j ∩ L ∩ B) ∧ ⋀_{B; B={v_i}⊆Ξ−D} ¬M(O_j, L_j ∩ L ∩ B)

In order to get truth of the first expression, C must be a subset of L_j, and for the second we need D to be a subset of L_j ∩ L. Furthermore, for a singleton F ⊆ Ξ either M(O_j, L_j ∩ F) = TRUE and M(O_j, L_j ∩ L ∩ F) = TRUE, and then F belongs to C, L and D; or M(O_j, L_j ∩ F) = TRUE and M(O_j, L_j ∩ L ∩ F) = FALSE, and then F belongs to C, but not to L and hence not to D; or M(O_j, L_j ∩ F) = FALSE, so due to superset consistency also M(O_j, L_j ∩ L ∩ F) = FALSE, and then F belongs neither to C nor to D (though membership in L does not need to be excluded). So we can state that D = C ∩ L.

So the absolute expected frequency of objects for which expr_A(D) holds is given by:

∑_{C; D=C∩L} samplecardinality · m^{M_l}_P(C)

that is:

∑_{C; D=C∩L} samplecardinality · m^{M_l}_P(C) · m^{LP;L}(L)

which can be easily re-expressed as:

∑_{C,G; D=C∩G} samplecardinality · m^{M_l}_P(C) · m^{LP;L}(G)

So generally:

m^{M_{l′}}_{P′}(D) = c · ∑_{C,G; D=C∩G} m^{M_l}_P(C) · m^{LP;L}(G)

with c a normalizing constant.

Q.e.d.✷
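The combination in Theorem 8 is the classical Dempster rule. A small self-contained sketch (the function `dempster` and the toy masses are illustrative, not taken from the paper) shows the mechanics, including how the deterministic labelling-process mass of Definition 14 simply intersects every focal set with L and renormalizes:

```python
# A generic sketch of the Dempster rule: m12(D) = c * sum of m1(C)*m2(G)
# over all pairs with C ∩ G = D, D non-empty; c is the normalizing constant.

def dempster(m1, m2):
    raw = {}
    for C, w1 in m1.items():
        for G, w2 in m2.items():
            D = frozenset(C) & frozenset(G)
            if D:                              # empty intersections are dropped
                raw[D] = raw.get(D, 0.0) + w1 * w2
    c = 1.0 / sum(raw.values())
    return {D: c * w for D, w in raw.items()}

# Combining a toy mass function with the deterministic labelling-process mass
# m(L) = 1 of Definition 14: the {'D'} focal mass is normalized away.
m_LP = {frozenset({'H', 'M', 'S'}): 1.0}
m_P  = {frozenset({'H'}): 0.25, frozenset({'H', 'S'}): 0.25,
        frozenset({'M'}): 0.25, frozenset({'D'}): 0.25}
print(dempster(m_P, m_LP))   # {'H'}: 1/3, {'H','S'}: 1/3, {'M'}: 1/3
```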
Example 8 To continue the Citizen Coot example, let us recall the function Bel^M_P from Example 6, which is one of an unlabeled population. Let us define the label

L = {(H,G), (H,B), (M,G), (M,B), (S,B), (D,B)}

as in Example 7. Let us define the labeling process function as

m^{LP;L}(L) = 1
m^{LP;L}(B) = 0 for all B ∈ 2^Ξ, B ≠ L

Let us consider the function Bel^{M_l}_{P′} from Example 7. It is easily seen that:

Bel^{M_l}_{P′} = Bel^M_P ⊕ Bel^{LP;L}

✸
Let us try another experiment, with a more general (re-)labeling process. Instead of a single set of attribute values, let us take a set of sets of attribute values L_1, L_2, ..., L_k (not necessarily disjoint) and assign to each one a probability m^{LP;L_1,L_2,...,L_k}(L_i) of selection.
Definition 15 Let M be a measurement method, l be a labeling under this measurement method, and P be a population under this labeling (note that the population may also be unlabeled). Let us take a set of (not necessarily disjoint) nonempty sets of attribute values {L_1, L_2, ..., L_k} and let us define the probability of selection as a function m^{LP;L_1,L_2,...,L_k} : 2^Ξ → [0, 1] such that

∑_{A; A⊆Ξ} m^{LP;L_1,L_2,...,L_k}(A) = 1
m^{LP;L_1,L_2,...,L_k}(A) > 0 for all A ∈ {L_1, L_2, ..., L_k}
m^{LP;L_1,L_2,...,L_k}(A) = 0 for all A ∉ {L_1, L_2, ..., L_k}

The (general) labelling process on the population P is defined as a (randomized) functional LP : 2^{2^Ξ} × ∆ × Γ → Γ, where Γ is the set of all possible labelings under M and ∆ is the set of all possible probability of selection functions, such that for the given labeling l, a given set of (not necessarily disjoint) nonempty sets of attribute values {L_1, L_2, ..., L_k} and a given probability of selection m^{LP;L_1,L_2,...,L_k}, it delivers a new labeling l″ such that for every object ω ∈ Ω:

1. a label L, an element of the set {L_1, L_2, ..., L_k}, is sampled randomly according to the probability distribution m^{LP;L_1,L_2,...,L_k}; this sampling is done independently for each individual object,

2. if M_l(ω, L) = FALSE then l″(ω) = ∅ (that is, l″ discards the object (ω, l(ω)) if M_l(ω, L) = FALSE),

3. otherwise l″(ω) = l(ω) ∩ L (that is, l″ labels the object with l(ω) ∩ L). A sketch of one run of this randomized process is given below.
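The sketch continues the toy example (the sampling scheme and names are our illustrative assumptions):

```python
# A sketch of the generalized (randomized) labelling process of Definition 15:
# a label is sampled independently for each object from the selection
# distribution m_sel, then steps 2-3 of the definition are applied.
import random

def general_labelling_process(M, m_sel, labeling, rng=random.Random(0)):
    labels, weights = zip(*[(frozenset(L), p) for L, p in m_sel.items()])
    new = {}
    for omega, old in labeling.items():
        L = rng.choices(labels, weights)[0]   # step 1: sample a label
        if not M(omega, L & old):             # step 2: discard on mismatch
            new[omega] = set()
        else:                                 # step 3: relabel with l(ω) ∩ L
            new[omega] = old & L
    return new

m_sel = {frozenset({'H', 'M'}): 0.7, frozenset({'H', 'M', 'S', 'D'}): 0.3}
print(general_labelling_process(M, m_sel, labeling))
# another run (another seed) may yield a different labeling, as noted next
```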
Again we obtain another ("smaller") population P″ under the labeling l″, labelled a bit differently. Also, a new modified measurement method M_{l″} is induced by the "re-labelled" population. Please notice that l″ is not derived deterministically. Another run of the general (re-)labeling process LP may result in a different final labeling of the population and hence a different subpopulation under this new labeling.
Example 9 The (re-)labelling process may be imagined as the diagnosis process made by a team of physicians in a poly-clinic. A patient "labelled" with symptoms observed by himself is (a bit randomly) directed by the ward administration to one of the available internists, each of them having a slightly different educational background and/or a different experience in his profession, hence taking into consideration a slightly different set of working hypotheses. The patient is relabeled by the given physician as being ill (labelled with the diseases suspected) or rejected (declared healthy) according to the knowledge of this particular physician. The final ward statistics of illnesses does not take into account the fact that a physician may have had no knowledge of a particular disease unit and hence qualified the patient as either healthy or ill of another, related disease unit. And it reflects the combined processes of random allocation of patients to physicians and of the belief worlds of the physicians, rather than what the patients were actually suffering from. (We are actually satisfied with the fact that both views of ward statistics usually converge.) ✸
Clearly:

THEOREM 9 m^{LP;L_1,...,L_k} is a Mass Function in the sense of DS-Theory.

Let Bel^{LP;L_1,...,L_k} be the Belief and Pl^{LP;L_1,...,L_k} be the Plausibility corresponding to m^{LP;L_1,...,L_k}. Now let us pose the question: what is the relationship between Bel^{M_{l″}}_{P″}, Bel^{M_l}_P, and Bel^{LP;L_1,...,L_k}? It is easy to show that
THEOREM 10 Let M be a measurement function, l a labeling, P a population under this labeling. Let LP be a generalized labeling process and let l″ be the result of the application of LP with the set of labels {L_1, L_2, ..., L_k} sampled randomly according to the probability distribution m^{LP;L_1,L_2,...,L_k}. Let P″ be a population under the labeling l″. Then the expected value (or, more precisely, the expected value vector), over the set of all possible resultant labelings l″ (and hence populations P″), of Bel^{M_{l″}}_{P″} is a combination via the DS Combination rule of Bel^{M_l}_P and Bel^{LP;L_1,...,L_k}, that is:

E(Bel^{M_{l″}}_{P″}) = Bel^{M_l}_P ⊕ Bel^{LP;L_1,...,L_k}
PROOF: By the same reasoning as in the proof of Theorem 8 we come to the conclusion that, for the given label L_i and the labeling l″ (instead of l′), the absolute expected frequency of objects for which expr_A(D) holds is given by:

∑_{C; D=C∩L_i} samplecardinality · m^{M_l}_P(C) · m^{LP;L_1,...,L_k}(L_i)

as the process of sampling the population runs independently of the sampling of the set of labels of the labeling process.

But expr_A(D) may hold for any L_i such that C ⊆ L_i, hence in all expr_A(D) holds for as many objects as:

∑_{i; i=1,...,k} ∑_{C; D=C∩L_i} samplecardinality · m^{M_l}_P(C) · m^{LP;L_1,...,L_k}(L_i)

which can be easily re-expressed as:

∑_{C,G; D=C∩G} samplecardinality · m^{M_l}_P(C) · m^{LP;L_1,...,L_k}(G)

So generally:

E(m^{M_{l″}}_{P″}(D)) = c · ∑_{C,G; D=C∩G} m^{M_l}_P(C) · m^{LP;L_1,...,L_k}(G)

with c a normalizing constant. Hence the claimed relationship really holds.

Q.e.d.✷
Example 10 The generalized labeling process and its consequences may be realized in our Citizen Coot example by randomly assigning the sold bottles for evaluation to two "experts": one of them, considering about 30% of the bottles, runs the full M test procedure, and the other, having to consider the remaining 70% of checked bottles, makes it easier for himself by making use of his belief in the labeling l of Example 7. ✸
4.1 Summary of the New Interpretation

The following results have been established in this Section:
• the concepts of measurement and modified measurement methods have been introduced,

• the concept of a labelled population has been developed,

• it has been shown that a labelled population with the modified measurement method can be considered in terms of a Joint Belief Distribution in the sense of DS-Theory,

• the process of "relabeling" of a labelled population has been defined and shown to be describable as a Belief Distribution,

• it has been shown that the relationship between the Belief Distributions of the resulting relabeled population, the basic population and the relabeling process can be expressed in terms of the Dempster Rule of Independent Evidence Combination.
This last result can be considered as of particular practical importance. The interpretation schemata of DS Theory made by other authors suffered from one basic shortcoming: if we interpret population data as well as evidence in terms of their DS schemes, and then combine the evidence with the population data (understood as a Dempster type of conditioning), then the resulting belief function cannot be interpreted in terms of the population data scheme, with subsequent updating of evidence making things worse until even the weakest relation between the belief function and the (selected sub)population is lost.

In this paper we achieve a break-through: data have the same interpretation scheme after any number of evidential updates, and hence the belief function can be verified against the data at any moment of DS evidential reasoning.
The above definition and properties of the generalized labeling process should be considered from a philosophical point of view. If we take one by one the objects of our domain, possibly labelled previously by an expert in the past, and assign a label independently of the actual value of the attribute of the object, then we cannot claim in any way that such a process may be attributed to the opinion of the expert. Opinions of two experts may be independent of one another, but they cannot be independent of the subject under consideration. This is the point of view with which most people would agree; and should the opinions of the experts not depend on the subject, then at least one of them may be considered no expert at all.
This is exactly what we want to point at with our interpretation: the precise pinpointing of what kind of independence is assumed within the Dempster-Shafer theory is essential for its usability. Under our interpretation, the independence consists in trying to select a label to fit to an object independently of whatever properties this object has (including its previous labeling). The distribution of labels for fitting is exactly identical from object to object. The point where the dependence of the object's labeling on its properties comes to appearance is when the measurement method states that the label does not fit. Then the object is discarded. From a philosophical point of view it means exactly that we try to impose our philosophy of life onto the facts: cumbersome facts are neglected and ignored. We suspect that this is exactly the justification of the name "belief function". It expresses not what we see but what we would like to see.
Our suspicion is strongly supported by the quite recent statement of Smets that "authors (of multiple interpretations in terms of upper and lower probability models, inner and outer measures, random sets, probabilities of provability, probabilities of necessity etc.) usually do not explain or justify the dynamic component, that is, how updating (conditioning) is to be handled (except in some cases by defining conditioning as a special case of combination). So I (that is, Smets) feel that these partial comparisons are incomplete, especially as all these interpretations lead to different updating rules." Our interpretation explains both the static and the dynamic component of the DST, and leads to no rule other than the Dempster Rule of Combination, hence it may be acceptable from the rigorous point of view of Smets. As, in the light of Smets' paper [26], we have presented the only correct probabilistic interpretation of the DS theory so far, we feel authorized to claim that our philosophical assessment of the DST is the correct one.

We have seen from the proofs of the theorems of this paper that our interpretation may be called a true one. The paper of Smets [26] permits us to claim that we have found the true interpretation.
5 Belief from Data

As the DS-belief function introduced in this paper is defined in terms of frequentist measures, there exists a direct possibility of calculating the belief function from data.
It has to be assumed that we have a data set for which the measurements of the type M_l have been carried out for each singleton subset of the space of discourse Ξ. The results of these measurements may be available, for example, as a set-valued attribute associated with each object, in such a way that the values actually appearing are those for which the singleton set tests were positive (i.e. TRUE). In this case, if for an object the attribute X has the value X = A with A ⊆ Ξ, then this object increases the count for the DS-Mass Function m(A) (and for no other m).
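A minimal sketch of this counting procedure, on an invented data set of set-valued attribute readings:

```python
# Each record carries the set A ⊆ Ξ of values whose singleton tests came out
# TRUE; each record increments the count of m(A) and of no other m.
from collections import Counter

records = [frozenset({'H'}), frozenset({'H', 'S'}), frozenset({'H'}),
           frozenset({'M'}), frozenset({'M', 'D'})]

counts = Counter(records)
m = {A: n / len(records) for A, n in counts.items()}
print(m)   # {'H'}: 0.4, {'H','S'}: 0.2, {'M'}: 0.2, {'M','D'}: 0.2
```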
Whenever any statistical quantity is estimated from data, there exists some risk (uncertainty) concerning unseen examples. If we assume some significance level, we can complete the estimation by taking the lower bounds as the actual estimates of the m's and shifting the remaining burden (summing up to 1) onto m(Ξ), just taking for granted that doubtful cases may be considered as matching all the measurements.
6 Discussion

In the past, various interpretations have been sought for the Dempster-Shafer Bel-Functions. Two main streams of research were distinguished by Smets [26]: probability related approaches and probability discarding approaches (the former despised, the latter welcomed by Smets). Let us make some comparisons with our interpretation and its underlying philosophy.
6.1 Shafer and Smets
Shafer [22] and Smets [26] have made some strong statements in defense of the Dempster-Shafer theory against sharp criticism of this theory by its opponents, as well as by unfortunate users of the DST who wanted to attach it to the dirty reality (that is, objectively given databases). Smets [26], and also initially Shafer [21], insisted on Bels not being connected to any empirical measure (frequency, probability etc.), considering the domain of DST applications as one where "we are ignorant of the existence of probabilities", and not one with "poorly known probabilities" ([26], p. 324). The basic property of probability which should be dropped in the DST axiomatization is the additivity of belief measures. Surely, it is easily possible to imagine situations where in real life additivity is not granted: imagine we had a cage with 3 pigs and we put into it 3 hungry lions two hours ago; how many animals are there now? (3 + 3 < 6). Or ten years ago we left one young man and one young woman on an island in the middle of the Atlantic Ocean with food and weapons sufficing for 20 years. How many human beings are there now? (1 + 1 > 2).
The trouble is, however, that the objects stored in the databases of a computer usually behave (under normal operation) in an additive manner. Hence the DST is simply disqualified for any reasoning within human-collected data on the real world, if we accept the philosophy of Smets and Shafer.
The question may be raised at this point: what else, practically useful, can be obtained from a computer reasoning on the basis of such a DST? If the DST models, as Smets and Shafer claim, human behaviour during evidential reasoning, then it would have to be demonstrated that humans indeed reason as the DST predicts. Take e.g. 1000 people who have never heard of Dempster-Shafer theory, briefly explain the static component, provide them with two opinions of independent experts and ask them what their final beliefs are. Should their answers correspond to the results of the DST (or at least converge toward them), then the computer, if fed with our knowledge, would be capable of predicting our conclusions on a given subject. However, to my knowledge, no experiment like this has ever been carried out. Under these circumstances, a computer reasoning with DST would tell us what we have to think and not what we think. But I don't suspect that anybody would be happy about a computer like this.

Hence, from the point of view of computer implementation, the philosophy of Smets and Shafer is not acceptable. Compare also the Discussion in [10] on the subject.
Both of them felt a bit uneasy about a total loss of reference to any scientific experiment checking the practical applicability of the DST, and suggested some probabilistic background for decision making (e.g. the pignistic probabilities of Smets), but I am afraid that by these interpretations they fall precisely into the same pitfalls they claimed to avoid by their highly abstract philosophy.
As far as the statistical properties of Shafer's [21] notion of evidence are concerned, sufficient criticism has been expressed by Halpern and Fagin ([10], sections 4-5). Essentially it is pointed out there that "the belief function that represents the joint observation is in general not equal to the combination of the belief functions representing the individual (independent) observations" (p. 297). The other point raised there is that, though it is possible to capture evidence properly in belief functions in terms of probability-of-observations update functions (section 4 of [10]), it is not possible to do the same if we would like to capture evidence in terms of beliefs-of-observations update functions (section 5 of [10]).
As far as Smets' probabilistic interpretations are concerned, let us "continue" the killer example of [26], pages 330-331: "There are three potential killers, A, B, C. Each can use a gun or a knife. I shall select one of them, but you will not know how I select the killer. The killer selects his weapon by a random process with p(gun) = 0.2 and p(knife) = 0.8. Each of A, B, C has his own personal random device, the random devices are unrelated. ... Suppose you are a Bayesian and you must express your "belief" that the killer will use a gun. The BF (belief function) solution gives Bel(gun) = 0.2 × 0.2 × 0.2 = 0.008. ... Would you defend 0.2? But this applies only if I select a killer with a random device ... But I never said I would use a random device; I might be a very hostile player and cheat whenever I can. ... So you could interpret bel(x) as the probability that you are sure to win whatever Mother Nature (however hostile) will do."
Yes, I will try to continue the hostile Mother Nature game here. For completeness I understand that Bel(knife) = 0.8^3 = 0.512 and Bel({gun, knife}) = 1. But suppose there is another I, the chief of gangster science-fiction physicians, making decisions independently of the chief I of the killers. The chief I of physicians knows of the planned murder and has three physicians X, Y, Z. Each can either rescue a killed man or let him die. I shall select one of them, but you will not know how I select the physician. The physician, in case of killing with a gun, selects his attitude by a random process with p(rescue|gun) = 0.2 and p(let die|gun) = 0.8, and he lets the person die otherwise. Each of X, Y, Z has his own personal random device, the random devices are unrelated. ... Suppose you are a Bayesian and you must express your "belief" that the physician will rescue if the killer uses a gun. The BF (belief function) solution gives Bel1(rescue|gun) = 0.2^3 = 0.008, Bel1(let die|gun) = 0.8^3 = 0.512, Bel1({rescue, let die}|gun) = 1. Also Bel2(let die|knife) = 1. As the scenarios for Bel1 and Bel2 are independent, let us combine them by the Dempster rule: Bel12 = Bel1 ⊕ Bel2. We make use of Smets' claim that "the de re and de dicto interpretations lead to the same results" ([26], p. 333), that is Bel(A|B) = Bel(¬B ∨ A). Hence

m12({(gun, let die), (knife, let die), (knife, rescue)}) = 0.480
m12({(gun, rescue), (knife, rescue)}) = 0.008
m12({(knife, rescue), (gun, let die)}) = 0.512

Now let us combine Bel12 with the original Bel. We obtain:

m ⊕ m12({(gun, let die)}) = 0.008 · 0.480 + 0.008 · 0.512 = 0.008 · 0.992
But these two unfriendly chiefs of gangster organizations can be extremely unfriendly, and in fact your chance of winning a bet may be as bad as 0.008 · 0.512 for the event (gun, let die). Hence the "model" proposed by Smets for understanding belief functions as an "unfriendly Mother Nature" is simply wrong. If the Reader finds the combination of Bel2 with the other Bels a little tricky, then for justification he should refer to the paper of Smets and have a closer look at all the other examples.
Now returning to the philosophy of "subjectivity" of Bel measures: even if a human being may possess his private view on a subject, it is only after we formalize the feeling of subjectiveness, and hence ground it in the data, that we can rely on any computer's "opinion". We hope we have found one such formalization in this paper. The notion of labeling developed here captures one aspect of subjective human behaviour: if one has found one plausible explanation, one is too lazy to look for another one. So the process of labeling may express our personal attitudes, prejudices, sympathies etc. The interpretation deliberately drops the striving for maximal objectiveness aimed at by traditional statistical analysis. Hence we think this may be a promising path for further research going beyond the DS-Theory formalism.
Smets [26] views probability theory as a formal mathematical apparatus and hence puts it on the same footing as his view of the DST. However, in our opinion, he totally ignores one important thing: the abstract concept of probability has its real-world counterpart of relative frequency, which tends to behave approximately like the theoretical probability in sufficiently many experimental settings as to make the abstract concept of probability useful for practical life. And a man-in-the-street will expect the DST to possess such a counterpart as well, or otherwise the DST will be considered as another version of the theory of counting devils on a pin-head.
Let us also have a look at the interpretations despised by Shafer and Smets (i.e. all those mentioned below):
6.2 DST and Random Sets
The canonic random set interpretation [16] is one with a statistical process over set instantiations. The rule of combination then assumes that two such statistically independent processes are run and we are interested in their intersections. This approach is not sound, as the empty intersection is excluded and this will render any two processes statistically dependent. We overcome this difficulty by assuming in a straightforward manner that we are "walking" from population to population when applying the Rule of Combination. Classical DS theory in fact assumes such a walk implicitly, or it in fact drops the assumption that Bel() of the empty set is equal to 0. In this sense the random set approaches may be considered as sound as ours.
However, in many cases the applications of the model are insane. For example, to imitate logical inference it is frequently assumed that we have a Bel-function describing the actually observed value of a predicate P(x), and a Bel-function describing the implication "If P(x) then Q(x)" [13]. It is assumed further that the evidence on the validity of both Bels has been collected independently, and one applies the DS rule of combination to calculate the Bel of the predicate Q(x). One has then to assume that there is a focal m of the following expression: m({(P(x), Q(x)), (¬P(x), Q(x)), (¬P(x), ¬Q(x))}), which actually means that with non-zero probability at the same time P(x) and ¬P(x) hold for the same object, as we will see in the following example.

Let Bel1 represent our belief in the implication, with focal points:

m1(P(x) → Q(x)) = 0.5, m1(¬(P(x) → Q(x))) = 0.5

Let further the independent opinion Bel2 on P(x) be available in the form of focal points:

m2(P(x)) = 0.5, m2(¬P(x)) = 0.5

Let Bel12 = Bel1 ⊕ Bel2 represent the combined opinions of both experts. The focal points of Bel12 are:

m12({(P(x), Q(x))}) = 0.33, m12({(P(x), ¬Q(x))}) = 0.33,
m12({(¬P(x), Q(x)), (¬P(x), ¬Q(x))}) = 0.33
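These masses can be checked mechanically; a small sketch, reusing the `dempster` function given earlier (the frame encoding as truth-value combinations of (P(x), Q(x)) is our assumption):

```python
# Frame: the four truth-value combinations of (P(x), Q(x)).
TT, TF, FT, FF = 'TT', 'TF', 'FT', 'FF'
m1 = {frozenset({TT, FT, FF}): 0.5,   # P(x) -> Q(x) holds
      frozenset({TF}): 0.5}           # ¬(P(x) -> Q(x))
m2 = {frozenset({TT, TF}): 0.5,       # P(x)
      frozenset({FT, FF}): 0.5}       # ¬P(x)
print(dempster(m1, m2))
# {TT}: 1/3, {TF}: 1/3, {FT,FF}: 1/3 -- the 0.33 focal points quoted above
```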
m12({(P(x), Q(x))}) = 0.33 makes us believe that there exist objects for which both P(x) and Q(x) hold. However, a sober (statistical) look at the expert opinions suggests that all situations in which the implication P(x) → Q(x) holds must result from the falsity of P(x); hence whenever Q(x) holds, ¬P(x) holds. These two facts combined mean that P(x) and its negation have to hold simultaneously. This is actually an absurdity, deliberately overlooked. The source of this misunderstanding is obvious: the lack of a proper definition of what is and what is not independent. Our interpretation allows for a sanitation of this situation. We are not telling that the predicate and its negation hold simultaneously. Instead we say that for one object we modify the measurement procedure (set a label) in such a way that, applied for the calculation of P(x), it yields true, and at the same time for another object, with the same original properties, we make another modification of the measurement procedure (attach a label to it) so that the measurement of ¬P(x) also yields true, because possibly two different persons were enforcing their different beliefs onto different subsets of the data.
Our approach is also superior to the canonical random set approach in the following sense: the canonical approach requires knowledge of the complete random set realizations of two processes on an object to determine the combination of both processes. We, however, postpone the acquisition of knowledge of the precise instantiation of the properties of the object by interleaving the concept of measurement and the concept of the labeling process. This has a close resemblance to practical processing whenever a diagnosis for a patient is made. If a physician finds a set of hypotheses explaining the symptoms of a patient, he will usually not try to carry out other testing procedures than those related to the plausible hypotheses. He clearly runs the risk that there exists a different set of hypotheses also explaining the patient's symptoms, and so a disease unit possibly present may not be detected on time, but usually the risk is sufficiently low to proceed in this way, and the cost savings may prove enormous.
6.3 Upper and Lower Probabilities
Still another approach was to handle Bel and Pl as lower and upper probabilities [4]. This approach is of limited use, as not every set of lower and upper probabilities leads to Bel/Pl functions [12], hence establishing only a unidirectional relationship between probability theory and the DS-theory. Under our interpretation, the Bel/Pl function pair may be considered as a kind of interval approximation to some "intrinsic" probability distribution which, however, cannot be accessed by feasible measurements and is only of interest as a kind of qualitative explanation of the physical quantities really measured.

Therefore another approach was to handle them as lower/upper envelopes of some probability density function realization [12], [8]. However, the DS rule of combination of independent evidence failed there.
6.4 Inner and Outer Measures
Still another approach was to handle Bels/Pls in probabilistic structures rather than in probabilistic spaces [7]. Here, the DS rule could be justified as one of the possible outcomes of independent combinations, but no stronger properties were available. This is due to the previously mentioned fact that the exclusion of empty intersections actually renders most conceivable processes dependent. Please notice that under our interpretation no such ambiguity occurs. This is because we not only drop objects with empty intersections but also relabel the remaining ones, so that any probability calculated afterwards does not refer to the original population.

So it was tried to drop the DS rule altogether in the probabilistic structures, but then it was not possible to find a meaningful rule for multistage reasoning [10]. This is a very important negative outcome. As the Dempster-Shafer Theory is sound in this respect and possesses many useful properties (as mentioned in the Introduction), an interpretation meeting the axiomatic system of DS Theory should be sought, rather than trying to violate its fundamentals. Hence we consider our interpretation a promising one, for which a decomposition of the joint distribution paralleling the results for probability distributions may be found based on the data.
6.5 Rough Set Approach
An interesting alternative interpretation of the Dempster-Shafer Theory was found within the framework of rough set theory [24], [9]. Essentially, rough set theory searches for an approximation of the value of a decision attribute by some other (explaining) attributes. It usually happens that those attributes are capable only of providing a lower and an upper approximation to the value of the decision attribute (that is, the set of vectors of explaining attributes supporting only this value of the decision variable, and the set of vectors of explaining attributes supporting also this value of the decision variable, respectively; for details see the texts of Skowron [24] and Grzymala-Busse [9]). The Dempster rule of combination is interpreted by Skowron [25] as a combination of opinions of independent experts, who possibly look at different sets of explaining attributes and hence may propose different explanations.
The difference between our approach and the one based on rough sets lies first of all in the ideological background: we assume that the "decision attribute" is set-valued, whereas the rough-set approach assumes it to be single-valued. This could be overcome by some tricks which will not be explained in detail here. But the combination step is essential here: if we assume that the data sets for forming the knowledge of these two experts are exhaustive, then it can never occur that their opinions are contradictory. Yet the DST rule of combination uses the normalization factor for dealing with cases like this. Also, the opinions of experts may have only the form of a simple (that is, deterministic) support function. Hence, the rough-set interpretation implies axioms not actually present in the DST. Hence the rough set interpretation is on the one hand restrictive, and on the other hand not fully conforming to the general DST. From our point of view the DST would change the values of decision variables rather than recover them from expert opinions.
Here we come again to the problem of viewing the independence of experts. The DST assumes some strange kind of independence within the data: the proportionality of the distribution of masses of sets of values among intersecting subsets, weighted by their masses in the other expert's opinion. Particularly unhappy for rough set theory is the fact that, given a value of the decision variable, the respective indicating vectors of explaining variables' values must be proportionally distributed among the experts not only for this decision attribute value, but also for all the other decision attribute values that ever belong to the same focal point. Hence the applicability of the rough set approach is hard to justify by a simple ("usual", as Shafer wants) statistical test. On the other hand, the statistical independence required for Dempster rule application within our approach is easily checked.
To demonstrate the problem of rough set theory with the combination of opinions of independent experts, let us consider an example of two experts having the combined explanatory attributes E1 (for expert 1) and E2 (for expert 2), both trying to guess the decision attribute D. Let us assume that D takes one of two values: d1, d2; E1 takes one of three values e11, e12, e13; E2 takes one of three values e21, e22, e23. Furthermore, let us assume that the rough set analysis of an exhaustive set of possible cases shows that the value e11 of the attribute E1 indicates the value d1 of the decision attribute D, e12 indicates d2, and e13 indicates the set {d1, d2}. Also let us assume that the rough set analysis of an exhaustive set of possible cases shows that the value e21 of the attribute E2 indicates the value d1 of the decision attribute D, e22 indicates d2, and e23 indicates the set {d1, d2}. From the point of view of bayesian analysis, four cases of causal influence may be distinguished (arrows indicate the direction of dependence):

E1 → D → E2
E1 ← D ← E2
E1 ← D → E2
E1 → D ← E2
unconditional independence of E1 and E2 . Then we have tthat:
(
ProbP (ω)
E1 (ω) = e11 ∧ E2 (ω) = e22 ) =
ω
=(
ProbP (ω)
ω
E1 (ω) = e11 ) · (
ProbP (ω)
E2 (ω) = e22 ) > 0
ω
P (ω)
E1 (ω) = e11 ∧ E2 (ω) = e22 ) > 0
However, it is impossible that ( Prob
ω
because we have to do with experts who may provide us possibly with information not specific enough, but will never provide us with contradictory
60
MIECZYSLAW A. KLOPOTEK
information. We conclude that unconditional independence of experts is impossible.
Let us turn to the independence of E1 and E2 conditioned on D. We introduce the following notation:

p1 = Prob^{P(ω)}_ω (D(ω) = d1)
p2 = Prob^{P(ω)}_ω (D(ω) = d2)
e1′ = Prob^{(D(ω)=d1)∧P(ω)}_ω (E1(ω) = e11)
e3′ = Prob^{(D(ω)=d1)∧P(ω)}_ω (E1(ω) = e13)
f1′ = Prob^{(D(ω)=d1)∧P(ω)}_ω (E2(ω) = e21)
f3′ = Prob^{(D(ω)=d1)∧P(ω)}_ω (E2(ω) = e23)
e2″ = Prob^{(D(ω)=d2)∧P(ω)}_ω (E1(ω) = e12)
e3″ = Prob^{(D(ω)=d2)∧P(ω)}_ω (E1(ω) = e13)
f2″ = Prob^{(D(ω)=d2)∧P(ω)}_ω (E2(ω) = e22)
f3″ = Prob^{(D(ω)=d2)∧P(ω)}_ω (E2(ω) = e23)
Let Bel1 and m1 be the belief function and the mass function representing the knowledge of the first expert; let Bel2 and m2 be the belief function and the mass function representing the knowledge of the second expert. Let Bel12 and m12 be the belief function and the mass function representing the knowledge contained in the combined usage of the attributes E1, E2 if used for the prediction of D, on the grounds of rough set theory. It can be easily checked that:

m1({d1}) = e1′·p1,  m1({d2}) = e2″·p2,  m1({d1, d2}) = e3′·p1 + e3″·p2
m2({d1}) = f1′·p1,  m2({d2}) = f2″·p2,  m2({d1, d2}) = f3′·p1 + f3″·p2
and if we assume the conditional independence of E1 and E2 conditioned on D, then we obtain:

m12({d1}) = e1′·f1′·p1 + e1′·f3′·p1 + e3′·f1′·p1
m12({d2}) = e2″·f2″·p2 + e2″·f3″·p2 + e3″·f2″·p2
m12({d1, d2}) = e3′·f3′·p1 + e3″·f3″·p2

However, the Dempster rule of combination would result in (c being a normalization constant):

m1 ⊕ m2({d1}) = c·(e1′·f1′·p1² + e1′·f3′·p1² + e1′·f3″·p1·p2 + e3′·f1′·p1² + e3″·f1′·p1·p2)
m1 ⊕ m2({d2}) = c·(e2″·f2″·p2² + e2″·f3′·p1·p2 + e2″·f3″·p2² + e3′·f2″·p1·p2 + e3″·f2″·p2²)
m1 ⊕ m2({d1, d2}) = c·(e3′·f3′·p1² + e3″·f3″·p2² + e3′·f3″·p1·p2 + e3″·f3′·p1·p2)
Obviously, Bel12 and Bel1 ⊕ Bel2 are not identical in general. We conclude that conditional independence of experts is also impossible. Hence no usual statistical independence assumption is valid for the rough set interpretation of the DST. This fact points at where the difference between the rough set interpretation and our interpretation lies: in our interpretation, traditional statistical independence is incorporated into Dempster's scheme of combination (the labelling process).
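To see the divergence concretely, here is a quick numerical check with invented parameter values (any consistent choice will do):

```python
# Numerical check that the rough-set combination m12 differs from the
# Dempster combination m1 ⊕ m2; the parameter values are invented.
p1, p2 = 0.5, 0.5
e1p, e3p = 0.6, 0.4      # e1', e3'   (frequencies of e11, e13 given D = d1)
e2b, e3b = 0.7, 0.3      # e2'', e3'' (frequencies of e12, e13 given D = d2)
f1p, f3p = 0.5, 0.5      # f1', f3'
f2b, f3b = 0.8, 0.2      # f2'', f3''

# rough-set result under conditional independence given D
m12 = {'d1':  (e1p*f1p + e1p*f3p + e3p*f1p) * p1,
       'd2':  (e2b*f2b + e2b*f3b + e3b*f2b) * p2,
       'd12': e3p*f3p*p1 + e3b*f3b*p2}

# Dempster rule applied to m1 and m2 as given above
m1 = {'d1': e1p*p1, 'd2': e2b*p2, 'd12': e3p*p1 + e3b*p2}
m2 = {'d1': f1p*p1, 'd2': f2b*p2, 'd12': f3p*p1 + f3b*p2}
raw = {'d1':  m1['d1']*m2['d1'] + m1['d1']*m2['d12'] + m1['d12']*m2['d1'],
       'd2':  m1['d2']*m2['d2'] + m1['d2']*m2['d12'] + m1['d12']*m2['d2'],
       'd12': m1['d12']*m2['d12']}
c = 1.0 / sum(raw.values())
mDS = {k: c*v for k, v in raw.items()}
print(m12)   # {'d1': 0.40, 'd2': 0.47, 'd12': 0.13}
print(mDS)   # approx {'d1': 0.338, 'd2': 0.508, 'd12': 0.155} -- different
```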
By the way, the lack of correspondence between statistical independence and the Dempster rule of combination is characteristic not only of the rough set interpretation, but also of most of the other ones. The Reader should read carefully the clumsy statements of Shafer about DST and statistical independence in [22].
6.6 General Remarks
The Dempster-Shafer Theory has existed for over two decades already. Though it was claimed to reflect various aspects of human reasoning, it has not been widely used in expert systems until recently, due to its high computational complexity. Three years ago, however, an important paper of Shenoy and Shafer [23] was published, along with papers of other authors similar in spirit, which meant a break-through for the application of both bayesian and Dempster-Shafer theories in reasoning systems, because it demonstrated that if a joint (bayesian or DS) belief distribution can be decomposed in the form of a belief network, then it can be both represented in a compact manner and marginalized efficiently by local computations.

This fact makes them suitable as alternative fundamentals for the representation of (uncertain) knowledge in expert system knowledge bases [11].
Reasoning in bayesian belief networks has been the subject of intense research also earlier [20], [23], [15], [17]. There exist methods of imposing various logical constraints on the probability density function and of calculating marginals not only of single variables but of complicated logical expressions over elementary statements of the type X = x (x belonging to the domain of the variable X) [17]. There exist also methods determining the decomposition of a joint probability distribution, given by a sample, into a bayesian belief network [3], [18], [1], [27].

It is also known that formally probability distributions can be treated as special cases of Dempster-Shafer belief distributions (with singleton focal points) [10].
However, for the application of DS Belief Functions for the representation of uncertainty in expert system knowledge bases there exist several severe obstacles. The main one is the missing frequentist interpretation of the DS belief function, and hence neither a comparison of the deduction results with experimental data nor any quantitative or even qualitative conclusions can be drawn from the results of deduction in Dempster-Shafer-theory based expert systems [13].

Numerous attempts to find a frequentist interpretation have been reported (e.g. [7], [8], [9], [10], [12], [22], [24]). But, as Smets [26] states, they failed either when trying to incorporate the Dempster rule or when explaining the nature of the probability interval approximation. The Dempster-Shafer Theory therefore experienced sharp criticism from several authors in the past [17], [10]. It is suggested in those critical papers that the claim of DST to represent uncertainty stemming from ignorance is not valid. Hence alternative rules of combination of evidence have been proposed. However, these rules fail to fulfill the Shenoy/Shafer axioms of local computation [23] and hence are not tractable in practice. These failures of those authors meant to us that one should nonetheless try to find a meaningful frequentist interpretation of DST compatible with the Dempster rule of combination.
We have carefully studied several of these approaches and are convinced that the key to many of those failures (besides those mentioned by Halpern in [10]) was: (1) treating the Bel-Pl pair as an interval approximation, and (2) viewing the combination of evidence as a process of approaching a point estimation. In this paper we claim that the most reasonable treatment of Bels and Pls is to consider them to be POINT ESTIMATES of a probability distribution over set-valued attributes (rather than interval estimates of a probability distribution over single-valued attributes). Of course, we claim also that the Bel-Pl pair estimates by an interval some probability density function, but in our interpretation that "intrinsic" probability density function is of little interest for the user. The combination of evidence represents, in our interpretation, a manipulation of data by imposing on them our prejudices (rather than striving for extraction of true values).
Under these assumptions a frequentistically meaningful interpretation of the Bels can be constructed, which remains consistent under combination of the joint distribution with "evidence", giving a concrete quantitative meaning to the results of expert system reasoning. Within this interpretation we were able to prove the correctness of the Dempster-Shafer rule. This means that this frequentist interpretation is consistent with the DS-Theory to the largest extent ever achieved.
7 Conclusions
• According to Smets [26], there has existed no proper frequentist interpretation of the Dempster-Shafer theory of evidence so far.

• In this paper a novel frequentist interpretation of the Dempster-Shafer Theory has been found, allowing for a close correspondence between Belief and Plausibility functions and real data.

• This interpretation fits completely into the framework of Bel/Pl definitions and into the Dempster rule of combination of independent evidence, relating this rule, for the first time in DST history, to plain statistical independence, and thus overcoming the difficulties of many alternative interpretations of the Dempster-Shafer Theory. Hence this interpretation dismisses the claim of Smets [26] that such an interpretation cannot exist.

• It is distinguished by the fact of postponing the moment of measuring object properties until after the combination of evidence, leading even to dropping some costly measurements altogether.

• The interpretation allows for a subjective treatment of Bels and Pls as approximations to the unknown probability distribution of an intrinsic, but not accessible, attribute.

• The introduced concept of a labeled population may to some extent represent subjectivity in viewing probabilities.

• This interpretation questions the common usage of the DST as a means to represent and to reason with uncertainty stemming from ignorance. This view has already been shaken by the works of Pearl [17] and Halpern and Fagin [10]. What our interpretation states clearly is that the DST should be viewed as a way to express unwillingness to accept objective facts rather than as a means to express ignorance about them. Hence it should be called a theory of prejudices rather than a theory of evidence.
Finally, I feel obliged to apologize and to say that all critical remarks towards interpretations of DST elaborated by other authors result from deviations of those interpretations from the formalism of the DST. I do not, however, consider a deviation from DST a crime, because modifications of DST may have, and possibly do have, a greater practical importance than the original theory. The purpose of this paper was to shed a bit more light onto the intrinsic nature of pure DST, and not to call for orthodox attitudes towards DST.
Acknowledgements
I am indebted to the anonymous referees who greatly contributed to the enhancement of the quality of presentation of this paper.
References
[1] S. Acid, L.M. deCampos, A. Gonzales, B. Molina, N. Perez de la Blanca:
Learning with CASTLE, Symbolic and Quantitative Approaches to Uncertainty, Kruse R., Siegel P. (eds), Lecture Notes In Computer Science
548, Springer-Verlag 1991, 99-106
[2] F. Bacchus: L.p., a logic for representing and reasoning with statistical
knowledge, Computer Intelligence 6 (1990), 209-231.
[3] C.K. Chow, C.N. Liu: Approximating discrete probability distributions
with dependence trees, IEEE Transactions on Information Theory, Vol. IT-14, No. 3 (May 1968), 462-467
[4] A.P. Dempster: Upper and lower probabilities induced by a multi-valued
mapping, Ann. Math. Stat. 38 (1967), 325-339
[5] A.P. Dempster: A generalization of Bayesian inference, J. R. Stat. Soc.
Ser.B 30 (1968), 205-247.
[6] R. Fagin, J.Y. Halpern: Uncertainty, belief, and probability, Proc. Int.
Joint Conf. AI, IJCAI89, Detroit, 1161-1167, 1989
[7] R. Fagin, J.Y. Halpern: Uncertainty, belief, and probability, Comput.
Intell. 7 (1991), 160-173
[8] R. Fagin, J.Y. Halpern: A new approach to updating beliefs, in: J.F.
Lemmer, L.N. Kanal (eds): Uncertainty in Artificial Intelligence 6 (North-Holland) Amsterdam, 1991, 347-374.
[9] J.W. Grzymala-Busse: Managing Uncertainty in Expert Systems, Kluwer, 1991.
[10] J.Y. Halpern, R. Fagin: Two views of belief: belief as generalized probability and belief as evidence, Artificial Intelligence 54 (1992), 275-317
[11] M. Henrion: An introduction to algorithms for inference in belief nets,
in: Henrion M., Shachter R.D., Kanal L.N., Lemmer J.F. (eds): Uncertainty in
Artificial Intelligence 5, Elsevier Science Publishers B.V. (North-Holland)
(1990), 129-138
[12] H.E. Kyburg Jr: Bayesian and non-Bayesian evidential updating, Artificial Intelligence 31 (1987), 271-293.
[13] Y. Ma, D.C. Wilkins: Induction of uncertain rules and the sociopathicity property in Dempster-Shafer theory, in Symbolic and Quantitative
Approaches to Uncertainty, Kruse R., Siegel P. (eds), Lecture Notes In
Computer Science 548, Springer-Verlag 1991, 238-245
[14] C.G. Morgan: Logic, probability theory and artificial intelligence - Part I: the probabilistic foundations of logic, Comput. Intell. 7, 94-109 (1991)
[15] J. Pearl: A constraint-propagation approach to probabilistic reasoning,
in Kanal L.N., Lemmer J.F. (eds): Uncertainty in Artificial Intelligence,
Elsevier Science Publishers B.V. (North Holland), 1986, 357-369
[16] H.T. Nguyen: On random sets and belief functions, J. Math. Anal. Appl.
65, 539-542, 1978.
[17] J. Pearl: Probabilistic Reasoning in Intelligent Systems: Networks of
Plausible Influence, Morgan and Kaufmann, 1988
[18] G. Rebane, J. Pearl: The recovery of causal poly-trees from statistical
data, in Uncertainty in Artificial Intelligence 3, Kanal L.N., Levit T.S.,
Lemmer J.F. eds., Elsevier Science Publishers B.V., (North Holland)
1989, 175-182
[19] E.H. Ruspini: The logical foundation of evidential reasoning, Tech. Note
408, SRI International, Menlo Park, Calif. USA, 1986.
[20] R.D. Shachter: Evidence absorption and propagation through evidence
reversals, in: Henrion M., Shachter R.D., Kanal L.N., Lemmer J.F. (eds): Uncertainty in Artificial Intelligence 5, Elsevier Science Publishers B.V.
(North-Holland) (1990), 173-190
[21] G. Shafer: A Mathematical Theory of Evidence, Princeton University
Press, Princeton, 1976
[22] G. Shafer: Belief Functions. Introduction, in: G. Shafer, J. Pearl eds:
Readings in Uncertain Reasoning, (ISBN 1-55860-125-2, Morgan Kaufmann Publishers Inc., San Mateo, California, 1990), 473-481.
[23] P.P. Shenoy, G. Shafer: Axioms for probability and belief-function propagation, in: Shachter R.D., Levitt T.S., Kanal L.N., Lemmer J.F. (eds):
Uncertainty in Artificial Intelligence 4, Elsevier Science Publishers B.V.
(North Holland), 1990.
[24] A. Skowron: Boolean reasoning for decision rules generation, in: J. Komorowski, Z.W. Raś (eds): Methodologies for Intelligent Systems, Lecture Notes in Artificial Intelligence 689, Springer-Verlag, 1993, pp. 295-305
[25] A. Skowron: Boolean reasoning for decision rules generation, a talk at
the Intelligent Information Systems Workshop, Augustów, June 7-11,
1993.
[26] Ph. Smets: Resolving misunderstandings about belief functions, International Journal of Approximate Reasoning 6 (1992), 321-344.
[27] S. Srinivas, S. Russell, A. Agogino: Automated construction of sparse
Bayesian networks from unstructured probabilistic models and domain
information, in: Henrion M., Shachter R.D., Kanal L.N., Lemmer J.F. (eds): Uncertainty in Artificial Intelligence 5, Elsevier Science Publishers B.V.
(North-Holland) (1990), 295-308
arXiv:1712.03382v4 [] 31 Jan 2018
Visual aesthetic analysis using deep neural network:
model and techniques to increase accuracy without
transfer learning
Muktabh Mayank Srivastava
ParallelDots, Inc.
Email: muktabh@paralleldots.com
Sonaal Kant
ParallelDots, Inc.
Email: sonaal@paralleldots.com
Abstract—We train a deep Convolutional Neural Network
(CNN) from scratch for visual aesthetic analysis in images
and discuss techniques we adopt to improve the accuracy. We
avoid the prevalent transfer learning approach of using pretrained
weights to perform the task and instead train a model from scratch,
obtaining an accuracy of 78.7% on the AVA2 dataset, close to the
best available models (85.6%). We further show that accuracy
increases to 81.48% on enlarging the training set with additional
score percentiles of the entire AVA dataset, showing that our
algorithm gets better with more data.
Index Terms—Visual Aesthetic Analysis, Convolutional Neural
Networks, Deep Learning, Image Aesthetics Evaluation.
I. INTRODUCTION
Visual aesthetic analysis is the task of classifying images
into being perceived as attractive or unattractive by humans.
While handcrafted image features were previously a common
way to approach aesthetic analysis, CNNs have recently been used
to solve the problem as state-of-the-art approaches [4], [8],
[2], [5]. They have multiple advantages, such as lower
inference time and the ability to be deployed on mobile devices
after quantization. A variety of standard CNN architectures
pretrained on the ImageNet dataset [1] are readily available as
open source for use. They are commonly utilized to achieve
remarkable results on visual aesthetic datasets. However, the
use of pretrained weights leaves very little scope for modifying
the original architectures. On the other hand, training a CNN
from scratch only on a visual aesthetic dataset faces the risk
of overfitting (due to the smaller size of the dataset compared
to ImageNet) and limits the depth of modified architectures.
The contributions of this paper are:
1) We propose a deep CNN architecture for visual aesthetic
analysis that is specifically tailored to extract the intuitive features for the underlying task. The architecture employs comparatively few parameters despite its depth.
2) We propose two simple tricks to train the architecture
from scratch and achieve improved accuracy. First, we
explore the effect of converting input images to different
color spaces and find that LAB space is more sensitive
towards aesthetic features of image as compared to RGB
space. Second, we employ a novel training schedule,
called coherence training, that improves accuracy further.
With these two innovations, we are able to train our algorithm
from scratch and report accuracies close to the best models
even on training sets as small as 26,000 images. We also
show that our algorithm’s accuracy gets better as the amount
of data increases.
II. DATASET
We used the AVA2 dataset, which is a subset of the original AVA
dataset [6]. The AVA dataset consists of 250,000 images scored for
their aesthetic quality on a scale of 1 to 10. To create the AVA2
dataset, the images of AVA are sorted in ascending order of
their mean score, and the top 10% and bottom 10% are taken as
attractive and unattractive images respectively. Both classes of
images are then split into train and test sets, making AVA2
a dataset of 51,106 images in total, with 25,553 images each in
train and test.
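As an illustration of the construction just described, the following is a minimal sketch assuming a hypothetical list scores of (image_id, mean_score) pairs derived from the AVA annotations; the file format and the policy of splitting each class in half into train and test are our assumptions, not details taken from the paper.

    def build_ava2(scores):
        ranked = sorted(scores, key=lambda s: s[1])      # ascending mean score
        k = len(ranked) // 10                            # 10% of the dataset
        unattractive = [img for img, _ in ranked[:k]]    # bottom 10%
        attractive = [img for img, _ in ranked[-k:]]     # top 10%
        half = lambda xs: (xs[:len(xs) // 2], xs[len(xs) // 2:])
        # Returns (train, test) splits for each class.
        return half(attractive), half(unattractive)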
The model produced by training our algorithm on the AVA2 train
set is called model1. We perform an additional experiment
where we add to our train set more images from outside AVA2:
those with mean score above the 75th percentile and below the
25th percentile of the AVA dataset. The algorithm trained on
this dataset (AVA2 train set + images from outside AVA2) is
called model2.
III. METHOD
A. Architecture
Our algorithm’s architecture is inspired by the ILGNet architecture [4] in that it takes both high- and low-level
features into account to classify an image as attractive or
unattractive. Figure 1 represents our proposed architecture. We
use DenseNet [3] blocks in our architecture as they are known
to use fewer parameters and thus reduce the risk of overfitting
when training from scratch. We use three Dense Blocks (a
typical DenseBlock showing feature growth is represented
in Figure 2) with growth rate of 12 in model1 and growth
rate of 24 in model2. Transition Blocks (as seen in Figure
3) between Dense Blocks reduce the feature maps by half
using (1x1) convolutions. Skip connections from the end of each
Dense Block connect to the Decision Making Module, which
produces the final output. Each Dense Block is hypothesized to be
learning to extract features at different levels. The learning at
all levels is then passed as input to the Decision Making Module.
Fig. 1. Our Convolutional Neural Network architecture with Dense Blocks and feature accumulation from different levels to model aesthetics
In the Decision Making Module, the knowledge from each level
is first reduced (to one-third of the feature maps coming as
input into the module) using (1x1) convolutions. After the
(1x1) convolutions, the knowledge from each level is
individually transformed using a Fully Connected (FC) layer.
The outputs from all levels are then concatenated and passed
through an FC layer, which produces the final classification.
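As an illustration, here is a minimal PyTorch sketch of such a Decision Making Module; the per-level channel counts, the global average pooling before the FC layers, and the FC width are assumptions made for the sketch, not values taken from the paper.

    import torch
    import torch.nn as nn

    class DecisionMakingModule(nn.Module):
        def __init__(self, level_channels=(64, 128, 256), fc_dim=128, num_classes=2):
            super().__init__()
            # (1x1) convolutions reduce each level's feature maps to one-third.
            self.reducers = nn.ModuleList(
                [nn.Conv2d(c, c // 3, kernel_size=1) for c in level_channels])
            # One FC layer per level, applied after pooling the reduced maps.
            self.level_fcs = nn.ModuleList(
                [nn.Linear(c // 3, fc_dim) for c in level_channels])
            self.pool = nn.AdaptiveAvgPool2d(1)  # assumed pooling to a vector
            self.classifier = nn.Linear(fc_dim * len(level_channels), num_classes)

        def forward(self, level_features):
            # level_features: one feature map per Dense Block output.
            per_level = []
            for x, conv, fc in zip(level_features, self.reducers, self.level_fcs):
                x = self.pool(conv(x)).flatten(1)     # (N, c // 3)
                per_level.append(torch.relu(fc(x)))   # (N, fc_dim)
            return self.classifier(torch.cat(per_level, dim=1))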
Fig. 2. DenseBlock
B. Training
The model was trained with the SGD algorithm with momentum and a decaying learning rate. We employed two simple techniques to boost the performance of our model, as mentioned
above. First, we converted the input images from RGB space to
LAB space, which was found to give an accuracy boost of 1%.
Fig. 3. Transition Block
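A minimal sketch of the RGB-to-LAB input conversion using scikit-image follows; the normalization constants are our assumptions for the sketch rather than values given in the paper.

    import numpy as np
    from skimage.color import rgb2lab

    def to_lab(image_rgb_uint8):
        # rgb2lab expects floats in [0, 1]; output channels are L, a, b.
        lab = rgb2lab(image_rgb_uint8.astype(np.float64) / 255.0)
        lab[..., 0] /= 100.0   # L lies in [0, 100]
        lab[..., 1:] /= 128.0  # a and b lie roughly in [-128, 127]
        return lab.astype(np.float32)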
Further, we used a novel coherent training schedule for training
the network from scratch. Instead of training with minibatches
comprised of completely random images, we composed the
minibatches of comparatively similar images belonging to both
the attractive and the unattractive categories. The proposed
technique is inspired by the real world, where learning from
multiple similar examples at once leads to a better understanding
of a concept. In the CNN framework, coherent
training leads to the learning of more discriminative features. We
calculated the semantic representation of each image from the
output of the pool5 layer of a VGG16 network [7] pretrained on
the ImageNet dataset. We then computed nearest neighbors for
each image in the aforementioned semantic space to be used
as similar examples. We get an accuracy boost of 2% by this
technique.
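The following is a minimal sketch of this coherent minibatch construction, assuming precomputed pool5 features (e.g., extracted with torchvision's VGG16) stored in a NumPy array features; the exact sampling policy is an assumption, not the paper's implementation.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def coherent_minibatches(features, batch_size=32, seed=0):
        rng = np.random.default_rng(seed)
        index = NearestNeighbors(n_neighbors=batch_size).fit(features)
        while True:
            anchor = rng.integers(len(features))
            # The anchor's nearest neighbors in the semantic space form a
            # minibatch of similar images, typically spanning both classes.
            _, idx = index.kneighbors(features[anchor:anchor + 1])
            yield idx[0]  # indices of the images in the coherent minibatch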
C. Testing
The model was tested on the AVA2 dataset, on which the current
state-of-the-art model has an accuracy of 85.6% [4]. The
best accuracy using handcrafted features is 68.55% [6]. In
comparison, our model achieves an accuracy of 78.7% on
the AVA2 dataset (model1). When our algorithm is trained on
the slightly larger dataset (model2) and tested on the test set of
the AVA2 dataset, the accuracy reaches 81.48%.
D. Discussion
The CNN architecture we propose and our training methodology are designed keeping the following points in mind:
1) Both low-level features and high-level features are
important in training a CNN for visual aesthetics. This
was a concept introduced by [4].
2) Instead of training an algorithm using a CNN pretrained
on the ImageNet dataset as a feature extractor for the task,
we train our proposed CNN architecture from scratch
just on the AVA2 dataset. This shows that CNNs can be
trained with good accuracy on aesthetic datasets without
transfer learning.
3) We introduce two training techniques, which help us
get better results. The first is the usage of LAB space instead
of RGB space as the input to our CNN. The intuition for
this technique is that LAB space is designed to closely
model human vision. We also use coherent learning to
train the CNN on minibatches that are comprised of similar
images belonging to both attractive/unattractive classes.
This technique is based on the intuition that when similar
images from both classes are introduced in the same
minibatch, the CNN is forced to learn discriminative
features.
4) While we train our model on the AVA2 dataset (model1)
with good accuracy, we also show that the model gets
better as we add more training data. We do this by
adding a non-participating subset of the AVA dataset to
the AVA2 train set and testing on the AVA2 test set
(model2).
IV. CONCLUSION
We present a deep CNN architecture that can be trained
from scratch on a visual aesthetic analysis dataset and that gets
better as we give it more data. We also propose training
techniques to increase its accuracy.
REFERENCES
[1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet:
A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
[2] Zhe Dong, Xu Shen, Houqiang Li, and Xinmei Tian. Photo Quality
Assessment with DCNN that Understands Image Well, pages 524–535.
Springer International Publishing, Cham, 2015.
[3] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[4] Xin Jin, Jingying Chi, Siwei Peng, Yulu Tian, Chaochen Ye, and Xiaodong Li. Deep image aesthetics classification using inception modules
and fine-tuning connected layer. CoRR, abs/1610.02256, 2016.
[5] Long Mai, Hailin Jin, and Feng Liu. Composition-preserving deep photo
aesthetics assessment. In The IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), June 2016.
[6] Naila Murray, Luca Marchesotti, and Florent Perronnin. AVA: A large-scale database for aesthetic visual analysis. In Computer Vision and
Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2408–
2415. IEEE, 2012.
[7] Karen Simonyan and Andrew Zisserman. Very deep convolutional
networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[8] Weining Wang, Mingquan Zhao, Li Wang, Jiexiong Huang, Chengjia Cai,
and Xiangmin Xu. A multi-scene deep learning model for image aesthetic
evaluation. Image Commun., 47(C):511–518, September 2016.
A Comparison of NOOP to Structural
Domain-Theoretic Models of OOP
arXiv:1603.08648v3 [] 29 Dec 2017
Moez A. AbdelGawad
moez@cs.rice.edu
College of Mathematics and Econometrics, Hunan University
Changsha 410082, Hunan, P.R. China
Informatics Research Institute, SRTA-City
New Borg ElArab, Alexandria, Egypt
Abstract. Mainstream object-oriented programming languages such as
Java, C#, C++ and Scala are all almost entirely nominally-typed. NOOP
is a recently developed domain-theoretic model of OOP that was designed
to include full nominal information found in nominally-typed OOP. This
paper compares NOOP to the most widely known domain-theoretic
models of OOP, namely, the models developed by Cardelli and Cook,
which were structurally-typed models. Leveraging the development of
NOOP, the comparison presented in this paper provides a clear and
precise mathematical account of the relation between nominal and structural OO type systems.
1 Introduction
The first mathematical models of object-oriented programming (OOP) to gain
widespread recognition were structural models. Being structural, these models
viewed objects as mere records. Object types, accordingly, were
viewed as record types, where the type of an object specifies the structure of the
object, meaning that object types carry information on the names of the members
of objects (i.e., fields and methods), and, inductively, on the (structural) types of
these members. The model of OOP developed by Cardelli in the eighties of last
century, and later enhanced by Cook and others, is an example of a structural
model of OOP. Examples of structurally-typed OO languages include lesser-known languages such as O’Caml [41], Modula-3 [24], Moby [34], PolyTOIL [17],
and Strongtalk [15].
Despite the popularity of the structural view of OOP among programming
languages researchers, many industrial-strength mainstream OO programming
languages are nominally-typed. Examples of nominally-typed OO languages include well-known languages such as Java [38], C# [3], C++ [2], and Scala [50].
In nominally-typed OO languages, objects and their types are nominal, meaning
that objects and their types carry class names information (also called nominal
information) as part of the meaning of objects and of their types, respectively.
In pure structurally-typed OO languages, nominal information is not used as
part of the identity of objects and of their types during static type checking nor
is nominal information available at runtime.1 Accordingly, nominal information
is missing in all structurally-typed models of OOP.
OOP was in its early days at the time the first mathematical models of OOP
were developed, in the eighties of last century. Functional programming was the
dominant programming paradigm among programming languages researchers at
that time—and largely still is today. As such, the role of nominality of objects
and of their types (i.e., the inclusion of nominal information in their identities)
in the semantics of mainstream OOP was not widely appreciated, and nominal
OO type systems remain under-researched. NOOP [6,8] is a recently developed
domain-theoretic model of OOP that addresses this shortcoming. To the best
of our knowledge, NOOP is so far the only domain-theoretic model of OOP to
include full class names information as found in mainstream nominally-typed
OO programming languages. In this paper, we compare NOOP to other wellknown structural domain-theoretic models of OOP.
This paper is structured as follows. First, we discuss related work—including
the history of modeling OOP—in Section 2. As an appetizer for the following
comparison, a fundamental technical difference between pure nominally-typed
OO languages and pure structurally-typed OO languages is discussed in Section 3. In Section 4 we then compare the nominal mathematical view of OOP to
the structural mathematical view of OOP by presenting a comparison between
NOOP and the structural models of OOP constructed by Cardelli and enhanced
by Cook and others. We conclude in Section 5 by summarizing our findings, making some final remarks, and discussing some possible future research.
2 Related Work
Even though object-oriented programming emerged in the 1960s, and became mature
and well-established in mainstream software development in the late 1980s, the
differences between nominally-typed and structurally-typed OO programming
languages started getting discussed by programming languages (PL) researchers
only in the 1990s [44,54,59]. In spite of these early research efforts, the value of
nominal typing and nominal subtyping to mainstream OO developers did not
get the full attention of the PL research community until around the turn of the
century.
In the eighties, while OOP was in its infancy, Cardelli built the first denotational model of OOP [21,22]. Cardelli’s work was pioneering, and naturally,
given the research on modeling functional programming extant at that time, the
model Cardelli constructed was a structural denotational model of OOP.2 In the
1
Given that most industrial-strength OO languages are statically-typed, in this work
we focus on nominal and structural statically-typed OO languages. A discussion
of statically-typed versus dynamically-typed OO languages (including the non-well-defined so-called “duck-typing”), and the merits and demerits of each, is beyond the
scope of this work. The interested reader should check [47].
2
Quite significantly, Cardelli in fact also hinted at the possibility of investigating nominal
typing [23, p.2]. Sadly, Cardelli’s hint went largely ignored for years, and structural
typing was rather assumed superior to nominal typing instead, particularly after the
publication of Cook et al.’s and Bruce et al.’s work.
late 1980s/early 1990s, Cook and his colleagues worked to improve on Cardelli’s
model, leading them to break the identification of the notions of inheritance
and subtyping [28,31,30]. Unlike Cardelli, Cook emphasized in his work—as we
discuss in more detail in Section 4.2—the importance of self-references in OOP,
at the value level (i.e., self variables, such as this) and at the type level (i.e.,
self-type variables).
In 1994, Bruce et al. presented a discussion of the problem of binary methods in OOP [18]. Later, Bruce and Simons also promoted the structural view
of OOP in a number of publications (e.g., [19] and [58]) and they promoted
conclusions based on this view. However, the deep disagreement between these
conclusions (such as breaking the correspondence between inheritance and subtyping) and the fundamental intuitions of a significant portion of mainstream
OO developers persisted [25,10].
Under the pressure of this disagreement, some PL researchers then started
in the late 1990s/early 2000s stressing the significance of the differences between
nominally-typed OOP and structurally-typed OOP, and they started acknowledging the practical value of nominal typing and nominal subtyping (see [10,7]
for more details) and asserted the need for more research on studying nominal
OO type systems [52]. Accordingly, some attempts were made to develop OO
languages that are both nominally- and structurally-typed [33,51,37,45,46,50].3
However, at least in the eyes of mainstream OO developers, these hybrid languages have far more complex type systems than those of OO languages that
are either purely nominally-typed or purely structurally-typed (see discussion in
Section 4.1).
As to operational mathematical models of OOP, Abadi and Cardelli were the
first to present such a model [4,5]. Their model also had a structural view of OOP.
However, operational models of nominally-typed OOP got later developed. In
their seminal work, Igarashi, Pierce, and Wadler presented Featherweight Java
(FJ) [40] as an operational model of a nominally-typed OO language. Even
though FJ is not the first operational model of nominally-typed OOP (for example, see [32], [49] and [35,36]), FJ is the most widely known operational
model of (a tiny core subset of) a nominally-typed mainstream OO language,
namely Java. The development of FJ and other operational models of nominally-typed OOP marked a strong focus on studying nominal-typing in OO languages,
thereby departing from earlier disregard of it.
These developments later motivated the construction of NOOP. Featherweight Java (FJ) in fact offers the closest research to NOOP since it offers a
very clear operational semantics for a tiny nominally-typed OO language. It is
worth mentioning that NOOP, as a more foundational domain-theoretic model
of nominally-typed OO languages (i.e., that has fewer assumptions than FJ),
provides a denotational justification for the inclusion of nominal information in
3
Multiple dispatch (see [26,14,27]), also, was discussed (e.g., in [18]) as a possible
solution to the problem of binary methods.
FJ. The inclusion of nominal information in NOOP is crucial for proving the
identification of inheritance and subtyping in nominally-typed OOP. In FJ [40],
rather than being proven as a consequence of nominality, the identification of
inheritance and subtyping was taken as an assumption. NOOP also allows discussing issues of OOP such as type names, ‘self-types’ and binary methods on a
more foundational level than provided by operational models of OOP. The more
abstract description of denotational models results in a conceptually clearer understanding of the programming notions described, as well as of the relations
between them.4
Finally, related to our work is also the dissatisfaction some researchers expressed about possible misunderstandings extant in the PL research community,
and about the (mal)practices based on these misunderstandings when PL researchers study object-oriented programming languages in particular. Given the
different basis for deriving data structuring in functional programming (based on
standard branches of mathematics) and in object-oriented programming (based
on biology and taxonomy) [21,22], some PL researchers have expressed dissatisfaction with assuming that the views of programming based on researching functional programming (including a view that assumes structural typing) may apply
without qualifications to object-oriented programming. In addition to pointing
out the importance of distinguishing between nominal typing and structural typing, MacQueen [42], for example, has noted many mismatches between Standard
ML [48] (a popular functional programming language) and class-based OO languages such as Java and C++. Later, Cook [29] also pointed out differences
between objects of OOP and abstract data types (ADTs), which are commonly
used in functional programming.5,6
4
It is worth also mentioning that NOOP was developed, partially, in response to the
technical challenge Pierce (an author of FJ) presented in his LICS’03 lecture [53], in
which Pierce called for making precise the relation between structural and nominal OO
type systems (notably, after the development of FJ was concluded).
5
We consider these research results as running in a vein similar to ours, since they also point to some mismatches between the theory and practice of programming languages—theory being mathematics-based, functional, and structurally-typed, and practice being biology/taxonomy-based, object-oriented, and nominally-typed.
6
Yet another line of research that is somewhat similar to the one we present here,
but that had different research interests and goals, is that of Reus and Streicher [55,57,56]. In [56], an untyped denotational model of class-based OOP is developed. Type information is largely ignored in Reus and Streicher’s work (in particular,
members of objects have no type signatures) and some minimal amount of nominal
information is included with objects only to support analyzing OO dynamic dispatch.
This model was developed to analyze mutation and imperative features of OO languages and for developing specifications of OO software and the verification of its
properties [56]. Analyzing the differences between structurally-typed and nominallytyped OO type systems was not a goal of Reus and Streicher’s research. Despite the
similarity of NOOP and the model of Reus and Streicher, we thus make no further
mention of Reus and Streicher’s model in this paper due to its different interests and
goals, and due to the fundamentally different nature of NOOP compared to their
model (i.e., NOOP including all essential class names information inside objects
versus Reus and Streicher’s model lacking most of this information).
3 Type Names, Type Contracts, Recursive Types and Binary Methods
From the point of view of OO developers and OO language designers, there are
many technical differences between nominally-typed OO languages and structurallytyped OO languages. We discuss these in brief in this section. (A more detailed
discussion is presented in [10] and [7].)
Type Names and Behavioral Type Contracts A fundamental technical difference between nominally-typed OO type systems and structurally-typed OO
type systems is how the two approaches view type names. In structurally-typed
OO languages, type names are viewed as being names for type variables that
abbreviate type expressions (i.e., are “shortcuts”). As such, the use of type names
in structurally-typed OO languages is not always necessary, but type names are
useful as abbreviations and they are even necessary for defining recursive type
expressions. As variable names, however, recursive type names in structurallytyped OO languages (such as the name of a class when used inside the definition
of the class—which gets interpreted as “self-type”) get rebound to different types
upon type inheritance, and they get rebound to types that, if they were subtypes,
could break the contravariant subtyping rule of method parameter types (and,
thus, break the type safety of structurally-typed OO languages). Structurallytyped OO languages resolve this situation by breaking the correspondence between type inheritance and subtyping.
In nominally-typed OO languages, on the other hand, the nominality of types
means type names are viewed as part of the identity and meaning of type expressions, since type names in these languages are associated with public formal or
informal behavioral contracts.7 Being names for public, and thus fixed, contracts
means that, in nominally-typed OO languages, type names cannot be treated as
variable names. In nominally-typed OO languages, thus, type names have fixed
meanings that do not change upon inheritance. Further, in these languages the
fixed type a type name is bound to does not break the contravariant subtyping
of method parameters when the method and its type get inherited by subtypes
(types corresponding to subclasses/subinterfaces). As such, in nominally-typed
7
In well-designed OO programs, each class (and interface and trait, in languages that
support these notions) has associated contracts describing the behavior of objects
of the class (its instances). The contracts include an invariant (a predicate) for the
values of class fields, and a contract for each method stipulating what conditions
the inputs should satisfy and what output condition should hold over the value
returned by the method, as well as side effects that have been performed on this and
perhaps other objects passed as arguments to the method. (The output predicate
may mention the values of arguments and in such case is often called an input-output
predicate.) In practice, class contracts are typically informal, may be incomplete, and
are usually expressed only in code documentation. (See [10] for a longer, detailed
and deeper discussion of the association of type names with contracts, and of the
import of this association to mainstream OO developers.)
OOP it is not necessary to break the identification of type inheritance with
subtyping.
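As a small illustration of this difference (ours, not the paper's, and necessarily approximate), Python's optional static typing can mimic both views: an ordinary class is checked nominally by a checker such as mypy, while a typing.Protocol is checked structurally.

    from typing import Protocol

    class Comparable(Protocol):      # structural type: anything with compare_to fits
        def compare_to(self, other: "Comparable") -> int: ...

    class Account:                   # nominal type: the name stands for a contract
        """Contract: balance is never negative."""
        def __init__(self, balance: int) -> None:
            assert balance >= 0
            self.balance = balance
        def compare_to(self, other: "Account") -> int:
            return self.balance - other.balance

    def structural_use(x: Comparable) -> None: ...  # accepts any matching structure
    def nominal_use(x: Account) -> None: ...        # accepts Account (and subclasses) only

Under the structural view only the record of members matters, while under the nominal view the name Account, and the contract it stands for, is part of the type's identity.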
Recursive Types Further, in class-based OOP, a class (or interface or trait, in
languages that support these notions) can directly refer to itself (using class/interface/trait names) in the signature of a field, or the signature of a method parameter or return value, where the class name is used also as a type name. This
kind of reference is called a type self-reference, recursive reference, or, sometimes, circular reference. Also, mutually-dependent classes, where a class refers
to itself indirectly (i.e., via other classes), are allowed in class-based OOP. As
Pierce noted [52], nominally-typed OO languages readily allow the expression of
mutually-dependent class definitions. Since objects are characterized as being
self-referential values (according to Cook [29], objects are ‘autognostic’), and
since self-referential values can be typed using recursive types [43], there is wide
need for recursive type definitions in mainstream OOP. As such, direct and indirect circular type references are quite common in mainstream OOP [29]. The ease
by which recursive typing can be expressed in nominally-typed OO languages is
one of the main advantages of nominally-typed OOP.8
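A tiny illustration (ours, not the paper's) of such self-referential and mutually-dependent class types, which nominal type names make immediate to express:

    from __future__ import annotations

    class Node:                  # self-referential: Node mentions Node
        def __init__(self, next_node: Node | None) -> None:
            self.next_node = next_node

    class Employee:              # mutually dependent with Department
        def __init__(self, dept: Department) -> None:
            self.dept = dept

    class Department:
        def __init__(self, head: Employee | None) -> None:
            self.head = head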
In the comparison of nominal and structural mathematical models of OOP
in Section 4 we will see that, in accordance with their different views of type
names, self-referential class references are viewed differently by nominally-typed
models of OOP than by structurally-typed models of OOP. The different views
of circular class references are behind nominal models of OOP leading to a
different conclusion about the relation between inheritance and subtyping than
the conclusion reached based on structural models.
Binary Methods From the point of view of OO developers, the difference between the nominal and the structural views of type names in OOP demonstrates
itself, most prominently, in the different support and the different treatment provided by OO languages to what are usually called “binary methods”. In OOP,
a ‘binary method’ is defined as a method that takes a parameter (or more)
of the same type as the class the method is declared in [18]. “The problem of
binary methods” and requiring them to be supported in OO languages was a
main motivation behind structural models of OOP leading to inheritance and
subtyping not being identified (i.e., as not being in a one-to-one correspondence) [30]. As explained above, given their view of type names as type variable
names, structurally-typed OO languages require the self-type of the argument
of a method—where the method is identified as a binary method, and upon inheritance of the method by a subclass of the class the method is first declared
in—to be that of the type corresponding to the subclass.
Nominally-typed OO languages, on the other hand, with their fixed interpretation of type names, treat a method taking in an argument of the same class
as that in which the method is declared like any other method, i.e., needing no
special treatment. As such, nominally-typed OO languages guarantee that the
type of the input parameter of a method that approximates a binary method is
a supertype of the type the parameter would have if the method were a true binary method.
8
According to Pierce [52, p.253], “The fact that recursive types come essentially for
free in nominal systems is a decided benefit [of nominally-typed OO languages].”
Nominally-typed OO languages, thus, offer a somewhat middle-ground solution between totally avoiding binary methods and overly embracing them (as
pure structurally-typed OO languages do). Given that the meaning of types
names in nominally-typed OO languages does not change upon inheritance,
these languages provide methods whose type, upon inheritance, only approximates the type of true binary methods. Nominally-typed OO languages do not
quite support binary methods, but, for good reasons (i.e., so as to not break
the identification of inheritance of contracts and subtyping, nor lose other advantages of nominal typing [10]), offer only a good approximation to binary
methods. Given that the type of the parameter does not change in subclasses,
the approximation (if the method was indeed meant as a true binary method)
becomes looser the deeper in the inheritance hierarchy the method gets inherited.9
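A minimal sketch (again ours, using Python type hints merely as a stand-in for a nominally-typed discipline) of this approximation:

    class Point:
        def __init__(self, x: int) -> None:
            self.x = x
        def eq(self, other: "Point") -> bool:  # a "binary method" on Point
            return self.x == other.x

    class ColorPoint(Point):
        def __init__(self, x: int, color: str) -> None:
            super().__init__(x)
            self.color = color
        # eq is inherited with its parameter type fixed at Point (no rebinding
        # to a "self-type"), so ColorPoint.eq still accepts any Point: only an
        # approximation of a true binary method, but inheritance remains subtyping.

The deeper ColorPoint sits below Point in the hierarchy, the looser the approximation becomes, exactly as described above.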
4 Nominally-Typed versus Structurally-Typed Models of OOP
To see how nominality and nominal typing affect mathematical views of OOP,
we compare NOOP, as a nominally-typed denotational model of OOP, to the
most widely known structural model of OOP—the one constructed and presented
by Cardelli [21,22], and extended, analyzed and promoted by others such as
Cook [28,31,30], Bruce [19] and Simons [58].
Even though unnamed by their authors, for ease of reference in this paper we
call Cardelli’s model SOOP, for Structural OOP, while calling the extension of
SOOP by Cook et al. µSOOP (due to its inclusion of recursive types). As we
discussed earlier, NOOP is the first domain-theoretic model of OOP to include
full nominal type information found in nominally-typed OOP. The construction
of NOOP is presented in [6], and is summarized in [8]. In the following sections
we first compare NOOP to SOOP then compare it to µSOOP.
9
With the introduction of generics [38,3,50,13,11,16,40], and ‘F-bounded generics’
(the nominal counterpart of F-bounded polymorphism [20,12,39]) in particular,
nominally-typed OO languages provided better support for true binary methods
while keeping the identification of type inheritance with subtyping and other benefits of nominal typing. It should be noted that the lesser-recognized problem of
‘spurious binary methods’ in structurally-typed OOP (see [10, Section 3.3.1]) provides further justification for nominally-typed OO languages being cautious about
fully embracing binary methods by treating a method that “looks like” a binary
method as indeed being one. In light of the spurious binary methods problem, and
precluding the use of F-bounded generics, in our opinion a better approach towards
supporting true binary methods in mainstream OO languages might be by allowing
developers to explicitly mark or flag true binary methods as being such, or, even
more precisely, to allow developers to mark specific arguments of methods as being
arguments that ‘need to be treated as those of true binary methods.’
4.1 NOOP Compared to SOOP
The model of OOP developed by Cardelli in the 1980s [21,22] was the first
denotational model of OOP to gain widespread recognition. In his pioneering
and seminal work Cardelli, by his own account, had the goal of ‘unifying
functional programming and object-oriented programming’ [22, p.2]. A domain
equation that describes the main features of SOOP (distilled to exclude variants;
see [22, pp. 15, 16] for the actual domain equations used by Cardelli) is
V = B + (V → V) + (L → V)
where V is the main domain of values, B is a domain of base values, L is the
flat domain of labels, → is the standard continuous functions domain constructor, and + is the disjoint summation domain constructor. The distilled SOOP
domain equation expresses the view that values are either base values, unary
functions over values, or records (“objects”) modeled as (infinite) functions from
labels to values.
The domain equation describing NOOP is
O = S × (L ⊸ O) × (L ⊸ (O∗ ⊸→ O))
where the main domain defined by the equation, namely domain O, is the domain
of (raw) objects, × is the strict product domain constructor, and ⊸ is the records
domain constructor (See [8] or [6, Chapter 6] for more details on the NOOP
domain equation). The NOOP domain equation expresses the view that every
object is a triple of: (1) a class signature closure (i.e., a member of domain S),
(2) a fields record (i.e., a member of L ⊸ O), and (3) a methods record (i.e.,
a member of L ⊸ (O∗ ⊸→ O), where ⊸→ is the strict continuous functions
domain constructor, and ∗ is the finite-sequences domain constructor).
Class signatures and other related constructs are syntactic constructs that
capture all nominal (i.e., class/interface/trait names) information found in objects of mainstream nominally-typed OO software [8,6]. Class signatures formalize the informal notion of ‘object interfaces’ [10,7,6]. Embedding class signature
constructs in objects of NOOP makes them nominal objects. It should be noted
that consistency conditions for signature constructs in NOOP [8, Section 4] [6,
Section 5.1] do not preclude a signature from directly or indirectly referring to
itself in the signature of a field or of a method parameter or method return value,
so as to allow for self-referential types (see Section 3.)
A comparison of NOOP to SOOP reveals the following fundamental difference between the two models:
– SOOP is a structural model of OOP, that, as explained by its domain equation, does not include nominal information into its objects. As such, SOOP
views objects as being essentially records (of functions) [22, p.3]. Due to
the lack of nominal information, the definitions of types of objects and of
subtyping, based on SOOP, are also structural definitions, i.e., ones that
can only respect object structures but that cannot respect the behavioral
contracts maintained by objects that are associated with their type names.
– NOOP is a nominal model of OOP, that, via the S component (for signatures) of its domain equation, includes full nominal information into its
objects. As such, NOOP views objects as records (of fields and methods)
accompanied by nominal information referencing the behavioral contracts
maintained by the fields and methods of the objects. The definition of types
of objects and of subtyping, based on NOOP, can thus be nominal ones,
i.e., ones which can respect behavioral contracts associated with type names
in addition to respecting object structures.
In the comparison of NOOP to SOOP it should also be noted that the ‘Inheritance ⇔ Subtyping’ (‘inheritance is subtyping’) theorem of NOOP ([8, Section 5.3]), stating the identification of type inheritance with subtyping in
nominally-typed OOP, is very similar to Cardelli’s ‘Semantic Subtyping’ theorem ([22, Section 11]). Cardelli did not model recursive types, and thus did
not handle recursive type expressions (which are the structural counterpart of
self-referential class signatures). As such, despite the model of Cardelli being a
structural model of OOP, the omission of recursive types enabled Cardelli to
easily identify an inaccurate “structural” notion of inheritance with a structural
definition of subtyping and prove their one-to-one correspondence.10
Other tangential differences that are noted in the comparison between NOOP
and SOOP include:
1. SOOP models records as infinite functions, with only an informal restriction on the functions that requires the functions to map a cofinite set of
input labels—i.e., all but a finite number of labels—to the value wrong.
NOOP, on the other hand, models the record component of objects using
the ⊸ (‘rec’) domain constructor which constructs records as tagged finite
functions. Domain constructor ⊸, even though having similarity to some
other earlier-developed domain constructors, was particularly developed to
let NOOP model mainstream OOP more accurately. Because of using ⊸,
the NOOP domain of objects formally includes no “junk” (i.e., unnecessary)
infinite records as those found in the formal definition of SOOP.
2. Given its attempt to unify FP and OOP, SOOP allows functions as first-class values in its main domain of values. As such, SOOP is not a pure-OO
model of OOP. NOOP, on the other hand, is a pure-OO model of OOP.
Every value in the main domain of NOOP is an object. To model methods
and records, NOOP uses functional domains, but they are used only as
auxiliary domains.
3. Functions (used to model methods) in SOOP are unary functions that take
exactly one argument—an element of domain V. SOOP thus requires ‘currying’ to model multi-ary functions and methods. In NOOP, on the other
hand, sequences of objects are used as method arguments to model multi-ary methods more precisely (i.e., without the need for currying, which is
not as commonly familiar to mainstream OOP developers as it is to FP developers, and thus, in line with the previous point, also without the need for
functions/methods to be first-class values).
4. SOOP uses the same namespace for fields and methods of records, disallowing a field in a record to have the same name as a method in the record.
NOOP, on the other hand, aims to mimic mainstream OO languages more
closely, and thus it uses two records as separate components inside objects
to give fields and methods separate namespaces. A field and a method in a
NOOP object can thus have the same name without conflict (method overloading, however, where two methods inside an object can have the same
name, is supported neither by SOOP nor by NOOP).11
10
In his work, Cardelli, informally and somewhat implicitly, defined inheritance as
structural subtyping between (record) type expressions. Demonstrating the strong
influence of functional programming on Cardelli’s model, Cardelli even argued for expanding the definition of inheritance to include some notion of “inheritance” between
function types (by which it seems Cardelli really meant subtyping, since Cardelli did
not suggest any code sharing).
Nominal vs. Structural vs. Hybrid Typed OO Languages It is worth
mentioning here that the fundamental ‘structural versus nominal’ difference between SOOP and NOOP has profound implications for comparing nominally-typed OO languages to structurally-typed OO languages, and to hybrid OO
languages that try or claim to support both nominal and structural typing.
First, it is clear that supporting nominal typing in an OO language with a
structural view of objects is impossible, since the nominal information stripped
by the structural view of objects is irrecoverable from the structure of the objects.
Second, due to the association of type names with behavioral contracts, it is clear
that nominal typing is closer to semantic/behavioral typing than structural typing is
(more discussion of contracts and semantic typing is presented in [10]).
Third, from the definition of NOOP it is also clear that, if needed, it is
easy to define structural types on a domain of nominal objects. The definition
of these types can be done in NOOP by ignoring nominal information, as is
done in “hybrid” OO languages such as Scala, SmallTalk, Whiteoak and Unity.
The definition of these structural types in this case is not the same as for an OO
language based on a structural view of objects and modeled by SOOP, since
objects of the defined structural types will still carry nominal information at
run-time (ready to be used during software run-time, such as in type casting
operations and instanceof type tests). Structural OO languages that support
a structural view of objects are fundamentally different from nominal languages
because objects in such languages, as modeled by SOOP, are plain records (and
thus without any reference to behavioral class contracts), which is not true in
OO languages that try to support both nominal and structural types.12
11
To put research on structural OOP on a more rigorous footing, and as a step towards
the construction of NOOP, we constructed COOP—[6, Ch. 4] and [9, Sec. 4]—as
a simple structural domain-theoretic model of OOP that dealt with the first three
of the four tangential differences between NOOP and SOOP.
12
A further reason we do not believe hybrid languages, such as Scala [50], SmallTalk [1],
Whiteoak [37] and Unity [45], indeed provide true or full support for structural typing
is that these languages do not quite support recursive structural types (varying
between having reluctant/weak support to having no support for them at all). As
discussed in Section 3, recursive types are essential for serious OO programming. As
demonstrated by Cook’s work (which we discuss in more detail in the next section),
supporting recursive structural types (and thus fully supporting structural typing in
these so-called hybrid languages) leads to undesirable consequences. The interested
reader is again advised to see [10] for more details.
4.2 NOOP Compared to µSOOP
Cook built on Cardelli’s work by first developing a model of untyped inheritance [28,31] and, with others, then built a model of typed inheritance [30]. In
his work, Cook took self-referential classes, and thus recursive types, into consideration, but, following in the footsteps of Cardelli, Cook kept a structural view of
OO typing. Thus Cook et al. concluded that ‘inheritance is not subtyping’ [30].
Building on the work of Cook et al. and based on its conclusions, Bruce, in his
book on the foundations of OO languages [19], and Simons, in a series of articles
on the theory of classification [58], reinforced in the PL research community the
conclusion reached by Cook and his colleagues regarding breaking the relation
between inheritance and subtyping (implying the superiority of a structural view
of OOP in the process), even when the conclusion opposed and contradicted the
intuition (and even the “conventional wisdom” [30, p.125]) of a large section
of OO developers and OO language designers. To explain the discrepancy, it
was then thought that mainstream OO languages are technically deficient or
flawed because, according to Cook [30], these languages ‘place restrictions on
inheritance’.
Given that µSOOP (i.e., Cook et al.’s work) is based on that of Cardelli, the
differences between NOOP and SOOP we discussed in Section 4.1 carry over
to the comparison between NOOP and µSOOP.
The main technical similarity between NOOP and µSOOP is that both
models of OOP take self-referential classes, and thus recursive types, into consideration. This is also where the two models strongly disagree, since NOOP leads to
a different conclusion about the relation between inheritance and subtyping than
µSOOP does. This different conclusion is due to the differences in the nominal
view of objects versus the structural view of them, and to the inclusion/exclusion
of contracts in object typing and object subtyping, and accordingly due to the
role of inheritance (and thus contracts) in deciding subtyping.
As such, in addition to the main difference with SOOP, comparing NOOP
to µSOOP highlights the following four differences, which we first mention then
discuss afterwards in some detail.
1. NOOP and µSOOP have different views of type names.
2. NOOP and µSOOP have different definitions of type inheritance.
3. NOOP and µSOOP are different as to the uniformity of their inheritance
models at the object level and at the type level.
4. NOOP and µSOOP are different as to the simplicity of the mental model
they present to developers during the OO software design process.
Views of type names As we discussed, in detail, in Section 3, a main difference
between nominal typing and structural typing that is illustrated by comparing
NOOP to µSOOP is how type names are viewed in nominal versus structural
OO type systems: they have fixed meanings in the former, while their
meanings may get rebound (upon inheritance) in the latter.
Definitions of inheritance It is worthy to note that the different conclusion
reached by NOOP than that by µSOOP on the relation between inheritance
and subtyping is based, in particular, on how the two models differently define
inheritance. Cook defines inheritance as ‘a mechanism for the definition of new
program units by modifying existing ones in the presence of self-reference’ [28].
Cook also intentionally targets modeling the multiple levels of inheritance that
take place in OOP uniformly (as we discuss below), having a single model of
inheritance that models type-level inheritance and object-level inheritance. Applied to types, Cook’s definition of inheritance based on a structural view of
types makes type inheritance ‘a mechanism for the definition of new record type
expressions by modifying existing ones, in the presence of ‘self-type’ ’. On the
other hand, for the purpose of modeling nominally-typed mainstream OOP with
a nominal view of types (as in NOOP), Cook’s definition of type inheritance
has to be changed to ‘a mechanism for the definition of new class signatures by
adding member (i.e., field and method) signatures to an explicitly-specified set
of existing class signatures.’
In contrast to Cook’s structural definition of type inheritance, the nominal
definition of type inheritance, first, disregards self-types as having relevance in
the definition, in agreement with the intuitions of mainstream OO developers
about the inheritance of class signatures (where it is implied that nominal typing,
with its fixed bindings of type names, only presents an approximation to self-types). Secondly, also in agreement with the intuitions of mainstream OO developers,
the nominal definition of type inheritance stresses explicitness in specifying inheritance, making inheritance an intended relation that is based on behavioral
contracts and structure, not an accidental relation based only on structure.
Uniformity of inheritance models Structurally-typed OOP, as modeled by µSOOP,
uniformly applies the same model of inheritance (i.e., Cook’s model [28]) at the
level of values (i.e., objects) and at the level of types. Using the same model at
both levels requires rebinding the self-variable, at the value level, and rebinding
of the self-type-variable, at the type level, upon inheritance. Nominally-typed
OOP, and thereby NOOP, on the other hand, uses two different models of inheritance, one at the level of values (i.e., objects) and another at the level of
types. The model of inheritance at the level of values used in nominally-typed
OOP (the model of [28] applies well) allows for rebinding the self-variable upon
inheritance. At the level of types, however, a different model where type names
do not get rebound is used by nominally-typed OOP, since there is no exact
notion of a self-type-variable in nominally-typed OO languages (but only an
approximation to it, using a superclass name, is available, as we explain in Section 3).
As such, while the model of inheritance used in µSOOP uniformly applies to
object-level inheritance and type-level inheritance, we can see that the models of
inheritance used in NOOP reflect the non-uniformity of inheritance models in
mainstream nominally-typed OOP, where a different model (and thus a different
definition of inheritance) is used at the object level than that at the type level.
Economy of OO software design conceptual model Agreeing with the intuitions
and conventional wisdom of mainstream OOP software developers and OOP
language designers, NOOP proves that ‘inheritance is subtyping’ [6,8,25], i.e.,
that there is a one-to-one correspondence between OO type inheritance and
OO subtyping, while µSOOP breaks the correspondence and proves that ‘inheritance is not subtyping’ [30,19,58]. Splitting inheritance from subtyping, as
µSOOP necessitates, requires a structurally-typed OOP developer to keep two
hierarchies in mind when developing his software, namely, the inheritance hierarchy and the subtyping hierarchy13.
This complexity, and the disregard of class contracts in deciding subtyping, create significant problems from the perspective of OO program design
(see [10]). Respecting semantic class contracts in subtyping (thereby maintaining the identification of inheritance with subtyping) allows nominally-typed OOP
developers, on the other hand, to reason about their software more readily and
to keep only one hierarchy in mind while developing their software, leading to a
simpler, more economical software design conceptual model.
Table 1 below summarizes the similarities and differences
between NOOP, SOOP and µSOOP.
5 Concluding Remarks and Future Work
The identification of types with behavioral contracts, and of subtyping with the
inheritance and possible narrowing of contracts, makes nominal typing and nominal subtyping in nominally-typed OOP closer to semantic typing and semantic
subtyping. Based on noting that, in this paper we compared a nominally-typed
domain-theoretic model of OOP to the most well-known structurally-typed models. Our comparison has shown that nominally-typed models and structurally-typed models of OOP lead to different views of fundamental notions of object-
13
Bruce, in an attempt to address this issue, suggested that OO languages replace subtyping with ‘match-bounded polymorphism’ (which is a simplification of F-bounded
polymorphism [20,12]) then identify type inheritance with matching. Matching [18],
upon which match-bounded polymorphism depends, however, uses subtyping in its
definition. As such, match-bounded polymorphism is not truly a full replacement
of subtyping, since developers still need to understand subtyping to be able to understand matching. Having a non-simple mental model of OOP, due to insisting on
maintaining the split between subtyping and inheritance, creates significant conceptual problems when designing OO software. We speculate that this led Bruce’s
suggested language extensions on matching to not gain traction or support in mainstream OO languages.
                     SOOP                     µSOOP                    NOOP
                     (Cardelli; 1980s)        (Cook et al.; 1990s)     (AbdelGawad; 2010s)
Nominal Info.        Structural model;        Structural model;        Nominal model;
                     class names info.        class names info.        full class names info.
                     missing from objects     missing from objects     included in objects
Object Types         Structural (reflect      Structural (reflect      Nominal (reflect struc.
                     only object structure)   only object structure)   and assoc. contracts)
Recursive Types      Excluded                 Included                 Included
View of              Shortcuts. Self-ref.     Shortcuts. Self-ref.     Associated with public
Type Names           not considered           gets rebound             contracts. No rebinding
Inheritance          Redefine inheritance     Same rebinding           Object level: rebinding.
Models               as non-recursive         model at object          Type level: no
                     structural subtyping     level and type level     rebinding
Type Inheritance     Structural               Structural               Nominal
Conceptual           Inher. = Subty.          Inher. ≠ Subty.          Inher. = Subty.
Economy
Table 1. SOOP vs. µSOOP vs. NOOP
oriented programming, namely objects, type names, class types, subtyping and
the relation between subtyping and inheritance.
In particular, our comparison highlights that in nominally-typed OOP
1. An object should not be mathematically viewed as merely a record of its
members (i.e., its fields and methods) but rather as a record together
with nominal information that is associated with class contracts that
the object maintains—this information being carried along with the record,
behaviorally constraining its members,
2. A class type should not be viewed as a record type but rather as a record
type that additionally respects behavioral contracts associated with
nominal information embedded in elements of the type (i.e., its objects), and
3. Inheritance is correctly identified with nominal subtyping, i.e., in pure
nominally-typed OOP inheritance is subtyping.
We believe the development of NOOP, and the mathematical comparison presented in this paper, are significant steps toward providing a full account of the relation between nominal and structural OO type systems.
Further, we hope that having a more accurate mathematical view of nominally-typed OO software offers programming languages researchers better opportunities for advancing mainstream OO programming languages. For example, generics ([38,3,50]) add to the expressiveness of the type systems of nominally-typed OO programming languages ([13,11,16,40]). As hinted at earlier, we believe that F-bounded generics offer better support for binary methods in nominally-typed OO languages while maintaining the benefits of nominal typing. Building a domain-theoretic model of generic nominally-typed OOP, akin to NOOP, and comparing it to domain-theoretic models of polymorphic structurally-typed OOP, can thus provide a deeper understanding of features of generic mainstream OO languages such as generic binary methods, variance annotations (such as Java wildcards), Java erasure, polymorphic methods, and generic type inference.
Acknowledgments
The author expresses his gratitude and appreciation to Professor Robert “Corky”
Cartwright for the discussions we had and the guidance he gave that helped in
developing and reaching some of the conclusions in this paper, and to Professor Benjamin Pierce for the feedback he offered on motivating and presenting
NOOP.
References
1. ANSI Smalltalk Standard. 1998.
2. ISO/IEC 14882:2011: Programming Languages: C++. 2011.
3. C# language specification, version 5.0. http://msdn.microsoft.com/vcsharp, 2015.
4. Martin Abadi and Luca Cardelli. A semantics of object types. In Proc. LICS'94, 1994.
5. Martin Abadi and Luca Cardelli. A Theory of Objects. Springer-Verlag, 1996.
6. Moez A. AbdelGawad. NOOP: A Mathematical Model of Object-Oriented Programming. PhD thesis, Rice University, 2012.
7. Moez A. AbdelGawad. An overview of nominal-typing versus structural-typing in object-oriented programming (with code examples). Technical report, arXiv:1309.2348, 2013.
8. Moez A. AbdelGawad. A domain-theoretic model of nominally-typed object-oriented programming. Journal of Electronic Notes in Theoretical Computer Science (ENTCS), 301:3–19, 2014. DOI: 10.1016/j.entcs.2014.01.002. Also presented at the 6th International Symposium on Domain Theory and Its Applications (ISDT'13).
9. Moez A. AbdelGawad. Domain theory for modeling OOP: A summary. Technical report, arXiv:1406.7497, 2014.
10. Moez A. AbdelGawad. Why nominal-typing matters in OOP. Submitted for publication in Onward! Essays, 2016.
11. Ole Agesen, Stephen N. Freund, and John C. Mitchell. Adding type parameterization to the Java language, 1997.
12. Paolo Baldan, Giorgio Ghelli, and Alessandra Raffaeta. Basic theory of F-bounded polymorphism. Information and Computation, 153(1):173–237, 1999.
13. Joseph A. Bank, Barbara Liskov, and Andrew C. Myers. Parameterized types and Java. Technical report, 1996.
14. John Boyland and Giuseppe Castagna. Parasitic methods: An implementation of multi-methods for Java. In OOPSLA, 1997.
15. G. Bracha and D. Griswold. Strongtalk: typechecking Smalltalk in a production environment. In OOPSLA'93, pages 215–230, 1993.
16. Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler. Making the future safe for the past: Adding genericity to the Java programming language. In Craig Chambers, editor, ACM Symposium on Object-Oriented Programming: Systems, Languages and Applications (OOPSLA), volume 33, pages 183–200, Vancouver, BC, October 1998. ACM SIGPLAN.
17. K. Bruce, A. Schuett, R. van Gent, and A. Fiech. PolyTOIL: A type-safe polymorphic object-oriented language. ACM Transactions on Programming Languages
and Systems, 25(2):225–290, 2003.
18. Kim Bruce, Luca Cardelli, Giuseppe Castagna, The Hopkins Objects Group, Gary
Leavens, and Benjamin C. Pierce. On binary methods. Theory and Practice of
Object Systems, 1994.
19. Kim B. Bruce. Foundations of Object-Oriented Languages: Types and Semantics.
MIT Press, 2002.
20. Peter S. Canning, William R. Cook, Walter L. Hill, J. Mitchell, and W. Olthoff.
F-bounded polymorphism for object-oriented programming. In Proc. of Conf. on
Functional Programming Languages and Computer Architecture, 1989.
21. Luca Cardelli. A semantics of multiple inheritance. In Proc. of the internat. symp.
on semantics of data types, volume 173, pages 51–67. Springer-Verlag, 1984.
22. Luca Cardelli. A semantics of multiple inheritance. Inform. and Comput., 76:138–
164, 1988.
23. Luca Cardelli. Structural subtyping and the notion of power type. In ACM Proceedings of POPL, 1988.
24. Luca Cardelli, James Donahue, Lucille Glassman, Mick Jordan, Bill Kalsow, and
Greg Nelson. Modula-3 Report (Revised), volume 52. Digital Systems Research
Center, 1989.
25. Robert Cartwright and Moez A. AbdelGawad. Inheritance Is subtyping (extended abstract). In The 25th Nordic Workshop on Programming Theory (NWPT),
Tallinn, Estonia, 2013.
26. C. Chambers. Object-oriented multi-methods in Cecil. In ECOOP, 1992.
27. C. Clifton, T. Millstein, G. Leavens, and C. Chambers. MultiJava: Design rationale,
compiler implementation and applications. ACM Transactions on Programming
Languages and Systems, 28(3):517–575, 2006.
28. William R. Cook. A Denotational Semantics of Inheritance. PhD thesis, Brown
Univ., 1989.
29. William R. Cook. On understanding data abstraction, revisited. volume 44, pages
557–572. ACM, 2009.
30. William R. Cook, Walter L. Hill, and Peter S. Canning. Inheritance is not subtyping. In POPL’90 Proceedings, 1990.
31. William R. Cook and Jens Palsberg. A denotational semantics of inheritance and
its correctness. In ACM Symposium on Object-Oriented Programming, Systems,
Languages and Applications (OOPSLA), pages 433–444, 1989.
32. Sophia Drossopoulou, Susan Eisenbach, and Sarfraz Khurshid. Is the java type
system sound? TAPOS, 5(1):3–24, 1999.
33. Robert Bruce Findler, Matthew Flatt, and Matthias Felleisen. Semantic casts:
Contracts and structural subtyping in a nominal world. In ECOOP 2004 – Object-Oriented Programming, pages 365–389. Springer, 2004.
34. K. Fisher and J. Reppy. The design of a class mechanism for Moby. In PLDI,
1999.
35. Matthew Flatt, Shriram Krishnamurthi, and Matthias Felleisen. Classes and mixins. In Proceedings of the 25th ACM SIGPLAN-SIGACT symposium on Principles
of programming languages, pages 171–183. ACM, 1998.
36. Matthew Flatt, Shriram Krishnamurthi, and Matthias Felleisen. A programmer’s
reduction semantics for classes and mixins. In Formal syntax and semantics of
Java, pages 241–269. Springer, 1999.
37. J. Gil and I. Maman. Whiteoak: Introducing structural subtyping in Java. In
OOPSLA, 2008.
38. James Gosling, Bill Joy, Guy Steele, Gilad Bracha, and Alex Buckley. The Java
Language Specification. Addison-Wesley, 2014.
39. Ben Greenman, Fabian Muehlboeck, and Ross Tate. Getting F-bounded polymorphism into shape. In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI'14, 2014.
40. Atsushi Igarashi, Benjamin C. Pierce, and Philip Wadler. Featherweight Java:
A minimal core calculus for Java and GJ. ACM Transactions on Programming
Languages and Systems, 23(3):396–450, May 2001.
41. X. Leroy, D. Doligez, J. Garrigue, D. Rémy, and J. Vouillon. The Objective Caml
system. Available at http://caml.inria.fr/.
42. David B. MacQueen. Should ML be object-oriented? Formal Aspects of Computing,
13:214–232, 2002.
43. David B. MacQueen, Gordon D. Plotkin, and R. Sethi. An ideal model for recursive
polymorphic types. Information and Control, 71:95–130, 1986.
44. Boris Magnusson. Code reuse considered harmful, 1991.
45. Donna Malayeri and Jonathan Aldrich. Integrating nominal and structural subtyping. In ECOOP 2008–Object-Oriented Programming, pages 260–284. Springer,
2008.
46. Donna Malayeri and Jonathan Aldrich. Is structural subtyping useful? An empirical study. In ESOP, 2009.
47. Erik Meijer and Peter Drayton. Static typing where possible, dynamic typing when
needed: The end of the cold war between programming languages. In OOPSLA,
2004.
48. R. Milner, M. Tofte, R. Harper, and D. MacQueen. The Definition of Standard
ML (Revised). MIT Press, 1997.
49. Tobias Nipkow and David Von Oheimb. Javalight is type-safe–definitely. In Proceedings of the 25th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pages 161–170. ACM, 1998.
50. Martin Odersky. The Scala language specification, v. 2.9. http://www.scala-lang.org, 2014.
51. Klaus Ostermann. Nominal and structural subtyping in component-based programming. Journal of Object Technology, 7(1):121–145, 2008.
52. Benjamin C. Pierce. Types and Programming Languages. MIT Press, 2002.
53. Benjamin C. Pierce. Types and programming languages: The next generation.
LICS’03, 2003.
54. Harry H Porter III. Separating the subtype hierarchy from the inheritance of
implementation. Journal of Object-Oriented Programming, 4(6):20–29, 1992.
55. Bernhard Reus. Class-based versus object-based: A denotational comparison. Algebraic Methodology And Software Technology, Lecture Notes in Computer Science,
2422:473–488, 2002.
56. Bernhard Reus. Modular semantics and logics of classes. In Computer Science
Logic, volume 2803, pages 456–469. Springer, 2003.
57. Bernhard Reus and Thomas Streicher. Semantics and logics of objects. Proceedings
of the 17th Symp. on Logic in Computer Science (LICS 2002), pages 113–122, 2002.
58. Anthony J. H. Simons. The theory of classification, part 1: Perspectives on type
compatibility. Journal of Object Technology, 1(1):55–61, May-June 2002.
59. Kresten Krab Thorup and Mads Torgersen. Unifying genericity. In ECOOP 99–
Object-Oriented Programming, pages 186–204. Springer, 1999.
| 6 |
arXiv:1701.03769v1 [] 13 Jan 2017

A General Approach for Cure Models in Survival Analysis

Valentin Patilea∗        Ingrid Van Keilegom§

January 16, 2017
Abstract
In survival analysis it often happens that some subjects under study never experience the event of interest; they are considered to be 'cured'. The population is thus a mixture of two subpopulations: the cured subjects and the 'susceptible' subjects. When covariates are present, a so-called mixture cure model can be used to model the conditional survival function of the population. It depends on two components: the probability of being cured and the conditional survival function of the susceptible subjects.
In this paper we propose a novel approach to estimating a mixture cure model when the data are subject to random right censoring. We work with a parametric model for the cure proportion (e.g., a logistic model), while the conditional survival function of the uncured subjects is left unspecified. The approach is based on an inversion that allows us to write the survival function as a functional of the distribution of the observable random variables. This leads to a very general class of models, which allows a flexible and rich modeling of the conditional survival function. We show the identifiability of the proposed model, as well as the weak consistency and the asymptotic normality of the estimator of the model parameters. We also consider in more detail the case where kernel estimators are used for the nonparametric part of the model. The new estimators are compared with the estimators from a Cox mixture cure model via finite-sample simulations. Finally, we apply the new model and estimation procedure to two medical data sets.
Key Words: Asymptotic normality; bootstrap; kernel smoothing; logistic regression; mixture
cure model; semiparametric model.
MSC2010: 62N01, 62N02, 62E20, 62F12, 62G05.
∗ CREST (Ensai), France. V. Patilea acknowledges support from the research program New Challenges for New Data of LCL and Genes. Email address: patilea@ensai.fr.
§ ORSTAT, Katholieke Universiteit Leuven, Belgium. I. Van Keilegom acknowledges support from the European Research Council (2016–2021, Horizon 2020 / ERC grant agreement No. 694409), and from IAP Research Network P7/06 of the Belgian State. Email address: ingrid.vankeilegom@kuleuven.be.
1 Introduction
Driven by emerging applications, over the last two decades there has been increasing interest in time-to-event models that allow a fraction of the right-censored observed lifetimes to correspond to subjects who will never experience the event. In biostatistics such models including covariates are usually called cure models, and they allow for a positive cure fraction that corresponds to the proportion of patients cured of their disease. For a review of these models in survival analysis, see for instance Maller & Zhou (2001) or Peng & Taylor (2014). Economists sometimes call such models split population models (see Schmidt & Witte 1989), while reliability engineers refer to them as limited-failure population life models (Meeker 1987).
At first sight, a cure regression model is nothing but a binary outcome (cured versus uncured) regression problem. The difficulty comes from the fact that the cured subjects are unlabeled observations among the censored data. One therefore has to use all the observations, censored and uncensored, to complete the missing information and thus to identify, estimate and make inference on the cure fraction regression function. We propose a general approach for this task, a tool that provides general ground for cure regression models. The idea is to start from the laws of the observed variables and to express the quantities of interest, such as the cure rate and the conditional survival of the uncured subjects, as functionals of these laws. These general expressions, which we call inversion formulae and which we derive without any particular constraint on the covariate space, are the vehicles that allow for a wide choice of models (parametric, semiparametric and nonparametric) for both the law of the lifetime of interest and the cure rate. Indeed, the inversion formulae allow us to express the likelihood of the binary outcome model as a function of the laws of the observed variables. The likelihood estimator of the parameter vector of the cure fraction function is then simply the maximizer of the likelihood obtained by replacing the laws of the observations by some estimators. With the estimate of the cure fraction parameter at hand, the inversion formulae provide an estimate of the conditional survival of the uncured subjects. For the sake of clarity, we focus on the so-called mixture cure models with a parametric cure fraction function, the type of model most popular among practitioners, while the distribution of the lifetime of interest is left unspecified.
The paper is organized as follows. In Section 2 we provide a general description of mixture cure models; we then introduce the needed notation and present the inversion formulae on which our approach is built. We close Section 2 with a discussion of the identification issue and some new insight into existing approaches in the cure models literature. Section 3 introduces the general maximum likelihood estimator, while in Section 4 we derive the general asymptotic results. A simple bootstrap procedure for making feasible inference is proposed. Section 4 ends with an illustration of the general approach in the case where the conditional law of the observations is estimated by kernel smoothing. In Sections 5 and 6 we report some empirical results obtained with simulated data and two real data sets. Our estimator performs well in simulations and, compared with a competing logistic/proportional hazards mixture approach, provides similar or more interpretable results in the applications. The technical proofs are relegated to the Appendix.
2 The model

2.1 A general class of mixture cure models
Let T denote (possibly a monotone transformation of) the lifetime of interest, taking values in (−∞, ∞]. A cured observation corresponds to the event {T = ∞}, and in the following this event is allowed to have positive probability. Let X be a covariate vector with support X belonging to a general covariate space; the covariate vector may include discrete and continuous components. The survival function FT((t, ∞] | x) = P(T > t | X = x), for t ∈ R and x ∈ X, can be written as

FT((t, ∞] | x) = 1 − φ(x) + φ(x) FT,0((t, ∞) | x),

where φ(x) = P(T < ∞ | X = x) and FT,0((t, ∞) | x) = P(T > t | X = x, T < ∞).
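For concreteness, the decomposition can be sketched in a few lines of R; the logistic incidence and exponential latency below are hypothetical illustrative choices, not assumptions of the model:

phi <- function(x, a = 1, b = 2) plogis(a + b * x)   # P(T < infinity | x), logistic
S_uncured <- function(t, x) exp(-exp(0.5 * x) * t)   # F_{T,0}((t, infinity) | x)
S_pop <- function(t, x) 1 - phi(x) + phi(x) * S_uncured(t, x)
S_pop(2, 0.3)      # population survival at t = 2, x = 0.3
S_pop(1e6, 0.3)    # plateaus at 1 - phi(x), the conditional cure fraction

The plateau of the population survival function at 1 − φ(x) is precisely what the Kaplan–Meier plots of the data sets in Section 6 exhibit.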
Depending on which model is used for φ(x) and FT,0 (· | x), one obtains a parametric,
semiparametric or nonparametric model, called a ‘mixture cure model’. In the literature, one
often assumes that φ(x) follows a logistic model, i.e. φ(x) = exp(a + x⊤ b)/[1 + exp(a + x⊤ b)]
for some (a, b⊤ )⊤ ∈ Rd+1 . Recently, semiparametric models (like a single-index model as in
Amico et al. 2017) or nonparametric models (as in Xu & Peng 2014 or López-Cheda et al.
2017) have been proposed. As for the survival function FT,0 (· | x) of the susceptible subjects,
a variety of models have been proposed, including parametric models (see e.g. Boag 1949,
Farewell 1982), semiparametric models based on a proportional hazards assumption (see e.g.
Kuk & Chen 1992, Sy & Taylor 2000, Fang et al. 2005, Lu 2008; see also Othus et al. 2009)
or nonparametric models (see e.g. Taylor 1995, Xu & Peng 2014).
In this paper we propose to model φ(x) parametrically, i.e. we assume that φ(·) belongs
to the family of conditional probability functions
{φ(·, β) : β ∈ B},
where φ(·, β) takes values in the interval (0, 1), β is the parameter vector of the model and B
is the parameter set. This family could be the logistic family or any other parametric family.
For the survival function FT,0(· | x) we do not impose any assumptions, so as to have a flexible and rich class of models for FT(· | x) to choose from. Later on we will see that, for the estimation of FT,0(· | x), any estimator that satisfies certain minimal conditions can be used; hence we allow for a large variety of parametric, semiparametric and nonparametric estimation methods.
As is often the case with time-to-event data, we assume that the lifetime T is subject to random right censoring: instead of observing T, we only observe the pair (Y, δ), where Y = T ∧ C, δ = 1{T ≤ C}, and C is a non-negative random variable called the censoring time. Some identification assumptions are required in order to identify the conditional law of T from the observed variables Y and δ. Let us assume that

C ⊥ T | X   and   P(C < ∞) = 1.    (2.1)

The conditional independence between T and C is a usual identification assumption in survival analysis in the presence of covariates. The zero probability at infinity for C implies that P(C < ∞ | X) = 1 almost surely (a.s.). This mild condition is required if we admit that the observations Y are finite, which is the case in common applications. For the sake of simplicity, let us also impose the condition

P(T = C) = 0,    (2.2)

which is commonly used in survival analysis, and which implies that P(T = C | X) = 0 a.s.
2.2 Some notation and preliminaries
We start with some preliminary arguments, which are valid in general, without assuming any model for the functions φ, FT and FC.

The observations are characterized by the conditional sub-probabilities

H1((−∞, t] | x) = P(Y ≤ t, δ = 1 | X = x),
H0((−∞, t] | x) = P(Y ≤ t, δ = 0 | X = x),   t ∈ R, x ∈ X.

Then H((−∞, t] | x) := P(Y ≤ t | X = x) = H0((−∞, t] | x) + H1((−∞, t] | x). Since we assume that Y is finite, we have

H((−∞, ∞) | x) = 1,   ∀x ∈ X.    (2.3)
For j ∈ {0, 1} and x ∈ X, let τHj(x) = sup{t : Hj([t, ∞) | x) > 0} denote the right endpoint of the support of the conditional sub-probability Hj. Let us define τH(x) in a similar way, and note that τH(x) = max{τH0(x), τH1(x)}. Note that τH0(x), τH1(x) and τH(x) can equal infinity, even though Y only takes finite values. For −∞ < t ≤ ∞, let us define the conditional probabilities

FC((−∞, t] | x) = P(C ≤ t | X = x)   and   FT((−∞, t] | x) = P(T ≤ t | X = x),   x ∈ X.

Let us show how the probability of being cured can be identified from the observations without any reference to a model for this probability. Under conditions (2.1)–(2.2) we can write

H1(dt | x) = FC([t, ∞) | x) FT(dt | x),   H0(dt | x) = FT([t, ∞] | x) FC(dt | x),
and H([t, ∞) | x) = FT([t, ∞] | x) FC([t, ∞) | x). These equations can be solved, and thus they allow us to express the functions FT(· | x) and FC(· | x) in a unique way as explicit transformations of the functions H0(· | x) and H1(· | x). For this purpose, let us consider the conditional cumulative hazard measures

ΛT(dt | x) = FT(dt | x) / FT([t, ∞] | x)   and   ΛC(dt | x) = FC(dt | x) / FC([t, ∞) | x),   x ∈ X.

The model equations yield

ΛT(dt | x) = H1(dt | x) / H([t, ∞) | x)   and   ΛC(dt | x) = H0(dt | x) / H([t, ∞) | x).
Then, we can write the following functionals of H0(· | x) and H1(· | x):

FT((t, ∞] | x) = ∏_{−∞<s≤t} {1 − ΛT(ds | x)},    (2.4)

FC((t, ∞) | x) = ∏_{−∞<s≤t} {1 − ΛC(ds | x)},   t ∈ R,    (2.5)

where ∏_{s∈A} stands for the product-integral over the set A (see Gill and Johansen 1990).
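For readers who prefer code, here is a minimal R sketch of the inversion formulae (2.4)–(2.5) in the unconditional case; covariates and ties are ignored for clarity, and the function name and interface are ours. With discrete hazard increments the product-integral reduces to familiar Kaplan–Meier-type products, one for T (δ = 1) and one for C (δ = 0):

surv_by_inversion <- function(Y, delta) {
  o <- order(Y); Y <- Y[o]; delta <- delta[o]
  n <- length(Y)
  at_risk <- n:1                        # n * H([Y_(i), infinity)) at each ordered Y
  dLamT <- (delta == 1) / at_risk       # Lambda_T increments: H1(dt) / H([t, infinity))
  dLamC <- (delta == 0) / at_risk       # Lambda_C increments: H0(dt) / H([t, infinity))
  list(time = Y,
       F_T = cumprod(1 - dLamT),        # F_T((t, infinity])
       F_C = cumprod(1 - dLamC))        # F_C((t, infinity))
}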
Moreover, if τH1(x) < ∞, then

P(T > τH1(x) | x) = ∏_{t∈(−∞,τH1(x)]} {1 − ΛT(dt | x)},

but there is no way to identify the conditional law of T beyond τH1(x). Therefore, we will impose

P(T > τH1(x) | x) = P(T = ∞ | x),    (2.6)

i.e. ∏_{t∈R} {1 − ΛT(dt | x)} = ∏_{−∞<t≤τH1(x)} {1 − ΛT(dt | x)}. Note that if τH1(x) = ∞, condition (2.6) is no longer an identification restriction, but just a simple consequence of the definition of ΛT(· | x). Finally, the condition P(C < ∞) = 1 in (2.1) can be re-expressed by saying that we assume that H0(· | x) and H1(· | x) are such that

P(C = ∞ | x) = ∏_{t∈R} {1 − ΛC(dt | x)} = 0,   ∀x ∈ X.    (2.7)

Let us point out that this condition is satisfied only if τH1(x) ≤ τH0(x). Indeed, if τH1(x) > τH0(x) then necessarily τH0(x) < τH(x), and so H([τH0(x), ∞) | x) > 0. Hence ΛC(R | x) = ΛC((−∞, τH0(x)] | x) < ∞, and thus P(C = ∞ | x) > 0, which contradicts (2.7).

It is important to understand that any two conditional sub-probabilities H0(· | x) and H1(· | x) satisfying conditions (2.1) and (2.2) define FT(· | x) and FC(· | x) uniquely. Indeed, FT(· | x) is precisely the probability distribution of T given X = x with all the mass beyond τH1(x) concentrated at infinity. In general, FT(· | x) and FC(· | x) are only functionals of H0(· | x) and H1(· | x).

We will assume conditions (2.1), (2.2) and (2.6) throughout the paper.
2.3 A key point for the new approach: the inversion formulae
Write

H([t, ∞) | x) = FT([t, ∞] | x) FC([t, ∞) | x)
             = FT([t, ∞) | x) FC([t, ∞) | x) + P(T = ∞ | x) FC([t, ∞) | x),

and thus

FT([t, ∞) | x) = [H([t, ∞) | x) − P(T = ∞ | x) FC([t, ∞) | x)] / FC([t, ∞) | x).    (2.8)

Consider the conditional cumulative hazard measure for the finite values of the lifetime of interest:

ΛT,0(dt | x) := FT,0(dt | x) / FT,0([t, ∞) | x) = FT(dt | x) / FT([t, ∞) | x)

for t ∈ R. Since H1(dt | x) = FC([t, ∞) | x) FT(dt | x), using relationship (2.8) we obtain

ΛT,0(dt | x) = H1(dt | x) / [H([t, ∞) | x) − P(T = ∞ | x) FC([t, ∞) | x)].    (2.9)

Next, using the product-integral we can write

FT,0((t, ∞) | x) = ∏_{−∞<s≤t} {1 − ΛT,0(ds | x)},   t ∈ R, x ∈ X.    (2.10)

Let us recall that FC(· | x) can be written as a transformation of H0(· | x) and H1(· | x); see equations (2.4) and (2.5). This representation is not surprising, since we can consider C as a lifetime of interest, in which case T plays the role of a censoring variable. Hence, estimating the conditional distribution function FC(· | x) should not be more complicated than in a classical conditional Kaplan–Meier setup, since the fact that T could be equal to infinity with positive conditional probability is irrelevant when estimating FC(· | x).

Finally, the representation of FC(· | x) given in equation (2.5), plugged into equation (2.9), allows us to express ΛT,0(· | x), and thus FT,0(· | x), as maps of P(T = ∞ | x) and the measures H0(· | x) and H1(· | x). This will be the key element in providing more insight into the existing approaches, and the starting point of our new approach.
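A hedged R sketch of this key inversion, again unconditional and ignoring ties; p_inf plays the role of P(T = ∞), and the helper name is ours:

latency_by_inversion <- function(Y, delta, p_inf) {
  o <- order(Y); Y <- Y[o]; delta <- delta[o]
  n <- length(Y)
  H_bar <- (n:1) / n                             # H([t, infinity)) at each ordered Y
  F_C   <- cumprod(1 - (delta == 0) / (n:1))     # F_C((t, infinity)), eq. (2.5)
  F_Cm  <- c(1, F_C[-n])                         # left limit, i.e. F_C([t, infinity))
  dLam0 <- (delta == 1) / (n * (H_bar - p_inf * F_Cm))   # eq. (2.9)
  list(time = Y, F_T0 = cumprod(1 - dLam0))      # eq. (2.10)
}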
2.4 Model identification issues
Let us now investigate the identification issue. Recall that our model involves the functions FT(· | x), FC(· | x) and φ(·, β), and the assumptions (2.1), (2.2) and (2.6). For a fixed value of the parameter β, and for t ∈ R and x ∈ X, let

Λ^β_{T,0}(dt | x) = H1(dt | x) / {H([t, ∞) | x) − [1 − φ(x, β)] FC([t, ∞) | x)},    (2.11)

and

F^β_{T,0}((t, ∞) | x) = ∏_{−∞<s≤t} {1 − Λ^β_{T,0}(ds | x)}.    (2.12)

Let FY,δ(·, · | x) denote the conditional law of (Y, δ) given X = x. Moreover, let

F^β_{Y,δ}(dt, 1 | x) = φ(x, β) FC((t, ∞) | x) F^β_{T,0}(dt | x)

and

F^β_{Y,δ}(dt, 0 | x) = [F^β_{T,0}((t, ∞) | x) φ(x, β) + 1 − φ(x, β)] FC(dt | x).
These equations define a conditional law for the observations (Y, δ) based on the model. More precisely, for a choice of FT(· | x), FC(· | x) and β, the model yields a conditional law for (Y, δ) given X = x. If the model is correctly specified, there exists a value β0 such that

FY,δ(·, · | x) = F^{β0}_{Y,δ}(·, · | x),   ∀x ∈ X.    (2.13)

The remaining question is whether the true value of the parameter is identifiable. In other words, one should check whether, given the conditional subdistributions H0(· | x) and H1(· | x), x ∈ X, there exists a unique β0 satisfying condition (2.13). For this purpose we impose the following mild condition:

φ(X, β) = φ(X, β̃) almost surely  ⇒  β = β̃,    (2.14)

and we show that

F^{β0}_{Y,δ}(·, · | x) = F^{β̃}_{Y,δ}(·, · | x), ∀x ∈ X  ⇒  φ(x, β0) = φ(x, β̃), ∀x ∈ X.

Indeed, if F^{β0}_{Y,δ}(·, · | x) = F^{β̃}_{Y,δ}(·, · | x), then for any x,

φ(x, β0) FC((t, ∞) | x) F^{β0}_{T,0}(dt | x) = φ(x, β̃) FC((t, ∞) | x) F^{β̃}_{T,0}(dt | x),

for all t ∈ (−∞, τH(x)] ∩ R. Our condition (2.7) guarantees that τH(x) = τH0(x), so that FC((t, ∞) | x) is necessarily positive for t ∈ (−∞, τH(x)), and thus can be cancelled in the last display. Deduce that

φ(x, β0) F^{β0}_{T,0}(dt | x) = φ(x, β̃) F^{β̃}_{T,0}(dt | x),

for all t ∈ (−∞, τH(x)). Finally, recall that, by construction, in the model we consider, F^β_{T,0}((−∞, ∞) | x) = 1 for any β such that F^β_{Y,δ}(·, · | x) coincides with the conditional law of (Y, δ) given X = x, for any x. Thus, taking integrals over (−∞, ∞) on both sides of the last display, we obtain φ(·, β0) = φ(·, β̃). Let us gather these facts in the following statement.

Theorem 2.1. Under conditions (2.1), (2.2), (2.6) and (2.14) the model is identifiable.
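As a concrete illustration (our addition; the paper does not spell this out), condition (2.14) for the logistic family reduces to a linear-independence requirement on the covariates:

\phi(x,\beta)=\frac{e^{a+x^{\top}b}}{1+e^{a+x^{\top}b}},\qquad \beta=(a,b^{\top})^{\top};
\quad \phi(X,\beta)=\phi(X,\tilde\beta)\ \text{a.s.}
\;\Longrightarrow\;(a-\tilde a)+X^{\top}(b-\tilde b)=0\ \text{a.s.},

since u ↦ log{u/(1 − u)} is injective. If the support of X is not contained in an affine hyperplane, this forces a = ã and b = b̃, i.e. (2.14) holds.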
2.5 Interpreting the previous modeling approaches

We suppose here that the function φ(x) follows a logistic model, and comment on several models for FT,0 that have been considered in the literature.

2.5.1 Parametric and proportional hazards mixture models
In a parametric modeling, one usually supposes that τH0(x) = τH1(x) = ∞ and that ΛT,0(· | x) belongs to a parametric family of cumulative hazard functions, like for instance the Weibull model; see Farewell (1982).

Several contributions have proposed a more flexible semiparametric proportional hazards (PH) approach; see Fang et al. (2005), Lu (2008) and the references therein. In such a model one imposes a PH structure on the measure ΛT,0(· | x). More precisely, it is supposed that

ΛT,0(dt | x) = H1(dt | x) / {H([t, ∞) | x) − [1 − φ(x, β)] FC([t, ∞) | x)} = exp(x⊤γ) Λ0(dt),

where γ is a parameter to be estimated and Λ0(·) is an unknown baseline cumulative hazard function. Our inversion formulae reveal that in this approach the parameters γ and Λ0 depend on the observed conditional measures H0(· | x) and H1(· | x), but also on the parameter β. The same is true for the parametric models.
2.5.2 Kaplan–Meier mixture cure model
Taylor (1995) suggested estimating FT,0 using a Kaplan–Meier-type estimator. With such an approach one implicitly assumes that the law of T given X and given that T < ∞ does not depend on X. This is equivalent to supposing that ΛT,0(· | x) = ΛT,0(·). Next, to estimate ΛT,0(·) one has to modify the unconditional version of the usual inversion formula (2.4) to take into account the conditional probability of the event {T = ∞}. Following Taylor's approach, we rewrite (2.9) as

ΛT,0(dt) = H1(dt | x) / ( H1([t, ∞) | x) + ∫_{[t,∞)} { 1 − (1 − φ(x, β)) / [φ(x, β) FT,0([s, ∞)) + 1 − φ(x, β)] } H0(ds | x) ).

Next, assume that the last equality remains true if H0(dt | x) and H1(dt | x) are replaced by their unconditional versions, that is, assume that

ΛT,0(dt) = H1(dt) / ( H1([t, ∞)) + ∫_{[t,∞)} { 1 − (1 − φ(x, β)) / [φ(x, β) FT,0([s, ∞)) + 1 − φ(x, β)] } H0(ds) ).    (2.15)

See equations (2) and (3) in Taylor (1995). The equation above can be solved iteratively by an EM-type procedure: for a given β and a current iterate F^{(m)}_{T,0}(·), build Λ^{(m+1)}_{T,0}(dt) and the updated estimate F^{(m+1)}_{T,0}(·); see Taylor (1995) for the details. Let us point out that even if (T, C) is independent of X, and thus H1(· | x) does not depend on x, the subdistribution H0(· | x) still depends on x, since

H0(dt | x) = FT((t, ∞] | x) FC(dt | x) = [FT((t, ∞) | x) + P(T = ∞ | x)] FC(dt | x).

Hence, a more natural form of equation (2.15) is

ΛT,0(dt) = H1(dt) / ( H1([t, ∞)) + ∫_{[t,∞)} { 1 − (1 − φ(x, β)) / [φ(x, β) FT,0([s, ∞)) + 1 − φ(x, β)] } H0(ds | x) ).

The investigation of an EM-type procedure based on the latter equation will be considered elsewhere.
3 Maximum likelihood estimation
Let (Yi, δi, Xi), i = 1, . . . , n, be a sample of n i.i.d. copies of the vector (Y, δ, X).

We use a likelihood approach based on formulae (2.9) and (2.5) to build an estimator of φ(·, β) and F^β_{T,0}(· | x). To build the likelihood we use estimates Ĥk(· | x) of the subdistributions Hk(· | x), k ∈ {0, 1}. These estimates are constructed from the sample of (Y, δ, X), without reference to any model for the conditional probability P(T < ∞ | x). At this stage it is not necessary to impose a particular form for Ĥk(· | x); to derive the asymptotic results we will only impose that these estimators satisfy some mild conditions. Let F̂^β_{T,0}(· | x) be defined as in equations (2.11) and (2.12), with Ĥ0(· | x) and Ĥ1(· | x) instead of H0(· | x) and H1(· | x), that is,

F̂^β_{T,0}((t, ∞) | x) = ∏_{−∞<s≤t} { 1 − Ĥ1(ds | x) / [ Ĥ([s, ∞) | x) − (1 − φ(x, β)) F̂C([s, ∞) | x) ] },

where F̂C(· | x) is the estimator obtained from equations (2.4) and (2.5), but with Ĥ0(· | x) and Ĥ1(· | x), i.e.

F̂C((t, ∞) | x) = ∏_{−∞<s≤t} { 1 − Ĥ0(ds | x) / Ĥ([s, ∞) | x) }.

Let fX denote the density of the covariate vector with respect to some dominating measure. The contribution of the observation (Yi, δi, Xi) to the likelihood when δi = 1 is then

φ(Xi, β) F̂^β_{T,0}({Yi} | Xi) fX(Xi) F̂C((Yi, ∞) | Xi),

while the contribution when δi = 0 is

F̂C({Yi} | Xi) fX(Xi) [φ(Xi, β) F̂^β_{T,0}((Yi, ∞) | Xi) + 1 − φ(Xi, β)].
Since the laws of the censoring variable and of the covariate vector do not carry information about the parameter β, we can drop the factors F̂C((Yi, ∞) | Xi) fX(Xi) and F̂C({Yi} | Xi) fX(Xi). Hence the criterion to be maximized with respect to β ∈ B is L̂n(β), where

L̂n(β) = ∏_{i=1}^n { φ(Xi, β) F̂^β_{T,0}({Yi} | Xi) }^{δi} { φ(Xi, β) F̂^β_{T,0}((Yi, ∞) | Xi) + 1 − φ(Xi, β) }^{1−δi}.

The estimator we propose is

β̂ = arg max_{β∈B} log L̂n(β).    (3.1)
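Schematically, the criterion (3.1) can be coded as follows in R; fit_FT0 is a hypothetical helper, assumed to return, for a given β, the masses F̂^β_{T,0}({Yi} | Xi) and survivals F̂^β_{T,0}((Yi, ∞) | Xi) built from Ĥ0 and Ĥ1 via the inversion formulae, and the logistic form of φ is again only an illustrative choice:

loglik <- function(beta, Y, delta, X, fit_FT0) {
  f <- fit_FT0(beta, Y, delta, X)      # list with vectors $mass and $surv
  p <- plogis(beta[1] + beta[2] * X)   # phi(x, beta), logistic illustration
  sum(delta * log(p * f$mass) + (1 - delta) * log(p * f$surv + 1 - p))
}
# beta_hat <- optim(beta_start, loglik, Y = Y, delta = delta, X = X,
#                   fit_FT0 = fit_FT0, control = list(fnscale = -1))$par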
Let us review the identification issue in the context of the likelihood estimation approach. If conditions (2.1), (2.2) and (2.6) hold true, and the parametric model for the conditional probability of the event {T = ∞} is correct and identifiable in the sense of condition (2.14), the true parameter β0 is the value identified by condition (2.13). This is the conclusion of Theorem 2.1 above. It remains to check that the proposed likelihood approach allows us to estimate β0 consistently.

Let

log p(t, δ, x; β) = δ log p1(t, x; β) + (1 − δ) log p0(t, x; β),    (3.2)

with

p1(t, x; β) = φ(x, β) F^β_{T,0}(dt | x) FC((t, ∞) | x) / H1(dt | x),    (3.3)

p0(t, x; β) = FC(dt | x) [φ(x, β) F^β_{T,0}((t, ∞) | x) + 1 − φ(x, β)] / H0(dt | x).    (3.4)

Following a common notational convention, see for instance Gill (1994), here we treat dt not just as the length of a small time interval [t, t + dt) but also as the name of the interval itself. Moreover, we use the convention 0/0 = 1. Let us notice that, up to additive terms not containing β, the function β ↦ E[log p(Y, δ, X; β)] is expected to be the limit of the random function log L̂n(·). Hence, a minimal condition guaranteeing the consistency of the likelihood estimation approach is that β0 is the maximizer of the limit likelihood criterion E[log p(Y, δ, X; ·)]. This is proved in the following proposition using a Bernoulli sample likelihood inequality. The proof is given in the Appendix.

Proposition 3.1. Suppose that conditions (2.1), (2.2), (2.6) and (2.14) hold true. If β0 is the value of the parameter defined by equation (2.13), then for any β ≠ β0,

E[log p(Y, δ, X; β)] < E[log p(Y, δ, X; β0)].
4 General asymptotic results

Few assumptions were needed for our analysis so far. To proceed further with the asymptotic results we need to be more specific in several respects. In order to prove consistency, we have to control the asymptotic behavior of L̂n(β) along sequences of values of the parameter β. Such a control requires controlling denominators like

Ĥ([t, ∞) | x) − (1 − φ(x, β)) F̂C([t, ∞) | x)

on the support of H1(· | x), uniformly with respect to x. A usual way to deal with this technical difficulty is to consider a finite threshold τ beyond which no uncensored lifetime is observed, i.e.

inf_{x} H1((−∞, τ] | x) = 1.    (4.1)

Moreover, to be able to keep denominators away from zero, we require the condition

inf_{x∈X} H1({τ} | x) H0((τ, ∞) | x) > 0.    (4.2)
In particular, this condition implies

τH0(x) > τH1(x) = τ,   x ∈ X.

Moreover, given condition (2.2), necessarily H0({τ} | x) = 0, ∀x. This means that FC({τ} | x) = 0, ∀x. This constraint on H0({τ} | x) could be relaxed at the expense of suitable adjustments of the inversion formulae. For simplicity, we keep condition (2.2). Let us also notice that condition (4.2) implies inf_x FC((τ, ∞) | x) > 0 and inf_x F^β_{T,0}({τ} | x) > 0, ∀β.
Conditions like (4.1)–(4.2) are more or less explicitly used in the cure models literature. Sometimes τ is justified as representing the total follow-up of the study. For instance, Lu (2008) supposes that Y = min{T, min(C, τ)} and δ = 1{T ≤ min(C, τ)}, where T = ηT∗ + (1 − η)∞, with T∗ < ∞ and η ∈ {0, 1}. The conditional probability of being cured is precisely the conditional probability of the event {η = 0}. Next, Lu (2008) supposes that inf_x P(τ ≤ T ≤ C | x) > 0 and Λ0(τ) < ∞, where Λ0(·) is the cumulative hazard function of T∗. All these conditions together clearly imply our conditions (4.1)–(4.2).

Fang et al. (2005) implicitly restrict the uncensored lifetimes to some compact interval [0, τ] and suppose E(δ1{Y ≥ τ}) > 0. This is possible only if H1({τ} | x) > 0 for a set of values x with positive probability. In a proportional hazards context with the covariates taking values in a bounded set, as assumed by Fang et al. (2005), this is equivalent to H1({τ} | x) ≥ c > 0 for almost all x, for some constant c.

The fact that technical conditions similar to our conditions (4.1)–(4.2) can be traced in the cure models literature is not unexpected in view of our Section 2.5. Indeed, the existing approaches can be interpreted through our inversion formulae, and thus the technical problems we face in the asymptotic investigation are expected to be present in the alternative approaches as well.
4.1 Consistency
Let us sketch the arguments we use in the proof of Theorem 4.1 below for deriving the consistency of β̂. On the one hand, if the conditional subdistributions Hk(· | x) are given, one can build the purely parametric likelihood

Ln(β) = ∏_{i=1}^n { φ(Xi, β) F^β_{T,0}({Yi} | Xi) }^{δi} { φ(Xi, β) F^β_{T,0}((Yi, ∞) | Xi) + 1 − φ(Xi, β) }^{1−δi}.    (4.3)

By construction, Ln(β) is a functional of H0(· | x) and H1(· | x), x ∈ X, while L̂n(β) is a functional of the estimated versions of H0(· | x) and H1(· | x). Hence, a prerequisite for deriving the consistency of our semiparametric estimator β̂ is the consistency of the infeasible maximum likelihood estimator

β̃ = arg max_{β∈B} log Ln(β).

A necessary condition for the consistency of β̂ is

sup_{β∈B} | log L̂n(β) − log Ln(β) | = oP(1).    (4.4)

We then have that

log Ln(β̂) ≥ log Ln(β̃) − oP(1).

From this we will derive the consistency of β̂ using Section 5.2 in van der Vaart (1998); see the proof in the Appendix for details.

To prove condition (4.4), we have to guarantee the uniform convergence of Ĥk − Hk, as stated in Assumption (AC1) below. Indeed, this uniform convergence will imply

sup_{x∈X} sup_{t∈(−∞,τ]} | F̂C([t, ∞) | x) − FC([t, ∞) | x) | = oP(1),    (4.5)

and

sup_{β∈B} sup_{x∈X} sup_{t∈(−∞,τ]} | F̂^β_{T,0}([t, ∞) | x) − F^β_{T,0}([t, ∞) | x) | = oP(1).    (4.6)
See Lemma 7.1 in the Appendix. The uniform convergence in equation (4.4) then follows.

To prove the consistency of β̂, we need the following assumptions:

(AC1) For τ appearing in conditions (4.1)–(4.2),

sup_{x∈X} sup_{t∈(−∞,τ]} | Ĥk([t, ∞) | x) − Hk([t, ∞) | x) | = oP(1),   k ∈ {0, 1}.

(AC2) The parameter set B ⊂ Rp is compact.

(AC3) There exist constants a > 0 and c1 > 0 such that

| φ(x, β) − φ(x, β′) | ≤ c1 ‖β − β′‖^a,   ∀β, β′ ∈ B, ∀x ∈ X.

(AC4) inf_{β∈B} inf_{x∈X} φ(x, β) > 0.
Now we can state our consistency result.

Theorem 4.1. Assume that (AC1)–(AC4) and (2.1), (2.2), (2.6), (2.14), (4.1) and (4.2) hold true. Moreover, assume that there exists a unique value β0 in the parameter space B such that (2.13) is true. Then β̂ − β0 = oP(1).

Let us point out that the consistency result is stated in terms of the subdistributions of the observations and the conditional probability model {φ(x, β) : β ∈ B}. If the identification assumptions used in Proposition 3.1 hold true and the model is correctly specified, φ(x, β̂) consistently estimates P(T < ∞ | x), i.e. 1 − φ(x, β̂) consistently estimates the cure probability P(T = ∞ | x), for all x in the support of X. Let us also notice that condition (AC3) guarantees the Glivenko–Cantelli property for certain classes of functions. It could be significantly weakened, but in the applications our condition (AC3) will cover the common modeling situations. Condition (AC4) is a weak condition on the model φ(x, β) and is e.g. satisfied for the logistic model if X and B are compact.
4.2 Asymptotic normality
For the asymptotic normality we will use the approach in Chen et al. (2003). For this purpose we use the derivative of log L̂n(β) with respect to β.

First note that the vector of partial derivatives of the log-likelihood log L̂n(β) with respect to the components of β equals

∇β log L̂n(β) = (1/n) Σ_{i=1}^n ∇β { δi log φ(Xi, β) + δi log[Ĥ1({Yi} | Xi)]
    − δi log[ Ĥ([Yi, ∞) | Xi) − (1 − φ(Xi, β)) F̂C([Yi, ∞) | Xi) ]
    + δi log F̂^{(1)}_{T,β}([Yi, ∞) | Xi)
    + (1 − δi) log[ φ(Xi, β) F̂^{(1)}_{T,β}([Yi, ∞) | Xi) + 1 − φ(Xi, β) ] }
  = (1/n) Σ_{i=1}^n ∇β qi(β, Ĥ0, Ĥ1),

where qi is defined in the proof of Theorem 4.1.
To develop the asymptotic normality of our estimator β̂, we embed the nuisance functions Hk([·, ∞) | ·) (k = 0, 1) in a functional space H, which is equipped with a pseudo-norm ‖·‖H. Both the space H and its pseudo-norm ‖·‖H will be chosen depending on the estimators Ĥk([·, ∞) | ·), k = 0, 1, and have to satisfy certain conditions, which we give below. The true vector of nuisance functions is

η0(t, x) = (η01(t, x), η02(t, x)) = (H0([t, ∞) | x), H1([t, ∞) | x)).

For each x ∈ X and each η ∈ H, let η(dt, x) = (η1(dt, x), η2(dt, x)) be the measures associated with the non-increasing functions η(·, x) = (η1(·, x), η2(·, x)), and define

Mn(β, η) = (1/n) Σ_{i=1}^n m(Yi, δi, Xi; β, η)    (4.7)

and

M(β, η) = E[m(Y, δ, X; β, η)],    (4.8)

where

m(t, δ, x; β, η) = δ ∇β φ(x, β)/φ(x, β) − δ ∇β φ(x, β) T2(η)(t, x)/T1(β, η)(t, x) + δ T4(β, η)(t, x)
  + (1 − δ) [ ∇β φ(x, β) T3(β, η)(t, x) + φ(x, β)(T3 T4)(β, η)(t, x) − ∇β φ(x, β) ] / [ φ(x, β) T3(β, η)(t, x) + 1 − φ(x, β) ],

with

T1(β, η)(t, x) = [η1(t, x) + η2(t, x)] − (1 − φ(x, β)) T2(β, η)(t, x),
T2(β, η)(t, x) ≡ T2(η)(t, x) = ∏_{−∞<s<t} { 1 − η1(ds, x)/(η1 + η2)(s, x) },
T3(β, η)(t, x) = ∏_{−∞<s<t} { 1 − η2(ds, x)/T1(β, η)(s, x) },
T4(β, η)(t, x) = ∇β log T3(β, η)(t, x)
             = −∇β φ(x, β) ∫_{(−∞,t)} T2(η)(s, x) η2(ds, x) / { T1(β, η)(s, x) [T1(β, η)(s, x) − η2({s}, x)] }.

Note that for η = η0 we have

T1(β, η0)(t, x) = H([t, ∞) | x) − (1 − φ(x, β)) FC([t, ∞) | x),

T2(η0)(t, x) = FC([t, ∞) | x) and T3(β, η0)(t, x) = F^β_{T,0}([t, ∞) | x). Hence, we have that

M(β0, η0) = 0

and

‖Mn(β̂, η̂)‖ = inf_{β∈B} ‖Mn(β, η̂)‖,

where η̂(t, x) = (η̂1(t, x), η̂2(t, x)) = (Ĥ0([t, ∞) | x), Ĥ1([t, ∞) | x)). If in addition β = β0,

T1(β0, η0)(t, x) = F^{β0}_{T,0}([t, ∞) | x) FC([t, ∞) | x),   t ∈ (−∞, τ],    (4.9)
and thus, for any x, the map t ↦ T1(β0, η0)(t, x) is decreasing on (−∞, τ]. Moreover, by condition (4.2),

inf_{x∈X} T1(β0, η0)(τ, x) > 0.

We need this lower bound to be valid on a neighborhood of β0. Hence, let us consider a neighborhood B0 of β0 such that

inf_{β∈B0} inf_{x∈X} T1(β, η0)(τ, x) > 0.    (4.10)

The existence of B0 is guaranteed by condition (4.2) and the regularity of the function φ(·, ·); see assumption (AN3) below, which strengthens assumption (AC3). Finally, let us note that, by construction, for any t ∈ (−∞, τ), H({t} | x) = H0({t} | x) + H1({t} | x) ≥ (1 − φ(x, β)) FC({t} | x) + H1({t} | x), and thus

H([t, ∞) | x) − (1 − φ(x, β)) FC([t, ∞) | x) − H1({t} | x) ≥ H((t, ∞) | x) − (1 − φ(x, β)) FC((t, ∞) | x).

Then, by the arguments guaranteeing the existence of a set B0 as in equation (4.10),

inf_{β∈B0} inf_{x∈X} inf_{t∈(−∞,τ)} [ T1(β, η0)(t, x) − H1({t} | x) ] > 0.    (4.11)

Further, define the Gâteaux derivative of M(β, η0) in the direction [η − η0] by

∇η M(β, η0)[η − η0] = lim_{τ→0} τ^{−1} [ M(β, η0 + τ(η − η0)) − M(β, η0) ],

and in a similar way the Gâteaux derivatives ∇η Tj(β, η0)[η − η0] are defined.
We need the following assumptions:

(AN1) The matrix ∇β M(β, η) exists for β in a neighborhood B0 of β0 and is continuous in β at β = β0. Moreover, ∇β M(β0, η0) is non-singular.

(AN2) Hk([·, ∞) | ·) ∈ H for k = 0, 1.

(AN3) The function β ↦ φ(x, β) is continuously differentiable for all x ∈ X, and the derivative is bounded uniformly in x ∈ X and β ∈ B0. Moreover, B0 is compact and β0 belongs to the interior of B0.

(AN4) For k = 0, 1, the estimator Ĥk([·, ∞) | ·) satisfies the following:

(i) P(Ĥk([·, ∞) | ·) ∈ H) → 1;

(ii) ‖(Ĥk − Hk)([·, ∞) | ·)‖H = oP(n^{−1/4});

(iii) There exist functions Ψ1 and Ψ2 such that

Σ_{k=0}^{1} E*[ ψ1k(Y, X) ∫_{−∞<u<Y} ψ2k(u, X) d(Ĥk − Hk)([u, ∞) | X) ] = (1/n) Σ_{i=1}^n Ψ1(Yi, δi, Xi) + R1n,

and

Σ_{k,ℓ=0}^{1} E*[ ψ3k(Y, X) ∫_{−∞<u<Y} ψ4k(u, X) ψ5k( (Ĥk − Hk)([u, ∞) | X), X ) dHℓ(u | X) ] = (1/n) Σ_{i=1}^n Ψ2(Yi, δi, Xi) + R2n,

where E* denotes the conditional expectation given the data (Yi, δi, Xi), 1 ≤ i ≤ n, the functions ψjk are defined in (7.13) in the Appendix, and where

E[Ψℓ(Y, δ, X)] = 0,   Rℓn = oP(n^{−1/2})

(j = 1, . . . , 5; k = 0, 1; ℓ = 1, 2). Note that the above expectations are conditional on the sample and are taken with respect to the generic variables (Y, δ, X), which have the same law as the sample.

(AN5) The class H satisfies ∫_0^∞ √(log N(ε, H, ‖·‖H)) dε < ∞, where N(ε, H, ‖·‖H) is the ε-covering number of the space H with respect to the norm ‖·‖H, i.e. the smallest number of balls of ‖·‖H-radius ε needed to cover the space H.
Theorem 4.2. Assume that β̂ − β0 = oP(1) and that (AN1)–(AN5) and (2.1), (2.2), (2.6), (2.14), (4.1) and (4.2) hold true. Then

n^{1/2}(β̂ − β0) ⇒ N(0, Ω),

where

Ω = ∇β M(β0, η0)^{−1} V ∇β M(β0, η0)^{−1}   and   V = Var( m(Y, δ, X; β0, η0) + Ψ1(Y, δ, X) + Ψ2(Y, δ, X) ).

4.3 Bootstrap consistency
Although in principle one can use Theorem 4.2 above for making inference, the asymptotic variance Ω has a complicated structure: estimating Ω would not only be cumbersome, but its precision for small samples could also be rather poor. We continue this section by showing that a bootstrap procedure can be used to estimate the asymptotic variance of β̂, to approximate the whole distribution of β̂, or to construct confidence intervals or test hypotheses regarding β0.

Here, we propose to use a naive bootstrap procedure, consisting in drawing triplets (Yi*, δi*, Xi*), 1 ≤ i ≤ n, randomly with replacement from the data (Yi, δi, Xi), 1 ≤ i ≤ n. Let Ĥk* be the same estimator as Ĥk (k = 0, 1) but based on the bootstrap data, and for each (β, η) let Mn*(β, η) = n^{−1} Σ_{i=1}^n m(Yi*, δi*, Xi*; β, η). Define the bootstrap estimator β̂* to be any sequence that satisfies

‖Mn*(β̂*, η̂*) − Mn(β̂, η̂)‖ = inf_{β∈B} ‖Mn*(β, η̂*) − Mn(β̂, η̂)‖,

where η̂*(t, x) = (η̂1*(t, x), η̂2*(t, x)) = (Ĥ0*([t, ∞) | x), Ĥ1*([t, ∞) | x)).
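The resampling step itself is elementary; a minimal R sketch, where estimate_beta is a hypothetical wrapper around the full two-step likelihood maximization of Sections 3 and 5:

boot_beta <- function(Y, delta, X, B = 250, estimate_beta) {
  n <- length(Y)
  replicate(B, {
    idx <- sample.int(n, n, replace = TRUE)   # resample triplets (Y_i, delta_i, X_i)
    estimate_beta(Y[idx], delta[idx], X[idx])
  })
}
# boot <- boot_beta(Y, delta, X, B = 250, estimate_beta)
# apply(boot, 1, var)   # bootstrap variance, coefficient by coefficient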
The following result shows that the bootstrap works, in the sense that it allows us to recover correctly the distribution of n^{1/2}(β̂ − β0).

Theorem 4.3. Assume that β̂ − β0 = oP(1) and that (AN1)–(AN5) hold true. Moreover, assume that ∇β M(β, η) is continuous in η (with respect to ‖·‖H) at (β, η) = (β0, η0), and that (AN4) holds true with Ĥk − Hk replaced by Ĥk* − Ĥk (k = 0, 1), in P*-probability. Then,

sup_{u∈Rp} | P*( n^{1/2}(β̂* − β̂) ≤ u ) − P( n^{1/2}(β̂ − β0) ≤ u ) | = oP(1),

where P* denotes probability conditionally on the data (Yi, δi, Xi), i = 1, . . . , n, and where the inequality sign means component-wise inequality for vectors.
4.4 Verification of the assumptions for kernel estimators

We finish this section with an illustration of the verification of the assumptions of our asymptotic results when the conditional subdistributions Hk are estimated by means of kernel smoothing.

Consider the case where X is composed of continuous and discrete components, that is, X = (Xc, Xd) ∈ Xc × Xd ⊂ R^{dc} × R^{dd}, with dc + dd = d ≥ 1. For simplicity, assume that the support Xd of the discrete subvector Xd is finite. We also assume that the lifetime T has not been transformed by a logarithmic or other transformation, so that its support is [0, ∞]. The subdistributions Hk([t, ∞) | x) can be estimated by means of a kernel estimator:

Ĥk([t, ∞) | x) = Σ_{i=1}^n [ K̃hn(Xi − x) / Σ_{j=1}^n K̃hn(Xj − x) ] I(Yi ≥ t, δi = k),

where, for any (xc, xd) ∈ Xc × Xd,

K̃hn(Xi − x) = Khn(Xc,i − xc) I(Xd,i = xd),

hn is a bandwidth sequence, Kh(·) = K(·/h)/h^{dc}, K(u) = k(u1) · . . . · k(udc), and k is a probability density function.

Nonparametric smoothing of continuous covariates is possible for dimensions dc larger than 1. However, the technical arguments necessary to verify the assumptions used for the asymptotic results are tedious. Therefore, in the following we consider dc = 1. The discrete covariates do not contribute to the curse of dimensionality, and therefore dd could be larger than 1. However, for simplicity, below we do not consider discrete covariates.
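With a single continuous covariate this estimator is a few lines of R (our sketch, with the Epanechnikov kernel that is also used in Section 5):

K_epa <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)   # Epanechnikov kernel
Hk_hat <- function(t, x, k, Y, delta, X, h) {
  w <- K_epa((X - x) / h)
  sum(w / sum(w) * (Y >= t & delta == k))   # Nadaraya-Watson weighted indicator
}
# Example call: Hk_hat(1, 0.25, k = 1, Y, delta, X, h = 3 * length(Y)^(-2/7))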
To satisfy assumption (AN4), we need to impose the following conditions:

(C1) The sequence hn satisfies n h_n^4 → 0 and n h_n^{3+ζ} (log n)^{−1} → ∞ for some ζ > 0.

(C2) The support X of X is a compact subset of R.

(C3) The probability density function K has compact support, ∫ u K(u) du = 0, and K is twice continuously differentiable.
Further, let F1 be the space of functions from [0, τ] to [0, 1] with variation bounded by M, and let F2 be the space of continuously differentiable functions f from X to [−M, M] that satisfy sup_{x∈X} |f′(x)| ≤ M and sup_{x1,x2∈X} |f′(x1) − f′(x2)|/|x1 − x2|^ε ≤ M, for some M < ∞ and 0 < ε < 1. Let

H = { (t, x) ↦ η(t, x) : η(·, x) ∈ F1, (∂/∂x) η(·, x) ∈ F1 for all x ∈ X, and η(t, ·) ∈ F2 for all 0 ≤ t ≤ τ }.

We define the following norm associated with the space H: for η ∈ H, let

‖η‖H = sup_{0≤t≤τ} sup_{x∈X} |η(t, x)|.

Then it follows from Propositions 1 and 2 in Akritas and Van Keilegom (2001) that P(Ĥk ∈ H) → 1 provided n h_n^{3+ζ} (log n)^{−1} → ∞, with ζ > 0 as in condition (C1). Moreover, sup_{x,t} |Ĥk([t, ∞) | x) − Hk([t, ∞) | x)| = OP((n hn)^{−1/2} (log n)^{1/2}) = oP(n^{−1/4}) (see Proposition 1 in Akritas and Van Keilegom 2001). The class H satisfies assumption (AN5) thanks to Lemma 6.1 in Lopez (2011). It remains to show the validity of assumption (AN4)(iii). We will show the first statement; the second one can be shown in a similar way. Note that the left hand side equals
Σ_{k=0}^{1} E*[ ψ1k(Y, X) ∫_{0<u<Y} ψ2k(u, X) f_X^{−1}(X) d{ n^{−1} Σ_{i=1}^{n} Kh(Xi − X) [ I(Yi ≥ u, δi = k) − Hk([u, ∞) | X) ] } ] + oP(n^{−1/2})

= Σ_{k=0}^{1} n^{−1} Σ_{i=1}^{n} E[ ψ1k(Y, X) f_X^{−1}(X) Kh(Xi − X) { −ψ2k(Yi, X) I(Yi ≤ Y, δi = k) − ∫_{0<u<Y} ψ2k(u, X) dHk([u, ∞) | X) } | Yi, δi, Xi ] + oP(n^{−1/2})

= −Σ_{k=0}^{1} n^{−1} Σ_{i=1}^{n} E[ ψ1k(Y, Xi) { ψ2k(Yi, Xi) I(Yi ≤ Y, δi = k) − ∫_{0<u<Y} ψ2k(u, Xi) dHk((−∞, u] | Xi) } | Yi, δi, Xi, X = Xi ] + oP(n^{−1/2} + h^2),

which is of the required form.
5 Simulations
In this section we investigate the small sample performance of our estimation method. We consider the following model. The covariate X is generated from a uniform distribution on [−1, 1], and the conditional probability φ(x, β) of not being cured follows a logistic model:

φ(x, β) = exp(β1 + β2 x) / [1 + exp(β1 + β2 x)],

for any −1 ≤ x ≤ 1. We work with β0 = (β01, β02) = (1.75, 2) and (1.1, 2), corresponding to an average cure rate of 20% and 30%, respectively. The conditional distribution function FT,0(· | x) of the uncured individuals is constructed as follows. For a given X, we draw T from an exponential distribution with mean equal to exp[−(γ0 + γ1 x + γ2/(1 + 2x²))], where γ0 = γ1 = 0.5 and γ2 ∈ {0, 1, 2}. Next, in order to respect condition (4.2), we truncate this distribution at τ, which is the quantile of order 0.97 of an exponential distribution with mean E{exp[−(γ0 + γ1 X + γ2/(1 + 2X²))]}, i.e.

FT,0([0, t] | x) = {1 − exp[−exp(γ0 + γ1 x + γ2/(1 + 2x²)) t]} I(0 ≤ t ≤ τ).

Note that this is the distribution function corresponding to a Cox model with baseline hazard equal to I(0 ≤ t ≤ τ) and exponential factor equal to exp(γ0 + γ1 x + γ2/(1 + 2x²)).

Next, we generate the censoring variable C independently of X from an exponential distribution with mean equal to 1.65 when β0 = (1.75, 2), and with mean 1.45 when β0 = (1.1, 2). In this way we have respectively 40% and 50% censoring when γ2 = 0.
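A condensed R sketch of this design (the truncation of FT,0 at τ and the second parameter setting are omitted for brevity):

gen_sample <- function(n, beta0 = c(1.75, 2), gamma2 = 0, cens_mean = 1.65) {
  X <- runif(n, -1, 1)
  phi <- plogis(beta0[1] + beta0[2] * X)          # P(uncured | X)
  uncured <- rbinom(n, 1, phi)
  rate <- exp(0.5 + 0.5 * X + gamma2 / (1 + 2 * X^2))
  T <- ifelse(uncured == 1, rexp(n, rate), Inf)   # cured subjects: T = infinity
  C <- rexp(n, rate = 1 / cens_mean)
  data.frame(Y = pmin(T, C), delta = as.numeric(T <= C), X = X)
}
dat <- gen_sample(150)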
In what follows we compare our estimator of β with the estimator proposed by Lu (2008), which assumes a Cox model for the uncured individuals. The exponential factor in the Cox model is assumed to be linear in the covariate X, and hence the Cox model is satisfied only when γ2 = 0. The estimated β coefficients under the Cox model are obtained using the R package smcure.
For our estimation procedure we used the kernel estimators given in Section 4.4, and we computed β̂ using the optimization procedure optim in R. As starting values we used the estimator obtained from a logistic model based on the censoring indicator (as a surrogate for the unobserved cure indicator). However, due to the non-concavity of our likelihood function and the inconsistency of this vector of starting values, the procedure optim often ends up in a local maximum instead of the global maximum. To circumvent this problem, we added the following intermediate step to the estimation procedure. Based on the initial starting values, we estimate β from a logistic model based on the nonparametric estimator F̂T([0, ∞) | x); that is, we maximize the log-likelihood

Σ_{i=1}^n { (1 − F̂T([0, ∞) | Xi)) log(φ(Xi, β)) + F̂T([0, ∞) | Xi) log(1 − φ(Xi, β)) }.

Since this log-likelihood is concave, it has a unique local and global maximum, expected to be close to the maximizer of our likelihood. We then use this intermediate estimate as starting value for our likelihood maximization.
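The intermediate step is a standard concave maximization; a minimal R sketch, where Fhat is the precomputed vector F̂T([0, ∞) | Xi), plugged in exactly as in the display above:

intermediate_loglik <- function(beta, X, Fhat) {
  p <- plogis(beta[1] + beta[2] * X)
  sum((1 - Fhat) * log(p) + Fhat * log(1 - p))   # as in the display above
}
# beta_start <- optim(c(0, 0), intermediate_loglik, X = X, Fhat = Fhat,
#                     control = list(fnscale = -1))$par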
The results of this two-step maximization procedure are given in Table 1 for the case where β0 = (1.75, 2), and in Table 2 for the case where β0 = (1.1, 2). A total of 500 samples of size n = 150 and n = 300 are generated, and the tables show the bias and mean squared error (MSE) of the estimators β̂1 and β̂2 obtained under the Cox model and from our procedure. The kernel function K is taken equal to the Epanechnikov kernel: K(u) = (3/4)(1 − u²) I(|u| ≤ 1). The bandwidth h of the kernel estimators Ĥk(· | Xi) (k = 0, 1) is taken proportional to n^{−2/7} so as to satisfy regularity condition (C1), i.e. h = c n^{−2/7} for several values of c, namely c = 2, 3 and 4. In addition, we also used the cross-validation (CV) procedure proposed by Li, Lin and Racine (2013) for kernel estimators of conditional distribution functions. The CV procedure is implemented in the package np in R. For each sample in our simulation, we calculated these bandwidths for Ĥ0 and Ĥ1 and used the average of the two.

The tables show that our estimator outperforms the one based on the Cox model, even when the Cox model is correct. They also show that our estimator is only mildly sensitive to the bandwidth, which could be explained by the fact that we average out the effect of the bandwidth. We also see that the CV selection of the bandwidth works rather well, in the sense that the MSE is close to the smallest value among the MSEs corresponding to the three fixed bandwidths.
Next, we look at the estimation of the quartiles of the distribution FT,0(· | x) when x = 0.25. We estimate these quartiles by means of our nonparametric estimator F̂^{β̂}_{T,0}(· | x) and by means of the Cox model studied in Lu (2008).
                     c = 2          c = 3          c = 4          hCV            Cox
  n  γ2  Par.    Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE
150   0   β1     .092   .230    .012   .183   -.075   .162    .044   .224    .216   .523
          β2     .002   .540   -.231   .438   -.506   .510   -.158   .536    .291   1.15
      1   β1     .063   .123   -.021   .101   -.094   .094    .017   .120    .147   .191
          β2    -.099   .340   -.334   .331   -.605   .505   -.246   .397    .278   .635
      2   β1     .045   .100   -.029   .086   -.099   .083   -.005   .101    .124   .145
          β2    -.109   .242   -.356   .277   -.632   .497   -.302   .382    .263   .476
300   0   β1     .021   .100   -.029   .088   -.081   .081    .004   .099    .099   .139
          β2    -.081   .266   -.252   .268   -.461   .363   -.148   .288    .135   .365
      1   β1    -.004   .060   -.048   .055   -.097   .055   -.013   .061    .097   .092
          β2    -.107   .181   -.278   .208   -.482   .328   -.150   .201    .215   .302
      2   β1    -.015   .050   -.059   .046   -.107   .049   -.030   .052    .077   .074
          β2    -.124   .157   -.295   .198   -.498   .329   -.197   .217    .181   .247

Table 1: Bias and MSE of β̂1 and β̂2 for two sample sizes, three values of γ2, three bandwidths of the form h = cn^{−2/7} and the bandwidth hCV obtained from cross-validation. Here, P(cured) = 0.2 and P(censoring) = 0.4 for γ2 = 0. The Cox model is satisfied for γ2 = 0.
                     c = 2          c = 3          c = 4          hCV            Cox
  n  γ2  Par.    Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE
150   0   β1     .082   .139    .003   .116   -.060   .105    .031   .130    .116   .189
          β2    -.017   .418   -.244   .365   -.525   .477   -.217   .419    .215   .618
      1   β1     .057   .076   -.011   .063   -.068   .059    .010   .071    .086   .099
          β2    -.107   .253   -.328   .277   -.608   .478   -.294   .352    .227   .434
      2   β1     .049   .065   -.016   .056   -.071   .054    .006   .066    .065   .078
          β2    -.129   .202   -.361   .260   -.640   .493   -.329   .358    .196   .329
300   0   β1     .059   .074   -.010   .060   -.058   .056    .031   .073    .060   .083
          β2    -.068   .216   -.237   .226   -.453   .326   -.153   .241    .103   .257
      1   β1    -.013   .036   -.043   .034   -.074   .035   -.024   .035    .050   .047
          β2    -.117   .135   -.282   .175   -.486   .306   -.187   .160    .159   .197
      2   β1    -.049   .028   -.080   .030   -.037   .030   -.034   .030    .035   .037
          β2    -.295   .168   -.496   .306   -.244   .194   -.217   .182    .128   .156

Table 2: Bias and MSE of β̂1 and β̂2 for two sample sizes, three values of γ2, three bandwidths of the form h = cn^{−2/7} and the bandwidth hCV obtained from cross-validation. Here, P(cured) = 0.3 and P(censoring) = 0.5 for γ2 = 0. The Cox model is satisfied for γ2 = 0.
The results given in Tables 3 and 4 show that, as could be expected, when the Cox model is not satisfied (i.e. when γ2 = 1 or 2), the MSE of the quartiles obtained under the Cox model is much higher than the corresponding MSE obtained from our procedure. This shows the importance of having a model that does not impose any assumptions on the distribution of the uncured individuals and which still provides very accurate estimators for the logistic part of the model.
                     c = 2          c = 3          c = 4          hCV            Cox
  n  γ2   p      Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE
150   0  .25     .044   .061    .025   .035    .030   .032    .033   .049    .031   .027
         .50     .022   .049    .003   .031    .003   .025    .011   .043    .028   .023
         .75     .024   .083    .006   .055    .001   .045    .011   .071    .032   .038
      1  .25     .083   .053    .102   .039    .155   .053    .096   .049    .267   .104
         .50     .060   .051    .092   .041    .144   .049    .073   .048    .251   .093
         .75     .072   .089    .114   .075    .154   .077    .091   .085    .254   .117
      2  .25     .126   .060    .218   .082    .325   .139    .202   .085    .592   .401
         .50     .098   .058    .189   .073    .291   .121    .172   .081    .513   .308
         .75     .112   .107    .210   .120    .296   .156    .187   .124    .492   .322
300   0  .25     .007   .032   -.007   .023   -.018   .018    .002   .030    .010   .013
         .50    -.001   .034   -.013   .023   -.026   .018   -.006   .031    .008   .013
         .75    -.003   .053   -.019   .034   -.030   .026   -.009   .046    .010   .023
      1  .25     .027   .026    .049   .021    .081   .021    .033   .026    .252   .081
         .50     .028   .031    .054   .025    .086   .025    .034   .030    .240   .076
         .75     .031   .055    .053   .040    .086   .038    .033   .052    .235   .086
      2  .25     .063   .031    .129   .037    .216   .065    .096   .039    .580   .366
         .50     .055   .033    .121   .040    .200   .062    .086   .038    .498   .274
         .75     .055   .058    .109   .056    .191   .074    .084   .061    .452   .248

Table 3: Bias and MSE of the conditional quantiles of order p = 0.25, 0.50 and 0.75 at x = 0.25 for two sample sizes, three bandwidths of the form h = cn^{−2/7} and the bandwidth hCV obtained from cross-validation. Here, P(cured) = 0.2 and P(censoring) = 0.4 for γ2 = 0. The Cox model is satisfied for γ2 = 0.
not impose any assumptions on the distribution of the uncured individuals and which still
provides very accurate estimators for the logistic part of the model.
We also verify how close the distributions of β̂1 and β̂2 are to a normal distribution. We know thanks to Theorem 4.2 that the estimators converge to a normal limit when n tends to infinity. Figure 1 shows that for n = 150 the distribution is rather close to a normal limit, especially for β̂1. The figure is based on 1000 samples generated from the above model with P(cured) = 0.2 and P(censoring) = 0.4. The results for n = 300 (not shown here for space constraints) are close to a straight line, showing that the results improve when n increases.
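A normality check like the QQ-plots in Figure 1 can be reproduced from such Monte Carlo estimates; a minimal sketch, with placeholder data standing in for the 1000 simulated estimates:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical array of 1000 Monte Carlo estimates of beta_1
# (placeholder data; in the study these come from the simulations).
beta1_hats = np.random.normal(loc=1.0, scale=0.5, size=1000)

stats.probplot(beta1_hats, dist="norm", plot=plt)  # QQ-plot vs. normal quantiles
plt.title("QQ-plot of beta_1 estimates")
plt.show()
```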
Finally, we verify the accuracy of the naive bootstrap proposed in Section 4.3. We consider the above model, but restrict attention to n = 150 and to the case where P(cured) = 0.2 and P(censoring) = 0.4. Figure 2 shows boxplots of the variance of β̂1 and β̂2 obtained from 250 bootstrap resamples for each of 500 samples. The bandwidth is h = 3n^(−2/7). The empirical variance of the 500 estimators of β1 and β2 is also added, and shows that the bootstrap variance is well centered around the corresponding empirical variance.
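The naive bootstrap scheme described here resamples the raw observations with replacement and re-estimates β on each resample. A minimal sketch, assuming a (hypothetical) estimator fit_beta implementing (3.1) is available:

```python
import numpy as np

def bootstrap_variance(Y, d, X, h, fit_beta, B=250, rng=None):
    """Naive bootstrap: resample (Y, delta, X) with replacement B times
    and return the empirical variance of the re-estimated coefficients."""
    rng = np.random.default_rng(rng)
    n = len(Y)
    estimates = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)          # resample indices
        estimates.append(fit_beta(Y[idx], d[idx], X[idx], h))
    return np.var(np.asarray(estimates), axis=0)  # componentwise variance
```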
                  c=2            c=3            c=4            hCV            Cox
n    γ2   p     Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE    Bias    MSE
150  0   .25    .074   .095    .020   .049    .021   .044    .046   .071    .039   .038
150  0   .50    .049   .069    .005   .041   -.003   .031    .023   .057    .031   .029
150  0   .75    .042   .098    .007   .063   -.005   .050    .024   .084    .034   .046
150  1   .25    .107   .072    .116   .053    .151   .059    .119   .066    .274   .116
150  1   .50    .076   .060    .103   .049    .142   .054    .088   .057    .255   .100
150  1   .75    .099   .113    .118   .083    .154   .086    .109   .102    .257   .126
150  2   .25    .159   .085    .232   .097    .322   .144    .233   .109    .586   .402
150  2   .50    .120   .066    .211   .089    .296   .131    .195   .092    .508   .307
150  2   .75    .143   .135    .228   .142    .308   .178    .219   .153    .489   .325
300  0   .25    .032   .042   -.005   .029   -.029   .023    .016   .038    .018   .017
300  0   .50    .018   .039   -.011   .026   -.035   .021    .007   .034    .013   .016
300  0   .75    .015   .060   -.018   .037   -.037   .028   -.001   .054    .016   .022
300  1   .25    .026   .033    .046   .025    .071   .023    .034   .031    .252   .083
300  1   .50    .030   .036    .052   .027    .080   .026    .037   .034    .234   .074
300  1   .75    .033   .060    .050   .042    .081   .040    .038   .055    .232   .087
300  2   .25    .070   .040    .132   .042    .212   .064    .103   .046    .570   .357
300  2   .50    .060   .038    .125   .044    .201   .064    .089   .043    .492   .269
300  2   .75    .062   .066    .115   .061    .193   .078    .090   .070    .451   .251

Table 4: Bias and MSE of the conditional quantiles of order p = 0.25, 0.50 and 0.75 at x = 0.25 for two sample sizes, three bandwidths of the form h = cn^(−2/7) and the bandwidth hCV obtained from cross-validation. Here, P(cured) = 0.3 and P(censoring) = 0.5 for γ2 = 0. The Cox model is satisfied for γ2 = 0.
6 Data analysis
Let us now apply our estimation procedure to two medical data sets. The first one concerns 286 patients with lymph-node-negative breast cancer treated between 1980 and 1995 (Wang et al. (2005)). The event of interest is distant metastasis, and the associated survival time is the distant metastasis-free survival time (defined as the time to first distant progression or death, whichever comes first). 107 of the 286 patients experienced a relapse from breast cancer. The plot of the Kaplan-Meier estimator of the data is given in Figure 3(a) and shows a large plateau at about 0.60. Furthermore, a large proportion of the censored observations is in the plateau, which suggests that a cure model is appropriate for these data. As a covariate we use the age of the patients, which ranges from 26 to 83 years, with an average of about 54 years.
We estimate β using our estimator and using the estimator based on the Cox model. The bandwidth h is selected using cross-validation, as in the simulation section.
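The cross-validation step can be organized as a grid search over the constant c in h = cn^(−2/7); the criterion itself is the one used in the simulation section and is abstracted here as a hypothetical callable cv_criterion:

```python
import numpy as np

def select_bandwidth(Y, d, X, cv_criterion, cs=np.linspace(1, 5, 17)):
    """Grid search over bandwidths h = c * n**(-2/7); cv_criterion is a
    hypothetical callable returning the cross-validation score for h."""
    n = len(Y)
    hs = cs * n ** (-2 / 7)
    scores = [cv_criterion(Y, d, X, h) for h in hs]
    return hs[int(np.argmin(scores))]
```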
[Figure 1 here: 2 × 3 grid of QQ-plots; axes: Theoretical Quantiles (x) vs. Sample Quantiles (y).]

Figure 1: QQ-plots of β̂1 (first row) and β̂2 (second row) for 1000 samples of size n = 150. The first column corresponds to γ2 = 0, the second to γ2 = 1 and the third to γ2 = 2. The bandwidth is h = 3n^(−2/7).
The estimated intercept is -0.224 (with standard deviation equal to 0.447 obtained using a naive bootstrap
procedure), and the estimated slope parameter is -0.005 (with standard deviation equal
to 0.008). Under the Cox model the estimated intercept and slope are respectively 0.063
and -0.010. A 95% confidence interval is given by (−1.100, 0.653) for the intercept and
(−0.021, 0.011) for the slope, where the variance is again based on the naive bootstrap
procedure. The graph of the two estimators of the function φ(x) is given in Figure 3(b).
The estimated coefficients and curves are quite close to each other, suggesting that the Cox
model might be valid. This is also confirmed by Figure 3(c)-(d), which shows the estimation
of the survival function 1 − FT,0 (·|x) of the uncured patients for x = 48 and x = 60 based
on our estimation procedure and the procedure based on the Cox model. The figure shows
that the two estimators are close for both values of x.
Next, we analyse data provided by the Medical Birth Registry of Norway (see
http://folk.uio.no/borgan/abg-2008/data/data.html). The data set contains information on
births in Norway since 1967, related to a total of 53,558 women. We are interested in the
time between the birth of the first and the second child, for those mothers whose first child
died within the first year (n = 262). The covariate of interest is age (X), which is the age
of the mother at the birth of the first child. The age ranges from 16.8 to 29.8 years, with
an average of 23.2 years. The cure rate is the fraction of women who gave birth only once.
[Figure 2 here: 2 × 3 grid of boxplots of the bootstrap variances.]

Figure 2: Boxplots of the variance of β̂1 (first row) and β̂2 (second row) obtained from 250 bootstrap resamples for each of 500 samples of size n = 150. The first column corresponds to γ2 = 0, the second to γ2 = 1 and the third to γ2 = 2. The bandwidth is h = 3n^(−2/7). The empirical variance of the 500 estimators of β1 and β2 is also added (dashed line).
Figure 4(a) shows the Kaplan-Meier estimator, and suggests that a cure fraction is present.
As we did for the first data set, we analyse these data using the approach proposed in
this paper, and also using the Cox mixture cure model. The estimated intercept equals 1.952
using our model and 0.034 using the Cox model. The bootstrap confidence interval for the
intercept is (−1.577, 5.481) (the estimated standard deviation equals 1.801). The estimated
slope equals -0.041 and 0.052, respectively, under the two models. For our estimation procedure
the confidence interval is given by (−0.193, 0.111) (with estimated standard deviation equal
to 0.078). Figure 4(b) shows that the two estimators of the function φ(x) are quite different,
and have opposite slopes. Moreover, the survival function 1−FT,0 (·|x) of the uncured patients
is given in Figure 4(c)-(d) for x = 21 and x = 25. We see that the estimator based on the
Cox model is quite different from ours, suggesting that the Cox model might not be valid for
these data, although a formal test would need to confirm this. This is however beyond the
scope of this paper. Also note that the estimator of the cure proportion 1−φ(x) is increasing
under our model and decreasing under the Cox model. It seems however natural to believe
that the probability of having no second child (so the cure proportion) is increasing with
age, which is again an indication that the Cox model is not valid for these data.
[Figure 3 here: panels (a)–(d); axes: t, X, and 1 − P(cured | X).]

Figure 3: Analysis of the breast cancer data: (a) Kaplan-Meier estimator; (b) Graph of the proposed estimator of φ(x) (solid curve) and of the estimator based on the Cox model (dashed curve); (c) Estimation of 1 − FT,0(· | x) using the proposed estimator (solid curve) and using the estimator based on the Cox model (dashed curve) when x = 48; (d) Idem when x = 60.
7 Appendix: Proofs
Proof of Proposition 3.1. By the properties of the likelihood of a Bernoulli random variable given that Y ∈ dt and X = x, we have

E[ δ log { φ(x, β) F^β_{T,0}(dt | x) FC((t, ∞) | x) / H1(dt | x) }
  + (1 − δ) log { FC(dt | x) [ φ(x, β) F^β_{T,0}((t, ∞) | x) + 1 − φ(x, β) ] / H0(dt | x) } | Y ∈ dt, X = x ] ≤ 0.

Integrate with respect to Y and X and deduce that

E[log p(Y, δ, X; β)] ≤ E[log p(Y, δ, X; β0)].

If there exists some β ≠ β0 such that the last inequality becomes an equality, then necessarily p1(Y, X; β) = 1 almost surely. Then

FC((t, ∞) | x) φ(x, β) F^β_{T,0}(dt | x) = FC((t, ∞) | x) φ(x, β0) F^{β0}_{T,0}(dt | x),  ∀ −∞ < t ≤ τH(x),

for almost all x ∈ X. By Theorem 2.1, we deduce that necessarily β = β0.
Lemma 7.1. Let conditions (4.2), (AC1) and (AC4) hold true. Then,

sup_{x∈X} sup_{t∈(−∞,τ]} |F̂C([t, ∞) | x) − FC([t, ∞) | x)| = oP(1),

sup_{β∈B} sup_{x∈X} sup_{t∈(−∞,τ]} |T1(β, Ĥ0, Ĥ1)(t, x) − T1(β, H0, H1)(t, x)| = oP(1),

where

T1(β, H0, H1)(t, x) = H([t, ∞) | x) − (1 − φ(x, β)) FC([t, ∞) | x),

and T1(β, Ĥ0, Ĥ1) is defined similarly, but with H0, H1 and FC replaced by Ĥ0, Ĥ1 and F̂C, respectively. Moreover,

sup_{β∈B} sup_{x∈X} sup_{t∈(−∞,τ]} |F̂^β_{T,0}([t, ∞) | x) − F^β_{T,0}([t, ∞) | x)| = oP(1).

[Figure 4 here: panels (a)–(d); axes: t, X, and 1 − P(cured | X).]

Figure 4: Analysis of the second birth data: (a) Kaplan-Meier estimator; (b) Graph of the proposed estimator of φ(x) (solid curve) and of the estimator based on the Cox model (dashed curve); (c) Estimation of 1 − FT,0(· | x) using the proposed estimator (solid curve) and using the estimator based on the Cox model (dashed curve) when x = 21; (d) Idem when x = 25.
Proof of Lemma 7.1. Let us first investigate the uniform convergence of the estimated cumulative hazard measure Λ̂C(· | x). For any t ∈ (−∞, τ] let us write

Λ̂C((−∞, t] | x) − ΛC((−∞, t] | x)
  = ∫_{(−∞,t]} Ĥ0(ds | x) / Ĥ([s, ∞) | x) − ∫_{(−∞,t]} H0(ds | x) / H([s, ∞) | x)
  = ∫_{(−∞,t]} [ 1/Ĥ([s, ∞) | x) − 1/H([s, ∞) | x) ] Ĥ0(ds | x) + ∫_{(−∞,t]} [ Ĥ0(ds | x) − H0(ds | x) ] / H([s, ∞) | x).

The integrals with respect to Ĥ0(ds | x) are well defined, since, with probability tending to 1, for each x ∈ X, the map s ↦ Ĥ0((−∞, s] | x), s ∈ (−∞, τ], is a function of bounded variation. The uniform convergence Assumption (AC1) implies that

sup_{x∈X} sup_{t∈(−∞,τ]} |Λ̂C((−∞, t] | x) − ΛC((−∞, t] | x)|
  ≤ c sup_{x∈X} sup_{t∈(−∞,τ]} { |Ĥ0([t, ∞) | x) − H0([t, ∞) | x)| + |Ĥ([t, ∞) | x) − H([t, ∞) | x)| } / H([τ, ∞) | x)²,

for some constant c > 0. Next, by Duhamel's identity (see Gill and Johansen 1990),

F̂C((t, ∞) | x) − FC((t, ∞) | x) = −FC((t, ∞) | x) ∫_{(−∞,t]} [ F̂C([s, ∞) | x) / FC((s, ∞) | x) ] ( Λ̂C(ds | x) − ΛC(ds | x) ).

Then, the uniform convergence of F̂C(· | x) follows from the uniform convergence of Λ̂C(· | x) and condition (4.2). The same type of arguments apply for T1(β, Ĥ0, Ĥ1), and hence we omit the details.

Next, since by conditions (4.2) and (AC4) we have

inf_{β∈B} inf_{x∈X} inf_{t∈(−∞,τ]} [ H([t, ∞) | x) − (1 − φ(x, β)) FC([t, ∞) | x) ] > 0,

there exists some constant c > 0 with the property that

P( inf_{β∈B} inf_{x∈X} inf_{t∈(−∞,τ]} [ Ĥ([t, ∞) | x) − (1 − φ(x, β)) F̂C([t, ∞) | x) ] ≥ c ) → 1.

Hence, the uniform convergence of F̂^β_{T,0}(· | x) follows.
Proof of Theorem 4.1. Let us write

F^β_{T,0}({Yi} | Xi) = Λ^β_{T,0}({Yi} | Xi) F^β_{T,0}([Yi, ∞) | Xi) = H1({Yi} | Xi) F^β_{T,0}([Yi, ∞) | Xi) / T1(β, H0, H1)(Yi, Xi),

where

T1(β, H0, H1)(t, x) = H([t, ∞) | x) − (1 − φ(x, β)) FC([t, ∞) | x).

Moreover, let qi(β, H0, H1) = q(β, H0, H1)(Yi, δi, Xi), where, for t ∈ R, d ∈ {0, 1}, x ∈ X,

q(β, H0, H1)(t, d, x) = d { log φ(x, β) + log F^β_{T,0}([t, ∞) | x) − log T1(β, H0, H1)(t, x) }
  + (1 − d) log { φ(x, β) F^β_{T,0}([t, ∞) | x) + 1 − φ(x, β) }.

Let

Qn(β, H0, H1) = (1/n) Σ_{i=1}^{n} qi(β, H0, H1).

Similarly, let us consider Qn(β, Ĥ0, Ĥ1) that is defined as Qn(β, H0, H1), but with H0, H1, FC and F^β_{T,0} replaced by Ĥ0, Ĥ1, F̂C and F̂^β_{T,0}, respectively. Then the estimator β̂ in equation (3.1) becomes

β̂ = arg max_{β∈B} Qn(β, Ĥ0, Ĥ1).

The first step is to check that

sup_{β∈B} |Qn(β, Ĥ0, Ĥ1) − Qn(β, H0, H1)| = oP(1).   (7.12)

This follows directly from Lemma 7.1. Next, given our assumptions, it is easy to check that for any t ∈ (−∞, τ], d ∈ {0, 1}, x ∈ X,

|q(β, H0, H1)(t, d, x) − q(β′, H0, H1)(t, d, x)| ≤ C‖β − β′‖^a,  ∀β, β′ ∈ B,

with a > 0 from Assumption (AC3) and some constant C depending only on c1 from Assumption (AC3) and the positive values inf_{x∈X} H1({τ} | x) and inf_{x∈X} H0((τ, ∞) | x). It follows that the class {(t, d, x) → q(β, H0, H1)(t, d, x) : β ∈ B} is Glivenko-Cantelli. Hence,

sup_{β∈B} |Qn(β, H0, H1) − Q(β, H0, H1)| = oP(1),

where Q = E(Qn). Finally, Proposition 3.1 guarantees that

β0 = arg max_{β∈B} Q(β, H0, H1).

Gathering the facts, we deduce that β̂ − β0 = oP(1).
Proof of Theorem 4.2. We show the asymptotic normality of our estimator by verifying the high-level conditions in Theorem 2 in Chen et al. (2003). First of all, for the consistency we refer to Section 4.1, whereas conditions (2.1) and (2.2) in Chen et al. (2003) are satisfied by construction and thanks to assumption (AN1), respectively. Concerning (2.3), first note that the expression inside the expected value in ∇η M(β, η0)[η − η0] is linear in ∇η Tj(β, η0)[η − η0] (j = 1, 2, 3, 4). Hence, we will focus attention on the latter Gâteaux derivatives. First,

∇η T1(β, η0)[η − η0](t, x) = (η1 − η01 + η2 − η02)(t, x) − (1 − φ(x, β)) ∇η T2(η0)[η − η0](t, x).

Using Duhamel's formula (see Gill and Johansen 1990), we can write

∇η T2(η0)[η − η0](t, x) = −T2(η0)(t, x) ∫_{−∞<u<t} [ 1 / ((η01 + η02)(u, x) − η01({u}, x)) ]
  × { (η1 − η01)(du, x) − η01(du, x) (η1 + η2 − η01 − η02)(u, x) / (η01 + η02)(u, x) }.

In a similar way, we find that

∇η T3(β, η0)[η − η0](t, x) = −T3(β, η0)(t, x) ∫_{−∞<u<t} [ 1 / (T1(β, η0)(u, x) − η02({u}, x)) ]
  × { (η2 − η02)(du, x) − η02(du, x) (T1(β, η) − T1(β, η0))(u, x) / T1(β, η0)(u, x) }.

Finally,

∇η T4(β, η0)[η − η0](t, x)
  = −∇β φ(x, β) { ∫_{(−∞,t)} [ ∇η T2(η0)[η − η0](s, x) η02(ds, x) + T2(η0)(s, x) (η2 − η02)(ds, x) ] / [ T1(β, η0)(s, x) (T1(β, η0)(s, x) − η02({s}, x)) ]
  − ∫_{(−∞,t)} T2(η0)(s, x) η02(ds, x) ∇η T1(β, η0)[η − η0](s, x) / [ T1(β, η0)(s, x)² (T1(β, η0)(s, x) − η02({s}, x)) ]
  − ∫_{(−∞,t)} T2(η0)(s, x) η02(ds, x) ( ∇η T1(β, η0)[η − η0](s, x) − (η2 − η02)({s}, x) ) / [ T1(β, η0)(s, x) (T1(β, η0)(s, x) − η02({s}, x))² ] }.

Note that all denominators in ∇η Tj(β, η0)[η − η0](t, x) are bounded away from zero, thanks to (4.10) and (4.11). By tedious but rather elementary arguments, it follows from these formulae that

‖∇η Tj(β, η0)[η − η0] − ∇η Tj(β0, η0)[η − η0]‖_H ≤ C‖β − β0‖ ‖η − η0‖_H,

for some constant C. Hence, it can be easily seen that ∇η Tj(β, η0)[η − η0] satisfies the second property in assumption (2.3) in Chen et al. (2003), and hence the same holds true for ∇η M(β, η0)[η − η0]. Similarly, by decomposing Tj(β, η) − Tj(β, η0) − ∇η Tj(β, η0)[η − η0] using Taylor-type arguments (in η), the first property in assumption (2.3) is easily seen to hold true.

Next, conditions (2.4) and (2.6) are satisfied thanks to Assumption (AN4) and because it follows from the above calculations of ∇η Tj(β, η0)[η − η0] (j = 1, 2, 3, 4) that

∇η M(β0, η0)[η − η0] = Σ_{k∈{0,1}} E[ ψ1k(Y, X) ∫_{−∞<u<Y} ψ2k(u, X) d((ηk − η0k)(u, X)) ]
  + Σ_{k,ℓ∈{0,1}} E[ ψ3k(Y, X) ∫_{−∞<u<Y} ψ4k(u, X) ψ5k (ηk − η0k)(u, X) dHℓ(u | X) ]   (7.13)

for certain measurable functions ψjk (j = 1, . . . , 5; k = 0, 1).

It remains to verify condition (2.5). Note that

|m(t, δ, x; β2, η2) − m(t, δ, x; β1, η1)| ≤ C1(t, δ, x)‖β2 − β1‖ + C2(t, δ, x)‖η2 − η1‖_H

for some functions Cj satisfying E[Cj²(Y, δ, X)] < ∞ (j = 1, 2), and hence (2.5) follows from assumption (AN5) and Theorem 3 in Chen et al. (2003). This finishes the proof.
Proof of Theorem 4.3. To prove this theorem we will check the conditions of Theorem B in
Chen et al. (2003), which gives high level conditions under which the naive bootstrap
is consistent. The only difference between their setting and our setting is that we are
proving bootstrap consistency in P-probability, whereas their result holds true a.s. [P]. As
a consequence, in their high level conditions we can replace all a.s. [P] statements by the
corresponding statements in P-probability.
First of all, it follows from assumption (AN1) that condition (2.2) in Chen et al. (2003)
holds with η0 replaced by any η in a neighborhood of η0 , and from the proof of Theorem 4.2
it follows that the same holds true for condition (2.3). Next, conditions (2.4B) and (2.6B)
in Chen et al. follow from the fact that we assume that assumption (AN4) continues to hold true if we replace Ĥk − Hk by Ĥ*k − Ĥk (k = 0, 1). It remains to verify condition (2.5'B) in Chen et al. This follows from Theorem 3 in Chen et al., whose conditions have been verified
already for our Theorem 4.2.
References
[1] Akritas, M.G. & Van Keilegom, I. (2001). Nonparametric estimation of the residual
distribution. Scand. J. Statist. 28, 549–568.
[2] Amico, M., Legrand, C. & Van Keilegom, I. (2017). The single-index/Cox mixture
cure model (submitted).
[3] Boag, J.W. (1949). Maximum likelihood estimates of the proportion of patients cured
by cancer therapy. J. Roy. Statist. Soc. - Series B 11, 15–53.
[4] Chen, X., Linton, O. & Van Keilegom, I. (2003). Estimation of semiparametric
models when the criterion function is not smooth. Econometrica 71, 1591–1608.
[5] Fang, H.B., Li, G. & Sun, J. (2005). Maximum likelihood estimation in a semiparametric logistic/proportional-hazards mixture model. Scand. J. Statist. 32, 59–75.
[6] Farewell, V.T. (1982). The use of mixture models for the analysis of survival data
with long-term survivors. Biometrics 38, 1041–1046.
[7] Gill, R.D. (1994). Lectures on survival analysis. Lectures on probability theory: Ecole
d’été de probabilités de Saint-Flour XXII. Lecture notes in mathematics 1581. Springer.
[8] Gill, R.D. & Johansen, S. (1990). A survey of product-integration with a view toward
application in survival analysis. Ann. Statist. 18, 1501–1555.
[9] Kuk, A.Y.C. & Chen, C.-H. (1992). A mixture model combining logistic regression
with proportional hazards regression. Biometrika 79, 531–541.
[10] Li, Q., Lin, J. & Racine, J.S. (2013). Optimal bandwidth selection for nonparametric conditional distribution and quantile functions. J. Bus. Econ. Statist. 31, 57–65.
[11] López-Cheda, A., Cao, R., Jácome M.A. & Van Keilegom, I. (2017). Nonparametric incidence estimation and bootstrap bandwidth selection in mixture cure models.
Comput. Statist. Data Anal. 105, 144–165.
[12] Lopez, O. (2011). Nonparametric estimation of the multivariate distribution function
in a censored regression model with applications. Commun. Stat. - Theory Meth. 40,
2639–2660.
[13] Lu, W. (2008). Maximum likelihood estimation in the proportional hazards cure model.
Ann. Inst. Stat. Math. 60, 545–574.
[14] Maller, R.A. & Zhou, S. (1996). Survival Analysis with Long Term Survivors.
Wiley, New York.
[15] Meeker, W.Q. (1987). Limited failure population life tests: Application to integrated
circuit reliability. Technometrics 29, 51–65.
[16] Othus, M., Li, Y. & Tiwari, R.C. (2009). A class of semiparametric mixture cure
survival models with dependent censoring. J. Amer. Statist. Assoc. 104, 1241–1250.
[17] Peng, Y. & Taylor, J.M.G. (2014). Cure models. In: Klein, J., van Houwelingen, H.,
Ibrahim, J. G., and Scheike, T. H., editors, Handbook of Survival Analysis, Handbooks
of Modern Statistical Methods series, chapter 6, pages 113-134. Chapman & Hall, Boca
Raton, FL, USA.
[18] Schmidt, P. & Witte, A.D. (1989). Predicting criminal recidivism using split population survival time models. J. Econometrics 40, 141–159.
[19] Sy, J.P. & Taylor, J.M.G. (2000). Estimation in a Cox proportional hazards cure
model. Biometrics 56, 227–236.
[20] Taylor, J.M.G. (1995). Semi-parametric estimation in failure time mixture models.
Biometrics 51, 899–907.
[21] van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press.
[22] Wang, Y., Klijn, J.G.M., Sieuwerts, A.M., Look, M.P., Yang, F., Talantov, D., Timmermans, M., Meijer-van Gelder, M.E.M., Yu, J., Jatkoe, T.,
Berns, E.M.J.J., Atkins, D. & Foekens, J.A. (2005). Gene-expression profiles to
predict distant metastasis of lymph-node-negative primary breast cancer. The Lancet
365, 671–679.
[23] Xu, J. & Peng, Y. (2014). Nonparametric cure rate estimation with covariates. Canad.
J. Statist. 42, 1–17.
Detection of malicious data in vehicular ad-hoc
networks for traffic signal control applications
Bartłomiej Płaczek and Marcin Bernas
arXiv:1703.10983v1 [cs.NI] 31 Mar 2017
University of Silesia, Institute of Computer Science,
Będzińska 39, 41-200 Sosnowiec, Poland
{placzek.bartlomiej,marcin.bernas}@gmail.com
Abstract. Effective applications of vehicular ad hoc networks in traffic
signal control require new methods for detection of malicious data. Injection of malicious data can result in significantly decreased performance of
such applications, increased vehicle delays, fuel consumption, congestion,
or even safety threats. This paper introduces a method, which combines
a model of expected driver behaviour with position verification in order
to detect the malicious data injected by vehicle nodes that perform Sybil
attacks. Effectiveness of this approach was demonstrated in simulation
experiments for a decentralized self-organizing system that controls the
traffic signals at multiple intersections in an urban road network. Experimental results show that the proposed method is useful for mitigating
the negative impact of malicious data on the performance of traffic signal
control. 1
Keywords: vehicular networks, malicious data, Sybil attack, traffic signal control
1 Introduction
Vehicular ad-hoc networks (VANETs) facilitate wireless data transfer between
vehicles and infrastructure. The vehicles in VANET can provide detailed and
useful data including their positions, velocities, and accelerations. This technology opens new perspectives in traffic signal control and creates an opportunity
to overcome main limitations of the existing roadside sensors, i.e., low coverage,
local measurements, high installation and maintenance costs. The availability
of the detailed data from vehicles results in a higher performance of the traffic
signal control [1,2,3].
The VANET-based traffic signal systems have gained considerable interest in
recent years. In this field the various solutions have been proposed that extend
existing adaptive signal systems for isolated intersections [4,5]. For such systems
1
Preprint of: Płaczek, B., Bernas, M.: Detection of malicious data in vehicular ad-hoc
networks for traffic signal control applications. in Gaj, P., Kwiecien, A., Stera, P.
(eds.) Computer Networks. CCIS, vol. 608, pp. 72-82, 2016. The final publication is
available at link.springer.com
the data collected in VANET are used to estimate queue lengths and vehicle delays. On this basis optimal cycle length and split of signal phases are calculated.
Similar adaptive approach was also used to control traffic signals at multiple intersections in a road network [6,7]. Particularly advantageous for VANET-based
systems is the self-organizing signal control scheme, which enables a decentralized optimization, global coordination of the traffic streams in a road network,
and improved performance [8].
Effective VANET applications in traffic signal control require new methods
for real-time detection of attacks that are based on malicious data. Injection
of malicious data can result in significantly decreased performance of the traffic
signal control, increased vehicle delays, fuel consumption, congestion, or even
safety threats.
This paper introduces a method for the above mentioned applications, which
can be used to detect malicious data. The considered malicious data are injected
by vehicle nodes that perform Sybil attacks, i.e., create a large number of false
vehicle nodes in order to influence the operation of traffic signals. The proposed
method detects the malicious data by combining a model of expected driver
behaviour with a position verification approach.
The paper is organized as follows. Related works are discussed in Section 2.
Section 3 introduces the proposed method. Results of simulation experiments
are presented in Section 4. Finally, conclusions are given in Section 5.
2 Related works
The problem of malicious nodes detection in VANETs has received particular
attention and various methods have been proposed so far [9]. The existing solutions can be categorized into three main classes: encryption and authentication
methods, methods based on position verification, and methods based on VANET
modelling.
In the encryption and authentication methods, malicious nodes detection is
implemented by using authentication mechanisms. One of the approaches is to
authenticate vehicles via public key cryptography [10]. The methods that use
public key infrastructure were discussed in [11]. Main disadvantages of such
methods are difficulties in accessing the network infrastructure and the long computational time of encryption and digital signature processing. Public key encryption and message authentication systems consume time and memory. Thus,
bandwidth and resource consumption is increased in the public key systems.
In [12] an authentication scheme was proposed, which assumes that vehicles
collect certified time stamps from roadside units as they are travelling. The
malicious nodes detection is based on verification of the collected series of time
stamps. Another similar method [13] assumes that vehicles receive temporary
certificates from roadside units and malicious nodes detection is performed by
checking spatial and temporal correlation between vehicles and roadside units.
These methods require a dense deployment of the roadside units.
Position verification methods are based on the fact that position reported
by a vehicle can be verified by other vehicles or by roadside units [14]. The key
requirement in this category of the methods is accurate position information. A
popular approach is to detect inconsistencies between the strength of received
signal and the claimed vehicle position by using a propagation model. According to the method introduced in [15] signal strength measurements are collected
when nodes send beacon messages. The collected measurements are used to estimate position of the nodes according to a given propagation model. A node is
considered to be malicious if its claimed position is too far from the estimated
one. In [16] methods were proposed for determining a transmitting node location by using signal properties and trusted peers collaboration for identification
and authentication purposes. That method utilizes signal strength and direction
measurements thus it requires application of directional antennas.
Xiao et al. [17] and Yu et al. [18] proposed a distributed method for detection and localization of malicious nodes in VANET by using verifier nodes that
confirm claimed position of each vehicle. In this approach, statistical analysis
of received signal strength is performed by neighbouring vehicles over a period
of time in order to calculate the position of a claimer vehicle. Each vehicle has
the role of claimer, witness, or verifier on different occasions and for different
purposes. The claimer vehicle periodically broadcasts its location and identity
information, and then, verifier vehicle confirms the claimer position by using a set
of witness vehicles. Traffic pattern analysis and support of roadside units is used
for selection of the witness vehicles. Yan et al. [19] proposed an approach that
uses on-board radar to detect neighbouring vehicles and verify their positions.
The modelling-based methods utilize models that describe expected behaviour
of vehicle nodes in VANET. These methods detect malicious nodes by comparing
the model with information collected from the vehicles. Golle et al. [20] proposed
a general model-based approach to evaluating the validity of data collected form
vehicle nodes. According to this approach, different explanations for the received
data are searched by taking into account the possible presence of malicious nodes.
Explanations that are consistent with a model of the VANET get scores. The
node accepts data that are consistent with the most scored explanation. On this
basis, the nodes can detect malicious data and identify the vehicles that are the
sources of such data. Another method in this category relies on comparing the
behaviour of a vehicle with a model of average driving behaviour, which is built
on the fly by using data collected from other vehicles [21].
In [22] a malicious data detection scheme was proposed for post crash notification applications that broadcast warnings to approaching traffic. A vehicle
node observes driver’s behaviour for some time after the warning is received and
compares it with some expected behaviour. The vehicle movement in the absence of any traffic accident is assumed to follow some free-flow mobility model,
and its movement in case of accident is assumed to follow some crash-modulated
mobility model. On this basis the node can decide if the received warning is true
or false.
A framework based on subjective logic was introduced in [23] for malicious
data detection in vehicle-to-infrastructure communication. According to that approach, all data collected by a vehicle node can be mapped to a world-model and
can then be annotated with opinions by different misbehaviour detection mechanisms. The opinions are used not only to express belief or disbelief in a stated
fact or data source, but also to model uncertainty. Authors have shown that
application of the subjective logic operators, such as consensus or transitivity,
allows different misbehaviour detection mechanisms to be effectively combined.
According to the authors’ knowledge, the problem of malicious data detection
for traffic signal control applications has not been studied so far in VANETrelated literature. In this paper a malicious data detection method is introduced
for VANET-based traffic signal control systems. The proposed method integrates
a model of expected driver behaviour with position verification in order to detect
the malicious data that can degrade the performance of road traffic control at
signalized intersections.
3 Proposed method
This section introduces an approach, which was intended to detect malicious
data in VANET applications for road traffic control at signalized intersections.
The considered VANET is composed of vehicle nodes and control nodes that
manage traffic signals at intersections. Vehicles are equipped with sensors that
collect speed and position data. The collected information is periodically transmitted from vehicles to control nodes. The control nodes use this information
for optimizing traffic signals to decrease delay of vehicles and increase capacity
of a road network.
In order to detect and filter out malicious data, the control node assigns
a trust level to each reported vehicle. The trust levels are updated (decreased
or increased) after each data delivery by using the rules discussed later in this
section. When making decisions related to changes of traffic signals, the control
node takes into account only those data that were collected by vehicles with
positive trust level. The data delivered by vehicles with trust level below or
equal to zero are recognized as malicious and ignored.
At each time step vehicle reports ID of its current traffic lane, its position
along the lane, and velocity. If vehicles i and j are moving in the same lane
during some time period and at the beginning of this period vehicle i is in front
of vehicle j then the same order of vehicles has to be observed for the entire
period. Thus, the following rule is used to detect the unrealistic behaviour of
vehicles in a single lane:
x_i(t) − x_j(t) > ε_x ∧ x_i(t−δ) − x_j(t−δ) < −ε_x ∧ l_i(t0) = l_j(t0) ∀t0 : t−δ ≤ t0 ≤ t,   (1)

where: l_i(t), x_i(t), v_i(t) denote respectively the lane ID, position, and velocity of vehicle i at time step t, δ is the length of the time period, and ε_x is the maximum localization error. It is assumed that the frequency of data reports enables recognition
positive trust level, the trust level of both vehicles (i and j) is decreased by value
α because the collected data do not allow us to recognize which one of the two
reported vehicles is malicious. If one of the two vehicles has non-positive trust
level then the trust level is decreased only for this vehicle.
According to the second rule (the so-called reaction to signal rule), current traffic signals are taken into account in order to recognize the malicious data. If the information received from vehicle i indicates that this vehicle enters an intersection when the red signal is displayed, or stops at a green signal, then the trust level of vehicle i is decreased by value α. The following condition is used to detect the vehicles passing at a red signal:

h_n − x_j(t−δ) > ε_x ∧ h_n − x_j(t) < −ε_x ∧ s_n(t0) = red ∀t0 : t−δ ≤ t0 ≤ t,   (2)

where: h_n is the position of the stop line for signal n, s_n(t) is the colour of signal n at time t, and the remaining symbols are identical to those defined for rule (1). The vehicles stopped at a green signal are recognized according to the condition:

|h_n − x_j(t0)| < ε_x ∀t0 : t−δ ≤ t0 ≤ t ∧ s_n(t0) = green ∀t0 : t−δ ≤ t0 ≤ t,   (3)

where |·| denotes absolute value and the remaining symbols were defined above. In the opposite situations, when the vehicle enters the intersection during a green signal or stops at a red signal, its trust level is increased by α.
Theoretical models of vehicular traffic assume that vehicles move with a desired free flow velocity if they are not affected by other vehicles or traffic signals [24]. Based on this assumption, the expected velocity of vehicle i can be estimated as follows:

v̂_i(t) = min{ v_f, (h_i(t) − h_min) / τ },   (4)

where: v_f is the free flow velocity, h_i(t) is the headway distance, i.e., the distance between vehicle i and the vehicle in front in the same lane, or the distance between vehicle i and the nearest red signal, h_min is the minimum required headway distance for a stopped vehicle, and τ denotes the time which is necessary to stop the vehicle safely and leave adequate space from the preceding car or traffic signal. Time τ can be determined according to the two seconds rule, which is suggested by road safety authorities [25].
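A direct transcription of Eq. (4), with τ = 2 s following the two-seconds rule:

```python
def expected_velocity(h_i, v_f, h_min, tau=2.0):
    """Eq. (4): free-flow velocity capped by the safe-stopping constraint.
    h_i: headway distance of vehicle i; tau = 2.0 s (two-seconds rule)."""
    return min(v_f, (h_i - h_min) / tau)
```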
When the velocity reported by a vehicle differs significantly from the expected velocity, the vehicle is suspected to be malicious. Thus, the trust level of the vehicle is decreased. In the opposite situation, when the reported velocity is close to the expected value, the trust level is increased. According to the above assumptions, the trust level is updated by adding the value u_i(t) · β, where u_i(t) is calculated using the following formula:

u_i(t) = 1,                            if |v̂_i(t) − v_i(t)| < ε_v,
u_i(t) = −|v̂_i(t) − v_i(t)| / v_f,    otherwise,   (5)

where ε_v is a threshold of the velocity difference, and the remaining symbols were defined earlier. Threshold ε_v was introduced in Eq. (5) to take into account the error of velocity measurement and the uncertainty of the expected velocity determination.
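Eq. (5) translates into a small update function; the trust level itself would then be incremented by β times this value:

```python
def velocity_trust_update(v_hat, v, v_f, eps_v):
    """Eq. (5): u_i(t) = 1 if the reported velocity is close to the expected
    one, otherwise a negative value proportional to the discrepancy."""
    if abs(v_hat - v) < eps_v:
        return 1.0
    return -abs(v_hat - v) / v_f
```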
The last rule for updating the trust level assumes that vehicles are equipped with sensors which enable detection and localization of neighbouring vehicles within distance r. In this case each vehicle reports the information about its own position and speed as well as the positions of other vehicles in the neighbourhood. The information about neighbouring vehicles, delivered by vehicle j at time t, is represented by the set D_j(t):

D_j(t) = {⟨x_k(t), l_k(t)⟩},   (6)

where x_k(t) and l_k(t) denote the position and lane of the k-th vehicle in the neighbourhood. The additional data can be utilized by the control node for verification of the collected vehicle positions. The position of vehicle i should correspond to one of the positions of neighbours (k) reported by vehicle j if the distance between vehicles i and j is not greater than r. Therefore, the trust level of vehicle i is increased by α if

dist(i, j) ≤ r ∧ ∃ ⟨x_k(t), l_k(t)⟩ ∈ D_j(t) : dist(i, k) ≤ ε_x,   (7)

where: dist(i, j) is the distance between vehicles i and j, r is the localization range, and the remaining symbols were defined earlier. The trust level is decreased by α if

dist(i, j) ≤ r ∧ dist(i, k) > ε_x ∀ ⟨x_k(t), l_k(t)⟩ ∈ D_j(t).   (8)

The symbols in (8) were defined earlier in this section. The above rule is applied only if the trust level of vehicle j is positive.
It should be noted that the parameter of decreasing trust level for the
velocity-related rule (β) is different than for the remaining rules (α) because
the velocity-related rule can be used after each data transfer, i.e., significantly
more frequently than the other rules.
4 Experiments
Simulation experiments were performed to evaluate effectiveness of the proposed
method for malicious data detection. The experimental results presented in this
section concern percentages of detected malicious data and the impact of these
data on vehicle delay at signalized intersections in a road network.
In this study the stochastic cellular automata model of a road network, proposed by Brockfeld et al. [26], was used for the traffic simulation. This model
represents a road network in a Manhattan-like city. Topology of the simulated
network is a square lattice of 8 unidirectional roads with 16 signalized intersections (Fig. 1). The distance between intersections is of 300 m. Each link in the
network is represented by a one-dimensional cellular automaton. An occupied
cell on the cellular automaton symbolizes a single vehicle. At each discrete time
step (1 second) the state of cellular automata is updated according to four steps
(acceleration, braking due to other vehicles or traffic light, velocity randomization, and movement). These steps are necessary to reproduce the basic features
of real traffic flow. Step 1 represents driver tendency to drive as fast as possible,
step 2 is necessary to avoid collisions and step 3 introduces random perturbations
necessary to take into account changes of vehicle velocity in regions with high
density. Finally, in step 4 the vehicles are moved according to the new velocity
calculated in steps 1-3. Steps 1-4 are applied in parallel for all vehicles. Detailed
definitions of these steps can be found in [26]. Maximum velocity was set to 2
cells per second (54 km/h).
Deceleration probability p for the Brockfeld model is 0.15. The saturation flow
at intersections is 1700 vehicles per hour of green time. This model was applied to
calculate the stop delay of vehicles. The traffic signals were controlled by using
the self-organizing method based on priorities that correspond to "pressures"
induced by vehicles waiting at an intersection [27]. The traffic signal control was
simulated assuming the intergreen times of 5 s and the maximum period of 120
s. At each intersection there are two alternative control actions: the green signal
can be given to vehicles coming from south or to those that are coming from
west. The simulator was implemented in Matlab.
Intensity of the traffic flow is determined for the network model by parameter
q in vehicles per second. This parameter refers to all traffic streams entering
the road network. At each time step vehicles are randomly generated with a
probability equal to the intensity q in all traffic lanes of the network model.
Similarly, the false (malicious) vehicles are generated with intensity qF at random
locations. The false vehicles move with constant, randomly selected velocity.
During experiments nine algorithms of malicious data detection were taken
into account (Tab. 1). The algorithms use different combinations of the rules
proposed in Sect. 3 (4 rules used separately and 5 selected combinations that
achieved the most promising results in preliminary tests). Simulations were performed for four various intensities of true vehicles (q = 0.02, 0.06, 0.10, 0.14) and
false vehicles (qF = 0.02, 0.04, 0.06, 0.08). For each combination of the intensities
q and qF the simulation was executed in 20 runs of 10 minutes. Based on preliminary results, the parameters used for updating the trust levels were set as
follows: α = 1 and β = 0.2.
Fig. 1. Simulated road network
Table 1. Compared algorithms for malicious data detection

Algorithm   Rules used (cf. Sect. 3)
1           rule 1 (vehicles order)
2           rule 4 (neighbour detection)
3           rule 2 (reaction to signals)
4           rule 3 (expected velocity)
5–8         selected combinations of two rules
9           all four rules
Figure 2 shows percentages of correctly detected malicious data and correctly
recognized true data for two different intensities of false vehicles. The data were
categorized as true or malicious at each one-second interval.
Total delay of vehicles for the considered algorithms is compared in Fig. 3.
The results in Fig. 3 were averaged for all considered true and false vehicle
intensities. Average number of vehicles for one simulation run (10 minutes) was
384. The best results were obtained for algorithm 9, which utilizes all proposed
rules for detection of the malicious data. This algorithm allows the delay of
vehicles to be kept at the low level (close to the value obtained for simulation
without malicious data). The delay is increased only by 1% in comparison to
the delay observed when no malicious data are present. Algorithm 9 correctly
recognizes 90% of the malicious data and 96% of the true data on average. High
accuracy was also observed for Algorithm 2, which uses the approach of position
verification by neighbouring vehicles without any additional rules. The least
satisfactory results were obtained when using the vehicles order rule (Algorithm
1) or the reaction to signals rule (Algorithm 3). For these algorithms the delay
of vehicles is close to that observed when the detection of malicious data is not
used.
Fig. 2. Accuracy of malicious data detection for the compared algorithms: a) qF = 0.02
veh./s, b) qF = 0.08 veh./s
Fig. 3. Average delay of vehicles for the compared algorithms
Figure 4 shows mean vehicle delays for various intensities of the traffic flow
(q) and two different intensities of the false vehicles generation (qF ). Algorithm
0 in Fig. 4 corresponds to the situation when no malicious data detection is
implemented. It can be observed in these results that the effectiveness of a particular algorithm strongly depends on the considered intensities. For instance,
in case of q = 0.14 and qF = 0.02 Algorithm 8 causes a higher delay than those
obtained without malicious data detection, while for the remaining intensities
Algorithm 8 gives good results. However, for Algorithm 9 the delay is reduced
when comparing with those obtained without malicious data detection for all
considered intensity settings. This fact confirms that all the proposed rules are
useful as they contribute with different degree in various traffic conditions to
mitigating the negative impact of malicious data.
Fig. 4. Mean delay for different traffic intensities: a) qF = 0.02 veh./s, b) qF = 0.08
veh./s
5 Conclusion
Sybil attacks can degrade the performance of VANET-based traffic signal control.
The proposed approach enables effective detection of malicious data created in
VANETs when the Sybil attacks are launched. The introduced detection scheme
is based on rules that take into account unrealistic overtaking manoeuvres, expected driver behaviour (reaction to traffic signals and preferred velocity) as
well as verification of vehicle position by neighbouring nodes. Effectiveness of
this approach was demonstrated in simulation experiments for a decentralized
self-organizing system that controls the traffic signals at multiple intersections in
an urban road network. The experimental results show that combination of different detection mechanisms allows the malicious data in VANET to be correctly
recognized and is essential for mitigating their negative impact on the performance of traffic signal control. Further research is necessary to integrate the
method with more sophisticated models of driver behaviours, enable automatic
parameters calibration based on collected data, and test the proposed approach
in different (more realistic) scenarios with various traffic control algorithms.
References
1. Bajwa, E. J. S., Walia, E. L.: A Survey on Traffic Management Systems in VANET.
International Journal of Advanced Trends in Computer Applications, vol. 1, no. 4,
pp. 28–32 (2015)
2. Płaczek, B.: Efficient data collection for self-organising traffic signal systems based
on vehicular sensor networks. International Journal of Ad Hoc and Ubiquitous Computing, in press
3. Płaczek, B.: A self-organizing system for urban traffic control based on predictive
interval microscopic model. Engineering Applications of Artificial Intelligence, 34,
pp. 75–84 (2014)
4. Chang, H. J., Park, G. T.: A study on traffic signal control at signalized intersections
in vehicular ad hoc networks, Ad Hoc Networks 11, pp. 2115–2124 (2013)
5. Kwatirayo, S., Almhana, J., Liu, Z.: Adaptive Traffic Light Control using VANET:
A case study. In: Wireless Communications and Mobile Computing Conference
(IWCMC), IEEE, pp. 752–757 (2013)
6. Maslekar, N., Mouzna, J., Boussedjra, M., Labiod, H.: CATS: An adaptive traffic
signal system based on car-to-car communication, Journal of Network and Computer
Applications 36(5), pp. 1308–1315 (2013)
7. Priemer, C., Friedrich, B.: A decentralized adaptive traffic signal control using V2I
communication data. In: 12th International IEEE Conference on Intelligent Transportation Systems (ITSC '09), IEEE, pp. 1–6 (2009)
8. Zhang, L., Garoni, T. M., de Gier, J.: A comparative study of Macroscopic Fundamental Diagrams of arterial road networks governed by adaptive traffic signal
systems, Transportation Research Part B: Methodological 49, pp. 1–23 (2013)
9. Ali Mohammadi, M., Pouyan, A. A.: Defense mechanisms against Sybil attack in
vehicular ad hoc network. Security and Communication Networks, 8(6), pp. 917–936
(2015)
10. Bouassida, M.S., Guette, G., Shawky, M., Ducourthial, B.: Sybil nodes detection
based on received signal strength variations within VANET. International Journal
of Network Security 9(1), pp. 22–32 (2009)
11. Raya, M., Papadimitratos, P., Hubaux, J.P.: Securing vehicular communications.
IEEE Wireless Communications Magazine, Special Issue on Inter-Vehicular Communications 13(5) pp. 8–15 (2006)
12. Chang, S., Qi, Y., Zhu, H., Zhao, J., Shen, X.: Footprint: detecting Sybil attacks in
urban vehicular networks. IEEE Transactions on Parallel and Distributed Systems
23(6), pp. 1103–1114 (2012)
13. Park, S., Aslam, B., Turgut, D., Zou, C.C.: Defense against Sybil attack in the
initial deployment stage of vehicular ad hoc network based on roadside unit support.
Security and Communication Networks 6(4), pp. 523–538 (2013)
14. Guette, G., Ducourthial, B.: On the Sybil attack detection in VANET. In Mobile
Adhoc and Sensor Systems, MASS 2007. IEEE International Conference on, pp. 1–6
(2007)
15. Xiao, B., Yu, B., Gao, C.: Detection and localization of sybil nodes in VANETs.
In Proceedings of the 2006 workshop on Dependability issues in wireless ad hoc
networks and sensor networks, pp. 1–8, ACM (2006)
16. Suen, T., Yasinsac, A.: Ad hoc network security: Peer identification and authentication using signal properties. In Information Assurance Workshop, 2005. IAW’05.
Proceedings from the Sixth Annual IEEE SMC, pp. 432–433 (2005)
17. Xiao, B., Yu, B., Gao, C.: Detection and localization of Sybil nodes in VANETs.
Proceedings of the 2006 workshop on Dependability issues in wireless ad hoc networks and sensor networks, pp. 1–8 (2006)
18. Yu, B., Xu, C.Z., Xiao, B.: Detecting Sybil attacks in VANETs. Journal of Parallel
and Distributed Computing 73(6), pp. 746–756 (2013)
19. Yan, G., Olariu, S., Weigle, M.C.: Providing VANET security through active position detection. Computer Communications 31(12), pp. 2883–2897 (2008)
20. Golle, P., Greene, D., Staddon, J.: Detecting and correcting malicious data in
VANETs. In Proceedings of the 1st ACM international workshop on Vehicular ad
hoc networks, pp. 29–37, ACM (2004)
21. Raya, M., Papadimitratos, P., Aad, I., Jungels, D., Hubaux, J. P.: Eviction of misbehaving and faulty nodes in vehicular networks. Selected Areas in Communications,
IEEE Journal on, 25(8), pp. 1557–1568 (2007)
22. Ghosh, M., Varghese, A., Gupta, A., Kherani, A. A., Muthaiah, S. N.: Detecting
misbehaviors in VANET with integrated root-cause analysis. Ad Hoc Networks,
8(7), pp. 778–790 (2010)
23. Dietzel, S., van der Heijden, R., Decke, H., Kargl, F.: A flexible, subjective logic-based framework for misbehavior detection in V2V networks. In A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International
Symposium on, pp. 1–6, IEEE, (2014)
24. Płaczek, B.: A Traffic Model Based on Fuzzy Cellular Automata. Journal of
Cellular Automata, 8(3-4), pp. 261–282 (2013)
25. Thammakaroon, P., Tangamchit, P.: Adaptive brake warning system for automobiles. In ITS Telecommunications, 2008. ITST 2008. 8th International Conference
on, pp. 204–208, IEEE (2008)
26. Brockfeld, E., Barlovic, R., Schadschneider, A., Schreckenberg, M.: Optimizing
traffic lights in a cellular automaton model for city traffic. Physical Review E, 64(5),
056132, (2001)
27. Helbing, D., Lämmer, S. and Lebacque, J.P.: Self-organized control of irregular or
perturbed network traffic, in: Optimal Control and Dynamic Games, Springer US,
pp. 239–274 (2005)
On the Hardness of Learning Sparse Parities
Arnab Bhattacharyya∗
Ameet Gadekar∗
Suprovat Ghoshal∗
Rishi Saket†
arXiv:1511.08270v1 [cs.CC] 26 Nov 2015
November 30, 2015
Abstract
This work investigates the hardness of computing sparse solutions to systems of linear equations over F2. Consider the k-EvenSet problem: given a homogeneous system of linear equations over F2 on n variables, decide if there exists a nonzero solution of Hamming weight at most k (i.e. a k-sparse solution). While there is a simple O(n^{k/2})-time algorithm for it, establishing fixed parameter intractability for k-EvenSet has been a notorious open problem. Towards this goal, we show that unless k-Clique can be solved in n^{o(k)} time, k-EvenSet has no poly(n) · 2^{o(√k)} time algorithm for all k and no polynomial time algorithm when k = ω(log² n).
This work also shows that the non-homogeneous generalization of the problem – which we call k-VectorSum – is W[1]-hard on instances where the number of equations is O(k log n), improving on previous reductions which produced Ω(n) equations. We use the hardness of k-VectorSum as a starting point to prove the result for k-EvenSet, and additionally strengthen the former to show the hardness of approximately learning k-juntas. In particular, we prove that given a system of O(exp(O(k)) · log n) linear equations, it is W[1]-hard to decide if there is a k-sparse linear form satisfying all the equations or any function on at most k variables (a k-junta) satisfies at most a (1/2 + ε)-fraction of the equations, for any constant ε > 0. In the setting of computational learning, this shows hardness of approximate non-proper learning of k-parities. In a similar vein, we use the hardness of k-EvenSet to show that for any constant d, unless k-Clique can be solved in n^{o(k)} time, there is no poly(m, n) · 2^{o(√k)} time algorithm to decide whether a given set of m points in F2^n satisfies: (i) there exists a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the points, or (ii) any non-trivial degree-d polynomial P supported on at most k variables evaluates to zero on ≈ Pr_{F2^n}[P(z) = 0] fraction of the points, i.e., P is fooled by the set of points.
Lastly, we study the approximation in the sparsity of the solution. Let the Gap-k-VectorSum problem be: given an instance of k-VectorSum of size n, decide if there exists a k-sparse solution, or every solution is of sparsity at least k′ = (1 + δ0)k. Assuming ETH, we show that for some constants c0, δ0 > 0 there is no poly(n) time algorithm for Gap-k-VectorSum when k = ω((log log n)^{c0}).
∗ Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India. Emails: {arnabb,
ameet.gadekar, suprovat.ghoshal}@csa.iisc.ernet.in. AB supported in part by DST Ramanujan Fellowship.
† IBM Research, Bangalore, India. Email: rissaket@in.ibm.com.
1 Introduction
Given a system of linear equations over F2 , does there exist a sparse non-trivial solution? This question is
studied in different guises in several areas of mathematics and computer science. For instance, in coding
theory, if the system of linear equations is Mx = 0 where M is the parity check matrix of a binary code,
then the minimum (Hamming) weight of a nonzero solution is the distance of the code. This also captures
the problem of determining whether a binary matroid has a short cycle, as the latter reduces to deciding
whether there is a sparse nonzero x such that Mx = 0. In learning theory, the well known sparse parity
problem is: given a binary matrix M and a vector b decide whether there is a small weight nonzero vector
x satisfying Mx = b. The version where Mx is required to equal b in most coordinates, but not necessarily
all, is also well studied as the problem of learning noisy parities.
Let a vector x ∈ F2^n be called k-sparse if it is nonzero in at most k positions, i.e. it has Hamming weight at most k. In this work, we show that learning a k-sparse solution to a system of linear equations is fixed parameter intractable, even when (i) the number of equations is only logarithmic in the number of variables, (ii) the learning is allowed to be approximate, i.e. satisfy only 51% of the equations and, (iii) is allowed to output as hypothesis any function (junta) supported on at most k variables. We also prove variants of these results for the case when the system of equations is homogeneous, which correspond to hardness of the well known k-EvenSet problem. Note that it is always possible to recover a k-sparse solution in O(n^k) time simply by enumerating over all k-sparse vectors. Our results show that for many settings of k, no substantially faster algorithm is possible for k-EvenSet unless widely believed conjectures are false. Assuming similar conjectures, we also rule out fast algorithms for learning γk-sparse solutions to a linear system promising the existence of a k-sparse solution, for some γ > 1.
In the next few paragraphs we recall previous related work and place our results in their context. Let us
first formally define the basic objects of our study:
Definition 1.1. k-VectorSum: Given a matrix M ∈ F2^{m×n} and a vector b ∈ F2^m, and a positive integer k as parameter, decide if there exists a k-sparse vector x such that Mx = b.

Definition 1.2. k-EvenSet: Given a matrix M ∈ F2^{m×n}, and a positive integer k as parameter, decide if there exists a k-sparse vector x such that Mx = 0.
Remark: In the language of coding theory, k-VectorSum is also known as the MaximumLikelihoodDecoding problem and k-EvenSet as the MinimumDistance problem.
Clearly, k-VectorSum is as hard as k-EvenSet¹. The k-VectorSum problem was shown to be W[1]-hard² by Downey, Fellows, Vardy and Whittle [DFVW99], even in the special case of the vector b consisting of all 1's. More recently, Bhattacharyya, Indyk, Woodruff and Xie [BIWX11] showed that the time complexity of k-VectorSum is min(2^{Θ(m)}, n^{Θ(k)}), assuming 3-SAT has no 2^{o(n)} time algorithm.
in [DFVW99]. Proving W[1]-hardness for k-EvenSet was listed as an open problem in Downey and Fellows’
1999 monograph [DF99] and has been reiterated more recently in lists of open problems [FM12, FGMS12].
Note that if we ask for a vector x whose weight is exactly k instead of at most k, the problem is known to
be W[1]-hard [DFVW99]. Our work gives evidence ruling out efficient algorithms for k-EvenSet for a wide
range of settings of k.
In the non-parameterized setting, where k is part of the input, these problems are very well-studied.
Vardy showed that EvenSet (or MinimumDistance) is NP-hard [Var97]. The question of approximating
k, the minimum distance of the associated code, has also received attention. Dumer, Micciancio and Sudan
[DMS03] showed that if RP ≠ NP, then k is hard to approximate within some constant factor γ > 1.
Their reduction was derandomized by Cheng and Wan [CW08, CW09], and subsequently Austrin and Khot
[AK14] gave a simpler deterministic reduction for this problem. The results of [CW08, CW09] and [AK14]
were further strengthened by Micciancio [Mic14].
¹ The name k-EvenSet is from the following interpretation of the problem: given a set system F over a universe U and a parameter k, find a nonempty subset S ⊆ U of size at most k such that the intersection of S with every set in F has even size.
² Standard definitions in parameterized complexity appear in Section 2.
From a computational learning perspective, the k-VectorSum problem can be restated as: given an m-sized set of n-dimensional point-value pairs over F_2, decide if there exists a parity supported on at most k variables (i.e. a k-parity) that is consistent with all the pairs. This has been extensively studied as a promise problem when the points are uniformly generated. Note that in this case, if m = Ω(n), there is a unique solution w.h.p. and it can be found efficiently by Gaussian elimination. On the other hand, for m = O(k log n), the best known running time of O(n^{k/2}) is given in [KS06] (credited to Dan Spielman). Obtaining a polynomial time algorithm for m = poly(k log n) would imply attribute-efficient learning of k-parities and is a long-standing open problem in the area [Blu96]. The best known dependence between m and the running time for this problem is described in [BGM10, BGR15]. Our work proves the hardness of k-VectorSum when m = O(k log n), giving evidence that rules out polynomial time algorithms for learning k-parity when the input is generated adversarially.
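For contrast with the adversarial setting studied here, the noiseless uniform case is genuinely easy; the sketch below (our own) recovers a consistent solution by Gaussian elimination over F_2, which succeeds in polynomial time whenever the system has a solution.

```python
import numpy as np

def solve_gf2(M: np.ndarray, b: np.ndarray):
    """Solve Mx = b over F_2 by Gaussian elimination; one solution or None.

    In the noiseless regime with m = Omega(n) uniformly random points the
    solution is unique w.h.p., and this routine finds it in O(m * n^2) time.
    """
    m, n = M.shape
    A = np.concatenate([M % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.int8)
    pivots, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]        # swap pivot row into place
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]                   # eliminate column col
        pivots.append(col)
        row += 1
    if any(A[r, n] for r in range(row, m)):      # 0 = 1: inconsistent system
        return None
    x = np.zeros(n, dtype=np.int8)
    for r, col in enumerate(pivots):             # free variables default to 0
        x[col] = A[r, n]
    return x
```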
A natural question studied in this work is whether one can do better if the learning algorithm is allowed to be non-proper (i.e., output a hypothesis that is not a k-parity) and is allowed to not satisfy all the point-value pairs. To further motivate this problem, let us look at the case when k is not fixed. In the absence of noise, Gaussian elimination can efficiently recover a consistent parity. The harder case is the agnostic (noisy) setting which promises that there is a parity consistent with at least 1 − ε fraction of the point-value pairs. When the points are generated uniformly at random, one can learn the parity in time 2^{O(n/log n)} [FGKP09, BKW03].
On the other hand, when the points are adversarially drawn, there is a non-proper algorithm due to Kalai, Mansour and Verbin [KMV08] that runs in time 2^{O(n/log n)} and outputs a circuit C which is consistent with at least 1 − ε − 2^{−n^{0.99}} of the point-value pairs. Håstad's inapproximability for Max-3LIN [Hås01] implies that learning a noisy parity in the adversarial setting is NP-hard, even for 1/2 + ε accuracy, for any constant ε > 0. Gopalan, Khot and Saket [GKS10] showed that achieving an accuracy of 1 − 1/2^d + ε using degree-d polynomials as hypotheses is NP-hard. Subsequently, Khot [Kho09] proved an optimal bound of 1/2 + ε for learning by constant degree polynomials³. Our work studies the intractability of approximate non-proper learning of k-parity, and extends the hardness result for k-VectorSum to learning by juntas of k variables, and for k-EvenSet to learning using constant degree polynomials on k variables.
Another interesting question in the parameterized setting is related to a gap in the sparsity parameter k,
i.e. how tractable it is to learn a γk-sparse solution when the existence of a k-sparse solution is guaranteed,
for some constant γ > 1 (or γ < 1 in case of a maximization problem). Previously, Bonnet et al. [BEKP15]
and Khot and Shinkar [KS15] studied this problem for k-Clique, and both these works show conditional
hardness results. In our work we prove a “gap in k” hardness result for k-VectorSum similar to that
obtained in [BEKP15] for k-Clique.
In the rest of this section we formally describe our results for k-VectorSum and k-EvenSet, and give
a brief description of the techniques used to obtain them.
1.1 Our Results

Hardness of exact problems
We begin by giving a reduction from k-Clique showing the W[1]-hardness of k-VectorSum on instances
which have a small number of rows.
Theorem 1.3 (W[1]-hardness of k-VectorSum). The k-VectorSum problem is W[1]-hard on instances (M, b) where M ∈ F_2^{m×n} and b ∈ F_2^m such that m = O(k log n). Our reduction implies, in particular, that k-VectorSum does not admit an n^{o(√k)} time algorithm on such instances, unless k-Clique on r-vertex graphs has an r^{o(k)} time algorithm.
As far as we know, in previous proofs of the W[1]-hardness of k-VectorSum [DFVW99, CFK+15], the number of rows in the matrix output by the reduction was linear in n. Our proof is inspired by a recent proof of the W[1]-hardness of k-Sum [ALW14]. Also, in Appendix A, we give a simple O(n · 2^m) time algorithm for k-VectorSum, which suggests that m cannot be made sublogarithmic in n for hard instances.

³ As far as we know, this result is unpublished although it was communicated to the fourth author of this paper. We include with his permission a proof of Khot's result to illustrate some of the techniques which inspire part of this work.
Next, we give a hardness reduction from k-VectorSum to the k-EvenSet problem.
Theorem 1.4 (Hardness of k-EvenSet). There is an FPT reduction from an instance (M, b) of k-VectorSum, where M ∈ F_2^{m×n} and b ∈ F_2^m, to an instance M′ of O((k log n)²)-EvenSet, where M′ ∈ F_2^{m′×n′} such that both m′ and n′ are bounded by fixed polynomials in m and n.
Using Theorem 1.3, the above yields the following corollary.
Corollary 1.5. There does not exist a poly(n) time algorithm for k-EvenSet when k = ω(log² n), assuming that k-Clique does not have a polynomial time algorithm for any k = ω(1). More generally, under the same assumption, k-EvenSet does not admit a poly(n) · 2^{o(√k)} time algorithm for unrestricted k.
Proof. Suppose there is a T(n, k) algorithm for k-EvenSet. Chaining together the reductions in Theorem 1.3 and Theorem 1.4, we get a T(poly(n), k⁴ log² n) algorithm for k-Clique. Choosing k = ω(1) implies the first part of the corollary. For the second part, observe that if f(x) = 2^{o(√x)}, then f(k⁴ log² n) = n^{o(1)} for some k = ω_n(1).
To the best of our knowledge, Corollary 1.5 gives the first nontrivial hardness results for parameterized
k-EvenSet. Theorem 1.4 is obtained by adapting the hardness reduction for the inapproximability of
MinimumDistance by Austrin and Khot [AK14] to the parameterized setting.
Hardness of non-proper and approximate learning of sparse parities
The hardness for k-VectorSum proved in Theorem 1.3 can be restated in terms of W[1]-hardness of learning k-parity, i.e., linear forms depending on at most k variables⁴.
Theorem 1.6 (Theorem 1.3 restated). The following is W[1]-hard: given m = O(k log n) point-value pairs {(yi, ai)}_{i=1}^m ⊆ F_2^n × F_2, decide whether there exists a k-parity L which satisfies all the point-value pairs, i.e., L(yi) = ai for all i = 1, . . . , m.
Next, we strengthen the above theorem in two ways. We show that the W[1]-hardness holds for learning
a k-parity using a k-junta, and additionally for any desired accuracy exceeding 50%. Here, a k-junta is any
function depending on at most k variables.
Theorem 1.7. The following is W[1]-hard: for any constant δ > 0, given m = O(k · 2^{3k} · (log n)/δ³) point-value pairs {(zi, bi)}_{i=1}^m ⊆ F_2^n × F_2, decide whether:
YES Case: There exists a k-parity which satisfies all the point-value pairs.
NO Case: Any function f : F_2^n → F_2 depending on at most k variables satisfies at most 1/2 + δ fraction of the point-value pairs.
Theorem 1.7 also implies hardness for approximately learning k-juntas, in comparison to the previous W[2]-hardness of exactly learning k-juntas shown by Arvind, Köbler and Lindner [AKL09]. Note that the current best algorithm for learning k-juntas, even over the uniform distribution, takes n^{Ω(k)} time [Val12, MOS04]. We similarly strengthen Corollary 1.5 to rule out efficient algorithms for approximately learning a k-sparse solution to a homogeneous linear system using constant degree polynomials supported on at most k variables.
⁴ Note that Theorem 1.3 as stated shows hardness of learning homogeneous k-parities, i.e., homogeneous k-sparse linear forms (without the constant term). The result can easily be made to hold for general k-parities by adding a point-value pair which is (0, 0).
Theorem 1.8. Assume that k-Clique does not have a poly(n) time algorithm for any k = ω(1). Then for any constant δ > 0 and positive integer d, there is no poly(m, n) · 2^{o(√k)} time algorithm to decide whether a given set of m points {zi}_{i=1}^m ⊆ F_2^n satisfies:
YES Case: There exists a nonzero k-parity L such that L(zi) = 0 for all i = 1, . . . , m.
NO Case: Any non-trivial degree d polynomial P : F_2^n → F_2 depending on at most k variables satisfies P(zi) = 0 for at most Pr_{z∈F_2^n}[P(z) = 0] + δ fraction of the points.
The proof of the above theorem relies on an application of Viola’s [Vio09] pseudorandom generator for
constant degree polynomials, and is inspired by Khot’s [Kho09] NP-hardness of learning linear forms using
constant degree polynomials.
Gap in sparsity parameter
Using techniques similar to those employed in [BEKP15], we prove the following gap in k hardness for
k-VectorSum, i.e., hardness of Gap-k-VectorSum.
Theorem 1.9. Assuming the Exponential Time Hypothesis, there are universal constants δ0 > 0 and c0 such that there is no poly(N) time algorithm to determine whether an instance of Gap-k-VectorSum of size N admits a solution of sparsity k or all solutions are of sparsity at least (1 + δ0)k, for any k = ω((log log N)^{c0}). More generally, under the same assumption, this problem does not admit an N^{O(k/ω((log log N)^{c0}))} time algorithm for unrestricted k.
1.2 Our Techniques
Our first result, Theorem 1.3, is based on a gadget reduction from an n-vertex instance of k-Clique, creating columns of M corresponding to the vertices and edges of the graph along with a target vector b. Unlike previous reductions which used dimension linear in the number of vertices, we reuse the same set of coordinates for the vertices and edges by assigning unique logarithmic-length patterns to each vertex. In total we create k columns for each vertex and (k choose 2) columns for each edge, using O(k² log n) coordinates. The target vector b ensures that a solution always has at least k + (k choose 2) columns, which suffices in the YES case, while the NO case requires strictly more columns to sum to b.
For proving Theorem 1.4, we homogenize the instance of Theorem 1.3 by including b as a column of M. To force the solution to always choose b we use the approach of Austrin and Khot [AK14], who face the same issue when reducing to the MinimumDistance problem. Since we need to retain the bound on the sparsity of the solution, we cannot use their techniques directly. Instead, we construct a small-length sketch of a purported sparse solution and use it as an input to a tensor-code based amplification gadget used in [AK14]. Our construction however inflates the parameter k to O((k log n)²).
The hardness of approximately learning k-parities with k-juntas given in Theorem 1.7 is obtained by transforming the instance of Theorem 1.3 using an ε-balanced code, along with an analysis of the Fourier spectrum of any k-junta on the resulting distribution. In contrast, Theorem 1.8 is obtained by using the instance of Theorem 1.4 (appropriately transformed using an ε-balanced code) as an input to Viola's construction [Vio09] of pseudorandom generators for degree d polynomials. Note that the exp(k) blowup in the reduction for Theorem 1.7 rules out its use for proving Theorem 1.8, due to the presence of a log² n factor in the sparsity parameter of the instance obtained in Theorem 1.4. On the other hand, the non-homogeneity of the k-VectorSum problem hinders the use of Viola's pseudorandom generator for proving a version of Theorem 1.7 (for degree d polynomials on k variables instead of k-juntas) which avoids the exp(k) blowup.
For Theorem 1.9, we use the improved sparsification lemma of Calabro, Impagliazzo and Paturi [CIP06] followed by Dinur's almost-linear PCP construction [Din07] to reduce an n-variable 3-SAT instance to 2^{εn} Gap-3-SAT instances, each with almost linearly (in n) many clauses and variables. For each instance a corresponding k-VectorSum instance is created by partitioning the clauses into k blocks and adding F_2-valued variables for partial assignments to each block, along with non-triviality and consistency equations. In the YES case, setting one variable from each block to 1 (i.e. a k-sparse solution) suffices, whereas in the NO case at least γk variables need to be set to 1, for some constant γ > 1. The parameters are such that an efficient algorithm to decide the YES and NO cases would violate the Exponential Time Hypothesis for 3-SAT.
Organization of the paper. Theorem 1.3 is proved in Section 3, and using it as the starting point, Theorem 1.4 is proved in Section 4. The reduction proving Theorem 1.7 is given in Section 5, and starts with a restatement of Theorem 1.3. The proofs of Theorems 1.8 and 1.9 are provided in Section 6 and Section 7 respectively.
We also include in Appendix B a proof of Khot’s [Kho09] result on NP-hardness of approximately learning
linear forms using constant degree polynomials to illustrate the use of Viola’s pseudorandom generator [Vio09]
which is also used in the proof of Theorem 1.8.
In the next section we give some definitions and results which shall prove useful for the subsequent
proofs.
2 Preliminaries

2.1 Parameterized Complexity
A parameterization of a problem is a poly(n)-time computable function that assigns an integer k > 0 to each
problem instance x of length n (bits). The pair (x, k) is an instance of the corresponding parameterized
problem. The parameterized problem is said to be fixed parameter tractable (FPT) if it admits an algorithm
that runs in time f (k) · poly(n) where k is the parameter of the input, n is the size of the input, and f is
an arbitrary computable function. The W-hierarchy, introduced by Downey and Fellows [DF95, DF99], is a
sequence of parameterized complexity classes with FPT = W[0] ⊆ W[1] ⊆ W[2] ⊆ · · · . It is widely believed that FPT ≠ W[1].
These hierarchical classes admit notions of completeness and hardness under FPT reductions, i.e., f(k) · poly(n)-time transformations from an instance (x, k) of a problem A, where |x| = n, to an instance (x′, k′) of a problem B, where |x′| = poly(n) and k′ is bounded by f(k). For example, consider the k-Clique problem: given a graph G on n vertices and an integer parameter k, decide if G has a clique of size k. The k-Clique problem is W[1]-complete, and serves as a canonical hard problem for many W[1]-hardness reductions, including those in this work.
For a precise definition of the W-hierarchy, and a general background on parameterized algorithms and
complexity, see [DF99, FG06, CFK+ 15].
2.2 Coding Theoretic Tools
Our hardness reductions use some basic results from coding theory. For our purposes, we restrict our attention to linear codes over F_2, i.e., those which form linear subspaces. A code C ⊆ F_2^n is said to be an [n, k, d] binary linear code if C forms a k-dimensional subspace of F_2^n such that all nonzero elements (codewords) of C have Hamming weight at least d. We use the weight wt(x) of a codeword x to denote its Hamming weight, the distance of a code to denote the minimum weight of any nonzero codeword, and the rate to denote the fraction k/n. A generator matrix G ∈ F_2^{n×k} for C is such that C = {Gx | x ∈ F_2^k}. Also associated with C is a parity check matrix G^⊥ ∈ F_2^{(n−k)×n} satisfying G^⊥y = 0 iff y ∈ C. We shall use the generator and parity check matrices of well-studied code constructions whose properties we state below.
Theorem 2.1 (BCH Codes, Theorem 3 [BR60]). The dimension of the BCH code of block length n = 2^m − 1 and distance d is at least n − ⌈(d − 1)/2⌉ · m.
While the above theorem restricts the block length to be of the form 2^m − 1, for general n we can use as the parity check matrix any n columns of the parity check matrix of a BCH code of the minimum length 2^m − 1 greater than or equal to n. In particular, we have the following corollary tailored for our purpose.
Corollary 2.2. For all lengths n and positive integers k < n, there exists a parity check matrix R ∈ F_2^{20k log n × n} such that Rx ≠ 0 whenever 0 < wt(x) < 18k. Moreover, this matrix can be computed in time poly(n, k).
The following explicit family of ε-balanced binary linear codes of constant rate was given by Alon et al. [ABN+92].

Theorem 2.3 (ε-balanced codes [ABN+92]). There exists an explicit family of codes C ⊆ F_2^n such that every codeword in C has normalized weight in the range [1/2 − ε, 1/2 + ε], and rate Ω(ε³), which can be constructed in time poly(n, 1/ε), where ε > 0 is an arbitrarily small constant.
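As a toy illustration of the guarantee in Theorem 2.3, the brute-force checker below (ours; feasible only for small code dimension) verifies ε-balancedness of a given generator matrix directly from the definition.

```python
import itertools
import numpy as np

def is_eps_balanced(G: np.ndarray, eps: float) -> bool:
    """Check that the code generated by G (n-by-k over F_2) is eps-balanced,
    i.e., every nonzero codeword has normalized weight in [1/2-eps, 1/2+eps].

    Enumerates all 2^k messages, so this is a verifier for small k only.
    """
    n, kdim = G.shape
    for msg in itertools.product((0, 1), repeat=kdim):
        if not any(msg):
            continue  # only nonzero codewords are constrained
        w = ((G @ np.array(msg)) % 2).mean()  # normalized Hamming weight
        if not (0.5 - eps <= w <= 0.5 + eps):
            return False
    return True
```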
Given a linear code C ⊆ F_2^n, the product code C^{⊗2} consists of n × n matrices where each row and each column belongs to C; equivalently, C^{⊗2} = {GXG^T : X ∈ F_2^{k×k}} where G ∈ F_2^{n×k} is the generator matrix for the code C. If the distance d(C) = d, then it is easy to verify that d(C^{⊗2}) ≥ d². However, we shall use the following lemma from [AK14] for a tighter lower bound on the Hamming weight when the codeword satisfies certain properties.

Lemma 2.4 (Density of Product Codes [AK14]). Let C ⊆ F_2^n be a binary linear code of distance d = d(C), and let Y ∈ C^{⊗2} be a nonzero codeword with the additional properties that diag(Y) = 0 and Y = Y^T. Then the Hamming weight of Y is at least (3/2)d².
2.3 Some Useful Tools
The proof of Theorem 1.8 in Section 6 and Khot’s [Kho09] result given in Appendix B use Viola’s [Vio09]
construction of pseudorandom generators which we describe below.
Definition 2.5. A distribution D over F_2^n is said to ε-fool degree d polynomials in n variables over F_2 if for any degree d polynomial P:

    | E_{z←D}[e(P(z))] − E_{z←U}[e(P(z))] | ≤ ε,

where U is the uniform distribution over F_2^n and e(x) := (−1)^x for x ∈ {0, 1}.
Theorem 2.6. Let Y1, . . . , Yd be d independent distributions on F_2^n that each ε-fool linear polynomials. Then the distribution W = Y1 + · · · + Yd ε_d-fools degree-d polynomials, where ε_d := 16 · ε^{1/2^{d−1}}.
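The following sketch (ours; a sampling-based empirical check, not a proof) mirrors the statement of Theorem 2.6: if the uniform distribution on the rows of a matrix ε-fools linear forms, then sums of d independent rows should fool degree-d polynomials.

```python
import numpy as np

def estimate_fooling_error(rows: np.ndarray, poly, d: int,
                           samples: int = 100000, seed: int = 0) -> float:
    """Estimate |E[e(P(z))] under d-wise row sums - E[e(P(z))] under uniform z|.

    rows: 0/1 matrix whose uniform row distribution eps-fools linear forms;
    poly: any F_2-valued function of a 0/1 vector (a degree-d polynomial).
    Theorem 2.6 predicts the result is at most 16 * eps**(1 / 2**(d - 1)).
    """
    rng = np.random.default_rng(seed)
    m, n = rows.shape
    idx = rng.integers(0, m, size=(samples, d))
    z_sums = rows[idx].sum(axis=1) % 2          # z = r_1 + ... + r_d over F_2
    z_unif = rng.integers(0, 2, size=(samples, n))
    e = lambda zs: np.mean([(-1) ** int(poly(z)) for z in zs])
    return abs(e(z_sums) - e(z_unif))
```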
Our proof of Theorem 1.9 in Section 7 uses the following improved sparsification lemma of Calabro,
Impagliazzo, and Paturi [CIP06].
Lemma 2.7. There is a deterministic algorithm which, for any ε > 0, transforms an n-variable 3-CNF formula F to F1, . . . , Fs ∈ 3-CNF, each on at most n variables, such that:
1. s ≤ 2^{εn}.
2. F is satisfiable if and only if at least one of F1, . . . , Fs is satisfiable.
3. The number of clauses in each of F1, . . . , Fs is at most O((1/ε)⁹ · n).
4. The algorithm runs in time 2^{εn} · poly(n), where the degree of the polynomial may depend on ε.
We shall also use in Section 7 the following reduction to Gap-3-SAT implied by the construction of almost
linear sized PCPs given by Dinur [Din07].
Theorem 2.8. There exist universal constants γ0 > 0 and c0, and a polynomial time reduction from a 3-CNF formula F on m clauses to a 3-CNF formula F′ on at most m(log m)^{c0} clauses such that: (i) (YES Case) if F is satisfiable then F′ is satisfiable, and (ii) (NO Case) if F is unsatisfiable then at most a (1 − γ0) fraction of the clauses of F′ is satisfied by any assignment.
3 W[1]-hardness of k-VectorSum on O(k log n) Equations
The following theorem implies Theorem 1.3.
Theorem 3.1. There is an FPT reduction from an instance G(V, E) of k-Clique, with n vertices and m edges, to an instance (M, b) of k′-VectorSum, where M ∈ F_2^{d×n′} such that k′ = O(k²), d = O(k² log n) and n′ is polynomial in n and k.
The rest of this section is devoted to proving the above theorem. We start by observing that a k-clique in a graph G(V, E) can be certified by a pair of mappings f : [k] → V and g : ([k] choose 2) → E such that g(i, j) = (f(i), f(j)) ∈ E for all i, j ∈ [k], i < j. Here, we use ([k] choose 2) to represent {(i, j) | 1 ≤ i < j ≤ k}. The underlying idea behind the reduction is to construct M and b such that f and g exist iff there is a sparse set of columns of M that sums up to b.
Construction of M and b. Let G(V, E) be a k-Clique instance on n = |V| vertices and m = |E| edges, where V = {v1, v2, . . . , vn}. For each vertex vi ∈ V, assign a distinct N = ⌈log(n + 1)⌉ bit nonzero binary pattern denoted by qi ∈ F_2^N. We first construct a set of vectors – which shall be the columns of M – corresponding to the vertices and edges. The dimension over which the vectors are defined is partitioned into three sets of coordinates:
into three sets of coordinates:
Edge-Vertex Incidence Coordinates: These consist of k slots, where each slot consists of (k − 1) subslots,
and each subslot in turn consists of N coordinates. In any column of M, a subslot may either contain the
N -length pattern of a vertex, or it might be all zeros.
Edge Indicator Coordinates: These are a set of k2 coordinates corresponding to {(i, j) | 1 6 i < j 6 k},
indicating whether the vector represents an edge mapped from (i, j). Any column of M may have at most
one of these coordinates set to 1.
Vertex Indicator Coordinates: These are a set of k coordinates corresponding to indices i ∈ {1, . . . , k}, which
indicate whether the vector represents a vertex mapped from i. Any column of M may have at most one of
these coordinates set to 1.
Thus, each vector is a concatenation of k(k − 1)N edge-vertex incidence bits, followed by (k choose 2) edge indicator bits and k vertex indicator bits, so that d = k(k − 1)N + (k choose 2) + k = O(k² log n). For ease of notation, let S_l^j represent the N-sized subset of coordinates belonging to subslot l of slot j, where j ∈ [k] and l ∈ [k − 1]. We define qi(S_l^j) ∈ F_2^d to be the vector which contains the pattern of vertex vi in S_l^j, and is zero everywhere else. For 1 ≤ i < j ≤ k, let δ_{i,j} ∈ F_2^d be the vector which has a 1 at the edge indicator coordinate corresponding to (i, j), and is 0 everywhere else. Similarly, δ_i ∈ F_2^d is the indicator vector which has its ith vertex indicator coordinate set to 1, everything else being 0. Using these components we construct the vertex and edge vectors as follows.

Vertex Vectors: For each vertex vi ∈ V and j ∈ [k], we introduce a vector η(vi, j) ∈ F_2^d which indicates that vertex vi is mapped from index (slot) j, i.e., f(j) = vi. The vector is constructed as follows: populate each of the (k − 1) subslots of the jth slot with the pattern of vertex vi (which is qi), and set its jth vertex indicator coordinate to 1. Formally, η(vi, j) := Σ_{l=1}^{k−1} qi(S_l^j) + δ_j. For each vertex there are k vertex vectors, resulting in a total of nk vertex vectors.
Edge Vectors: For each edge e = (v_{i1}, v_{i2}) ∈ E where i1 < i2, and 1 ≤ j1 < j2 ≤ k, we introduce a vector that indicates that the pair of indices (slots) (j1, j2) is mapped to (v_{i1}, v_{i2}), i.e., g(j1, j2) = (v_{i1}, v_{i2}). We construct the vector η(e, j1, j2) ∈ F_2^d as follows: populate S_{j2−1}^{j1} with the pattern of vertex v_{i1}, and S_{j1}^{j2} with the pattern of vertex v_{i2}. Additionally, we set the edge indicator coordinate corresponding to (j1, j2) to 1. The vector is formally expressed as η(e, j1, j2) := q_{i1}(S_{j2−1}^{j1}) + q_{i2}(S_{j1}^{j2}) + δ_{j1,j2}. Intuitively, for the lower ordered vertex v_{i1}, η(e, j1, j2) cancels out the (j2 − 1)th subslot of slot j1, and for the higher ordered vertex v_{i2}, it cancels out the j1th subslot of its j2th slot. Note that we are treating (v_{i1}, v_{i2}) as an unordered pair since i1 < i2. Therefore, for each edge e ∈ E, and for each choice of 1 ≤ j1 < j2 ≤ k, we introduce one edge vector. Hence, there are a total of m · (k choose 2) edge vectors in the set.
The vertex and edge vectors constructed above constitute the columns of M. The target vector b ensures that (i) every solution must have at least k vertex vectors and (k choose 2) edge vectors, and (ii) the vectors must cancel each other out in the Edge-Vertex Incidence coordinates. Formally, b = Σ_{i∈[k]} δ_i + Σ_{1≤i<j≤k} δ_{i,j}. In other words, all the edge and vertex indicator coordinates of b are set to 1, and everything else to 0.
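The construction is mechanical enough to state in code. The sketch below (ours; vertices, slots and subslots are 0-indexed, unlike the 1-indexed description above) assembles the columns of M and the target vector b from a graph.

```python
import itertools
import math
import numpy as np

def clique_to_vector_sum(n: int, edges, k: int):
    """Build the (M, b) instance of Section 3 from a k-Clique instance (sketch).

    n: number of vertices (0-indexed); edges: list of pairs (i1, i2), i1 < i2.
    Columns of M are the vertex vectors eta(v_i, j) and edge vectors
    eta(e, j1, j2); b sets every edge and vertex indicator coordinate to 1.
    """
    N = math.ceil(math.log2(n + 1))
    pairs = list(itertools.combinations(range(k), 2))        # slot pairs (j1, j2)
    d = k * (k - 1) * N + len(pairs) + k                     # total dimension
    q = [[(i + 1) >> t & 1 for t in range(N)] for i in range(n)]  # distinct nonzero patterns

    def subslot(j, l):  # coordinates of subslot l of slot j
        base = (j * (k - 1) + l) * N
        return list(range(base, base + N))

    cols = []
    for i in range(n):                                       # vertex vectors
        for j in range(k):
            v = np.zeros(d, dtype=int)
            for l in range(k - 1):                           # fill all subslots of slot j
                v[subslot(j, l)] = q[i]
            v[k * (k - 1) * N + len(pairs) + j] = 1          # vertex indicator for slot j
            cols.append(v)
    for (i1, i2) in edges:                                   # edge vectors
        for p, (j1, j2) in enumerate(pairs):
            v = np.zeros(d, dtype=int)
            v[subslot(j1, j2 - 1)] = q[i1]                   # pattern of lower vertex
            v[subslot(j2, j1)] = q[i2]                       # pattern of higher vertex
            v[k * (k - 1) * N + p] = 1                       # edge indicator for (j1, j2)
            cols.append(v)
    M = np.array(cols).T
    b = np.zeros(d, dtype=int)
    b[k * (k - 1) * N:] = 1                                  # all indicator coordinates
    return M, b
```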
3.1 YES case
We show that if G(V, E) has a k-clique, then there exists a set of k + (k choose 2) columns of M that sum to b. Assume that v_{i1}, v_{i2}, . . . , v_{ik} form a k-clique where i1 < i2 < · · · < ik. We select the k vertex vectors {η(v_{ij}, j)}_{j∈[k]}, and the (k choose 2) edge vectors {η(e, j1, j2) | e = (v_{i_{j1}}, v_{i_{j2}}), 1 ≤ j1 < j2 ≤ k}. Since the k vertices form a clique, these vectors always exist. Observe that for any fixed j ∈ [k], (i) for ℓ = 1, . . . , j − 1, η(v_{ij}, j) and η(e, ℓ, j) have the same pattern q_{ij} in subslot ℓ of slot j, where e = (v_{iℓ}, v_{ij}), and (ii) for ℓ = j + 1, . . . , k, η(v_{ij}, j) and η(e, j, ℓ) have the same pattern q_{ij} in subslot (ℓ − 1) of slot j, where e = (v_{ij}, v_{iℓ}). Thus, the k + (k choose 2) selected vectors sum to zero on all but the vertex and edge indicator coordinates, and thus sum up to b.
3.2 NO Case

Suppose for a contradiction that S is a subset of columns of M that sum to b and that |S| ≤ k + (k choose 2).
Proposition 3.2. There are exactly k vertex vectors in S, one for each index (slot) i ∈ [k]. Also, there are exactly (k choose 2) edge vectors in S, one for each pair (i, j) (1 ≤ i < j ≤ k) of slots.

Proof. This follows from the observation that there are k + (k choose 2) nonzero indicator coordinates in the target b, and each (edge or vertex) vector contributes exactly one nonzero (edge or vertex) indicator coordinate. Therefore, by a counting argument, k vertex vectors, one for each of the indices (slots) i ∈ [k], must contribute to the k vertex indicator bits. Similarly, (k choose 2) edge vectors, one for each pair of slots (i, j) (1 ≤ i < j ≤ k), must contribute to the (k choose 2) edge indicator bits.
The above proposition implies that for each pair of vertex vectors there is exactly one edge vector which
has a common populated subslot with each of them. So there are exactly (k − 1) edge vectors which share
a populated subslot with any given vertex vector in S.
Since the k vertex vectors in S populate distinct slots, in total k(k − 1) subslots are populated by the sum of the k vertex vectors. Note that any edge vector populates exactly 2 subslots. Thus, for the (k choose 2) = k(k − 1)/2 edge vectors in S to sum up to the values in k(k − 1) subslots, it must be that no two edge vectors overlap in the same slot-subslot combination.
Thus, for each vertex vector there are exactly (k − 1) edge vectors which share distinct populated subslots with it, and these edge vectors must cancel out the corresponding subslots, i.e., have the same pattern in the shared subslot as that of the vertex vector. In other words, for any two vertex vectors corresponding to slots i and j respectively (i < j), the edge vector corresponding to the pair (i, j) must cancel one subslot from each of the two vertex vectors. This is possible only if (i) the k vertex vectors correspond to distinct vertices in G, and (ii) each pair of these vertices has an edge between them, so that the corresponding edge vector exists. This implies that G has a k-clique, which is a contradiction.
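As a toy sanity check (ours), the construction sketch above can be combined with the brute-force solver from the introduction:

```python
# A triangle contains a 3-clique, so (M, b) admits a (3 + C(3,2))-sparse solution.
M, b = clique_to_vector_sum(3, [(0, 1), (0, 2), (1, 2)], k=3)
assert k_vector_sum(M, b, k=3 + 3) is not None
```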
4 Parameterized Reduction for the k-EvenSet problem
The following is a restatement of Theorem 1.4.
Theorem 4.1 (Hardness of k-EvenSet). Given an instance (M, t) of k-VectorSum, where M ∈ F_2^{m×n} and t ∈ F_2^m, there is a poly(m, n) time reduction to an instance M′ of O(k² log² n)-EvenSet, where M′ ∈ F_2^{m′×n′} such that m′ and n′ are polynomial in n and m.
The rest of this section is devoted to proving the above theorem. The next few paragraphs give an informal
description of the reduction. We then define the variables and equations of the k-EvenSet instance, and
analyze the completeness and soundness of the reduction.
4.1 Reduction Overview
Let Mx = t be a hard instance of k-VectorSum, i.e., in the YES case there exists a k-sparse solution, whereas in the NO case all solutions have Hamming weight at least k + 1. We homogenize this affine system by replacing the target vector t by a0t for an F_2-variable a0, where a0t is the coordinate-wise multiplication of t with the scalar a0. Clearly, if all (k + 1)-sparse solutions (counting a0 as a variable) to Mx = a0t have a0 = 1, then the hardness of k-VectorSum implies the desired hardness result for k-EvenSet. However, this may not be true in general: there could exist a k-sparse x such that Mx = 0. The objective of our reduction, therefore, is to ensure that any solution to Mx = a0t that has a0 = 0 with a k-sparse x must have significantly large weight in other auxiliary variables which we shall add in the construction.

Towards this end, we borrow some techniques from the proof of the inapproximability of MinimumDistance by Austrin and Khot [AK14]. Using transformations by suitable codes we first obtain a K = O(k log n)-length sketch y = (y1, . . . , yK) of x, such that y has normalized weight nearly 1/2 when x is k-sparse but nonzero. We then construct a codeword Y ∈ F_2^{K×K}, which is intended to be the product codeword yy^T. However, this relationship cannot be expressed explicitly in terms of linear equations. Instead, for each pair of coordinates (i, j) ∈ [K] × [K], we introduce functions Zij : F_2 × F_2 → F_2 indicating the value taken by the pair (yi, yj), along with constraints that relate the Zij variables to the codewords y and Y. In fact, the explicit variables {Zij} determine both y and Y, which are implicit. The constraints also satisfy the key property: if x is k-sparse, then the number of nonzero Zij variables is significantly larger when a0 = 0 than when a0 = 1. This forces all sparse solutions to set a0 = 1, which gives us the desired separation in sparsities between the YES and NO cases.
4.2 Constraints
Let Mx = t be the instance of k-VectorSum over F_2, in n variables and m equations. We homogenize this system by introducing a new F_2-variable a0, so that the new system of equations is given by

    Mx = a0t,    (1)

where a0t is the coordinate-wise product of t with the scalar a0. We also add the following additional constraints and variables.

Linear Sketch Constraints: Let R ∈ F_2^{k′×n} be the parity check matrix of an [n, n − k′, 18k] linear code, where k′ = 20k log n, as defined in Corollary 2.2. Define η to be a k′-length sketch of x using R as

    η = Rx.    (2)

Mixing Constraints: Let C ∈ F_2^{K×k′} be the generator matrix of a linear code C ⊆ F_2^K as defined in Theorem 2.3, where C has relative distance 1/2 − ε and rate Ω(ε³) for some small ε > 0, and K = k′/Ω(ε³) ≤ 20k log n/(cε³) for some constant c > 0. We add the constraint

    y = Cη = CRx.    (3)
Product Code Constraints: Let C^{⊗2} := C ⊗ C be the product code, with relative distance (1/2 − ε)², constructed from C. Let Y = {Yij}_{1≤i,j≤K} ∈ F_2^{K×K} be such that Y = yy^T. To represent this relation linearly, we introduce variables {Zij(a, b)}_{a,b∈F_2} for each 1 ≤ i, j ≤ K, which are intended to indicate the value assigned to the pair (yi, yj), i.e., Zij(a, b) = 1{yi = a, yj = b}. For each (i, j) ∈ [K] × [K] we add the following equations:

    Zij(0, 0) + Zij(0, 1) + Zij(1, 0) + Zij(1, 1) = a0    (4)
    Zij(1, 0) + Zij(1, 1) = yi    (5)
    Zij(0, 1) + Zij(1, 1) = yj    (6)
    Zij(1, 1) = Yij.    (7)
Furthermore, we add the constraints

    QY = 0,    (8)

where Q is the parity check matrix for the product code C^{⊗2}, and

    Yij = Yji    ∀i ≠ j,    (9)
    Yii = yi    ∀i ∈ [K],    (10)

so that Y preserves the diagonal entries and symmetry of yy^T. Finally, we introduce x¹, x², . . . , x^{r−1} and constraints

    x^i = x    ∀i ∈ [r − 1],    (11)
where r = K²/(16k) ≤ 25k(log n)²/(c²ε⁶). These r − 1 explicit copies of the vector x are used to balance the Hamming weight of the final solution. Observe that all the variables described above are linear combinations of a0, {Zij(·, ·)}_{i,j∈[K]} and the coordinates of the vectors x and {x^i}_{i∈[r−1]}. Hence, we analyze the sparsity of the solution restricted to these explicit variables. The total number of variables considered is 4K² + r · n + 1.
Remark: The key difference between [AK14] and our reduction is in Equation (2), which constructs a small (O(k log n))-length sketch of the n-length vector x. This helps us contain the blowup in the sparsity of the solution to O(k² log² n) instead of O(n).
4.3 Completeness
In the YES case, setting a0 = 1 we obtain a k-sparse x such that Mx = a0t = t. Furthermore, for each i, j ∈ [K], exactly one of the Zij variables is nonzero. Hence, we have a solution of weight K² + rk + 1.
4.4 Soundness
Since the solution has to be non-trivial, at least one of a0, x, y, Y must be nonzero. Note that when x = 0, also y = 0, since y is a homogeneous linear transformation of x. Moreover, we may assume that the weight of x is at most (K² + 1)/r + k + 1 < 18k by our setting of r; otherwise the total weight of the solution would be at least r · ((K² + 1)/r + k + 1) ≥ K² + r(k + 1) + 1 due to the copies of x, and we would be done. The construction of y along with the upper bound of 18k on the weight of x constrains y to be nonzero when x is nonzero. Thus, the only three cases we need to consider are:
Case (i): a0 = 1. In this case, any solution x to Mx = a0t = t has weight at least k + 1. Furthermore, for each i, j ∈ [K], at least one of the four Zij variables must be nonzero since a0 = 1. Hence, the total Hamming weight of the solution is at least K² + r(k + 1) + 1.
Case (ii): a0 = 0, x ≠ 0, y ≠ 0. By construction, since y is nonzero it has weight ≥ (1/2 − ε)K. Therefore, for at least a 1 − (1/2 + ε)² ≥ 3/4 − 2ε fraction of the pairs (i, j) ∈ [K] × [K], either yi = 1 or yj = 1. Observe that for each such pair, at least two Zij variables are set to 1. Thus, the weight of any solution in this case is at least 2(3/4 − 2ε)K² = (3/2 − 4ε)K².
Case (iii): a0 = 0, x = 0, y = 0, Y ≠ 0. We have that diag(Y) = y = 0, Y is symmetric, and it belongs to the product code C^{⊗2} (as enforced by Equations (8) and (9)). Then by Lemma 2.4, the weight of Y is at least (3/8 − 3ε)K². Observe that for each i, j ∈ [K] such that Yij = 1, Equations (4)-(7) force all four Zij variables to be set to 1. Hence, the number of nonzero Zij's is at least (3/2 − 12ε)K².
The above analysis yields that in contrast to the YES case, which admits a (K² + rk + 1)-sparse solution, in the NO case all solutions are of weight at least

    min{ K² + r(k + 1) + 1 , (3/2 − 12ε)K² } ≥ K² + r(k + 1) + 1

by choice of the parameter r. Thus, solving the d-EvenSet problem with d = K² + rk + 1 = O(k²(log n)²) solves the k-VectorSum instance Mx = t.
5 Hardness of Learning k-Parities using k-Juntas
The hardness for k-VectorSum proved in Theorem 1.3 can be restated in terms of W[1]-hardness of learning k-parities, i.e. linear forms depending on at most k variables.
Theorem 5.1. The following is W[1]-hard: given r = O(k log n) point-value pairs {(yi, ai)}_{i=1}^r ⊆ F_2^n × F_2, decide whether there exists a homogeneous linear form L supported on at most k variables which satisfies all the point-value pairs, i.e. L(yi) = ai for all i = 1, . . . , r.
Combining the above with a small-bias linear code, we induce an approximation gap for learning k-parities, and also extend the result to non-homogeneous linear forms.
Theorem 5.2. The following is W[1]-hard: for any ε > 0 depending only on k, given t = O(k log n/ε³) point-value pairs {(zi, bi)}_{i=1}^t ⊆ F_2^n × F_2, decide whether:
YES Case: There exists a homogeneous linear form supported on at most k variables which satisfies all the point-value pairs.
NO Case: Any linear form supported on at most k variables satisfies a fraction in the range [1/2 − ε, 1/2 + ε] of the point-value pairs.
Proof. Let W = {Wij} ∈ F_2^{t×r} be the generator matrix of an ε-balanced linear code given by Theorem 2.3, where t = O(r/ε³). Given an instance {(yj, aj)}_{j=1}^r from Theorem 5.1, let

    zi = Σ_{j=1}^r Wij yj    and    bi = Σ_{j=1}^r Wij aj,

for i = 1, . . . , t.

In the YES case, there is a homogeneous linear form L∗ that satisfies all of {(yj, aj)}_{j=1}^r and thus satisfies linear combinations of these point-value pairs, in particular {(zi, bi)}_{i=1}^t.

For the NO case, consider any linear form L(x) + c. Since the homogeneous part L does not satisfy all pairs {(yj, aj)}_{j=1}^r, it will satisfy a fraction in the range [1/2 − ε, 1/2 + ε] of the pairs {(zi, bi)}_{i=1}^t, due to the lower and upper bounds on the weight of the nonzero codewords in the column space of W. Thus, the linear form L(x) + c also satisfies a fraction in the range [1/2 − ε, 1/2 + ε] of the point-value pairs {(zi, bi)}_{i=1}^t.
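The amplification step in this proof is a single matrix product over F_2; a minimal sketch (ours), assuming W is the generator matrix of an ε-balanced code as in Theorem 2.3:

```python
import numpy as np

def balance_instance(W: np.ndarray, Y: np.ndarray, a: np.ndarray):
    """Map {(y_j, a_j)} to {(z_i, b_i)} via z_i = sum_j W_ij y_j, b_i = sum_j W_ij a_j.

    W: t-by-r generator matrix of an eps-balanced code; Y: r-by-n matrix whose
    rows are the points y_j; a: length-r vector of values. All arithmetic mod 2.
    """
    Z = (W % 2) @ (Y % 2) % 2   # rows of Z are the new points z_i
    b = (W % 2) @ (a % 2) % 2   # new values b_i
    return Z, b
```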
As we show below, using a small enough bias ε in the above construction, one can strengthen the hardness
result to learning k-parities with k-juntas, i.e. functions depending only on a subset of at most k variables.
Theorem 5.3 (Theorem 1.7 restated). The following is W[1]-hard: for any constant δ > 0, given t = O(k · 2^{3k} · log n/δ³) point-value pairs {(zi, bi)}_{i=1}^t ⊆ F_2^n × F_2, decide whether:
YES Case: There exists a homogeneous linear form supported on at most k variables which satisfies all the point-value pairs.
NO Case: Any function f : F_2^n → F_2 depending on at most k variables satisfies at most 1/2 + δ fraction of the point-value pairs.
Proof. The construction of Z = {(zi, bi)}_{i=1}^t is exactly the same as in the proof of Theorem 5.2, taking ε = δ · 2^{−k}. The YES case follows directly as before.

For the NO case, let f : F_2^n → F_2 be a function depending only on a subset S ⊆ [n] of coordinates where |S| ≤ k. Define an extension g : F_2^{n+1} → F_2 as g(x1, . . . , xn, xn+1) := f(x1, . . . , xn) + xn+1. For convenience we shall abuse notation to denote (z, b) = (z1, . . . , zn, b) where z = (z1, . . . , zn) ∈ F_2^n and b ∈ F_2. To complete the proof we need to show that

    | E_{(z,b)∈Z}[e(g(z, b))] | ≤ 2δ,    (12)
where e(x) := (−1)^x. For some real values Cα (α ⊆ [n + 1]), the Fourier expansion of e(g) is given by

    e(g) = Σ_{α⊆[n+1]} Cα χα.

Since e(g(x1, . . . , xn+1)) = e(f(x1, . . . , xn) + xn+1) and f depends only on coordinates in S, it is easy to see that the Fourier spectrum of e(g) is supported only on characters χα such that α ⊆ S ∪ {n + 1}. Further, since e(g(x1, . . . , xn+1)) changes sign on flipping xn+1, Cα ≠ 0 ⇒ (n + 1) ∈ α. Thus,

    e(g) = Σ_{α⊆S∪{n+1}, (n+1)∈α} Cα χα.    (13)
Observe that for any α in the sum above, χα(x1, . . . , xn, b) = e(L(x1, . . . , xn) + b) where L is a homogeneous linear form supported on at most k variables. For any such α, the NO case of Theorem 5.2 implies

    | E_{(z,b)∈Z}[χα(z, b)] | ≤ 2ε.    (14)

Using the above along with Equation (13) yields

    | E_{(z,b)∈Z}[e(g(z, b))] | ≤ (2ε) · Σ_{α⊆S∪{n+1}, (n+1)∈α} |Cα| ≤ (2ε) · 2^k = 2δ,

where the last inequality is because there are at most 2^k subsets α in the sum on the RHS of Equation (13) and each |Cα| ≤ 1, since e(g) is a {−1, 1}-valued function.
6 Proof of Theorem 1.8
We first prove the following strengthening of Theorem 1.4 along the same lines as Theorem 5.2 in Section 5.
Theorem 6.1 (Hardness of approximate k-EvenSet). For any constant ε > 0, given an instance (A, b) of k′-VectorSum, where A ∈ F_2^{m′×n′} and b ∈ F_2^{m′}, there is an FPT reduction to an instance B ∈ F_2^{m×n} of k-EvenSet for some k = O((k′ log n′)²), such that
YES Case: There is a nonzero k-sparse vector x which satisfies Bx = 0.
NO Case: For any nonzero k-sparse vector x, the normalized weight of Bx is in the range [1/2 − ε, 1/2 + ε].
Here both m and n are bounded by fixed polynomials in m′ and n′.
Proof. Let M ∈ F_2^{r×n} be the instance of k-EvenSet obtained by applying Theorem 1.4 to the instance (A, b) of k′-VectorSum we start with. As in the proof of Theorem 5.2, let W ∈ F_2^{m×r} be the generator matrix of an ε-balanced linear code given by Theorem 2.3, where m = O(r/ε³). Taking B := WM completes the proof.

It is easy to see that the uniform distribution on the rows of the matrix B fools all linear forms (with error ε) over k variables. Viola's result [Vio09] (Theorem 2.6) implies that for any constant d, taking d-wise sums of the rows of B yields a distribution which fools all degree d polynomials with error 16 · ε^{1/2^{d−1}}.
Taking ε to be a small enough constant yields the following theorem which implies Theorem 1.8.
Theorem 6.2. For any constant δ > 0 and positive integer d, given an instance (A, b) of k′-VectorSum, where A ∈ F_2^{m′×n′} and b ∈ F_2^{m′}, there is an FPT reduction to a set of m points {zi}_{i=1}^m ⊆ F_2^n such that for some k = O((k′ log n′)²),
YES Case: There exists a k-parity L such that L(zi) = 0 for all i = 1, . . . , m.
NO Case: Any degree d polynomial P : F_2^n → F_2 depending on at most k variables satisfies P(zi) = 0 for at most Pr_{z∈F_2^n}[P(z) = 0] + δ fraction of the points.
In the above, m and n are bounded by fixed polynomials in m′ and n′.
7 Hardness Reduction for Gap-k-VectorSum
In this section we first prove the following theorem.
Theorem 7.1. For universal constants δ0 > 0 and c0, and any arbitrarily small constant ε > 0, there is a 2^{O(εn)}-time Turing reduction from an n-variable 3-CNF formula F to s ≤ 2^{εn} instances I1, . . . , Is of Gap-k-VectorSum, each of size at most O(k² · 2^{O(n′/k)}) where n′ = cε n(log n)^{c0} for some constant cε depending on ε, such that
YES Case: If F is satisfiable, then at least one instance Ij (j ∈ [s]) admits a solution of sparsity k.
NO Case: If F is unsatisfiable, then no instance Ij admits a solution of sparsity ≤ (1 + δ0)k.
Proof. Let F be a 3-CNF formula on n variables. We use Lemma 2.7 to obtain 3-CNF formulas H1, . . . , Hs for s ≤ 2^{εn}, each on at most n variables and O((1/ε)⁹ · n) clauses. Using Theorem 2.8, each Hj (j ∈ [s]) is separately transformed into a Gap-3-SAT instance Fj with at most n′ = cε n(log n)^{c0} clauses (and variables), for some cε = O((1/ε)⁹). Each Fj is now reduced to an instance Ij of Gap-k-VectorSum as follows.

Fix j ∈ [s] and let C(Fj) denote the set of clauses of Fj. We may assume that |C(Fj)| is divisible by k by adding up to k dummy clauses which are always satisfied. Let B_1^j, . . . , B_k^j be an arbitrary partition of C(Fj) into k equal sized subsets, and let X(B_i^j) be the set of variables contained in the clauses of B_i^j. Let SAT(B_i^j) ⊆ {0, 1}^{X(B_i^j)} be the set of assignments to the variables X(B_i^j) that satisfy all the clauses in B_i^j, for 1 ≤ i ≤ k. For each α ∈ SAT(B_i^j) we introduce an F_2-valued variable zα, and add the following equation:

    Σ_{α∈SAT(B_i^j)} zα = 1,    (15)
for each i = 1, . . . , k. We also add equations to ensure consistency of the solution across different blocks of variables. For each pair of distinct blocks B_i^j and B_{i′}^j and each assignment σ ∈ {0, 1}^{X(B_i^j)∩X(B_{i′}^j)} to their common variables, we add the following equation:

    Σ_{α∈SAT(B_i^j) : α|_{X(B_i^j)∩X(B_{i′}^j)} = σ} zα  =  Σ_{β∈SAT(B_{i′}^j) : β|_{X(B_i^j)∩X(B_{i′}^j)} = σ} zβ.    (16)

It is easy to see that |X(B_i^j)| = O(n′/k) and |SAT(B_i^j)| ≤ 2^{O(n′/k)} for 1 ≤ i ≤ k. Thus, the above construction of Ij has at most k · 2^{O(n′/k)} variables, and at most k² · 2^{O(n′/k)} equations. The constant c0 is the same as in Theorem 2.8, and δ0 shall be chosen later.
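A sketch of the block-variable construction (ours; clause_vars and clause_sat are hypothetical helper interfaces for the 3-CNF instance). Each satisfying partial assignment α found below becomes one F_2-variable zα of Equations (15) and (16).

```python
import itertools

def enumerate_block_assignments(blocks, clause_vars, clause_sat):
    """For each block B_i of clauses, enumerate SAT(B_i).

    blocks: k lists of clause indices; clause_vars(c): variables of clause c;
    clause_sat(c, alpha): whether the partial assignment alpha satisfies c.
    Since |X(B_i)| = O(n'/k), each enumeration takes 2^{O(n'/k)} time.
    """
    out = []
    for block in blocks:
        X = sorted({v for c in block for v in clause_vars(c)})
        sat = []
        for bits in itertools.product((0, 1), repeat=len(X)):
            alpha = dict(zip(X, bits))
            if all(clause_sat(c, alpha) for c in block):
                sat.append(alpha)   # one z_alpha variable per element of SAT(B_i)
        out.append((X, sat))
    return out
```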
7.1 YES Case
If F is satisfiable, then for some j∗ ∈ [s], Hj∗ is satisfiable and therefore Fj∗ is satisfiable. Let π be a satisfying assignment for Fj∗. In Ij∗, for each B_i^{j∗} (1 ≤ i ≤ k) we set to 1 the variable zα corresponding to the projection α ∈ SAT(B_i^{j∗}) of π onto the variables X(B_i^{j∗}), and the rest of the variables to 0. Clearly, this satisfies Equations (15) and (16) since π is a satisfying assignment. As we set exactly one variable per block to 1, Ij∗ admits a solution of sparsity k.
7.2 NO Case
If F is not satisfiable, then none of H1, . . . , Hs is satisfiable, and thus, for each j = 1, . . . , s, at most a (1 − γ0) fraction of the clauses of Fj is satisfiable by any assignment. Fix any j ∈ [s]. Since B_1^j, . . . , B_k^j is a balanced partition of C(Fj), any assignment to the variables of Fj can satisfy all the clauses of at most a (1 − γ0) fraction of the blocks B_i^j, 1 ≤ i ≤ k.

Consider a solution to Ij of sparsity at most (1 + δ0)k. By Equation (15), this solution must set an odd number of variables in {zα | α ∈ SAT(B_i^j)} to 1, for 1 ≤ i ≤ k. Let S ⊆ [k] consist of all indices i ∈ [k] such that exactly one variable zαi, for some αi ∈ SAT(B_i^j), is set to 1. Thus, the sparsity is at least |S| + 3(k − |S|), which is at most (1 + δ0)k by our assumption. By rearranging we get |S| ≥ (1 − δ0/2)k. Further, Equation (16) implies that for any i, i′ ∈ S, the assignments αi and αi′ are consistent on their common variables. Thus, there is an assignment to the variables in ∪_{i∈S} X(B_i^j) that satisfies all the clauses of B_i^j for i ∈ S. Choosing δ0 = γ0 yields a contradiction to our assumption, and therefore no Ij admits a solution of sparsity at most (1 + δ0)k.
Theorem 7.1 proved above implies the following restatement of Theorem 1.9.
Theorem 7.2. Assuming the Exponential Time Hypothesis, there are universal constants δ0 > 0 and c0 such that there is no poly(N) time algorithm to determine whether an instance of Gap-k-VectorSum of size N admits a solution of sparsity k or all solutions are of sparsity at least (1 + δ0)k, for any k = ω((log log N)^{c0}). More generally, under the same assumption, this problem does not admit an N^{O(k/ω((log log N)^{c0}))} time algorithm for unrestricted k.
Proof. For the first part of the theorem, assume for a contradiction that such an algorithm exists. In Theorem 7.1, the size of each Gap-k-VectorSum instance constructed is at most N, where log N = O(log k + n′/k) = O(log k) + O(cε n(log n)^{c0}/k). Here n is the number of variables in the 3-CNF formula F. Thus, choosing k = ω((log log N)^{c0}) implies k = ω((log n)^{c0}). Note that in the reduction k is bounded by poly(n). Our supposed algorithm would decide each Gap-k-VectorSum instance in time poly(k, 2^{O(n′/k)}) = 2^{o(n)}. Applying this to all the instances of Gap-k-VectorSum would decide the n-variable 3-CNF formula F in time 2^{(ε+o(1))n} for all constants ε > 0, which contradicts the ETH.

A similar analysis proves the second part (unrestricted k) of the theorem.
Acknowledgements
We thank Subhash Khot for his permission to include his proof of the hardness of learning parities using polynomials. Also, thanks to Ryan Williams for encouraging and stimulating conversations, and to Rocco Servedio for bringing [AKL09] to our attention.
References
[ABN+ 92] Noga Alon, Jehoshua Bruck, Joseph Naor, Moni Naor, and Ron M Roth. Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs. IEEE Trans. Inform. Theory, 38(2):509–516, 1992. 6
[AK14]
Per Austrin and Subhash Khot. A simple deterministic reduction for the gap minimum distance
of code problem. IEEE Trans. Inform. Theory, 60(10):6636–6645, 2014. 1, 3, 4, 6, 9, 10, 17
[AKL09]
Vikraman Arvind, Johannes Köbler, and Wolfgang Lindner. Parameterized learnability of juntas.
Theor. Comp. Sci., 410(47-49):4928–4936, 2009. 3, 14
[ALW14]
Amir Abboud, Kevin Lewi, and Ryan Williams. Losing weight by gaining edges. In Proc. 22nd
Annual European Symposium on Algorithms, pages 1–12. Springer, 2014. 3
[BEKP15] Edouard Bonnet, Bruno Escoffier, Eun Jung Kim, and Vangelis Th. Paschos. On subexponential
and fpt-time inapproximability. Algorithmica, 71(3):541–565, 2015. 2, 4
[BGM10]
Harry Buhrman, David Garcı́a-Soriano, and Arie Matsliah. Learning parities in the mistakebound model. Inform. Process. Lett., 111(1):16–21, 2010. 2
[BGR15]
Arnab Bhattacharyya, Ameet Gadekar, and Ninad Rajgopal. On learning k-parities with and
without noise. arXiv preprint arXiv:1502.05375, 2015. 2
[BIWX11] Arnab Bhattacharyya, Piotr Indyk, David P Woodruff, and Ning Xie. The complexity of linear
dependence problems in vector spaces. In Proc. 2nd Innovations in Computer Science, pages
496–508, 2011. 1
[BKW03]
Avrim Blum, Adam Kalai, and Hal Wasserman. Noise tolerant learning, the parity problem,
and the statistical query model. J. ACM, 50(4):506–519, 2003. 2
[Blu96]
Avrim Blum. On-line algorithms in machine learning. In Workshop on on-line algorithms,
Dagstuhl, pages 305–325. Springer, 1996. 2
[BR60]
R. C. Bose and Dwijendra K. Ray-Chaudhuri. On a class of error correcting binary group codes.
Information and Control, 3(1):68–79, 1960. 5
[CFK+ 15] Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin
Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer International Publishing, 2015. 2, 5
[CIP06]
Chris Calabro, Russell Impagliazzo, and Ramamohan Paturi. A duality between clause width
and clause density for SAT. In 21st Annual IEEE Conference on Computational Complexity,
pages 252–260, 2006. 4, 6
[CW08]
Qi Cheng and Daqing Wan. Complexity of decoding positive-rate reed-solomon codes. In Proc.
35th Annual International Conference on Automata, Languages, and Programming, pages 283–
293. Springer, 2008. 1
[CW09]
Qi Cheng and Daqing Wan. A deterministic reduction for the gap minimum distance problem.
In Proc. 41st Annual ACM Symposium on the Theory of Computing, pages 33–38. ACM, 2009.
1
[DF95]
Rodney G. Downey and Michael R. Fellows. Fixed-parameter tractability and completeness I:
basic results. SIAM J. on Comput., 24(4):873–921, 1995. 5
[DF99]
Rodney G Downey and Michael Ralph Fellows. Parameterized complexity. Springer Science &
Business Media, 1999. 1, 5
[DFVW99] Rod G Downey, Michael R Fellows, Alexander Vardy, and Geoff Whittle. The parametrized
complexity of some fundamental problems in coding theory. SIAM J. on Comput., 29(2):545–
570, 1999. 1, 2
[Din07]
Irit Dinur. The PCP theorem by gap amplification. J. ACM, 54(3), 2007. 4, 6
[DMS03]
Ilya Dumer, Daniele Micciancio, and Madhu Sudan. Hardness of approximating the minimum
distance of a linear code. IEEE Trans. Inform. Theory, 49(1):22–37, 2003. 1
[FG06]
Jörg Flum and Martin Grohe. Parameterized Complexity Theory. Springer Verlag, 2006. 5
[FGKP09] Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Kumar Ponnuswami. On agnostic
learning of parities, monomials, and halfspaces. SIAM J. on Comput., 39(2):606–645, 2009. 2
[FGMS12] Michael R Fellows, Jiong Guo, Dániel Marx, and Saket Saurabh. Data reduction and problem
kernels (Dagstuhl Seminar 12241). Dagstuhl Reports, 2(6):26–50, 2012. 1
[FM12]
Fedor V Fomin and Dániel Marx. FPT suspects and tough customers: Open problems of Downey
and Fellows. In The Multivariate Algorithmic Revolution and Beyond, pages 457–468. Springer,
2012. 1
[GKS10]
Parikshit Gopalan, Subhash Khot, and Rishi Saket. Hardness of reconstructing multivariate
polynomials over finite fields. SIAM J. on Comput., 39(6):2598–2621, 2010. 2, 18
[Hås01]
Johan Håstad. Some optimal inapproximability results. Journal of the ACM (JACM), 48(4):798–
859, 2001. 2
[Kho06]
Subhash Khot. Ruling out PTAS for graph min-bisection, dense k-subgraph, and bipartite clique.
SIAM J. on Comput., 36(4):1025–1071, 2006. 17
[Kho09]
Subhash Khot. Personal communication, 2009. 2, 4, 5, 6, 17
[KMV08]
Adam Tauman Kalai, Yishay Mansour, and Elad Verbin. On agnostic boosting and parity
learning. In Proc. 40th Annual ACM Symposium on the Theory of Computing, pages 629–638.
ACM, 2008. 2
[KS06]
Adam R Klivans and Rocco A Servedio. Toward attribute efficient learning of decision lists and
parities. J. Mach. Learn. Res., 7:587–602, 2006. 2
[KS15]
Subhash Khot and Igor Shinkar. On hardness of approximating the parameterized clique problem. Electronic Colloquium on Computational Complexity (ECCC), 22:13, 2015. http://eccc.hpi-web.de/report/2015/013. 2
[LW05]
Michael Luby and Avi Wigderson. Pairwise independence and derandomization. Foundation
and Trends in Theoretical Computer Science, 1(4):237–301, 2005. 18
[Mic14]
Daniele Micciancio. Locally dense codes. In Proc. 29th Annual IEEE Conference on Computational Complexity, pages 90–97. IEEE, 2014. 1
[MOS04]
Elchanan Mossel, Ryan O’Donnell, and Rocco A. Servedio. Learning functions of k relevant
variables. J. Comp. Sys. Sci., 69(3):421–434, November 2004. 3
[Val12]
Gregory Valiant. Finding correlations in subquadratic time, with applications to learning parities
and juntas. In Proc. 53rd Annual IEEE Symposium on Foundations of Computer Science, pages
11–20. IEEE, 2012. 3
[Var97]
Alexander Vardy. The intractability of computing the minimum distance of a code. IEEE Trans.
Inform. Theory, 43(6):1757–1766, 1997. 1
[Vio09]
Emanuele Viola. The sum of D small-bias generators fools polynomials of degree D. Computational Complexity, 18(2):209–217, 2009. 4, 5, 6, 12, 18
A A simple O(n · 2^m)-time algorithm for k-VectorSum
Let (M, b) be an instance of k-VectorSum where M ∈ F_2^{m×n} and b ∈ F_2^m. Construct a graph G on vertex set V = F_2^m and edge set given by

    E = { {u, v} ∈ (V choose 2) | u + v is a column of M }.
We say that an edge {u, v} ∈ E is labeled by the column u + v of M. Clearly, if there is a vector x of Hamming weight at most k such that Mx = b, then there is a path of length at most k in G from 0 to b, given by choosing the edges labeled by the columns corresponding to the nonzero entries of x in any sequence. On the other hand, if there is a path in G from 0 to b of length at most k, then there is a sequence of at most k columns (with possible repetitions) of M which sum up to b. Cancelling out an even number of repetitions of any column yields a subset of at most k distinct columns of M that sum up to b. Thus, deciding k-VectorSum reduces to determining whether there is a path of length at most k from 0 to b. The size of V is 2^m and the size of E is at most n · 2^m, and the graph can be constructed in time O(n · 2^m). Doing a Breadth First Search yields a running time of O(n · 2^m).
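A direct implementation of this algorithm (our sketch; vectors in F_2^m are encoded as m-bit integers):

```python
from collections import deque

def k_vector_sum_bfs(columns, b: int, k: int) -> bool:
    """Decide k-VectorSum in O(n * 2^m) time by BFS, as described above.

    columns: the columns of M as m-bit integers; b: the target as an m-bit
    integer. Vertices are all of F_2^m; u and v are adjacent iff u XOR v is a
    column of M. We test for a path of length at most k from 0 to b.
    """
    if b == 0:
        return True                      # the zero vector is a 0-sparse solution
    cols = set(columns)
    dist = {0: 0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        if dist[u] >= k:
            continue
        for c in cols:
            v = u ^ c
            if v not in dist:            # first visit = shortest distance (BFS)
                if v == b:
                    return True
                dist[v] = dist[u] + 1
                queue.append(v)
    return False
```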
B Optimal hardness of learning parities using degree d polynomials
The hardness reduction in this section is due to Khot [Kho09].
The starting point of the reduction is the MinimumDistance problem over F_2: given a matrix A ∈ F_2^{m×n}, find a nonzero vector z ∈ F_2^n minimizing wt(Az), where wt(a) is the normalized Hamming weight of a. The latter quantity is the distance of the code given by the linear span of the columns of A.
Below we restate the hardness of MinimumDistance as proved by Austrin and Khot [AK14], along with an additional guarantee satisfied by their reduction in the YES case which shall prove useful for the subsequent application.

Theorem B.1 ([AK14]). There is a universal constant ζ ∈ (0, 1/5) such that, given a matrix A ∈ F_2^{m×n}, it is NP-hard to distinguish between the following:
YES Case. There is a vector z = (z1, . . . , zn) ∈ F_2^n such that z1 = 1 and wt(Az) ≤ ζ.
NO Case. For any nonzero vector z ∈ F_2^n, wt(Az) ≥ 5ζ.
The following is an easy consequence of the above, obtained by tensoring the instance of the above theorem.

Theorem B.2. There is a universal constant ζ ∈ (0, 1/5) such that for any constant positive integer K, given a matrix A ∈ F_2^{m×n}, it is NP-hard to distinguish between the following:
YES Case. There is a vector z = (z1, . . . , zn) ∈ F_2^n such that z1 = 1 and wt(Az) ≤ ζ^K.
NO Case. For any nonzero vector z ∈ F_2^n, wt(Az) ≥ 5^K ζ^K.
From the above we obtain – using techniques similar to those used in [Kho06] – the following hardness of distinguishing between a MinimumDistance instance of distance ε vs. 1/2 − ε.

Theorem B.3. For any positive constant ε > 0, given a matrix B ∈ F_2^{m×n}, it is NP-hard to distinguish between the following:
YES Case. There is a vector z = (z1, . . . , zn) ∈ F_2^n such that z1 = 1 and wt(Bz) ≤ ε.
NO Case. For any nonzero vector z ∈ F_2^n, 1/2 − ε ≤ wt(Bz) ≤ 1/2 + ε.
Proof. Let A ∈ F_2^{m×n} be an instance of MinimumDistance from Theorem B.2 for a large value of K. Let Ai, i = 1, . . . , m, be the rows of A. Let G be a regular expander on m vertices labeled from [m], with degree D = (4/(5^K ζ^K))^{10} and second largest eigenvalue λ ≤ D^{0.9}, which can be constructed efficiently. Note that λ/D ≤ (5^K ζ^K)/4.

Let t = 1/(2^K ζ^K) and consider all t-length random walks [i1, i2, . . . , it] in G. There are m · D^t such walks. For each such walk we add 2^t rows to B, given by

    { Σ_{j=1}^t sj A_{ij} | s1, . . . , st ∈ F_2 }.

In total, there are m′ = m · (2D)^t rows in B and n columns.
YES Case

Let z be the vector given in the YES case of Theorem B.2, so that at most a ζ^K fraction of the rows Ai satisfy ⟨Ai, z⟩ = 1. For a random walk in G, the probability that it contains an index corresponding to any such row is at most tζ^K = 1/2^K. Thus, wt(Bz) ≤ 1/2^K.
NO Case
Consider any nonzero z ∈ F_2^n. From the NO case of Theorem B.2, we have that at least a 5^K ζ^K fraction of the rows
A_i satisfy ⟨A_i, z⟩ = 1. Let I be the set of the indices corresponding to these rows. Using the analysis in
Section 4.7 of [LW05] we can bound the probability that a length-t random walk [i_1, . . . , i_t] in G does not
contain an index from I as follows:
Pr[i_1 ∉ I, i_2 ∉ I, . . . , i_t ∉ I] ≤ ( √(1 − 5^K ζ^K) + λ/D )^t
  ≤ ( √(1 − 5^K ζ^K) + (5^K ζ^K)/4 )^t
  ≤ ( 1 − (5^K ζ^K)/4 )^{1/(ζ^K 2^K)}
  ≤ e^{−(5/2)^K /4}
  ≤ 2^{−2^K}.
Thus, at least a 1 − 2^{−2^K} fraction of the random walks contain an index from I. For any such walk, exactly
half of the 2^t linear combinations of the corresponding rows result in a row B_r of B such that ⟨B_r, z⟩ = 0.
Thus, (1/2)(1 − 2^{−2^K}) ≤ wt(Bz) ≤ (1/2)(1 + 2^{−2^K}). Choosing K to be large enough completes the proof.
B.1 Main Theorem
The following theorem shows optimal hardness of learning linear forms by degree d polynomials and subsumes
the weaker results in [GKS10].
Theorem B.4. For any constant δ > 0 and positive integer d, given a set of point-value pairs {(x_i, y_i)}_{i=1}^{M} ⊆
F_2^N × F_2, it is NP-hard to distinguish whether:
YES Case. There is a linear form ℓ* : F_2^N → F_2 such that for at least a (1 − δ) fraction of the pairs (x, y),
ℓ*(x) = y.
NO Case. Any degree d polynomial p : F_2^N → F_2 satisfies p(x) = y for at most a (1/2 + δ) fraction of the
point-value pairs.
Proof. Let B ∈ F_2^{m×n} be the matrix given by Theorem B.3 for a parameter ε > 0 to be chosen small
enough. For each combination of d rows of B we add a point-value pair to our instance as follows. For
r = (r_1, . . . , r_n) obtained by summing some d rows of B, we add ((r_2, . . . , r_n), r_1) to our instance. Thus,
M = m^d and N = n − 1.
YES Case
Let z = (z_1, . . . , z_n) with z_1 = 1 be the vector from the YES case of Theorem B.3, satisfying ⟨z, b⟩ = 0 for at
least a (1 − ε) fraction of the rows b of B. Defining ℓ*(x) = Σ_{j=1}^{n−1} z_{j+1} x_j, we obtain that for a (1 − ε) fraction
of the rows (b_1, . . . , b_n) of B, ℓ*(b_2, . . . , b_n) = b_1. Thus, for at least a (1 − εd) fraction of the point-value pairs
(x, y) of the instance constructed above, ℓ*(x) = y.
NO Case
In the NO case, the uniform distribution over the rows of B fools every linear form ℓ : F_2^n → F_2
with bias ε. By the result of Viola [Vio09] (Theorem 2.6), the distribution given by uniformly random
d-wise sums of the rows of B fools all degree d polynomials q : F_2^n → F_2 with bias ε_d = 16 · ε^{1/2^{d−1}}.
Let (r_1, . . . , r_n) be a random element from the latter distribution, and let q(r_1, . . . , r_n) = r_1 + p(r_2, . . . , r_n),
where p is a degree d polynomial. Since q is linear in the first coordinate, the bias of q under the uniform
distribution is 0, and hence p(r_2, . . . , r_n) ≠ r_1 for at least a 1/2 − ε_d fraction of (r_1, . . . , r_n).
Thus, for any degree d polynomial p, p(x) ≠ y for at least a 1/2 − ε_d fraction of the point-value pairs (x, y) constructed
in our instance.
To complete the reduction we choose ε to be small enough so that max{εd, 16 · ε^{1/2^{d−1}}} ≤ δ.
On subexponential parameterized algorithms for Steiner Tree and
Directed Subset TSP on planar graphs∗
Dániel Marx†
Marcin Pilipczuk‡
Michał Pilipczuk§
arXiv:1707.02190v1 [] 7 Jul 2017
Abstract
There are numerous examples of the so-called “square root phenomenon” in the field of
parameterized algorithms: many of the most fundamental graph problems, parameterized by
some natural parameter k, become significantly simpler when restricted to planar graphs, and
in particular the best possible running time is exponential in O(√k) instead of O(k) (modulo
standard complexity assumptions). We consider two classic optimization problems parameterized
by the number of terminals. The Steiner Tree problem asks for a minimum-weight tree
connecting a given set of terminals T in an edge-weighted graph. In the Subset Traveling
Salesman problem we are asked to visit all the terminals T by a minimum-weight closed walk.
We investigate the parameterized complexity of these problems in planar graphs, where the
number k = |T| of terminals is regarded as the parameter. Our results are the following:
• Subset TSP can be solved in time 2^{O(√k log k)} · n^{O(1)} even on edge-weighted directed
planar graphs. This improves upon the algorithm of Klein and Marx [SODA 2014] with
the same running time that worked only on undirected planar graphs with polynomially
large integer weights.
• Assuming the Exponential-Time Hypothesis, Steiner Tree on undirected planar graphs
cannot be solved in time 2^{o(k)} · n^{O(1)}, even in the unit-weight setting. This lower bound
makes Steiner Tree the first “genuinely planar” problem (i.e., where the input is only a
planar graph with a set of distinguished terminals) for which we can show that the square
root phenomenon does not appear.
• Steiner Tree can be solved in time n^{O(√k)} · W on undirected planar graphs with maximum
edge weight W. Note that this result is incomparable to the fact that the problem is known
to be solvable in time 2^k · n^{O(1)} even in general graphs.
A direct corollary of the combination of our results for Steiner Tree is that this problem does
not admit a parameter-preserving polynomial kernel on planar graphs unless ETH fails.
∗ This research is a part of projects that have received funding from the European Research Council (ERC) under
the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 280152 (Dániel
Marx) and 714704 (Marcin Pilipczuk). The research of Michał Pilipczuk is supported by Polish National Science
Centre grant UMO-2013/11/D/ST6/03073. Michał Pilipczuk is also supported by the Foundation for Polish Science
(FNP) via the START stipend programme.
† Institute for Computer Science and Control, Hungarian Academy of Sciences (MTA SZTAKI), Hungary,
dmarx@cs.bme.hu
‡ Institute of Informatics, University of Warsaw, Poland, marcin.pilipczuk@mimuw.edu.pl
§ Institute of Informatics, University of Warsaw, Poland, michal.pilipczuk@mimuw.edu.pl
1 Introduction
It has been observed in the context of different algorithmic paradigms that planar graphs enjoy
important structural properties that allow more efficient solutions to many of the classic hard
algorithmic problems. The literature on approximation algorithms contains many examples of
optimization problems that are APX-hard on general graphs, but admit polynomial-time approximation schemes (PTASes) when restricted to planar graphs (see, e.g., [2–5, 7, 17, 18, 23, 27, 30]). Even
though the planar versions of most NP-hard problems remain NP-hard, a more fine-grained look
reveals that significantly better running times are possible for planar graphs. As a typical example,
consider the 3-Coloring problem: it can be solved in 2^{O(n)} time in general graphs and, assuming
the Exponential-Time Hypothesis (ETH), this is best possible, as there is no 2^{o(n)}-time algorithm.
However, when restricted to planar graphs, 3-Coloring can be solved in time 2^{O(√n)}, which is
again best possible assuming ETH: the existence of a 2^{o(√n)}-time algorithm would contradict ETH.
There are many other problems that behave in a similar way, and this can be attributed to the
combination of two important facts: (1) every planar graph on n vertices has treewidth O(√n),
and (2) given an n-vertex graph of treewidth t, most of the natural combinatorial problems can
be solved in time 2^{O(t)} · n^{O(1)} (or perhaps 2^{O(t·polylog t)} · n^{O(1)}). On the lower bound side, to rule
out 2^{o(√n)}-time algorithms, it is sufficient to observe that most planar NP-hardness proofs increase
the size of the instance at most quadratically (because of the introduction of crossing gadgets).
For example, there are reductions from n-variable m-clause 3SAT to 3-Coloring a planar graph
with O((n + m)^2) vertices, and together with ETH, this rules out 2^{o(√n)}-time algorithms for planar
3-Coloring. Thus the existence of this “square root phenomenon” giving 2^{O(√n)} time complexity
is well-understood both from the algorithmic and complexity viewpoints.
Our understanding of this phenomenon is much less complete for parameterized problems. A
large fraction of natural fixed-parameter tractable graph problems can be solved in time 2^{O(k)} · n^{O(1)}
(with notable exceptions [13, 31]), and a large fraction of W[1]-hard problems can be solved in time
n^{O(k)}. There are tight or almost-tight lower bounds showing the optimality of these running times.
By now, there is a growing list of problems where the running time improves to n^{O(√k·polylog k)}
or to 2^{O(√k·polylog k)} · n^{O(1)} when restricted to planar graphs. For a handful of problems (e.g.,
Independent Set, Dominating Set, Feedback Vertex Set, k-Path) this improvement can
be explained in a compact way by the elegant theory of bidimensionality [14]. However, there is
no generic argument (similar to the simple argument described above for the existence of 2^{O(√n)}
algorithms) why such an improvement should be possible for most parameterized problems. The
fact that every n-vertex planar graph has treewidth O(√n) does not seem to help in improving
the 2^{O(k)} factor to 2^{O(√k)} in the running time. The algorithmic results of this form are thus very
problem-specific, exploiting nontrivial observations on the structure of the solution or invoking other
tools tailored to the problem’s nature. Recent results include algorithms for Subset TSP [29],
Multiway Cut [28, 33], unweighted Steiner Tree parameterized by the number of edges of the
solution [38, 39], Strongly Connected Steiner Subgraph [9], Subgraph Isomorphism [21],
facility location problems [35], Odd Cycle Transversal [32], and 3-Coloring parameterized by
the number of vertices with degree ≥ 4 [1].
It is plausible to expect that other natural problems also have significantly faster parameterized
algorithms on planar graphs. The reason for this optimism is twofold. First, even though the
techniques used to obtain the results listed above are highly problem-specific, they suggest that
planar graphs have rich structural properties that could be exploited when solving other problems.
Second, it looks almost impossible to prove lower bounds ruling out subexponential algorithms for
planar problems. To prove that a parameterized algorithm with running time 2^{o(k)} · n^{O(1)} violates
ETH, one needs to give a reduction from 3SAT with m clauses to a planar instance with parameter
k = O(m). In a typical reduction, we represent each bit of information in the 3SAT instance
(e.g., values of the variables in the solution) by a small “gadget” in the planar graph. In order to
encode the constraints of the input instance, we need to connect these gadgets in an appropriate
way. However, in a planar graph, we need to introduce some kind of “crossing gadgets” in order to
realize these connections. To realize the constraints given by the O(m) clauses, it may be necessary
to introduce up to O(m^2) crossings. As each crossing typically increases the parameter, we end up
with an instance having parameter k = O(m^2), which is only sufficient to rule out 2^{o(√k)} · n^{O(1)}-time
algorithms. Thus the appearance of many crossing gadgets seems to be an inherent limitation
preventing stronger lower bounds. This may suggest that running times of the form 2^{o(k)} · n^{O(1)} are
achievable for many problems.
Our contribution. In this paper we address two network design problems on planar graphs
for which the existence of subexponential parameterized algorithms was open. Given a graph
G with a subset T of vertices distinguished as terminals, the Subset TSP problem asks for a
shortest closed walk visiting the terminals in any order. Parameterized by the number k = |T |
of terminals, the problem is fixed-parameter tractable in arbitrary graphs: it can be solved in
time 2^k · n^{O(1)} by first computing the distance between every pair of terminals, and then solving
the resulting k-terminal instance using the standard Bellman-Held-Karp dynamic programming
algorithm. Klein and Marx [29] showed that if G is an undirected planar graph with polynomially
bounded edge weights, then the problem can be solved significantly faster, in time 2^{O(√k log k)} · n^{O(1)}.
The limitations of polynomial weights and undirected graphs are inherent to this algorithm: it starts
with computing a locally 4-step optimal solution (which requires polynomial weights to terminate in
polynomial time) and relies on an elaborate subtour-replacement argument (which breaks down if
the tour has an orientation). The main argument is the unexpected and somewhat mysterious claim
that the union of an optimal and a locally 4-step optimal tour has treewidth O(√k).
Our first result is a more robust and perhaps less mysterious algorithm that achieves the same
running time, but does not suffer from these limitations.
Theorem 1.1. Given an edge-weighted directed planar graph G with terminals T, Subset TSP
parameterized by k = |T| can be solved in time 2^{O(√k log k)} n^{O(1)}.
The proof of Theorem 1.1 has the same high-level idea as the algorithm of Klein and Marx [29]:
a family of 2^{O(√k log k)} subsets of terminals is computed, followed by applying a variant of the
Bellman-Held-Karp dynamic programming algorithm that considers only subsets of terminals that
appear in this family. However, the way we compute such a family is very different: the construction
of Klein and Marx [29] crucially relies on how the optimal solution interacts with the locally 4-step
optimal solution (e.g., they cross each other O(k) times), while our argument here does not use
any such assumption. For directed graphs, we can extract far fewer properties of the structure
of the solution or of how it interacts with some other object. For example, we cannot require that
the optimum solution is non-self-crossing, and the number of its self-crossings cannot even be bounded
by a function of k. Thus in order to find an algorithm working on directed graphs, we need to
use more robust algorithmic ideas that better explain why it is possible to have subexponential
parameterized algorithms for this problem. In Section 2, we highlight these new ideas in an overview
of the algorithm of Theorem 1.1.
Given an edge-weighted undirected graph G and a set T of terminal vertices, Steiner Tree asks
for a minimum-weight tree connecting all the terminals. This problem is well known to be NP-hard,
even in planar graphs [24]. Dreyfus and Wagner [16] gave a dynamic programming algorithm that
solves Steiner Tree in time 3^k · n^{O(1)} in arbitrary graphs. The running time of this algorithm
was improved to 2^k · n^{O(1)} for small weights using fast subset convolution [37]. It is known that,
assuming ETH, there is no 2^{o(k)} · n^{O(1)}-time algorithm for the problem in general graphs, and in fact
it is conjectured that the 2^k factor cannot be improved to (2 − ε)^k for any ε > 0 [10].
In light of the long list of other subexponential parameterized problems on planar graphs, it is
natural to expect that Steiner Tree can be solved in 2^{O(√k log k)} · n^{O(1)} time on planar graphs. In
fact, this question has been posed as a natural open problem in various places [6, 11, 19, 34, 38, 39]. As
partial progress toward this goal, in the unweighted case, subexponential algorithms parameterized
by the number of edges of the solution and the number of nonterminal vertices were found [38, 39, 41].
However, the number of edges can of course be much larger than the number of terminals, hence an
algorithm that is subexponential in the number of edges is not necessarily subexponential in the
number of terminals. We show here that there was a reason why, despite significant efforts, no such
algorithm was found so far: assuming ETH, there is no subexponential parameterized algorithm for
Steiner Tree on planar graphs.
Theorem 1.2. Unless the ETH fails, there is no 2^{o(k)} · n^{O(1)}-time algorithm for Steiner Tree
on an unweighted and undirected planar graph with k = |T| terminals.
Thus unlike many other problems, Steiner Tree parameterized by the number of terminals
does not become dramatically simpler with the restriction of planarity. This is highly unexpected:
Steiner Tree seems to be the first “genuinely planar” problem where there is no significant speedup
when restricted to planar graphs, and the 2^{O(k)} factor for arbitrary graphs cannot be improved. The
informal expression “genuinely planar” emphasizes the fact that the input of Steiner Tree is a planar
graph with a distinguished subset of vertices, and there is no other extra, nonplanar information
encoded in the input. For example, it was known before that Directed Steiner Network (given
a directed graph G and requests (s_1, t_1), . . . , (s_k, t_k), find a subgraph of minimum weight that
contains an s_i → t_i path for every i) can be solved in time n^{O(k)} on general graphs, and there is no
f(k)n^{o(k)}-time algorithm even on planar graphs [9]. However, this problem is not genuinely planar,
as the pairs (s_i, t_i) can encode arbitrary connections that do not respect planarity.
Theorem 1.2 makes the previous subexponential algorithms (including Theorem 1.1 for Subset
TSP on directed graphs) even more surprising: apparently there is no general rule why these
problems should have subexponential parameterized algorithms on planar graphs, and it could have
turned out for other problems as well that planarity does not allow any dramatic speedup. This
negative result also changes our expectations for future work in this direction: we cannot take it for
granted that most reasonable problems have subexponential parameterized algorithms on planar
graphs and now it seems to be a very real possibility that other natural problems behave similarly
to Steiner Tree.
We need some explanation of how it was possible to prove Theorem 1.2: earlier we argued that
such lower bounds seem very unlikely, because one would need O(n^2) crossings when reducing from
a 3SAT instance with O(n) variables and O(n) clauses. In the proof of Theorem 1.2, we do
something unusual: in the created planar instance, we not only introduce O(n) gadgets, each
representing one bit of the 3SAT instance (as is usually done in reductions), but we also introduce
gadgets representing larger groups of bits. The crucial trick is that we can create crossings
where an information flow of one bit crosses the information flow of many bits, and this crossing
increases the parameter only by O(1). With such crossings, the total number of crossing gadgets
can be limited and we can make sure that the parameter becomes O(n). The catch is that the
reduction is no longer polynomial: the representation of large groups of bits requires a planar graph
that is exponentially large. However, we can argue that a subexponential parameterized algorithm
on this exponentially large graph would result in a 2^{o(n)} algorithm for 3SAT, violating ETH.
The reduction in the proof of Theorem 1.2 is a “hybrid” reduction in the sense that it combines
different proof strategies. In typical NP-hardness proofs, one constructs small gadgets that represent
one bit of information or have a constant number of different states. In typical W[1]-hardness proofs,
the challenge is to construct large gadgets that can have many different states (corresponding to,
say, the choice of a vertex in a clique). The proof of Theorem 1.2 combines these two ideas: we
need both small gadgets representing single bits and large gadgets having many different states.
Additionally, we use the idea of splitting the variables into groups and allowing a blowup of the size
of the instance by a factor that is exponential in the size of the groups (as is done in, e.g., [8, 31]).
Thus our reduction combines in a novel way many of the insights that we have learned in the past
decades about proving lower bounds on the exact complexity of hard algorithmic problems.
Our final result shows that there is still some way in which subexponentiality appears for
planar Steiner Tree. On a high level, the proof of this theorem follows the same approach as a
corresponding result for rectilinear Steiner tree [20].
Theorem 1.3. Given an edge-weighted planar undirected graph G with n vertices and a set T ⊆ V(G)
of terminals, one can find a minimum-cost Steiner tree for T in time n^{O(√|T|)} · W, where W is the
maximum weight of an edge.
Note that this running time is incomparable to the 2^k · n^{O(1)} time available for general graphs.
It is known that unweighted Steiner Tree in planar graphs admits a polynomial kernel when
parameterized by the number of edges in the solution [39]. A natural question is whether this can
be improved to a polynomial kernel parameterized by the number of terminals. While we cannot
answer this question here, a simple combination of Theorems 1.2 and 1.3 implies that, assuming
ETH, there is no kernelization algorithm that produces a polynomial kernel that preserves the
number of terminals.
Corollary 1.4. Suppose there is a polynomial-time algorithm that, given an unweighted planar
Steiner Tree instance (G, T) and an integer k, computes another unweighted planar Steiner
Tree instance (G′, T′) and an integer k′, such that |T′| = O(|T|), |G′| is bounded polynomially in |T|,
and (G, T) admits a Steiner tree of size at most k if and only if (G′, T′) admits a Steiner tree of
size at most k′. Then the ETH fails.
2 Directed Traveling Salesman: overview
In this section we give an overview of the approach leading to the subexponential parameterized
algorithm for Directed Subset TSP, that is, the proof of Theorem 1.1. We first describe the
high-level strategy of restricting the standard dynamic programming algorithm to a smaller family
of candidate states. Then we explain the main idea of how such a family of candidate states can be
obtained; however, we introduce multiple simplifying assumptions and hide most of the technical
problems. Finally, we briefly review the issues encountered when making the approach work in full
generality, and explain how we cope with them. We strongly encourage the reader to read this
section before proceeding to the formal description, as in the formal layer many of the key ideas
become somewhat obfuscated by the technical details surrounding them.
2.1 Restricted dynamic programming
Restricting dynamic programming to a small family of candidate states is by now a commonly
used technique in parameterized complexity. The idea is as follows. Suppose that we search for a
minimum-cost solution to a combinatorial problem, and this search can be expressed as solving a
number of subproblems in a dynamic programming fashion, where each subproblem corresponds
to a state from a finite state space S. Usually, subproblems correspond to partial solutions, and
transitions between states correspond to extending one partial solution to a larger partial solution
at some cost, or combining two or more partial solutions to a larger one. For simplicity, assume for
now that we only extend single partial solutions to larger ones, rather than combine multiple partial
solutions. Then the process of assembling the final solution from partial solutions may be described
as a nondeterministic algorithm that guesses consecutive extensions, leading from a solution to the
most basic subproblem to the final solution for the whole instance. The sequence of these extensions
is a path (called also a computation path) in a directed graph on S where the transitions between
the states are the arcs. Then the goal is to find a minimum-weight path from the initial state to any
final state, which can be done in time linear in the size of this state graph, provided it is acyclic.
In order to improve the running time of such an algorithm one may try the following strategy.
Compute a subset of states S′ ⊆ S with the following guarantee: there is a computation path
leading to the discovery of a minimum-weight solution that uses only states from S′. Then we may
restrict the search only to states from S′. So the goal is to find a subset of states S′ that is rich
enough to “capture” some optimum solution, while at the same time being as small as possible so
that the algorithm is efficient.
Let us apply this principle to Directed Subset TSP. Consider first the most standard dynamic
programming algorithm for this problem, working on general graphs in time 2^k · n^{O(1)}, where we
denote k = |T| by convention. Each subproblem is described by a subset of terminals S ⊆ T and
two terminals s_1, s_2 ∈ S. The goal in the subproblem is to find the shortest tour that starts in s_1,
ends in s_2, and visits all terminals of S along the way. The transitions are modelled by the possibility
of extending a solution for the state (S, s_1, s_2) to a solution for the state (S ∪ {s′}, s_1, s′) for any
s′ ∉ S, at the cost of adding the shortest path from s_2 to s′. The minimum-weight tour can be
obtained by taking the best among the solutions obtained as follows: for any s_1, s_2 ∈ T, take the solution
for the subproblem (T, s_1, s_2) and augment it by adding the shortest path from s_2 to s_1.
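For concreteness, a minimal sketch of this standard dynamic program (our own rendering; it assumes a precomputed table dist[u][v] of shortest-path distances between terminals, which takes polynomial time to build):

def subset_tsp(k, dist):
    """k terminals indexed 0..k-1; dist[u][v] is the shortest-path distance
    from terminal u to terminal v in G (possibly asymmetric). Returns the
    weight of a shortest closed walk visiting all terminals."""
    INF = float("inf")
    dp = {}
    for s in range(k):
        dp[(frozenset([s]), s, s)] = 0          # tour starts and ends at s
    for size in range(1, k):
        layer = [key for key in dp if len(key[0]) == size]
        for (S, s1, s2) in layer:
            base = dp[(S, s1, s2)]
            for s_new in range(k):
                if s_new in S:
                    continue
                key = (S | {s_new}, s1, s_new)  # append s_new to the tour
                if base + dist[s2][s_new] < dp.get(key, INF):
                    dp[key] = base + dist[s2][s_new]
    full = frozenset(range(k))
    return min(dp[(full, s1, s2)] + dist[s2][s1]  # close the tour
               for s1 in range(k) for s2 in range(k)
               if (full, s1, s2) in dp)

# Example with three terminals and an asymmetric distance table:
dist = [[0, 1, 4], [2, 0, 1], [1, 3, 0]]
print(subset_tsp(3, dist))  # prints 3: the tour 0 -> 1 -> 2 -> 0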
This is not the dynamic programming algorithm we will be improving upon. The reason is
that restricting ourselves to constructing one interval on the tour at a time makes it difficult to
enumerate a small subfamily of states capturing an optimum solution.
Instead, we consider a more involved variant of the above dynamic programming routine, which
intuitively keeps track of O(√k) intervals on the tour at a time. More precisely, each subproblem is
described by a state defined as a pair (S, M), where S ⊆ T is a subset of terminals to be visited,
and M (also called a connectivity pattern) is a set of pairwise disjoint pairs of terminals from S, where
|M| ≤ C√k for some universal constant C. The goal in the subproblem is to compute a family of
paths P_{(S,M)} of minimum possible weight having the following properties: for each (s_1, s_2) ∈ M
there is a path in P_{(S,M)} that leads from s_1 to s_2, and each terminal from S lies on some path in
P_{(S,M)}. Note, however, that we do not specify, for each terminal from S, on which of the paths it
has to lie. Solutions to such subproblems may be extended by single terminals as in the standard
dynamic programming, but they can also be combined in pairs. Precisely, given solutions P_1 and
P_2 respectively for (S_1, M_1) and (S_2, M_2), where S_1 ∩ S_2 = ∅, we may merge these solutions into a
solution for (S_1 ∪ S_2, M) by connecting paths from P_1 and P_2 using shortest paths between respective
start and end vertices, so that the connectivity pattern M is obtained at the end. Since we assume
that |M_1|, |M_2|, |M| ≤ C√k, there are only k^{O(√k)} ways to perform such a merge. While this
dynamic programming algorithm formally does not conform to the “linear view” described in the
paragraphs above, as it may merge partial solutions for two simpler states into a larger partial
solution, it is straightforward to translate the concept of restricting the state space to a small subset that
preserves the existence of a computation path (in this setting, rather a computation tree) leading to
a minimum-cost solution.
Observe that since in a state (S, M) we stipulate the size of M to be O(√k), the total number
of states with a fixed subset S ⊆ T is k^{O(√k)}. Thus, from the discussion above we may infer the
following lemma, stated here informally.
Lemma 2.1 (Lemma 5.22, informal statement). Let (G, T) be an instance of Directed Subset
TSP. Suppose we are also given a family B of subsets of T with the following guarantee: there is a
computation path of the above dynamic programming leading to an optimum solution that uses only
states of the form (S, M) where S ∈ B. Then we can find an optimum solution for the instance
(G, T) in time k^{O(√k)} · (|B| · |G|)^{O(1)}.
Concluding, we are left with constructing a family B of subsets of T that satisfies the prerequisites
of Lemma 2.1 and has size k^{O(√k)}, provided the underlying graph G is planar. For this, we will
crucially use topological properties of G given by its planar embedding.
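Schematically, the restriction behind Lemma 2.1 works as in the following sketch (ours, with hypothetical callback names); merges of pairs of partial solutions are elided, and the point is only that no state (S, M) with S outside B is ever created.

import heapq, itertools

def solve_restricted(B, initial, transitions):
    """B: set of frozensets of terminals (the allowed subsets S); initial:
    dict mapping state -> cost; transitions(state) yields (next_state, extra)
    pairs, where a state is a pair (S, M). Dijkstra-style worklist, valid
    because all transition costs are nonnegative."""
    table = dict(initial)
    tiebreak = itertools.count()       # avoids comparing states in the heap
    heap = [(c, next(tiebreak), s) for s, c in initial.items()]
    heapq.heapify(heap)
    while heap:
        cost, _, state = heapq.heappop(heap)
        if cost > table.get(state, float("inf")):
            continue                   # stale heap entry
        for nxt, extra in transitions(state):
            S, M = nxt
            if S not in B:             # the crucial restriction
                continue
            if cost + extra < table.get(nxt, float("inf")):
                table[nxt] = cost + extra
                heapq.heappush(heap, (cost + extra, next(tiebreak), nxt))
    return table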
2.2 Enumerating candidate states
Suppose (G, T ) is the input instance of Directed Subset TSP where G is planar. Without loss
of generality we may assume that G is strongly connected. Fix some optimum solution W , which is
a closed walk in the input graph G that visits every terminal.
We now introduce a number of simplifying assumptions. These assumptions are made with loss
of generality, and we introduce them in order to present our main ideas in a setting that is less
obfuscated by technical details.
(A1) Walk W is in fact a simple directed cycle, without any self-intersections. In particular, the
embedding of W in the plane is a closed curve without self-crossings; denote this curve by δ.
(A2) Walk W visits every terminal exactly once, so that we may speak about the (cyclic) order of
visiting terminals on W .
We will also assume that shortest paths are unique in G, but this can be easily achieved by perturbing
the weights of edges of G slightly.
Suppose now that we have another closed curve γ in the plane, without self-crossings, that
crosses δ in p = O(√k) points, none of which is a terminal. Curve γ divides the plane into two
regions, say R_1 and R_2, and thus δ is divided into p intervals which are alternately contained in R_1
and R_2. Let S be the set of terminals visited on the intervals contained in R_1. Then it is easy to see
that S is a good candidate for a subset of terminals that we are looking for: S forms at most O(√k)
contiguous intervals in the order of visiting terminals by W, and hence for the connectivity pattern
M consisting of the first and last terminals on these intervals, the state (S, M) would describe a
subproblem useful for discovering W.
However, we are not really interested in capturing one potentially useful state, but in enumerating
a family of candidate states that contains a complete computation path leading to the discovery of
an optimum solution. Hence, we rather need to capture a hierarchical decomposition of T using
curves γ as above, so that the terminal subsets S induce the sought computation path. For this, we will
use the notion of sphere-cut decompositions of planar graphs, and the well-known fact that every
k-vertex planar graph admits a sphere-cut decomposition of width O(√k).
A branch decomposition of a graph G is a tree T with every internal node having degree 3,
together with a bijection η from the edges of G to the leaves of T. For every edge e of T, the removal
of e from T splits T into two subtrees, say T_e^1 and T_e^2. This naturally induces a partition {F_e^1, F_e^2}
of the edge set of G as follows: F_e^1 comprises the edges mapped by η to leaves residing in T_e^1, and
symmetrically for T_e^2. The width of the edge e is the number of vertices of G incident both to an
edge of F_e^1 and to an edge of F_e^2, and the width of a branch decomposition (T, η) is the maximum
width among its edges. The branchwidth of a graph G is the minimum possible width of a branch
decomposition of G. It is well-known that a planar graph on k vertices has branchwidth O(√k)
(see e.g. [22]).
After rooting a branch decomposition (T, η) in any node, it can be viewed as a hierarchical
decomposition of the edge set of G using vertex cuts of size bounded by the width of the decomposition.
Robertson et al. [40] proved that in planar graphs we can always find an optimum-width branch
decomposition that somehow respects the topology of the plane embedding of a graph. Precisely,
having fixed a plane embedding of a connected graph G, call a closed curve γ in the plane a noose if
γ has no self-crossings and it crosses the embedding of G only at vertices; in particular it does not
intersect any edge of G.¹ Such a curve γ divides the plane into two regions, which naturally induces
a partition of the edge set of G into the edges that are embedded in the first, respectively the second,
region. A sphere-cut decomposition of G is a branch decomposition (T, η) where in addition every
edge e of T is assigned a noose γ(e) that induces exactly the partition {F_e^1, F_e^2} of the edge set
of G in the sense above. Then the result of Robertson et al. [40] may be stated as follows: every
connected planar graph has a sphere-cut decomposition of width equal to its branchwidth.² Together
with the square-root behavior of the branchwidth of a planar graph, this implies the following.
Theorem 2.2 (see e.g. [22]). Every connected plane graph on k vertices has a sphere-cut decomposition of width at most α√k, for some constant α.
Moreover, such a sphere-cut decomposition can be computed in polynomial time [15, 25].
Turning back to our Directed Subset TSP instance (G, T) and its optimum solution W, our
goal is to enumerate a possibly small family of subsets of T that contains some complete computation
path leading to the discovery of W. The idea for this will be as follows. Take any minimal tree H_0
in the underlying undirected graph of G spanning all terminals of T. We may assume that H_0
contains at most k leaves that are all terminals, at most k − 2 vertices of degree at least 3, and
otherwise it consists of at most 2k − 3 simple paths connecting these leaves and vertices of degree at
least 3 (further called special vertices of H_0). To avoid technical issues and simplify the picture, we
introduce another assumption.
(A3) Walk W and tree H_0 do not share any edges.
Let H be the graph formed by the union of W and H_0. Even though both W and H_0 consist of
at most 2k simple paths in G, the graph H may have many vertices of degree more than 3. This is
because a subpath Q between two consecutive terminals on W and a path P in H_0 that connects two
special vertices of H_0 may cross many times. The intuition is, however, that the planar structure of
H roughly resembles the structure of a planar graph on O(k) vertices, and a sphere-cut decomposition
of this planar graph of width O(√k) should give rise to the sought hierarchical partition of terminals
leading to the discovery of W by the dynamic programming algorithm.
Let us remark that, of course, the definition of the graph H relies on the (unknown to the
algorithm) solution W, though the tree H_0 can be fixed and used by the algorithm. At the end
we will argue that, having fixed H_0, we may enumerate a family of k^{O(√k)} candidates for nooses in
a sphere-cut decomposition of H. Roughly, for each such noose γ we consider the bi-partition of
terminals according to the regions of the plane minus γ in which they lie, and we put all terminal
subsets constructed in this manner into a family B, which is of size k^{O(√k)}. Then restricting
the dynamic programming algorithm to B in the sense of Lemma 2.1 gives us the required time
complexity.
¹ In standard literature, e.g. [40], a noose is moreover required to visit every face of G at most once; in this paper
we do not impose this restriction.
² In [40] it is also assumed that the graph is bridgeless, which corresponds to the requirement that every face is
visited by a noose at most once. It is easy to see that in the absence of this requirement it suffices to assume the
connectivity of the graph.
Therefore, the goal is to simplify the structure of H so that it admits a sphere-cut decomposition
of width O(√k). Consider any pair of terminals t_1, t_2 visited consecutively on W, and let P be the
subpath of W from t_1 to t_2. Consider contracting all internal vertices on P into a single vertex,
thus turning P into a path P′ on 2 edges and 3 vertices. Let H′ be the graph obtained from H
by contracting each path between two consecutive terminals on W in the manner described above.
Observe that thus H′ has less than 3k vertices of degree at least 3: there are at most 2k vertices
on the contracted W in total, and there can be at most k − 2 vertices of degree at least 3 on H_0
that do not lie on W. If we now take H′ and contract every maximal path with internal vertices of
degree 2 into a single edge, we turn H′ into a planar graph H″ on at most 3k vertices. Then H″
has a sphere-cut decomposition of width at most α√(3k), say (T, η, γ(·)).
Consider the family D of subsets of terminals constructed as follows. For each noose γ(e), e ∈ T,
that is, each noose appearing in the sphere-cut decomposition (T, η, γ(·)), and each partition (X, Y)
of the terminals traversed by γ(e) (there are at most α√(3k) such terminals, so 2^{O(√k)} such partitions),
add to D the following two terminal subsets: the set of terminals enclosed by γ(e) plus X, and the
set of terminals excluded by γ(e) plus Y. It can now be easily seen that D contains a complete
computation path that we are looking for, as each terminal subset included in D forms at most
O(√k) contiguous intervals in the cyclic order of terminals on W, and the decomposition tree T
shows how our dynamic programming should assemble subsets appearing in D in pairs up to the
whole terminal set. In other words, if we manage to construct a family B of size k^{O(√k)} with a
guarantee that it contains the whole D, then we will be done by Lemma 2.1.
Obviously, the graph H″ is not known to the algorithm, as its definition depends on the fixed
optimum solution W. Nevertheless, we may enumerate a reasonably small family of candidates for
the nooses used in its sphere-cut decomposition (T, η, γ(·)). The main idea is that even though the
full structure of H″ cannot be guessed in one shot within k^{O(√k)} possibilities, each noose we are
interested in traverses only at most α√(3k) vertices of H″, and hence it is sufficient to guess only
this small portion of H″.
More precisely, let Q be the subset of those vertices of H″ that are obtained from contracting
the subpaths of W between consecutive terminals. Fix a noose γ appearing in the sphere-cut
decomposition of H″, that is, γ = γ(e) for some e ∈ T. Then γ traverses at most α√(3k) vertices of
Q; say that R ⊆ Q is the set of these vertices. We can now enumerate a set of k^{O(√k)} candidates
for γ as follows (by guessing we mean iterating through all options):
(1) Guess a set R of at most α√(3k) pairs of distinct terminals.
(2) For each (s, t) ∈ R, take the shortest path P_{(s,t)} from s to t and consider contracting it to a
single vertex p_{(s,t)}.
(3) Take the fixed tree H_0 that spans the terminals in G, apply the above contractions in G, and let
H_R be the graph to which H_0 is transformed under these contractions.
(4) Enumerate all nooses γ that meet H_R only at terminals and vertices of degree at least 3, and
traverse at most α√(3k) such vertices.
In Step 1 we have at most k^{O(√k)} options for such a set R, and the contractions in Steps 2
and 3 turn H_0 into a planar graph H_R with O(k) vertices. It is not hard to convince oneself that in
such a graph, there are only k^{O(√k)} nooses satisfying the property expressed in Step 4, so all
in all we enumerate at most k^{O(√k)} curves in the plane, each traversing at most α√(3k) terminals.
Now, to conclude the construction of B we do as follows. For each enumerated curve γ, and each
partition (X, Y) of the terminals traversed by γ, we include in B two terminal subsets: the set of
terminals enclosed by γ plus X, and the set of terminals excluded by γ plus Y. Thus |B| = k^{O(√k)}.
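As a small illustration of this last assembly step, the following sketch (our own helper names) turns one enumerated curve into the terminal subsets contributed to B:

from itertools import combinations

def subsets_for_curve(inside, outside, on_curve):
    """inside/outside: terminals enclosed by / excluded by the curve gamma;
    on_curve: the at most alpha*sqrt(3k) terminals traversed by gamma.
    For every bipartition (X, Y) of on_curve, contributes the subsets
    inside + X and outside + Y."""
    result = []
    on_curve = list(on_curve)
    for r in range(len(on_curve) + 1):
        for X in combinations(on_curve, r):
            Y = [t for t in on_curve if t not in X]
            result.append(frozenset(inside) | frozenset(X))
            result.append(frozenset(outside) | frozenset(Y))
    return result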
It remains to argue that B contains the whole family D that was given by the sphere-cut
decomposition (T, η, γ(·)) of H″, so that Lemma 2.1 may be applied. It should be quite clear that
it is sufficient to show that every noose γ appearing in (T, η, γ(·)) is somehow enumerated in Step 4
of the procedure from the previous paragraph. However, nooses with respect to H_R are formally
not necessarily nooses with respect to H″, as we would want. Nevertheless, if a noose γ appears in the
sphere-cut decomposition (T, η, γ(·)) of H″, and we take R to be the set of pairs of consecutive
terminals on W such that γ passes through the contracted vertices p_{(s,t)} exactly for (s, t) ∈ R, then
after dropping the parts of H″ not appearing in H_R, γ becomes a noose enumerated for H_R. Therefore,
the terminal partitions raised by γ are still included in B as we wanted, and we are done.
2.3 Traps, issues, and caveats
The plan sketched in the previous section essentially leads to an algorithm with the promised
time complexity, modulo Assumptions A1, A2, A3, and a number of technical details of minor
relevance. Assumptions A2 and A3 are actually quite simple to achieve without loss of generality.
It is Assumption A1 that was a major conceptual obstacle.
For Assumption A2, we may at the very beginning perform the following reduction. For every
original terminal t, introduce a new terminal t′ and edges (t, t′) and (t′, t) of weight 0 to the graph;
t′ and these edges are embedded in any face incident to t. The new terminal set consists of the terminals
t′ for all original terminals t. In this way, any closed walk visiting a new terminal t′ has to make
a detour of weight 0 using the arcs (t, t′) and (t′, t), and we may assume that an optimal solution makes
only one such detour for each new terminal t′. Thus we achieve Assumption A2; moreover, the
fact that we may assume that every terminal is incident to one non-trivial face and one trivial face
between (t, t′) and (t′, t) also helps in solving technical issues later on.
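In code, this preprocessing could look as follows (a minimal sketch with a hypothetical arc-list representation, not the paper's implementation):

def split_terminals(arcs, terminals):
    """arcs: list of (tail, head, weight) triples; terminals: iterable of
    vertices. Adds a fresh copy t' of each terminal t, joined to t by two
    zero-weight arcs, and returns the new arc list and new terminal set."""
    new_arcs = list(arcs)
    new_terminals = []
    for t in terminals:
        t_copy = (t, "copy")            # fresh vertex standing for t'
        new_arcs.append((t, t_copy, 0))
        new_arcs.append((t_copy, t, 0))
        new_terminals.append(t_copy)
    return new_arcs, new_terminals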
For Assumption A3, observe that in the reasoning we relied only on the fact that H_0 is a tree
spanning all terminals that has at most k leaves and at most k − 2 vertices of degree at least 3. In
particular, we did not use any metric properties of H_0. In fact, the reader may think of H_0 as a
combinatorial object used to control the homotopy group of the plane with the terminals pierced out:
for any non-self-crossing curve γ in the plane, we may infer how the terminals are partitioned into those
enclosed by γ, excluded by γ, and lying on γ, by just examining the consecutive intersections of γ with
H_0. Therefore, instead of choosing H_0 arbitrarily, we may add it to the graph artificially at the
very beginning, say using edges of weight +∞. In this way we make sure that the optimum solution
W does not use any edge of H_0.
Finally, let us examine Assumption A1: the optimum solution W is a simple directed cycle
without self-intersections. Unfortunately, this assumption may not hold in general. Consider the
example depicted in Figure 1, where we have a directed planar graph with two terminals s, t, and
the only closed walk visiting both s and t consists of two paths, one from s to t and the second from
t to s, that intersect each other an unbounded number of times. Therefore, in general the optimum
solution W may have an unbounded number of self-crossings. Nevertheless, we may still develop
some kind of a combinatorial understanding of the topology of W .
It will be convenient to assume that no edge of the graph is traversed by W more than once;
this can be easily achieved by copying each edge |T | times, and using a different copy for each
traversal. Consider two visits of the same vertex u by W ; let e1 , e2 be the edges incident to u used
by W just before and just after the first visit, and define f_1, f_2 in the same way for the second visit.
Examine how e_1, e_2, f_1, f_2 are arranged in the cyclic order of edges around the vertex u. If they appear
in an interlacing order, i.e., (e_1, f_1, e_2, f_2) or (e_1, f_2, e_2, f_1), then we say that these two visits form
a self-crossing of W. Intuitively, if the order is not interlacing, then we may slightly pull apart the two
parts of the embedding of W near u corresponding to the visits so that they do not intersect, so
topologically we do not consider such a self-intersection to be a self-crossing. For two walks W_1, W_2 in
G that do not share any common edges we define their crossing in a similar manner, as a common
visit of a vertex u such that the cyclic order of the edges used by W_1 and W_2 immediately before and
immediately after these visits is interlacing.
Figure 1: A planar Directed Subset TSP instance with two terminals. The only solution consists
of the union of the red path from s to t and the blue path from t to s. These two paths cross each
other many times, which gives many self-crossings of the solution.
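Returning to the interlacing condition, here is a minimal sketch (ours, not the paper's) of the test: the pairs (e_1, e_2) and (f_1, f_2) interlace around u exactly when one of f_1, f_2 lies strictly inside the clockwise arc from e_1 to e_2 and the other does not.

def interlaces(cyclic_order, e1, e2, f1, f2):
    """cyclic_order: the edges around vertex u in clockwise order."""
    pos = {e: i for i, e in enumerate(cyclic_order)}
    d = len(cyclic_order)

    def strictly_between(x):  # is x strictly inside the clockwise arc e1 -> e2?
        return 0 < (pos[x] - pos[e1]) % d < (pos[e2] - pos[e1]) % d

    return strictly_between(f1) != strictly_between(f2)

# Around u in order a, b, c, d: (a, c) and (b, d) interlace; (a, b) and (c, d) do not.
print(interlaces(["a", "b", "c", "d"], "a", "c", "b", "d"))  # True
print(interlaces(["a", "b", "c", "d"], "a", "b", "c", "d"))  # False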
We now prove the following structural statement about self-crossings of W : we may always
choose an optimal solution W so that the following holds. Consider any self-crossing of W at some
vertex u (recall it consists of two visits of u) and say it divides W into two closed subwalks W1 and
W2 : W1 is from the first visit of u to the second, and W2 is from the second visit of u to the first.
Then the subwalks W1 and W2 do not cross at all. This statement can be proved by iteratively
“uncrossing” an optimum solution W as long as the structure of its self-crossings is too complicated.
However, one needs to be careful in order not to split W into two closed curves when uncrossing.
It is not hard to observe that the statement given in the previous paragraph actually shows that
the topology of W roughly resembles a cactus where each 2-connected component is a cycle (here,
we assume that self-intersections that are not self-crossings are pulled slightly apart so that W does
not touch itself there). See the left panel of Figure 2 in Section 5.1 for reference. Then we show (see
Lemma 5.7) that W can be decomposed into O(k) subpaths P = {B_1, . . . , B_ℓ} such that:
• each path Bi has no terminal as an internal vertex and is the shortest path between its
endpoints; and
• each path Bi may cross with at most one other path Bj .
To see this, note that the cactus structure of W may be described as a tree T with at most k leaves
and at most k − 2 vertices of degree at least 3. We have a pair of possibly crossing subpaths in the
decomposition P per each maximal path with internal vertices of degree 2 in T .
The idea now is as follows. In the previous section we essentially worked with the partition of
W into subpaths between consecutive terminals, as Assumption A1 allowed us to do so. In the
absence of this assumption, we work with the finer partition P as above. The fact that the paths of
P interact with each other only in pairs, and in a controlled manner, makes the whole reasoning go
through with the conceptual content essentially unchanged, but with a lot more technical details.
One nontrivial difference is that in the previous section we were contracting shortest paths
between pairs of consecutive terminals, so we had a small set of candidates for the endpoints of
these paths: the terminals themselves. In the general setting, the decomposition statement above
a priori does not give us any small set of candidates for the endpoints of the paths B_i. If we chose
those endpoints as arbitrary vertices of the graph, we would end up with time complexity n^{O(√k)} instead
of the promised k^{O(√k)} · poly(n). Fortunately, the way we define the decomposition P = {B_1, . . . , B_ℓ}
allows us to construct alongside it a set of at most k^4 important vertices such that each path B_i
is the shortest path from one important vertex to another important vertex.
Finally, there are further technical problems regarding the handling of possible self-intersections of W
that are not self-crossings. Recall that in our topological view of W, we would like not to regard
such self-intersections as places where W touches itself. In particular, when examining a sphere-cut
decomposition of the union of W and H_0 after appropriate contractions, the nooses in this sphere-cut
decomposition should not see such self-intersections as vertices through which they may or should
travel. A resolution to this problem is to consider a “blow-up” of the original graph where each
vertex is replaced by a large well-connected “cloud” of vertices, and each edge is replaced by a large
matching of parallel edges leading from one cloud to another. Walks in the original graph naturally
map to walks in the blow-up. Every original self-crossing maps to a self-crossing, and every original
self-intersection that is not a self-crossing is actually “pulled apart”: there is no self-intersection at
this place anymore. This blow-up has to be performed at the very beginning of the proof, and hence
we need to work with it throughout the whole reasoning. This technical layer is conceptually
simple, but contributes to the technical complexity of the argumentation.
3 Preliminaries
Throughout the paper we denote [n] = {1, 2, . . . , n} for any positive integer n.
We will consider directed or undirected planar graphs G with a terminal set T ⊆ V(G) and a
weight function ω_G : E(G) → Z_{≥0}; we omit the subscript if it is clear from the context. Furthermore,
we assume that G does not contain loops, but it may contain multiple edges or arcs with the same
endpoints.
We assume that shortest paths in the input instance G are unique. Since we do not analyze the
polynomial factor in the running time bound of our algorithms, this can be ensured in a standard
manner by replacing the weight ω(e) of the i-th edge/arc with ω(e) · n^{|E(G)|+1} + n^i. Let P_G(u, v) be
the shortest path from u to v in G.
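A minimal sketch of this standard perturbation (our own rendering):

def perturb(weights, n):
    """weights: integer weights of edges e_1, ..., e_m (1-indexed here);
    n: number of vertices. Replaces omega(e_i) by omega(e_i)*n^(m+1) + n^i.
    The perturbation terms along any simple path sum to less than n^(m+1),
    so comparisons between paths of different original weight are preserved,
    while distinct edge sets get distinct totals, making shortest paths unique."""
    m = len(weights)
    return [w * n ** (m + 1) + n ** i for i, w in enumerate(weights, start=1)]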
We also assume that every terminal t ∈ T has only one neighbor w_t, with two arcs (w_t, t)
and (t, w_t) of weight 0 in the directed setting, or a single edge tw_t of weight 0 in the undirected
setting. To obtain such a property, for every terminal t′ ∈ T we can make a copy t of it, connect t and
t′ with an edge, or with arcs in both directions, of weight 0, and rename w_t = t′. The new terminal set
is the set of the copies of the old terminals. Note that this property implies that every minimal
solution to the Steiner Tree problem contains every terminal as a leaf, and that we can consider only
solutions to the Directed Subset TSP problem that visit every terminal exactly once.
A walk in a directed graph G is a sequence (e1 , . . . , ep ) of edges of G such that the head of ei is
the tail of ei+1 , for all i = 1, . . . , p − 1. A walk is closed if additionally the head of ep is equal to the
tail of e1 . The weight of a walk is the sum of the weights of its edges.
For a walk W that visits every terminal exactly once, a permutation π = (t_1, t_2, . . . , t_{|T|}) of
T is a witnessing permutation of W if it is exactly the (cyclic) order of the terminals visited by
W. A closed walk W is a T-simple walk if it visits every terminal exactly once and the subwalks
of W between the consecutive terminals are actually simple paths. A T-simple walk is T-short
if additionally the subwalks between the consecutive terminals are shortest paths between their
endpoints. Note that a minimum-weight solution to the Directed Subset TSP problem is a
T-short walk.
Blow-up of the graph. While working with a directed graph G and the Directed Subset
TSP problem, we modify the graph G into its blow-up as follows: for every edge e of G that is not
incident to any terminal, we replace it with |T | copies with the same head and tail and the same
weight. These copies are embedded in the plane in place of the original edge in the natural manner
so that they are parallel to each other; we say that they form a parallel bunch and that they are
parallel to each other. Note that each bunch is linearly ordered so that every two consecutive edges
form a 2-face. The relation of being parallel naturally extends to paths and walks in G. To simplify
notation, we will also consider every arc incident to a terminal (i.e., an arc of the form (t, wt ) or
(wt , t) for a terminal t) as a parallel bunch on its own.
Replacing a graph G with its blow-up breaks the property of having unique shortest paths, but
only in a limited fashion: if G had the unique-shortest-paths property prior to the blow-up, then
after the blow-up, for every two vertices u, v ∈ V(G), every two shortest paths from u to v are
parallel. By slightly abusing the notation, we will still say that G has unique shortest paths even if
the shortest paths are unique only up to the choice of parallel paths.
After the blow-up, call a walk W in G clean if each edge of G is used by W at most once. Recall
that every T-simple walk W in G consists of |T| simple paths in G, so in particular each parallel
bunch in G is traversed at most |T| times by W. Hence, it is straightforward to modify any T-simple
walk into a parallel clean one, and hence we can consider only T-simple walks in G that are clean.
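As an illustration, a minimal sketch (ours, with a hypothetical arc-list representation) of the blow-up:

def blow_up(arcs, terminals, k):
    """arcs: list of (tail, head, weight) triples; k = |T|. Every arc not
    incident to a terminal is replaced by k parallel copies forming a bunch;
    an arc incident to a terminal forms a bunch on its own."""
    new_arcs = []
    for bunch_id, (u, v, w) in enumerate(arcs):
        copies = 1 if (u in terminals or v in terminals) else k
        for j in range(copies):
            new_arcs.append((u, v, w, (bunch_id, j)))  # tag: bunch and position
    return new_arcs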
In the rest of the paper, we call an instance of Steiner Tree or Directed Subset TSP
preprocessed if it has undergone all the preprocessing steps outlined in this section.
Nooses and branch decompositions. Given a plane graph G, a noose is a closed curve without
self-intersections that meets the drawing of G only in vertices. Contrary to some other sources in
the literature, we explicitly allow a noose to visit one face multiple times; however, each vertex is
visited at most once.
A branch decomposition of a graph G is a pair (T , ζ) where T is an unrooted ternary tree and
ζ is a bijection between the leaves of T and the edges of G. For every edge e ∈ E(T ), we define
the cut (or middle set) mid(e) ⊆ V (G) as follows: if T1 and T2 are the two components of T − e,
then v ∈ mid(e) if v is incident both to an edge corresponding to a leaf in T1 and to an edge
corresponding to a leaf in T2 . The width of a decomposition is the maximum size of a middle set,
and the branchwidth of a graph is the minimum width of a branch decomposition of the graph.
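For concreteness, a small sketch (ours, not from the paper) that computes the width of a given branch decomposition directly from this definition:

def branch_decomposition_width(tree_adj, leaf_edge):
    """tree_adj: adjacency dict of the unrooted ternary tree T (nodes must be
    comparable); leaf_edge: dict mapping each leaf of T to the graph edge
    (u, v) assigned to it by zeta. Returns max |mid(e)| over edges e of T."""
    def side_vertices(root, blocked):
        # Endpoints of graph edges whose leaves lie in the component of
        # `root` in T minus the tree edge {root, blocked}.
        stack, seen, verts = [root], {root, blocked}, set()
        while stack:
            x = stack.pop()
            if x in leaf_edge:
                verts.update(leaf_edge[x])
            for y in tree_adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return verts

    width = 0
    for a in tree_adj:
        for b in tree_adj[a]:
            if a < b:  # visit each tree edge once
                mid = side_vertices(a, b) & side_vertices(b, a)
                width = max(width, len(mid))
    return width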
It is well known that planar graphs have sublinear branchwidth.
Theorem 3.1 (see e.g. [22]). Every planar graph on n vertices has branchwidth bounded by √(4.5n).
In planar graphs, one can compute good branch decompositions, where the cuts mid(e) correspond
to nooses. More formally, a triple (T , ζ, γ) is an sc-branch decomposition (for sphere-cut branch
decomposition) if (T , ζ) is a branch decomposition and for every e ∈ E(T ), γ(e) is a noose that
traverses the vertices of mid(e) and separates the edges corresponding to the leaves of the two
components of T − e from each other.
We need the following result of Seymour and Thomas [40], with the algorithmic part following
from [15, 25].
Theorem 3.2 ([15, 25, 40]). Given a connected plane graph G, one can in time O(|V(G)|^3) compute
an sc-branch decomposition of G of width equal to the branchwidth of G.
We remark that in [15, 25, 40] one considers nooses that can visit every face at most once,
which makes it necessary to also assume that the graph is bridgeless; see e.g. [36]. It is easy to see
that without this assumption on nooses, one can extend the theorem also to connected graphs
with bridges by first decomposing them into bridgeless components, and then decomposing each such
component separately.
4 Nooses
Let G be a plane (directed or undirected) graph with terminal set T and let γ be a noose in G that
visits at most ℓ vertices. In this section we show that if ℓ ≪ |T|, then there are far fewer than 2^{Θ(|T|)}
ways in which the noose can partition the terminal set.
More formally, we think of the planar embedding of G as a spherical one (i.e., without a distinguished outer face), and with a noose γ we associate a tri-partition of the terminal set (T_0, {T_1, T_2}),
where T_0 is the set of terminals that lie on γ, and T_1 and T_2 are the sets of terminals that lie in the
two components of the sphere minus γ. Since we consider spherical embeddings and the two sides
of γ are symmetric, the pair {T_1, T_2} is an unordered pair.
Our main claim in this section is that there are only |T|^{O(ℓ)} reasonable partitions for nooses
visiting at most ℓ vertices.
Lemma 4.1. Assume we are given a plane connected graph G with a terminal set T and an integer ℓ.
Then one can in time |T|^{O(ℓ)} n^{O(1)} compute a family A of |T|^{O(ℓ)} partitions of T such that, for
every noose of G that visits at most ℓ vertices, its corresponding tri-partition of the terminals belongs
to A.
Proof. The crucial observation is that deleting an edge or a vertex from G only increases the family
of curves in the plane that are nooses with respect to G. Consequently, if we replace G with any its
connected subgraph that spans all the terminals, and enumerate a family of partitions satisfying the
lemma statement for this subgraph, then the same family will be also a valid output for the original
graph G. Thus, by restricting attention to an inclusion-wise minimal subtree spanning all terminals,
without loss of generality we may assume that G is a tree and every its leaf is a terminal.
Let S be the set of special vertices in G: terminals and vertices of degree at least 3. Clearly,
every leaf of G is in S and |S| < 2|T |. Then G decomposes into |S| − 1 paths Q1 , Q2 , . . . , Q|S|−1
such that each path Qi has both endpoints in S but no internal vertices in S.
Construct now a graph G′ from G by replacing every path Q_i with a path Q′_i with the same drawing in the plane, but with exactly ℓ internal vertices. Furthermore, for every noose γ in G that visits at most ℓ vertices of G, construct its shift γ′, a noose with respect to G′, as follows: for every path Q_i, move all intersections of γ with the internal vertices of Q_i to distinct internal vertices of Q′_i, keeping the relative order of the intersections along the paths Q_i and Q′_i the same. Since Q′_i has ℓ internal vertices, this is always possible. Furthermore, we can obtain γ′ from γ by local modifications within close neighborhoods of the paths Q_i, but not near their endpoints. Consequently, the partitions of the terminals induced by γ and γ′ are the same.
Observe now that γ′ is a noose with respect to a tree with fewer than 2|T|(ℓ + 1) vertices. With every intersection of γ′ with G′, say at a vertex v, we associate three pieces of information: the vertex v itself, between which pair of edges incident with v the noose γ′ entered v, and between which pair of edges it left v. Since there are only O(|T|ℓ) = O(|T|²) choices for every piece of information, there are only |T|^{O(ℓ)} possible combinatorial representations of γ′, defined as the sequence of the aforementioned triples at every vertex traversed by γ′, in the order of a walk along γ′. Finally, observe that, since the single face of G′ is homeomorphic to a disc, knowing the combinatorial representation of γ′ is sufficient to deduce the tri-partition of the terminals induced by γ′. This finishes the proof.
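To make the counting step concrete, here is a minimal sketch, assuming the tree is given by a vertex-to-degree map, that enumerates the combinatorial representations used in the proof: sequences of at most ℓ triples (vertex, entry gap, exit gap). Decoding a representation into the actual tri-partition requires the planar embedding and is omitted; all names are illustrative.

```python
# Sketch: enumerate combinatorial representations of nooses, i.e. sequences
# of at most ell triples (vertex, entry gap, exit gap).  `tree` maps each
# vertex to its degree; the gaps between consecutive incident edges are
# indexed 0..deg-1.
from itertools import product

def representations(tree, ell):
    triples = [(v, g_in, g_out)
               for v, deg in tree.items()
               for g_in in range(max(deg, 1))
               for g_out in range(max(deg, 1))]
    for length in range(1, ell + 1):
        # |triples|^length = |T|^{O(ell)} sequences in total
        for seq in product(triples, repeat=length):
            yield seq

tiny_tree = {1: 1, 2: 3, 3: 1, 4: 1}  # a star with center 2
print(sum(1 for _ in representations(tiny_tree, 2)))  # 12 + 12**2 = 156
```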
5 An algorithm for the Directed TSP problem
In this section we provide a full proof of Theorem 1.1.
5.1 Cleaning
Crossings and cleaning. Suppose W = (e_1, ..., e_p) is a clean closed walk in G. In the following, we assume the cyclic order of edges on closed walks, that is, e_{p+1} = e_1. A visit of a vertex v in G is an index i ∈ {1, ..., p} such that the head of e_i (equivalently, the tail of e_{i+1}) is equal to v. Note that one vertex may have multiple visits on W. Suppose now W′ = (e′_1, ..., e′_{p′}) is another clean closed walk in G that does not share any edges with W. A crossing of W and W′ is a pair of indices (i, j) such that the head of e_i is the same vertex v as the head of e′_j (that is, i is a visit of v on W and j is a visit of v on W′), and the clockwise order of the edges e_i, e_{i+1}, e′_j, e′_{j+1} in the plane around v is interlacing: it is e_i, e′_j, e_{i+1}, e′_{j+1} or e_i, e′_{j+1}, e_{i+1}, e′_j (of course, cyclically). A self-crossing of W is defined in the same manner, but we consider two different visits of the same vertex v on W.
We now show that the crossing pattern of every clean closed walk can be simplified to a
“cactus-like” structure, as described next.
Definition 5.1. Suppose W = (e1 , . . . , ep ) is a clean closed walk in G and suppose (i, j) is a
self-crossing of W . Let W1 and W2 be the two clean closed walks obtained by splitting W at this
self-crossing, that is, W1 = (ei+1 , . . . , ej ) and W2 = (ej+1 , . . . , ei ). The clean closed walk W is called
reduced if for any self-crossing (i, j) of W , the clean closed walks W1 , W2 do not cross.
Lemma 5.2. For every clean closed walk W in G there exists a reduced closed walk Wrd that
traverses exactly the same set of edges as W . Furthermore, given W , such a walk Wrd can be
computed in polynomial time.
Proof. Suppose (f, g) is a pair of edges of G such that the head of f is the same as the tail of g; call this vertex v. Let (e_1, e_2, ..., e_d) be the edges of G incident to v in the clockwise order around v, and let 1 ≤ i, j ≤ d, i ≠ j, be such that f = e_i and g = e_j. Define the span of (f, g), denoted span(f, g), as the value |i − j|² + (d − |i − j|)². Note that this value does not depend on the choice of the enumeration (e_1, e_2, ..., e_d), which is unique up to a cyclic shift. For a clean closed walk W, define its span span(W) as the sum of the spans of pairs of consecutive edges on W.
We first show that “uncrossing” a crossing strictly increases the span.
Claim 5.3. Suppose v is a vertex of G, edges f1 , f2 have v as the head, edges g1 , g2 have v as the
tail, and edges f1 , f2 , g1 , g2 appear in this order in the clockwise order of edges incident to v. Then
span(f1 , g1 ) + span(f2 , g2 ) < span(f1 , g2 ) + span(f2 , g1 ).
Proof. Let (e_1, e_2, ..., e_d) be the edges of G incident to v in the clockwise order around v. By choosing the cyclic shift of this order appropriately, we can assume w.l.o.g. that there are indices 1 ≤ i_1 < i_2 < j_1 < j_2 ≤ d such that f_1 = e_{i_1}, f_2 = e_{i_2}, g_1 = e_{j_1}, and g_2 = e_{j_2}. Then the claim is equivalent to the inequality

(j_1 − i_1)² + (j_2 − i_2)² + (d − (j_1 − i_1))² + (d − (j_2 − i_2))² < (j_1 − i_2)² + (j_2 − i_1)² + (d − (j_1 − i_2))² + (d − (j_2 − i_1))².
After opening the brackets and straightforward algebraic manipulations, this is equivalent to

i_1 j_1 + i_2 j_2 > i_1 j_2 + i_2 j_1.

This, in turn, is equivalent to (i_1 − i_2)(j_1 − j_2) > 0, which holds due to i_1 < i_2 and j_1 < j_2. □
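Claim 5.3 is also easy to sanity-check by brute force; the following sketch verifies the inequality for every interlacing quadruple of positions up to a small degree d (the function span mirrors the definition above).

```python
# Sketch: brute-force check of Claim 5.3.  span(i, j, d) is the span of a
# pair of edges occupying positions i and j in the cyclic order of the
# d edges around a vertex.
def span(i, j, d):
    return abs(i - j) ** 2 + (d - abs(i - j)) ** 2

for d in range(4, 12):
    for i1 in range(1, d + 1):
        for i2 in range(i1 + 1, d + 1):
            for j1 in range(i2 + 1, d + 1):
                for j2 in range(j1 + 1, d + 1):
                    assert span(i1, j1, d) + span(i2, j2, d) \
                         < span(i1, j2, d) + span(i2, j1, d)
print("Claim 5.3 verified for all d < 12")
```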
We proceed with the proof of the lemma. We give a polynomial-time procedure that given W
and a self-crossing (i, j) that witnesses that W is not reduced, outputs a clean closed walk W 0 that
traverses exactly the same set of edges as W , but W 0 has a strictly larger span than W . Since
the span of a clean walk is integral and bounded polynomially by the size of G, by applying this
procedure exhaustively after a polynomial number of iterations we eventually obtain a reduced
closed walk that can be output.
Suppose then that W = (e_1, ..., e_p) has some self-crossing (i, j) such that the subwalks W_1 = (e_{i+1}, ..., e_j) and W_2 = (e_{j+1}, ..., e_i) obtained by splitting W at (i, j) do cross. Observe that the set of pairs of consecutive edges on W is almost exactly the same as the union of the sets of pairs of consecutive edges on W_1 and W_2, except that in this union we have the pairs (e_i, e_{j+1}) and (e_j, e_{i+1}) instead of (e_i, e_{i+1}) and (e_j, e_{j+1}). Since (i, j) is a self-crossing of W, by Claim 5.3 we infer that

span(W_1) + span(W_2) > span(W).    (1)
Now, for t = 1, 2 let p_t be the length of W_t (thus p = p_1 + p_2), and let us enumerate W_t = (e^t_1, ..., e^t_{p_t}) so that e^1_k = e_{i+k} for k = 1, ..., p_1 and e^2_k = e_{j+k} for k = 1, ..., p_2. We assumed that W_1 and W_2 cross, so let (c_1, c_2) be any crossing of theirs, where (c_1, c_2) ∈ {1, ..., p_1} × {1, ..., p_2}. Obtain a closed walk W′ by cutting W_1 at position c_1, cutting W_2 at position c_2, and gluing them together. Formally,

W′ = (e^1_{c_1+1}, e^1_{c_1+2}, ..., e^1_{c_1}, e^2_{c_2+1}, e^2_{c_2+2}, ..., e^2_{c_2}).
Note that W′ is a clean closed walk that uses exactly the same edges as W. Observe further that the set of pairs of consecutive edges on W′ is almost exactly the same as the union of the sets of pairs of consecutive edges on W_1 and W_2, except that on W′ we have the pairs (e^1_{c_1}, e^2_{c_2+1}) and (e^2_{c_2}, e^1_{c_1+1}) instead of (e^1_{c_1}, e^1_{c_1+1}) and (e^2_{c_2}, e^2_{c_2+1}). Since (c_1, c_2) is a crossing of W_1 and W_2, by Claim 5.3 we again infer that

span(W′) > span(W_1) + span(W_2).    (2)

By combining (1) and (2) we infer that span(W′) > span(W), which concludes the proof.
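The splitting and gluing in this proof are purely index-based operations on the cyclic edge sequence. A minimal sketch follows (0-based indices, i < j); detecting a witnessing self-crossing requires the planar embedding and is abstracted away as an assumed oracle find_witness.

```python
# Sketch of the uncrossing loop from the proof of Lemma 5.2.  W is a list
# of edges; find_witness(W) is an assumed oracle returning ((i, j), (c1, c2))
# for a self-crossing (i, j) whose split walks cross at (c1, c2), or None
# if W is already reduced.

def split(W, i, j):
    """Split W at self-crossing (i, j): W1 = W[i+1..j], W2 = W[j+1..i]."""
    return W[i + 1:j + 1], W[j + 1:] + W[:i + 1]

def splice(W1, c1, W2, c2):
    """Glue at crossing (c1, c2): W' = (e1_{c1+1},...,e1_{c1}, e2_{c2+1},...,e2_{c2})."""
    return W1[c1 + 1:] + W1[:c1 + 1] + W2[c2 + 1:] + W2[:c2 + 1]

def make_reduced(W, find_witness):
    while (w := find_witness(W)) is not None:
        (i, j), (c1, c2) = w
        W1, W2 = split(W, i, j)
        W = splice(W1, c1, W2, c2)  # strictly increases span(W)
    return W
```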
Pushing crossings. We now proceed to analyzing T -simple walks. Recall that each T -simple
walk W in G is clean due to the blow-up operation. Note that every T -simple walk in G can be
made reduced using Lemma 5.2.
For technical reasons, we will need some further normalization of T -simple walks. Intuitively, we
want the property that whenever two subpaths between consecutive terminals cross, they cross as
early as possible.
Definition 5.4. A reduced T -simple walk W in G is first-cross reduced if for every two different
subpaths P and Q of W between consecutive terminals on W , and every crossing of P and Q, the
edges on P and Q that precede this crossing are not parallel.
Lemma 5.5. For every reduced T -simple walk W in G there exists a first-cross reduced T -simple
walk Wfc that traverses exactly the same set of edges as W . Furthermore, given W , such a walk Wfc
can be computed in polynomial time.
Proof. Let W = (e_1, ..., e_p), and for i ∈ [p] let u_i be the head of e_i. For each index i, let the load of i, denoted τ(i), be the smallest number t such that u_{i−t} is a terminal (recall that indices behave cyclically). Define the load of W to be

τ(W) = Σ_{(i,j) : crossing of W} (τ(i) + τ(j)).
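For concreteness, the load is directly computable from the cyclic sequence of heads of the edges of W; a minimal sketch, assuming the self-crossings have already been detected (which requires the embedding) and that W visits at least one terminal:

```python
# Sketch: compute the load tau(W) of a closed walk.  heads[i] is the head
# of edge e_i (0-based, cyclic); terminals is a set of vertices; crossings
# is an assumed, precomputed list of self-crossing index pairs.

def load(heads, terminals, crossings):
    p = len(heads)

    def tau(i):
        # smallest t such that the head of e_{i-t} is a terminal
        t = 0
        while heads[(i - t) % p] not in terminals:
            t += 1
        return t

    return sum(tau(i) + tau(j) for i, j in crossings)
```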
We give a polynomial-time procedure that given a reduced T -simple walk W in G that is not
first-cross, outputs a reduced T -simple walk W 0 that traverses exactly the same set of edges as
W , but W 0 has a strictly smaller load than W . Since the load of a T -simple walk is integral,
nonnegative, and bounded polynomially by the size of G, by applying this procedure exhaustively
after a polynomial number of iterations we eventually obtain a first-cross reduced T -simple walk
that can be output.
Since W is not first-cross reduced, there is a self-crossing (i, j) of W where the edges e_i, e_j are parallel. Denote v = u_i = u_j. Examine the parallel bunch to which e_i and e_j belong, and recall that this bunch is linearly ordered. Among the self-crossings of W not at terminals and where the edges preceding the crossing are parallel, we choose (i, j) to be the one that minimizes the number of edges on W that are parallel to e_i and e_j and lie between them in their parallel bunch. This minimization criterion yields the following.
Claim 5.6. None of the edges parallel to ei and ej that are between them in their parallel bunch is
traversed by W .
Proof. For the sake of contradiction, suppose there is some edge e_k on W, where k ∉ {i, j}, such that e_k is parallel to e_i and e_j and appears between them in their parallel bunch. Since (i, j) is a self-crossing of W at v, and in the cyclic order of edges around v all edges of the parallel bunch of e_i, e_j appear consecutively and in the same order as in their bunch, it follows that either (i, k) or (j, k) is a self-crossing with both edges preceding the crossing parallel. However, in each of these two cases, the number of parallel edges between the edges preceding the crossing would be strictly smaller. This contradicts the choice of (i, j). □
Consider now the closed walk W′ obtained from W by swapping the edges e_i and e_j; that is, W′ = (e′_1, ..., e′_p), where e′_k = e_k for k ∉ {i, j}, e′_i = e_j, and e′_j = e_i. As e_i, e_j are parallel, it is clear that W′ is a T-simple walk in G. To see that W′ is reduced, it suffices to observe that, by Claim 5.6, the set of self-crossings of W′ is exactly the same as the set of self-crossings of W, except that the self-crossing (i, j) is replaced with the self-crossing (i − 1, j − 1). Finally, since the edges e_i, e_j are parallel, we have that v is not a terminal, as terminals have exactly one incoming arc. Hence, we have τ(i − 1) = τ(i) − 1 and τ(j − 1) = τ(j) − 1. It follows that τ(W′) = τ(W) − 2, hence we are done.
Decomposing T -simple walks. Our goal now is to show that every reduced T -simple walk in
G can be decomposed into a small number of paths that interact with each other only in a limited
way. Moreover, for future use in the algorithm we will require that provided the said T -simple walk
is first-cross and T -short, the endpoints of the paths in the decomposition belong to some subset of
the vertices of size polynomial in the number of terminals. More precisely, each endpoint will be
important in the sense defined next.
First, we declare each terminal important. Next, we consider every quadruple of terminals t_1, t_2, s_1, s_2, where t_1 ≠ t_2, s_1 ≠ s_2, t_1 ≠ s_1, and t_2 ≠ s_2. Choose shortest paths P_G(t_1, t_2) and P_G(s_1, s_2), always picking the leftmost parallel edge in every traversed bunch. Let F be the set of edges traversed by both these paths. On P_G(t_1, t_2), the edges of F form a family 𝓕 of intervals. Take any such interval I ∈ 𝓕, which is a subpath of P_G(t_1, t_2), and call it crossing if the following holds: after contracting I to a single vertex, the two edges preceding and succeeding I on P_G(t_1, t_2), and the two edges preceding and succeeding I on P_G(s_1, s_2), appear in interlacing order around the contracted vertex, just as in the definition of a crossing. If there are intervals in 𝓕 that are crossing, or P_G(t_1, t_2) simply crosses P_G(s_1, s_2) without sharing any edges incident to the crossing point, choose the crossing interval or the crossing that appears earliest on P_G(t_1, t_2), and declare important the first (or only, in case it is just a crossing) vertex visited there by P_G(t_1, t_2).
Thus, the number of important vertices in G is at most |T |4 , which is polynomial in |T |. We
can now state the decomposition lemma.
Lemma 5.7. Every reduced T-simple walk W in G can be decomposed into ℓ < 9|T| subpaths B_1, ..., B_ℓ such that the following conditions are satisfied:
(a) no path B_i contains any terminal as an internal vertex,
(b) each path B_i has a crossing with at most one other path B_j, and
(c) there are fewer than 25|T| self-crossings of W that are not crossings of paths B_i and B_j, for some i, j ∈ [ℓ], i ≠ j.
Furthermore, if W is first-cross reduced and T-short, then the decomposition B_1, ..., B_ℓ may be chosen so that the following additional condition is satisfied:
(d) each path B_i is a shortest path leading from one important vertex to another important vertex.
Proof. We first focus on defining the decomposition so that conditions (a)–(c) are satisfied. At the end we shall argue why the defined decomposition also satisfies condition (d) in case W is first-cross reduced and T-short.
Let W = (e1 , . . . , ep ). As W is a T -simple walk, let (t1 , . . . , t|T | ) be the witnessing permutation
of the terminals, that is, W is the concatenation of the simple paths P1 , . . . , P|T | such that each Pi
leads from ti to ti+1 . For each j ∈ {1, . . . , |T |} we choose index ij ∈ {1, . . . , p} such that Pj is equal
to the subwalk (eij +1 , . . . , eij+1 ) of W .
Create an auxiliary graph H on the vertex set x_1, ..., x_p, where x_i can be thought of as a copy of the head of the edge e_i (we also say that x_i corresponds to the head of e_i). In H, we put an edge between x_i and x_{i+1} for each i = 1, 2, ..., p (where x_{p+1} = x_1), and moreover, for each self-crossing (i, j) of W, we put an edge between x_i and x_j. The latter edges, corresponding to self-crossings, are called internal. Note that since each terminal is visited exactly once on W, the vertices x_{i_1}, ..., x_{i_{|T|}} are the only vertices among x_1, ..., x_p that correspond to terminals.
Claim 5.8. The graph H is outerplanar and has an outerplanar embedding where the cycle
(x1 , . . . , xp ) is the boundary of the outer face. Moreover, each vertex xij , for j ∈ [|T |], has degree 2
in H.
Proof. To see that H admits the desired outerplanar embedding, it suffices to show that there are no indices i < i′ < j < j′ such that both (i, j) and (i′, j′) are self-crossings of W. However, if this were the case, then the self-crossing (i′, j′) would yield a crossing of the closed walks W_1 and W_2 obtained by splitting W at the self-crossing (i, j). Since W is reduced, this cannot happen.
To see that the vertex x_{i_j}, corresponding to the terminal t_j, has degree 2 in H, observe that otherwise x_{i_j} would be incident to some internal edge of H. This means that W would have a self-crossing at t_j, but W visits each terminal at most once; a contradiction. □
Fix an outerplanar embedding of H as in Claim 5.8. Let S be a graph with vertex set consisting
of the inner faces of H, where two faces are considered adjacent if and only if they share an edge in
H. Since H is outerplanar and connected, it follows that S is a tree.
Consider now any leaf f of S. Then the boundary of f consists of one edge of H corresponding
to some self-crossing (i, j) of W , say at vertex v of G, and a subpath Qf of the cycle (x1 , . . . , xp ) in
H. For leaves f of S, the subpaths Qf are pairwise edge disjoint.
Figure 2: The original closed walk W (left panel) and the outerplanar graph H constructed based on W (right panel). Terminals are depicted by yellow squares, the tree S is depicted in blue, the cycle (x_1, ..., x_p) is depicted using solid gray edges, while dashed gray edges are the internal edges of H. Special edges are colored orange, while red lines depict the places where we put dividing points for defining blocks. They correspond to the vertices depicted by red circles in the left panel, which are important in case W is T-short and first-cross reduced.
Claim 5.9. For each leaf f of S, the subpath Qf contains at least one vertex xij , for some j ∈ [|T |],
as an internal vertex. Consequently, the tree S has at most |T | leaves.
Proof. For the first claim, observe that Q_f corresponds to a closed subwalk W_f of W obtained by splitting W at a self-crossing. Observe that W_f cannot be entirely contained in any of the paths P_j, since W_f visits v twice whereas a simple path cannot visit any vertex more than once. Hence, Q_f contains some vertex x_{i_j} as an internal vertex. The second claim follows by noting that the paths Q_f are pairwise edge-disjoint for different leaves f of S, and there are |T| vertices x_{i_j}. □
Observe that in the duality of H and S, the edges of S are the dual edges of the internal edges of H. Slightly abusing notation, we identify each internal edge of H with its dual edge in S.
We now define the set I ⊆ E(S) of special edges of S as follows. First, for each vertex f of S of degree at least 3 in S, we mark all edges incident to f as special. Second, for each vertex x_{i_j}, for j ∈ [|T|], we find the unique index h_j ∈ [p] such that none of the vertices x_{i_j}, ..., x_{h_j−1} is incident to any internal edge of H, but x_{h_j} is incident to such an edge. Then there is a unique internal edge of H that is incident both to x_{h_j} and to the internal face of H on which x_{i_j} lies (this face is unique since x_{i_j} has degree 2 in H). We mark this internal edge as special as well.
Claim 5.10. There are fewer than 4|T| special edges in S.
Proof. It is well known that in every tree with at most k leaves, the total number of edges incident to vertices of degree at least 3 is at most 3k − 6. Hence, since S has at most |T| leaves by Claim 5.9, fewer than 3|T| edges of S were marked as special in the first step of marking. In the second step of marking we mark one edge per terminal, so the total upper bound of fewer than 4|T| follows. □
We divide the walk W = (e_1, ..., e_p) into blocks as follows. For any i ∈ [p], declare x_i a dividing point if either x_i corresponds to a terminal (i.e., i = i_j for some j ∈ [|T|]) or x_i is an endpoint of a special edge. Then blocks are maximal subwalks of W that do not contain any dividing points as internal vertices. More precisely, the sequence (e_{i+1}, e_{i+2}, ..., e_{i′}) is a block if both x_i and x_{i′} are dividing points, but none of the vertices x_{i+1}, ..., x_{i′−1} is a dividing point. It is clear that the blocks form a partition of W into fewer than 9|T| subwalks, as there are fewer than 9|T| dividing points by Claim 5.10. Let B_1, ..., B_ℓ be the obtained blocks. It suffices to verify that the decomposition B_1, ..., B_ℓ has all the required properties. Observe that condition (a) holds trivially, as we explicitly took all visits of terminals by W as dividing points.
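Cutting the walk into blocks is then a simple index manipulation; a minimal sketch, assuming the sorted list of dividing-point indices has already been computed:

```python
# Sketch: cut the closed walk W = (e_1, ..., e_p) into blocks at the
# dividing points.  `dividing` is the assumed, sorted list of indices i such
# that x_i is a dividing point; each block runs from one dividing point
# (exclusive) to the next (inclusive), wrapping around cyclically.

def blocks(W, dividing):
    p, out = len(W), []
    for a, b in zip(dividing, dividing[1:] + [dividing[0] + p]):
        out.append([W[k % p] for k in range(a + 1, b + 1)])
    return out

W = list("abcdefgh")           # edges e_1..e_8, here 0-based
print(blocks(W, [1, 4, 6]))    # [['c','d','e'], ['f','g'], ['h','a','b']]
```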
For condition (b), let D_i be the subpath of the cycle (x_1, ..., x_p) in H that corresponds to the block B_i, for i = 1, ..., ℓ. Note that every crossing of paths B_i and B_{i′}, for i ≠ i′, is also a self-crossing of W that corresponds to an internal edge of H connecting an internal vertex of D_i with an internal vertex of D_{i′}. Fix now some block B_i; we want to argue that there is at most one other block B_{i′} such that B_i and B_{i′} cross.
Observe that every connected component of S − I, the tree S with the special edges removed, either consists of one vertex that is a leaf or a vertex of degree at least 3 in S, or is a path consisting only of vertices of degree 2 in S. This is because any edge incident to a leaf of S is always marked as special by Claim 5.9. By the construction of blocks, the set of internal faces of H incident to the edges of D_i can be spanned by a subtree of S that does not contain any special edge. Consequently, either all the edges of D_i are incident to the same internal face of H (and hence they form an interval on its boundary), or there is a path R in S, consisting only of vertices of degree 2 connected by non-special edges, such that all the edges of D_i are incident to the faces on this path. In the former case, B_i does not cross any other block B_j, as all internal vertices of D_i have degree 2 in H. In the latter case, it is easy to see that all the edges of the cycle (x_1, ..., x_p) that are incident to some non-endpoint face of R but do not lie on D_i are in fact in the same subpath D_{i′} for some i′ ≠ i. Then all internal edges of H incident to the internal vertices of D_i have their second endpoint on D_{i′}, so B_{i′} is the only block that may cross B_i. This establishes condition (b).
For condition (c), we shall use the following claim.
Claim 5.11. Suppose e1 , e2 are two internal edges of H incident to the same vertex xi , and also
incident to a common internal face of H. Then at least one of e1 , e2 is special.
Proof. Let f be the internal face of H incident to both e_1 and e_2. Obviously the degree of f in S is at least 2 due to the edges e_1, e_2, and we may assume that it is exactly 2, since otherwise both e_1 and e_2 would be special. Consequently, e_1, e_2 are the only internal edges of H incident to f, so the boundary of f consists of e_1, e_2, and some subpath L of the cycle (x_1, ..., x_p) whose internal vertices all have degree 2 in H. We may assume that none of the vertices traversed by L corresponds to a terminal, as otherwise either e_1 or e_2 would be marked as special. In particular, this implies that all the edges of L belong to the same block.
Let x_{i_1} and x_{i_2} be the second endpoints of e_1 and e_2, different from x_i, respectively. Then both (i, i_1) and (i, i_2) are self-crossings of W, and x_i, x_{i_1}, x_{i_2} correspond to the same vertex v of G. However, we have already argued that each block is entirely contained in one path P_j, for some j ∈ [|T|], so x_{i_1} and x_{i_2} would correspond to two visits of the same vertex v within the same path P_j, which is simple, as W is a T-simple walk. This is a contradiction. □
Observe now that the self-crossings of W that are not crossings of two distinct blocks are exactly those self-crossings (i, i′) for which either x_i or x_{i′} is a dividing point. Hence, to give an upper bound on the number of such self-crossings, it suffices to estimate the number of internal edges of H incident to a dividing point. Take any dividing point x_i and examine the internal edges of H incident to x_i in the cyclic order around it, as in the (outerplanar) embedding of H. Then by Claim 5.11, out of any two consecutive ones, at least one is special. It follows that if x_i is incident to d special edges, then it is incident to at most 2d + 1 internal edges of H. Since every special edge has two endpoints, it follows that the total number of internal edges incident to a dividing point is upper bounded by the number of dividing points plus four times the number of special edges. These quantities are smaller than 9|T| and 4 · 4|T|, respectively, so the upper bound of fewer than 25|T| follows.
Thus, we have verified conditions (a), (b), and (c), so we are left with verifying condition (d), assuming additionally that W is T-short and first-cross reduced. Since W is T-short, each path P_j is the shortest path from t_j to t_{j+1}, so each of its subpaths is also the shortest path between its endpoints. This implies that each block B_i is the shortest path between its endpoints, so it remains to show that these endpoints are important. To this end, we will use the following claim.
Claim 5.12. Suppose we have indices 1 ≤ j, j′ ≤ |T|, j ≠ j′. Suppose further that on the subpath (x_{i_j}, x_{i_j+1}, ..., x_{i_{j+1}}), the vertex x_k is the first one that is adjacent in H to any of the vertices x_{i_{j′}}, x_{i_{j′}+1}, ..., x_{i_{j′+1}} via an internal edge of H. Then x_k corresponds to an important vertex in G.
Proof. It is easy to see that if x_k corresponds to a vertex v, then v is included in the set of important vertices when the quadruple of terminals (t_j, t_{j+1}, t_{j′}, t_{j′+1}) is considered. This is because W is first-cross reduced, so when the crossing between the subpaths from t_j to t_{j+1} and from t_{j′} to t_{j′+1} that is first on the former path corresponds to a crossing interval on P_G(t_j, t_{j+1}), the actual crossing in W occurs at the first vertex of this interval. □
We now proceed with the verification that all the dividing points used in the definition of blocks correspond to important vertices. This is explicit for terminals, so we are left with verifying it for the endpoints of special edges. Suppose that an internal edge e = (x_i, x_{i′}) of H is special. Then x_i and x_{i′} correspond to the same vertex v of G such that (i, i′) is a self-crossing of W at v. We have two cases, depending on why e was marked as special.
Suppose first that e was marked as special due to being incident to some internal face f of H of degree at least 3 in S. This means that in S, f has at least two other incident edges; let e^1 and e^2 be the edges incident to f that directly precede and succeed e in the counter-clockwise order of the edges of S incident to f; here, we assume that the cycle (x_1, ..., x_p) is oriented counter-clockwise in the plane. Further, suppose without loss of generality that e^1, x_i, e, x_{i′}, e^2 appear in this counter-clockwise order on the boundary of the face f. Now, let j^1 ∈ [|T|] be such that on the subpath (x_{i_{j^1}}, x_{i_{j^1}+1}, ..., x_i) no internal vertex corresponds to a terminal, and similarly let j^2 ∈ [|T|] be such that on the subpath (x_{i′}, x_{i′+1}, ..., x_{i_{j^2}}) no internal vertex corresponds to a terminal. Observe that since each leaf f′ of S has a vertex corresponding to a terminal among the internal vertices of Q_{f′} (Claim 5.9), the vertices x_{i_{j^1}}, x_{i_{j^2−1}}, and x_{i_{j^2}} lie on the following parts of the cycle (x_1, ..., x_p):
• denoting e^1 = x_{r_1^1} x_{r_2^1}, where x_{r_1^1}, x_{r_2^1}, and x_i lie in this order on (x_1, ..., x_p), we have that x_{i_{j^1}} is an internal vertex of (x_{r_1^1}, ..., x_i);
• x_{i_{j^2−1}} is an internal vertex of (x_i, ..., x_{i′}); and
• denoting e^2 = x_{r_1^2} x_{r_2^2}, where x_{i′}, x_{r_1^2}, and x_{r_2^2} lie in this order on (x_1, ..., x_p), we have that x_{i_{j^2}} is an internal vertex of (x_{i′}, ..., x_{r_2^2}).
In particular, the vertices x_{i_{j^1}}, x_{i_{j^2−1}}, and x_{i_{j^2}} are pairwise different, and moreover e is the internal edge of H connecting (x_{i_{j^1}}, x_{i_{j^1}+1}, ..., x_{i_{j^1+1}}) with (x_{i_{j^2−1}}, x_{i_{j^2−1}+1}, ..., x_{i_{j^2}}) that has the earliest possible endpoint on the former path. The fact that v is then important follows from applying Claim 5.12 to j = j^1 and j′ = j^2 − 1.
Suppose now, without loss of generality, that e was marked special due to the following situation: i = h_j for some terminal t_j, and e is the unique edge incident to x_i that is also incident to the internal face f of H on whose boundary x_{i_j} lies. Then on the subpath (x_{i_j}, x_{i_j+1}, ..., x_i), all vertices apart from x_i itself have degree 2 in H, so in particular they are not incident to any internal edge of H. Suppose now that j′ ∈ [|T|] is such that on the subpath (x_{i_{j′}}, x_{i_{j′}+1}, ..., x_{i′}) no internal vertex corresponds to a terminal. By Claim 5.9 it is easy to see that j ≠ j′. Moreover, from the previous observation it follows that x_i is the earliest vertex on (x_{i_j}, x_{i_j+1}, ..., x_{i_{j+1}}) that is adjacent to any vertex of (x_{i_{j′}}, x_{i_{j′}+1}, ..., x_{i_{j′+1}}) via an internal edge of H, because the earlier vertices are not incident to any internal edges at all. The fact that v is then important follows from applying Claim 5.12 to j and j′.
5.2 Enumerating subsets of a clean walk
Our main technical result, proved in this section, is that any reduced T-short walk can be hierarchically decomposed using closed curves of “complexity” |T|^{O(√|T|)}. We first formalize what we mean by a decomposition. In this section C ≥ 1 is a sufficiently large universal constant, whose value will emerge from the proof.
Definition 5.13. Let W be a T-short walk and let π_W = (t_1, t_2, ..., t_{|T|}) be a witnessing permutation. A set A ⊆ T is a good section of (W, π_W) if A can be partitioned into at most C√|T| subsets that form contiguous subsequences of π_W.
A good decomposition of W and π_W is a pair (T, β) where T is a rooted binary tree and β : V(T) → 2^T is a function with the following properties:
(1) β(v) is a good section of (W, π_W) for every v ∈ V(T);
(2) β(r) = T for the root r of T;
(3) every non-leaf node v of T has two children v_1, v_2 with β(v_1) ∩ β(v_2) = ∅ and β(v) = β(v_1) ∪ β(v_2);
(4) every leaf node v of T satisfies |β(v)| ≤ C√|T|.
Note that both T and every set A ⊆ T of size at most C√|T| is always a good section, regardless of the choice of W and π_W.
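Testing whether a given set is a good section amounts to counting the maximal cyclic runs of π_W that lie inside it; a minimal sketch:

```python
# Sketch: test whether A is a good section of the witnessing permutation pi
# (a list of terminals in visiting order), i.e. whether A splits into at
# most C*sqrt(|T|) contiguous (cyclic) subsequences of pi.
from math import sqrt

def is_good_section(A, pi, C=1.0):
    inside = [t in A for t in pi]
    if all(inside):
        return True  # A = T is a single cyclic run
    # count positions where a run of A-elements starts; inside[-1] wraps
    runs = sum(1 for i in range(len(pi))
               if inside[i] and not inside[i - 1])
    return runs <= C * sqrt(len(pi))
```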
The main result of this section is the following.
Theorem 5.14. Given a preprocessed Directed Subset TSP instance (G, T), one can in time |T|^{O(√|T|)} n^{O(1)} compute a family B ⊆ 2^T of size |T|^{O(√|T|)} such that for every first-cross reduced T-short walk W and its witnessing permutation π_W, there exists a good decomposition (T, β) of (W, π_W) such that every set β(s) for s ∈ V(T) belongs to B.
The rest of this section is devoted to the proof of Theorem 5.14. Fix the walk W as in the statement. Recall that in the input graph, we assume that every terminal t has a unique neighbor w_t, connected by two arcs (t, w_t) and (w_t, t), while every other arc is copied |T| times; a set of parallel copies of the same arc is called a parallel bunch. Furthermore, without loss of generality we assume that G is strongly connected; this is because all terminals have to lie in the same strongly connected component of G for a solution to exist, and we can restrict our attention to this strongly connected component. For a terminal t, let f_t^0 be the 2-face between the arcs (t, w_t) and (w_t, t), and let f_t be the other face incident with t.
We start with the following augmentation of the input graph; see Figure 3 for an illustration. First, we temporarily collapse every parallel bunch of G back into a single edge.
Second, we select an arbitrary minimal tree T* in the dual of G that spans all faces f_t and does not contain any face f_t^0 for t ∈ T. We fix some drawing of T* such that every face of G contains at most one vertex of T* and every arc of T* intersects exactly one arc of G, namely its dual, in one point. For every t ∈ T, we connect the vertex f_t of T* with the terminal t with an arc (t, f_t).
Third, we add to G the tree T* with all arcs (t, f_t), t ∈ T, in the natural manner: we add all vertices of T* to G and, whenever an arc of T* intersects its dual arc in G, we subdivide both arcs and add a new vertex at the point of intersection. During the subdivision, we distribute the weight of the arc of G arbitrarily between its parts, while all the new arcs of T*, as well as its connections to the terminals, are assigned weight +∞. Let W̃ denote the tree in the augmented graph G consisting of the (subdivided) arcs of T* together with the arcs (t, f_t), t ∈ T. In this manner, W̃ is a tree spanning all terminals in G. We remark here that, due to the weight +∞ on the arcs of W̃, the directions of the arcs of W̃ are irrelevant to the algorithm. Intuitively, the purpose of W̃ is to control the homotopy types of closed curves in the plane punctured at the terminals, by examining how they cross W̃.
Finally, we duplicate every arc of G − E(W̃) that is not incident to a terminal |T| times, and project W onto the modified graph accordingly: if W traversed the i-th copy of an arc e before the collapse of parallel bunches, then after the modifications it traverses the i-th copies of all the arcs into which e has been subdivided. This operation does not break the property that W is first-cross reduced. As before with the arcs incident with terminals, for notational convenience we treat every arc of W̃ as a parallel bunch on its own. Observe that, since we added only arcs of weight +∞ and G is strongly connected, we maintain the property that G has unique shortest paths and that W is T-short.
Let V_F be the set of vertices of T* that become vertices of W̃, that is, new vertices drawn inside former faces of G (but not the vertices drawn at the crossing points of arcs of T* and their duals).
By Lemma 5.7, the walk W can be decomposed into ℓ = O(|T|) paths B_1, B_2, ..., B_ℓ such that every B_i crosses at most one other path B_j. Furthermore, since W is first-cross reduced and T-short, we can assume that every endpoint of a path B_i is an important vertex of G.
Graph lift. We extend the graph G to a lift G• as follows; see Figure 4 for an illustration. We start with G• equal to G. For every vertex v ∉ T ∪ V_F, we draw a small circle C_v with center in v. For every intersection of C_v with the drawing of G, we subdivide the corresponding arc and place the subdividing vertex at this intersection. The full weight of the original arc is kept on the part outside C_v. Next, we remove from the graph all the edges whose drawings are inside the circle C_v. For every pair of newly introduced vertices x, y on C_v, where x lies on an in-arc and y lies on an out-arc of v, we add an edge (x, y) of weight 0 embedded using a straight-line chord between the embeddings of x and y inside C_v. By slightly shifting the drawing of G beforehand, we may assume that no three such chords intersect at the same point. To make the embedding planar, at every intersection of two chords we place a new vertex and use it to subdivide the chords.
Figure 3: Augmentation of the graph G to add the tree W̃. Terminals are yellow squares, the tree W̃ is depicted in blue. The directions of the arcs of W̃ are irrelevant to the algorithm, and hence omitted in the figure.
Figure 4: From G to G•. The panel on the left shows a vertex v ∈ V(G) with two incoming parallel bunches and two outgoing ones. The panel on the right shows the corresponding circle C_v and the chords drawn inside; at every intersection of two chords there is a vertex of G•. Two parts of a walk W (red and blue) that correspond to a crossing at v are lifted to red and blue paths in G• on the right.
There is a natural projection of vertices, edges, and paths from G• to G, obtained by collapsing, for every v ∈ V(G) \ T, the circle C_v together with all vertices and arcs on and inside it onto the vertex v. We denote this projection by η. Furthermore, every arc e = (u, v) ∈ E(G) has its lift η^{−1}(e), which is an edge in G•: the one going from either u (if u ∈ T ∪ V_F) or the point of intersection of e with C_u (if u ∉ T ∪ V_F) to either v (if v ∈ T ∪ V_F) or the point of intersection of e with C_v (if v ∉ T ∪ V_F).
The walk W has its naturally defined lift to a walk W• in G•: follow W starting at some terminal and, for every visited nonterminal vertex v, if the drawing of W around v intersects the circle C_v in x before v and in y after v, then go from x to y along the chord (x, y) in the lift W•. We define the lift W̃• of W̃ analogously. Note that W̃• is a tree in G•, while W• is a walk whose self-intersections are in one-to-one correspondence with the self-crossings of W; here, by a self-intersection of a walk we mean a pair of two different visits of the same vertex. Moreover, for every self-intersection of W• at a nonterminal vertex v, the projection η(v) does not lie on W̃. Finally, W• visits every vertex of G• at most twice. We will follow the convention that W has self-crossings while W• has self-intersections; note here that every self-intersection of W• is also a self-crossing of it.
Untwisting pairs of paths. We now define a family of paths P that are subpaths of the paths B_i. First, we insert into P every path B_i that does not cross any other path B_j. For pairs of paths B_i and B_j that cross each other, with i < j, we proceed as follows. Let v_1 and v_2 be the vertices of the first and last crossings of B_i and B_j, where first and last refer to the order on B_j. These two crossing points split each of B_i and B_j into three subpaths (two, if v_1 = v_2). We insert into P all three subpaths of B_i and the two side subpaths of B_j (i.e., all except the one between v_1 and v_2); in the case v_1 = v_2, we insert into P all four subpaths. See Figure 5 for an illustration.
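In index terms, the untwisting step keeps all but the middle piece of B_j; a minimal sketch, assuming for simplicity the configuration of the top panel of Figure 5, where the crossing positions appear in the same order along both paths:

```python
# Sketch: subpaths inserted into P for a crossing pair (B_i, B_j), i < j.
# b_i, b_j are vertex sequences; (p1, p2) and (q1, q2) are the positions of
# the first and last crossings on b_i and b_j (ordered along b_j); here we
# assume p1 <= p2, as in the top panel of Figure 5.
def untwist(b_i, p1, p2, b_j, q1, q2):
    if p1 == p2:                                        # single crossing point:
        parts = [b_i[:p1 + 1], b_i[p1:],                # four subpaths in total
                 b_j[:q1 + 1], b_j[q1:]]
    else:
        parts = [b_i[:p1 + 1], b_i[p1:p2 + 1], b_i[p2:],  # three subpaths of B_i
                 b_j[:q1 + 1], b_j[q2:]]                   # two side subpaths of B_j
    return [P for P in parts if len(P) > 1]             # drop degenerate pieces
```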
Figure 5: Construction of the family P. For every two paths B_i and B_j with i < j that cross (red and blue paths in the figure), we insert into P all but one of the subpaths into which v_1 and v_2 (the first and the last crossing of B_i and B_j w.r.t. the order on B_j) split B_i and B_j. The omitted subpath is the part of B_j between v_1 and v_2, depicted by dashed blue in the figure. From the fact that B_i and B_j are shortest paths in G, one could argue that the situation in the bottom panel is impossible, and that two crossing paths always look as in the top panel. However, we will not use this fact in the proof, and hence we do not prove it formally.
Observe that P is a family of O(|T |) subpaths of W that are pairwise noncrossing. Let S be
the set of endpoints of the paths in P. Furthermore, the fact that the paths Bi have endpoints in
important vertices implies the following.
Claim 5.15. We can compute a set S̄ ⊆ V(G) of size O(|T|^{18}) that contains S.
Proof. We first insert into S̄ all important vertices (which, in particular, includes all terminals); recall that there are at most |T|^4 of them. This covers the endpoints of all paths B_i that do not cross any other path B_j and were directly inserted into P. To cover the endpoints of subpaths of paths B_i and B_j, i < j, that cross each other, we proceed as follows. We first iterate over all tuples (s_i, t_i, s_j, t_j) of four important vertices (these are |T|^{16} options), and focus on the case where the path B_i leads from s_i to t_i and B_j leads from s_j to t_j. By the assumption that G has unique shortest paths, we can compute a path B_i′ parallel to B_i and a path B_j′ parallel to B_j. Second, we iterate over all pairs (e_i, e_j), where e_i is an arc parallel to the last arc of B_i′ and e_j is an arc parallel to the last arc of B_j′; there are |T|² choices of such a pair. We focus on the case where e_i is the last arc of B_i and e_j is the last arc of B_j.
We observe that the assumption that W is first-cross reduced allows us to infer all vertices where B_i and B_j cross from the paths B_i′, B_j′ and the arcs e_i and e_j, similarly as in the definition of important vertices. That is, the crossings happen at vertices v for which there is an arc e_{i,v} on B_i′ with head v and an arc e_{j,v} on B_j′ with head v, such that e_{i,v} and e_{j,v} are not parallel and, if B_{i,v} and B_{j,v} are the maximal parallel subpaths of B_i′ and B_j′ starting at v that do not contain the last arcs e_i and e_j, then the arcs preceding and succeeding B_{i,v} lie on different sides of B_{j,v} (that is, after contracting B_{i,v} we see a crossing at the contracted vertex). We insert the first and the last such crossing vertex on B_j′ into S̄ for every choice of (s_i, t_i, s_j, t_j) and (e_i, e_j). □
We now lift the paths in P to a family of paths P• in G•. First, for every path B_i, we define its lift B_i• to be the subpath of W• from the lift of the first arc of B_i to the lift of the last arc of B_i, inclusive. We define the set S• ⊆ V(G•) to be the set of all terminals and all vertices at self-intersections of W•, except for the following: for every pair of paths B_i and B_j that cross each other, i < j, we insert into S• only the first and the last intersections of the lifts B_i• and B_j• (where, again, first and last refer to the order on B_j). Observe that we have η(S•) = S, but there may be many different vertices v ∈ S• with the same projection η(v) ∈ S.
Claim 5.15 allows us to enumerate a small set of candidates for endpoints in S • .
Claim 5.16. We can compute a set S̄• ⊆ V(G•) of size bounded by O(|T|^{54}) that contains S•.
Proof. Consider a self-intersection of W• at a vertex v. This self-intersection corresponds to a self-crossing of W at η(v), which consists of two different visits of η(v) on W. Say that (e_1, e_3) are the arcs traversed by W preceding and succeeding the first considered visit of η(v), while (e_2, e_4) are the arcs preceding and succeeding the second considered visit. Observe that the vertex v is uniquely determined by e_1, e_2, e_3, and e_4: it is the intersection of the chord from the head of η^{−1}(e_1) to the tail of η^{−1}(e_3) with the chord from the head of η^{−1}(e_2) to the tail of η^{−1}(e_4). Furthermore, the number of options for this quadruple of edges is bounded by O(|T|^{54}) as follows. We can first pick any η(v) ∈ S̄ for S̄ coming from Claim 5.15 (O(|T|^{18}) options), and then every arc e_j can be inferred from the knowledge of: the endpoints of the path B_i it lies on (which requires guessing two important vertices, at most |T|^8 options) and its index in the parallel bunch to which it belongs (|T| options). □
Observe that |S • | = O(|T |), due to the fact that W has only O(|T |) crossings that are not
crossings of pairs of paths Bi and Bj (Lemma 5.7).
We define the family P• to be the family of paths into which S• splits the walk W•, except for the following subpaths. For every pair of paths B_i and B_j that cross at least twice, with i < j, the lifts B_i• and B_j• contain exactly two vertices of S•, corresponding to the first and the last crossings of B_i and B_j along B_j. Denote the vertices of these self-intersections of W• by u_j, v_j ∈ V(G•), and let B_i^mid and B_j^mid be the subpaths of B_i• and B_j• between u_j and v_j. We do not put B_j^mid into P•, just as we did not put the subpath of B_j between η(u_j) and η(v_j) into P.
Note that every path P ∈ P has its corresponding path (henceforth also called its lift) P• ∈ P• with η(P•) = P. However, observe two delicate matters. First, apart from the lifts of the paths from P, the family P• may contain short paths between two consecutive crossings inside one circle C_v. Second, even if P = B_i for some P ∈ P, the lift P• may be slightly longer than the lift B_i• defined earlier, as P• may contain some edges inside C_u and C_v for u and v being the endpoints of P. Since we will no longer use the lifts B_i• in this proof, we allow this slight abuse of notation.
Graphs H, H×, and an sc-branch decomposition. We define a subgraph H of G• as the union of W̃• and all paths from P•.
Although H is a plane graph, it can have an unbounded number of vertices and potentially large branchwidth. Let H× be the graph obtained from H by contracting, for every Q ∈ P•, all internal vertices of Q into one vertex u_Q. Thus, Q gets contracted into a path Q× consisting of two edges and three vertices: the former endpoints and u_Q. Recall that since the paths of P• are vertex-disjoint except for possibly having common endpoints, the contractions on different paths Q ∈ P• do not interfere with each other.
We have the following bound.
Claim 5.17. The graph H× admits an sc-branch decomposition (T, ζ, γ) of width O(√|T|).
Proof. First, note that H× is connected. Let H′ be the graph H× with all vertices of degree 2 suppressed (i.e., every maximal path with internal vertices of degree 2 replaced by a single edge). It suffices to show that H′ admits such a decomposition, as it is straightforward to adjust a decomposition of H′ to a decomposition of H× of the same width. By Theorem 3.2, it suffices to show only that H′ has branchwidth O(√|T|). To this end, note that every vertex of H′ is either a terminal, or one of the three vertices of the contracted path Q× for some Q ∈ P•. As |P•| = O(|T|), we have that |V(H′)| = O(|T|), and the claim follows from Theorem 3.1. □
Let (T, ζ, γ) be the sc-branch decomposition of H× given by Claim 5.17. Recall that, given a closed curve γ without self-intersections, by the tri-partition of the terminals induced by γ we mean a tuple (T_0, {T_1, T_2}), where T_0 is the set of terminals on γ, and T_1 and T_2 are the sets of terminals on the two sides of γ. With a noose γ(f) for some f ∈ E(T) we can also associate another partition {T_1′, T_2′} with T = T_1′ ⊎ T_2′, henceforth called the arc-induced partition: a terminal t belongs to T_1′ or T_2′ depending on the side of γ(f) that contains the arc (t, w_t). Observe that if γ(f) induces a tri-partition (T_0, {T_1, T_2}) and arc-induces a partition {T_1′, T_2′}, then we have T_i ⊆ T_i′ ⊆ T_i ∪ T_0 for i = 1, 2, provided we assume that T_i and T_i′ correspond to the same side of γ(f).
Branch decomposition gives a good decomposition. We now show how a good decomposition of W can be inferred from the sc-branch decomposition (T, ζ, γ). Root the tree T at an arbitrary leaf r such that ζ(r) is not of the form (t, w_t) for a terminal t, and define β(r) = T. For every node s ∈ V(T) \ {r} with parent edge f, we define β(s) as follows. Let {T_1′, T_2′} be the partition of the terminals arc-induced by γ(f), and assume that the side of γ(f) that contains ζ(r) corresponds to the set T_2′. Then we put β(s) = T_1′.
In the next two claims we verify that (T , β) is a good decomposition of W (formally, after
removing the root in order to have a binary tree). We first verify that arc-induced partitions of
nooses are actually good sections of W .
Claim 5.18. If C is a sufficiently large universal constant, then for every edge f ∈ E(T), if (T_0, {T_1, T_2}) is the tri-partition induced by γ(f), then every set T′ with T_i ⊆ T′ ⊆ T_0 ∪ T_i for some i = 1, 2 is a good section of W and π_W.
Proof. Intuitively, the claim follows from the fact that γ(f) visits only O(√|T|) vertices of H×, and each such vertex can be modeled as only one crossing with W. However, we need to be a bit careful with the formal proof, because in the construction of P and, consequently, H, we sometimes drop part of a path B_j if there was another path B_i crossing it.
We proceed to the formal argumentation. Let us take the graph H and forget the directions of the edges; denote the obtained undirected graph by H′. From the walk W•, we construct a walk W′ as follows: for every pair of paths B_i and B_j that cross, where i < j, consider the subpaths B_i^mid and B_j^mid as defined in the previous paragraph. We replace the subpath B_j^mid with the path B_i^mid (possibly traversed in the reverse direction). Note that W′ is a walk in H′, visits every vertex of H′ a constant number of times, and visits the terminals in the same order as W.
We now show that there is a noose γ′ that intersects W′ in O(√|T|) vertices and that keeps the same set of terminals of G inside, on, and outside as γ(f). First, set γ′ to be an “uncontraction” of γ(f), defined as follows (see Figure 6).
• Whenever γ(f) traverses a face f_0 of H× from a vertex v_1 to v_2, γ′ traverses the corresponding face f_0′ of H between the vertices v_1′ and v_2′ that were contracted onto v_1 and v_2, respectively.
• Whenever γ(f) traverses a vertex u_Q of H×, for some Q ∈ P•, the curve γ′ goes along a subpath of Q.
• Whenever γ(f) traverses a vertex v of H× that is not of the form u_Q for any Q ∈ P•, γ′ traverses v as well.
Note that while at this point γ′ is formally not a noose, due to possibly travelling along subpaths of paths Q ∈ P•, it is a closed curve in the plane without self-intersections. This is because the paths in P• are vertex-disjoint except for possible common endpoints.
Second, we shift γ′ slightly along every traversed subpath of a path Q ∈ P• so that γ′ follows Q in parallel in a small neighborhood of Q, crossing it at a vertex if needed, and not more than once. Since γ(f) intersected the embedding of G• at O(√|T|) vertices, it follows that γ′ intersects W′ at O(√|T|) vertices. Since no path Q ∈ P• has a terminal as an internal vertex, the above operations do not change the set of terminals inside, on, and outside γ(f) and γ′.
Note that since γ′ is a curve that intersects W′ at O(√|T|) vertices and W′ visits every vertex a constant number of times, the intersections of γ′ with W′ split W′ into O(√|T|) subwalks W_1′, W_2′, .... Each subwalk W_j′ visits terminals of T_0 ∪ T_1 or T_0 ∪ T_2 only, depending on the side of γ′ it lies on. Furthermore, the terminals at endpoints of walks W_j′ are exactly all terminals of T_0.
To conclude, assume T′ ⊆ T is such that T_i ⊆ T′ ⊆ T_0 ∪ T_i for some i ∈ {1, 2}. Define a family R of subsets of T as follows: we take every subwalk W_j′ that lies on the same side of γ′ as T_i, remove from W_j′ any endpoint that belongs to T_0 \ T′, thus obtaining a walk W_j″, and insert into R the set of terminals traversed by W_j″. Observe that R is a partition of T′ into subsets that form contiguous intervals in the cyclic order of terminals visited by W′. Since the order of visiting terminals is the same on W and on W′, we infer the same conclusion for W. This finishes the proof of the claim. □
Second, we verify the remaining properties of a good decomposition.
Figure 6: Uncontraction performed in the proof of Claim 5.18. The first two panels show a path Q ∈ P• in H and its corresponding two-arc path in H×, with the new vertex u_Q in red. A visit of the vertex u_Q by the green curve γ(f) in the third panel gets uncontracted to the part of γ′ depicted in green in the fourth panel.
Claim 5.19. (T − r, β) is a good decomposition of W and πW .
Proof. Let us verify the properties of a good decomposition one by one. Property (1) is ensured by Claim 5.18. We have β(r) = T by definition, and note that by the choice of ζ(r) we have β(r′) = T for the unique child r′ of r in T. This ensures property (2). For property (3), pick a non-leaf, non-root node s of T, and observe that it always has exactly two children, say s_1 and s_2. Let f be the edge of T connecting s with its parent, and let f_1, f_2 be the edges connecting s with s_1, s_2, respectively. By the properties of a branch decomposition, the set of edges on the side of γ(f) that does not contain ζ(r) is partitioned into the sets defined in the same manner for γ(f_1) and γ(f_2). As β(s), β(s_1), β(s_2) are defined by including every terminal t depending on whether the edge (t, w_t) is included in the sets above, property (3) follows. Finally, for property (4), note that for a leaf s with parent edge f, the noose γ(f) encloses a single edge, and thus |β(s)| ≤ 1. □
Thus, our goal now is to construct a small family of subsets of T that contains all sets T_i′ for the partitions {T_1′, T_2′} arc-induced by the nooses γ(f) for f ∈ E(T).
Guessing parts of the walk W . Intuitively, the idea now is that a noose γ = γ(f ) for some
f ∈ E(T ), or more precisely the tri-partition of terminals it induces, may be guessed as follows.
Curve γ, as a noose in H × , may be uncontracted to a noose γ 0 as in the proof of Claim 5.18; this
modification does not change the set of enclosed terminals. Now γ 0 travels alternately via faces of
•
H and
between these two modes
palong subpaths of paths from P , and the number of alternations
•
is O( |T |). Intuitively, the traversed subpaths of paths from P may be guessed efficiently by
guessing the endpoints and drawing shortest paths; we know that the endpoints belong to the set
S̄ • given by Claim 5.16, a set of size bounded polynomially in |T |. Having guessed these paths, we
are left with guessing the rest of γ. However, as we are only interested in the tri-partition induced
f • plus the guessed paths, with the internal parts of
by γ, so it remains to apply Lemma 4.1 to W
p
the guessed paths contracted. Then the number of crossing points we are interested in is O( |T |).
One of the technical problems we encounter is that, even if we know the two endpoints of a path Q ∈ P• with the help of Claim 5.16, and we can deduce a path parallel to the projection η(Q) since it is a shortest path between its endpoints, we do not know how exactly η(Q) travels in the graph, as we do not know which arcs in the parallel bunches η(Q) uses. However, we will now rely on the fact that the paths in P do not cross each other to show that, up to some shifts, there are only |T|^{O(L)} ways to embed L paths from P•.
Let Q ⊆ P• be a family of L paths, and let Q′ be a family of paths in G• that are pairwise vertex-disjoint apart from possibly sharing endpoints. We say that Q′ is a shift of Q if there is a bijection that maps every Q ∈ Q to a corresponding path Q′ ∈ Q′ with the following properties:
1. η(Q) and η(Q′) are parallel, and start and end with the same arc;
2. Q and Q′ have the same starting vertex, and travel along the same subpath up to η^{−1}(e) (inclusive), where e is the first arc on η(Q), equivalently the first arc on η(Q′);
3. a symmetric property holds for the ending vertices of Q and Q′; and
4. the bijection Q ↦ Q′ preserves the order of the paths inside the lift of every parallel bunch of arcs. In other words, if Q_1, ..., Q_p are paths from Q such that η(Q_1), ..., η(Q_p) all traverse some parallel bunch, then the relative orders of the arcs within this bunch that are used by η(Q_1), ..., η(Q_p), respectively, and that are used by η(Q_1′), ..., η(Q_p′), respectively, coincide.
In particular, if Q is one of the short paths completely contained in one circle C_v, then Q = Q′.
We prove the following enumeration statement.
Claim 5.20. Given an integer L, one can in time (|T|L)^{O(L)} n^{O(1)} compute a family of (|T|L)^{O(L)} families of paths in G• such that for every set Q ⊆ P• of size L there exists a shift of Q in the computed family.
Proof. It is convenient for us to describe the enumeration as a guessing procedure that guesses, with the total number of subcases being (|T|L)^{O(L)}, a family of paths Q′ that is a shift of a fixed family Q ⊆ P• of size L.
Let Q = {Q_1, Q_2, ..., Q_L}. For every path Q_i we guess the following information. First, we guess the endpoints u_i and v_i of Q_i; by Claim 5.16, we may guess these vertices from the set S̄•, so the number of options is O(|T|^{108}). If both u_i and v_i lie in the same circle C_{η(u_i)}, then Q_i is a subpath of the unique chord that contains both u_i and v_i, that is, it is completely determined by its endpoints. Hence, from this point on we can ignore such paths Q_i.
For the other paths, we compute the shortest path P_i′ in G from η(u_i) to η(v_i); observe that P_i′ is parallel to P_i := η(Q_i). Given P_i′, we guess the first and the last arc of P_i; there are |T| options for each, as they need to be parallel to the first and the last arc of P_i′, respectively. Denote these arcs e_i^u and e_i^v, respectively. Note that the knowledge of u_i, v_i, e_i^u, and e_i^v allows us to deduce the prefix of Q_i up to η^{−1}(e_i^u) and the suffix from η^{−1}(e_i^v) to the end: these are subpaths of the appropriate chords within C_{η(u_i)} and C_{η(v_i)}, respectively. Finally, we define ι[u, ←] ∈ [L] ∪ {⊥} as follows: we look at the parallel bunch of e_i^u, find the first arc in the counter-clockwise direction from e_i^u that belongs to some path in Q, and pick ι[u, ←] to be the index of this path. We pick ι[u, ←] = ⊥ if there is no such path. Similarly we define ι[u, →] for the clockwise direction, and ι[v, ←], ι[v, →] for the arc e_i^v. There are L + 1 options for each of the indices ι[u, ←], ι[u, →], ι[v, ←], and ι[v, →].
We now show that the guessed information is sufficient to uniquely determine a shift of Q. Recall that for every path Q_i ∈ Q we already know its endpoints and a path P_i′ in G parallel to η(Q_i); without loss of generality, we can assume that P_i′ starts with e_i^u and ends with e_i^v. Observe that it suffices to determine, for every parallel bunch in G, the order in which the projections of the paths in Q traverse it.
We deduce this information in polynomial time by the following resolution procedure (a fixpoint sketch of the procedure is given after the rules). From the guessed information, we can deduce the following; for simplicity, we use “left” and “right” instead of “counter-clockwise” and “clockwise”.
(endpoints rule) In the bunch of e_i^u, the path Q_i is to the right of Q_{ι[u,←]}, and similarly for ι[u, →], ι[v, ←], and ι[v, →].
(diverge rule) Assume that for some indices a and b, P_a′ and P_b′ contain parallel edges e_a′ and e_b′, respectively, but the preceding arcs on P_a′ and P_b′ are not parallel. Then from the order of these preceding arcs in the cyclic order of arcs around the tail of e_a′ we can deduce the relative order of η(Q_a) and η(Q_b) in the parallel bunch of e_a′. This is because η(Q_a) and η(Q_b) do not cross.
Second, we have the following straightforward deduction rules.
(parallel rule) If we know that Q_a is to the left of Q_b and Q_b is to the left of Q_c in some parallel bunch, then Q_a is to the left of Q_c in the bunch in question.
(serial rule) If the projections of Q_a and Q_b have two-edge subpaths R_a and R_b that are parallel, and we know the relative order of η(Q_a) and η(Q_b) on one of the bunches of the edges from R_a and R_b, then, since η(Q_a) and η(Q_b) do not cross, we can deduce the order on the second bunch of the edges of R_a and R_b.
We claim that after applying these deduction rules exhaustively (which can clearly be done in
polynomial time), we know the relative order of Qa and Qb for every Qa , Qb ∈ Q and every parallel
bunch of G their projections traverse jointly. Assume this is not true, and pick Qa , Qb , and a
parallel bunch traversed by η(Qa ) and η(Qb ) via arcs ea and eb , respectively, where we did not
deduce the relative order of ea and eb , with the following minimization criteria: the number of arcs
between ea and eb in their bunch that are in the projections of other paths of Q is minimized, and,
subject to that, the number of arcs on η(Qa ) before ea is minimized.
From the parallel rule we infer that actually no projection of a path of Q uses any arc between e_a and e_b in their parallel bunch. From the endpoints rule we infer that e_a and e_b are not the first arcs of η(Q_a) and η(Q_b), respectively. From the diverge rule we infer that the preceding arcs e'_a and e'_b of e_a and e_b on η(Q_a) and η(Q_b) are parallel, too. From the minimality we infer that there is another arc e'_c in the parallel bunch between e'_a and e'_b that is part of η(Q_c) for some other path Q_c ∈ Q. Since the projections of paths of Q are noncrossing, η(Q_c) needs to end at the head of e'_c, i.e., e'_c is the last arc of η(Q_c). From the endpoints rule, we know both paths of Q that traverse the bunch of e'_c next to e'_c. Since this applies to any arc e'_c between e'_a and e'_b in their parallel bunch, the parallel rule allows us to deduce the relative order of e'_a and e'_b in their bunch and, subsequently, the serial rule would allow us to deduce the relative order of e_a and e_b. This contradiction finishes the proof of the claim.
y
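To make the resolution procedure concrete, the following Python sketch (illustrative only; all names are ours, not the paper's) shows the generic fixpoint loop that applies deduction rules exhaustively. Facts are triples (bunch, a, b) meaning that Q_a is to the left of Q_b in the given parallel bunch; only the parallel (transitivity) rule is spelled out, and callbacks for the endpoints, diverge, and serial rules would be supplied in the same shape. Since each round either adds a new fact or stops, and there are polynomially many possible facts, the loop terminates in polynomial time.

def close_under_rules(facts, rules):
    # facts: set of triples (bunch, a, b); rules: callables deriving new facts.
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new_facts = rule(facts) - facts
            if new_facts:
                facts |= new_facts
                changed = True
    return facts

def parallel_rule(facts):
    # (B, a, b) and (B, b, c) together yield (B, a, c).
    derived = set()
    for (bunch1, a, b) in facts:
        for (bunch2, b2, c) in facts:
            if bunch1 == bunch2 and b == b2:
                derived.add((bunch1, a, c))
    return derived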
Enumeration algorithm. Armed with Claim 5.20, we can finish our enumeration procedure.
Claim 5.21. In time |T|^{O(√|T|)} · n^{O(1)} one can enumerate a family A of |T|^{O(√|T|)} tri-partitions of the terminals such that for every edge f ∈ E(T), A contains the tri-partition induced by γ(f).
Proof. Consider a noose γ(f) for some f ∈ E(T). We present a branching algorithm that guesses the tri-partition induced by γ(f) by branching into |T|^{O(√|T|)} subcases.
Let Q_f be the family of those paths Q ∈ P• such that u_Q lies on γ(f). We have L := |Q_f| = O(√|T|). We guess the number L (one of O(√|T|) options). Then we invoke the algorithm of Claim 5.20 for parameter L, giving us a family of size |T|^{O(√|T|)} that includes some shift of Q_f. By guessing one member of this family (one of |T|^{O(√|T|)} options), we may proceed assuming that we have correctly guessed a family Q'_f that is a shift of Q_f.
We construct a graph H'_1 defined as a subgraph of G• equal to the union of W̃• and all paths from Q'_f. Compute a graph H'_2 from H'_1 as follows: for every path Q' ∈ Q'_f contract all the internal vertices of Q' into a single vertex u'_{Q'}, similarly as in the process of the construction of H×. Finally, we apply Lemma 4.1 to H'_2 and parameter L, obtaining a family A' of tri-partitions of the terminal set. To conclude the proof of the claim, it suffices to show that, provided the previous guesses are correct, A' contains a tri-partition induced by γ(f). Indeed, as |A'| = |T|^{O(√|T|)}, we may then just guess one element of A' and return it.
Let H_1 and H_2 be defined as H'_1 and H'_2, but with the family Q_f instead of Q'_f. The crucial observation is that since the paths in P• are vertex-disjoint except for endpoints, and Q'_f is a shift of Q_f, the graphs H_1 and H'_1 become isomorphic after suppressing non-terminal vertices of degree 2. Moreover, this isomorphism fixes the terminals as well as the image of the tree W̃• under the suppression, and the embedding of H_1 may be transformed into an embedding of H'_1 by a homeomorphism of the plane that moves paths of Q_f to the corresponding shifts from Q'_f. The existence of such an isomorphism can be easily seen from the definition of a shift; recall here that paths of Q'_f have exactly the same endpoints as the corresponding paths from Q_f. Now, the construction of H_2 from H_1 is done by applying an isomorphic set of contractions as in the construction of H'_2 from H'_1. It follows that H_2 after suppressing degree-2 nonterminal vertices is isomorphic to H'_2 after suppressing degree-2 vertices, and again the embeddings of these two graphs can be transformed into each other by means of a homeomorphism of the plane. Observe now that, by applying this homeomorphism to the noose γ(f) in H_2, we obtain a noose γ(f)' in H'_2 that induces the same tri-partition of the terminals as γ(f). By Lemma 4.1, this tri-partition was included in the enumerated family A', so the proof of the claim is finished.
y
To conclude the proof of Theorem 5.14, note that every tri-partition (T_0, {T_1, T_2}) induced by a noose γ(f) for some f ∈ E(T) satisfies |T_0| = O(√|T|). Thus, we can compute a family B of subsets of terminals as follows: we compute the family A using Claim 5.21 and, for every (T_0, {T_1, T_2}) ∈ A with |T_0| = O(√|T|), every j = 1, 2, and every T' with T_j ⊆ T' ⊆ T_0 ∪ T_j, we insert T' into B. Claim 5.21 ensures that the family B is of the correct size, is computed within the promised running time bound, and that it contains the sets β(s) for every s ∈ V(T). Furthermore, by Claim 5.19, (T − r, β) is a good decomposition of W and π_W. This finishes the proof of Theorem 5.14.
5.3 Dynamic programming algorithm
In this section we show that, given the family B obtained using Theorem 5.14, one can find a shortest walk visiting all terminals in time |B|^{O(1)} · |T|^{O(√|T|)} · n^{O(1)} by a standard dynamic programming approach. More formally, we show the following lemma.
Lemma 5.22. Given a preprocessed Directed Subset TSP instance (G, T) and a family B of subsets of T, one can in time |B|^{O(1)} · |T|^{O(√|T|)} · n^{O(1)} compute a T-short walk W_0 of total length not greater than the minimum length of a T-short walk W for which there exists a good decomposition (T, β) satisfying {β(s) : s ∈ V(T)} ⊆ B.
Proof. A state consists of a set A ∈ B and a family M of O(√|T|) ordered pairs of (not necessarily different) terminals from A. Note that there are |B| · |T|^{O(√|T|)} states.
A realization of a state (A, M) is a mapping P that assigns to every pair (t, t') ∈ M a walk P(t, t') from t to t' in G in such a manner that the walks {P(t, t') : (t, t') ∈ M} together visit all terminals of A. The weight of a realization is the sum of the weights of all walks in it. In our dynamic programming algorithm we shall compute a realization P_{(A,M)} for every state (A, M), in the order of increasing size of A.
Given two walks Q_1 and Q_2, a concatenation of Q_1 and Q_2 is a walk consisting of the walk Q_1, then a shortest path from the ending vertex of Q_1 to the starting vertex of Q_2, and then the walk Q_2. This definition naturally generalizes to concatenations of longer sequences of walks.
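A minimal sketch of this concatenation operation, assuming a precomputed all-pairs shortest-path table sp[u][v] that returns a shortest u-v path as a list of vertices (the names are illustrative, not from the paper):

def concatenate(walk1, walk2, sp):
    # Glue walk1 to walk2 via a shortest path between their endpoints;
    # duplicated endpoints are dropped when joining the three pieces.
    bridge = sp[walk1[-1]][walk2[0]]
    return walk1 + bridge[1:] + walk2[1:]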
For states with |A| ≤ C√|T|, we compute a minimum-weight realization by a standard dynamic programming algorithm that builds the walks P(t, t') by extending them by terminals one by one. More precisely, the realization for a state (A, M) is chosen by taking any pair (t, t') ∈ M with t ≠ t', and selecting the minimum-weight realization among the following candidates: for each state of the form (A \ {t'}, (M \ {(t, t')}) ∪ {(t, t'')}) for some t'' ∈ A \ {t'}, take its precomputed realization P and append a shortest path from t'' to t' to P(t, t''). When t = t' for every pair in M, the minimum-weight realization clearly consists of a one-vertex path for every pair from M, which has total weight 0.
For states (A, M) with larger sets A, we iterate over all partitions of the form A = A_1 ⊎ A_2 with A_1, A_2 ∈ B and |A_1|, |A_2| < |A|, and over all states (A_1, M_1) and (A_2, M_2) with precomputed realizations P_1 and P_2, respectively. With a standard dynamic programming algorithm, in time 2^{O(√|T|)} · n^{O(1)} we find a minimum-weight realization P of (A, M) whose walks are concatenations of the walks of P_1 and P_2 in such a manner that each walk of P_1 and P_2 is used exactly once. The state of this dynamic programming algorithm keeps track of which walks of P_1 and P_2 were already used. We build the walks of P one by one, concatenating a walk of P_1 or P_2 at a time and maintaining the current endpoint of the walk constructed so far.
Finally, we iterate over all states (T, {(t, t')}) for terminals t, t' ∈ T and set π_{t,t'} to be the order in which P_{(T,{(t,t')})}(t, t') traverses the terminals. For each such choice, we compute a T-short walk W_{t,t'} with the witnessing permutation π_{t,t'}, and return the minimum-weight walk found.
Clearly, the algorithm returns a T-short walk. Consider a T-short walk W for which there exists a witnessing permutation π_W and a good decomposition (T, β) such that {β(v) : v ∈ V(T)} ⊆ B.
From the definition of a good decomposition, for every node s ∈ V(T) there exists a collection P_s of at most C√|T| subwalks of W that visit exactly the terminals of β(s). Furthermore, we can choose these collections in such a manner that every subwalk starts and ends at a terminal, for the root r the collection P_r consists of a single subwalk of W from the first to the last terminal of π_W, and for every node s with children s_1 and s_2, the walks of P_s are concatenations of some walks of P_{s_1} and P_{s_2}, where every walk in P_{s_1} and P_{s_2} is used exactly once. Then a standard inductive argument shows that the realization for (β(s), M_s), where M_s is the family of endpoint pairs of the walks in P_s, is of weight at most the total weight of the walks in P_s. Consequently, if t, t' are the first and the last terminal on π_W, then the computed realization of (T, {(t, t')}) is of weight at most the weight of the subwalk of W from t to t'. Hence the T-short walk computed for π_{t,t'} is of weight not larger than the weight of W, which concludes the proof.
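The merge step for large sets A can be summarized by the following schematic Python sketch (a simplification under our own naming, not the paper's code): table stores precomputed realizations, families(A_i) enumerates the candidate pair-families over the terminals of A_i, merge_realizations stands for the inner 2^{O(√|T|)}-time dynamic program that concatenates the walks of the two sub-realizations, and weight measures a realization. Here A, A1, A2 are frozensets of terminals, so A1 < A tests for a proper subset.

def best_realization(A, M, B, table, families, merge_realizations, weight):
    best = None
    for A1 in B:
        if not A1 or not (A1 < A):   # need a nonempty proper subset of A
            continue
        A2 = A - A1                  # A = A1 ∪ A2 is a disjoint partition
        if A2 not in B:
            continue
        for M1 in families(A1):
            for M2 in families(A2):
                P1, P2 = table.get((A1, M1)), table.get((A2, M2))
                if P1 is None or P2 is None:
                    continue
                P = merge_realizations(M, P1, P2)
                if P is not None and (best is None or weight(P) < weight(best)):
                    best = P
    return best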
5.4 Wrap up
We conclude the proof of Theorem 1.1. As discussed in Section 3, without loss of generality we
assume that the input graph G has unique shortest paths, that every terminal t has a unique
neighbor wt connected via arcs (wt , t) and (t, wt ), and that every arc not incident to a terminal is
copied |T | times.
Observe that the walk W we are looking for is always a T-short walk. Lemmas 5.2 and 5.5 imply that we can assume that the walk W we are looking for is not only T-short, but also first-cross reduced and visits every edge at most once.
We invoke the algorithm of Theorem 5.14, yielding a family B of size |T|^{O(√|T|)}. By the properties of the sought walk W, there exists a good decomposition (T, β) of W and its witnessing permutation π_W such that β(s) ∈ B for every s ∈ V(T). Consequently, the algorithm of Lemma 5.22, applied to B, finds a walk of total length not longer than that of W. By the optimality of W, the resulting walk is a shortest closed walk visiting T. This concludes the proof of Theorem 1.1.
6 Dynamic programming algorithm for the Steiner tree problem
This section is devoted to the proof of Theorem 1.3. On a high level, we follow the approach of Fomin et al. [20] used for designing a subexponential algorithm for the Rectilinear Steiner Tree problem, and we adjust the technique from there to the more general graph setting. More precisely, we argue that the union of some optimum solution and a local-search optimum has low treewidth, which yields a small family of states for a dynamic programming procedure assembling the optimum.
Let G be an edge-weighted undirected planar graph, and let T ⊆ V(G) be a set of terminals. Without loss of generality we can assume that G is connected (by deleting connected components that do not contain terminals) and that every edge weight of G is positive (by contracting edges of weight 0). A solution is a subset of edges of G that induces a tree containing all terminals. Given a solution A, a vertex v ∈ V(A) is special if it is a terminal or it has degree at least 3 in A. An inner path in a solution A is a path in the tree A that does not contain any special vertex as an inner vertex.
An ℓ-local search step from a solution A is defined as follows: we delete at most ℓ inner paths in the tree A and reconnect the remaining connected components in a minimum-cost way. A solution A is an ℓ-local search optimum if there is no ℓ-local search step that leads to a solution of strictly smaller cost.
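For concreteness, the following Python sketch (our own illustration, under the assumption that the solution A is inclusion-wise minimal, so every leaf of the tree is a terminal) computes the special vertices and splits the tree into its inner paths, that is, the units an ℓ-local search step is allowed to delete.

def inner_paths(adj, terminals):
    # adj: adjacency map of the tree induced by the solution A.
    special = {v for v in adj if v in terminals or len(adj[v]) >= 3}
    paths, used = [], set()
    for s in special:
        for first in adj[s]:
            if (s, first) in used:
                continue
            path, prev, cur = [s], s, first
            while cur not in special:      # walk through degree-2 inner vertices
                path.append(cur)
                prev, cur = cur, next(w for w in adj[cur] if w != prev)
            path.append(cur)
            used.add((s, first))
            used.add((cur, prev))          # do not retrace from the other end
            paths.append(path)
    return paths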
Our main combinatorial engine of the proof is the following statement.
Theorem 6.1. Let G be an edge-weighted undirected planar graph with positive weights and let T ⊆ V(G) be a set of terminals. Furthermore, let A be a 2-local search optimum and let A* be a minimum-weight solution that additionally minimizes the number of edges of A* \ A. Then, the graph A ∪ A* has treewidth O(√|T|).
We prove Theorem 6.1 in Section 6.1 and then, in Section 6.2, we conclude the proof of
Theorem 1.3 by a relatively standard dynamic programming routine.
6.1 Treewidth bound
We now prove Theorem 6.1. Let H_0 be the graph induced by the edges of A ∪ A*, and let H be a minor of H_0 constructed by suppressing all non-terminal vertices of degree two. That is, we replace every maximal path of H_0 containing only non-terminal vertices of degree two as inner vertices by a single edge in H.
Note that H is a planar graph. To prove Theorem 6.1, it suffices to show that H has O(|T|) vertices, as that would imply an O(√|T|) bound on the treewidth of H and, consequently, the same bound for the treewidth of H_0.
Since A is a 2-local search optimum, it is in particular an inclusion-wise minimal solution (a single edge is always an inner path and edge weights are positive). Similarly, A* is an inclusion-wise minimal solution. Consequently, if v is a non-terminal vertex of H_0 of degree at most 2, then v has degree exactly 2 in H_0, its two incident edges belong either both to A or both not to A, and belong either both to A* or both not to A*. This observation allows us to project the sets A and A* onto sets A_H, A*_H ⊆ E(H) in the following manner. We start with A_H = A ∩ E(H) and A*_H = A* ∩ E(H). Furthermore, when, in the process of constructing H from H_0, we replace a maximal path P in H_0 with an edge e_P ∈ E(H), the edge set of P is either fully contained in A or disjoint from A, and similarly for A*. We put e_P ∈ A_H if E(P) ⊆ A and we put e_P ∈ A*_H if E(P) ⊆ A*.
We now define important vertices in H. A vertex v ∈ V(H) is important if it is a special vertex of A or of A* (recall that V(H) ⊆ V(H_0)). Let S be the set of important vertices. The fact that both A and A* are trees with all leaves in T ensures that there are not too many important vertices and not too many bridges in H.
Lemma 6.2. There are less than 3|T| important vertices and less than 2|T| bridges in H. Furthermore,

    Σ_{v∈S} deg_H(v) < 14|T|.
Proof. By inclusion-wise minimality, every leaf of both A and A* is a terminal. This implies that both A and A* have less than |T| vertices of degree at least 3 each (in particular, less than 2|T| special vertices each), giving the 3|T| bound on the number of important vertices. Furthermore, it also implies that

    Σ_{v∈V(A)} max(0, deg_A(v) − 2) < |T|   and   Σ_{v∈V(A*)} max(0, deg_{A*}(v) − 2) < |T|.
With the above, the second claim of the lemma follows, as

    Σ_{v∈S} deg_H(v) ≤ Σ_{v∈S} (deg_A(v) + deg_{A*}(v))
                    ≤ 4|S| + Σ_{v∈V(A)} max(0, deg_A(v) − 2) + Σ_{v∈V(A*)} max(0, deg_{A*}(v) − 2)
                    < 12|T| + |T| + |T| = 14|T|.
For the bound on the number of bridges, let B be the set of bridges in H, and let C be a connected component of H − B. We claim that C contains a special vertex of A; since A has less than 2|T| special vertices, this would prove the bound |B| < 2|T|. Assume the contrary. By the minimality of A and A*, we have B ⊆ A_H ∩ A*_H. Consequently, since C does not contain a vertex of degree at least 3 in A (as such vertices are special in A), we have that C is incident to at most 2 bridges of B. Since C does not contain a terminal and A is inclusion-wise minimal, C is incident to exactly two bridges in B, say e_1 and e_2 with endpoints v_1, v_2 ∈ V(C), respectively. Then the optimality of A and A* asserts that both A and A* contain a minimum-weight path from v_1 to v_2. However, the condition that A* minimizes the number of edges of A* \ A implies that A and A* use the same path from v_1 to v_2 and, consequently, C is a path. This is a contradiction, as then in the process of constructing H we would contract C, e_1, and e_2 onto a single edge. This finishes the proof of the lemma.
We now make a crucial observation: every face of H either is incident to an important vertex or is long in a specific sense.
Lemma 6.3. Every face of H is either incident to an important vertex, to a bridge of H, or to at least six edges of the symmetric difference A_H △ A*_H.
Proof. Assume the contrary. Let f be a face that is not incident to any important vertex or bridge of H and is incident to at most 5 edges from A_H △ A*_H. Since H is connected, f is homeomorphic to an open disc. Let W be the closed walk around f. Since f is not incident to a bridge of H, every edge of H appears in W at most once. This in particular implies that, as A and A* are trees, we have E(W) ⊈ A_H and E(W) ⊈ A*_H, that is, there exists an edge of A_H \ A*_H and an edge of A*_H \ A_H on the walk W.
We now greedily partition the walk W into walks W_1, W_2, ..., W_ℓ as follows. We start with an edge e_1 ∈ E(W) \ A*_H ⊆ A_H \ A*_H and take W_1 to be the maximal subwalk of W containing e_1 and contained in A_H. As E(W) \ A_H is nonempty, W_1 is a proper subwalk of W, and at both its endpoints it is incident to an edge of E(W) \ A_H ⊆ A*_H \ A_H. We pick e_2 to be one of these edges, and define W_2 to be a maximal subwalk of W − W_1 that contains e_2 and is contained in A*_H. The walk W_2 has one endpoint at an endpoint of W_1, and the second endpoint either at the second endpoint of W_1 or at some edge e_3 ∈ E(W) \ A*_H. We continue defining edges e_i and walks W_i in this manner such that e_i ∈ W_i; for odd indices i we have e_i ∈ E(W) \ A*_H and E(W_i) ⊆ A_H, while for even indices i we have e_i ∈ E(W) \ A_H and E(W_i) ⊆ A*_H. The process ends when we hit the second endpoint of W_1, and then E(W) = ∪_{i=1}^{ℓ} E(W_i). Furthermore, by the maximality of W_1, both edges of W incident to the endpoints of W_1 but not on W_1 belong to E(W) \ A_H and, consequently, the number of steps ℓ is even.
Recall that f is not incident to an important vertex of H; consequently, every walk W_i corresponds to an inner path P_i in A for i odd and in A* for i even. In particular, every W_i is in fact a simple path in H.
Figure 7: The replacement argument used in the proof of Lemma 6.3 for ℓ = 4. The solution A is green and the solution A* is red. The left picture shows the first subcase, when there is a corner vertex that is not close; note that the connections may be a bit different (e.g., v_1 = v_4), but the replacement stays the same. The right picture shows the second subcase, when every corner vertex is close in one of the trees.
Since every walk W_i contains a distinct edge of A_H △ A*_H, we have ℓ ≤ 5. As ℓ is even, we have ℓ = 2 or ℓ = 4. We will reach a contradiction in both cases.
Assume first ℓ = 2, that is, P_1 and P_2 are two paths in G with the same endpoints. Then, by the fact that no vertex on W is important and A is a 2-local search optimum, we have that ω(P_1) ≤ ω(P_2). However, then (A* \ E(P_2)) ∪ E(P_1) is a solution of weight at most the weight of A*, while having a smaller number of edges not belonging to A due to the edge e_2 ∈ A*_H \ A_H on W_2. This is the desired contradiction.
Consider now the more involved case ℓ = 4. Since both P_1 and P_3 are inner paths in the tree A and are edge-disjoint, we define the close endpoint v_1 of P_1 to be the endpoint of P_1 that is closer to P_3 in the tree A. Symmetrically, define the close endpoint v_3 of P_3 to be the endpoint closer to P_1 in A. We similarly define the close endpoints v_2 and v_4 of P_2 and P_4 with respect to the tree A*. We now distinguish two subcases, depending on whether there exists an endpoint v of one of the paths P_i that is not close for the two incident paths.
For the first subcase, by symmetry assume that the common endpoint of P_1 and P_2, denoted v, is different from v_1 and v_2 (see the left panel of Figure 7). Then we observe that (A \ E(P_1)) ∪ E(P_2) is a solution and, by the local search optimality of A, we have ω(P_1) ≤ ω(P_2). However, we then have that (A* \ E(P_2)) ∪ E(P_1) is a solution of weight not larger than the weight of A* that has strictly fewer edges outside A due to the existence of the edge e_2 ∈ A*_H \ A_H on W_2.
For the second subcase, observe that there are two options: either v_i is the common endpoint of P_i and P_{i+1} for i = 1, 2, 3, 4 (with P_5 = P_1), or v_i is the common endpoint of P_i and P_{i−1} for i = 1, 2, 3, 4 (with P_0 = P_4). The arguments are symmetric, so let us assume the first alternative.
The key observation now is that the following are solutions (see the right panel of Figure 7):

    Ā := (A \ (E(P_1) ∪ E(P_3))) ∪ E(P_2) ∪ E(P_4),
    Ā* := (A* \ (E(P_2) ∪ E(P_4))) ∪ E(P_1) ∪ E(P_3).

Consequently, the 2-local search optimality of A implies that ω(P_1) + ω(P_3) ≤ ω(P_2) + ω(P_4), yielding ω(Ā*) ≤ ω(A*). However, Ā* has strictly fewer edges outside A than A* due to the edges e_2 and e_4, giving the desired contradiction.
Since we have reached a contradiction in all subcases, the lemma is proven.
Armed with the above two lemmas, we now conclude the proof of Theorem 6.1 by a discharging
argument.
We set up the initial charge in two steps. First, every vertex and every face receives a charge of −6, while every edge receives a charge of +6. Second, every important vertex v receives an additional charge of 6 deg_H(v) + 7 and every bridge of H receives an additional charge of +6. By Euler's formula, the total charge assigned in the first step is negative, while Lemma 6.2 implies that the second step assigns a total charge of less than

    6 · 14|T| + 7 · 3|T| + 6 · 2|T| = 117|T|.

Consequently, the total charge in the graph is less than 117|T|.
Let us now proceed with the discharging rules.
• Every important vertex v sends a charge of 6 to every incident face.
• Every edge e ∈ E(H) that belongs to both A_H and A*_H sends a charge of +3 to each of the two incident vertices.
• Every edge e ∈ E(H) that belongs to A_H △ A*_H sends a charge of +2 to each of the two incident vertices and a charge of +1 to each of the two incident faces (these are distinct, as an edge of A_H △ A*_H cannot be a bridge of H).
• Every bridge e ∈ E(H) sends the additional charge of +6 to the unique incident face.
Clearly, at the end of the process every edge has charge zero.
We claim that every face f has a nonnegative charge; recall that initially it received a charge of −6. If f is incident to an important vertex or a bridge of H, it received a charge of +6 from it. Otherwise, Lemma 6.3 implies that f is incident to at least 6 edges from A_H △ A*_H, and receives a charge of +1 from each such edge.
Finally, we claim that every vertex v has a strictly positive charge; recall that v, too, initially received a charge of −6. If v is important, then it received an additional charge of 6 deg_H(v) + 7 and sent 6 deg_H(v) to the incident faces, leaving a charge of −6 + 7 = +1.
Hence, we are left with the case when v is not important. Then v received a charge of +2 or +3 from each of its incident edges. Since v is not important, in particular it is not a terminal, and it is of degree at least 3 in H by the construction of H. If v is of degree at least 4, then the incident edges send a charge of at least +8, and the final charge is at least −6 + 8 = +2. Otherwise, when deg_H(v) = 3, observe that exactly two edges incident to v belong to A_H and exactly two edges incident to v belong to A*_H. This implies that v is incident to an edge of H that belongs to both A_H and A*_H, and such an edge sends a charge of +3. Consequently, v receives a charge of at least +3 + 2 + 2 = +7, and the final charge is at least −6 + 7 = +1.
To sum up, we have distributed an initial charge of less than 117|T | in such a manner that every
edge has zero charge, every face has a nonnegative charge, and every vertex has a charge of at least
+1. This implies that H has less than 117|T | vertices, finishing the proof of Theorem 6.1.
6.2 Dynamic programming
We now present the algorithm promised by Theorem 1.3. We use a standard dynamic programming approach based on the combinatorial result of Theorem 6.1.
Let A be a 2-local search optimum; it can be computed in time W · n^{O(1)}, where W is the maximum edge weight. Let A* be a minimum-weight solution that minimizes the number of edges of A* \ A and let H_0 = A ∪ A*. Fix a spherical embedding of G; this naturally induces a spherical embedding of H_0 as well. By Theorems 6.1 and 3.2, there exists an sc-branch decomposition (T, ζ, γ) of H_0 of width O(√|T|).
With every edge e ∈ E(T ) and its noose γ(e), we associate a tuple (T0 , {T1 , T2 }), called henceforth
a partition of the terminals, where T0 is the set of terminals lying on γ(e), and {T1 , T2 } are the
sets of terminals contained in the two parts of the sphere minus γ(e). Since we consider spherical
embeddings in this section, {T1 , T2 } is an unordered pair.
As discussed in Section 4, given A, one can enumerate a relatively small family of candidates for
partitions.
Claim 6.4. One can in time |T|^{O(√|T|)} · n^{O(1)} enumerate a family A of |T|^{O(√|T|)} subsets of T such that for every e ∈ E(T), if (T_0, {T_1, T_2}) is the partition corresponding to e, then T_0 ∪ T_1 and T_0 ∪ T_2 belong to A.
Proof. We apply Lemma 4.1 to the graph A, the terminal set T, and a bound ℓ = O(√|T|).
y
With the family A given by Claim 6.4, we now describe our dynamic programming algorithm. A state is a triple (S, I, R), where S ∈ A, I is a set of O(√|T|) vertices of G, and R is an equivalence relation on I. Clearly, there are n^{O(√|T|)} states and they can be enumerated within time n^{O(√|T|)}.
For every state (S, I, R), we say that a subgraph B is feasible for (S, I, R) if the following holds: S ⊆ V(B), every connected component of B contains at least one vertex of I, and any two vertices u, v ∈ I that are in relation R belong to the same connected component of B. For every state (S, I, R), the algorithm will compute along the way a number of feasible subgraphs and choose as the value B(S, I, R) the one of minimum weight.
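The feasibility condition translates directly into code; the sketch below (names are illustrative) takes the candidate subgraph as the list of its connected components, each given as a set of vertices, and R as a set of pairs.

def feasible(S, I, R, components):
    comp_of = {v: idx for idx, comp in enumerate(components) for v in comp}
    covered = set().union(*components) if components else set()
    return (set(S) <= covered                             # S ⊆ V(B)
            and all(comp & set(I) for comp in components) # each component meets I
            and all(u in comp_of and v in comp_of and comp_of[u] == comp_of[v]
                    for (u, v) in R))                     # R-related vertices together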
For a state (S, I, R) with |S| = O(√|T|), we can compute B(S, I, R) to be the subgraph of minimum weight among all feasible subgraphs by the following direct argument. We iterate over all functions σ : S → I and compute a minimum-weight Steiner forest that connects t to σ(t) for every t ∈ S and that connects every u, v ∈ I that are in relation R. Since there are |T|^{O(√|T|)} functions σ and, for each σ, the question can be phrased as a Steiner forest instance with O(√|T|) connection requests, all computations can be done in time n^{O(√|T|)} by the standard Dreyfus-Wagner dynamic programming algorithm. Furthermore, note that if B is the sought feasible subgraph of minimum weight, then in the case when σ assigns to every t ∈ S a fixed representative σ(t) ∈ I of the connected component of B that contains t, the found Steiner forest is a feasible solution of weight at most ω(B). This finishes the description of the computations for |S| = O(√|T|).
For states (S, I, R) with larger sets S, we proceed as follows. We make |E(T)| = O(n) rounds, in each round computing a number of feasible subgraphs for every state (S, I, R) and choosing as B(S, I, R) the minimum-weight feasible subgraph among all found subgraphs and the subgraph B(S, I, R) from the previous round. In every round, for every state (S, I, R), we iterate over all pairs of states (S_1, I_1, R_1) and (S_2, I_2, R_2) and take B(S_1, I_1, R_1) ∪ B(S_2, I_2, R_2) as a candidate if it is a feasible subgraph for (S, I, R). Clearly, the computations take n^{O(√|T|)} time.
After the computation, we iterate over all states (T, I, I × I) (i.e., with only one equivalence class in R) and choose the minimum-weight subgraph B(T, I, I × I) found. Since a feasible subgraph for a state (S, I, I × I) needs to be connected, the algorithm returns a connected subgraph spanning T. We have already argued the running time bound.
To show the optimality of the returned solution, we show the following claim.
Claim 6.5. Let e ∈ E(T), let (T_0, {T_1, T_2}) be the partition corresponding to the noose γ(e), and let T_i for i = 1, 2 be the connected component of T − e that corresponds to the set T_i of terminals. Furthermore, let A*_i be the subgraph of A* with edges corresponding to the leaves of T_i (i.e., lying on the same side of γ(e) as T_i), let I be the set of vertices of V(A*) that lie on γ(e), and let R_i be the equivalence relation of lying in the same connected component of A*_i.
Then, for i = 1, 2, the subgraph A*_i is a feasible subgraph for the state (T_0 ∪ T_i, I, R_i). Furthermore, after |E(T_i)| rounds of the algorithm, the weight of B(T_0 ∪ T_i, I, R_i) is at most ω(A*_i).
Proof. The fact that A*_i is a feasible subgraph for (T_0 ∪ T_i, I, R_i) follows directly from the definitions. For the second claim, we perform induction on the number of edges of T_i. For the base case, that is, T_i being a single vertex, we have that γ(e) encloses a single edge of H_0, and hence |T_0 ∪ T_i|, |I| ≤ 2. This is covered by the base case of the computation, where B(T_0 ∪ T_i, I, R_i) is a minimum-weight feasible subgraph before the first round of the computations.
For the inductive case, let w_i be the endpoint of e in T_i and let e_1 and e_2 be the two other edges of T incident with w_i. Furthermore, for j = 1, 2, let T^j be the connected component of T − e_j that does not contain w_i, let A*^j be the subgraph of A* consisting of the edges corresponding to the leaves of T^j, let I^j be the set of vertices of A* that lie on γ(e_j), let R^j be the equivalence relation on I^j of lying in the same connected component of A*^j, and let S^j be the set of terminals that lie on γ(e_j) or on the same side as the edges corresponding to the leaves of T^j.
Observe that the edge set A*_i is a disjoint union of A*^1 and A*^2. By the induction hypothesis, after round |E(T^j)| the subgraph B(S^j, I^j, R^j) has weight at most ω(A*^j) for j = 1, 2. A direct check from the definitions shows that B(S^1, I^1, R^1) ∪ B(S^2, I^2, R^2) is feasible for (T_0 ∪ T_i, I, R_i) (this is easy to see from the fact that all three considered states were defined from the behavior of A*, and A*_i is a disjoint union of A*^1 and A*^2). Consequently, since |E(T_i)| > |E(T^j)| for j = 1, 2, after round |E(T_i)| the subgraph B(T_0 ∪ T_i, I, R_i) has weight at most ω(A*^1) + ω(A*^2) = ω(A*_i), finishing the proof of the claim.
y
We conclude the proof of the correctness of the algorithm, and thus the whole proof of Theorem 1.3, as follows. Observe that we may assume that there is an edge e_0 of the graph that is not in A* but is incident with a vertex of A*, for otherwise the whole connected component containing all the terminals must consist only of a minimal tree spanning them. This edge e_0 corresponds to a leaf w ∈ V(T); that is, e_0 = ζ(w). Let e be the edge of T incident with w and let T_1 and T_2 be the connected components of T − e such that {w} = V(T_2) and T_1 = T − w. Note that the set I of vertices of V(A*) that lie on γ(e) is nonempty and has at most 2 elements, by the choice of e_0.
Let (T_0, {T_1, T_2}) be the partition corresponding to the noose γ(e). Note that T_2 = ∅, as γ(e) separates ζ(w) from the rest of H_0. Furthermore, as ζ(w) ∉ A*, we have A*_1 = A*, and A*_1 is a feasible subgraph for the state (T, I, I × I). By Claim 6.5, after |E(T)| = O(n) rounds, the subgraph B(T, I, I × I) is of weight at most ω(A*_1) = ω(A*), and thus of minimum possible weight. This finishes the proof of Theorem 1.3.
7 A lower bound for the Steiner tree problem
In this section we present the proof of Theorem 1.2. More precisely, we prove the following stronger
technical result.
Theorem 7.1. Suppose ETH holds. Then for every γ > 0 there exists a positive real d_γ with lim_{γ→0} d_γ = ∞ such that there is no algorithm solving Planar Steiner Tree with unit weights in time O(2^{γ|T|} · |V(G)|^{d_γ}) for an input instance (G, T).
Note that in the statement of Theorem 7.1, for large γ we may set d_γ < 1, which just trivially excludes sublinear algorithms. The statement becomes meaningful when γ approaches 0, as then it asserts that the degree of the polynomial factor necessarily needs to grow to infinity, assuming ETH.
Observe that Theorem 7.1 in particular implies that there is no algorithm solving Planar Steiner Tree in time O(2^{o(|T|)} · |V(G)|^c) for any constant c, which is the statement of Theorem 1.2, but it does not exclude tradeoffs between the base of the exponent 2^γ and the degree of the polynomial factor. Similar tradeoffs for the size parameterization were described by Tazari [42], so it is conceivable that they might exist also in the case of the stronger parameterization by the number of terminals.
For the proof of Theorem 7.1 we will need the following abstraction for ETH that can be derived
from the Sparsification Lemma [26]. See also [12] for a broader discussion.
Theorem 7.2 (Theorem 14.4 of [12]). Unless ETH fails, there is a constant δ > 0 such that there is no algorithm solving 3SAT in time 2^{δ(n+m)} · (n + m)^{O(1)}, where n and m denote the number of variables and clauses of the input formula, respectively.
Theorem 7.1 will be proved by a reduction from Theorem 7.2, encapsulated in the following lemma.
Lemma 7.3. For every ε > 0 there is an algorithm that, given an instance ϕ of 3SAT, say with n variables and m clauses, constructs an instance (G, T) of the Steiner Tree problem with unit weights and an integer budget k, such that
• G is planar and has O(2^{ε(n+m)} · (n + m)^c) vertices for some universal constant c;
• |T| ≤ Cε^{-1}(n + m) for some universal constant C; and
• ϕ is satisfiable if and only if (G, T) admits a Steiner tree of total weight at most k.
Moreover, the algorithm runs in time linear in the size of the output instance.
Before we proceed to the proof of Lemma 7.3, we first verify that Theorem 7.1 indeed follows by
combining Theorem 7.2 with Lemma 7.3.
Proof of Theorem 7.1 assuming Lemma 7.3. Let δ > 0 be such that there is no algorithm for 3SAT with running time 2^{δ(n+m)} · (n + m)^{O(1)}; such δ exists by Theorem 7.2. We prove that for every γ > 0 there is no algorithm solving Planar Steiner Tree with unit weights in time O(2^{γ|T|} · |V(G)|^{d_γ}) for d_γ = δ²/(4γC), where C is the constant from Lemma 7.3. Note that lim sup_{γ→0} d_γ = ∞.
For the sake of contradiction assume otherwise: there is γ > 0 and an algorithm as above with running time O(2^{γ|T|} · |V(G)|^{d_γ}) for d_γ = δ²/(4γC). Consider the following algorithm for solving 3SAT. Given the input formula ϕ, first construct the instance (G, T) of Planar Steiner Tree together with the budget k using the reduction of Lemma 7.3 for ε = (2γC)/δ. Then apply the hypothetical algorithm for Planar Steiner Tree to verify whether (G, T) admits a Steiner tree of total weight at most k, thereby deciding whether ϕ is satisfiable. Supposing n, m denote the numbers of variables and clauses of ϕ, this algorithm works in time
    O(2^{γ|T|} · |V(G)|^{d_γ}) = O(2^{γCε^{-1}(n+m)} · (2^{ε(n+m)} · (n + m)^c)^{d_γ})
                               = O(2^{(δ/2)(n+m)} · 2^{(δ/2)(n+m)} · (n + m)^{c·d_γ}) = 2^{δ(n+m)} · (n + m)^{O(1)}.
This is a contradiction with the choice of δ.
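The parameter algebra above can be double-checked mechanically; the following snippet (a sanity check we add for the reader, not part of the original argument) verifies with sympy that the choices ε = 2γC/δ and d_γ = δ²/(4γC) turn both exponents into (δ/2)(n + m).

from sympy import symbols, simplify

gamma, C, delta = symbols("gamma C delta", positive=True)
eps = 2 * gamma * C / delta
d_gamma = delta**2 / (4 * gamma * C)

# exponent of the 2^(gamma*|T|) factor, using |T| <= (C/eps)*(n + m)
assert simplify(gamma * C / eps - delta / 2) == 0
# exponent contributed by |V(G)|^(d_gamma) = (2^(eps*(n+m)) * poly)^(d_gamma)
assert simplify(eps * d_gamma - delta / 2) == 0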
We are left with proving Lemma 7.3. In the reduction we use edges with positive integral weights.
Such a weighted edge can be emulated by replacing it with a path of length equal to the weight,
consisting of unit-weight edges. At the end of the construction we argue that this replacement does
not blow up the size of the graph too much, so that we achieve the promised bounds.
We first give some auxiliary gadget constructions.
7.1 Gadgets
For our gadgets we will use seven magnitudes of edge weights, M_1, ..., M_7, defined as M_t = M^{t−1} for some large integer M > 1, to be fixed later. The value of M will be chosen so that it is linear in the total output size. This implies that all edge weights are bounded by a degree-6 polynomial of the total output size, which will be useful for limiting the instance blow-up when reducing to the unit-weight setting.
Connector gadgets. The first construction is a simple connector gadget that passes information without creating a connection in the intended solution. A connector gadget CG consists of 12 vertices, out of which 4 are terminals:
• four interface vertices x^{NW}, x^{NE}, x^{SE}, x^{SW},
• four terminals t^N, t^E, t^S, t^W, and
• four normal vertices u^{NW}, u^{NE}, u^{SE}, u^{SW}.
Moreover, we connect u^{NW} with x^{NW}, t^N, and t^W using edges of the same weight, and similarly for the other three corners; see Figure 8 for a reference.
In the construction to follow, connector gadgets will be combined with other gadgets by identifying
the interface vertices with some other vertices. Thus, the only vertices from a connector gadget that
may have some edges going outside of this gadget are the interface vertices. The following simple
claim explains the optimum behavior in the connector gadget. Its proof follows by a straightforward
case study, so we leave it to the reader.
Lemma 7.4. Suppose that H is a subgraph of the connector gadget CG such that in H each terminal is connected to some interface vertex. Then H has at least 6 edges. Moreover, if H has exactly 6 edges, then H either consists of all edges incident to u^{NW} and to u^{SE}, or of all edges incident to u^{NE} and to u^{SW}.
In particular, Lemma 7.4 shows that any such optimum connection within a connector gadget does not connect any of the northern interfaces x^{NW} and x^{NE} with any of the southern interfaces x^{SW} and x^{SE}. In future constructions, we will use connector gadgets where the weight of each edge is M_7 (the largest magnitude). Note that then Lemma 7.4 asserts that the minimum weight of a subgraph connecting each terminal to any interface is 6M_7, and the only such subgraphs having this weight are as described in the lemma statement.
Figure 8: Connector gadget CG. Terminals are depicted by yellow squares.
Verification gadgets. In a nutshell, our reduction will pass information about the chosen variable
assignment via multiple long west-east paths, arranged one above the other. Whether each clause is
satisfied will be checked by a column of verification gadgets, where the consistency between them
will be verified by connecting them using connector gadgets along each column. See Figure 11 to
support this intuition. While this whole grid of verification gadgets is not defined yet, we now
describe one verification gadget and its functionality.
The verification gadget of order N , denoted VGN , is constructed as follows; see Figure 9 for
reference. First, take an N × N grid composed of vertices v[i, j] for i, j ∈ [N ], where there is an
edge between v[i, j] and v[i0 , j 0 ] iff |i − i0 | + |j − j 0 | = 1. Place this grid in the plane so that vertices
v[1, ·] form the west side, vertices v[·, 1] form the north side, vertices v[N, ·] form the east side, and
vertices v[·, N ] form the south side. Remove all vertical edges in the upper-right triangle; that is,
remove the edge between v[i, j] and v[i, j − 1] whenever i ≥ j. Next, add one vertex w connected to
all vertices from the south side. Finally, for each i ∈ [N ], add vertices y[i] and z[i], and connect y[i]
with v[1, i] and z[i] with v[N, i]. The vertices w, y[i], z[i] will be called the interface vertices of the
verification gadget VGN , and similarly as in the case of the connector gadgets, verification gadgets
will be connected to the rest of the construction by identifying interfaces with some vertices from
the outside of the gadget.
Figure 9: Verification gadget of order N , VGN . An intended solution connecting y[i], z[i], and w is
highlighted in blue.
This defines the shape of the verification gadget; we now assign weights to the edges. From now on we assume that 10N < M for all considered verification gadgets, as this will be the case for the final choice of M.
• Horizontal (i.e., west-east) edges in the grid have weight M_4 each.
• Vertical (i.e., north-south) edges in the grid have weight M_3 each.
• For each i ∈ [N], the edge between y[i] and v[1, i] has weight iM_2.
• For each i ∈ [N], the edge between z[i] and v[N, i] has weight iM_3.
• For each i ∈ [N], the edge between w and v[i, N] has weight M_5 − iM_2.
This concludes the construction of the verification gadget. The intuition is that in the intended
solution to the constructed instance, inside each verification gadget we will need to connect three
interfaces: one among interfaces y[i] for i ∈ [N ], one among interfaces z[j] for j ∈ [N ], and the
interface w. The following lemmas describe the behavior of the gadget.
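As a cross-check of the description above, the following sketch builds VG_N as a weighted graph (using networkx purely for illustration; the vertex names are ours, and M2, ..., M5 are the magnitudes fixed at the start of Section 7.1).

import networkx as nx

def verification_gadget(N, M2, M3, M4, M5):
    G = nx.Graph()
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            if i < N:            # horizontal (west-east) grid edge
                G.add_edge(("v", i, j), ("v", i + 1, j), weight=M4)
            if j > 1 and i < j:  # vertical edge; edges with i >= j are removed
                G.add_edge(("v", i, j), ("v", i, j - 1), weight=M3)
    for i in range(1, N + 1):
        G.add_edge("w", ("v", i, N), weight=M5 - i * M2)  # fan below the south side
        G.add_edge(("y", i), ("v", 1, i), weight=i * M2)
        G.add_edge(("z", i), ("v", N, i), weight=i * M3)
    return G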
Lemma 7.5. For each i ∈ [N ], the verification gadget VGN contains a tree of total weight M5 +
(N − 1)M4 + (N − 1)M3 that connects y[i], z[i], and w.
Proof. Take the union of paths (y[i], v[1, i], . . . , v[N, i], z[i]) and (v[i, i], . . . , v[i, N ], w). It is easy to
verify that its weight is exactly M5 + (N − 1)M4 + (N − 1)M3 .
Lemma 7.6. Suppose that H is a subgraph of the verification gadget VGN such that H is a
(connected) tree and contains w, y[i] for some i ∈ [N ], and z[j] for some j ∈ [N ]. Then the total
weight of the edges of H is at least M5 + (N − 1)M4 + (N − 1)M3 . Furthermore, if this total weight
is equal to M5 + (N − 1)M4 + (N − 1)M3 , then i = j and H contains the edge between w and v[i, N ].
Proof. Suppose H is such a subtree, and suppose the total weight of H is at most M_5 + (N − 1)M_4 + (N − 1)M_3. First, observe that M_5 + (N − 1)M_4 + (N − 1)M_3 < (3/2)M_5, but every edge of VG_N incident to w has weight at least (3/4)M_5. Therefore, H contains at most one edge incident to w. As H has to connect w with y[i] and z[j], it follows that H contains exactly one edge incident to w, say wv[k, N] for some k ∈ [N]. This edge has weight M_5 − kM_2, which means that the other edges of H have weight at most (N − 1)M_4 + (N − 1)M_3 + kM_2 < (N − 1)M_4 + (N − 1)M_3 + (1/4)M_3.
Since H connects y[i] with z[j], and H has only one edge incident to w, it follows that H contains the edges y[i]v[1, i] and v[N, j]z[j], which have weights iM_2 and jM_3 respectively, as well as at least one edge between any pair of consecutive columns of the grid. The abovementioned edges have weight (N − 1)M_4 + iM_2 + jM_3 in total. Since (N − 1)M_3 + (1/4)M_3 < (1/2)M_4, we infer that H cannot contain any more edges of weight M_4, that is, horizontal edges of the grid. We infer that H has exactly one edge between each pair of consecutive columns of the grid.
Since in the upper-right triangle there are no vertical edges, it follows that H has to contain the
whole path (v[j, j], v[j + 1, j], . . . , v[N, j]). Moreover, since H connects w with y[i] and z[j], and
wv[k, N ] is the only edge of H incident to w, H actually connects v[k, N ] with y[i] and z[j]. Observe
that, again due to the lack of vertical edges in the upper right triangle, if we had that k > j, then
at least one additional horizontal edge of the grid would need to be used to connect v[k, N ] with
v[j, j], between two columns of the grid where one edge on the path (v[j, j], v[j + 1, j], . . . , v[N, j])
was already picked. This would be a contradiction with the constraint on the total weight of H, so
we infer that k ≤ j.
Now, in order to connect v[k, N ] with y[i] and z[j] we need to use at least (N − 1) − min(i, j)
vertical edges of the grid, each of which has weight M3 . So in total, we see that the weight of H has
to be at least
[(N − 1)M4 + iM2 + jM3 ] + ((N − 1) − min(i, j)) · M3 + M5 − kM2 ,
where the consecutive terms come from horizontal edges, vertical edges, and the edge wv[k, N]. After rearranging, this is equal to

    M_5 + (N − 1)M_4 + (N − 1)M_3 + (j − min(i, j))M_3 + (i − k)M_2.
If we had j > i, then j − min(i, j) > 0, and since M3 > 10N · M2 , the whole sum would be strictly
larger than M5 + (N − 1)M4 + (N − 1)M3 . Hence j ≤ i and j − min(i, j) = 0. On the other
hand, we have that k ≤ j, hence in fact k ≤ j ≤ i. So in order to have the total weight at most
M5 + (N − 1)M4 + (N − 1)M3 , we need to have k = j = i. We infer that no tree connecting y[i],
z[j], and w can have weight smaller than M5 + (N − 1)M4 + (N − 1)M3 , and for any tree of exactly
this weight we need to have i = j and the edge wv[i, N ] included in the tree.
Suppose we have an arbitrary subset S ⊆ [N]. We define the S-restriction of the verification gadget VG_N as follows: take VG_N and remove all edges between w and v[i, N] for i ∉ S. From Lemmas 7.5 and 7.6 it follows that the set of subtrees of VG_N connecting y[i] for some i ∈ [N], z[j] for some j ∈ [N], and w contains no trees of weight less than M_5 + (N − 1)M_4 + (N − 1)M_3. Moreover, when VG_N is restricted by S ⊆ [N], such a tree of weight exactly M_5 + (N − 1)M_4 + (N − 1)M_3 exists if and only if i = j and i ∈ S.
Paired verification gadgets. We will use verification gadgets in pairs, called paired verification gadgets. Such a paired verification gadget of order N, denoted PVG_N, consists of two verification gadgets VG^1_N and VG^2_N of order N; see Figure 10 for reference. We will follow the convention that vertices in VG^1_N and VG^2_N are named as when describing one verification gadget, with the index of the gadget (1 or 2) in the superscript; e.g., v^1[i, j] denotes the vertex of the grid in the first gadget in the ith column and jth row. Place VG^1_N and VG^2_N next to each other, and for each i ∈ [N], add an edge between z^1[i] and y^2[i] with weight M_5 − iM_3 − iM_2. Further, for each i ∈ [N] add new interface vertices p[i] and q[i], connected as follows: p[i] is adjacent to y^1[i] via an edge of weight iM_1, while q[i] is adjacent to z^2[i] via an edge of weight M_2 − iM_1. The interfaces of the paired verification gadget are the vertices p[·], q[·], and w^1, w^2.
Figure 10: The paired verification gadget of order N. Old interfaces y^t[·] and z^t[·] are depicted in gray. An intended solution connecting p[i], q[i], and w^1 is highlighted in blue.
The following lemmas explain the optimum behavior in a paired verification gadget.
Lemma 7.7. For each i ∈ [N] and each t ∈ {1, 2}, the paired verification gadget PVG_N contains a tree of total weight 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2 that connects p[i], q[i], and w^t.
Proof. Take the union of the paths (p[i], y^1[i], v^1[1, i], ..., v^1[N, i], z^1[i], y^2[i], v^2[1, i], ..., v^2[N, i], z^2[i], q[i]) and (v^t[i, i], ..., v^t[i, N], w^t). It is easy to verify that its weight is exactly 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2.
Lemma 7.8. Suppose that H is a subgraph of the paired verification gadget PVG_N such that H is a (connected) tree and contains w^t for some t ∈ {1, 2}, p[i] for some i ∈ [N], and q[j] for some j ∈ [N]. Then the total weight of the edges of H is at least 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2. Furthermore, if this total weight is equal to 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2, then i = j and H contains the edge between w^t and v^t[i, N].
Proof. Suppose that the total cost of H is at most 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2. Obviously, H has to contain the edges p[i]y^1[i] and z^2[j]q[j], which have weight M_2 + (i − j)M_1 in total, so in particular H connects y^1[i], z^2[j], and w^t. Observe further that H has to contain the edge z^1[k]y^2[k] for some k ∈ [N], which has weight M_5 − kM_3 − kM_2 > (3/4)M_5, as well as at least one edge incident to w^t, which has weight at least M_5 − N·M_2 > (3/4)M_5. Thus, these two edges alone have weight larger than (3/2)M_5. If H contained at least one more edge between vertices y^2[·] and z^1[·], its total weight would be larger than (9/4)M_5, so in particular larger than 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2. Hence, z^1[k]y^2[k] is the only edge between VG^1_N and VG^2_N included in H.
Let H^s be the restriction of H to the verification gadget VG^s_N, for s = 1, 2. Recall that H connects p[i], q[j], and w^t; from now on suppose that t = 1, since the proof for the second case is symmetric. Since z^1[k]y^2[k] is the only edge between the gadgets included in H, it follows that H^1 connects y^1[i], z^1[k], and w^1, whereas H^2 connects y^2[k] with z^2[j]. By Lemma 7.6, the total weight of H^1 is at least M_5 + (N − 1)M_4 + (N − 1)M_3, and equality can occur only if i = k. Moreover, since H^2 connects y^2[k] with z^2[j], it must contain at least N − 1 horizontal edges and at least |j − k| vertical edges of the grid in the second gadget. It follows that the total weight of H^2 is at least
kM2 + jM3 + (N − 1)M4 + |k − j|M3 = (N − 1)M4 + kM3 + kM2 + (j − k)M3 + |j − k|M3 .
It follows that this value is never smaller than (N − 1)M_4 + kM_3 + kM_2, and equality holds only if k ≥ j. Further, if k < j, then this value is larger by at least M_3 than (N − 1)M_4 + kM_3 + kM_2.
We conclude that the total weight of H has to be at least

    M_5 + (N − 1)M_4 + (N − 1)M_3 + (M_5 − kM_3 − kM_2) + ((N − 1)M_4 + kM_3 + kM_2) + M_2 + (i − j)M_1,

where the consecutive terms come from H^1, the edge z^1[k]y^2[k], H^2, and the edges p[i]y^1[i] and z^2[j]q[j]. This is equal to

    2M_5 + (2N − 2)M_4 + (N − 1)M_3 + M_2 + (i − j)M_1.
On the other hand, the total weight of H is assumed to be at most 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2; this may be smaller than or equal to the value above only if i ≤ j, and then it is smaller by |(i − j)M_1|. Observe now that |(i − j)M_1| < M_3, so the contribution from H^2 cannot be larger by M_3 or more than the value (N − 1)M_4 + kM_3 + kM_2 we counted. As we argued, this implies that k ≥ j. Further, recall that the contribution from H^1 can be equal to M_5 + (N − 1)M_4 + (N − 1)M_3 only if i = k and H^1 uses the edge w^1v^1[i, N]. All weights within the first gadget VG^1_N are multiples of M_2, so if this contribution is larger than M_5 + (N − 1)M_4 + (N − 1)M_3, then it is larger by at least M_2. However, |(i − j)M_1| < M_2 as well, so we infer that i = k and H^1 uses the edge w^1v^1[i, N]. Since i = k and k ≥ j, we have i ≥ j, so i − j ≥ 0. It follows that the term (i − j)M_1 is nonnegative, and it is zero only if i = j. So all in all, the weight of H has to be at least 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2, and the equality can hold if and only if i = j = k and H uses the edge w^1v^1[i, N].
Similarly as with single verification gadgets, we can restrict a paired verification gadget by a pair of subsets S_1, S_2 ⊆ [N]. This means that for each i ∈ [N] \ S_1 we remove the edge w^1v^1[i, N] and for each i ∈ [N] \ S_2 we remove the edge w^2v^2[i, N]. By Lemmas 7.7 and 7.8, this restricts the space of solutions of the optimum weight 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2 to trees connecting p[i], q[i], and w^1 for i ∈ S_1, and trees connecting p[i], q[i], and w^2 for i ∈ S_2.
7.2 Partitioning variables into groups and the intuition
For simplicity and without loss of generality, by making ε slightly smaller we assume that ε = 1/L for some fixed integer L. Let ϕ be the input formula, let V and C denote the variable and clause sets of ϕ, respectively, and let n = |V| and m = |C|. By adding dummy variables if necessary, we can assume without loss of generality that n is divisible by L.
We partition the variables into L groups V_1, ..., V_L, of size n/L each. For each t ∈ [L], let Λ_t be the set of all variable assignments for the group V_t, that is, functions from V_t to {T, F}. Note that |Λ_t| = 2^{|V_t|} = 2^{n/L} for each t ∈ [L].
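The grouping step is elementary; a short illustrative sketch (the names are ours) splits the variables into L equal groups and enumerates the assignment sets Λ_t:

from itertools import product

def group_assignments(variables, L):
    k = len(variables) // L                  # n is divisible by L
    groups = [variables[t * k:(t + 1) * k] for t in range(L)]
    lambdas = [[dict(zip(group, bits))
                for bits in product([True, False], repeat=k)]
               for group in groups]          # |Lambda_t| = 2^(n/L)
    return groups, lambdas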
We now describe the intuition behind the rest of the construction; see Figure 11 for reference. We create one root terminal r placed at the very west of the whole construction. Then, for each variable group V_t we create a row consisting of 4m verification gadgets of order 3 · 2^{n/L}, grouped into m columns with a quadruple of 4 verification gadgets per column in each row. These four verification gadgets are paired into two paired verification gadgets. The western pair is placed normally, as defined in the previous section, so that its interfaces w^1 and w^2 point southward. The eastern pair is rotated by 180° so that its interfaces point northward. The western and eastern pairs are combined naturally within each quadruple, but the connections between quadruples will be slightly more complicated, for technical reasons that will become clear later. At the eastern end of each row we create one terminal. The columns of quadruples of verification gadgets are then connected via connector gadgets as follows: identify the northern interfaces of the connector gadget with the (southbound) interfaces w^t of the western pair of the northern quadruple, and identify the southern interfaces of the connector gadget with the (northbound) interfaces w^t of the eastern pair of the southern quadruple.
Since each edge within a connector gadget is far more expensive than all the other edges in total, within each connector gadget we need to pick one of two optimum solutions. This enables coordination of choices throughout columns without creating vertical paths, and the optimum solution must send one horizontal path through each row that "collects" terminals from neighboring connector gadgets on the way. The forced optimum behavior within verification gadgets ensures that these paths are indeed horizontal: within each row they need to choose the same index i within each verification gadget. The choice of this index i within row t corresponds to choosing the variable assignment for group V_t from Λ_t. The satisfaction of clauses is then verified by appropriately restricting each verification gadget, so that each column of quadruples of verification gadgets is responsible for checking one clause. Here we use the one-bit information transported via connector gadgets to implement this check.
We proceed with implementing this plan formally.
7.3 Construction
We now give the full construction. For simplicity of presentation, we aim at constructing a graph of size bounded by O(2^{Kε(n+m)} · (n + m)^c) for some universal constants K and c, because at the very end we can rescale ε by the factor K; that is, we actually apply the whole construction for ε/K instead of ε. Denote N = 3 · 2^{n/L}, which is equal to 3|Λ_t| for all t = 1, ..., L. For the embedding of the construction in the plane, we refer to Figure 11.
Figure 11: The whole construction for m = 3, L = 3, and n/L = 2. A possible solution is highlighted in blue. It corresponds to a variable assignment where c_1 is satisfied by a variable from V_3, c_2 is satisfied by a variable from V_1, and c_3 is satisfied by a variable from V_2.
First create a terminal r, and a terminal h_t for each t ∈ [L].
For each t ∈ [L] and each j ∈ [m], create two paired verification gadgets of order N, denoted PVG^W_{t,j} and PVG^E_{t,j}. We will follow the convention that the interface vertices of the gadget PVG^W_{t,j} will be denoted by p^W_{t,j}[i] for i ∈ [N], q^W_{t,j}[i] for i ∈ [N], and w^{W,1}_{t,j}, w^{W,2}_{t,j}; similarly for the gadget PVG^E_{t,j}. For each i ∈ [N], identify the vertices q^W_{t,j}[i] and q^E_{t,j}[N + 1 − i].
For each t ∈ [L], j ∈ [m], and s ∈ [2^{n/L}] create a vertex g_{t,j}[s]. Connect each vertex g_{t,j}[s] with the following vertices via edges of weight M_6:
• p^W_{t,j}[3s − ℓ] for ℓ = 0, 1, 2, and
• p^E_{t,j−1}[N + 1 − (3s − ℓ)] for ℓ = 0, 1, 2, unless j = 1.
Moreover, connect the vertices g_{t,1}[s] to r via edges of weight M_6. Also, for each t ∈ [L] connect h_t to all vertices p^E_{t,m}[i] for i ∈ [N] via edges of weight M_6.
For each t ∈ [L − 1] and each j ∈ [m] create a connector gadget CG_{t,j}. We will follow the convention that the interface vertices of this gadget will be denoted by x^{NW}_{t,j}, x^{NE}_{t,j}, x^{SE}_{t,j}, and x^{SW}_{t,j}. We identify:
• x^{NW}_{t,j} with w^{W,1}_{t,j};
• x^{NE}_{t,j} with w^{W,2}_{t,j};
• x^{SE}_{t,j} with w^{E,1}_{t+1,j}; and
• x^{SW}_{t,j} with w^{E,2}_{t+1,j}.
Moreover, for each j ∈ [m], we make the vertices w^{E,1}_{1,j} and w^{W,2}_{L,j} into terminals.
Finally, we remove some edges from the constructed graph by restricting the paired verification gadgets in order to encode the input formula ϕ. For each variable group V_t, choose an arbitrary enumeration (λ_{t,1}, ..., λ_{t,2^{n/L}}) of Λ_t. Moreover, let (c_1, ..., c_m) be an arbitrary enumeration of C.
For each t ∈ [L] and j ∈ [m] we perform the following restrictions. Suppose R ⊆ [2n/L ] is the subset
of all indices s for which the assignment λt,s satisfies the formula cj ; that is, there is a variable in Vt
that appears in cj and setting it according to λt,s satisfies cj Define
W,1
St,j
= {3s − 1 : s ∈ [2n/L ]}
and
W,2
St,j
= {3s : s ∈ R} ∪ {3s − 2 : s ∈ [2n/L ]},
W,1
W,2
and restrict the paired verification gadget PVGW
t,j by the pair (St,j , St,j ). Similarly, define
E,1
St,j
= {(N +1)−3s : s ∈ R}∪{(N +1)−(3s−1) : s ∈ [2n/L ]}
and
E,2
St,j
= {(N +1)−(3s−2) : s ∈ [2n/L ]},
E,2
E,1
, St,j
).
and restrict the paired verification gadget PVGEt,j by the pair (St,j
We set M to be 10 times larger than the total number of edges used in the construction; this
automatically defines the weights M_1 through M_7. Finally, we set the budget for the intended
Steiner tree to be
k = 6m(L − 1)M_7 + (2m + 1)LM_6 + 2mL(2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2).
This finishes the construction. The following claim follows directly from the construction, where the
plane embedding is as in Figure 11.
Lemma 7.9. The obtained instance (G, T) of Steiner Tree is planar, and we have
|G| ≤ O(2^{2εn} · (n + m))   and   |T| ≤ 6ε^{−1}(n + m).
Moreover, the instance (G, T ) can be constructed in time linear in its size.
7.4 Correctness
Hence, we are left with verifying the following claim.
Lemma 7.10. The instance (G, T ) admits a Steiner tree of total weight at most k if and only if ϕ
is satisfiable.
The remainder of this section is devoted to the proof of Lemma 7.10. We split the proof into
two implications: we first show that if ϕ is satisfiable then (G, T ) has a Steiner tree of appropriate
weight, and next we show the converse implication.
From a satisfying assignment to a Steiner tree. Suppose λ : V → {T, F} is an assignment
that satisfies ϕ. For each t ∈ [L] pick the index s_t ∈ [2^{n/L}] such that λ_{t,s_t} is the restriction of λ to
V_t. Next, for each j ∈ [m] arbitrarily pick an index t_j such that a variable belonging to V_{t_j} satisfies
c_j under λ.
We now construct the tree as follows. For each t ∈ [L] and j ∈ [m], pick i_{t,j} = 3·s_t − a, where
a = 1 if t < t_j, a = 2 if t > t_j, and a = 0 if t = t_j. Then take into the solution the following:
• the subtree of PVG^W_{t,j} connecting p^W_{t,j}[i_{t,j}], q^W_{t,j}[i_{t,j}], and (w^{W,a}_{t,j} if a ≠ 0 and w^{W,2}_{t,j} if a = 0) with
total weight 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2; and
• the subtree of PVG^E_{t,j} connecting q^E_{t,j}[(N + 1) − i_{t,j}], p^E_{t,j}[(N + 1) − i_{t,j}], and (w^{E,a}_{t,j} if a ≠ 0 and
w^{E,1}_{t,j} if a = 0) with total weight 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2.
These subtrees exist by Lemma 7.7. Note also that q^W_{t,j}[i_{t,j}] = q^E_{t,j}[(N + 1) − i_{t,j}], so the union of
these two trees is a tree connecting p^W_{t,j}[i_{t,j}], p^E_{t,j}[(N + 1) − i_{t,j}], (w^{W,a}_{t,j} if a ≠ 0 and w^{W,2}_{t,j} if a = 0),
and (w^{E,a}_{t,j} if a ≠ 0 and w^{E,1}_{t,j} if a = 0).
Further, for each t ∈ [L] and each j ∈ [m], add to the solution the edge between g_{t,j}[s_t] and
p^W_{t,j}[i_{t,j}], and the edge between g_{t,j}[s_t] and p^E_{t,j−1}[(N + 1) − i_{t,j−1}] provided j > 1, and between g_{t,1}[s_t]
and r for j = 1. All these edges exist since i_{t,j} = 3·s_t − a for some a ∈ {0, 1, 2}. Also, for each
t ∈ [L] add the edge between p^E_{t,m}[(N + 1) − i_{t,m}] and h_t. Finally, for each connector gadget CG_{t,j},
for t ∈ [L − 1] and j ∈ [m], we add the solution of weight 6M_7 connecting terminals within CG_{t,j} to
x^{NW}_{t,j} and x^{SE}_{t,j} in case t < t_j, and to x^{NE}_{t,j} and x^{SW}_{t,j} in case t ≥ t_j. It is straightforward to verify that
the solution obtained in this manner has total weight exactly k and connects all terminals to the
root terminal r.
From a Steiner tree to a satisfying assignment. Suppose the instance (G, T ) admits a
Steiner tree F , treated as a subset of edges of G, of total weight at most k. The edges of F can be
partitioned into three subsets:
• edges F_CG that reside in connector gadgets CG_{t,j} for t ∈ [L − 1] and j ∈ [m];
• edges F_PVG that reside in paired verification gadgets PVG^W_{t,j} and PVG^E_{t,j} for t ∈ [L] and j ∈ [m];
• the remaining edges F_rest.
We now investigate these subsets, basing the analysis on the fact that F has total weight at most k.
First, we consider connector gadgets CG_{t,j} for all t ∈ [L − 1] and j ∈ [m]. Since each terminal
within CG_{t,j} has to be connected to r by F, it follows that F_CG ∩ E(CG_{t,j}) connects each terminal
within CG_{t,j} to one of its interface vertices. By Lemma 7.4, this means that each set F_CG ∩ E(CG_{t,j})
consists of at least 6 edges, implying that the total cost of F_CG is at least 6m(L − 1)M_7. If at least one
more edge of weight M_7 was taken into F_CG, then its total weight would be at least 6m(L − 1)M_7 + M_7,
which is larger than k; this would be a contradiction. We infer that within each connector gadget
CG_{t,j}, the set F_CG selects exactly 6 edges, which connect each terminal of this gadget to one of its
interfaces. By Lemma 7.4, we have that within each CG_{t,j}, the set F_CG either connects two terminals
to x^{NW}_{t,j} and two to x^{SE}_{t,j}, or it connects two terminals to x^{NE}_{t,j} and two to x^{SW}_{t,j}. Call connector gadgets
conforming to the former case of type 1, while connector gadgets conforming to the latter case shall
be called of type 2. Note that in particular, F_CG ∩ CG_{t,j} does not connect any northern interface of
CG_{t,j} with any southern interface.
We thus have that the total weight of F_CG is exactly 6m(L − 1)M_7, so the total weight of
F_rest ∪ F_PVG is at most (2m + 1)LM_6 + 2mL(2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2). Observe
now that for each terminal h_t for t ∈ [L], the tree F has to contain a path from r to h_t. By the
construction and the above discussion of the behavior of F within each connector gadget, the path
connecting r with h_t must be contained in the union of paired verification gadgets PVG^W_{t,j}, PVG^E_{t,j}
for j ∈ [m], edges incident to h_t, and edges incident to vertices g_{t,j}[s] for j ∈ [m] and s ∈ [2^{n/L}]. In
particular, for each t ∈ [L] we have that F_rest has to contain:
• at least one edge incident to h_t;
• at least one edge between r and a vertex g_{t,1}[s] for some s ∈ [2^{n/L}];
• for each j ∈ [m] \ {1}, at least one edge between a vertex g_{t,j}[s] for some s ∈ [2^{n/L}] and a
vertex p^E_{t,j−1}[i] for some i ∈ [N]; and
• for each j ∈ [m], at least one edge between a vertex g_{t,j}[s] for some s ∈ [2^{n/L}] and a vertex
p^W_{t,j}[i] for some i ∈ [N].
Above we have counted at least (2m + 1)L distinct edges within F_rest, each of weight M_6. Since
the total weight of F_rest ∪ F_PVG is at most (2m + 1)LM_6 + 2mL(2M_5 + 2(N − 1)M_4 + (N −
1)M_3 + M_2), which is smaller than (2m + 1)LM_6 + M_6, we infer that F_rest contains exactly one edge
of each of the above-mentioned kinds, and does not contain any other edges outside of the connector
and paired verification gadgets. In particular, the total weight of F_rest is exactly (2m + 1)LM_6, and
for each t ∈ [L] and j ∈ [m] we can choose an index s_{t,j} ∈ [2^{n/L}] so that F_rest contains:
• an edge between g_{t,j}[s_{t,j}] and p^E_{t,j−1}[(N + 1) − (3s_{t,j} − a)] for some a ∈ {0, 1, 2} (or between
g_{t,1}[s_{t,1}] and r if j = 1); and
• an edge between g_{t,j}[s_{t,j}] and p^W_{t,j}[3s_{t,j} − a] for some a ∈ {0, 1, 2}.
Since the total weight of F_rest is exactly (2m + 1)LM_6, we infer that the total weight of F_PVG
is at most 2mL(2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2). Observe now that the edges of F_PVG
reside in 2mL paired verification gadgets PVG^W_{t,j} and PVG^E_{t,j} for t ∈ [L] and j ∈ [m]. By the already
established shape of the sets F_CG and F_rest, the intersection of F_PVG with each gadget PVG^W_{t,j} has to be a
tree that connects an interface p^W_{t,j}[i] for some i ∈ [N], an interface q^W_{t,j}[i′] for some i′ ∈ [N], and one
of the interfaces w^{W,1}_{t,j}, w^{W,2}_{t,j}. The latter is because two terminals from the adjacent connector gadget
CG_{t,j} have to be connected to r via PVG^W_{t,j}, or when t = L, the terminal w^{W,2}_{L,j} has to be connected
to r. By Lemma 7.8 this implies that the weight of the intersection of F_PVG with PVG^W_{t,j} has to be
at least 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2. A symmetric analysis yields the same conclusion
for the gadget PVG^E_{t,j}. Since there are 2mL such gadgets in total and the weight of F_PVG is at most
2mL(2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2), we infer that within each gadget PVG^W_{t,j} and PVG^E_{t,j},
the set F_PVG has to select edges of total weight exactly 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2.
Furthermore, there are no more edges in F than all the ones counted in the above analysis.
By Lemma 7.8, this means that for each t ∈ [L] and each j ∈ [m] we can select an index i_{t,j} ∈ [N]
so that F_PVG ∩ PVG^W_{t,j} connects p^W_{t,j}[i_{t,j}], q^W_{t,j}[i_{t,j}], and either w^{W,1}_{t,j} (in case t < L and CG_{t,j} is of
type 1) or w^{W,2}_{t,j} (in case t = L or CG_{t,j} is of type 2). Also, the index i_{t,j} has to be in the respective
restricting set S^{W,1}_{t,j} or S^{W,2}_{t,j}. Similarly, we can select an index i′_{t,j} ∈ [N] so that F_PVG ∩ PVG^E_{t,j}
connects p^E_{t,j}[(N + 1) − i′_{t,j}], q^E_{t,j}[(N + 1) − i′_{t,j}], and either w^{E,1}_{t,j} (in case t = 1 or CG_{t−1,j} is of type 1)
or w^{E,2}_{t,j} (in case t > 1 and CG_{t−1,j} is of type 2). Again, the index i′_{t,j} has to be in the respective
restricting set S^{E,1}_{t,j} or S^{E,2}_{t,j}.
Now observe that in order for the edges of F within each row of the construction to
contain a path from r to the respective h_t, we need to have the following equalities of indices for all
t ∈ [L]:
• i_{t,j} = i′_{t,j} for all j ∈ [m];
• s_{t,j} = s_{t,j′} for all j, j′ ∈ [m] (henceforth we denote this common value by s_t);
• i_{t,j} = 3·s_t − a_{t,j} for some a_{t,j} ∈ {0, 1, 2}.
For t ∈ [L], let λ[t] = λ_{t,s_t} ∈ Λ_t be the variable assignment that corresponds to the index s_t ∈ [2^{n/L}].
Further, let λ be the variable assignment for ϕ obtained by taking the union of assignments λ[t] for
t ∈ [L]. We claim that λ is a satisfying assignment for ϕ. To this aim, we examine the properties of
the offsets a_{t,j} ∈ {0, 1, 2}.
Claim 7.11. If a_{t,j} = 0 for some t ∈ [L] and j ∈ [m], then λ[t] satisfies c_j.
Proof. Since a_{t,j} = 0, we have that i_{t,j} = 3s_t. By the construction, if λ[t] = λ_{t,s_t} did not satisfy c_j,
then neither would i_{t,j} be included in S^{W,1}_{t,j} ∪ S^{W,2}_{t,j}, nor would (N + 1) − i_{t,j} be included in S^{E,1}_{t,j} ∪ S^{E,2}_{t,j}.
As we argued, from Lemma 7.8 it would then follow that the intersection of F_PVG with PVG^W_{t,j} has
weight more than 2M_5 + 2(N − 1)M_4 + (N − 1)M_3 + M_2, and the same holds also for PVG^E_{t,j}.
This is a contradiction.
y
Claim 7.12. For all t ∈ [L − 1] and j ∈ [m], the pair (a_{t,j}, a_{t+1,j}) belongs to the following set:
{(1, 1), (1, 0), (0, 2), (2, 2)}. Moreover, a_{1,j} ≠ 2 and a_{L,j} ≠ 1 for all j ∈ [m].
Proof. Suppose first that a_{t,j} = 1. Then i_{t,j} = 3s_t − 1, and this number is included only in S^{W,1}_{t,j},
and not in S^{W,2}_{t,j}. It follows that F ∩ PVG^W_{t,j} connects the interface w^{W,1}_{t,j} = x^{NW}_{t,j} to r, which implies
that the connector gadget CG_{t,j} must be of type 1 in order to have its terminals connected to r. In
particular, F ∩ PVG^E_{t+1,j} has to connect w^{E,1}_{t+1,j} = x^{SE}_{t,j} to r. By the construction of the restricting set
S^{E,1}_{t+1,j}, this implies that i_{t+1,j} = 3s_{t+1} or i_{t+1,j} = 3s_{t+1} − 1, which means that a_{t+1,j} ∈ {0, 1}. We
can analogously prove that if a_{t,j} ∈ {0, 2}, then a_{t+1,j} = 2.
To see that a_{1,j} ≠ 2 and a_{L,j} ≠ 1 for all j ∈ [m], recall that both w^{E,1}_{1,j} and w^{W,2}_{L,j} are terminals.
These terminals need to be connected to r. If we had a_{1,j} = 2 then we would have i_{1,j} = 3s_1 − 2, but
the value (N + 1) − (3s_1 − 2) is not included in the restricting set S^{E,1}_{1,j}; this would be a contradiction.
The case a_{L,j} = 1 leads to a contradiction in a symmetric manner.
y
By Claim 7.12, for each j ∈ [m] the sequence (a_{1,j}, a_{2,j}, . . . , a_{L,j}) has the following structure:
first a (possibly empty) prefix of entries 1, then a single entry 0, and then a (possibly empty) suffix
of entries 2. In particular, for each j ∈ [m] there exists some t_j ∈ [L] such that a_{t_j,j} = 0. By
Claim 7.11, this implies that λ[t_j] satisfies the clause c_j. Since j was taken arbitrarily, we conclude
that λ satisfies ϕ.
Removing the weights and wrapping up the proof. All in all, we have obtained an instance
(G, T ) with the following properties:
• |G| ≤ O(2^{2εn} · (n + m));
• edges in G have weights bounded by O(|G|^7);
• |T| ≤ 6ε^{−1}(n + m); and
• (G, T ) admits a Steiner tree of total weight at most k if and only if ϕ is satisfiable.
We now replace each edge of weight ℓ by a path of length ℓ consisting of unit-weight edges; it is
easy to see that this yields an instance with the same value of the optimum. Observe that thus the
size of G grows to O(2^{16εn} · (n + m)^8), the number of terminals does not change, and in the final
instance all the edges have unit weights. Hence, by performing the whole reduction for parameter
ε/16 instead of ε we obtain an instance of size O(2^{εn} · (n + m)^8), at most 96ε^{−1}(n + m) terminals,
and all edges of unit weight. This concludes the proof.
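The edge-subdivision step above is mechanical; as a small illustration (ours, not part of the paper),
the following sketch replaces each weighted edge by a path of unit-weight edges, assuming edges are
given as a list of (u, v, w) triples with integer weights and all names are hypothetical.

```python
# Sketch: replace an edge of weight w by a path of w unit-weight edges.
def subdivide(edges):
    unit_edges, fresh = [], 0
    for u, v, w in edges:
        prev = u
        for _ in range(w - 1):            # w - 1 fresh internal vertices
            mid = ("aux", fresh)
            fresh += 1
            unit_edges.append((prev, mid))
            prev = mid
        unit_edges.append((prev, v))      # final unit edge reaches v
    return unit_edges
```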
A Comparative Study of Arithmetic Constraints on Integer Intervals
Krzysztof R. Apt^{1,2} and Peter Zoeteweij^1
^1 CWI, P.O. Box 94079, 1090 GB Amsterdam, the Netherlands
^2 University of Amsterdam, the Netherlands
Abstract. We propose here a number of approaches to implement constraint propagation for
arithmetic constraints on integer intervals. To this end we introduce integer interval arithmetic.
Each approach is explained using appropriate proof rules that reduce the variable domains. We
compare these approaches using a set of benchmarks.
1 Preliminaries
1.1 Introduction
The subject of arithmetic constraints on reals has attracted a great deal of attention in the literature. For some reason arithmetic constraints on integer intervals
have not been studied even though they are supported in a number of constraint
programming systems. In fact, constraint propagation for them is present in
ECLiPSe, SICStus Prolog, GNU Prolog, ILOG Solver, and undoubtedly most of
the systems that support constraint propagation for linear constraints on integer
intervals. Yet, in contrast to the case of linear constraints — see notably [5] —
we did not encounter in the literature any analysis of this form of constraint
propagation.
In this paper we study these constraints in a systematic way. It turns out
that in contrast to linear constraints on integer intervals there are a number of
natural approaches to constraint propagation for these constraints.
To define them we introduce integer interval arithmetic that is modeled after
the real interval arithmetic, see e.g., [6]. There are, however, essential differences
since we deal with integers instead of reals. For example, multiplication of two
integer intervals does not need to be an integer interval. In passing we show
that using integer interval arithmetic we can also define succinctly the well-known constraint propagation for linear constraints on integer intervals. In the
second part of the paper we compare the proposed approaches by means of a set
of benchmarks.
1.2 Constraint Satisfaction Problems
We review here the standard concepts of a constraint and of a constraint satisfaction problem.
Consider a sequence of variables X := x_1, . . ., x_n where n ≥ 0, with respective domains D_1, . . ., D_n
associated with them. So each variable x_i ranges over the domain D_i. By a constraint C on X we
mean a subset of D_1 × . . . × D_n. Given an element d := d_1, . . ., d_n of D_1 × . . . × D_n and a
subsequence Y := x_{i_1}, . . ., x_{i_ℓ} of X we denote by d[Y] the sequence d_{i_1}, . . ., d_{i_ℓ}. In
particular, for a variable x_i from X, d[x_i] denotes d_i.
A constraint satisfaction problem, in short CSP, consists of a finite sequence of variables X
with respective domains D, together with a finite set C of constraints, each on a subsequence of X.
We write it as ⟨C ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩, where X := x_1, . . ., x_n and D := D_1, . . ., D_n.
By a solution to ⟨C ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩ we mean an element d ∈ D_1 × . . . × D_n such that
for each constraint C ∈ C on a sequence of variables X we have
d[X] ∈ C. We call a CSP consistent if it has a solution and inconsistent if it
does not. Two CSPs with the same sequence of variables are called equivalent
if they have the same set of solutions. In what follows we consider CSPs the
constraints of which are defined in a simple language and identify the syntactic
description of a constraint with its meaning being the set of tuples that satisfy
it.
We view constraint propagation as a process of transforming CSPs that
maintains their equivalence. In what follows we define this process by means of
proof rules that act on CSPs and preserve equivalence. An interested reader can
consult [1] for a precise explanation of this approach to describing constraint
propagation.
1.3 Arithmetic Constraints
To define the arithmetic constraints we use the alphabet that comprises
– variables,
– two constants, 0 and 1,
– the unary minus function symbol ‘−’,
– three binary function symbols, ‘+’, ‘−’ and ‘·’, all written in the infix notation.
By an arithmetic expression we mean a term formed in this alphabet and
by an arithmetic constraint a formula of the form
s op t,
where s and t are arithmetic expressions and op ∈ {<, ≤, =, ≠, ≥, >}. For example
x^5 · y^2 · z^4 + 3x · y^3 · z^5 ≤ 10 + 4x^4 · y^6 · z^2 − y^2 · x^5 · z^4    (1)
is an arithmetic constraint. Here x^5 is an abbreviation for x · x · x · x · x and
similarly with the other expressions. If ‘·’ is not used in an arithmetic constraint,
we call it a linear constraint.
By an extended arithmetic expression we mean a term formed in the
above alphabet extended by the unary function symbols ‘·^n’ and ‘√[n]{·}’ for each
n ≥ 1 and the binary function symbol ‘/’ written in the infix notation. For
example
√[3]{(y^2 · z^4)/(x^2 · u^5)}    (2)
is an extended arithmetic expression. Here, in contrast to the above, x^5 is a term
obtained by applying the function symbol ‘·^5’ to the variable x. The extended
arithmetic expressions will be used only to define constraint propagation for the
arithmetic constraints.
Fix now some arbitrary linear ordering ≺ on the variables of the language.
By a monomial we mean an integer or a term of the form
a · x_1^{n_1} · . . . · x_k^{n_k}
where k > 0, x_1, . . ., x_k are different variables ordered w.r.t. ≺, a is a non-zero
integer, and n_1, . . ., n_k are positive integers. We call then x_1^{n_1} · . . . · x_k^{n_k} the
power product of this monomial.
Next, by a polynomial we mean a term of the form
Σ_{i=1}^n m_i,
where n > 0, at most one monomial m_i is an integer, and the power products
of the monomials m_1, . . ., m_n are pairwise different. Finally, by a polynomial
constraint we mean an arithmetic constraint of the form s op b, where s is a
polynomial with no monomial being an integer, op ∈ {<, ≤, =, ≠, ≥, >}, and b is
an integer. It is clear that by means of appropriate transformation rules we can
transform each arithmetic constraint to a polynomial constraint. For example,
assuming the ordering x ≺ y ≺ z on the variables, the arithmetic constraint (1)
can be transformed to the polynomial constraint
2x^5 · y^2 · z^4 − 4x^4 · y^6 · z^2 + 3x · y^3 · z^5 ≤ 10
So, without loss of generality, from now on we shall limit our attention to the
polynomial constraints.
Next, let us discuss the domains over which we interpret the arithmetic constraints. By an integer interval, or an interval in short, we mean an expression of the form
[a..b]
where a and b are integers; [a..b] denotes the set of all integers between a and b,
including a and b. If a > b, we call [a..b] the empty interval and denote it by
∅. Finally, by a range we mean an expression of the form
x∈I
where x is a variable and I is an interval.
2 Integer Set Arithmetic
To reason about the arithmetic constraints we employ a generalization of the
arithmetic operations to the sets of integers.
2.1 Definitions
For X, Y sets of integers we define the following operations:
– addition:
X + Y := {x + y | x ∈ X, y ∈ Y },
– subtraction:
X − Y := {x − y | x ∈ X, y ∈ Y },
– multiplication:
X · Y := {x · y | x ∈ X, y ∈ Y },
– division:
X/Y := {u ∈ Z | ∃x ∈ X ∃y ∈ Y u · y = x},
– exponentiation:
X^n := {x^n | x ∈ X}, for each natural number n > 0,
– root extraction:
√[n]{X} := {x ∈ Z | x^n ∈ X}, for each natural number n > 0.
All the operations except division are defined in the expected way. We shall
return to it at the end of Section 6. At the moment it suffices to note that the division
operation is defined for all sets of integers, including Y = ∅ and Y = {0}. This
division operation corresponds to the following division operation on the sets of
reals introduced in [8]:
X/Y := {u ∈ R | ∃x ∈ X ∃y ∈ Y u · y = x}.
For an (integer or real) number a and op ∈ {+, −, ·, /} we identify a op X with
{a} op X and X op a with X op {a}.
To present the rules we are interested in we shall also use the addition and
division operations on the sets of real numbers. Addition is defined in the same
way as for the sets of integers, and division is defined above. In [6] it is explained
how to implement these operations.
Further, given a set A of integers or reals, we define
≤A := {x ∈ Z | ∃a ∈ A x ≤ a},
≥A := {x ∈ Z | ∃a ∈ A x ≥ a}.
When limiting our attention to intervals of integers the following simple observation is of importance.
Note 1. For X, Y integer intervals and a an integer the following holds:
– X ∩ Y, X + Y, X − Y are integer intervals.
– X/{a} is an integer interval.
– X · Y does not have to be an integer interval, even if X = {a} or Y = {a}.
– X/Y does not have to be an integer interval.
– For each n > 1, X^n does not have to be an integer interval.
– For odd n > 1, √[n]{X} is an integer interval.
– For even n > 1, √[n]{X} is an integer interval or a disjoint union of two integer
intervals.
✷
For example we have
[2..4] + [3..8] = [5..12],
[3..7] − [1..8] = [−5..6],
[3..3] · [1..2] = {3, 6},
[3..5]/[−1..2] = {−5, −4, −3, 2, 3, 4, 5},
[−3..5]/[−1..2] = Z,
[1..2]^2 = {1, 4},
√[3]{[−30..100]} = [−3..4],
√[2]{[−100..9]} = [−3..3],
√[2]{[1..9]} = [−3.. − 1] ∪ [1..3].
To deal with the problem that non-interval domains can be produced by some
of the operations we introduce the following operation on the subsets of the set
of the integers Z:
int(X) := the smallest integer interval containing X, if X is finite; int(X) := Z otherwise.
For example int([3..5]/[−1..2]) = [−5..5] and int([−3..5]/[−1..2]) = Z.
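To make these definitions concrete, the following minimal Python sketch (ours, not part of the
paper) implements the set-level operations and the int(.) hull for finite sets; all function names
are our own.

```python
# Sketch of the integer set arithmetic of Section 2.1, on finite sets.
def add(X, Y): return {x + y for x in X for y in Y}
def sub(X, Y): return {x - y for x in X for y in Y}
def mul(X, Y): return {x * y for x in X for y in Y}

def div(X, Y):
    # {u in Z | exists x in X, y in Y with u*y = x}; if 0 in X and 0 in Y
    # this set is all of Z, which we signal by None.
    if 0 in X and 0 in Y:
        return None
    bound = max((abs(x) for x in X), default=0)
    return {u for u in range(-bound, bound + 1)
            if any(u * y == x for x in X for y in Y)}

def power(X, n): return {x ** n for x in X}

def root(X, n):
    # {x in Z | x**n in X}; for finite X the candidate roots are bounded.
    bound = max((abs(x) for x in X), default=0)
    return {x for x in range(-bound - 1, bound + 2) if x ** n in X}

def hull(X):
    # int(X) for finite non-empty X, as a pair of bounds.
    return (min(X), max(X))

assert mul({3}, {1, 2}) == {3, 6}
assert div(set(range(3, 6)), {-1, 0, 1, 2}) == {-5, -4, -3, 2, 3, 4, 5}
```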
2.2 Implementation
To define constraint propagation for the arithmetic constraints on integer intervals we shall use the integer set arithmetic, mainly limited to the integer
intervals. This brings us to the discussion of how to implement the introduced
operations on the integer intervals. Since we are only interested in maintaining
the property that the sets remain integer intervals or the set of integers Z we
shall clarify how to implement the intersection, addition, subtraction and root
extraction operations of the integer intervals and the int(.) closure of the multiplication, division and exponentiation operations on the integer intervals. The
case when one of the intervals is empty is easy to deal with. So we assume that
we deal with non-empty intervals [a..b] and [c..d], that is a ≤ b and c ≤ d.
Intersection, addition and subtraction It is easy to see that
[a..b] ∩ [c..d] = [max(a, c)..min(b, d)],
[a..b] + [c..d] = [a + c .. b + d],
[a..b] − [c..d] = [a − d .. b − c].
So the interval intersection, addition, and subtraction are straightforward to
implement.
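In code (our sketch), with a non-empty interval represented as a pair (lo, hi) and the empty
interval as None, these formulas become:

```python
def inter(I, J):
    (a, b), (c, d) = I, J
    lo, hi = max(a, c), min(b, d)
    return (lo, hi) if lo <= hi else None   # None encodes the empty interval

def iadd(I, J):
    (a, b), (c, d) = I, J
    return (a + c, b + d)

def isub(I, J):
    (a, b), (c, d) = I, J
    return (a - d, b - c)
```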
Root extraction The outcome of the root extraction operator applied to an integer interval will be an integer interval or a disjoint union of two integer intervals.
We shall explain in Section 4 why it is advantageous not to apply int(.) to the
outcome. This operator can be implemented by means of the following case
analysis.
Case 1. Suppose n is odd. Then
√[n]{[a..b]} = [⌈√[n]{a}⌉ .. ⌊√[n]{b}⌋].
Case 2. Suppose n is even and b < 0. Then
√[n]{[a..b]} = ∅.
Case 3. Suppose n is even and b ≥ 0. Then
√[n]{[a..b]} = [−⌊√[n]{|b|}⌋ .. −⌈√[n]{|a^+|}⌉] ∪ [⌈√[n]{|a^+|}⌉ .. ⌊√[n]{|b|}⌋]
where a^+ := max(0, a).
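A direct transcription of this case analysis into Python (our sketch; the exact integer n-th root
is computed by binary search so that no floating point is involved):

```python
def nth_root_floor(v, n):
    # Largest r >= 0 with r**n <= v, for v >= 0.
    lo, hi = 0, max(1, v)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= v:
            lo = mid
        else:
            hi = mid - 1
    return lo

def nth_root_ceil(v, n):
    # Smallest r >= 0 with r**n >= v, for v >= 0.
    r = nth_root_floor(v, n)
    return r if r ** n == v else r + 1

def root_interval(I, n):
    # nth root of [a..b]; returns a list of disjoint non-empty intervals.
    a, b = I
    if n % 2 == 1:                                   # Case 1
        lo = nth_root_ceil(a, n) if a >= 0 else -nth_root_floor(-a, n)
        hi = nth_root_floor(b, n) if b >= 0 else -nth_root_ceil(-b, n)
        return [(lo, hi)] if lo <= hi else []
    if b < 0:                                        # Case 2
        return []
    a_plus = max(0, a)                               # Case 3
    lo, hi = nth_root_ceil(a_plus, n), nth_root_floor(b, n)
    if lo > hi:
        return []
    return [(-hi, hi)] if lo == 0 else [(-hi, -lo), (lo, hi)]
```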
Multiplication For the remaining operations we only need to explain how to
implement the int(.) closure of the outcome. First note that
int([a..b] · [c..d]) = [min(A)..max(A)], where A = {a · c, a · d, b · c, b · d}.
Using an appropriate case analysis we can actually compute the bounds of
int([a..b] · [c..d]) directly in terms of the bounds of the constituent intervals.
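For instance (our sketch):

```python
def imul_hull(I, J):
    # int([a..b] * [c..d]) = [min(A)..max(A)] over the four corner products.
    (a, b), (c, d) = I, J
    A = (a * c, a * d, b * c, b * d)
    return (min(A), max(A))
```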
Division In contrast, the int(.) closure of the interval division is not so straightforward to compute. The reason is that, as we shall see in a moment, we cannot
express the result in terms of some simple operations on the interval bounds.
Consider non-empty integer intervals [a..b] and [c..d]. In analyzing the outcome of int([a..b]/[c..d]) we distinguish the following cases.
Case 1. Suppose 0 ∈ [a..b] and 0 ∈ [c..d].
Then by definition int([a..b]/[c..d]) = Z. For example,
int([−1..100]/[−2..8]) = Z.
Case 2. Suppose 0 ∉ [a..b] and c = d = 0.
Then by definition int([a..b]/[c..d]) = ∅. For example,
int([10..100]/[0..0]) = ∅.
Case 3. Suppose 0 ∉ [a..b] and c < 0 and 0 < d.
It is easy to see that then
int([a..b]/[c..d]) = [−e..e],
where e = max(|a|, |b|). For example,
int([−100.. − 10]/[−2..5]) = [−100..100].
Case 4. Suppose 0 ∉ [a..b] and either c = 0 and d ≠ 0 or c ≠ 0 and d = 0.
Then int([a..b]/[c..d]) = int([a..b]/([c..d] − {0})). For example
int([1..100]/[−7..0]) = int([1..100]/[−7.. − 1]).
This allows us to reduce this case to Case 5 below.
Case 5. Suppose 0 ∉ [c..d].
This is the only case when we need to compute int([a..b]/[c..d]) indirectly.
First, observe that we have
int([a..b]/[c..d]) ⊆ [⌈min(A)⌉ .. ⌊max(A)⌋], where A = {a/c, a/d, b/c, b/d}.
However, the equality does not need to hold here. Indeed, note for example
that int([155..161]/[9..11]) = [16..16], whereas for A = {155/9, 155/11, 161/9,
161/11} we have ⌈min(A)⌉ = 15 and ⌊max(A)⌋ = 17. The problem is that the
value 16 is obtained by dividing 160 by 10 and none of these two values is an
interval bound.
This complication can be solved by preprocessing the interval [c..d] so that
its bounds are actual divisors of an element of [a..b]. First, we look for the least
c′ ∈ [c..d] such that ∃x ∈ [a..b] ∃u ∈ Z u · c′ = x. Using a case analysis, the latter
property can be established without search. Suppose for example that a > 0 and
c > 0. In this case, if c′ · ⌊b/c′⌋ ≥ a, then c′ has the required property. Similarly,
we look for the largest d′ ∈ [c..d] for which an analogous condition holds. Now
int([a..b]/[c..d]) = [⌈min(A)⌉..⌊max(A)⌋], where A = {a/c′, a/d′, b/c′, b/d′}.
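The case analysis translates into code as follows (our sketch; for clarity the bounds c′ and d′
are found by a linear scan, whereas the paper establishes them without search). None encodes Z
and () the empty interval.

```python
from fractions import Fraction
from math import ceil, floor

def idiv_hull(I, J):
    (a, b), (c, d) = I, J
    if a <= 0 <= b and c <= 0 <= d:        # Case 1
        return None                         # the result is Z
    if c == d == 0:                         # Case 2
        return ()
    if c < 0 < d:                           # Case 3
        e = max(abs(a), abs(b))
        return (-e, e)
    if c == 0:                              # Case 4: drop 0 from [c..d]
        c = 1
    if d == 0:
        d = -1
    def has_multiple(k):                    # some multiple of k in [a..b]?
        k = abs(k)
        return (b // k) * k >= a
    cands = [k for k in range(c, d + 1) if has_multiple(k)]
    if not cands:
        return ()
    cp, dp = cands[0], cands[-1]            # the preprocessed bounds c', d'
    A = [Fraction(x, y) for x in (a, b) for y in (cp, dp)]
    return (ceil(min(A)), floor(max(A)))

assert idiv_hull((155, 161), (9, 11)) == (16, 16)
```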
Exponentiation The int(.) closure of the interval exponentiation is straightforward to implement by distinguishing the following cases.
Case 1. Suppose n is odd. Then
int([a..b]^n) = [a^n .. b^n].
Case 2. Suppose n is even and 0 ≤ a. Then
int([a..b]^n) = [a^n .. b^n].
Case 3. Suppose n is even and b ≤ 0. Then
int([a..b]^n) = [b^n .. a^n].
Case 4. Suppose n is even and a < 0 and 0 < b. Then
int([a..b]^n) = [0..max(a^n, b^n)].
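In code (our sketch):

```python
def ipow_hull(I, n):
    # int([a..b]**n) following the four cases above.
    a, b = I
    if n % 2 == 1 or a >= 0:
        return (a ** n, b ** n)           # Cases 1 and 2
    if b <= 0:
        return (b ** n, a ** n)           # Case 3
    return (0, max(a ** n, b ** n))       # Case 4
```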
2.3 Correctness Lemma
Given now an extended arithmetic expression s each variable of which ranges
over an integer interval, we define int(s) as the integer interval or the set Z
obtained by systematically replacing each function symbol by the application of
the int(.) operation to the corresponding integer set operation. For example, for
the extended arithmetic expression s := √[3]{(y^2 · z^4)/(x^2 · u^5)} of (2) we have
int(s) = int(√[3]{int(int(Y^2) · int(Z^4))/int(int(X^2) · int(U^5))}),
where x ranges over X, etc.
The discussion in the previous subsection shows how to compute int(s) given
an extended arithmetic expression s and the integer interval domains of its variables.
The following lemma is crucial for our considerations. It is a counterpart
of the so-called ‘Fundamental Theorem of Interval Arithmetic’ established in
[7]. Because we deal here with the integer domains an additional assumption is
needed to establish the desired conclusion.
Lemma 1 (Correctness). Let s be an extended arithmetic expression with the
variables x1 , . . ., xn . Assume that each variable xi of s ranges over an integer
interval Xi . Choose ai ∈ Xi for i ∈ [1..n] and denote by s(a1 , . . ., an ) the result
of replacing in s each occurrence of a variable xi by ai .
Suppose that each subexpression of s(a1 , . . ., an ) evaluates to an integer. Then
the result of evaluating s(a1 , . . ., an ) is an element of int(s).
Proof. The proof follows by a straightforward induction on the structure of s.
✷
3 An Intermezzo: Constraint Propagation for Linear Constraints
Even though we focus here on arithmetic constraints on integer intervals, it is
helpful to realize that the integer interval arithmetic is also useful to define
in a succinct way the well-known rules for constraint propagation for linear
constraints. To this end consider first a constraint Σ_{i=1}^n a_i · x_i = b, where n ≥ 0,
a_1, . . ., a_n are non-zero integers, x_1, . . ., x_n are different variables, and b is an
integer. To reason about it we can use the following rule parametrized by j ∈ [1..n]:

LINEAR EQUALITY
⟨Σ_{i=1}^n a_i · x_i = b ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩
⟨Σ_{i=1}^n a_i · x_i = b ; x_1 ∈ D_1′, . . ., x_n ∈ D_n′⟩
where
– for i ≠ j, D_i′ := D_i,
– D_j′ := D_j ∩ int((b − Σ_{i∈[1..n]−{j}} a_i · x_i)/a_j).
Note that by virtue of Note 1
D_j′ = D_j ∩ (b − Σ_{i∈[1..n]−{j}} int(a_i · D_i))/a_j.
To see that this rule preserves equivalence suppose that for some d_1 ∈
D_1, . . ., d_n ∈ D_n we have Σ_{i=1}^n a_i · d_i = b. Then for j ∈ [1..n] we have
d_j = (b − Σ_{i∈[1..n]−{j}} a_i · d_i)/a_j,
which by the Correctness Lemma 1 implies that
d_j ∈ int((b − Σ_{i∈[1..n]−{j}} a_i · x_i)/a_j),
i.e., d_j ∈ D_j′.
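As an illustration (ours, using the interval helpers sketched in Subsection 2.2), one application
of the rule can be executed as follows; div_scalar_hull is the hull of X/{a} in the sense defined
above, i.e. all u with u · a ∈ X.

```python
def div_scalar_hull(I, a):
    # int(X/{a}): all u with u*a in [lo..hi]; assumes a != 0.
    lo, hi = I
    if a < 0:
        lo, hi, a = -hi, -lo, -a
    return (-((-lo) // a), hi // a)       # (ceil(lo/a), floor(hi/a))

def linear_equality(coeffs, b, doms, j):
    # One application of LINEAR EQUALITY, narrowing the domain of x_j.
    lo, hi = b, b
    for i, (a, (lo_i, hi_i)) in enumerate(zip(coeffs, doms)):
        if i == j:
            continue
        p = (a * lo_i, a * hi_i)          # int(a_i * D_i)
        lo -= max(p)                      # subtract the interval p
        hi -= min(p)
    q = div_scalar_hull((lo, hi), coeffs[j])
    return inter(doms[j], q) if q[0] <= q[1] else None
```

For example, linear_equality([100, -10], 212, [(1, 81), (1, 81)], 0) returns (3, 10).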
Next, consider a constraint Σ_{i=1}^n a_i · x_i ≤ b, where a_1, . . ., a_n, x_1, . . ., x_n and b
are as above. To reason about it we can use the following rule parametrized by
j ∈ [1..n]:

LINEAR INEQUALITY
⟨Σ_{i=1}^n a_i · x_i ≤ b ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩
⟨Σ_{i=1}^n a_i · x_i ≤ b ; x_1 ∈ D_1′, . . ., x_n ∈ D_n′⟩
where
– for i ≠ j, D_i′ := D_i,
– D_j′ := D_j ∩ (≤int(b − Σ_{i∈[1..n]−{j}} a_i · x_i)/a_j).
To see that this rule preserves equivalence suppose that for some d_1 ∈
D_1, . . ., d_n ∈ D_n we have Σ_{i=1}^n a_i · d_i ≤ b. Then a_j · d_j ≤ b − Σ_{i∈[1..n]−{j}} a_i · d_i.
By the Correctness Lemma 1
b − Σ_{i∈[1..n]−{j}} a_i · d_i ∈ int(b − Σ_{i∈[1..n]−{j}} a_i · x_i),
so by definition
a_j · d_j ∈ ≤int(b − Σ_{i∈[1..n]−{j}} a_i · x_i)
and consequently
d_j ∈ ≤int(b − Σ_{i∈[1..n]−{j}} a_i · x_i)/a_j.
This implies that d_j ∈ D_j′.
4 Constraint Propagation: First Approach
We now move on to a discussion of constraint propagation for the arithmetic
constraints on integer intervals. To illustrate the first approach consider the
following example. Consider the constraint
x^3 · y − x ≤ 40
and the ranges x ∈ [1..100] and y ∈ [1..100]. We can rewrite it as
x ≤ √[3]{(40 + x)/y}    (3)
since x assumes integer values. The maximum value the expression on the right-hand
side can take is ⌊√[3]{140}⌋, so we conclude x ≤ 5. By reusing (3), now with the
information that x ∈ [1..5], we conclude that the maximum value the expression
on the right-hand side of (3) can take is actually ⌊√[3]{45}⌋, from which it follows
that x ≤ 3.
In the case of y we can isolate it by rewriting the original constraint as
y ≤ 40/x^3 + 1/x^2, from which it follows that y ≤ 41, since by assumption x ≥ 1. So
we could reduce the domain of x to [1..3] and the domain of y to [1..41]. This
interval reduction is optimal, since x = 1, y = 41 and x = 3, y = 1 are both
solutions to the original constraint x^3 · y − x ≤ 40.
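A brute-force check (ours) confirms that this reduction is indeed optimal for the original box:

```python
sols = [(x, y) for x in range(1, 101) for y in range(1, 101)
        if x**3 * y - x <= 40]
xs = [x for x, _ in sols]
ys = [y for _, y in sols]
print(min(xs), max(xs), min(ys), max(ys))   # 1 3 1 41
```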
More formally, we consider a polynomial constraint Σ_{i=1}^m m_i = b where m > 0,
no monomial m_i is an integer, the power products of the monomials are pairwise
different, and b is an integer. Suppose that x_1, . . ., x_n are its variables ordered
w.r.t. ≺.
Select a non-integer monomial m_ℓ and assume it is of the form
a · y_1^{n_1} · . . . · y_k^{n_k},
where k > 0, y_1, . . ., y_k are different variables ordered w.r.t. ≺, a is a non-zero
integer and n_1, . . ., n_k are positive integers. So each y_i variable equals some
variable in {x_1, . . ., x_n}. Suppose that y_p equals x_j. We introduce the following
proof rule:
POLYNOMIAL EQUALITY
⟨Σ_{i=1}^m m_i = b ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩
⟨Σ_{i=1}^m m_i = b ; x_1 ∈ D_1′, . . ., x_n ∈ D_n′⟩
where
– for i ≠ j, D_i′ := D_i,
– D_j′ := int(D_j ∩ √[n_p]{int((b − Σ_{i∈[1..m]−{ℓ}} m_i)/s)}),
and
s := a · y_1^{n_1} · . . . · y_{p−1}^{n_{p−1}} · y_{p+1}^{n_{p+1}} · . . . · y_k^{n_k}.
To see that this rule preserves equivalence choose some d_1 ∈ D_1, . . ., d_n ∈ D_n.
To simplify the notation, given an extended arithmetic expression t denote by t′
the result of evaluating t after each occurrence of a variable x_i is replaced by d_i.
Suppose that Σ_{i=1}^m m_i′ = b. Then
d_j^{n_p} · s′ = b − Σ_{i∈[1..m]−{ℓ}} m_i′,
so by the Correctness Lemma 1 applied to b − Σ_{i∈[1..m]−{ℓ}} m_i and to s
d_j^{n_p} ∈ int(b − Σ_{i∈[1..m]−{ℓ}} m_i)/int(s).
Hence
d_j ∈ √[n_p]{int(b − Σ_{i∈[1..m]−{ℓ}} m_i)/int(s)}
and consequently
d_j ∈ int(D_j ∩ √[n_p]{int((b − Σ_{i∈[1..m]−{ℓ}} m_i)/s)}),
i.e., d_j ∈ D_j′.
Note that we do not apply int(.) to the outcome of the root extraction operation. For even n_p this means that the second operand of the intersection can
be a union of two intervals, instead of a single interval. To see why this is desirable, consider the constraint x^2 − y = 0 in the presence of ranges x ∈ [0..10],
y ∈ [25..100]. Using the int(.) closure of the root extraction we would not be
able to update the lower bound of x to 5.
Next, consider a polynomial constraint Σ_{i=1}^m m_i ≤ b. Below we adopt the
assumptions and notation used when defining the POLYNOMIAL EQUALITY
rule. To formulate the appropriate rule we stipulate that for extended arithmetic
expressions s and t
int((≤s)/t) := ≥Q ∩ ≤Q, with Q = (≤int(s))/int(t).
To reason about this constraint we use the following rule:
POLYNOMIAL INEQUALITY
⟨Σ_{i=1}^m m_i ≤ b ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩
⟨Σ_{i=1}^m m_i ≤ b ; x_1 ∈ D_1′, . . ., x_n ∈ D_n′⟩
where
– for i ≠ j, D_i′ := D_i,
– D_j′ := int(D_j ∩ √[n_p]{int((≤(b − Σ_{i∈[1..m]−{ℓ}} m_i))/s)}).
To prove that this rule preserves equivalence choose some d_1 ∈ D_1, . . ., d_n ∈
D_n. As above, given an extended arithmetic expression t we denote by t′ the
result of evaluating t when each occurrence of a variable x_i in t is replaced by
d_i.
Suppose that Σ_{i=1}^m m_i′ ≤ b. Then
d_j^{n_p} · s′ ≤ b − Σ_{i∈[1..m]−{ℓ}} m_i′.
By the Correctness Lemma 1
b − Σ_{i∈[1..m]−{ℓ}} m_i′ ∈ int(b − Σ_{i∈[1..m]−{ℓ}} m_i),
so by definition
d_j^{n_p} · s′ ∈ ≤int(b − Σ_{i∈[1..m]−{ℓ}} m_i).
Hence by the definition of the division operation on the sets of integers
d_j^{n_p} ∈ ≤int(b − Σ_{i∈[1..m]−{ℓ}} m_i)/int(s).
Consequently
d_j ∈ √[n_p]{≤int(b − Σ_{i∈[1..m]−{ℓ}} m_i)/int(s)}.
This implies that d_j ∈ D_j′.
Note that the set ≤int(b − Σ_{i∈[1..m]−{ℓ}} m_i) is not an interval. So to properly
implement this rule we need to extend the implementation of the division operation discussed in Subsection 2.2 to the case when the numerator is an extended
interval. Our implementation takes care of this.
In an optimized version of this approach we simplify the fractions of two polynomials by splitting the division over addition and subtraction and by dividing
out common powers of variables and greatest common divisors of the constant
factors. Subsequently, fractions whose denominators have identical power products are added. We used this optimization in the initial example by simplifying
(40 + x)/x^3 to 40/x^3 + 1/x^2. The reader may check that without this simplification step we
can only deduce that y ≤ 43.
To provide details of this optimization, given two monomials s and t, we
denote by
[s/t]
the result of performing this simplification operation on s and t. For example,
[(2·x^3·y)/(4·x^2)] equals (x·y)/2, whereas [(4·x^3·y)/(2·y^2)] equals (2·x^3)/y.
In this approach we assume that the domains of the variables y_1, . . ., y_{p−1},
y_{p+1}, . . ., y_k of m_ℓ do not contain 0. (One can easily show that this restriction is
necessary here.) For a monomial s involving variables ranging over integer
intervals that do not contain 0, the set int(s) either contains only positive numbers or only negative numbers. In the first case we write sign(s) = + and in the
second case we write sign(s) = −.
The new domain of the variable x_j in the POLYNOMIAL INEQUALITY
rule is defined using two sequences m_0′, . . ., m_m′ and s_0′, . . ., s_m′ of extended arithmetic
expressions such that
m_0′/s_0′ = [b/s] and m_i′/s_i′ = −[m_i/s] for i ∈ [1..m].
Let S := {s_i′ | i ∈ [0..m] − {ℓ}} and for an extended arithmetic expression t ∈ S
let I_t := {i ∈ [0..m] − {ℓ} | s_i′ = t}. We denote then by p_t the polynomial
Σ_{i∈I_t} m_i′. The new domains are then defined by
D_j′ := int(D_j ∩ √[n_p]{≤int(Σ_{t∈S} p_t/t)})
if sign(s) = +, and by
D_j′ := int(D_j ∩ √[n_p]{≥int(Σ_{t∈S} p_t/t)})
if sign(s) = −. Here the int(s) notation used in the Correctness Lemma 1 is
extended to expressions involving the division operator on real intervals in the
obvious way. We define the int(.) operator applied to a bounded set of real
numbers, as produced by the division and addition operators in the above two
expressions for D_j′, to denote the smallest interval of real numbers containing
that set.
5 Constraint Propagation: Second Approach
In this approach we limit our attention to a special type of polynomial constraints, namely the ones of the form s op b, where s is a polynomial in which
each variable occurs at most once and where b is an integer. We call such a constraint a simple polynomial constraint. By introducing auxiliary variables that
are equated with appropriate monomials we can rewrite each polynomial constraint into a sequence of simple polynomial constraints. This allows us also to
compute the integer interval domains of the auxiliary variables from the integer
interval domains of the original variables. We apply then to the simple polynomial constraints the rules introduced in the previous section.
To see that the restriction to simple polynomial constraints can make a difference consider the constraint
100x · y − 10y · z = 212
in presence of the ranges x, y, z ∈ [1..9]. We rewrite it into the sequence
u = x · y, v = y · z, 100u − 10v = 212
where u, v are auxiliary variables, each with the domain [1..81].
It is easy to check that the POLYNOMIAL EQUALITY rule introduced
in the previous section does not yield any domain reduction when applied to
the original constraint 100x · y − 10y · z = 212. In presence of the discussed
optimization the domain of x gets reduced to [1..3].
However, if we repeatedly apply the POLYNOMIAL EQUALITY rule to
the simple polynomial constraint 100u − 10v = 212, we eventually reduce the
domain of u to the empty set (since this constraint has no integer solution in the
ranges u, v ∈ [1..81]) and consequently can conclude that the original constraint
100x · y − 10y · z = 212 has no solution in the ranges x, y, z ∈ [1..9], without
performing any search.
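This deduction is easy to replay (our demo) by iterating the LINEAR EQUALITY rule of Section 3
on 100u − 10v = 212 with the linear_equality helper sketched there; the domains of u and v
shrink step by step until one of them becomes empty.

```python
doms = [(1, 81), (1, 81)]                 # domains of u and v
coeffs, b = [100, -10], 212
while all(d is not None for d in doms):
    new = [linear_equality(coeffs, b, doms, j) for j in (0, 1)]
    if new == doms:                       # fixpoint without failure
        break
    doms = new
print(doms)   # contains None: inconsistency detected without search
```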
6 Constraint Propagation: Third Approach
In this approach we focus on a small set of ‘atomic’ arithmetic constraints. We
call an arithmetic constraint atomic if it is in one of the following two forms:
– a linear constraint,
– x · y = z.
It is easy to see that using appropriate transformation rules involving auxiliary variables we can transform each arithmetic constraint to a sequence of
atomic arithmetic constraints. In this transformation, as in the second approach,
the auxiliary variables are equated with monomials so we can easily compute
their domains.
The transformation to atomic constraints can strengthen the reduction. Consider for example the constraint
u · x · y + 1 = v · x · y
and ranges u ∈ [1..2], v ∈ [3..4], and x, y ∈ [1..4]. The first approach without
optimization and the second approach cannot find a solution without search.
If, as a first step in transforming this constraint into a linear constraint, we
introduce an auxiliary variable w to replace x · y, we are effectively solving the
constraint
u · w + 1 = v · w
with the additional range w ∈ [1..16], resulting in only one duplicate occurrence
of a variable instead of two. With variable w introduced (or using the optimized
version of the first approach) constraint propagation alone finds the solution
u = 2, v = 3, x = 1, y = 1.
We explained already in Section 3 how to reason about linear constraints.
(We omitted there the treatment of the disequalities which is routine.) Next, we
focus on the reasoning for the multiplication constraint x · y = z in presence of
the non-empty ranges x ∈ Dx , y ∈ Dy and z ∈ Dz . To this end we introduce the
following three domain reduction rules:
MULTIPLICATION 1
⟨x · y = z ; x ∈ D_x, y ∈ D_y, z ∈ D_z⟩
⟨x · y = z ; x ∈ D_x, y ∈ D_y, z ∈ D_z ∩ int(D_x · D_y)⟩
MULTIPLICATION 2
⟨x · y = z ; x ∈ D_x, y ∈ D_y, z ∈ D_z⟩
⟨x · y = z ; x ∈ D_x ∩ int(D_z/D_y), y ∈ D_y, z ∈ D_z⟩
MULTIPLICATION 3
⟨x · y = z ; x ∈ D_x, y ∈ D_y, z ∈ D_z⟩
⟨x · y = z ; x ∈ D_x, y ∈ D_y ∩ int(D_z/D_x), z ∈ D_z⟩
The way we defined the multiplication and the division of the integer intervals
ensures that the MULTIPLICATION rules 1,2 and 3 are equivalence preserving.
Consider for example the MULTIPLICATION 2 rule. Take some a ∈ Dx , b ∈ Dy
and c ∈ Dz such that a · b = c. Then a ∈ {x ∈ Z | ∃z ∈ Dz ∃y ∈ Dy x · y = z}, so
a ∈ Dz /Dy and a fortiori a ∈ int(Dz /Dy ). Consequently a ∈ Dx ∩ int(Dz /Dy ).
This shows that the MULTIPLICATION 2 rule is equivalence preserving.
The following example shows an interaction between all three MULTIPLICATION rules.
Example 1. Consider the CSP
⟨x · y = z ; x ∈ [1..20], y ∈ [9..11], z ∈ [155..161]⟩.    (4)
To facilitate the reading we underline the modified domains. An application
of the MULTIPLICATION 2 rule yields
⟨x · y = z ; x ∈ [16..16], y ∈ [9..11], z ∈ [155..161]⟩
since, as already noted in Subsection 2.2, int([155..161]/[9..11]) = [16..16], and
[1..20] ∩ int([16..16]) = [16..16]. Applying now the MULTIPLICATION 3 rule
we obtain
⟨x · y = z ; x ∈ [16..16], y ∈ [10..10], z ∈ [155..161]⟩
since [155..161]/[16..16] = [10..10] and [9..11] ∩ int([10..10]) = [10..10]. Next, by
the application of the MULTIPLICATION 1 rule we obtain
⟨x · y = z ; x ∈ [16..16], y ∈ [10..10], z ∈ [160..160]⟩
since [16..16] · [10..10] = [160..160] and [155..161] ∩ int([160..160]) = [160..160].
So using all three multiplication rules we could solve the CSP (4).
✷
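This interaction is easy to reproduce (our demo) with the hull operations imul_hull and
idiv_hull sketched in Subsection 2.2 and the inter helper defined there:

```python
def apply_div(D, Dz, Dy):
    q = idiv_hull(Dz, Dy)
    if q is None:                 # quotient hull is Z: no pruning
        return D
    return None if q == () else inter(D, q)

def mult_rules_round(Dx, Dy, Dz):
    Dz = inter(Dz, imul_hull(Dx, Dy))     # MULTIPLICATION 1
    Dx = apply_div(Dx, Dz, Dy)            # MULTIPLICATION 2
    Dy = apply_div(Dy, Dz, Dx)            # MULTIPLICATION 3
    return Dx, Dy, Dz                     # failure handling omitted

state = ((1, 20), (9, 11), (155, 161))
while True:
    new = mult_rules_round(*state)
    if new == state:
        break
    state = new
print(state)    # ((16, 16), (10, 10), (160, 160))
```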
Now let us clarify why we did not define the division of the sets of integers
Z and Y by
Z/Y := {z/y ∈ Z | y ∈ Y, z ∈ Z, y ≠ 0}.
The reason is that in that case for any set of integers Z we would have Z/{0} = ∅.
Consequently, if we adopted this definition of the division of the integer intervals, the resulting MULTIPLICATION 2 and 3 rules would no longer be
equivalence preserving. Indeed, consider the CSP
⟨x · y = z ; x ∈ [−2..1], y ∈ [0..0], z ∈ [−8..10]⟩.
Then we would have [−8..10]/[0..0] = ∅ and consequently by the MULTIPLICATION 2 rule we could conclude
⟨x · y = z ; x ∈ ∅, y ∈ [0..0], z ∈ [−8..10]⟩.
So we reached an inconsistent CSP while the original CSP is consistent.
In the remainder of the paper we will also consider variants of this third
approach that allow squaring and exponentiation as atomic constraints. For this
purpose we explain the reasoning for the constraint x = y^n in presence of the
non-empty ranges x ∈ D_x and y ∈ D_y, and for n > 1. To this end we introduce
the following two rules in which, to maintain the property that the domains are
intervals, we use the int(.) operation of Section 2:

EXPONENTIATION
⟨x = y^n ; x ∈ D_x, y ∈ D_y⟩
⟨x = y^n ; x ∈ D_x ∩ int(D_y^n), y ∈ D_y⟩

ROOT EXTRACTION
⟨x = y^n ; x ∈ D_x, y ∈ D_y⟩
⟨x = y^n ; x ∈ D_x, y ∈ int(D_y ∩ √[n]{D_x})⟩

To prove that these rules are equivalence preserving suppose that for some
a ∈ D_x and b ∈ D_y we have a = b^n. Then a ∈ D_y^n, so a ∈ int(D_y^n) and consequently a ∈ D_x ∩ int(D_y^n). Also b ∈ √[n]{D_x}, so b ∈ D_y ∩ √[n]{D_x}, and consequently
b ∈ int(D_y ∩ √[n]{D_x}).
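With the hull operations of Subsection 2.2 these two rules are immediate; the domains in the
demo below are our own example.

```python
def exponentiation_rule(Dx, Dy, n):      # EXPONENTIATION
    return inter(Dx, ipow_hull(Dy, n)), Dy

def root_extraction_rule(Dx, Dy, n):     # ROOT EXTRACTION
    pieces = [inter(Dy, R) for R in root_interval(Dx, n)]
    pieces = [p for p in pieces if p is not None]
    if not pieces:
        return Dx, None                  # the domain of y becomes empty
    lo = min(p[0] for p in pieces)       # int(.) of the resulting union
    hi = max(p[1] for p in pieces)
    return Dx, (lo, hi)

# x = y**2 with x in [25..100] and y in [0..10]:
print(root_extraction_rule((25, 100), (0, 10), 2))   # ((25, 100), (5, 10))
```

Intersecting with the union of root intervals before taking the hull is exactly what lets the
lower bound of y rise to 5 here, mirroring the x^2 − y = 0 example of Section 4.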
7 Implementation Details
In this section we describe the benchmark experiments that were performed
to compare the proposed approaches. These experiments were performed using
a single solver of the DICE (DIstributed Constraint Environment) framework.
DICE [10] is a framework for solver cooperation, implemented using techniques
from coordination programming. It is developed around an experimental constraint solver, called OpenSolver, which is particularly suited for coordination.
The coordination and cooperation aspects are irrelevant from the point of view
of this paper. Relevant aspects of the OpenSolver are:
– It implements a branch-and-infer tree search algorithm for constraint solving. The inference stage corresponds to constraint propagation and is performed by repeated application of domain reduction functions (DRFs) that
correspond to the domain reduction rules associated with the considered
constraints.
– This algorithm is abstract in the sense that the actual functionality is determined by software plug-ins in a number of predefined categories. These categories correspond to various aspects of the abstract branch-and-infer tree
search algorithm. Relevant categories are: variable domain types, domain
reduction functions, schedulers that control the application of the DRFs,
branching strategies that split the search tree after constraint propagation
has terminated, and several categories corresponding to different aspects of
a search strategy that determine how to traverse a search tree.
All experiments were performed using the IntegerInterval variable domain type plug-in. Domains of this type consist of an indication of the type of
the interval (bounded, unbounded, left/right-bounded, or empty), and a pair
of arbitrary precision integer bounds. This plug-in, and the interval arithmetic
operations on it are built using the GNU MP library [4].
The branching strategy that we used selects variables using the chronological
ordering in which the auxiliary variables come last. The domain of the selected
variable is split into two subdomains using bisection, so the resulting search trees
are binary trees. In all experiments we searched for all solutions, traversing the
entire search tree by means of depth-first leftmost-first chronological backtracking.
For the experiments in this paper a DRF plug-in has been developed that
implements the domain reduction rules discussed in the previous sections. The
scheduler plug-in used in the benchmarks keeps cycling through the sequence of
DRFs, applying DRFs that have been scheduled for execution. When a DRF is
applied, and some variable domain is modified, all DRFs that depend on these
changes are scheduled for execution, including possibly the one that has just been
applied. The cycling stops when no more DRFs are scheduled for execution, or
when the domain of a variable becomes empty.
As an alternative to cycling, the scheduler can be supplied with a schedule:
a sequence of indices into the sequence of DRFs. The scheduler will then cycle
through this schedule instead, and consider DRFs for application in the specified
order. This is used in combination with the second and third approach, where we
distinguish user constraints from the constraints that are introduced to define
the values of auxiliary variables. Before considering for execution a DRF f that
is part of the implementation of a user constraint, we make sure that all auxiliary
variables that f relies on are updated. For this purpose, the indices of the DRFs
that update these variables precede the index of f in the schedule. If f can
change the value of an auxiliary variable, its index is followed by the indices of
the DRFs that propagate back these changes to the variables that define the
value of this auxiliary variable.
For the third approach, there can be hierarchical dependencies between auxiliary variables. Much like the HC4 algorithm of [2], the schedule specifies a
bottom-up traversal of this hierarchy in a forward evaluation phase and a top-down traversal in a backward propagation phase before and after applying a
DRF of a user constraint, respectively. In the forward evaluation phase, the
DRFs that are executed correspond to rules MULTIPLICATION 1 and EXPONENTIATION. The DRFs of the backward propagation phase correspond to
MULTIPLICATION 2 and 3 , and ROOT EXTRACTION. It is easy to construct examples showing that the use of hierarchical schedules can be beneficial
compared to cycling through the rules.
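Abstracting from the plug-in machinery, the cycling scheduler is a standard worklist fixpoint
loop; a minimal sketch (ours) follows, where deps[v] lists the DRFs to reschedule when the domain
of variable v changes.

```python
from collections import deque

def propagate(drfs, deps, doms):
    # drfs[i]: function doms -> (doms, changed_vars)
    queue = deque(range(len(drfs)))
    queued = set(queue)
    while queue:
        i = queue.popleft()
        queued.discard(i)
        doms, changed = drfs[i](doms)
        if any(doms[v] is None for v in changed):
            return None                   # empty domain: inconsistent
        for v in changed:
            for j in deps[v]:
                if j not in queued:
                    queue.append(j)
                    queued.add(j)
    return doms
```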
The proposed approaches were implemented by first rewriting arithmetic
constraints to polynomial constraints, and then to a sequence of DRFs that
correspond with the rules of the approach used. We considered the following
methods:
1a the first approach, discussed in Section 4,
1b the optimization of the first approach discussed at the end of Section 4 that
involves dividing out common powers of variables,
2a the second approach, discussed in Section 5. The conversion to simple polynomial constraints is implemented by introducing an auxiliary variable for
every non-linear monomial. This procedure may introduce more auxiliary
variables than necessary.
2b an optimized version of approach 2a, where we stop introducing auxiliary
variables as soon as the constraints contain no more duplicate occurrences
of variables.
3a the third approach, discussed in Section 6, allowing only linear constraints
and multiplication as atomic constraints.
3b idem, but also allowing x = y 2 as an atomic constraint.
3c idem, allowing x = y n for all n > 1 as an atomic constraint.
Approaches 2 and 3 involve an extra rewrite step, where the auxiliary variables are introduced. The resulting CSP is then rewritten according to approach
1a. During the first rewrite step the hierarchical relations between the auxiliary
variables are recorded and the schedules are generated as a part of the second
rewrite step. For approaches 2b and 3 the question of which auxiliary variables
to introduce is an optimization problem in itself. Some choices result in more
auxiliary variables than others. We have not treated this issue as an optimization problem but relied on heuristics. We are confident that these yield a realistic
implementation. In our experiments we used the following benchmarks.
Cubes The problem is to find all natural numbers n ≤ 1000 that are a sum of
four different cubes, for example
13 + 23 + 33 + 43 = 100.
This problem is modeled as follows:
h1 ≤ x1 , x1 ≤ x2 − 1, x2 ≤ x3 − 1, x3 ≤ x4 − 1, x4 ≤ n,
x31 + x32 + x33 + x34 = n; n ∈ [1..1000], x1 , x2 , x3 , x4 ∈ Zi
Opt We are interested in finding a solution to the constraint x3 + y 2 = z 3 in the
integer interval [1..100000] for which the value of 2x · y − z is maximal.
Fractions This problem is taken from [9]: find distinct nonzero digits such that
the following equation holds:
A
D
G
+
+
=1
BC
EF
HI
There is a variable for each letter. The initial domains are [1..9]. To avoid symmetric solutions an ordering is imposed:
D
G
A
≥
≥
BC
EF
HI
Also two redundant constraints are added:
3
A
≥1
BC
and
3
G
≤1
HI
Because division is not present in the arithmetic expressions, the above constraints are multiplied by the denominators of the fractions to obtain arithmetic
constraints.
Two representations for this problem were studied:
– fractions1 in which five constraints are used: one equality and four inequalities for the ordering and the redundant constraints,
– fractions2, used in [9], in which three auxiliary variables, BC, EF and HI,
are introduced to simplify the arithmetic constraints: BC = 10B + C, EF =
10E + F , and HI = 10H + I.
Additionally, in both representations, 36 disequalities A 6= B, A 6= C, ..., H 6= I
are used.
Kyoto The problem1 is to find the number n such that the alphanumeric equation
KYOTO
KYOTO
+KYOTO
TOKYO
has a solution in the base-n number system. Our model uses a variable for each
letter and one variable for the base number. The variables K and T may not
be zero. There is one large constraint for the addition, 6 disequalities K 6= Y
... T 6= O and four constraints stating that the individual digits K, Y, O, T ,
are smaller than the base number. To spend some CPU time, we searched base
numbers 2..100.
1
V. Dubrovsky and A. Shvetsov. Quantum
http://www.nsta.org/quantum/kyotoarc.asp
cyberteaser:
May/June
1995,
8
Results
Tables 1 and 2 compare the proposed approaches on the problems defined in the
previous section. The first two columns of table 1 list the number of variables and
the DRFs that were used. Column nodes lists the size of the search tree, including
failures and solutions. The next two columns list the number of times that a
DRF was executed, and the percentage of these activations that the domain of a
variable was actually modified. For the opt problem, the DRF that implements
the optimization is not counted, and its activation is not taken into account. The
elapsed times in the last column are the minimum times (in seconds) recorded
for 5 runs on a 1200 MHz Athlon CPU.
cubes
1a
2a
3a
3b
opt
1a
2a
3a
3b
fractions1 1a
1b
2a
2b
3
fractions2 1a
1b
2a
2b
3
kyoto
1a
1b
2a
2b
3a
3b
3c
nvar nDRF
nodes
activated %effective elapsed
5
14
167
1880
13.03
0.013
9
22
167
2370
22.15
0.014
13
34
359
4442
26.23
0.024
13
34
227
3759
29.24
0.021
4
7
115,469
5,186,968
42.16 22.037
8
15
115,469
9,799,967
60.00 23.544
10
21
?
?
?
?
10
21 5,065,137 156,903,869
46.49 518.898
9
154
11,289
1,193,579
3.65 16.586
9
154
7,879
734,980
3.45 17.811
37
210
11,289
1,410,436
23.27
5.575
32
200
11,289
1,385,933
21.65
5.957
43
208
11,131
1,426,186
27.76
5.635
12
105
2,449
270,833
9.72
0.660
12
105
989
94,894
9.12
0.538
20
121
2,449
350,380
22.19
0.597
15
111
2,449
301,855
17.51
0.547
22
123
1,525
293,038
27.33
0.509
5
37
87,085
3,299,736
6.09 23.680
5
37
87,085
3,288,461
5.94 45.406
13
53
87,085
3,781,414
23.03 11.784
12
51
87,085
3,622,361
21.45 12.138
16
60
87,087
4,275,930
26.70 22.538
16
60
87,085
4,275,821
26.70 22.530
16
59
87,085
3,746,532
23.26 10.466
E
+
+
=
+
+
=
=
=
=
=
=
=
=
=
=
=
+
=
=
=
=
=
I
=
=
+
+
=
=
=
=
=
=
=
=
=
=
=
+
=
=
=
=
=
Table 1. Statistics and comparison with other solvers
Table 2 lists measured numbers of basic interval operations. Note that for
approach 1b, there are two versions of the division and addition operations:
one for integer intervals, and one for intervals of reals of which the bounds are
rational numbers (marked Q). Columns multI and multF list the numbers of
multiplications of two integer intervals, and of an integer interval and an integer
factor, respectively. These are different operations in our implementation.
cubes
1a
2a
3a
3b
opt
1a
2a
3a
3b
fractions1 1a
1b
2a
2b
3
fractions2 1a
1b
kyoto
2a
2b
3
1a
1b
2a
2b
3a
3b
3c
root
exp
div
multI multF
sum
total
1
4
0
0
5
4
14
< 0.5 < 0.5
0
0
5
4
9
0
0
1
1
6
5
13
< 0.5 < 0.5
1
< 0.5
5
5
11
2,299 4,599 1,443
1,444 11,064 5,187
26,037
1,636 1,538 2,150
738
8,138 4,445
18,645
?
?
?
?
?
?
?
21,066 18,105 54,171 18,284 106,651 57,469 275,747
0
0
868 28,916 14,238 13,444
57,466
0
0
51 11,892
8,010 6,727
29,584
1,550 Q
1,355 Q
0
0
734
933
4,736 4,669
11,071
0
0
776
1,509
5,292 5,147
12,725
0
0
693
339
4,835 4,769
10,636
0
0
142
690
304
212
1,347
0
0
19
127
59
26
344
65 Q
49 Q
0
0
124
149
138
94
505
0
0
124
206
210
118
658
0
0
114
46
142
101
403
735 11,040 1,963 13,852 10,852 13,946
52,388
735 8,146
218
8,955 12,516 10,592
48,749
4,310 Q
3,277 Q
383
759 1,590
484
5,324 7,504
16,044
383
759 1,597
1,360
5,756 8,008
17,863
0
0 1,991
578
5,324 7,505
15,397
< 0.5 < 0.5 1,990
578
5,324 7,504
15,397
1
1 1,554
484
5,324 7,504
14,868
Table 2. Measured numbers (thousands) of interval operations
For the cubes and opt problems, the constraints are already in simple form, so
approaches 1a, 1b and 2b are identical. Also all non-linear terms involve either a
multiplication or an exponentiation, so also approaches 2a and 3c are the same.
The results of these experiments clearly show the disadvantage of implementing
exponentiation by means of multiplication: the search space grows because we
increase the number of variable occurrences and lose the information that it is
the same number that is being multiplied. For opt and approach 3a, the run did
not complete within reasonable time and was aborted.
Columns E and I of table 1 compare the propagation achieved by our approaches with two other systems, respectively ECLi PSe Version 5.62 using the
ic library, and ILOG Solver 5.13 using type ILOINT. For this purpose we ran
the test problems without search, and compared the results of constraint propagation. A mark ‘=’ means that the computed domains are the same, ‘+’ that
our approach achieved stronger propagation than the solver that we compare
with, and ‘-’ that propagation is weaker. For cubes, ECLi PSe computes the same
domains as those computed according to approach 3b, so here the reduction is
stronger than for 3a, but weaker than for the other approaches. For opt ECLi PSe
and ILOG Solver compute the same domains. These domains are narrower than
those computed according to approaches 3a and 3b, but the other approaches
achieve stronger reduction. In all other cases except for kyoto and approach 1b
the results of all three solvers are the same.
For both representations for the fractions puzzle, the symbolic manipulation
of approach 1b is able to achieve a significant reduction of the search tree, but
this is not reflected in the timings. For fractions1 the elapsed time even increases.
The reason is that computing the domain updates involves adding intervals of
real numbers. The arithmetic operations on such intervals are more expensive
than their counterparts on integer intervals, because the bounds have to be maintained as rational numbers. Arithmetic operations on rational numbers are more
expensive because they involve the computation of greatest common divisors.
For kyoto the symbolic manipulation did not reduce the size of the search tree,
so the effect is even more severe.
In general, the introduction of auxiliary variables leads to a reduction of
the number of interval operations compared to approach 1a. The reason is that
auxiliary variables prevent the evaluation of subexpressions that did not change.
This effect is strongest for fractions1, where the main constraint contains a
large number of different power products. Without auxiliary variables all power
products are evaluated for every POLYNOMIAL EQUALITY rule defined by
this constraint, even those power products the variable domains of which did
not change. With auxiliary variables the intervals for such unmodified terms are
available immediately, which leads to a significant reduction of the number of
interval multiplications.
The effect that stronger reduction is achieved as a result of introducing auxiliary variables, mentioned in Section 6, is seen for both representations of the
fractions benchmark. The effect described in Section 5 is not demonstrated by
these experiments.
If we don’t consider the symbolic manipulation of approach 1b, approach
3c leads to the smallest total number of interval operations in all cases, but
the scheduling mechanism discussed in Section 7 is essential for a consistent
good performance. If for example the schedule is omitted for opt, the number
2
3
ECLi PSe
Constraint
Logic
icparc.doc.ic.ac.uk/eclipse
See http://www.ilog.com
Programming
System.
See
http://www-
of interval operations almost triples, and performance of approach 2a and 3c is
then much worse than that of approach 1a.
The total numbers of interval operations in table 2 do not fully explain all
differences in elapsed times. One of the reasons is that different interval operations have different costs. Especially the preprocessing of the numerator interval
for integer interval division, discussed in Subsection 2.2, is potentially expensive, which may explain why for opt , approach 1a runs faster than approach 2a,
even though the total number of interval operations is higher. Among the many
other factors that may be of influence, some overhead is involved in applying a
DRF, so if the number of applications differs significantly for two experiments,
this probably influences the elapsed times as well (cubes, 1a, 2a, opt , 1a, 2a,
fractions2 , 2a, 2b). The elapsed times are not the only measure that is subject
to implementation details. For example, we implemented division by a constant
interval [−1.. − 1] as multiplication by a constant, which is more efficient in our
implementation. Such decisions are reflected in the numbers reported in table 2.
9
Discussion
In this paper we discussed a number of approaches to constraint propagation
for arithmetic constraints on integer intervals. To assess them we implemented
them using the DICE (DIstributed Constraint Environment) framework of [10],
and compared their performance on a number of benchmark problems. We can
conclude that:
– Implementation of exponentiation by multiplication gives weak reduction.
In our third approach x = y n should be an atomic constraint.
– The optimization of the first approach, where common powers of variables
are divided out, can significantly reduce the size of the search tree, but
the resulting reduction steps rely heavily on the division and addition of
rational numbers. These operations can be expected to be more expensive
than their integer counterparts, because they involve the computation of
greatest common divisors.
– Introducing auxiliary variables can be beneficial in two ways: it may strengthen the propagation, as discussed in Sections 5 and 6, and it may prevent the
evaluation of subexpressions the variable domains of which did not change.
– As a result, given a proper scheduling of the rules, the second and third
approach perform better than the first approach without the optimization,
in terms of numbers of interval operations. Actual performance depends on
many implementation aspects. However for our test problems the results of
variants 2a, 2b and 3c do not differ significantly.
In general, our implementation is slow compared to, for example, ILOG
Solver. A likely cause is that we use arbitrary precision integers. We chose this
representation to avoid having to deal with overflow, but an additional benefit
is that large numbers can be represented exactly.
A different approach would be to use floating-point arithmetic and then round
intervals inwards to the largest enclosed integer interval. This was suggested in [3]
and implemented in for example RealPaver4. A benefit of this inward rounding
approach is that all algorithms that were developed for constraints on the reals
are immediately available. A disadvantage is that for large numbers no precise
representation exists, i.e., the interval defined by two consecutive floating-point
numbers contains more than one integer. But it is debatable whether an exact
representation is required for such large numbers.
We realize that the current set of test problems is rather limited. In addition
to puzzles, some more complex non-linear integer optimization problems should
be studied. We plan to further evaluate the proposed approaches on non-linear
integer models for the SAT problem. Also we would like to study the relationship
with the local consistency notions that have been defined for constraints on the
reals and give a proper characterization of the local consistencies computed by
our reduction rules.
Note This work was performed during the first author’s stay at the School of
Computing of the National University of Singapore. The work of the second
author was supported by NWO, The Netherlands Organization for Scientific
Research, under project number 612.069.003.
References
1. K. R. Apt. A proof theoretic view of constraint programming. Fundamenta Informaticae, 33(3):263–293, 1998. Available via http://arXiv.org/archive/cs/.
2. F. Benhamou, F. Goualard, L. Granvilliers, and J.-F. Puget. Revising hull and
box consistency. In Proceedings of the 16th International Conference on Logic
Programming (ICLP’99), pages 230–244. The MIT Press, 1999.
3. F. Benhamou and W. Older. Applying interval arithmetic to real, integer and
boolean constraints. Journal of Logic Programming, 32(1):1–24.
4. T. Granlund. GNU MP, The GNU Multiple Precision Arithmetic Library, Edition
4.1.2. Swox AB, December 2002.
5. W. Harvey and P. J. Stuckey. Improving linear constraint propagation by changing
constraint representation. Constraints, 8(2):173–207, 2003.
6. T. J. Hickey, Q. Ju, and M. H. van Emden. Interval arithmetic: from principles to
implementation. Journal of the ACM, 48(5):1038–1068, 2001.
7. R. E. Moore. Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ, 1966.
8. D. Ratz. Inclusion isotone extended interval arithmetic. Technical report, University of Karlsruhe, 1996. Report No. D-76128 Karlsruhe.
9. C. Schulte and G. Smolka. Finite domain constraint programming in Oz. A tutorial, August 2002. Version 1.2.4 (20020829). Available via http://www.mozartoz.org/documentation/fdt/index.html.
10. P. Zoeteweij. Coordination-based distributed constraint solving in DICE. In Brajendra Panda, editor, Proceedings of the 2003 ACM Symposium on Applied Computing, pages 360–366, 2003.
4
http://www.sciences.univ-nantes.fr/info/perso/permanents/granvil/realpaver/main.html
| 2 |
arXiv:1210.8203v2 [] 30 Apr 2013
THE ASYMPTOTICS OF SYMBOLIC GENERIC INITIAL
SYSTEMS OF SIX POINTS IN P2
SARAH MAYES
Abstract. Consider the ideal I ⊆ K[x, y, z] corresponding to six points
of P2 . We study the limiting behaviour of the symbolic generic initial
system, {gin(I (m) }m of I obtained by taking the reverse lexicographic
generic initial ideals of the uniform fat point ideals I (m) . The main result
of this paper is a theorem describing the limiting shape of {gin(I (m) }m
for each of the eleven possible configuration types of six points.
1. Introduction
Given a set of six points of Pn−1 with ideal I ⊆ k[Pn−1 ], we may consider
the ideal I (m) generated by the polynomials that vanish to at least order m
at each of the points. Such ideals are called uniform fat point ideals and,
although they are easy to describe, they have proven difficult to understand.
There are still many open problems and unresolved conjectures related to
finding the Hilbert function of I (m) and even the degree α(I (m) ) of the
smallest degree element of I (m) (for example, see [CHT11], [GH07], [GVT04],
[GHM09] , and [Har02]).
In this paper we will study a limiting shape that describes the behaviour of
the Hilbert functions of the set of fat point ideals {I (m) }m as m approaches
infinity. Studying asymptotic behaviour has been an important research
trend of the past twenty years; while individual algebraic objects may be
complicated, the limit of a collection of such objects is often quite nice (see,
for example, [Hun92], [Siu01],[ELS01], and [ES09]). Research on fat point
ideals has shown that certain challenges in understanding these ideals can be
overcome by studying the entire collection {I (m) }m . For instance, more can
(m)
be said about the limit (I) = limm→∞ α(Irm ) than the invariants α(I (m) )
of each ideal (see [BH10] and [Har02]).
To describe the limiting behaviour of the Hilbert functions of fat point
ideals, we will study the symbolic generic initial system, {gin(I (m) )}m , obtained by taking the reverse lexicographic generic initial ideals of fat point
ideals. When I ⊆ K[x, y, z] is an ideal of points of P2 , knowing the Hilbert
function of I (m) is equivalent to knowing the generators of gin(I (m) ); thus,
describing the limiting behaviour of the symbolic generic initial system of
I is equivalent to describing that of the Hilbert functions of the fat point
ideals I (m) as m gets large.
We define the limiting shape P of the symbolic generic initial system
1
{gin(I (m) }m of the ideal I to be the limit limm→∞ m
Pgin(I (m) ) , where Pgin(I (m) )
1
2
SARAH MAYES
denotes the Newton polytope of gin(I (m) ). When I ⊆ K[x, y, z] corresponds
to an arrangement of points in P2 , each of the ideals gin(I (m) ) is generated
in the variables x and y, so Pgin(I (m) ) , and thus P , can be thought of as a
subset of R2 .
The main result of this paper is the following theorem describing the
limiting shape of the symbolic generic initial system of an ideal corresponding to any collection of 6 points in P2 . The concept of configuration type
mentioned is intuitive; for example, {p1 , . . . , p6 } are of configuration type B
pictured in Figure 1 when there is one line through three of the points but
no lines through any other three points and no conics through all six points
(see Definition 2.3).
Theorem 1.1. Let I ⊆ K[x, y, z] be the ideal corresponding to a set of
six points in P2 . Then the limiting polytope P of the reverse lexicographic
symbolic generic initial system {gin(I (m) )}m is equal to the limiting shape
P shown in Figures 1 and 2 corresponding to the configuration type of the
six points.
This theorem will be proved in Section 4; Sections 2 and 3 contain background information necessary for the proof. In Section 5 we discuss how
characteristics of the arrangement of an arbitrary set of points in P2 are, or
may be, reflected in the limiting shape of the corresponding symbolic generic
initial system.
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
3
Figure 1. The limiting shape P of the generic initial systems {gin(I (m) )}m when I is the ideal corresponding
to points {p1 , . . . , pr } in configuration types A through F pictured.
4
SARAH MAYES
Figure 2. The limiting shape P of the generic initial systems {gin(I (m) )}m when I is the ideal corresponding
to points {p1 , . . . , pr } in configuration types G through K pictured.
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
5
2. Background
In this section we will introduce notation, definitions, and results related
to fat points in P2 , generic initial ideals, and systems of ideals. Unless stated
otherwise, R = K[x, y, z] is the polynomial ring in three variables over a field
K of characteristic 0 with the standard grading and the reverse lexicographic
order > with x > y > z.
2.1. Fat Points in P2 .
Definition 2.1. Let p1 , . . . , pr be distinct points of P2 , Ij be the ideal
of K[P2 ] = R consisting of all forms vanishing at the point pj , and I =
I1 ∩ · · · ∩ Ir be the ideal of the points p1 , . . . , pr . A fat point subscheme
Z = m1 p1 + · · · + mr pr , where the mi are nonnegative integers, is the
subscheme of P2 defined by the ideal IZ = I1m1 ∩ · · · ∩ Irmr consisting of
forms that vanish at the points pi with multiplicity at least mi . When
mi = m for all i, we say that Z is uniform; in this case, IZ is equal to the
mth symbolic power of I, I (m) .
The following lemma relates the symbolic and ordinary powers of I in the
case we are interested in (see, for example, Lemma 1.3 of [AV03]).
Lemma 2.2. If I is the ideal of distinct points in P2 ,
(I m )sat = I (m) ,
where J sat =
S
k≥0 (J
: mk ) denotes the saturation of J.
The precise definition of a configuration type mentioned in the statement
of Theorem 1.1 is as follows.
Definition 2.3 ([GH07]). Two sets of points {p1 , . . . , pr } and {p01 , . . . , p0r } of
P2 have the same configuration type if for all sequences of positive integers
m1 , . . . , mr the ideals of the fat point subschemes Z = m1 p1 + · · · + mr pr
and Z 0 = m1 p01 + · · · + mr p0r have the same Hilbert function, possibly after
reordering.
Proposition 2.4 ([GH07]). The configuration types for six distinct points
in P2 are exactly the configurations A through K shown in Figures 1 and 2.
2.2. Generic Initial Ideals. An element g = (gij ) ∈ GLn (K) acts on
R = K[x1 , . . . , xn ] and sends any homogeneous element f (x1 , . . . , xn ) to the
homogeneous element
f (g(x1 ), . . . , g(xn ))
Pn
where g(xi ) = j=1 gij xj . If g(I) = I for every upper triangular matrix g
then we say that I is Borel-fixed. Borel-fixed ideals are strongly stable when
K is of characteristic 0; that is, for every monomial m in the ideal such that
x m
xi divides m, the monomials xj i are also in the ideal for all j < i. This
property makes such ideals particularly nice to work with.
To any homogeneous ideal I of R we can associate a Borel-fixed monomial
ideal gin> (I) which can be thought of as a coordinate-independent version
6
SARAH MAYES
of the initial ideal. Its existence is guaranteed by Galligo’s theorem (also
see [Gre98, Theorem 1.27]).
Theorem 2.5 ([Gal74] and [BS87b]). For any multiplicative monomial order > on R and any homogeneous ideal I ⊂ R, there exists a Zariski open
subset U ⊂ GLn (K) such that In> (g(I)) is constant and Borel-fixed for all
g ∈ U.
Definition 2.6. The generic initial ideal of I, denoted gin> (I), is defined
to be In> (g(I)) where g ∈ U is as in Galligo’s theorem.
The reverse lexicographic order > is a total ordering on the monomials of
R defined by:
(1) if |I| = |J| then xI > xJ if there is a k such that im = jm for all
m > k and ik < jk ; and
(2) if |I| > |J| then xI > xJ .
For example, x21 > x1 x2 > x22 > x1 x3 > x2 x3 > x23 . From this point on,
gin(I) = gin> (I) will denote the generic initial ideal with respect to the
reverse lexicographic order.
Recall that the Hilbert function HI (t) of I is defined by HI (t) = dim(It ).
The following result is a consequence of the fact that Hilbert functions
are invariant under making changes of coordinates and taking initial ideals
([Gre98]).
Proposition 2.7. For any homogeneous ideal I in R, the Hilbert functions
of I and gin(I) are equal.
We now describe the structure of the ideals gin(I (m) ) where I is an ideal
corresponding to points in P2 . The proof of this result is contained in
[May12a] and follows from results of Bayer and Stillman ([BS87a]) and of
Herzog and Srinivasan ([HS98])
Proposition 2.8 (Corollary 12.9 of [May12a]). Suppose I ⊆ K[x, y, z] is
the ideal of distinct points in P2 . Then the minimal generators of gin(I (m) )
are
{xα(m) , xα(m)−1 y λα(m)−1 , . . . , xy λ1 (m) , y λ0 (m) }
for λ0 (m), . . . , λα(m)−1 such that λ0 (m) > λ1 (m) > · · · > λα(m)−1 (m) ≥ 1.
Since Borel-fixed ideals generated in two variables are determined by their
Hilbert functions (see, for example, Lemma 3.7 of [May12b]), we have the
following corollary of Propositions 2.7 and 2.8.
Corollary 2.9. If I and I 0 are ideals corresponding to two point arrangements of the same configuration type, gin(I (m) ) = gin(I 0(m) ) for all m.
Actually finding the Hilbert functions of fat point ideals is not easy and is
a significant area of research. (for example, see [CHT11], [GH07], [GVT04],
[GHM09] , and [Har02]) When I is the ideal of less than 9 points, however,
techniques exist for computing these Hilbert functions. In Section 3 we will
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
7
outline the method used in this paper, following [GH07]. Other techniques,
such as those in [CHT11], can also be used for some of the point arrangements A through K.
2.3. Graded Systems. In this subsection we introduce the limiting shape
of a graded system of monomial ideals.
Definition 2.10 ([ELS01]). A graded system of ideals is a collection of
ideals J• = {Ji }∞
i=1 such that
Ji · Jj ⊆ Ji+j
for all i, j ≥ 1.
Definition 2.11. The generic initial system of a homogeneous ideal I
is the collection of ideals J• such that Ji = gin(I i ). The symbolic generic
initial system of a homogeneous ideal I is the collection J• such that
Ji = gin(I (i) ).
The following lemma justifies calling these collections ‘systems’; see Lemma
2.5 of [May12c] and Lemma 2.2 of [May12a] for proofs.
Lemma 2.12. Generic initial systems and symbolic generic initial systems
are graded system of ideals.
Let J be a monomial ideal of R = K[x1 , . . . , xn ]. We may associate to J
a subset Λ of Nn consisting of the points λ such that xλ ∈ J. The Newton
polytope PJ of J is the convex hull of Λ regarded as a subset of Rn . Scaling
the polytope PJ by a factor of r gives another polytope that we will denote
rPJ .
If a• is a graded system of monomial ideals in R, the polytopes of { 1q Paq }q
1
are nested: 1c Pac ⊂ c+1
Pac+1 for all c ≥ 1. The limiting shape P of a• is the
limit of the polytopes in this set:
[ 1
Pa .
P =
q q
∗
q∈N
When I is the ideal of points in P2 gin(I (m) ) is generated in the variables
x and y by Proposition 2.8, so we can think of each Pgin(I (m) ) , and thus P ,
as a subset of R2 .
3. Technique for computing the Hilbert function
Here we summarize the method that is used to compute HI (m) (t) in this
paper. It follows the work of Guardo and Harbourne in [GH07]; details can
be found there.
Suppose that π : X → P2 is the blow-up of distinct points p1 , . . . , pr of
2
P . Let Ei = π −1 (pi ) for i = 1, . . . , r and L be the total transform in X of
a line not passing through any of the points p1 , . . . , pr . The classes of these
divisors form a basis of Cl(X); for convenience, we will write ei in place of
[Ei ] and e0 in place of [L]. Further, the intersection product in Cl(X) is
defined by e2i = −1 for i = 1, . . . , r; e20 = 1; and ei · ej = 0 for all i 6= j.
8
SARAH MAYES
Let Z = m(p1 + · · · + pr ) be a uniform fat point subscheme with sheaf of
ideals IZ ; set
Fd = dE0 − m(E1 + E2 + · · · + Er )
and Fd = OX (Fd ).
The following lemma relates divisors on X to the Hilbert function of I (m) .
Lemma 3.1. If Fd = dE0 − m(E1 + · · · + Er ) then h0 (X, Fd ) = HI (m) (d).
Proof. Since π∗ (Fd ) = IZ ⊗ OP2 (d),
HI (m) (d) = dim((IZ )d ) = h0 (P2 , IZ ⊗ OP2 (d)) = h0 (X, Fd )
for all d.
For convenience, we will sometimes write h0 (X, F ) = h0 (X, OX (F )). Recall that if [F ] not the class of an effective divisor then h0 (X, F ) = 0. On the
other hand, if F is effective, then we will see that we can compute h0 (X, F )
by finding h0 (X, H) for some numerically effective divisor H.
Definition 3.2. A divisor H is numerically effective if [F ] · [H] ≥ 0 for
every effective divisor F , where [F ]·[H] denotes the intersection multiplicity.
The cone of classes of numerically effective divisors in Cl(X) is denoted by
NEF(X).
Lemma 3.3. Suppose that X is the blow-up of P2 at r ≤ 8 points in general
position and that F ∈ NEF(X). Then F is effective and
h0 (X, F ) = ([F ]2 − [F ] · [KX ])/2 + 1
where KX = −3E0 + E1 + · · · + Er .
Proof. This is a consequence of Riemann-Roch and the fact that h1 (X, F ) =
0 for any numerically effective divisor F . See Lemma 2.1b of [GH07] for a
discussion.
The set of classes of effective, reduced, and irreducible curves of negative
intersection is
NEG(X) := {[C] ∈ Cl(X) : [C]2 < 0, C is effective, reduced, and irreducible}.
The set of classes in NEG(X) with self intersection less than −1 is
neg(X) := {[C] ∈ NEG(X) : [C]2 < −1}.
The following result of Guardo and Harbourne allows us to easily identify
divisor classes belonging to NEG(X). In the lemma, the curves defining
the configuration type are lines that pass through any three points or conics
that pass through any six points. For example, the divisors defining the
configuration type shown in Figure 3 are E0 − E1 − E2 − E3 and E0 − E1 −
E4 − E5 .
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
9
Figure 3. Points p1 , . . . , p6 of configuration type H.
Lemma 3.4 (Lemma 2.1d of [GH07]). The elements of neg(X) are the
classes of divisors that correspond to the curves defining the configuration
types. Further,
NEG(X) = neg(X)∪{[C] ∈ B∪L∪Q : [C]2 = −1, [C]·[D] ≥ 0 for all D ∈ neg(X)}
where B = {ei : i > 0}, L = {e0 −ei1 −· · ·−eir : r ≥ 2, 0 < i1 < · · · < ir ≤ 6},
and Q = {2e0 − ei1 − · · · − eir : r ≥ 5, 0 < i1 < · · · < ir ≤ 6}.
The following result will be used in Procedure 3.6; see Section 2 of [GH07].
Lemma 3.5. Suppose that [C] ∈ NEG(X) is such that [F ] · [C] < 0. Then
h0 (X, F ) = h0 (X, F − C).
Knowing how to compute h0 (X, H) for a numerically effective divisor H
will allow us to compute h0 (X, F ) for any divisor F . In particular, given
a divisor F , there exists a divisor H such that h0 (X, F ) = h0 (X, H) and
either:
(a) H is numerically effective so
h0 (X, F ) = h0 (X, H) = (H 2 − H · KX )/2 + 1
by Lemma 3.3; or
(b) there is a numerically effective divisor G such that [G]·[H] < 0 so [H]
is not the class of an effective divisor and h0 (X, F ) = h0 (X, H) = 0.
The method for finding such an H is as follows.
Procedure 3.6 (Remark 2.4 of [GH07]). Given a divisor F we can find a
divisor H with h0 (X, F ) = h0 (X, H) satisfying either condition (a) or (b)
above as follows.
(1) Reduce to the case where [F ]·ei ≥ 0 for all i = 1, . . . , n: if [F ]·ei < 0
for some i, h0 (X, F ) = h0 (X, F − ([F ] · ei )Ei ), so we can replace F
with F − ([F ] · ei )Ei .
(2) Since L is numerically effective, if [F ] · e0 < 0 then [F ] is not the
class of an effective divisor and we can take H = F (case (b)).
(3) If [F ] · [C] ≥ 0 for every [C] ∈ NEG(X) then, by Lemma 3.4, F is
numerically effective, so we can take H = F (case (a)).
(4) If [F ]·[C] < 0 for some [C] ∈ NEG(X) then h0 (X, F ) = h0 (X, F −C)
by Lemma 3.5. Then replace F with F − C and repeat from Step 2.
There are only a finite number of elements in NEG(X) to check by Lemma
3.4 so it is possible to complete Step 3. Further, [F ] · e0 > [F − C] · e0 when
10
SARAH MAYES
[C] ∈ NEG(X), so the condition in Step 2 will be satisfied after at most
[F ] · e0 + 1 repetitions. Thus, the process will terminate.
Taking these results together we can compute the Hilbert function of I (m)
as follows.
(1) Compute NEG(X) from neg(X) using Lemma 3.4.
(2) Find Ht corresponding to Ft using Procedure 3.6 for all t.
(3) Compute HI (m) (t) = h0 (X, Ft ) = h0 (X, Ht ) with Lemma 3.3.
4. Proof of the Main Theorem
In this section, we will outline the proof of Theorem 1.1. Recall that ideals
of points with the same configuration type have the same symbolic generic
initial system by Corollary 2.9 so the statement of the theorem makes sense.
Further, Proposition 2.4 ensures that the theorem includes all possible sets
of six points.
If I is the ideal of a set of six points having configuration type E, G, or
K, the theorem follows from the main result of [May12c]. Likewise, if I is
the ideal of six points of configuration type A, the theorem follows from the
main result of [May12a].
For the remaining cases we can find the limiting polytope of {gin(I (m) )}m
by following the five steps below. First, we record a lemma that will be used
in Step 2.
Lemma 4.1. Let J be a monomial ideal of K[x, y, z] generated in the variables x and y. Then the number of elements of J of degree t only involving
the variables x and y is equal to HJ (t) − HJ (t − 1). The number of minimal
generators of J in degree t is equal to HJ (t) − HJ (t − 2) − 1.
Proof. The first statement follows from the fact that there are exactly HJ (t−
1) monomials of J of degree t involving the variable z. The number of
generators in degree t is equal to the number of monomials of J in the
variables x and y of degree t minus the number of monomials of J that arise
from multiplying the elements of degree t − 1 in x and y by the variables x
and y. Using this, the last statement follows from the first.
Step 1: Find the Hilbert function of I (m) for infinitely many m by using the
method outlined in Section 3.
Step 2: Find the number of minimal generators of gin(I (m) ) of each degree t
for infinitely many m. We can use Lemma 4.1 for this computation
because gin(I (m) ) is an ideal generated in the variables x and y
(Proposition 2.8) and we know the Hilbert function of gin(I (m) ) from
Proposition 2.7 and Step 1.
Step 3: Write down the generators of gin(I (m) ) for infinitely many m. Note
that this follows from Step 2 since
gin(I (m) ) = (xα(m) , xα(m)−1 y λα(m)−1 , . . . , xy λ1 (m) , y λ0 (m) )
where λ0 (m) > · · · > λk−1 (m) ≥ 1 by Proposition 2.8.
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
11
Step 4: Compute the Newton polytope Pgin(I (m) ) of each gin(I (m) ) for infinitely many m. Recall that the boundary of these polytopes is
determined by the convex hull of the points (i, λi (m)) and (α(m), 0).
Step 5: Find the limiting polytope of the symbolic generic initial system of
I. To do this it suffices to take the limit
[ 1
P =
Pa
m m
∗
m∈N
over an infinite subset of N∗ .
All of the remaining calculations are similar but long so, for the sake of
space, we will only record the proof here for configuration H.
4.1. Proof of main theorem for configuration H. Let I be the ideal of
points p1 , . . . , p6 of configuration type H, ordered as in Figure 3.
Step 1.
First we will follow the method outlined in Section 3 to find HI (m) for
infinitely many m. We will use the notation from Section 3 and will often
denote the divisor a0 E0 − (a1 E1 + a2 E2 + a3 E3 + a4 E4 + a5 E5 + a6 E6 ) by
(a0 ; a1 , a2 , a3 , a4 , a5 , a6 ). Also, if F1 and F2 are divisors, F1 · F2 denotes
[F1 ] · [F2 ], the intersection multiplicity of their classes.
First we need to determine NEG(X). Note that the configuration type H
is defined by a line through points 1, 2, and 3 and another line through points
1, 4, and 5. Thus, neg(X) consists of the classes of A1 := E0 − E1 − E2 − E3
and A2 := E0 − E1 − E4 − E5 . The other elements of NEG(X) are exactly
those [C] ∈ B ∪ L ∪ Q such that [C]2 = −1 and [C] · [D] ≥ 0 for all
[D] ∈ neg(X) by Lemma 3.4. Using this, one can check that NEG(X)
consists of the classes of the divisors
A1 := E0 − E1 − E2 − E3 , A2 := E0 − E1 − E4 − E5 ,
B := E0 − E1 − E6 ,
Ci := E0 − Ei − E6 for i = 2, 3, 4, 5,
Dij := E0 − Ei − Ej for i = 2, 3 and j = 4, 5,
Q := 2E0 − E2 − E3 − E4 − E5 − E6 .
Next, we will follow Procedure 3.6 for each Ft once we fix m divisible by
12. The procedure produces a divisor Ht that is either numerically effective
or is in the class of an effective divisor such that
HI (m) (t) = h0 (X, Ft ) = h0 (X, Ht ).
First, we will make some observations about which elements of NEG(X)
may be subtracted during the procedure.
Suppose that J is a divisor of the form J := (a; b, c, c, c, c, d). We will
show that if the procedure allows us to subtract one Ai (respectively, one
Ci or one Dij ) from J, we can subtract them all consecutively. This is
equivalent to showing that if the intersection multiplicity of J with A1 is
12
SARAH MAYES
negative then the intersection multiplicity of J −A1 with A2 is also negative;
parallel statements hold for the Ci and Dij .
Ai :
J · A1 = a − b − 2c
(J − A1 ) · A2 = (a − 1; b − 1, c − 1, c − 1, c, c, d) · A2
= a − 1 − (b − 1) − 2c = a − b − 2c
Ci :
J · C2 = a − c − d
(J − C2 ) · C3 = (a − 1; b, c − 1, c, c, c, d − 1) · C3
= (a − 1) − c − (d − 1) = a − c − d
(J − C2 − C3 ) · C4 = (a − 2; b, c − 1, c − 1, c, c, d − 2) · C4
= (a − 2) − c − (d − 2) = a − c − d
(J − C2 − C3 − C4 ) · C5 = (a − 3; b; c − 1, c − 1, c − 1, c, d − 3) · C5
= (a − 3) − c − (d − 3) = a − c − d
Dij :
J · D24 = a − 2c
(J − D24 ) · D25 = (a − 1; b, c − 1, c, c − 1, c, d) · D25
= (a − 1) − (c − 1) − c = a − 2c
(J − D24 − D25 ) · D34 = (a − 2; b, c − 2, c, c − 1, c − 1, d) · D34
= (a − 2) − c − (c − 1) = a − 2c − 1
(J − D24 − D25 − D34 ) · D35 = (a − 3; b, c − 2, c − 1, c − 2, c − 1, d) · D35
= (a − 3) − 2(c − 1) = a − 2c − 1
Define
A := A1 + A2 , C := C2 + C3 + C4 + C5 , D := D24 + D25 + D34 + D35 .
The calculations above show that if J · A1 < 0 (if J · C2 < 0, J · D24 < 0,
respectively) then the procedure will allow us to subtract one entire copy of
A (C, D). If we begin with a divisor of the form J = (a; b, c, c, c, c, d) then
J − A, J − B, J − C, J − D, and J − Q have the same form. These facts
taken together mean that that Ht is obtained from Ft - a divisor with the
same form as J - by subtracting off copies of A, B, C, D, and Q.
In Procedure 3.6, the requirement for being able to subtract an element
of NEG(X) from J is that the intersection of that element with J is strictly
negative. Thus, it is of interest how the intersection multiplicities with
elements of NEG(X) change as other elements of NEG(X) are subtracted
from a divisor of the form (a; b, c, c, c, c, d).
If J = (a; b, c, c, c, c, d) as above, we have the following.
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
13
value of G
Ai B Ci Dij Q
(J − A) · G − J · G 2 0 -1
0
0
(J − B) · G − J · G 0 1 0
-1 -1
(J − C) · G − J · G -2 0 1
-2 0
(J − D) · G − J · G 0 -4 -2
0
0
(J − Q) · G − J · G 0 -1 0
0
1
We now use this set-up to obtain Ht from Ft by successively subtracting
elements of NEG(X) that have negative intersection with the remaining
divisor. First note that
Ft · Ai = t − 3m < 0 ⇐⇒ t < 3m,
Ft · B = Ft · Ci = Ft · Dij = t − 2m < 0 ⇐⇒ t < 2m,
and
5m
.
2
Therefore, [Ft ] = [Ht ] (that is, Ft is numerically effective) if and only if
t ≥ 3m. In this case, h0 (X, Ft ) = 12 t2 − 3m2 + 32 t − 3m + 1 by Lemma 3.3.
We will assume from this point on that 12|m.
Now suppose that 3m > t ≥ 5m
2 . In this case, [Ai ]·[Ft ] < 0, but [C]·[Ft ] ≥
0 for all other [C] ∈ NEG(X); thus, Procedure 3.6 allows us to subtract Ai
- and thus A - but no other divisors initially. How many copies can we
subtract? From the table, we see that the intersection multiplicity of the
remaining divisor with Ai increases by 2 each time we subtract a copy of Ai .
We can keep subtracting copies of A as long as the intersection multiplicity
with Ai is strictly negative; thus, we can subtract exactly
' &
'
&
3m − t
F t · Ai
=
−
2
2
Ft · Q = 2t − 5m < 0 ⇐⇒ t <
copies of A. The only other intersection multiplicity that changes through
the process subtracting As is with the Ci , which decreases by one for each
copy of A subtracted. Thus,
&
' !
&
'
3m − t
3m − t
Ft −
A · Ci = t − 2m −
2
2
7m
and this is never negative when t ≥ 5m
2 (t must be at most 3 for this expres
sion to be negative). Thus, the intersection multiplicity of Ft − 3m−t
with
2
all [C] ∈ NEG(X) is nonnegative, so
&
'
&
'
&
'
!
3m − t
3m − t
3m − t
Ht = t − 2
;m − 2
,m −
,...,m .
2
2
2
When t is even,
t−m
Ht = 2t − 3m; t − 2m,
,...,m
2
14
SARAH MAYES
and h0 (X, Ft ) = t2 − 3tm − 32 m2 + 32 t − 3m + 1 while when t is odd
t−m−1
,...,m
Ht = 2t − 3m − 1; t − 2m − 1,
2
3 2
3
1
0
2
and h (X, Ft ) = t − 3tm − 2 m + 2 t − 3m + 2 .
7m
Now suppose that 5m
2 > t ≥ 3 . In this case, Procedure 3.6 allows us
to subtract copies of Q because Ft · Q < 0. From the table, for each copy
of Q subtracted, the intersection multiplicity increases by 1; since we can
keep subtracting copies of Q as long as the intersection multiplicity with the
remaining divisor is negative, we
l can msubtract exactly −Ft · Q = 5m − 2t
A by the same argument as in the
copies. We may also subtract 3m−t
2
previous case, since subtracting copies of A doesn’t change the intersection
multiplicity with Q and vice versa.
Through the process of subtracting As and Qs the intersection multiplicities with Ci and B have changed; in particular,
&
'
l 3m − t m
3m − t
A − (5m − 2t)Q) · Ci = t − 2m −
(Ft −
2
2
and
(Ft −
l 3m − t m
A − (5m − 2t)Q) · Ci = t − 2m − (5m − 2t) = 3t − 7m.
2
These are both nonnegative, as t ≥ 7m
3 , so the intersection multiplicity
of the remaining divisor with all elements of NEG(X) is nonnegative and
Procedure 3.6 terminates.1 Therefore, when t is even
5t − 11m
, . . . , 2t − 4m)
Ht = (6t − 13m; t − 2m,
2
and h0 (X, Ft ) = 3t2 − 13tm + 14m2 + 52 t − 11
2 m + 1, while when t is odd
5t − 11m − 1
, . . . , 2t − 4m)
2
1
and h0 (X, Ft ) = 3t2 − 13tm + 14m2 + 52 t − 11
2 m + 2.
7m
Now suppose
as above, we can
that2mt = 3 − 1. By the same arguments
m
subtract 3m−t
=
copies
of
A
and
5m
−
2t
=
+
2
copies
of Q when
2
6
3
following Procedure 3.6. Then
m
2m
m
m
2m
F 7m −1 −
A−
+ 2 Q = m − 7; − 2, − 3, . . . ,
−2
3
6
3
3
3
3
has intersection multiplicity 1 with Ai and -2 with Ci . At this point, Procedure 3.6 allows us to do the following.
• Subtract one copy of C. Now the intersection multiplicity with Ai
is −1 and the intersection multiplicity with Ci is −1.
Ht = (6t − 13m − 1; t − 2m − 1,
1 7m is always an integer under the divisibility assumption so we don’t have to worry about
3
t being the smallest odd integer less than
7m+1
.
3
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
15
• Subtract one copy of A. Now the intersection multiplicity with A is
1 and the intersection multiplicity with Ci is −2.
It is clear that we can repeat this process as many times as we wish when
we follow the procedure; eventually, we will end up with a divisor that has
a negative E0 coefficient. We have that HI (m) (t) = h0 (X, Ft ) = 0 when
7m
t = 7m
3 − 1 and thus HI (m) (t) = 0 for all t < 3 .
Step 2.
Assume that 12|m.
Now we will turn our attention to the generic initial ideals of I (m) . We
compute the number of generators of gin(I (m) ) in each degree using Lemma
4.1 and the Hilbert function values from Step 1. We have the following.
Value of t
Number of generators of degree t
t < 7m
0
3
1
7m
t= 3
3m + 1
2
t = 7m
+
1
3
3m + 3
5m
7m
>
t
≥
+
2,
t
even
6
2
3
5m
7m
4
2 > t ≥ 3 + 2, t odd
t = 5m
6
2
+
1
1
t = 5m
2
5m
3m > t ≥ 2 + 2, t even
2
3m > t ≥ 5m
+
2,
t
odd
0
2
t = 3m
2
t > 3m
0
Step 3.
Assume once again that 12|m.
Note that there are
m
5m
7m
−2
m
2 − 3 −2
= 6
=
−1
2
2
12
7m
even (or odd) integers t such that 5m
2 > t ≥ 3 + 2, and
3m −
5m
2
−2
m
−1
2
4
even (or odd) integers t such that 3m > t ≥ 5m
2 + 2.
Using the results of Step 2, we can find strictly decreasing λi such that
=
gin(I (m) ) = (xk , xk−1 y λk−1 , . . . , xy λ1 , y λ0 ).
7m
Since the smallest degree generator is of degree 7m
3 , k = 3 .
The values of λi that we obtain are shown in the following table.
16
SARAH MAYES
7m
7m
degree 7m
3
3 +1
3 +2
7m
7m
4m
4m
4m
i 3
−
1
·
·
·
2m
2m
−
1
2m
−
2
·
·
·
−
3
3
3
3 − 4 ···
3 −9
m
m
m
· · · m + 4 m + 6 · · · m + 11
λi 0
1
···
3
3 +2
3 +3
5m
degree 7m
+
3
·
·
·
3
2
4m
4m
m
m
m
i 4m
−
10
·
·
·
−
13
·
·
·
−
4
−
10(
··· m
3
3
3
12 − 1) = 2 + 6
2 +5
2 +1
m
λi m + 13 · · · m + 16 · · · m + 6 + 12( 12 − 1) = 2m − 6 2m − 5 · · · 2m − 1
5m
5m
degree 5m
···
3m
2 +1
2 +2
2 +4
m
m
m
m
m
m
m
i
−
1
−
2
−
3
−
4
·
·
·
−
1
−
2(
0
2
2
2
2
2
2
4 − 1) = 1
m
λi 2m + 1 2m + 3 2m + 4 2m + 7 2m + 8 · · · 2m + 3 + 4( 4 − 1) = 3m − 1 3m
Step 4.
Assume that 12|m.
The Newton polytope of gin(I (m) ) is the convex hull of the ideal when
thought of as a subset of R2 . In particular, its boundary is determined by
the points (i, λi ) recorded in the table from Step 3. Plotting these points,
one can see that the boundary of Pgin(I (m) ) is defined by the line segments
m
4m
through the points (0, 3m), ( m
2 −2, 2m+4), ( 2 +1, 2m−1), ( 3 −9, m+11),
m
7m
4m
( 3 − 3, m + 4), (2m, 3 ), and ( 3 , 0).
Step 5.
1
and taking the limit as m
Scaling Pgin(I (m) ) from the previous step by m
approaches infinity, the limiting shape of the symbolic generic initial system
is defined by the line segments through the following points.
3m
(0, 3) = lim 0,
m→∞
m
m/2 − 2 2m + 4
m/2 + 1 2m − 1
, 2 = lim
,
= lim
,
m→∞
m→∞
2
m
m
m
m
1
4m/3 − 9 m + 11
4m/3 − 3 m + 4
= lim
, 1 = lim
,
,
m→∞
m→∞
3
m
m
m
m
1
2m m/3
2,
= lim
,
m→∞
3
m m
7
7m/3
, 0 = lim
,0
m→∞
3
m
Note that (2, 31 ) lies on the line segment connecting ( 43 , 1) with ( 37 , 0) so it
is not a vertex of the boundary of the limiting shape.
4
5. Point Configurations and Limiting Shapes: Questions and
Observations
In this section we investigate how the arrangement of points in a point
configuration influences the limiting shape of the symbolic generic initial
system of the corresponding ideal. Throughout I will be the ideal of a point
configuration in P2 and P will denote the limiting shape of {gin(I (m) )}m .
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
17
The following is Lemma 2.5 of [May12a] and is proven there; it describes
how the number of points in a configuration is reflected in the limiting shape.
Lemma 5.1. Let I be the ideal corresponding to an arrangement of r distinct
points in P2 . If Q is the complement of the limiting shape P of {gin(I (m) )}m
in R2≥0 , then the area of Q is equal to 2r .
While this lemma imposes strong restrictions on where the limiting shape
can lie, we would like a more precise description in terms of geometry.
Question 5.2. What is the meaning of the intercepts of the boundary of
P ? Can one see these intercepts in the point configuration?
We may partially answer this question. The x-intercept of P is equal to
limm→∞ α(m)
m where α(m) is the degree of the smallest degree element of
(m)
I . This limit has been studied for some special point configurations and
is sometimes equal to the Seshadri constant of I (see, for example, [BH10]
and [Har02]). Unfortunately, it does not seem as though there is a simple
connection to the point configuration.
(m)
The y-intercept of the boundary of P is equal to limm→∞ reg(Im ) (see
Lemma 3.1 of [May12a]). This limit is not as well-studied as the previous
one, but appears to have a nice geometric meaning in certain cases. For
example, when there is a line passing through at least three points and
algorithms similar to the one outlined in Section 3 may be used to find the
(m)
Hilbert function of I (m) , limm→∞ reg(Im ) is equal to the maximum number
of points lying on a single line.
Question 5.3. What features does a point configuration possess when the
boundary of P consists of a fixed number of line segments?
To describe one potential answer to Question 5.3, we will distinguish
between different ‘types’ of points within a configuration.
A curve of degree
d+2
d defines a point configuration if at least 2 points in the configuration
lie on the curve. If N points of the configuration lie on such a curve, we will
denote this curve by Cd,N . For example, in Configuration H shown in Figure
3, there are two curves defining the point configuration. Since they are both
lines containing three points, the set of curves defining the configuration is
denoted {C1,3 , C1,3 }.
Each point within a configuration may then be associated with the set of
curves defining the configuration that pass through that point. For example,
in Figure 3, points 2, 3, 4, and 5 correspond to the set {C1,3 }, point 1
corresponds to the set {C1,3 , C1,3 }, and point 6 corresponds to the empty
set. We will call such sets the incidence type of a point. Thus, Configuration
H has three distinct incidence types. Configuration F also has three distinct
incidence types: there are two points of incidence type {C1,3 }, three of
type {C1,4 }, and one of type {C1,3 , C1,4 }. Configuration I has two distinct
incidence types.
18
SARAH MAYES
Observation 5.4. Suppose that I is the ideal corresponding to one of the
following subsets of P2 : a point configuration of at most six points; a point
configuration arising from a complete intersection; a generic set of points;
points on an irreducible conic; a point configuration where all but one point
lies on a line; a point configuration where all but two points lies on a line
and no other line passes through three points; or a star point configuration.
Then the number of line segments forming the boundary of the limiting
shape of {gin(I (m) )}m is equal to the number of distinct incidence types of
the points in the corresponding point configuration ([May12a], [May13a],
[May13b], [May12c]).
While we do not have enough evidence to claim that the answer to Question 5.3 is always given by the number of incidence types in a configuration,
it is interesting to note that it holds for all of the cases that have been studied up to this point. This provides further evidence that our asymptotic
viewpoint reveals information that cannot be seen by looking at the Hilbert
functions of individual fat point ideals I (m) .
Question 5.5. Is there a geometric interpretation for the coordinates of
the ‘crux points’ lying on the intersection of the line segments defining the
boundary of P ?
The answer to this final question seems mysterious; it is likely that many
more configurations will need to be studied to formulate a reasonable conjecture to answer this question.
References
[AV03]
A. Arsie and J.E. Vatne, A note on symbolic and ordinary powers of homogeneous ideals, Annali dell’Universita di Ferrara 49 (2003), no. 1, 19–30.
[BH10]
C. Bocci and B Harbourne, Comparing powers and symbolic powers of ideals,
J. Algebraic Geometry 19 (2010), 399–417.
[BS87a] D. Bayer and M. Stillman, A criterion for detecting m-regularity, Inventiones
Mathematicae 87 (1987), 1–11.
[BS87b] D. Bayer and M. Stillman, A theorem on refining division orders by the reverse
lexicographic order, Duke J. Math. 55 (1987), 321–328.
[CHT11] S. Cooper, B. Harbourne, and Z. Teitler, Combinatorial bounds on Hilbert functions of fat points in projective space, J Pure Appl Algebra 215 (2011), 2165–
2179.
[ELS01] L. Ein, R. Lazarsfeld, and K.E. Smith, Uniform bounds and symbolic powers on
smooth varieties, Inventiones Mathematicae 144 (2001), no. 2, 241–252.
[EP90]
Ph. Ellia and C. Peskine, Groupes de points de P2 : caractère et position uniforme, Algebraic geometry., Springer LNM 1417, 1990, pp. 111–116.
[ES09]
D. Eisenbud and F.O. Schreyer, Betti numbers of graded modules and cohomology of vector bundles, J. Amer. Math. Soc. 22 (2009), no. 3, 859–888.
[Gal74] A. Galligo, A propos du théorem de préparation de Weierstrass, Lecture Notes
in Mathematics 409 (1974), 543–579.
[GH07] E. Guardo and B. Harbourne, Resolutions of ideals of any six fat points in P2 ,
J. Algebra 318 (2007), no. 2, 619–640.
[GHM09] A.V. Geramita, B. Harbourne, and J. Migliore, Classifying hilbert functions of
fat point subschemes in P2 , Collectanea mathematica 60 (2009), no. 2, 159–192.
SYMBOLIC GENERIC INITIAL SYSTEMS OF SIX POINTS
[Gre98]
[GVT04]
[Har02]
[HS98]
[Hun92]
[May12a]
[May12b]
[May12c]
[May13a]
[May13b]
[Siu01]
19
M. Green, Generic initial ideals, Six Lectures on Commutative Algebra (J. Elias,
J.M. Giral, R.M. Miro-Roig, and S. Zarzuela, eds.), Springer, 1998, pp. 119–186.
E. Guardo and A. Van Tuyl, Fat points in P1 × P1 and their Hilbert functions,
Canad. J. Math 56 (2004), no. 4, 716–741.
B. Harbourne, Problems and progress: A survey on fat points in P2 , Queen’s
Papers in Pure and Appl. Math., vol. 123, Queen’s University, Kingston, 2002,
pp. 85–132.
J. Herzog and H. Srinivasan, Bounds for multiplicities, Trans. Amer. Math. Soc.
350 (1998), no. 7, 2879–2902.
C. Huneke, Uniform bounds in Noetherian rings, Invent Math 107 (1992), no. 1,
203–233.
S. Mayes, The asymptotic behaviour of symbolic generic initial systems of
generic points, 2012, arXiv:1210.1622.
, The generic initial ideals of powers of a 2-complete intersection, 2012,
arXiv:1202:5750 [].
, The limiting shape of the generic initial system of a complete intersection, 2012, arXiv:1202:1317 [], to appear in Comm. Alg.
, The asymptotic behaviour of symbolic generic initial systems of points
on an irreducible conic, 2013, arXiv:1304.7542.
, The symbolic generic initial system of almost linear point configurations
in P2 , 2013, arXiv:1304.7541.
Y.-T. Siu, Very ampleness part of Fujitas conjecture and multiplier ideal sheaves
of Kohn and Nadel, Complex Analysis and Geometry (J.D. McNeal, ed.), Ohio
State University Mathematical Research Institute Publications, vol. 9, Verlag
Walter de Gruyter, 2001, pp. 171–191.
Department of Mathematics, University of Michigan, 530 Church Street,
Ann Arbor MI 48109
E-mail address: mayess@umich.edu
| 0 |
Indoor Location for Smart Environments with
Wireless Sensor and Actuator Networks
Zhongliang Zhao1 , Stephane Kuendig2 , Jose Carrera1 , Blaise Carron2 , Torsten Braun1 , Jose Rolim2
arXiv:1705.09543v1 [] 26 May 2017
University of Bern, Switzerland1 , University of Geneva, Switzerland 2
Email: {zhao, carrera, braun}@inf.unibe.ch, {stephane.kundig, jose.rolim}@unige.ch, blaise.carron@etu.unige.ch
Abstract—Smart environments interconnect indoor building
environments, indoor wireless sensor and actuator networks,
smartphones, and human together to provide smart infrastructure management and intelligent user experiences. To enable the
”smart” operations, a complete set of hardware and software
components are required. In this work, we present Smart Syndesi,
a system for creating indoor location-aware smart building environments using wireless sensor and actuator networks (WSANs).
Smart Syndesi includes an indoor tracking system, a WSAN
for indoor environmental monitoring and activation automation,
and a gateway interconnecting WSAN, tracking system with
mobile users. The indoor positioning system tracks the real-time
location of occupants with high accuracy, which works as a basis
for indoor location-based sensor actuation automation. To show
how the multiple software/hardware components are integrated,
we implemented the system prototype and performed intensive
experiments in indoor office environments to automate the indoor
location-driven environmental sensor monitoring and activation
process. The tracked indoor location of a user’s smartphone triggers the retrieval of environmental measurements and activates
the actuators automatically (i.e. turn on/off lights, switch on/off
fans) based on the location and correlated environmental sensor
information.
Keywords: Artificial Intelligence, Indoor Positioning, Environment Automation, Wireless Sensor and Actuator Networks.
I. I NTRODUCTION
Smart environments, for instance smart home or smart office, are expected to be intelligent and human-aware. Google’s
recent acquisition of Nest Labs [1], whose products include
smart sensor-driven and programmable thermostats, certainly
shows the huge market potential of smart environment applications. In the meanwhile, Human-Centric Computing (HCC)
focuses on improving application experiences by enhancing
application usability. Research activities on HCC have been
advancing in past years due to the rapid development of
mobile devices such as smartphones, wearable devices, as well
as distributed environmental sensors. Understanding human
beings and their contexts certainly helps to facilitate the
development of smart environment applications.
Home automation or office automation, targets to provide
convenient, comfortable, and energy efficient home or working
environments to the residents or occupants via the automation
of home or office appliances. In particular, office automation
could improve the office working experiences. For instance,
when a man enters the office room, the lighting system should
be switched on and be configured to a luminance level that
is of his favorites, and the window curtains should also be
opened automatically.
Recent advances in Artificial Intelligence (AI) technology
have enabled computer machines to perform better than human
being in certain tasks related to intelligence, such as aspects of
image recognition. The combination of artificial intelligence
and automation will automate entire task-handling process.
For instance, the latest machine learning algorithms enable
us to design efficient and scalable indoor positioning system
without calibration process, which makes the system much
more stable. The user’s indoor behaviors and preferences can
also be analyzed to find out patterns that can be used to trigger
the automation operations.
Given the previous examples, intelligent automation is the key target in smart environment applications. However, most of the existing products on the home/office automation market do not exhibit sufficient intelligence. To achieve truly intelligent automation, both software intelligence and a hardware automation procedure are needed. This means that "smart environments" are really "smart" only if they are aware of human status, such as locations, preferences, and intentions. The software intelligence should detect the human status first, which then triggers the hardware automation procedure.
To provide software intelligence and awareness of human status, such as indoor locations, an efficient and accurate indoor positioning system is required. The system should be able to track users' indoor locations without asking people to carry additional devices. Moreover, an infrastructure management platform, such as a wireless sensor and actuator network, should be established to connect the software intelligence with hardware appliances, so that intelligent decisions can be transferred to trigger the electrical appliances.
In this work, we have built an indoor location-aware smart office environment by combining an indoor localization system with an infrastructure management system using existing office WiFi networks and wireless sensor and actuator networks. The system is able to track people's indoor locations and automate the operation of office building appliances accordingly. The main contributions can be summarized as follows.
• We implement and deploy the indoor positioning system in a smart office testbed, the Syndesi framework, and integrate the indoor localization functions.
• We implement the Syndesi automation feature by enabling the indoor location-aware automation process.
• We conduct a set of extensive experiments to validate the system in complex indoor environments with long tracking paths. Results show that the indoor location-driven automation of smart environment appliances works reliably in real time.
The structure of the paper is as follows: Section II discusses existing solutions and background related to this work; Section III details the proposed system architecture and describes the integration challenges; Section IV presents the system deployment and evaluation details to validate the proposed integrated system. Finally, Section V concludes the paper and discusses the main achievements of this research.
II. RELATED WORK
The first problem in smart environment applications is to detect the number and identities of the residents or occupants in home or office building environments. Such knowledge can be very useful to various home/office automation applications, such as energy management, lighting control, and security. Ebadat et al. [13] proposed estimating the number of people in an indoor environment using information available in standard HVAC (heating, ventilating, and air conditioning) systems. Their approach avoids the installation of extra hard sensors in the environment and offers a new solution to this problem. He et al. [14] offer a different solution using Google Glass, combining data from both visual and inertial sensors. Mao et al. [15] investigated building occupancy estimation using a wireless CO2 sensor network.
Determining people's indoor locations is another essential requirement for building smart office or smart home applications. GPS technology, which provides accurate positioning outdoors, cannot deliver satisfactory indoor positioning because of signal propagation losses. Therefore, many solutions have been proposed to provide accurate indoor positioning services. In [16], the authors proposed a fingerprinting-based solution combining digital compass and WiFi information. The authors of [17] proposed a tracking system exploiting particle filter features; their work adopts a particle filter to combine PDR (pedestrian dead reckoning) and floor plans. A WiFi component records RSSI values periodically from all available access points on a floor; the WiFi information is used to perform room recognition and turn verification. The PDR component outputs a human motion vector model, which is used as input for the particle filter component. The authors of [18] implemented a Radial Basis Function Network to estimate the location of occupants using RFID (Radio-Frequency Identification) and IR (Infra-Red) sensor data. Leveraging the fact that almost all indoor occupants carry smartphones, Carrera et al. [12] designed a terminal-based positioning system, which uses an enhanced particle filter to fuse PDR, WiFi ranging, and the floor plan to further improve indoor positioning accuracy. Chen et al. [19] designed a smart home application focusing on indoor greeneries for improving indoor living environments.
To build an infrastructure platform that controls the electrical appliances in smart environments, wireless sensor and actuator networks are the dominant technology [20], [21], [22], [23], [24]. WSNs, rather than WiFi-based networks, have been widely employed for remote control and monitoring applications, mainly because of their low cost and reduced power consumption [25], [26], [27], [28]. The deployment of such a system is not easy due to the existence of different communication standards. Tudose et al. [29] proposed a wireless sensor network using 6LoWPAN to connect sensors and actuators in a home automation application. The authors of [11] presented Syndesi, a framework for creating personalized smart environments using WSANs. It combines WSANs with different communication technologies, such as Near Field Communication (NFC), Bluetooth, ZigBee, and 6LoWPAN, along with an electrical interface to control office appliances.
III. INDOOR LOCATION FOR SMART ENVIRONMENTS
This section details the design of the proposed indoor location-aware smart environment architecture. We first present the architecture of the proposed system and its components. Then, we describe some implementation details and the mathematical algorithms that support our application. Moreover, we discuss the challenges encountered when integrating all the components.
A. Overall Architecture
The Smart Syndesi framework, as shown in Figure 1, is a system comprising heterogeneous hardware devices together with intelligent software engines. It has three main components, namely the Syndesi wireless sensor and actuator network management testbed, an indoor localization and navigation engine, and a human-centric actuation automation module. The Syndesi testbed is responsible for creating and managing personalized smart environments using WSANs. It can also control electronic appliances via an electrical interface. The indoor localization and navigation engine estimates the indoor location of users in real time. The human-centric actuation automation module is responsible for activating the correlated actuators automatically based on the estimated user locations. In the following, we describe the functions of each module in detail.
B. Syndesi Testbed
Syndesi [11] focuses on providing personalized services for smart environments by combining sensor networks, electrical appliances, actuators, and gateways, utilizing communication protocols and technologies such as Near-Field Communication (NFC), Bluetooth, ZigBee, 802.15.4, and 6LoWPAN. The whole system is built following a RESTful architectural approach, providing interoperability between its resources, devices, services, and the Internet. Benefiting from Syndesi's REST-enabled services, many additional updates have been implemented. One of the most important is an interactive Android application for mobile devices, which allows smartphone users equipped with the Syndesi Android application to directly interact with the system's actuators, as well as contribute data collected from the smartphone sensors. Moreover, the mobile app integrates indoor localization mechanisms, which are discussed in detail in the next sections, in order to provide live location-stamped data and allow for operation automation based on user location.

Figure 1: Smart Syndesi System Architecture.
The Syndesi components are deployed in 4 office rooms on the University of Geneva premises, where 7 people currently work. The core WSAN comprises 28 TelosB [7] and Zolertia Z1 [8] sensor motes, which are connected to the existing electrical and electronic office devices, such as lights, fans, and electric curtains, via solid state relays. The core of the Syndesi testbed is implemented on a Linux-based machine hosting the gateway server. It serves as a connection point for all the elements and components: the WSAN, the mobile crowdsensing smartphone application, the web, etc. Every service or resource is provided as a web service, in the form of a RESTful API also deployed on the gateway server and linked to the web through a proxy service. In addition, an SQL database is hosted on the server, where sensor data can be pushed via the corresponding APIs. As the REST architecture implies, the API calls have the simple form of a URI utilizing GET or POST methods.
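As an illustration, a Syndesi-style actuation or data-push call could look like the following minimal Python sketch; the gateway address, resource paths, and node identifiers below are hypothetical placeholders, not the actual testbed URIs.

```python
import requests

GATEWAY = "http://gateway.example.org"  # hypothetical gateway address

def toggle_node(node_id: str, status: str) -> bool:
    """Send a RESTful GET request toggling an actuator node."""
    resp = requests.get(f"{GATEWAY}/nodes/{node_id}/actuate",
                        params={"status": status}, timeout=5)
    return resp.status_code == 200

def push_sensor_reading(node_id: str, reading: dict) -> bool:
    """POST a JSON-formatted sensor reading to the gateway database."""
    resp = requests.post(f"{GATEWAY}/sensors/{node_id}/data",
                         json=reading, timeout=5)
    return resp.status_code in (200, 201)
```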
Figure 2: 3D representation of office environment. Green
marks are sensors and red marks are actuators.
In Figure 2 we present a 3D model of one of the offices. The red marks indicate the electrical devices that are connected to Syndesi and thus capable of being triggered via the designated APIs. Most devices, such as the fans, lights, and coffee machine, support switching on/off, while the electric curtain motor is actuated up/down. Some sensor motes (green marks) are placed on the wall for purely monitoring purposes. Using those motes, an automated polling scheme has been running since September 2016, collecting environmental data, such as temperature and illuminance, and storing it in the server database following a predefined temporal pattern.
C. Indoor Localization and Navigation Engine
To track people in indoor environments, we have designed an indoor positioning system supporting real-time indoor localization and navigation. Our approach [12] is able to provide high accuracy by fusing the smartphone's on-board sensor readings, such as the Inertial Measurement Unit (IMU), WiFi received signal strength (RSS), and floor plan information in a particle filter. All the tracking algorithms run on the smartphone itself, which requires no additional hardware deployment. Figure 3 depicts the architecture of the indoor positioning system, which contains the following four components:
1) Inertial Measurement Unit: In order to estimate the pedestrian displacement, we use two sensors: the accelerometer and the geomagnetic field sensor. The displacement of the pedestrian at time t is defined by the motion vector Mv_t = [θ_t, ℓ_t], where θ_t is the heading orientation and ℓ_t is the stride length at time t. Time t is the instant at which the pedestrian executes a new step. Therefore, step recognition and heading orientation methods are implemented in this component. The step recognition method is developed by parsing acceleration readings, whereas the heading orientation is determined by a digital compass built from the geomagnetic field and accelerometer sensors. Although the stride length can vary along the trajectory, in this work we assume that ℓ is constant in order to focus on the tracking algorithm.
2) Floor Plan Component: Information about the area of interest is used to further improve the tracking accuracy. This component defines the constraints imposed by the area of interest. Thus, zones in the floor plan where the pedestrian is not allowed to walk, e.g., walls and furniture, are defined as not-allowed zones.

Figure 3: Indoor Positioning System.
3) Radio Information Component: Radio information needs to be converted to range values. In order to achieve high ranging accuracy we adopt the Non-Linear Regression (NLR) model presented in [10]. The NLR model is defined as follows:

    d_{j,t} = α_j · e^{RSS_{j,t} · β_j},    (1)

where d_{j,t} is the distance between the target object and the j-th AN (anchor node) at instant t, α_j and β_j are environmental variables defined for the j-th AN, and RSS_{j,t} is the signal power measured from the j-th AN at time t.
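A minimal sketch of the NLR ranging step in (1), assuming the per-AN parameters α_j and β_j have already been calibrated (the numeric values in the comment are made up for illustration):

```python
import math

def nlr_distance(rss_dbm: float, alpha: float, beta: float) -> float:
    """Convert a WiFi RSS reading (dBm) from anchor node j into a
    range estimate d = alpha * exp(RSS * beta), following Eq. (1)."""
    return alpha * math.exp(rss_dbm * beta)

# Example with hypothetical calibration values for one access point:
# d = nlr_distance(-62.0, alpha=0.35, beta=-0.045)
```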
4) Data Fusion Component: We propose a particle filter approach fusing PDR, WiFi, and floor plan information to support real-time tracking in indoor environments. In our approach, an additional re-sampling method is incorporated in order to further mitigate the errors caused by the off-the-shelf WiFi sensors embedded in commodity smartphones. The state vector at time t is defined as follows:

    X_t = [x_t, y_t, θ_t, ℓ_t],    (2)

where (x_t, y_t) are the Cartesian coordinates of the target object, θ_t is the heading orientation, and ℓ_t is the stride length at time t.
The prediction function can be written as:

    X_t = F · X_{t−1} + η,    (3)

where

    F = [ 1  0  0  0          η = [ ℓ′ · cos(θ′)
          0  1  0  0                ℓ′ · sin(θ′)
          0  0  0  0                θ′
          0  0  0  0 ],             ℓ′ ],

with θ′ = θ + ε′ and ℓ′ = ℓ + ε′′. Heading orientation and stride length are assumed to be perturbed by zero-mean Gaussian random noise; ε′ and ε′′ are the errors introduced in the calculation process.
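A minimal sketch of this prediction step for a set of particles; the noise standard deviations below are illustrative assumptions, not calibrated values from the deployment:

```python
import numpy as np

def predict(particles: np.ndarray, sigma_theta=0.1, sigma_l=0.05) -> np.ndarray:
    """Propagate particles [x, y, theta, l] through Eq. (3):
    heading and stride are perturbed by zero-mean Gaussian noise,
    and the position advances by the resulting motion vector."""
    x, y, theta, l = particles.T
    theta_new = theta + np.random.normal(0.0, sigma_theta, theta.shape)
    l_new = l + np.random.normal(0.0, sigma_l, l.shape)
    x_new = x + l_new * np.cos(theta_new)
    y_new = y + l_new * np.sin(theta_new)
    return np.column_stack([x_new, y_new, theta_new, l_new])
```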
All the components and algorithms run on the smartphone, which continuously monitors the existing WiFi network and its on-board sensors.
D. Machine Learning for Room Recognition

Location plays an essential role in context-aware smart environment applications. Information about the location of users or their identities may be needed to develop customized services. Therefore, different localization approaches should be implemented based on the required accuracy: some applications might need sub-meter accuracy, while meter-level accuracy can be sufficient for others. In smart environment applications, room-level accuracy is already enough to enable automated activation of actuators based on estimated user locations, such as turning lights on/off and opening/closing curtains when a user enters or leaves a room. Before the integration of the localization mechanisms, Syndesi application users had an assigned office room corresponding to their working location, which was used as the location for their sensed data. Therefore, in this work, we regard room-level localization accuracy as sufficient and propose an indoor localization approach that provides user localization information at room-level accuracy.
The rapid development of WiFi-capable mobile devices and the wide availability of WiFi network infrastructure have stimulated considerable research on indoor localization using WiFi technology. However, the instability of WiFi signal propagation in indoor environments introduces errors into the localization process. Therefore, a simple WiFi localization approach like triangulation cannot cover the demands of practical applications. Nevertheless, the WiFi signal propagation instability can be used as a fingerprinting identifier for locations in indoor environments. Thus, we treat room recognition as a classification problem and propose a simple WiFi fingerprint room recognition approach. The key idea is to combine the signal strength from multiple access points in order to build WiFi fingerprints for supervised learning. In this work we propose fingerprinting room recognition based on the WiFi RSS transmitted by nearby WiFi access points. This approach does not require additional specialized hardware or special infrastructure support, making it feasible to implement even on mobile devices limited in power and resources. The proposed approach consists of two phases: the off-line training phase and the on-line localization phase. The training phase builds the fingerprint database, which consists of vectors of WiFi RSS collected from the nearby WiFi access points. During the training phase, each surveyed room is characterized by a set of vectors of WiFi RSS readings. During the on-line localization phase, the predicted room likelihood is calculated based on the previously collected information and the current WiFi RSS vector. Thus, room recognition can be handled by searching for the closest match to the test data in feature space. We implement two simple yet proven efficient classification algorithms for this purpose: the K-Nearest Neighbor (KNN) algorithm and the Support Vector Machine (SVM) classifier. KNN and SVM are among the simplest classification algorithms available for supervised learning [2]. Therefore, when a localization task is launched in the mobile app, both classifiers are employed, and in case of a classification mismatch the process is repeated with a newly collected WiFi RSS vector.

Figure 4: Integrated Android application architecture.
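A minimal room-recognition sketch in the spirit of this approach, shown here with scikit-learn rather than the OpenCV classifiers used on the phone; the fingerprints and room labels are made-up toy data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Training data: each fingerprint is a vector of RSS values (dBm),
# one entry per surveyed access point; labels are room identifiers.
fingerprints = np.array([[-45, -70, -80], [-48, -72, -78],   # room A
                         [-75, -50, -65], [-73, -52, -66]])  # room B
rooms = np.array(["A", "A", "B", "B"])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(fingerprints, rooms)

# On-line phase: classify a freshly scanned RSS vector.
print(knn.predict([[-46, -71, -79]]))  # -> ["A"]
```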
E. Mobile Application
The mobile application has been developed in Android Studio and IntelliJ IDEA [4] using the Android SDK with API level 21. It supports devices running API level 15 and above, which now encompasses over 99.1% of Android smartphone users [6]. Figure 4 depicts the main components of the app. The sensing unit is responsible for querying the smartphone sensors for values. The localization unit is responsible for estimating the smartphone location. The control unit handles the various actuation/automation tasks as well as resource and power management. The application is constructed around the Model-View-Controller (MVC) pattern [30]: users, sensor data, and nodes are models; views are handled by Android activities; and controllers use services to run in the background and the broadcast method to communicate and update the user interface. The application follows the Separation of Concerns (SoC) design principle [31] to facilitate maintenance and development. All the data transmitted to the server are formatted as JSON.
1) Sensing unit: When a user grants permission to access the device sensors in the settings, the application first discovers the available sensors on the device and then establishes a monitoring scheme. The polling rate is set manually in the settings, the default value being one measurement per minute. After each new sensing event, the raw data is converted to JSON, after being stamped with the user's id and the location provided by the localization unit. Then, a network controller communicates with the RESTful service provided by the Syndesi server and transmits the data via HTTP calls. This controller can easily be subclassed to connect to other types of servers, making the app easily adaptable to different environments. The HTTP requests are made using Volley [9], an HTTP library for Android that permits more efficient and faster networking through caching, multiple concurrent connections, and automatic scheduling of network requests. All tasks performed by the sensing unit run as services that can also run in the background, thereby allowing continuous data collection even during the smartphone's idle periods.
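The JSON document assembled by the sensing unit could resemble the following sketch; the field names are illustrative assumptions, not the exact schema used by the gateway:

```python
import json
import time

def build_reading(user_id: str, room: str, sensor: str, value: float) -> str:
    """Stamp a raw sensor value with user id, location, and time, and
    serialize it as the JSON document sent to the gateway server."""
    return json.dumps({
        "user": user_id,
        "room": room,            # provided by the localization unit
        "sensor": sensor,
        "value": value,
        "timestamp": int(time.time()),
    })
```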
2) Localization unit: The localization unit is where the localization tasks are implemented. As mentioned in Section III-D, the full tracking mechanism was evaluated as too power-consuming to be used in a mobile app, so we adopted a lightweight version of the same algorithm to achieve accurate room-level localization. The lightweight version uses only the radio information component to derive the user's room, at a temporal interval defined by the user. When a localization task is launched, the app scans for surrounding WiFi signals and forms a new WiFi RSS vector, which is then fed to the SVM and KNN classifiers. The classifiers are trained every time the app is launched, providing the possibility of training on different data without re-installing the whole application. Just as in the sensing unit, the localization tasks run as services that can run in the background. The classifiers used for the localization come from OpenCV [3], an open source computer vision and machine learning library.
Figure 5: Sensors monitoring. Figure 6: Application settings.
Figure 7: Experiment scenario (floor plan with WiFi access points AP1-AP5, the numbered check points, and the walking path from Start to End; axes in meters).
3) Control unit: The control unit communicates with the gateway server via a network controller that sends HTTP requests in order to receive the list of sensor nodes registered in the WSAN. The nodes are parsed from the JSON response and displayed on the app screen. The user can toggle a node by clicking on it, which causes the network controller to send an HTTP GET request to the server, using the node's id and the desired status as parameters. Besides the manual triggering of the actuators, the control unit includes an automated environment control module which uses the tracked indoor location to activate smart management within the Syndesi framework. If a user enables that option in the settings, a detected change of location triggers a process that switches on the appropriate appliances, such as lights, that are in the same room as the device, switching off the ones in the previous room if no one else is currently located inside. The user can also set a desired ambient temperature and illuminance in the environment control configuration; when compared to the live measured values, these will actuate an office fan, turn off some lights, or pull a curtain up/down in the corresponding cases. Figure 5 shows a screenshot of the application home screen, where the latest sensor values are displayed along with the current location and the server status. Figure 6 shows the various settings of the application.
F. Integration challenges
1) Machine learning library: The first challenge when integrating the indoor localization algorithm into the mobile application was the inclusion of the different classification algorithms, such as K-Nearest Neighbors (KNN) and Support Vector Machines (SVM). Presently, no standalone machine learning library exists for Android, so we decided to utilize OpenCV, even though OpenCV was initially designed for image processing tasks. This library is open source and available on Android, but the downside is that the user has to install it separately, something our app prompts him/her to do if it is not already installed. We are currently in the process of moving the localization computations to the server side for future versions of the app to deal with this issue.
2) Android permissions: The initial app was built and tested on phones running Android OS version 5.x. Most of the OS updates since then have modified the permission policy for applications, which led to constant refactoring of the app in order to stay in line with the newest changes and the advanced Android policy. The app is now fully compatible with the latest version, Android 7.1.2.
3) Battery consumption: The initial version of the app was configured to collect data every 3 minutes, and over a full working day (8 hours) we measured an average battery consumption of 3%. However, as new features continued to be added to the app, e.g., faster polling rates, expensive localization processing, and node management, it became significantly more power demanding. To deal with this issue, we introduced a new power management feature, where the polling rates as well as the localization WiFi sampling intervals depend on the smartphone's state and its current battery level. The user can still choose a desired polling rate in the settings, but when the phone battery drops below 50%, the polling rate is reduced by half; furthermore, if the battery life goes below 20%, the sensing and localization functions are both disabled. We plan to further improve power efficiency by batching data that are not critical to automation before sending them to the server and by activating the localization only when a user movement has been detected.
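The battery policy described above can be captured in a few lines; a sketch, where the base interval stands for whatever polling rate the user chose in the settings:

```python
from typing import Optional

def polling_interval(base_seconds: float, battery_pct: float) -> Optional[float]:
    """Adapt the sensing/localization polling interval to battery level:
    halve the rate (double the interval) below 50%, and suspend
    sensing and localization entirely below 20%."""
    if battery_pct < 20:
        return None  # sensing and localization disabled
    if battery_pct < 50:
        return base_seconds * 2
    return base_seconds
```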
IV. SYSTEM DEPLOYMENT AND EVALUATION
In this section, we explain how the system is deployed and evaluated in an indoor office environment at the Computer Science department of the University of Geneva.
A. System Deployment
We deployed the system at the TCS-Sensor Lab in the Computer Science department of the University of Geneva, with an office area of nearly 260 m2. In order to cover the area of the 4 office rooms and the in-between corridor with WiFi signals sufficient to feed the localization algorithm, we strategically deployed 5 WiFi access points. Each room has multiple sensors (e.g., humidity and illuminance sensors) and electrical appliances as actuators (e.g., desk fans, desk lights, and electrical curtains). Each actuator is paired with a sensor mote via a solid state relay [32]. The relay is connected to the 220V power supply, to the actuator, and to the sensor mote. The relay receives a 3V output from the mote once a location match is detected in the app, thanks to the accurate indoor localization mechanisms. The current generated from this voltage then enables the relay's underlying circuit, and the correlated appliances are switched on automatically. During the experiments, a person walks through the office areas holding a smartphone with the Smart Syndesi app installed and enabled (as shown in Figure 7).
B. Evaluation
This section includes two types of experimental evaluation: a functional one and a non-functional one. The functional evaluation validates the prototype functionalities, while the non-functional one evaluates the indoor localization accuracy.
For the functionality evaluation, when the person holding the smartphone enters an office, the actuators of that office should be automatically triggered based on the user preferences. As shown in Figure 8, the "environment control" feature correctly triggers the specific appliances based on the user settings, displaying in its interface the automation status based on the current location/office. The interface of the "Node Manager" also presents the actuators that are in the current room and their status for manual triggering, as shown in Figure 9.
Figure 8: Appliance Automation. Figure 9: Nodes Manager.
Figure 10: Evaluation results (localization accuracy and correctly actuated appliances per check point).
For the evaluation of the accuracy of the room recognition, as well as the automated environment control, an experiment scenario and walking path have been defined. As shown in Figure 7, the experiment consists of a person holding the smartphone and walking through the office areas following a predefined path (shown as a green dashed line). From the starting point, the person holds the smartphone and enters the office rooms one by one, triggering the "relocate" function at the defined "check points" on the ground (shown as numbers 1-6 in Figure 7). During that time, the mobile app is configured with the environment control function enabled, so the expected outcome is the enabling/disabling of the corresponding electrical appliances as the location changes. As the WiFi received signal strength depends on the smartphone hardware components, we used 3 smartphone models from different manufacturers; their detailed specifications are shown in Table I.
Table I: Smartphone Models

Model           | Specifications
Samsung Note 3  | Android 5.0; Quad-core 2.3 GHz Krait 400; Wi-Fi 802.11 a/b/g/n/ac; 3GB RAM
Sony Xperia Z5  | Android 7.0; Quad-core 2.0 GHz; Wi-Fi 802.11 a/b/g/n/ac; 3GB RAM
LG Nexus 5X     | Android 7.1.2; Hexa-core 1.8 GHz Cortex-A53; Wi-Fi 802.11 a/b/g/n/ac; 2GB RAM
The experiments were run 20 times for each phone, and at every checkpoint we gathered metrics on room recognition and appliance actuation. As the results in Figure 10 show, the overall accuracy of the room recognition was well above 90%, which demonstrates the efficiency of the deployed localization system. In particular, we notice that check points 1, 3, 5, 7, and 8, which are all located inside the 4 offices, achieved a perfect score. This is an important result since, when it comes to localization misses, a corridor miss is far less critical, the corridor being basically a transitional state. Regarding the actuation of office appliances, in the cases where the office room was correctly detected, the system demonstrated robustness and reliability, as there were almost no failures in the phone-server-WSAN chain that enables the automation, as shown in the results figure.
V. CONCLUSION
In this paper, we presented an indoor location-aware smart environment system. The system consists of an indoor localization module, a wireless sensor and actuator network for electrical appliance management, and a gateway interconnecting the office appliances and the localization engine with smartphone users. Thanks to the accurate indoor localization module, the system is able to identify people's indoor locations in real time, which then triggers the automation of office appliances. We have implemented the prototype and deployed the system in an indoor environment with 4 office rooms at the Computer Science department of the University of Geneva. We have performed multiple experiments with different smartphone models to validate the system functionality and performance. Evaluation results show that the estimated indoor locations are of high accuracy and that the automation of office appliances can be triggered by the estimated indoor locations in real time.
In the future, an envisioned milestone is to offload the localization computations to cloud/edge servers deployed in the smart environments. In this way, we can deploy the full tracking system and even experiment with more advanced machine learning techniques and algorithms that were too demanding in power and computational resources to be implemented on a mobile device. Moreover, we plan to enhance the system with more power-efficient management as well as security features to make it privacy-preserving.
ACKNOWLEDGMENT
This work was partly supported by the Swiss National
Science Foundation via the SwissSenseSynergy project under
grant number 154458.
REFERENCES
[1] Nest Labs, home automation producer. [Online]. Available: https://nest.com/
[2] OpenCV.org, K-Nearest Neighbour. [Online]. Available: https://goo.gl/RXDGwP
[3] OpenCV.org, OpenCV library. [Online]. Available: http://docs.opencv.org/3.0-beta/index.html
[4] IntelliJ IDEA, the Java IDE. [Online]. Available: https://www.jetbrains.com/idea/
[5] Android 5.0 API. [Online]. Available: https://developer.android.com/about/versions/android-5.0.html
[6] Android Developers Dashboard. [Online]. Available: https://developer.android.com/about/dashboards/index.html
[7] TelosB mote. [Online]. Available: https://telosbsensors.wordpress.com/
[8] Zolertia Z1 sensor. [Online]. Available: http://zolertia.io/z1/
[9] Transmitting Network Data Using Volley. [Online]. Available: https://developer.android.com/training/volley/index.html
[10] Z. Li, T. Braun, and D. Dimitrova, "A passive WiFi source localization system based on fine-grained power-based trilateration," in Proc. IEEE Int. Symp. on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2015.
[11] O. Evangelatos, K. Samarasinghe, and J. Rolim, "Syndesi: A framework for creating personalized smart environments using wireless sensor networks," in Proc. IEEE Int. Conf. on Distributed Computing in Sensor Systems (DCOSS), 2013.
[12] J. Carrera, Z. Li, Z. Zhao, T. Braun, and A. Neto, "A real-time indoor tracking system in smartphones," in Proc. 19th ACM Int. Conf. on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), 2016.
[13] A. Ebadat, G. Bottegal, D. Varagnolo, B. Wahlberg, and K. H. Johansson, "Regularized deconvolution-based approaches for estimating room occupancies," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1157-1168, Oct. 2015.
[14] H. He, Y. Li, and J. Tan, "Wearable ego-motion tracking for blind navigation in indoor environments," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1181-1190, Oct. 2015.
[15] C. Mao and Q. Huang, "Occupancy estimation in smart building using hybrid CO2/light wireless sensor network," in ASA Multidisciplinary Research Symposium, 2016.
[16] P. Nagpal and R. Rashidzadeh, "Indoor positioning using magnetic compass and accelerometer of smartphones," in Proc. Int. Conf. on Selected Topics in Mobile and Wireless Networking, 2013.
[17] F. Hong, Y. Zhang, Z. Zhang, M. Wei, Y. Feng, and Z. Guo, "WaP: Indoor localization and tracking using WiFi-assisted particle filter," in Proc. 39th IEEE Conf. on Local Computer Networks (LCN), 2014.
[18] M. V. Moreno-Cano, M. A. Zamora-Izquierdo, J. Santa, and A. F. Skarmeta, "An indoor localization system based on artificial neural networks and particle filters applied to intelligent buildings," Neurocomputing, vol. 122, pp. 116-125, Dec. 2013.
[19] M. Chen, J. Yang, X. Zhu, X. Wang, M. Liu, and J. Song, "Smart Home 2.0: Innovative smart home system powered by botanical IoT and emotion detection," Mobile Networks and Applications, 2017.
[20] D. M. Han and J. H. Lim, "Smart home energy management system using IEEE 802.15.4 and ZigBee," IEEE Transactions on Consumer Electronics, vol. 56, no. 3, pp. 1403-1410, Aug. 2010.
[21] J. Byun, B. Jeon, J. Noh, Y. Kim, and S. Park, "An intelligent self-adjusting sensor for smart home services based on ZigBee communications," IEEE Transactions on Consumer Electronics, vol. 58, no. 3, pp. 794-802, Aug. 2012.
[22] H. Wang and J. Wang, "Design and implementation of a smart home based on WSN and AMR," Applied Mechanics and Materials, vols. 271-272, pp. 1485-1489, 2013.
[23] J. M. Wang and H. B. Wei, "Design of smart home management system based on GSM and ZigBee," Advanced Materials Research, vol. 842, pp. 703-707, 2014.
[24] W. Liu and Y. Yan, "Application of ZigBee wireless sensor network in smart home system," International Journal of Advanced Computer Technology, vol. 3, no. 5, pp. 154-160, June 2011.
[25] V. C. Gungor, B. Lu, and G. P. Hancke, "Opportunities and challenges of wireless sensor networks in smart grid," IEEE Transactions on Industrial Electronics, vol. 57, no. 10, pp. 3557-3564, Oct. 2010.
[26] X. Cao, J. Chen, Y. Xiao, and Y. Sun, "Building-environment control with wireless sensor and actuator networks: Centralized versus distributed," IEEE Transactions on Industrial Electronics, vol. 57, no. 11, pp. 3596-3605, Nov. 2010.
[27] M. Magno et al., "Extended wireless monitoring through intelligent hybrid energy supply," IEEE Transactions on Industrial Electronics, vol. 61, no. 4, pp. 1871-1881, Apr. 2014.
[28] S. D. T. Kelly, N. K. Suryadevara, and S. C. Mukhopadhyay, "Towards the implementation of IoT for environmental condition monitoring in homes," IEEE Sensors Journal, vol. 13, no. 10, pp. 3846-3853, Oct. 2013.
[29] D. S. Tudose et al., "Home automation design using 6LoWPAN wireless sensor networks," in Proc. Int. Conf. on Distributed Computing in Sensor Systems and Workshops (DCOSS), 2011.
[30] Model-View-Controller design pattern. [Online]. Available: https://developer.apple.com/library/content/documentation/General/Conceptual/DevPedia-CocoaCore/MVC.html
[31] D. Jackson and E. Kang, "Separation of concerns for dependable software design," in Proc. FSE/SDP Workshop on Future of Software Engineering Research (FoSER), 2010.
[32] DC to AC solid state relay. [Online]. Available: http://www.fotek.com.hk/solid/SSR-1.htm
Likelihood Gradient Evaluation Using Square-Root
Covariance Filters
arXiv:1605.06654v1 [] 21 May 2016
M.V. Kulikova
Abstract— Using the array form of numerically stable square-root implementation methods for the Kalman filtering formulas, we construct a new square-root algorithm for the log-likelihood gradient (score) evaluation. This avoids the use of the conventional Kalman filter, with its inherent numerical instabilities, and improves the robustness of computations against roundoff errors. The new algorithm is developed in terms of covariance quantities and based on the "condensed form" of the array square-root filter.
Index Terms— identification, maximum likelihood estimation, gradient
methods, Kalman filtering, numerical stability.
I. INTRODUCTION
Consider the discrete-time linear stochastic system

    x_k = F_k x_{k−1} + G_k w_k,    (1)
    z_k = H_k x_k + v_k,    k = 1, ..., N,    (2)

where x_k ∈ R^n and z_k ∈ R^m are, respectively, the state and the measurement vectors; k is a discrete time, i.e., x_k means x(t_k). The noises w_k ∈ R^q, v_k ∈ R^m and the initial state x_0 ∼ N(x̄_0, Π_0) are taken from mutually independent Gaussian distributions with zero mean and covariance matrices Q_k and R_k, respectively, i.e., w_k ∼ N(0, Q_k), v_k ∼ N(0, R_k). Additionally, system (1), (2) is parameterized by a vector of unknown system parameters θ ∈ R^p, which needs to be estimated. This means that the entries of the matrices F_k, G_k, H_k, Q_k, R_k and Π_0 are functions of θ ∈ R^p. However, for the sake of simplicity we will suppress the corresponding notations below, i.e., instead of F_k(θ), G_k(θ), H_k(θ), Q_k(θ), R_k(θ) and Π_0(θ) we will write F_k, G_k, H_k, Q_k, R_k and Π_0.
Solving the parameter estimation problem by the method of maximum likelihood requires the maximization of the likelihood function (LF) with respect to the unknown system parameters. This is often done using a gradient approach, where the computation of the likelihood gradient (LG) is necessary. For the state-space system (1), (2) the negative Log LF is given as [1]:

    L_θ(Z_1^N) = (1/2) Σ_{k=1}^{N} { m ln(2π) + ln(det R_{e,k}) + e_k^T R_{e,k}^{−1} e_k },

where Z_1^N = [z_1, ..., z_N] is the N-step measurement history and the e_k are the innovations, generated by the discrete-time Kalman filter (KF), with zero mean and covariance matrix R_{e,k}. They are given by e_k = z_k − H_k x̂_{k|k−1} and R_{e,k} = H_k P_{k|k−1} H_k^T + R_k, respectively. The KF produces the one-step-ahead predicted state estimate x̂_{k|k−1} and the one-step predicted error covariance matrix P_{k|k−1}.
Straightforward differentiation of the KF equations is a direct approach to the Log LG evaluation, known as the "score". This leads to a set of p vector equations, known as the filter sensitivity equations, for computing ∂x̂_{k|k−1}/∂θ, and a set of p matrix equations, known as the Riccati-type sensitivity equations, for computing ∂P_{k|k−1}/∂θ.
Consequently, the main disadvantage of the standard approach is the problem of numerical instability of the conventional KF, i.e., divergence due to the lack of reliability of the numerical algorithm. Solution of the matrix Riccati equation is a major cause of numerical difficulties in the conventional KF implementation, from the standpoint of computational load as well as from the standpoint of computational errors [2].

(Manuscript received September 14, 2007; revised May 9, 2008. M.V. Kulikova is with the School of Computational and Applied Mathematics, University of the Witwatersrand, South Africa; e-mail: Maria.Kulikova@wits.ac.za.)
An alternative approach can be found in so-called square-root filtering algorithms. It is well known that the numerical solution of the Riccati equation tends to be more robust against roundoff errors if Cholesky factors or modified Cholesky factors (such as the U^T DU algorithms [3]) of the covariance matrix are used as the dependent variables. The resulting KF implementation methods are called square-root filters (SRF). They are now generally preferred for practical use [2], [4], [5]. For more insight into the numerical properties of different KF implementation methods we refer to the celebrated paper of Verhaegen and Van Dooren [6].
Increasingly, the preferred form for algorithms in many fields is now the array form [7]. Several useful SRF algorithms for the KF formulas, formulated in array form, have been proposed by Park and Kailath [8]. For these implementations the reliability of the filter estimates is expected to be better because of the use of numerically stable orthogonal transformations at each recursion step. Apart from numerical advantages, array SRF algorithms appear to be better suited to parallel and to very large scale integration (VLSI) implementations [8], [9].
The development of numerically stable implementation methods for the KF formulas has led to the hope that the Log LG (with respect to unknown system parameters) might be computed more accurately. For this problem, a number of questions arise:
• Is it possible to extend reliable array SRF algorithms to the case of the Log LG evaluation?
• If such methods exist, will they inherit the advantages of the source filtering implementations? In particular, will they improve the robustness of the computations against roundoff errors compared to the conventional KF technique? The question of suitability for parallel implementation is beyond the scope of this paper.
The first attempt to answer these questions belongs to Bierman et al. [10]. The authors used the square-root information filter, developed by Dyer and McReynolds [11] and later extended by Bierman [3], as the source filter implementation and constructed a method for score evaluation. The algorithm was developed in the form of measurement and time updates. However, the accuracy of the proposed method has not been investigated.
In contrast to the main result of [10], we focus on the dual class of KF implementation methods (that is, the class of covariance-type methods) and discuss the efficient Log LG evaluation in square-root covariance filters. More precisely, we consider the array form of the square-root covariance filter, the eSRCF introduced in [8]. The purpose of this paper is to design a method for the Log LG evaluation in terms of the square-root covariance variables, i.e., in terms of the quantities that appear naturally in the eSRCF. This avoids the use of the conventional KF with its inherent numerical instabilities and gives us an opportunity to improve the robustness of the Log LG computation against roundoff errors.
II. EXTENDED SQUARE-ROOT COVARIANCE FILTER
To achieve our goal, we first present the extended square-root covariance filter (eSRCF) proposed in [8], and second, we derive the expression for the Log LG evaluation in terms of the variables that are generated by the eSRCF implementation.
Notations to be used: For the sake of simplicity, we denote the one-step predicted state estimate by x̂_k and the one-step predicted error covariance matrix by P_k. We use a Cholesky decomposition of the form P_k = P_k^{T/2} P_k^{1/2}, where P_k^{1/2} is an upper triangular matrix. Similarly, we define R_k^{1/2}, Q_k^{1/2}, R_{e,k}^{1/2}. For convenience we will write
A^{−1/2} = (A^{1/2})^{−1} and A^{−T/2} = (A^{−1/2})^T, and ∂_{θ_i} A denotes the partial derivative of the matrix A with respect to the i-th component of θ, i.e., ∂A/∂θ_i.
In this paper, we deal with the "condensed form"¹ of the eSRCF [8]: Assume that R_k > 0. Given Π_0^{1/2} and Π_0^{−T/2} x̄_0, recursively update P_k^{1/2} and P_k^{−T/2} x̂_k as follows:

    Q_k [ R_k^{1/2}, 0, −R_k^{−T/2} z_k;
          P_k^{1/2} H_k^T, P_k^{1/2} F_k^T, P_k^{−T/2} x̂_k;
          0, Q_k^{1/2} G_k^T, 0 ]
      = [ R_{e,k}^{1/2}, K̄_{p,k}^T, −ē_k;
          0, P_{k+1}^{1/2}, P_{k+1}^{−T/2} x̂_{k+1};
          0, 0, γ_k ],    (3)

where Q_k is any orthogonal rotation that upper-triangularizes the first two (block) columns of the matrix on the left-hand side of (3), K̄_{p,k} = F_k P_k H_k^T R_{e,k}^{−1/2} and ē_k = R_{e,k}^{−T/2} e_k.
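Numerically, one recursion of (3) amounts to a single orthogonal triangularization of the pre-array. The following is a minimal NumPy sketch, not the original implementation of [8], assuming the upper-triangular Cholesky convention A = A^{T/2} A^{1/2} defined above; the QR routine and the diagonal sign fix-up are standard choices:

```python
import numpy as np

def esrcf_step(F, G, H, Q, R, P_sqrt, xhat_n, z):
    """One 'condensed' eSRCF recursion (3). P_sqrt is the upper-triangular
    factor of P_k; xhat_n stores the normalized estimate P_k^{-T/2} x_k."""
    n, m, q = F.shape[0], H.shape[0], G.shape[1]
    R_sqrt = np.linalg.cholesky(R).T      # upper triangular, R = R^{T/2} R^{1/2}
    Q_sqrt = np.linalg.cholesky(Q).T
    pre = np.zeros((m + n + q, m + n + 1))
    pre[:m, :m] = R_sqrt
    pre[:m, -1] = -np.linalg.solve(R_sqrt.T, z)   # -R^{-T/2} z
    pre[m:m + n, :m] = P_sqrt @ H.T
    pre[m:m + n, m:m + n] = P_sqrt @ F.T
    pre[m:m + n, -1] = xhat_n                     # P_k^{-T/2} x_k
    pre[m + n:, m:m + n] = Q_sqrt @ G.T
    # Orthogonal rotation upper-triangularizing the first two block columns:
    Qrot, _ = np.linalg.qr(pre[:, :m + n], mode='complete')
    post = Qrot.T @ pre
    d = np.sign(np.diag(post)[:m + n])            # keep diagonal factors positive
    post[:m + n, :] *= d[:, None]
    Re_sqrt = post[:m, :m]                        # R_{e,k}^{1/2}
    e_bar = -post[:m, -1]                         # normalized innovation
    P_next_sqrt = post[m:m + n, m:m + n]          # P_{k+1}^{1/2}
    xhat_n_next = post[m:m + n, -1]               # P_{k+1}^{-T/2} x_{k+1}
    return Re_sqrt, P_next_sqrt, e_bar, xhat_n_next
```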
One can easily obtain the expression for the negative Log LF in terms of the eSRCF variables:

    L_θ(Z_1^N) = (1/2) Σ_{k=1}^{N} { m ln(2π) + 2 ln(det R_{e,k}^{1/2}) + ē_k^T ē_k }.    (4)
Let θ = [θ_1, ..., θ_p] denote the vector of parameters with respect to which the likelihood function is to be differentiated. Then from (4), we have

    ∂_{θ_i} L_θ(Z_1^N) = Σ_{k=1}^{N} { ∂_{θ_i}[ ln det R_{e,k}^{1/2} ] + (1/2) ∂_{θ_i}[ ē_k^T ē_k ] }.    (5)

Taking into account that the matrix R_{e,k}^{1/2} is upper triangular, we derive

    ∂_{θ_i}[ ln det R_{e,k}^{1/2} ] = ∂_{θ_i}[ Σ_{j=1}^{m} ln r_{e,k}^{jj} ] = Σ_{j=1}^{m} (1/r_{e,k}^{jj}) ∂_{θ_i} r_{e,k}^{jj} = tr[ R_{e,k}^{−1/2} · ∂_{θ_i} R_{e,k}^{1/2} ],    i = 1, ..., p,    (6)

where the r_{e,k}^{jj}, j = 1, ..., m, denote the diagonal elements of the matrix R_{e,k}^{1/2}. Substitution of (6) into (5) yields the result that we are looking for:

    ∂_{θ_i} L_θ(Z_1^N) = Σ_{k=1}^{N} { tr[ R_{e,k}^{−1/2} · ∂_{θ_i} R_{e,k}^{1/2} ] + ē_k^T · ∂_{θ_i} ē_k }.    (7)
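Once a filter supplies ē_k, R_{e,k}^{1/2} and their derivatives with respect to one component of θ, evaluating (4) and (7) is a simple accumulation. A minimal sketch, assuming the quantities are given as lists of NumPy arrays (the helper name is hypothetical):

```python
import numpy as np

def log_lf_and_score(e_bars, Re_sqrts, d_e_bars, d_Re_sqrts):
    """Accumulate the negative Log LF (4) and its gradient (7).
    e_bars[k] is the normalized innovation, Re_sqrts[k] its upper-
    triangular covariance factor; d_* hold the derivatives with
    respect to a single component of theta."""
    m = e_bars[0].shape[0]
    L, dL = 0.0, 0.0
    for e, Rs, de, dRs in zip(e_bars, Re_sqrts, d_e_bars, d_Re_sqrts):
        L += 0.5 * (m * np.log(2 * np.pi)
                    + 2 * np.log(np.abs(np.diag(Rs))).sum()
                    + e @ e)
        dL += np.trace(np.linalg.solve(Rs, dRs)) + e @ de
    return L, dL
```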
Ultimately, our problem is to compute the Log LG (7) by using the eSRCF equation (3). Before we come to the main result of this paper, there are a few points to be considered. As can be seen from (7), the elements ē_k and R_{e,k}^{1/2} involved in the Log LG evaluation are obtained directly from the underlying filtering algorithm, i.e., from (3); no additional computations are needed. Hence, our aim is to explain how the last two terms in the Log LG expression, ∂_{θ_i} ē_k and ∂_{θ_i} R_{e,k}^{1/2}, can be computed using quantities available from the eSRCF (3).

¹ The "condensed form" of filtering algorithms refers to the case when the implementation method for the KF formulas is not divided into measurement and time updates.
III. SUGGESTED SQUARE-ROOT METHOD FOR SCORE EVALUATION
We can now prove the following result.
Theorem 1: Let the entries of the matrices F_k, G_k, H_k, Q_k, R_k, Π_0 describing the linear discrete-time stochastic system (1), (2) be differentiable functions of a parameter θ ∈ R^p. Then, in order to compute the Log LF and its gradient (with respect to the unknown system parameter θ), the eSRCF, which is used to filter the data, needs to be extended as follows. Assume that R_k > 0. Given the initial values Π_0^{1/2}, Π_0^{−T/2} x̄_0 and ∂_{θ_i} Π_0^{1/2}, ∂_{θ_i}(Π_0^{−T/2} x̄_0), recursively update P_k^{1/2}, P_k^{−T/2} x̂_k and ∂_{θ_i} P_k^{1/2}, ∂_{θ_i}(P_k^{−T/2} x̂_k) as follows:
I. Replace the eSRCF equation (3) by

    Q_k [ R_k^{1/2}, 0, −R_k^{−T/2} z_k, ∂_{θ_i} R_k^{1/2}, 0, ∂_{θ_i}(−R_k^{−T/2} z_k);
          P_k^{1/2} H_k^T, P_k^{1/2} F_k^T, P_k^{−T/2} x̂_k, ∂_{θ_i}(P_k^{1/2} H_k^T), ∂_{θ_i}(P_k^{1/2} F_k^T), ∂_{θ_i}(P_k^{−T/2} x̂_k);
          0, Q_k^{1/2} G_k^T, 0, 0, ∂_{θ_i}(Q_k^{1/2} G_k^T), 0 ]
      = [ R_{e,k}^{1/2}, K̄_{p,k}^T, −ē_k, X_i, Y_i, M_i;
          0, P_{k+1}^{1/2}, P_{k+1}^{−T/2} x̂_{k+1}, N_i, V_i, W_i;
          0, 0, γ_k, B_i, K_i, T_i ],    (8)

where Q_k is any orthogonal rotation that upper-triangularizes the first two (block) columns of the matrix on the left-hand side of (8).
II. Having computed the elements of the right-hand side matrix in (8), calculate for each θ_i:

    [ ∂_{θ_i} R_{e,k}^{1/2}, ∂_{θ_i} K̄_{p,k}^T;  0, ∂_{θ_i} P_{k+1}^{1/2} ]
      = ( L̄_i^T + D_i + Ū_i ) [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ],    (9)
"
−∂θi ēk
−T /2
#
h
i
= L̄T − L̄
i
i
−ēk
−T /2
Pk+1 x̂k+1
∂θi Pk+1 x̂k+1
#−T
" 1/2
T
Re,k K̄p,k
Mi
Bi
(10)
γ
+
+
k
1/2
Wi
Ki
0 Pk+1
where L̄i , Di and Ūi are strictly lower triangular, diagonal and
strictly upper triangular parts of the following matrix product:
#
" −1/2
−1/2 T
−1/2
Xi Yi Re,k −Re,k K̄p,k Pk+1
= L̄i + Di + Ūi .
−1/2
Ni Vi
0
Pk+1
(11)
III. Having determined ē_k, R_{e,k}^{1/2} and ∂_{θ_i} ē_k, ∂_{θ_i} R_{e,k}^{1/2}, compute the Log LF (4) and the Log LG (7).
Proof: As discussed earlier, the main difficulty in the score evaluation (7) is to obtain ∂_{θ_i} R_{e,k}^{1/2} and ∂_{θ_i} ē_k from the underlying filter, i.e., from (3). We divide the proof into two parts, first proving (9) for the ∂_{θ_i} R_{e,k}^{1/2} evaluation and then validating (10) for ∂_{θ_i} ē_k.
Part I. Our goal is to express ∂_{θ_i} R_{e,k}^{1/2} in terms of the variables that appear naturally in the eSRCF implementation. First, we note that the eSRCF transformation in (3) has the form QA = B, where A is a rectangular matrix and Q is an orthogonal transformation that block upper-triangularizes B. If the matrix A is square and
invertible, then given the matrix of derivatives A_θ′ = (da_{ij}/dθ) we can compute B_θ′ as follows [10]:

    B_θ′ = [ L^T + D + U ] B,    (12)

where L, D and U are, respectively, the strictly lower triangular, diagonal and strictly upper triangular parts of the matrix Q A_θ′ B^{−1}.
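The triangular splitting required by (12) is straightforward to realize numerically; a small sketch, where M stands for the already computed product Q A_θ′ B^{−1}:

```python
import numpy as np

def ldu_parts(M: np.ndarray):
    """Split M into strictly lower triangular, diagonal, and strictly
    upper triangular parts, as required by Eq. (12)."""
    L = np.tril(M, k=-1)
    D = np.diag(np.diag(M))
    U = np.triu(M, k=1)
    return L, D, U

# B_prime = (L.T + D + U) @ B then reproduces Eq. (12).
```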
However, this idea cannot be applied to the eSRCF directly, because the matrix to be triangularized, i.e., the first two (block) columns of the matrix on the left-hand side of (3), is not square and, hence, not invertible. By using pseudoinversion (Moore-Penrose inversion) we avoid this obstacle and generalize the scheme of computations (12) to the case of the eSRCF (3).
To begin constructing the method for score evaluation, we augment the matrix to be triangularized by q columns of zeros. Hence, we obtain

    Q_k [ R_k^{1/2}, 0, 0;  P_k^{1/2} H_k^T, P_k^{1/2} F_k^T, 0;  0, Q_k^{1/2} G_k^T, 0 ]
      = [ R_{e,k}^{1/2}, K̄_{p,k}^T, 0;  0, P_{k+1}^{1/2}, 0;  0, 0, 0 ].    (13)

The matrices in (13) have dimensions (m + n + q) × (m + n + q). For the sake of simplicity, we denote the left-hand side and the right-hand side matrices of (13) by A_k and B_k, respectively. Then, by differentiating (13) with respect to the components of θ, we obtain

    ∂_{θ_i} Q_k · A_k + Q_k · ∂_{θ_i} A_k = ∂_{θ_i} B_k.    (14)

Multiplying both sides of (14) by the pseudoinverse matrix B_k^+ yields

    ∂_{θ_i} B_k · B_k^+ = ∂_{θ_i} Q_k A_k B_k^+ + Q_k · ∂_{θ_i} A_k · B_k^+ = ∂_{θ_i} Q_k Q_k^T B_k B_k^+ + (Q_k · ∂_{θ_i} A_k) B_k^+.    (15)
One can easily obtain the explicit expression for B_k^+:

    B_k^+ = [ R_{e,k}^{−1/2}, −R_{e,k}^{−1/2} K̄_{p,k}^T P_{k+1}^{−1/2}, 0;  0, P_{k+1}^{−1/2}, 0;  0, 0, 0 ].    (16)

By using (8), we replace Q_k · ∂_{θ_i} A_k in (15) by the quantities already computed. Then, taking into account (16), we derive the equation for the (m + n) × (m + n) main block of the matrix B_k:

    [ ∂_{θ_i} R_{e,k}^{1/2}, ∂_{θ_i} K̄_{p,k}^T;  0, ∂_{θ_i} P_{k+1}^{1/2} ] [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ]^{−1}
      = [ ∂_{θ_i} Q_k · Q_k^T ]_{m+n} + [ X_i, Y_i;  N_i, V_i ] [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ]^{−1},    (17)

where [ ∂_{θ_i} Q_k · Q_k^T ]_{m+n} denotes the (m + n) × (m + n) main block of the matrix ∂_{θ_i} Q_k · Q_k^T. As discussed in [10], the matrix ∂_{θ_i} Q_k · Q_k^T is skew-symmetric and, hence, can be represented in the form L̄^T − L̄, where L̄ is strictly lower triangular.
Now, let us consider the matrix equation (17). As can be seen, the matrix on its left-hand side is block upper triangular. Thus, the strictly lower triangular part of the matrix [ ∂_{θ_i} Q_k · Q_k^T ]_{m+n} must exactly cancel the strictly lower triangular part of the second term on the right-hand side of (17). In other words, if

    [ X_i, Y_i;  N_i, V_i ] [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ]^{−1} = L̄_i + D_i + Ū_i,

then

    [ ∂_{θ_i} Q_k · Q_k^T ]_{m+n} = L̄_i^T − L̄_i.    (18)

Substitution of (18) into (17) leads to the result

    [ ∂_{θ_i} R_{e,k}^{1/2}, ∂_{θ_i} K̄_{p,k}^T;  0, ∂_{θ_i} P_{k+1}^{1/2} ] = ( L̄_i^T + D_i + Ū_i ) [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ].    (19)

Formulas (19) and (18) are, in fact, equations (9) and (11) of the proposed method for score evaluation. The theorem is half proved.
Part II. We need to verify (10). By differentiating the last (block) column of the eSRCF equation,

    Q_k [ −R_k^{−T/2} z_k;  P_k^{−T/2} x̂_k;  0 ] = [ −ē_k;  P_{k+1}^{−T/2} x̂_{k+1};  γ_k ],

with respect to the components of θ, we obtain

    [ −∂_{θ_i} ē_k;  ∂_{θ_i}(P_{k+1}^{−T/2} x̂_{k+1});  ∂_{θ_i} γ_k ]
      = ∂_{θ_i} Q_k · Q_k^T · Q_k [ −R_k^{−T/2} z_k;  P_k^{−T/2} x̂_k;  0 ] + Q_k [ ∂_{θ_i}(−R_k^{−T/2} z_k);  ∂_{θ_i}(P_k^{−T/2} x̂_k);  0 ].    (20)

Next, we replace the last term in (20) by the quantities already computed and collected in the right-hand side matrix of (8). Furthermore, it is useful to note that the element ∂_{θ_i} γ_k is of no interest here. These two steps give us

    [ −∂_{θ_i} ē_k;  ∂_{θ_i}(P_{k+1}^{−T/2} x̂_{k+1}) ]
      = [ ∂_{θ_i} Q_k · Q_k^T ]_{m+n} [ −ē_k;  P_{k+1}^{−T/2} x̂_{k+1} ] + [ ∂_{θ_i} Q_k · Q_k^T ]_{(1:m+n, last q)} γ_k + [ M_i;  W_i ],    (21)

where [ ∂_{θ_i} Q_k · Q_k^T ]_{(1:m+n, last q)} stands for the (m + n) × q matrix composed of the entries located at the intersections of the last q columns with the first m + n rows of ∂_{θ_i} Q_k · Q_k^T. Taking into account (18), from the equation above we obtain

    [ −∂_{θ_i} ē_k;  ∂_{θ_i}(P_{k+1}^{−T/2} x̂_{k+1}) ]
      = ( L̄_i^T − L̄_i ) [ −ē_k;  P_{k+1}^{−T/2} x̂_{k+1} ] + [ ∂_{θ_i} Q_k · Q_k^T ]_{(1:m+n, last q)} γ_k + [ M_i;  W_i ],    (22)

where L̄_i is the strictly lower triangular part of the matrix in (11).
Since ∂_{θ_i} Q_k · Q_k^T is skew-symmetric, we can write

    [ ∂_{θ_i} Q_k · Q_k^T ]_{(1:m+n, last q)} = −( [ ∂_{θ_i} Q_k · Q_k^T ]_{(last q, 1:m+n)} )^T,    (23)

where [ ∂_{θ_i} Q_k · Q_k^T ]_{(last q, 1:m+n)} stands for the q × (m + n) matrix composed of the entries located at the intersections of the last q rows with the first (m + n) columns of ∂_{θ_i} Q_k · Q_k^T.
To evaluate the right-hand side of (23), we return to (15) and write it in matrix form:

    [ ∂_{θ_i} R_{e,k}^{1/2}, ∂_{θ_i} K̄_{p,k}^T, 0;  0, ∂_{θ_i} P_{k+1}^{1/2}, 0;  0, 0, 0 ] B_k^+
      = ∂_{θ_i} Q_k · Q_k^T · B_k B_k^+ + [ X_i, Y_i, 0;  N_i, V_i, 0;  B_i, K_i, 0 ] B_k^+.    (24)

As can be seen, the last (block) row of the left-hand side matrix in (24) is zero. Thus, the last (block) row of the matrix ∂_{θ_i} Q_k · Q_k^T must exactly cancel the last (block) row of the second term in (24):

    [ ∂_{θ_i} Q_k · Q_k^T ]_{(last q, 1:m+n)} = −[ B_i, K_i ] [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ]^{−1}.    (25)

By substituting (25) into (23), we obtain

    [ ∂_{θ_i} Q_k · Q_k^T ]_{(1:m+n, last q)} = [ R_{e,k}^{1/2}, K̄_{p,k}^T;  0, P_{k+1}^{1/2} ]^{−T} [ B_i;  K_i ].    (26)

Final substitution of (26) into (22) validates (10) of the proposed method for the Log LG evaluation. This completes the proof.
Remark 1: The method for score evaluation introduced above has been derived from the eSRCF implementation. As a consequence, the proposed method is of covariance type.
Remark 2: The new square-root algorithm for score evaluation naturally extends the eSRCF and, hence, consists of two parts: the "filtered" part and the "differentiated" part. This structure allows the Log LF and its gradient to be computed simultaneously. Thus, the method is well suited to simultaneous state estimation and parameter identification.
Remark 3: In the KF formulation of the Log LG evaluation, it is necessary to run the "differentiated" KF for each of the parameters θ_i to be estimated. As in [10], in the eSRCF formulation this "bank" of filters is replaced with augmented arrays to which orthogonal transformations are applied.
IV. NUMERICAL RESULTS

First, we would like to check our theoretical derivations. To do so, we apply the square-root algorithm introduced in Theorem 1 to the following simple test problem.
Example 1: Consider the special case of system (1), (2) given by

    x_k = [ d_k;  s_k ] = [ 1, Δt;  0, e^{−Δt/τ} ] x_{k−1} + w_k,    z_k = [ 1, 0 ] x_k + v_k,

where w_k ∼ N(0, I_2), v_k ∼ N(0, I_1), I_n denotes the n × n identity matrix, and τ is a parameter which needs to be estimated.
In our simulation experiment, we compute the negative Log LF and its gradient by the proposed square-root method and then compare the results to those produced by the conventional KF approach. The outcomes of this experiment are illustrated by Fig. 1 and Fig. 2.
As can be seen from Fig. 2, all algorithms for score evaluation produce exactly the same result and give the same zero point, which moreover coincides with the minimum point of the negative Log LF (see Fig. 1). All this evidence substantiates the theoretical derivations of Section III.
Next, we wish to answer the second question posed in this paper: does the algorithm for score evaluation derived from a numerically stable square-root implementation method improve the robustness of computations against roundoff errors? The previously obtained results (Example 1) indicate that both methods, i.e., the conventional KF technique and the new square-root algorithm, produce exactly the same answer for the Log LF and Log LG evaluation. However, numerically they no longer agree. We are now going to explore the accuracy of the numerical algorithms.
To begin designing the ill-conditioned test problem, we first stress the type of the proposed method. As discussed in Remark 1, the new square-root algorithm belongs to the class of covariance-type methods. From Verhaegen and Van Dooren's celebrated paper [6], we know that the condition number of the innovation covariance matrix, K(R_{e,k}), is the key parameter determining the numerical behavior of the covariance algorithms. Taking into account these two important facts, we construct the following ill-conditioned test problem.
Fig. 1. The negative Log LF computed by the eSRCF and the conventional KF for Example 1 (legend: conventional KF technique; suggested square-root method, the "filtered" part; τ in seconds on the horizontal axis).

Fig. 2. The Log LG computed by the proposed square-root method and the conventional KF for Example 1 (legend: conventional KF technique; suggested square-root method, the "differentiated" part; τ in seconds on the horizontal axis).
Example 2: Consider the problem with the measurement sensitivity matrix

    H_k = [ 1, 1, 1;  1, 1, 1 + δ ]

and F_k = I_3, G_k = 0, Q_k = I_1, R_k = δ² θ I_2, with x_0 ∼ N(0, θ I_3), where θ is an unknown system parameter. To simulate roundoff we assume that δ² < ε_roundoff, but δ > ε_roundoff, where ε_roundoff denotes the unit roundoff error².
When θ = 1, Example 2 coincides with a well-known ill-conditioned
filtering problem (see, for instance, [2]) and demonstrates how a
problem that is well-conditioned, as posed, can be made ill-conditioned
by the filter implementation. The difficulty to be explored is in
matrix inversion. As can be seen, although rank H = 2, the matrix
$R_{e,1}$ is singular in machine precision, which yields the failure of
the conventional KF implementation. We introduced the unknown system
parameter θ, making sure that the same problem affects the matrix
$(R_{e,1})'_\theta$ for each value of θ. Thus, both parts of the method
for score evaluation, i.e. the "filtered" and "differentiated" parts,
fail after processing the first measurement. From the discussion above
we understand that Example 2 demonstrates the difficulty only for
the covariance-type methods.
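The numerical singularity of $R_{e,1}$ is easy to reproduce outside MatLab. The following Python sketch is our illustration (not the paper's code); it assumes the standard KF one-step prediction, which for Example 2 gives $P_1^- = \theta I_3$ because $F = I_3$ and $G = 0$:

```python
import numpy as np

def example2_Re1(delta, theta=2.0):
    """Innovation covariance R_{e,1} = H P1 H^T + R for Example 2,
    with P1 = theta * I3 (since F = I3, G = 0, x0 ~ N(0, theta * I3))."""
    H = np.array([[1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0 + delta]])
    R = delta**2 * theta * np.eye(2)
    return H @ (theta * np.eye(3)) @ H.T + R

delta = np.finfo(float).eps ** (2.0 / 3.0)   # delta**2 < roundoff < delta
Re1 = example2_Re1(delta)
# In exact arithmetic det(Re1) is of order theta^2 * delta^2 > 0, but in
# double precision the delta^2 terms are lost and Re1 is numerically
# singular:
print(np.linalg.cond(Re1))                   # enormous condition number
```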
² Computer roundoff for floating-point arithmetic is often characterized
by a single parameter $\epsilon_{roundoff}$, defined in different
sources as the largest number such that either
$1 + \epsilon_{roundoff} = 1$ or $1 + \epsilon_{roundoff}/2 = 1$ in
machine precision.

TABLE I
EFFECT OF ROUNDOFF ERRORS ON THE COMPUTED SOLUTIONS FOR THE SET OF TEST
PROBLEMS FROM EXAMPLE 2

Problem conditioning:          δ        10^−2     10^−4     10^−6     10^−8     10^−9     10^−10
                               K(Re,1)  10^3      10^7      10^11     10^15     10^16     ∞
Conventional KF technique:     ∆P1      1·10^−13  5·10^−10  2·10^−6   3·10^−3   3·10^−1   NaN
                               ∆P1′     1·10^−10  9·10^−4   2·10^−1   2·10^−1   NaN       NaN
                               ∆LogLF   1·10^−15  1·10^−9   6·10^−6   2·10^−2   4·10^0    NaN
                               ∆LogLG   2·10^−13  4·10^−9   2·10^−5   3·10^−1   4·10^0    NaN
Suggested square-root method:  ∆P1      4·10^−13  4·10^−13  3·10^−11  3·10^−10  2·10^−8   2·10^−7
                               ∆P1′     7·10^−16  7·10^−14  1·10^−11  2·10^−10  7·10^−9   1·10^−8
                               ∆LogLF   1·10^−13  6·10^−10  9·10^−6   2·10^−1   1·10^0    2·10^4
                               ∆LogLG   9·10^−14  7·10^−10  4·10^−6   9·10^−3   5·10^1    2·10^4

Our simulation experiments presented below are organized as
follows. All methods were implemented in the same precision (64-bit
floating point) in MatLab where the unit roundoff error is 2^−53 ≈
1.11 · 10−16 . The MatLab function eps is twice the unit roundoff
error and $\delta = \mathrm{eps}^{2/3}$ satisfies the conditions
$\delta^2 < \epsilon_{roundoff}$ and $\delta > \epsilon_{roundoff}$
from Example 2. We provide the computations for different values of δ,
say $\delta \in [10^{-9}\,\mathrm{eps}^{2/3},\ 10^{9}\,\mathrm{eps}^{2/3}]$. This
means that we consider a set of test problems from Example 2. The
unknown system parameter θ is fixed, say θ = 2. The exact answers
are produced by the Symbolic Math Toolbox of MatLab.
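The setup above is straightforward to replicate; the sketch below is our Python analogue of the described MatLab configuration (NumPy's finfo(float).eps plays the role of MatLab's eps):

```python
import numpy as np

eps = np.finfo(float).eps        # 2**-52; twice the unit roundoff
u = eps / 2.0                    # unit roundoff 2**-53 ~ 1.11e-16

delta0 = eps ** (2.0 / 3.0)      # the reference value delta = eps^(2/3)
assert delta0**2 < u < delta0    # conditions of Example 2 hold

# The set of test problems: delta spread over [1e-9*delta0, 1e9*delta0].
deltas = [10.0**j * delta0 for j in range(-9, 10)]
theta = 2.0                      # the fixed unknown system parameter
```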
Experiment 1: In this experiment we are going to use the performance profile technique to compare the conventional KF approach
for score evaluation with the square-root algorithm introduced in this
paper. The performance profile method was developed by Dolan and
Moré [12] to answer a common question in scientific computing:
how to compare several competing methods on a set of test problems.
Now, it can be found in textbooks (see, for instance, [13]).
In our simulation experiments we consider a set A of n = 2
algorithms, mentioned above. The performance measure, ta (p), is a
measure of accuracy. More precisely, ta (p) is the maximum absolute
error in Log LG computed for 7 different values of δ. Thus, we
consider a set P of m = 7 test problems from Example 2;
δ ∈ [10−2 , 10−3 , 10−4 , 10−5 , 10−6 , 10−7 , 10−8 ]. According to the
performance profile technique, we compute the performance ratio
$$r_{p,a} = \frac{t_a(p)}{\min\{t_\sigma(p) : \sigma \in A\}} \ge 1,$$
which is the performance of algorithm a on problem p divided
by the best performance of all the methods (we mean a particular
implementation method for score evaluation) on this problem. The
performance profile of algorithm a is the function
$$\phi_a(\mu) = \frac{1}{m} \times \big(\text{number of } p \in P \text{ such that } r_{p,a} \le \mu\big),$$
which is monotonically increasing. Thus, φa (µ) is the probability
that the performance of algorithm a is within a factor µ of the best
performance over all implementations on the given set of problems.
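Computing the profiles takes only a few lines. The sketch below is ours, with placeholder error values standing in for the measured ta(p):

```python
import numpy as np

def performance_profiles(t, mus):
    """t[a][p]: accuracy measure (max abs error) of algorithm a on problem p.
    Returns phi[a][j]: fraction of problems with ratio r_{p,a} <= mus[j]."""
    t = np.asarray(t, dtype=float)
    best = t.min(axis=0)              # best performance on each problem
    r = t / best                      # performance ratios, all >= 1
    return np.array([[np.mean(r[a] <= mu) for mu in mus]
                     for a in range(t.shape[0])])

t = [[1e-13, 6e-10, 9e-6, 2e-1, 1e0, 2e4, np.inf],   # hypothetical data
     [9e-14, 7e-10, 4e-6, 9e-3, 5e1, 2e4, 1e5]]
print(performance_profiles(t, np.linspace(1.0, 3.0, 5)))
```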
The results of this experiment are illustrated by Fig. 3. For each
method, µ is plotted against the performance profile φa (µ), for µ ∈
[0, 3]. We are now going to explain Fig. 3.
Let us consider the left-hand side of Fig. 3, where µ = 1. We
can say that the new square-root algorithm proposed in this paper is
the most accurate implementation on ≈ 71% of the problems, with
the conventional KF being the most accurate on 30% of the problems.
Next, we consider the middle of the plot, looking where the curve first hits
probability 1. We conclude that the suggested square-root method is
within a factor µ ≈ 1.3 of being the most accurate implementation on
every test problem. However, the conventional KF approach for score
evaluation will never manage all 7 problems (as δ → ǫroundoff, the
machine precision limit, the test problems become ill-conditioned).
We need to increase µ to ≈ 2.7 to be able to say that for ≈ 58% of
the test problems the conventional KF provides an accurate Log LG
evaluation within a factor µ ≈ 2.7.
Fig. 3. Performance profiles φa(µ) of the methods for score evaluation
(the conventional KF implementation and the new square-root algorithm
proposed in this paper) on the set of test problems from Example 2
(plot omitted; φa(µ) is plotted against the factor µ for each method).
Thus, the performance profiles clearly indicate that on the set of
the test problems from Example 2 the new square-root algorithm
derived in this paper provides more accurate evaluation of the Log
LG compared with the conventional KF approach.
Experiment 2: In this experiment we use the conventional KF
technique and the proposed square-root method to compute the
maximum absolute error in Log LF, denoted as ∆LogLF , and its
gradient, denoted as ∆LogLG. The results of this experiment are
summarized in Table I. We also present the maximum absolute error
among elements in matrices P1 and (P1 )′θ (denoted as ∆P1 and ∆P1′ ,
respectively) to explore the numerical behavior of the "filtered" and
"differentiated" parts of the methods for score evaluation.
As can be seen from Table I, the square-root implementation of
the Riccati-type sensitivity equation degrades more slowly than the
conventional Riccati-type sensitivity recursion as δ → ǫroundoff,
the machine precision limit (see columns denoted as ∆P1′ ). For
instance, the ”filtered” (columns ∆P1 ) and ”differentiated” (columns
∆P1′ ) parts of the proposed square-root method for score evaluation
maintain about 7 and 8 digits of accuracy, respectively, at δ = 10−9 .
The conventional KF technique provides essentially no correct digits
in both computed solutions. Besides, it seems that the roundoff errors
tend to accumulate and degrade the accuracy of the Log LF and Log
LG faster than that of P1 and (P1)′θ (compare the columns ∆LogLF and
∆LogLG with ∆P1 and ∆P1′). Indeed, for the same
δ = 10^−9 we obtain no correct digits in the computed solutions for
all methods. In MatLab, the term 'NaN' stands for 'Not a Number',
which here indicates the failure of the numerical algorithm.
Remark 4: The results of Experiment 2 indicate that the new
square-root algorithm provides more accurate computation of the
sensitivity matrix (Pk )′θ compared to the conventional KF. Hence,
it can be successfully used in all applications where this quantity is
required.
V. C ONCLUDING R EMARKS
In this paper, a numerically stable square-root implementation
method for KF formulas, the eSRCF, has been extended in order
to compute the Log LG for linear discrete-time stochastic systems.
The preliminary analysis indicates that the new algorithm for score
evaluation provides more accurate computations compared with the
conventional KF approach. The new result can be used for efficient
calculations in sensitivity analysis and in gradient-search optimization algorithms for the maximum likelihood estimation of unknown
system parameters.
As an extension of the eSRCF, the new method for score evaluation
is expected to inherit its benefits. However, the question about
suitability for parallel implementation is still open.
It can be mentioned that another approach to constructing a numerically
stable implementation method for score evaluation is to use the UD
filter [3]. Being a modification of the square-root implementations,
the UD-type algorithms improve the robustness of computations
against roundoff errors and, compared with the SRF, the UD filter reduces
the computational cost (see [3], [6], [5]). As mentioned in [10], and
as far as this author knows, it is still not known how to use the UD
filter to compute the score.
R EFERENCES
[1] F.C. Schweppe, ”Evaluation of likelihood functions for Gaussian signals”, IEEE Trans. Inf. Theory, vol. IT-11, pp. 61–70, Jan. 1965.
[2] M.S. Grewal and A.P. Andrews, Kalman Filtering: Theory and Practice
Using MATLAB, 2nd ed., Ed. New York: John Wiley & Sons, 2001.
[3] G.J. Bierman, Factorization Methods for Discrete Sequential Estimation.
Ed. New York: Academic Press, 1977.
[4] P.G. Kaminski, A.E. Bryson and S.F. Schmidt, ”Discrete square-root
filtering: a survey of current techniques”, IEEE Trans. Autom. Control,
vol. AC-16, pp. 727–735, Dec. 1971.
[5] T. Kailath, A.H. Sayed and B. Hassibi, Linear Estimation, Ed. New
Jersey: Prentice Hall, 2000.
[6] M. Verhaegen and P. Van Dooren, ”Numerical aspects of different Kalman
filter implementations”, IEEE Trans. Autom. Control, vol. AC-31,
pp. 907–917, Oct. 1986.
[7] T. Kailath, ”Array algorithms for structured matrices”, presented at the
conference of the International Linear Algebra Society, Winnipeg, USA,
June 3-6, 1998.
[8] P. Park and T. Kailath, ”New square-root algorithms for Kalman filtering”, IEEE Trans. Autom. Control, vol. 40, pp. 895–899, May 1995.
[9] E.K.B. Lee and S. Haykin, ”Parallel implementation of the extended
square-root covariance filter for tracking applications”, IEEE Trans.
Parallel Distrib. Syst., vol. 4, pp. 446–457, April 1993.
[10] G.J. Bierman, M.R. Belzer, J.S. Vandergraft and D.W. Porter, ”Maximum
likelihood estimation using square root information filters”, IEEE Trans.
Autom. Control, vol. 35, pp. 1293–1298, Dec. 1990.
[11] P. Dyer and S. McReynolds, ”Extensions of square root filtering to
include process noise”, J. Opt. Theory Appl., vol. 3, pp. 444–459,
June 1969.
[12] E.D. Dolan and J.J. Moré, ”Benchmarking optimization software with
performance profiles”, Math. Programming, vol. 91, pp. 201–213, 2002.
[13] D.J. Higham and N.J. Higham, MatLab Guide, 2nd ed., Ed. Philadelphia:
SIAM, 2005.
| 3 |
Cellular Automata and Finite Groups
Alonso Castillo-Ramirez and Maximilien Gadouleau
May 3, 2017
arXiv:1610.00532v2 [] 1 May 2017
Abstract
For a finite group G and a finite set A, we study various algebraic aspects of cellular automata
over the configuration space AG . In this situation, the set CA(G; A) of all cellular automata over
AG is a finite monoid whose basic algebraic properties had remained unknown. First, we investigate the structure of the group of units ICA(G; A) of CA(G; A). We obtain a decomposition of
ICA(G; A) into a direct product of wreath products of groups that depends on the numbers α[H]
of periodic configurations for conjugacy classes [H] of subgroups of G. We show how the numbers
α[H] may be computed using the Möbius function of the subgroup lattice of G, and we use this to
improve the lower bound recently found by Gao, Jackson and Seward on the number of aperiodic
configurations of AG . Furthermore, we study generating sets of CA(G; A); in particular, we prove
that CA(G; A) cannot be generated by cellular automata with small memory set, and, when all
subgroups of G are normal, we determine the relative rank of ICA(G; A) on CA(G; A), i.e. the
minimal size of a set V ⊆ CA(G; A) such that CA(G; A) = ⟨ICA(G; A) ∪ V⟩.
Keywords: Cellular automata, Invertible cellular automata, Finite groups, Finite monoids,
Generating sets.
1
Introduction
Cellular automata (CA), introduced by John von Neumann and Stanislaw Ulam as an attempt to
design self-reproducing systems, are models of computation with important applications to computer
science, physics, and theoretical biology. In recent years, the mathematical theory of CA has been
greatly enriched by its connections to group theory and topology (e.g., see [6] and references therein).
The goal of this paper is to embark on the new task of exploring CA from the point of view of finite
group theory and finite semigroup theory.
First of all, we review the broad definition of CA that appears in [6, Sec. 1.4]. Let G be a group
and A a set. Denote by AG the configuration space, i.e. the set of all functions of the form x : G → A.
For each g ∈ G, let Rg : G → G be the right multiplication function, i.e. (h)Rg := hg, for any h ∈ G.
We emphasise that we apply functions on the right, while in [6] functions are applied on the left.
Definition 1. Let G be a group and A a set. A cellular automaton over AG is a transformation
τ : AG → AG such that there is a finite subset S ⊆ G, called a memory set of τ , and a local function
µ : AS → A satisfying
(g)(x)τ = ((Rg ◦ x)|S )µ, ∀x ∈ AG , g ∈ G,
where (Rg ◦ x)|S denotes the restriction to S of the configuration Rg ◦ x : G → A.
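For finite G, Definition 1 is easy to make concrete. The following Python sketch is our illustration (names and encodings are ours, not from [6]): configurations are dictionaries G → A, and since functions act on the right, the pattern seen at g is h ↦ (hg)x for h in the memory set S:

```python
def make_ca(G, mul, S, mu):
    """Return the cellular automaton tau : A^G -> A^G with memory set S
    and local function mu : A^S -> A.  G is a list of group elements,
    mul(g, h) the group operation; configurations are dicts G -> A.
    Following Definition 1, (g)(x)tau = ((R_g o x)|_S) mu."""
    def tau(x):
        return {g: mu(tuple(x[mul(h, g)] for h in S)) for g in G}
    return tau

# Example: G = Z4, A = {0,1}, memory set S = {0,1}, XOR local rule.
n = 4
G = list(range(n))
mul = lambda g, h: (g + h) % n
S = [0, 1]
mu = lambda ys: ys[0] ^ ys[1]
tau = make_ca(G, mul, S, mu)
x = {0: 1, 1: 0, 2: 0, 3: 0}
print(tau(x))   # {0: 1, 1: 0, 2: 0, 3: 1}
```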
Most of the classical literature on CA focuses on the case when G = Zd , for d ≥ 1, and A is a
finite set (e.g. see survey [17]).
A semigroup is a set M equipped with an associative binary operation. If there exists an element
id ∈ M such that id · m = m · id = m, for all m ∈ M , the semigroup M is called a monoid and
id an identity of M . Clearly, the identity of a monoid is always unique. The group of units of M
is the set of all invertible elements of M (i.e. elements a ∈ M such that there is a−1 ∈ M with
a · a−1 = a−1 · a = id).
Let CA(G; A) be the set of all cellular automata over AG ; by [6, Corollary 1.4.11], this set equipped
with the composition of functions is a monoid. Although results on monoids of CA have appeared in
the literature before (see [4, 14, 19]), the algebraic structure of CA(G; A) remains basically unknown.
In particular, the study of CA(G; A), when G and A are both finite, has been generally disregarded
(except for the case when G = Zn , which is the study of one-dimensional CA on periodic points). It
is clear that many of the classical questions on CA are trivially answered when G is finite (e.g. the
Garden of Eden theorems become trivial), but, on the other hand, several new questions, typical of
finite semigroup theory, arise in this setting.
In this paper, we study various algebraic properties of CA(G; A) when G and A are both finite.
First, in Section 2, we introduce notation and review some basic results. In Section 3, we study the
group of units ICA(G; A) of CA(G; A), i.e. the group of all invertible (also known as reversible) CA
over AG . We obtain an explicit decomposition of ICA(G; A) into a direct product of wreath products
of groups that depends on the numbers α[H] of periodic configurations for conjugacy classes [H] of
subgroups H of G.
In Section 4, we show how the numbers α[H] may be computed using the Möbius function of the
subgroup lattice of G, and we give some explicit formulae for special cases. Furthermore, we make a
large improvement on the lower bound recently found by Gao, Jackson and Seward [11] on the number
of aperiodic configurations of AG .
Finally, in Section 5, we study generating sets of CA(G; A). A set T of CA is called a generating
set of CA(G; A) if every CA over AG is expressible as a word in the elements of T . We prove that
CA(G; A) cannot be generated by CA with small memory sets: every generating set T of CA(G; A)
must contain a cellular automaton with minimal memory set equal to G itself. This result provides a
striking contrast with CA over infinite groups because, in such cases, the memory set of any cellular
automaton may never be equal to the whole group (as memory sets are finite by definition). Finally,
when G is finite abelian, we find the smallest size of a set V ⊆ CA(G; A) such that ICA(G; A) ∪ V
generates CA(G; A); this number is known in semigroup theory as the relative rank of ICA(G; A) in
CA(G; A), and it turns out to be related to the number of edges of the subgroup lattice of G.
The present paper is an extended version of [5]. In this version, we added preliminary material
in order to make the paper self-contained, we improved the exposition, we generalised several results
(e.g. Corollary 2, Lemma 11, and Theorem 7), and we added the completely new Section 4.
2
Basic Results
For any set X, a transformation of X is a function of the form τ : X → X. Let Tran(X) and Sym(X)
be the sets of all transformations and bijective transformations of X, respectively. Equipped with
the composition of transformations, Tran(X) is known as the full transformation monoid on X, while
Sym(X) is the symmetric group on X. When X is finite and |X| = q, we write Tranq and Symq
instead of Tran(X) and Sym(X), respectively. A finite transformation monoid is simply a submonoid
of Tranq , for some q. This type of monoid has been extensively studied (e.g. see [10] and references
therein), and its close relation to finite-state machines should be noted.
Recall that the order of a group G is simply the cardinality of G as a set. For the rest of the
paper, let G be a finite group of order n and A a finite set of size q. By Definition 1, it is clear that
CA(G; A) ≤ Tran(AG ) (where we use the symbol “≤” for the submonoid relation). We may always
assume that τ ∈ CA(G; A) has (not necessarily minimal) memory set S = G, so τ is completely
determined by its local function $\mu : A^G \to A$. Hence, $|\mathrm{CA}(G; A)| = q^{q^n}$.
If n = 1, then CA(G; A) = Tran(A), while, if q ≤ 1, then CA(G; A) is the trivial monoid with one
element; henceforth, we assume n ≥ 2 and q ≥ 2. Without loss of generality, we identify A with the
set {0, 1, . . . , q − 1} and we denote the identity element of G by e.
A group action of G on a set X is a function · : X × G → X such that (x · g) · h = x · gh and
x · e = x for all x ∈ X, g, h ∈ G (where we denote the image of a pair (x, g) by x · g). A group G acts
on the configuration space AG as follows: for each g ∈ G and x ∈ AG , the configuration x · g ∈ AG is
defined by
$$(h)x \cdot g = (hg^{-1})x, \quad \forall h \in G. \qquad (1)$$
This indeed defines a group action because:
(i) For all h ∈ G, (h)x · e = (h)x, so x · e = x.
(ii) For all h, g1 , g2 ∈ G,
(h)(x · g1 ) · g2 = (hg2−1 )x · g1 = (hg2−1 g1−1 )x = (h(g1 g2 )−1 )x = (h)x · g1 g2 ,
so (x · g1 ) · g2 = x · g1 g2 .
Note that in equation (1), h has to be multiplied by the inverse of g and not by g itself, as property
(ii) may not hold in the latter case when G is non-abelian.
Definition 2. A transformation τ : AG → AG is G-equivariant if, for all x ∈ AG , g ∈ G,
(x · g)τ = (x)τ · g.
Theorem 1. Let G be a finite group and A a finite set. Then,
CA(G; A) = {τ ∈ Tran(AG ) : τ is G-equivariant}.
Proof. By Curtis-Hedlund Theorem (see [6, Theorem 1.8.1]), a transformation τ ∈ Tran(AG ) is a
cellular automaton if and only if τ is G-equivariant and continuous in the prodiscrete topology of
AG (i.e. the product topology of the discrete topology). However, as both G and A are finite, every
transformation in Tran(AG ) is continuous, and the result follows.
In other words, the previous result means that CA(G; A) is the endomorphism monoid of the G-set
AG . This result allows us to study CA(G; A) from a purely algebraic perspective.
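Theorem 1 can be confirmed by brute force in the smallest cases. The sketch below (ours) takes G = Z2 and A = {0, 1}, where the action (1) reduces to swapping the two coordinates of a configuration, and counts the G-equivariant transformations of A^G:

```python
from itertools import product

configs = list(product([0, 1], repeat=2))   # A^G for G = Z2, A = {0,1}

def shift(x):          # x . 1: for G = Z2 this swaps the coordinates
    return (x[1], x[0])

# Enumerate all 4**4 = 256 transformations of A^G, keep equivariant ones.
equivariant = []
for images in product(configs, repeat=len(configs)):
    t = dict(zip(configs, images))
    if all(t[shift(x)] == shift(t[x]) for x in configs):
        equivariant.append(t)

print(len(equivariant))   # 16 = 2**(2**2) = q**(q**n), as |CA(G;A)| = q^(q^n)
```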
We review a few further basic concepts on group actions (see [9, Ch. 1]). For x ∈ AG , denote by
Gx the stabiliser of x in G:
Gx := {g ∈ G : x · g = x}.
Remark 1. For any subgroup H ≤ G there exists x ∈ AG such that Gx = H; namely, we may define
x : G → A by
$$(g)x := \begin{cases} 1 & \text{if } g \in H, \\ 0 & \text{otherwise,} \end{cases} \quad \forall g \in G.$$
For any x ∈ AG , denote by xG the G-orbit of x on AG :
xG := {x · g : g ∈ G}.
Let O(G; A) be the set of all G-orbits on AG :
O(G; A) := {xG : x ∈ AG }.
It turns out that O(G; A) forms a partition of AG . The following result is known as the Orbit-Stabiliser
Theorem (see [9, Theorem 1.4A.]).
Theorem 2. Let G be a finite group and A a finite set. For any x ∈ AG ,
$$|xG| = \frac{|G|}{|G_x|}.$$
Moreover, if x = y · g for some x, y ∈ AG , g ∈ G, then Gx = g−1 Gy g.
In general, when X is a set and P is a partition of X, we say that a transformation monoid
M ≤ Tran(X) preserves the partition if, for any P ∈ P and τ ∈ M there is Q ∈ P such that
(P )τ ⊆ Q.
Lemma 1. For any x ∈ AG and τ ∈ CA(G; A),
(xG)τ = (x)τ G.
In particular, CA(G; A) preserves the partition O(G; A) of AG .
Proof. The result follows by the G-equivariance of τ ∈ CA(G; A).
A configuration x ∈ AG is called constant if (g)x = k ∈ A, for all g ∈ G. In such a case, we usually
denote x by k ∈ AG .
Lemma 2. Let τ ∈ CA(G; A) and let k ∈ AG be a constant configuration. Then, (k)τ ∈ AG is a
constant configuration.
Proof. Observe that x ∈ AG is constant if and only if x · g = x, for all g ∈ G. By G-equivariance,
(k)τ = (k · g)τ = (k)τ · g,
∀g ∈ G.
Hence, (k)τ is constant.
A subshift of AG is a subset X ⊆ AG that is G-invariant, i.e. for all x ∈ X, g ∈ G, we have
x · g ∈ X, and closed in the prodiscrete topology of AG . As G and A are finite, the subshifts of AG
are simply unions of G-orbits in AG .
The actions of G on two sets X and Y are equivalent if there is a bijection λ : X → Y such that,
for all x ∈ X, g ∈ G, we have (x · g)λ = (x)λ · g.
Two subgroups H1 and H2 of G are conjugate in G if there exists g ∈ G such that g −1 H1 g = H2 .
This defines an equivalence relation on the subgroups of G. Denote by [H] the conjugacy class of
H ≤ G. If y and z are two configurations in the same G-orbit, then by Theorem 2 we have [Gy ] = [Gz ].
We use the cyclic notation for the permutations of Sym(AG ). If B ⊆ AG and a ∈ AG , we define
the idempotent transformation (B → a) ∈ Tran(AG ) by
$$(x)(B \to a) := \begin{cases} a & \text{if } x \in B, \\ x & \text{otherwise,} \end{cases} \quad \forall x \in A^G.$$
When B = {b} is a singleton, we write (b → a) instead of ({b} → a).
3
The Structure of ICA(G; A)
Denote by ICA(G; A) the group of all invertible cellular automata:
ICA(G; A) := {τ ∈ CA(G; A) : ∃τ −1 ∈ CA(G; A) such that τ τ −1 = τ −1 τ = id}.
As the inverse of a bijective G-equivariant map is also G-equivariant, it follows by Theorem 1 that
ICA(G; A) = CA(G; A) ∩ Sym(AG ).
The following is an essential result for our description of the structure of the group of invertible
cellular automata.
Lemma 3. Let G be a finite group of order n ≥ 2 and A a finite set of size q ≥ 2. Let x, y ∈ AG be
such that xG ≠ yG. Then, there exists τ ∈ ICA(G; A) such that (x)τ = y if and only if Gx = Gy .
Proof. Suppose first that there is τ ∈ ICA(G; A) such that (x)τ = y. Let g ∈ Gx . Then, y · g =
(x)τ · g = (x · g)τ = (x)τ = y, so g ∈ Gy . This shows that Gx ≤ Gy . Now, let h ∈ Gy . Then,
x · h = (y)τ −1 · h = (y · h)τ −1 = (y)τ −1 = x, where τ −1 ∈ ICA(G; A) is the inverse of τ , so h ∈ Gx .
Therefore, Gx = Gy .
Suppose now that Gx = Gy . We define a map τ : AG → AG as follows:
$$(z)\tau := \begin{cases} y \cdot g & \text{if } z = x \cdot g, \\ x \cdot g & \text{if } z = y \cdot g, \\ z & \text{otherwise,} \end{cases} \quad \forall z \in A^G.$$
We check that τ is well-defined:
x · g = x · h ⇔ gh−1 ∈ Gx = Gy ⇔ y · g = y · h;
therefore, every element of AG has a unique image under τ . Clearly, τ is G-equivariant and invertible
(in fact, τ = τ −1 ). Hence τ ∈ ICA(G; A), and it satisfies (x)τ = y.
Corollary 1. Under the assumptions of Lemma 3, there exists τ ∈ ICA(G; A) such that (xG)τ = yG
if and only if [Gx ] = [Gy ].
Proof. Suppose that (xG)τ = yG. Then, (x)τ = y · g, for some g ∈ G. By Lemma 3, Gx = Gy·g .
However, note that Gy·g = g−1 Gy g, so [Gx ] = [Gy ]. Conversely, if [Gx ] = [Gy ], then Gx = g−1 Gy g =
Gy·g , for some g ∈ G. By Lemma 3, there exists τ ∈ ICA(G; A) such that (x)τ = y · g, and by Lemma
1, (xG)τ = yG.
A subgroup H ≤ G is normal if [H] = {H} (i.e. g−1 Hg = H for all g ∈ G).
Corollary 2. Suppose that G is a finite group whose subgroups are all normal. For any x, y ∈ AG ,
there exists τ ∈ ICA(G; A) such that (xG)τ = yG if and only if Gx = Gy .
Groups whose subgroups are all normal are called Dedekind groups. Clearly, abelian groups are
always Dedekind. The finite non-abelian Dedekind groups (also known as finite Hamiltonian groups)
were classified by Richard Dedekind in [8] and have the form Q8 ×(Z2 )n ×H, where Q8 is the quaternion
group (i.e. the group of units of the quaternions H), n ≥ 0, and H is a finite abelian group of odd
order. Several of our stronger results on CA(G; A) will hold when G is a finite Dedekind group.
For any integer α ≥ 2 and any group C, the wreath product of C by Symα is the set
C ≀ Symα := {(v; φ) : v ∈ C α , φ ∈ Symα }
equipped with the operation
(v; φ) · (w; ψ) = (vwφ ; φψ), for any v, w ∈ C α , φ, ψ ∈ Symα ,
where φ acts on w by permuting its coordinates:
wφ = (w1 , w2 , . . . , wα )φ := (w(1)φ , w(2)φ , . . . , w(α)φ ).
In fact, as may be seen from the above definitions, C ≀Symα is equal to the external semidirect product
C α ⋊ϕ Symα , where ϕ : Symα → Aut(C α ) is given by (w)(φ)ϕ := wφ , for all w ∈ C α , φ ∈ Symα . For
a more detailed description of the wreath product of groups, see [9, Sec. 2.6].
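To fix the right-action conventions, here is a small Python sketch of the multiplication rule (our illustration; permutations φ are encoded as tuples with (i)φ = φ[i]):

```python
def wreath_mul(x, y, cmul):
    """(v; phi) * (w; psi) = (v * w^phi; phi psi) in C wr Sym_alpha,
    where (w^phi)_i = w_{(i)phi} and (i)(phi psi) = ((i)phi)psi."""
    (v, phi), (w, psi) = x, y
    w_phi = tuple(w[phi[i]] for i in range(len(w)))   # permute coordinates
    return (tuple(cmul(a, b) for a, b in zip(v, w_phi)),
            tuple(psi[phi[i]] for i in range(len(phi))))

cmul = lambda a, b: (a + b) % 2          # C = Z2, alpha = 3
g = ((1, 0, 0), (1, 0, 2))               # phi swaps coordinates 0 and 1
h = ((0, 1, 1), (2, 1, 0))               # psi swaps coordinates 0 and 2
print(wreath_mul(g, h, cmul))            # ((0, 0, 1), (1, 2, 0))
```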
Let O ∈ O(G; A) be a G-orbit. If $G_{(O)}$ is the pointwise stabiliser of O, i.e. $G_{(O)} := \bigcap_{x \in O} G_x$, the
group GO := G/G(O) is isomorphic to a subgroup of Sym(O) (as the homomorphism ρ : G → Sym(O)
given by (x)(g)ρ = x · g, for all x ∈ O, g ∈ G, has kernel G(O) ; see [9, p. 17] for more details). Abusing
the notation, we also write GO for the isomorphic copy of GO inside Sym(O). Define the group
ICA(O) := {τ ∈ Sym(O) : τ is G-equivariant}.   (2)
Note that ICA(O) is isomorphic to the centraliser of GO in Sym(O):
$$\mathrm{ICA}(O) \cong C_{\mathrm{Sym}(O)}(G_O).$$
Let H be a subgroup of G and [H] its conjugacy class. Define
B[H] (G; A) := {x ∈ AG : [Gx ] = [H]}.
Note that B[H] (G; A) is a subshift of AG (i.e. a union of G-orbits) and, by Theorem 2, all the G-orbits
contained in B[H] (G; A) have equal sizes. Define
$$\alpha_{[H]}(G; A) := \big|\{\, O \in O(G; A) : O \subseteq B_{[H]}(G; A) \,\}\big|.$$
If r is the number of different conjugacy classes of subgroups of G, observe that
B := {B[H] (G; A) : H ≤ G}
is a partition of AG into r blocks. When G and A are clear from the context, we write simply B[H]
and α[H] instead of B[H] (G; A) and α[H] (G; A), respectively.
Remark 2. For any G and A, we have B[G] (G; A) = {x ∈ AG : x is constant} and α[G] (G; A) = |A|.
Example 1. Let G ≅ Zn = {0, 1, . . . , n − 1} be a cyclic group of order n ≥ 2 and let A be a finite
set of size q ≥ 2. Any configuration x : Zn → A may be represented by an n-tuple (x1 , x2 , . . . , xn ) such
that xi := (i − 1)x. The action of Zn on AG corresponds to cyclic shifts of the n-tuples; for example,
(x1 , x2 , . . . , xn ) · 1 = (xn , x1 , x2 , . . . , xn−1 ).
As Zn has a unique subgroup Zd for each d | n, we have
$$\alpha_{[\mathbb{Z}_d]}(\mathbb{Z}_n; A) = \Big|\Big\{\, O \in O(\mathbb{Z}_n; A) : |O| = \frac{n}{d} \,\Big\}\Big|.$$
This number may be determined by “counting necklaces”, and we shall discuss how to do this in the
next section (see Lemma 7).
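For small n and q the counting can also be done by direct orbit enumeration. The following Python sketch (ours) computes every $\alpha_{[\mathbb{Z}_d]}(\mathbb{Z}_n; A)$ at once by brute force:

```python
from itertools import product

def orbit_counts_Zn(n, q):
    """Return {d: alpha_[Z_d](Z_n; A)}: the number of Z_n-orbits on
    A^(Z_n) whose stabiliser is the subgroup Z_d of order d (d | n)."""
    counts = {}
    seen = set()
    for x in product(range(q), repeat=n):
        if x in seen:
            continue
        orbit = {x[k:] + x[:k] for k in range(n)}   # all cyclic shifts
        seen |= orbit
        d = n // len(orbit)              # |stabiliser| = n / |orbit|
        counts[d] = counts.get(d, 0) + 1
    return counts

print(orbit_counts_Zn(4, 2))   # {4: 2, 2: 1, 1: 3}
```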
Example 2. Let G = Z2 × Z2 be the Klein four-group and A = {0, 1}. As G is abelian, [H] = {H},
for all H ≤ G. The subgroups of G are
H1 = G, H2 = ⟨(1, 0)⟩, H3 = ⟨(0, 1)⟩, H4 = ⟨(1, 1)⟩, and H5 = ⟨(0, 0)⟩,
where ⟨(a, b)⟩ denotes the subgroup generated by (a, b) ∈ G. Any configuration x : G → A may be
written as a 2 × 2 matrix (xi,j ) where xi,j := (i − 1, j − 1)x, i, j ∈ {1, 2}. The G-orbits on AG are
$$O_1 := \left\{ \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \right\}, \quad O_2 := \left\{ \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \right\}, \quad O_3 := \left\{ \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} \right\},$$
$$O_4 := \left\{ \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \right\}, \quad O_5 := \left\{ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\},$$
$$O_6 := \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\},$$
$$O_7 := \left\{ \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \right\}.$$
Hence,
B[H1 ] := O1 ∪ O2 , B[H2 ] := O3 , B[H3 ] := O4 , B[H4 ] := O5 , B[H5 ] := O6 ∪ O7 ;
α[Hi ] (G; A) = 2, for i ∈ {1, 5}, and α[Hi ] (G; A) = 1, for i ∈ {2, 3, 4}.
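These orbit and stabiliser computations are easily automated. The sketch below (ours) recomputes Example 2, printing, for each stabiliser it finds, the stabiliser order |H| and the number of orbits with that stabiliser:

```python
from itertools import product

G = [(a, b) for a in (0, 1) for b in (0, 1)]   # Z2 x Z2
add = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def act(x, g):      # (h)(x . g) = (h g^{-1}) x; here g^{-1} = g
    return tuple(x[G.index(add(h, g))] for h in G)

def stabiliser(x):
    return tuple(g for g in G if act(x, g) == x)

counts = {}
seen = set()
for vals in product((0, 1), repeat=4):          # configurations A^G
    if vals in seen:
        continue
    orbit = {act(vals, g) for g in G}
    seen |= orbit
    s = stabiliser(vals)
    counts[s] = counts.get(s, 0) + 1

for s, a in counts.items():
    print(len(s), a)   # |H|=4: 2 orbits; |H|=2: 1 orbit each; |H|=1: 2 orbits
```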
For any H ≤ G, let NG (H) := {g ∈ G : H = g −1 Hg} ≤ G be the normaliser of H in G. Note that
H is always normal in NG (H). The following result determines the structure of the group ICA(G; A),
and it is a refinement of [4, Lemma 4] (c.f. [19, Theorem 9] and [7, Theorem 7.2]).
Theorem 3 (Structure of ICA(G; A)). Let G be a finite group and A a finite set of size q ≥ 2. Let
[H1 ], . . . , [Hr ] be the list of all different conjugacy classes of subgroups of G. Then,
$$\mathrm{ICA}(G; A) \cong \prod_{i=1}^{r} \big( N_G(H_i)/H_i \big) \wr \mathrm{Sym}_{\alpha_i},$$
where αi := α[Hi ] (G; A).
Proof. Let Bi := B[Hi ] , and note that all these subshifts are nonempty because of Remark 1. By
Corollary 1, any τ ∈ ICA(G; A) maps configurations inside Bi to configurations inside Bi ; hence,
ICA(G; A) is contained in the group
$$\prod_{i=1}^{r} \mathrm{Sym}(B_i) = \mathrm{Sym}(B_1) \times \mathrm{Sym}(B_2) \times \cdots \times \mathrm{Sym}(B_r).$$
For each 1 ≤ i ≤ r, fix a G-orbit $O_i \subseteq B_i$, and let $\mathcal{O}_i$ be the set of G-orbits contained in $B_i$ (so
$O_i \in \mathcal{O}_i$). Note that $\mathcal{O}_i$ is a uniform partition of $B_i$ (i.e. all the blocks in the partition have the same
size). For any τ ∈ ICA(G; A), Lemma 1 implies that the projection of τ to Sym(Bi ) is contained in
the group that preserves this uniform partition, i.e. the projection of τ is contained in
$$S(B_i, \mathcal{O}_i) := \{\varphi \in \mathrm{Sym}(B_i) : \forall P \in \mathcal{O}_i,\ (P)\varphi \in \mathcal{O}_i\}.$$
By [2, Lemma 2.1(iv)],
$$S(B_i, \mathcal{O}_i) \cong \mathrm{Sym}(O_i) \wr \mathrm{Sym}_{\alpha_i}.$$
It is well-known that Symαi is generated by its transpositions. As the invertible cellular automaton
constructed in the proof of Lemma 3 induces a transposition $(xG, yG) \in \mathrm{Sym}_{\alpha_i}$, with $xG, yG \in \mathcal{O}_i$,
we deduce that Symαi ≤ ICA(G; A). Now it is clear that the projection of ICA(G; A) to Sym(Bi ) is
exactly ICA(Oi ) ≀ Symαi . By [9, Theorem 4.2A (i)], it follows that
$$C_{\mathrm{Sym}(O_i)}(G_{O_i}) \cong N_G(H_i)/H_i.$$
The result follows because $\mathrm{ICA}(O_i) \cong C_{\mathrm{Sym}(O_i)}(G_{O_i})$.
Corollary 3. Let G be a finite Dedekind group and A a finite set of size q ≥ 2. Let H1 , . . . , Hr be the
list of different subgroups of G. Then,
$$\mathrm{ICA}(G; A) \cong \prod_{i=1}^{r} (G/H_i) \wr \mathrm{Sym}_{\alpha_i},$$
where αi := α[Hi ] (G; A).
Proof. As every subgroup of G is normal, the results follows because [Hi ] = {Hi } and NG (Hi ) = G,
for all 1 ≤ i ≤ r.
Example 3. For any n ≥ 2,
$$\mathrm{ICA}(\mathbb{Z}_n; A) \cong \prod_{i=1}^{r} \mathbb{Z}_{d_i} \wr \mathrm{Sym}_{\alpha_i},$$
where $d_1, d_2, \ldots, d_r$ are the divisors of n, and $\alpha_i := \alpha_{[\mathbb{Z}_{n/d_i}]}(\mathbb{Z}_n; A)$.
Example 4. Let G = Z2 × Z2 and A = {0, 1}. By Example 2,
$$\mathrm{ICA}(G; A) \cong (\mathbb{Z}_2)^4 \times (G \wr \mathrm{Sym}_2).$$
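As a numerical sanity check of Theorem 3 and Example 4 (our own arithmetic, using $|C \wr \mathrm{Sym}_\alpha| = |C|^\alpha \cdot \alpha!$):

```python
from math import factorial

# (|G/H_i|, alpha_i) for G = Z2 x Z2 and A = {0,1}, from Example 2:
data = [(1, 2),                  # H1 = G
        (2, 1), (2, 1), (2, 1),  # H2, H3, H4 (order 2)
        (4, 2)]                  # H5 trivial
order = 1
for c, a in data:
    order *= c**a * factorial(a)     # |C wr Sym_a| = |C|^a * a!
print(order)   # 512 = |(Z2)^4| * |G wr Sym2| = 16 * 32
```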
4
Aperiodic Configurations
In this section, we shall determine the integers α[H] (G; A), for each H ≤ G, as they play an important
role in the decomposition of ICA(G; A) given by Theorem 3.
The following elementary result links the integers α[H] (G; A) with the sizes of the subshifts B[H] (G; A).
Lemma 4. Let G be a finite group of order n ≥ 2 and A a finite set of size q ≥ 2. Let H be a subgroup
of G of order m. Then,
α[H] · n = m · |B[H] |.
Proof. The set of G-orbits contained in B[H] forms a partition of B[H] into α[H] blocks. The result
follows as each one of these G-orbits has size $\frac{n}{m}$ by Theorem 2.
Denote by [G : H] the index of H in G (i.e. the number of cosets of H in G). It is well-known that
when G is finite, $[G : H] = \frac{|G|}{|H|}$. The following result characterises the situations when $B_{[H]}$ contains a
unique G-orbit and will be useful in Section 5.2.
Lemma 5. Let G be a finite group and A a finite set of size q ≥ 2. Then, α[H] (G; A) = 1 if and only
if [G : H] = 2 and q = 2.
Proof. Suppose first that [G : H] = q = 2. The subgroup H is normal because it has index 2, so
Hg = gH for every g ∈ G. Fix s ∈ G \ H, and define x ∈ AG by
$$(g)x = \begin{cases} 0 & \text{if } g \in H, \\ 1 & \text{if } g \in sH = Hs. \end{cases}$$
Clearly, Gx = H and x ∈ B[H] . Let y ∈ B[H] . As H is normal, [H] = {H}, so Gy = H. For any
h ∈ H, we have $(h)y = (e)(y \cdot h^{-1}) = (e)y$ and $(sh)y = (s)(y \cdot h^{-1}) = (s)y$,
so y is constant on the cosets of H. Therefore, either y = x, or
$$(g)y = \begin{cases} 1 & \text{if } g \in H, \\ 0 & \text{if } g \in sH = Hs. \end{cases}$$
In the latter case, y · s = x and y ∈ xG. This shows that there is a unique G-orbit contained in B[H] ,
so α[H] (G; A) = 1.
Conversely, suppose that [G : H] ≠ 2 or q ≥ 3. If [G : H] = 1, then G = H and α[H] (G; A) = q ≥ 2.
Now we prove the two cases separately.
Case 1: [G : H] ≥ 3. Define configurations x1 , x2 ∈ AG by
$$(g)x_1 = \begin{cases} 1 & \text{if } g \in H, \\ 0 & \text{otherwise.} \end{cases} \qquad (g)x_2 = \begin{cases} 0 & \text{if } g \in H, \\ 1 & \text{otherwise.} \end{cases}$$
It is clear that x1 , x2 ∈ B[H] because Gx1 = Gx2 = H. Furthermore, x1 and x2 are not in the
same G-orbit because the number of preimages of 1 under x1 and x2 is different (as [G : H] ≥ 3).
Hence, α[H] (G; A) ≥ 2.
Case 2: q ≥ 3. Define a configuration x3 ∈ B[H] by
$$(g)x_3 = \begin{cases} 2 & \text{if } g \in H, \\ 0 & \text{otherwise,} \end{cases}$$
and consider the configuration x1 ∈ B[H] defined in Case 1. Clearly, x1 and x3 are not in the
same G-orbit because 2 ∈ A is not in the image of x1 . Hence, α[H] (G; A) ≥ 2.
For any H ≤ G, consider the set of H-periodic configurations:
Fix(H) := {x ∈ AG : x · h = x, ∀h ∈ H} = {x ∈ AG : H ≤ Gx }.
By [6, Corollary 1.3.4.], we have
$$|\mathrm{Fix}(H)| = q^{[G:H]}.$$
By the Cauchy-Frobenius lemma ([9, Theorem 1.7A]), the total number of G-orbits on AG is
$$|O(G; A)| = \frac{1}{|G|} \sum_{g \in G} q^{n/|g|},$$
where $|g| := |\langle g \rangle|$ is the order of the element g. However, we need a somewhat more sophisticated
machinery in order to count the number of orbits inside B[H] (G; A).
The Möbius function of a finite poset (P, ≤) is a map µ : P × P → Z defined inductively by the
following equations:
$$\mu(a, a) = 1, \quad \forall a \in P,$$
$$\mu(a, b) = 0, \quad \forall a > b,$$
$$\sum_{a \le c \le b} \mu(a, c) = 0, \quad \forall a < b.$$
If L(G) is the set of subgroups of G, the poset (L(G), ⊆) is called the subgroup lattice of G. Let
µ : L(G) × L(G) → Z be the Möbius function of the subgroup lattice of G. In particular, µ(H, H) = 1
for any H ≤ G, and µ(H, K) = −1 if H is a maximal subgroup of K ≤ G.
For any H ≤ G of order m, let pH be the smallest order of a subgroup between H and G (note
that pH ≥ 2m):
pH := min{|K| : H < K ≤ G},
and define
SH := {K : H < K ≤ G, |K| = pH }.
By convention, pG = n and SG = ∅.
In the next result we use the following asymptotic notation: write f(n) = o(g(n)), with g(n) not
identically zero, if $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0$.
Theorem 4. Let G be a finite group of order n ≥ 2 and A a finite set of size q ≥ 2. Let H be a
subgroup of G of order m.
(i) $|B_{[H]}| = |[H]| \sum_{K \le G} \mu(H, K) \cdot q^{n/|K|}$.
(ii) Using asymptotic notation by fixing G and considering q = |A| as a variable, we have
$$\frac{|B_{[H]}|}{|[H]|} = q^{n/m} - (|S_H| + o(1))\, q^{n/p_H}.$$
Proof. In order to prove part (i), observe that
$$|\mathrm{Fix}(H)| = \sum_{H \le K \le G} \frac{1}{|[K]|} |B_{[K]}|.$$
By Möbius inversion (see [18, 4.1.2]), we obtain that
$$|B_{[H]}| = |[H]| \sum_{K \le G} \mu(H, K) \cdot |\mathrm{Fix}(K)|.$$
The result follows as $|\mathrm{Fix}(K)| = q^{n/|K|}$.
Now we prove part (ii). The result is clear for H = G. Otherwise, we have
$$\frac{|B_{[H]}|}{|[H]|} = \sum_{H \le K \le G} \mu(H, K)\, q^{n/|K|} = q^{n/m} - |S_H|\, q^{n/p_H} + \sum_{K \notin S_H} \mu(H, K)\, q^{n/|K|} = q^{n/m} - \Big( |S_H| - \sum_{K \notin S_H} \mu(H, K)\, q^{n/|K| - n/p_H} \Big) q^{n/p_H}.$$
The result follows as $q^{n/|K| - n/p_H} = o(1)$.
As the Möbius function of the subgroup lattice of G is not easy to calculate in general, we shall
give a few more explicit results by counting the number of so-called aperiodic configurations:
ac(G; A) := |{x ∈ AG : Gx = {e}}| = |B[{e}] (G; A)|.
Part of our motivation to study this number is that, when H is a normal subgroup of G, the size
of the subshift B[H] is equal to the number of aperiodic configurations with respect to G/H.
Lemma 6. Let G be a finite group, A a finite set, and H a normal subgroup of G. Then,
|B[H] (G; A)| = ac(G/H; A).
Proof. As H is normal, then G/H is a group. By [6, Proposition 1.3.7.], there is a G/H-equivariant
bijection between the configuration space AG/H and Fix(H). Hence, configurations in AG/H with
trivial stabiliser correspond to configurations in Fix(H) with stabiliser H.
The following result gives some formulae for the number of aperiodic configurations of various
finite groups.
Lemma 7. Let G be a finite group of order n ≥ 2 and A a finite set of size q ≥ 2.
(i) $\mathrm{ac}(G; A) = \sum_{K \le G} \mu(\{e\}, K) \cdot q^{n/|K|}$.
(ii) If G ≅ Zn is a cyclic group, then
$$\mathrm{ac}(\mathbb{Z}_n, A) = \sum_{d \mid n} \mu(1, d) \cdot q^{n/d},$$
where µ is the classic Möbius function of the poset (N, |).
(iii) If G is a p-group and $\mathcal{H} := \{H \le G : H \text{ is elementary abelian}\}$, then
$$\mathrm{ac}(G; A) = \sum_{H \in \mathcal{H}} (-1)^{\log_p |H|}\, p^{\binom{\log_p |H|}{2}}\, q^{n/|H|}.$$
(iv) If $G \cong \mathbb{Z}_p^m$ is an elementary abelian group, then
$$\mathrm{ac}(\mathbb{Z}_p^m; A) = q^{p^m} + \sum_{r=1}^{m} (-1)^r q^{p^{m-r}} p^{r(r-1)/2} \binom{m}{r}_p,$$
where $\binom{m}{r}_p$ is the Gaussian binomial coefficient:
$$\binom{m}{r}_p := \frac{(1 - p^m)(1 - p^{m-1}) \cdots (1 - p^{m-r+1})}{(1 - p^r)(1 - p^{r-1}) \cdots (1 - p)}.$$
Proof. Part (i) follows by Theorem 4 (i) with H = {e}. Part (ii) follows because the subgroup lattice of the cyclic
group Zn is isomorphic to the lattice of divisors of n.
We prove part (iii). If G is a p-group, by [15, Corollary 3.5.], for any H ≤ G we have
$$\mu(\{e\}, H) = \begin{cases} (-1)^{\log_p |H|}\, p^{\binom{\log_p |H|}{2}} & \text{if } H \text{ is elementary abelian}, \\ 0 & \text{otherwise}. \end{cases}$$
So the result follows by part (i).
Finally, we prove part (iv). Denote by $\mathcal{H}_r$ the set of elementary abelian subgroups of G of order $p^r$.
Then,
$$\mathrm{ac}(G; A) = \sum_{r=0}^{m} (-1)^r |\mathcal{H}_r|\, p^{\binom{r}{2}}\, q^{p^m/p^r} = \sum_{r=0}^{m} (-1)^r q^{p^{m-r}} p^{r(r-1)/2} |\mathcal{H}_r|.$$
The result follows because the Gaussian binomial coefficient $\binom{m}{r}_p$ gives precisely the number of subgroups of order $p^r$ of $\mathbb{Z}_p^m$ (see [3, Section 9.2]).
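Lemma 7 (ii) is easy to test numerically; the following self-contained Python sketch (ours) compares the Möbius formula with brute-force enumeration for a few small cyclic cases:

```python
from itertools import product

def mobius(d):
    """Classic Moebius function, via trial factorisation."""
    m, p = 1, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0
            m = -m
        p += 1
    return -m if d > 1 else m

def ac_moebius(n, q):
    # Lemma 7 (ii): ac(Z_n; A) = sum over d | n of mu(1, d) * q^(n/d)
    return sum(mobius(d) * q**(n // d) for d in range(1, n + 1) if n % d == 0)

def ac_bruteforce(n, q):
    # Count configurations fixed by no nontrivial rotation.
    return sum(1 for x in product(range(q), repeat=n)
               if all(x != x[k:] + x[:k] for k in range(1, n)))

for n, q in [(4, 2), (6, 2), (6, 3)]:
    assert ac_moebius(n, q) == ac_bruteforce(n, q)
print("ok")
```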
As constant configurations are never aperiodic, the following obvious upper bound of ac(G; A) is
obtained:
$$\mathrm{ac}(G; A) \le q^n - q.$$
This upper bound is achieved if and only if n is a prime number (so G is a cyclic group). The following
lower bound on ac(G; A) was obtained by Gao, Jackson and Seward [11, Corollary 1.7.2.]:
$$q^n - q^{n-1} \le \mathrm{ac}(G; A),$$
which is achieved for small values of n and q, as for example, when n = q = 2, or when G = Z2 × Z2
and q = 2 (see Example 2).
For any d | n, define $G^{(d)} := \{g \in G : |\langle g \rangle| = d\}$. In the next result we improve the known
estimates of ac(G; A).
Theorem 5 (Lower bound on aperiodic configurations). Let G be a finite group of order n ≥ 2 and
A a finite set of size q ≥ 2. Let p be the smallest prime dividing n. Then:
(i) $\mathrm{ac}(G; A) = q^n - \left( \frac{|G^{(p)}|}{p - 1} + o(1) \right) q^{n/p}$.
(ii) We have the following lower bound:
$$q^n - (n - 1)\, q^{n/p} \le \mathrm{ac}(G; A).$$
Proof. By Theorem 4 (ii) with H = {e}, we have
$$\mathrm{ac}(G; A) = q^n - \big(|S_{\{e\}}| + o(1)\big)\, q^{n/p_{\{e\}}}.$$
In this case, $p_{\{e\}}$ is the smallest order of a non-identity element in G, which, by Sylow's theorem, is
equal to p, the smallest prime dividing n. Furthermore, $|S_{\{e\}}|$, the number of subgroups of G of order
p, is $|G^{(p)}|/(p - 1)$, so part (i) follows.
In order to prove part (ii), let t1 , . . . , tr be the generators of the minimal subgroups of G (all of
which are cyclic groups of prime order). Then,
$$\mathrm{ac}(G; A) = q^n - \Big| \bigcup_{i=1}^{r} \mathrm{Fix}(\langle t_i \rangle) \Big| \ge q^n - \sum_{i=1}^{r} q^{n/|t_i|} \ge q^n - (n - 1)\, q^{n/p}.$$
The lower bound of Theorem 5 (ii) is much tighter than the one given by [11, Corollary 1.7.2.].
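A concrete instance of the comparison (our arithmetic): take G = Z6, so p = 2, and q = 3, where Lemma 7 (ii) gives ac(Z6; A) = 3^6 − 3^3 − 3^2 + 3 = 696:

```python
def bound_gjs(n, q):      # [11, Corollary 1.7.2]: q^n - q^(n-1) <= ac(G; A)
    return q**n - q**(n - 1)

def bound_thm5(n, q, p):  # Theorem 5 (ii): q^n - (n-1) * q^(n/p) <= ac(G; A)
    return q**n - (n - 1) * q**(n // p)

n, p, q = 6, 2, 3
print(bound_gjs(n, q), bound_thm5(n, q, p))   # 486 594, both <= 696
```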
5
Generating Sets of CA(G; A)
For a monoid M and a subset T ⊆ M , denote by ⟨T⟩ the submonoid generated by T , i.e. the smallest
submonoid of M containing T . Say that T is a generating set of M if M = ⟨T⟩; in this case, every
element of M is expressible as a word in the elements of T (we use the convention that the empty
word is the identity).
5.1
Memory Sets of Generators of CA(G; A)
A large part of the classical research on CA is focused on CA with small memory sets. In some cases,
such as the elementary Rule 110, or John Conway’s Game of Life, these CA are known to be Turing
complete. In striking contrast, when G and A are both finite, CA with small memory sets are
insufficient to generate the monoid CA(G; A).
Theorem 6 (Minimal memory set of generators of CA(G; A)). Let G be a finite group of order n ≥ 2
and A a finite set of size q ≥ 2. Let T be a generating set of CA(G; A). Then, there exists τ ∈ T with
minimal memory set S = G.
Proof. Suppose that T is a generating set of CA(G; A) such that each of its elements has minimal
memory set of size at most n−1. Consider the idempotent σ := (0 → 1) ∈ CA(G; A), where 0, 1 ∈ AG
are different constant configurations. Then, σ = τ1 τ2 . . . τℓ , for some τi ∈ T . By the definition of σ
and Lemma 2, there must be $1 \le j \le \ell$ such that $|(A^G_c)\tau_j| = q - 1$ and $(A^G_{nc})\tau_j = A^G_{nc}$, where
$$A^G_c := \{k \in A^G : k \text{ is constant}\} \quad \text{and} \quad A^G_{nc} := \{x \in A^G : x \text{ is non-constant}\}.$$
Let S ⊆ G and µ : AS → A be the minimal memory set and local function of τ := τj , respectively.
By hypothesis, s := |S| < n. Since the restriction of τ to $A^G_c$ is not a bijection, there exists $k \in A^G_c$
(defined by $(g)k := k \in A$, $\forall g \in G$) such that $k \notin (A^G_c)\tau$.
For any x ∈ AG , define the k-weight of x by
$$|x|_k := |\{g \in G : (g)x \ne k\}|.$$
Consider the sum of the k-weights of all non-constant configurations of AG :
$$w := \sum_{x \in A^G_{nc}} |x|_k = \sum_{x \in A^G} |x|_k - \sum_{x \in A^G_c} |x|_k = n(q - 1)q^{n-1} - n(q - 1) = n(q - 1)(q^{n-1} - 1).$$
In particular, $\frac{w}{n}$ is an integer not divisible by q.
For any x ∈ AG and y ∈ AS , define
Sub(y, x) := |{g ∈ G : y = x|Sg }|;
this counts the number of times that y appears as a subconfiguration of x. Then, for any fixed y ∈ AS ,
$$N_y := \sum_{x \in A^G_{nc}} \mathrm{Sub}(y, x) = \begin{cases} n\, q^{n-s} & \text{if } y \in A^S_{nc}, \\ n(q^{n-s} - 1) & \text{if } y \in A^S_c. \end{cases}$$
To see why the previous equality holds, fix g ∈ G and count the number of configurations $x \in A^G_{nc}$ such
that $y = x|_{Sg}$: there are $q^{n-s}$ such configurations if y is non-constant, and $q^{n-s} - 1$ if y is constant.
The equality follows by counting this for each one of the n elements of G.
Let $\delta : A^2 \to \{0, 1\}$ be the Kronecker delta function. Since $(A^G_{nc})\tau = A^G_{nc}$, we have
$$w = \sum_{x \in A^G_{nc}} |(x)\tau|_k = \sum_{y \in A^S} N_y (1 - \delta_{(y)\mu, k}) = n\, q^{n-s} \sum_{y \in A^S_{nc}} (1 - \delta_{(y)\mu, k}) + n(q^{n-s} - 1) \sum_{y \in A^S_c} (1 - \delta_{(y)\mu, k}).$$
Because $k \notin (A^G_c)\tau$, we know that $(y)\mu \ne k$ for all $y \in A^S_c$. Therefore,
$$\frac{w}{n} = q^{n-s} \sum_{y \in A^S_{nc}} (1 - \delta_{(y)\mu, k}) + (q^{n-s} - 1)\, q.$$
As s < n, this implies that $\frac{w}{n}$ is an integer divisible by q, which is a contradiction.
5.2
Relative Rank of ICA(G; A) in CA(G; A)
One of the fundamental problems in the study of a finite monoid M is the determination of the
cardinality of a smallest generating subset of M ; this is called the rank of M and denoted by Rank(M ):
Rank(M ) := min{|T | : T ⊆ M and hT i = M }.
It is well-known that, if X is any finite set, the rank of the full transformation monoid Tran(X) is 3,
while the rank of the symmetric group Sym(X) is 2 (see [10, Ch. 3]). Ranks of various finite monoids
have been determined in the literature before (e.g. see [1, 2, 12, 13, 16]).
In [4], the rank of CA(Zn ; A), where Zn is the cyclic group of order n, was studied and determined
when n ∈ {p, 2k , 2k p : k ≥ 1, p odd prime}. Moreover, the following problem was proposed:
Problem 1. For any finite group G and finite set A, determine Rank(CA(G; A)).
For any finite monoid M and U ⊆ M , the relative rank of U in M , denoted by Rank(M : U ), is
the minimum cardinality of a subset V ⊆ M such that ⟨U ∪ V⟩ = M . For example, for any finite set
X,
Rank(Tran(X) : Sym(X)) = 1,
as any τ ∈ Tran(X) with |(X)τ | = |X| − 1 satisfies ⟨Sym(X) ∪ {τ}⟩ = Tran(X). One of the main
tools used to determine Rank(CA(G; A)) is based on the following result (see [2, Lemma 3.1]).
Lemma 8. Let M be a finite monoid and let U be its group of units. Then,
Rank(M ) = Rank(M : U ) + Rank(U ).
We shall determine the relative rank of ICA(G; A) in CA(G; A) for any finite abelian group G and
finite set A. In order to achieve this, we prove two lemmas that hold even when G is nonabelian and
have relevance in their own right.
Lemma 9. Let G be a finite group and A a finite set of size q ≥ 2. Let τ ∈ CA(G; A) and x ∈ AG .
If (x)τ ∈ xG, then τ |xG ∈ Sym(xG).
Proof. It is enough to show that (xG)τ = xG as this implies that τ |xG : xG → xG is surjective, so it
is bijective by the finiteness of xG. Since (x)τ ∈ xG, we know that (x)τ G = xG. Hence, by Lemma
1, (xG)τ = (x)τ G = xG.
Remark 3. Recall that a Garden of Eden (GoE) of τ ∈ CA(G; A) is a configuration x ∈ AG such
that $x \notin (A^G)\tau$. As AG is finite in our setting, note that τ is non-invertible if and only if it has a
GoE. Moreover, by G-equivariance, x is a GoE of τ if and only if xG is a GoE of τ , so we shall talk
about GoE G-orbits, rather than GoE configurations.
Denote by CG the set of conjugacy classes of subgroups of G. For any [H1 ], [H2 ] ∈ CG , write
[H1 ] ≤ [H2 ] if H1 ≤ g−1 H2 g, for some g ∈ G.
Remark 4. The relation ≤ defined above is a well-defined partial order on CG . Clearly, ≤ is reflexive
and transitive. In order to show antisymmetry, suppose that [H1 ] ≤ [H2 ] and [H2 ] ≤ [H1 ]. Then,
H1 ≤ g−1 H2 g and H2 ≤ f −1 H1 f , for some f, g ∈ G, which implies that |H1 | ≤ |H2 | and |H2 | ≤ |H1 |.
As H1 and H2 are finite, |H1 | = |H2 |, and H1 = g−1 H2 g. This shows that [H1 ] = [H2 ].
Lemma 10. Let G be a finite group and A a finite set of size q ≥ 2. Let x, y ∈ AG be such that
xG ≠ yG. There exists a non-invertible τ ∈ CA(G; A) such that (x)τ = y if and only if Gx ≤ Gy .
Proof. In general, for any τ ∈ CA(G; A) such that (x)τ = y, we have Gx ≤ Gy , because we may argue
as in the first line of the proof of Lemma 3.
Conversely, suppose that Gx ≤ Gy . We define an idempotent transformation τx,y : AG → AG as
follows:
$$(z)\tau_{x,y} := \begin{cases} y \cdot g & \text{if } z = x \cdot g, \\ z & \text{otherwise,} \end{cases} \quad \forall z \in A^G.$$
Note that τx,y is well-defined because x · g = x · h implies that gh−1 ∈ Gx ≤ Gy , so y · g = y · h.
Clearly, τx,y is non-invertible and G-equivariant, so τx,y ∈ CA(G; A) \ ICA(G; A).
Corollary 4. Let G be a finite group and A a finite set of size q ≥ 2. Let x, y ∈ AG be such
that xG ≠ yG. There exists a non-invertible τ ∈ CA(G; A) such that (xG)τ = yG if and only if
[Gx ] ≤ [Gy ].
Consider the directed graph (CG , EG ) with vertex set CG and edge set
$$E_G := \big\{ ([H_i], [H_j]) \in C_G^2 : [H_i] \le [H_j] \big\}.$$
When G is abelian, this graph coincides with the subgroup lattice of G.
Remark 5. Lemma 10 may be restated in terms of (CG , EG ). By Lemma 9, loops ([Hi ], [Hi ]) do not
have corresponding non-invertible CA when α[Hi ] (G; A) = 1.
Recall that an action of G on a set X is transitive if for any x, y ∈ X there exists g ∈ G such that
x · g = y (i.e. X = xG, for any x ∈ X). The following result will be useful in order to prove the main
theorem of this section.
Lemma 11. Let G be a finite group and A a finite set of size q ≥ 2. Then ICA(G; A) is transitive on
every G-orbit on AG if and only if G is a finite Dedekind group.
Proof. Let x ∈ AG be a configuration. By Theorem 3, the group ICA(G; A) acts on xG as the group
NG (Gx )/Gx via the action x · (Gx g) := x · g, for any Gx g ∈ NG (Gx )/Gx . Note that NG (Gx )/Gx is
transitive on xG if and only if G = NG (Gx ), which holds if and only if Gx is normal in G. As any
subgroup of G occurs as a stabiliser of a configuration, this shows that ICA(G; A) is transitive on xG,
for all x ∈ AG , if and only if every subgroup of G is normal.
Remark 6. It is obvious, by definition, that G is transitive on a G-orbit xG. However, Lemma 11
establishes a criterion for the transitivity of the group ICA(G; A) on xG.
Theorem 7 (Relative rank of ICA(G; A) on CA(G; A)). Let G be a finite group and A a finite set of
size q ≥ 2. Let I2 (G) be the set of subgroups of G of index 2:
I2 (G) = {H ≤ G : [G : H] = 2}.
Then,
$$\mathrm{Rank}(\mathrm{CA}(G; A) : \mathrm{ICA}(G; A)) \ge \begin{cases} |E_G| - |I_2(G)| & \text{if } q = 2, \\ |E_G| & \text{otherwise}, \end{cases}$$
with equality if and only if G is a finite Dedekind group.
Proof. Let [H1 ], [H2 ], . . . , [Hr ] be the list of different conjugacy classes of subgroups of G with H1 = G.
Suppose further that this is ordered such that
$$|H_1| \ge |H_2| \ge \cdots \ge |H_r|. \qquad (3)$$
For each 1 ≤ i ≤ r, let αi := α[Hi ] (G; A) and Bi := B[Hi ] (G; A). Fix orbits xi G ⊆ Bi such that
Gxi < Gxj whenever [Hi ] < [Hj ]. For every αi ≥ 2, fix orbits yi G ⊆ Bi such that xi G ≠ yi G.
Consider the set
$$V := \{\tau_{x_i, x_j} : G_{x_i} < G_{x_j}\} \cup \{\tau_{x_i, y_i} : \alpha_i \ge 2\},$$
and τxi ,xj and τxi ,yi are the idempotents that map xi to xj and xi to yi , respectively, as defined in
Lemma 10. Observe that
$$|V| = |E_G| - \sum_{i=1}^{r} \delta(\alpha_i, 1) = \begin{cases} |E_G| - |I_2(G)| & \text{if } q = 2, \\ |E_G| & \text{otherwise}, \end{cases}$$
where the last equality follows by Lemma 5.
Claim. The relative rank of ICA(G; A) on CA(G; A) is at least |V |.
Proof. Suppose there exists W ⊆ CA(G; A) such that |W | < |V | and
⟨ICA(G; A) ∪ W⟩ = CA(G; A).
Let τ ∈ CA(G; A). We say that τ is a Unique-Garden-of-Eden CA (UCA) of type (Bi , Bj ) if the
following holds:
(⋆) τ has a unique Garden of Eden G-orbit Oi (i.e. (AG )τ = AG \ Oi ), and it satisfies Oi ⊆ Bi and
(Oi )τ ⊆ Bj .
Note that UCA of type (Bi , Bi ) only exist when there are at least two different orbits in Bi , i.e. αi ≥ 2.
For example, the idempotents τxi ,yi ∈ V , with αi ≥ 2, are UCA of type (Bi , Bi ), while the
idempotents τxi ,xj , with Gxi < Gxj , are UCA of type (Bi , Bj ) with Oi = xi G. Note that τ ∈ CA(G; A)
is a UCA of type (Bi , Bj ) if and only if ⟨ICA(G; A), τ⟩ contains a UCA of type (Bi , Bj ) if and only
if all non-invertible elements of ⟨ICA(G; A), τ⟩ are UCA of type (Bi , Bj ) (because any φ ∈ ICA(G; A)
always satisfies φ(Bk ) = Bk for all k, by Corollary 1).
As |W | < |V |, and V has exactly one UCA of each possible type (see Lemma 10), there must be
τ ∈ V such that there is no UCA in W of the same type as τ . Without loss of generality, suppose
that the type of τ is (Bi , Bj ) (possibly with i = j). We finish the proof of the claim by showing that
there is no UCA of type (Bi , Bj ) in ⟨W⟩. This would imply that there is no UCA of type (Bi , Bj ) in
⟨ICA(G; A) ∪ W⟩, contradicting that ⟨ICA(G; A) ∪ W⟩ = CA(G; A).
Assume that
$$\omega := \omega_1 \cdots \omega_s \cdots \omega_\ell \in \langle W \rangle, \text{ with } \omega_m \in W,\ \forall m = 1, \ldots, \ell, \qquad (4)$$
is a UCA of type (Bi , Bj ). First note that, as ω has no GoE in Bk , for all k ≠ i, then (Bk )ω = Bk .
Hence,
$$(B_k)\omega_m = B_k \ \text{ for all } k \ne i \text{ and } m = 1, \ldots, \ell, \qquad (5)$$
because ωm cannot map any G-orbit of Bk to a different subshift Bc , as there is no CA mapping back
Bc to Bk (see Lemmas 10 and 3), and ωm does not have GoE inside Bk because this would be a GoE
for ω inside Bk .
Now, observe that each non-invertible ωm that appears in (4) has a unique GoE orbit Ωm (inside
Bi because of (5)). This is true because if Ωm consists of more than one orbit, then (AG )ωm = AG \Ωm
implies that the size of (AG )ω is strictly smaller than |AG | − |Oi |, where Oi ⊆ Bi is the unique GoE
G-orbit of ω.
Let ωs be the first non-invertible CA that appears in (4). We finish the proof by showing that
$(\Omega_s)\omega_s \subseteq B_j$. Let $\Omega_s' = (\Omega_s)\omega_{s-1}^{-1} \cdots \omega_1^{-1}$. There are three possibilities:
Case 1: $(\Omega_s)\omega_s = P_c \subseteq B_c$ for $c \ne i, j$. Then:
$$(\Omega_s')\omega = (\Omega_s)\omega_s \cdots \omega_\ell = (P_c)\omega_{s+1} \cdots \omega_\ell \subseteq B_c,$$
where the last containment is because of (5). However, as ω maps all orbits of Bi to Bi ∪ Bj , this
case is impossible.
Case 2: $(\Omega_s)\omega_s \subseteq B_i$. If i = j, then ωs is a UCA of the same type as ω. Hence, let i ≠ j. By the
uniqueness of Ωs , $(B_i \setminus \Omega_s)\omega_s = B_i \setminus \Omega_s$, so there exists a G-orbit $Q \subseteq B_i$, $Q \ne \Omega_s$, such that
$(\Omega_s)\omega_s = (Q)\omega_s$. Let $Q' = (Q)\omega_{s-1}^{-1} \cdots \omega_1^{-1}$. Then,
$$(\Omega_s')\omega = (\Omega_s)\omega_s \cdots \omega_\ell = (Q)\omega_s \cdots \omega_\ell = (Q')\omega.$$
However, as ω maps its only GoE orbit to Bj , it does not collapse orbits in Bi . So this case,
with i ≠ j, is impossible.
Case 3: (Ωs )ωs ⊆ Bj . In this case, ωs is a UCA of type (Bi , Bj ).
In any case, we obtain a contradiction with the assumption that W has no UCA of type (Bi , Bj ).
Therefore, ⟨W⟩ has no UCA of type (Bi , Bj ).
Claim. If G is a Dedekind group, then Rank(CA(G; A) : ICA(G; A)) = |V |.
Proof. We will show that
$$\mathrm{CA}(G; A) = M := \langle \mathrm{ICA}(G; A) \cup V \rangle.$$
For any σ ∈ CA(G; A), consider σi ∈ CA(G; A), 1 ≤ i ≤ r, defined by
$$(x)\sigma_i = \begin{cases} (x)\sigma & \text{if } x \in B_i, \\ x & \text{otherwise.} \end{cases}$$
By Lemmas 3 and 10, we know that $G_x \le G_{(x)\sigma}$, for all $x \in A^G$, so $(B_i)\sigma \subseteq \bigcup_{j \le i} B_j$ for all i (recall
the order given by (3)). Hence, we have the decomposition
σ = σ1 ◦ σ2 ◦ · · · ◦ σr .
We shall prove that σi ∈ M for all 1 ≤ i ≤ r. For each σi , decompose Bi = Bi′ ∪ Bi′′ , where
$$B_i' := \bigcup \{P \in O(G; A) : P \subseteq B_i \text{ and } (P)\sigma_i \subseteq B_j \text{ for some } j < i\},$$
$$B_i'' := \bigcup \{P \in O(G; A) : P \subseteq B_i \text{ and } (P)\sigma_i \subseteq B_i\}.$$
If σi′ and σi′′ are the CA that act as σi on Bi′ and Bi′′ , respectively, and fix everything else, then
σi = σi′ ◦ σi′′ . We shall prove that σi′ ∈ M and σi′′ ∈ M .
1. We show that σi′ ∈ M . For any orbit P ⊆ Bi′ , the orbit Q := (P )σi′ is contained in Bj for some
j < i. By Theorem 3, there exists an involution
φ ∈ (NG (Gxi )/Gxi ) ≀ Symαi × (NG (Gxj )/Gxj ) ≀ Symαj ≤ ICA(G; A)
that induces the double transposition (xi G, P )(xj G, Q). By Lemma 11, ICA(G; A) is transitive
on xi G and xj G (as G is Dedekind), so we may take φ such that (xi )φσi′ = (xj )φ. Then,
(z)σi′ = (z)φτxi ,xj φ, ∀z ∈ P = (xi G)φ.
As σi′ may be decomposed as a product of CA that only move one orbit in Bi′ , this shows that
σi′ ∈ M .
2. We show that σi′′ ∈ M . In this case, σi′′ ∈ Tran(Bi ). In fact, as σi′′ preserves the partition of Bi
into G-orbits, Lemma 9 and [2, Lemma 2.1 (i)] imply that σi′′ ∈ (G/Gxi ) ≀ Tranαi . If αi ≥ 2, the
monoid Tranαi is generated by Symαi ≤ ICA(G; A) together with the idempotent τxi ,yi . Hence,
σi′′ ∈ M .
Therefore, we have established that CA(G; A) = ⟨ICA(G; A) ∪ V⟩.
Claim. If G is not a Dedekind group, then Rank(CA(G; A) : ICA(G; A)) > |V |.
Proof. As G is not Dedekind, there is a subgroup H ≤ G which is not normal. Hence, H = Gxi for
a non-constant configuration xi ∈ AG , and, by the proof of Lemma 11, ICA(G; A) is not transitive on
xi G. Consider the idempotent τi,1 ∈ V . Then τi,1 = (P → x1 ), where P is the ICA(G; A)-orbit inside
xi G that contains xi and x1 ∈ AG is a constant configuration. Let Q be an ICA(G; A)-orbit inside xi G
such that Q ≠ P . As there is no φ ∈ ICA(G; A) mapping P to Q, we have $(Q \to x_i) \notin \langle \mathrm{ICA}(G; A) \cup V \rangle$.
Therefore, the edge ([H], [G]) of the graph on CG must be counted at least twice for its contribution
towards the relative rank of ICA(G; A) on CA(G; A). The claim follows.
Using Theorem 7, we may find an upper bound for the smallest size of a generating set of CA(G; A),
when G is a finite Dedekind group.
Corollary 5. Let G be a finite Dedekind group and A a finite set of size q ≥ 2. Suppose that
Rank(G) = m and that G has r different subgroups. Then,
$$\mathrm{Rank}(\mathrm{CA}(G; A)) \le m(r - 1) + \frac{1}{2}\, r(r + 5).$$
Proof. Observe that for any α ≥ 3 and any H ≤ G, we have Rank((G/H) ≀ Symα ) ≤ m + 2 because
{((g1 , e, . . . , e); id), . . . , ((gm , e, . . . , e); id), ((e, . . . , e); (1, 2)), ((e, . . . , e); (1, 2, . . . , α))}
is a generating set of (G/H) ≀ Symα , whenever {g1 , . . . , gm } is a generating set for G.
Let H1 , H2 , . . . , Hr be the list of different subgroups of G with H1 = G. For each 1 ≤ i ≤ r, let
αi := α[Hi ] (G; A). Thus, by Lemma 8, Corollary 3, and Theorem 7 we have:
$$\begin{aligned} \mathrm{Rank}(\mathrm{CA}(G; A)) &= \mathrm{Rank}(\mathrm{ICA}(G; A)) + \mathrm{Rank}(\mathrm{CA}(G; A) : \mathrm{ICA}(G; A)) \\ &\le \sum_{i=1}^{r} \mathrm{Rank}\big((G/H_i) \wr \mathrm{Sym}_{\alpha_i}\big) + |E_G| \\ &\le \mathrm{Rank}(\mathrm{Sym}_q) + \sum_{i=2}^{r} (m + 2) + \binom{r}{2} + r \\ &\le 2 + (r - 1)(m + 2) + \frac{1}{2}\, r(r - 1) + r \\ &= m(r - 1) + \frac{1}{2}\, r(r + 5). \end{aligned}$$
Figure 1: Hasse diagram of the subgroup lattice of G = Z2 × Z2 (diagram omitted: H5 ≅ Z1 at the bottom; H2 ≅ Z2 , H3 ≅ Z2 and H4 ≅ Z2 in the middle; H1 = G at the top).
The bound of Corollary 5 may become tighter if we actually know |EG | and Rank(G/Hi ), for all
Hi ≤ G.
Example 5. Let G = Z2 × Z2 be the Klein four-group and A = {0, 1}. With the notation of Example
2, Figure 1 illustrates the Hasse diagram of the subgroup lattice of G (i.e. the actual lattice of
subgroups is the transitive and reflexive closure of this graph). Hence, by Theorem 7 and Example 4,
Rank(CA(G; A) : ICA(G; A)) = |EG | − 3 = 12 − 3 = 9,
Rank(CA(G; A)) ≤ 9 + 9 = 18, as Rank(ICA(G; A)) ≤ 9.
6
Conclusions
In this paper we studied the monoid CA(G; A) of all cellular automata over a finite group G and a
finite set A. Our main results are the following:
1. We completely determined the structure of the group of invertible cellular automata ICA(G; A)
in terms of the structure of G (Theorem 3).
2. We improved the known lower bound on the number of aperiodic configurations of AG (Theorem
5).
3. We showed that any generating set of CA(G; A) must have at least one cellular automaton whose
minimal memory set is G itself (Theorem 6).
4. We gave a lower bound for the minimal size of a set V ⊆ CA(G; A) such that ICA(G; A) ∪ V
generates CA(G; A), and we showed that this lower bound is achieved if and only if all subgroups
of G are normal (Theorem 7).
Most of our results are particularly good for finite Dedekind groups, i.e. finite groups in which all
subgroups are normal (this includes all finite abelian groups).
Some open problems and directions for future work are the following:
1. Examine further the case when G is not a Dedekind group; in particular, determine the relative
rank of ICA(G; A) on CA(G; A).
2. Improve the upper bound on Rank(CA(G; A)) given by Corollary 5.
3. Study generating sets of CA(G; A) when G is an infinite group.
4. Study other algebraic properties of the monoid CA(G; A).
7
Acknowledgments
This work was supported by the EPSRC grant EP/K033956/1. We kindly thank Turlough Neary and
Matthew Cook for their invitation to submit this paper and for the organisation of the conference
AUTOMATA 2016. We also thank the referees of this paper for their insightful suggestions.
References
[1] Araújo, J., Bentz, W., Mitchell, J.D., Schneider, C.: The rank of the semigroup of transformations
stabilising a partition of a finite set. Mat. Proc. Camb. Phil. Soc. 159, 339–353 (2015).
[2] Araújo, J., Schneider, C.: The rank of the endomorphism monoid of a uniform partition. Semigroup
Forum 78, 498–510 (2009).
[3] Cameron, P.J.: Combinatorics: Topics, Techniques, Algorithms. Cambridge University Press,
Cambridge (1994).
[4] Castillo-Ramirez, A., Gadouleau, M.: Ranks of finite semigroups of one-dimensional cellular automata. Semigroup Forum 93, no. 2, 347–362 (2016).
[5] Castillo-Ramirez, A., Gadouleau, M.: On Finite Monoids of Cellular Automata. In: Cook, M.,
Neary, T. (eds.) Cellular Automata and Discrete Complex Systems. LNCS 9664, 90–104, Springer
International Publishing (2016).
[6] Ceccherini-Silberstein, T., Coornaert, M.: Cellular Automata and Groups. Springer Monographs
in Mathematics, Springer-Verlag Berlin Heidelberg (2010).
[7] Cyr, V., Kra, B.: The automorphism group of a shift of linear growth: beyond transitivity.
Forum Math. Sigma 3, e5, 1–27 (2015).
[8] Dedekind, R.: Ueber Gruppen, deren sämmtliche Theiler Normaltheiler sind (German) Math. Ann.
48, no. 4, 548–561 (1897).
[9] Dixon, J.D., Mortimer, B.: Permutation Groups. Graduate Texts in Mathematics 163, Springer-Verlag, New York (1996).
[10] Ganyushkin, O., Mazorchuk, V.: Classical Finite Transformation Semigroups: An Introduction.
Algebra and Applications 9, Springer-Verlag, London (2009).
[11] Gao, S., Jackson, S., Seward, B.: Group Colorings and Bernoulli Subflows. Mem. Am. Math. Soc.
241, no. 1141, 1–239 (2016).
[12] Gomes, G.M.S., Howie, J.M.: On the ranks of certain finite semigroups of transformations. Math.
Proc. Camb. Phil. Soc. 101, 395–403 (1987).
[13] Gray, R.D.: The minimal number of generators of a finite semigroup. Semigroup Forum 89,
135–154 (2014).
[14] Hartman, Y.: Large semigroups of cellular automata. Ergodic Theory Dyn. Syst. 32, 1991–2010
(2012).
[15] Hawkes, T., Isaacs, I.M., Özaydin, M.: On the Möbius Function of a Finite Group. Rocky Mt. J.
Math. 19, no. 4, 1003–1034 (1989).
[16] Howie, J.M., McFadden, R.B.: Idempotent rank in finite full transformation semigroups. Proc.
Royal Soc. Edinburgh 114A, 161–167 (1990).
[17] Kari, J.: Theory of cellular automata: A Survey. Theoret. Comput. Sci. 334, 3–33 (2005).
[18] Kerber, A.: Applied Finite Group Actions, 2nd ed. Algorithms and Combinatorics 19, Springer
(1999).
[19] Salo, V.: Groups and Monoids of Cellular Automata. In: Kari, J. (ed.) Cellular Automata and
Discrete Complex Systems. LNCS 9099, 17–45, Springer Berlin Heidelberg (2015).
A. Castillo-Ramirez (Corresponding author), Universidad de Guadalajara, CUCEI, Departamento de Matemáticas,
Guadalajara, México.
Email: alonso.castillor@academicos.udg.mx
M. Gadouleau, School of Engineering and Computing Sciences, Durham University, South Road, Durham
DH1 3LE, U.K.
Email: m.r.gadouleau@durham.ac.uk
New Algorithms for Maximum Disjoint Paths Based on
Tree-Likeness
arXiv:1603.01740v2 [] 20 May 2016
Krzysztof Fleszar∗
Matthias Mnich†
Joachim Spoerhase‡
Abstract
We study the classical NP-hard problems of finding maximum-size subsets from given sets of k terminal pairs that can be routed via edge-disjoint paths (MaxEDP) or node-disjoint paths (MaxNDP) in a given graph. The approximability of MaxEDP/NDP is currently not well understood; the best known lower bound is Ω(log^{1/2−ε} n), assuming NP ⊄ ZPTIME(n^{polylog n}). This constitutes a significant gap to the best known approximation upper bound of O(√n) due to Chekuri et al. (2006), and closing this gap is currently one of the big open problems in approximation algorithms. In their seminal paper, Raghavan and Thompson (Combinatorica, 1987) introduce the technique of randomized rounding for LPs; their technique gives an O(1)-approximation when edges (or nodes) may be used by O(log n / log log n) paths.
In this paper, we strengthen the above fundamental results. We provide new bounds formulated in terms of the feedback vertex set number r of a graph, which measures its vertex deletion distance to a forest. In particular, we obtain the following.
• For MaxEDP, we give an O(√r · log^{1.5} kr)-approximation algorithm. As r ≤ n, up to logarithmic factors, our result strengthens the best known ratio O(√n) due to Chekuri et al.
• Further, we show how to route Ω(OPT) pairs with congestion O(log kr / log log kr), strengthening the bound obtained by the classic approach of Raghavan and Thompson.
• For MaxNDP, we give an algorithm that computes the optimal answer in time (k + r)^{O(r)} · n. If r is at most triple-exponential in k, this improves the best known algorithm for MaxNDP with parameter k, by Kawarabayashi and Wollan (STOC 2010).
We complement these positive results by proving that MaxEDP is NP-hard even for r = 1, and MaxNDP is W[1]-hard for parameter r. This shows that neither problem is fixed-parameter tractable in r unless FPT = W[1] and that our approximability results are relevant even for very small constant values of r.
1 Introduction
In this paper, we study disjoint paths routing problems. In this setting, we are given an undirected graph G
and a collection of source-destination pairs M = {(s1 , t1 ), . . . , (sk , tk )}. The goal is to select a maximum-sized
subset M0 ⊆ M of the pairs that can be routed, where a routing of M0 is a collection P of paths such that,
for each pair (si , ti ) ∈ M0 , there is a path in P connecting si to ti . In the Maximum Edge Disjoint Paths
(MaxEDP) problem, a routing P is feasible if its paths are pairwise edge-disjoint, and in the Maximum
Node Disjoint Paths (MaxNDP) problem the paths in P must be pairwise vertex-disjoint.
Disjoint paths problems are fundamental problems with a long history and significant connections to
optimization and structural graph theory. The decision version of MaxEDP/MaxNDP asks whether all of
the pairs can be routed. Karp [27] showed that, when the number of pairs is part of the input, the decision
problem is NP-complete. In undirected graphs, MaxEDP and MaxNDP are solvable in polynomial time
∗ Universität Würzburg, Würzburg, Germany. krzysztof.fleszar@uni-wuerzburg.de
† Universität Bonn, Bonn, Germany. mmnich@uni-bonn.de. Supported by ERC Starting Grant 306465 (BeyondWorstCase).
‡ Universität Würzburg, Würzburg, Germany. joachim.spoerhase@uni-wuerzburg.de
when the number of pairs is a fixed constant; this is a very deep result of Robertson and Seymour [40] that
builds on several fundamental results in structural graph theory from their graph minors project.
In this paper, we consider the optimization problems MaxEDP and MaxNDP when the number of pairs is part of the input. In this setting, the best approximation ratio for MaxEDP is achieved by an O(√n)-approximation algorithm [12, 33], where n is the number of nodes, whereas the best hardness bound for undirected graphs is only Ω(log^{1/2−ε} n) [3]. Bridging this gap is a fundamental open problem that seems quite challenging at the moment.
Most of the results for routing on disjoint paths use a natural multi-commodity flow relaxation as a starting point. A well-known integrality gap instance due to Garg et al. [24] shows that this relaxation has an integrality gap of Ω(√n), and this is the main obstacle for improving the O(√n)-approximation ratio in general graphs. The integrality gap instance on a √n × √n grid (of treewidth Θ(√n)) exploits a topological obstruction in the plane that prevents a large integral routing; see Fig. 1. This led Chekuri et al. [15] to studying the approximability of MaxEDP with respect to the treewidth of the underlying graph. In particular, they pose the following conjecture:
Conjecture 1 ([13]). The integrality gap of the standard multi-commodity flow relaxation for MaxEDP
is Θ(w), where w is the treewidth of the graph.
Recently, Ene et al. [21] showed that MaxEDP admits an O(w³)-approximation algorithm on graphs of treewidth at most w. Theirs is the best known approximation ratio in terms of w, improving on an earlier O(w · 3^w)-approximation algorithm due to Chekuri et al. This shows that the problem seems more amenable on “tree-like” graphs.
However, for w = ω(n^{1/6}), the bound is weaker than the bound of O(√n). In fact, EDP remains NP-hard even for graphs of constant treewidth, namely treewidth w = 2 [37]. This further rules out the existence of a fixed-parameter algorithm for MaxEDP parameterized by w, assuming P ≠ NP. Therefore, to obtain fixed-parameter tractability results as well as better approximation guarantees, one needs to resort to parameters stronger than treewidth.
Another route to bridge the large gap between approximation lower and upper bounds for MaxEDP is to allow the paths to have low congestion c: that is, instead of requiring the routed paths to be pairwise disjoint, at most c paths may use an edge. In their groundbreaking work, Raghavan and Thompson [38] introduced the technique of randomized rounding of LPs to obtain polynomial-time approximation algorithms for combinatorial problems. Their approach allows routing Ω(OPT) pairs with congestion O(log n / log log n). This extensive line of research [2, 18, 29] has culminated in a log^{O(1)} k-approximation algorithm with congestion 2 for MaxEDP [20]. A slightly weaker result also holds for MaxNDP [11].
1.1 Motivation and Contribution
The goal of this work is to study disjoint paths problems under another natural measure for how “far” a
graph is from being a tree. In particular, we propose to examine MaxEDP and MaxNDP under the feedback
vertex set number, which for a graph G denotes the smallest size r of a set R ⊆ V(G) for which G − R is a
forest. Note that the treewidth of G is at most r + 1. Therefore, given the NP-hardness of EDP for w = 2
and the current gap between the best known upper bound O(w3 ) and the linear upper bound suggested by
Conjecture 1, it is interesting to study the stronger restriction of bounding the feedback vertex set number r
of the input graph. Our approach is further motivated by the fact that MaxEDP is efficiently solvable on
trees by means of the algorithm of Garg, Vazirani and Yannakakis [24]. Similarly, MaxNDP is easy on trees
(see Theorem 3).
Our main insight is that one can in fact obtain bounds in terms of r that either strengthen the best known
bounds or are almost tight (see Table 1). It therefore seems that parameter r correlates quite well with the
“difficulty” of disjoint paths problems.
Our first result allows the paths to have small congestion: in this setting, we strengthen the result, obtained by the classic randomized LP-rounding approach of Raghavan and Thompson [38], that one can always route Ω(OPT) pairs with congestion O(log n / log log n) with constant probability.
2
Theorem 1. For any instance (G, M) of MaxEDP, one can efficiently find a routing of Ω(OPT) pairs with congestion O(log kr / log log kr) with constant probability; in other words, there is an efficient O(1)-approximation algorithm for MaxEDP with congestion O(log kr / log log kr).
Our second main result builds upon Theorem 1 and uses it as a subroutine. We show how to use a routing
for MaxEDP with low congestion to obtain a polynomial-time approximation algorithm for MaxEDP
without congestion that performs well in terms of r.
Theorem 2. The integrality gap of the multi-commodity flow relaxation for MaxEDP with k terminal pairs is O(√r · log^{1.5} rk) for graphs with feedback vertex set number r. Moreover, there is a polynomial-time algorithm that, given a fractional solution to the relaxation of value opt, constructs an integral routing of size opt/O(√r · log^{1.5} rk).
In particular, our algorithm strengthens the best known approximation algorithm for MaxEDP on general graphs [12], as always r ≤ n, and indeed it matches that algorithm's performance up to polylogarithmic factors. Substantially improving upon our bounds would also improve the current state of the art for MaxEDP. Conversely, the result implies that it suffices to study graphs with close to linear feedback vertex set number in order to improve the currently best upper bound of O(√n) on the approximation ratio [12].
Our algorithmic approaches harness the forest structure of G − R for any feedback vertex set R. However,
the technical challenge comes from the fact that the edge set running between G − R and R is unrestricted.
Therefore, the “interaction” between R and G − R is non-trivial, and flow paths may run between the two
parts in an arbitrary manner and multiple times. In fact, we show that MaxEDP is already NP-hard if R
consists of a single node (Theorem 5); this contrasts the efficient solvability on forests [24].
In order to overcome the technical hurdles we propose several new concepts, which we believe could be of
interest in future studies of disjoint paths or routing problems.
In the randomized rounding approach of Raghavan and Thompson [38], it is shown that the probability that the congestion on any fixed edge is larger than c · log n / log log n for some constant c is at most 1/n^{O(1)}. Combining this with the fact that there are at most n² edges yields that every edge has bounded congestion w.h.p. The number of edges in the graph may, however, be unbounded in terms of r and k. Hence, in order to prove Theorem 1, we propose a non-trivial pre-processing step of the optimum LP solution that is applied prior to the randomized rounding. In this step, we aggregate the flow paths by a careful rerouting so that the flow “concentrates” in O(kr²) nodes (so-called hot spots), in the sense that if all edges incident on hot spots have low congestion then so do all edges in the graph. Unfortunately, for any such hot spot the number of incident edges carrying flow may still be unbounded in terms of k and r. We are, however, able to give a refined probabilistic analysis that suitably relates the probability that the congestion bound is exceeded to the amount of flow on that edge. Since the total amount of flow on each hot spot is bounded in terms of k, the probability that all edges incident on the same hot spot have bounded congestion is inverse polynomial in r and k.
The known O(√n)-approximation algorithm for MaxEDP by Chekuri et al. [12] employs a clever LP-rounding approach. If there are many long paths, then there must be a single node carrying a significant fraction of the total flow, and a good fraction of this flow can be realized by integral paths by solving a single-source flow problem. If the LP solution contains many short flow paths, then greedily routing these short paths yields the bound, since each such path blocks only a bounded amount of flow. In order to prove Theorem 2,
it is natural to consider the case where there are many paths visiting a large number of nodes in R. In this
case, we reduce to a single-source flow problem, similarly to the approach of Chekuri et al. The case where
a majority of the flow paths visit only a few nodes in R turns out more challenging, since any such path
may still visit an unbounded number of edges in terms of k and r. We use two main ingredients to overcome
these difficulties. First, we apply our Theorem 1 as a building block to obtain a solution with logarithmic
congestion while losing only a constant factor in the approximation ratio. Second, we introduce the concept of irreducible routings with low congestion, which allows us to exploit the structural properties of the graph and the congestion property to identify a sufficiently large number of flow paths blocking only a small amount of flow.
3
Note that the natural greedy approach of always routing the shortest conflict-free path gives only O(√m) for MaxEDP. We believe that it is non-trivial to obtain our bounds via a more direct or purely combinatorial approach.
Our third result is a fixed-parameter algorithm for MaxNDP in k + r.
Theorem 3. MaxNDP can be solved in time (8k + 8r)^{2r+2} · O(n) on graphs with feedback vertex set number r and k terminal pairs.
This run time is polynomial for constant r. We also note that for small r, our algorithm is asymptotically
significantly faster than the fastest known algorithm for NDP, by Kawarabayashi and Wollan [28], which
requires time at least quadruple-exponential in k [1]. Namely, if r is at most triple-exponential in k, our
algorithm is asymptotically faster than theirs. We achieve this result by the idea of so-called essential pairs
and realizations, which characterizes the “interaction” between the feedback vertex set R and the paths in an
optimum solution. Note that in our algorithm of Theorem 3, parameter k does not appear in the exponent of
the run time at all. Hence, for small values of r our algorithm is also faster than reducing MaxNDP to NDP by guessing the subset of pairs to be routed (at an expense of a factor of 2^k in the run time) and using Scheffler’s [41] algorithm for NDP with run time 2^{O(r log r)} · O(n).
Once a fixed-parameter algorithm for a problem has been obtained, the question of the existence of a polynomial-size kernel arises. Here we note that MaxNDP does not admit a polynomial kernel for parameter k + r, unless NP ⊆ coNP/poly [8].
Another natural question is whether the run time f(k, r) · n in Theorem 3 can be improved to f(r) · n^{O(1)}. We answer this question in the negative, ruling out the existence of a fixed-parameter algorithm for MaxNDP parameterized by r (assuming FPT ≠ W[1]):
Theorem 4. MaxNDP in unit-capacity graphs is W[1]-hard parameterized by r.
This contrasts the known result that NDP is fixed-parameter tractable in r [41]—which further stresses
the relevance of understanding this parameter.
For MaxEDP, we prove that the situation is, in a sense, even worse:
Theorem 5. MaxEDP is NP-hard for unit-capacity graphs with r = 1 and EDP is NP-hard for unit-capacity
graphs with r = 2.
This theorem also shows that our algorithms are relevant for small values of r, and they nicely complement
the NP-hardness for MaxEDP in capacitated trees [24].
Our results are summarized in Table 1.
             | EDP                    | MaxEDP                                  | NDP       | MaxNDP
 r = 0       | poly [24]              | poly [24]                               | poly [41] | poly (Thm. 3)
 r = 1       | open                   | NP-hard (Thm. 5)                        | poly [41] | poly (Thm. 3)
 r ≥ 2       | NP-hard (Thm. 5)       | NP-hard (Thm. 5)                        | poly [41] | poly (Thm. 3)
 param. r    | para-NP-hard (Thm. 5)  | O(√r · log^{1.5} kr)-approx. (Thm. 2);  | FPT [41]  | W[1]-hard (Thm. 4);
             |                        | O(1)-approx. with congestion            |           | exact in time (k + r)^{O(r)} (Thm. 3)
             |                        | O(log kr / log log kr) (Thm. 1)         |           |
Table 1: Summary of results obtained in this paper.
Related Work. Our study of the feedback vertex set number is in line with the general attempt to obtain
bounds for MaxEDP (or related problems) that are independent of the input size. Besides the above-mentioned
works that provide bounds in terms of the tree-width of the input graph, Günlük [25] and Chekuri et al. [17] give
bounds on the flow-cut gap for the closely related integer multicommodity flow problem that are logarithmic
with respect to the vertex cover number of a graph. This improved upon earlier bounds of O(log n) [34]
and O(log k) [5, 35]. As every feedback vertex set is in particular a vertex cover of a graph, our results
generalize earlier work for disjoint path problems on graphs with bounded vertex cover number. Bodlaender
et al. [8] showed that NDP does not admit a polynomial kernel parameterized by vertex cover number and
the number k of terminal pairs, unless NP ⊆ coNP/poly ; therefore, NDP is unlikely to admit a polynomial
kernel in r + k either. Ene et al. [21] showed that MaxNDP is W[1]-hard parameterized by treedepth, which
is another restriction of treewidth that is incomparable to the feedback vertex set number.
The basic gap in understanding the approximability of MaxEDP has led to several improved results for
special graph classes, and also our results can be seen in this light. For example, polylogarithmic approximation
algorithms are known for graphs whose global minimum cut value is Ω(log^5 n) [39], for bounded-degree expanders [9, 10, 23, 30, 34], and for Eulerian planar or 4-connected planar graphs [29]. Constant-factor approximation algorithms are known for capacitated trees [14, 24] and for grids and grid-like graphs [4, 6, 31, 32]. For planar graphs, there is a constant-factor approximation algorithm with congestion 2 [42]. Very recently, Chuzhoy et al. [19] gave an Õ(n^{9/19})-approximation algorithm for MaxNDP on planar graphs. However, improving the O(√n)-approximation algorithm for MaxEDP remains elusive even for planar graphs.
2 Preliminaries
We use standard graph theoretic notation. For a graph G, let V (G) denote its vertex set and E(G) its edge
set. Let G be a graph. A feedback vertex set of G is a set R ⊆ V (G) such that G − R is a forest. A minor
of G is a graph H that is obtained by successively contracting edges from a subgraph of G (and deleting any
occurring loops). A class G of graphs is minor-closed if every minor of a graph in G again belongs to G.
For an instance (G, M) of MaxEDP/MaxNDP, we refer to the vertices participating in the pairs M as terminals. It is convenient to assume that M forms a matching on the terminals; this can be ensured by making several copies of a terminal and attaching them as leaves.
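This preprocessing is simple to carry out explicitly. The following minimal sketch (our own illustration, not from the paper; graphs are adjacency dicts and the helper name is hypothetical) attaches a fresh leaf per terminal occurrence, so that the resulting pairs form a matching on the new leaves:

    def enforce_matching(adj, pairs):
        # Attach a fresh leaf to each terminal occurrence; the returned pairs
        # form a matching on the new leaf terminals. Routing instances are
        # equivalent, since each leaf has a unique edge into its terminal.
        adj = {v: set(ns) for v, ns in adj.items()}
        new_pairs, counter = [], 0
        for s, t in pairs:
            leaves = []
            for v in (s, t):
                leaf = ("leaf", counter)  # fresh node name, assumed unused
                counter += 1
                adj.setdefault(v, set()).add(leaf)
                adj[leaf] = {v}
                leaves.append(leaf)
            new_pairs.append(tuple(leaves))
        return adj, new_pairs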
Multi-commodity flow relaxation. We use the following standard multi-commodity flow relaxation for
MaxEDP (there is an analogous relaxation for MaxNDP). We use P(u, v) to denote the set of all paths
in G from u to v, for each pair (u, v) of nodes. Since the pairs M form a matching, the sets P(s_i, t_i) are pairwise disjoint. Let P = ∪_{i=1}^k P(s_i, t_i). The LP has a variable f(P) for each path P ∈ P representing the
amount of flow on P . For each pair (si , ti ) ∈ M, the LP has a variable xi denoting the total amount of flow
routed for the pair (in the corresponding IP, xi denotes whether the pair is routed or not). The LP imposes
the constraint that there is a flow from si to ti of value xi . Additionally, the LP has constraints that ensure
that the total amount of flow on paths using a given edge (resp. node for MaxNDP) is at most 1.
max  Σ_{i=1}^k x_i                                         (MaxEDP LP)

s.t. Σ_{P ∈ P(s_i, t_i)} f(P) = x_i ≤ 1        i = 1, . . . , k,

     Σ_{P : e ∈ P} f(P) ≤ 1                    e ∈ E(G),

     f(P) ≥ 0                                  P ∈ P.

Figure 1: Multi-commodity flow relaxation for MaxEDP. Right: Ω(√n) integrality gap instance for MaxEDP [24]: any integral routing routes at most one pair, whereas a multi-commodity flow can send 1/2 unit of flow for each pair (s_i, t_i) along the canonical path from s_i to t_i in the grid.
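To make the relaxation concrete, here is a small sketch (our own illustration, not part of the paper) that builds and solves the MaxEDP LP by explicit path enumeration. This is only viable for small graphs, since the number of simple paths can be exponential; it assumes scipy is available, and the helper names are ours.

    from scipy.optimize import linprog

    def simple_paths(adj, s, t):
        # Enumerate all simple s-t paths; each path is returned as a
        # frozenset of undirected edges (2-element frozensets).
        stack = [(s, (s,))]
        while stack:
            v, path = stack.pop()
            if v == t:
                yield frozenset(frozenset(e) for e in zip(path, path[1:]))
                continue
            for w in adj[v]:
                if w not in path:
                    stack.append((w, path + (w,)))

    def max_edp_lp(adj, pairs):
        # Maximize sum_i x_i where x_i is the flow routed for pair i,
        # subject to x_i <= 1 and congestion <= 1 on every edge.
        paths, owner = [], []
        for i, (s, t) in enumerate(pairs):
            for p in simple_paths(adj, s, t):
                paths.append(p)
                owner.append(i)
        edges = list({e for p in paths for e in p})
        a_ub, b_ub = [], []
        for i in range(len(pairs)):      # per-pair constraint: x_i <= 1
            a_ub.append([1.0 if owner[j] == i else 0.0 for j in range(len(paths))])
            b_ub.append(1.0)
        for e in edges:                  # per-edge congestion constraint
            a_ub.append([1.0 if e in p else 0.0 for p in paths])
            b_ub.append(1.0)
        res = linprog(c=[-1.0] * len(paths), A_ub=a_ub, b_ub=b_ub,
                      bounds=[(0.0, None)] * len(paths))
        return -res.fun                  # LP value = sum of marginals x_i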
It is well-known that the relaxation MaxEDP LP can be solved in polynomial time, since there is an
efficient separation oracle for the dual LP (alternatively, one can write a compact relaxation). We use (f, x)
to denote a feasible solution to MaxEDP LP for an instance (G, M) of MaxEDP. For each terminal v,
let x(v) denote the total amount of flow routed for v and we refer to x(v) as the marginal value of v in the
multi-commodity flow f .
We will use the following result by Chekuri et al. [12, Sect. 3.1]; see also Proposition 3.3 of Chekuri et
al. [16].
Proposition 1. Let (f, x) be a fractional solution to the LP relaxation of a MaxEDP instance (G, M). If some node v is contained in all flow paths of f, then we can find an integral routing of size at least (1/12) Σ_i x_i in polynomial time.
3 Bi-Criteria Approximation for MaxEDP with Low Congestion
We present a randomized rounding algorithm that will lead to the proof of Theorem 1.
3.1 Algorithm
Consider an instance (G, M) of MaxEDP. Let R be a 2-approximate minimum feedback vertex set of G and
let r = |R|; note that such a set R can be obtained in polynomial time [7].
For the sake of easier presentation, we will assume in this section that the feedback vertex set R contains
all terminal nodes from M. This can be achieved by temporarily adding the set of terminals to the feedback
vertex set R. Also note that this assumption increases the bound of Theorem 1 by at most a constant factor.
First, solve the corresponding MaxEDP LP. We obtain an optimal solution (f, x). For each (s_i, t_i) ∈ M we further obtain a set P′(s_i, t_i) = {P ∈ P(s_i, t_i) | f(P) > 0} of positively weighted paths that satisfy the LP constraints. Note that the total set P′ = ∪_{i=1}^k P′(s_i, t_i) is of size polynomially bounded in the input size. In what follows, we will modify P′ and then select an (unweighted) subset S of P′ that will form our integral solution.
Each P ∈ P′ has the form (r_1, ..., r_2, ..., r_ℓ), where r_1, ..., r_ℓ are the nodes in R that are traversed by P in this order. The paths (r_j, ..., r_{j+1}) with j = 1, ..., ℓ − 1 are called subpaths of P. For every subpath Q of P, we set f(Q) = f(P). Let J be the multi-set of all subpaths of all paths in P′. Let F = G − R be the forest obtained by removing R.
We now modify some paths in P′, one by one, and at the same time construct a subset H of nodes that we will call “hot spots”. At the end, every subpath in J will contain at least one hot spot.
Initially, let H = ∅. Consider any tree T in F and fix any of its nodes as a root. Then let JT be the
multi-set of all subpaths in J that, excluding the endpoints, are contained in T . For each subpath P ∈ JT ,
define its highest node h(P ) as the node on P closest to the root. Note that P ∩ T = P ∩ F is a path. Now,
pick a subpath P ∈ J_T that does not contain any node in H and whose highest node h(P) is farthest away from the root. Consider the multi-set J[P] of all subpaths in J_T that are identical to P (but may be subpaths of different flow paths in P′). Note that the weight f(J[P]) := Σ_{Q ∈ J[P]} f(Q) of J[P] is at most 1 by the constraints of the LP. Let u, v ∈ R be the endpoints of P. We define J_uv as the set of all subpaths in J \ J[P] that have u and v as their endpoints and that do not contain any node in H.
Intuitively speaking, we now aggregate flow on P by rerouting as much flow as possible from J_uv to P. To this end, we repeatedly perform the following operation as long as f(J[P]) < 1 and J_uv ≠ ∅. We pick a flow path Q in P′ that contains a subpath in J_uv. We reroute flow from Q by creating a new path Q′ that arises from Q by replacing its subpath between u and v with P, and we assign it the weight f(Q′) = min{f(Q), 1 − f(J[P])}. Then we set the weight of (the original path) Q to max{0, f(Q) + f(J[P]) − 1}. We update the sets P′, P′(s_i, t_i), J, J_T, J[P] and J_uv accordingly.
As soon as f (J [P ]) = 1 or Juv = ∅, we add h(P ) to H. Then, we proceed with the next P ∈ JT not
containing a hot spot and whose highest node h(P ) is farthest away from the root. If no such P is left we
consider the next tree T in F .
6
At the end, we create our solution S by randomized rounding: we route every terminal pair (s_i, t_i) with probability x_i. In case (s_i, t_i) is routed, we randomly select a path from P′(s_i, t_i) and add it to S, where the probability that path P is taken is f(P)/x_i.
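The rounding step itself is only a few lines. A minimal sketch (our own, assuming the modified LP solution is given as marginals x[i] and path weights f keyed by hashable paths):

    import random

    def randomized_rounding(x, paths_per_pair, f):
        # Route pair i with probability x[i]; if routed, select one of its
        # flow paths P with probability f[P] / x[i].
        routing = []
        for i, marginal in enumerate(x):
            if marginal <= 0 or random.random() > marginal:
                continue
            r, acc = random.random() * marginal, 0.0
            for path in paths_per_pair[i]:
                acc += f[path]
                if r <= acc:
                    routing.append((i, path))
                    break
        return routing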
3.2 Analysis
First, observe that x did not change during our modifications of the paths, as the total flow between any terminal pair did not change. Thus, the expected number of pairs routed in our solution is Σ_{i=1}^k x_i ≥ OPT. Using the Chernoff bound, the probability that we route fewer than OPT/2 pairs is at most e^{−OPT/8} < 1/2, assuming that OPT > 8. Secondly, we bound the congestion of our solution, our second criterion.
Lemma 1. The congestion of flow f is at most 2.
Proof. In our algorithm, we increase the flow only along flow subpaths that are pairwise edge-disjoint. To see this, consider two distinct flow subpaths P and P′ on which we increase the flow. Assume, without loss of generality, that P was considered before P′ by the algorithm. If there were an edge e lying on both P and P′, then both subpaths would traverse the same tree in the forest F. Hence, the path from e to h(P′) would visit h(P), and h(P) would be an internal node of P′. This yields a contradiction, as h(P) was already marked as a hot spot when P′ was considered. This shows that we increased the flow along any edge by at most one unit, and, hence, f has congestion at most 2.
We now bound the congestion of the integral solution obtained by randomized rounding. In the algorithm, we constructed a set H of hot spots. As a part of the analysis, we will now extend this set as follows. We build a sub-forest F′ of F consisting of all edges of F that lie on a path connecting two hot spots. Then we add to H all nodes that have degree at least 3 in F′. Since the number of nodes of degree at least 3 in any forest is at most its number of leaves, and since every leaf of F′ is a hot spot, it follows that this can at most double the size of H. Finally, we add the set R of all feedback vertex nodes to H.
Lemma 2. The number |H| of hot spots is O(kr²).
Proof. It suffices to show that the number of hot spots added to H by the algorithm is O(kr²). To this end, fix two nodes u, v ∈ R and consider the set of flow subpaths P with end nodes u and v for which we added h(P) to H. Due to the aggregation of flows in our algorithm, all except possibly one of these subpaths are saturated, that is, they carry precisely one unit of flow. Since no two of these subpaths are contained in the same flow path of f, and since the flow value of f is bounded from above by k, we added only O(k) hot spots for the pair u, v. Since there are at most r² pairs in R, the claim follows.
Definition 1. A hot spot u ∈ H is good if the congestion on any edge incident on u is bounded by c · log kr / log log kr, where c is a sufficiently large constant; otherwise, u is bad.
Lemma 3. Let u ∈ H be a hot spot. Then the probability that u is bad is at most 1/(k²r³).
Proof. Let e_1 = uv_1, ..., e_ℓ = uv_ℓ be the edges incident on u, and let f_i be the total flow on edge uv_i for i = 1, ..., ℓ. By Lemma 1, we have f_i ≤ 2. Since any flow path visits at most two of the edges incident on u, the total flow Σ_{i=1}^ℓ f_i on the edges incident on u is at most 2k.
For any i = 1, ..., ℓ, we have f_i = Σ_{P : P ∋ e_i} f(P), where P runs over the set of all paths connecting some terminal pair and containing e_i. Let f_{ij} = Σ_{P ∈ P(s_j, t_j) : P ∋ e_i} f(P) be the total amount of flow sent across e_i by terminal pair (s_j, t_j). Recall that x_j is the total flow sent for terminal pair (s_j, t_j). The probability that the randomized rounding procedure picks path P with P ∈ P(s_j, t_j) is precisely x_j · f(P)/x_j = f(P). Given the disjointness of the respective events, the probability that pair (s_j, t_j) routes a path across e_i is precisely f_{ij}. Let X_{ij} be the binary random variable indicating whether pair (s_j, t_j) routes a path across e_i. Then Pr[X_{ij} = 1] = f_{ij}. Let X_i = Σ_j X_{ij} be the number of paths routed across e_i by the algorithm. By linearity of expectation, we have E[X_i] = Σ_j E[X_{ij}] = Σ_j f_{ij} = f_i.
Fix any edge e_i. Set δ = c · log kr / log log kr and δ′ = δ/(2f_i) − 1. Note that for fixed i, the variables X_{ij} are independent. Hence, by the Chernoff bound, we have that
Pr[X_i ≥ c · log kr / log log kr] ≤ Pr[X_i ≥ (1 + δ′) f_i] < (e^{δ′} / (1 + δ′)^{1+δ′})^{f_i} ≤ f_i · e^{−c′ · (log kr / log log kr) · log log kr} ≤ f_i / (2k³r³).
Here, we use that f_i ≤ 2 for the second-last inequality, and for the last inequality we pick c′ sufficiently large by making c and k sufficiently large. (Note that MaxEDP can be solved efficiently for constant k.)
Now, using the union bound, we can infer that the probability that any of the edges incident on u carries more than δ paths is at most Σ_i f_i/(2k³r³) ≤ (2k)/(2k³r³) = 1/(k²r³).
Lemma 4. Assume that every hot spot is good. Then the congestion on any edge is bounded by 2c · log kr / log log kr.
Proof. Consider an arbitrary edge e = uv that is not incident on any hot spot. In particular, this means that e lies in the forest F = G − R. A hot spot z in F is called direct to u (or v) if the path in F from z to u (or v) contains neither e nor any hot spot other than z.
Now observe that there can be only one hot spot z direct to u and only one hot spot z′ direct to v. If there were a second hot spot z′′ ≠ z direct to u, then there would have to be yet another hot spot at the node where the path P_z from z to u joins the path from z′′ to u, contradicting the choice of z. Let P_{z′} be the path from z′ to v in F. Moreover, let e_z be the edge incident on z on path P_z, and let e_{z′} be the edge incident on z′ on path P_{z′}.
Now let P be an arbitrary path that is routed by our algorithm and that traverses e. It must visit a hot spot. If P visited neither z nor z′, then P would contain a hot spot direct to u or to v that is distinct from z and z′, a contradiction. Therefore, P contains e_z or e_{z′}. The claim now follows from the fact that this holds for any path traversing e, that z and z′ are good, and that therefore at most 2c · log kr / log log kr paths visit e_z or e_{z′}.
Theorem 6. The algorithm from Sect. 3.1 produces, with constant probability, a routing with Ω(OPT) paths such that the congestion is O(log kr / log log kr).
Proof. As argued above, we route fewer than OPT/2 paths with probability at most 1/2. By Lemma 2, there are O(kr²) hot spots. The probability that at least one of these hot spots is bad is O(kr²/(k²r³)) = O(1/(kr)), by Lemma 3. Hence, with constant probability, we route at least OPT/2 pairs with congestion at most 2c · log kr / log log kr, by Lemma 4.
4 Refined Approximation Bound for MaxEDP
In this section, we provide an improved approximation guarantee for MaxEDP without congestion, thereby
proving Theorem 2. (In contrast to the previous section, we do not assume here that all terminals are
contained in the feedback vertex set.)
4.1 Irreducible Routings with Low Congestion
We first develop the concept of irreducible routings with low congestion, which is (besides Theorem 1) a key
ingredient of our strengthened bound on the approximability of MaxEDP based on the feedback vertex
number.
Consider any multigraph G and any set P of (not necessarily simple) paths in G with congestion c. We say that an edge e is redundant in P if there is an edge e′ ≠ e such that the set of paths in P covering (containing) e is a subset of the set of paths in P covering e′.
8
Definition 2. Set P is called an irreducible routing with congestion c if each edge belongs to at most c paths
of P and there is no edge redundant in P.
In contrast to a feasible routing of a MaxEDP instance, we do not require an irreducible routing to connect a set of terminal pairs. If there is an edge e redundant in P, we can apply the following reduction rule: we contract e in G, and we contract e in every path of P that covers e. By this, we obtain a minor G′ of G and a set P′ of paths that consists of all the contracted paths and of all paths in P that were not contracted. Thus, there is a one-to-one correspondence between the paths in P and P′.
We make the following observation about P and P′.
Observation 1. Any subset of paths in P′ is edge-disjoint in G′ if and only if the corresponding subset of paths in P is edge-disjoint in G.
Since the application of the reduction rule strictly decreases the number of redundant edges, an iterative application of this rule yields an irreducible routing on a minor of the original graph.
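One application of the reduction rule is easy to express in code. The sketch below (our own illustration) works on the path system only, representing paths as vertex tuples and edges as 2-element frozensets; contracting e in the graph itself is analogous.

    def find_redundant_edge(paths):
        # Return an edge e whose set of covering paths is contained in the
        # covering set of some other edge, or None if no edge is redundant.
        cover = {}
        for idx, p in enumerate(paths):
            for e in map(frozenset, zip(p, p[1:])):
                cover.setdefault(e, set()).add(idx)
        for e, s in cover.items():
            for e2, s2 in cover.items():
                if e != e2 and s <= s2:
                    return e
        return None

    def contract(paths, e):
        # Contract e = {u, v}: rename v to u in every path and drop the
        # resulting repeated consecutive vertices.
        u, v = tuple(e)
        out = []
        for p in paths:
            q = [u if w == v else w for w in p]
            out.append(tuple(w for i, w in enumerate(q) if i == 0 or w != q[i - 1]))
        return out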
Theorem 7. Let G be a minor-closed class of multigraphs and let pG > 0. If for each graph G ∈ G and every
non-empty irreducible routing S of G with congestion c there exists a path in S of length at most pG , then the
average length of the paths in S is at most c · pG .
Proof. Take a path P_0 of length at most p_G. Contract all edges of P_0 in G and obtain a minor G′ ∈ G of G. For each path in S, contract all edges shared with P_0 to obtain a set S′ of paths. Remove P_0 along with all degenerated paths from S′; thus |S′| < |S|. Note that S′ is an irreducible routing of G′ with congestion c. We repeat this reduction procedure recursively on G′ and S′ until S′ is empty, which happens after at most |S| steps. At each step we decrease the total path length by at most c · p_G. Hence, the total length of the paths in S is at most |S| · c · p_G.
As a consequence of Theorem 7, we get the following result for forests.
Lemma 5. Let F be a forest and let S be a non-empty irreducible routing of F with congestion c. Then the
average path length in S is at most 2c.
Proof. We show that S contains a path of length at most 2. The lemma then follows immediately by applying Theorem 7.
Take any tree in F, root it at any node, and consider a leaf v of maximum depth. Let e_1 and e_2 be the first two edges on the path from v to the root. By the definition of an irreducible routing, the set of all paths covering e_1 is not a subset of the set of paths covering e_2; hence, e_1 is covered by a path which does not cover e_2. Since all other edges incident to e_1 end in a leaf, this path has length at most 2.
Note that the bound provided in Lemma 5 is tight up to a constant factor. Let c ≥ 1 be an arbitrary integer. Consider a graph that is a path of length c − 1 with a star of c − 1 leaves attached to one of its endpoints. The c − 1 paths of length c together with the 2c − 2 paths of length 1 form an irreducible routing with congestion c. The average path length is ((c − 1)c + (2c − 2))/(3c − 3) = (c + 2)/3.
4.2 Approximation Algorithm
Consider an instance (G, M) of MaxEDP, and let r be the size of a feedback vertex set R in G. Using our result of Sect. 3, we can efficiently compute a routing P with congestion c := O(log kr / log log kr) containing Ω(OPT) paths.
Below we argue how to use the routing P to obtain a feasible routing of cardinality Ω(|P|/(c^{1.5} √r)), which yields an overall approximation ratio of O(√r · log^{1.5} rk); that will prove Theorem 2.
Let r′ = √(r/c). We distinguish the following cases.
Case 1: At least half of the paths in P visit at most r′ nodes of the feedback vertex set R. Let P̄ ⊆ P be the subset of these paths. As long as there is an edge e not adjacent to R that is redundant in P̄, we iteratively apply the reduction rule from Sect. 4.1 on e. Let G′ be the obtained minor of G with forest F′ = G′ − R, and let P′ be the obtained set of (not necessarily simple) paths corresponding to P̄. By Observation 1, it suffices to show that there is a subset P′_0 ⊆ P′ of pairwise edge-disjoint paths of size |P′_0| = Ω(|P|/(cr′)) in order to obtain a feasible routing for (G, M) of size Ω(|P|/(cr′)).
To obtain P′_0, we first bound the total path length in P′. Removing R from G′ “decomposes” the set P′ into the set S := {S | S is a connected component of P ∩ F′ for some P ∈ P′} of subpaths lying in F′. Observe that S is an irreducible routing of F′ with congestion c, as the reduction rule is not applicable anymore. (Note that a single path in P′ may lead to many paths in S, which are considered distinct.) Thus, by Lemma 5, the average path length in S is at most 2c.
Let P be an arbitrary path in P′. Each edge on P that is not in a subpath in S is incident on a node in R, and each node in R is incident on at most two edges of P. Together with the fact that P visits at most r′ nodes in R and that the average length of the subpaths in S is at most 2c, we can upper bound the total path length Σ_{P ∈ P′} |P| by |P′| r′ (2c + 2). Let P′′ be the set of the |P′|/2 shortest paths in P′. Hence, each path in P′′ has length at most 4r′(c + 1).
We greedily construct a feasible solution P′_0 by iteratively picking an arbitrary path P from P′′, adding it to P′_0, and removing all paths from P′′ that share some edge with P (including P itself). We stop when P′′ is empty. As P′′ has congestion c, we remove at most 4r′c(c + 1) paths from P′′ per iteration. Thus, |P′_0| ≥ |P′′|/(4r′c(c + 1)) = Ω(|P|/(c^{1.5} √r)).
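The final greedy step can be sketched as follows (our own illustration; scanning the paths in order of length and keeping any path that is edge-disjoint from those already kept is equivalent to the pick-and-remove description above):

    def greedy_disjoint(paths):
        # Scan paths in order of increasing length; keep a path iff it is
        # edge-disjoint from all previously kept paths. This matches the
        # pick-and-remove step above, applied to the shortest paths first.
        chosen, used = [], set()
        for p in sorted(paths, key=len):
            edges = set(map(frozenset, zip(p, p[1:])))
            if edges and edges.isdisjoint(used):
                chosen.append(p)
                used |= edges
        return chosen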
Case 2: At least half of the paths in P visit at least r′ nodes of the feedback vertex set R. Let P′ be the subset of these paths. Consider each path in P′ as a flow of value 1/c and let f be the sum of all these flows. Note that f provides a feasible solution to the MaxEDP LP relaxation for (G, M) of value at least |P|/(2c).
Note that each such flow path contributes 1/c unit of flow to each of the at least r′ nodes in R it visits. Since every flow path in f has length at least r′, the total inflow of the nodes in R is at least |f| r′. By averaging, there must be a node v ∈ R of inflow at least r′|f|/r = |f|/r′. Let f′ be the subflow of f consisting of all flow paths visiting v. This subflow corresponds to a feasible solution (f′, x′) of the LP relaxation of value at least |f|/r′ ≥ |P|/(2cr′). Using Proposition 1, we can recover an integral feasible routing of size at least (1/12) Σ_i x′_i ≥ |P|/(24cr′) = Ω(|P|/(c^{1.5} √r)).
This completes the proof of Theorem 2.
5 Fixed-Parameter Algorithm for MaxNDP
We give a fixed-parameter algorithm for MaxNDP with run time (k + r)^{O(r)} · n, where r is the size of a minimum feedback vertex set in the given instance (G, M). A feedback vertex set R of size r can be computed in time 2^{O(r)} · O(n) [36]. By the matching assumption, each terminal in M is a leaf. We can thus assume that none of the terminals is contained in R.
Consider an optimal routing P of the given MaxNDP instance. Let M_R ⊆ M be the set of terminal pairs that are connected via P by a path that visits at least one node in R. Let P ∈ P be a path connecting a terminal pair (s_i, t_i) ∈ M_R. This path has the form (s_i, ..., r_1, ..., r_2, ..., r_ℓ, ..., t_i), where r_1, ..., r_ℓ are the nodes in R that are traversed by P in this order. The pairs (s_i, r_1), (r_ℓ, t_i) and (r_j, r_{j+1}) with j = 1, ..., ℓ − 1 are called essential pairs for P. A node pair is called essential if it is essential for some path in P. Let M_e be the set of essential pairs.
Let F be the forest that arises when deleting R from the input graph G. Let (u, v) be an essential pair. A u-v path P in G is said to realize (u, v) if all internal nodes of P lie in F. A set P′ of paths is said to realize M_e if every pair in M_e is realized by some path in P′ and if two paths in P′ can only intersect at their end nodes. Note that the optimal routing P induces a natural realization of M_e, by considering all maximal subpaths of paths in P whose internal nodes all lie in F. Conversely, for any realization P′ of M_e, we can concatenate paths in P′ to obtain a feasible routing that connects all terminal pairs in M_R. Therefore, we consider P′ (slightly abusing notation) also as a feasible routing for M_R.
In our algorithm, we first guess the set M_e (and thus M_R). Then, by a dynamic program, we construct two sets of paths, P_e and P_F, where P_e realizes M_e and P_F connects in F a subset of M̄_R := M \ M_R. In our algorithm, the set P_e ∪ P_F forms a feasible routing that maximizes |P_F| and routes all pairs in M_R.
(Recall that we consider the realization Pe of Me as a feasible routing for MR .)
Now assume that we know the set M_e. We will describe below a dynamic program that computes an optimum routing in time 2^{O(r)} (k + r)^{O(1)} n. For the sake of easier presentation, we only describe how to compute the cardinality of such a routing.
We make several technical assumptions that help to simplify the presentation. First, we modify the input
instance as follows. We subdivide every edge incident on a node in R by introducing a single new node on
this edge. Note that this yields an instance equivalent to the input instance. As a result, every neighbor of a
node in R that lies in F , that is, every node in NG (R), is a leaf in F . Moreover, the set R is an independent
set in G. Also recall that we assumed that every terminal is a leaf. Therefore, we may assume that R does not
contain any terminal. We also assume that forest F is a rooted tree, by introducing a dummy node (which
plays the role of the root) and arbitrarily connecting this node to every connected component of F by an
edge. In our dynamic program, we will take care that no path visits this root node. We also assume that F is
an ordered tree by introducing an arbitrary order among the children of each node.
For any node v, let F_v be the subtree of F rooted at v. Let c_v := deg_F(v) − 1 be the number of children of v, and let v_1, ..., v_{c_v} be the (ordered) children of v. Then, for i = 1, ..., c_v, let F_v^i denote the subtree of F_v induced by the union of v with the subtrees F_{v_1}, ..., F_{v_i}. For a leaf v, we define F_v^0 as F_v = {v}.
We introduce a dynamic programming table T. It contains an entry for every F_v^i and every subset M′_e of M_e. Roughly speaking, the value of such an entry is the solution to the subproblem where we restrict the forest to F_v^i and the set of essential pairs to M′_e. More precisely, table T has five parameters: parameters v and i describing F_v^i, parameter M′_e, and two more parameters u and b. Parameter u is either a terminal or a node in R, and b is in one of the three states free, to-be-used, or blocked. The value T[v, i, M′_e, u, b] is the maximum cardinality of a set P_F of paths with the following properties:
1. P_F is a feasible routing of some subset of M̄_R.
2. P_F is completely contained in F_v^i.
3. There is an additional set P_e of paths with the following properties:
(a) P_e is completely contained in F_v^i ∪ R and node-disjoint from the paths in P_F.
(b) P_e is a realization of M′_e ∪ {(u, v)} if b = to-be-used; else, it is a realization of M′_e.
(c) There is no path in P_e ∪ P_F visiting v if b = free.
If no such set P_F exists, then T[v, i, M′_e, u, b] is −∞.
Note that the parameter u is only relevant when b = to-be-used (otherwise, it can just be ignored). Observe that T[v, i, M′_e, u, blocked] ≥ T[v, i, M′_e, u, free] ≥ T[v, i, M′_e, u, to-be-used]. Below, we describe how to compute the entries of T in a bottom-up manner.
In the base case, v is a leaf. We set T[v, 0, ∅, u, free] = 0. Then we set T[v, 0, M′_e, u, blocked] = 0 if M′_e is either empty, consists of a single pair of nodes in R ∩ N_G(v), or consists of a single pair where one node is v and the other one is in R ∩ N_G(v). Finally, we set T[v, 0, ∅, u, to-be-used] = 0 if u = v or u is in R ∩ N_G(v). For all other cases where v is a leaf, we set T[v, i, M′_e, u, b] = −∞.
For the inductive step, we consider the two cases i = 1 and i > 1. Let i = 1. It holds that T[v, 1, M′_e, u, to-be-used] = T[v_1, c_{v_1}, M′_e, u, to-be-used], since the path in P_e realizing (u, v) has to start at a leaf node of F_v^1. It also holds that T[v, 1, M′_e, u, blocked] and T[v, 1, M′_e, u, free] are equal to T[v_1, c_{v_1}, M′_e, u, blocked].
Now let i > 1. In a high-level view, we guess which part of M′_e is realized in F_v^{i−1} ∪ R and which part is realized in F_{v_i} ∪ R. For this, we consider every tuple (M′_{e1}, M′_{e2}) such that M′_{e1} ⊎ M′_{e2} is a partition of M′_e. By our dynamic programming table, we find a tuple that maximizes our objective. In the following, we assume that we guessed (M′_{e1}, M′_{e2}) correctly. Let us consider the different cases of b in more detail.
For b = free, node v is not allowed to be visited by any path, in particular by any path in F_v^{i−1} ∪ R. Hence, T[v, i, M′_e, u, free] is equal to
T[v, i − 1, M′_{e1}, u, free] + T[v_i, c_{v_i}, M′_{e2}, u, blocked].
In the case of b = to-be-used, we have to realize (u, v) in F_v^i ∪ R. For this, there are two possibilities: either (u, v) is realized by a path in F_v^{i−1} ∪ R, or there is a realizing path that first goes through F_{v_i} ∪ R and then reaches v via the edge (v_i, v). Hence, for the first case we consider
T[v, i − 1, M′_{e1}, u, to-be-used] + T[v_i, c_{v_i}, M′_{e2}, u, blocked],
and for the second case we consider
T[v, i − 1, M′_{e1}, u, free] + T[v_i, c_{v_i}, M′_{e2}, u, to-be-used].
Maximizing over both, we obtain T[v, i, M′_e, u, to-be-used].
For the case of b = blocked, we consider two subcases. In the first subcase, there is no path in P_e ∪ P_F going through edge (v_i, v); hence, we get
T[v, i − 1, M′_{e1}, u, blocked] + T[v_i, c_{v_i}, M′_{e2}, u, blocked].
In the second subcase, there is a path P in P_e ∪ P_F going through edge (v_i, v). Since P connects two leaves in F_v^i, a part of P is in F_v^{i−1} ∪ R and the other part is in F_{v_i} ∪ R. If P ∈ P_e, then it realizes a pair of M′_e. Hence, for every pair (u_1, u_2) ∈ M′_e, we have to consider the term
T[v, i − 1, M′_{e1} − (u_1, u_2), u_1, to-be-used] + T[v_i, c_{v_i}, M′_{e2} − (u_1, u_2), u_2, to-be-used]
and the symmetric term where we swap u_1 and u_2. If P ∈ P_F, then it realizes a terminal pair of M̄_R. Hence, for every pair (u_1, u_2) ∈ M̄_R we get the term
1 + T[v, i − 1, M′_{e1}, u_1, to-be-used] + T[v_i, c_{v_i}, M′_{e2}, u_2, to-be-used]
and the symmetric term where we swap u_1 and u_2. Note that we count the path realizing (u_1, u_2) in our objective. Maximizing over all the terms of the two subcases, we obtain T[v, i, M′_e, u, blocked].
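The recurrences above repeatedly maximize over all ordered partitions (M′_{e1}, M′_{e2}) of M′_e. As a small illustration (our own, not from the paper), this enumeration, which generates all 2^{|M′_e|} ordered partitions, can be sketched as follows:

    from itertools import combinations

    def partitions_into_two(essential_pairs):
        # Yield all ordered partitions (A, B) of the given set: exactly the
        # tuples (M'_e1, M'_e2) that the recurrences maximize over.
        items = list(essential_pairs)
        full = frozenset(items)
        for k in range(len(items) + 1):
            for subset in combinations(items, k):
                a = frozenset(subset)
                yield a, full - a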
Let us analyze the run time of the algorithm described in Sect. 5. In order to guess M_e, we enumerate all potential sets of essential pairs. There are at most (2k + r + 1)^{2r} candidate sets to consider, since each pair contains a node in R, and each node in R is paired with at most two other nodes, each of which is either a terminal or another node in R. For each particular guess M_e, we run the above dynamic program. The number of entries in T, as specified by the five parameters v, i, M′_e, u and b, is for each fixed M_e at most (Σ_{v ∈ V(F)} deg_F(v)) × 2^{2r} × (2k + r) × 3. For the computation of each such entry, we consider all combinations of at most 2^{2r} partitions of M′_e with either at most r essential pairs in M′_e, or with at most k terminal pairs in M̄_R. Altogether, this gives a run time of (8k + 8r)^{2r+2} · O(n). This finishes the proof of Theorem 3.
6 Parameterized Intractability of MaxNDP for Parameter r
In this section we show that MaxNDP is W[1]-hard parameterized by the size r of a feedback vertex set. This reduction was originally devised for the parameter treedepth by Ene et al. [21]; here we notice that the same reduction also works for the parameter r. (Both treedepth and the feedback vertex set number are restrictions of treewidth, but they are incomparable to each other.)
For the sake of completeness, we include the reduction here and argue about the feedback vertex set number of the reduced graph. The reduction is from the W[1]-hard Multicolored Clique problem [22], where, given a graph G, an integer k, and a partition V = V^1 ⊎ V^2 ⊎ ... ⊎ V^k, we are to check if there exists a k-clique in G with exactly one vertex in every set V^i. By adding dummy vertices, we can assume that |V^i| = n for every i = 1, ..., k, and that n, k ≥ 2.
Construction. Given an instance (G, k, (V^i)_{i=1}^k) of Multicolored Clique, we aim at constructing an equivalent instance (H, M, ℓ) of MaxNDP.
We start with constructing, for every set V^i, a gadget W^i as follows. First, for every v ∈ V^i we construct a (k − 1)-vertex path X_v^i on vertices x^i_{v,1}, x^i_{v,2}, ..., x^i_{v,i−1}, x^i_{v,i+1}, ..., x^i_{v,k}. Second, we select an arbitrary vertex u^i ∈ V^i. Third, for every v ∈ V^i \ {u^i}, we add a vertex s^i_v adjacent to the first vertices of X_v^i and X^i_{u^i} (i.e., x^i_{v,1} and x^i_{u^i,1} if i > 1, or x^i_{v,2} and x^i_{u^i,2} if i = 1), a vertex t^i_v adjacent to the last vertices of X_v^i and X^i_{u^i} (i.e., x^i_{v,k} and x^i_{u^i,k} if i < k, or x^i_{v,k−1} and x^i_{u^i,k−1} if i = k), and make (s^i_v, t^i_v) a terminal pair. This concludes the description of the gadget W^i. By M_st we denote the set of terminal pairs constructed in this step.
To encode adjacencies in G, we proceed as follows. For every pair 1 ≤ i < j ≤ k, we add a vertex p_{i,j} adjacent to all vertices x^i_{v,j} for v ∈ V^i and all vertices x^j_{u,i} for u ∈ V^j. For every edge vu ∈ E(G) with v ∈ V^i and u ∈ V^j, we add a terminal pair (x^i_{v,j}, x^j_{u,i}). Let M_x be the set of terminal pairs constructed in this step; we have M = M_st ∪ M_x.
Finally, we set the required number of paths ℓ := k(n − 1) + (k choose 2). This concludes the description of the instance (H, M, ℓ).
From a clique to disjoint paths. Assume that the input Multicolored Clique instance is a “yes”-instance, and let {v^i | i = 1, ..., k} be a clique in G with v^i ∈ V^i for i = 1, ..., k. We construct a family of ℓ vertex-disjoint paths as follows. First, for i = 1, ..., k and every v ∈ V^i \ {u^i}, we route a path from s^i_v to t^i_v through the path X_v^i if v ≠ v^i, and through the path X^i_{u^i} if v = v^i. Note that in this step we have created k(n − 1) vertex-disjoint paths connecting terminal pairs, and in every gadget W^i the only unused vertices are the vertices on the path X^i_{v^i}. To construct the remaining (k choose 2) paths, for every pair 1 ≤ i < j ≤ k we take the 3-vertex path from x^i_{v^i,j} to x^j_{v^j,i} through p_{i,j}; note that the assumption v^i v^j ∈ E(G) ensures that (x^i_{v^i,j}, x^j_{v^j,i}) is indeed a terminal pair in M.
From disjoint paths to a clique. In the other direction, let P be a family of ℓ vertex-disjoint paths connecting terminal pairs in H. Let P_st ⊆ P be the set of paths connecting terminal pairs from M_st, and similarly define P_x. First, observe that the set P = {p_{i,j} | 1 ≤ i < j ≤ k} separates every terminal pair from M_x. Hence, every path from P_x contains at least one vertex from P. Since |P| = (k choose 2), we have |P_x| ≤ (k choose 2), and, consequently, |P_st| ≥ ℓ − (k choose 2) = k(n − 1) = |M_st|. We infer that P_st routes all terminal pairs in M_st without using any vertex of P, while P_x routes (k choose 2) pairs from M_x, and every path from P_x contains exactly one vertex from P.
Since the paths in P_st cannot use any vertex in P, every such path needs to be contained inside one gadget W^i. Furthermore, observe that a shortest path between terminals s^i_v and t^i_v inside W^i is either X^i_{u^i} or X^i_v, prolonged with the terminals at the endpoints, and thus contains k + 1 vertices. Furthermore, a shortest path between two terminals in M_x contains three vertices. We infer that the total number of vertices on paths in P is at least
|P_st| · (k + 1) + |P_x| · 3 = k(n − 1)(k + 1) + 3·(k choose 2) = k(n(k − 1) + 2(n − 1)) + (k choose 2) = |V(H)|.
We infer that every path in P_st consists of k + 1 vertices, and every path in P_x consists of three vertices. In particular, for i = 1, ..., k and v ∈ V^i \ {u^i}, the path in P_st that connects s^i_v and t^i_v goes either through X^i_v or X^i_{u^i}. Consequently, for i = 1, ..., k there exists a vertex v^i ∈ V^i such that the vertices of W^i that do not lie on any path from P_st are exactly the vertices on the path X^i_{v^i}.
We claim that {v^i | i = 1, ..., k} is a clique in G. To this end, consider a pair 1 ≤ i < j ≤ k. Since |P_x| = (k choose 2), there exists a path in P_x that goes through p_{i,j}. Moreover, this path has exactly three vertices. Since the only neighbours of p_{i,j} that are not used by paths from P_st are x^i_{v^i,j} and x^j_{v^j,i}, we infer that (x^i_{v^i,j}, x^j_{v^j,i}) ∈ M and, consequently, v^i v^j ∈ E(G). This concludes the proof of the correctness of the construction.
Bounding the feedback vertex set number. We are left with a proof that H has bounded feedback vertex set number. To this end, first observe that H − P contains k connected components, namely the gadgets W^i. Second, observe that the deletion of the endpoints of the path X^i_{u^i} from the gadget W^i breaks W^i into connected components that are paths on at most k + 1 vertices. Consequently, H has a feedback vertex set R consisting of P and {x^i_{u^i,1}, x^i_{u^i,k} ∈ V(W^i) | i = 1, ..., k}, of size |R| = O(k²). This finishes the proof of Theorem 4.
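For concreteness, the following sketch (our own illustration, with hypothetical tuple-based vertex names; it assumes n, k ≥ 2) builds the instance (H, M, ℓ) from a Multicolored Clique instance as an adjacency dict:

    def build_hardness_instance(n, k, classes, edges):
        # Build (H, M, l) from a Multicolored Clique instance. classes[i]
        # lists the n vertices of V^i; edges is an iterable of pairs (v, u).
        adj, pairs = {}, []
        def add_edge(a, b):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        x = lambda i, v, j: ("x", i, v, j)        # vertex x^i_{v,j}
        for i in range(k):
            cols = [j for j in range(k) if j != i]
            for v in classes[i]:                   # the (k-1)-vertex path X^i_v
                for a, b in zip(cols, cols[1:]):
                    add_edge(x(i, v, a), x(i, v, b))
            u = classes[i][0]                      # the special vertex u^i
            for v in classes[i][1:]:
                s, t = ("s", i, v), ("t", i, v)
                add_edge(s, x(i, v, cols[0])); add_edge(s, x(i, u, cols[0]))
                add_edge(t, x(i, v, cols[-1])); add_edge(t, x(i, u, cols[-1]))
                pairs.append((s, t))
        for i in range(k):                         # connector vertices p_{i,j}
            for j in range(i + 1, k):
                p = ("p", i, j)
                for v in classes[i]:
                    add_edge(p, x(i, v, j))
                for w in classes[j]:
                    add_edge(p, x(j, w, i))
        colour = {v: i for i in range(k) for v in classes[i]}
        for v, w in edges:                         # terminal pairs M_x
            i, j = colour[v], colour[w]
            if i != j:
                pairs.append((x(i, v, j), x(j, w, i)))
        l = k * (n - 1) + k * (k - 1) // 2
        return adj, pairs, l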
7 Hardness of Edge-Disjoint Paths in Almost-Forests
In this section we show that EDP (and hence MaxEDP) is NP-hard already in graphs that are almost
forests, namely, in graphs that are forests after deleting two nodes. That is, we prove Theorem 5.
Proof of Theorem 5. We first show NP-hardness of EDP for r = 2. We reduce from the problem Edge
3-Coloring in cubic graphs, which is NP-hard [26]. Given a cubic graph H, we construct a complete bipartite
graph G, where one of the two partite classes of V (G) consists of three nodes {v1 , v2 , v3 }, and the other
partite class consists of V (H). As terminal pairs, we create the set M = {(s, t) | {s, t} ∈ E(H)}; in words,
we want to connect a pair of nodes by a path in G if and only if they are connected by an edge in H. This
completes the construction of the instance (G, M) of MaxEDP. Notice that G has a feedback vertex set of
size r = 2, since removing any size-2 subset of {v1 , v2 , v3 } from G yields a forest.
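The r = 2 construction is easy to generate explicitly (our own sketch; adjacency dicts as elsewhere):

    def edp_instance_from_cubic(h_vertices, h_edges):
        # Complete bipartite graph between {v1, v2, v3} and V(H); one
        # terminal pair per edge of H, to be routed via some center node.
        centers = ["v1", "v2", "v3"]
        adj = {c: set(h_vertices) for c in centers}
        for w in h_vertices:
            adj[w] = set(centers)
        pairs = [tuple(e) for e in h_edges]
        return adj, pairs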
Regarding correctness of the reduction, we show that H is 3-edge-colorable if and only if all pairs in M
can be routed in G.
In the forward direction, suppose that H is 3-edge-colorable. Let ϕ : E(H) → {1, 2, 3} be a proper 3-edge-coloring of H. For c = 1, 2, 3, let E_c ⊆ E(H) be the set of edges that receive color c under ϕ. Then there is a routing in G that routes all terminal pairs {(s, t) ∈ M | {s, t} ∈ E_c} exclusively via the node v_c (and thus via paths of length 2). Notice that this routing indeed yields edge-disjoint paths: if there are distinct vertices s, t_1, t_2 ∈ V(H) and edges e_1 = {s, t_1}, e_2 = {s, t_2} ∈ E(H), then e_1, e_2 receive distinct colors under ϕ (as ϕ is proper), and so the two terminal pairs {s, t_1}, {s, t_2} are routed via distinct nodes v_{c_1}, v_{c_2} ∈ {v_1, v_2, v_3}, and thus also via edge-disjoint paths.
In the backward direction, suppose that all terminal pairs in M can be routed in G. Since H is cubic,
any node s ∈ V (H) is contained in three terminal pairs. Therefore, no path of the routing can have a node
in V(H) as an internal node, and thus all paths in the routing have length 2. Then this routing naturally corresponds to a proper 3-edge-coloring ϕ of H, where any terminal pair {s, t} routed via v_c means that we color the edge {s, t} ∈ E(H) with color c under ϕ.
In order to show NP-hardness of MaxEDP for r = 1, we also reduce from Edge 3-Coloring in cubic
graphs and perform a similar construction as described above: This time, we construct a bipartite graph G
with one subset of the partition being {v1 , v2 }, the other being V (H), and the set M of terminal pairs being
again specified by the edges of H. This completes the reduction. The resulting graph G has a feedback vertex
set of size r = 1.
We claim that H is 3-edge-colorable if and only if we can route n = |V(H)| pairs in G.
In the forward direction, suppose that H is 3-edge-colorable. Let ϕ : E(H) → {1, 2, 3} be a proper 3-edgecoloring of H. For c = 1, 2, 3, let Ec ⊆ E(H) be the set of edges that receive color c under ϕ. Then there is a
routing in G that routes all f {(s, t) ∈ M | {s, t} ∈ Ec } exclusively via the node vc (and thus via paths of
length 2) for the colors c = 1, 2. (The terminals corresponding to edges receiving color 3 remain unrouted.)
The reasoning that the resulting routing is feasible is analogous to the case of r = 2. For each of the n terminals, exactly two of its three terminal pairs are routed; since every routed pair is counted at both of its endpoints, precisely 2n/2 = n terminal pairs are routed overall.
In the backward direction, suppose that n terminal pairs in M can be routed in G. Any terminal v in G is a node in V (H) and therefore has degree two in G, so at most two paths can be routed
for v. As n terminal pairs are realized, this also means that exactly two paths are routed for each terminal.
Hence, none of the paths in the routing has length more than two. Otherwise, it would contain an internal
node in V (H), which then could not be part of two other paths in the routing. Then this routing naturally
corresponds to a partial edge-coloring of H, where any terminal pair {s, t} routed via c means that we
color the edge {s, t} ∈ E(H) with color c. Since each terminal v in V (H) is involved in exactly two paths
in the routing, exactly one terminal pair for v remains unrouted. Hence, exactly one edge incident on v
in H remains uncolored in the partial coloring. Since exactly one edge at each vertex remains uncolored, the uncolored edges form a perfect matching; coloring them all with color 3 thus yields a proper 3-edge-coloring.
Thus, we almost close the complexity gap for EDP with respect to the size of a minimum feedback
vertex set, only leaving the complexity of the case r = 1 open. We conjecture that this case can be solved in
polynomial time.
References
[1] I. Adler, S. G. Kolliopoulos, P. K. Krause, D. Lokshtanov, S. Saurabh, and D. Thilikos. Tight bounds
for linkages in planar graphs. In Proc. ICALP 2011, volume 6755 of Lecture Notes Comput. Sci., pages
110–121, 2011.
[2] M. Andrews. Approximation algorithms for the edge-disjoint paths problem via Räcke decompositions.
In Proc. FOCS 2010, pages 277–286, 2010.
[3] M. Andrews, J. Chuzhoy, V. Guruswami, S. Khanna, K. Talwar, and L. Zhang. Inapproximability of
edge-disjoint paths and low congestion routing on undirected graphs. Combinatorica, 30(5):485–520,
2010.
[4] Y. Aumann and Y. Rabani. Improved bounds for all optical routing. In Proc. SODA 1995, pages
567–576, 1995.
[5] Y. Aumann and Y. Rabani. An O(log k) approximate min-cut max-flow theorem and approximation
algorithm. SIAM J. Comput., 27(1):291–301, 1998.
[6] B. Awerbuch, R. Gawlick, T. Leighton, and Y. Rabani. On-line admission control and circuit routing for
high performance computing and communication. In Proc. FOCS 1994, pages 412–423, 1994.
[7] V. Bafna, P. Berman, and T. Fujito. A 2-approximation algorithm for the undirected feedback vertex
set problem. SIAM J. Discrete Math., 12(3):289–297 (electronic), 1999.
[8] H. L. Bodlaender, S. Thomassé, and A. Yeo. Kernel bounds for disjoint cycles and disjoint paths. Theoret.
Comput. Sci., 412(35):4570–4578, 2011.
[9] A. Z. Broder, A. M. Frieze, S. Suen, and E. Upfal. Optimal construction of edge-disjoint paths in random
graphs. SIAM J. Comput., 28(2):541–573 (electronic), 1999.
[10] A. Z. Broder, A. M. Frieze, and E. Upfal. Existence and construction of edge-disjoint paths on expander
graphs. SIAM J. Comput., 23(5):976–989, 1994.
[11] C. Chekuri and A. Ene. Poly-logarithmic approximation for maximum node disjoint paths with constant
congestion. In Proc. SODA 2013, pages 326–341, 2013.
[12] C. Chekuri, S. Khanna, and F. B. Shepherd. An O(√n) approximation and integrality gap for disjoint paths and unsplittable flow. Theory Comput., 2:137–146, 2006.
[13] C. Chekuri, S. Khanna, and F. B. Shepherd. A note on multiflows and treewidth. Algorithmica,
54(3):400–412, 2009.
[14] C. Chekuri, M. Mydlarz, and F. B. Shepherd. Multicommodity demand flow in a tree and packing
integer programs. ACM Trans. Algorithms, 3(3):Art. 27, 23, 2007.
[15] C. Chekuri, G. Naves, and F. B. Shepherd. Maximum edge-disjoint paths in k-sums of graphs. In Proc.
ICALP 2013, volume 7965 of Lecture Notes Comput. Sci., pages 328–339, 2013.
[16] C. Chekuri, G. Naves, and F. B. Shepherd. Maximum edge-disjoint paths in k-sums of graphs. CoRR,
abs/1303.4897, 2013.
[17] C. Chekuri, F. B. Shepherd, and C. Weibel. Flow-cut gaps for integer and fractional multiflows. J.
Comb. Theory, Ser. B, 103(2):248–273, 2013.
[18] J. Chuzhoy. Routing in undirected graphs with constant congestion. In Proc. STOC 2012, pages 855–874,
2012.
[19] J. Chuzhoy, D. H. K. Kim, and S. Li. Improved approximation for node-disjoint paths in planar graphs.
In Proc. STOC 2016, 2016. to appear.
[20] J. Chuzhoy and S. Li. A polylogarithmic approximation algorithm for edge-disjoint paths with congestion
2. In Proc. FOCS 2012, pages 233–242, 2012.
[21] A. Ene, M. Mnich, M. Pilipczuk, and A. Risteski. On routing disjoint paths in bounded treewidth graphs.
In Proc. SWAT 2016, LIPIcs, 2016. to appear.
[22] M. R. Fellows, D. Hermelin, F. Rosamond, and S. Vialette. On the parameterized complexity of
multiple-interval graph problems. Theoret. Comput. Sci., 410(1):53–61, 2009.
[23] A. M. Frieze. Edge-disjoint paths in expander graphs. SIAM J. Comput., 30(6):1790–1801 (electronic),
2001.
[24] N. Garg, V. V. Vazirani, and M. Yannakakis. Primal-dual approximation algorithms for integral flow
and multicut in trees. Algorithmica, 18(1):3–20, 1997.
[25] O. Günlük. A new min-cut max-flow ratio for multicommodity flows. SIAM J. Discrete Math., 21(1):1–15,
2007.
[26] I. Holyer. The NP-completeness of edge-coloring. SIAM J. Comput., 10(4):718–720, 1981.
[27] R. Karp. On the computational complexity of combinatorial problems. Networks, 5:45–68, 1975.
[28] K. Kawarabayashi and P. Wollan. A shorter proof of the graph minor algorithm: the unique linkage
theorem. In Proc. STOC 2010, pages 687–694, 2010.
[29] K.-i. Kawarabayashi and Y. Kobayashi. Breaking O(n1/2 )-approximation algorithms for the edge-disjoint
paths problem with congestion two. In Proc. STOC 2011, pages 81–88, 2011.
[30] J. Kleinberg and R. Rubinfeld. Short paths in expander graphs. In Proc. FOCS 1996, pages 86–95, 1996.
[31] J. Kleinberg and É. Tardos. Disjoint paths in densely embedded graphs. In Proc. FOCS 1995, pages
52–61, Oct 1995.
[32] J. Kleinberg and É. Tardos. Approximations for the disjoint paths problem in high-diameter planar
networks. J. Comput. System Sci., 57(1):61–73, 1998.
[33] S. Kolliopoulos and C. Stein. Approximating disjoint-path problems using packing integer programs.
Math. Prog., 99(1):63–87, 2004.
[34] T. Leighton and S. Rao. Multicommodity max-flow min-cut theorems and their use in designing
approximation algorithms. J. ACM, 46(6):787–832, 1999.
[35] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications.
Combinatorica, 15(2):215–245, 1995.
[36] D. Lokshtanov, M. S. Ramanujan, and S. Saurabh. Linear time parameterized algorithms for subset
feedback vertex set. In Proc. ICALP 2015, pages 935–946, 2015.
[37] T. Nishizeki, J. Vygen, and X. Zhou. The edge-disjoint paths problem is NP-complete for series-parallel
graphs. Discrete Appl. Math., 115(1-3):177–186, 2001.
[38] P. Raghavan and C. D. Tompson. Randomized rounding: A technique for provably good algorithms and
algorithmic proofs. Combinatorica, 7(4):365–374, 1987.
[39] S. Rao and S. Zhou. Edge disjoint paths in moderately connected graphs. SIAM J. Comput., 39(5):1856–
1887, 2010.
[40] N. Robertson and P. D. Seymour. Graph minors. XIII. The disjoint paths problem. J. Combin. Theory
Ser. B, 63(1):65–110, 1995.
[41] P. Scheffler. A practical linear time algorithm for disjoint paths in graphs with bounded tree-width.
Technical Report TR 396/1994, FU Berlin, Fachbereich 3 Mathematik, 1994.
[42] L. Séguin-Charbonneau and F. B. Shepherd. Maximum edge-disjoint paths in planar graphs with
congestion 2. In Proc. FOCS 2011, pages 200–209, 2011.
Error Performance of Wireless Powered Cognitive Relay Networks with Interference Alignment
arXiv:1712.04744v1 [] 13 Dec 2017
Sultangali Arzykulov∗ , Galymzhan Nauryzbayev, Theodoros A. Tsiftsis∗
and Mohamed Abdallah‡
∗School of Engineering, Nazarbayev University, Astana, Kazakhstan
‡Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
Email: {sultangali.arzykulov, theodoros.tsiftsis}@nu.edu.kz,
nauryzbayevg@gmail.com, moabdallah@hbku.edu.qa
Abstract
This paper studies a two-hop decode-and-forward underlay cognitive radio system with interference
alignment technique. An energy-constrained relay node harvests the energy from the interference signals
through a power-splitting (PS) relaying protocol. Firstly, the beamforming matrices design for the
primary and secondary networks is demonstrated. Then, a bit error rate (BER) performance of the system
under perfect and imperfect channel state information (CSI) scenarios for PS protocol is calculated.
Finally, the impact of the CSI mismatch parameters on the BER performance is simulated.
Keywords
Decode-and-forward (DF) relaying, energy harvesting (EH), cognitive radio (CR), interference
alignment (IA), wireless power transfer, bit error rate (BER).
I. INTRODUCTION
COGNITIVE RADIO (CR) has attracted significant attention by being an intelligent
technology that can utilize radio spectrum and increase the spectral efficiency in wireless
networks [1]. The main idea of CR is to provide secondary users (SUs), which are unlicensed
nodes, with the possibility to communicate in a licensed spectrum on the condition that primary
users (PUs) should not receive harmful interference [1], [2]. There are three types of spectrum
sharing CR paradigms, namely, interweave, underlay and overlay [1]. The interference mitigation
at receivers can be managed by a promising technique named interference alignment (IA). IA
provides free signaling dimensions by aligning the interference into one subspace [3], [4]. IA can
also be applied in CR to cancel interference at primary and secondary receivers. A mitigation of
the severe effect of interference at primary receivers allows secondary transmitters to increase
their transmit power, which consequently leads to an improvement in the performance of the
secondary network (SN) [5]. In [6], degrees of freedom (DoFs) of the network were increased
by implementing an IA technique in a multiple-input multiple-output (MIMO) relay CR.
Another promising technique, known as energy harvesting (EH), which harvests energy from
ambient signals through time-switching (TS) and power-splitting (PS), was introduced in [7], [8].
IA and simultaneous wireless information and power transfer (SWIPT) in MIMO networks were
jointly studied in [9], where an improvement of the performance of the network was analyzed
through a dynamical selection of users as EH or information decoding (ID) terminals. The EHbased CR network was studied in [10], where authors developed a spectrum sensing policy in TS
mode to guarantee EH possibility for SUs from primary signals. An identical system was studied
in [11], where an optimal information and energy cooperation methods between primary and
secondary networks was investigated. Finally, the work in [12] represented a resource allocation
method in EH-based CR network with imperfect channel state information (CSI) cases.
In this paper, we study an underlay IA-based CR with an energy-restricted relay operating
in PS mode. The performance of both the primary network (PN) and SN after interference
mitigation is analyzed. In particular, a bit error rate (BER) performance of the proposed system
model is calculated for PS relaying protocol under different imperfect CSI cases.
II. SYSTEM MODEL
The proposed system model is composed of a PN with two pairs of PUs and a SN with three
SUs. Each primary transmitter (Ti ) transmits to its corresponding receiver by interfering with
another primary receiver (Rj ) and relay as shown in Fig. 1. The SN is composed of a source
(S), a relay (R) and a destination (D) node. An energy-constrained R operates in decode-and-forward (DF) half-duplex mode by relaying the signal from S to D in two time periods. R uses the harvested energy from interference signals as its transmit power, while S and D are supplied with stationary power sources. Also, it is assumed that D is located far from the PN and does
not receive any interference. All nodes of the network are assumed to have MIMO antennas.
We also assume that all interference at Rj is canceled by the IA technique; thus, S and R are
not restricted by the level of transmit power. Another assumption is that the channels remain
constant during a transmission block time T , but vary independently from one block to another.
The channel links between the nodes are defined as follows. For the channels of the PN nodes, $H_{j,i}^{[k]} \in \mathbb{C}^{N_j \times M_i}$, ∀i, j ∈ {1, 2}, denotes the channel between $R_j$ and $T_i$, where the superscript k indicates the time period in which the data transmission occurs. $N_j$ and $M_i$ are the numbers of antennas at $R_j$ and $T_i$, respectively. For the channels of the SN nodes, $H_{R,S}$ and $H_{D,R}$ denote the
channel links related to the S-R and R-D transmissions while the inter-network channels are
given by Hj,R ∈ CNj ×NR , Hj,S ∈ CNj ×NS and HR,i ∈ CNR ×Mi , where NS , NR and ND denote
the numbers of antennas at S, R and D, respectively. Each entry of any matrix H is assumed to
be independent and identically distributed (i.i.d.) random variables according to CN (0, 1), where
CN (0, 1) denotes the complex normal distribution with zero mean and unit variance. Also, note
that each channel link is characterized by the corresponding distance and path-loss exponent
denoted by dm,n and τm,n , ∀m ∈ {1, 2, R, D}, ∀n ∈ A = {1, 2, S, R}, respectively. We assume
that each node is deployed with N antennas (Mi = Nj = NS = NR = ND = N) and IA is
exploited at R and Rj , accordingly. Therefore, each transmit node l with the power Pl employs
a precoding matrix $V_l \in \mathbb{C}^{(M_l \text{ or } N_l) \times f_l}$, with $\mathrm{trace}\{V_l V_l^H\} = 1$, ∀l ∈ A, where $f_l$ is the number
of the transmitted data streams. Then, each receive node, except D, employs the interference
suppression matrix $U_t \in \mathbb{C}^{N_t \times f_t}$, ∀t ∈ {1, 2, R}, where $f_t$ is the number of data streams that
needs to be decoded at the corresponding receiver. Thus, the received signal at Rj , in two time
[Fig. 1. IA and EH-based CRN with two PUs and one SU sharing the spectrum simultaneously.]
periods, can be written as
$$y_j^{[k]} = \underbrace{\sqrt{\frac{P_j}{d_{j,j}^{\tau_{j,j}}}}\, U_j^{[k]H} H_{j,j}^{[k]} V_j^{[k]} s_j}_{\text{desired signal}} + \underbrace{\sqrt{\frac{P_i}{d_{j,i}^{\tau_{j,i}}}}\, U_j^{[k]H} H_{j,i}^{[k]} V_i^{[k]} s_i}_{\text{interference from PN},\; i \neq j} + \underbrace{A^{[k]}}_{\text{interference from SN}} + \tilde{n}_j^{[k]}, \quad k \in \{1, 2\}, \qquad (1)$$
where the effective noise term $\tilde{n}_j^{[k]} = U_j^{[k]H} n_j^{[k]}$ is a zero-mean additive white Gaussian noise (AWGN) vector, with $E\{\tilde{n}_j^{[k]} \tilde{n}_j^{[k]H}\} = \sigma_{\tilde{n}_j}^2 I$, where E{·} denotes the expectation operator. Moreover, we have $E\{s_l s_l^H\} = I$, with l ∈ A, since $s_l$ is assumed to be a vector consisting of symbols generated as i.i.d. Gaussian inputs. Finally, the interference from the SN to $R_j$ can be determined as
$$A^{[k]} = \begin{cases} \sqrt{\dfrac{P_S}{d_{j,S}^{\tau_{j,S}}}}\, U_j^{[k]H} H_{j,S} V_S s_S, & \text{if } k = 1,\\[2mm] \sqrt{\dfrac{P_R}{d_{j,R}^{\tau_{j,R}}}}\, U_j^{[k]H} H_{j,R} V_R s_R, & \text{if } k = 2. \end{cases} \qquad (2)$$
The received signal at R, within the S-R transmission period, can be written as
$$y_R^{[1]} = \underbrace{\sqrt{\frac{P_S}{d_{R,S}^{\tau_{R,S}}}}\, U_R^H H_{R,S} V_S s_S}_{\text{desired signal}} + \underbrace{\sum_{i=1}^{2} \sqrt{\frac{P_i}{d_{R,i}^{\tau_{R,i}}}}\, U_R^H H_{R,i} V_i s_i}_{\text{interference from PN}} + \tilde{n}_R, \qquad (3)$$
where $\tilde{n}_R = U_R^H n_R$ is the effective noise after interference suppression beamforming at the relay.
Then, R decodes and forwards the desired signal $s_S$ to D within the R-D transmission period. Thus, D obtains the following signal
$$y_D = \sqrt{\frac{P_R}{d_{D,R}^{\tau_{D,R}}}}\, H_{D,R} V_R s_R + n_D, \qquad (4)$$
where $n_D$ is the AWGN vector, with $E\{n_D n_D^H\} = \sigma_D^2 I$.
The interference at the receive nodes can be assumed to be completely canceled if the following conditions are satisfied for $R_j$ [13], [14]:
$$U_j^{[k]H} H_{j,i}^{[k]} V_i^{[k]} = 0, \quad \forall i, j \in \{1, 2\},\ i \neq j, \qquad (5a)$$
$$U_j^{[k]H} J^{[k]} = 0, \quad \text{where } J^{[k]} = \begin{cases} H_{j,S} V_S, & \text{if } k = 1,\\ H_{j,R} V_R, & \text{if } k = 2, \end{cases} \qquad (5b)$$
$$\mathrm{rank}\left( U_j^{[k]H} H_{j,j}^{[k]} V_j^{[k]} \right) = f_j, \quad \forall j \in \{1, 2\}, \qquad (5c)$$
and for R as
$$U_R^H H_{R,i} V_i^{[1]} = 0, \quad \forall i \in \{1, 2\}, \qquad (6a)$$
$$\mathrm{rank}\left( U_R^H H_{R,S} V_S \right) = f_S. \qquad (6b)$$
A. Beamforming Design
If the space of the desired signal is linearly independent of that of the interference signal, then the desired signal can easily be decoded from the received one. Hence, the precoding matrices should be designed such that the interference signals at each receiver span a common subspace. Thus, in the first time period, the interference at $R_1$, $R_2$ and R can be spanned as $\mathrm{span}(H_{1,2}^{[1]} V_2^{[1]}) = \mathrm{span}(H_{1,S} V_S)$, $\mathrm{span}(H_{2,1}^{[1]} V_1^{[1]}) = \mathrm{span}(H_{2,S} V_S)$ and $\mathrm{span}(H_{R,1} V_1^{[1]}) = \mathrm{span}(H_{R,2} V_2^{[1]})$, respectively, where span(X) is the vector space spanned by the column vectors of X. After spanning all interference, the precoding matrices $V_1^{[1]}$, $V_2^{[1]}$ and $V_S$ can be obtained as [15]
$$V_2^{[1]} = (H_{R,2})^{-1} H_{R,1} V_1^{[1]}, \qquad (7a)$$
$$V_S = (H_{2,S})^{-1} H_{2,1}^{[1]} V_1^{[1]}, \qquad (7b)$$
where $V_1^{[1]}$ is derived using $V_1^{[1]} = \mathrm{eig}(Z)$, with $Z = (H_{R,1})^{-1} H_{R,2} (H_{1,2}^{[1]})^{-1} H_{1,S} (H_{2,S})^{-1} H_{2,1}^{[1]}$, and eig(X) are the eigenvectors of X.
The interference suppression matrices $U_j^{[k]}$ during the two time slots need to be orthogonal to the interference at $R_j$ to meet the conditions in (5). Similarly, $U_R$ needs to be orthogonal to the interference at R in the S-R transmission period. These matrices can be written as
$$U_j^{[k]} = \mathrm{null}\left( \left[ H_{j,i}^{[k]} V_i^{[k]} \right]^H \right), \quad j \neq i, \qquad (8a)$$
$$U_R = \mathrm{null}\left( \left[ H_{R,1} V_1^{[1]} \right]^H \right). \qquad (8b)$$
In the 2nd time period, S stays silent, while R establishes its own communication. The design
of precoding and interference suppression matrices for this time period can be done by following
the same steps as in (7)-(8).
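As a numerical sanity check of (7)-(8), the following sketch builds the precoders for N = 2 antennas and a single data stream and verifies that the relay suppression matrix nulls the aligned interference. It is a minimal illustration with random channel realizations; the variable names merely mirror the symbols above, and the form of Z follows the reconstruction given in (7).

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

N = 2
H_R1, H_R2 = cn(N, N), cn(N, N)       # R <- T1, R <- T2
H_12, H_1S = cn(N, N), cn(N, N)       # R1 <- T2, R1 <- S
H_2S, H_21 = cn(N, N), cn(N, N)       # R2 <- S,  R2 <- T1

Z = (np.linalg.inv(H_R1) @ H_R2 @ np.linalg.inv(H_12)
     @ H_1S @ np.linalg.inv(H_2S) @ H_21)
V1 = np.linalg.eig(Z)[1][:, [0]]      # V1 = eig(Z), one data stream
V2 = np.linalg.inv(H_R2) @ H_R1 @ V1  # (7a)
VS = np.linalg.inv(H_2S) @ H_21 @ V1  # (7b)

U_R = null_space((H_R1 @ V1).conj().T)  # (8b)
# (6a): since H_R2 @ V2 = H_R1 @ V1 by construction, both products are nulled
print(np.abs(U_R.conj().T @ H_R1 @ V1).max(),
      np.abs(U_R.conj().T @ H_R2 @ V2).max())  # both ~ 1e-16
```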
B. Imperfect CSI
The assumption of perfect CSI in wireless networks is highly idealistic due to channel
estimation error. Thus, the following model can be deployed for an imperfect CSI estimation [4]
$$\hat{H} = H + E, \qquad (9)$$
where Ĥ is the observed mismatched channel, H ∼ CN (0, I) represents the real channel matrix
and E is the error matrix which represents an inaccuracy degree in the estimated CSI. It is
also assumed that E is independent of H. Considering the signal-to-noise ratio (SNR), θ, E is
described as
$$E \sim \mathcal{CN}(0, \lambda I) \quad \text{with} \quad \lambda = \psi\theta^{-\kappa}, \qquad (10)$$
where λ is an error variance, κ ≥ 0 and ψ > 0 determine various CSI scenarios. Finally, the
real channel matrix, conditioned on Ĥ [16], can be described as
$$H = \frac{1}{1+\lambda}\hat{H} + \tilde{H}, \qquad (11)$$
where $\tilde{H} \sim \mathcal{CN}\big(0, \frac{\lambda}{1+\lambda} I\big)$ is independent of $\hat{H}$.
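A minimal sketch of the mismatch model (9)-(11), assuming Rayleigh-distributed entries and a scalar SNR θ in linear scale (our own illustration, not from the paper):

```python
import numpy as np

def mismatched_channel(N, theta, kappa, psi, rng):
    lam = psi * theta ** (-kappa)                 # error variance, Eq. (10)
    cn = lambda: (rng.standard_normal((N, N))
                  + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = cn()                                      # true channel, CN(0, I)
    E = np.sqrt(lam) * cn()                       # estimation error, Eq. (10)
    H_hat = H + E                                 # observed channel, Eq. (9)
    H_tilde = H - H_hat / (1 + lam)               # residual term in Eq. (11)
    return H_hat, H_tilde, lam

rng = np.random.default_rng(1)
theta = 10 ** (20 / 10)                           # 20 dB SNR
print(mismatched_channel(2, theta, kappa=1.0, psi=10.0, rng=rng)[2])
```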
[Fig. 2. Time frame structure of PSR.]
III. POWER-SPLITTING RELAYING
The PSR for SWIPT is shown in Fig. 2, where the total time is split into two equal portions, one for the S-R and the other for the R-D data transmission [?]. Within the first time fraction, a power portion ρ, with 0 < ρ < 1, at R is allocated for EH, while the remaining portion (1 − ρ) is used for data transmission.
Hence, R obtains the following signal for EH:
$$y_R^{EH} = \sqrt{\frac{\rho P_S}{d_{R,S}^{\tau_{R,S}}}}\, H_{R,S} V_S s_S + \sum_{i=1}^{2} \sqrt{\frac{\rho P_i}{d_{R,i}^{\tau_{R,i}}}}\, H_{R,i} V_i^{[1]} s_i + \sqrt{\rho}\, n_R. \qquad (12)$$
The power harvested from the noise is insignificant and can be neglected. Thus, the instantaneous harvested power at R can be derived from (12) as [7]
$$P_R = \eta \rho \left( \frac{P_S}{d_{R,S}^{\tau_{R,S}}} \left\| H_{R,S} V_S \right\|^2 + \sum_{i=1}^{2} \frac{P_i}{d_{R,i}^{\tau_{R,i}}} \left\| H_{R,i} V_i^{[1]} \right\|^2 \right), \qquad (13)$$
where ∥·∥ denotes the Euclidean norm. Then, by using (11), the received information signal with power (1 − ρ) at R can be derived as in (14).
$$y_R^{IT} = \sqrt{1-\rho}\, U_R^H \left[ \sqrt{\frac{P_S}{d_{R,S}^{\tau_{R,S}}}} \left( \frac{1}{1+\lambda}\hat{H}_{R,S} + \tilde{H}_{R,S} \right) V_S s_S + \sum_{i=1}^{2} \sqrt{\frac{P_i}{d_{R,i}^{\tau_{R,i}}}} \left( \frac{1}{1+\lambda}\hat{H}_{R,i} + \tilde{H}_{R,i} \right) V_i^{[1]} s_i + n_R \right] \qquad (14)$$
The corresponding signal-to-interference-plus-noise ratio (SINR) for R from (14) is derived as follows:
$$\gamma_R = \frac{ \dfrac{P_S (1-\rho)}{d_{R,S}^{\tau_{R,S}} (1+\lambda)^2} \left\| U_R^H \hat{H}_{R,S} V_S \right\|^2 }{ \dfrac{P_S (1-\rho)}{d_{R,S}^{\tau_{R,S}}} \left\| U_R^H \tilde{H}_{R,S} V_S \right\|^2 + I_{PN} + \sigma_{\tilde{n}_R}^2 }, \qquad (15)$$
where $I_{PN} = \sum_{i=1}^{2} \frac{P_i (1-\rho)}{d_{R,i}^{\tau_{R,i}}} \left\| U_R^H \tilde{H}_{R,i} V_i^{[1]} \right\|^2$ defines the interference from the primary transmitters.
Then, the received signal at D can be written as
$$y_D = \sqrt{\frac{P_R}{d_{D,R}^{\tau_{D,R}}}} \left( \frac{1}{1+\lambda}\hat{H}_{D,R} + \tilde{H}_{D,R} \right) V_R s_R + n_D, \qquad (16)$$
and the SINR from (16) can be derived as
$$\gamma_D = \frac{ \dfrac{P_R}{d_{D,R}^{\tau_{D,R}} (1+\lambda)^2} \left\| \hat{H}_{D,R} V_R \right\|^2 }{ \dfrac{P_R}{d_{D,R}^{\tau_{D,R}}} \left\| \tilde{H}_{D,R} V_R \right\|^2 + \sigma_D^2 }, \qquad (17)$$
where $\sigma_D^2$ is the noise power.
Also, the received SINR for $R_j$ is given by
$$\gamma_j^{[k]} = \frac{ \dfrac{P_j}{d_{j,j}^{\tau_{j,j}} (1+\lambda)^2} \left\| U_j^{[k]H} \hat{H}_{j,j}^{[k]} V_j^{[k]} \right\|^2 }{ B^{[k]} + C^{[k]} + \sigma_{\tilde{n}_j}^2 }, \qquad (18)$$
where the intra-network interference of the PN due to the CSI mismatch is given by $B^{[k]} = \frac{P_j}{d_{j,j}^{\tau_{j,j}}} \| U_j^{[k]H} \tilde{H}_{j,j}^{[k]} V_j^{[k]} \|^2 + \frac{P_i}{d_{j,i}^{\tau_{j,i}}} \| U_j^{[k]H} \tilde{H}_{j,i}^{[k]} V_i^{[k]} \|^2 \big|_{i \neq j}$, while the inter-network interference from the SN is expressed by
$$C^{[k]} = \begin{cases} \dfrac{P_S}{d_{j,S}^{\tau_{j,S}}} \left\| U_j^{[k]H} \tilde{H}_{j,S} V_S \right\|^2, & \text{if } k = 1,\\[2mm] \dfrac{P_R}{d_{j,R}^{\tau_{j,R}}} \left\| U_j^{[k]H} \tilde{H}_{j,R} V_R \right\|^2, & \text{if } k = 2. \end{cases} \qquad (19)$$
The BER of symbol $s_m$ for binary phase shift keying (BPSK) can be derived as [17]
$$\mathrm{BER}_m = Q(\sqrt{\gamma_m}), \quad m \in \{1, 2, R, D\}, \qquad (20)$$
where $Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp(-t^2/2)\, dt$ is the Gaussian Q-function.
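The chain from harvested power to error rate can be traced numerically. The sketch below evaluates (13), a perfect-CSI special case of (15) (λ = 0 and the PN interference fully suppressed), and the BER in (20) for a single channel realization; the unit distances, power levels and the stand-in suppression vector are placeholder choices of ours, not the settings used in Section IV.

```python
import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2))   # Gaussian Q-function, Eq. (20)

rho, eta, P_S, P_i, sigma2 = 0.75, 0.8, 1.0, 1.0, 1e-2
rng = np.random.default_rng(2)
cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

N = 2
H_RS, H_R1, H_R2 = cn(N, N), cn(N, N), cn(N, N)
V_S, V_1, V_2 = cn(N, 1), cn(N, 1), cn(N, 1)

# Eq. (13) with unit distances: power harvested from desired + PN signals
P_R = eta * rho * (P_S * np.linalg.norm(H_RS @ V_S) ** 2
                   + P_i * np.linalg.norm(H_R1 @ V_1) ** 2
                   + P_i * np.linalg.norm(H_R2 @ V_2) ** 2)

U_R = cn(N, 1); U_R /= np.linalg.norm(U_R)  # stand-in for the true U_R
gamma_R = P_S * (1 - rho) * np.linalg.norm(U_R.conj().T @ H_RS @ V_S) ** 2 / sigma2
print("P_R =", round(P_R, 3), " BER_R =", Q(np.sqrt(gamma_R)))
```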
IV. SIMULATION RESULTS
This section presents the simulation results for the proposed system model in Rayleigh fading
channels with BPSK modulation. The system parameters are as follows: dm,n = 3 and τm,n =
2.7, ∀m ∈ {1, 2, R, D}, ∀n ∈ {1, 2, S, R}, and equal transmit power at $T_i$ and S. The calculated optimal value of ρ = 0.75 with η = 0.8 from [18] is considered. Furthermore, the following values
of (κ, ψ) such as (0, 0.001), (0, 0.05), (0.75, 10), (1, 10) and (1.5, 15) are used to investigate
the impact of CSI mismatch.
[Fig. 3. BER vs. SNR of the primary user, the relay and the destination node operating in the PSR protocol under different CSI scenarios. (a) BER performance for perfect CSI and SNR-independent CSI mismatch ((0, 0.001) and (0, 0.05)). (b) BER performance for SNR-dependent CSI mismatch ((0.75, 10), (1, 10) and (1.5, 15)).]
Fig. 3 shows how the imperfect CSI parameters impact the BER performance of the PU and
SUs. For the case of SNR-independent CSI mismatch (when κ = 0, Fig. 3a), the BER degrades
as ψ increases because the channel error variance does not depend on the SNR and the BER
curves saturate after some SNR values, e.g. at 15 dB and 21 dB for 0.05 and 0.001, respectively.
Furthermore, it is worth noting that the BER performance is not affected by ψ in the low-SNR
region, i.e. ψ starts playing a role at 3 dB and 6 dB for the BER of PU and SUs, respectively.
This can be explained by the fact that small values of ψ do not increase much the error rate of
the received signal at low SNR.
[Fig. 4. BER vs. the CSI mismatch parameters κ and ψ of SU at 20 dB.]
In general, the BER performance of the PU outperforms those of R and D because only the power portion 1 − ρ is devoted to data transmission at R. The smaller the power allocated for data transmission, the higher the probability of incorrect data detection. Comparing the performance of R and D at low SNR, R performs better than D because R transmits its data with a certain number of errors, which in turn affects the error rate of D. However, the
BER performances of SUs match after 10 dB due to the fact that R harvests more energy at
high SNR and D consequently receives a strong signal to detect. When κ 6= 0, the channel
error variance becomes SNR-dependent (see Fig. 3b), which implies no saturation of the BER
performance. An increase of κ leads to the BER improvement. At 30 dB, the BER performance
of SUs obtains 0.0353, 0.0079 and 0.0005 for (0.75, 10), (1, 10) and (1.5, 15), respectively.
A deeper analysis of the impact of κ and ψ on the BER performance can be obtained
from Fig. 4, where the BER performance of the SUs is plotted as a function of different values of κ and ψ at 20 dB. It can be noticed that the BER performance improves as κ increases,
while an increase of ψ results in the BER degradation. In the first subfigure, the BER curves for
different values of ψ approach 0.0019 at certain κ values. Meanwhile, in the second subfigure,
the BER performance for different κ degrades as ψ increases. It is observed that small values of κ correspond to more abrupt BER degradation, and vice versa.
V. CONCLUSION
In this paper, we analyzed the BER performance of EH-based DF CRN with PS relaying
protocols and embedded IA technique. The five special scenarios of the imperfect CSI given by
(0, 0.001), (0, 0.05), (0.75, 10), (1, 10) and (1.5, 15) were studied to analyze the impact of the
CSI quality on the BER performance of the PU and SUs. The presented results with ρ = 0.75 showed that the BER of the PU outperforms those of the SUs in both perfect and imperfect CSI cases. Moreover, the BER degraded as ψ increased, while an increase of κ led to BER improvement.
REFERENCES
[1] A. Goldsmith, S. Jafar, L. Maric and S. Srinivasa, “Breaking spectrum gridlock with cognitive radios: an information theoretic perspective,” IEEE Proceedings, vol. 97, no. 5, pp. 894–914, May 2009.
[2] S. Arzykulov, G. Nauryzbayev and T. A. Tsiftsis, “Underlay Cognitive Relaying System Over α-µ Fading Channels,” IEEE Commun. Lett., vol. 21, no. 1, pp. 216–219, Jan. 2017.
[3] G. Nauryzbayev and E. Alsusa, “Enhanced Multiplexing Gain Using Interference Alignment Cancellation in Multi-cell MIMO Networks,” IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 357–369, Jan. 2016.
[4] G. Nauryzbayev and E. Alsusa, “Interference Alignment Cancellation in Compounded MIMO Broadcast Channels with General Message Sets,” IEEE Trans. Commun., vol. 63, no. 10, pp. 3702–3712, Oct. 2015.
[5] M. Amir, A. El-Keyi and M. Nafie, “Constrained Interference Alignment and the Spatial Degrees of Freedom of MIMO Cognitive Networks,” IEEE Trans. Inf. Theory, vol. 57, no. 5, pp. 2994–3004, May 2011.
[6] J. Tang, S. Lambotharan and S. Pomeroy, “Interference cancellation and alignment techniques for multiple-input and multiple-output cognitive relay networks,” IET Signal Process., vol. 7, no. 3, pp. 188–200, May 2013.
[7] A. Nasir, X. Zhou, S. Durrani, and R. Kennedy, “Relaying protocols for wireless energy harvesting and information processing,” IEEE Trans. Wireless Commun., vol. 12, no. 7, pp. 3622–3636, Jul. 2013.
[8] G. Nauryzbayev, K. M. Rabie, M. Abdallah and B. Adebisi, “Ergodic Capacity Analysis of Wireless Powered AF Relaying Systems over α-µ Fading Channels,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Singapore, pp. 1–6, Dec. 2017.
[9] N. Zhao, F. R. Yu and V. C. M. Leung, “Wireless energy harvesting in interference alignment networks,” IEEE Commun. Mag., vol. 53, no. 6, pp. 72–78, June 2015.
[10] S. Park, H. Kim and D. Hong, “Cognitive radio networks with energy harvesting,” IEEE Trans. Wireless Commun., vol. 12, no. 3, pp. 1386–1397, Mar. 2013.
[11] G. Zheng, Z. Ho, E. A. Jorswieck and B. Ottersten, “Information and energy cooperation in cognitive radio networks,” IEEE Trans. Signal Process., vol. 62, no. 9, pp. 2290–2303, Sept. 2014.
[12] F. Wang and X. Zhang, “Resource Allocation for Multiuser Cooperative Overlay Cognitive Radio Networks with RF Energy Harvesting Capability,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Washington, DC, pp. 1–6, 2016.
[13] G. Nauryzbayev and E. Alsusa, “Identifying the Maximum DoF Region in the Three-cell Compounded MIMO Network,” in Proc. IEEE WCNC, Doha, Qatar, pp. 1–5, Apr. 2016.
[14] G. Nauryzbayev, E. Alsusa, and J. Tang, “An Alignment Based Interference Cancellation Scheme for Multi-cell MIMO Networks,” in Proc. IEEE VTC, Glasgow, UK, pp. 1–5, May 2015.
[15] H. Sung, S. H. Park, K. J. Lee and I. Lee, “Linear precoder designs for K-user interference channels,” IEEE Trans. Wireless Commun., vol. 9, no. 1, pp. 291–301, Jan. 2010.
[16] S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[17] S. Ohno and K. A. D. Teo, “Universal BER Performance Ordering of MIMO Systems over Flat Channels,” IEEE Trans. Wireless Commun., vol. 6, no. 10, pp. 3678–3687, Oct. 2007.
[18] S. Arzykulov, G. Nauryzbayev, T. A. Tsiftsis and M. Abdallah, “On the Capacity of Wireless Powered Cognitive Relay Network with Interference Alignment,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Singapore, pp. 1–6, Dec. 2017.
Traffic flow optimization using a quantum annealer
Florian Neukart∗1 , David Von Dollen1 , Gabriele Compostella2 , Christian Seidel2 ,
Sheir Yarkoni3 , and Bob Parney3
arXiv:1708.01625v2 [quant-ph] 9 Aug 2017
1 Volkswagen Group of America, San Francisco, USA
2 Volkswagen Data:Lab, Munich, Germany
3 D-Wave Systems, Inc., Burnaby, Canada
Abstract
Quantum annealing algorithms belong to the class of meta-heuristic tools, applicable for
solving binary optimization problems. Hardware implementations of quantum annealing, such
as the quantum processing units (QPUs) produced by D-Wave Systems, have been subject to
multiple analyses in research, with the aim of characterizing the technology’s usefulness for
optimization and sampling tasks. In this paper, we present a real-world application that uses
quantum technologies. Specifically, we show how to map certain parts of a real-world traffic
flow optimization problem to be suitable for quantum annealing. We show that time-critical
optimization tasks, such as continuous redistribution of position data for cars in dense road
networks, are suitable candidates for quantum computing. Due to the limited size and connectivity
of current-generation D-Wave QPUs, we use a hybrid quantum and classical approach to solve
the traffic flow problem.
∗Corresponding author: florian.neukart@vw.com
1 Introduction
Quantum annealing technologies such as the quantum processing units (QPUs) made by D-Wave
Systems are designed to solve complex combinatorial optimization problems. It has been shown in
literature how to use these QPUs to perform both complex sampling and optimization tasks, and how
the properties of the quantum bits (qubits) play a role in the computation of solutions [4, 6, 8–13].
The QPU is designed to solve quadratic unconstrained binary optimization (QUBO) problems, where
each qubit represents a variable, and couplers between qubits represent the costs associated with
qubit pairs. The QPU is a physical implementation of an undirected graph with qubits as vertices
and couplers as edges between them. The functional form of the QUBO that the QPU is designed to
minimize is:
$$\mathrm{Obj}(x, Q) = x^T \cdot Q \cdot x, \qquad (1)$$
where x is a vector of binary variables of size N , and Q is an N × N real-valued matrix describing
the relationship between the variables. Given the matrix Q, finding binary variable assignments to
minimize the objective function in Equation 1 is equivalent to minimizing an Ising model, a known
NP-hard problem [7].
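As a toy illustration of Equation 1 (our own example, not from the paper), the objective of a candidate bit-string is a quadratic form, and tiny instances can be brute-forced:

```python
import numpy as np

Q = np.array([[-1.0, 2.0],
              [ 0.0, -1.0]])   # toy upper-triangular QUBO matrix
obj = lambda x: np.asarray(x) @ Q @ np.asarray(x)
best = min(((a, b) for a in (0, 1) for b in (0, 1)), key=obj)
print(best, obj(best))         # a minimizing assignment and its objective, -1
```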
In this paper, we will introduce the traffic flow optimization problem. We start with the T-Drive
trajectory dataset1 of cars’ GPS coordinates, and develop a workflow to mimic a system that aims
to optimize traffic flow in real time. We show how to transform key elements of the problem to
QUBO form, for optimization on the D-Wave system (including both the machine and software tools
that use it). We treat the D-Wave system as an optimizer, and show that it is possible to integrate
D-Wave QPU calls into a workflow that resembles a real-world application. The method presented
here is a novel approach to mapping this real-world problem onto a quantum computer.
2 Formulation of the traffic flow problem
The objective of the traffic flow optimization problem is to minimize the time for a given set of cars
to travel between their individual sources and destinations, by minimizing total congestion over all
road segments. Congestion on an individual segment is determined by a quadratic function of the
number of cars traversing it in a specific time interval. To ensure reproducibility, we used the publicly
available T-Drive trajectory dataset containing trajectories of 10,357 taxis recorded over one week.
The dataset features 15 million data points, and the total distance of the trajectories makes up about
9 million kilometers [14–16]. We required every car to transmit its GPS coordinates in intervals of 1
to 5 seconds. Because not all cars in the dataset provide transmission data at this rate, we enriched
the dataset by interpolating between GPS points. We split the problem into a step-by-step workflow,
outlined below. “Classical” refers to calculations on classical machines, and “quantum” refers to
calculation on the D-Wave system:
1 This open source dataset provided by Microsoft can be found here:
https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/
1. Classical: Pre-process map and GPS data.
2. Classical: Identify areas where traffic congestion occurs.
3. Classical: Determine spatially and temporally valid alternative routes for each car in the
dataset, if possible.
4. Classical: Formulate the minimization problem as a QUBO (to minimize congestion in road
segments on overlapping routes).
5. Hybrid Quantum/Classical: Find a solution that reduces congestion among route assignments
in the whole traffic graph.
6. Classical: Redistribute the cars based on the results.
7. Iterate over steps 2 to 6 until no traffic congestion is identified.
A visualization of the input graph is shown in Figure 1. This visualization was generated using
the OSMnx API, which is based on OpenStreetMap and allows for retrieving, constructing, analyzing,
and visualizing street networks from OpenStreetMap [1].
2.1 Determination of alternate routes
To illustrate how we formulate the problem, we focus on a subset of the T-Drive dataset. Of the
10,357 cars in the dataset, we select 418 of those that are traveling to or from the city center and the
Beijing airport. In this specific scenario, the goal was to maximize traffic flow by redirecting a subset
of the 418 cars to alternative routes such that the number of intersecting road segments is minimized.
For this, optimizing over all cars simultaneously is required, which means that any redistribution of
cars that resolves the original congestion must not cause a traffic jam anywhere else in the map. We
used the OSMnx package to split the map of Beijing into segments and nodes, and assign a unique
ID to each. Our procedure can be summarized as follows:
1. Extract the road graph from the Beijing city map using OSMnx. This returns lists of segments
and nodes with IDs. Nodes represent connections between segments, and segments are edges
connecting the nodes, representing the streets (Figure 1).
2. Map the T-Drive trajectory dataset cars’ GPS coordinates onto street segments in the graph,
to determine the routes taken by the cars.
3. For each car, and each source and destination node, we extract all simple paths from source to
destination, and obtain 3 candidate alternative routes2 . We use these 3 candidates as proposed
alternative routes to redistribute traffic (a code sketch of this step follows the footnote below).
2 A simple path can traverse several nodes from source to destination, but without returning to nodes which were
already visited (no cycles). Several thousands of simple paths from source to destination (per car) may exist. We
selected two simple paths that are most dissimilar to the original route, and to each other, and proposed these as
alternates, along with the original route. To do this we used the Jaccard similarity index.
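A possible implementation of this selection step (our own sketch; the paper does not specify the procedure beyond the Jaccard criterion) compares paths by their node sets and caps the enumeration, since the number of simple paths can grow exponentially:

```python
import itertools
import networkx as nx

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def propose_routes(G, original, src, dst, max_paths=1000):
    candidates = list(itertools.islice(nx.all_simple_paths(G, src, dst),
                                       max_paths))
    # most dissimilar to the original route
    alt1 = min(candidates, key=lambda p: jaccard(p, original))
    # most dissimilar to both the original route and the first alternative
    alt2 = min(candidates,
               key=lambda p: jaccard(p, original) + jaccard(p, alt1))
    return [original, alt1, alt2]
```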
2.2 Formulating the traffic flow optimization in QUBO form
The definition of variables for the QUBO (Equation 1) requires some classical pre-processing on the
input. In rare cases it may not be possible to switch a car to different route. For example, if there is
no intersection or ramp near the car, it will not be considered for rerouting and will remain on its
original path. Nevertheless, this car will still affect possible routings of other cars, so it is included in
the QUBO. Figure 2 shows an example with road segments assigned to a car, as it is used in our
workflow.
To optimize the traffic flow, we minimize the number of overlapping segments between assigned
routes for each car. Thus, we formulate the optimization problem as follows: “Given 3 possible routes
per car, which assignment of cars to routes minimizes the overall congestion on all road segments?”
We require that every car be assigned one of the 3 possible routes, while simultaneously minimizing
total congestion over all assigned routes. It is important to emphasize that in this example each car
was proposed 3 possible alternative routes — not the same set of 3 routes for all cars. This not need
be the case in general; cars can have many possible routes. For simplicity we take (maximum) 3
routes per car, because the mathematical description of the problem is identical regardless of the
number of routes.
For every possible assignment of car to route, we define a binary variable qij representing car i
taking route j. Because each car can only occupy one route at a time, exactly one variable per car
must be true in the minimum of the QUBO. We define a constraint such that every car is required
to take exactly one route. This can be formulated as the following constraint (assuming 3 possible
routes):
$$0 = \Big( \sum_{j \in \{1,2,3\}} q_{ij} - 1 \Big)^2 = -q_{i1} - q_{i2} - q_{i3} + 2q_{i1}q_{i2} + 2q_{i2}q_{i3} + 2q_{i1}q_{i3} + 1, \qquad (2)$$
simplified using the binary rule $x^2 = x$. As stated previously, routes are described by lists of
street segments (S being the set of all street segments in the graph). Therefore, for all cars i, with
proposed routes j ∈ {1, 2, 3} with segments Sj , which share the street segment s ∈ Sj , the cost of
the occupancy of the street segment is given by:
$$\mathrm{cost}(s) = \Big( \sum_{i} \sum_{j\,:\, s \in S_j} q_{ij} \Big)^2 \qquad (3)$$
The global cost function for the QUBO problem, Obj from Equation 1, can now be simply
described by summing the cost functions for each street segment and the constraint from Equation 2:
$$\mathrm{Obj} = \sum_{s \in S} \mathrm{cost}(s) + \lambda \sum_{i} \Big( \sum_{j} q_{ij} - 1 \Big)^2. \qquad (4)$$
When summing components of the global cost function, the scaling parameter λ must be introduced. This ensures that Equation 2 is satisfied for all cars in the minimum of the QUBO. To find
this scaling factor, we find the maximum number of times some car i is present in cost functions of
the form Equation 3, and use this value as λ. This makes the cost of violating Equation 2 greater
than the cost of increasing the segment occupancy in every route by 1.
Now the cost function can be formulated as a quadratic, upper-triangular matrix, as required
for the QUBO problem. We keep a mapping of binary variable qij to index in the QUBO matrix Q
(as defined in Equation 1), given by I(i, j). These indices are the diagonals of the QUBO matrix.
The elements of the matrix are the coefficients of the qij terms in Equation 4. To find these terms
explicitly, whenever two routes j and j 0 share a street segment s:
1. We add a (+1) at diagonal index I(i, j) for every car i proposed with route j containing segment
s.
2. We add a (+2) for every pair of cars i1 and i2 taking route j containing segment s at the
off-diagonal index given by indices I(i1 , j) and I(i2 , j).
We then add the constraints to enforce that every car has only one route, as per Equation 2:
1. For every car i with possible route j, we add (−λ) to the diagonal of Q given by index I(i, j).
2. For every cross-term arising from Equation 2, we add (2λ) to the corresponding off-diagonal
term.
A special case occurs if a car is proposed only one route, meaning $q_{ij} = 1$. As stated previously, despite car i being assigned to route j, this assignment still affects other cars. This turns the quadratic constraint terms from Equation 3 into additional linear terms: $2q_{ij} q_{i'j'} \to 2q_{i'j'}$.
Additionally, by keeping a record of which routes every segment appears in, we can remove the
redundant constraints, as some routes may overlap in more than one segment.
This results in a QUBO matrix as shown in Figure 3.
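The assembly steps above can be condensed into the following sketch, assuming exactly 3 candidate routes per car and routes given as sets of segment IDs; the flattened index I(i, j) = 3i + j and all names are our own conventions:

```python
import itertools
from collections import defaultdict
import numpy as np

def build_qubo(routes):
    """routes[i][j]: set of segment IDs of route j for car i (3 per car)."""
    n = len(routes)
    idx = lambda i, j: 3 * i + j                 # variable index I(i, j)
    Q = np.zeros((3 * n, 3 * n))
    seg_users = defaultdict(list)                # segment -> variable indices
    for i, alts in enumerate(routes):
        for j, segs in enumerate(alts):
            for s in segs:
                seg_users[s].append(idx(i, j))
    for users in seg_users.values():             # expand cost(s), Equation 3
        for u in users:
            Q[u, u] += 1
        for u, v in itertools.combinations(sorted(users), 2):
            Q[u, v] += 2
    # scaling factor: max number of segment cost functions containing one car
    lam = max(len(set().union(*alts)) for alts in routes)
    for i in range(n):                           # constraints from Equation 2
        for j in range(3):
            Q[idx(i, j), idx(i, j)] -= lam
        for j1, j2 in itertools.combinations(range(3), 2):
            Q[idx(i, j1), idx(i, j2)] += 2 * lam
    return Q
```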
2.3 Summary of the traffic flow optimization algorithm
Expressed as pseudo-code, the important high-level steps of the traffic flow optimization algorithm
are as follows:
1. For each car i
(a) Determine the current route.
2. For each car i’s current route:
(a) Map the source and destination to their nearest nodes in the road graph.
3. For each source/destination pair:
(a) Determine all simple paths from source to destination.
(b) Find two alternative paths that are maximally dissimilar to the original route and to each
other.
4. For each car i, define the set of possible routes needed to form the QUBO.
5. Define the matrix Q with binary variables qij as described in Section 2.2.
6. Solve the QUBO problem.
7. Update cars with the selected routes.
3 D-Wave solvers and architecture
Here, we briefly introduce the solvers and tools provided by D-Wave, to help understand how the
problem was solved using the QPU.
3.1 Connectivity and topology
The topology of the D-Wave 2X QPU is based on a C12 Chimera graph containing 1152 vertices
(qubits) and over 3000 edges (couplers). A Chimera graph of size CN is an N ×N grid of Chimera cells
(also called unit tiles or unit cells), each containing a complete bipartite graph of 8 vertices (K4,4 ).
Each vertex is connected to its four neighbors inside the cell as well as two neighbors (north/south
or east/west) outside the cell: therefore every vertex has degree 6 excluding boundary vertices [5].
The 418-car example used 1254 logical variables to represent the problem. A challenge in this
scenario is the restricted connectivity between qubits on a D-Wave QPU, which limits the ability to
directly solve arbitrarily-structured problems. When using the D-Wave QPU directly, an interaction
between two problem variables can only occur when there is a physical connection (coupler) between
the qubits representing these variables. For most problems, the interactions between variables do
not match the QPU connectivity. This limitation can be circumvented using minor-embedding, a
technique that maps one graph structure to another. The QPU we used has 1135 functional qubits,
thus it was not possible to embed the 1254 logical variables on the QPU at once. Therefore, the
problem was solved using the hybrid classical/quantum tool qbsolv (described in the next section).
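The size gap can be checked directly, assuming the dwave_networkx package and its chimera_graph constructor (an illustrative snippet, not from the paper):

```python
import dwave_networkx as dnx

C12 = dnx.chimera_graph(12)    # 12x12 grid of K_{4,4} unit cells
print(C12.number_of_nodes())   # 1152 qubits < 1254 logical variables
print(C12.number_of_edges())   # over 3000 couplers
```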
3.2 The qbsolv algorithm
In January 2017, D-Wave Systems open-sourced the software tool qbsolv [3] 3 . The purpose of this
algorithm is to provide the ability to solve larger QUBO problems, and with higher connectivity,
than is currently possible on the QPU. Given a large QUBO input, qbsolv partitions the input into
important components and then solves the components independently using queries to the QPU.
This process iterates (with different components found by Tabu search) until no improvement in
the solution is found. The qbsolv algorithm can optimize sub-problems using either a classical
Tabu solver or via submission to a D-Wave QPU. In this paper, we run qbsolv in the hybrid
3 The source code can be found at: github.com/dwavesystems/qbsolv
classical/quantum mode of submitting sub-problems to the D-Wave 2X QPU.
The high-level steps performed by qbsolv in hybrid mode are as follows:
1. Find the largest clique4 that can be minor embedded in the QPU topology, or in the full
Chimera graph if using the VFYC feature5 . This one-time operation can be done in advance.
2. Given a QUBO problem, initialize random bit-string representing a solution to the problem.
3. Use a heuristic method to rank nodes according to importance; create a sub-problem that fits
on the QPU using the importance ranking.
4. Create sub-problem using the importance order.
5. Solve sub-problem by submitting it to the QPU and update variable states in the bit-string.
6. Iterate steps 3 to 5 until no improvement in the objective function is found.
A full description of how the qbsolv algorithm works is detailed in [2].
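In rough pseudo-Python, the loop looks as follows; solve_subqubo stands in for a QPU (or Tabu) call and is not a real D-Wave API, and couplings between sub-problem variables and clamped outside variables are ignored here for brevity:

```python
import numpy as np

def qbsolv_like(Q, subsize, solve_subqubo, max_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n)          # step 2: random bit-string
    best = x @ Q @ x
    for _ in range(max_iters):
        # step 3: rank variables by the energy change of flipping each bit
        gains = []
        for i in range(n):
            y = x.copy(); y[i] ^= 1
            gains.append(y @ Q @ y - best)
        sub = np.argsort(gains)[:subsize]   # step 4: most promising variables
        cand = x.copy()
        cand[sub] = solve_subqubo(Q[np.ix_(sub, sub)])  # step 5: sub-QUBO
        e = cand @ Q @ cand
        if e >= best:                       # step 6: stop when no improvement
            return x, best
        x, best = cand, e
    return x, best
```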
4 Results
The goal of these experiments was to map a real-world problem to a quantum annealing machine,
which we have shown. When evaluating the solutions produced by the D-Wave QPU, the focus was
on finding good quality solutions within short periods of calculation. To quantify the quality of a
solution, we count the number of congested roads after optimization. Keeping in mind that routes are
described by sets of road segments, we simply count the number of segments that appear in routes
more than a given number of times (Nintersections ). Here we assume that a segment that appears in
more than Nintersections routes will become congested. For this experiment, we chose Nintersections = 10.
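A helper for this quality metric might look as follows (our own sketch); given the routes selected by a solver, it counts the segments above the congestion threshold:

```python
from collections import Counter

def congested_segments(assigned_routes, n_intersections=10):
    # assigned_routes: one route (iterable of segment IDs) per car
    counts = Counter(s for route in assigned_routes for s in set(route))
    return [s for s, c in counts.items() if c > n_intersections]
```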
To evaluate the QUBO formulation of the traffic flow problem, we designed the following experiment: for the 418 car QUBO problem (as presented in Section 2.2), we solved the problem 50 times
using qbsolv. We also generated 50 random assignments of cars to routes as reference for the results.
Intuitively, one would expect random route assignments to spread traffic across the alternative routes,
thus reducing the number of congested segments. In Figure 4 we show the distribution of results
(measured as the number of congested segments) after running the experiments using qbsolv and
random assignments.
From the results in Figure 4, we can see that qbsolv redistributes the traffic over possible routes
in a way that reduces the number of congested roads. This is evident both with respect to random
assignment of routes, and also shows improvement over the original assignment of routes. It should
be noted that in the original assignment, there was a relatively small number of streets that are
4 A clique is a graph where all nodes are connected to each other.
5 D-Wave has recently introduced a “virtual full-yield Chimera” (VFYC) solver, which takes the working QPU and simulates the missing qubits and couplers using classical software. This allows for some programs to be standardized across the different QPUs, and within generations of QPUs. This VFYC version of the D-Wave 2X solver was used in our experiments.
heavily occupied (meaning above the Nintersections = 10 threshold), as all the cars shared the same
route, and that the average occupancy was much higher than Nintersections = 10. It is also worth
noting that all 50 experiments using qbsolv resolved the congestion.
Additionally, we measured the performance of qbsolv as a function of its run-time. The qbsolv
source code was compiled and executed on a server in Burnaby, Canada, to minimize the latency
between submitting jobs to the QPU and obtaining the results. However, since the QPU used
was a shared resource via the cloud, run-time of qbsolv varied greatly. Therefore, we consider the
run-time of qbsolv to be the minimum of the observed run-times, as this represents most faithfully
the algorithm, independent of the load on the D-Wave system. This run-time was observed as 22
seconds. There is also no evidence of correlation between the run-time of qbsolv and performance
(the long run-times are due to waiting in the job submission queue). Given the performance results
of qbsolv, it is reasonable to assume that a dedicated D-Wave QPU (circumventing the public job
submission queue) could be suitable for these kinds of optimization problems. A visual showing
the traffic density on the Beijing road graph before (original routes) and after optimization (using
qbsolv) is shown in Figure 5.
5 Conclusions and future work
The currently presented problem is a simplified version of traffic flow, as it incorporates only a
limited set of cars, no communication to infrastructure, no other traffic participants, and no other
optimization targets except minimization of road congestion. In our future work, we intend to consider
all of these parameters, and will also need to consider creative ways of formulating these parameters
as part of the QUBO problem. We will continue to focus on solving real-world problems by means of
quantum machine learning, quantum simulation, and quantum optimization. Furthermore, we find
that these types of real-time optimization problems are well-suited for the D-Wave systems, and the
hybrid tools that use them. The more combinatorially complex the problem becomes, the more time
would be needed for classical algorithms to consider additional parameters. However, D-Wave QPUs
have historically grown in number of qubits from one generation to the next, and given that this
trend is likely to continue, it is reasonable to assume that obtaining high-quality solutions quickly
using the QPU will be sustainable moving forward. We expect that in future generations of QPUs, we
will be able to embed larger problems directly. This will allow us to further leverage the performance
of the QPU.
Acknowledgments
Thanks go to the Volkswagen Group for its support in this exploratory research project. Further
thanks go to the team at D-Wave Systems, especially to Murray Thom, Adam Douglass, and Andy
Mason.
References
[1] Geoff Boeing. Osmnx: New methods for acquiring, constructing, analyzing, and visualizing
complex street networks. Computers, Environment and Urban Systems, 65:126 – 139, 2017.
[2] Michael Booth, Steven P. Reinhardt, and Aidan Roy. Partitioning optimization problems for
hybrid classical/quantum execution. https://www.dwavesys.com/resources/publications, 2017.
[3] D-Wave Systems. D-Wave Initiates Open Quantum Software Environment. https://www.dwavesys.com/press-releases/d-wave-initiates-open-quantum-software-environment, Jan 2017.
[4] Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy,
John Martinis, and Hartmut Neven. What is the computational value of finite-range tunneling?
Phys. Rev. X, 6:031015, Aug 2016.
[5] James King, Sheir Yarkoni, Mayssam M. Nevisi, Jeremy P. Hilton, and Catherine C. McGeoch.
Benchmarking a quantum annealing processor with the time-to-target metric. arXiv:1508.05087,
2015.
[6] Los Alamos National Laboratory. D-Wave Rapid Response. http://www.lanl.gov/projects/national-security-education-center/information-science-technology/dwave/, 2016.
[7] Andrew Lucas. Ising formulations of many np problems. Frontiers in Physics, 2:5, 2014.
[8] B. O’Gorman, R. Babbush, A. Perdomo-Ortiz, A. Aspuru-Guzik, and V. Smelyanskiy. Bayesian
network structure learning using quantum annealing. The European Physical Journal Special
Topics, 224(1):163–188, Feb 2015.
[9] A. Perdomo-Ortiz, J. Fluegemann, S. Narasimhan, R. Biswas, and V.N. Smelyanskiy. A quantum
annealing approach for fault detection and diagnosis of graph-based systems. The European
Physical Journal Special Topics, 224(1):131–148, Feb 2015.
[10] Jack Raymond, Sheir Yarkoni, and Evgeny Andriyash. Global warming: Temperature estimation
in annealers. Frontiers in ICT, 3:23, 2016.
[11] Eleanor G. Rieffel, Davide Venturelli, Bryan O’Gorman, Minh B. Do, Elicia M. Prystay, and
Vadim N. Smelyanskiy. A case study in programming a quantum annealer for hard operational
planning problems. Quantum Information Processing, 14(1):1–36, Jan 2015.
[12] Davide Venturelli, Salvatore Mandrà, Sergey Knysh, Bryan O’Gorman, Rupak Biswas, and
Vadim Smelyanskiy. Quantum optimization of fully connected spin glasses. Phys. Rev. X,
5:031040, Sep 2015.
[13] Davide Venturelli, Dominic J. J. Marchand, and Galo Rojo. Quantum annealing implementation
of job-shop scheduling. arXiv:1506.08479, 2015.
[14] J. Yuan, Y. Zheng, X. Xie, and G. Sun. T-drive: Enhancing driving directions with taxi drivers’
intelligence. IEEE Transactions on Knowledge and Data Engineering, 25(1):220–232, Jan 2013.
[15] Jing Yuan, Yu Zheng, Xing Xie, and Guangzhong Sun. Driving with knowledge from the physical
world. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, KDD ’11, pages 316–324, New York, NY, USA, 2011. ACM.
[16] Yu Zheng. T-drive trajectory data sample. https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/, August 2011.
Figures
Figure 1: OSMnx graph for the downtown-area of Beijing
Figure 2: An example of a single car (with ID 10012) and its assigned routes, split into segments.
Figure 3: QUBO matrix describing the traffic flow problem.
Figure 4: Results comparing random assignment of cars to routes, and qbsolv with calls to the
D-Wave 2X QPU. The y-axis shows the distribution of number of congested roads. The red line is
the number of congested roads given the original assignments of routes.
Figure 5: Left: Unoptimized situation under consideration of cars causing traffic jam in the network.
Right: Optimized re-distributed cars using qbsolv. Note that the areas in red, which indicate high
traffic density, are mostly absent from the right picture.
Online Combinatorial Optimization for Interconnected
Refrigeration Systems: Linear Approximation and Submodularity
arXiv:1604.06958v2 [] 1 Apr 2017
Insoon Yang∗
Abstract
Commercial refrigeration systems consume 7% of the total commercial energy consumption
in the United States. Improving their energy efficiency contributes to the sustainability of
global energy systems and the supermarket business sector. This paper proposes a new control
method that can save the energy consumption of multi-case supermarket refrigerators by explicitly taking into account their interconnected and switched system dynamics. Its novelty is a
bilevel combinatorial optimization formulation to generate ON/OFF control actions for expansion valves and compressors. The inner optimization module keeps display case temperatures in
a desirable range and the outer optimization module minimizes energy consumption. In addition to its energy-saving capability, the proposed controller significantly reduces the frequency
of compressor switchings by employing a conservative compressor control strategy. However,
solving this bilevel optimization problem associated with interconnected and switched systems
is a computationally challenging task. To solve the problem in near real time, we propose two
approximation algorithms that can solve both the inner and outer optimization problems at
once. The first algorithm uses a linear approximation, and the second is based on the submodular structure of the optimization problem. Both are (polynomial-time) scalable algorithms
and generate near-optimal solutions with performance guarantees. Our work complements existing optimization-based control methods (e.g., MPC) for supermarket refrigerators, as our
algorithms can be adopted as a tool for solving combinatorial optimization problems arising in
these methods.
Key words. Control systems, Refrigerators, Temperature control, Optimization, Integer linear
programming, Greedy algorithms, Scalability
1 Introduction
Commercial refrigeration systems account for 7% of the total commercial energy consumption in
the United States [29]. Therefore, there is a strong need for energy-efficient refrigeration systems,
but research and development have focused on improving hardware rather than software, including control systems. Traditionally, hysteresis and set point-based controllers have been used to
maintain the display case temperature in a desirable range without considering system dynamics
and energy consumption. Over the past decade, however, more advanced control systems have
been developed to save energy consumption using real-time sensor measurements and optimization
algorithms (see Section 1.1). Advances in new technologies, such as the Internet of Things and
cyber-physical systems, enhance the practicality of such an advanced control system with their
sensing, communication, and computing capabilities [9].
∗Ming Hsieh Department of Electrical Engineering, University of Southern California (insoonya@usc.edu). Supported in part by NSF under CRII:CPS (CNS1657100).
Figure 1: A supermarket refrigerator, which has 10 display cases. Evaporator i controls the temperature of display case i. Lines with arrows represent heat transfers between neighboring display
cases or between a display case and ambient air.
Supermarkets are one of the most important commercial sectors in which energy-efficient refrigeration systems are needed. The primary reasons are twofold. First, supermarket refrigerators
consume 56% of energy consumed by commercial refrigeration systems [17]. Second, supermarkets
operate with very thin profit margins (on the order of 1%), and energy savings thus significantly
help their business: the U.S. Environmental Protection Agency estimates that reducing energy
costs by $1 is equivalent to increasing sales by $59 [1]. However, improving the energy efficiency
of supermarket refrigerators is a challenging task because food products must be stored at proper
temperatures. Failure to do so will increase food safety risks. The most popular refrigerators in
supermarkets are multi-display case units. An example is illustrated in Fig. 1. Each display case
has an evaporator controlled by an expansion valve, and a unit’s suction pressure is controlled by
a compressor rack, as shown in Fig. 2. In typical supermarket refrigerators, controllers turn ON
and OFF expansion valves and compressors to keep display case temperatures in a specific range.
Importantly, there are heat transfers between display cases due to the interconnection among them.
Note that traditional hysteresis or set point-based controllers do not take into account such heat
transfers and therefore perform in a suboptimal way.
This paper proposes a new control method that can improve the energy efficiency of multi-case
supermarket refrigerators by explicitly taking into account the interconnected and switched dynamics of display case temperatures. The proposed controller receives sensor measurements and
optimizes ON/OFF control actions for expansion valves and compressors in near real time. The
novelty of this work is a bilevel combinatorial optimization formulation to generate such ON/OFF
control signals in which (i) the inner combinatorial optimization module is responsible for maintaining display case temperatures in a desirable range, and (ii) the outer combinatorial optimization
module minimizes energy consumption. The primary advantage of the proposed approach is its
energy savings. Because the controller explicitly takes into account the system dynamics and heat
transfers, it effectively uses state measurements and optimizes control actions to save energy while
guaranteeing desired temperature profiles. In our case studies, the proposed control method saves
7.5–8% of energy compared to a traditional approach. The secondary benefit of the proposed
method is to reduce the frequency of compressor switchings. It is known that frequent switchings
of compressors accelerate their mechanical wear. We propose a conservative compressor control
approach that reduces fluctuations in suction pressure and thus decreases the compressor switching
frequency. In our case studies using a benchmark refrigeration system model, the proposed method
reduces the switching frequency by 54–71.6%.
The proposed control method, however, presents a theoretical and algorithmic challenge because
a bilevel combinatorial optimization associated with a dynamical system must be solved in near
real time. To overcome this challenge, we suggest two approximation algorithms that can solve
both of the inner and outer optimization problems at once. The first algorithm uses the linear
approximation method developed in our previous work [32]. The approximate problem is a linear
3
evaporator 1
refrigerant
(liquid)
display case 1
condenser
..
.
display case n
suction manifold
expansion
valve 1
compressor
rack
Figure 2: Schematic diagram of a supermarket refrigerator.
binary program, which can be solved by an efficient and scalable single-pass algorithm. In addition,
it simulates the dynamical system model only once to generate control actions at each time point.
We also show that the approximate solution obtained by this method has a provable suboptimality
bound. The second algorithm is based on the submodular structure in the optimization problem.
The inner optimization’s objective function is submodular because opening an expansion valve
when a smaller set of valves is open gives a greater marginal benefit than opening it when a larger set of valves is already open. We prove this intuitive submodularity property. Therefore,
a greedy algorithm can be adopted to obtain a (1 − 1/e)-optimal solution [18]. In our case studies,
the actual performance of the proposed controller using these two algorithms is 98.9–99.5% of the
optimal controller.
1.1 Related Work
Several optimization-based control methods for commercial refrigerators have been developed over
the past decade. One of the most popular methods is model predictive control (MPC), although it
is computationally challenging to apply standard MPC due to the switched dynamics of refrigeration systems. It is shown that the mixed logical dynamical framework is useful to solve small-size
problems with a piecewise affine approximation of a system model [4, 12]. However, the practicality
of this method is questionable due to the high dimensionality of practical problems for supermarket refrigerators, except for limited cases. To overcome this limitation, [24] carefully selects and
parametrizes optimization variables to formulate the problem as nonlinear MPC instead of hybrid
MPC. Nonetheless, this approach is computationally expensive because a nonlinear program with
many variables must be solved in each MPC iteration. An alternative approach using hierarchical
MPC is proposed in [28]. This method separates the control problem into two time scales: in every nonlinear MPC iteration, low-level temperature controllers are employed, and the high-level optimization task is to determine optimal parameters for these controllers. However, this approach still suffers from combinatorial growth of the search space. More recently, a sequential convex programming-based
method is shown to be computationally efficient in several case studies [10]. It iteratively solves
an optimization problem using convex programming, replacing the nonconvex cost function with
a convex approximation. In several numerical experiments, this heuristic method generates high-quality control signals, although it gives no theoretical performance guarantee. We believe that
our work is complementary to the aforementioned methods. One of our main contributions is to
develop two efficient and scalable algorithms for resolving the computational challenge in discrete
optimization problems associated with supermarket refrigeration systems. These algorithms can be
adopted as a tool for solving combinatorial optimization problems in the aforementioned methods.
We also propose an efficient control architecture built on these algorithms.
With advances in modern power systems, an important emerging application is using supermarket refrigeration systems for thermal storage [19, 25, 30, 16]. In this application, it is often
necessary to regulate the total power consumption of hundreds of refrigerators to provide reliable
demand response services to the power grid. Our work contributes to such demand response applications by providing scalable online optimization algorithms with performance guarantees that are
useful for solving large-scale problems. The utility of the proposed algorithms in demand response
is demonstrated in [31] and a case study is presented in Section 5.
1.2 Outline
The remainder of this paper is organized as follows. In Section 2, we describe an interconnected
hybrid system model of supermarket refrigerators and provide a simulation result when a traditional
set point- and PI-based controller is employed. We then present the proposed control method
based on bilevel online combinatorial optimization in Section 3. In Section 4, we provide two
efficient algorithms to solve the combinatorial optimization problem in near real time and examine their scalability. In Section 5, we compare the performance of the proposed controllers with that of the traditional controller and demonstrate their utility in automated demand response.
2 Switched and Interconnected Dynamics of Supermarket Refrigeration Systems
We consider a supermarket refrigerator in which multiple display cases are interconnected with one
another. For example, Fig. 1 shows a refrigerator that has 10 display cases. The temperature of
each display case is controlled by an evaporator unit, in which the refrigerant evaporates, absorbing heat from the display case. Let evaporator i be in charge of display case i for i = 1, · · · , n, where
n is the number of display cases in all the refrigerators. Several dynamic models of supermarket
refrigeration systems have been proposed [23, 13, 14, 21, 22, 26] (see also the references therein).
Among those, we use the benchmark model of a typical supermarket refrigeration system proposed
in [13] and widely used in [24, 28, 34, 30, 16]. This model is useful for simulating display case
temperatures and evaluating the performance of different controllers.
2.1 Display Cases and Evaporators
Display cases store food products and keep them refrigerated. This refrigeration is due to the heat
transfer between the food product and the cold air in the display cases. Let Tfood,i and Tair,i denote
the temperatures of the food product and the air in display case i. The heat transfer Qfood→air,i
between the food product and the air in display case i can then be modeled as
$$m_{food,i}\, c_{food,i}\, \dot{T}_{food,i} = -Q_{food \to air,i} = -k_{food-air}(T_{food,i} - T_{air,i}), \quad (2.1)$$
where mfood,i is the mass of the food product, cfood,i is the heat capacity of the food product and
kfood−air is the heat transfer coefficient between the food product and the air.
The display case air temperature is affected by the heat transfers from the food product ($Q_{food \to air,i}$), the ambient air ($Q_{amb \to air,i}$), the evaporator ($-Q_{air \to evap,i}$) and the neighboring display case air ($\sum_{j=1}^{n} Q_{j \to i}$). The refrigerant flow into an evaporator is controlled by its expansion
valve. Let $u_i$ be the valve control variable for evaporator $i$ such that
$$u_i(t) := \begin{cases} 0 & \text{if expansion valve } i \text{ is closed at time } t, \\ 1 & \text{otherwise.} \end{cases}$$
Expansion valve i controls the refrigerant injection into evaporator i and decreases the pressure
of the refrigerant if it is open, as shown in Fig. 2. Then, the dynamics of the display case air
temperature can be modeled as the following switched interconnected system:
mair,i cair,i Ṫair,i
= Qfood→air,i + Qamb→air,i − Qair→wall,i +
n
X
Qj→i
j=1
= kfood−air (Tfood,i − Tair,i ) + kamb−air (Tamb − Tair,i )
n
X
− kair−evap (Tair,i − Tevap ui ) +
ki,j (Tair,j − Tair,i ),
(2.2)
j=1
where Tamb is the ambient air temperature, Tevap is the refrigerant’s evaporation temperature,
kamb−air is the heat transfer coefficient between the ambient air and the display case air and ki,j is
the heat transfer coefficient between display case i’s air and display case j’s air. Note that ki,j = 0
if display cases i and j are not neighbors. For a more detailed model, one can separately consider
the dynamics of the evaporator wall temperature [24].1 However, the proposed model is a good
approximation because the heat transfer coefficient between the evaporator wall and the refrigerant
is five to ten times higher than other heat transfer coefficients [13].
The mass flow out of the evaporator can be computed as
$$f_i := \frac{1}{\Delta t}\, m_{ref,i},$$
where the refrigerant mass in the evaporator is controlled by the valve switching
$$m_{ref,i} = \begin{cases} m_{ref}^{max} & \text{if } u_i = 1, \\ 0 & \text{if } u_i = 0. \end{cases}$$
Depending on the specification of refrigerators, it can take a nontrivial amount of time to fill the evaporator with refrigerant. In this case, the dynamics of the refrigerant mass in the evaporator can be explicitly taken into account [24]. Alternatively, one can introduce a delay-time constant, $\tau$, and let $m_{ref,i}(t) = m_{ref}^{max}\, u_i(t - \tau)$ to model the effect of the time to fill up the evaporator.
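To make the display case model concrete, the following is a minimal forward-Euler simulation step for (2.1)–(2.2). This is our own sketch, not the authors' simulator: the parameter dictionary, the coupling matrix `K` (with `K[i, j]` equal to $k_{i,j}$), and the use of a fixed evaporation temperature are assumptions for illustration; in the full model, $T_{evap}$ depends on $P_{suc}$ through the refrigerant properties given in Section 2.3.

```python
import numpy as np

def step_temperatures(T_food, T_air, u, dt, p):
    """One forward-Euler step of the display case dynamics (2.1)-(2.2).

    T_food, T_air : length-n arrays of food and air temperatures (deg C)
    u             : length-n binary array of expansion valve states
    p             : dict of model parameters (see Table 1)
    """
    # (2.1): heat flows from the food product to the display case air.
    Q_food_air = p["k_food_air"] * (T_food - T_air)
    dT_food = -Q_food_air / (p["m_food"] * p["c_food"])

    # Neighbor coupling: sum_j k_ij (T_air_j - T_air_i), with K[i, j] = k_ij.
    K = p["K"]
    Q_neighbor = K @ T_air - K.sum(axis=1) * T_air

    # (2.2): air exchanges heat with food, ambient, evaporator, and neighbors.
    dT_air = (Q_food_air
              + p["k_amb_air"] * (p["T_amb"] - T_air)
              - p["k_air_evap"] * (T_air - p["T_evap"] * u)
              + Q_neighbor) / (p["m_air"] * p["c_air"])

    return T_food + dt * dT_food, T_air + dt * dT_air
```

Iterating this map with the valve law of Section 2.3 reproduces the qualitative synchronization behavior shown later in Fig. 3.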
2.2 Suction Manifold and Compressor Rack
As shown in Fig. 2, the evaporated refrigerant with low pressure from the outlet of the evaporator
is compressed by the electric motors in the compressor rack. Each refrigerator can have multiple compressors, and each compressor is switched ON or OFF. For example, all the compressors are turned ON when maximal compression is needed. The compressor rack is conventionally controlled by a PI controller to maintain the suction pressure within a bandwidth.
1 Alternatively, one can introduce a delay parameter, $\tau$, and replace $T_{evap}\, u_i(t)$ with $T_{evap}\, u_i(t - \tau)$ to explicitly take into account the effect of the evaporator wall temperature.
The suction manifold pressure $P_{suc}$ evolves with the following dynamics:
$$\dot{P}_{suc} = \frac{1}{V_{suc}\, r_{suc}} \left( \sum_{i=1}^{n} f_i - \rho_{suc} \sum_{i=1}^{n_c} F_{c,i} \right), \quad (2.3)$$
where Vsuc is the volume of the suction manifold, ρsuc is the density of the refrigerant in the suction
manifold, and rsuc := dρsuc /dPsuc . The variable Fc,i denotes the volume flow out of the suction
manifold controlled by compressor i. Let uc,i be the control variable for compressor i, where
uc,i = 0 represents that compressor i is OFF and uc,i = 1 represents that compressor i is ON. The
volume flow $F_{c,i}$ is then given by
$$F_{c,i} = k_c u_{c,i} := \frac{\eta V_{comp}}{n}\, u_{c,i},$$
where η is the volumetric efficiency of each compressor, and Vcomp denotes the compressor volume.
The total power consumption by the compressor rack is given by
$$p = \rho_{suc}(h_{oc} - h_{ic}) \sum_{i=1}^{n_c} F_{c,i},$$
where hic and hoc are the enthalpies of the refrigerant flowing into and out of the compressor,
respectively. The compressed refrigerant flows to the condenser and is liquefied by generating heat,
as shown in Fig. 2. The liquefied refrigerant flows to the expansion valve, and as a result, the
refrigeration circuit is closed.
2.3 Traditional Set-Point/PI-Based Control
A widely used control method consists of (i) a set-point based control of expansion valves, and (ii)
a PI control of compressors [13, 24]. Specifically, the following ON/OFF control law is traditionally
used for expansion valve $i$:
$$u_i(t) := \begin{cases} 1 & \text{if } T_{air,i}(t) > T_i^{max}, \\ 0 & \text{if } T_{air,i}(t) < T_i^{min}, \\ u_i(t^-) & \text{otherwise,} \end{cases}$$
where [Timin , Timax ] is the desirable temperature range for display case i. To control the suction
pressure, compressors are traditionally operated by a PI controller. This controller tracks the error
e(t) that measures the deviation of the suction pressure from the reference P̄suc over the dead band
DB, i.e.,
$$e(t) := \begin{cases} P_{suc}(t) - \bar{P}_{suc} & \text{if } |P_{suc}(t) - \bar{P}_{suc}| > DB, \\ 0 & \text{otherwise.} \end{cases}$$
Then, the number of ON compressors is determined by a thresholding rule depending on the following output of the PI controller:
$$u_{PI}(t) = K_P\, e(t) + \frac{1}{K_I} \int e(t)\, dt;$$
the larger the output, the more compressors the controller turns ON. More details about the thresholding rule can be found in [13, 24].
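For reference, the traditional controller described above can be sketched as follows. The hysteresis valve law and the dead-band error follow the definitions in the text; the final thresholding step that maps $u_{PI}$ to a compressor count is a hypothetical proportional rule of our own, since the exact rule is deferred to [13, 24].

```python
def valve_control(T_air_i, u_prev_i, T_min_i, T_max_i):
    """Hysteresis ON/OFF law for expansion valve i."""
    if T_air_i > T_max_i:
        return 1
    if T_air_i < T_min_i:
        return 0
    return u_prev_i  # hold the previous state inside [T_min, T_max]

class PICompressorController:
    """Dead-band PI control of the suction pressure."""

    def __init__(self, K_P, K_I, P_ref, DB, n_c):
        self.K_P, self.K_I = K_P, K_I
        self.P_ref, self.DB, self.n_c = P_ref, DB, n_c
        self.integral = 0.0

    def step(self, P_suc, dt):
        dev = P_suc - self.P_ref
        e = dev if abs(dev) > self.DB else 0.0   # dead band on the error
        self.integral += e * dt
        u_pi = self.K_P * e + self.integral / self.K_I
        # Hypothetical thresholding rule: scale u_pi to a compressor count.
        return min(max(int(round(u_pi * self.n_c)), 0), self.n_c)
```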
In our case studies, R134a is chosen as the refrigerant. Its relevant thermodynamic properties are
contained in [13]. For convenience, we summarize the properties as follows: $T_{evap} = -4.3544 P_{suc}^2 + 29.2240 P_{suc} - 51.2005$, $\Delta h = (0.0217 P_{suc}^2 - 0.1704 P_{suc} + 2.2988) \times 10^5$, $\rho_{suc} = 4.6073 P_{suc} + 0.3798$, $r_{suc} = -0.0329 P_{suc}^3 + 0.2161 P_{suc}^2 - 0.4742 P_{suc} + 5.4817$, and $\rho_{suc}(h_{oc} - h_{ic}) = (0.0265 P_{suc}^3 - 0.4346 P_{suc}^2 + 2.4923 P_{suc} + 1.2189) \times 10^5$. These formulas were obtained by fitting polynomials to experimental
data. The additional parameters used in the simulations are summarized in Table 1.
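The polynomial fits translate directly into code; a sketch follows (the function names are ours, with $P_{suc}$ in bar):

```python
def T_evap(P):
    """R134a evaporation temperature (deg C) as a function of P_suc (bar)."""
    return -4.3544 * P**2 + 29.2240 * P - 51.2005

def delta_h(P):
    """Enthalpy difference for the evaporator."""
    return (0.0217 * P**2 - 0.1704 * P + 2.2988) * 1e5

def rho_suc(P):
    """Refrigerant density in the suction manifold."""
    return 4.6073 * P + 0.3798

def r_suc(P):
    """r_suc := d(rho_suc)/d(P_suc), as fit independently in [13]."""
    return -0.0329 * P**3 + 0.2161 * P**2 - 0.4742 * P + 5.4817

def power_coeff(P):
    """rho_suc * (h_oc - h_ic), used in the compressor power formula."""
    return (0.0265 * P**3 - 0.4346 * P**2 + 2.4923 * P + 1.2189) * 1e5
```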
Table 1: Parameters used in simulations

Parameter    Value            Parameter    Value
m̄food,i      200 kg           cfood,i      1000 J/(kg·K)
mwall,i      260 kg           cwall,i      385 J/(kg·K)
mair,i       50 kg            cair,i       1000 J/(kg·K)
kfood−air    300 J/(s·K)      kair−evap    225 J/(s·K)
ki,j         500 J/(s·K)      kamb−evap    275 J/(s·K)
mmax_ref     1 kg             Vsuc         10 m³
η            0.81             Vcomp        0.2 m³/s
n            10               Tamb         20 °C
Timin        0 °C             Timax        5 °C
KP           0.1              KI           −0.8
P̄suc         1.4 bar          DB           0.3 bar
We perturbed the mass of food products in each display case by ±20% from the nominal value
m̄food,i. Despite this heterogeneity, the set point-based controller turns ON and OFF all the expansion valves almost identically, and therefore all the display case temperatures follow almost the same trajectory, as shown in Fig. 3 (a). This synchronization is due to the decentralized nature of the
set point-based controller: the control decision for expansion valve i depends only on its local
temperature. Intuitively, this decentralized controller is suboptimal because it does not actively
take into account the heat transfer between neighboring display cases. This inefficiency of the
traditional control approach motivates us to develop a new optimization-based control method
that explicitly considers the interdependency of display case temperature dynamics.
Another disadvantage resulting from the synchronization of expansion valves is the significant
fluctuation of suction pressure. Since the PI controller integrates the deviation of suction pressure
from its reference, the output uP I (t) presents large and frequent variations. As a result, the number
of ON compressors varies frequently, as shown in Fig. 3 (b). Frequent switching of compressors is a serious problem because it accelerates the mechanical degradation of the compressors. Our
strategy to discourage frequent compressor switchings is twofold: (i) our conservative compressor
control method tries to maintain Psuc (t) = P̄suc , not fully utilizing the pressure bandwidth ±DB,
and (ii) our online optimization-based controller indirectly desynchronizes the ON/OFF operation
of expansion valves. The details about the two control methods are presented in the following
sections. Unlike the traditional control approach, our proposed method is suitable for regulating
the total power consumption in real time. This feature is ideal for modern power system (so-called
‘smart grid’) applications, allowing supermarket units to follow a real-time regulation signal for
reducing peak demand or supporting a spinning reserve. Such applications of our control method to power systems are studied in [31], and one of them is presented in Section 5.4.
Figure 3: (a) The food temperatures in display cases 1–5, operated by the PI controller over 8 hours. The temperatures in all display cases are almost identical. (b) The number of ON compressors operated by the PI controller. Note that the profile presents frequent fluctuations.
3 Control via Online Combinatorial Optimization

3.1 Conservative Compressor Control
We control the compressor rack to (approximately) maintain the suction pressure as the reference
$\bar{P}_{suc}$, i.e.,
$$P_{suc}(t) \approx \bar{P}_{suc} \quad \forall t.$$
In other words, to make Ṗsuc ≡ 0 in (2.3), we set uc := (uc,1 , · · · , uc,nc ) such that the refrigerant
outflow from the suction manifold is equal to the inflow:
$$\rho_{suc} \sum_{i=1}^{n_c} k_c u_{c,i} \approx \sum_{i=1}^{n} f_i. \quad (3.1)$$
In practice, we may not be able to exactly satisfy this equality because each uc,i is either 0 or 1.
However, we assume that the compressor control action uc can be chosen to make the difference
between the outflow and the inflow negligible. This compressor control rule is suboptimal: it
induces a conservative operation of the compressor rack that does not fully utilize the pressure
bandwidth. However, this conservative control approach has a practical advantage: it does not
create significant compressor switchings. Therefore, it can potentially decelerate the mechanical
wear of compressors. Under this compressor control rule, the total power consumption can be
computed as
$$p = (h_{oc} - h_{ic}) \sum_{i=1}^{n} f_i = \frac{(h_{oc} - h_{ic})\, m_{ref}^{max}}{\Delta t} \sum_{i=1}^{n} u_i. \quad (3.2)$$

3.2 Bilevel Optimization Formulation
We consider a receding-horizon online optimization approach to generate control signals for expansion valves and compressors. Let {t0 , t1 , · · · , tk , tk+1 , · · · } be the time steps at which the control
action is optimized. For the sake of simplicity, we describe a one-step look-ahead optimization
method; however, this approach can be easily extended to multiple-step look-ahead optimization
(see Remark 2).
3.2.1 Inner problem for temperature management
At time tk , we control the expansion valves to minimize the following quadratic deviation from the
upper-bound Timax , i = 1, · · · , n:
$$J(\alpha) = \sum_{i=1}^{n} \int_{t_k}^{t_{k+1}} \left( T_{air,i} - T_i^{max} \right)_+^2\, dt,$$
where $(a)_+^2 = a^2 \cdot 1_{\{a \ge 0\}}$, assuming $T_{air}$ evolves according to (2.1) and (2.2). Specifically, the expansion
valve action at tk is generated as a solution to the following combinatorial optimization problem:
$$\begin{aligned} \min_{\alpha \in \{0,1\}^n} \quad & J(\alpha) & \text{(3.3a)} \\ \text{s.t.} \quad & \dot{x} = Ax + Bu + C, \quad x(t_k) = x^{meas} & \text{(3.3b)} \\ & u(t) = \alpha, \quad t \in (t_k, t_{k+1}] & \text{(3.3c)} \\ & \|\alpha\|_0 = \sum_{i=1}^{n} \alpha_i \le K. & \text{(3.3d)} \end{aligned}$$
Here, x := (Tfood , Tair ) and (3.3b) gives a linear system representation of the dynamics (2.1) and
(2.2). Note that xmeas represents (Tfood , Tair ) measured at t = tk .2 As specified in (3.3c), the control
action over $(t_k, t_{k+1}]$ is fixed as the solution $\alpha$. The last constraint (3.3d) essentially limits the power consumed by the refrigeration system to $K (h_{oc} - h_{ic})\, m_{ref}^{max} / \Delta t$ due to (3.2). Therefore, the choice of K is important for saving energy: as K decreases, the power consumption decreases.
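For completeness, here is one way to assemble the matrices in (3.3b) from the dynamics (2.1)–(2.2), with state $x = (T_{food}, T_{air})$. This sketch uses our own naming; the constant evaporator term enters $B$ because expanding $-k_{air-evap}(T_{air,i} - T_{evap}\, u_i)$ makes $k_{air-evap} T_{evap}$ the coefficient of $u_i$.

```python
import numpy as np

def build_linear_system(p):
    """Assemble A, B, C in x' = A x + B u + C with x = (T_food, T_air)."""
    n, K = p["n"], p["K"]                  # K[i, j] = k_ij (zero diagonal)
    a_food = p["k_food_air"] / (p["m_food"] * p["c_food"])
    inv_air = 1.0 / (p["m_air"] * p["c_air"])

    A = np.zeros((2 * n, 2 * n))
    A[:n, :n] = -a_food * np.eye(n)        # food loses heat to the air
    A[:n, n:] = a_food * np.eye(n)
    A[n:, :n] = p["k_food_air"] * inv_air * np.eye(n)
    diag = K.sum(axis=1) + p["k_food_air"] + p["k_amb_air"] + p["k_air_evap"]
    A[n:, n:] = inv_air * (K - np.diag(diag))

    B = np.zeros((2 * n, n))
    B[n:, :] = p["k_air_evap"] * p["T_evap"] * inv_air * np.eye(n)

    C = np.zeros(2 * n)
    C[n:] = p["k_amb_air"] * p["T_amb"] * inv_air
    return A, B, C
```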
3.2.2 Outer problem for energy efficiency
To generate an energy-saving control action, we minimize the number K of open expansion valves
while guaranteeing that the quadratic deviation J(α) from the upper-bound Timax , i = 1, · · · , n is
bounded by the threshold ∆. More precisely, we consider the following outer optimization problem:
$$\min \left\{ K \in \{0, \cdots, n\} \;\middle|\; J(\alpha^{opt}(K)) \le \Delta \right\}, \quad (3.4)$$
where $\alpha^{opt}(K)$ is a solution to the expansion valve optimization problem (3.3). Let $K^{opt}$ be a solution to this problem. Then, $\alpha^{opt}(K^{opt})$ is the expansion valve control action that saves the most energy while limiting the violation of the food temperature upper-bound $T_i^{max}$, $i = 1, \cdots, n$.
This outer optimization problem can be easily solved by searching over K from 0 in increasing order.
Once we find K̂ such that J(αopt (K̂)) ≤ ∆, we terminate the search and obtain the solution as
K opt := K̂. In the following section, we will show that this procedure can be integrated into
approximation algorithms for the inner optimization problem.
Then, as specified in (3.3c), the controller chooses $u^{opt}(t) := \alpha^{opt}(K^{opt})$ for $t \in (t_k, t_{k+1}]$. Furthermore, it determines the compressor control signal $u_c^{opt}$ such that $\sum_{i=1}^{n_c} u_{c,i}^{opt} \approx m_{ref}^{max} / (\rho_{suc} k_c \Delta t)$ using (3.1). If $P_{suc}(t) < \bar{P}_{suc}$, the controller rounds $m_{ref}^{max} / (\rho_{suc} k_c \Delta t)$ down to the next smaller integer and sets the number of ON compressors to that integer. If $P_{suc}(t) \ge \bar{P}_{suc}$, the controller rounds $m_{ref}^{max} / (\rho_{suc} k_c \Delta t)$ up to the nearest integer greater than or equal to it. The information flow in this control system is illustrated in Fig. 4.
2 If $T_{food}$ is not directly measured, an observer needs to be employed to estimate the state. Then, the control system uses the estimate $T_{food}^{est}$ instead of the actual measurement, as shown in Fig. 4.
Figure 4: The proposed control system with the outer and inner optimization modules.
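The compressor rounding rule above amounts to a few lines (a sketch with our own variable names; `inflow` denotes $\sum_i f_i$ computed from the optimized valve action):

```python
import math

def n_compressors_on(P_suc, P_ref, inflow, rho_suc, k_c, n_c):
    """Match the compressor outflow rho_suc * k_c * n_on to the refrigerant
    inflow as in (3.1), rounding down when the suction pressure is below
    its reference and up otherwise, as described in the text."""
    n_frac = inflow / (rho_suc * k_c)
    n_on = math.floor(n_frac) if P_suc < P_ref else math.ceil(n_frac)
    return min(max(n_on, 0), n_c)
```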
Remark 1. Our objective function J(α) only takes into account the violation of temperature upper-bounds. This choice is motivated by the fact that the food temperature in each display case increases as we close more expansion valves, which is summarized in Proposition 1. In other words, as we reduce the number K of open valves in the outer optimization problem, the possibility of violating temperature upper-bounds increases, while it becomes less likely to violate temperature lower-bounds. This monotonicity property of food temperatures justifies our focus on temperature upper-bounds.
Proposition 1. Let $T_{food,j}^{\alpha}$ and $T_{air,j}^{\alpha}$ denote the food and air temperatures in display case $j$ when the control action $\alpha$ is applied. Then, for any $\alpha, \beta \in \mathbb{R}^n$ such that
$$\alpha_i \le \beta_i, \quad i = 1, \cdots, n,$$
we have
$$T_{food,j}^{\alpha} \ge T_{food,j}^{\beta} \quad \text{and} \quad T_{air,j}^{\alpha} \ge T_{air,j}^{\beta}, \quad j = 1, \cdots, n.$$
Proof. The proof is contained in Appendix A.
4 Approximation Algorithms
We present two approximation methods for the inner optimization problem. One is based on linear approximation, and the other utilizes submodularity. Both yield approximate solutions with
guaranteed suboptimality bounds. We further show that, by simply modifying these approximation
algorithms for the inner optimization problem, we can obtain a near-optimal solution to the outer
optimization problem.
4.1 Linear Approximation-Based Optimization
We first consider a linear approximation-based approach to the inner combinatorial optimization
problem (3.3). It is convenient to work with the following value function:
$$V(\alpha) = J(0) - J(\alpha). \quad (4.1)$$
The value $V(\alpha)$ represents the reduction in the quadratic deviation from the upper-bound $T_i^{max}$, $i = 1, \cdots, n$, when exactly the expansion valves $j$ with $\alpha_j = 1$ are open. Note that this value function is normalized such that $V(0) = 0$. The Taylor expansion of $V$ at $0$ gives $V(\alpha) = DV(0)^\top \alpha + O(\|\alpha\|^2)$, assuming the derivative $DV$ is well-defined. This motivates us to
consider the following first-order approximation of the expansion valve optimization problem (3.3):
$$\max_{\alpha \in \{0,1\}^n} \quad DV(0)^\top \alpha \quad \text{s.t.} \quad \|\alpha\|_0 \le K. \quad (4.2)$$
The ith entry [DV (0)]i of the derivative represents the marginal benefit of opening expansion
valve i. Therefore, the approximate problem (4.2) can be interpreted as maximizing the marginal
benefit of valve operation while guaranteeing that the number of open valves is less than or equal
to K. A detailed method to define and compute the derivative can be found in [32]. Computing
the derivative should also take into account the dependency of the state x on the binary decision
variable α. For example, an adjoint-based approach can be used to handle this dependency [11].
The first advantage of the proposed approximation approach is that it gives an approximate solution with a provable suboptimality bound. The bound is a posteriori: it does not require the globally optimal solution $\alpha^{opt}$, only the solution $\alpha^\star$ of (4.2).
Theorem 1 ([32]). Let $\alpha^\star$ be a solution to the approximate problem (4.2). If $DV(0)^\top \alpha^\star \ne 0$, then the following suboptimality bound holds:
$$\rho\, V(\alpha^{opt}) \le V(\alpha^\star), \quad \text{where} \quad \rho = \frac{V(\alpha^\star)}{DV(0)^\top \alpha^\star} \le 1.$$
If $DV(0)^\top \alpha^\star = 0$, then $V(\alpha^{opt}) = V(0) = 0$, i.e., $0$ is an optimal solution.
Its proof is contained in Appendix B. This theorem suggests that the approximate solution’s
performance is greater than (ρ × 100)% of the globally optimal solution’s performance.
The second advantage of the proposed method is that it yields an efficient algorithm to solve the
approximate problem (4.2). Specifically, we design a very simple algorithm based on the ordering
of the entries of DV (0). Let d(·) denote the map from {1, · · · , n} to {1, · · · , n} such that
$$[DV(0)]_{d(i)} \ge [DV(0)]_{d(j)} \quad (4.3)$$
for any i, j ∈ {1, · · · , n} such that i ≤ j. Such a map can be constructed using a sorting algorithm
with O(n log n) complexity. Such a map may not be unique. We let αd(i) = 1 for i = 1, · · · , K
if [DV (0)]d(i) > 0. A more detailed algorithm to solve this problem is presented in Algorithm 1.
Note that it is a single-pass algorithm, i.e., it does not require multiple iterations. Furthermore, Lines
9–12 can be parallelized.
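In code, Algorithm 1 reduces to a sort followed by a single pass. The sketch below assumes the derivative vector `dV`, approximating $DV(0)$, has already been computed, e.g., by the adjoint method of [32], which is not shown.

```python
import numpy as np

def solve_linear_approx(dV, K):
    """Single-pass solution of (4.2): open the (at most K) valves with
    the largest positive marginal benefit [DV(0)]_i."""
    dV = np.asarray(dV)
    alpha = np.zeros(len(dV), dtype=int)
    order = np.argsort(-dV)            # the ordering map d of (4.3)
    for i in order[:K]:
        if dV[i] <= 0:                 # remaining marginal benefits are non-positive
            break
        alpha[i] = 1
    return alpha
```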
Remark 2. The proposed linear approximation method is applicable to multi-period optimization problems, in which the objective is given by $J(\alpha) := \sum_{k=1}^{N_{period}} J_k(\alpha_k)$ and the control variable is time-varying, i.e., $\alpha = (\alpha_1, \cdots, \alpha_{N_{period}}) \in \mathbb{R}^{n \times N_{period}}$. In such a case, we compute the derivative $DV_k$ of $V_k(\alpha_k) := J_k(0) - J_k(\alpha_k)$ for each $k$. The objective function can then be approximated as $\sum_{k=1}^{N_{period}} DV_k(0)^\top \alpha_k$, which is still linear in $\alpha$. Therefore, we can use the proposed algorithm.
4.2 Submodular Optimization
The second approach gives another approximate solution of the expansion valve optimization problem (3.3) with a suboptimality bound. This solution is generally different from the solution obtained
Algorithm 1: Algorithm for the approximate problem (4.2)
1  Initialization:
2    α ← 0;
3    i ← 1;
4  Construction of d:
5    Compute DV(0);
6    Sort the entries of DV(0) in descending order;
7    Construct d : {1, · · · , n} → {1, · · · , n} satisfying (4.3);
8  Solution of (4.2):
9    while [DV(0)]d(i) > 0 and i ≤ K do
10     αd(i) ← 1;
11     i ← i + 1;
12   end
by the first approach. Let $\Omega := \{1, \cdots, n\}$ be the set of expansion valves to be controlled. We define a set function, $\mathcal{V} : 2^\Omega \to \mathbb{R}$, as
$$\mathcal{V}(X) = V(I(X)),$$
where the value function $V$ is defined in (4.1) and $I(X) := (I_1(X), \cdots, I_n(X)) \in \{0,1\}^n$ is the indicator vector of the set $X$ such that $I_i(X) := 0$ if $i \notin X$ and $I_i(X) := 1$ if $i \in X$. In other words, $\mathcal{V}$ is a set function representation of $V$. The expansion valve optimization problem (3.3) is
equivalent to selecting the set X ⊆ Ω such that |X| ≤ K to maximize the value function V(X), i.e.,
$$\max_{X \in 2^\Omega} \quad \mathcal{V}(X) \quad \text{s.t.} \quad |X| \le K. \quad (4.4)$$
We observe that the value function $\mathcal{V}$ has a useful structure, called submodularity. It represents a diminishing-returns property: opening an expansion valve when a smaller set of valves is open gives a greater marginal benefit than opening it when a larger set of valves is already open.
Theorem 2. The set function $\mathcal{V} : 2^\Omega \to \mathbb{R}$ is submodular, i.e., for any $X \subseteq Y \subseteq \Omega$ and any $a \in \Omega \setminus Y$,
$$\mathcal{V}(X \cup \{a\}) - \mathcal{V}(X) \ge \mathcal{V}(Y \cup \{a\}) - \mathcal{V}(Y).$$
Furthermore, it is monotone, i.e., for any $X \subseteq Y \subseteq \Omega$,
$$\mathcal{V}(X) \le \mathcal{V}(Y).$$
Proof. See Appendix C.
The submodularity of $\mathcal{V}$ guarantees that Algorithm 2, which is a greedy algorithm, provides a $(1 - 1/e)$-optimal solution. In other words, the approximate solution's performance is greater than $(1 - 1/e) \approx 63\%$ of the oracle's performance. In our case study, the actual suboptimality ratio is 98.9%, which is significantly greater than this theoretical bound.
Theorem 3 ([18]). Algorithm 2 is a $(1 - 1/e)$-approximation algorithm. In other words, if we let $X^\star$ be the solution obtained by this greedy algorithm, then the following suboptimality bound holds:
$$\left( 1 - \frac{1}{e} \right) \mathcal{V}(X^{opt}) \le \mathcal{V}(X^\star),$$
where $X^{opt}$ is an optimal solution to (4.4).

Algorithm 2: Greedy algorithm for (4.4)
1  Initialization:
2    X ← ∅;
3    i ← 1;
4  Greedy algorithm:
5    while i ≤ K do
6      a⋆ ∈ arg max_{a∈Ω\X} V(X ∪ {a});
7      X ← X ∪ {a⋆};
8      i ← i + 1;
9    end
Lines 5–9 of Algorithm 2 make a locally optimal choice at each iteration. Therefore, the algorithm significantly reduces the search space, i.e., it does not search over all possible combinations of open expansion valves. When the expansion valve optimization problem (3.3) is extended to multi-stage optimization, a greedy algorithm achieves the same suboptimality bound using adaptive (or string) submodularity and monotonicity of the value function [8, 2, 15].
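A direct sketch of Algorithm 2 follows, where `value` is a callable evaluating $\mathcal{V}(X)$; in this setting, each call requires one simulation of the dynamics (3.3b), which is abstracted away here.

```python
def greedy_valve_selection(value, n, K):
    """Greedy maximization of the monotone submodular value function (4.4)."""
    X = set()
    for _ in range(min(K, n)):
        # Locally optimal choice: the valve with the largest marginal gain.
        best = max((a for a in range(n) if a not in X),
                   key=lambda a: value(X | {a}))
        X.add(best)
    return X
```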
4.3 Modified Algorithms for the Outer Optimization Problem
We now modify the two approximation algorithms for the inner problem (3.3) to solve the full
bilevel optimization problem. In both Algorithms 1 and 2, the expansion valve chosen to be open
at iteration i (line 10 of Algorithm 1 and line 7 of Algorithm 2) is independent of the selections at
later iterations. This independency plays an essential role in incorporating the outer optimization
problem into the algorithms. To be more precise, we compare the cases of K = l and K = l + 1.
Let $\alpha^l$ and $\alpha^{l+1}$ be the solutions in the two cases obtained by Algorithm 1. Since the expansion valve selected to be open at iteration $l + 1$ does not affect the choices at earlier iterations, we have $\alpha^{l+1}_{d(i)} = \alpha^{l}_{d(i)}$ for $i = 1, \cdots, l$. Therefore, we do not have to re-solve the entire inner optimization problem for $K = l + 1$ if we already have the solution for $K = l$; it suffices to run one more iteration for $i = l + 1$ to obtain $\alpha^{l+1}_{d(l+1)}$. This observation allows us to simply modify lines 8–12 in Algorithm 1 as Algorithm 3. As we can see in its while condition, we select expansion valves to be open until the
Algorithm 3: Modified version of Algorithm 1 for the outer optimization problem (3.4)
1  while [DV(0)]d(i) > 0 and J(α) > ∆ do
2    αd(i) ← 1;
3    i ← i + 1;
4  end
temperature upper-bound violation J(α) is less than or equal to the threshold ∆. Similarly, we
modify lines 5–9 of Algorithm 2 as Algorithm 4 to solve the outer problem.
4.4 Scalability
We now compare the complexity of Algorithms 3 and 4. Algorithm 3, which is based on a linear approximation, is single-pass in the sense that, after computing the derivative and ordering its entries only once, we can obtain the solution. Calculating the derivative requires $O(n^2 N_T)$ operations, where
Algorithm 4: Modified version of Algorithm 2 for the outer optimization problem (3.4)
1  while J(I(X)) > ∆ do
2    a⋆ ∈ arg max_{a∈Ω\X} V(X ∪ {a});
3    X ← X ∪ {a⋆};
4    i ← i + 1;
5  end
$N_T$ is the number of time points in the time interval $[t_k, t_{k+1}]$ used to integrate the dynamical system [32], if a first-order scheme is employed. Therefore, the total complexity including the sorting step is $O(n^2 N_T) + O(n \log n)$. On the other hand, Algorithm 4, which is a greedy algorithm, chooses a locally optimal solution at each stage. In other words, this iterative greedy-choice approach requires one to find an entry that maximizes the increment in the current payoff at every stage. Its complexity is $O(n^3 N_T)$. Therefore, Algorithm 3 is computationally more efficient as the number $n$ of display cases grows. Note, however, that Algorithm 4 is also scalable because its complexity is cubic in $n$ and linear in $N_T$.
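The two modified algorithms share the same stopping rule. The sketch below shows the greedy variant (Algorithm 4), with `J` a callable returning the violation $J(I(X))$; replacing the arg-max with the precomputed ordering of $DV(0)$ yields Algorithm 3. Note that maximizing $\mathcal{V}(X \cup \{a\})$ is equivalent to minimizing $J(I(X \cup \{a\}))$ by (4.1).

```python
def solve_outer_greedy(J, n, Delta):
    """Algorithm 4 (sketch): grow the open-valve set until the temperature
    violation J falls to the threshold Delta; K_opt is then len(X)."""
    X = set()
    while J(X) > Delta and len(X) < n:
        best = min((a for a in range(n) if a not in X),
                   key=lambda a: J(X | {a}))   # max V  <=>  min J
        X.add(best)
    return X
```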
5 Case Studies
In this section, we examine the performance of the proposed online optimization-based controllers.
For fair comparisons with the traditional controller, we use the parameter data reported in Section
2.3 with a refrigerator unit that has 10 display cases. Figs. 5 and 6 illustrate the simulation results of the proposed controllers.
5.1 Energy Efficiency
As opposed to the synchronized food temperature profiles controlled by the traditional method (see
Fig. 3 (a)), the proposed controllers induce alternating patterns of the temperatures as shown in Fig.
5. Such patterns result from the explicit consideration of heat transfers between neighboring display
cases in the optimization module through the constraint (3.3b), which represents the interconnected
temperature dynamics. Using the spatial heat transfers, the proposed controllers do not turn
ON or OFF all the expansion valves at the same time. Instead, they predict the temperature
evolution for a short period and selects the valves to turn ON that are effective to minimize
the deviation from the desirable temperature range during the period. As a result, the ON/OFF
operation of expansion valves is desynchronized, unlike in the case of the traditional controller. This
desynchronization maintains the temperatures near the upper-bound T max reducing temperature
fluctuations. Therefore, it intuitively improves energy efficiency. As summarized in Table 2, the
proposed controllers save 7.5–8% of energy.
Table 2: Energy savings by the proposed controllers

                PI      linear   submodular
average kW      11.24   10.34    10.40
energy saving   –       8.0%     7.5%
suboptimality   90.1%   99.5%    98.9%
Note that the outer optimization module minimizes the total energy consumption while the
inner optimization module is responsible for maintaining the temperature profiles in a desirable
range. When the bilevel combinatorial problem is exactly solved for all time, the average power consumption is 10.29 kW. Therefore, the two proposed controllers' performances are 99.5% and 98.9% of the optimal controller, although their theoretical suboptimality bounds are 39% and 63%.

Figure 5: The food temperatures (in 5 display cases out of 10) controlled by (a) the linear approximation-based algorithm (Algorithm 3), and (b) the submodular optimization algorithm (Algorithm 4).

Figure 6: The number of ON compressors controlled by (a) the linear approximation-based algorithm (Algorithm 3), and (b) the submodular optimization algorithm (Algorithm 4).
5.2 Reduced Compressor Switching
Another advantage of the proposed controllers is the considerable reduction in the number of compressor switching instances. By desynchronizing the switching instances of expansion valves in the inner optimization module, the proposed controllers significantly reduce the variation of suction pressure. Our conservative compressor control approach presented in Section 3.1 also helps to minimize the deviation of the suction pressure from its reference. As a result, the controllers significantly reduce the fluctuations in the number of ON compressors, as shown in Fig. 6. First, the maximum number of ON compressors is decreased from six to two. This reduction suggests that a mechanically more compact compressor, or a smaller number of compressors in the rack, may be enough if the proposed controllers are adopted. Second, the proposed controllers reduce the number of compressor switching instances by 54.0–71.6%, as summarized in Table 3. These infrequent compressor operation strategies help decelerate the mechanical degradation of compressors.
Table 3: Compressor switching reductions by the proposed controllers

                  PI    linear   submodular
# of switchings   324   92       149
reduction         –     71.6%    54.0%

5.3 Comparisons of the Two Proposed Controllers
Fig. 5 illustrates that the submodular optimization-based controller maintains the temperatures in a narrower range than the linear approximation-based controller. This feature is due to the fact that the greedy algorithm used in the submodular optimization-based method avoids violating the temperature upper bound in a locally optimal way. However, to keep the temperatures in a narrower range, this approach requires a faster adjustment of suction pressure than the linear approximation-based method. As a result, the proposed compressor controller switches compressors more frequently when the submodular optimization-based method is adopted (see Fig. 6 and Table 3). Furthermore, this frequent compressor switching induces an inefficient use of the compressor rack and therefore turns ON more compressors on average (in time). Therefore, the submodular optimization-based controller consumes slightly more energy than the linear approximation-based controller, as reported in Table 2.
5.4 Automated Demand Response under Real-Time Pricing
Real-time pricing of electricity refers to passing wholesale prices through to end users. At least in theory, it is shown to improve electricity market efficiency, among other benefits [5].3 However, consumers bear the risk of receiving high energy bills if they do not appropriately react to fluctuating wholesale prices. Such a risk transfer to end-users under real-time pricing can be reduced by automated demand response (ADR) [20] and can also be limited by contracts for ADR [33]. In this subsection, we demonstrate the utility of our method as a control tool for refrigeration ADR under real-time pricing. In particular, we consider the scenario of energy arbitrage: supermarket refrigeration systems automatically consume less energy when the real-time price is high and consume more when it is low. Note that real-time prices are often difficult
to predict and hence ADR must be performed in an online fashion by appropriately reacting to
fluctuating prices. The online optimization feature of our controllers allows them to adjust energy
consumption in response to real-time prices (by changing the number K of ON expansion valves
in real time). Fig. 7 (a) shows the real-time wholesale electricity price at the Austin node in the
Electric Reliability Council of Texas (ERCOT) on July 3, 2013 [7]. We use a simple thresholding
law for choosing K: if the electricity price is greater than $0.1/kWh, the controller allows up to
70% of expansion valves to turn ON; otherwise, it operates as usual. As summarized in Table
4, the proposed controllers save 14.3–15.0% of energy cost compared to a standard PI controller.
In addition, the temperature deviations from T max are less than 0.5◦ C as shown in Fig. 7 (b),
because right after the reduction in energy consumption the controller encourages enough cooling
to recover the desired temperature levels.4 Further applications of the proposed algorithms to ADR
3 However, real-time prices fail to capture the economic value of responsive loads in general [27].
4 Such post-cooling is mostly feasible due to the mean-reverting behavior of electricity prices, which discourages sustained price peaks [6]. We can also perform pre-cooling if prices are predictable or their distributional information is available. Such pre-cooling will increase the economic benefit of the proposed method.
Figure 7: (a) The real-time price data; the food temperatures (in five display cases out of ten) controlled by (b) the linear approximation-based algorithm (Algorithm 3), and (c) the submodular optimization algorithm (Algorithm 4).
aggregating a large number of supermarket refrigerator units can be found in [31].
Table 4: Operation costs per refrigerator under real-time pricing from 10am to 6pm

              PI      linear   submodular
cost          $5.67   $4.82    $4.86
cost saving   –       15.0%    14.3%
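The price-responsive rule used in this experiment is a one-line cap on the number $K$ of valves allowed to be ON (threshold and fraction as stated above; a sketch):

```python
def max_open_valves(price, n, threshold=0.1, cap=0.7):
    """Allow at most 70% of the n expansion valves to be ON when the
    real-time electricity price ($/kWh) exceeds the threshold."""
    return int(cap * n) if price > threshold else n
```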
6 Conclusions
The proposed controller explicitly takes into account the switched and interconnected dynamics,
and is therefore suitable for multi-case supermarket refrigeration systems. However, it has to
solve a bilevel combinatorial optimization problem associated with switched interconnected systems in near real time, which is a challenging task. To overcome this difficulty, we proposed two
polynomial-time approximation algorithms that are based on the structural properties of this optimization problem. These algorithms can also be adopted as a tool for solving combinatorial
optimization problems arising in existing MPC-based methods. We demonstrated the performance
of the proposed controllers through case studies using a benchmark refrigeration system model and
found that (i) they improve energy efficiency by 7.5–8%, (ii) they reduce the number of compressor
switchings by 54–71.6%, and (iii) they save 14.3–15% of operation cost under a demand response
scenario. In addition to conventional usages, the scalability of the proposed algorithms can contribute to an emerging methodology of controlling a large number of refrigerator units through
cloud computing.
A Proof of Proposition 1
Proof. We use the linear system representation (3.3b) of the food and air temperature dynamics (equations (2.1) and (2.2)). We first notice that
$$A_{i,j} \ge 0 \quad \forall i \ne j,$$
where $A_{i,j}$ represents the $(i,j)$th entry of the matrix $A$. Furthermore, $k_{air-evap} T_{evap} \le 0$ due to the non-positive evaporator temperature. Hence, we have
$$B_{i,j} \le 0 \quad \forall i, j.$$
Using Proposition III.2 in [3], we conclude that the system (3.3b) is input-monotone, such that for any $\alpha, \beta \in \mathbb{R}^n$ with $\alpha_i \le \beta_i$, $i = 1, \cdots, n$,
$$x_i^{\alpha} \ge x_i^{\beta}, \quad i = 1, \cdots, n,$$
where $x^{\alpha}$ denotes the solution of the system (3.3b) when its input is chosen as $\alpha$.
B Proof of Theorem 1

Proof. In (3.3b), we notice that
$$x(t) = e^{A(t - t_k)} x^{meas} + \int_{t_k}^{t} e^{A(t-s)} B \alpha\, ds,$$
which implies that $x(t)$ is affine in $\alpha$. Since $(\cdot)_+^2$ is convex, $J$ is convex in $\alpha$, and therefore $V = J(0) - J(\alpha)$ is concave with respect to $\alpha$ in the continuously relaxed space $\mathbb{R}^n$. Then, the result follows from Theorem 2 in [32].
C Proof of Theorem 2

Proof. Let $T_{food,i}^{X}$ denote the temperature of the food product in display case $i$ given that the expansion valves in $X$ are open. Due to the linearity of the system dynamics (2.1) and (2.2), $T_{food,i}^{X}$ is modular, i.e.,
$$T_{food,i}^{X} = \sum_{a \in X} T_{food,i}^{\{a\}}.$$
Therefore, for any $X \subseteq Y \subseteq \Omega$ and any $a \in \Omega \setminus Y$,
$$T_{food,i}^{X \cup \{a\}} - T_{food,i}^{X} = T_{food,i}^{Y \cup \{a\}} - T_{food,i}^{Y}.$$
Furthermore, Proposition 1 yields the following monotonicity result: for any $X \subseteq Y \subseteq \Omega$,
$$T_{food,i}^{X} \ge T_{food,i}^{Y},$$
i.e., as we open more expansion valves, the food temperature decreases. Lastly, the concavity of $\mathcal{V}(X) = \mathcal{V}(\emptyset) - \sum_{i=1}^{n} \int_{t_k}^{t_{k+1}} (T_{food,i}^{X} - T_i^{max})_+^2\, dt$ in $T_{food,i}^{X}$ implies that for any $X \subseteq Y \subseteq \Omega$ and any $a \in \Omega \setminus Y$,
$$\mathcal{V}(X \cup \{a\}) - \mathcal{V}(X) \ge \mathcal{V}(Y \cup \{a\}) - \mathcal{V}(Y).$$
Therefore, $\mathcal{V}$ is submodular. Its monotonicity follows from Proposition 1.
References
[1] ENERGY STAR Building Upgrade Manual Chapter 11: Supermarkets and Grocery Stores,
2008.
[2] S. Alaei and A. Malekian. Maximizing sequence-submodular functions and its application to
online advertising. arXiv:1009.4153 [cs.DM], 2010.
[3] D. Angeli and E. D. Sontag. Monotone control systems. IEEE Transactions on Automatic
Control, 48(10):1684–1698, 2003.
[4] A. Bemporad and M. Morari. Control of systems integrating logic, dynamics, and constraints.
Automatica, 35(3):407–427, 1999.
[5] S. Borenstein and S. P. Holland. On the efficiency of competitive electricity markets with
time-invariant retail prices. RAND Journal of Economics, 36(3):469–493, 2005.
[6] A. Cartea and M. G. Figueroa. Pricing in electricity markets: a mean reverting jump diffusion
model with seasonality. Applied Mathematical Finance, 12(4):313–335, 2005.
[7] Electric Reliability Council of Texas. Real-time price reports.
[8] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning
and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.
[9] M. Graziano and M. Pritoni. Gloudfridge: A cloud-based control system for commercial refrigeration systems. Technical report, ACEEE Summer Study on Energy Efficiency in Buildings,
2014.
[10] T. G. Hovgaard, S. Boyd, L. F. S. Larsen, and J. B. Jørgensen. Nonconvex model predictive
control for commercial refrigeration. International Journal of Control, 86(8):1349–1366, 2013.
[11] P. Kokotović and J. Heller. Direct and adjoint sensitivity equations for parameter optimization.
IEEE Transactions on Automatic Control, 12(5):609–610, 1967.
[12] L. F. S. Larsen, T. Geyer, and M. Morari. Hybrid model predictive control in supermarket
refrigeration systems. In Proceedings of 16th IFAC World Congress, 2005.
[13] L. F. S. Larsen, R. Izadi-Zamanabadi, and R. Wisniewski. Supermarket refrigeration system
- benchmark for hybrid system control. In Proceedings of 2007 European Control Conference,
2007.
[14] B. Li and A. G. Alleyne. A dynamic model of a vapor compression cycle with shut-down and
start-up operations. International Journal of Refrigeration, 33(3):538–552, 2010.
[15] Y. Liu, E. K. P. Chong, A. Pezeshki, and B. Moran. Bounds for approximate dynamic programming based on string optimization and curvature. In Proceedings of the 53rd IEEE Conference
on Decision and Control, 2014.
[16] T. Minko, R. Wisniewski, J. D. Bendtsen, and R. Izadi-Zamanabadi. Cost efficient optimization
based supervisory controller for supermarket subsystems with heat recovery. In Proceedings of
2015 European Control Conference, 2015.
[17] Navigant Consulting, Inc. Energy savings potential and R&D opportunities for commercial
refrigeration. Technical report, U.S. Department of Energy, 2009.
[18] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions – I. Mathematical Programming, 14(1):265–294, 1978.
[19] R. Pedersen, J. Schwensen, S. Sivabalan, C. Corazzol, S. E. Shafiei, K. Vinther, and J. Stoustrup. Direct control implementation of a refrigeration system in smart grid. In Proceedings of
2013 American Control Conference, 2013.
[20] M. A. Piette, S. Kiliccote, and G. Ghatikar. Design and implementation of an open, interoperable automated demand response infrastructure. Technical report, Lawrence Berkeley
National Laboratory, 2008.
[21] B. P. Rasmussen. Dynamic modeling for vapor compression systems–part i: Literature review.
HVAC&R Research, 18(5):934–955, 2012.
[22] B. P. Rasmussen. Dynamic modeling for vapor compression systems–part ii: Simulation tutorial. HVAC&R Research, 18(5):956–973, 2012.
[23] B. P. Rasmussen and A. G. Alleyne. Dynamic modeling and advanced control of air conditioning and refrigeration systems. Technical report, Air Conditioning and Refrigeration Center,
University of Illinois at Urbana-Champaign, 2006.
[24] D. Sarabia, F. Capraro, L. F. S. Larsen, and C. de Prada. Hybrid NMPC of supermarket
display cases. Control Engineering Practice, 17:428–441, 2009.
[25] S. E. Shafiei, R. Izadi-Zamanabadi, H. Rasmussen, and J. Stoustrup. A decentralized control
method for direct smart grid control of refrigeration systems. In Proceedings of the 52nd IEEE
Conference on Decision and Control, 2013.
[26] S. E. Shafiei, H. Rasmussen, and J. Stoustrup. Modeling supermarket refrigeration systems
for demand-side management. Energies, 6(2):900–920, 2013.
[27] R. Sioshansi and W. Short. Evaluating the impacts of real time pricing on the usage of wind
power generation. IEEE Transaction on Power Systems, 24(2):516–524, 2009.
[28] C. Sonntag, A. Devanathan, and S. Engell. Hybrid NMPC of a supermarket refrigeration
system using sequential optimization. In Proceedings of 17th IFAC World Congress, 2008.
[29] U.S. Department of Energy. Commercial real estate energy alliance. 2012.
[30] K. Vinther, H. Rasmussen, J. Stoustrup, and A. G. Alleyne. Learning-based precool algorithms
for exploiting foodstuff as thermal energy reserve. IEEE Transactions on Control Systems
Technology, 23(2):557–569, 2015.
[31] I. Yang. Risk Management and Combinatorial Optimization for Large-Scale Demand Response
and Renewable Energy Integration. PhD thesis, University of California, Berkeley, 2015.
[32] I. Yang, S. A. Burden, R. Rajagopal, S. S. Sastry, and C. J. Tomlin. Approximation algorithms for optimization of combinatorial dynamical systems. IEEE Transactions on Automatic
Control, 2016.
[33] I. Yang, D. S. Callaway, and C. J. Tomlin. Indirect load control for electricity market risk
management via risk-limiting dynamic contracts. In Proceedings of 2015 American Control
Conference, pages 3025–3031, 2015.
[34] Z. Yang, K. B. Rasmussen, A. T. Kieu, and R. Izadi-Zamanabadi. Fault detection and isolation
for a supermarket refrigeration system-part one: Kalman-filter-based methods. In Proceedings
of 18th IFAC World Congress, 2011.
The information bottleneck and geometric clustering
DJ Strouse∗ & David J. Schwab†
arXiv:1712.09657v1 [stat.ML] 27 Dec 2017
December 29, 2017
Abstract
The information bottleneck (IB) approach to clustering takes a joint distribution
P (X, Y ) and maps the data X to cluster labels T which retain maximal information
about Y (Tishby et al., 1999). This objective results in an algorithm that clusters data
points based upon the similarity of their conditional distributions P (Y | X). This is
in contrast to classic “geometric clustering” algorithms such as k-means and gaussian
mixture models (GMMs) which take a set of observed data points {xi }i=1:N and cluster them based upon their geometric (typically Euclidean) distance from one another.
Here, we show how to use the deterministic information bottleneck (DIB) (Strouse and
Schwab, 2017), a variant of IB, to perform geometric clustering, by choosing cluster
labels that preserve information about data point location on a smoothed dataset. We
also introduce a novel intuitive method to choose the number of clusters, via kinks in
the information curve. We apply this approach to a variety of simple clustering problems, showing that DIB with our model selection procedure recovers the generative
cluster labels. We also show that, for one simple case, DIB interpolates between the
cluster boundaries of GMMs and k-means in the large data limit. Thus, our IB approach to clustering also provides an information-theoretic perspective on these classic
algorithms.
1 Introduction
Unsupervised learning is a crucial component of building intelligent systems (LeCun,
2016), since such systems need to be able to leverage experience to improve performance
even in the absence of feedback. One aspect of doing so is discovering discrete structure in
data, a problem known as clustering (MacKay, 2002). In the typical setup, one is handed a
set of data points {xi }N
i=1 , and asked to return a mapping from data point label i to a finite
set of cluster labels c. The most basic approaches include k-means and gaussian mixture
models (GMMs). GMMs cluster data based on maximum likelihood fitting of a probabilistic
generative model. k-means can either be thought of as directly clustering data based on
geometric (often Euclidean) distances between data points, or as a special case of GMMs
with the assumptions of evenly sampled, symmetric, equal variance components.
∗ Department of Physics, Princeton University
† Initiative for the Theoretical Sciences, CUNY Graduate Center
The information bottleneck (IB) is an information-theoretic approach to clustering data
X that optimizes cluster labels T to preserve information about a third “target variable”
of interest Y . The resulting (soft) clustering groups data points based on the similarity in their conditional distributions over the target variable through the KL divergence,
KL[p(y | xi ) | p(y | xj )]. An IB clustering problem is fully specified by the joint distribution
P (X, Y ) and the tradeoff parameter β quantifying the relative preference for fewer clusters
and more informative ones.
At first glance, it is not obvious how to use this approach to cluster geometric data, where
the input is a set of data point locations $\{x_i\}_{i=1}^{N}$. For example, what is the target variable
Y that our clusters should retain information about? What should P (X, Y ) be? And how
should one choose the tradeoff parameter β?
Still et al. (2003) were the first to attempt to do geometric clustering with IB, and claimed
an equivalence (in the large data limit) between IB and k-means. Unfortunately, while much
of their approach is correct, it contained some fundamental errors that nullify the main
results. In the next section, we describe those errors and how to correct them. Essentially,
their approach did not properly translate geometric information into a form that could be
used correctly by an information-theoretic algorithm.
In addition to fixing this issue, we also choose to use a recently introduced variant of the
information bottleneck called the deterministic information bottleneck (DIB) (Strouse and
Schwab, 2017). We make this choice due to the different way in which IB and DIB use the
number of clusters provided to them. IB is known to use all of the clusters it has access to,
and thus clustering with IB requires a search both over the number of clusters Nc as well
as the parsimony-informativeness tradeoff parameter β (Slonim et al., 2005). DIB, on the other hand, has a built-in preference for using as few clusters as it can, and thus only requires a parameter search over β. Moreover, DIB's ability to select the number of clusters to use for a given β leads to an intuitive model selection heuristic based on the robustness of
a clustering result across β that we show can recover the generative number of clusters in
many cases.
In the next section, we more formally define the geometric clustering problem, the IB
approach of Still et al. (2003), and our own DIB approach. In section 3, we show that
our DIB approach to geometric clustering behaves intuitively and is able to recover the
generative number of clusters with only a single free parameter (the data smoothing scale
s). In section 4, we discuss the relationship between our approach and GMMs and k-means,
proving that at least in one simple case, DIB interpolates between GMM and k-means cluster
boundaries by varying the data smoothing scale s. Our approach thus provides a novel
information-theoretic approach to geometric clustering, as well as an information-theoretic
perspective on these classic clustering methods.
2 Geometric clustering with the (deterministic) information bottleneck
In a geometric clustering problem, we are given a set of N observed data points {xi }i=1:N
and asked to provide a weighting q(c | i) that categorizes data points into (possibly multiple)
clusters such that data points “near” one another are in the same cluster. The definition of
“near” varies by algorithm: for k-means, for example, points in a cluster are closer to their
own cluster mean than to any other cluster mean.
In an information bottleneck (IB) problem, we are given a joint distribution P (X, Y ) and
asked to provide a mapping q(t | x) such that T contains the “relevant” information in X for
predicting Y . This goal is embodied by the information-theoretic optimization problem:
$$q_{IB}^{*}(t \mid x) = \operatorname*{argmin}_{q(t \mid x)}\; I(X, T) - \beta I(T, Y), \quad (1)$$
subject to the Markov constraint T ↔ X ↔ Y . β is a free parameter that allows for
setting the desired balance between the compression encouraged by the first term and the
relevance encouraged by the second; at small values, we throw away most of X in favor of a
succinct representation for T , while for large values of β, we retain nearly all the information
that X has about Y .
This approach of squeezing information through a latent variable bottleneck might remind
some readers of a variational autoencoder (VAE) (Kingma and Welling, 2013), and indeed
IB has a close relationship with VAEs. As pointed out by Alemi et al. (2016), a variational
version of IB can essentially be seen as the supervised generalization of a VAE, which is
typically an unsupervised algorithm.
We are interested in performing geometric clustering with the information bottleneck.
For the purposes of this paper, we will focus on a recent alternative formulation of the IB,
called the deterministic information bottleneck (DIB) (Strouse and Schwab, 2017). We do
this because the DIB's cost function more directly encourages the use of as few clusters as
possible, so initialized with n_c^max clusters, it will typically converge to a solution with far
fewer. Thus, it has a form of model selection built in that will prove useful for geometric
clustering (Strouse and Schwab, 2017). IB, on the other hand, will tend to use all n_c^max
clusters, and thus requires an additional search over this parameter (Slonim et al., 2005).
DIB also differs from IB in that it leads to a hard clustering instead of a soft clustering.
Formally, the DIB setup is identical to that of IB except that the mutual information
term I(X; T ) in the cost functional is replaced with the entropy H(T ):
q*_DIB(t | x) = argmin_{q(t|x)} [ H(T) − βI(T, Y) ].   (2)
This change to the cost functional leads to a hard clustering with the form (Strouse and
Schwab, 2017):
q*_DIB(t | x) = δ(t − t*(x))   (3)
t*(x) = argmax_t [ log q(t) − βd(x, t) ]   (4)
d(x, t) ≡ KL[ p(y | x) ‖ q(y | t) ]   (5)
q(t) = Σ_x q(t | x) p(x)   (6)
q(y | t) = (1/q(t)) Σ_x q(t | x) p(x) p(y | x),   (7)
where the above equations are to be iterated to convergence from some initialization. The
IB solution (Tishby et al., 1999) simply replaces the first two equations with:
q*_IB(t | x) = ( q(t) / Z(x, β) ) e^{−βd(x,t)},   (8)
which can be seen as replacing the argmax in DIB with an exponential and a soft max.
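To make the updates above concrete, here is a small sketch (ours, not code from either paper) of one (D)IB iteration in Python/NumPy. The array names, the tolerance-free structure, and the small epsilon guard are our own assumptions; the hard branch implements the DIB rules (3)-(7), and the soft branch implements the IB rule (8).

import numpy as np

def dib_step(q_t_given_x, p_x, p_y_given_x, beta, deterministic=True):
    """One (D)IB iteration: eqns (3)-(7) for DIB, eqn (8) for IB.

    q_t_given_x : (n_x, n_t) current encoder
    p_x         : (n_x,) marginal over X
    p_y_given_x : (n_x, n_y) conditionals p(y | x)
    """
    # q(t) = sum_x q(t|x) p(x)                            (eqn 6)
    q_t = q_t_given_x.T @ p_x
    keep = q_t > 0
    # q(y|t) = (1/q(t)) sum_x q(t|x) p(x) p(y|x)          (eqn 7)
    q_y_given_t = (q_t_given_x * p_x[:, None]).T @ p_y_given_x
    q_y_given_t[keep] /= q_t[keep, None]
    # d(x,t) = KL[p(y|x) || q(y|t)]                       (eqn 5)
    eps = 1e-300
    log_ratio = np.log(p_y_given_x[:, None, :] + eps) - np.log(q_y_given_t[None, :, :] + eps)
    d = np.sum(p_y_given_x[:, None, :] * log_ratio, axis=2)
    score = np.log(q_t + eps)[None, :] - beta * d
    new_q = np.zeros_like(q_t_given_x)
    if deterministic:
        # DIB: hard assignment t*(x) = argmax_t [log q(t) - beta d(x,t)]   (eqns 3-4)
        new_q[np.arange(len(p_x)), np.argmax(score, axis=1)] = 1.0
    else:
        # IB: soft(max) assignment, eqn (8)
        new_q = np.exp(score - score.max(axis=1, keepdims=True))
        new_q /= new_q.sum(axis=1, keepdims=True)
    return new_q

In use, one would start from a random assignment and call dib_step repeatedly until the assignments stop changing, matching the "iterated to convergence from some initialization" prescription above.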
The (D)IB is referred to as a "distributional clustering" algorithm (Slonim and Tishby,
2001) due to the KL divergence term d(x, t) = KL[p(y | x) ‖ q(y | t)], which can be seen
as measuring how similar the data point conditional distribution p(y | x) is to the cluster
conditional, or mixture of data point conditionals, q(y | t) = Σ_x q(x | t) p(y | x). That is,
a candidate point x′ will be assigned to a cluster based upon how similar its conditional
p(y | x′) is to the conditionals p(y | x) for the data points x that make up that cluster.
Thus, both DIB and IB cluster data points based upon the conditionals p(y | x).
To apply (D)IB to a geometric clustering problem, we must choose how to map the
geometric clustering dataset {xi }i=1:N to an appropriate IB dataset P (X, Y ). First, what
should X and Y be? Since X is the data being clustered by IB, we’ll choose that to be the
data point index i. As for the target variable Y that we wish to maintain information about,
it seems reasonable to choose the data point location x (though we will discuss alternative
choices later). Thus, we want to cluster data indices i into cluster indices c in a way that
maintains as much information about the location x as possible (Still et al., 2003).
Now, how should we choose the joint distribution p(i, x) = p(x | i) p(i)? At first glance,
one might choose p(x | i) = δxxi , since data point i was observed at location xi . The reason
not to do this lies with the fact that (D)IB is a distributional clustering algorithm, as
discussed two paragraphs above. Data points are compared to one another through their
conditionals p(x | i), and with the choice of a delta function, there will be no overlap unless
two data points are on top of one another. That is, choosing p(x | i) = δ_{x,x_i} leads to a
KL divergence that is either infinite for data points at different locations, or zero for data
points that lie exactly on top of one another (KL[p(x | i) ‖ p(x | j)] vanishes if x_i = x_j and diverges otherwise). Trivially,
the resulting clustering would assign each data point to its own cluster, grouping only data
points that are identical. Put another way, all relational information in an IB problem lies
in the joint distribution P (X, Y ). If one wants to perform geometric clustering with an IB
approach, then geometric information must somehow be injected into that joint distribution,
and a series of delta functions does not do that. A previous attempt at linking IB and k-means made this mistake (Still et al., 2003). Subsequent algebraic errors were tantamount
to incorrectly introducing geometric information into IB, precisely in the way that such
geometric information appears in k-means, resulting in an algorithm that is not IB. We
describe these errors in more detail in an appendix (section 6).
Based on the problems identified with using delta functions, a better choice for the conditionals is something spatially extended, such as:
p(x | i) ∝ exp( −d(x, x_i) / 2s² ),   (9)
where s sets the geometric scale or units of distance, and d is a distance metric, such as
the Euclidean distance d(x, xi ) = kx − xi k2 . If we indeed use the Euclidean distance, then
[Figure 1: top row, scatterplot with smoothed distribution overlaid; bottom row, heat map of P(i, x), with axes data point index i versus data point location x.]
Figure 1: Illustration of data smoothing procedure. Example dataset with one symmetric and one skew cluster. Top row: scatterplot of data points with smoothed probability
distribution overlaid. Bottom row: heat map of the joint distribution P(i, x) that is fed into
DIB. The two spatial dimensions in the top row are binned and concatenated into a single
dimension (on the horizontal axis) in the bottom row, which is the source of the "striations."
p(x | i) will be (symmetric) gaussian (with variance s2 ), and this corresponds to gaussian
smoothing our data. In any case, the obvious choice for the marginal is p(i) = 1/N, where
N is the number of data points, unless one has a reason a priori to favor certain data
points over others. These choices for p(i) and p(x | i) determine completely our dataset
p(i, x) = p(x | i) p(i). Figure 1 contains an illustration of this data smoothing procedure.
We will explore the effect of the choice of smoothing scale s throughout this paper.
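As a concrete illustration of this construction, the following sketch (ours) builds p(i) and p(x | i) on a finite grid using the gaussian choice of eqn (9). The grid resolution and the 3s padding are arbitrary assumptions, not choices made in the paper.

import numpy as np

def smoothed_joint(points, s, grid_size=32):
    """Build the (D)IB dataset from data: p(x|i) from eqn (9) with squared
    Euclidean distance, and the uniform marginal p(i) = 1/N.

    points : (N, 2) observed data locations; x is discretized on a grid_size^2 grid.
    """
    lo, hi = points.min(0) - 3 * s, points.max(0) + 3 * s
    gx, gy = [np.linspace(lo[k], hi[k], grid_size) for k in (0, 1)]
    grid = np.stack(np.meshgrid(gx, gy, indexing="ij"), -1).reshape(-1, 2)   # (G, 2)
    sq = ((points[:, None, :] - grid[None, :, :]) ** 2).sum(-1)              # ||x - x_i||^2
    p_x_given_i = np.exp(-sq / (2 * s ** 2))                                 # eqn (9)
    p_x_given_i /= p_x_given_i.sum(1, keepdims=True)    # normalize each conditional
    p_i = np.full(len(points), 1.0 / len(points))       # uniform marginal p(i) = 1/N
    return p_i, p_x_given_i                             # joint is p_i[:, None] * p_x_given_i

The returned pair plays the role of P(X, Y) = p(i, x) in the DIB iteration sketched earlier, with X the index i and Y the location x.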
With the above choices, we have a fully specified DIB formulation of a geometric clustering
problem. Using our above notational choices, the equations for the nth step in the iterative
DIB solution are (Strouse and Schwab, 2017):
c*(n)(i) = argmax_c [ log q^(n−1)(c) − β d_n(i, c) ]   (10)
d_n(i, c) ≡ KL[ p(x | i) ‖ q^(n)(x | c) ]   (11)
q^(n)(c | i) = δ( c − c*(n)(i) )   (12)
q^(n)(c) = n_c^(n) / N   (13)
q^(n)(x | c) = Σ_i q^(n)(i | c) p(x | i)   (14)
= (1/n_c^(n)) Σ_{i : c*(n)(i)=c} p(x | i),   (15)
where n_c^(n), the number of data points assigned to cluster c at step n, is n_c^(n) ≡ N q^(n)(c) = Σ_i q^(n)(c | i) = |{ i : c*(n)(i) = c }|.
Note that this solution contains β as a free parameter. As discussed above, it allows us to
set our preference between solutions with fewer clusters and those that retain more spatial
information. It is common in the IB literature to run the algorithm for multiple values of
β and to plot the collection of solutions in the “information plane” with the relevance term
I(Y ; T ) on the y-axis and the compression term I(X; T ) on the x-axis (Palmer et al., 2015;
Creutzig et al., 2009; Chechik et al., 2005; Slonim et al., 2005; Still and Bialek, 2004; Tishby
and Zaslavsky, 2015; Rubin et al., 2016; Strouse and Schwab, 2017; Ravid and Tishby, 2017).
The natural such plane for the DIB is with the relevance term I(Y ; T ) on the y-axis and its
compression term H(T ) on the x-axis (Strouse and Schwab, 2017). The curve drawn out by
(D)IB solutions in the information plane can be viewed as a Pareto-optimal boundary of how
much relevant information can be extracted about Y given a fixed amount of information
about X (IB) or representational capacity by T (DIB) (Strouse and Schwab, 2017). Solutions
lying below this curve are of course suboptimal, but a priori, the (D)IB formalism doesn’t tell
us how to select a single solution from the family of solutions lying on the (D)IB boundary.
Intuitively, however, when faced with a boundary of Pareto-optimality, if we must pick one
solution, it is best to choose one at the “knee” of the curve. Quantitatively, the “knee” of the
curve is the point where the curve has its maximum magnitude second derivative. In the
most extreme case, the second derivative is infinite when there is a “kink” in the curve, and
thus the largest kinks might correspond to solutions of particular interest. In our case, since
the slope of the (D)IB curve at any given solution is β^{−1} (which can be read off from the
cost functionals), kinks indicate solutions that are valid over a wide range of β. So large
kinks also correspond to robust solutions, in the sense that they optimize a wide range of
(D)IB tradeoffs. Quantitatively, we can measure the size of a kink by the angle θ of the
discontinuity it causes in the slope of the curve; see figure 2 for details. We will show in
the next section that searches for solutions with large θ result in recovering the generative
cluster labels for geometric data, including the correct number of clusters.
Note that this model selection procedure would not be possible if we had chosen to use
IB instead of DIB. IB uses all the clusters available to it, regardless of the choice of β. Thus,
all solutions on the curve would have the same number of clusters anyway, so any knees or
kinks cannot be used to select the number of clusters.
[Figure 2: sketch of a DIB information-plane curve, DIB informativeness term I(c, x) versus DIB compression term H(c), with a kink where two dotted tangent lines meet at angle θ.]
Figure 2: “Kinks” in DIB information curve as model selection. β_min and β_max are
the smallest and largest β at which the solution at the kink is valid. Thus, β_min^{−1} and β_max^{−1} are
the slopes of the upper and lower dotted lines. The “kink angle” is then θ = π/2 − arctan(β_min) −
arctan(β_max^{−1}). It is a measure of how robust a solution is to the choice of β; thus high values
of θ indicate solutions of particular interest.
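A hedged sketch (ours) of the kink-angle computation implied by the caption above: given the smallest and largest β at which a particular clustering is optimal, the angle follows directly from the stated formula. The function name and the grouping-by-assignment convention are our own assumptions.

import numpy as np

def kink_angle(beta_min, beta_max):
    """Kink angle of figure 2: theta = pi/2 - arctan(beta_min) - arctan(1/beta_max)."""
    return np.pi / 2 - np.arctan(beta_min) - np.arctan(1.0 / beta_max)

def kink_angles(solutions):
    """solutions: dict mapping a hashable clustering (e.g. a tuple of labels)
    to the sorted list of beta values at which it was returned by DIB."""
    return {sol: kink_angle(min(bs), max(bs)) for sol, bs in solutions.items()}

In practice one sweeps β, records which clustering each run converges to, and then reads off the clustering with the largest θ.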
3 Results: geometric clustering with DIB
We ran the DIB as described above on four geometric clustering datasets, varying the
smoothing width s (see eqn 9) and tradeoff parameter β, and measured for each solution the
fraction of spatial information extracted Ĩ(c; x) = I(c; x)/I(i; x)¹ and the number of clusters
used n_c, as well as the kink angle θ. We iterated the DIB equations above just as in Strouse and
Schwab (2017) with one difference. Iterating greedily from some initialization can lead to
local minima (the DIB optimization problem is non-convex). To help overcome suboptimal
solutions, upon convergence, we checked whether merging any two clusters would improve
the value L of the cost functional in eqn 2. If so, we chose the merging with the highest
such reduction, and began the iterative equations again. We repeated this procedure until
the algorithm converged and no merging reduced the value of L. We found that these
“non-local” steps worked well in combination with the greedy “local” improvements of the
DIB iterative equations. While not essential to the function of DIB, this improvement in
performance produced cleaner information curves with less “noise” caused by convergence to
local minima.
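The following sketch (ours) illustrates one way such a non-local merge step could look; dib_cost evaluates the DIB objective of eqn (2) for a hard clustering, and all names here are assumptions rather than the authors' code.

import numpy as np

def dib_cost(q_c_given_i, p_i, p_x_given_i, beta):
    """L = H(C) - beta * I(C; X), computed from the current hard clustering."""
    q_c = q_c_given_i.T @ p_i
    keep = q_c > 0
    q_x_given_c = (q_c_given_i * p_i[:, None]).T @ p_x_given_i
    q_x_given_c[keep] /= q_c[keep, None]
    H_c = -np.sum(q_c[keep] * np.log(q_c[keep]))
    p_x = p_i @ p_x_given_i
    ratio = np.where((q_x_given_c > 0) & (p_x[None, :] > 0), q_x_given_c / p_x[None, :], 1.0)
    I_cx = np.sum((q_c[:, None] * q_x_given_c) * np.log(ratio))
    return H_c - beta * I_cx

def best_merge(q_c_given_i, p_i, p_x_given_i, beta):
    """Try merging every pair of used clusters; return the merged assignment
    with the largest improvement in L, or None if no merge helps."""
    base = dib_cost(q_c_given_i, p_i, p_x_given_i, beta)
    used = np.where(q_c_given_i.sum(0) > 0)[0]
    best, best_q = base, None
    for a in used:
        for b in used[used > a]:
            q = q_c_given_i.copy()
            q[:, a] += q[:, b]          # move cluster b's points into cluster a
            q[:, b] = 0
            c = dib_cost(q, p_i, p_x_given_i, beta)
            if c < best:
                best, best_q = c, q
    return best_q

Alternating best_merge with the greedy local iteration corresponds to the procedure described above: iterate to convergence, attempt the most beneficial merge, and repeat until no merge lowers L.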
Results are shown in figure 3. Each large row represents a different dataset. The left
column shows fractional spatial information Ĩ(c; x) versus number of clusters used n_c, stacked
by smoothing width s. The center column shows the kink angle θ for each cluster number
n_c, again stacked by smoothing width s.² Finally, the right column shows example solutions.
In general, note that as we increase β, we move right along the plots in the left column,
that is towards higher number of clusters n_c and more spatial information Ĩ(c; x). Not
all values of n_c are present because while varying the implicit parameter β, DIB will not
¹ Note that I(i; x) is an upper bound on I(c; x) due to the data processing inequality (Cover and Thomas, 2006), so Ĩ(c; x) is indeed the fraction of potential geometric information extracted from the smoothed P(i, x).
² Note that this is not the same as the information plane curve from figure 2. While the y-axes are the same (up to the normalization), the x-axes are different.
[Figure 3 residue omitted: four rows of scatter panels plotting spatial information Ĩ(c, x) versus number of clusters n_c, kink angle θ (deg) versus n_c across smoothing scales s, and example clusterings for n_c = 2 through 12.]
Figure 3: Results: model selection and clustering with DIB. Results for four datasets.
Each row represents a different dataset. Left column: fraction of spatial information extracted, Ĩ(c; x) = I(c; x)/I(i; x), versus number of clusters used, n_c, across a variety of smoothing
scales, s. Center column: kink angle θ (of the I(c; x) vs H(c) curve) versus number of
clusters used, n_c, across a variety of smoothing scales, s. Right column: example resulting
clusters.
necessarily “choose” to use all possible cluster numbers. For example, for small smoothing
width s, most points won’t have enough overlap in p(x | i) with their neighbors to support
solutions with few clusters, and for large smoothing width s, local spatial information is
thrown out and only solutions with few clusters are possible. More interestingly, DIB may
retain or drop solutions based on how well they match the structure of the data, as we will
discuss for each dataset below. Additionally, solutions that match well the structure in the
data (for example, ones with nc matched to the generative parameters) tend to be especially
robust to β, that is they have a large kink angle θ. Thus, θ can be used to perform model
selection. For datasets with structure at multiple scales, the kink angle θ will select different
solutions for different values of the smoothing width s. This allows us to investigate structure
in a dataset at a particular scale of our choosing. We now turn to the individual datasets.
The first dataset (top large row) consists of 3 equally spaced, equally sampled symmetric
gaussian clusters (see solutions in right column). We see that the 3-cluster solution stands
out in several ways. First, it is robust to spatial scale s. Second, the 3-cluster solution
extracts nearly all of the available spatial information; solutions with n_c ≥ 4 extract little
extra Ĩ(c; x). Third, and perhaps most salient, the 3-cluster solution has by far the largest
value of kink angle θ across a wide range of smoothing scales. In the right column, we show
examples of 3 and 4-cluster solutions. Note that while all 3-cluster solutions look exactly
like this one, the 4-cluster solutions vary in how they chop one true cluster into two.
The second dataset (second row) consists of 3 more equally sampled symmetric gaussian
clusters, but this time not equally spaced; two are much closer to one another than the
third. This is a dataset with multiple scales present, thus we should expect that the number
of clusters picked out by any model selection procedure, e.g. kink angle, should depend on
the spatial scale of interest. Indeed, we see that to be true. The 3-cluster solution is present
for all smoothing widths shown, but is only selected out as the best solution by kink angle
for intermediate smoothing widths (s = 2). For large smoothing widths (s = 8), we see
that the 2-cluster solution is chosen as best. For smoothing widths in between (s = 4),
the 2 and 3-cluster solutions are roughly equally valid. In terms of spatial information, the
2 and 3-cluster solutions are also prominent, with both transitions from n_c = 1 → 2 and
n_c = 2 → 3 providing significant improvement in Ĩ(c; x) (but little improvement for more
fine-grained clusterings).
The third dataset (third row) features even more multi-scale structure, with 5 symmetric,
equally sampled gaussians, again with unequal spacing. Sensible solutions exist for n_c = 2−5,
and this can be seen by the more gradual rise of the fractional spatial information Ĩ(c; x)
with n_c in that regime. We also again see a transition in the model selection by θ from the
5-cluster solution at small smoothing widths (s = 1, 2) and the 2-cluster solution at larger
smoothing widths (s = 8), with intermediate nc favoring those and intermediate solutions.
Example clusters for nc = 2 − 5 are shown at right.
Finally, we wanted to ensure that DIB and our model selection procedure would not
hallucinate structure where there is none, so we applied it to a single gaussian blob, with
the hope that no solution with n_c > 1 would stand out and prove robust to β. As can be seen
in the fourth row of figure 3, that is indeed true. No solution at any smoothing width had
particularly high kink angle θ, and no solution remained at the “knee” of the Ĩ(c; x) versus
n_c curve across a wide range of smoothing widths.
Overall, these results suggest that DIB on smoothed data is able to recover generative
geometric structure at multiple scales, using built-in model selection procedures based on
identifying robust, spatially informative solutions.
4 Results: DIB vs GMMs & k-means
Here we show that in the limit of infinite data and small smoothing scale s, the behavior
of (D)IB is intimately related to the hard cluster boundaries implied by GMMs. We assume
we have one gaussian cluster centered at µ1 = (0, 0) with covariance Σ1 = diag(σ1², σ2²), and
a second gaussian cluster centered at µ2 = (L, 0) with covariance Σ2 = diag(σ², σ²). If we
have a mixture model with weights w1 and w2, then the hard maximum likelihood boundary
between these two clusters in the (x1, x2) plane is given by:
T1 ≡ x1²/(2σ1²) + x2²/(2σ2²) + log σ1σ2
T2 ≡ (1/(2σ²)) (x1² + x2² + L² − 2Lx1) + log σ²
log w1 − T1 = log w2 − T2.
On the other hand, the (D)IB algorithm would classify a new test point at location (x1, x2),
gaussian smoothed by s, based on the KL divergence between its smoothed distribution and
the two cluster gaussians:
KL1 = s²(σ1² + σ2²)/(2σ1²σ2²) + x1²/(2σ1²) + x2²/(2σ2²) − k/2 + log(σ1σ2/s²)   (16)
KL2 = ks²/(2σ²) + (1/(2σ²)) ((x1 − L)² + x2²) − k/2 + log(σ²/s²),   (17)
where k = 2 is the number of dimensions. The boundary implied by DIB is found by
setting:
log w1 − βKL1 = log w2 − βKL2.   (18)
Notice that for small smoothing width s → 0 and either β = 1 or evenly sampled clusters
w1 = w2, this is identical to the hard boundary implied by the GMM. For β > 1, w1 ≠ w2,
and small smoothing width, we see that, compared with a GMM, DIB encourages capturing more information about spatial location at the expense of using clusters more equally.
Put another way, the effect of the cluster prior term log(w1/w2) is reduced by pulling it closer
to zero, i.e. replacing it with log(w1/w2)^{1/β}. This provides an interesting information-theoretic interpretation of GMMs and also shows the manner in which clustering with DIB is a
generalization.
To see the effect of larger smoothing widths, we compared the numerically calculated
DIB, GMM, and k-means cluster boundaries for the “true” assignments with nc = 2 over
a range of smoothing widths (see figure 4). The data consisted of 1000 points sampled
equally (w1 = w2 ) from one isotropic and one skew gaussian as shown. We can see that for
small smoothing widths, the DIB boundary indeed approaches that of the GMM. For larger
smoothing widths, the effect of the “shape” of the clusters is muted and the DIB boundary
approaches k-means. Note, however, that this is just one particular example, and DIB need
not approach k-means in the large-s limit in general.
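As a rough numerical illustration of eqns (16)-(18) (ours; the parameter values are arbitrary and are not those of figure 4), one can check that the DIB decision rule collapses onto the GMM maximum-likelihood boundary as s → 0 with β = 1. The log σ² normalization in the GMM side matches the log terms of eqns (16)-(17).

import numpy as np

# Two clusters: N((0,0), diag(s1^2, s2^2)) and N((L,0), sig^2 I); weights w1, w2.
s1, s2, sig, L, w1, w2, k = 1.0, 2.0, 1.5, 6.0, 0.7, 0.3, 2

def kl1(x1, x2, s):   # eqn (16)
    return (s**2 * (s1**2 + s2**2) / (2 * s1**2 * s2**2)
            + x1**2 / (2 * s1**2) + x2**2 / (2 * s2**2)
            - k / 2 + np.log(s1 * s2 / s**2))

def kl2(x1, x2, s):   # eqn (17)
    return (k * s**2 / (2 * sig**2)
            + ((x1 - L)**2 + x2**2) / (2 * sig**2)
            - k / 2 + np.log(sig**2 / s**2))

def dib_side(x1, x2, s, beta):  # sign of the eqn (18) decision rule
    return np.sign(np.log(w1) - beta * kl1(x1, x2, s)
                   - np.log(w2) + beta * kl2(x1, x2, s))

def gmm_side(x1, x2):           # sign of the hard ML boundary rule
    T1 = x1**2 / (2 * s1**2) + x2**2 / (2 * s2**2) + np.log(s1 * s2)
    T2 = (x1**2 + x2**2 + L**2 - 2 * L * x1) / (2 * sig**2) + np.log(sig**2)
    return np.sign(np.log(w1) - T1 - np.log(w2) + T2)

x1, x2 = np.meshgrid(np.linspace(-5, 10, 200), np.linspace(-6, 6, 200))
agree = np.mean(dib_side(x1, x2, s=1e-3, beta=1.0) == gmm_side(x1, x2))
print(f"fraction of the plane classified identically (s -> 0, beta = 1): {agree:.4f}")

Rerunning with larger s or with β ≠ 1 shows the boundary deforming away from the GMM one, in line with the discussion above.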
[Figure 4 residue omitted: scatterplot of the two-cluster dataset with overlaid boundary lines; legend: GMM, DIB (s=2), DIB (s=4), DIB (s=8), k-means.]
Figure 4: Cluster boundaries for different algorithms. Colored lines show boundaries
separating two clusters for 3 different algorithms: k-means, GMMs, and DIB with 3 different
levels of smoothing. Dataset was 1000 points drawn equally from a single symmetric gaussian
and a single skew gaussian. Black points show data.
5 Discussion
Here, we have shown how to use the formalism of the information bottleneck to perform
geometric clustering. A previous paper (Still et al., 2003) claimed to contribute similarly,
however for the reasons discussed in sections 2 and 6, their approach contained fundamental
flaws. We amend and improve upon that paper in four ways. First, we show how to fix the errors
they made in their problem setup (with the data preparation). Second, we argue for using
DIB over IB in this setting for its preference for using as few clusters as it can. Third, we
introduce a novel form of model selection for the number of clusters based on discontinuities
(or “kinks”) in the slope of the DIB curve, which indicate solutions that are robust across the
DIB tradeoff parameter β. We show that this information-based model selection criterion
allows us to correctly recover generative structure in the data at multiple spatial scales.
Finally, we compare the resulting clustering algorithm to k-means and gaussian mixture
models (GMMs). We found that for large smoothing width, s, the performance of the
method seems to behave similarly to k-means. More interestingly, we found that for small
smoothing width, the method behaves as a generalization of a GMM, with a tunable tradeoff
between compression and fidelity of the representation.
We have introduced one way of doing geometric clustering with the information bottleneck, but we think it opens avenues for other ways as well. First, the uniform smoothing
we perform above could be generalized in a number of ways to better exploit local geometry
and better estimate the “true” generative distribution of the data. For example, one could
do gaussian smoothing with mean centered on each data point but the covariance estimated
by the sample covariance of neighboring data points around that mean. Indeed, our early
experiments with this alternative suggest it may be useful for certain datasets. Second, while
choosing spatial location as the relevant variable for DIB to preserve information about seems
to be the obvious first choice to investigate; other options might prove interesting. For
example, preserving information about the identity of neighbors, if carefully formulated, might
make fewer implicit assumptions about the shape of the generative distribution, and enable
the extension of our approach to a wider range of datasets.
Scaling the approach introduced here to higher-dimensional datasets is non-trivial because
the tabular representation used in the original IB (Tishby et al., 1999) and DIB (Strouse and
Schwab, 2017) algorithms leads to an exponential scaling with the number of dimensions.
Recently, however, Alemi et al. (2016) introduced a variational version of IB, in which one
parameterizes the encoder q(t | x) (and “decoder” q(y | t)) with a function approximator,
e.g. a deep neural network. This has the advantage of allowing scaling to much larger
datasets. Moreover, the choice of parameterization often implies a smoothness constraint on
the data, relieving the problem encountered above of needing to smooth the data. It would
be interesting to develop a variational version of DIB, which could then be used to perform
information-theoretic clustering as we have done here, but on larger problems and perhaps
with no need for data smoothing.
6 Appendix: errors in Still et al. (2003)
A previous attempt was made to draw a connection between IB and k-means (Still et al.,
2003). Even before reviewing the algebraic errors that lead their result to break down, there
are two intuitive reasons why such a claim is unlikely to be true. First, IB is a soft clustering
algorithm, and k-means is a hard clustering algorithm. Second, the authors made the choice
not to smooth the data and to set p(x | i) = δxxi . As discussed in section 2, (D)IB clusters
data points based on these conditionals, and delta functions trivially only overlap when they
are identical.
The primary algebraic mistake appears just after eqn 14, in the claim that p_n(x | c) ∝
p_{n−1}(x | c)^{1/λ}. Combining the previous two claims in that proof, we obtain:
p_n(x | c) = (1/N) Σ_i [ δ_{x,x_i} / Z_n(i, λ) ] p_{n−1}(x_i | c)^{1/λ}.   (19)
Certainly, this does not imply that p_n(x | c) ∝ p_{n−1}(x | c)^{1/λ} everywhere, because of the
δ_{x,x_i} factor, which picks out only a finite number of points.
One might wonder why with these mistakes, the authors still obtain an algorithm that
looks and performs like k-means. The reason is because their sequence of mistakes leads to
the result in eqn 15 that effectively assumes that IB has access to geometric information it
should not, namely the cluster centers at step n. Since these are exactly what k-means uses
to assign points to clusters, it is not surprising that the behavior then resembles k-means.
7 Acknowledgements
We would like to thank Léon Bottou for helpful discussions. We would also like to
acknowledge financial support from NIH K25 GM098875 (Schwab) and the Hertz Foundation
(Strouse).
References
Alemi, A. A., Fischer, I., Dillon, J. V., and Murphy, K. (2016). Deep Variational Information
Bottleneck. arXiv e-prints. https://arxiv.org/abs/1612.00410.
Chechik, G., Globerson, A., Tishby, N., and Weiss, Y. (2005). Information Bottleneck for
Gaussian Variables. Journal of Machine Learning Research, 6:165–188.
Cover, T. M. and Thomas, J. A. (2006). Elements of Information Theory. Wiley-Interscience.
Creutzig, F., Globerson, A., and Tishby, N. (2009). Past-future information bottleneck in
dynamical systems. Physical Review E, 79(4):041925–5.
Kingma, D. P. and Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv e-prints.
https://arxiv.org/abs/1312.6114.
LeCun, Y. (2016). Predictive learning. NIPS.
MacKay, D. J. C. (2002). Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
Palmer, S. E., Marre, O., Berry II, M. J., and Bialek, W. (2015). Predictive information in a
sensory population. Proceedings of the National Academy of Sciences, 112(22):6908–6913.
Ravid, S. and Tishby, N. (2017). Opening the black box of deep neural networks via information. arXiv e-prints, cs.LG. https://arxiv.org/abs/1703.00810.
Rubin, J., Ulanovsky, N., Nelken, I., and Tishby, N. (2016). The representation of prediction
error in auditory cortex. PLoS computational biology, 12(8):e1005058–28.
Slonim, N., Atwal, G., Tkacik, G., and Bialek, W. (2005). Information-based clustering.
Proceedings of the National Academy of Sciences, 102(51):18297–18302.
Slonim, N. and Tishby, N. (2001). The Power of Word Clusters for Text Classification. In
European Colloquium on Information Retrieval Research.
Still, S. and Bialek, W. (2004). How many clusters? An information-theoretic perspective.
Neural Computation, 16(12):2483–2506.
Still, S., Bialek, W., and Bottou, L. (2003). Geometric clustering using the information
bottleneck method. NIPS.
Strouse, D. and Schwab, D. J. (2017). The Deterministic Information Bottleneck. Neural
Computation.
Tishby, N., Pereira, F. C., and Bialek, W. (1999). The information bottleneck method.
Proceedings of The 37th Allerton Conference on Communication, Control, and Computing,
pages 368–377.
Tishby, N. and Zaslavsky, N. (2015). Deep Learning and the Information Bottleneck Principle. arXiv e-prints. https://arxiv.org/abs/1503.02406.
| 2 |
Distributed Kalman Filter in a Network of Linear Dynamical Systems
Damián Marellia,b , Mohsen Zamanic , Minyue Fuc,a
arXiv:1711.07625v1 [] 21 Nov 2017
a Department of Control Science and Engineering and State Key Laboratory of Industrial Control Technology, Zhejiang University, 388 Yuhangtang Road Hangzhou, Zhejiang Province, 310058, P. R. China.
b French-Argentinean International Center for Information and Systems Sciences, National Scientific and Technical Research Council, Ocampo Esmeralda, Rosario 2000, Argentina.
c School of Electrical Engineering and Computer Science, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia.
Email addresses: Damian.Marelli@newcastle.edu.au (Damián Marelli), Mohsen.Zamani@newcastle.edu.au (Mohsen Zamani), Minyue.Fu@newcastle.edu.au (Minyue Fu)
Abstract
This paper is concerned with the problem of distributed Kalman filtering in a network of interconnected subsystems
with distributed control protocols. We consider networks, which can be either homogeneous or heterogeneous, of
linear time-invariant subsystems, given in the state-space form. We propose a distributed Kalman filtering scheme for
this setup. The proposed method provides, at each node, an estimation of the state parameter, only based on locally
available measurements and those from the neighbor nodes. The special feature of this method is that it exploits the
particular structure of the considered network to obtain an estimate using only one prediction/update step at each time
step. We show that the estimate produced by the proposed method asymptotically approaches that of the centralized
Kalman filter, i.e., the optimal one with global knowledge of all network parameters, and we are able to bound the
convergence rate. Moreover, if the initial states of all subsystems are mutually uncorrelated, the estimates of these
two schemes are identical at each time step.
Keywords: Estimation, Kalman Filter, Distributed Systems.
1. Introduction
There has been an increasing activity in the study of
distributed estimation in a network environment. This
is due to its broad applications in many areas, including
formation control Subbotin and Smith [16], Lin et al.
[8], distributed sensor network Zhang et al. [23] and
cyber security Teixeira et al. [18], Zamani et al. [22].
This paper examines the problem of distributed estimation in a network of subsystems represented by a finite
dimensional state-space model. Our focus is on the scenario where each subsystem obtains some noisy measurements, and broadcasts them to its nearby subsystems, called neighbors. The neighbors exploit the received information, together with an estimate of their
internal states, to make a decision about their future
states. This sort of communication coupling arises in
different applications. For example, in control system
security problems Teixeira et al. [18], distributed state
estimation is required to calculate certain estimation error residues for attack detection. Similarly, for formation control Lin et al. [10], Zheng et al. [24], Lin et al.
[9], each subsystem integrates measurements from its
nearby subsystems, and states of each subsystem need
to be estimated for distributed control design purposes.
The main objective of this paper is to collectively estimate the states of all subsystems within such a network.
We will propose a novel distributed version of the celebrated Kalman filter.
The current paper, in a broad sense, belongs to the large
body of literature regarding distributed estimation. One
can refer to Lopes and Ali [11], Kar et al. [6], Conejo
et al. [4], Gómez-Expósito et al. [5], Marelli and Fu
[12], Olfati-Saber [13], Ugrinovskii [19, 20], Zamani
and Ugrinovskii [21], Khan and Moura [7], Olfati-Saber
[14] and the survey paper Ribeiro et al. [15], as well as
references listed therein, for different variations of distributed estimation methods among a group of subsystems within a network. A consensus based Kalman filter
was proposed in Olfati-Saber [13]. The author of Ugrinovskii [19] utilized a linear matrix inequality to minimize an H∞ index associated with a consensus based estimator, which can be implemented locally. Some of the
results there were then extended to the case of switching
topology in Ugrinovskii [20]. The same problem was
solved using the minimum energy filtering approach
in Zamani and Ugrinovskii [21]. A common drawback
of the state estimation methods described above is that,
being based on consensus, they require, in theory, an infinite number of consensus iterations at each time step.
This results in computational and communication overload. To avoid this, in this paper we exploit the network
structure to achieve a distributed Kalman filter method
which requires only one prediction/update step at each
time step. Moreover, it is worthwhile noting that there is
a major difference between the above-mentioned works
and the problem formulation in the current paper. More
precisely, in the former, the aim of each subsystem is
to estimate the aggregated state which is common to all
subsystems. In contrast, in the problem studied here,
each subsystem is dedicated to the estimation of its own
internal state, which in general is different from those
of other subsystems. This allows the distributed estimation algorithm to be scalable to networked systems with
a large number of subsystems where requiring each subsystem to estimate the aggregated state is both computationally infeasible and practically unnecessary.
To show the effectiveness of the proposed algorithm,
we compare our method with the classical (centralized)
Kalman filter, which is known to be optimal (in the minimum error covariance sense). The classical method requires the simultaneous knowledge of parameters and
measurements from all subsystems within the network
to carry out the estimation. In contrast, our proposed
distributed estimation algorithm runs a local Kalman filter at each subsystem, which only requires the knowledge of local measurements and parameters, as well as
measurements from neighbor subsystems. Hence, it can
be implemented in a fully distributed fashion. We show
that the state estimate, and its associated estimation error covariance matrix, produced by the proposed distributed method asymptotically converge to those produced by the centralized Kalman filter. We provide
bounds for the convergence of both the estimate and the
estimation error covariance matrix. A by-product of our
result is that, if the initial states of all subsystems are
uncoupled (i.e., they are mutually uncorrelated), the estimates produced by our method are identical to that of
the centralized Kalman filter.
The rest of the paper is structured as follows. In Section 2, we describe the network setup and its associated centralized Kalman filter. In Section 4, we describe
the proposed distributed Kalman filter scheme. In Section 5, we demonstrate the asymptotic equivalence between the proposed distributed filter and the centralized
one, and provide bounds for the convergence of the esti-
mates and their associated estimation error covariances.
Simulation results that support our theoretical claims
are presented in Section 6. Finally, concluding remarks
are given in Section 7.
2. System Description
In this paper we study networks of I linear time-invariant subsystems. Subsystem i is represented by the
following state-space model
x_{k+1}^(i) = A^(i) x_k^(i) + z_k^(i) + w_k^(i),   (1)
y_k^(i) = C^(i) x_k^(i) + v_k^(i).   (2)
The subsystems are interconnected as follows:
z_k^(i) = Σ_{j∈N_i} L^(i,j) y_k^(j),   (3)
where x_k^(i) ∈ R^{n_i} is the state, y_k^(i) ∈ R^{p_i} the output, w_k^(i) is an i.i.d. Gaussian disturbance process with w_k^(i) ∼ N(0, Q_i), and v_k^(i) is an i.i.d. Gaussian measurement noise process with v_k^(i) ∼ N(0, R_i). We further suppose that E[w_k^(i) w_k^(j)T] = 0 and E[v_k^(i) v_k^(j)T] = 0 for all i ≠ j, and that E[x_k^(i) w_k^(j)T] = 0 and E[x_k^(i) v_k^(j)T] = 0 for all i, j. We also denote the neighbor set of the subsystem i by N_i = { j : L^(i,j) ≠ 0 }.
Remark 1. We note in (1)-(2) that the coupling between neighboring subsystems is solely caused through the z_k^(i) term in (3). The main motivation for considering such coupling comes from distributed control, where (1) represents the model of an autonomous subsystem (or agent) with z_k^(i) being the control input, and (3) represents a distributed control protocol which employs feedback only from neighboring measurements. This type of distributed control is not only common for control of multi-agent systems (see, for example, [8, 10, 9, 24]), but also realistic for large networked systems in the sense that only neighbouring information is both easily accessible and most useful for each subsystem. We emphasize that the distributed state estimation problem arises for the networked system (1)-(3) because of our allowance for measurement noises v_k^(i) in (2). This consideration is very important for applications because measurement noises are unavoidable in practice. This also sharply distinguishes our distributed control formulation from most distributed control algorithms in the literature, where perfect state measurement is often implicitly assumed.
We define ξ_k^T = (ξ_k^(1)T, · · · , ξ_k^(I)T) and Ξ_k = {ξ_1, · · · , ξ_k}, where (ξ, Ξ) stands for either (x, X), (y, Y), (z, Z), (w, W) or (v, V); moreover, we denote Υ = diag{Υ^(1), · · · , Υ^(I)}, where Υ stands for either A, B, C, Q or R, and L = [ L^(i,j) : i, j = 1, · · · , I ].
Using the above notation, we let the initial state x_0 of all subsystems have the joint distribution x_0 ∼ N(0, P). We can also write the aggregated model of the whole network as
x_{k+1} = A x_k + L C x_k + w_k + L v_k = Ã x_k + e_k,   (4)
y_k = C x_k + v_k,   (5)
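For illustration only, a minimal NumPy simulation of the coupled model (1)-(3) might look as follows. The ring-shaped coupling graph and all constants here are our assumptions, chosen for concreteness, and need not match the communication graph of Fig. 1 used later.

import numpy as np

rng = np.random.default_rng(0)
I_sub = 5                                   # number of subsystems, scalar states
A = 0.2 * np.eye(I_sub)                     # each subsystem: single pole at 0.2
C = np.eye(I_sub)
L_mat = 0.3 * np.array([[0, 1, 0, 0, 1],    # undirected ring coupling (illustrative)
                        [1, 0, 1, 0, 0],
                        [0, 1, 0, 1, 0],
                        [0, 0, 1, 0, 1],
                        [1, 0, 0, 1, 0]])
Q, R = 0.1 * np.eye(I_sub), 0.1 * np.eye(I_sub)

def simulate(T):
    """Trajectories from (1)-(3): y_k = C x_k + v_k, x_{k+1} = A x_k + L y_k + w_k."""
    x = np.zeros(I_sub)
    xs, ys = [], []
    for _ in range(T):
        y = C @ x + rng.multivariate_normal(np.zeros(I_sub), R)
        xs.append(x); ys.append(y)
        x = A @ x + L_mat @ y + rng.multivariate_normal(np.zeros(I_sub), Q)
    return np.array(xs), np.array(ys)

Note that the coupling enters through the noisy measurements y_k, which is exactly why the aggregated process noise e_k = w_k + L v_k is correlated with v_k.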
with
Ã = A + LC  and  e_k = w_k + L v_k.   (6)
It then follows that
cov( [e_k; v_k] [e_k^T  v_k^T] ) = [ Q̃  S̃ ; S̃^T  R ],   (7)
where Q̃ = Q + LRL^T and S̃ = LR.

3. Centralized Kalman Filter

Consider the standard (centralized) Kalman filter. For all k, l ∈ N, let
x̂_{k|l} ≜ E(x_k | Y_l),  Σ_{k|l} ≜ E[ (x_k − x̂_{k|l})(x_k − x̂_{k|l})^T ],   (8)
and Σ_{0|0} = P. Our aim in this subsection is to compute x̂_{k|k} in a standard centralized way. Notice that equation (7) implies that, in the aggregated system formed by (1)-(2), the process noise e_k and the measurement noise v_k are mutually correlated. Taking this into account, it follows from [1, S 5.5] that the prediction and update steps of the (centralized) Kalman filter are given by:
1. Prediction:
x̂_{k+1|k} = (Ã − S̃R^{−1}C) x̂_{k|k} + S̃R^{−1} y_k   (9)
= A x̂_{k|k} + L y_k,   (10)
Σ_{k+1|k} = (Ã − S̃R^{−1}C) Σ_{k|k} (Ã − S̃R^{−1}C)^T + Q̃ − S̃R^{−1}S̃^T   (11)
= A Σ_{k|k} A^T + Q.
2. Update:
x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − C x̂_{k|k−1}),   (12)
Σ_{k|k} = (I − K_k C) Σ_{k|k−1},   (13)
with
K_k = Σ_{k|k−1} C^T ( C Σ_{k|k−1} C^T + R )^{−1}.   (14)

4. Distributed Kalman Filter

Consider the i-th subsystem (1)-(2). Notice that, since the measurements y_k^(j), j ∈ N_i, are known by the i-th subsystem, they can be treated as external inputs. This observation leads us to the following intuitive approach for a distributed Kalman filter scheme.
Let, for all i = 1, · · · , I and k, l ∈ N,
x̂_{k|l}^(i) ≜ E( x_k^(i) | y_m^(j); j ∈ N_i ∪ {i}, m = 1, · · · , l ),
Σ_{k|l}^(i) ≜ E[ (x_k^(i) − x̂_{k|l}^(i))(x_k^(i) − x̂_{k|l}^(i))^T ],
and Σ_{0|0}^(i) = P^(i). Then, the prediction and update steps for the proposed distributed Kalman filter are:
1. Prediction:
x̂_{k+1|k}^(i) = A^(i) x̂_{k|k}^(i) + Σ_{j∈N_i} L^(i,j) y_k^(j),   (15)
Σ_{k+1|k}^(i) = A^(i) Σ_{k|k}^(i) A^(i)T + Q^(i).   (16)
2. Update:
x̂_{k|k}^(i) = x̂_{k|k−1}^(i) + K_k^(i) ( y_k^(i) − C^(i) x̂_{k|k−1}^(i) ),   (17)
Σ_{k|k}^(i) = ( I − K_k^(i) C^(i) ) Σ_{k|k−1}^(i),   (18)
with
K_k^(i) = Σ_{k|k−1}^(i) C^(i)T ( C^(i) Σ_{k|k−1}^(i) C^(i)T + R^(i) )^{−1}.   (19)
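A minimal sketch (ours, not the authors' implementation) of how one node could realize the local recursion (15)-(19): each node stores only its own model matrices, its coupling gains L^(i,j), and its neighbors' latest measurements.

import numpy as np

class LocalKalmanNode:
    """One subsystem's filter, eqns (15)-(19); fully local information only."""

    def __init__(self, A, C, Q, R, P0, L_row):
        self.A, self.C, self.Q, self.R = A, C, Q, R
        self.x, self.P = np.zeros(A.shape[0]), P0   # x̂_{0|0}^(i) = 0, Σ_{0|0}^(i) = P^(i)
        self.L_row = L_row                          # dict {j: L^(i,j)} for j in N_i

    def predict(self, neighbor_meas):
        # eqn (15): x̂_{k+1|k} = A x̂_{k|k} + sum_j L^(i,j) y_k^(j)
        self.x = self.A @ self.x + sum(Lij @ neighbor_meas[j]
                                       for j, Lij in self.L_row.items())
        # eqn (16): Σ_{k+1|k} = A Σ_{k|k} A^T + Q
        self.P = self.A @ self.P @ self.A.T + self.Q

    def update(self, y_i):
        S = self.C @ self.P @ self.C.T + self.R
        K = self.P @ self.C.T @ np.linalg.inv(S)            # eqn (19)
        self.x = self.x + K @ (y_i - self.C @ self.x)       # eqn (17)
        self.P = (np.eye(len(self.x)) - K @ self.C) @ self.P  # eqn (18)

Each time step thus requires a single broadcast of measurements to neighbors followed by one predict/update pass, in contrast with consensus-based schemes that iterate many times per step.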
5. Optimality analysis

Since the distributed Kalman filter approach given in Section 4 is motivated by intuition, the question naturally arises as to which extent it is optimal. In this section we address this question. To this end, we define x̂*_{k|l}, Σ*_{k|l}, where x̂*_{k|l}^T = ( x̂_{k|l}^(i)T : i = 1, · · · , N ) and Σ*_{k|l} = diag( Σ_{k|l}^(i) : i = 1, · · · , N ), to be the outcomes of the distributed filter, and x̂_{k|l}, Σ_{k|l} to be those of the centralized one. In Section 5.1, we show that the estimation error covariance of the distributed filter Σ*_{k|k} converges to that of the centralized one Σ_{k|k}, and provide a bound for this convergence. In Section 5.2, we do the same for the convergence of x̂*_{k|k} to x̂_{k|k}.

5.1. Convergence of Σ*_{k|k} to Σ_{k|k}

In this section, we show that the covariance matrices Σ_{k|k} and Σ*_{k|k} exponentially converge to each other, and introduce a bound on ‖Σ_{k|k} − Σ*_{k|k}‖. To this end, we require the following definition from [3, Def 1.4].
Definition 2. For n × n matrices P, Q > 0, the Riemannian distance is defined by
δ(P, Q) = sqrt( Σ_{k=1}^{n} log² σ_k(PQ^{−1}) ),
where σ_1(X) ≥ · · · ≥ σ_n(X) denote the singular values of matrix X.

Several properties of the above definition, which we use to derive our results, are given in the following proposition.

Proposition 3. [17, Proposition 6] For n × n matrices P, Q > 0, the following holds true:
1. δ(P, P) = 0.
2. δ(P^{−1}, Q^{−1}) = δ(Q, P) = δ(P, Q).
3. For any m × m matrix W > 0 and m × n matrix B, we have
δ(W + BPB^T, W + BQB^T) ≤ ( α / (α + β) ) δ(P, Q),
where α = max{ ‖BPB^T‖, ‖BQB^T‖ } and β = σ_min(W).
4. If P > Q, then ‖P − Q‖ ≤ ( e^{δ(P,Q)} − 1 ) ‖Q‖.

The main result of this section is given in Theorem 5. Its proof requires the following technical result.

Lemma 4. Let Γ_{k|l} = Σ_{k|l}^{−1} and Γ*_{k|l} = Σ*_{k|l}^{−1}. Then
‖Σ_{k|k}‖, ‖Σ*_{k|k}‖ ≤ σ,  ‖Γ_{k|k}‖, ‖Γ*_{k|k}‖ ≤ ω,
and
δ(Σ_{k|k}, Σ*_{k|k}) ≤ υ^k δ(P, P*),   (20)
δ(Γ_{k|k}, Γ*_{k|k}) ≤ υ^k δ(P, P*),   (21)
where
σ = max{ ‖P‖, ‖P*‖, ‖Σ̄‖ },   (22)
ω = max{ ‖P^{−1}‖, ‖P*^{−1}‖, ‖Σ̄^{−1}‖ },   (23)
with P* denoting the diagonal matrix formed by the block diagonal entries of the matrix P,
υ = υ_1 υ_2,  υ_1 = σ‖A‖² / ( σ‖A‖² + ‖Q^{−1}‖^{−1} ),  υ_2 = ω / ( ω + ‖U^{−1}‖^{−1} ),   (24)
U = C^T R^{−1} C, and Σ̄ = lim_{k→∞} Σ_{k|k}.

Proof. Let Σ̄* = lim_{k→∞} Σ*_{k|k} and
σ̃ = max{ ‖P‖, ‖P*‖, ‖Σ̄‖, ‖Σ̄*‖ },   (25)
ω̃ = max{ ‖P^{−1}‖, ‖P*^{−1}‖, ‖Σ̄^{−1}‖, ‖Σ̄*^{−1}‖ }.   (26)
We can then appeal to the fact that the Riccati equation is monotonic Bitmead et al. [2], to conclude that, for all k ∈ N,
‖Σ_{k|k}‖ ≤ max{ ‖P‖, ‖Σ̄‖ } ≤ σ̃,   (27)
‖Σ*_{k|k}‖ ≤ max{ ‖P*‖, ‖Σ̄*‖ } ≤ σ̃,   (28)
‖Γ_{k|k}‖ ≤ max{ ‖P^{−1}‖, ‖Σ̄^{−1}‖ } ≤ ω̃,   (29)
‖Γ*_{k|k}‖ ≤ max{ ‖P*^{−1}‖, ‖Σ̄*^{−1}‖ } ≤ ω̃.   (30)
Recall that Σ_{k+1|k} = AΣ_{k|k}A^T + Q. Also, from [1, p. 139], we have Γ_{k|k} = Γ_{k|k−1} + U. Clearly, similar relations hold for Σ*_{k|l} and Γ*_{k|l}. Then, it follows from Proposition 3-3 that
δ(Σ_{k+1|k}, Σ*_{k+1|k}) = δ( AΣ_{k|k}A^T + Q, AΣ*_{k|k}A^T + Q ) ≤ υ̃_1 δ(Σ_{k|k}, Σ*_{k|k}),   (31)
δ(Γ_{k|k}, Γ*_{k|k}) = δ( Γ_{k|k−1} + U, Γ*_{k|k−1} + U ) ≤ υ̃_2 δ(Γ_{k|k−1}, Γ*_{k|k−1}),   (32)
with υ̃_1 = σ̃‖A‖² / ( σ̃‖A‖² + ‖Q^{−1}‖^{−1} ) and υ̃_2 = ω̃ / ( ω̃ + ‖U^{−1}‖^{−1} ). It then follows from (31)-(32) and Proposition 3-2 that
δ(Σ_{k|k}, Σ*_{k|k}) ≤ υ̃^k δ(P, P*),
δ(Γ_{k|k}, Γ*_{k|k}) ≤ υ̃^k δ(P, P*),
with υ̃ = υ̃_1 υ̃_2. Finally, the above implies that Σ̄* = Σ̄. Hence, the parameters σ̃ and ω̃ given in (25)-(26) are equivalent to σ and ω in (22)-(23), respectively, and the result follows.

We now introduce the main result of the section, stating a bound on ‖Σ_{k|k} − Σ*_{k|k}‖.

Theorem 5. Let Σ̃_{k|l} = Σ_{k|l} − Σ*_{k|l} and Γ̃_{k|l} = Γ_{k|l} − Γ*_{k|l}. Then
‖Σ̃_{k|k}‖ ≤ κσυ^k  and  ‖Γ̃_{k|k}‖ ≤ κωυ^k,
where κ = e^{δ(P,P*)} − 1.

Proof. Using (22)-(23), together with (20)-(21), Proposition 3-4 and Lemma 11, we obtain
‖Σ̃_{k|k}‖ ≤ ( e^{υ^k δ(P,P*)} − 1 ) ‖Σ_{k|k}‖ ≤ κυ^k ‖Σ_{k|k}‖ ≤ κσυ^k,
‖Γ̃_{k|k}‖ ≤ κυ^k ‖Γ_{k|k}‖ ≤ κωυ^k.

5.2. Convergence of x̂*_{k|k} to x̂_{k|k}

In this subsection, we study the convergence of the state estimate x̂*_{k|k}, obtained through the distributed method, to that of the centralized one, x̂_{k|k}. Moreover, we derive a bound on the error x̂_{k|k} − x̂*_{k|k}. We start by introducing a number of lemmas which are instrumental for establishing our main results.

Lemma 6. Let x̃_{k|l} = x̂_{k|l} − x̂*_{k|l}. Then
x̃_{k+1|k} = H_k x̃_{k|k−1} + ξ_k,   (33)
where
H_k = A( I − Σ_{k|k}U ),  ξ_k = a_k + b_k,
a_k = AΣ_{k|k} Γ̃_{k|k−1} x̂*_{k|k−1},  b_k = AΣ̃_{k|k} Γ*_{k|k} x̂*_{k|k}.

Proof. Let γ_{k|l} = Γ_{k|l} x̂_{k|l}, γ*_{k|l} = Γ*_{k|l} x̂*_{k|l} and γ̃_{k|l} = γ_{k|l} − γ*_{k|l}. We can easily obtain x̃_{k+1|k} = A x̃_{k|k}. Also, from [1, p. 140], we obtain γ̃_{k|k} = γ̃_{k|k−1}. Then it is easy to check that
x̃_{k|k} = x̂_{k|k} − x̂*_{k|k} = Σ_{k|k}γ_{k|k} − Σ*_{k|k}γ*_{k|k} = Σ_{k|k}γ̃_{k|k} + Σ̃_{k|k}γ*_{k|k},
and
γ̃_{k|k−1} = γ_{k|k−1} − γ*_{k|k−1} = Γ_{k|k−1}x̂_{k|k−1} − Γ*_{k|k−1}x̂*_{k|k−1} = Γ_{k|k−1}x̃_{k|k−1} + Γ̃_{k|k−1}x̂*_{k|k−1}.
We then have
x̃_{k+1|k} = AΣ_{k|k}γ̃_{k|k−1} + AΣ̃_{k|k}γ*_{k|k} = AΣ_{k|k}Γ_{k|k−1}x̃_{k|k−1} + ξ_k = AΣ_{k|k}( Γ_{k|k} − U )x̃_{k|k−1} + ξ_k = H_k x̃_{k|k−1} + ξ_k.

Lemma 7. Let
Δ_k = E[ x̃_{k|k−1} x̃_{k|k−1}^T ].   (34)
Then
Δ_k ≤ H_k Δ_{k−1} H_k^T + λυ^k I,   (35)
where I is the identity matrix, υ is defined in (24), and
λ ≜ sup_{k∈N} ( ζ + 2 sqrt( ζ ‖H_k‖² ‖Δ_{k−1}‖ ) ) < ∞,   (36)
with
ζ = (α + β) + 2 sqrt(αβ),   (37)
α = κ²ω²σ²‖A‖² ( σ‖A‖² + ‖Q‖ ),
β = κ²ω²σ³‖A‖².

Proof. We split the argument in steps:
Step 1) From Lemmas 6 and 12,
E ξ_kξ_k^T ≤ E a_ka_k^T + E b_kb_k^T + 2 sqrt( ‖E a_ka_k^T‖ ‖E b_kb_k^T‖ ).
Now, using Lemma 4,
‖E a_ka_k^T‖ ≤ ‖A‖² ‖Σ_{k|k}‖² ‖Γ̃_{k|k−1}‖² ‖E x̂*_{k|k−1} x̂*_{k|k−1}^T‖ ≤ κ²ω²σ²‖A‖² ‖Σ*_{k|k−1}‖ υ^{2k} ≤ αυ^{2k},
and
‖E b_kb_k^T‖ ≤ ‖A‖² ‖Σ̃_{k|k}‖² ‖Γ*_{k|k}‖² ‖E x̂*_{k|k} x̂*_{k|k}^T‖ ≤ κ²ω²σ²‖A‖² ‖Σ*_{k|k}‖ υ^{2k} ≤ βυ^{2k}.
Then
E ξ_kξ_k^T ≤ ζυ^{2k}.
Step 2) From (33) and Lemma 12, we have
Δ_k ≤ H_kΔ_{k−1}H_k^T + E ξ_kξ_k^T + 2 sqrt( ‖H_kΔ_{k−1}H_k^T‖ ‖E ξ_k^Tξ_k‖ ) I ≤ F_k(Δ_{k−1}),
with
F_k(X) = H_kXH_k^T + ( ζυ^k + 2 sqrt( ζ‖H_k‖²‖X‖ ) ) Iυ^k.
Clearly, if A > B then F_k(A) > F_k(B). Also, there clearly exists k̄ and Δ̄ such that F_k(Δ̄) < Δ̄, for all k ≥ k̄. Hence, lim_{k→∞} Δ_k < ∞, and the result follows.

The following result states a family of upper bounds on the norm of the covariance matrix of x̃_{k|l}.

Theorem 8. Consider Δ_k as defined in (34). Let H_k = V_kJ_kV_k^{−1} and H̄ = V̄J̄V̄^{−1} be the Jordan decompositions of H_k and H̄, respectively. Then for every ε > 1, there exists k_ε ∈ N such that
‖Δ_k‖ ≤ A_ε ψ_ε^k + B_ε υ^k,
where
A_ε = λψ_εφ_ε / (ψ_ε − υ),  B_ε = λυφ_ε / (υ − ψ_ε),
H̄ = lim_{k→∞} H_k,   (38)
ψ_ε = ε² ρ²(H̄),
φ_ε = ε^{2(k_ε−1)} m^{2k_ε} ‖V̄‖² ‖V̄^{−1}‖² ρ(H̄)^{−2},
m = max{ 1, ‖H_1‖, · · · , ‖H_{k_ε−1}‖ }.

Proof. We split the argument in steps:
Step 1) Let
D_k = H_kD_{k−1}H_k^T + λIυ^k,
with D_1 = 0. From (35), and since Δ_1 = D_1 = 0, it follows that
Δ_k ≤ D_k.   (39)
Step 2) Let
Π_{l,k} = H_{k−1}H_{k−2} · · · H_l = V_{k−1}J_{k−1}V_{k−1}^{−1} · · · V_lJ_lV_l^{−1}.
From (38), there exists k_ε ∈ N such that, for all k ≥ k_ε,
‖V_k‖ ≤ sqrt(ε) ‖V̄‖,  ‖V_k^{−1}‖ ≤ sqrt(ε) ‖V̄^{−1}‖,  ‖V_{k+1}^{−1}V_k‖ ≤ sqrt(ε),  ‖J_k‖ ≤ sqrt(ε) ρ(H̄).
Then, for all k ≥ l,
‖Π_{l,k}‖ ≤ m^{k_ε−l} ‖V_{k−1}‖ ‖J_{k−1}‖ ‖V_{k−1}^{−1}V_{k−2}‖ × · · · × ‖V_{k_ε+1}^{−1}V_{k_ε}‖ ‖J_{k_ε}‖ ‖V_{k_ε}^{−1}‖
≤ ε ‖V̄‖ ‖V̄^{−1}‖ m^{k_ε−l} ( sqrt(ε) ρ(H̄) )^{k−k_ε} ≤ sqrt(φ_ε) ψ_ε^{(k−l)/2},
so that ‖Π_{l,k}Π_{l,k}^T‖ ≤ φ_ε ψ_ε^{k−l}.
Step 3) We have
D_k = λ Σ_{l=1}^{k} υ^l Π_{l,k}Π_{l,k}^T.
Let d_k = ‖D_k‖. Then
d_k ≤ λ Σ_{l=1}^{k} υ^l ‖Π_{l,k}Π_{l,k}^T‖ ≤ λφ_ε Σ_{l=1}^{k} υ^l ψ_ε^{k−l} = (h_k ∗ u_k),
with h_k = φ_ε ψ_ε^k and u_k = λυ^k. Taking the z-transform we get
d(z) = h(z)u(z) = λφ_ε / ( (1 − ψ_ε z^{−1})(1 − υ z^{−1}) ) = A_ε / (1 − ψ_ε z^{−1}) + B_ε / (1 − υ z^{−1}).
Hence,
d_k = A_ε ψ_ε^k + B_ε υ^k,
and the result follows from the definition of d_k and (39).

Theorem 8 states that the covariance of the difference between x̂*_{k|k−1} and x̂_{k|k−1} is bounded by two exponential terms. The term B_ευ^k is due to the convergence of the Kalman gain K_k* to K_k, while the term A_εψ_ε^k is due to the convergence of the states given by the system dynamics. In order to use this result to show the asymptotic convergence of x̂*_{k|k−1} to x̂_{k|k−1}, we need that υ < 1 and ψ_ε < 1, for some ε > 0. While it is clear from (24) that the former is true, guaranteeing the latter is not that straightforward. The following proposition addresses this issue.

Proposition 9. If the pair [A, C] is completely detectable and the pair [A, Q^{1/2}] is completely stabilizable, then ρ(H̄) < 1, where ρ(H̄) denotes the spectral radius of matrix H̄.

Proof. Let K_k* = diag( K_k^(i) : i = 1, · · · , N ). From Theorem 5,
lim_{k→∞} K_k = lim_{k→∞} K_k* ≜ K̄.
Now,
x̂_{k+1|k} = A( I − K_kC )x̂_{k|k−1} + ( AK_k + L )y_k,
x̂*_{k+1|k} = A( I − K_k*C )x̂*_{k|k−1} + ( AK_k* + L )y_k.
Hence, if we had that K_k = K_k* = K̄, for all k ∈ N, then
x̃_{k+1|k} = A( I − K̄C )x̃_{k|k−1}.
However, under the same assumption, according to Lemma 6, x̃_{k+1|k} = H̄x̃_{k|k−1}. Hence,
H̄ = A( I − K̄C ),
i.e., H̄ equals the matrix that determines the asymptotic dynamics of the centralized Kalman filter's estimation error. Then, in view of the model (4)-(5), the result follows from [1, S 4.4].

[Figure 1: undirected communication graph of the five subsystems S_1, ..., S_5.]
Figure 1: Communication graph of the five subsystems S_i, i = 1, 2, . . . , 5.

[Figure 2: entries of x̃_{k|k−1} plotted versus k, decaying from about 0.06 toward zero.]
Figure 2: Difference between the estimated states obtained via both
filtering schemes.
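As a numerical companion (ours) to Definition 2, Theorem 5 and Proposition 9, the following sketch computes the Riemannian distance δ(P, Q) and probes ρ(H̄) < 1 via the steady-state gain. It assumes SciPy's solve_discrete_are is available, and the test matrices are arbitrary choices, not the paper's simulation setup.

import numpy as np
from scipy.linalg import solve_discrete_are

def riemannian_distance(P, Q):
    """delta(P, Q) = sqrt(sum_k log^2 sigma_k(P Q^{-1})), as in Definition 2."""
    s = np.linalg.svd(P @ np.linalg.inv(Q), compute_uv=False)
    return np.sqrt(np.sum(np.log(s) ** 2))

def spectral_radius_Hbar(A, C, Q, R):
    """Proposition 9 probe: rho(A (I - K C)) with K the steady-state gain."""
    Sigma = solve_discrete_are(A.T, C.T, Q, R)     # steady-state prediction covariance
    K = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + R)
    return np.max(np.abs(np.linalg.eigvals(A @ (np.eye(len(A)) - K @ C))))

A = 0.2 * np.eye(5) + 0.3 * np.roll(np.eye(5), 1, axis=1)   # a stable, coupled test pair
C, Q, R = np.eye(5), 0.1 * np.eye(5), 0.1 * np.eye(5)
print(spectral_radius_Hbar(A, C, Q, R) < 1)        # True under the stated hypotheses

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)); P = M @ M.T + 5 * np.eye(5)
Pstar = np.diag(np.diag(P))                        # block-diagonal part (scalar blocks here)
print(riemannian_distance(P, Pstar))               # feeds kappa = e^{delta(P, P*)} - 1

The second printout is exactly the quantity entering κ in Theorem 5: the further P is from its block-diagonal part P*, the looser the exponential bound on ‖Σ̃_{k|k}‖.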
5.3. The case when the initial covariance is block diagonal

It turns out that, when the initial covariance matrix has a block diagonal structure, both estimation methods are completely identical. This is summarized in the following corollary.

Corollary 10. Consider the network of subsystems (1)-(2). If the matrix P is block diagonal, then the distributed Kalman filter scheme (15)-(19) produces, for each i, the same estimate as the centralized Kalman filter (9)-(13).

[Figure 3: entries of x̃_{k|k−1} plotted versus k; the values stay within about ±6 × 10^{−17}, i.e., at the level of numerical round-off.]
Figure 3: Difference between the estimated states obtained via both
filtering schemes, when P is block-diagonal.
Proof. Recall that matrices A, Q, C and R are all block diagonal. It then follows from (11) that, if Σ_{k|k} is block diagonal, so is Σ_{k+1|k}. One can easily check from (12) and (13) that the same holds for K_k and Σ_{k|k} if Σ_{k|k−1} is block diagonal. Since Σ_{1|0} = P is block diagonal, it follows that the matrices Σ_{k|k−1} and Σ_{k|k} remain block diagonal for all k. Now, it is straightforward to verify that (9)-(13) become equivalent to (15)-(19), when Σ_{k|k} and Σ_{k|k−1} are block diagonal. Hence, the distributed and centralized Kalman filters produce the same estimate and the result follows.

6. Simulations

In this section, we present numerical results to compare the performance of the proposed distributed scheme (15)-(19) and the optimal centralized scheme (9)-(13). We assume that each subsystem is a single integrator with a pole at 0.2. The communication graph is undirected, as in Fig. 1, and the nonzero values of L^(i,j) are set to 0.3. Furthermore, v_k ∼ N(0, 0.1 I_5) and w_k ∼ N(0, 0.1 I_5).
We now compare the state estimation produced by the proposed distributed scheme with that produced by the centralized one. To this end, we set the initial conditions for both schemes to be the same, i.e., x̂_{0|0} = x̂*_{0|0} = 0.
In the first simulation, the initial covariance matrix is chosen by randomly generating a positive-definite matrix using P = LL^T + ε_0 I_5, where ε_0 = 0.1 and the entries of L ∈ R^{5×5} are drawn from the uniform distribution U(0, 1). Fig. 2 shows the time evolution of the entries of the difference x̃_{k|k−1} = x̂*_{k|k−1} − x̂_{k|k−1} between the estimation outcome x̂*_{k|k−1} of the distributed Kalman filter and that x̂_{k|k−1} of the centralized one. We see how this difference asymptotically converges to zero.
In the second simulation we evaluate the same difference in the case where the initial covariance matrix is block-diagonal. To this end, we choose P = ε_1 I_5, where ε_1 is a random scalar drawn from the uniform distribution U(0, 1). The time evolution of the entries of the difference x̃_{k|k−1} is shown in Fig. 3. We see that these differences are negligible, due only to numerical errors. This confirms our claim that the estimates of both Kalman filter schemes are the same when the matrix P is block-diagonal.
7
7. Conclusion
We studied the distributed Kalman filter problem in a network of linear time-invariant subsystems. We proposed a distributed Kalman filter scheme which uses only local measurements, and we studied the extent to which this scheme approximates the centralized (i.e., optimal) Kalman filter. It turned out that the covariance matrix associated with the initial value of the state vector plays an important role. We showed that if this matrix is block-diagonal, the proposed distributed scheme is optimal. Moreover, if that condition is dropped, the estimation error covariances, and the associated estimates, obtained through the two approaches approximate each other exponentially fast. We also established bounds on the error between the estimates and between the associated covariance matrices.
Appendix A. Some lemmas
Lemma 11. [17, Lemma 25] For every x ∈ R and 0 ≤ y ≤ 1, e^{xy} − 1 ≤ (e^x − 1)y.
"
#
A B>
Lemma 12. [17, Lemma 26] If
≥ 0, then
B C
√
kBk ≤ kAk kCk.
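Both lemmas are easy to sanity-check numerically. The following sketch (ours, not part of [17]; plain NumPy, with arbitrary tolerance constants) tests Lemma 11 on random samples and Lemma 12 on random positive semidefinite block matrices, with ‖·‖ taken as the spectral norm:

import numpy as np

rng = np.random.default_rng(0)

# Lemma 11: e^{xy} - 1 <= (e^x - 1) y for every real x and 0 <= y <= 1.
for _ in range(1000):
    x = rng.uniform(-5.0, 5.0)
    y = rng.uniform(0.0, 1.0)
    assert np.exp(x * y) - 1 <= (np.exp(x) - 1) * y + 1e-12

# Lemma 12: if [[A, B^T], [B, C]] >= 0 then ||B|| <= sqrt(||A|| ||C||).
for _ in range(100):
    M = rng.standard_normal((10, 10))
    P = M @ M.T                      # a random 10x10 PSD matrix
    A, B, C = P[:5, :5], P[5:, :5], P[5:, 5:]
    lhs = np.linalg.norm(B, 2)       # spectral norm of the off-diagonal block
    rhs = np.sqrt(np.linalg.norm(A, 2) * np.linalg.norm(C, 2))
    assert lhs <= rhs + 1e-9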
References
[1] B. D. O. Anderson and J. B. Moore. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[2] R. R. Bitmead, M. R. Gevers, I. R. Petersen, and R. J. Kaye. Monotonicity and stabilizability properties of solutions of the Riccati difference equation: propositions, lemmas, theorems, fallacious conjectures and counterexamples. Systems & Control Letters, 5(5):309–315, 1985.
[3] P. Bougerol. Kalman filtering with random coefficients and contractions. SIAM Journal on Control and Optimization, 31(4):942–959, 1993.
[4] A. J. Conejo, S. De la Torre, and M. Canas. An optimization approach to multiarea state estimation. IEEE Transactions on Power Systems, 22(1):213–221, 2007.
[5] A. Gómez-Expósito, A. De la Villa Jaén, C. Gómez-Quiles, P. Rousseaux, and T. Van Cutsem. A taxonomy of multiarea state estimation methods. Electric Power Systems Research, 81(4):1060–1069, 2011.
[6] S. Kar, J. M. F. Moura, and K. Ramanan. Distributed parameter estimation in sensor networks: nonlinear observation models and imperfect communication. IEEE Transactions on Information Theory, 58(6):3575–3605, 2012.
[7] U. A. Khan and J. M. F. Moura. Distributing the Kalman filter for large-scale systems. IEEE Transactions on Signal Processing, 56(10):4919–4935, 2008.
[8] Z. Lin, L. Wang, Z. Han, and M. Fu. Distributed formation control of multi-agent systems using complex Laplacian. IEEE Transactions on Automatic Control, 59(7):1765–1777, 2014.
[9] Z. Lin, L. Wang, Z. Chen, M. Fu, and Z. Han. Necessary and sufficient graphical conditions for affine formation control. IEEE Transactions on Automatic Control, 61(10):2877–2891, 2016.
[10] Z. Lin, L. Wang, Z. Han, and M. Fu. A graph Laplacian approach to coordinate-free formation stabilization for directed networks. IEEE Transactions on Automatic Control, 61(5):1269–1280, 2016.
[11] C. G. Lopes and A. H. Sayed. Diffusion least-mean squares over adaptive networks: formulation and performance analysis. IEEE Transactions on Signal Processing, 56(7):3122–3136, 2008.
[12] D. Marelli and M. Fu. Distributed weighted linear least squares estimation with fast convergence in large-scale systems. Automatica, 51:27–39, 2015.
[13] R. Olfati-Saber. Distributed Kalman filter with embedded consensus filters. In IEEE Conference on Decision and Control, pages 8179–8184, 2005.
[14] R. Olfati-Saber. Kalman-consensus filter: optimality, stability, and performance. In IEEE Conference on Decision and Control, pages 7036–7042, 2009.
[15] A. Ribeiro, I. Schizas, S. Roumeliotis, and G. Giannakis. Kalman filtering in wireless sensor networks. IEEE Control Systems, 30(2):66–86, 2010.
[16] M. Subbotin and R. Smith. Design of distributed decentralized estimators for formations with fixed and stochastic communication topologies. Automatica, 45(11):2491–2501, 2009.
[17] T. Sui, D. Marelli, and M. Fu. Accuracy analysis for distributed weighted least-squares estimation in finite steps and loopy networks. Submitted to Automatica. URL http://www.cifasis-conicet.gov.ar/marelli/DWLS_accuracy.pdf.
[18] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson. A secure control framework for resource-limited adversaries. Automatica, 51:135–148, 2015.
[19] V. Ugrinovskii. Distributed robust filtering with consensus of estimates. Automatica, 47(1):1–13, 2011.
[20] V. Ugrinovskii. Distributed robust estimation over randomly switching networks using consensus. Automatica, 49(1):160–168, 2013.
[21] M. Zamani and V. Ugrinovskii. Minimum-energy distributed filtering. In IEEE Conference on Decision and Control, pages 3370–3375, 2014.
[22] M. Zamani, U. Helmke, and B. D. O. Anderson. Zeros of networked systems with time-invariant interconnections. Automatica, 61:97–105, 2015.
[23] W. Zhang, M. S. Branicky, and S. M. Phillips. Stability of networked control systems. IEEE Control Systems, 21(1):84–99, 2001.
[24] R. Zheng, Z. Lin, M. Fu, and D. Sun. Distributed control for uniform circumnavigation of ring-coupled unicycles. Automatica, 53:23–29, 2015.
| 3 |
arXiv:1609.07635v1 [] 24 Sep 2016
AMENABLE GROUPS OF FINITE COHOMOLOGICAL DIMENSION
AND THE ZERO DIVISOR CONJECTURE
DIETER DEGRIJSE
Abstract. We prove that every amenable group of cohomological dimension two whose
integral group ring is a domain is solvable and investigate certain homological finiteness
properties of groups that satisfy the analytic zero divisor conjecture and act on an acyclic
CW-complex with amenable stabilisers.
1. Introduction
All groups appearing in this paper are assumed to be discrete. Recall that the class
of elementary amenable groups is by definition the smallest class of groups that contains
all finite and all abelian groups and that is closed under directed unions, extensions,
quotients and subgroups. A group G is said to be amenable if for every continuous action
of G on a compact, Hausdorff space X, there is a G-invariant probability measure on X.
As the name suggests, all elementary amenable groups are amenable (e.g. see [3, Appendix
G]) and by now there is a multitude of examples of groups that are amenable but not
elementary amenable. Such groups arise for example in the class of branch groups, which
includes the first example by Grigorchuk of a group of intermediate growth (see [2],[17]).
Another important source of amenable non-elementary amenable groups is (commutator
subgroups of) topological full groups of Cantor minimal systems (see [16]) and we refer
to the introduction of [17] for more examples.
However, as far as we are aware none of the currently known examples of amenable
but not elementary amenable groups are known to have finite cohomological dimension
over any field. For example, many branch groups, including Grigorchuk’s example, are
commensurable to an n-fold direct product of themselves for some n ≥ 2 (see [2]). By
Corollary 2.3, the cohomological dimension of such a non-locally-finite group is infinite
over any field. Since commutator subgroups of topological full groups of Cantor minimal
systems contain ⊕_{n≥0} Z as a subgroup (see [13, Remark 4.3(2)]), these groups also have
infinite cohomological dimension over any field. Hence, one might well wonder if in fact all
amenable groups of finite cohomological dimension (over some ring or field) are elementary
amenable.
We will be working with cohomological dimension over the integers, which we will
therefore just refer to as cohomological dimension. Since all elementary amenable groups
of finite cohomological dimension must be torsion-free and have finite Hirsch length by
[14, Lemma 2], one can conclude from [15, Corollary 1] that they are virtually solvable.
We are therefore led to the following question, which was brought to the author’s attention
by Peter Kropholler.
Question. Is every amenable group of finite cohomological dimension virtually solvable?
By Stallings’ theorem ([27]) this question obviously has a positive answer for groups of
cohomological dimension at most one. By the Tits alternative, all amenable linear groups
of finite cohomological dimension are virtually solvable. Also note that the answer to the
more general question where one replaces ‘amenable’ by ‘does not contain non-abelian
free subgroups’ is no. Indeed, Olshanskii’s example of a torsion-free Tarski monster, i.e. a
finitely generated torsion-free non-cyclic group all of whose proper subgroups are cyclic, is
non-amenable and admits an aspherical presentation, showing that it has cohomological
dimension two (e.g. see [25]).
Our main results, which we will state below, depend on the validity of Kaplansky’s
zero divisor conjecture for group rings or a generalisation of it due to Linnell (e.g. see
[24]), called the analytic zero divisor conjecture, which states that if α ∈ C[G] \ {0} and
β ∈ l²(G) \ {0}, then αβ ≠ 0. There are no known counterexamples to the (analytic) zero
divisor conjecture and it has been proven for a wide class of groups, including torsion-free
elementary amenable groups (see [19, Th. 1.4]) and extensions of right orderable groups by
torsion-free elementary amenable groups (e.g. see [24, Th. 8.2]). Moreover, Elek showed
in [11] that for finitely generated amenable groups the Kaplansky zero divisor conjecture
over C and the analytic zero divisor conjecture are equivalent. We also remark that for a
torsion-free group G, Z[G] is a domain if and only if Q[G] is a domain and for any field
Z ⊆ k ⊆ C, the Atiyah conjecture of order Z implies that k[G] is a domain (e.g. see [23,
Lemma 10.15]).
Our first main result concerns the homological finiteness properties FP_n and FP of a group. These notions will be recalled in the next section.
Theorem A. Let G be an amenable group such that Z[G] is a domain. If G has cohomological dimension n and is of type FP_{n−1}, then G is of type FP.
Note that the conclusion of Theorem A is certainly false in general for non-amenable
torsion-free groups. Indeed, for n ≥ 1, the Bestvina-Brady group H_L associated to any flag-triangulation L of the (n − 1)-sphere contains a non-abelian free subgroup, has cohomological dimension n by [21, Th. 22] and is of type FP_{n−1} but not of type FP by the
main theorem of [5]. Also, the Tarski monster mentioned above is finitely generated and
of cohomological dimension two, but is not finitely presented.
Using Theorem A we obtain a positive answer to the question above in the 2-dimensional
case, assuming the validity of the zero-divisor conjecture. This generalises [18, Th. 3].
Theorem B. Every amenable group G of cohomological dimension 2 such that Z[G] is a
domain is solvable and hence isomorphic to a solvable Baumslag-Solitar group BS(1, m)
for some non-zero m ∈ Z or to a non-cyclic subgroup of the additive rationals.
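For orientation, recall the standard description of these groups (ours, not part of the theorem statement): for non-zero m,
\[ BS(1,m) = \langle a, t \mid t^{-1} a t = a^m \rangle \;\cong\; \mathbb{Z}[1/m] \rtimes \mathbb{Z}, \]
where the generator of Z acts on Z[1/m] by multiplication by m. These groups are metabelian, hence solvable, and are torsion-free of cohomological dimension 2.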
A modified version of the proof of Theorem A gives the following theorem.
Theorem C. Let G be a group of cohomological dimension n that satisfies the analytic
zero-divisor conjecture and admits an acyclic G-CW-complex X with amenable stabilisers.
If G is of type FP_{n−1} and X does not have a free G-orbit of n-cells, then G is of type FP.
Since an n-dimensional G-CW-complex obviously has no free G-orbit of (n + 1)-cells,
the following corollary is immediate.
Corollary 1. Let Γ be a group that admits an acyclic n-dimensional Γ-CW-complex with
amenable stabilisers. Then any subgroup G of Γ of cohomological dimension n + 1 that
satisfies the analytic zero-divisor conjecture and is of type FP_n is of type FP.
Recall that a group is said to be almost coherent if every finitely generated subgroup is almost finitely presented, i.e. of type FP_2.
Corollary 2. Fundamental groups of graphs of groups with cyclic edge groups and vertex
groups that are either subgroups of the additive rational numbers or solvable Baumslag-Solitar groups are almost coherent if they satisfy the analytic zero divisor conjecture.
A group G is said to commensurate a subgroup H if H^g ∩ H has finite index in both H and H^g for every element g ∈ G. In particular, if H is normal in G then G commensurates
H. Using Theorem C, we prove the following.
Corollary 3. Let G be a group that satisfies the analytic zero-divisor conjecture and
commensurates a non-trivial amenable subgroup. If G has cohomological dimension n and
is of type FP_{n−1}, then G is of type FP.
Acknowledgements. The author is grateful to Peter Kropholler for introducing him
to the question posed in this paper and for valuable discussions and remarks concerning
a preliminary version of this work. This research was partially supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation
(DNRF92).
2. Finiteness conditions
We start by recalling some basic facts about homological finiteness properties of groups
and refer the reader to [6], [8] and [31] for more details. Let G be a group, let R be a
commutative ring with unit and let R[G] denote the group ring of G with coefficients in
R. In this paper we will always consider left R[G]-modules, unless stated otherwise. If
there is no mention of a ring R, then R is assumed to equal Z.
The cohomological dimension cd_R(G) of G over R is by definition the length of the shortest projective R[G]-resolution of the trivial R[G]-module R. If there does not exist such a finite length projective resolution, then the cohomological dimension is by definition infinite. Equivalently, cd_R(G) is the largest integer n ≥ 0 for which there exists an R[G]-module M such that the n-th cohomology group H^n(G, M) is non-zero. The homological dimension hd_R(G) of G over R is by definition the length of the shortest flat R[G]-resolution of the trivial R[G]-module R. Again, if there does not exist a finite length flat resolution then the homological dimension is by definition infinite. Equivalently, hd_R(G) is the largest integer n ≥ 0 for which there exists a right R[G]-module M such that the n-th homology group H_n(G, M) is non-zero. In general one has hd_R(G) ≤ cd_R(G), and if G is countable one also has cd_R(G) ≤ hd_R(G) + 1 (e.g. see [6, Th. 4.6]). Note that cd_R(G) ≤ cd(G), and that cd(G) < ∞ implies that G is torsion-free.
The group G is said to be of type FP_n over R for an integer n ≥ 0 if and only if there exists a projective R[G]-resolution P_* of the trivial R[G]-module R such that P_d is a finitely generated R[G]-module for all 0 ≤ d ≤ n. The group G is said to be of type FP over R if and only if there exists a finite length projective R[G]-resolution P_* of the trivial R[G]-module R such that P_d is a finitely generated R[G]-module for all d ≥ 0. Every group G is of type FP_0 over R, and G is of type FP_1 over R if and only if G is finitely generated. If G is finitely presented then G is of type FP_2 over R, but not necessarily the other way around (see [6, Prop. 2.1 and 2.2] and [5]). Note also that if G is of type FP_n over R and cd_R(G) = n, then G is of type FP over R (e.g. see [8, VIII Prop. 6.1]).
The following lemma is a variation on a classical criterion due to Strebel implying that
a group is of type FP_n. It will be well known to experts.
Lemma 2.1. Let R be a commutative ring with unit and let G be a group with cd_R(G) = n that is of type FP_{n−1} over R. Then G is of type FP over R if and only if for any directed system of free R[G]-modules {F_i}_{i∈I} with lim_{→} F_i = 0 one has
lim_{→} H^n(G, F_i) = 0.
Proof. A classical result of Strebel (e.g. see [6, Th. 1.3]) implies that if G is of type FP_{n−1} over R and cd_R(G) = n, then G is of type FP if and only if lim_{→} H^n(G, M_i) = 0 for any directed system of R[G]-modules {M_i}_{i∈I} with lim_{→} M_i = 0. Let {M_i}_{i∈I} be such a directed system. Now for each i ∈ I, consider the free R[G]-module
F_i = ⊕_{m ∈ M_i \ {0}} R[G]
and the surjection of R[G]-modules
π_i : F_i → M_i : e_m ↦ m,
where e_m denotes the identity element in the R[G]-factor corresponding to m ∈ M_i. Given i ≤ j in I we have the corresponding R[G]-homomorphism ϕ_{i,j} : M_i → M_j. Define the R[G]-homomorphism
θ_{i,j} : F_i → F_j : e_m ↦ e_{ϕ_{i,j}(m)} if ϕ_{i,j}(m) ≠ 0, and e_m ↦ 0 otherwise.
One checks that the maps θ_{i,j} assemble to form a directed system {F_i}_{i∈I} with vanishing direct limit and that
{π_i}_{i∈I} : {F_i}_{i∈I} → {M_i}_{i∈I}
is a surjective map of directed systems, i.e. ϕ_{i,j} ◦ π_i = π_j ◦ θ_{i,j} for all i ≤ j in I. Since cd_R(G) = n we obtain, from a collection of natural long exact cohomology sequences, a surjective map of directed systems
{H^n(G, F_i)}_{i∈I} → {H^n(G, M_i)}_{i∈I} → 0.
Since taking direct limits is exact, there is a surjection
lim_{→} H^n(G, F_i) → lim_{→} H^n(G, M_i) → 0,
from which the lemma easily follows. □
Recall that two groups are said to be abstractly commensurable if they have isomorphic
subgroups of finite index. In the introduction we mentioned that non-locally-finite groups G that are abstractly commensurable to a direct product G^n for some n ≥ 2 have infinite
cohomological dimension over any field k. The following lemma will allow us to prove this
in the corollary below.
Lemma 2.2. Let k be a field and let G_i be a group with hd_k(G_i) = n_i for i = 1, 2. Then hd_k(G_1 × G_2) = n_1 + n_2.
Proof. Let M_i be a right k[G_i]-module such that H_{n_i}(G_i, M_i) ≠ 0 for i = 1, 2 and consider the right k[G_1 × G_2]-module M_1 ⊗_k M_2. By the Lyndon-Hochschild-Serre spectral sequence associated to the extension
1 → G_1 → G_1 × G_2 → G_2 → 1
it suffices to show that
H_{n_2}(G_2, H_{n_1}(G_1, M_1 ⊗_k M_2)) ≠ 0.
Let P_* be a projective k[G_1]-resolution of the trivial k[G_1]-module k. Since
(M_1 ⊗_k M_2) ⊗_{k[G_1]} P_* ≅ (M_1 ⊗_{k[G_1]} P_*) ⊗_k M_2
and − ⊗_k M_2 is an exact functor because k is a field, we have
H_{n_1}(G_1, M_1 ⊗_k M_2) ≅ H_{n_1}(G_1, M_1) ⊗_k M_2.
A similar reasoning shows that
H_{n_2}(G_2, H_{n_1}(G_1, M_1) ⊗_k M_2) ≅ H_{n_1}(G_1, M_1) ⊗_k H_{n_2}(G_2, M_2).
We conclude that
H_{n_2}(G_2, H_{n_1}(G_1, M_1 ⊗_k M_2)) ≅ H_{n_1}(G_1, M_1) ⊗_k H_{n_2}(G_2, M_2) ≠ 0,
proving the lemma. □
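As a quick illustration of Lemma 2.2 (a standard consequence, not from the paper): since hd_k(Z) = 1 for every field k, iterating the lemma gives
\[ \mathrm{hd}_k(\mathbb{Z}^n) = n \quad \text{for all } n \ge 1. \]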
Corollary 2.3. Let G be a group that is not locally finite. If G is abstractly commensurable to a direct product G^n for some n ≥ 2, then hd_k(G), and hence also cd_k(G), is infinite for any field k.
Proof. Suppose hd_k(G) < ∞. Then hd_k(G^n) < ∞ and we have hd_k(G) = hd_k(K) and hd_k(G^n) = hd_k(L) for every finite index subgroup K of G and finite index subgroup L of G^n by [6, Cor. 5.10]. Since G is abstractly commensurable with G^n, this implies that hd_k(G) = hd_k(G^n). But then the lemma above says that n · hd_k(G) = hd_k(G^n) = hd_k(G) for some n ≥ 2, which implies that hd_k(G) = 0. We now conclude from [6, Prop. 4.12(b)] that G is locally finite, which is a contradiction. This proves that hd_k(G) = ∞. □
3. Non-commutative localisation
Later on we will need to embed the group ring Z[G] of a certain group G into a ring
D such that every non-zero element of Z[G] is invertible in D. In this section we recall
how to construct such a ring D, assuming G satisfies the assumptions of either Theorem
A or Theorem B.
We start by briefly reviewing the Ore condition and the process of Ore localisation and
refer the reader to [20, 4.10] for details and proofs. Let R be a ring with a unit and let S
be a multiplicatively closed subset of R that contains the unit of R but does not contain
any zero divisors of R. The set S is said to satisfy the left Ore condition with respect to
R if for every s ∈ S and every a ∈ R, there exists t ∈ S and b ∈ R such that ta = bs. If
S satisfies the left Ore condition with respect to R one can construct the ring
S^{-1}R = (S × R)/∼,
where (s, a) ∼ (s′, a′) if there exist b, b′ ∈ R such that bs = b′s′ ∈ S and ba = b′a′ ∈ R. One denotes the equivalence class containing (s, a) suggestively by a/s and defines
a_1/s_1 + a_2/s_2 = (s a_1 + r a_2)/(s s_1),
where r ∈ R and s ∈ S are such that r s_2 = s s_1 ∈ S, and
(a_1/s_1) · (a_2/s_2) = (r a_2)/(s s_1),
where r ∈ R and s ∈ S are such that r s_2 = s a_1 ∈ R. One can check that this turns S^{-1}R into a ring equipped with an injective ring homomorphism
ϕ : R → S^{-1}R : r ↦ (1, r)
such that the image under ϕ of every element in S is invertible in S^{-1}R. We will identify R with its image ϕ(R) in S^{-1}R. The ring S^{-1}R is called the left Ore localisation of R at S. Finally, note that S^{-1}R is a flat right R-module and that we can also localise any left R-module M at S by defining
S^{-1}M = S^{-1}R ⊗_R M.
If R is a domain and the set of all non-zero elements of R satisfies the left Ore condition
with respect to R, then R is called a left Ore domain.
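As a concrete illustration (ours, not from the paper): for a commutative domain the left Ore condition holds automatically, and for G = Z one gets
\[ R = \mathbb{Z}[G] \cong \mathbb{Z}[t, t^{-1}], \qquad S = R \setminus \{0\}, \qquad S^{-1}R \cong \mathbb{Q}(t), \]
the field of rational functions. In the general formulas above one may take s = s_2 and r = s_1 for the sum, and s = s_2 and r = a_1 for the product, recovering the familiar fraction arithmetic a_1/s_1 + a_2/s_2 = (s_2 a_1 + s_1 a_2)/(s_1 s_2) and (a_1/s_1)(a_2/s_2) = (a_1 a_2)/(s_1 s_2).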
Returning to our specific setting, note that if G is a torsion-free group such that k[G] is
a domain for a field k, then k[G] is a left Ore domain if G is amenable by [29]. In fact, this
is an if and only if by the appendix of [1] due to Kielak. When exploring these references,
note that one can also consider the ‘right’ Ore condition and all corresponding ‘right’
notions but that for group rings (or more generally rings with an involution), the left Ore
condition and the right Ore condition are equivalent and the corresponding localisations
are isomorphic. For a group G one easily verifies that Z[G] is a left Ore domain if and
only if Q[G] is a left Ore domain. We may therefore conclude that if G is an amenable
group such that Z[G] is a domain, then Z[G] is a left Ore domain and can therefore be
embedded in a ring D = S^{-1}Z[G] (S = Z[G] \ {0}) such that every non-zero element of
Z[G] is invertible in D. This deals with the setup of Theorem A. In Theorem C however,
G does not need to be amenable. Next, we explain how to proceed in that situation.
Let G be a torsion-free group that satisfies the analytic zero divisor conjecture and consider the spaces
l²(G) = { Σ_{g∈G} a_g g (a_g ∈ C) : Σ_{g∈G} |a_g|² < ∞ }
and
l^∞(G) = { Σ_{g∈G} a_g g (a_g ∈ C) : sup_{g∈G} |a_g| < ∞ }.
One has the obvious inclusions C[G] ⊆ l²(G) ⊆ l^∞(G), and the natural ring multiplication on C[G] extends to a map
l²(G) × l²(G) → l^∞(G) : (α = Σ_{g∈G} a_g g, β = Σ_{h∈G} b_h h) ↦ αβ = Σ_{g∈G} ( Σ_{h∈G} a_{gh^{-1}} b_h ) g.
The group von Neumann algebra N(G) can be defined (e.g. see [24, Section 8]) as
N(G) = { α ∈ l²(G) | αβ ∈ l²(G) for all β ∈ l²(G) }
and we have an inclusion of rings C[G] ⊆ N (G) such that no non-zero element of C[G] is a
zero-divisor in N (G), by the analytic zero divisor conjecture. Since the set S of non-zero
divisors in N (G) satisfies the left (and right) Ore condition in N (G) (e.g. see [23, Th.
8.22]), one can consider the left (or right) Ore localisation S^{-1}N(G), which turns out to be isomorphic to the algebra U(G) of operators affiliated to N(G) (e.g. see [23, Ch. 8]).
Important for our purposes is that, under the assumptions of Theorem C, we have found
a ring U(G) containing C[G] (and hence Z[G]) such that every non-zero element of C[G]
(and hence Z[G]) is invertible in U(G).
To summarise, if a group G satisfies the assumptions of either Theorem A or Theorem
C, then the group ring Z[G] can be embedded in a ring D such that every non-zero
element of Z[G] is invertible in D. Our reason for needing such an embedding is contained
in Lemma 3.1 below, which will be used in the proofs of Theorems A and C in the next
section.
Lemma 3.1. Let G be a non-trivial amenable group and assume that the group ring Z[G] can be embedded in a ring D such that every non-zero element of Z[G] is invertible in D. Then for any index set J, the sum ⊕_{j∈J} D is an injective Z[G]-module and for all n ≥ 0,
H^n(G, ⊕_{j∈J} D) = 0.
Proof. Since every non-zero element of Z[G] is invertible in D and G is amenable, it follows that Z[G] is an Ore domain and that ⊕_{j∈J} D is a torsion-free divisible Z[G]-module. By [30, Th. 1], this implies that ⊕_{j∈J} D is an injective Z[G]-module. From this we immediately conclude that H^n(G, ⊕_{j∈J} D) = 0 for all n > 0. Since g − 1 is invertible in D for every g ∈ G, we have H^0(G, D) = D^G = 0 and hence also H^0(G, ⊕_{j∈J} D) = 0. □
Remark 3.2. Let G be a group that contains a copy of the free group on 2 generators
and assume that Z[G] embeds into a ring D such that every non-zero element of Z[G] is
invertible in D (for example, the group ring of any torsion-free one-relator group embeds
into a skew-field by [22]). Then D is a torsion-free divisible Z[G]-module. In particular,
(g − 1)m ≠ 0 for any non-trivial g ∈ G and non-zero m ∈ D. But as observed in [26, Ch.
4. Prop. 2.2.] this implies that D is not an injective Z[G]-module. Hence the conclusion
of the lemma above does not hold for groups that contain non-abelian free subgroups.
This also shows that the injective hull I(D) of D viewed as a Z[G]-module cannot be
given the structure of a D-module extending the ring structure of D. Indeed, assume this
was the case and take a non-zero m ∈ I(D). Since I(D) is an essential extension of D
as a Z[G]-module, there exists a non-zero r ∈ Z[G] such that r · m ∈ D. But since r is invertible in D and the ring structure of D can be extended to a D-module structure on I(D), we have m = (r^{-1}r) · m = r^{-1}(r · m) ∈ D. Hence D = I(D) and D is an injective
Z[G]-module, which is a contradiction.
4. Proofs of the main results
We are now ready to prove our main results.
Proof of Theorems A and C. Let G be a group that satisfies the assumptions of either Theorem A or Theorem C. Our task is to prove that G is of type FP. By Lemma 2.1 it suffices to show that
lim_{→} H^n(G, F_i) = 0
for any vanishing directed system of free Z[G]-modules {Fi }i∈I . Let {Fi }i∈I be such a
system. As we discussed in the previous section, we can embed the ring Z[G] into a ring
D such that every non-zero element of Z[G] is invertible in D. Consider the short exact
sequence
0 → Z[G] → D → D/Z[G] → 0.
Tensoring with − ⊗Z[G] Fi for every i ∈ I, we obtain a short exact sequence of vanishing
directed systems of left Z[G]-modules
0 → {Fi }i∈I → {D ⊗Z[G] Fi }i∈I → {D ⊗Z[G] Fi /Fi }i∈I → 0.
By considering the associated directed system of long exact cohomology sequences and
taking the direct limit, we obtain the exact sequence
lim_{→} H^{n−1}(G, (D ⊗_{Z[G]} F_i)/F_i) → lim_{→} H^n(G, F_i) → lim_{→} H^n(G, D ⊗_{Z[G]} F_i).
Since G is of type FP_{n−1}, it follows from [6, Th. 1.3] that
lim_{→} H^{n−1}(G, (D ⊗_{Z[G]} F_i)/F_i) = 0.
For each i ∈ I, we have D ⊗_{Z[G]} F_i ≅ ⊕_{j∈J} D for some index set J depending on i.
Now let X be an acyclic G-CW complex with amenable stabilisers and without a free
orbit of n-cells. If G is amenable we can choose X = {∗}. Associated to the acyclic
G-CW-complex X, there is a convergent stabiliser spectral sequence (e.g. see [8, Ch. VIII, Section 7])
E_1^{p,q} = ∏_{σ∈Δ_p} H^q(K_σ, ⊕_{j∈J} D) ⟹ H^{p+q}(G, ⊕_{j∈J} D).
Here, ∆p is a set of representatives of G-orbits of p-cells of X and Kσ is the stabiliser of
σ. Since every Kσ is amenable, we conclude from Lemma 3.1 that
H^q(K_σ, ⊕_{j∈J} D) = 0
for every q > 0, every p ≥ 0 and every σ ∈ Δ_p. Hence, E_1^{p,q} = 0 for all q > 0.
Since K_σ is a non-trivial amenable group for every σ ∈ Δ_n, it also follows from Lemma 3.1 that
E_1^{n,0} = ∏_{σ∈Δ_n} H^0(K_σ, ⊕_{j∈J} D) = 0.
We conclude that the entire (p, q)-line with p + q = n of the spectral sequence above is zero. It follows that H^n(G, ⊕_{j∈J} D) = 0 and hence
lim_{→} H^n(G, D ⊗_{Z[G]} F_i) = 0.
This implies that lim_{→} H^n(G, F_i) = 0, which proves Theorems A and C. □
Proof of Corollary 2. Let Γ be the fundamental group of a graph of groups with cyclic
edge groups and vertex groups that are either subgroups of the additive rational numbers
or solvable Baumslag-Solitar groups. By Bass-Serre theory ([28]), Γ acts on a tree T
(a 1-dimensional acyclic Γ-CW-complex) with cyclic edge (1-cell) stabilisers and vertex
(0-cell) stabilisers that are either subgroups of the additive rational numbers or solvable
Baumslag-Solitar groups. It follows that the stabilisers of 0-cells have cohomological dimension at most two (e.g. see [12]) while the stabilisers of 1-cells have cohomological
dimension at most one. An application of the stabiliser spectral sequence associated to
the action of Γ on T shows that Γ, and hence also every subgroup of Γ, has cohomological
dimension at most 2. Since finitely generated groups are of type FP_1, the corollary follows
from Corollary 1.
Proof of Corollary 3. Let H be a non-trivial amenable subgroup of G such that G commensurates H. Now consider the discrete G-set S = ∐_{g∈G} G/H^g and let X denote the
infinite join of S with itself. By construction, there is a cellular and admissible action of
G on X such that the cell-stabilisers are finite intersections of conjugates of H. Since G
commensurates H, we conclude that every cell in X has a non-trivial amenable stabiliser.
Moreover, since taking joins increases the connectivity, X is acyclic. We conclude that X
is an acyclic G-CW complex with amenable stabilisers and no free orbits of n-cells, for
any n. The corollary now follows from Theorem C.
We now turn to the proof of Theorem B.
Proof of Theorem B. First consider the case where G is finitely generated. In this case it follows from Theorem A that G is of type FP. Since amenable groups satisfy the Bass conjecture by [4], we conclude from [10, Section 4.1] that the ordinary Euler characteristic of G coincides with the L²-Euler characteristic of G and is therefore equal to zero (see also [23]). This implies that H_1(G, C) ≅ C ⊗_Z G/[G, G] ≠ 0, proving that G admits a surjection π onto Z. Since G is of type FP_2, i.e. almost finitely presented, it follows from
[7, Theorem A] that G can be realised as an HNN-extension of a finitely generated base
group K with stable letter t mapped to 1 ∈ Z by π. Since G is amenable, it does not
contain non-abelian free subgroups. This forces the HNN-extension to be ascending, by
Britton’s Lemma. Also, by Theorem A, K is of type FP. We claim that cd(K) = 1. To see this, assume for contradiction that cd(K) = 2. Since K must be one-ended (as it does not contain non-abelian free subgroups), this implies that H^1(K, Z[G]) = 0. But by [9,
Th. 0.1], the map α in the Mayer-Vietoris sequence
H^1(K, Z[G]) → H^2(G, Z[G]) → H^2(K, Z[G]) −α→ H^2(K, Z[G])
associated to the ascending HNN-extension G = K∗_t is injective. We therefore conclude
that H^2(G, Z[G]) = 0, contradicting the assumption that cd(G) = 2. This proves our claim, implying that K is free by Stallings’ theorem [27]. Since K is amenable, this forces K = Z = ⟨a⟩, proving that G equals a solvable Baumslag-Solitar group
BS(1, m) = ⟨a, t | t^{-1}at = a^m⟩
for some non-zero m ∈ Z.
Now consider the general case. Since every finitely generated group of cohomological
dimension 1 is free by Stallings’ theorem, it follows from the above that every finitely generated subgroup of G is either infinite cyclic or a solvable Baumslag-Solitar group. In
particular, G is locally solvable and elementary amenable of finite Hirsch length. Since
torsion-free elementary amenable groups of finite Hirsch length are virtually solvable by
[15], it follows that G must be solvable. By [12, Theorem 5], every solvable group of
cohomological dimension two is isomorphic to either a solvable Baumslag-Solitar group
or a non-cyclic subgroup of the additive rationals.
References
[1] Bartholdi L., Kielak, D. Amenability of groups is characterized by Myhill’s Theorem, arXiv
preprint (2016)
[2] Bartholdi L., Grigorchuk, R., and Sunik, Z., Branch Groups, Handbook of algebra, Vol. 3 (2003),
North-Holland, Amsterdam, 989–1112
[3] Bekka, B., de la Harpe, P., and Valette, A., Kazhdan’s Property (T), New Mathematical Monographs. Cambridge University Press (2008)
[4] Berrick, A. J., Chatterji, I., and Mislin, G., From acyclic groups to the Bass conjecture for
amenable groups, Math. Ann. Vol. 329(4) (2004), 597–621
[5] Bestvina, M. and Brady, N., Morse theory and finiteness properties of groups, Invent. Math.
129(3) (1997), 445–470
[6] Bieri, R., Homological Dimension of Discrete Groups, Queen Mary College Mathematics Notes
[7] Bieri, R., and Strebel, R., Almost finitely presented soluble groups, Comm. Math. Helv. Vol. 53(1)
(1978), 258–278
[8] Brown, K. S., Cohomology of groups, Graduate texts in Mathematics 77, Springer (1982)
[9] Brown, K. S. and Geoghegan R., Cohomology with free coefficients of the fundamental group of
a graph of groups, Comm. Math. Helv. Vol. 60 (1985), 31–45
[10] Eckmann, B., Projective and Hilbert modules over group algebras, and finitely dominated spaces,
Comm. Math. Helv. Vol. 71(1) (1996), 453–462
[11] Elek, G., On the analytic zero divisor conjecture of Linnell, Bulletin of the London Mathematical
Society, Vol. 35(2) (2003), 236–238
[12] Gildenhuys, D., Classification of soluble groups of cohomological dimension two, Math. Zeit.,
Vol. 166(1) (1979), 21–25
[13] Grigorchuk, R., and Medynets, K., On algebraic properties of topological full groups, Sbornik:
Mathematics, Vol. 205(6) (2014), 87–108
[14] Hillman, J. A., Elementary amenable groups and 4-manifolds with Euler characteristic 0, J. Austral. Math. Soc. Ser. A 50 (1991), 160–170
[15] Hillman, J. A., and Linnell, P. A., Elementary amenable groups of finite Hirsch length are locally-finite by virtually-solvable, J. Austral. Math. Soc. Ser. A, Vol. 52(2) (1992), 237–241
[16] Juschenko, K., and Monod, N., Cantor systems, piecewise translations and simple amenable
groups, Ann. Math. (2) 178(2) (2013), 775–787
[17] Juschenko, K., Non-elementary amenable subgroups of automata groups, Journal of Topology and
Analysis (to appear)
[18] Kropholler P., Linnell, P., and Lück, W., Groups of small homological dimension and the Atiyah
conjecture, Geometric and Cohomological Methods in Group Theory, LMS Lecture Note Series
358 (2009), 272–277
[19] Kropholler P., Linnell, P., and Moody, J.A, Applications of a New K-Theoretic Theorem to
Soluble Group Rings, Proc. Am. Math. Soc. Vol. 104(3) (1988), 675–684
[20] Lam, T.Y., Lectures on Modules and Rings, Graduate texts in Mathematics 189 (1998), Springer
[21] Leary, I. J. and Saadetoğlu M., The cohomology of Bestvina-Brady groups, Groups, Geometry
and Dynamics 5 (1) (2011), 121-138.
[22] Lewin J. and Lewin T., An embedding of the group algebra of a torsion-free one-relator group in
a field, Journal of Algebra Vol. 52(1) (1978), 39–74
[23] Lück, W., L²-invariants: theory and applications to geometry and K-theory, A Series of Modern
Surveys in Mathematics, Vol. 44, Springer-Verlag, Berlin, 2002.
[24] Linnell, P., Analytic versions of the zero divisor conjecture, Geometry and cohomology in group
theory (Durham 1994), London Math. Soc. Lecture Note Series 252, Cambridge University Press
(1998) 209–248.
[25] Olshanskii, A.Y., The Geometry of Defining Relations in Groups, Kluwer Academic Publishers,
(1991)
[26] Roman A.H., Zero Divisors, Group Von Neumann Algebras and Injective Modules, Master thesis,
Virginia Polytechnic Institute and State University (2015)
[27] Stallings J. R., Groups of dimension 1 are locally free, Bull. Amer. Math. Soc. 74 (1968), 361–364
[28] Serre, J.-P., Trees, Springer (1980).
[29] Tamari D., A refined classification of semi-groups leading to generalised polynomial rings with a
generalized degree concept, Proc. ICM vol. 3, Amsterdam (1954) 439–440.
[30] Van de Water, A., A Property of torsion-free modules over left Ore domains, Proc. Am. Math.
Soc. Vol. 25(1) (1970), 199–201
[31] Weibel, C.A., An introduction to homological algebra, Cambridge University Press (1994)
School of Mathematics, Statistics and Applied Mathematics, NUI Galway, Ireland
E-mail address: dieter.degrijse@nuigalway.ie
| 4 |
arXiv:1706.04387v1 [] 14 Jun 2017
TOPOLOGICAL FINITENESS PROPERTIES OF MONOIDS
PART 1: FOUNDATIONS
ROBERT D. GRAY AND BENJAMIN STEINBERG
Abstract. We initiate the study of higher dimensional topological finiteness properties of
monoids. This is done by developing the theory of monoids acting on CW complexes. For this
we establish the foundations of M -equivariant homotopy theory where M is a discrete monoid.
For projective M -CW complexes we prove several fundamental results such as the homotopy
extension and lifting property, which we use to prove the M -equivariant Whitehead theorems.
We define a left equivariant classifying space as a contractible projective M -CW complex. We
prove that such a space is unique up to M -homotopy equivalence and give a canonical model for
such a space via the nerve of the right Cayley graph category of the monoid. The topological
finiteness conditions left-Fn and left geometric dimension are then defined for monoids in terms
of existence of a left equivariant classifying space satisfying appropriate finiteness properties.
We also introduce the bilateral notion of M -equivariant classifying space, proving uniqueness
and giving a canonical model via the nerve of the two-sided Cayley graph category, and we define
the associated finiteness properties bi-Fn and geometric dimension. We explore the connections
between all of the these topological finiteness properties and several well-studied homological
finiteness properties of monoids which are important in the theory of string rewriting systems,
including FPn , cohomological dimension, and Hochschild cohomological dimension. We also
develop the corresponding theory of M -equivariant collapsing schemes (that is, M -equivariant
discrete Morse theory), and among other things apply it to give topological proofs of results of
Anick, Squier and Kobayashi that monoids which admit presentations by complete rewriting
systems are left- right- and bi-FP∞ .
1. Introduction
The study of the higher dimensional finiteness properties of groups was initiated fifty years
ago by C. T. C. Wall [Wal65] and Serre [Ser71]. An Eilenberg–MacLane complex K(G, 1) for a
discrete group G, also called a classifying space, is an aspherical CW complex with fundamental
group G. Such a space can always be constructed for any group G (e.g. via the bar construction) and it is unique up to homotopy equivalence. While useful for theoretical purposes, this
canonical K(G, 1)-complex is very big and is often not useful for practical purposes, specifically
if one wants to compute the homology of the group. It is therefore natural to seek a ‘small’
K(G, 1) for a given group by imposing various finiteness conditions on the space. Two of the
most natural and well-studied such conditions are the topological finiteness property Fn and
the geometric dimension gd(G) of the group.
Property Fn was introduced by C. T. C. Wall in [Wal65]. A group G is said to be of type
Fn if it has an Eilenberg–MacLane complex K(G, 1) with finite n-skeleton. It is easily verified
that a group is finitely generated if and only if it is of type F1 and is finitely presented if
Date: June 15, 2017.
2010 Mathematics Subject Classification. 20M50, 20M05, 20J05, 57M07, 20F10, 20F65.
Key words and phrases. CW complex, monoid, equivariant homotopy theory, homological finiteness property FP_n, cohomological dimension, Hochschild cohomology, rewriting system, collapsing scheme, discrete Morse theory.
This work was supported by the EPSRC grant EP/N033353/1 ‘Special inverse monoids: subgroups, structure,
geometry, rewriting systems and the word problem’. The second author was supported in part by United States-Israel Binational Science Foundation #2012080 and NSA MSP #H98230-16-1-0047.
and only if it is of type F2 . Thus property Fn generalises the two fundamental finiteness
properties of being finitely generated, or finitely presented, to higher dimensions. The geometric
dimension of G, denoted gd(G), is the smallest non-negative integer n such that there exists
an n-dimensional K(G, 1) complex. If no such n exists, then we set gd(G) = ∞. For more
general background on higher dimensional finiteness properties of groups we refer the reader to
the books [Bro94, Chapter 8], [Geo08, Chapters 6-9], or the survey article [Bro10].
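For orientation, two standard illustrations (ours, not from the paper):
\[ \mathrm{gd}(\mathbb{Z}^n) = n, \qquad \mathrm{gd}(F_k) = 1 \quad (k \ge 1), \]
since the n-torus is a K(Z^n, 1) and a wedge of k circles is a K(F_k, 1).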
Each of these topological finiteness properties has a natural counterpart in homological algebra given in terms of the existence of projective resolutions of ZG-modules. The analogue
of Fn in this context is the homological finiteness property FPn , while geometric dimension
corresponds to the cohomological dimension of the group. Recall that a group G is said to be
of type FPn (for a positive integer n) if there is a projective resolution P of Z over ZG such
that Pi is finitely generated for i ≤ n. We say that G is of type FP∞ if there is a projective
resolution P of Z over ZG with Pi finitely generated for all i. The property FPn was introduced
for groups by Bieri in [Bie76] and since then has received a great deal of attention in the literature; see [BB97,BH01,Bra99,BW07,LS06]. For groups, Fn and FPn are equivalent for n = 0, 1,
while important results of Bestvina and Brady [BB97] show that FP2 is definitely weaker than
F2 . For higher n there are no further differences, in that a group G is of type Fn (2 ≤ n ≤ ∞)
if and only if it is finitely presented and of type FPn . One natural line of investigation has
been the study of the closure properties of FPn . Examples include results about the behaviour
of FPn under taking: finite index subgroups or extensions, direct (and semidirect) products,
wreath products, HNN extensions, amalgamated free products, and quasi-isometry invariance;
see for example [Alo94, BBG98, Bie76]. In [BHMS02] it is shown that if G is a subgroup of a
direct product of n surface groups and G is of type FP_n, then G has a subgroup of finite
index which is the direct product of at most n finitely generated surface groups. Thompson’s
groups, and several interesting generalisations of these groups, have all been shown to be of type
FP∞ ; see for example [Bro87, FMWZ13, Ste92, GS06].
The cohomological dimension of a group G, denoted cd(G), is the smallest non-negative
integer n such that there exists a projective resolution P = (Pi )i≥0 of Z over ZG of length ≤ n,
i.e., satisfying Pi = 0 for i > n. (Or, if no such n exists, then we set cd(G) = ∞.) The geometric
dimension of a group provides an upper bound for the cohomological dimension. It is easily
seen that gd(G) = cd(G) = 0 if and only if G is trivial. It follows from important results of
Stallings [Sta68] and Swan [Swa69] that gd(G) = cd(G) = 1 if and only if G is a non-trivial free
group. Eilenberg and Ganea [EG57] proved that for n ≥ 3 the cohomological and the geometric
dimension of a group are the same. The famous Eilenberg–Ganea problem asks whether this
also holds in dimension two.
Working in the more general context of monoids, and projective resolutions of left ZM-modules, gives the notion of left-FP_n, and left cohomological dimension, of a monoid M. There
is an obvious dual notion of monoids of type right-FPn , working with right ZM -modules.
Working instead with bimodule resolutions of the (ZM, ZM)-bimodule ZM one obtains the notion of bi-FP_n introduced and studied in [KO01]. Property bi-FP_n is of interest from the point of
view of Hochschild cohomology, which is the standard notion of cohomology for rings; see [Hoc45],
[Wei94, Chapter 9], or [Mit72]. For monoids all these notions of FPn are known to be different,
while for groups they are all equivalent; see [Coh92, Pri06]. Similarly there is a dual notion of
the right cohomological dimension of a monoid which again is in general not equal to the left
cohomological dimension; see [GP98]. The two-sided notion is the Hochschild cohomological
dimension [Mit72].
In monoid and semigroup theory the property FPn arises naturally in the study of string
rewriting systems (i.e. semigroup presentations). The history of rewriting systems in monoids and groups is long and distinguished, and has roots in fundamental work of Dehn and
Thue. A central topic in this area is the study of complete rewriting systems and in methods for computing normal forms. A finite complete rewriting system is a finite presentation for a monoid of a particular form (both confluent and Noetherian) which in particular
gives a solution of the word problem for the monoid; see [BO93]. It is therefore of considerable interest to develop an understanding of which monoids are presentable by such rewriting systems. Many important classes of groups are known to be presentable by finite complete rewriting systems, including surface groups, Coxeter groups, and many closed threemanifold groups. Rewriting systems continue to receive a lot of attention in the literature;
see [CS09a,CS09b,Cho06,DDM09,GS08,HS99,PO05]. The connection between complete rewriting systems and homological finiteness properties is given by a result of Anick [Ani86] (see
also [Bro92]) which shows that a monoid that admits such a presentation must be of type
left- and right-FP∞ ; the special case of FP3 was also handled by Squier [Squ87]. More generally Kobayashi [Kob05] proved that any such monoid is of type bi-FP∞ . A range of other
interesting homotopical and homological finiteness properties have been studied in relation to
monoids defined by complete rewriting systems, including finite homological type, finite derivation type, and higher dimensional analogues FDT_n; see [SOK94, PO04, PO05, GM12]. More
background on the importance of the property FP_n (and other related finiteness conditions) in
semigroup theory, and the connections with the theory of string rewriting systems may be found
in the survey articles [Coh97, OK97]. Results on cohomology, and cohomological dimension, of
monoids include [AR67, GP96, Nic69, Nic72, CS80, Nun95, GP98] and [Nov98]. The cohomological dimension of left regular bands was recently considered in [MSS15b] and [MSS15a] where
connections with the Leray number of simplicial complexes [KM05] and the homology of cell
complexes were obtained.
It is often easier to establish the topological finiteness properties Fn for a group than the
homological finiteness properties FPn , especially if there is a suitable geometry or topological
space available on which the group acts cocompactly. The desired homological finiteness properties can then be derived by the above-mentioned result for groups, that Fn (for n ≥ 2) is
equivalent to being finitely presented and of type FPn . In contrast, no corresponding theory of
Fn for monoids currently exists. Similarly, there is currently no analogue of geometric dimension of monoids in the literature. The study of homological finiteness properties of monoids
should greatly profit from the development of a corresponding theory of topological finiteness
properties of monoids. The central aim of the present article is to lay the foundations of such
a theory.
For such a theory to be useful in the study of homological finiteness properties of monoids
there are certain properties that any definition of left-Fn , and left geometric dimension, should
certainly satisfy. Specifically left-Fn should imply left-FPn , and the left geometric dimension
should provide an upper bound for the left cohomological dimension. The fundamental question
that needs to be addressed when developing this theory is to determine the correct analogue
of the K(G, 1)-complex in the theory for monoids. There is a natural notion of classifying
space |BM | of a monoid M . This is obtained by viewing M as a one-point category, letting
BM denote the nerve of this category, and setting |BM | as the geometric realisation of the
nerve; see Section 5 for full details of this construction. For a group G this space |BG| is a
K(G, 1)-complex; it is the canonical complex for G mentioned earlier in the introduction. Since
K(G, 1)s are unique up to homotopy equivalence the finiteness conditions Fn and cohomological
dimension can all be defined in terms of existence of CW complexes homotopy equivalent to
|BG| satisfying the appropriate finiteness property. Indeed in group theory it is a common
approach in the study of these topological finiteness properties to begin with the space |BG|
and then seek transformations on the space which preserve the homotopy equivalence class, but
make the space smaller. This is, for example, the basic idea behind the theory of collapsing
schemes (which will be discussed in more detail in Section 8). It could be regarded as natural
therefore to try to define and study topological finiteness properties of a monoid M in terms of
the space |BM |. We note that there is an extensive literature on the study of classifying spaces
|BM | of monoids and related topics; see for instance [Fie84,Seg78,Nun95,McC69,McD79,MS76,
LN01, Pup58, Pup59, KT76, Hur89].
It turns out, however, that using |BM | to define topological finiteness properties of monoids
is not the right approach in the sense that it would lead to a definition of Fn for monoids which
does not imply left- or right-FPn , and there are similar issues for the corresponding definition
of geometric dimension. This essentially comes down to the fact that the space |BM | does not
contain enough information about the monoid M to recover the corresponding homological
finiteness properties.
In more detail, by applying results of McDuff [McD79] it is possible to show that there are
examples of monoids which are not of type left-FP1 even though |BM | is contractible. For
example, if M is an infinite left zero semigroup (a semigroup with multiplication xy = x for
all elements x and y) with an identity adjoined then by [McD79, Lemma 5] the space |BM |
is contractible while it is straightforward to show that M does not even satisfy the property
left-FP1 (this also follows from Theorem 6.13 below). This shows that one should not define
property Fn for monoids using the space |BM |. Similar comments apply to attempts to define
geometric dimension–if one tries to define geometric dimension using |BM | then if M is any
monoid with a left zero element but no right zero element, the left cd(M ) would not equal zero
(by Proposition 6.28) while the geometric dimension would be zero.
This issue in fact arose in work of Brown [Bro92] when he introduced the theory of collapsing
schemes. In that paper Brown shows that if a monoid M admits a presentation by a finite
complete rewriting system, then |BM | has the homotopy type of a CW complex with only
finitely many cells in each dimension. When M is a group this automatically implies that the
group is of type FP∞ . Brown goes on to comment
“We would like, more generally, to construct a ‘small’ resolution of this type
for any monoid M with a good set of normal forms, not just for groups. I do
not know any way to formally deduce such a resolution from the existence of the
homotopy equivalence for |BM | above”.
As the comments above show, just knowing about the homotopy equivalence class of |BM |
will never suffice in order to deduce that the monoid is left- (or right-) FP∞ . It turns out
that the correct framework for studying topological finiteness properties of monoids is to pass to the universal bundle $|\overrightarrow{E}M|$ over |BM|, which has a concrete description as the geometric realisation of the right Cayley graph category of the monoid (this will be explained in detail in Section 5). The space $|\overrightarrow{E}M|$ is contractible and the monoid has a natural action by left multiplication on this space. This action is free and sends n-cells to n-cells. It turns out that
this space is the correct canonical model of what we shall call a left equivariant classifying space
for the monoid. In fact we are able to work in the more general context of projective M -sets,
and shall define a left equivariant classifying space as a contractible projective M -CW complex
(see Section 2 for the definition of projective M -CW complex). The corresponding finiteness
conditions left-Fn and left geometric dimension are then defined in the obvious natural way
in terms of the existence of a left equivariant classifying space satisfying appropriate finiteness
properties. Consequently, in order to define and study the topological finiteness properties of
monoids we are interested in, it will first be necessary for us to develop some of the foundations
of M -equivariant homotopy theory. Equivariant homotopy theory, and cohomology theory, for
groups is an important and well-established area; see for example [May96] for an introduction.
In this way, we are interested in studying homological finiteness properties of monoids by
investigating their actions on CW complexes.
The paper is structured as follows. The notions of free and projective M -CW complexes
are introduced in Section 2. For projective M -CW complexes we then go on to prove an
M -equivariant version of HELP (homotopy extension and lifting property), from which we
deduce the M -equivariant Whitehead theorems. We also apply HELP to prove that every
M -equivariant continuous mapping of projective M -CW complexes is M -homotopy equivalent
to a cellular one (the cellular approximation theorem). In Section 3 some useful base change
theorems are established. Section 4 is concerned with monoids acting on simplicial sets. In
particular we show how (rigid) M -CW complexes arise from (rigid) M -simplicial sets by taking
the geometric realisation. In Section 5 we recall the definition of the nerve of a category. By
considering three natural categories associated with a monoid M , namely the one-point category, the right Cayley graph category, and the two-sided Cayley graph category, we associate
three CW complexes with the monoid, denoted |BM|, $|\overrightarrow{E}M|$ and $|\overleftrightarrow{E}M|$. We go through these constructions in detail in that section. In particular the spaces $|\overrightarrow{E}M|$ and $|\overleftrightarrow{E}M|$ are the canonical models of M -equivariant (respectively two-sided M -equivariant) classifying spaces of the
monoid. In Section 6 we will define left-equivariant, and dually right-equivariant, classifying
spaces for a monoid M . We prove that such spaces always exist, and that they are unique
up to M -homotopy equivalence. Using the notion of left-equivariant classifying space we define property left-Fn for monoids. We prove several fundamental results about this property,
including results showing its relationship with property left-FPn , and its connection with the
properties of being finitely generated and finitely presented. We prove some results about the
closure properties of property left-Fn , in particular results relating the property holding in M
to it holding in certain maximal subgroups of M . We also introduce the notion of left geometric
dimension of a monoid in this section. We show that it is possible to find an M -finite equivariant classifying space for M which is projective when no M -finite free equivariant classifying
space exists, justifying our choice to work with projective M -CW complexes. The geometric dimension is proved to provide an upper bound for the cohomological dimension, and we
characterize monoids with left geometric dimension equal to zero. In Section 7 we introduce
the bilateral notion of a classifying space in order to introduce the stronger property of bi-Fn .
We prove results for bi-Fn analogous to those previously established for left- and right-Fn . In
particular we show that bi-Fn implies bi-FPn which is of interest from the point of view of
Hochschild cohomology. We also define the geometric dimension as the minimum dimension of
a bi-equivariant classifying space and show how it relates to the Hochschild dimension. In Sections 8 and 9 we develop the theory of M -equivariant collapsing schemes [Bro92]. Equivalently,
this may be viewed as the development of M -equivariant discrete Morse theory in the sense
of Forman [For02]. We show in Section 10 that this theory can be applied to monoids which
admit a so-called guarded collapsing scheme. Then, in Section 11, we identify some classes of
monoids which admit guarded collapsing schemes, and in particular recover a topological proof
of Anick’s theorem, and more generally Kobayashi’s theorem, that monoids defined by finite
complete rewriting systems are of type bi-FP∞ (by proving that they are all of type bi-F∞ ).
Brown proved Anick’s theorem by developing a theory of collapsing schemes (or discrete Morse
theory) for chain complexes, which has been rediscovered by other authors [Koz05] later on.
Our approach obviates the need to develop a chain complex analogue in this setting.
Applications of the topological approach set out in this paper will be given in a future
article by the current authors [GS]. Among other things in that paper we shall show how
our topological approach can be used to prove results about the closure properties of left-Fn
and bi-Fn for (i) amalgamated free products of monoids (simplifying and vastly improving on
some results of Cremanns and Otto [CO98]) (ii) HNN-like extensions in the sense of Otto and
Pride [PO04] (in particular establishing results which generalise [PO04, Theorem 1] to higher
dimensions), and (iii) HNN extensions of the sort considered by Howie [How63]. For example,
we prove that if A, B are monoids containing a common submonoid C such that A and B are
free right C-sets, then if A and B are of type left-F_n and C is of type left-F_{n−1}, then A ∗_C B is of
type left-Fn . An analogous result is proved for the homological finiteness property FPn , under
the weaker hypothesis that ZA and ZB are flat as ZC-modules. Monoid amalgamated products
are much more complicated than group ones. For instance, the amalgamated free product of
finite monoids can have an undecidable word problem, and the factors do not have to embed
or intersect in the base monoid.
Additionally, we shall give a topological proof that a free inverse monoid on one or more
generators is neither of type left-FP2 nor right-FP2 generalising a classical result of Schein
[Sch75] that such monoids are not finitely presented.
Finally, in [GS] we shall apply our methods to prove results about the homological finiteness
properties of special monoids, that is, monoids defined by finite presentations of the form
⟨A | w_1 = 1, . . . , w_k = 1⟩. We prove that if M is a special monoid whose group of units G is of type FP_n with 1 ≤ n ≤ ∞, then M is of type left- and right-FP_n; and moreover that
cd G ≤ cd M ≤ max{2, cd G}. As a corollary we obtain that all special one-relation monoids are
of type left- and right-FP∞ , answering a special case of a question of Kobayashi [Kob00], and
we recover Kobayashi’s result that if the relator is not a proper power then the cohomological
dimension is at most 2. Specifically we show that if M is a special one-relation monoid then
M is of type left- and right-FP_∞, and if M = ⟨A | w = 1⟩ with w not a proper power, then
cd M ≤ 2; otherwise cd M = ∞.
2. Projective M -CW complexes and M -homotopy theory
2.1. CW complexes and topological finiteness properties of groups. For background
on CW complexes, homotopy theory, and algebraic topology for group theory, we refer the
reader to [Geo08] and [May99]. Throughout B n will denote the closed unit ball in Rn , S n−1
the (n − 1)-sphere which is the boundary ∂B n of the n-ball, and I = [0, 1] the unit interval. We
use e^n to denote an open n-cell, homeomorphic to the open n-ball B̊^n = B^n − ∂B^n; ∂e denotes the boundary of e and ē = cl(e) the closure of e. We identify I^r = I^r × {0} ⊂ I^{r+1}.
A CW complex is a space X which is a union of subspaces Xn such that, inductively, X0 is
a discrete set of points, and Xn is obtained from Xn−1 by attaching balls B n along attaching
maps j : S n−1 → Xn−1 . The resulting maps B n → Xn are called the characteristic maps. So
X_n is the quotient space obtained from X_{n−1} ∪ (J_n × B^n) by identifying (j, x) with j(x) for x ∈ S^{n−1}, where J_n is the discrete set of such attaching maps. Thus X_n is obtained as a pushout
of spaces:
J_n × S^{n−1} ──→ X_{n−1}
      │               │
      ↓               ↓
J_n × B^n   ──→   X_n.
The topology of X should be that of the inductive limit X = lim_{→} X_n.
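Two standard examples of this inductive construction (ours, for illustration): the sphere S^n admits a CW structure with a single 0-cell and a single n-cell, the attaching map S^{n−1} → X_0 = {pt} being constant, so that S^n ≅ B^n/∂B^n; and the torus admits a CW structure with one 0-cell, two 1-cells a, b (so that X_1 = S^1 ∨ S^1) and one 2-cell attached along the loop aba^{−1}b^{−1}.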
A CW complex X is then equal as a set to the disjoint union of (open) cells X = ⊔_α e_α, where the e_α are the images of B̊^n under the characteristic maps. Indeed, an alternative way of
defining a CW complex, which shall be useful for us later on, is as follows. A CW complex
is a Hausdorff space K along with a family {eα } of open cells of various dimensions such that,
letting
\[
K^j = \bigcup \{ e_\alpha : \dim e_\alpha \le j \},
\]
the following conditions are satisfied:
(CW1) $K = \bigcup_\alpha e_\alpha$ and $e_\alpha \cap e_\beta = \emptyset$ for $\alpha \ne \beta$.
(CW2) For each cell $e_\alpha$ there is a map $\varphi_\alpha : B^n \to K$ (called the characteristic map), where $B^n$
is a topological ball of dimension $n = \dim e_\alpha$, such that
(a) $\varphi_\alpha|_{\mathring{B}^n}$ is a homeomorphism onto $e_\alpha$;
(b) $\varphi_\alpha(\partial B^n) \subset K^{n-1}$.
(CW3) Each closure $\bar{e}_\alpha$ is contained in a union of finitely many $e_\beta$.
(CW4) A set $A \subset K$ is closed in $K$ if and only if $A \cap \bar{e}_\alpha$ is closed in $\bar{e}_\alpha$ for all $e_\alpha$.
Note that each characteristic map $\varphi : B^n \to K$ gives rise to a characteristic map $\varphi' : I^n \to K$
by setting $\varphi' = \varphi h$ for some homeomorphism $h : I^n \to B^n$. So we can restrict our attention to
characteristic maps with domain $I^n$ when convenient. If $\varphi : B^n \to K$ is a characteristic map
for a cell $e$, then $\varphi|_{\partial B^n}$ is called an attaching map for $e$. A subcomplex is a subset $L \subset K$ with a
subfamily $\{e_\beta\}$ of cells such that $L = \bigcup e_\beta$ and every $\bar{e}_\beta$ is contained in $L$. If $L$ is a subcomplex
of $K$ we write $L < K$ and call $(K, L)$ a CW pair. If $e$ is a cell of $K$ which does not lie in (and
hence does not meet) $L$ we write $e \in K - L$. An isomorphism between CW complexes is a
homeomorphism that maps cells to cells.
Let M be a monoid. We shall define notions of free and projective M -CW complexes and
then use these to study topological finiteness properties of M . The notion of a free M -CW
complex is a special case of a free C-CW complex for a category C considered by Davis and
Lück in [DL98] and so the cellular approximation theorem, HELP Theorem and Whitehead
Theorem in this case can be deduced from their results. The HELP Theorem and Whitehead
Theorem for for projective M -CW complexes can be extracted with some work from the more
general results of Farjoun [DFZ86] on diagrams of spaces but to keep things elementary and
self-contained we present them here .
2.2. The category of M-sets. A left M-set consists of a set X and a mapping M × X → X,
written (m, x) 7→ mx, called a left action, such that 1x = x and m(nx) = (mn)x for all m, n ∈ M
and x ∈ X. Right M-sets are defined dually; they are the same thing as left M^op-sets. A
bi-M-set is an (M × M^op)-set. There is a category of M-sets and M-equivariant mappings, where
f : X → Y is M-equivariant if f(mx) = mf(x) for all x ∈ X, m ∈ M.
A (left) M-set X is said to be free on a set A if there is a mapping ι : A → X such that for
any mapping f : A → Y with Y an M-set, there is a unique M-equivariant map F : X → Y
such that the diagram
\[
\begin{array}{ccc}
A & \xrightarrow{\ \iota\ } & X \\
& \underset{f}{\searrow} & \big\downarrow{\scriptstyle F} \\
& & Y
\end{array}
\]
commutes. The mapping ι is necessarily injective. If X is an M -set and A ⊆ X, then A is a
free basis for X if and only if each element of X can be uniquely expressed as ma with m ∈ M
and a ∈ A.
The free left M -set on A exists and can be realised as the set M × A with action m(m′ , a) =
(mm′ , a) and ι is the map a 7→ (1, a). Note that if G is a group, then a left G-set X is free if
and only if G acts freely on X, that is, each element of X has trivial stabilizer. In this case,
any set of orbit representatives is a basis.
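For readers who like to experiment, the realisation $M \times A$ is easy to implement. The following is a minimal Python sketch, using a hypothetical two-element monoid $\{1, e\}$ with $ee = e$ (an illustrative assumption, not an example from the paper); it checks that every point of $M \times A$ is uniquely of the form $m \cdot \iota(a)$.

```python
# A toy monoid M = {1, e} with ee = e, given by a multiplication function.
M = ["1", "e"]
def mult(m, n):
    return "1" if (m, n) == ("1", "1") else "e"

# The free left M-set on A, realised as M x A with m . (m', a) = (m m', a).
A = ["a", "b"]
X = [(m, a) for m in M for a in A]
def act(m, point):
    mp, a = point
    return (mult(m, mp), a)

# iota : A -> X is a |-> ("1", a); every point of X is uniquely m . iota(a).
assert all(act(m, ("1", a)) == (m, a) for (m, a) in X)
```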
An M-set P is projective if any M-equivariant surjective mapping f : X → P has an
M-equivariant section s : P → X with f ◦ s = 1_P. Free M-sets are projective, and an M-set is
projective if and only if it is a retract of a free one.
Each projective M-set P is isomorphic to an M-set of the form $\coprod_{a \in A} Me_a$ (disjoint union,
which is the coproduct in the category of M-sets) with $e_a \in E(M)$. Here E(M) denotes the
set of idempotents of the monoid M. In particular, projective G-sets are the same thing as free
G-sets for a group G. (See [Kna72] for more details.)
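As a small sanity check on this description, the sketch below (the same illustrative two-element monoid as before) exhibits $Me$ as a retract of the free M-set $M$ for an idempotent $e$, which is why $Me$ is projective.

```python
# Same toy monoid M = {1, e} with ee = e; here e is an idempotent.
M = ["1", "e"]
def mult(m, n):
    return "1" if (m, n) == ("1", "1") else "e"

e = "e"
assert mult(e, e) == e                    # e is idempotent
Me = sorted({mult(m, e) for m in M})      # the projective M-set Me
r = {m: mult(m, e) for m in M}            # retraction M -> Me, m |-> me

# r is M-equivariant and restricts to the identity on Me, so Me is a
# retract of the free M-set M and hence projective.
assert all(r[mult(m, x)] == mult(m, r[x]) for m in M for x in M)
assert all(r[x] == x for x in Me)
```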
2.3. Equivariant CW complexes. A left M -space is a topological space X with a continuous
left action M × X → X where M has the discrete topology. A right M -space is the same thing
as an M op -space and a bi-M -space is an M × M op -space. Each M -set can be viewed as a
discrete M -space. Note that colimits in the category of M -spaces are formed by taking colimits
in the category of spaces and observing that the result has a natural M -action.
Let us define a (projective) M -cell of dimension n to be an M -space of the form M e × B n
where e ∈ E(M ) and B n has the trivial action; if e = 1, we call it a free M -cell. We will define
a projective M -CW complex in an inductive fashion by imitating the usual definition of a CW
complex but by attaching M -cells M e × B n via M -equivariant maps from M e × S n−1 to the
(n − 1)-skeleton.
Formally, a projective (left) relative M-CW complex is a pair (X, A) of M-spaces such that
$X = \varinjlim X_n$ with $i_n : X_n \to X_{n+1}$ inclusions, $X_{-1} = A$, $X_0 = P_0 \cup A$ with $P_0$ a projective M-set,
and where $X_n$ is obtained as a pushout of M-spaces
\[
\begin{array}{ccc}
P_n \times S^{n-1} & \longrightarrow & X_{n-1} \\
\big\downarrow & & \big\downarrow \\
P_n \times B^n & \longrightarrow & X_n
\end{array}
\tag{2.1}
\]
with Pn a projective M -set and B n having a trivial M -action for n ≥ 1. As usual, Xn is called
the n-skeleton of X and if Xn = X and Pn 6= ∅, then X is said to have dimension n. Notice
that since Pn is isomorphic to a coproduct of M -sets of the form M e with e ∈ E(M ), we are
indeed attaching M -cells at each step. If A = ∅, we call X a projective M -CW complex. Note
that a projective M -CW complex is a CW complex and the M -action is cellular (in fact, takes
n-cells to n-cells). We can define projective right M -CW complexes and projective bi-M -CW
complexes by replacing M with M op and M × M op , respectively. We say that X is a free
M -CW complex if each Pn is a free M -set. If G is a group, a CW complex with a G-action is a
free G-CW complex if and only if G acts freely, cellularly, taking cells to cells, and the setwise
stabilizer of each cell is trivial [Geo08, Appendix of Section 4.1].
More generally we define an M -CW complex in the same way as above except that the
Pi are allowed to be arbitrary M -sets. Most of the theory developed below is only valid in
the projective setting, but there will be a few occasions (e.g. when we discuss M -simplicial
sets) where it will be useful for us to be able to refer to M -CW complexes in general. For
future reference we should note here that, just as for the theory of G-CW complexes, there
is an alternative way of defining an M-CW complex in terms of monoids acting on CW complexes.
This follows the same lines as that of groups; see for example [Geo08, Section 3.2 and page 110]
or [May96]. Let $Y$ be a left M-space, where M is a monoid and $Y = \bigcup_\alpha e_\alpha$ is a CW complex
with characteristic maps $\varphi_\alpha : B^n \to Y$. We say that $Y$ is a rigid left M-CW complex if it is:
• Cellular and dimension preserving: For every eα and m ∈ M there exists an eβ such that
meα = eβ and dim(eβ ) = dim(eα ); and
• Rigid on cells: If meα = eβ then mϕα (k′ ) = ϕβ (k′ ) for all k′ ∈ B n − ∂B n .
If the action of M on the set of n-cells is free (respectively projective) then we call Y a free
(respectively projective) rigid left M -CW complex. The inductive process described above for
building (projective, free) left M -CW complexes is easily seen to give rise to rigid (projective,
free) left M -CW complexes, in the above sense. Conversely every rigid (projective, free) left
M -CW complex arises in this way. In other words, the two definitions are equivalent. For an
explanation of this in the case of G-CW complexes see, for example, [Geo08, page 110]. The
proof for monoids is analogous and is omitted. Similar comments apply for rigid right M -CW
complexes and rigid bi-M -CW complexes.
We say that a projective M -CW complex X is of M -finite type if Pn is a finitely generated
projective M -set for each n and we say that X is M -finite if it is finite dimensional and of
M -finite type (i.e., X is constructed from finitely many M -cells).
Notice that if $m \in M$, then $mX$ is a subcomplex of $X$ with $n$-skeleton $mX_n$.
Indeed, $mX_0 = mP_0$ is a discrete set of points and $mX_n$ is obtained from $mX_{n-1}$ via the
pushout diagram
\[
\begin{array}{ccc}
mP_n \times S^{n-1} & \longrightarrow & mX_{n-1} \\
\big\downarrow & & \big\downarrow \\
mP_n \times B^n & \longrightarrow & mX_n.
\end{array}
\]
A projective M-CW subcomplex of X is an M-invariant subcomplex A ⊆ X which is a union
of M-cells of X. In other words, each $P_n$ (as above) can be written $P_n = P_n' \amalg P_n''$ with the
images of the $P_n' \times B^n$ giving the cells of A. Notice that if A is a projective M-CW subcomplex
of X, then (X, A) can be viewed as a projective relative M-CW complex in a natural way. Also
note that a cell of X belongs to A if and only if each of its translates does.
A projective {1}-CW complex is the same thing as a CW complex, and {1}-finite type ({1}-finite) is the same thing as finite type (finite).
If e ∈ E(M ) is an idempotent and m ∈ M e, then left multiplication by m induces an
isomorphism Hn ({e} × B n , {e} × S n−1 ) → Hn ({m} × B n , {m} × S n−1 ) (since it induces a
homeomorphism {e} × B n /{e} × S n−1 → {m} × B n /{m} × S n−1 ) and so if we choose an
orientation for the n-cell {e} × B n , then we can give {m} × B n the orientation induced by this
isomorphism. If m ∈ M and m′ ∈ M e, then the isomorphism
Hn ({e} × B n , {e} × S n−1 ) → Hn ({mm′ } × B n , {mm′ } × S n−1 )
induced by mm′ is the composition of the isomorphism
Hn ({e} × B n , {e} × S n−1 ) → Hn ({m′ } × B n , {m′ } × S n−1 )
induced by m′ and the isomorphism
Hn ({m′ } × B n , {m′ } × S n−1 ) → Hn ({mm′ } × B n , {mm′ } × S n−1 )
induced by $m$, and so the action of $m$ preserves orientation. We conclude that the degree $n$
component of the cellular chain complex for $X$ is isomorphic to $\mathbb{Z}P_n$ as a $\mathbb{Z}M$-module and
hence is projective (since $\mathbb{Z}\big[\coprod_{a \in A} Me_a\big] \cong \bigoplus_{a \in A} \mathbb{Z}Me_a$ and $\mathbb{Z}M \cong \mathbb{Z}Me \oplus \mathbb{Z}M(1-e)$ for any
idempotent $e \in E(M)$).
If $X$ is a projective M-CW complex then so is $Y = X \times I$, where $I$ is given the trivial action.
If we retain the above notation, then $Y_0 = X_0 \times \partial I \cong X_0 \amalg X_0$. The $n$-cells for $n \ge 1$ are
obtained from attaching $P_n \times B^n \times \partial I \cong (P_n \amalg P_n) \times B^n$ and $P_{n-1} \times B^{n-1} \times I$. Notice that
$X \times \partial I$ is a projective M-CW subcomplex of $X \times I$.
If X, Y are M -spaces, then an M -homotopy between M -equivariant continuous maps f, g : X →
Y is an M -equivariant mapping H : X × I → Y with H(x, 0) = f (x) and H(x, 1) = g(x) for
x ∈ X where I is viewed as having the trivial M -action. We write f ≃M g in this case. We say
that X, Y are M -homotopy equivalent, written X ≃M Y , if there are M -equivariant continuous
mappings (called M -homotopy equivalences) f : X → Y and g : Y → X with gf ≃M 1X and
f g ≃M 1Y . We write [X, Y ]M for the set of M -homotopy classes of M -equivariant continuous
mappings X → Y .
Lemma 2.1. Let $X, Y$ be projective M-CW complexes and $A$ a projective M-CW subcomplex
of $X$. Let $f : A \to Y$ be a continuous M-equivariant cellular map. Then the pushout $X \amalg_A Y$
is a projective M-CW complex.
`
Proof. It is a standard result that X A Y is a CW complex whose n-cells are the`n-cells of Y
together with the n-cells of X not belonging to A. In more detail, let q : X → X A Y be the
canonical mapping and view Y as a subspace of the adjunction space. Then the attaching map
of a cell coming from Y is the original attaching map, whereas the attaching map of a cell of
X not belonging to A is the composition of q with its original attaching mapping. `
It follows
from the definition of a projective M -CW subcomplex and the construction that X A Y is a
projective M -CW complex. Here it is important that a translate by M of a cell from X \ A is
a cell of X \ A.
A free M -CW subcomplex of a free M -CW complex X is an M -invariant subcomplex A ⊆ X
which is a union of M -cells of X.
The proof of Lemma 2.1 yields the following.
Lemma 2.2. Let $X, Y$ be free M-CW complexes and $A$ a free M-CW subcomplex of $X$. Let
$f : A \to Y$ be a continuous M-equivariant cellular map. Then the pushout $X \amalg_A Y$ is a free
M-CW complex.
A continuous mapping f : X → Y of spaces is an n-equivalence if
f∗ : πq (X, x) → πq (Y, f (x))
is a bijection for $0 \le q < n$ and a surjection for $q = n$, where $\pi_0(Z, z) = \pi_0(Z)$ (viewed as a
pointed set with base point the component of $z$). It is a weak equivalence if it is an $n$-equivalence
for all $n$, i.e., $f_*$ is a bijection for all $q \ge 0$. We will consider a weak equivalence as an
$\infty$-equivalence. We shall see later that an M-equivariant weak equivalence of projective M-CW
complexes is an M-homotopy equivalence.
Let Top(X, Y ) denote the set of continuous maps X → Y for spaces X, Y and TopM (X, Y )
denote the set of continuous M -equivariant maps X → Y between M -spaces X, Y .
Proposition 2.3. Let $X$ be a space with a trivial M-action, $e \in E(M)$ and $Y$ an M-space.
Then there is a bijection between $\mathrm{Top}_M(Me \times X, Y)$ and $\mathrm{Top}(X, eY)$. The bijection sends
$f : Me \times X \to Y$ to $\bar{f} : X \to eY$ given by $\bar{f}(x) = f(e, x)$, and $g : X \to eY$ to $\hat{g} : Me \times X \to Y$
given by $\hat{g}(m, x) = mg(x)$.
Proof. If $x \in X$, then $\bar{f}(x) = f(e, x) = f(e(e, x)) = ef(e, x) \in eY$. Clearly, $\bar{f}$ is continuous. As
$\hat{g}$ is the composition of $1_{Me} \times g$ with the action map, it follows that $\hat{g}$ is continuous. We show
that the two constructions are mutually inverse. First we check that
\[
\hat{\bar{f}}(m, x) = m\bar{f}(x) = mf(e, x) = f(m(e, x)) = f(me, x) = f(m, x)
\]
for $m \in Me$ and $x \in X$. Next we compute that
\[
\bar{\hat{g}}(x) = \hat{g}(e, x) = eg(x) = g(x)
\]
since $g(x) \in eY$. This completes the proof.
Proposition 2.3 is the key tool to transform statements about projective M-CW complexes
into statements about CW complexes. We shall also need the following lemma relating equivariant n-equivalences and n-equivalences.
Lemma 2.4. Let Y, Z be M -spaces and let k : Y → Z be an M -equivariant n-equivalence with
0 ≤ n ≤ ∞. Let e ∈ E(M ) and k′ = k|eY : eY → eZ. Then k′ is an n-equivalence.
Proof. First note that k(ey) = ek(y) and so k|eY does indeed have image contained in eZ. Let
y ∈ eY and q ≥ 0. Let α : eY → Y and β : eZ → Z be the inclusions. Then note that the
action of $e$ gives retractions $Y \to eY$ and $Z \to eZ$. Hence we have a commutative diagram
\[
\begin{array}{ccc}
\pi_q(Y, y) & \xrightarrow{\ k_*\ } & \pi_q(Z, k(y)) \\
{\scriptstyle e_*}\big\downarrow\!\big\uparrow{\scriptstyle \alpha_*} & & {\scriptstyle e_*}\big\downarrow\!\big\uparrow{\scriptstyle \beta_*} \\
\pi_q(eY, y) & \xrightarrow{\ k'_*\ } & \pi_q(eZ, k(y))
\end{array}
\]
with e∗ α∗ and e∗ β∗ identities. Therefore, if k∗ is surjective, then k∗′ is surjective and if k∗ is
injective, then k∗′ is injective. The lemma follows.
2.4. Whitehead’s theorem. With Lemma 2.4 in hand, we can prove an M -equivariant version
of HELP (homotopy extension and lifting property) [May99, Page 75], which underlies most of
the usual homotopy theoretic results about CW complexes. If X is a space, then ij : X → X ×I,
for j = 0, 1, is defined by ij (x) = (x, j).
Theorem 2.5 (HELP). Let (X, A) be a projective relative M -CW complex of dimension at
most $n \in \mathbb{N} \cup \{\infty\}$ and $k : Y \to Z$ an M-equivariant $n$-equivalence of M-spaces. Then given
M-equivariant continuous mappings $f : X \to Z$, $g : A \to Y$ and $h : A \times I \to Z$ such that $kg = hi_1$
and $fi = hi_0$ (where $i : A \to X$ is the inclusion), there exist M-equivariant continuous mappings
$\tilde{g} : X \to Y$ and $\tilde{h} : X \times I \to Z$ making the usual HELP diagram (cf. [May99, Page 75]) commute; that is,
\[
\tilde{g}i = g, \qquad \tilde{h} \circ (i \times 1_I) = h, \qquad \tilde{h}i_0 = f, \qquad \tilde{h}i_1 = k\tilde{g}.
\]
Proof. Proceeding by induction on the skeleta and adjoining an M -cell at a time, it suffices to
handle the case that
(X, A) = (M e × B q , M e × S q−1 )
with $0 \le q \le n$. By Proposition 2.3 it suffices to find continuous mappings $\tilde{g}$ and $\tilde{h}$ making the
corresponding HELP diagram, with $(B^q, S^{q-1})$ in place of $(X, A)$ and $(eY, eZ)$ in place of $(Y, Z)$,
commute, where we have retained the notation of Proposition 2.3. The mapping $k' = k|_{eY} : eY \to eZ$ is
an $n$-equivalence by Lemma 2.4, and so we can apply the usual HELP theorem [May99, Page 75]
for CW complexes to deduce the existence of $\tilde{g}$ and $\tilde{h}$. This completes the proof.
As a consequence we may deduce the M -equivariant Whitehead theorems.
Theorem 2.6 (Whitehead). If X is a projective M -CW complex and k : Y → Z is an M equivariant n-equivalence of M -spaces, then the induced mapping k∗ : [X, Y ]M → [X, Z]M is a
bijection if dim X < n or n = ∞ and a surjection if dim X = n < ∞.
Proof. For surjectivity we apply Theorem 2.5 to the pair $(X, \emptyset)$. If $f : X \to Z$, then $\tilde{g} : X \to Y$
satisfies $k\tilde{g} \simeq_M f$. For injectivity, we apply Theorem 2.5 to the pair $(X \times I, X \times \partial I)$ and note
that $X \times I$ has dimension one larger than $X$. Suppose that $p, q : X \to Y$ are such that $kp \simeq_M kq$
via a homotopy $f : X \times I \to Z$. Put $g = p \amalg q : X \times \partial I \to Y$ and define $h : X \times \partial I \times I \to Z$ by
$h(x, s, t) = f(x, s)$. Then $\tilde{g} : X \times I \to Y$ is a homotopy between $p$ and $q$.
Corollary 2.7 (Whitehead). If $k : Y \to Z$ is an M-equivariant weak equivalence ($n$-equivalence)
between projective M-CW complexes (of dimension less than $n$), then $k$ is an M-homotopy
equivalence.
Proof. Under either hypothesis, $k_* : [Z, Y]_M \to [Z, Z]_M$ is a bijection by Theorem 2.6, and so
$kg \simeq_M 1_Z$ for some M-equivariant $g : Z \to Y$. Then $kgk \simeq_M k$ and hence, since $k_* : [Y, Y]_M \to
[Y, Z]_M$ is a bijection by Theorem 2.6, we have that $gk \simeq_M 1_Y$. This completes the proof.
2.5. Cellular approximation. Our next goal is to show that every M-equivariant continuous
mapping of projective M-CW complexes is M-homotopic to a cellular one. We
shall need the well-known fact that if $Y$ is a CW complex, then the inclusion $Y_n \hookrightarrow Y$ is an
$n$-equivalence for all $n \ge 0$ [May99, Page 76].
Theorem 2.8 (Cellular approximation). Let $f : X \to Y$ be a continuous M-equivariant mapping with $X$ a projective M-CW complex and $Y$ a CW complex with a continuous action of $M$
by cellular mappings. Then $f$ is M-homotopic to a continuous M-equivariant cellular mapping.
Any two cellular approximations are M-homotopic via a cellular M-homotopy.
Proof. We prove only the first statement. The second is proved using a relative version of the
first, whose statement and proof we omit. Note that $Y_n$ is M-invariant for all $n \ge 0$ because
$M$ acts by cellular mappings. We construct by induction M-equivariant continuous mappings
$f_n : X_n \to Y_n$ such that $f|_{X_n} \simeq_M f_n$ via an M-homotopy $h_n$, and $f_n, h_n$ extend $f_{n-1}, h_{n-1}$,
respectively (where we take $X_{-1} = \emptyset$). We have, without loss of generality, $X_0 = \coprod_{a \in A} Me_a$.
Since $e_aY$ is a subcomplex of $Y$ with 0-skeleton $e_aY_0$ and $f(e_a) \in e_aY$, we can find a path $p_a$
in $e_aY$ from $f(e_a)$ to an element $y_a \in e_aY_0$. Define $f_0(me_a) = my_a$ and $h_0(me_a, t) = mp_a(t)$,
cf. Proposition 2.3.
Assume now that $f_n, h_n$ have been defined. Since the inclusion $Y_{n+1} \to Y$ is an M-equivariant
$(n+1)$-equivalence, Theorem 2.5 (applied to the pair $(X_{n+1}, X_n)$) gives an M-equivariant mapping
$f_{n+1} : X_{n+1} \to Y_{n+1}$ extending $f_n$ and an M-homotopy $h_{n+1} : X_{n+1} \times I \to Y$ extending $h_n$, from
$f|_{X_{n+1}}$ to $f_{n+1}$, thereby establishing the inductive step. We obtain our desired cellular mapping
and M-homotopy by taking the colimit of the $f_n$ and $h_n$.
3. Base change
If A is a right M -set and B is a left M -set, then A ⊗M B is the quotient of A × B by the least
equivalence relation ∼ such that (am, b) ∼ (a, mb) for all a ∈ A, b ∈ B and m ∈ M . We write
a ⊗ b for the class of (a, b) and note that the mapping (a, b) 7→ a ⊗ b is universal for mappings
f : A × B → X with X a set and f (am, b) = f (a, mb). If M happens to be a group, then M
acts on A × B via m(a, b) = (am−1 , mb) and A ⊗M B is just the set of orbits of this action.
The tensor product A ⊗M () preserves all colimits because it is a left adjoint to the functor
$X \mapsto X^A$.
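Since $\sim$ is generated by $(am, b) \sim (a, mb)$, for finite M-sets the tensor product can be computed by a union-find pass over $A \times B$; this is our own computational aside, not a construction from the paper, and it again uses the illustrative two-element monoid.

```python
# Compute A (x)_M B for finite M-sets by identifying (am, b) with (a, mb).
M = ["1", "e"]
def mult(m, n):
    return "1" if (m, n) == ("1", "1") else "e"

# A: M as a right M-set; B: M as a left M-set (actions by multiplication).
A = B = M
pairs = [(a, b) for a in A for b in B]
parent = {p: p for p in pairs}
def find(p):
    while parent[p] != p:
        p = parent[p]
    return p
def union(p, q):
    parent[find(p)] = find(q)

for a in A:
    for b in B:
        for m in M:
            union((mult(a, m), b), (a, mult(m, b)))

classes = {find(p) for p in pairs}
print(len(classes))   # 2: here M (x)_M M has one class per element of M
```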
If B is a left M-set, there is a natural preorder relation ≤ on B where x ≤ y if and only if
Mx ⊆ My. Let ≈ denote the symmetric-transitive closure of ≤. That is, x ≈ y if there is a
sequence $x = z_1, z_2, \ldots, z_n = y$ of elements of B such that, for each $1 \le i \le n-1$, either $z_i \le z_{i+1}$ or
$z_i \ge z_{i+1}$. This is clearly an equivalence relation, and we call the ≈-classes of B the weak orbits
of the M-set. This corresponds to the notion of the weakly connected components of a directed
graph. If B is a right M-set then we use B/M to denote the set of weak orbits of the M-set.
Dually, if B is a left M-set we use M\B to denote the set of weak orbits. Note that if 1 denotes
the trivial right M-set and B is a left M-set, then we have M\B = 1 ⊗M B.
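Weak orbits are exactly the connected components of the graph on B with an undirected edge between x and mx, so for a finite M-set they can be computed by a straightforward search; a minimal sketch under the same toy-monoid assumption.

```python
# Weak orbits M\B of a finite left M-set B: connected components of the
# graph with an (undirected) edge between x and mx for every m in M.
M = ["1", "e"]
def mult(m, n):
    return "1" if (m, n) == ("1", "1") else "e"

B = M                                     # B = M acting on itself
adj = {x: {mult(m, x) for m in M} for x in B}

def weak_orbits(B, adj):
    seen, orbits = set(), []
    for x in B:
        if x in seen:
            continue
        stack, comp = [x], set()
        while stack:
            y = stack.pop()
            if y in comp:
                continue
            comp.add(y)
            # follow edges in both directions: y -> my and z -> y = mz
            stack.extend(adj[y])
            stack.extend(z for z in B if y in adj[z])
        seen |= comp
        orbits.append(comp)
    return orbits

print(weak_orbits(B, adj))   # one weak orbit, since e = e . 1 links "1" and "e"
```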
Let M, N be monoids. An M -N -biset is an M × N op -set. If A is an M -N -biset and B is a
left N -set, then the equivalence relation defining A ⊗N B is left M -invariant and so A ⊗N B is
a left M -set with action m(a ⊗ b) = ma ⊗ b.
Proposition 3.1. Let A be an M -N -biset that is (finitely generated) projective as an M -set
and let B be a (finitely generated) projective N -set. Then A ⊗N B is a (finitely generated)
projective M -set.
Proof. As $B$ is a (finite) coproduct of N-sets $Ne$ with $e \in E(N)$, it suffices to handle the case
$B = Ne$. Then $A \otimes_N Ne \cong Ae$ via $a \otimes n \mapsto an$, with inverse $a \mapsto a \otimes e$ for $a \in Ae$. Now
define $r : A \to Ae$ by $r(a) = ae$. Then $r$ is an M-equivariant retraction. So $Ae$ is a retract of a
(finitely generated) projective and hence is a (finitely generated) projective.
If X is a left M -space and A is a right M -set, then A ⊗M X is a topological space with the
quotient topology. Again the functor A ⊗M () preserves all colimits. In fact, A ⊗M X is the
coequalizer in the diagram
\[
\coprod_{A \times M} X \rightrightarrows \coprod_{A} X \to A \otimes_M X
\]
where the top map sends $x$ in the $(a, m)$-component to $mx$ in the $a$-component and the bottom
map sends $x$ in the $(a, m)$-component to $x$ in the $am$-component.
Corollary 3.2. If A is an M -N -biset that is projective as an M -set and X is a projective
N -CW complex, then A ⊗N X is a projective M -CW complex. If A is in addition finitely
generated as an M -set and X is of N -finite type, then A ⊗N X is of M -finite type. Moreover,
dim A ⊗N X = dim X.
Proof. Since A ⊗N () preserves colimits, $A \otimes_N X = \varinjlim A \otimes_N X_n$. Moreover, putting $X_{-1} = \emptyset$,
we have that if $X_n$ is obtained as per the pushout square (2.1), then $A \otimes_N X_n$ is obtained from
the pushout square
\[
\begin{array}{ccc}
(A \otimes_N P_n) \times S^{n-1} & \longrightarrow & A \otimes_N X_{n-1} \\
\big\downarrow & & \big\downarrow \\
(A \otimes_N P_n) \times B^n & \longrightarrow & A \otimes_N X_n
\end{array}
\]
by preservation of colimits and the observation that if $C$ is a trivial left N-set and $B$ is a left
N-set, then $A \otimes_N (B \times C) \cong (A \otimes_N B) \times C$ via $a \otimes (b, c) \mapsto (a \otimes b, c)$. The result now follows
from Proposition 3.1.
By considering the special case where M is trivial and A is a singleton, and observing that
a projective M -set P is finitely generated if and only if M \P is finite, we obtain the following
corollary.
Corollary 3.3. Let X be a projective M -CW complex. Then M \X is a CW complex. Moreover,
X is M -finite (of M -finite type) if and only if M \X is finite (of finite type).
The following observation will be used many times.
Proposition 3.4. Let $X$ be a locally path connected N-space and $A$ an M-N-biset. Then $\pi_0(X)$
is an N-set and $\pi_0(A \otimes_N X) \cong A \otimes_N \pi_0(X)$.
Proof. Note that the functor $X \mapsto \pi_0(X)$ is left adjoint to the inclusion of the category of N-sets
into the category of locally path connected N-spaces and hence it preserves all colimits. The
result now follows from the description of tensor products as coequalizers of coproducts.
The advantage of working with M -homotopies is that they behave well under base change.
Proposition 3.5. Let $A$ be an M-N-biset and let $X, X'$ be N-homotopy equivalent N-spaces.
Then $A \otimes_N X$ is M-homotopy equivalent to $A \otimes_N X'$. In particular, if $Y, Z$ are M-spaces and
$Y \simeq_M Z$, then $M\backslash Y \simeq M\backslash Z$.
Proof. It suffices to prove that if Y, Z are N -spaces and f, g : Y → Z are N -homotopic N equivariant maps, then
A ⊗N f, A ⊗N g : A ⊗N Y → A ⊗N Z
are M -homotopic. This follows immediately from the identification of A ⊗N (Y × I) with
(A ⊗N Y ) × I. For if H : Y × I → Z is an N -homotopy between f and g, then A ⊗N H provides
the M -homotopy between A ⊗N f and A ⊗N g.
The following base change lemma, and its dual, is convenient for dealing with bisets.
Lemma 3.6. Let A be an M × M op -set and consider the right M × M op -set M with the
right action m(mL , mR ) = mmL . Then A/M is a left M -set and there is an M -equivariant
isomorphism A/M → M ⊗M ×M op A.
Proof. Clearly, A/M = A ⊗M 1 is a left M -set. Write [a] for the class of a in A/M . Define
f : A/M → M ⊗M ×M op A by f ([a]) = 1 ⊗ a. This is well defined and M -equivariant because if
a ∈ A and m ∈ M , then 1 ⊗ am = 1 ⊗ (1, m)a = 1(1, m) ⊗ a = 1 ⊗ a and 1 ⊗ ma = 1 ⊗ (m, 1)a =
1(m, 1) ⊗ a = m ⊗ a. Define G : M × A → A/M by G(m, a) = [ma]. If m, mL , mR ∈ M , then
G(m(mL , mR ), a) = G(mmL , a) = [mmL a] and G(m, (mL , mR )a) = [mmL amR ] = [mmL a].
Therefore, G induces a well defined mapping g : M ⊗M ×M op A → A/M . Then we check that
gf ([a]) = g(1⊗ a) = [a] and f g(m ⊗ a) = f ([ma]) = 1⊗ ma = 1⊗ (m, 1)a = 1(m, 1)⊗ a = m ⊗ a.
Thus f and g are inverse isomorphisms.
The following basic result will be used later.
Proposition 3.7. Let $G$ be a group. Then $G \times G$ is a $(G \times G^{op})$-$G$-biset that is free as a right
G-set on $|G|$ generators under the right action $(g, g')h = (gh, h^{-1}g')$.
Proof. It is easy to check that the right action of $G$ is indeed an action commuting with the left
action of $G \times G^{op}$. Moreover, the right action of $G$ is free, and two elements $(g_1, g_2)$ and $(g_1', g_2')$
are in the same right G-orbit if and only if $g_1g_2 = g_1'g_2'$. This completes the proof.
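Both the freeness and the orbit description in Proposition 3.7 are easy to test computationally; the sketch below does so for the illustrative choice $G = \mathbb{Z}/3$, written additively (not an example from the paper).

```python
from itertools import product

# G x G with the right action (g, g') . h = (g + h, g' - h), for G = Z/3.
n = 3
G = list(range(n))
def r_act(pt, h):
    g, gp = pt
    return ((g + h) % n, (gp - h) % n)

pts = list(product(G, repeat=2))

# The action is free: no nonidentity h fixes a point ...
assert all(r_act(p, h) != p for p in pts for h in G if h % n != 0)
# ... and p, q lie in the same right G-orbit iff their "products" agree,
# which additively means p[0] + p[1] = q[0] + q[1].
same_orbit = lambda p, q: q in {r_act(p, h) for h in G}
assert all(same_orbit(p, q) == ((p[0] + p[1]) % n == (q[0] + q[1]) % n)
           for p in pts for q in pts)
```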
Corollary 3.8. Let M be a monoid and X a projective M × M op -CW complex.
(1) X/M is a projective M -CW complex and M \X is a projective M op -CW complex.
(2) If X is of M × M op -finite type, then X/M is of M -finite type and dually for M \X.
(3) dim X/M = dim X = dim M \X.
(4) If X, Y are M × M op -homotopic projective M × M op -CW complexes, then X/M and
Y /M (respectively, M \X and M \Y ) are M -homotopic projective M -CW complexes
(respectively, M op -homotopic projective M op -CW complexes).
Proof. The first three items follow from Corollary 3.2 and Lemma 3.6 (and their duals). The
final statement follows from Lemma 3.6 and Proposition 3.5.
We shall frequently use without comment that if A is an M -N -biset and B is an N -set, then
$\mathbb{Z}[A \otimes_N B] \cong \mathbb{Z}A \otimes_{\mathbb{Z}N} \mathbb{Z}B$ as left $\mathbb{Z}M$-modules. Indeed, there are natural isomorphisms of
abelian groups
\begin{align*}
\operatorname{Hom}_{\mathbb{Z}M}(\mathbb{Z}A \otimes_{\mathbb{Z}N} \mathbb{Z}B, V) &\cong \operatorname{Hom}_{\mathbb{Z}N}(\mathbb{Z}B, \operatorname{Hom}_{\mathbb{Z}M}(\mathbb{Z}A, V)) \\
&\cong \operatorname{Hom}_N(B, \operatorname{Hom}_M(A, V)) \\
&\cong \operatorname{Hom}_M(A \otimes_N B, V) \\
&\cong \operatorname{Hom}_{\mathbb{Z}M}(\mathbb{Z}[A \otimes_N B], V)
\end{align*}
for a $\mathbb{Z}M$-module $V$, and so we can apply Yoneda's Lemma.
4. Simplicial sets
An important source of examples of rigid M -CW complexes will come from simplicial sets
which admit suitable monoid actions. In this section we introduce the notion of a rigid M simplicial set, and we show how these give rise to rigid M -CW complexes via the geometric
realisation functor. For further background on simplicial sets we refer the reader to [Wei94,
Chapter 8] or [May67].
Let $\Delta$ denote the simplicial category. It has as objects all the finite linearly ordered sets $[n] =
\{0, 1, \ldots, n\}$ $(n \ge 0)$ and as morphisms the (non-strictly) order-preserving maps. A
simplicial set $X$ is then a functor $X : \Delta^{op} \to \mathbf{Set}$ from $\Delta^{op}$ to the category of sets. For each $n$,
the image of $[n]$ under $X$ is denoted $X_n$ and is called the set of $n$-simplices of the simplicial
set. Any simplicial set $X$ may be defined combinatorially as a collection of sets $X_n$ $(n \ge 0)$ and
functions $d_i : X_n \to X_{n-1}$ and $s_i : X_n \to X_{n+1}$ $(0 \le i \le n)$ satisfying
\begin{align*}
d_i d_j &= d_{j-1} d_i \quad (i < j), \\
s_i s_j &= s_{j+1} s_i \quad (i \le j), \\
d_i s_j &= \begin{cases} s_{j-1} d_i & i < j \\ 1 & i = j,\ j+1 \\ s_j d_{i-1} & i > j + 1. \end{cases}
\end{align*}
Here the di are called the face maps and the si are called the degeneracy maps. We say that
an n-simplex x ∈ Xn is degenerate if it is the image of some degeneracy map.
A simplicial morphism $f : X \to Y$ between simplicial sets is a natural transformation between
the corresponding functors, i.e., a sequence of functions $f_n : X_n \to Y_n$ for each $n \ge 0$ such that
$f_{n-1} d_i = d_i f_n$ and $f_{n+1} s_j = s_j f_n$. There is a functor $|\cdot| : \mathbf{SSet} \to \mathbf{CG}$, called the geometric
realization functor, from the category $\mathbf{SSet}$ of simplicial sets to the category $\mathbf{CG}$ of compactly
generated Hausdorff topological spaces. Let $K = \bigcup_{i \ge 0} K_i$ be a simplicial set with degeneracy
and face maps $d_i, s_i$. The geometric realisation $|K|$ of $K$ is the CW complex constructed from
$K$ in the following way.
Let
\[
\Delta^n = \Big\{(t_0, \ldots, t_n) : 0 \le t_i \le 1,\ \textstyle\sum t_i = 1\Big\} \subseteq \mathbb{R}^{n+1}
\]
denote the standard topological $n$-simplex. Define
\[
\delta_i : \Delta^{n-1} \to \Delta^n, \qquad (t_0, \ldots, t_{n-1}) \mapsto (t_0, \ldots, t_{i-1}, 0, t_i, \ldots, t_{n-1}),
\]
and
\[
\sigma_i : \Delta^{n+1} \to \Delta^n, \qquad (t_0, \ldots, t_{n+1}) \mapsto (t_0, \ldots, t_i + t_{i+1}, \ldots, t_{n+1}).
\]
Then
\[
|K| = \coprod_{n \ge 0} K_n \times \Delta^n \Big/ {\sim}
\]
where ∼ is the equivalence relation generated by
\[
(x, \delta_i(u)) \sim (d_i(x), u), \qquad (x, \sigma_i(v)) \sim (s_i(x), v).
\]
We give
\[
\coprod_{0 \le n \le q} K_n \times \Delta^n \Big/ {\sim}
\]
the quotient topology for all q and take the inductive limit of the resulting topologies. The
geometric realisation |K| is a CW complex whose cells are in natural bijective correspondence
with the non-degenerate simplices of $K$. To see this, write
\[
K = \coprod_{n \ge 0} K_n \times \Delta^n.
\]
Then a point (k, x) ∈ K is called non-degenerate if k is a non-degenerate simplex and x is an
interior point. The following is [Mil57, Lemma 3].
Lemma 4.1. Each point (k, x) ∈ K is ∼-equivalent to a unique non-degenerate point.
TOPOLOGICAL FINITENESS PROPERTIES
17
In each case, the point in question is determined by the maps δi , di , σi and si (see [Mil57] for
details). This lemma is the key to proving that $|K|$ is a CW complex: we take as n-cells
of |K| the images of the non-degenerate n-simplices of K, and the above lemma shows that the
interiors of these cells partition |K|. The remaining properties of a CW complex are then easily
verified. The following lemma shows that geometric realisation defines a functor from SSet to
CG.
The next result is [Mil57, Lemma 4].
Lemma 4.2. If $K = \bigcup K_i$ and $L = \bigcup L_i$ are simplicial sets and $f : K \to L$ is a simplicial
morphism, then $\bar{f}$ given by
\[
\bar{f}_n : K_n \times \Delta^n \to L_n \times \Delta^n, \qquad (x, u) \mapsto (f(x), u)
\]
is continuous, and induces a well-defined continuous map
\[
|f| : |K| \to |L|, \qquad (x, u)/{\sim} \mapsto (f(x), u)/{\sim}
\]
of the corresponding geometric realizations, which is cellular.
A left M -simplicial set is a simplicial set equipped with a left action of M by simplicial
morphisms. In order to construct rigid M -CW complexes we shall need the following special
kind of M -simplicial set.
Definition 4.3 (Rigid M-simplicial set). Let $K = \bigcup_{i \ge 0} K_i$ be a simplicial set with degeneracy
and face maps $d_i, s_i$, and let $M$ be a monoid. We call $K$ a rigid left M-simplicial set if $K$
comes equipped with an action $M \times K \to K$ such that
• $M$ acts by simplicial morphisms, i.e., $M$ maps $n$-simplices to $n$-simplices and
commutes with $d_i$ and $s_i$;
• $M$ preserves non-degeneracy, i.e., for every non-degenerate $n$-simplex $x$ and every $m \in
M$ the $n$-simplex $mx$ is also non-degenerate.
A rigid right M-simplicial set is defined dually, and a rigid bi-M-simplicial set is simultaneously both a left and a right M-simplicial set, with commuting actions. A bi-M-simplicial set
is the same thing as a left $(M \times M^{op})$-simplicial set. Note that it follows from the condition that
$M$ acts by simplicial morphisms that, under the action of $M$, degenerate $n$-simplices are sent
to degenerate $n$-simplices.
category of left M -simplicial sets (with M -equivariant simplicial morphisms) to the category of
left M -spaces. In particular, this functor associates with each rigid left M -simplicial set a rigid
M -CW complex. Corresponding statements hold for both rigid right and bi-M -simplicial sets.
Lemma 4.4. For any rigid left M-simplicial set $K = \bigcup_{i \ge 0} K_i$, the geometric realisation $|K|$ is
a rigid left M-CW complex with respect to the induced action given by
\[
m \cdot [(x, u)/{\sim}] = (m \cdot x, u)/{\sim}.
\]
Proof. It follows from Lemma 4.2 that the action is continuous. By the definition of a rigid left
M-simplicial set, the M-action maps non-degenerate simplices to non-degenerate simplices, and
the cells of $|K|$ are in natural bijective correspondence with the non-degenerate simplices of $K$.
It follows that the action of $M$ on $|K|$ sends $n$-cells to $n$-cells. The action is rigid by definition.
Thus $|K|$ is a rigid M-CW complex.
There are obvious right- and bi-M -simplicial set analogues of Lemma 4.4 obtained by replacing M by M op and M × M op , respectively.
5. Standard constructions of projective M -CW complexes
In this section we shall give a fundamental method that, for any monoid M , allows us to
construct in a canonical way free left-, right- and bi-M -CW complexes. These constructions
will be important when we go on to discuss M -equivariant classifying spaces later on in the
article. Each of the constructions in this section is a special case of the general notion of the
nerve of a category.
To any (small) category $C$ we can associate a simplicial set $N(C)$ called the nerve of the
category. For each $k \ge 0$ we let $N(C)_k$ denote the set of all sequences $(f_1, \ldots, f_k)$ of composable
arrows
\[
A_0 \xrightarrow{f_1} A_1 \xrightarrow{f_2} \cdots \xrightarrow{f_k} A_k \tag{5.1}
\]
where we allow objects to repeat in these sequences. The objects of $C$ are the 0-simplices. The
face map $d_i : N(C)_k \to N(C)_{k-1}$ omits $A_i$, so it carries the above sequence to
\[
A_0 \xrightarrow{f_1} A_1 \xrightarrow{f_2} \cdots \xrightarrow{f_{i-1}} A_{i-1} \xrightarrow{f_{i+1} \circ f_i} A_{i+1} \xrightarrow{f_{i+2}} \cdots \xrightarrow{f_k} A_k,
\]
while the degeneracy map $s_i : N(C)_k \to N(C)_{k+1}$ carries it to
\[
A_0 \xrightarrow{f_1} A_1 \xrightarrow{f_2} \cdots \xrightarrow{f_i} A_i \xrightarrow{\mathrm{id}_{A_i}} A_i \xrightarrow{f_{i+1}} A_{i+1} \xrightarrow{f_{i+2}} \cdots \xrightarrow{f_k} A_k.
\]
The classifying space of a (small) category C is the geometric realisation |N (C)| of the nerve
N (C) of C.
The nerve is a functor from Cat (the category of small categories) to SSet (the category
of simplicial sets, with simplicial morphisms), given by applying the functor to the diagram
(5.1). From this it follows that a functor between small categories C and D induces a map of
simplicial sets N(C) → N(D), which in turn induces a continuous map between the classifying
spaces |N(C)| → |N(D)|. Also, a natural transformation between two functors between C
and D induces a homotopy between the induced maps on the classifying spaces. In particular,
equivalent categories have homotopy equivalent classifying spaces. Any functor which is left or
right adjoint induces a homotopy equivalence of nerves. Consequently, |N (C)| is contractible if
C admits an initial or final object. (For a proof of this see [Sri96, Corollary 3.7].)
It is obvious from the nerve construction that the nerve of a category which is not connected
is the disjoint union of the nerves of the connected components of the category. Thus, if every
component of C admits an initial or final object, then each of the components of |N (C)| will
be contractible.
It is well known that the geometric realisations of the nerves of a category C and of its reversal
C^op are homeomorphic.
5.1. The classifying space |BM | of a monoid M . In the general context above, given a
monoid M we can construct a category with a single object, one arrow for every m ∈ M , and
composition given by multiplication. The classifying space |BM | of the monoid M is then the
geometric realisation of the nerve of the category corresponding to M op (the reversal is for
the technical reason of avoiding reversals in the face maps). In more detail, the nerve of this
category is the simplicial set $BM$ whose $n$-simplices are the $n$-tuples $\sigma = (m_1, m_2, \ldots, m_n)$ of
elements of $M$. The face maps are given by
\[
d_i\sigma = \begin{cases} (m_2, \ldots, m_n) & i = 0 \\ (m_1, \ldots, m_{i-1}, m_im_{i+1}, m_{i+2}, \ldots, m_n) & 0 < i < n \\ (m_1, \ldots, m_{n-1}) & i = n, \end{cases}
\]
and the degeneracy maps are given by
\[
s_i\sigma = (m_1, \ldots, m_i, 1, m_{i+1}, \ldots, m_n) \quad (0 \le i \le n).
\]
The geometric realisation |BM | is called the classifying space of M . Then |BM | is a CW
complex with one n-cell for every non-degenerate n-simplex of BM , i.e., for every n-tuple all
of whose entries are different from 1. As mentioned in the introduction, classifying spaces of
monoids have received some attention in the literature.
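The face and degeneracy formulas for $BM$ are straightforward to implement. The sketch below (the same hypothetical two-element monoid used earlier) checks the simplicial identity $d_id_j = d_{j-1}d_i$ for $i < j$ on all 3-simplices and lists the non-degenerate 2-simplices, i.e., the 2-cells of $|BM|$.

```python
from itertools import product

# Simplices of BM for a toy monoid M = {1, e}: n-tuples of elements of M.
M = ["1", "e"]
def mult(m, n):
    return "1" if (m, n) == ("1", "1") else "e"

def d(i, s):                      # face maps of BM
    n = len(s)
    if i == 0:
        return s[1:]
    if i == n:
        return s[:-1]
    return s[:i-1] + (mult(s[i-1], s[i]),) + s[i+1:]

def s_(i, s):                     # degeneracy maps of BM: insert 1 in slot i
    return s[:i] + ("1",) + s[i:]

# Check d_i d_j = d_{j-1} d_i for i < j on all 3-simplices.
for simplex in product(M, repeat=3):
    for j in range(4):
        for i in range(j):
            assert d(i, d(j, simplex)) == d(j - 1, d(i, simplex))

# Non-degenerate n-simplices are tuples with no entry equal to 1:
cells = [s for s in product(M, repeat=2) if "1" not in s]
print(cells)                      # the 2-cells of |BM|: [('e', 'e')]
```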
5.2. Right Cayley graph category. Let Γr (M ) denote the right Cayley graph category for
M , which has
• Objects: M ;
• Arrows: (x, m) : x → xm; and
• Composition of arrows: (xm, n) ◦ (x, m) = (x, mn).
The identity at x is (x, 1). This composition underlies our use of M op in defining BM .
Let $\overrightarrow{EM}$ be the nerve of the category $\Gamma_r(M)$. The $n$-simplices of $\overrightarrow{EM}$ may be written using
the notation $m(m_1, m_2, \ldots, m_n) = m\tau$, where $\tau = (m_1, m_2, \ldots, m_n)$ is an $n$-simplex of $BM$. Here
$m(m_1, m_2, \ldots, m_n)$ denotes the $n$-tuple of composable arrows in the category $\Gamma_r(M)$ where we
start at $m$ and then follow the path labelled by $m_1, m_2, \ldots, m_n$.
The face maps in $\overrightarrow{EM}$ are given by
\[
d_i(m(m_1, m_2, \ldots, m_n)) = \begin{cases} mm_1(m_2, \ldots, m_n) & i = 0 \\ m(m_1, m_2, \ldots, m_im_{i+1}, \ldots, m_n) & 0 < i < n \\ m(m_1, m_2, \ldots, m_{n-1}) & i = n \end{cases}
\]
and the degeneracy maps are given by
\[
s_i\sigma = m(m_1, \ldots, m_i, 1, m_{i+1}, \ldots, m_n) \quad (0 \le i \le n),
\]
where $\sigma = m(m_1, \ldots, m_n)$.
Let $|\overrightarrow{EM}|$ denote the geometric realisation of $\overrightarrow{EM}$. So $|\overrightarrow{EM}|$ is a CW complex with one
$n$-cell for every non-degenerate $n$-simplex of $\overrightarrow{EM}$, that is, for every $m(m_1, m_2, \ldots, m_n)$ with
$m_j \ne 1$ for $1 \le j \le n$. As a consequence, by an $n$-cell of $\overrightarrow{EM}$ we shall mean a non-degenerate
$n$-simplex.
Consider the right Cayley graph category $\Gamma_r(M)$. For each $m \in M$ there is precisely one
morphism $(1, m)$ from $1$ to $m$. Since the category has an initial object, we conclude that the
geometric realisation $|\overrightarrow{EM}|$ of its nerve is contractible.
Applying the nerve functor to the projection functor from the category $\Gamma_r(M)$ to the one-point
category $M^{op}$, which identifies all the vertices of $\Gamma_r(M)$ to a point, gives a simplicial
morphism $\pi : \overrightarrow{EM} \to BM$ between the corresponding nerves, which maps
\[
m(m_1, m_2, \ldots, m_n) \mapsto (m_1, m_2, \ldots, m_n).
\]
Observe that, for each $n$, the projection $\pi$ maps the set of $n$-cells of $\overrightarrow{EM}$ onto the set of $n$-cells
of $BM$. If we then apply the geometric realisation functor we obtain a projection $\pi : |\overrightarrow{EM}| \to
|BM|$ (we abuse notation slightly by using the same notation $\pi$ to denote this map).
The monoid $M$ acts by left multiplication on the category $\Gamma_r(M)$. By functoriality of the
nerve, it follows that $M$ acts on the left of $\overrightarrow{EM}_n$ by simplicial morphisms via
\[
s \cdot m(m_1, m_2, \ldots, m_n) = sm(m_1, m_2, \ldots, m_n).
\]
Under this action $\overrightarrow{EM}_n$ is a free left M-set with basis $BM_n$. Also, if we restrict to the $n$-cells
(i.e., non-degenerate simplices), then we obtain a free left M-set with basis the set of $n$-cells of
$BM$. It is an easy consequence of the definitions that this is an action by simplicial morphisms
and that it preserves non-degeneracy in the sense that $s \cdot m\sigma$ is an $n$-cell if and only if $m\sigma$ is an
$n$-cell, for all $s \in M$ and $m\sigma \in \overrightarrow{EM}$. Therefore $\overrightarrow{EM}$ is a rigid left M-simplicial set. Combining
these observations with Lemma 4.4, we conclude that $|\overrightarrow{EM}|$ is a free left M-CW complex which
is contractible.
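This freeness claim can also be tested directly in the toy setting: below, simplices $m(m_1, \ldots, m_n)$ of $\overrightarrow{EM}$ are encoded as pairs $(m, \tau)$ (an encoding we choose for illustration), each is uniquely $s \cdot (1, \tau)$, and the action is checked to preserve non-degeneracy.

```python
from itertools import product

# n-simplices of EM for a toy monoid, encoded as (m, (m1, ..., mn)),
# with left action s . (m, tau) = (sm, tau).
M = ["1", "e"]
def mult(m, n):
    return "1" if (m, n) == ("1", "1") else "e"

n = 2
EM_n = [(m, tau) for m in M for tau in product(M, repeat=n)]
def act(s, simplex):
    m, tau = simplex
    return (mult(s, m), tau)

# EM_n is a free left M-set with basis {(1, tau) : tau in BM_n}:
assert all(act(m, ("1", tau)) == (m, tau) for (m, tau) in EM_n)

# Non-degeneracy (no entry of tau equal to 1) is preserved by the action:
nondeg = lambda simplex: "1" not in simplex[1]
assert all(nondeg(act(s, x)) == nondeg(x) for s in M for x in EM_n)
```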
Dually, we use $\overleftarrow{EM}$ to denote the nerve of the left Cayley graph category $\Gamma_l(M)$. The
simplicial set $\overleftarrow{EM}$ satisfies all the obvious dual statements to those above for $\overrightarrow{EM}$. In particular,
$M$ acts freely via right multiplication on $\overleftarrow{EM}$ by simplicial morphisms, and $|\overleftarrow{EM}|$ is a
free right M-CW complex which is contractible.
5.3. Two-sided Cayley graph category. Let $\overleftrightarrow{\Gamma}(M)$ denote the two-sided Cayley graph category for $M$, which has
• Objects: $M \times M$;
• Arrows: $M \times M \times M$ where $(m_L, m, m_R) : (m_L, mm_R) \to (m_Lm, m_R)$; and
• Composition of arrows: $(n_L, n, n_R) \circ (m_L, m, m_R) = (m_L, mn, n_R)$ where $(m_Lm, m_R) =
(n_L, nn_R)$. Equivalently, this is the same as the composition $(m_Lm, n, n_R) \circ (m_L, m, nn_R) =
(m_L, mn, n_R)$ and corresponds to the path
\[
(m_L, mnn_R) \to (m_Lm, nn_R) \to (m_Lmn, n_R).
\]
This is in fact the kernel category of the identity map, in the sense of Rhodes and Tilson [RT89].
There is a natural $M \times M^{op}$ action on the category $\overleftrightarrow{\Gamma}(M)$.
Let $\overleftrightarrow{EM}$ be the nerve of the category $\overleftrightarrow{\Gamma}(M)$. The simplicial set $\overleftrightarrow{EM}$ parallels the two-sided
geometric bar construction of J. P. May; see [May72, May75]. The $n$-simplices of $\overleftrightarrow{EM}$ may
be written using the notation $m(m_1, m_2, \ldots, m_n)s = m\tau s$, where $\tau = (m_1, m_2, \ldots, m_n)$ is an
$n$-simplex of $BM$.
Here $m(m_1, m_2, \ldots, m_n)s$ denotes the $n$-tuple of composable morphisms in the category $\overleftrightarrow{\Gamma}(M)$
where we start at $(m, m_1m_2 \cdots m_ns)$ and follow the path
\[
(m, m_1m_2m_3 \cdots m_ns) \to (mm_1, m_2m_3 \cdots m_ns) \to (mm_1m_2, m_3 \cdots m_ns) \to \cdots \to (mm_1m_2 \cdots m_n, s)
\]
labelled by the edges
\[
(m, m_1, m_2m_3 \cdots m_ns),\ (mm_1, m_2, m_3 \cdots m_ns),\ \ldots,\ (mm_1 \cdots m_{n-2}, m_{n-1}, m_ns),\ (mm_1 \cdots m_{n-1}, m_n, s)
\]
and finish at $(mm_1m_2 \cdots m_n, s)$. The face maps in the nerve $\overleftrightarrow{EM}$ are given by
\[
d_i(m(m_1, m_2, \ldots, m_n)s) = \begin{cases} mm_1(m_2, \ldots, m_n)s & i = 0 \\ m(m_1, m_2, \ldots, m_im_{i+1}, \ldots, m_n)s & 0 < i < n \\ m(m_1, m_2, \ldots, m_{n-1})m_ns & i = n \end{cases}
\]
and the degeneracy maps are given by
\[
s_i\sigma = m(m_1, \ldots, m_i, 1, m_{i+1}, \ldots, m_n)s \quad (0 \le i \le n),
\]
where $\sigma = m(m_1, \ldots, m_n)s$.
Let $|\overleftrightarrow{EM}|$ denote the geometric realisation of $\overleftrightarrow{EM}$. So $|\overleftrightarrow{EM}|$ is a CW complex with one $n$-cell
for every non-degenerate $n$-simplex of $\overleftrightarrow{EM}$. Observe that $(m_L, m_R)$ and $(m_L', m_R')$ are in the
same component of the two-sided Cayley graph category $\overleftrightarrow{\Gamma}(M)$ if and only if $m_Lm_R = m_L'm_R'$.
Moreover, for each $m \in M$ the vertex $(1, m)$ is initial in its component. It follows from
these observations that $\pi_0(|\overleftrightarrow{EM}|) \cong M$ as an $(M \times M^{op})$-set, and each component of $|\overleftrightarrow{EM}|$ is
contractible. There is a natural projection from $\overleftrightarrow{\Gamma}(M)$ to the one-point category $M^{op}$ mapping
$(m_L, m, m_R)$ to its middle component $m$. Applying the nerve functor to this projection gives a
simplicial morphism $\pi : \overleftrightarrow{EM} \to BM$ given by
\[
m(m_1, m_2, \ldots, m_n)s \mapsto (m_1, m_2, \ldots, m_n).
\]
As in the one-sided case, this projection sends $n$-cells to $n$-cells and induces a map $\pi : |\overleftrightarrow{EM}| \to
|BM|$ between the corresponding geometric realisations.
The monoid $M$ has a natural two-sided action on $\overleftrightarrow{EM}_n$ via
\[
x \cdot [m(m_1, m_2, \ldots, m_n)s] \cdot y = xm(m_1, m_2, \ldots, m_n)sy.
\]
Under this action $\overleftrightarrow{EM}$ is a free rigid bi-M-simplicial set. Combining these observations with
Lemma 4.4, we conclude that $|\overleftrightarrow{EM}|$ is a free bi-M-CW complex such that $\pi_0(|\overleftrightarrow{EM}|) \cong M$ as an
$(M \times M^{op})$-set and each component of $|\overleftrightarrow{EM}|$ is contractible.
6. One-sided classifying spaces and finiteness properties
We will define left and right equivariant classifying spaces for a monoid M . Two-sided
equivariant classifying spaces will be defined in the next section. As we shall see, the examples
discussed in Section 5 will serve as the standard models of such spaces.
We say that a projective M -CW complex X is a (left) equivariant classifying space for M if it
is contractible. A right equivariant classifying space for M will be a left equivariant classifying
space for M op . Notice that the augmented cellular chain complex of an equivariant classifying
space for M provides a projective resolution of the trivial (left) ZM -module Z.
Example 6.1. The bicyclic monoid is the monoid $B$ with presentation $\langle a, b \mid ab = 1 \rangle$. It is not
hard to see that each element of $B$ is uniquely represented by a word of the form $b^ia^j$ where
$i, j \ge 0$. Figure 1 shows an equivariant classifying space for $B$. The 1-skeleton is the Cayley
graph of $B$ and there is a 2-cell glued in for each path labelled $ab$. This example is a special
case of far more general results about equivariant classifying spaces of special monoids which
will appear in a future paper of the current authors [GS].
[Figure 1. An equivariant classifying space for the bicyclic monoid.]
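To make the example concrete: multiplication of the normal forms $b^ia^j$ is easy to implement, since $a^jb^k$ collapses using $ab = 1$. A small sketch (our own encoding of $b^ia^j$ as the pair $(i, j)$):

```python
# The bicyclic monoid B = <a, b | ab = 1>: elements are normal forms b^i a^j,
# encoded as pairs (i, j) with i, j >= 0; (0, 0) is the identity.
def mult(x, y):
    (i, j), (k, l) = x, y
    t = min(j, k)                    # cancel a^t against b^t using ab = 1
    return (i + k - t, l + j - t)

one, a, b = (0, 0), (0, 1), (1, 0)
assert mult(a, b) == one             # the defining relation ab = 1
assert mult(b, a) != one             # but ba is a nontrivial element ...
assert mult(mult(b, a), mult(b, a)) == mult(b, a)   # ... indeed an idempotent
```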
Our first goal is to show that any two equivariant classifying spaces for M are M -homotopy
equivalent.
Lemma 6.2. Let $X$ be an equivariant classifying space for $M$ and let $Y$ be a contractible
M-space. Then there exists a continuous M-equivariant mapping $f : X \to Y$.
Proof. The proof constructs by induction M-equivariant continuous mappings $f_n : X_n \to Y$ with
$f_n$ extending $f_{n-1}$. To define $f_0$, observe that $X_0 = \coprod_{a \in A} Me_a$ (without loss of generality) and
so, by Proposition 2.3, $\mathrm{Top}_M(X_0, Y) \cong \prod_{a \in A} e_aY \ne \emptyset$, and so we can define $f_0$. Assume that
$f_n : X_n \to Y$ has been defined. Let $Z$ be the one-point space with the trivial M-action and let
$k : Y \to Z$ be the unique M-equivariant map. Then $k$ is a weak equivalence. So by Theorem 2.5
we can extend $f_n$ to an M-equivariant continuous mapping $f_{n+1} : X_{n+1} \to Y$. Now take $f$ to be
the colimit of the $f_n$.
Theorem 6.3. Let $X, Y$ be equivariant classifying spaces for $M$. Then $X$ and $Y$ are
M-homotopy equivalent by a cellular M-homotopy equivalence.
Proof. By Corollary 2.7 and Theorem 2.8 it suffices to construct an M-equivariant continuous
mapping $f : X \to Y$. But Lemma 6.2 does just that.
We next give an elementary proof that contractible free M-CW complexes exist and
hence that there are equivariant classifying spaces for $M$. A more canonical construction, using
simplicial sets, was given in the previous section.
Lemma 6.4. Let M be a monoid.
(1) If X0 is a non-empty projective (free) M -set, then there is a connected projective (free)
M -graph X with vertex set X0 .
(2) If X is a connected projective (free) M -CW complex such that πq (X) = 0 for 0 ≤ q < n,
then there exists a projective M -CW complex Y containing X as a projective M -CW
subcomplex and such that Yn = Xn and πq (Y ) = 0 for 0 ≤ q ≤ n.
(3) If X is a connected projective (free) M -CW complex such that πq (X) is trivial for
0 ≤ q < n, then there exists a contractible projective (free) M -CW complex Y containing
X as a projective M -CW subcomplex and such that Yn = Xn .
Proof. For the first item, fix $x_0 \in X_0$. The edge set of $X$ will be in bijection with $M \times X_0$, with
the edge corresponding to $(m, x)$ connecting $mx_0$ to $mx$. Then $X$ is a projective (free) M-graph
and each vertex $x$ is connected to $x_0$ via the edge $(1, x)$.
For the second item, we show that we can add free M -cells of dimension n + 1 to X to obtain
a new projective M -CW complex Y with πn (Y ) = 0. If πn (X) = 0, then take Y = X. So
assume that πn (X) is non-trivial. Fix a base point x0 ∈ X0 and let fa : S n → X, a ∈ A, be
mappings whose based homotopy classes form a set of generators for πn (X, x0 ). As Xn → X is
an n-equivalence, we may assume without loss of generality that fa : S n → Xn . Suppose that
X is constructed from pushouts as per (2.1). Note that M × A, where A has the trivial action,
is a free M -set. Let us define Y by putting Yk = Xk for 0 ≤ k ≤ n, defining Yn+1 to be the
pushout
\[
\begin{array}{ccc}
(P_{n+1} \times S^n) \amalg (M \times A \times S^n) & \longrightarrow & X_n \\
\big\downarrow & & \big\downarrow \\
(P_{n+1} \times B^{n+1}) \amalg (M \times A \times B^{n+1}) & \longrightarrow & Y_{n+1},
\end{array}
\]
where the top map is the union of the attaching map for X with the mapping (m, a, x) 7→ mfa (x)
(cf. Proposition 2.3), and putting Yk = Xk ∪ Yn+1 for k > n + 1. Then Y is a projective M -CW
complex containing X as a projective M -CW subcomplex and with Yn = Xn . Moreover, since
Xn = Yn → Y is an n-equivalence, it follows that the based homotopy classes of the fa : S n → Y
generate πn (Y, x0 ). By construction the fa can be extended to B n+1 → Y and so their classes
are trivial in πn (Y, x0 ). Thus πn (Y ) = 0. Also, because Xn = Yn → Y is an n-equivalence, we
have that πq (Y ) = 0 for 0 ≤ q < n.
The final item follows from Whitehead’s theorem, iteration of the second item and that
Yn → Y is an n-equivalence for all n ≥ 0.
Corollary 6.5. Let M be a monoid. Then there exists a contractible free M -CW complex.
Proof. Put X0 = M . Then by Lemma 6.4 we can find a connected free M -graph X with vertex
set X0 . Now applying the third item of Lemma 6.4 we can find a contractible free M -CW
complex with 1-skeleton X.
Example 6.6. It follows from the definitions and results in Section 5 that the geometric realisation $|\overrightarrow{EM}|$ of the nerve of the right Cayley graph category of $M$ is a left equivariant classifying
space for $M$.
Corollary 6.7. If X, Y are equivariant classifying spaces for M , then M \X and M \Y are
homotopy equivalent. In particular, M \X ≃ |BM |. Therefore, if G is a group and X is an
equivariant classifying space for G, then G\X is an Eilenberg-Mac Lane space for G. Conversely, the universal cover of any Eilenberg-Mac Lane space for G is an equivariant classifying
space for G.
Proof. The first statement follows from Theorem 6.3 and Proposition 3.5. The second statement
follows from the first, as $|\overrightarrow{EM}|$ is an equivariant classifying space for $M$. The group statements
then follow from the previous statements and classical covering space theory.
If M and N are monoids, then E(M ×N ) = E(M )×E(N ) and (M ×N )(e, f ) = M e×N f . It
follows that if P is a (finitely generated) projective M -set and Q a (finitely generated) projective
N -set, then P × Q is a (finitely generated projective) M × N -set.
Proposition 6.8. Let M, N be monoids and let X, Y be equivariant classifying spaces for M, N ,
respectively. Then X × Y is an M × N -equivariant classifying space, which is of M × N -finite
type whenever X is of M -finite type and Y is of N -finite type.
Proof. Assume that $X$ is obtained via attaching projective M-cells $P_n \times B^n$ and that $Y$ is
obtained by attaching projective N-cells $Q_n \times B^n$. Then the $n$-cells of $X \times Y$ are obtained by
adjoining $\coprod_{i=0}^{n} P_i \times Q_{n-i} \times B^n$, and hence $X \times Y$ is a projective $(M \times N)$-CW complex which
is of $(M \times N)$-finite type whenever $X$ is of M-finite type and $Y$ is of N-finite type. As $X \times Y$ is
contractible, we deduce that it is an $(M \times N)$-equivariant classifying space.
6.1. Monoids of type left-Fn . A key definition for this paper is the following. A monoid M
is of type left-Fn if there is an equivariant classifying space X for M such that Xn is M -finite,
i.e., such that M \X has finite n-skeleton. We say that M is of type left-F∞ if M has an
equivariant classifying space X that is of M -finite type, i.e., M \X is of finite type. The monoid
M is defined to have type right-Fn if M op is of type left-Fn for 0 ≤ n ≤ ∞. The following
proposition contains some basic facts.
Proposition 6.9. Let M be a monoid.
(1) A group is of type left-Fn if and only if it is of type Fn in the usual sense for any
0 ≤ n ≤ ∞.
(2) For 0 ≤ n ≤ ∞, if M is of type left-Fn , then it is of type left-FPn .
(3) If M is of type left-F∞ , then it is of type left-Fn for all n ≥ 0.
Proof. The first item follows from Corollary 6.7 and Corollary 3.3. The second is immediate
using that the augmented cellular chain complex of an equivariant classifying space $X$ gives a
projective $\mathbb{Z}M$-resolution of the trivial $\mathbb{Z}M$-module, since if $X$ is built up from pushouts as per
(2.1), then the $n$-th chain module is isomorphic to $\mathbb{Z}P_n$. The final item is trivial.
Note that, trivially, if $M$ is a finite monoid then $|\overrightarrow{EM}|$ has finitely many cells in each dimension
and thus $M$ is of type left-F∞.
Sometimes it will be convenient to use the following reformulation of the property left-Fn .
Proposition 6.10. Let M be a monoid. The following are equivalent for 0 ≤ n < ∞.
(1) M is of type left-Fn
(2) There is a connected M -finite projective M -CW complex X of dimension at most n with
πq (X) = 0 for 0 ≤ q < n.
Proof. If Y is an equivariant classifying space for M such that Yn is M -finite, then since Yn → Y
is an n-equivalence, we deduce that X = Yn is as required for the second item. Conversely, if
X is as in the second item, we can construct by Lemma 6.4 an equivariant classifying space Y
for M with Yn = X. Thus M is of type left-Fn .
Recall that the fundamental group of |BM| is isomorphic to the universal group (or maximal
group image, or group completion) U(M) of M, i.e., the group with generators M and relations
the multiplication table of M (cf. [GZ67]).
Corollary 6.11. Let M be a monoid. If M is of type left-F1 , then U (M ) is finitely generated.
If M is of type left-F2 , then U (M ) is finitely presented.
Proof. By Corollary 6.7 and Corollary 3.3, |BM| in the first case is homotopy equivalent to a CW complex with
finite 1-skeleton, and in the second case to a CW complex with finite 2-skeleton.
Thus $U(M) \cong \pi_1(|BM|)$ has the desired properties in both cases.
Recall that an inverse monoid is a monoid M with the property that for every m ∈ M there
is a unique element m′ ∈ M such that mm′ m = m and m′ mm′ = m′ . For more on inverse
monoids, and other basic concepts from semigroup theory we refer the reader to [How95].
Corollary 6.12. Let M be a monoid such that |BM | is an Eilenberg-Mac Lane space (e.g., if
M is cancellative with a left or right Ore condition or if M is an inverse monoid) and suppose
that 0 ≤ n ≤ ∞. If M is of type left-Fn , then U (M ) is of type Fn .
Proof. If X is an equivariant classifying space for M , then M \X is homotopy equivalent to
|BM | by Corollary 6.7 and hence is an Eilenberg-Mac Lane space for U (M ). The result now
follows from Corollary 3.3.
Since, as already mentioned above, D. McDuff [McD79] has shown that every path-connected
space has the weak homotopy type of the classifying space of some monoid, |BM| need not be
an Eilenberg-Mac Lane space. So not every monoid satisfies the hypotheses of Corollary 6.12. The fact that if M is cancellative with a left or right Ore condition then |BM| is
an Eilenberg-Mac Lane space is well known. If M is an inverse monoid then |BM| may also
be shown to be an Eilenberg-Mac Lane space. Both of these results can easily be proved by
appealing to Quillen's Theorem A, see [Wei13, Chapter 4], and should be considered folklore.
The converse of Corollary 6.12 does not hold. For example, the free inverse monoid on one
generator is not of type left-F2, while its maximal group image Z is of type F∞ (the proof that
the free inverse monoid on one generator is not of type left-F2 will appear in [GS]).
For groups, being of type F1 is equivalent to finite generation. For monoids, the condition of
being left-F1 is considerably weaker. Recall that if M is a monoid and A ⊆ M , then the (right)
Cayley digraph Γ(M, A) of M with respect to A is the graph with vertex set M and with edges
in bijection with M × A where the directed edge (arc) corresponding to (m, a) starts at m and
ends at ma. Notice that Γ(M, A) is a free M -CW graph and is M -finite if and only if A is
finite.
Theorem 6.13. Let M be a monoid. The following are equivalent.
(1) M is of type left-F1 .
(2) M is of type left-FP1 .
(3) There is a finite subset A ⊆ M such that Γ(M, A) is connected as an undirected graph.
In particular, any finitely generated monoid is of type left-F1 .
Proof. Item (1) implies (2) by Proposition 6.9, whereas (2) implies (3) by a result due to
Kobayashi [Kob07]. For completeness, let us sketch the proof that (2) implies (3). Let ε : ZM →
Z be the augmentation map; the ideal I = ker ε is called the augmentation ideal. If M is of
type left-FP1 , then I must be finitely generated because the augmentation map gives a partial
free resolution. But I is generated by all elements of the form m − 1 with m ∈ M . Hence there
is a finite subset A ⊆ M such that the elements a − 1 with a ∈ A generate I. Consider the
Cayley digraph Γ(M, A). Then M acts cellularly on Γ(M, A) and hence acts on π0 (Γ(M, A)).
There is a surjective ZM -module homomorphism η : ZM → Zπ0 (Γ(M, A)) mapping m ∈ M to
the connected component of the vertex m of Γ(M, A). Moreover, the augmentation ε factors
through η. Thus to show that Γ(M, A) is connected, it suffices to show that I = ker η. By
construction ker η ⊆ I. But if a ∈ A, then a and 1 are in the same connected component of
Γ(M, A) and thus a − 1 ∈ ker η. Since the a − 1 with a ∈ A generate I, we deduce that I ⊆ ker η
and hence Γ(M, A) is connected.
Finally, (3) implies (1) by Proposition 6.10 as Γ(M, A) is an M -finite connected free M -CW
complex of dimension at most 1.
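For a finite monoid, condition (3) is directly checkable. The sketch below, using an illustrative three-element monoid $\{1, a, 0\}$ with $aa = 0$ and $0$ a zero (our own toy example), builds the undirected version of $\Gamma(M, A)$ and tests connectivity by breadth-first search.

```python
from collections import deque

# Toy monoid M = {1, a, 0} with aa = 0 and 0 absorbing; generating set A = {a}.
M = ["1", "a", "0"]
def mult(m, n):
    if m == "1": return n
    if n == "1": return m
    return "0"

A = ["a"]

def cayley_connected(M, A):
    # Undirected adjacency of the Cayley digraph Gamma(M, A): edges m -- ma.
    adj = {m: set() for m in M}
    for m in M:
        for a in A:
            adj[m].add(mult(m, a))
            adj[mult(m, a)].add(m)
    seen, queue = {M[0]}, deque([M[0]])
    while queue:
        for v in adj[queue.popleft()] - seen:
            seen.add(v)
            queue.append(v)
    return seen == set(M)

print(cayley_connected(M, A))   # True, so this M is of type left-F1
```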
We next show that a finitely presented monoid is of type left-F2. In fact, we shall see later
that finitely presented monoids are of type bi-F2, which implies left-F2, but the proof of this
case is instructive.
Theorem 6.14. Let $M$ be a finitely presented monoid. Then $M$ is of type left-F2.
Proof. Suppose that M is generated by a finite set A with defining relations u1 = v1 , . . . , un =
vn . Let us construct a 2-dimensional, M -finite, free M -CW complex X with 1-skeleton the
Cayley graph Γ(M, A) by attaching a free M -cell M × B 2 for each relation. Let pi , qi be the
paths from 1 to mi labelled by ui and vi , respectively, where mi is the image of ui (and vi ) in M .
Then we glue in a disk di with boundary path pi qi−1 and glue in M ×B 2 using Proposition 2.3 (so
{m} × B 2 is sent to mdi ). Then X is an M -finite connected free M -CW complex of dimension
at most 2. See Figure 1 for this construction for the bicyclic monoid. By Proposition 6.10, it
suffices to prove that X is simply connected.
A digraph is said to be rooted if there is a vertex v so that there is a directed path from v to
any other vertex. For instance, Γ(M, A) is rooted at 1. It is well known that a rooted digraph
admits a spanning tree, called a directed spanning tree, such that the geodesic from the root to
any vertex is directed. Let T be a directed spanning tree for Γ(M, A) rooted at 1 (this is the
same thing as a prefix-closed set of normal forms for $M$; so, for instance, shortlex normal forms
would do). Let $e = m \xrightarrow{a} ma$ be a directed edge not belonging to $T$. Then the corresponding
generator of $\pi_1(X, 1)$ is of the form $peq^{-1}$ where $p$ and $q$ are directed paths from $1$ to $m$ and
ma, respectively. Let u be the label of p and v be the label of q. Then ua = v in M . Thus it
suffices to prove that if x, y ∈ A∗ are words which are equal in M to an element m′ , then the
loop ℓ labelled xy −1 at 1, corresponding to the pair of parallel paths 1 to m′ labelled by x and
y, is null homotopic. By induction on the length of a derivation from x to y, we may assume
that x = wui w′ and y = wvi w′ for some i = 1, . . . , n. Let m0 be the image of w in M . Then
m0 di is a 2-cell with boundary path the loop at m0 labeled by ui vi−1 . It follows that ℓ is null
homotopic. This completes the proof.
The converse of Theorem 6.14 is not true, e.g., the monoid (R, ·) is of type left-F2 (by
Corollary 6.23) but is not even finitely generated. It is natural to ask whether there is a nice
characterisation, analogous to Theorem 6.13(3), for left-F2 in terms of the right Cayley graph
together with the left action of M . We would guess that M is of type left-F2 if and only if it
has a finite subset A ⊆ M such that Γ(M, A) is connected, and finitely many free M -2-cells
can be adjoined to make a simply connected 2-complex.
It is well known that, for finitely presented groups, the properties Fn and FPn are equivalent
for 3 ≤ n ≤ ∞. We now provide the analogue in our context. Here we replace finitely presented
by left-F2 .
Theorem 6.15. Let M be a monoid of type left-F2 . Then M is of type left-Fn if and only if
M is of type left-FPn for 0 ≤ n ≤ ∞.
Proof. We prove that if there is a connected M -finite projective M -CW complex X of dimension
at most n, where n ≥ 2, with πq (X) = 0 for 0 ≤ q < n, and M is of type left-FPn+1 , then there
is a connected M -finite projective M -CW complex Y of dimension at most n + 1 with Yn = X
and πq (Y ) = 0 for all 0 ≤ q < n + 1. This will imply the theorem by Proposition 6.10,
Proposition 6.9 and induction.
Since X is simply connected, Hq (X) = 0 for 1 ≤ q < n and Hn (X) ∼= πn (X) by the Hurewicz
theorem. Therefore, the augmented cellular chain complex of X gives a partial projective
resolution of Z of length at most n, which is finitely generated in each degree. Therefore,
since M is of type left-FPn+1 , it follows that Hn (X) = ker(dn : Cn (X) → Cn−1 (X)) is finitely generated as a left ZM -module. Choose representatives fa : S n → X, with a ∈ A, of a
finite set of elements of πn (X) that map to a finite ZM -module generating set of Hn (X) under
the Hurewicz isomorphism. Then form Y by adjoining M × A × B n+1 to X via the attaching
map M × A × S n → Xn given by (m, a, x) 7→ mfa (x). Then Y is an M -finite projective
M -CW complex of dimension n + 1 with Yn = X. Since the inclusion of X = Yn into Y is
an n-equivalence, we deduce that πq (Y ) = 0 for 1 ≤ q < n and that the inclusion X → Y is
surjective on πn . But since the Hurewicz map in degree n is natural and is an isomorphism for
both X and Y , we deduce that the inclusion X → Y induces a surjection Hn (X) → Hn (Y ).
But, by construction, the images of the ZM -module generators of Hn (X) are trivial in Hn (Y )
(since they represent trivial elements of πn (Y )). We deduce that Hn (Y ) = 0 and hence, by the
Hurewicz theorem, πn (Y ) = 0. This completes the induction.
Notice that Theorem 6.15 implies that M is of type left-F∞ if and only if M is of type left-Fn
for all n ≥ 0.
Proposition 6.16. If M is of type left-Fn with n ≥ 1, then M has a free contractible M -CW
complex X such that Xn is M -finite.
Proof. This is clearly true for n = 1 by Theorem 6.13. Note that Lemma 6.4 and the construction in the proof of Theorem 6.15 show that if Y is a simply connected M -finite free M -CW
complex of dimension at most 2 and M is of type left-FPn , then one can build a contractible
free M -CW complex X with X2 = Y such that Xn is M -finite. Thus it remains to prove that
if there is a simply connected M -finite projective M -CW complex Y of dimension 2, then there
is a simply connected M -finite free M -CW complex X of dimension 2.
Note that Y0 = ∐a∈A M ea with A a finite set. Define X0 = M × A. Identifying M ea with
M ea × {a}, we may view Y0 as an M -subset of X0 . Using this identification, we can define X1
to consist of Y1 (the edges of Y ) along with some new edges. We glue in an edge from (m, a) to (mea , a) for each m ∈ M and a ∈ A; that is, we glue in a free M -cell M × A × B 1 where
the attaching map takes (m, a, 0) to (m, a) and (m, a, 1) to (mea , a). Notice that all vertices of
X0 \ Y0 are connected to a vertex of Y0 in X1 and so X1 is connected as Y1 was connected.
To define X2 , first we keep all the two-cells from Y2 . Notice that if T is a spanning tree for Y ,
then a spanning tree T ′ for X can be obtained by adding to T all the edges (m, a) −→ (mea , a)
with m ∉ M ea (all vertices of X0 \ Y0 have degree one). Thus the only edges in X1 \ Y1 that do
not belong to T ′ are the loop edges (m, a) −→ (mea , a) for m ∈ M ea that we have added. So if
we attach M × A × B 2 to X1 by the attaching map M × A × S 1 → X1 mapping {m} × {a} × S 1
to the loop edge (mea , a) −→ (mea , a) from X1 \ Y1 , then we obtain a simply connected free
M -CW complex X which is M -finite. This completes the proof.
In light of Proposition 6.16, one might wonder why we bother allowing projective M -CW
complexes rather than just free ones. The reason is because projective M -CW complexes are
often easier to construct and, as we are about to see, sometimes it is possible to find an M -finite equivariant classifying space for M which is projective when no M -finite free equivariant
classifying space exists. This will be relevant when considering geometric dimension.
Let M be a non-trivial monoid with a right zero element z. Then M z = {z} is a one-element
set with the trivial action. Since z is idempotent, it follows that the trivial M -set is projective
but not free. Therefore, the one-point space with the trivial M -action is an M -finite equivariant
classifying space for M , which is not free. We will show that if M has a finite number of right
zeroes (e.g., if M has a zero element), then there is no finite free resolution of the trivial module
which is finitely generated in each degree. In this case, every free equivariant classifying space
for M of M -finite type will be infinite dimensional.
A finitely generated projective module P over a ring R is said to be stably free if there are
finite rank free modules F, F ′ such that P ⊕ F ′ ∼= F . The following lemma is well known, but
we include a proof for completeness.
Lemma 6.17. Let P be a finitely generated projective (left) module over a ring R. Then P has
a finite free resolution, finitely generated in each degree, if and only if P is stably free.
Proof. Suppose that P is stably free, say P ⊕ F ′ ∼= F with F, F ′ finite rank free R-modules.
Then the exact sequence
0 −→ F ′ −→ F −→ P −→ 0
provides a finite free resolution of P that is finitely generated in each degree.
Conversely, suppose that
0 −→ Fn −→ Fn−1 −→ · · · −→ F0 −→ P −→ 0
is a free resolution with Fi finitely generated for all 0 ≤ i ≤ n. We also have a projective
resolution
0 −→ Pn −→ Pn−1 −→ · · · −→ P0 −→ P −→ 0
with P0 = P and Pi = 0 for 1 ≤ i ≤ n because P is projective. By the generalized Schanuel’s
lemma, we have that
P0 ⊕ F1 ⊕ P2 ⊕ F3 ⊕ · · · ∼= F0 ⊕ P1 ⊕ F2 ⊕ P3 ⊕ · · ·
and hence
P ⊕ F1 ⊕ F3 ⊕ · · · ∼= F0 ⊕ F2 ⊕ · · ·
and so P is stably free.
So we are interested in showing that the trivial module for ZM is not stably free if M is a
non-trivial monoid with finitely many right zeroes (and at least one).
Recall that a ring R is said to have the Invariant Basis Number property (IBN) if whenever
Rn ∼= Rm as R-modules, one has m = n (where m, n are integers); in this definition it does not
matter if one uses left or right modules [Lam99].
Our first goal is to show that if M is a monoid with zero z, then the contracted monoid
ring ZM/Zz has IBN. This result is due to Pace Nielsen, whom we thank for allowing us to
reproduce it. It is equivalent to show that if M is a monoid and I is a proper ideal of M , then
ZM/ZI has IBN. The proof makes use of the Hattori-Stallings trace (see [Wei13, Chapter 2]).
Let ∼ be the least equivalence relation on a monoid M such that mn ∼ nm for all m, n ∈ M ;
this relation, often called conjugacy, has been studied by a number of authors.
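(For a group this relation is ordinary conjugacy: taking m = hg and n = h−1 gives hgh−1 = (hg)h−1 ∼ h−1 (hg) = g.)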
Lemma 6.18. Let M be a monoid and e ∈ M an idempotent. Suppose that e is conjugate to
an element of an ideal I. Then e ∈ I.
Proof. Suppose that e is conjugate to m ∈ I. Then we can find elements x1 , . . . , xn , y1 , . . . , yn ∈ M with e = x1 y1 , yi xi = xi+1 yi+1 for 1 ≤ i ≤ n − 1, and yn xn = m. Then e = en+1 = (x1 y1 )n+1 = x1 (y1 x1 )n y1 =
x1 (x2 y2 )n y1 = x1 x2 (y2 x2 )n−1 y2 y1 = · · · = x1 · · · xn (yn xn )yn yn−1 · · · y1 ∈ I as yn xn = m ∈
I.
If R is a ring, then [R, R] denotes the additive subgroup generated by all commutators
ab − ba with a, b ∈ R. The abelian group R/[R, R] is called the Hattori-Stallings trace of R;
this is also the 0-Hochschild homology group of R. Cohn proved that if 1 + [R, R] has infinite
order in R/[R, R], then R has IBN (see [Lam99, Exercise 1.5]). The point is that if A is an
m × n matrix over R and B is an n × m-matrix over R with AB = Im and BA = In , then
m+[R, R] = T (AB) = T (BA) = n+[R, R] where, for a square matrix C over R, we define T (C)
to be the class of the sum of the diagonal entries of C in R/[R, R]. The following proposition
is an elaboration of Pace Nielsen’s argument (see [Nie]).
Proposition 6.19. Let M be a monoid and I a (possibly empty) ideal. Let R = ZM/ZI.
Then R/[R, R] is a free abelian group on the conjugacy classes of M that do not intersect I.
More precisely, if T is a transversal to the conjugacy classes of M not intersecting I, then the
elements of the form m + [R, R] with m ∈ T form a basis for R/[R, R].
Proof. We view R as having basis M \ I subject to the relations of the multiplication table
of M and that the elements of I are 0. Let A be the free abelian group on the conjugacy
classes of M that do not intersect I. Write [m] for the conjugacy class of m ∈ M . Define
an abelian group homomorphism f : A → R/[R, R] by f ([m]) = m + [R, R]. This is well
defined because xy + [R, R] = yx + [R, R] for x, y ∈ M with xy, yx ∉ I. To see that f
is surjective, note that if m ∈ M \ I with [m] ∩ I ≠ ∅, then m + [R, R] = [R, R]. This
follows because if m = x1 y1 , yi xi = xi+1 yi+1 , for i = 1, . . . , n − 1, and yn xn ∈ I, then
[R, R] = yn xn + [R, R] = xn yn + [R, R] = · · · = y1 x1 + [R, R] = m + [R, R].
Let us define g : R → A on m ∈ M \ I by
g(m) = [m] if [m] ∩ I = ∅, and g(m) = 0 otherwise.
Then if a, b ∈ R with, say,
a = ∑m∈M \I cm m and b = ∑n∈M \I dn n,
then we have that
ab − ba = ∑m,n∈M \I cm dn (mn − nm).
Since mn ∼ nm, either both map to 0 under g or both map to [mn] = [nm]. Therefore,
ab − ba ∈ ker g and so g induces a homomorphism g′ : R/[R, R] → A. Clearly, if [m] ∩ I = ∅, then g′ f ([m]) = g′ (m + [R, R]) = g(m) = [m]. It follows that f is injective and hence an
isomorphism. The result follows.
As a consequence we deduce the result of Nielsen.
Corollary 6.20. Let M be a monoid and I a proper ideal (possibly empty). Then ZM/ZI has
IBN. In particular, contracted monoid rings have IBN.
Proof. Put R = ZM/ZI. If I is a proper ideal, then 1 is not conjugate to any element of I by
Lemma 6.18. It follows from Proposition 6.19 that 1 + [R, R] has infinite order in R/[R, R] and
hence R has IBN.
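For instance, taking I = ∅ shows that 1 + [ZM, ZM ] has infinite order for every monoid M , so every monoid ring ZM , and in particular every integral group ring ZG, has IBN.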
Theorem 6.21. Let M be a non-trivial monoid with finitely many right zeroes (and at least
one). Then the trivial left ZM -module Z is projective but not stably free and hence does not
have a finite free resolution that is finitely generated in each degree.
Proof. Let I be the set of right zero elements of M and fix z ∈ I. Observe that I is a proper
two-sided ideal. Note that z, 1 − z form a complete set of orthogonal idempotents of ZM and
so ZM ∼= ZM z ⊕ ZM (1 − z) and hence ZM z ∼= Z is projective. Suppose that Z is stably free, that is, Z ⊕ F ′ ∼= F with F, F ′ free ZM -modules of rank r, r ′ , respectively.
There is an exact functor from ZM -modules to zZM z-modules given by V 7→ zV . Note that zZM z = Zz ∼= Z as a ring. Also, zZM = ZI is a free abelian group (equivalently, zZM z-module) of
rank |I| < ∞. Therefore,
Z^{|I|r} ∼= (ZI)^r ∼= zF ∼= Z ⊕ zF ′ ∼= Z ⊕ (ZI)^{r′} ∼= Z^{1+|I|r′}
as Z-modules and hence r|I| = 1 + r ′ |I| as Z has IBN. But putting R = ZM/ZI and observing
that Z/(ZI · Z) = 0, we have that
R^r ∼= R ⊗ZM F ∼= (R ⊗ZM Z) ⊕ (R ⊗ZM F ′ ) ∼= R^{r′}
and hence r = r ′ as R has IBN by Corollary 6.20. This contradiction completes the proof.
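To see the counting in the smallest case, take M = {1, z} with z a right zero, so that |I| = 1. The two displayed isomorphisms would then force r = 1 + r ′ and r = r ′ simultaneously, which is impossible.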
There is, of course, a dual result for left zeroes. In particular, if M is a non-trivial monoid with
a zero element, then Z is not stably free as either a left or right ZM -module. Thus, if M is a
non-trivial monoid with zero, it has no M -finite free left or right equivariant classifying space
but it has an M -finite projective one. This justifies considering projective M -CW complexes.
If L is a left ideal of M containing an identity e, then L = M e = eM e. Note that ϕ : M → M e
given by ϕ(m) = me is a surjective monoid homomorphism in this case since ϕ(1) = e and
ϕ(mn) = mne = mene = ϕ(m)ϕ(n) as ne ∈ M e = eM e. Also note that the left M -set structure
on M e given by inflation along ϕ corresponds to the left M -set structure on M e induced by left
multiplication because if m ∈ M and n ∈ M e, then n = en and so mn = men = ϕ(m)n. Notice
that if f ∈ E(M e), then f ∈ E(M ) and M ef = M f is projective as both an M e-set and an M -set. Thus each (finitely generated) projective M e-set is a (finitely generated) projective M -set
via inflation along ϕ. Note that if A is a left M -set, then eM ⊗M A ∼= eA via em ⊗ a 7→ ema.
Proposition 6.22. Suppose that M is a monoid, e ∈ E(M ) with M e = eM e, and 0 ≤ n ≤ ∞. If M e
is of type left-Fn , then so is M . The converse holds if eM is a finitely generated projective left
eM e-set.
Proof. If X is a projective M e-CW complex constructed via pushouts as in (2.1) (but with
M replaced by M e), then each Pn is a projective M -set via inflation along ϕ and so X is a
projective M -CW complex. Moreover, if X is of M e-finite type (respectively, M e-finite), then
it is of M -finite type (respectively, M -finite). Thus if M e is of type left-Fn , then so is M .
Suppose that X is an equivariant classifying space for M and eM is a finitely generated
projective left eM e-set. Then eM ⊗M X ∼= eX. Now eX is a projective eM e-CW complex and
if Xn is M -finite, then (eX)n = eXn is eM e-finite by Corollary 3.2. Moreover, since eX is a
retract of X as a CW complex and X is contractible, it follows that eX is contractible. Thus
eX is an equivariant classifying space for eM e. The result follows.
Our first corollary is that having a right zero guarantees the property left-F∞ (which can be
viewed as a defect of the one-sided theory).
Corollary 6.23. If M contains a right zero, then M is of type left-F∞ . Hence any monoid
with a zero is both of type left- and right-F∞ .
Proof. If e is a right zero, then M e = {e} = eM e and {e} is of type left-F∞ . Thus M is of type
left-F∞ by Proposition 6.22.
Recall that two elements m and n of a monoid M are said to be L -related if and only if they
generate the same principal left ideal, i.e., if M m = M n. Clearly L is an equivalence relation
on M .
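For instance, in a group G any two elements are L -related since Gm = G for all m ∈ G, whereas in a free monoid two words are L -related if and only if they are equal.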
Corollary 6.24. Suppose that M is a monoid and e ∈ E(M ) with eM a two-sided minimal
ideal of M and 0 ≤ n ≤ ∞. Note that Ge = eM e is the maximal subgroup at e. If Ge is of
type Fn , then M is of type left-Fn . If eM contains finitely many L -classes, then the converse
holds.
Proof. Note that M e = eM e = Ge and so the first statement is immediate from Proposition 6.22. For the converse, it follows from Green’s lemma [How95, Lemma 2.2.1] that eM is
a free left Ge -set and that the orbits are the L -classes of eM . Thus the second statement follows
from Proposition 6.22.
Corollary 6.24 implies that if M is a monoid with a minimal ideal that is a group G, then
M is of type left-Fn if and only if G is of type Fn and dually for right-Fn .
The following is a slight extension of the fact that a finite index subgroup of a group of type
Fn is also of type Fn ; see [Bro94, Chapter VIII, Proposition 5.1].
Proposition 6.25. Let M be a monoid and N a submonoid such that M is a finitely generated
projective left N -set. If M is of type left-Fn , then N is of type left-Fn , as well.
Proof. Observe that each finitely generated free left M -set is a finitely generated projective
N -set. Hence each finitely generated projective M -set, being a retract of a finitely generated
free left M -set, is a finitely generated projective N -set. Thus any equivariant classifying space
for M is also an equivariant classifying space for N .
An immediate consequence of Proposition 6.8 is the following.
Proposition 6.26. Let M, N be monoids of type left-Fn . Then M × N is of type left-Fn .
6.2. Left geometric dimension. Let us define the left geometric dimension of M to be
the minimum dimension of a left equivariant classifying space for M . The right geometric
dimension is, of course, defined dually. Clearly, the geometric dimension is an upper bound on
the cohomological dimension cd M of M . Recall that the (left) cohomological dimension of M
is the projective dimension of the trivial module Z, that is, the shortest length of a projective
resolution of Z. As mentioned in the introduction, for groups of cohomological dimension
different than 2, it is known that geometric dimension coincides with cohomological dimension,
but the general case is open.
Theorem 6.27. Let M be a monoid. Then M has an equivariant classifying space of dimension
max{cd M, 3}.
Proof. If M has infinite cohomological dimension, then this is just the assertion that M has an
equivariant classifying space. So assume that cd M < ∞. Put n = max{cd M, 3}. Let Y be
an equivariant classifying space for M . As the inclusion Yn−1 → Y is an (n − 1)-equivalence,
we deduce that πq (Yn−1 ) is trivial for 0 ≤ q < n − 1. Also, as the augmented cellular chain
complex of Yn−1 provides a partial projective resolution of the trivial module of length n−1 and
cd M ≤ n, it follows that ker dn−1 = Hn−1 (Yn−1 ) is a projective ZM -module. By the Eilenberg swindle, there is a free ZM -module F such that Hn−1 (Yn−1 ) ⊕ F ∼= F (indeed, if P ⊕ Q is free with P = Hn−1 (Yn−1 ), one may take F to be a countably infinite direct sum of copies of P ⊕ Q). Suppose that F is free
on a set A. Fix a basepoint y0 ∈ Yn−1 . We glue a wedge of (n − 1)-spheres, in bijection with
A, into Yn−1 at y0 as well as freely gluing in its translates. That is, we form a new projective
M -CW complex Z with Zn−2 = Yn−2 and where Z = Zn−1 consists of the (n − 1)-cells from
Yn−1 and M × A × B n−1 where the attaching map M × A × S n−2 is given by (m, a, x) 7→ my0 .
Notice that Cn−1 (Z) ∼= Cn−1 (Yn−1 ) ⊕ F as a ZM -module and that the boundary map is zero
on the F -summand since the boundary of each of the new (n − 1)-cells that we have glued in is
a point and n ≥ 3. Therefore, Hn−1 (Z) = ker dn−1 = Hn−1 (Yn−1 ) ⊕ F ∼= F . As the inclusion
Yn−2 = Zn−2 → Z is an (n−2)-equivalence, we deduce that πq (Z) is trivial for 0 ≤ q ≤ n−2. In
particular, Z is simply connected as n ≥ 3. By the Hurewicz theorem, πn−1 (Z, y0 ) ∼= Hn−1 (Z).
Choose mappings fa : S n−1 → Z, for a ∈ A, whose images under the Hurewicz mapping form a ZM -module basis for Hn−1 (Z) ∼= F . Then form X by attaching M × A × B n to Z = Zn−1 via
the mapping
M × A × S n−1 → Z
sending (m, a, x) to mfa (x).
Note that X is an n-dimensional CW complex with Xn−1 = Zn−1 and hence the inclusion Z = Xn−1 → X is an (n − 1)-equivalence. Therefore, πq (X) = 0 = Hq (X) for
0 ≤ q ≤ n − 2. Also πn−1 (X, y0 ) ∼= Hn−1 (X) via the Hurewicz isomorphism. Moreover,
as the inclusion Z = Xn−1 → X is an (n − 1)-equivalence, we deduce that the inclusion
induces a surjective homomorphism πn−1 (Z, y0 ) → πn−1 (X, y0 ) and hence a surjective homomorphism Hn−1 (Z) → Hn−1 (X). As the ZM -module generators of Hn−1 (Z) have trivial images
in Hn−1 (X) by construction and the Hurewicz map, we deduce that Hn−1 (X) = 0.
Recall that Cn (X) = Hn (Xn , Xn−1 ). By standard cellular homology Hn (Xn , Xn−1 ) is a free
ZM -module on the images of the generator of the relative homology of (B n , S n−1 ) under the
characteristic mappings
ha : ({1} × {a} × B n , {1} × {a} × S n−1 ) → (Xn , Xn−1 )
and the boundary map ∂n : Hn (Xn , Xn−1 ) → Hn−1 (Xn−1 ) sends the class corresponding to a ∈ A to the image of the generator of Hn−1 (S n−1 ) under the map on homology induced by the attaching map fa : S n−1 → Xn−1 . Hence a free basis of Hn (Xn , Xn−1 ) is sent by ∂n bijectively
to a free basis for Hn−1 (Xn−1 ) and so ∂n is an isomorphism. The long exact sequence for
reduced homology and the fact that an (n − 1)-dimensional CW complex has trivial homology
in degree n provides an exact sequence
0 = Hn (Xn−1 ) −→ Hn (Xn ) −→ Hn (Xn , Xn−1 ) −∂n→ Hn−1 (Xn−1 )
and so Hn (X) = Hn (Xn ) ∼= ker ∂n = 0. As X is a simply connected n-dimensional CW
complex with Hq (X) = 0 for 0 ≤ q ≤ n, we deduce that X is contractible by the Hurewicz and
Whitehead theorems. Therefore, X is an n-dimensional equivariant classifying space for M ,
completing the proof.
We end this section by observing that monoids of left cohomological dimension 0 are precisely
the monoids of left geometric dimension 0. The following result generalises [GP98, Lemma 1
and Theorem 1].
Proposition 6.28. Let M be a monoid. Then the following are equivalent.
(1) M has a right zero element.
(2) M has left cohomological dimension 0.
(3) M has left geometric dimension 0.
Proof. If M has a right zero z, then M z = {z} is a projective M -set and hence the one point
space is an equivariant classifying space for M . Thus M has left geometric dimension zero. If
M has left geometric dimension zero, then it has left cohomological dimension zero. If M has
left cohomological dimension zero, then Z is a projective ZM -module and so the augmentation
mapping ε : ZM → Z splits. Let P be the image of the splitting, so that ZM = P ⊕ Q. As P is
a retract of ZM and each endomorphism of ZM is induced by a right multiplication, we have
that Z ∼= P = ZM e for some idempotent e ∈ ZM with ε(e) = 1. Then since me = e for all
m ∈ M and e has finite support X, we must have that M permutes X under left multiplication.
Let G be the quotient of M that identifies two elements if they act the same on X. Then G is
a finite group and Z must be a projective ZG-module. Therefore, G is trivial. But this means
that if x ∈ X, then mx = x for all m ∈ M and so M has a right zero.
We do not know whether left geometric dimension equals left cohomological dimension for
monoids of cohomological dimension one or two, although the former is true for groups by the
Stallings-Swan theorem.
7. Bi-equivariant classifying spaces
Let M be a monoid. We now introduce the bilateral notion of a classifying space in order
to introduce a stronger property, bi-Fn . It will turn out that bi-Fn implies both left-Fn and
right-Fn , but is strictly stronger. Moreover, bi-Fn implies bi-FPn which is of interest from the
point of view of Hochschild cohomology, which is the standard notion of cohomology for rings.
Many of the results are similar to the previous section, but the proofs are more complicated.
First recall that M is an M × M op -set via the action (mL , mR )m = mL mmR . We say that a
projective M × M op -CW complex X is a bi-equivariant classifying space for M if π0 (X) ∼= M as an M × M op -set and each component of X is contractible; equivalently, X has an M × M op -equivariant homotopy equivalence to the discrete M × M op -set M .
We can augment the cellular chain complex of X via the canonical surjection ε : C0 (X) → H0 (X) ∼= Zπ0 (X) ∼= ZM . The fact that each component of X is contractible guarantees that this is a resolution, which will be a projective bimodule resolution of ZM and hence suitable for computing Hochschild cohomology. We begin by establishing the uniqueness up to M × M op -homotopy equivalence of bi-equivariant classifying spaces.
Lemma 7.1. Let X be a bi-equivariant classifying space for M and let Y be a locally path
connected M × M op -space with contractible connected components. Suppose that g : π0 (X) →
π0 (Y ) is an M ×M op -equivariant mapping. Then there exists a continuous M ×M op -equivariant
mapping f : X → Y such that the mapping f∗ : π0 (X) → π0 (Y ) induced by f is g.
Proof. Let r : X → π0 (X) and k : Y → π0 (Y ) be the projections to the set of connected
components. Then k and r are continuous M × M op -equivariant maps where π0 (X) and π0 (Y )
carry the discrete topology. Our goal will be to construct an M × M op -equivariant continuous
mapping f : X → Y such that the diagram

X −−f−→ Y
r ↓        ↓ k
π0 (X) −−g−→ π0 (Y )

commutes. We construct, by induction, M × M op -equivariant continuous mappings fn : Xn → Y such that the diagram

Xn −−fn−→ Y
r ↓        ↓ k        (7.1)
π0 (X) −−g−→ π0 (Y )

commutes and fn extends fn−1 .
To define f0 , observe that X0 = ∐a∈A M ea × e′a M . Choose ya ∈ Y with k(ya ) = g(r(ea , e′a )).
Then k(ea ya e′a ) = ea k(ya )e′a = ea g(r(ea , e′a ))e′a = g(r(ea , e′a )) and so, replacing ya by ea ya e′a , we may assume without loss of generality that ya ∈ ea Y e′a . Then by Proposition 2.3 there is an
M × M op -equivariant mapping X0 → Y given by (mea , e′a m′ ) 7→ mea ya e′a m′ for a ∈ A and
m ∈ M . By construction, the diagram (7.1) commutes.
Assume now that fn has been defined. The map k : Y → π0 (Y ) is M × M op -equivariant
and a weak equivalence (where π0 (Y ) has the discrete topology) because Y has contractible
connected components. So by Theorem 2.5 we can construct an M × M op -equivariant continuous mapping fn+1 : Xn+1 → Y extending fn . Note that kfn+1 ≃ gr and hence,
since π0 (Y ) is a discrete space, we conclude that kfn+1 = gr. Now take f to be the colimit of
the fn . This completes the proof.
Theorem 7.2. Let X, Y be bi-equivariant classifying spaces for M . Then X and Y are M ×
M op -homotopy equivalent by a cellular M × M op -homotopy equivalence.
Proof. As π0 (X) ∼= M ∼= π0 (Y ) as M × M op -sets, there is an M × M op -equivariant isomorphism
g : π0 (X) → π0 (Y ). Then by Lemma 7.1, there is an M × M op -equivariant continuous mapping
f : X → Y inducing g on connected components. It follows that f is a weak equivalence as X
and Y both have contractible connected components. The result now follows from Corollary 2.7
and Theorem 2.8.
Next we prove in an elementary fashion that bi-equivariant classifying spaces for M exist. A
more canonical construction, using simplicial sets, was described earlier.
Lemma 7.3. Let M be a monoid.
(1) If X is a projective (free) M × M op -CW complex such that π0 (X) ∼= M and πq (X, x) = 0
for all 1 ≤ q < n and x ∈ X, then there exists a projective M × M op -CW complex Y
containing X as a projective M × M op -CW subcomplex and such that Yn = Xn and
πq (Y, y) = 0 for all y ∈ Y and 1 ≤ q ≤ n.
(2) If X is a projective (free) M × M op -CW complex such that π0 (X) ∼= M and πq (X, x) = 0
for all 1 ≤ q < n and x ∈ X, then there exists a projective (free) M × M op -CW complex
Y with contractible connected components containing X as a projective M × M op -CW
subcomplex and such that Yn = Xn .
Proof. This is a minor adaptation of the proof of Lemma 6.4 that we leave to the reader.
Corollary 7.4. Let M be a monoid. Then there exists a free M × M op -CW complex X with π0 (X) ∼= M and each connected component of X contractible.
Proof. By Lemma 7.3 it suffices to construct a free M × M op -graph X with π0 (X) ∼= M . We take
X0 = M × M and we take an edge set in bijection with M × M × M . The edge (mL , mR , m) will
connect (mL , mmR ) to (mL m, mR ). Then X is a free M × M op -graph. Notice that if (m1 , m2 )
is connected by an edge to (m′1 , m′2 ), then m1 m2 = m′1 m′2 . On the other hand, (1, m2 , m1 ) is
an edge from (1, m1 m2 ) to (m1 , m2 ) and hence there is a bijection π0 (X) → M sending the
component of (m1 , m2 ) to m1 m2 and this mapping is an M × M op -equivariant bijection.
Example 7.5. It follows from the definitions and results in Section 5 that the geometric realisation |←→EM | of the nerve of the two-sided Cayley graph category of M is a bi-equivariant classifying space for M .
Corollary 7.6. If X is a bi-equivariant classifying space for M , then M \X/M ≃ |BM |.
Proof. We have M \|←→EM |/M ∼= |BM |. The result now follows from Theorem 7.2 and Proposition 3.5.
Another important definition for this paper is the following. A monoid M is of type bi-Fn if
there is a bi-equivariant classifying space X for M such that Xn is M ×M op -finite, i.e., M \X/M
has finite n-skeleton. We say that M is of type bi-F∞ if M has a bi-equivariant classifying space
X that is of M × M op -finite type, i.e., M \X/M is of finite type. Clearly by making use of the
canonical two-sided classifying space |←→EM | we can immediately conclude that any finite monoid
is of type bi-F∞ .
Recall that a monoid M is said to be of type bi-FPn if there is a partial resolution of the
(ZM, ZM )-bimodule ZM
An → An−1 → · · · → A1 → A0 → ZM → 0
where A0 , A1 , . . . , An are finitely generated projective (ZM, ZM )-bimodules. Monoids of type
bi-FPn were studied by Kobayashi and Otto in [KO01]. We note that this differs from the
definition of bi-FPn considered in [AH03], which is called weak bi-FPn by Pride in [Pri06] where
it is shown to be equivalent to being simultaneously of type left- and right-FPn . In this paper by
bi-FPn we shall always mean bi-FPn in the above sense of Kobayashi and Otto. The property bi-FPn is of interest because of its connections with the study of Hochschild cohomology [Wei94,
Chapter 9]. Kobayashi investigated bi-FPn in [Kob05, Kob07, Kob10] proving, in particular,
that any monoid which admits a presentation by a finite complete rewriting system is of type
bi-FPn . This has applications for the computation of Hochschild cohomology. We shall recover
this theorem of Kobayashi below in Section 11 as an application of our results on equivariant
discrete Morse theory and collapsing schemes. See also [Pas08] for further related results on
bi-FPn .
The following result relates bi-Fn with bi-FPn .
Proposition 7.7. Let M be a monoid.
(1) For 0 ≤ n ≤ ∞, if M is of type bi-Fn , then it is of type bi-FPn .
(2) If M is of type bi-F∞ , then it is of type bi-Fn for all n ≥ 0.
(3) If M is of type bi-Fn for 0 ≤ n ≤ ∞, then M is of type left-Fn and type right-Fn .
(4) For 0 ≤ n ≤ ∞, a group is of type bi-Fn if and only if it is of type Fn .
Proof. The first item follows using that the cellular chain complex of a bi-equivariant classifying space X can be augmented, as discussed earlier, to give a bimodule resolution of ZM and that if X is built up from pushouts as per (2.1) (with M × M op in place of M ), then the n-th chain module is isomorphic to ZPn as a bimodule and hence is projective. The second item is trivial.
For the third item, one verifies that if ←→EM is the two-sided bar construction, then ←−EM ∼= ←→EM /M where ←−EM is the left bar construction. Suppose now that X is a bi-equivariant classifying space for M such that Xn is M × M op -finite. Then X ≃M ×M op |←→EM | by Theorem 7.2. Therefore, X/M ≃M |←→EM |/M = |←−EM | and X/M is a projective M -CW complex with (X/M )n = Xn /M being M -finite by Corollary 3.8. Thus if M is of type bi-Fn for 0 ≤ n ≤ ∞, then M is of type left-Fn and dually right-Fn .
If G is a group of type bi-Fn , then it is of type Fn by the previous item and Proposition 6.9.
Conversely, suppose that X is a free G-CW complex with G-finite n-skeleton. Then using the
right G-set structure on G × G from Proposition 3.7 we have that Y = (G × G) ⊗G X is a
projective G × Gop -CW complex by Proposition 3.2 such that Yn is G × Gop -finite. Moreover,
π0 (Y ) ∼= (G × G) ⊗G π0 (X) by Proposition 3.4. But π0 (X) is the trivial G-set and (G × G) ⊗G 1 ∼= G as a G × Gop -set via (g, h) ⊗ 1 7→ gh. Finally, since G × G is a free right G-set on a set of generators in bijection with G by Proposition 3.7, it follows that as a topological space Y = ∐G X and hence each component of Y is contractible. This completes the proof.
The proof of Proposition 7.7 establishes the following proposition.
Proposition 7.8. If X is a bi-equivariant classifying space for M , then X/M is an equivariant
classifying space for M .
Sometimes it will be convenient to use the following reformulation of the property bi-Fn .
Proposition 7.9. Let M be a monoid. The following are equivalent for 0 ≤ n < ∞.
(1) M is of type bi-Fn .
(2) There is an M × M op -finite projective M × M op -CW complex X of dimension at most n with π0 (X) ∼= M and πq (X, x) = 0 for 1 ≤ q < n and x ∈ X.
Proof. This is entirely analogous to the proof of Proposition 6.10.
If M is a monoid and A ⊆ M , then the two-sided Cayley digraph ←→Γ(M, A) is the digraph with vertex set M × M and with edges in bijection with elements of M × M × A. The directed edge (mL , mR , a) goes from (mL , amR ) to (mL a, mR ) and we draw it as
(mL , amR ) −a→ (mL a, mR ).
Note that ←→Γ(M, A) is a free M × M op -graph and is M × M op -finite if and only if A is finite. Also note that if (m1 , m2 ) is connected to (m′1 , m′2 ) by an edge, then m1 m2 = m′1 m′2 . Hence
multiplication of the coordinates of a vertex induces a surjective M × M op -equivariant mapping π0 (←→Γ(M, A)) → M . If A is a generating set for M , then the mapping is an isomorphism because if m1 , m2 ∈ M and u ∈ A∗ is a word representing m1 , then there is a directed path labelled by u from (1, m1 m2 ) to (m1 , m2 ). Namely, if u = a1 · · · ak with ai ∈ A and m = m1 m2 , then the path labelled by u from (1, m) to (m1 , m2 ) is
(1, m) −a1→ (a1 , a2 · · · ak m2 ) −a2→ (a1 a2 , a3 · · · ak m2 ) −a3→ · · · −ak→ (m1 , m2 ). (7.2)
A monoid M is said to be dominated by a subset A if whenever f, g : M → N are monoid
homomorphisms with f |A = g|A , one has f = g. In other words, the inclusion ⟨A⟩ ↪ M is an epimorphism (in the category-theoretic sense). Of course, a generating set of M dominates
M . Note that if A is a subset of an inverse monoid M (e.g., a group), then A dominates M if
and only if A generates M as an inverse monoid. Hence an inverse monoid is finitely generated if and only if it is finitely dominated. Kobayashi gives an example of an infinitely generated monoid that is
finitely dominated. See [Kob07] for details.
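For instance, the subset {1} dominates (Z, +): if f, g : Z → N are monoid homomorphisms agreeing at 1, then f (−1) and g(−1) are both the (necessarily unique) inverse of f (1) = g(1), whence f = g; note that {1} generates Z as a group but not as a monoid.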
Theorem 7.10. The following are equivalent for a monoid M .
(1) M is of type bi-F1 .
(2) M is of type bi-FP1 .
(3) There is a finite subset A ⊆ M such that the natural mapping π0 (←→Γ(M, A)) → M is an isomorphism.
(4) There is a finite subset A ⊆ M that dominates M .
In particular, any finitely generated monoid is of type bi-F1 .
Proof. The equivalence of (2) and (4) was established by Kobayashi [Kob07] using Isbell's zig-zag lemma (actually, the equivalence of (3) and (4) is also direct from Isbell's zig-zag lemma).
Assume that (3) holds. Then ←→Γ(M, A) is M × M op -finite and so M is of type bi-F1 by Proposition 7.9. Proposition 7.7 shows that (1) implies (2).
Assume that M is of type bi-FP1 . Then we have a partial free resolution
ZM ⊗ ZM −µ→ ZM −→ 0
of finite type where µ is induced by the multiplication in ZM . Since M is of type bi-FP1 , ker µ is finitely generated. It is well known that ker µ is generated as a ZM ⊗ ZM op -module by the elements m ⊗ 1 − 1 ⊗ m with m ∈ M . Indeed, if ∑ ci mi ⊗ ni is in ker µ, then ∑ ci mi ni = 0 and so
∑ ci mi ⊗ ni = ∑ ci mi ⊗ ni − ∑ ci (1 ⊗ mi ni )
= ∑ ci (mi ⊗ ni − 1 ⊗ mi ni )
= ∑ ci (mi ⊗ 1 − 1 ⊗ mi )ni .
Hence there is a finite subset A ⊆ M such that ker µ is generated by the elements a ⊗ 1 − 1 ⊗ a with a ∈ A. We claim that the natural surjective mapping π0 (←→Γ(M, A)) → M is an isomorphism. Identifying Z[M × M op ] with ZM ⊗ ZM op as rings and Z[M × M ] with ZM ⊗ ZM as bimodules, we have a bimodule homomorphism
λ : Z[M × M ] → Zπ0 (←→Γ(M, A))
sending (mL , mR ) to its connected component in ←→Γ(M, A) and µ factors as λ followed by the natural mapping
Zπ0 (←→Γ(M, A)) → ZM.
Clearly, ker λ ⊆ ker µ and so to prove the result it suffices to show that ker µ ⊆ ker λ. Now ker µ
is generated by the elements (1, a) − (a, 1) with a ∈ A under our identifications. But (1, 1, a) is
an edge from (1, a) to (a, 1). Thus (1, a) − (a, 1) ∈ ker λ for all a ∈ A. This establishes that (2)
implies (3), thereby completing the proof.
If G is a group, then it follows from Theorem 7.10 that G ∪ {0} is of type bi-F1 if and only
if G is finitely generated. Indeed, G ∪ {0} is an inverse monoid and hence finitely dominated
if and only if finitely generated. But G ∪ {0} is finitely generated if and only if G is finitely
generated. On the other hand, G ∪ {0} is both of type left- and right-F∞ for any group G by
Corollary 6.23. Thus bi-Fn is a much stronger notion.
Remark 7.11. It can be shown that if M is a monoid and M 0 is the result of adjoining a 0 to M , and M 0 is of type bi-Fn , then M is of type bi-Fn . The idea is that if X is a bi-equivariant classifying space for M 0 , then the union Y of components of X corresponding to elements of M is a bi-equivariant classifying space for M and Yn will be M × M op -finite if Xn is M 0 × (M 0 )op -finite. More generally, if T is a submonoid of M such that M \ T is an ideal, then M being of
type bi-Fn implies T is also of type bi-Fn .
Next we show that finitely presented monoids are of type bi-F2 . The proof is similar to the
proof of Theorem 6.14, which is in fact a consequence.
Theorem 7.12. Let M be a finitely presented monoid. Then M is of type bi-F2 .
Proof. Suppose that M is generated by a finite set A with defining relations u1 = v1 , . . . , un =
vn . We construct an M × M op -finite 2-dimensional free M × M op -CW complex X with 1-skeleton the two-sided Cayley graph ←→Γ(M, A) by attaching an M × M op -cell M × M × B 2 for each relation. Suppose that ui and vi map to mi in M . Let pi , qi be the paths from (1, mi ) to (mi , 1) labelled by ui and vi , respectively, cf. (7.2). Then we glue in a disk di with boundary path pi qi−1 and glue in M × M × B 2 using Proposition 2.3 (so {(mL , mR )} × B 2 is sent to mL di mR ). Then X is a free M × M op -CW complex of dimension at most 2 that is M × M op -finite and π0 (X) ∼= M . By Proposition 7.9, it suffices to prove that each connected component
of X is simply connected.
The connected component X(m) of X corresponding to m ∈ M is a digraph rooted at (1, m) by (7.2). Let Tm be a directed spanning tree for X(m) rooted at (1, m). Let e = (n1 , an2 ) −a→ (n1 a, n2 ) be a directed edge of X(m) not belonging to Tm . Then the corresponding generator of
π1 (X(m), (1, m)) is of the form peq −1 where p and q are directed paths from (1, m) to (n1 , an2 )
and (n1 a, n2 ), respectively. Let u be the label of p and v be the label of q. Then ua = v in
M . Thus it suffices to prove that if x, y ∈ A∗ are words which are equal in M to m′ labelling
respective paths from (1, m) to (m′ , m′′ ) with m′ m′′ = m, then the corresponding loop ℓ labelled
xy −1 at (1, m) is null homotopic.
By induction on the length of a derivation from x to y, we may assume that x = wui w′
and y = wvi w′ for some i = 1, . . . , n. Then the path labelled by w starting at (1, m) ends at
(w, mi w′ m′′ ) where we recall that mi is the image of ui , vi in M . Then wdi w′ m′′ is a 2-cell
bounded by parallel paths from (w, mi w′ m′′ ) to (wmi , w′ m′′ ) labeled by ui and vi , respectively.
It follows that the paths labelled by x and y from (1, m) to (m′ , m′′ ) are homotopic relative
to endpoints and hence ℓ is null homotopic. This completes the proof that X(m) is simply
connected.
Remark 7.13. We currently do not know the precise relationship between bi-F2 and finite
presentability for monoids. Specifically we have the question: Is there a finitely generated bi-F2
monoid that is not finitely presented? Even for inverse monoids this question remains open.
We next observe that finitely generated free monoids are bi-F∞ .
Proposition 7.14. Let A be a finite set. Then the free monoid A∗ is of type bi-F∞ .
Proof. Each connected component of ←→Γ(A∗ , A) is a tree and hence contractible. Thus ←→Γ(A∗ , A) is an A∗ × (A∗ )op -finite bi-equivariant classifying space for A∗ .
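For instance, when A = {a} the component of ←→Γ(A∗ , A) containing (1, a2 ) is the directed path (1, a2 ) −a→ (a, a) −a→ (a2 , 1); in general, the component corresponding to a word w is a path with |w| + 1 vertices, hence a tree.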
Theorem 6.15 has an analogue for bi-Fn and bi-FPn with essentially the same proof, which
we omit.
Theorem 7.15. Let M be a monoid of type bi-F2 . Then M is of type bi-Fn if and only if M
is of type bi-FPn for 0 ≤ n ≤ ∞.
Observe that Theorem 7.15 implies that M is of type bi-F∞ if and only if M is of type bi-Fn
for all n ≥ 0. The analogue of Proposition 6.16 in our setting again admits a very similar proof
that we omit.
Proposition 7.16. If M is of type bi-Fn with n ≥ 1, then M has a free M × M op -CW complex
X that is a bi-equivariant classifying space for M where Xn is M × M op -finite.
Proposition 6.26 also has a two-sided analogue.
Proposition 7.17. If M, N are of type bi-Fn , then M × N is of type bi-Fn .
Let us turn to some inheritance properties for bi-Fn . If I is an ideal of M containing an
identity e, then e is a central idempotent and M eM = M e = eM = eM e. Indeed, em =
(em)e = e(me) = me as em, me ∈ I. If f, f ′ ∈ E(M ), then f e, f ′ e ∈ E(eM e) and e(M f ×
f ′ M )e = eM ef e × f ′ eM e as an eM e-eM e-biset and hence is finitely generated projective.
Thus if P is a (finitely generated) projective M × M op -set, then eP e is a (finitely generated)
projective eM e × eM eop -set.
Proposition 7.18. Let M be a monoid and 0 ≤ n ≤ ∞ and e ∈ E(M ) be a central idempotent.
If M is of type bi-Fn , then so is eM e.
Proof. Let X be a bi-equivariant classifying space of M such that Xn is M ×M op -finite. Suppose
that X is obtained via pushouts as per (2.1) (but with M ×M op in place of M ). Then each ePk e
is a projective eM e × eM eop -set and is finitely generated whenever Pk was finitely generated
by the observation preceding the proposition. Thus eXe is a projective eM e × eM eop -CW
complex and (eXe)n = eXn e is eM e × eM eop -finite. Also, since eXe ∼= (eM × M e) ⊗M ×M op X, we deduce that π0 (eXe) ∼= eπ0 (X)e ∼= eM e by Proposition 3.4. If X(m) is the component of X
corresponding to m ∈ eM e, then eX(m)e is the component of eXe corresponding to m in eXe
and is a retract of X(m). But X(m) is contractible and hence eX(m)e is contractible. This
shows that eXe is a bi-equivariant classifying space for eM e, completing the proof.
Two monoids M and N are Morita equivalent if the categories of left M -sets and left N -sets
are equivalent. It is known that this is the case if and only if there is an idempotent e ∈ E(M )
such that xy = 1 for some x, y ∈ M with ey = y and eM e ∼= N [Kna72]. It follows easily
that if M and N are Morita equivalent, then so are M op and N op . Note that if e is as above,
then the functor A 7→ eA ∼= eM ⊗M A from M -sets to N -sets (identifying N with eM e) is an equivalence of categories with inverse B 7→ M e ⊗N B. This uses that M e ⊗eM e eM ∼= M as M × M op -sets via the multiplication map (the inverse bijection takes m ∈ M to mxe ⊗ y) and eM ⊗M M e ∼= eM e as eM e × (eM e)op -sets (via the multiplication with inverse eme 7→ em ⊗ e).
It follows that if P is a (finitely generated) projective M -set, then eP is a (finitely generated)
projective N -set (as being projective is categorical and a projective is finitely generated if and
only if it is a coproduct of finitely many indecomposable projectives, which is also categorical).
In particular, eM is a finitely generated projective N -set.
Proposition 7.19. Let M and N be Morita equivalent monoids and 0 ≤ n ≤ ∞.
(1) M is of type left-Fn if and only if N is of type left-Fn .
(2) M is of type right-Fn if and only if N is of type right-Fn .
(3) M is of type bi-Fn if and only if N is of type bi-Fn .
Proof. By symmetry it suffices to prove the implications from left to right. We may assume
without loss of generality that N = eM e where 1 = xy with ey = y. Notice that 1 = xy = xey
and so replacing x by xe, we may assume that xe = x. To prove (1), suppose that X is an
equivariant classifying space for M such that Xn is M -finite. Then eM ⊗M X ∼= eX is a
projective N -CW complex by Proposition 3.2 such that (eX)n = eXn is N -finite. But eX is a
retract of X and hence contractible. We deduce that N is of type left-Fn .
The proof of (2) is dual. To prove (3), observe that (e, e)(M × M op )(e, e) = eM e × (eM e)op
and that we have (x, y)(y, x) = (1, 1) and (e, e)(y, x) = (y, x) in M ×M op because xy = e, ey = y
and xe = x. Thus M ×M op is Morita equivalent to N ×N op and eM ×M e is a finitely generated
projective N × N op -set. Suppose that X is a bi-equivariant classifying space for M such that
Xn is M × M op -finite. Then (eM × M e) ⊗M ×M op X ∼= eXe is a projective N × N op -CW complex such that (eXe)n = eXn e is N × N op -finite by Corollary 3.2. Also, π0 (eXe) ∼= eπ0 (X)e ∼= N by Proposition 3.4. Moreover, if m ∈ eM e and X(m) is the component of X corresponding to
m, then the component of eXe corresponding to m is eX(m)e, which is a retract of X(m) and
hence contractible. Thus eXe is a bi-equivariant classifying space of N . The result follows.
There are examples of Morita equivalent monoids that are not isomorphic; see [Kna72].
We define the geometric dimension of M to be the minimum dimension of a bi-equivariant
classifying space for M . The Hochschild cohomological dimension of M , which we write dim M ,
is the length of a shortest projective resolution of ZM as a Z[M × M op ]-module. Of course,
the Hochschild cohomological dimension bounds both the left and right cohomological dimension and the geometric dimension bounds the Hochschild cohomological dimension. Also the
geometric dimension bounds both the left and right geometric dimensions because if X is a bi-equivariant classifying space for M of dimension n, then X/M is an equivariant classifying space of dimension
n.
The following theorem has an essentially identical proof to Theorem 6.27.
Theorem 7.20. Let M be a monoid. Then M has a bi-equivariant classifying space of dimension max{dim M, 3}.
Free monoids have a forest for a bi-equivariant classifying space and hence have geometric
dimension 1. It is well known (see e.g. [Mit72]) that they have Hochschild cohomological
dimension 1.
It is known that a monoid has Hochschild cohomological dimension 0 if and only if it is a
finite regular aperiodic monoid with sandwich matrices invertible over Z (see [Che84]). For
instance, any finite aperiodic inverse monoid has Hochschild cohomological dimension 0. A non-trivial monoid of Hochschild cohomological dimension 0 does not have geometric dimension 0 because M would have to be a projective M -biset. So M ∼= M e × f M , with e, f ∈ E(M ),
via an equivariant map ϕ sending (e, f ) to 1 (as M being finite aperiodic implies that 1 is the
unique generator of M as a two-sided ideal). But then f = ϕ(e, f )f = ϕ(e, f ) = 1 and similarly
e = 1 and so M is trivial. Thus non-trivial monoids of Hochschild cohomological dimension 0
do not have geometric dimension 0.
8. Brown’s theory of collapsing schemes
The theory of collapsing schemes was introduced by Brown in [Bro92]. Since then it has
become an important and often-used tool for proving that certain groups are of type F∞ . The
first place the idea appears is in a paper of Brown and Geoghegan [BG84] where they had a
cell complex with one vertex and infinitely many cells in each positive dimension, and they
showed how it could be collapsed to a quotient complex with only two cells in each positive
dimension. Brown went on to develop this idea further in [Bro92] formalising it in his theory
of collapsing schemes, and applying it to give a topological proof that groups which admit
presentations by finite complete rewriting systems are of type F∞ (see Section 11 below for
the definition of complete rewriting system). Brown’s theory of collapsing schemes was later
rediscovered under the name of discrete Morse theory [For95, For02], an important area in
algebraic combinatorialists. Chari [Cha00] formulated discrete Morse theory combinatorially
via Morse matchings, which turn out to be the same thing as collapsing schemes.
The basic idea of collapsing schemes for groups is as follows. Suppose we are given a finitely
presented group G and we would like to prove it is of type F∞ . Then we can first begin with the
big K(G, 1) complex |BG| with infinitely many n-cells for each n. Then in certain situations
it is possible to show how one can collapse away all but finitely many cells in each dimension
resulting in a K(G, 1) much smaller than the one we started with. The collapse is carried out
using a so-called collapsing scheme associated with the simplicial set BG. It turns out that any
group which is presentable by a finite complete rewriting system admits a collapsing scheme
that, using this process, can be used to prove the group is of type F∞ ; see [Bro92, page 147].
As mentioned in the introduction above, Brown in fact develops this theory for monoids in
general, and applies the theory of collapsing schemes to show that if M admits a presentation
by a finite complete rewriting system then its classifying space |BM | has the homotopy type of
a CW complex with only finitely many cells in each dimension. However, as discussed in detail
in the introduction to this article, this information about the space |BM | is not enough on its
own to imply that the monoid M is of type left-FP∞ .
Motivated by this, in this section we shall develop the theory of M -equivariant collapsing
schemes. We shall prove that if an M -simplicial set admits an M -equivariant collapsing scheme
of finite type then the monoid is of type left-F∞ . We then prove that if M admits a finite
−−→
complete rewriting system then EM admits an M -equivariant collapsing scheme of finite type,
thus giving a topological proof that such monoids are of type left-F∞ . To do this, we shall identify conditions under which a collapsing scheme for BM can be lifted to give an M -equivariant
−−→
collapsing scheme for EM . These conditions will hold in a number of different situations, including when M admits a presentation by a finite complete rewriting system and when M is a,
so-called, factorable monoid [HO14]. We also develop the two-sided theory. As a consequence
we also obtain a topological proof of the fact that such a monoid is of type bi-F∞ , recovering
a theorem of Kobayashi [Kob05].
8.1. Collapsing schemes. Let K = ∪i≥0 Ki be a simplicial set and let X = |K| be its
geometric realisation. We identify the cells of X with the non-degenerate simplices of K. A
collapsing scheme for K consists of the following data:
• A partition of the cells of X into three classes, E, C, R, called the essential, collapsible
and redundant cells, respectively, where the collapsible cells all have dimension at least
one.
• Mappings c and i which associate with each redundant n-cell τ a collapsible (n + 1)-cell
c(τ ), and a number i(τ ), such that τ = di(τ ) (c(τ )).
Let σ = c(τ ). If τ ′ is a redundant n-cell such that τ ′ = dj σ for some j ≠ i(τ ) then we call τ ′
an immediate predecessor of τ and write τ ′ ≺ τ . Furthermore, the conditions for a collapsing
scheme are satisfied, which means:
(C1) for all n, the mapping c defines a bijection between Rn (the redundant n-cells) and
Cn+1 (the collapsible (n + 1)-cells).
(C2) there is no infinite descending chain τ ≻ τ ′ ≻ τ ′′ ≻ · · · of redundant n-cells.
Condition (C2) clearly implies that there is a unique integer i such that τ = di (c(τ )) (otherwise
we would have τ ≻ τ , leading to an infinite descending chain). It also follows from (C2) that,
by König's lemma, there cannot be arbitrarily long descending chains τ0 ≻ · · · ≻ τk . This is a
key fact in the proof of [Bro92, Proposition 1] since it gives rise to the notion of ‘height’:
Definition 8.1 (Height). The height of a redundant cell τ , written height(τ ), is the maximum
length of a descending chain τ = τ0 ≻ τ1 ≻ · · · ≻ τk .
We say that a collapsing scheme is of finite type if it has finitely many essential cells of each
dimension.
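For instance, the simplicial set modelling the unit interval, with non-degenerate simplices two vertices v0 , v1 and one edge e, admits a collapsing scheme with E = {v0 }, R = {v1 } and C = {e}, where c(v1 ) = e; it is of finite type with a single essential cell.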
In the construction of the ‘small’ CW complex in the proof of [Bro92, Proposition 1] the
redundant n-cells are adjoined in order of their heights, guaranteeing in the proof that the
adjunction of τ and c(τ ) is an elementary expansion. This is the key idea in the proof
which is that each pair (τ, c(τ )) of redundant and corresponding collapsible cells may be adjoined
without changing the homotopy type, and so in the end it is only the essential cells that matter.
More precisely, Brown proves that if K is a simplicial set with a collapsing scheme, then its
geometric realisation X = |K| admits a canonical quotient CW complex Y , whose cells are
in 1–1 correspondence with the essential cells of X. This notion of height in Brown’s theory
relates to the values taken by the discrete Morse function in Forman’s theory (see [For02, page
10]). A discrete Morse function gives one a way to build the simplicial complex by attaching
the simplices in the order prescribed by the function, i.e., adding first the simplices which are
assigned the smallest values. Brown’s essential cells are called ‘critical’ in Forman’s theory.
9. M -equivariant collapsing schemes
In this section we develop the theory of M -equivariant collapsing schemes, or equivalently,
of M -equivariant discrete Morse theory. Results on G-equivariant discrete Morse theory, for G
a group, may be found in [Fre09].
Let K = ∪i≥0 Ki be a simplicial set with degeneracy and face operators di , si , and equipped
with a collapsing scheme (E, R, C, c, i). Here E, R and C partition the cells (which are in
bijective correspondence with the non-degenerate simplices) of K.
Let M be a monoid acting on the simplicial set K with the following conditions satisfied:
(A1) The action of M maps n-simplices to n-simplices, and commutes with di and si , that
is, M is acting by simplicial morphisms.
(A2) For every n-simplex σ and m ∈ M , σ is a cell (i.e. is a non-degenerate simplex) if and
only if mσ is a cell, in which case σ ∈ E (respectively R, C) if and only if mσ ∈ E
(respectively R, C).
(A3) If (σ, τ ) ∈ Rn × Cn+1 is a matched redundant-collapsible pair (i.e. τ = c(σ)) then so is
the pair m(σ, τ ) = (mσ, mτ ) ∈ Rn × Cn+1 , i.e., c(mσ) = mc(σ) for σ ∈ Rn .
(A4) There is a subset B ⊆ E ∪ R ∪ C such that for all n the set of n-cells is a free left M -set
with basis Bn (where Bn is the set of n-cells in B). Let E B = E ∩ B, RB = R ∩ B and
C B = C ∩ B. Then En is a free left M -set with basis EnB , and similarly for Rn and Cn .
(A5) For every matched pair (σ, τ ) ∈ R × C, σ ∈ RB if and only if τ ∈ C B . In particular,
for every matched pair (σ, τ ) there is a unique pair (σ ′ , τ ′ ) ∈ RB × C B and m ∈ M
such that (σ, τ ) = m(σ ′ , τ ′ ); namely, if σ = mσ ′ with σ ′ ∈ RB and τ ′ = c(σ ′ ), then
mτ ′ = mc(σ ′ ) = c(mσ ′ ) = c(σ) = τ .
(A6) For every redundant cell τ and every m ∈ M
height(τ ) = height(mτ ),
with height defined as in Definition 8.1 above.
These conditions imply that K is a rigid free left M -simplicial set and hence by Lemma 4.4 the
action of M on K induces an action of M on the geometric realisation |K| by continuous maps
making |K| into a free left M -CW complex. When the above axioms hold, we call (E, R, C, c, i)
an M -equivariant collapsing scheme for the rigid free left M -simplicial set K. Dually, given a
rigid free right M -simplicial set K with a collapsing scheme satisfying the above axioms for K
as an M op -simplicial set we call (E, R, C, c, i) an M -equivariant collapsing scheme for K. If K
is a bi-M -simplicial set we say (E, R, C, c, i) an M -equivariant collapsing scheme if the axioms
are satisfied for K as a left M × M op -simplicial set.
Our aim is to prove a result about the M -homotopy type of |K| when K has an M -equivariant
collapsing scheme. Before doing this we first make some observations about mapping cylinders
and the notion of elementary collapse.
9.1. Mapping cylinders and elementary collapse. If X is a subspace of a space Y then
D : Y → X is a strong deformation retraction if there is a map F : Y × I → Y such that, with
Ft : Y → Y defined by Ft (y) = F (y, t), we have (i) F0 = 1Y , (ii) Ft (x) = x for all (x, t) ∈ X × I,
and (iii) F1(y) = D(y) for all y ∈ Y. If D : Y → X is a strong deformation retraction then D is a homotopy equivalence, a homotopy inverse of which is the inclusion i : X ↪ Y.
Definition 9.1 (Mapping cylinder). Let f : X → Y be a cellular map between CW complexes. The mapping cylinder Mf is defined to be the adjunction complex Y ⊔_{f0} (X × I), where f0 : X × {0} → Y is the map (x, 0) ↦ f(x). Let i1 : X → X × I, x ↦ (x, 1) and let i0 : X → X × I, x ↦ (x, 0). Let p be the projection p : X × I → X, (x, t) ↦ x. Also set i = k ◦ i1, with k as in the diagram below. Thus we have
we have
X
p
f
Y
j
i1 i0
X ×I
k
Mf
The map from X × I to Y taking (x, t) to f (x), and the identity map on Y , together induce a
retraction r : Mf → Y .
The next proposition is [Geo08, Proposition 4.1.2].
Proposition 9.2. The map r is a homotopy inverse for j, so r is a homotopy equivalence.
Indeed there is a strong deformation retraction D : Mf × I → Mf of Mf onto Y such that
D1 = r.
The following result is the M-equivariant analogue of [Geo08, Proposition 4.1.2]. Recall that if X is a projective (resp. free) M-CW complex then X × I is a projective (resp. free) M-CW complex, where I is given the trivial action.
Lemma 9.3. Let f : X → Y be a continuous M -equivariant cellular map between free (projective) M -CW complexes X and Y . Let Mf be the pushout of
(pushout diagram: i0 : X → X × I and f : X → Y, with k : X × I → Mf and j : Y → Mf)
where i0 : X → X × I, x ↦ (x, 0). Then:
(i) Mf is a free (projective) M -CW complex; and
(ii) there is an M -equivariant strong deformation retraction r : Mf → Y . In particular Mf
and Y are M -homotopy equivalent.
Proof. It follows from Lemma 2.2 that Mf has the structure of a free (projective) M -CW
complex, proving part (i). For part (ii) first note that the map from X × I to Y taking (x, t)
to f (x), and the identity map on Y , together induce a retraction r : Mf → Y . It follows from
Proposition 9.2 that r is a homotopy equivalence with homotopy inverse j. By Corollary 2.7 to
show that r is an M -homotopy equivalence it suffices to verify that r is an M -equivariant map
between the sets Mf and Y . But M -equivariance of r follows from the definitions of r and Mf ,
the definition of the action of M on Mf which in turn is determined by the actions of M on
X × I and on Y , together with the assumption that f : X → Y is M -equivariant.
The fundamental idea of collapsing schemes, and discrete Morse theory, is that of a collapse.
The following definition may be found in [Coh73, page 14] and [FS05, Section 2]. We use the
same notation as in [Coh73]. In particular ≈ denotes homeomorphism of spaces.
Definition 9.4 (Elementary collapse). If (K, L) is a CW pair then K collapses to L by an
elementary collapse, denoted K ցe L, if and only if:
(1) K = L∪en−1 ∪en where en and en−1 are open cells of dimension n and n−1 respectively,
which are not in L, and
(2) there exists a ball pair (B n , B n−1 ) ≈ (I n , I n−1 ) and a map ϕ : B n → K such that
(a) ϕ is a characteristic map for en
(b) ϕ|B n−1 is a characteristic map for en−1
(c) ϕ(P n−1 ) ⊆ Ln−1 where P n−1 = cl(∂B n − B n−1 ).
Note that in this statement P n−1 is an (n − 1)-ball (i.e. is homeomorphic to I n−1 ). We
say that K collapses to L, writing K ց L, if L may be obtained from K by a sequence of
elementary collapses. We also say that K is an elementary expansion of L. An elementary
collapse gives a way of modifying a CW complex K, by removing the pair {en−1 , en }, without
changing the homotopy type of the space. We can write down a homotopy which describes such
an elementary collapse K ցe L as follows. Let (K, L) be a CW pair such that K ցe L. Set ϕ0 = ϕ|_{P^{n−1}} in the above definition. Then
ϕ0 : (P^{n−1}, ∂P^{n−1}) → (L^{n−1}, L^{n−2})
(using the identification P^{n−1} ≈ I^{n−1}) and
(K, L) ≈ (L ⊔_{ϕ0} B^n, L).
The following is [Coh73, (4.1)].
Lemma 9.5. If K ցe L then there is a cellular strong deformation retraction D : K → L.
Indeed, let K = L ∪ e^{n−1} ∪ e^n. There is a map ϕ0 : I^{n−1} ≈ P^{n−1} → L^{n−1} such that
(K, L) ≈ (L ⊔_{ϕ0} B^n, L).
But L ⊔_{ϕ0} B^n is the mapping cylinder Mϕ0 of ϕ0 : I^{n−1} → L^{n−1}. Thus by Proposition 9.2 there is a strong deformation retraction
D : K ≈ L ⊔_{ϕ0} I^n → L
such that D(e^n) = ϕ0(I^{n−1}) ⊂ L^{n−1}. The map D is given by the map r in Definition 9.1.
We may now state and prove the main result of this section.
Theorem 9.6. Let K be a rigid free left M -simplicial set with M -equivariant collapsing scheme
(E, R, C, c, i) (that is, the conditions (A1)–(A6) are satisfied). Then, with the above notation,
there is a free left M -CW complex Y with Y ≃M |K| and such that the cells of Y are in bijective
correspondence with E, and under this bijective correspondence Yn is a free left M -set with basis
EnB for all n.
Proof. Let X be the geometric realisation |K| of the simplicial set K. By axiom (A1) we have that K is a left M-simplicial set, and it follows that X = |K| has the structure of a left M-CW complex where the M-action is given by Lemma 4.4. In fact, by assumptions (A2)–(A6), X is a free M-CW complex where, for each n, the set E_n^B ∪ R_n^B ∪ C_n^B is a basis for the n-cells.
Write X as an increasing sequence of subcomplexes
X0 ⊆ X0+ ⊆ X1 ⊆ X1+ ⊆ . . .
where X_0 consists of the essential vertices, X_n^+ is obtained from X_n by adjoining the redundant n-cells and collapsible (n + 1)-cells, and X_{n+1} is obtained from X_n^+ by adjoining the essential (n + 1)-cells. We write X_n^+ as a countable union
X_n = X_n^0 ⊆ X_n^1 ⊆ X_n^2 ⊆ . . .
with X_n^+ = ∪_{i≥0} X_n^i, where X_n^{j+1} is constructed from X_n^j by adjoining (τ, c(τ)) for every redundant n-cell τ of height j. From assumptions (A1)–(A6), for every n and j, each of X_n^+, X_n and X_n^j is a free M-CW subcomplex of X.
As argued in the proof of [Bro92, Proposition 1], for every redundant n-cell τ of height j the
adjunction of (τ, c(τ )) is an elementary expansion. In this way Xnj+1 can be obtained from Xnj
by a countable sequence of simultaneous elementary expansions. The same idea, together with
Lemma 9.3, can be used to obtain an M -homotopy equivalence between Xnj and Xnj+1 . The
details are as follows.
Recall that X_n^{j+1} is obtained from X_n^j by adjoining (τ, c(τ)) for every redundant n-cell τ of height j. It follows from the axioms (A1)–(A6) that this set of pairs (τ, c(τ)) is a free M-set with basis {(τ, c(τ)) ∈ R_n^B × C_{n+1}^B : height(τ) = j}. Let (τ, c(τ)) ∈ R_n^B × C_{n+1}^B with height(τ) = j, and let m ∈ M. From the assumptions (A1)–(A6) it follows that
m · (τ, c(τ)) = (mτ, c(mτ)),
and
height(mτ) = height(τ) = j.
The pair
(X_n^{j+1}, X_n^{j+1} − {mτ, c(mτ)})
satisfies the conditions of an elementary collapse. Indeed (as argued in the proof of [Bro92, Proposition 1]) every face of c(mτ) other than mτ is either (i) a redundant cell of height < j, (ii) essential (so of height 0), or (iii) collapsible or degenerate. It follows that the
adjunction of mτ and c(mτ) is an elementary expansion. This is true for every pair (mτ, c(mτ)) where (τ, c(τ)) ∈ R_n^B × C_{n+1}^B with height(τ) = j and m ∈ M. Now X_n^{j+1} is obtained from X_n^j by adjoining all such pairs (mτ, c(mτ)). Let A = {(τ, c(τ)) ∈ R_n^B × C_{n+1}^B : height(τ) = j} and
let M × A denote the free left M -set with basis {(1, a) : a ∈ A} and action m(n, a) = (mn, a).
Then (M × A) × I^{n+1} is a disjoint union of the free M-cells (M × {a}) × I^{n+1} with a ∈ A. The characteristic maps for the collapsible (n + 1)-cells of height j combine to give an M-equivariant map ϕ : (M × A) × I^{n+1} → X_n^{j+1} such that
(E1) ϕ restricted to {(m, a)} × I^{n+1}, (m ∈ M, a ∈ A), gives characteristic maps for each of the collapsible (n + 1)-cells of height j;
(E2) ϕ restricted to {(m, a)} × I^n, (m ∈ M, a ∈ A), gives characteristic maps for each τ ∈ R_n such that c(τ) is a collapsible (n + 1)-cell of height j;
(E3) ϕ((M × A) × P^n) ⊆ (X_n^{j+1})^{≤n} (where (X_n^{j+1})^{≤n} is the subcomplex of X_n^{j+1} of cells of dimension ≤ n), where P^n = cl(∂I^{n+1} − I^n).
Set ϕ0 = ϕ|_{(M×A)×P^n}. Then
ϕ0 : (M × A) × P^n → (X_n^{j+1})^{≤n}
is a continuous M-equivariant cellular map between the free M-CW complexes (M × A) × P^n and (X_n^{j+1})^{≤n}. It follows that X_n^{j+1} is M-equivariantly isomorphic to
(9.1) (X_n^{j+1} − {(τ, c(τ)) ∈ R_n × C_{n+1} : height(τ) = j}) ⊔_{ϕ0} ((M × A) × I^{n+1}).
But since P^n = cl(∂I^{n+1} − I^n) ≈ I^n, we conclude that (9.1) is just the mapping cylinder of ϕ0.
Thus we can apply Lemma 9.3 to obtain a strong deformation retraction r_j : X_n^{j+1} → X_n^j which is also an M-homotopy equivalence. It follows that there is a retraction r_n : X_n^+ → X_n which is an M-equivariant homotopy equivalence.
We build the space Y such that |K| ≃_M Y inductively. First set Y_0 = X_0. Now suppose that an M-homotopy equivalence π_{n−1} : X_{n−1}^+ → Y_{n−1} is given. Define Y_n to be the M-CW complex Y_{n−1} ∪ (M × E_n^B), where (M × E_n^B) is a collection of free M-cells indexed by E_n^B. These free M-cells are attached to Y_{n−1} by composing with π_{n−1} the attaching maps for the essential n-cells of X. This makes sense because X_{n−1}^+ contains the (n − 1)-skeleton of X. Extend π_{n−1} to an M-homotopy equivalence π̂_{n−1} : X_n → Y_n in the obvious way. This is possible since X_n is obtained from X_{n−1}^+ by adjoining the essential n-cells. Composing r_n with π̂_{n−1} then gives an M-homotopy equivalence X_n^+ → Y_n. Passing to the union gives the M-homotopy equivalence X ≃_M Y stated in the theorem.
There is an obvious dual result for right simplicial sets which follows from the above result
by replacing M by M op . We also have the two-sided version.
Theorem 9.7. Let K be a rigid free bi-M -simplicial set with M -equivariant collapsing scheme
(E, R, C, c, i). Then, with the above notation, there is a free bi-M -CW complex Y with Y ≃M
|K| and such that the cells of Y are in bijective correspondence with E, and under this bijective
correspondence Yn is a free bi-M -set with basis EnB for all n.
Proof. This follows by applying Theorem 9.6 to the rigid free left M ×M op -simplicial set K.
10. Guarded collapsing schemes
In this section we introduce the idea of a left guarded collapsing scheme. We shall prove that whenever BM admits a left guarded collapsing scheme then −→EM will admit an M-equivariant collapsing scheme whose M-orbits of cells are in bijective correspondence with the essential cells of the given collapsing scheme for BM. Applying Theorem 9.6 it will then follow that when BM admits a left guarded collapsing scheme of finite type then the monoid M is of type left-F∞. Analogous results will hold for right guarded and right-F∞, and (two-sided) guarded and bi-F∞. In later sections we shall give some examples of monoids which admit guarded collapsing schemes of finite type, including monoids with finite complete presentations (rewriting systems), and factorable monoids in the sense of [HO14].
Definition 10.1 (Guarded collapsing schemes). Let K = ∪_{i≥0} K_i be a simplicial set and let X be its geometric realisation. We identify the cells of X with the non-degenerate simplices of K, and suppose that K admits a collapsing scheme (E, C, R, c, i). We say that this collapsing scheme is (see the sketch below):
• left guarded if for every redundant n-cell τ we have i(τ) ≠ 0;
• right guarded if for every redundant n-cell τ we have i(τ) ≠ n + 1;
• guarded if it is both left and right guarded.
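In concrete terms, guardedness is a pointwise condition on the function i and is trivial to test on any finite collection of redundant cells. In the following Python sketch the representation of cells and of the functions i and dim is hypothetical.

# i(t)   : the index i(t) in {0, ..., dim(t)+1} attached to the redundant cell t
# dim(t) : the dimension n of the redundant n-cell t
def is_left_guarded(redundant_cells, i):
    return all(i(t) != 0 for t in redundant_cells)

def is_right_guarded(redundant_cells, i, dim):
    return all(i(t) != dim(t) + 1 for t in redundant_cells)

def is_guarded(redundant_cells, i, dim):
    return is_left_guarded(redundant_cells, i) and is_right_guarded(redundant_cells, i, dim)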
In other words, the collapsing scheme is guarded provided the function i never takes either
of its two possible extreme allowable values. The aim of this section is to prove the following
result.
Theorem 10.2. Let M be a monoid and suppose that BM admits a collapsing scheme (E, C, R, c, i).
(a) If (E, C, R, c, i) is left guarded, then there is an M-equivariant collapsing scheme (E, R, C, κ, ι) for the free left M-simplicial set −→EM such that, for each n, the set of essential n-cells E_n is a free left M-set of rank |E_n|.
(b) If (E, C, R, c, i) is right guarded, then there is an M-equivariant collapsing scheme (E, R, C, κ, ι) for the free right M-simplicial set ←−EM such that, for each n, the set of essential n-cells E_n is a free right M-set of rank |E_n|.
(c) If (E, C, R, c, i) is guarded, then there is an M-equivariant collapsing scheme (E, R, C, κ, ι) for the free bi-M-simplicial set ←→EM such that, for each n, the set of essential n-cells E_n is a free M × M^op-set of rank |E_n|.
Corollary 10.3. Let M be a monoid and let |BM| be its classifying space.
(a) If BM admits a left guarded collapsing scheme of finite type then M is of type left-F∞ (and therefore also of type left-FP∞).
(b) If BM admits a right guarded collapsing scheme of finite type then M is of type right-F∞ (and therefore also of type right-FP∞).
(c) If BM admits a guarded collapsing scheme of finite type then M is of type bi-F∞ (and therefore also of type bi-FP∞).
Proof. This follows directly from Theorem 9.6 and its dual, and Theorems 9.7 and 10.2.
We shall give some examples of monoids to which this corollary applies in the next section. The rest of this section will be devoted to the proof of Theorem 10.2. Clearly part (b) of the theorem is dual to part (a). The proofs of parts (a) and (c) are very similar, the only difference being that the stronger guarded condition is needed for (c), while only left guarded is needed for (a). We will begin by giving full details of the proof of Theorem 10.2(a). Then afterwards we will explain the few modifications to the proof necessary to obtain the two-sided result (c), in particular highlighting the place where the guarded condition is needed.
10.1. Proof of Theorem 10.2(a). Let (E, C, R, c, i) be a left guarded collapsing scheme for BM. We can now ‘lift’ this collapsing scheme to a collapsing scheme (E, R, C, κ, ι) for the simplicial set −→EM in the following natural way. First observe that
m(m_1, m_2, ..., m_n) is an n-cell of −→EM
⇔ m_i ≠ 1 for all 1 ≤ i ≤ n
⇔ (m_1, m_2, ..., m_n) is an n-cell of BM.
Define an n-cell m(m_1, m_2, ..., m_n) of −→EM to be essential (respectively redundant, collapsible) if and only if (m_1, m_2, ..., m_n) is essential (respectively redundant, collapsible) in the collapsing scheme (E, R, C, c, i). This defines the partition (E, R, C) of the n-cells of −→EM for each n. We call these sets the essential, redundant and collapsible cells, respectively, of −→EM. For the mappings κ and ι, given mτ ∈ −→EM where τ = (m_1, ..., m_n) is a redundant n-cell of BM we define
(10.1) ι(mτ) = i(τ)
(10.2) κ(mτ) = m(c(τ)).
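In the hypothetical encoding used in the earlier sketches, the lift is immediate: a cell of −→EM is modelled as a pair (m, τ) with m ∈ M and τ a cell of BM, the classification is read off from the second coordinate, and (10.1)–(10.2) become one-liners.

# A cell of the lifted simplicial set is modelled as a pair (m, tau),
# where tau = (m_1, ..., m_n) is a cell of BM and m is an element of M.
def lifted_kind(cell, kind):   # inherits essential/redundant/collapsible from BM
    m, tau = cell
    return kind(tau)

def iota(cell, i):             # (10.1): iota(m tau) = i(tau)
    m, tau = cell
    return i(tau)

def kappa(cell, c):            # (10.2): kappa(m tau) = m c(tau)
    m, tau = cell
    return (m, c(tau))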
We claim that (E, R, C, κ, ι) is an M-equivariant collapsing scheme for the free left M-simplicial set −→EM such that, for each n, the set of essential n-cells E_n is a free left M-set of rank |E_n|. Once proved, this will complete the proof of Theorem 10.2(a).
We begin by proving that (E, R, C, κ, ι) is a collapsing scheme for −→EM, and then we will verify that all of the conditions (A1) to (A6) are satisfied.
10.1.1. Proving that (E, R, C, κ, ι) is a collapsing scheme.
Proposition 10.4. With the above definitions, (E, R, C, κ, ι) is a collapsing scheme for the simplicial set −→EM.
We have already observed that (E, R, C) partitions the cells of −→EM. To complete the proof
of the proposition we need to verify that the conditions (C1) and (C2) in the definition of
collapsing scheme are both satisfied. For part of the proof we will find it useful to recast the
ideas in terms of certain bipartite digraphs. The idea of viewing collapsing schemes in terms
of matchings in bipartite graphs is a natural one and has been used in the literature; see for
example [Cha00]. Let us now introduce the terminology and basic observations about digraphs
that we shall need.
A directed graph D consists of: a set of edges ED, a set of vertices V D, and functions α and β from ED to V D. For e ∈ ED we call α(e) and β(e) the initial and terminal vertices of the directed edge e. A directed path of length n in D is a sequence of edges e_1 e_2 . . . e_n such that β(e_i) = α(e_{i+1}) for each i. Note that edges in paths are allowed to repeat, and vertices can also repeat in the sense that β(e_i) = β(e_j) is possible for distinct i and j
(in graph theory literature what we call a path here is often called a walk). By an infinite
directed path we mean a path (ei )i∈N . Note that an infinite directed path need not contain
infinitely many distinct edges. For example, if a digraph contains a directed circuit e1 e2 e3
then e1 e2 e3 e1 e2 e3 . . . would be an infinite directed path. A bipartite digraph D with bipartition
V D1 ∪ V D2 has vertex set V D = V D1 ∪ V D2 where V D1 and V D2 are disjoint, such that for
every e ∈ ED we have either α(e) ∈ V D1 and β(e) ∈ V D2 , or α(e) ∈ V D2 and β(e) ∈ V D1
(i.e., there are no directed edges between vertices in the same part of the bipartition). A
homomorphism ϕ : D → D ′ between digraphs is a map ϕ : (V D ∪ ED) → (V D ′ ∪ ED ′ ) which
maps vertices to vertices, edges to edges, and satisfies α(ϕ(e)) = ϕ(α(e)) and β(ϕ(e)) = ϕ(β(e)).
If p = e1 e2 . . . en is a path of length n in D and ϕ : D → D ′ is a digraph homomorphism then
we define ϕ(p) = ϕ(e1 )ϕ(e2 ) . . . ϕ(en ) which is a path of length n in D ′ . Note that in general a
homomorphism is allowed to map two distinct vertices (resp. edges) of D to the same vertex
(resp. edge) of D ′ . Since digraph homomorphisms map paths to paths, we have the following
basic observation.
Lemma 10.5. Let ϕ : D → D′ be a homomorphism of directed graphs. If D has an infinite directed path then D′ has an infinite directed path.
For each n ∈ N let Γ(n)(BM) be the directed bipartite graph defined as follows. The vertex set is V Γ(n)(BM) = C_n ∪ C_{n+1}, where C_i denotes the set of i-cells of BM. Its directed edges are of two types:
(DE1) a directed edge τ −→ d_j(τ) (with initial vertex τ and terminal vertex d_j(τ)) for every collapsible τ ∈ C_{n+1} and j ∈ {0, . . . , n + 1} such that d_j(τ) is a redundant n-cell (i.e., is a redundant non-degenerate n-simplex) and either c(d_j(τ)) ≠ τ, or c(d_j(τ)) = τ but j ≠ i(d_j(τ));
(DE2) a directed edge σ −→ c(σ) (with initial vertex σ and terminal vertex c(σ)) for every redundant σ ∈ C_n.
We sometimes refer to these two types of directed arcs as the “down-arcs” and the “up-arcs” (respectively) in the bipartite graph. Note that condition (C2) in the collapsing scheme definition is equivalent to saying that the digraph Γ(n)(BM) does not contain any infinite directed path, and in particular does not contain any directed cycles. Let Γ(n)(−→EM) be the corresponding directed bipartite graph defined in the same way using (E, R, C, κ, ι), with vertex set the n- and (n + 1)-cells of −→EM and directed edges determined by the maps κ and ι. To simplify notation, let us fix n ∈ N and set Γ(BM) = Γ(n)(BM) and Γ(−→EM) = Γ(n)(−→EM).
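On a finite fragment, condition (C2) for such a level digraph reduces to the absence of directed cycles, since an infinite directed path in a finite digraph must revisit a vertex. The following Python sketch (the encoding of cells, the list of face maps, and the functions redundant, collapsible, c and i_of are hypothetical inputs) builds the arcs (DE1) and (DE2) and tests for directed cycles by depth-first search.

def build_arcs(Cn, Cn1, faces, redundant, collapsible, c, i_of):
    # (DE1): tau -> d_j(tau) for collapsible tau in C_{n+1} and redundant face
    #        d_j(tau), unless c(d_j(tau)) = tau and j = i(d_j(tau)).
    # (DE2): sigma -> c(sigma) for redundant sigma in C_n.
    arcs = []
    for tau in Cn1:
        if collapsible(tau):
            for j, face in enumerate(faces(tau)):  # faces d_0, ..., d_{n+1}
                if redundant(face) and not (c(face) == tau and j == i_of(face)):
                    arcs.append((tau, face))
    for sigma in Cn:
        if redundant(sigma):
            arcs.append((sigma, c(sigma)))
    return arcs

def has_directed_cycle(arcs):
    # On a finite fragment, (C2) fails iff the digraph has a directed cycle.
    out = {}
    for u, v in arcs:
        out.setdefault(u, []).append(v)
        out.setdefault(v, [])
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in out}
    def dfs(v):
        colour[v] = GREY
        for w in out[v]:
            if colour[w] == GREY or (colour[w] == WHITE and dfs(w)):
                return True
        colour[v] = BLACK
        return False
    return any(colour[v] == WHITE and dfs(v) for v in out)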
Lemma 10.6. Let π : V Γ(−→EM) ∪ EΓ(−→EM) → V Γ(BM) ∪ EΓ(BM) be defined on vertices by
m(m_1, . . . , m_k) ↦ (m_1, . . . , m_k) (k ∈ {n, n + 1})
and defined on edges (DE1) and (DE2) by π(x → y) = π(x) → π(y). Then π is a digraph homomorphism.
Proof. We need to prove, for each directed edge x → y of EΓ(−→EM), that π(x) → π(y) is a directed edge in EΓ(BM). There are two cases that need to be considered depending on arc type. The two arc types depend on whether the arc is going downwards from level n + 1 to level n (arc type 1) or upwards from level n to level n + 1 (arc type 2).
Case: Arc type 1. Let m(m_1, . . . , m_{n+1}) be a collapsible (n + 1)-cell in −→EM, let j ∈ {0, . . . , n + 1}, and suppose that
m(m_1, . . . , m_{n+1}) −→ d_j(m(m_1, . . . , m_{n+1}))
is an arc in Γ(−→EM). This means that d_j(m(m_1, . . . , m_{n+1})) is a redundant n-cell in −→EM and, if κ(d_j(m(m_1, . . . , m_{n+1}))) = m(m_1, . . . , m_{n+1}), then j ≠ ι(d_j(m(m_1, . . . , m_{n+1}))). Note that j = 0 or j = n + 1 is possible here. We claim that
π(m(m_1, . . . , m_{n+1})) −→ π(d_j(m(m_1, . . . , m_{n+1})))
in Γ(BM). Indeed, we saw above in Section 5 that the projection π : −→EM → BM is a simplicial morphism with
π(d_j(m(m_1, . . . , m_{n+1}))) = d_j(m_1, . . . , m_{n+1}) = d_j(π(m(m_1, . . . , m_{n+1}))).
Therefore
π(m(m_1, . . . , m_{n+1})) = (m_1, . . . , m_{n+1}) −→ d_j(m_1, . . . , m_{n+1}) = π(d_j(m(m_1, . . . , m_{n+1})))
is an arc in Γ(BM), since if c(d_j(m_1, . . . , m_{n+1})) = (m_1, . . . , m_{n+1}), then
κ(d_j(m(m_1, . . . , m_{n+1}))) = κ(m d_j(m_1, . . . , m_{n+1})) = m c(d_j(m_1, . . . , m_{n+1})) = m(m_1, . . . , m_{n+1}),
and hence by definition of ι we have
j ≠ ι(d_j(m(m_1, . . . , m_{n+1}))) = i(d_j(m_1, . . . , m_{n+1})).
Case: Arc type 2. These are the arcs that go up from level n to level n + 1. A typical such arc arises as follows. Let mσ ∈ C_n be a redundant cell in −→EM, where σ is a redundant cell in BM and m ∈ M. Then κ(mσ) ∈ C_{n+1} is collapsible and
d_{ι(mσ)}(κ(mσ)) = mσ.
A typical type 2 arc has the form
mσ −→ κ(mσ) = m(c(σ)),
by definition of κ. Applying π to this arc gives
π(mσ) = σ −→ c(σ) = π(m(c(σ))),
which is a type 2 arc in Γ(BM), completing the proof of the lemma.
Proof of Proposition 10.4. We must check that (E, R, C, κ, ι) satisfies the two collapsing scheme
conditions (C1) and (C2).
Verifying collapsing scheme condition (C1). We must prove that the map κ defines a bijection from the redundant n-cells to the collapsible (n + 1)-cells.
The map is injective since κ(mτ) = κ(m′σ) implies mc(τ) = m′c(σ), so m = m′ and τ = σ since c is injective by assumption. Also, given an arbitrary collapsible (n + 1)-cell mσ, there exists a redundant n-cell τ with σ = c(τ), and so mσ = κ(mτ).
Moreover, for every redundant n-cell mτ, we have
d_{ι(mτ)}(κ(mτ)) = d_{i(τ)}(mc(τ)) (by definition)
= m d_{i(τ)}(c(τ)) (since i(τ) ≠ 0)
= mτ,
and this concludes the proof that collapsing scheme condition (C1) holds.
Note that it is in the second line of the above sequence of equations that we appeal to our assumption that (E, C, R, c, i) is left guarded (which implies i(τ) ≠ 0). In fact, this will be the only place in the proof of Theorem 10.2(a) that the left guarded assumption is used.
Verifying collapsing scheme condition (C2). To see that collapsing scheme condition (C2) holds, let Γ(BM) and Γ(−→EM) be the level (n, n + 1) bipartite digraphs of BM and −→EM, respectively, defined above. By Lemma 10.6 the mapping π defines a digraph homomorphism from Γ(−→EM) to Γ(BM). If Γ(−→EM) contained an infinite directed path then by Lemma 10.5 the image of this path would be an infinite directed path in Γ(BM), which is impossible since (E, R, C, c, i) is a collapsing scheme. Therefore Γ(−→EM) contains no infinite path and thus (E, R, C, κ, ι) satisfies condition (C2).
This completes the proof of Proposition 10.4.
Remark 10.7. It follows from Proposition 10.4 that every down-arc in Γ(−→EM) (that is, every arc of type (DE1)) is of the form τ → d_j(τ) where τ ∈ C_{n+1}, d_j(τ) ∈ R_n, and κ(d_j(τ)) ≠ τ.
10.1.2. Proving (E, R, C, κ, ι) is a left M-equivariant collapsing scheme. To prove that (E, R, C, κ, ι) is a left M-equivariant collapsing scheme for −→EM we need to verify that conditions (A1)–(A6) hold. In Section 5 we proved that −→EM is a rigid free left M-simplicial set. In addition to this, from the definitions an n-cell m(m_1, m_2, ..., m_n) of −→EM is essential (respectively redundant, collapsible) if and only if (m_1, m_2, ..., m_n) is essential (respectively redundant, collapsible) in the collapsing scheme (E, R, C, c, i) of BM. These facts prove that (A1) and (A2) both hold:
(A1) The action of M on −→EM maps n-simplices to n-simplices, and commutes with d_i and s_i; that is, M is acting by simplicial morphisms on −→EM.
(A2) For every n-simplex σ and m ∈ M, σ is a cell (i.e. is a non-degenerate simplex) if and only if mσ is a cell, in which case σ ∈ E (respectively R, C) if and only if mσ ∈ E (respectively R, C).
The next axiom we need to verify is:
(A3) If (σ, τ) ∈ R_n × C_{n+1} is a matched redundant-collapsible pair (i.e. τ = κ(σ)) then so is the pair m(σ, τ) = (mσ, mτ) ∈ R_n × C_{n+1}.
We shall prove a little more than this. We consider the bipartite digraph Γ(−→EM) between levels n and n + 1 defined above. We want to prove that M acts on this digraph, that is, that arcs are sent to arcs under the action. We note that the action of M on the vertices preserves the bipartition. As above, there are two types of directed arcs that need to be considered, the up-arcs and the down-arcs.
First consider the down-arcs. By Remark 10.7, a typical down-arc has the form
mσ −→ d_j(mσ)
where κ(d_j(mσ)) ≠ mσ. Let n ∈ M be arbitrary. Then we have
nmσ −→ d_j(nmσ) = n d_j(mσ),
by definition of d_j. This is a down-arc because if κ(d_j(nmσ)) = nmσ, then from nm d_j(σ) = d_j(nmσ) we deduce that nm c(d_j(σ)) = nmσ and so c(d_j(σ)) = σ, whence κ(d_j(mσ)) = κ(m d_j(σ)) = m c(d_j(σ)) = mσ, which is a contradiction.
Now consider up-arcs. A typical up-arc has the form mσ → κ(mσ). Let n ∈ M. Then
nmσ → κ(nmσ) = nm(c(σ)) = nκ(mσ),
which is an up-arc as required.
This covers all types of arcs in Γ(−→EM) and we conclude that M acts on Γ(−→EM) by digraph endomorphisms. This fact together with (A2) then implies property (A3), since the up-arcs in this bipartite graph are precisely the matched redundant-collapsible pairs. Next consider
(A4) There is a subset B ⊆ E ∪ R ∪ C such that for all n the set of n-cells is a free left M-set with basis B_n (the n-cells in B). Let E^B = E ∩ B, R^B = R ∩ B and C^B = C ∩ B. Then E_n is a free left M-set with basis E_n^B, and similarly for R_n and C_n.
We saw in Section 5 that the set of n-cells of −→EM is a free left M-set with basis the set of n-cells
B = {(m_1, . . . , m_n) : m_i ≠ 1 for all i}
of BM. The last clause of (A4) then follows from (A2). Now we shall prove
(A5) For every matched pair (σ, τ) ∈ R × C, σ ∈ R^B if and only if τ ∈ C^B. In particular, for every matched pair (σ, τ) there is a unique pair (σ′, τ′) ∈ R^B × C^B and m ∈ M such that (σ, τ) = m(σ′, τ′).
The matched pairs are the up-arcs in the graph Γ(−→EM). A typical up-arc has the form
m(m_1, . . . , m_n) → κ(m(m_1, . . . , m_n)) = mc(m_1, . . . , m_n).
So this pair is
m · ((m_1, . . . , m_n) → c(m_1, . . . , m_n))
where
(m_1, . . . , m_n) → c(m_1, . . . , m_n)
is a uniquely determined matched basis pair. Also, if (σ, τ) is a matched pair then σ = (m_1, . . . , m_n) belongs to the basis if and only if κ(m_1, . . . , m_n) = c(m_1, . . . , m_n) belongs to the basis, completing the proof of (A5). Finally we turn our attention to showing axiom
(A6) For every redundant cell τ and every m ∈ M
height(τ ) = height(mτ )
where height is taken with respect to the collapsing scheme (E, R, C, κ, ι).
The following lemma will be useful.
Lemma 10.8 (Path lifting property). Define a mapping π : V Γ(−→EM) ∪ EΓ(−→EM) → V Γ(BM) ∪ EΓ(BM) on vertices by
m(m_1, . . . , m_k) ↦ (m_1, . . . , m_k) (k ∈ {n, n + 1})
and on edges (DE1) and (DE2) by π(x → y) = π(x) → π(y). Let µ be a redundant n-cell in −→EM. Then for each path p in Γ(BM) with initial vertex π(µ) there is a path p̂ in Γ(−→EM), with initial vertex µ, such that π(p̂) = p.
Proof. We shall establish two claims, from which the lemma will be obvious by induction on path length. First we claim that if y = mτ is a redundant n-cell of −→EM (with m ∈ M and τ an n-cell of BM) and σ ∈ V Γ(BM) is a vertex such that there is a directed edge τ = π(y) → σ, then there is a vertex z ∈ V Γ(−→EM) such that y → z is a directed edge in EΓ(−→EM), and π(y → z) = π(y) → σ. Indeed, set z = κ(y). Then y → z is a directed edge in EΓ(−→EM) by definition, and π(z) = π(κ(y)) = π(κ(mτ)) = π(m(c(τ))) = c(τ) = σ.
Next we claim that if x → y is an up-arc of EΓ(−→EM) as in (DE2) and σ is a vertex in V Γ(BM) such that π(y) → σ is a directed edge in EΓ(BM), then there exists a vertex z ∈ V Γ(−→EM) such that y → z is a directed edge in EΓ(−→EM), and π(y → z) = π(y) → σ. Write x = mτ where m ∈ M and τ is a redundant n-cell of BM. Then x → y is equal to mτ −→ κ(mτ) = mc(τ). In Γ(BM) we have the path π(x) → π(y) → σ, which equals τ → c(τ) → σ. Therefore c(τ) → σ is an arc in Γ(BM) of type 1. Therefore, by Remark 10.7 applied to the graph Γ(BM), it follows that σ is a redundant n-cell with σ = d_j(c(τ)) for some j ∈ {0, . . . , n + 1} and σ ≠ τ (using that c is a bijection). Set z = d_j(y). We need to show that π(z) = σ and that y → z is a directed edge in Γ(−→EM).
For the first claim, since π is a simplicial morphism we have
π(z) = π(d_j(y)) = d_j(π(y)) = d_j(c(τ)) = σ.
For the second claim, to verify that y → z is a directed edge in Γ(−→EM) we just need to show that z is a redundant cell and y ≠ κ(z). From the definitions it follows that under the mapping π any vertex in the preimage of a redundant cell is a redundant cell. Thus, since σ is redundant and π(z) = σ, it follows that z is redundant. Finally, if y = κ(z), then z = x because κ is injective. Therefore, σ = π(z) = π(x) = τ, which is a contradiction.
We can now construct the path p̂ by induction on the length of p, where the inductive step
uses the first claim if the lift of the previous portion ends at a redundant cell and uses the
second claim if it ends at a collapsible cell.
Axiom (A6) is then easily seen to be a consequence of the following lemma.
Lemma 10.9. Let τ be a redundant cell in the collapsing scheme (E, R, C, κ, ι). Write τ = mσ where σ is a redundant cell in BM. Let height_{−→EM}(mσ) denote the height of mσ with respect to the collapsing scheme (E, R, C, κ, ι), and let height_{BM}(σ) denote the height of σ with respect to the collapsing scheme (E, R, C, c, i). Then height_{−→EM}(mσ) = height_{BM}(σ).
Proof. Let
mσ = τ = τ_0 ≻ τ_1 ≻ · · · ≻ τ_k
be a descending chain of redundant n-cells from R. It follows that there is a directed path
p = τ_0 → κ(τ_0) → · · · → κ(τ_{k−1}) → τ_k
in Γ(−→EM). Since π is a digraph homomorphism it follows that π(p) is a directed path in Γ(BM) and hence
σ = π(τ_0) ≻ π(τ_1) ≻ · · · ≻ π(τ_k)
is a descending chain of redundant n-cells in R. This proves that height_{−→EM}(mσ) ≤ height_{BM}(σ).
For the converse, let
σ = σ_0 ≻ σ_1 ≻ · · · ≻ σ_k
be a descending chain of redundant n-cells from R. Then there is a directed path
q = σ_0 → c(σ_0) → · · · → c(σ_{k−1}) → σ_k
in Γ(BM). By Lemma 10.8 we can lift q to a path q̂ in Γ(−→EM) with initial vertex mσ and such that π(q̂) = q. But then the redundant cells in the path q̂ form a descending chain of length k starting at mσ, proving that height_{−→EM}(mσ) ≥ height_{BM}(σ).
This completes the proof of Theorem 10.2(a) and its dual Theorem 10.2(b).
10.2. Proof of Theorem 10.2(c). We shall explain how the above proof of Theorem 10.2(a) is modified to prove the two-sided analogue Theorem 10.2(c).
Let (E, C, R, c, i) be a guarded collapsing scheme. Define an n-cell m(m_1, m_2, ..., m_n)u of ←→EM to be essential (respectively redundant, collapsible) if and only if (m_1, m_2, ..., m_n) is essential (respectively redundant, collapsible) in the collapsing scheme (E, R, C, c, i). This defines (E, R, C).
For the mappings κ and ι, given mτu ∈ ←→EM where τ = (m_1, ..., m_n) is a redundant n-cell of BM we define
(10.3) ι(mτu) = i(τ)
(10.4) κ(mτu) = m(c(τ))u.
With these definitions we claim that (E, R, C, κ, ι) is an M-equivariant collapsing scheme for the free bi-M-simplicial set ←→EM such that for each n the set of essential n-cells E_n is a free bi-M-set of rank |E_n|.
10.2.1. Proving that (E, R, C, κ, ι) is a collapsing scheme.
Proposition 10.10. With the above definitions, (E, R, C, κ, ι) is a collapsing scheme for the simplicial set ←→EM.
The proof is analogous to the proof of Proposition 10.4. As in the proof of that proposition, we have a digraph homomorphism π : V Γ(←→EM) ∪ EΓ(←→EM) → V Γ(BM) ∪ EΓ(BM) given by
m(m_1, . . . , m_k)u ↦ (m_1, . . . , m_k),
π(x → y) = π(x) → π(y).
To verify collapsing scheme condition (C1) it suffices to observe that for every redundant n-cell mτu we have
d_{ι(mτu)}(κ(mτu)) = d_{i(τ)}(m c(τ)u) (by definition)
= m d_{i(τ)}(c(τ))u (since i(τ) ≠ 0 and i(τ) ≠ n + 1)
= mτu.
Here the second line appeals to the fact that the original collapsing scheme is guarded. Collapsing scheme condition (C2) holds again by applying Lemma 10.5 and the fact that π is a digraph homomorphism.
10.2.2. Proving (E, R, C, κ, ι) is an M-equivariant collapsing scheme. We have already seen in Section 5 that ←→EM is a bi-M-simplicial set. We need to show that (E, R, C, κ, ι) is an M-equivariant collapsing scheme for ←→EM. By definition, for this we need to verify that axioms (A1)–(A6) are satisfied for ←→EM as a left M × M^op-simplicial set.
In Section 5 we saw that ←→EM is a rigid free left M × M^op-simplicial set. Together with the definition of the action of M × M^op on ←→EM and the definition of E, R and C, axioms (A1) and (A2) both then follow. Axioms (A3)–(A6) are then proved exactly as above, just with M × M^op in place of M in the proof. This then completes the proof of Theorem 10.2(c).
11. Monoids admitting guarded collapsing schemes
In this section we give examples of classes of monoids to which the above theory of equivariant
collapsing schemes applies. In particular, this will allow us to use the theory developed above
to give a topological proof of the fact that monoids which admit finite complete presentations
are of type bi-F∞ .
Let M be a monoid defined by a finite presentation ⟨A | R⟩ with generators A and defining relations R ⊆ A* × A*. Thus, M is isomorphic to A*/↔*_R, where ↔*_R is the smallest congruence on A* containing R. We view ⟨A | R⟩ as a string rewriting system, writing l → r for the pair (l, r) ∈ R. We define a binary relation →_R on A*, called single-step reduction, in the following way:
u →_R v ⇔ u ≡ w_1 l w_2 and v ≡ w_1 r w_2
for some (l, r) ∈ R and w_1, w_2 ∈ A*. A word is called irreducible if no single-step reduction may be applied to it. The transitive and reflexive closure of →_R is denoted by →*_R.
This rewriting system is called noetherian if there are no infinite descending chains
w_1 →_R w_2 →_R w_3 →_R · · · →_R w_n →_R · · · .
It is called confluent if whenever we have u →*_R v and u →*_R v′ there is a word w ∈ A* such that v →*_R w and v′ →*_R w. If R is simultaneously noetherian and confluent we say that R is complete. The presentation ⟨A | R⟩ is called complete if the rewriting system R is complete. It is well known (see for example [BO93]) that if ⟨A | R⟩ is a finite complete presentation then each ↔*_R-class of A* contains exactly one irreducible element. So the irreducible elements give a set of normal forms for the elements of the monoid M. In particular, if a monoid admits a presentation by a finite complete rewriting system then the word problem for the monoid is decidable.
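As an illustration of these notions, normal forms under a finite complete rewriting system can be computed by repeatedly applying single-step reductions until an irreducible word is reached; noetherianity guarantees termination and confluence guarantees that the result does not depend on the order in which rules are applied. A minimal Python sketch, with words represented as strings (the example presentation below is ours, chosen for illustration):

def reduce_once(word, rules):
    # One single-step reduction u -> v, or None if the word is irreducible.
    for l, r in rules:
        k = word.find(l)
        if k != -1:
            return word[:k] + r + word[k + len(l):]
    return None

def normal_form(word, rules):
    # Terminates whenever the rewriting system is noetherian.
    while True:
        nxt = reduce_once(word, rules)
        if nxt is None:
            return word
        word = nxt

# Toy example: the bicyclic monoid <b, c | bc -> 1>, with 1 the empty word.
# The single rule shortens words and has no overlaps, so the system is complete;
# the irreducible words (normal forms) are exactly those of the form c^i b^j.
rules = [("bc", "")]
assert normal_form("bbccbc", rules) == ""  # bbccbc represents the identity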
In [Bro92, page 145] a method is given for constructing a collapsing scheme on BM for any
monoid M that is given by a finite complete rewriting system. It is easily observed from [Bro92,
page 145] that in the given collapsing scheme (E, R, C, c, i) the function i never takes either of its
two extreme allowable values, that is, the collapsing scheme for BM given in [Bro92] is guarded
in the sense of Definition 10.1. Also, as Brown observes (see [Bro92, page 147]), it follows easily
from his definition that there are only finitely many essential cells in each dimension. Thus we
have:
Proposition 11.1. Let M be a monoid. If M admits a presentation by a finite complete
rewriting system then BM admits a guarded collapsing scheme of finite type.
It follows that the theory of M -equivariant collapsing schemes developed in the previous
section applies to monoids with complete presentations, giving:
Corollary 11.2. Let M be a monoid that admits a presentation by a finite complete rewriting
system. Then M is of type left-F∞ , right-F∞ and bi-F∞ .
Proof. By Proposition 11.1 the simplicial set BM admits a guarded collapsing scheme of finite
type. Then the result follows from Corollary 10.3.
We obtain the following result of Kobayashi as a special case.
Corollary 11.3 ([Kob05]). Let M be a monoid that admits a presentation by a finite complete
rewriting system. Then M is of type bi-FP∞ .
Proof. Follows from Proposition 7.7 and Corollary 11.2.
More recently, the class of so-called factorable monoids was introduced in work of Hess and Ozornova [HO14]. Since it is quite technical, we shall not give the definition of factorable monoid here; we refer the reader to [HO14] for the definition, and we shall use the same notation as there. In their work they show that a number of interesting classes of monoids admit
factorability structures. In some cases (e.g. Garside groups) the rewriting systems associated
with factorability structures are finite and complete, and so in these cases the monoids are
bi-F∞ . On the other hand, in [HO14, Appendix] they give an example of a factorable monoid
where the associated rewriting system is not noetherian and thus not complete (although it is
not discussed there whether the monoid admits a presentation by some other finite complete
presentation). So, there are some examples where factorability structures may be seen to exist,
even when the given presentation is not complete. In [HO14, Section 9] the authors construct
a matching on the reduced, inhomogeneous bar complex of a factorable monoid. As they say
in their paper, the matching they construct is very similar to the construction used by Brown
giving a collapsing scheme for monoids defined by finite complete rewriting systems [Bro92, page
145]. Details of the matching they construct for factorable monoids may be found on pages 27
and 28 of [HO14]. It is immediate from the definition of the mapping µ on page 28 that their
construction defines a guarded collapsing scheme for the simplicial set BM where M is any
factorable monoid (M, E, η). If the generating set E for the monoid is finite, then the number of
essential cells in each dimension is finite, and so we have a guarded collapsing scheme of finite
type, giving:
Corollary 11.4. Let M be a monoid. If M admits a factorability structure (M, E, η) with finite
generating set E then BM admits a guarded collapsing scheme of finite type. In particular M
is of type left-F∞ , right-F∞ and bi-F∞ .
References
[AH03] J. M. Alonso & S. M. Hermiller. ‘Homological finite derivation type’. Internat. J. Algebra Comput., 13, no. 3 (2003), pp. 341–359. doi: 10.1142/S0218196703001407.
[Alo94] J. M. Alonso. ‘Finiteness conditions on groups and quasi-isometries’. J. Pure Appl. Algebra, 95, no. 2 (1994), pp. 121–129. doi: 10.1016/0022-4049(94)90069-8.
[Ani86] D. J. Anick. ‘On the homology of associative algebras’. Trans. Amer. Math. Soc., 296, no. 2 (1986), pp. 641–659. doi: 10.2307/2000383.
[AR67] W. W. Adams & M. A. Rieffel. ‘Adjoint functors and derived functors with an application to the cohomology of semigroups’. J. Algebra, 7 (1967), pp. 25–34. doi: 10.1016/0021-8693(67)90065-8.
[BB97] M. Bestvina & N. Brady. ‘Morse theory and finiteness properties of groups’. Invent. Math., 129, no. 3 (1997), pp. 445–470. doi: 10.1007/s002220050168.
[BBG98] G. Baumslag, M. R. Bridson, & K. W. Gruenberg. ‘On the absence of cohomological finiteness in wreath products’. J. Austral. Math. Soc. Ser. A, 64, no. 2 (1998), pp. 222–230.
[BG84] K. S. Brown & R. Geoghegan. ‘An infinite-dimensional torsion-free FP∞ group’. Invent. Math., 77, no. 2 (1984), pp. 367–381. doi: 10.1007/BF01388451.
[BH01] R. Bieri & J. Harlander. ‘On the FP3-conjecture for metabelian groups’. J. London Math. Soc. (2), 64, no. 3 (2001), pp. 595–610.
[BHMS02] M. R. Bridson, J. Howie, C. F. Miller, III, & H. Short. ‘The subgroups of direct products of surface groups’. Geom. Dedicata, 92 (2002), pp. 95–103. Dedicated to John Stallings on the occasion of his 65th birthday. doi: 10.1023/A:1019611419598.
[Bie76] R. Bieri. Homological dimension of discrete groups. Mathematics Department, Queen Mary College, London, 1976. Queen Mary College Mathematics Notes.
[BO93] R. V. Book & F. Otto. String-rewriting systems. Texts and Monographs in Computer Science. Springer-Verlag, New York, 1993.
[Bra99] N. Brady. ‘Branched coverings of cubical complexes and subgroups of hyperbolic groups’. J. London Math. Soc. (2), 60, no. 2 (1999), pp. 461–480.
[Bro87] K. S. Brown. ‘Finiteness properties of groups’. In Proceedings of the Northwestern conference on cohomology of groups (Evanston, Ill., 1985), vol. 44, pp. 45–75, 1987. doi: 10.1016/0022-4049(87)90015-6.
[Bro92] K. S. Brown. ‘The geometry of rewriting systems: a proof of the Anick-Groves-Squier theorem’. In Algorithms and classification in combinatorial group theory (Berkeley, CA, 1989), vol. 23 of Math. Sci. Res. Inst. Publ., pp. 137–163. Springer, New York, 1992. doi: 10.1007/978-1-4613-9730-4-6.
[Bro94] K. S. Brown. Cohomology of groups, vol. 87 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1994. Corrected reprint of the 1982 original.
[Bro10] K. S. Brown. ‘Lectures on the cohomology of groups’. In Cohomology of groups and algebraic K-theory, vol. 12 of Adv. Lect. Math. (ALM), pp. 131–166. Int. Press, Somerville, MA, 2010.
[BW07] K.-U. Bux & K. Wortman. ‘Finiteness properties of arithmetic groups over function fields’. Invent. Math., 167, no. 2 (2007), pp. 355–378.
[Cha00] M. K. Chari. ‘On discrete Morse functions and combinatorial decompositions’. Discrete Math., 217, no. 1-3 (2000), pp. 101–113. Formal power series and algebraic combinatorics (Vienna, 1997). doi: 10.1016/S0012-365X(99)00258-7.
[Che84] C. C.-a. Cheng. ‘Separable semigroup algebras’. J. Pure Appl. Algebra, 33, no. 2 (1984), pp. 151–158. doi: 10.1016/0022-4049(84)90003-3.
[Cho06] F. Chouraqui. ‘Rewriting systems in alternating knot groups’. Internat. J. Algebra Comput., 16, no. 4 (2006), pp. 749–769.
[CO98] R. Cremanns & F. Otto. ‘FP∞ is undecidable for finitely presented monoids with word problems decidable in polynomial time’. Mathematische Schriften Kassel 11/98, Universitat Kassel, September (1998).
[Coh73] M. M. Cohen. A course in simple-homotopy theory. Springer-Verlag, New York, 1973. Graduate Texts in Mathematics, Vol. 10.
[Coh92] D. E. Cohen. ‘A monoid which is right FP∞ but not left FP1’. Bull. London Math. Soc., 24, no. 4 (1992), pp. 340–342.
[Coh97] D. E. Cohen. ‘String rewriting and homology of monoids’. Math. Structures Comput. Sci., 7, no. 3 (1997), pp. 207–240.
[CS80] C. C.-a. Cheng & J. Shapiro. ‘Cohomological dimension of an abelian monoid’. Proc. Amer. Math. Soc., 80, no. 4 (1980), pp. 547–551. doi: 10.2307/2043420.
[CS09a] J. Cassaigne & P. V. Silva. ‘Infinite periodic points of endomorphisms over special confluent rewriting systems’. Ann. Inst. Fourier, 59, no. 2 (2009), pp. 769–810.
[CS09b] J. Cassaigne & P. V. Silva. ‘Infinite words and confluent rewriting systems: endomorphism extensions’. Internat. J. Algebra Comput., 19, no. 4 (2009), pp. 443–490.
[DDM09] V. Diekert, A. Duncan, & A. Miasnikov. ‘Geodesic rewriting systems and pregroups’. To appear in Combinatorial and Geometric Group Theory, Dortmund and Carleton Conferences, Trends in Mathematics, Bogopolski, O.; Bumagin, I.; Kharlampovich, O.; Ventura, E. (Eds.), 2009. arXiv:0906.2223v1.
[DFZ86] E. Dror Farjoun & A. Zabrodsky. ‘Homotopy equivalence between diagrams of spaces’. J. Pure Appl. Algebra, 41, no. 2-3 (1986), pp. 169–182. doi: 10.1016/0022-4049(86)90109-X.
[DL98] J. F. Davis & W. Lück. ‘Spaces over a category and assembly maps in isomorphism conjectures in K- and L-theory’. K-Theory, 15, no. 3 (1998), pp. 201–252. doi: 10.1023/A:1007784106877.
[EG57] S. Eilenberg & T. Ganea. ‘On the Lusternik-Schnirelmann category of abstract groups’. Ann. of Math. (2), 65 (1957), pp. 517–518. doi: 10.2307/1970062.
[Fie84] Z. Fiedorowicz. ‘Classifying spaces of topological monoids and categories’. Amer. J. Math., 106, no. 2 (1984), pp. 301–350. doi: 10.2307/2374307.
[FMWZ13] M. G. Fluch, M. Marschler, S. Witzel, & M. C. B. Zaremsky. ‘The Brin-Thompson groups sV are of type F∞’. Pacific J. Math., 266, no. 2 (2013), pp. 283–295. doi: 10.2140/pjm.2013.266.283.
[For95] R. Forman. ‘A discrete Morse theory for cell complexes’. In Geometry, topology, & physics, Conf. Proc. Lecture Notes Geom. Topology, IV, pp. 112–125. Int. Press, Cambridge, MA, 1995.
[For02] R. Forman. ‘A user’s guide to discrete Morse theory’. Séminaire Lotharingien de Combinatoire, B48c (2002), pp. 1–35. url: http://www.emis.ams.org/journals/SLC/wpapers/s48forman.pdf.
[Fre09] R. Freij. ‘Equivariant discrete Morse theory’. Discrete Math., 309, no. 12 (2009), pp. 3821–3829. doi: 10.1016/j.disc.2008.10.029.
[FS05] D. Farley & L. Sabalka. ‘Discrete Morse theory and graph braid groups’. Algebr. Geom. Topol., 5 (2005), pp. 1075–1109. doi: 10.2140/agt.2005.5.1075.
[Geo08] R. Geoghegan. Topological methods in group theory, vol. 243 of Graduate Texts in Mathematics. Springer, New York, 2008. doi: 10.1007/978-0-387-74614-2.
[GM12] Y. Guiraud & P. Malbos. ‘Higher-dimensional normalisation strategies for acyclicity’. Adv. Math., 231, no. 3-4 (2012), pp. 2294–2351. doi: 10.1016/j.aim.2012.05.010.
[GP96] V. S. Guba & S. J. Pride. ‘Low-dimensional (co)homology of free Burnside monoids’. J. Pure Appl. Algebra, 108, no. 1 (1996), pp. 61–79. doi: 10.1016/0022-4049(95)00038-0.
[GP98] V. S. Guba & S. J. Pride. ‘On the left and right cohomological dimension of monoids’. Bull. London Math. Soc., 30, no. 4 (1998), pp. 391–396. doi: 10.1112/S0024609398004676.
[GS] R. D. Gray & B. Steinberg. ‘Topological finiteness properties of monoids. Part 2: amalgamated free products, HNN extensions, and special monoids’. In preparation.
[GS06] V. S. Guba & M. V. Sapir. ‘Diagram groups and directed 2-complexes: homotopy and homology’. J. Pure Appl. Algebra, 205, no. 1 (2006), pp. 1–47. doi: 10.1016/j.jpaa.2005.06.012.
[GS08] O. Goodman & M. Shapiro. ‘On a generalization of Dehn’s algorithm’. Internat. J. Algebra Comput., 18, no. 7 (2008), pp. 1137–1177.
[GZ67] P. Gabriel & M. Zisman. Calculus of fractions and homotopy theory. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 35. Springer-Verlag New York, Inc., New York, 1967.
[HO14] A. Hess & V. Ozornova. ‘Factorability, string rewriting and discrete Morse theory’. arXiv:1412.3025, 2014.
[Hoc45] G. Hochschild. ‘On the cohomology groups of an associative algebra’. Ann. of Math. (2), 46 (1945), pp. 58–67. doi: 10.2307/1969145.
[How63] J. M. Howie. ‘Embedding theorems for semigroups’. Quart. J. Math. Oxford Ser. (2), 14 (1963), pp. 254–258. doi: 10.1093/qmath/14.1.254.
[How95] J. M. Howie. Fundamentals of semigroup theory. Academic Press [Harcourt Brace Jovanovich Publishers], London, 1995. L.M.S. Monographs, No. 7.
[HS99] S. Hermiller & M. Shapiro. ‘Rewriting systems and geometric three-manifolds’. Geom. Dedicata, 76, no. 2 (1999), pp. 211–228.
[Hur89] C. M. Hurwitz. ‘On the homotopy theory of monoids’. J. Austral. Math. Soc. Ser. A, 47, no. 2 (1989), pp. 171–185.
[KM05] G. Kalai & R. Meshulam. ‘A topological colorful Helly theorem’. Adv. Math., 191, no. 2 (2005), pp. 305–311. doi: 10.1016/j.aim.2004.03.009.
[Kna72] U. Knauer. ‘Projectivity of acts and Morita equivalence of monoids’. Semigroup Forum, 3, no. 4 (1971/1972), pp. 359–370.
[KO01] Y. Kobayashi & F. Otto. ‘On homotopical and homological finiteness conditions for finitely presented monoids’. Internat. J. Algebra Comput., 11, no. 3 (2001), pp. 391–403. doi: 10.1142/S0218196701000577.
[Kob00] Y. Kobayashi. ‘Every one-relation monoid has finite derivation type’. In Proceedings of the Third Symposium on Algebra, Languages and Computation (Osaka, 1999), pp. 16–20. Shimane Univ., Matsue, 2000.
[Kob05] Y. Kobayashi. ‘Gröbner bases of associative algebras and the Hochschild cohomology’. Trans. Amer. Math. Soc., 357, no. 3 (2005), pp. 1095–1124 (electronic). doi: 10.1090/S0002-9947-04-03556-1.
[Kob07] Y. Kobayashi. ‘The homological finiteness property FP1 and finite generation of monoids’. Internat. J. Algebra Comput., 17, no. 3 (2007), pp. 593–605. doi: 10.1142/S0218196707003743.
[Kob10] Y. Kobayashi. ‘The homological finiteness properties left-, right-, and bi-FPn of monoids’. Comm. Algebra, 38, no. 11 (2010), pp. 3975–3986. doi: 10.1080/00927872.2010.507562.
[Koz05] D. N. Kozlov. ‘Discrete Morse theory for free chain complexes’. C. R. Math. Acad. Sci. Paris, 340, no. 12 (2005), pp. 867–872. doi: 10.1016/j.crma.2005.04.036.
[KT76] D. M. Kan & W. P. Thurston. ‘Every connected space has the homology of a K(π, 1)’. Topology, 15, no. 3 (1976), pp. 253–258.
[Lam99] T. Y. Lam. Lectures on modules and rings, vol. 189 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999. doi: 10.1007/978-1-4612-0525-8.
[LN01] I. J. Leary & B. E. A. Nucinkis. ‘Every CW-complex is a classifying space for proper bundles’. Topology, 40, no. 3 (2001), pp. 539–550. doi: 10.1016/S0040-9383(99)00073-7.
[LS06] I. J. Leary & M. Saadetoğlu. ‘Some groups of finite homological type’. Geom. Dedicata, 119 (2006), pp. 113–120.
[May67] J. P. May. Simplicial objects in algebraic topology. Van Nostrand Mathematical Studies, No. 11. D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London, 1967.
[May72] J. P. May. The geometry of iterated loop spaces. Springer-Verlag, Berlin-New York, 1972. Lectures Notes in Mathematics, Vol. 271.
[May75] J. P. May. ‘Classifying spaces and fibrations’. Mem. Amer. Math. Soc., 1, no. 1, 155 (1975), pp. xiii+98.
[May96] J. P. May. Equivariant homotopy and cohomology theory, vol. 91 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1996. With contributions by M. Cole, G. Comezaña, S. Costenoble, A. D. Elmendorf, J. P. C. Greenlees, L. G. Lewis, Jr., R. J. Piacenza, G. Triantafillou, and S. Waner. doi: 10.1090/cbms/091.
[May99] J. P. May. A concise course in algebraic topology. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 1999.
[McC69] M. C. McCord. ‘Classifying spaces and infinite symmetric products’. Trans. Amer. Math. Soc., 146 (1969), pp. 273–298.
[McD79] D. McDuff. ‘On the classifying spaces of discrete monoids’. Topology, 18, no. 4 (1979), pp. 313–320. doi: 10.1016/0040-9383(79)90022-3.
[Mil57] J. Milnor. ‘The geometric realization of a semi-simplicial complex’. Ann. of Math. (2), 65 (1957), pp. 357–362.
[Mit72] B. Mitchell. ‘Rings with several objects’. Advances in Math., 8 (1972), pp. 1–161. doi: 10.1016/0001-8708(72)90002-3.
[MS76] D. McDuff & G. Segal. ‘Homology fibrations and the “group-completion” theorem’. Invent. Math., 31, no. 3 (1975/76), pp. 279–284.
[MSS15a] S. Margolis, F. Saliola, & B. Steinberg. ‘Cell complexes, poset topology and the representation theory of algebras arising in algebraic combinatorics and discrete geometry’. ArXiv e-prints, 2015. arXiv: 1508.05446.
[MSS15b] S. Margolis, F. Saliola, & B. Steinberg. ‘Combinatorial topology and the global dimension of algebras arising in combinatorics’. J. Eur. Math. Soc. (JEMS), 17, no. 12 (2015), pp. 3037–3080. doi: 10.4171/JEMS/579.
[Nic69] W. R. Nico. ‘On the cohomology of finite semigroups’. J. Algebra, 11 (1969), pp. 598–612. doi: 10.1016/0021-8693(69)90093-3.
[Nic72] W. R. Nico. ‘A counterexample in the cohomology of monoids’. Semigroup Forum, 4 (1972), pp. 93–94. doi: 10.1007/BF02570775.
[Nie] P. Nielsen. ‘Can the trivial module be stably free for a monoid ring?’. MathOverflow. url: https://mathoverflow.net/q/248371 (version: 2016-08-26).
[Nov98] B. V. Novikov. ‘Semigroups of cohomological dimension one’. J. Algebra, 204, no. 2 (1998), pp. 386–393. doi: 10.1006/jabr.1997.7363.
[Nun95] M. Nunes. ‘Cohomological results in monoid and category theory via classifying spaces’. J. Pure Appl. Algebra, 101, no. 2 (1995), pp. 213–244. doi: 10.1016/0022-4049(94)00056-O.
[OK97] F. Otto & Y. Kobayashi. ‘Properties of monoids that are presented by finite convergent string-rewriting systems—a survey’. In Advances in algorithms, languages, and complexity, pp. 225–266. Kluwer Acad. Publ., Dordrecht, 1997.
[Pas08] E. Pasku. ‘On some homotopical and homological properties of monoid presentations’. Semigroup Forum, 76, no. 3 (2008), pp. 427–468. doi: 10.1007/s00233-007-9037-1.
[PO04] S. J. Pride & F. Otto. ‘For rewriting systems the topological finiteness conditions FDT and FHT are not equivalent’. J. London Math. Soc. (2), 69, no. 2 (2004), pp. 363–382. doi: 10.1112/S0024610703004903.
[PO05] S. J. Pride & F. Otto. ‘On higher order homological finiteness of rewriting systems’. J. Pure Appl. Algebra, 200, no. 1-2 (2005), pp. 149–161.
[Pri06] S. J. Pride. ‘Homological finiteness conditions for groups, monoids, and algebras’. Comm. Algebra, 34, no. 10 (2006), pp. 3525–3536. doi: 10.1080/00927870600796110.
[Pup58] D. Puppe. ‘Homotopie und Homologie in abelschen Gruppen- und Monoidkomplexen. I. II’. Math. Z., 68 (1958), pp. 367–406, 407–421.
[Pup59] D. Puppe. ‘A theorem on semi-simplicial monoid complexes’. Ann. of Math. (2), 70 (1959), pp. 379–394.
[RT89] J. Rhodes & B. Tilson. ‘The kernel of monoid morphisms’. J. Pure Appl. Algebra, 62, no. 3 (1989), pp. 227–268. doi: 10.1016/0022-4049(89)90137-0.
[Sch75] B. M. Schein. ‘Free inverse semigroups are not finitely presentable’. Acta Math. Acad. Sci. Hungar., 26 (1975), pp. 41–52. doi: 10.1007/BF01895947.
[Seg78] G. Segal. ‘Classifying spaces related to foliations’. Topology, 17, no. 4 (1978), pp. 367–382. doi: 10.1016/0040-9383(78)90004-6.
[Ser71] J.-P. Serre. ‘Cohomologie des groupes discrets’. (1971), pp. 77–169. Ann. of Math. Studies, No. 70.
[SOK94] C. C. Squier, F. Otto, & Y. Kobayashi. ‘A finiteness condition for rewriting systems’. Theoret. Comput. Sci., 131, no. 2 (1994), pp. 271–294. doi: 10.1016/0304-3975(94)90175-9.
[Squ87] C. C. Squier. ‘Word problems and a homological finiteness condition for monoids’. J. Pure Appl. Algebra, 49, no. 1-2 (1987), pp. 201–217. doi: 10.1016/0022-4049(87)90129-0.
[Sri96] V. Srinivas. Algebraic K-theory, vol. 90 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, second edition, 1996. doi: 10.1007/978-0-8176-4739-1.
[Sta68] J. R. Stallings. ‘On torsion-free groups with infinitely many ends’. Ann. of Math. (2), 88 (1968), pp. 312–334. doi: 10.2307/1970577.
[Ste92] M. Stein. ‘Groups of piecewise linear homeomorphisms’. Trans. Amer. Math. Soc., 332, no. 2 (1992), pp. 477–514. doi: 10.2307/2154179.
[Swa69] R. G. Swan. ‘Groups of cohomological dimension one’. J. Algebra, 12 (1969), pp. 585–610. doi: 10.1016/0021-8693(69)90030-1.
[Wal65] C. T. C. Wall. ‘Finiteness conditions for CW-complexes’. Ann. of Math. (2), 81 (1965), pp. 56–69.
[Wei94] C. A. Weibel. An introduction to homological algebra, vol. 38 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1994. doi: 10.1017/CBO9781139644136.
[Wei13] C. A. Weibel. The K-book, vol. 145 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2013. An introduction to algebraic K-theory.
School of Mathematics, University of East Anglia, Norwich NR4 7TJ, England and Department of Mathematics, City College of New York, Convent Avenue at 138th Street, New York, New York 10031, USA
E-mail address: Robert.D.Gray@uea.ac.uk and bsteinberg@ccny.cuny.edu
Capturing Localized Image Artifacts through a CNN-based Hyper-image Representation
arXiv:1711.04945v1 [] 14 Nov 2017
Parag Shridhar Chandakkar and Baoxin Li, Senior member, IEEE
School of Computing, Informatics and Decision Systems Engineering
Arizona State University, Tempe, USA
{pchandak, baoxin.li}@asu.edu
Abstract
Training deep CNNs to capture localized image artifacts on
a relatively small dataset is a challenging task. With enough
images at hand, one can hope that a deep CNN characterizes
localized artifacts over the entire data and their effect on the
output. However, on smaller datasets, such deep CNNs may
overfit and shallow ones find it hard to capture local artifacts.
Thus some image-based small-data applications first train their
framework on a collection of patches (instead of the entire
image) to better learn the representation of localized artifacts.
Then the output is obtained by averaging the patch-level results. Such an approach ignores the spatial correlation among
patches and how various patch locations affect the output. It
also fails in cases where only few patches mainly contribute
to the image label. To combat these scenarios, we develop
the notion of hyper-image representations. Our CNN has two
stages. The first stage is trained on patches. The second stage
utilizes the last layer representation developed in the first stage
to form a hyper-image, which is used to train the second stage.
We show that this approach is able to develop a better mapping between the image and its output. We analyze additional
properties of our approach and show its effectiveness on one
synthetic and two real-world vision tasks - no-reference image quality estimation and image tampering detection - by its
performance improvement over existing strong baselines.
Introduction
Convolutional neural networks (CNN) have recently enjoyed
tremendous success in many areas. In various fields, CNNs
have surpassed the performance of previous state-of-the-art approaches by significant margins. For example, CNNs have
shown considerable superiority in object recognition (Simonyan and Zisserman 2014; He et al. 2015), face recognition (Taigman et al. 2014), semantic segmentation (Dai et al.
2016) etc. as well as in some understudied but long-standing
and difficult problems such as gaze-following (Recasens et al.
2015). Since the advent of AlexNet (Krizhevsky, Sutskever,
and Hinton 2012), deep networks have also been continuously
evolving for tasks such as segmentation (Zheng et al. 2015),
object detection and image classification (He et al. 2015;
Goodfellow et al. 2014; Salimans et al. 2016).
However, training CNNs to characterize localized image
artifacts and label the image accordingly on a relatively small
dataset remains a challenging task. With large amounts of
data, deep CNNs may be able to learn a good representation
Figure 1: (a) and (b) Clean (left) and distorted (right) image pairs in the TID 2013 dataset. The images on the right in (a) and (b) are distorted by non-uniform and uniform noise respectively. (c) Authentic and forged image pair from the CASIA-2 dataset. Red overlay shows distorted/forged regions in an image. Please zoom in for details and view the online version.
for localized artifacts using a conventional pipeline (i.e. end-to-end training on images). Unfortunately, there are many
applications where the labeled data is scarce, and the only
way to obtain more data is by employing human experts,
which is expensive and subject to their availability. This real-world constraint hinders the widespread use of advanced CNN architectures in such problems. On the other hand, the
nature of some of these problems may exhibit properties that
can be leveraged to increase the localization power as well as
the volume of useful training data. For example, the images
can be divided into smaller patches, and if the labels of these
patches could be derived from the original image, the number
of labeled training samples could be increased potentially by
a factor of, say 10-100, depending on how the patches are
formed. Then CNNs could be trained on the augmented data,
and image-level results could be derived by averaging patch-level results. Such patch-level training followed by averaging currently gives the state-of-the-art results for the problem of no-reference image quality estimation (NR-IQA) (Kang et al.
2014) (see Experiments section for details on NR-IQA).
The effectiveness of this patch-level training technique is only observed if one assumes that the artifacts are uniformly spread over the entire image, which is unfortunately
too strong a constraint in practice. Certain real-world problems such as NR-IQA and image forgery classification are
good examples where these strong constraints are often violated. Fig. 1(a) and 1(b) show NR-IQA examples containing
original (left) and distorted (right) images. The red overlay
shows the distorted region in both the images. The distortions
are localized and thus only a few image patches are responsible
for degrading its quality. Note that in the bottom image, the
flower in the central salient region is distorted whereas in the
upper image, some parts towards the lower non-salient region
are distorted, preserving the quality of salient face. This affects the perceived quality as the top image (score = 5.56) has
been judged to be of higher quality than the bottom one (score
= 3.53) based on the extensive subjective tests conducted in
(Ponomarenko et al. 2015). Interestingly, the type and the
severity of the distortion added are identical for both images.
Thus the image quality is dependent on various factors such
as distortion type, severity, affected patch locations and their
texture etc. This cannot be handled by the aforementioned
patch-level training technique. Similar observations can be
made about Fig. 1(c). The image needs to be categorized into
authentic or forged. The forgery is localized (shown by the
red bounding box) and may not be effectively captured by a
conventional deep CNN-pipeline or independent patch-level
training, especially when the training data is only in the order
of thousands. Patch-level training is only effective in the case of uniform distortion, as shown in Fig. 1(b).
To combat these scenarios, we present a novel CNN-based approach tailored to this type of data. Our approach
does not require the image and its patches to share a similar
distribution. It works well even if there is only one patch
per image which plays an important role in determining the
image label (relaxing the requirement that all patches from
an image should equally contribute to the decision, e.g. by
means of averaging patch-results). We evaluate our approach
on one synthetic data and two real-world, challenging vision
tasks - NR-IQA and image forgery classification. We will
demonstrate that the proposed method produces state-of-the-art performance for both applications. We now explain the
problem setup followed by our approach.
Problem Setup
We consider a problem where our data has some localized information embedded in it that is crucial to getting the desired
output. Additionally, the quantity of available data is limited
and its artificial generation is non-trivial or even impossible.
We assume that inferring important patches is possible or this
information is provided as ground-truth.
Consider a database of images X containing N images and their labels, denoted by (X_i, Y_i) ∀ i. An image has m patches, x_i^1, ..., x_i^m. Here, x_i^j denotes the j-th patch in the i-th image. We denote patch-level labels by y_i^1, ..., y_i^m. Patch-level labels can be inferred from the image or they can be a part of the ground-truth labels. Training on patches and then averaging all the patch scores to obtain the image label is thus a naive approach which, in practice, actually works well. However, it does not allow the learner to understand the relation between patch scores and the image score. In other words, the network cannot learn an optimal weighing strategy for all image regions, especially when only a few regions contain the important localized artifacts. Training on the entire image with a deep stack of convolutional filters may achieve the desired result, but then the limited amount of data prevents the CNN from generalizing well. Thus, our problem becomes: given an image X_i and its patches x_i^1, ..., x_i^m, first infer the patch-level labels y_i^1, ..., y_i^m. Subsequently, develop a CNN framework that aggregates the information from the collection (x_i^j, y_i^j) ∀ i, ∀ j and forms a mapping function f : X_i → Y_i ∀ i.
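To make this setup concrete, the following is a minimal sketch of the patch extraction and naive label-inheritance steps; the patch size, stride, and the rule of simply copying the image label to each patch are illustrative assumptions, not the exact values used in our experiments:

```python
import numpy as np

def extract_patches(image, patch_size=32, stride=16):
    """Extract overlapping square patches from a 2-D (H, W) image.

    Returns the patches and their (u, v) positions on the patch grid."""
    H, W = image.shape[:2]
    patches, coords = [], []
    for top in range(0, H - patch_size + 1, stride):
        for left in range(0, W - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
            coords.append((top // stride, left // stride))
    return patches, coords

def inherit_labels(image_label, num_patches):
    """Naive label inference: every patch inherits the image-level label."""
    return [image_label] * num_patches

patches, coords = extract_patches(np.random.rand(128, 128))
labels = inherit_labels(0.5, len(patches))
```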
Proposed Approach
Our approach has two CNN stages, of which the first is trained on a collection of labeled patches. Given a database of labeled images (X, Y), we first extract all the patches and
derive their corresponding labels. As mentioned earlier, the
patch-level labels are either inferred from the image label or
are provided as ground-truth, if available.
Training the First Stage
The first stage CNN follows a conventional training pipeline
and is trained on a collection of labeled patches. We detail
the pre-processing steps and the training procedure in Experiments section as they vary by application.
It is well-known and empirically verified that deeper layers encode increasingly powerful features (Zeiler and Fergus
2014; Yosinski et al. 2015) that are semantically more meaningful (Zhou et al. 2014). Thus after training the first stage,
we extract the ReLU-activated responses of the last but one
layer for all the patches in the train and validation set. During
our experiments, we observed that ReLU-activated sparse
response provides better validation loss than the pre-ReLU
responses. This will be used as the input representation for
the second stage described as follows.
Training the Second Stage
A trained first-stage CNN provides a D-dimensional representation, obtained from the last but one layer, for any given image patch. Let us denote the m overlapped patches for the i-th image by x_i^1, ..., x_i^m. We extract the D-dimensional representations from these m patches and arrange them in a U × V × D hyper-image denoted by H_i. The arrangement is done such that the (u, v) element of the hyper-image, H_i^{uv}, corresponds to the representation of the (u, v) patch in the original image. In other words, each patch in the original image now corresponds to a point in D dimensions in the hyper-image. Note that U and V are smaller than the image height and width respectively, by a factor proportional to the patch size and overlap factor.
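For illustration, the arrangement just described can be sketched as follows; first_stage_features is a hypothetical stand-in for the trained first-stage network's penultimate-layer output:

```python
import numpy as np

def build_hyper_image(patches, coords, first_stage_features, U, V, D):
    """Arrange per-patch D-dimensional features on a U x V grid.

    first_stage_features: callable mapping a patch to a length-D vector
    coords:               (u, v) grid position of each patch
    """
    H = np.zeros((U, V, D), dtype=np.float32)
    for patch, (u, v) in zip(patches, coords):
        H[u, v, :] = first_stage_features(patch)
    return H

# Toy usage with a dummy feature extractor of dimension D = 8.
dummy = lambda p: np.resize(p.ravel(), 8)
hyper = build_hyper_image([np.ones((4, 4))], [(0, 0)], dummy, U=3, V=3, D=8)
```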
We now train a second stage CNN on the hyper-images
and their labels, which shares architectural similarities with
its counterpart - the first stage CNN. We do not perform
mean-centering or any other pre-processing since these representations are obtained from normalized images.
Each pixel of the hyper-image fed to the second-stage CNN is of the form H_i^{uv} = f_1(f_2(...(f_n(x_i^{uv}))...)) ∀ u, v, where f_1, ..., f_n represent the non-linear operations (i.e. max-pooling, convolutions, ReLUs etc.) applied to an image as it makes a forward pass through the first stage. Then, in the second stage, the label is inferred as y_i = g_1(g_2(...(g_n(H_i))...)), where H_i denotes the hyper-image fed to the second stage for the i-th image and g_1, ..., g_n denote the non-linear operations in the second stage. The following equation expresses y_i as a highly nonlinear function of x_i^{uv} ∀ u, v:

$$y_i = g_1(g_2(\ldots(g_n(\{H_i^{11}, \ldots, H_i^{UV}\}))\ldots)), \quad \text{where } H_i^{uv} = f_1(f_2(\ldots(f_n(x_i^{uv}))\ldots)) \;\; \forall u, v \tag{1}$$
This allows the multi-stage CNN to take decisions based on
context and by jointly considering all the image patches. Both
the stages combined also provide us with higher representational capacity than if we had trained simply on patches followed by averaging.
Note that the convolutional filters of the second stage CNN
only operate on a small neighborhood at a time (usually
3 × 3), where each point in this neighborhood corresponds
to the D-dimensional representation of a patch. So if the
receptive field of a neuron in the first layer is 3 × 3, it is
looking at 9 patches arranged in a square grid. Thus filters
in the early layers learn simple relations between adjacent
patches whereas the deeper ones will start pooling in patch
statistics from all over the image to build a complex model
which relates the image label and all patches in an image.
This two-stage training scheme raises an issue: it needs
the first stage to produce a good representation of the data,
since the second stage is solely dependent on it. Any errors
in the initial stage may be propagated. However, in our experiments we find that the accuracy of the entire architecture is
always more than the accuracy obtained by:
1. Training end-to-end with a deep CNN.
2. Training on patches and averaging patch scores over the
entire image.
3. End-to-end fine-tuning of a pre-trained network.
This points to two possibilities. Firstly, the second stage is
powerful enough to handle slight irregularities in the first
stage representation. This is expected since CNNs are resilient even in the presence of noise or with jittered/occluded
images to some extent (Dodge and Karam 2016). Secondly,
inputting a hyper-image containing D-dimensional points to a
CNN results in a performance boost. For example, predicting
a quality score for the distorted images shown in Fig. 1(a)
and Fig. 1(b) can be viewed as a regression task with multiple
instances. A single patch having lower quality will not necessarily cause the entire image to have lower quality score.
Similarly, an image with multiple mildly distorted patches
at corners may have higher quality score than an image with
a single severely distorted patch placed in a salient position.
Thus quality score is related to the appearance/texture of all
patches, their locations, distortion severity etc. Our network
acquires location sensitivity to a certain extent (shown in experiments on synthetic data) as it learns on a U × V × D grid.
Distortion strengths of the individual patches are encoded in
the D-dimensions, which are unified by the second stage to
get desired output for a given image. On the other hand, an
end-to-end conventional network pipeline will need to learn
both - 1. the discriminative feature space and 2. the patches
which contribute to the image label. The patch-averaging
technique will fail as it will assign equal weight to all the
individual patches potentially containing distortions of different strengths. To summarize, the proposed architecture
attempts to learn the optimal mapping between image label
and spatially correlated patches of that image.
Testing
Firstly, we extract a fixed number of patches from an image,
extract their representations using the first stage, and arrange them in a U × V × D grid to form a hyper-image. Our approach
does not require resizing of input images. Instead, we change
the strides between patches at run-time to obtain the desired
shape. In applications where resizing images could change
the underlying meaning of images or introduce additional
artifacts (e.g. image quality, image forgery classification),
this approach could be useful. The procedure to compute
the strides at run-time is as follows. We first compute (or assume) the maximum height and width among all the images in our dataset. If a test image exceeds those maximum dimensions, we scale it isotropically so that both its dimensions meet the size constraints. Let us denote the maximum height and width by M and N respectively. The numbers of patches required in the x and y directions of the grid are then np_x = ⌈N / sz_x⌉ and np_y = ⌈M / sz_y⌉, where sz_y × sz_x is the patch size used in the first stage. For any new M̂ × N̂ image, the required strides (denoted by s_x and s_y) to obtain a fixed number of patches in the grid are as follows:

$$s_x = \mathrm{rnd}\left(\frac{\hat{N} - sz_x}{np_x - 1}\right), \qquad s_y = \mathrm{rnd}\left(\frac{\hat{M} - sz_y}{np_y - 1}\right) \tag{2}$$
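A direct transcription of this rule, assuming np_x, np_y > 1 and that rnd denotes ordinary rounding, might look like:

```python
import math

def grid_and_strides(M, N, M_hat, N_hat, sz_x, sz_y):
    """Grid size (np_x, np_y) and per-image strides (s_x, s_y) as in Eq. (2)."""
    np_x = math.ceil(N / sz_x)                    # patches across the width
    np_y = math.ceil(M / sz_y)                    # patches down the height
    s_x = round((N_hat - sz_x) / max(np_x - 1, 1))
    s_y = round((M_hat - sz_y) / max(np_y - 1, 1))
    return np_x, np_y, s_x, s_y

# Example: 384x512 maximum size, 32x32 patches, a 300x400 test image.
print(grid_and_strides(384, 512, 300, 400, 32, 32))  # (16, 12, 25, 24)
```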
After obtaining the hyper-image ∈ R^{U × V × D} from the first
stage, we make a forward pass with it through the second
stage to obtain the desired output. In the upcoming sections,
we apply the proposed approach to a synthetic problem and
two real-world challenging vision tasks, review relevant literature, and discuss other aspects of our approach.
Experiments and Results
We evaluate our approach on a synthetic task and two challenging real-world problems - 1. no-reference image quality
assessment (NR-IQA) and 2. image forgery classification.
Apart from the dependence on localized image artifacts, an
additional common factor that aggravates the difficulty level
of both these problems is that the amount of data available is
scarce. Manual labeling is expensive as subject experts need
to be appointed to evaluate the image quality or to detect
forgery. Additionally, artificial generation of data samples is
non-trivial in both these cases. We will now begin by describing the setup for our synthetic task.
Synthetic task: While this task is constructed to be simple, we have included certain features that probe the effectiveness of the proposed approach. The task is to map
an image to a real-valued score that quantifies the artifacts
introduced in that image. The dataset contains 128 × 128
gray-scale images that have a constant colored background.
The color is chosen randomly from [0, 1]. We overlay between one and five patches on that image. Each patch contains
only two shades - a dark one ∈ [0, 0.5) and a bright one
∈ [0.5, 1]. A random percentage of pixels in a patch is made
bright and the others are set to dark. Finally, all the pixels are scrambled. The size of each patch is 16 × 16.
Let us denote the synthetic image by S and the i-th patch by p_i. The number of patches overlaid on S is η (≤ 5). Let s_0 denote the constant background value of S, and let p_i^{jk} denote the (j, k) pixel of p_i. The center of the patch p_i is denoted by c_i, and the image center by ĉ. The score of this image can now be defined as 5 − √(Σ_{i=1}^{η} α_i²), where α_i is:

$$\alpha_i = \frac{\sum_{j=1}^{16}\sum_{k=1}^{16} |s_0 - p_i^{jk}|}{16 \times 16} + 1 - \frac{\|c_i - \hat{c}\|_2}{dist_N} \tag{3}$$

The first term computes the Manhattan distance between the background value and all the pixels of a patch, normalized by the patch area. This can be viewed as a dissimilarity metric between the background and a patch. The second term imposes a penalty if the patch is too close to the image center. The term dist_N is a normalization factor: if the patch lies at any of the four corners, then ‖c_i − ĉ‖_2 = dist_N and the penalty reduces to zero. A higher score indicates the presence of fewer artifacts in an image.
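A sketch of this scoring function, assuming 16 × 16 patches and patch centers given as coordinate pairs:

```python
import numpy as np

def alpha(patch, s0, center, image_center, dist_n):
    """Per-patch term of Eq. (3) for a 16 x 16 patch."""
    dissimilarity = np.abs(s0 - patch).sum() / (16 * 16)
    penalty = 1.0 - np.linalg.norm(np.asarray(center) - np.asarray(image_center)) / dist_n
    return dissimilarity + penalty

def image_score(patches, centers, s0, image_center, dist_n):
    """Image score 5 - sqrt(sum_i alpha_i^2)."""
    a = [alpha(p, s0, c, image_center, dist_n) for p, c in zip(patches, centers)]
    return 5.0 - np.sqrt(np.sum(np.square(a)))
```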
We perform another experiment to test the sensitivity of our approach to the patch locations. To this end, we created 1K images that all had an identical background. The number of patches and their content (i.e. the two shades and the pixel scrambling) were also fixed across all 1K images. The only variable was their positions in the image. We compared our two-stage approach and patch-averaging. Patch-averaging assigns nearly identical scores to all 1K images, whereas our approach gives higher correlation with the ground truth. When everything except patch positions was fixed, our approach had higher correlation and degraded slowly with an increasing number of patches. See Table 1 for results. Synthetic images are provided in the supplementary.

Table 1: Results of experiments on synthetic data

Experiment 1 on synthetic data
Approach            SROCC    PLCC
Patch-averaging     0.9132   0.8982
Proposed            0.9611   0.9586

Experiment 2 on synthetic data
Approach            Mean of SROCC and PLCC
Patch-averaging     0.665, 0.7, 0.544, 0.5, 0.439
Proposed            0.886, 0.811, 0.738, 0.744, 0.718
No-reference image quality assessment (NR-IQA): Images may get distorted due to defects in acquisition devices,
transmission-based errors etc. The task of IQA requires an
automated method to assess the visual quality of an image.
Conventional error metrics such as RMSE/PSNR cannot capture the correlation between image appearance and human perception of the image quality. Two variants of this problem
exist - full-reference IQA and no-reference IQA (NR-IQA).
In the former task, an original image and its distorted counterpart are given. A quality score has to be assigned to the distorted one with respect to the original image. A few representative approaches that try to solve this problem are SSIM
(Wang et al. 2004), MSSIM (Wang, Simoncelli, and Bovik
2003), FSIM (Zhang et al. 2011), VSI (Zhang, Shen, and Li
2014) etc. However, in real-world scenarios, we may not have
a perfect, non-distorted image available for comparison. Thus
the NR-IQA variant has been proposed. In NR-IQA, a single image needs to be assigned a quality score with respect to a non-distorted, unobserved version of that image. This score must
correlate well with human perception of image quality. While
creating ground-truth for this problem, a constant value is
associated with a non-distorted image. This serves as a reference on the quality-score scale. This problem involves developing a feature space that discriminates between different kinds and degrees of distortions. Such a setting is more suitable for learning
schemes, which is reflected by the fact that most approaches
used to tackle this problem belong to the learning paradigm.
A few representative approaches include BRISQUE (Mittal, Moorthy, and Bovik 2012), CORNIA (Ye et al. 2012;
Ye et al. 2013), DIIVINE (Moorthy and Bovik 2011), BLIINDS (Saad, Bovik, and Charrier 2012), CBIQ (Mittal,
Soundararajan, and Bovik 2013), LBIQ (Tang, Joshi, and
Kapoor 2011) and the current CNN-based state-of-art (Kang
et al. 2014).
We perform all our NR-IQA experiments on two widely
used datasets - 1. LIVE (Sheikh et al. 2005) containing 29
reference images, 779 distorted images, 5 distortion types
and 5-7 distortion levels. The images are gray-scale. Through
subjective evaluation tests, each user has assigned a score to
an image according to its perceptual quality. A metric named
difference of mean opinion scores (DMOS) ∈ [0, 100] was
then developed, where 0 indicates highest quality image. 2.
TID 2013 (Ponomarenko et al. 2015) has 25 reference RGB
images, 3000 distorted images, 24 distortion types and 5 distortion levels. In subjective tests, each user was told to assign
a score between 0-9 where 9 indicates best visual quality.
Mean opinion scores (MOS) were calculated and were provided as ground truth scores/labels for each image. Four of
the 24 distortions are common to LIVE and TID datasets.
LIVE has all uniform distortions whereas 12 distortions out
of total 24 in TID 2013 are non-uniform. See Fig. 1 for an
example of uniform/non-uniform distortion. The aim is to
learn a mapping between the images and the scores which
maximizes Spearman’s (SROCC), Pearson’s correlation coefficient (PLCC) and Kendall’s correlation as those are the
standard metrics used for IQA. Since the number of images is so small, we run our algorithm and the competing ones for 100
splits to remove any data-split bias in all four experiments.
As a widely followed convention in IQA, we use 60% of
reference and distorted images for training, 20% each for
validation and testing everywhere.
The subjective tests conducted for these datasets are extensive. Extreme care was taken to make them statistically
unbiased and it is non-trivial to reproduce them. LIVE data
creation needed more than 25K human comparisons for a
total of 779 images. TID 2013 data creation had over 1M visual quality evaluations and 524,340 comparisons. To avoid
geographical bias, the participants came from five countries.
In summary, the constraint of small data in such applications
comes from the real-world and it is difficult to generate new
data or conduct additional tests. The various experiments we
perform to evaluate our approach are as follows.
In all the experiments, before we feed training images to
the first stage CNN, we preprocess it following the approach
of (Mittal, Moorthy, and Bovik 2012) that performs local
contrast normalization as follows.
contrast normalization as follows:

$$\hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C}, \qquad \mu(i,j) = \sum_{k,l=-3}^{3} w_{k,l}\, I_{k,l}(i,j),$$

$$\sigma(i,j) = \sqrt{\sum_{k,l=-3}^{3} w_{k,l}\, \left(I_{k,l}(i,j) - \mu(i,j)\right)^2}. \tag{4}$$
Here, w is a 2-D circular symmetric Gaussian with 3 standard
deviations and normalized to unit volume.
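A rough sketch of this normalization using a Gaussian filter as the window w; the standard deviation and the small stabilizing constant C in the denominator are illustrative choices, and the local variance is only approximated here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(image, sigma=1.0, C=1.0):
    """Approximate Eq. (4): subtract a local weighted mean and divide by the
    local weighted standard deviation; sigma and C are illustrative."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter((image - mu) ** 2, sigma)  # approximation of sigma^2(i, j)
    return (image - mu) / (np.sqrt(var) + C)
```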
Data generation: The data preparation method of (Kang
et al. 2014) is common to both datasets. It extracts overlapping 32 × 32 patches and then assigns each of them the same score as the image. This strategy works well for (Kang et al.
2014) as they only handle LIVE and specific TID distortions
that are shared by LIVE. However, to handle non-uniform
distortions, we make a small but important modification to
their method. We compare the corresponding patches of the
original image and its distorted counterpart with the help of
SSIM (Wang et al. 2004). SSIM is used to measure structural similarity between two images and is robust to slight
alterations unlike RMSE. We keep all the patches belonging
to a distorted image that have low SSIM scores with their
original counterparts. This indicates low similarity between
patches, which can only point to distortion. We select all patches belonging to reference images. Finally, we make the number of reference and distorted patches equal. We call this method selective patch training (a minimal sketch is given below). We now describe certain protocols common to all our NR-IQA experiments.
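The promised sketch of the selection rule, assuming aligned reference/distorted patch pairs and using scikit-image's SSIM; the threshold is an illustrative value, not the one used in our experiments:

```python
from skimage.metrics import structural_similarity as ssim

def select_distorted_patches(ref_patches, dist_patches, threshold=0.9):
    """Keep distorted patches with low SSIM against the aligned reference patch,
    i.e. patches that actually contain distortion (threshold is illustrative)."""
    kept = []
    for ref, dist in zip(ref_patches, dist_patches):
        score = ssim(ref, dist, data_range=ref.max() - ref.min())
        if score < threshold:
            kept.append(dist)
    return kept
```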
Training/testing pipeline: The method of (Kang et al.
2014) trains on 32 × 32 patches and averages patch-level
scores to get a score for an entire image. We train our first
stage on 32 × 32 patches obtained by our selection method.
The second stage is then trained by hyper-images that are
formed using the first-stage patch representations. In our first
three NR-IQA experiments, we use the same first stage as that
of (Kang et al. 2014) to be able to clearly assess the impact
of adding a second stage. Adding a second stage entails little overhead: on LIVE (TID) data, one epoch requires 23 (106) seconds for the first stage and 3 (43) seconds for the second. Both stages, as well as (Kang et al. 2014), use mean absolute error as the loss function.
All the networks are trained using SGD with initial learning rate 0.005, decaying at a rate of 10^{-5} with each update.
Nesterov momentum of 0.9 was used. The learning rate was
reduced by 10 when the validation error reached a plateau.
The first stage was trained for 80 epochs and training was
terminated if no improvement in validation error was seen
for 20 epochs. The second stage was trained for 200 epochs
with the termination criterion set at 40 epochs. Implementations were done in Theano on an Nvidia Tesla K40. We now
describe individual experiments. All the architectures used
for various experiments are listed in the supplementary.
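For concreteness, the reported optimizer settings expressed in (legacy) Keras syntax; the one-layer model is a placeholder, since the actual stage architectures are listed in Table 4 and the original implementation was written in Theano:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Placeholder one-layer regressor; see Table 4 for the real architectures.
model = Sequential([Dense(1, activation='linear', input_shape=(800,))])

# Reported settings: initial learning rate 0.005, decay 1e-5 per update,
# Nesterov momentum 0.9, mean absolute error loss.
model.compile(optimizer=SGD(lr=0.005, decay=1e-5, momentum=0.9, nesterov=True),
              loss='mean_absolute_error')
```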
Experiment 1: We evaluate ours and the competing approaches on LIVE. The input sizes for both stages are 32×32
and 24 × 23 × 800 respectively. Even though LIVE contains
only uniform distortions, our approach marginally improves over (Kang et al. 2014) over 100 splits. This could be due to the
better representational capacity of our two stage network as
all the image patches contain the same kind of distortion. The
results obtained for all the approaches are given in Table 2.
Experiment 2: Intuitively, our selective patch training
should give us a significant boost in the case of TID 2013 data. Since only a few patches are noisy, assigning the same score to all patches will corrupt the feature space during training. To
verify this, we train on TID 2013 data using the approach of
(Kang et al. 2014) - with and without our selection strategy.
Input sizes for both stages are 32 × 32 × 3 and 23 × 31 ×
800. We also evaluate our two-stage network to show its
superiority over both these approaches. We find that selective
patch training boosts Spearman (SROCC) by 0.0992 and
Pearson correlation coefficient (PLCC) by 0.072. Two-stage
training further improves SROCC by 0.0265 and PLCC by
0.0111. The results are given in Table 2. Hereinafter, we
compare with (Kang et al. 2014) assisted with our selective
patch training to understand the benefits of the second stage.
Experiment 3: First, we take four distortions from TID
that are shared with LIVE (thus these are uniform). We
observe a marginal improvement here for similar reasons
as the first experiment. The second part is designed to
clearly show the adverse effects of non-uniform, localized
distortions on the correlations. Out of 24 distortions, we
already have four common ones. We add just the two most non-uniform distortions - 1. Non-eccentricity pattern noise
and 2. Local block-wise distortions. On these six distortions, our approach significantly outperforms that of (Kang
et al. 2014) with patch selection. Thus, to characterize non-uniform distortions, one needs to weight every patch differently, which is exactly what our second stage achieves.
Table 2: Results of NR-IQA experiments

Experiment 1 on LIVE data - 100 splits
Approach                          SROCC    PLCC
DIIVINE                           0.916    0.917
BLIINDS-II                        0.9306   0.9302
BRISQUE                           0.9395   0.9424
CORNIA                            0.942    0.935
CNN + Selective patch training    0.956    0.953
Proposed                          0.9581   0.9585

Experiment 2 on TID data - 100 splits
Approach                                                  SROCC    PLCC
CNN                                                       0.6618   0.6907
CNN + Selective patch training                            0.761    0.7627
CNN + Selective patch training + Stage 2 CNN (proposed)   0.7875   0.7738

Experiment 3 on select distortions of TID - 100 splits
# distortions   CNN + Selective patch training (SROCC, PLCC)   Proposed (SROCC, PLCC)
Four            0.92, 0.921                                    0.932, 0.932
Six             0.625, 0.653                                   0.76, 0.755
All (24)        0.761, 0.763                                   0.788, 0.774

Experiment 4 on TID data - 10 splits
Approach                  SROCC    PLCC
Shallow end-to-end CNN    0.2392   0.4082
Deep end-to-end CNN       0.3952   0.52

Experiment 5 on TID using pre-trained VGG - 10 splits
Approach                  SROCC    PLCC
VGG + patch-averaging     0.6236   0.6843
VGG + second stage CNN    0.6878   0.713
Finally, in the third part, we test on the entire TID 2013.
To the best of our knowledge, no other learning-based approach has attempted the entire data. The only approach we
are aware of that tested on TID is CORNIA (Ye et al. 2012;
Ye et al. 2013). However, even they skip two kinds of block
distortions. The reasons could be lack of training samples
or severe degradation in performance as observed here. We
compare our approach with (Kang et al. 2014) plus selective
patch training. The detailed results are listed in Table 2. The
architecture used was identical to that used in the second
experiment.
Experiment 4: We verify that training networks end-to-end from scratch gives poor performance with such low amounts of training data. We define a shallow and a deep CNN of 8 and 14 layers respectively and train them end-to-end on 384 × 512 images from TID 2013. Out of all the
experiments, this produces the worst performance, making
it clear that end-to-end training on such small data is not an
option. See Table 2 for results.
Experiment 5: A popular alternative when the training
data is scarce is to fine-tune a pre-trained network. We took
VGG-16, pre-trained on ILSVRC 2014. We used it as the first
stage to get patch representations. VGG-16 takes 224 × 224
RGB images whereas we have 32 × 32 RGB patches. Thus
we only consider layers up to “conv4_3” and get its ReLU-activated responses. All the layers up to “conv4_3” reduce a
patch to a size of 2 × 2 × 512. We append two dense layers
of 800 neurons each and train them from scratch. The rest of the layers are frozen. Please refer to the Caffe VGG prototxt for
further architectural details. To train this network, we use
a batch size of 256 and a learning rate of 0.01. We average
the patch scores obtained from fine-tuned VGG and compute
the correlations over 5 splits. In principle, we should get a
performance boost by appending a second stage after VGG,
since it would pool in VGG features for all patches and
regress them jointly. We use a second stage CNN identical
to the one used in experiment 2. We observe that SROCC
and PLCC improves by 0.06 and 0.0287 respectively. For
detailed results, see Table 2. On the other hand, we see a sharp drop in performance for VGG despite it being deep and pre-trained on ImageNet. The reasons for this could be two-fold.
As also observed in (Kang et al. 2014), the filters learned
on NR-IQA datasets turn out to be quite different than those
learned on ImageNet. Thus the semantic concepts represented
by the deeper convolutional layers of pre-trained VGG may
not be relevant for NR-IQA. Secondly, VGG performs a simple mean subtraction on input images, whereas we perform local
contrast normalization (LCN). The latter helps in enhancing
the discontinuities (e.g. edges, noise etc.) and suppresses
smooth regions, making LCN more suitable for NR-IQA.
Our extensive evaluations on NR-IQA show that our approach is better at characterizing local distortions present in
an image. It improves on the current state-of-art (Kang et al.
2014) and other approaches, such as training a shallow/deep
network from scratch, or fine-tuning a pre-trained network.
Image forgery classification: In today’s age of social media, fake multimedia has become an issue of extreme importance. To combat this, it is necessary to improve detection systems to categorize fake posts. Here, we focus on
image forgery/tampering, which is defined as altering an
image by various means and then applying post-processing
(e.g. blurring) to conceal the effects of forging. Image tampering comes in various forms, for example, copy-move
forgery (Bayram, Sencar, and Memon 2009; Sutthiwan et
al. 2011) and manipulating JPEG headers (Farid 2009a;
He et al. 2006). Some other techniques have also been developed to detect forgeries from inconsistent lighting, shadows
(Fan et al. 2012; Kee and Farid 2010) etc. See the surveys
for more information (Bayram, Sencar, and Memon 2008;
Farid 2009b). However, the problem of tampering detection
from a single image without any additional information is
still eluding researchers. The current state-of-the-art uses a block-based approach (Sutthiwan et al. 2011) built on block-DCT features. It forms a Markov transition matrix from these features and finally feeds them into a linear SVM. They carry
out their experiments on the CASIA-2 tampered image detection database¹. It contains 7491 authentic and 5123 tampered
images of varying sizes as well as types.
Data generation: Given a database of authentic and (corresponding) tampered images, we first focus on getting the
contour of tampered region(s) by doing image subtraction
¹ http://forensics.idealtest.org/casiav2/
Figure 2: Authentic (left) and tampered (middle) image. The resultant contour of the tampered region (right). Please zoom in and view in color.

Table 3: Results of image forgery classification

Approach                                            Classification accuracy
End-to-end CNN                                      75.22%
Current state-of-the-art (Sutthiwan et al. 2011)    79.20%
Proposed two-stage CNN                              83.11%
followed by basic morphological operations. The resultant
contour is shown in Fig. 2. We sample 15 pixels along this
contour and crop 64 × 64 patches by keeping the sampled
points as the patch-centroids. Similar to (Sutthiwan et al.
rd
th
2011), we train on 23 of the data and we use 16 data
each for validation and testing. We only subtract the mean
of training patches from each patch and do on-the-fly data
augmentation by flipping the images. Instead of categorizing
patches as authentic/tampered, we develop a ranking-based
formulation, where the rank of an authentic patch is higher
than its counterpart. Note that during testing, we are only
given a single image to be classified as authentic or forged
and thus contour of forged region cannot be found (or used).
Experiment 1: We train an end-to-end deep CNN that
takes an entire image as input and categorizes it as authentic
or tampered. The architecture used is given in supplementary.
It takes 256 × 384 RGB images as input. This size is chosen since it needs the minimum number of resize operations over the
dataset. The classification accuracy is shown in Table 3.
Experiment 2: Our first stage CNN learns to rank authentic patches higher than tampered ones. We propose ranking because every patch may contain a different amount of forged region or blur. This CNN has two identical channels that share weights. Its architecture is given in the supplementary. Let us denote the last dense-layer features obtained from an authentic and a tampered patch by C_Au and C_Tp respectively. We need to learn a weight vector such that w^T C_Au − w^T C_Tp > 0. However, the network trains poorly if we keep feeding the first channel with authentic patches and the second with the tampered ones. Shuffling of patches is necessary in order to achieve convergence. We assign a label of 1 if the two channels get authentic and tampered patches (in that order), else −1. Thus we need y · d(C_1, C_2) > 0, where

$$d(C_1, C_2) = w_2^T f\left(w_1^T (C_1 - C_2)\right),$$

C_i is the feature from the i-th channel and y ∈ {−1, 1} denotes the label. The transformations achieved through the two dense layers and a ReLU are denoted by w_2(·), w_1(·) and f(·) respectively, as shown in Fig. 3. The loss function becomes

$$L = \max(0,\; \delta - y \cdot d(C_1, C_2)).$$

Figure 3: Proposed channel architecture. Weight sharing occurs between both channels. Please zoom in to see details.

The term max(0, ·) is necessary to ensure that only non-negative loss gets backpropagated. The margin δ (= 3) is a user-defined parameter that avoids trivial solutions and introduces class separation.
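A small numeric sketch of this hinge-style ranking loss (names are illustrative):

```python
import numpy as np

def ranking_hinge_loss(d, y, delta=3.0):
    """L = max(0, delta - y * d), with y = +1 when channel 1 saw the authentic
    patch and channel 2 the tampered one, y = -1 otherwise (delta = 3)."""
    return np.maximum(0.0, delta - y * d)

# A correctly ordered pair with a comfortable margin incurs no loss;
# a misordered pair is penalized.
print(ranking_hinge_loss(np.array([4.2, 1.0]), np.array([1, -1])))  # [0. 4.]
```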
Stage 1 representation should discriminate between neighborhood patterns along a tampered and an authentic edge
(since we chose patches centered on the contour of the tampered region). Given an image, we extract patches and form the
hyper-image required to train the second stage. We use binary labels, where 1 denotes an authentic image and 0 a tampered one, along with a binary cross-entropy loss. The architecture of the second
stage is provided in supplementary. To overcome class imbalance, we weigh the samples accordingly. We compare our
approach with an end-to-end CNN network (experiment 1)
and the current state-of-the-art in passive, generic image forgery classification (Sutthiwan et al. 2011). The CNN baseline gives the worst performance, followed by the current state-of-the-art.
This is expected since the latter extracts block-level DCT features whereas the CNN-baseline tries to learn from an entire
image - a considerably difficult task especially when the tampered/forged parts are localized and well camouflaged. Our
hybrid approach beats the CNN baseline and the state-of-the-art by 8% and 4% respectively. All these experiments underline the importance of collectively learning from image patches when the data is scarce and show the flexibility of our approach.
Conclusion
We presented the notion of CNN-based hyper-image representations. Our training scheme involving these hyper-images
excels in scenarios where the label is dependent on the localized artifacts in an image. In these networks, the first stage is
only responsible for learning the discriminative representations of small image patches. The second stage collectively
considers all the patches of an image unlike many other previous approaches. It optimally weighs and pools all the patches,
and develops a mapping between them and the image label.
Our approach enables us to train networks with greater representational capacity than their conventional counterparts.
We observe in all our experiments that the second stage always provides us with a significant improvement. We apply
our approach to a synthetic and two challenging vision tasks
- NR-IQA and image forgery classification. Our approach
comfortably outperforms other CNN baselines as well as the existing state-of-the-art approaches.
Table 4: Architectures of deep networks used in this paper. The term C(n, r, s) denotes n "same" convolutional filters of size r × r with stride s; we omit r and s when r = 3 and s = 1. MP(n) is a max-pooling layer that reduces the image size by a factor of n. FC(n) and Drop(n) denote a dense layer with n neurons and a dropout rate of n respectively.

Synthetic stage 1: Input(128, 128) – C(16) – MP(2) – C(32) – MP(2) – 2 × C(48) – MP(2) – 2 × C(64) – MP(2) – 2 × C(128) – MP(2) – FC(400) – Drop(0.5) – FC(400) – Drop(0.5) – FC(1,'linear')

Synthetic stage 2: Input(10,10,400) – 2 × C(16) – MP(2) – 2 × C(32) – MP(2) – 2 × C(64) – MP(2) – FC(400) – Drop(0.5) – FC(400,'tanh') – Drop(0.5) – FC(1,'linear')

LIVE/TID stage 1: Please refer to Table 5.

LIVE stage 2: Input(24,23,800) – 2 × C(32) – MP(2) – 2 × C(48) – MP(2) – 2 × C(64) – MP(2) – 2 × C(128) – MP(2) – FC(500) – Drop(0.5) – FC(500,'tanh') – Drop(0.5) – FC(1,'linear')

TID stage 2: Input(23,31,800) – 2 × C(64) – MP(2) – 2 × C(64) – MP(2) – 2 × C(128) – MP(2) – 2 × C(128) – MP(2) – FC(500) – Drop(0.5) – FC(500,'tanh') – Drop(0.5) – FC(1,'linear')

Shallow end-to-end network for TID (Expt 4): Input(384, 512, 3) – C(32, 7, 2) – MP(2) – C(64, 7, 1) – MP(2) – 2 × ( C(128) – MP(2) ) – 2 × C(256) – MP(2) – FC(400) – Drop(0.25) – FC(400,'tanh') – Drop(0.25) – FC(1,'linear')

Deep end-to-end network for TID (Expt 4): Input(384, 512, 3) – C(32) – MP(2) – C(64) – MP(2) – 2 × ( C(128) – MP(2) ) – 2 × ( C(256) – C(256) – MP(2) ) – 2 × ( C(512) – C(512) – MP(2) ) – FC(400) – Drop(0.25) – FC(400,'tanh') – Drop(0.25) – FC(1,'linear')

Forgery channel: Input(64,64,3) – 2 × C(64) – MP(2) – 2 × C(128) – MP(2) – 2 × C(128) – MP(2) – 2 × C(256) – MP(2) – FC(500) – Drop(0.5) – FC(500) – Drop(0.5)

Forgery stage 2: Input(15,15,500) – 3 × C(64) – MP(2) – 3 × C(128) – MP(2) – 3 × C(256) – MP(2) – FC(800) – Drop(0.5) – FC(800) – Drop(0.5) – FC(1,'sigmoid')

Forgery end-to-end CNN: Input(256,384,3) – C(32) – MP(2) – C(64) – MP(2) – 2 × C(64) – MP(2) – 2 × C(128) – MP(2) – 2 × C(128) – MP(2) – 2 × C(256) – MP(2) – FC(500) – Drop(0.5) – FC(500) – Drop(0.5) – FC(1,'sigmoid')
Table 5: First stage architecture for LIVE/TID data used in this paper.

Layer details                                                       Connected to
Input – 32 × 32 × 3 (for TID) and 32 × 32 (for LIVE)                –
Convolutional layer – 50 × 7 × 7, name = 'Conv1'                    Input
Max-pooling: size = (26,26), stride = (26,26), name = 'MaxPool'     Conv1
Min-pooling: size = (26,26), stride = (26,26), name = 'MinPool'     Conv1
Concatenation layer: inputs = (MinPool, MaxPool), name = 'Concat'   –
Fully-connected layer: 800 nodes, name = 'FC1'                      Concat
Fully-connected layer: 800 nodes, name = 'FC2'                      FC1
Output node, name = 'output'                                        FC2
Loss = 'Mean absolute error'                                        –
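The Min-pooling layer in Table 5 is uncommon in standard toolkits; it can be computed directly, or equivalently as −MaxPool(−x). A minimal NumPy sketch, assuming the pooling size divides the map size:

```python
import numpy as np

def min_pool(x, size):
    """Min-pooling over non-overlapping size x size blocks of a 2-D map."""
    H, W = x.shape
    blocks = x.reshape(H // size, size, W // size, size)
    return blocks.min(axis=(1, 3))

# A 7x7 'valid' convolution over a 32x32 input yields 26x26 maps, so pooling
# with size and stride 26 (as in Table 5) collapses each map to one value.
print(min_pool(np.arange(26 * 26, dtype=float).reshape(26, 26), 26))  # [[0.]]
```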
References
[Bayram, Sencar, and Memon 2008] Bayram, S.; Sencar, H. T.; and
Memon, N. 2008. A survey of copy-move forgery detection techniques. In IEEE Western New York Image Processing Workshop,
538–542. Citeseer.
[Bayram, Sencar, and Memon 2009] Bayram, S.; Sencar, H. T.; and
Memon, N. 2009. An efficient and robust method for detecting
copy-move forgery. In 2009 IEEE International Conference on
Acoustics, Speech and Signal Processing, 1053–1056. IEEE.
[Dai et al. 2016] Dai, J.; He, K.; and Sun, J. 2016. Instance-aware semantic segmentation via multi-task network cascades.
[Dodge and Karam 2016] Dodge, S., and Karam, L. 2016. Understanding how image quality affects deep neural networks. arXiv
preprint arXiv:1604.04004.
[Fan et al. 2012] Fan, W.; Wang, K.; Cayre, F.; and Xiong, Z. 2012. 3D lighting-based image forgery detection using shape-from-shading. In Signal Processing Conference (EUSIPCO), 2012 Proceedings of the 20th European, 1777–1781. IEEE.
[Farid 2009a] Farid, H. 2009a. Exposing digital forgeries from jpeg
ghosts. IEEE Transactions on information forensics and security
4(1):154–160.
[Farid 2009b] Farid, H. 2009b. Image forgery detection–a survey.
[Goodfellow et al. 2014] Goodfellow, I.; Pouget-Abadie, J.; Mirza,
M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio,
Y. 2014. Generative adversarial nets. In Advances in Neural
Information Processing Systems, 2672–2680.
[He et al. 2006] He, J.; Lin, Z.; Wang, L.; and Tang, X. 2006. Detecting doctored jpeg images via dct coefficient analysis. In European
conference on computer vision, 423–435. Springer.
[He et al. 2015] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015.
Deep residual learning for image recognition. arXiv preprint
arXiv:1512.03385.
[Kang et al. 2014] Kang, L.; Ye, P.; Li, Y.; and Doermann, D. 2014.
Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 1733–1740.
[Kee and Farid 2010] Kee, E., and Farid, H. 2010. Exposing digital
forgeries from 3-d lighting environments. In 2010 IEEE International Workshop on Information Forensics and Security, 1–6. IEEE.
[Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097–1105.
[Mittal, Moorthy, and Bovik 2012] Mittal, A.; Moorthy, A. K.; and
Bovik, A. C. 2012. No-reference image quality assessment in
the spatial domain. Image Processing, IEEE Transactions on
21(12):4695–4708.
[Mittal, Soundararajan, and Bovik 2013] Mittal, A.; Soundararajan,
R.; and Bovik, A. C. 2013. Making a completely blind image
quality analyzer. IEEE Signal Processing Letters 20(3):209–212.
[Moorthy and Bovik 2011] Moorthy, A. K., and Bovik, A. C. 2011.
Blind image quality assessment: From natural scene statistics
to perceptual quality. IEEE Transactions on Image Processing
20(12):3350–3364.
[Ponomarenko et al. 2015] Ponomarenko, N.; Jin, L.; Ieremeiev, O.;
Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli,
M.; Battisti, F.; et al. 2015. Image database tid2013: Peculiarities,
results and perspectives. Signal Processing: Image Communication
30:57–77.
[Recasens et al. 2015] Recasens, A.; Khosla, A.; Vondrick, C.; and
Torralba, A. 2015. Where are they looking? In Advances in Neural
Information Processing Systems, 199–207.
[Saad, Bovik, and Charrier 2012] Saad, M. A.; Bovik, A. C.; and
Charrier, C. 2012. Blind image quality assessment: A natural scene
statistics approach in the dct domain. Image Processing, IEEE
Transactions on 21(8):3339–3352.
[Salimans et al. 2016] Salimans, T.; Goodfellow, I.; Zaremba, W.;
Cheung, V.; Radford, A.; and Chen, X. 2016. Improved techniques
for training gans. arXiv preprint arXiv:1606.03498.
[Sheikh et al. 2005] Sheikh, H. R.; Wang, Z.; Cormack, L.; and
Bovik, A. C. 2005. Live image quality assessment database release
2.
[Simonyan and Zisserman 2014] Simonyan, K., and Zisserman, A.
2014. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556.
[Sutthiwan et al. 2011] Sutthiwan, P.; Shi, Y. Q.; Zhao, H.; Ng, T.T.; and Su, W. 2011. Markovian rake transform for digital image
tampering detection. In Transactions on data hiding and multimedia
security VI. Springer. 1–17.
[Taigman et al. 2014] Taigman, Y.; Yang, M.; Ranzato, M.; and
Wolf, L. 2014. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, 1701–1708.
[Tang, Joshi, and Kapoor 2011] Tang, H.; Joshi, N.; and Kapoor, A.
2011. Learning a blind measure of perceptual image quality. In
Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, 305–312. IEEE.
[Wang et al. 2004] Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility
to structural similarity. Image Processing, IEEE Transactions on
13(4):600–612.
[Wang, Simoncelli, and Bovik 2003] Wang, Z.; Simoncelli, E. P.;
and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. In Signals, Systems and Computers, 2004.
Conference Record of the Thirty-Seventh Asilomar Conference on, volume 2, 1398–1402. IEEE.
[Ye et al. 2012] Ye, P.; Kumar, J.; Kang, L.; and Doermann, D. 2012.
Unsupervised feature learning framework for no-reference image
quality assessment. In Computer Vision and Pattern Recognition
(CVPR), 2012 IEEE Conference on, 1098–1105. IEEE.
[Ye et al. 2013] Ye, P.; Kumar, J.; Kang, L.; and Doermann, D. 2013.
Real-time no-reference image quality assessment based on filter
learning. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 987–994.
[Yosinski et al. 2015] Yosinski, J.; Clune, J.; Nguyen, A.; Fuchs, T.;
and Lipson, H. 2015. Understanding neural networks through deep
visualization. arXiv preprint arXiv:1506.06579.
[Zeiler and Fergus 2014] Zeiler, M. D., and Fergus, R. 2014. Visualizing and understanding convolutional networks. In Computer
vision–ECCV 2014. Springer. 818–833.
[Zhang et al. 2011] Zhang, L.; Zhang, L.; Mou, X.; and Zhang, D.
2011. Fsim: A feature similarity index for image quality assessment.
IEEE Transactions on Image Processing 20(8):2378–2386.
[Zhang, Shen, and Li 2014] Zhang, L.; Shen, Y.; and Li, H. 2014.
Vsi: A visual saliency-induced index for perceptual image quality
assessment. Image Processing, IEEE Transactions on 23(10):4270–
4281.
[Zheng et al. 2015] Zheng, S.; Jayasumana, S.; Romera-Paredes, B.;
Vineet, V.; Su, Z.; Du, D.; Huang, C.; and Torr, P. H. 2015. Conditional random fields as recurrent neural networks. In Proceedings of
the IEEE International Conference on Computer Vision, 1529–1537.
[Zhou et al. 2014] Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.;
and Torralba, A. 2014. Object detectors emerge in deep scene cnns.
arXiv preprint arXiv:1412.6856.
Class-Splitting Generative Adversarial Networks
Grinblat, G.L., Uzal, L.C. and Granitto, P.M.
CIFASIS, French Argentine International Center for Information and Systems Sciences, UNR-CONICET, Argentina
arXiv:1709.07359v1 [stat.ML] 21 Sep 2017
Figure 1: Samples generated with our Splitting GAN method with supervised training on CIFAR-10
dataset. Each line has samples of one of the original classes. Each side has samples corresponding to
one of the two clusters generated for each class. We use ResNet-B architecture (see text for details).
Abstract

Generative Adversarial Networks (GANs) produce systematically better quality samples when class label information is provided, i.e. in the conditional GAN setup. This is still observed for the recently proposed Wasserstein GAN formulation, which stabilized adversarial training and allows considering high capacity network architectures such as ResNet. In this work we show how to boost conditional GAN by augmenting available class labels. The new classes come from clustering in the representation space learned by the same GAN model. The proposed strategy is also feasible when no class information is available, i.e. in the unsupervised setup. Our generated samples reach state-of-the-art Inception scores for CIFAR-10 and STL-10 datasets in both the supervised and unsupervised setups.

1 Introduction

The irruption of Generative Adversarial Nets (GAN) [5] produced a great leap for image data generation. Samples were generated simply by applying a neural network transform to an input random vector sampled from a uniform distribution.
There is no need for any Markov chains or unrolled
approximate inference networks during either training or generation of samples [5]. GAN based generative models did not take long to reach impressive
image quality [13, 14, 17, 10] at least for some specific datasets.
However, current GAN models cannot produce
convincing samples when trained on datasets of images with high variability, even for relatively low
resolution images. On the other hand, it is observed
that sample quality improves when class information is taken into account in a conditional GAN
setup [11, 12]. These findings suggest that it is hard
to learn a multimodal distribution from a smooth
transform of a uniform (or Gaussian) distribution
and that providing categorical class information to
the generator alleviates this problem.
Our proposal is inspired by two observations. First, as mentioned above, conditioning generation on categorical class labels with a high level of abstraction improves image quality. Second, as observed early on in [5, 13], the adversarial network pair learns a useful hierarchy of representations in an unsupervised setup. We propose to exploit the same representation space learned by the GAN model in order to generate new class labels with a high level of abstraction. This is done by applying a simple clustering method in this representation space. By conditioning generation on these new class labels, the model is able to generate better samples. This can be done whether or not prior class information is available.
The main contributions of the present paper are¹:

• We propose a method for increasing the number of class labels during conditional GAN training, based on clustering in the representation space learned by the same GAN model. We base our implementation on the more stable Wasserstein GAN formulation [2, 7].

• We show an effective way of adapting the network architecture to handle this increasing number of classes.

• We show that this splitting GAN strategy improves sample quality both in the supervised and unsupervised tasks, reaching state-of-the-art Inception scores for CIFAR-10 and STL-10 datasets in both tasks.

¹ The source code will be available at https://github.com/CIFASIS

Figure 2: Class-conditioned samples generated with the WGAN-GP method [7] over the ResNet-B architecture for the CIFAR-10 dataset. Each row has samples of a given class.

Figure 3: Real CIFAR-10 samples corresponding to the 20 clusters found by our method. Each line is divided in the same way as in Figure 1.

2 Background

In the original GAN formulation [5], the generator is a neural network that transforms a noise input z ∼ p(z) into fake samples, and the discriminator D is a neural network with a single scalar output with a sigmoid activation. This output is interpreted as the model probability for its input being a real image from the dataset distribution against being a fake image sampled by the generator G. The discriminator D is trained using the standard binary classification formulation, by minimizing the binary cross-entropy between fake and real distributions. On the other hand, the generator G is simultaneously trained to mislead the discriminator. This is accomplished in practice by updating the G parameters to minimise the same loss but with fake samples tagged with a 'true' label [5].

In other words, the discriminator is updated by gradient descent over a negative log likelihood loss

$$L_D = - \mathbb{E}_{x \sim P_r}[\log(D(x))] - \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))], \tag{1}$$

while the generator minimizes

$$L_G = - \mathbb{E}_{z \sim p(z)}[\log(D(G(z)))]. \tag{2}$$

The main issue in the original GAN formulation was the instability of the training process, which made it very hard to improve architectures and to scale up to bigger images. In [13] a deep convolutional architecture was proposed for both generator and discriminator which presents some degree of stability for adversarial training. This work was the first one producing convincing image samples for datasets with low variability (Bedrooms and Faces) and relatively low resolution (64x64). However, the standard GAN formulation fails to generate globally consistent samples when trained on datasets with high variability like ImageNet.

2.1 AC-GAN

In order to tackle datasets with high variability, Odena et al. [12] proposed to improve the quality of the generated samples by adding more structure to the GAN latent space and an auxiliary classifier. This approach requires the dataset to include class labels, i.e. to work in a supervised setting. The generator receives the noise vector z and also the selected label c, so that the generated sample is a function of both. Furthermore, the discriminator has, in addition to the usual objective, the task of correctly classifying the real and generated samples (through an auxiliary classifier). The generator is optimized not only to deceive the discriminator, but also to generate fake samples that minimize the auxiliary classifier error, i.e. to produce well class-defined samples.

2.2 WGAN

In order to address the problem of instability in GAN training, Arjovsky et al. in a series of works [1, 2] proposed a reformulation of the function to be optimized. They argue that the original loss function presents discontinuities and vanishing gradients with respect to the generator parameters. Instead, they proposed a distance between distributions known as the Earth-Mover distance or Wasserstein-1, which captures the cost of transporting mass in order to
transform one distribution into the other. From
this distance they derive the WGAN loss function
for the minimax objective
min max E [D(x)] −
G D∈D x∼Pr
E [D(G(z))]
(3)
z∼p(z)
where D (called the critic in the WGAN formulation) is no longer a binary classifier and is restricted to
be in the set D of 1-Lipschitz functions. Again, z is
a noise vector sampled from a simple distribution
(uniform or Gaussian distribution). The Lipschitzness of D was imposed by weight clipping in this
first version of WGAN.
The importance of Arjovsky's contribution lies in a gain in the robustness of the adversarial
training process and a reduction in the frequency
of the mode collapse phenomenon. Furthermore,
the proposed loss function correlates well with the
observed sample quality as opposed to the original GAN loss which gives little information about
training progress.
Figure 4: Samples generated with the ResNet-B architecture trained with Splitting GAN over CIFAR-10 without class labels (unsupervised).
2.2.1 WGAN-GP

An improved version of WGAN was recently proposed by Gulrajani et al. [7]. They found that the weight clipping can cause convergence problems in some settings and propose to enforce the Lipschitz constraint on the critic D by penalizing its gradient's norm. The penalty term is computed over a set of random points x̂ uniformly sampled from straight lines between real and fake sample points. Naming as P_x̂ the latter distribution, the new loss can be written as

(4)  L = E_{x̃∼P_g}[D(x̃)] − E_{x∼P_r}[D(x)] + λ E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖₂ − 1)²]

where the penalty coefficient is set to λ = 10. This improved WGAN formulation exhibits high robustness against changing model architecture. The authors tested six different network designs for both G and D, which typically break standard GAN training but show stable WGAN training in all cases. Furthermore, the WGAN formulation helps achieve better quality samples. Quantitative results are reported by the authors in terms of the Inception score [14] over the CIFAR-10 dataset, which is the most recurrent benchmark for image generation in the GAN literature. In the unsupervised setting (without using any class information) they reach the state of the art, while in the supervised case (following the strategy of AC-GAN and without tuning any hyperparameter or architecture) they reach second place behind SGAN [8].² All these advantageous features made WGAN –to our knowledge– the current standard formulation for adversarial training.

² By simply duplicating the number of feature maps of Gulrajani's networks we found WGAN outperforms the SGAN score. See Sec. 4.
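The gradient penalty in (4) is straightforward to implement. The following PyTorch-style sketch is our own paraphrase of the idea (the reference implementation is [6]); it assumes image-shaped batches `x_real` and `x_fake`:

```python
import torch

def wgan_gp_critic_loss(D, x_real, x_fake, lam=10.0):
    """Critic loss (4): Wasserstein terms plus gradient penalty at points
    x_hat drawn uniformly on segments between real and fake samples."""
    # eps shape assumes 4-d image batches (B, C, H, W)
    eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_fake).detach().requires_grad_(True)
    grads, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    penalty = ((grad_norm - 1) ** 2).mean()
    return D(x_fake).mean() - D(x_real).mean() + lam * penalty
```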
3 Our Method: Splitting GAN

The main idea is to generate new artificial classes based on the representation learned by the last hidden layer of the critic after enough training iterations. This is done by applying k-means to each class set in this representation space. We divide each set in two clusters only when the class has more samples than a certain threshold. After that, training is resumed, replacing the old labels with the new ones for the entire dataset. With this procedure we need to make two minor modifications to the model architecture before resuming learning (a sketch of one splitting step, covering both modifications, is given after this list):

1. The auxiliary classifier needs to predict a different number of classes, so we extend the last layer of this classifier, adding a copy of the weights of the parent class for each child class.

2. In the conditional WGAN-GP implementation [6], the class labels are injected in each batch normalization layer of the generative network by setting specific gain and bias parameters (γ and b) for each class. We follow this strategy in our proposal and, for the class splitting, we set the new pair (γ, b) for each child class as γ_child = γ_father + ∆γ and b_child = b_father + ∆b, with initialization ∆γ = 0 and ∆b = 0 when the new classes are created. This formulation implies that child classes both start with the father class parameters and eventually become different.
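The following sketch shows one splitting step under stated assumptions; the function name and array layout are ours, not the authors' code. `features` holds the critic's last-hidden-layer activations, `W`/`b_cls` the final auxiliary-classifier layer (one row/entry per class, classes numbered 0..K-1), and `gamma`/`beta` the per-class conditional batch-norm parameters of the generator:

```python
import numpy as np
from sklearn.cluster import KMeans

def split_classes(features, labels, min_size, W, b_cls, gamma, beta):
    """One Splitting GAN step: k-means each large class into two clusters,
    relabel one cluster as a new class, and duplicate per-class parameters."""
    new_labels = labels.copy()
    next_label = labels.max() + 1        # assumes next_label == W.shape[0]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) <= min_size:         # split only classes above threshold
            continue
        km = KMeans(n_clusters=2, n_init=10).fit(features[idx])
        new_labels[idx[km.labels_ == 1]] = next_label
        # 1. extend the classifier with a copy of the parent class weights
        W = np.vstack([W, W[c:c + 1]])
        b_cls = np.append(b_cls, b_cls[c])
        # 2. child batch-norm parameters start at the father's values
        #    (Delta gamma = Delta b = 0 at creation)
        gamma = np.vstack([gamma, gamma[c:c + 1]])
        beta = np.vstack([beta, beta[c:c + 1]])
        next_label += 1
    return new_labels, W, b_cls, gamma, beta
```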
Method                              Inception Score
Improved GAN (-L+HA) [14]           6.86 ± 0.06
EGAN-Ent-VI [4]                     7.07 ± 0.10
DFM [15]                            7.72 ± 0.13
Splitting GAN ResNet-B (ours)       7.80 ± 0.08
WGAN-GP ResNet-B                    7.81 ± 0.10
WGAN-GP ResNet-A [7]                7.86 ± 0.07
Splitting GAN ResNet-A (ours)       7.90 ± 0.09

Table 1: Unsupervised Inception scores on CIFAR-10.

Method                              Inception Score
Improved GAN [14]                   8.09 ± 0.07
AC-GAN [12]                         8.25 ± 0.07
WGAN-GP ResNet-A [7]                8.42 ± 0.10
SGAN [8]                            8.59 ± 0.12
WGAN-GP ResNet-B                    8.67 ± 0.14
Splitting GAN ResNet-A (ours)       8.73 ± 0.08
Splitting GAN ResNet-B (ours)       8.87 ± 0.09

Table 2: Supervised Inception scores on CIFAR-10.

4 Results

To demonstrate the effectiveness of our proposal, we conducted several experiments with CIFAR-10 [9], a dataset containing 50000 32x32 images corresponding to 10 different classes, and the unlabeled set of STL-10, containing 100000 larger and more diverse images [3]. We based our model on the improved WGAN algorithm proposed by Gulrajani et al. [7, 6]. In all cases, during training, we sample 50000 images from the current model to select the best one so far based on the Inception score. Finally, we sample another 50000 with the best model in order to calculate the reported score, following [12].
4.1 CIFAR-10

With CIFAR-10, an unsupervised test was performed starting from all the examples considered as a single class and dividing them into two clusters approximately every 10 epochs. This was done with Gulrajani's ResNet architecture without changes (named ResNet-A) and a modified version (ResNet-B) doubling the number of maps in each convolutional layer. A supervised test was also conducted with these two architectures, starting from the original 10 classes of CIFAR and dividing them in two at approximately 20 training epochs. For comparison we also trained the ResNet-B architecture with the original WGAN-GP algorithm. The results are detailed in Table 1 and Table 2.

Also for comparison, samples obtained with the WGAN-GP supervised model (Figure 2) and obtained with the proposed method (Figure 1) are shown. Figure 3 has real samples of CIFAR-10 corresponding to each of the 20 clusters found. Figures 4 and 5 show generated images and clusters found in the unsupervised test.

Method                              Inception Score
Original Dataset [15]               26.08 ± 0.26
DFM [15]                            8.51 ± 0.13
WGAN-GP ResNet-A                    9.05 ± 0.12
Splitting GAN ResNet-A (ours)       9.50 ± 0.13

Table 3: Unsupervised Inception scores for STL-10.
Figure 6: Images generated by the model trained
on STL-10.
4.2 STL-10
We treat STL-10 in the same way as [15]. That
is, we downsample each dimension by 2, resulting
in 48x48 RGB images. We tested our algorithm
with the ResNet-A architecture, with the minimum changes necessary for the model to generate
48x48 images. Table 3 shows the resulting Inception score. Figures 6 and 7 show the generated
images and the clusters found by the method.
5 Discussion
Several things can be observed from the results presented in the previous section. First, regarding the obtained clusterization of the real samples (Figure 3 for the supervised case and Figure 5 for the unsupervised one), we can visually find rules that define the vast majority of samples, for at least several clusters. As an example, in the supervised case (Figure 3) we can see in the left side of the fourth row cats looking forward and in the left side of the eighth row horse side views. Compare with cats and horses in several positions corresponding to the clusters in the right side. In the unsupervised case (Figure 5) we can see a tendency to generate clusters for cars, planes or ships, but in general they are much more mixed.

Figure 5: The 25 clusters found in the unsupervised case (real CIFAR-10 samples). Each line has two different clusters.
Regarding the generated samples in the supervised case (Figure 1 for our method and Figure 2 for
WGAN-GP), we can see that the class splits allow
the model to generate better samples. Not only for
the more uniform clusters such as the horse side
views or the cats looking forward, but for the whole
original class. Compare for example the fourth row
(cats) or the eighth row (horses) in Figure 1 with
those rows in Figure 2, corresponding to the same
model trained with WGAN-GP. Note that these
samples do not differ too much from those shown in
[7]. Even in classes where the clustering step does
not seem to have found an obvious separation rule,
such as cars (second row), a better sample quality
can be observed than in the original WGAN-GP.
In the unsupervised case with CIFAR-10 (Figure 4), although the Inception score is similar to
the one obtained by the state-of-the-art so far, the
samples generated seem to be of a higher quality.
Nevertheless, they do not reach the quality of the
generated images in a supervised setting. It is always advisable to start the division into clusters
from the predefined classes, if this information is
available.
In the case of STL-10 (Figure 6), there is a noticeable difference in the Inception score. The reason for this may be that STL-10 is a much more diverse
dataset, so a division into a large number of classes
can be beneficial. It should be noted that in this
case the state-of-the-art is much further from the
actual dataset score than in the case of CIFAR-10.
The success of our Splitting GAN method suggests that reinjecting high-level information from the critic to the generative model improves sampling quality. This breaks the strictly adversarial training and allows some degree of information sharing between both networks. We believe that this simple (but successful) strategy could inspire a new and better adversarial training formulation where a small amount of high-level information directly flows from the critic's last layers to the generator input.

Figure 7: The 15 clusters found by the model (real STL-10 samples).
6 Conclusions and Future Work
In this work we showed that our Splitting GAN method allows generating better images. This can be seen in the results on CIFAR-10 and STL-10, for which clearly better images were obtained. This is supported by an Inception score well above the previous state-of-the-art for both datasets.
A future direction of research to improve the current Splitting GAN version is oriented toward understanding how a given model architecture or dataset determines the optimal number of clusters (or classes). Also, clusterization could be enhanced during adversarial training with the addition of an extra loss term like in [16].
We are also currently working on a generalization of the Splitting GAN ideas following two paths. First, making the high-level information from the critic's representation flow continuously to the generator, avoiding special-purpose steps at predefined times in the training process. Second, avoiding the hard-threshold clustering step and replacing it with some sort of continuous representation capturing the same amount of information.
Acknowledgements

The authors acknowledge grant support from ANPCyT PICT-2016-4374.

References

[1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations 2017, 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[3] A. Coates, A. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223, 2011.
[4] Z. Dai, A. Almahairi, P. Bachman, E. Hovy, and A. Courville. Calibrating energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
[6] I. Gulrajani. Improved Training of Wasserstein GANs. https://github.com/igul222/improved_wgan_training, 2017. [Online; accessed 18-Sep-2017].
[7] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
[8] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked generative adversarial networks. arXiv preprint arXiv:1612.04357, 2016.
[9] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
[10] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.
[11] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[12] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2642–2651, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
[13] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[14] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and X. Chen. Improved techniques for training GANs. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2234–2242. Curran Associates, Inc., 2016.
[15] D. Warde-Farley and Y. Bengio. Improving generative adversarial networks with denoising feature matching. In International Conference on Learning Representations 2017, 2017.
[16] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision, pages 499–515. Springer, 2016.
[17] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
RANDOM WALK ON UNIPOTENT MATRIX GROUPS
arXiv:1512.06304v2 [math.PR] 25 Nov 2017
PERSI DIACONIS AND BOB HOUGH
Abstract. We introduce a new method for proving central limit theorems for random walk on nilpotent groups. The method is illustrated in a local central limit theorem on the Heisenberg group, weakening the necessary conditions on the driving measure. As a second illustration, the method is used to study walks on the n × n uni-upper triangular group with entries taken modulo p. The method allows sharp answers to the behavior of individual coordinates: coordinates immediately above the diagonal require order p² steps for randomness, coordinates on the second diagonal require order p steps; coordinates on the kth diagonal require order p^{2/k} steps.
1. Introduction

Let H(R) denote the real Heisenberg group

(1)  H(R) = { [[1, x, z], [0, 1, y], [0, 0, 1]] : x, y, z ∈ R }.

Abbreviate the matrix [[1, x, z], [0, 1, y], [0, 0, 1]] by [x, y, z], identified with a vector in R³. Consider simple random walk on G = H(R) driven by a Borel probability measure µ. For N ≥ 1, the law of this walk is the convolution power µ^{*N} where, for Borel measures µ, ν on G, and for f ∈ C_c(G),

(2)  ⟨f, µ * ν⟩ = ∫_{g,h∈G} f(gh) dµ(g) dν(h).

Say that measure µ is non-lattice (aperiodic) if its support is not contained in a proper closed subgroup of G. For general non-lattice µ of compact support Breuillard [6] uses the representation theory of G to prove a local limit theorem for the law of µ^{*N}, asymptotically evaluating its density in translates of bounded Borel sets. However, in evaluating µ^{*N} on Borel sets translated on both the left and the right he makes a decay assumption on the Fourier transform of the abelianization of the measure µ, and raises the question of whether this is needed. We show that this condition is unnecessary. In doing so we give an alternative approach to the local limit theorem on G, treating it as an extension of the classical local limit theorem on R^n. We also obtain the best possible rate. The method of argument is analogous to (though simpler than) the analysis of quantitative equidistribution of polynomial orbits on G from [19].

2010 Mathematics Subject Classification. Primary 60F05, 60B15, 20B25, 22E25, 60J10, 60E10, 60F25, 60G42.
Key words and phrases. Random walk on a group, Heisenberg group, local limit theorem, unipotent group.
We are grateful to Laurent Saloff-Coste, who provided us a detailed account of previous work.
polynomial orbits on G from [19].
Recall that the abelianization Gab “ G{rG, Gs of G is isomorphic to
2
R with projection p : G Ñ Gab given by pprx, y, zsq “ rx, ys. Assume
that the probability measure µ satisfies the following conditions.
i. Compact support.
ii. Centered. The projection p satisfies
ż
(3)
ppgqdµpgq “ 0.
G
iii. Full dimension. Let Γ “ xsupp µy be the closure of the subgroup of G generated by the support of µ. The quotient G{Γ is
compact.
Section 2 gives a characterization of closed subgroups Γ of G of full
dimension.
Under the above conditions, the central limit theorem for µ is known.
Let pdt qtą0 denote the semigroup of dilations given by
dt prx, y, zsq “ rtx, ty, t2 zs
(4)
and denote the Gaussian semigroup pνt qtą0 defined by its generator (see
[6], [10])
ˇ ż
d ˇˇ
(5)
f pgqdνt pgq
Af “ ˇ
dt
t“0
gPG
1
1
2
“ zBz f pidq ` xyBxy
f pidq ` x2 Bx2 f pidq ` y 2 By2 f pidq
2
2
ş
2
2
2
2
“ xy, z.
where σx “ x2 “ g“rx,y,zsPG x dµpgq and similarly σy “ y 2, σxy
With ν “ ν1 , the central limit theorem for µ states that for f P Cc pGq,
A
E
(6)
f, d ?1 µ˚N Ñ xf, νy.
N
For g P G define the left and right translation operators Lg , Rg :
L2 pGq Ñ L2 pGq,
(7)
Lg f phq “ f pghq,
Rg f phq “ f phgq.
Our local limit theorem in the non-lattice case is as follows.
Theorem 1. Let µ be a Borel probability measure of compact support on G = H(R), which is centered and of full dimension. Assume that the projection to the abelianization µ_ab is non-lattice. Let ν be the limiting Gaussian measure of d_{1/√N} µ^{*N}. Uniformly for g, h ∈ G and for f ∈ C_c(G), as N → ∞,

(8)  ⟨L_g R_h f, µ^{*N}⟩ = ⟨L_g R_h f, d_{√N} ν⟩ + o_µ(‖f‖₁ N^{−2}).

If the Cramér condition holds:

(9)  sup_{λ∈R², |λ|>1} | ∫_{g=[x,y,z]∈G} e^{−iλ·(x,y)} dµ(g) | < 1

then uniformly for g, h ∈ G and Lipschitz f ∈ C_c(G), as N → ∞,

(10)  ⟨L_g R_h f, µ^{*N}⟩ = ⟨L_g R_h f, d_{√N} ν⟩ + O_{f,µ}(N^{−5/2}).

Remark. The rate is best possible as may be seen by projecting to the abelianization. A variety of other statements of the local theorem are also derived, see eqn. (74) in Section 3.

Remark. For non-lattice µ, [6] obtains (8) with h = id and for general h subject to Cramér's condition. A condition somewhat weaker than Cramér's would suffice to obtain (10).

Remark. In the case that µ is supported on a closed discrete subgroup or has a density with respect to Haar measure, [1, 2] obtains an error of O(N^{−5/2}) in approximating µ^{*N}(g), g ∈ Γ.

Our proof of Theorem 1 applies equally well in the case when µ_ab has a lattice component, and gives a treatment which is more explicit than the argument in [1]. To illustrate this, we determine the leading constant in the probability of return to 0 in simple random walk on H(Z).
Theorem 2. Let µ₀ be the measure on H(Z) which assigns equal probability 1/5 to each element of the generating set

(11)  { id, [[1, ±1, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, ±1], [0, 0, 1]] }.

As N → ∞, µ₀^{*N}(id) = 25/(16N²) + O(N^{−5/2}).
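The leading constant is easy to probe by simulation; the following is our own sanity check (not from the paper), using the coordinate form of the group law [x, y, z]·[a, b, c] = [x + a, y + b, z + c + xb]:

```python
import numpy as np

def return_probability(N, trials, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of mu_0^{*N}(id) for the walk of Theorem 2."""
    x = np.zeros(trials, dtype=np.int64)
    y = np.zeros(trials, dtype=np.int64)
    z = np.zeros(trials, dtype=np.int64)
    for _ in range(N):
        s = rng.integers(0, 5, size=trials)       # 0: id, 1/2: A^{+-1}, 3/4: B^{+-1}
        a = (s == 1).astype(np.int64) - (s == 2)
        b = (s == 3).astype(np.int64) - (s == 4)
        z += x * b                                # central coordinate first
        x += a
        y += b
    return np.mean((x == 0) & (y == 0) & (z == 0))

N = 60
p = return_probability(N, trials=2_000_000)
print(p * 16 * N**2 / 25)   # close to 1, up to Monte Carlo noise and O(N^{-1/2})
```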
The basic idea which drives the proof is that permuting segments of generators in a typical word of the walk generates smoothness in the central coordinate of the product, while leaving the abelianized coordinates unchanged. This observation permits passing from a limit theorem to a local limit theorem by smoothing at a decreasing sequence of scales. When studying µ^{*N} near the scale of its distribution, we use a Lindeberg replacement scheme in which one copy at a time of µ is replaced with a Gaussian measure in the abelianization. To handle uniformity in the translation in Theorem 1 in the case where the Cramér condition is not assumed we are forced to treat frequencies α which are unbounded, and thus must consider the large spectrum

(12)  Spec_ϑ(µ_ab) = { α ∈ R² : |µ̂_ab(α)| > 1 − ϑ }

where ϑ → 0 as a function of N. In treating this, we use an approximate lattice structure of Spec_ϑ(µ_ab), see Section 2.1.

As a further application of the word rearrangement technique, answering a question of [13] we determine the mixing time of the central coordinate in a natural class of random walks on the group N_n(Z/pZ) of n × n uni-upper triangular matrices with entries in Z/pZ.
Theorem 3. Let n ≥ 2 and let µ be a probability measure on Z^{n−1} which satisfies the following conditions.

i. Bounded support.
ii. Full support: ⟨supp µ⟩ = Z^{n−1}.
iii. Lazy: µ(0) > 0.
iv. Mean zero: ∑_{x∈Z^{n−1}} x µ(x) = 0.
v. Trivial covariance:
(13)  ( ∑_{x∈Z^{n−1}} x^{(i)} x^{(j)} µ(x) )_{i,j=1}^{n−1} = I_{n−1}.

Push forward µ to a probability measure µ̃ on N_n(Z) via, for all x ∈ Z^{n−1},

(14)  µ̃( I_n + ∑_{i=1}^{n−1} x^{(i)} E_{i,i+1} ) = µ(x),

i.e. the mass µ(x) is assigned to the matrix with ones on the diagonal, the entries x^{(1)}, ..., x^{(n−1)} on the first super-diagonal, and zeros elsewhere. Write Z : N_n(Z) → Z for the upper right corner entry of a matrix of N_n(Z). There exists C > 0 such that, for all primes p, for N ≥ 1,

(15)  ∑_{x mod p} | µ̃^{*N}(Z ≡ x mod p) − 1/p | ≪ exp( −C N / p^{2/(n−1)} ).

Remark. Informally, the top right corner entry mixes in time O(p^{2/(n−1)}). This is tight, since archimedean considerations show that the L¹ distance to uniform is ≫ 1 if the number of steps of the walk is ≪ p^{2/(n−1)}.

Remark. Although we have considered only the top right corner entry in U_n(Z/pZ), this result determines the mixing time of each entry above the diagonal by iteratively projecting to the subgroups determined by the top left or bottom right m × m sub-matrices.

Remark. Our argument permits treating measures not supported on the first super-diagonal essentially without change. We treat the simplified case stated in order to ease the notation.
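For small n the statement is easy to probe numerically. The following sketch is ours; for convenience the step distribution is uniform on {−1, 0, 1}^{n−1}, which is lazy and mean zero but has covariance (2/3)I rather than I, affecting only constants. For n = 3 the corner entry satisfies the Heisenberg recursion z ← z + xb mod p:

```python
import numpy as np

def corner_tv(p=31, N=300, trials=200_000, rng=np.random.default_rng(1)):
    """Estimate the total variation distance of the corner entry Z mod p
    after N steps of the n = 3 walk (estimate carries sampling noise of
    order sqrt(p/trials))."""
    x = np.zeros(trials, dtype=np.int64)   # (1,2) entry
    z = np.zeros(trials, dtype=np.int64)   # (1,3) corner entry
    for _ in range(N):
        a = rng.integers(-1, 2, size=trials)
        b = rng.integers(-1, 2, size=trials)
        z = (z + x * b) % p
        x = (x + a) % p
    freq = np.bincount(z, minlength=p) / trials
    return 0.5 * np.abs(freq - 1.0 / p).sum()

# mixing sets in at N of order p^{2/(n-1)} = p: compare N = p with N = 20p
print(corner_tv(N=31), corner_tv(N=620))
```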
History

Random walk on groups is a mature subject with myriad projections into probability, analysis and applications. Useful overviews with extensive references are in [5], [25]. Central limit theorems for random walk on Lie groups were first proved by [29], with [28] carrying out the details for the Heisenberg group. Best possible results under a second moment condition for nilpotent Lie groups are in [24].

A general local limit theorem for the Heisenberg group appears in [6], which contains a useful historical review. There similar conditions to those of our Theorem 1 are made, but the argument treats only the non-lattice case and needs a stronger condition on the characteristic function of the measure projected to the abelianization. Remarkable local limit theorems are in [1, 2]. The setting is groups of polynomial growth, and so "essentially" nilpotent Lie groups via Gromov's Theorem. The first paper gives quite complete results assuming that the generating measure has a density. The second paper gives results for measures supported on a lattice. The arguments in [2] have been adapted in [7] to give a local limit theorem for non-lattice measures supported on finitely many points.

Just as for the classical abelian case, many variations have been studied. Central limit theorems for walks satisfying a Lindeberg condition on general Lie groups are proved in [26], see also references therein. Large deviations for walks on nilpotent groups are proved in [3]. Central limit theorems on covering graphs with nilpotent automorphism groups are treated in [21, 22]. This allows walks on Cayley graphs with some edges and vertices added and deleted. Brownian motion and heat kernel estimates are also relevant, see [20, 17].

Random walks on finite nilpotent groups are a more recent object of study. Diaconis and Saloff-Coste [15, 14, 13] show that for simple symmetric random walk on Z/nZ, order n² steps are necessary and sufficient for convergence to uniform. The first paper uses Nash inequalities, the second lifts to random walk on the free nilpotent group and applies central limit theorems of Hebisch, Saloff-Coste and finally Harnack inequalities to transfer back to the finite setting. The third paper uses geometric ideas of moderate growth to show that for groups of polynomial growth, diameter-squared steps are necessary and sufficient to reach uniformity. This paper raises the question of the behavior of the individual coordinates on U_n(Z/pZ), which is finally answered in Theorem 3. A direct non-commuting Fourier approach to H(Z/pZ) is carried out in [8], where it is shown that order p log p steps suffice to make the central coordinate random, improved here to order p steps, which is best possible. For a review of the H(Z) results, see [12]. Finally there have been quite a number of papers studying the walk on U_n(Z/pZ) when both p and n grow. We refer to [23], which contains a careful review and definitive results.
Notation and conventions

Vectors from R^d, d ≥ 1 are written in plain text w, their coordinates with superscripts w^{(i)}, and sequences of vectors with an underline. The sum of a sequence of vectors w is indicated w̄. w^t denotes the transpose of w. We have been cavalier in our use of transpose; interpretation of vectors as rows or columns should be clear from the context. We frequently identify matrix elements in the group U_n with vectors from Euclidean space, and have attempted to indicate the way in which the vectors should be interpreted. As a rule of thumb, when the group law is written multiplicatively, the product is in the group U_n, and when additively, in Euclidean space.

The arguments presented use permutation group actions on sequences of vectors. Given integer N ≥ 1, denote S_N the symmetric group on [N] = Z ∩ [1, N], which acts on length N sequences of vectors by permuting the indices:

(16)  S_N ∋ σ : (w₁, ..., w_N) ↦ (w_{σ(1)}, ..., w_{σ(N)}).

C₂ is the two-element group. For d ≥ 1, identify C₂^d with the d-dimensional hypercube {0, 1}^d. 1_d is the element of C₂^d corresponding to the sequence of all 1's on the hypercube. C₂^d acts on sequences of vectors of length 2^d with the jth factor determining the relative order of the first and second blocks of 2^{j−1} elements. To illustrate the action of C₂² on x = x₁x₂x₃x₄:

(17)  (0,0)·x = x₁x₂x₃x₄,  (1,0)·x = x₂x₁x₃x₄,  (0,1)·x = x₃x₄x₁x₂,  (1,1)·x = x₃x₄x₂x₁.
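In code, the action can be realized as follows; this is a hypothetical helper of ours, included only to make (17) concrete (factors applied in increasing order of j):

```python
def cube_act(eps, x):
    """Action of C_2^d on a sequence x of length 2^d: the j-th factor
    (j = 1, ..., d) swaps the first and second leading blocks of 2^{j-1}
    entries."""
    x = list(x)
    for j, e in enumerate(eps, start=1):
        if e:
            b = 2 ** (j - 1)
            x[:2 * b] = x[b:2 * b] + x[:b]
    return x

# reproduces (17):
assert cube_act((1, 1), ["x1", "x2", "x3", "x4"]) == ["x3", "x4", "x2", "x1"]
```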
The 2-norm on R^d is indicated ‖·‖ and ‖·‖_{(R/Z)^d} denotes distance to the nearest integer lattice point. Given ξ ∈ R^d, e_ξ(·) denotes the character of R^d, e_ξ(x) = e^{2πi ξ·x}.

Use δ_x to indicate the Dirac delta measure at x ∈ R^d. Given f ∈ C_c(R^d) and measure µ, ⟨f, µ⟩ denotes the bilinear pairing

(18)  ⟨f, µ⟩ = ∫_{R^d} f(x) dµ(x).

Denote the Fourier transform of a function f, resp. the characteristic function of a measure µ by, for ξ ∈ R^d,

(19)  f̂(ξ) = ∫_{R^d} e_{−ξ}(x) f(x) dx,  µ̂(ξ) = ∫_{R^d} e_{−ξ}(x) dµ(x).

For x ∈ R^d, T_x f denotes the function f translated by x,

(20)  T_x f(y) = f(y − x),  T̂_x f(ξ) = e_{−ξ}(x) f̂(ξ),

and for real t > 0, f_t denotes f dilated by t,

(21)  f_t(x) = t^d f(tx),  f̂_t(ξ) = f̂(ξ/t).

For smooth f,

(22)  f(x) = ∫_{R^d} f̂(ξ) e_ξ(x) dξ.

By a bump function ρ on R^n we mean a smooth non-negative function of compact support and integral 1. The Fourier transform of ρ satisfies: for each A > 0 there is a constant C(A, ρ) > 0 such that

(23)  |ρ̂(ξ)| ≤ C(A, ρ)/(1 + ‖ξ‖)^A.

This follows from integration by parts.

For r ∈ R and σ > 0, η(r, σ) denotes the one-dimensional Gaussian distribution with mean r and variance σ², with density and characteristic function

(24)  η(r, σ)(x) = exp(−(x − r)²/(2σ²))/(√(2π) σ),  η̂(r, σ)(ξ) = e_{−ξ}(r) exp(−2π²σ²ξ²).

A centered (mean zero) normal distribution η in dimension d is specified by its covariance matrix

(25)  σ² = ( ∫_{R^d} x^{(m)} x^{(n)} η(x) )_{m,n=1}^d

and has density and characteristic function

(26)  η(0, σ)(x) = exp(−x^t (σ²)^{−1} x / 2) / ((2π)^{d/2} (det σ²)^{1/2}),  η̂(0, σ)(ξ) = exp(−2π² ξ^t σ² ξ).

All of our arguments concern the repeated convolution µ^{*N} of a fixed measure µ on the upper triangular matrices. The product measure µ^{⊗N} is abbreviated U_N. Asymptotic statements are with respect to N as the large parameter. The Vinogradov notation A ≪ B, resp. A ≫ B, means A = O(B), resp. B = O(A). A ≍ B means A ≪ B and B ≪ A.
2. Background to Theorems 1 and 2

This section collects together several background statements regarding the Heisenberg group, its Gaussian semigroups of probability measures and statements of elementary probability which are needed in the course of the argument.

Write A = [1, 0, 0], B = [0, 1, 0], C = [0, 0, 1]. The following commutators are useful,

(27)  [A, B] = ABA⁻¹B⁻¹ = [0, 0, 1] = C,
      [A⁻¹, B⁻¹] = A⁻¹B⁻¹AB = [0, 0, 1] = C,
      [A, B⁻¹] = AB⁻¹A⁻¹B = [0, 0, −1] = C⁻¹,
      [A⁻¹, B] = A⁻¹BAB⁻¹ = [0, 0, −1] = C⁻¹.

A convenient representation for [x, y, z] ∈ H(R) is C^z B^y A^x. Using the commutator rules above, the multiplication rule for w ∈ H(R)^N becomes

(28)  ∏_{i=1}^N [w_i^{(1)}, w_i^{(2)}, w_i^{(3)}] = [w̄^{(1)}, w̄^{(2)}, w̄^{(3)} + H(w)]

where ‾ and H act on sequences of vectors from R³ via

(29)  w̄ = ∑_i w_i,  H(w) = ∑_{i<j} w_i^{(1)} w_j^{(2)}.

It is also convenient to define

(30)  H*(w) = H(w) − ½ w̄^{(1)} w̄^{(2)} + ½ ∑_{i=1}^N w_i^{(1)} w_i^{(2)} = ½ ∑_{1≤i<j≤N} ( w_i^{(1)} w_j^{(2)} − w_i^{(2)} w_j^{(1)} ),

and for w = [x, y, z], w̃ = [x, y, z − ½xy], so that the multiplication rule may be written

(31)  ∏_{i=1}^N w_i = w̃‾ + [0, 0, ½ w̄^{(1)} w̄^{(2)} + H*(w)],

where w̃‾ denotes the sum of the w̃_i.
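The identities (28)–(31) are easy to check numerically; the following small verification is our own sanity check, using the matrix form (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def heis(x, y, z):
    """Matrix form of [x, y, z] from (1)."""
    return np.array([[1.0, x, z], [0.0, 1.0, y], [0.0, 0.0, 1.0]])

N = 6
w = rng.normal(size=(N, 3))
P = np.eye(3)
for x, y, z in w:
    P = P @ heis(x, y, z)

wbar = w.sum(axis=0)
H = sum(w[i, 0] * w[j, 1] for i in range(N) for j in range(i + 1, N))
Hstar = 0.5 * sum(w[i, 0] * w[j, 1] - w[i, 1] * w[j, 0]
                  for i in range(N) for j in range(i + 1, N))

assert np.allclose(P, heis(wbar[0], wbar[1], wbar[2] + H))          # (28)
assert np.isclose(wbar[2] + H,                                      # (31)
                  (w[:, 2] - 0.5 * w[:, 0] * w[:, 1]).sum()
                  + 0.5 * wbar[0] * wbar[1] + Hstar)
```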
Let S = supp µ. Recall that Γ = ⟨S⟩‾ is the closure of the group generated by S. Its abelianization, Γ_ab = Γ/[Γ, Γ], is equal to p(Γ) where p is the projection p : G → G_ab. Let Γ₀ be the semigroup generated by S. We record the following descriptions of Γ and Γ₀.

Proposition 4. Let Γ ≤ H(R) be a closed subgroup of full dimension. The structure of the abelianization Γ_ab = Γ/[Γ, Γ] and of Γ falls into one of the following cases.

i.  (32)  Γ_ab = R²,  Γ = { [γ, r] : γ ∈ Γ_ab, r ∈ R }.
ii. There exist non-zero orthogonal v₁, v₂ ∈ R², such that
    (33)  Γ_ab = { nv₁ + rv₂ : n ∈ Z, r ∈ R },  Γ = { [γ, r] : γ ∈ Γ_ab, r ∈ R }.
iii. There exist non-zero v₁, v₂ ∈ R², linearly independent over R, such that
    (34)  Γ_ab = { n₁v₁ + n₂v₂ : n₁, n₂ ∈ Z }.
    In this case, Γ falls into one of two further cases:
    iv. Γ = { [γ, r] : γ ∈ Γ_ab, r ∈ R }.
    v. There exists λ ∈ R_{>0} and f : Γ_ab → R such that
       (35)  Γ = { [γ, λ(f(γ) + n)] : γ ∈ Γ_ab, n ∈ Z }.

Proof of Proposition 4. The full dimension condition guarantees that Γ_ab is a two dimensional closed subgroup of R², and the three possibilities given are all such closed subgroups. Likewise, the center of Γ is a non-trivial subgroup of R, hence either R or λ·Z for some real λ > 0. It follows that the fiber over γ ∈ Γ_ab is a translate of the center. Let v₁, v₂ be two linearly independent elements of the abelianization, and choose g₁ = [v₁, z₁], g₂ = [v₂, z₂] in Γ. The commutator [g₁, g₂] = g₁g₂g₁⁻¹g₂⁻¹ is bilinear in v₁, v₂, is non-zero, and lies in the center. It follows that if one of v₁, v₂ may be scaled by a continuous parameter in the abelianization then the center is R. □
Lemma 5. The closure of the semigroup Γ₀ is Γ₀‾ = Γ.

Proof. Write Γ_{0,ab} = p(Γ₀) where p denotes projection to the abelianization G_ab. That Γ‾_{0,ab} = Γ_ab follows from the local limit theorem on R². To treat the central fiber, in the case Γ_ab = R² let 0 < ε < ¼ be a fixed small parameter and choose x, x′, y, y′ in Γ₀ such that

(36)  p(x), p(x′), p(y), p(y′) ≈ e₁, −e₁, e₂, −e₂

where the approximation means to within distance ε. Take a word w in T = {id, x, x′, y, y′} of length 4n with product approximating the identity in Γ_ab to within ε, which is such that each of x, x′, y, y′ appears > (1 − O(ε))n times in w. The abelianization of the product is independent of the ordering of w, but if the letters in w appear in order y, x, y′, x′ then the central element is < −(1 + O(ε))n², while if they appear in order y′, x, y, x′ then the central element is > (1 + O(ε))n². Moving from an ordering of the first type to an ordering of the second by swapping generators one at a time changes the central element by O(1) at each step. Let ε ↓ 0 to deduce that Γ₀‾ contains positive and negative central elements, and hence that Γ₀‾ is a group, equal to Γ. In the case Γ_ab has a one or two dimensional lattice component, replace either e₁ or both e₁, e₂ above with a basis for the lattice component and repeat the argument. □
More quantitative structural statements are as follows.

Lemma 6. Let µ be a measure on H(R), with abelianization µ_ab not supported on a lattice of R². If the Cramér condition holds for the measure µ_ab then it holds also for the measure µ₁ on R obtained by pushing forward µ_ab ⊗ µ_ab by H*(w₁, w₂).

Proof. Let ξ ∈ R, |ξ| ≥ 1 and fix w₂ ∈ supp(µ_ab), bounded away from 0. Write H*(w₁, w₂) = ½ w₁ ∧ w₂ = ½ w₁ · ŵ₂. The claim follows since |∫ e_{−ξ}(H*(w₁, w₂)) dµ_ab(w₁)| is bounded away from 1 uniformly in ξ and w₂. □

Lemma 7. Let µ be a measure on R² of compact support, with support generating a subgroup of R² of full dimension. If µ is lattice supported, assume that the co-volume of the lattice is at least 1. There is a constant c = c(µ) > 0 such that, uniformly in 0 < ξ ≤ ½, for N = N(ξ) = ⌊1/(2ξ)⌋,

(37)  | ∫_{R²×R²} e_{−ξ}(H*(w₁, w₂)) dµ^{*N}(w₁) dµ^{*N}(w₂) | ≤ 1 − c(µ).

Proof. When µ is lattice with lattice of covolume V, the measure H*(w₁, w₂) dµ(w₁) dµ(w₂) is lattice distributed with step size V. Hence the bound on |ξ| suffices to guarantee the claim for N bounded.

For N growing, a standard application of the functional central limit theorem implies that (1/N) H*(w₁, w₂) dµ^{*N}(w₁) dµ^{*N}(w₂) converges to a non-zero density on R as N → ∞. □
Normalize Haar measure on H(R) to be given in coordinates by dg = dx dy dz. The density of a Gaussian measure ν on H(R) can be understood as the rescaled limit of the density of a random walk with independent Gaussian inputs in the abelianization. Consider the distribution on the Heisenberg group given by ν_{2,σ} = [η(0, σ), 0], which has projection to the abelianization given by a two dimensional normal distribution of covariance σ, and with trivial central fiber. Write ν₂ = ν_{2,I₂} for the measure in which σ is the two dimensional identity matrix. The rescaled distribution d_{1/√N} ν₂^{*N} converges to a Gaussian measure ν₀ on H(R) as N → ∞. Note that we have not included a covariance term, which can be accommodated with a linear change of coordinates. Also, we do not consider randomness in the central coordinate as it would scale only as √N, whereas the central coordinate has distribution at scale N.

Let α ∈ R² and ξ ∈ R. Write the modified characteristic function of ν₀ as (recall z̃ = z − xy/2)

(38)  I(α, ξ) = ∫_{g=[x,y,z]∈G} e_{−α}(g_ab) e_{−ξ}(z̃) dν₀(g)

and

(39)  I(α, ξ; N) = ∫_{(R²)^N} e_{−α}( x̄/√N ) e_{−ξ}( H*(x)/N ) dν_{2,ab}^{⊗N}(x).

Lemma 8. Let α ∈ R², ξ ∈ R and let σ² be the covariance matrix of a non-degenerate two dimensional normal distribution of determinant δ² = det σ², δ > 0. Then

(40)  ∫_{(R²)^N} e_{−α}( x̄/√N ) e_{−ξ}( H*(x)/N ) dη(0, σ)^{⊗N}(x) = I(σα, δξ; N).

Proof. Making the change of variables, for each i, σy_i = x_i in the density (1/(2πδ)) exp(−x_i^t σ^{−2} x_i / 2) changes x̄ = σȳ and H*(x) = det σ · H*(y). □

In view of the multiplication rule (31), for ‖α‖, |ξ| = O(1),

(41)  lim_{N→∞} I(α, ξ; N) = I(α, ξ).

The following rate of convergence is given in Appendix A.
Theorem 9. For all α ∈ R², ξ ∈ R such that (1 + ‖α‖²)(1 + ξ²) < N,

(42)  I(α, ξ; N) = ( 1 + O( (1 + ‖α‖²)(1 + ξ²)/N ) ) / ( exp( 2π‖α‖² / (ξ coth πξ) ) cosh πξ ).

In particular,

(43)  I(α, ξ) = exp( −2π‖α‖² / (ξ coth πξ) ) / cosh πξ.

Remark. While I(α, ξ) characterizes the Gaussian measure, it does not behave well under convolution.
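The closed form (43) can be compared against the definition (39) by simulation; the following is our own check, not from the paper (note that 1/(ξ coth πξ) = tanh(πξ)/ξ, with the even extension π at ξ = 0):

```python
import numpy as np

rng = np.random.default_rng(0)

def hstar(w):
    """H*(w) = (1/2) sum_{i<j} (w_i^(1) w_j^(2) - w_i^(2) w_j^(1))."""
    x, y = w[:, 0], w[:, 1]
    cx, cy = np.cumsum(x), np.cumsum(y)
    return 0.5 * (np.sum(x * (cy[-1] - cy)) - np.sum(y * (cx[-1] - cx)))

def I_mc(alpha, xi, N=200, trials=20000):
    """Monte Carlo estimate of I(alpha, xi; N) from definition (39)."""
    vals = []
    for _ in range(trials):
        w = rng.normal(size=(N, 2))          # nu_2 has identity covariance
        phase = alpha @ w.sum(axis=0) / np.sqrt(N) + xi * hstar(w) / N
        vals.append(np.exp(-2j * np.pi * phase))
    return np.mean(vals).real

def I_exact(alpha, xi):
    """Closed form (43)."""
    a2 = float(np.dot(alpha, alpha))
    return np.exp(-2 * np.pi * a2 * np.tanh(np.pi * xi) / xi) / np.cosh(np.pi * xi)

alpha, xi = np.array([0.3, 0.2]), 0.5
print(I_mc(alpha, xi), I_exact(alpha, xi))   # agree to Monte Carlo accuracy
```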
Along with the above characteristic function calculation the following moment calculation is used.

Lemma 10. Let η be a two dimensional Gaussian with identity covariance. For each k ≥ 1, and N ≥ 2,

(44)  E_{η⊗N}[ H*(w)^{2k} ] ≤ µ²_{2k} N^{2k} / 2^{2k}

where µ_{2k} = (2k)!/(2^k k!) is the 2k-th moment of a standard one dimensional Gaussian.

For any compactly supported probability measure µ of mean zero on R², for any k ≥ 1, as N → ∞,

(45)  E_{µ⊗N}[ H*(w)^{2k} ] ≤ O_{k,µ}( N^{2k} ).

Proof. Write

(46)  H*(w) = ½ ∑_{1≤i≠j≤N} (−1)^{1(i>j)} w_i^{(1)} w_j^{(2)}

and expand the moment to find

(47)  E_{η⊗N}[ H*(w)^{2k} ] ≤ (1/2^{2k}) E_{η⊗N}[ ∑_{1≤m₁,...,m_{2k},n₁,...,n_{2k}≤N} w_{m₁}^{(1)} ··· w_{m_{2k}}^{(1)} w_{n₁}^{(2)} ··· w_{n_{2k}}^{(2)} ] = (1/2^{2k}) ( E_{η⊗N}[ ( ∑_{i=1}^N w_i^{(1)} )^{2k} ] )² = µ²_{2k} N^{2k} / 2^{2k}.

When treating general µ of compact support,

(48)  E_{µ⊗N}[ H*(w)^{2k} ] = (1/2^{2k}) E_{µ⊗N}[ ∑_{1≤m₁,...,m_{2k},n₁,...,n_{2k}≤N} ε_{m,n} w_{m₁}^{(1)} ··· w_{m_{2k}}^{(1)} w_{n₁}^{(2)} ··· w_{n_{2k}}^{(2)} ]

with ε_{m,n} ∈ {−1, 0, 1}. The expectation vanishes unless each index in [N] which appears among m₁, ..., m_{2k}, n₁, ..., n_{2k} appears at least twice. There are O(N^{2k}) ways to choose which indices appear and O_k(1) ways to assign m₁, ..., n_{2k} to the indices which appear. For those assignments which don't vanish, the expectation is O_{k,µ}(1) by the compact support. □

We make the following convention regarding rare events. Say that a sequence of measurable events {A_N}_{N≥1} such that A_N ⊂ S^N occurs with high probability (w.h.p.) if the complements satisfy the decay estimate,

(49)  ∀ C ≥ 0,  µ^{⊗N}(A_N^c) = O_C(N^{−C})

as N → ∞. The sequence of complements is said to be negligible. A sequence of events {A_N} which is negligible for µ^{⊗N} is also negligible when µ^{⊗N} is conditioned on a non-negligible sequence of events {B_N}.
2.1. The large spectrum. Let µ be a mean 0, compactly supported probability measure on R². For 0 < ϑ < 1, define the large spectrum of µ̂ to be

(50)  Spec_ϑ(µ) = { α ∈ R² : |µ̂(α)| > 1 − ϑ }

and let

(51)  M_ϑ(µ) = { α ∈ Spec_ϑ(µ) : |µ̂(α)| is a local maximum }.

Let

(52)  µ̌(A) = µ({ x^{−1} : x ∈ A })

and set µ₂ = µ * µ̌. The measure µ₂ is still mean 0, compactly supported and satisfies

(53)  µ̂₂(α) = ∫_{R²} cos(2πα·x) dµ₂(x) = |µ̂(α)|²,

so Spec_ϑ(µ) = Spec_{2ϑ−ϑ²}(µ₂) and M_ϑ(µ) = M_{2ϑ−ϑ²}(µ₂).

For a differential operator D_β = D_{i₁} D_{i₂} ··· D_{i_ℓ}, set |β| = ℓ.

Lemma 11. Let 0 ≤ ϑ ≤ 1, let α ∈ Spec_ϑ(µ₂) and let D_β be a differential operator. Then

(54)  D_β µ̂₂(α) = O_β(ϑ^{1/2}) if |β| is odd, and D_β µ̂₂(α) = D_β µ̂₂(0) + O_β(ϑ) if |β| is even.

Proof. Let D_β = D_{i₁} ··· D_{i_ℓ}. Differentiating under the integral, if ℓ is odd then

(55)  D_β µ̂₂(α) = (−1)^{1(ℓ≡1 mod 4)} (2π)^ℓ ∫_{R²} x_{i₁} ··· x_{i_ℓ} sin(2πα·x) dµ₂(x)

so that, using the compact support of µ₂ and then Cauchy–Schwarz,

(56)  |D_β µ̂₂(α)| ≪_ℓ ∫_{R²} |sin(2πα·x)| dµ₂(x) = ∫_{R²} √(1 − cos²(2πα·x)) dµ₂(x) ≤ ( ∫_{R²} 1 − cos²(2πα·x) dµ₂(x) )^{1/2} ≤ (2ϑ)^{1/2}.

If ℓ is even, then

(57)  D_β µ̂₂(α) = (−1)^{ℓ/2} (2π)^ℓ ∫_{R²} x_{i₁} ··· x_{i_ℓ} cos(2πα·x) dµ₂(x) = D_β µ̂₂(0) − (−1)^{ℓ/2} (2π)^ℓ ∫_{R²} x_{i₁} ··· x_{i_ℓ} (1 − cos(2πα·x)) dµ₂(x).

Again using the compact support, the integral in the last line is O(ϑ). □

The previous lemma has the following consequences.

Lemma 12. There is a constant C₁ = C₁(µ), 0 < C₁ < 1, such that if 0 < ϑ < C₁ then the following hold:

i. There are constants C₂(µ), C₃(µ) > 0 such that if α₀ ∈ M_ϑ(µ₂) and ‖α − α₀‖ < C₂ then
(58)  µ̂₂(α) ≤ µ̂₂(α₀) − C₃‖α − α₀‖².
ii. There is a constant C₄(µ) > 0 such that if α ∈ Spec_ϑ(µ₂) then there exists α₀ ∈ M_ϑ(µ₂) with
(59)  ‖α − α₀‖ ≤ C₄√ϑ.

Furthermore, if µ does not have a lattice component, then there is a growth function F(ϑ) tending to infinity as ϑ ↓ 0 such that, if α₀ ≠ α₁ are distinct elements of M_ϑ(µ₂) then

(60)  ‖α₀ − α₁‖ > F(ϑ).

Proof. To prove i., Taylor expand about α₀ using that the first derivatives vanish and that the third derivatives of µ̂₂ are uniformly bounded. The term from the second degree Taylor expansion may be replaced with the corresponding term at α₀ = 0, making an error which is O(ϑ). This may be absorbed into the constant C₃ by making C₁ sufficiently small.

To prove ii., first reduce C₁(µ) to guarantee that there is a ball B_δ(α), 0 < δ < 1 a fixed constant, such that the maximum of µ̂₂ does not occur on the boundary of the ball. This may be achieved by Taylor expanding about α, which now includes a linear term, which is O(ϑ^{1/2}). Let α₀ be the global maximum in the interior, and now apply part i. and µ̂₂(α₀) − µ̂₂(α) ≤ ϑ to conclude that ‖α − α₀‖ ≪ √ϑ.

To prove the final statement, note that for 0 ≤ ϑ ≤ ¼, if α₀, α₁ ∈ Spec_ϑ(µ₂) then α₀ − α₁ ∈ Spec_{4ϑ}(µ₂), see [27], p. 183. An easier proof is possible here since the spectrum is positive, indeed,

(61)  1 − µ̂₂(α₀ − α₁) = ∫_{R²} (1 − cos(2πα₀·x) cos(2πα₁·x)) dµ₂ − ∫_{R²} sin(2πα₀·x) sin(2πα₁·x) dµ₂.

Bound the first integral by

(62)  ∫_{R²} (1 − cos(2πα₀·x)) dµ₂ + ∫_{R²} cos(2πα₀·x)(1 − cos(2πα₁·x)) dµ₂ ≤ 2ϑ.

By Cauchy–Schwarz, the second integral is bounded in size by

(63)  ( ∫_{R²} 1 − cos²(2πα₀·x) dµ₂ )^{1/2} ( ∫_{R²} 1 − cos²(2πα₁·x) dµ₂ )^{1/2} ≤ 2ϑ.

The claim now follows on considering µ̂₂(α) in growing balls about 0. □

The following lemma gives information about variation of the phase of µ̂(α) in the large spectrum.

Lemma 13. Let µ be a measure of mean 0 and compact support on R², let 0 ≤ ϑ ≤ ½ and let α₀ ∈ M_ϑ(µ). The following hold.

i. Im D_i log µ̂(α₀) = O_µ(ϑ).
ii. Im D_i D_j log µ̂(α₀) = O_µ(√ϑ).
iii. For all α ∈ Spec_{1/2}(µ),
(64)  Im D_{i₁} D_{i₂} D_{i₃} log µ̂(α) = O(1).

Proof. Let µ̂(α₀) = e_{α₀}(φ₀)|µ̂(α₀)|.

For i.,

(65)  D_i log µ̂(α₀) = D_i µ̂(α₀)/µ̂(α₀) = (2πi/|µ̂(α₀)|) ∫_{R²} x_i e_{α₀}(x − φ₀) dµ(x).

Since µ is mean 0,

(66)  Im D_i log µ̂(α₀) = (2π/|µ̂(α₀)|) ∫_{R²} x_i ( cos(2πα₀·(x − φ₀)) − 1 ) dµ(x).

By the compact support,

(67)  |Im D_i log µ̂(α₀)| ≪ ∫_{R²} (1 − cos(2πα₀·(x − φ₀))) dµ(x) = 1 − |µ̂(α₀)| ≤ ϑ.

For ii., write

(68)  D_i D_j log µ̂(α₀) = D_i D_j µ̂(α₀)/µ̂(α₀) − D_i µ̂(α₀) D_j µ̂(α₀)/µ̂(α₀)².

The subtracted term is real since D_i µ̂(α₀)/µ̂(α₀) is imaginary (α₀ is a maximum for |µ̂(α₀)|). Hence, again using the compact support and Cauchy–Schwarz,

(69)  Im D_i D_j log µ̂(α₀) = (−4π²/|µ̂(α₀)|) ∫_{R²} x_i x_j sin(2πα₀·(x − φ₀)) dµ(x) ≪ ∫_{R²} √(1 − cos²(2πα₀·(x − φ₀))) dµ(x) ≪ ( ∫_{R²} 1 − cos(2πα₀·(x − φ₀)) dµ(x) )^{1/2} ≤ ϑ^{1/2}.

To obtain iii., note that the first three derivatives of µ̂ are bounded due to the compact support. □

The results of this section are collected into the following lemma, which permits approximating µ̂(α) in neighborhoods of a local maximum of |µ̂(α)|.

Lemma 14. Let µ be a probability measure on R^n of covariance matrix σ². There is a constant C = C(µ) > 0 such that for all 0 ≤ ϑ ≤ C and for all α₀ ∈ M_ϑ(µ) we have

(70)  µ̂(α₀ + α) = µ̂(α₀) E[e_α(X)] + O( ϑ‖α‖ + ‖α‖³ )

with X distributed as η(0, σ).

Proof. Taylor expand log( µ̂(α₀ + α)/µ̂(α₀) ) in a ball of constant radius about α = 0 to find

(71)  log( µ̂(α₀ + α)/µ̂(α₀) ) = ½ α^t H₀ α + O( ϑ‖α‖ + ϑ^{1/2}‖α‖² + ‖α‖³ ),

with H₀ the Hessian of log µ̂(α) at 0. In making this expansion, we've used the estimates for derivatives of µ̂₂(α₀ + α) in Lemma 11 together with

(72)  Re log( µ̂(α₀ + α)/µ̂(α₀) ) = ½ log( µ̂₂(α₀ + α)/µ̂₂(α₀) )

and the estimates for derivatives of Im log µ̂(α₀ + α) in Lemma 13. Then (we've absorbed the ϑ^{1/2}‖α‖² error term into the others)

(73)  µ̂(α₀ + α) = µ̂(α₀) E[e_α(X)] + O( ϑ‖α‖ + ‖α‖³ ).

Since µ̂ is bounded, this formula holds for all α by adjusting the constants appropriately. □
3. Proof of Theorem 2

We first treat Theorem 2, which is illustrative of the main idea, before proving Theorem 1. Identify n = (n₁, n₂, n₃)^t ∈ Z³ with g_n = [n₁, n₂, n₃] ∈ H(Z) and let ν be the limiting Gaussian measure under convolution by µ.

Proposition 15. For each n = (n₁, n₂, n₃)^t ∈ Z³,

(74)  P_N(n₁, n₂, n₃) := µ^{*N}({g_n}) = (1/N²) · (dν/dg)( d_{1/√N} g_n ) + O( N^{−5/2} ).

Recalling the multiplication rule

(75)  ∏_{i=1}^N [w_i^{(1)}, w_i^{(2)}, 0] = [ w̄^{(1)}, w̄^{(2)}, ½ w̄^{(1)} w̄^{(2)} + H*(w) ],

which is valid for w_i = [±1, 0, 0] or [0, ±1, 0], it suffices to calculate, with U_N standing for the product measure µ^{⊗N} and expectation with respect to U_N,

(76)  U_N( w̄_ab = (n₁, n₂)^t, H*(w) = n₃ − ½n₁n₂ ) = ∫_{(R/Z)³} e_{−α}((n₁, n₂)^t) e_{−ξ}( n₃ − n₁n₂/2 ) E[ e_α(w̄_ab) e_ξ(H*(w)) ] dξ dα.
3.1. Reduction to a central limit theorem. The following two lemmas reduce to a quantitative central limit theorem by truncating frequency space to the scale of the distribution.

Lemma 16. For any A > 0 there is C = C(A) > 0 such that if ‖ξ‖_{R/Z} ≥ C log N / N, then for all α ∈ R²,

(77)  |E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ]| ≤ N^{−A}.

Proof. Choose k = k(ξ) according to the rule

(78)  k(ξ) = 1 if |ξ| > 1/10, and k(ξ) = ⌊1/(2|ξ|)⌋ if |ξ| ≤ 1/10.

Let N′ = ⌊N/(2k)⌋. The group G_k = C₂^{N′} acts on strings of length N with jth factor exchanging the order of the substrings of length k ending at (2j − 1)k and 2jk.

Given string w, write ŵ for the string of length 2N′ with jth entry given by

(79)  ŵ_j = ∑_{i=1}^k w_{(j−1)k+i}.

Write

(80)  H*(w) = H¹_k(w) + H²_k(w),  H²_k(w) = ∑_{j=1}^{N′} H*( ŵ_{2j−1}, ŵ_{2j} ).

Both w̄_ab and H¹_k are invariant under G_k. Exchanging the order of the expectations, which is justified because the group action is finite,

(81)  E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ] = E_{τ∈G_k}[ E_{U_N}[ e_α(w̄_ab) e_ξ(H*(τ·w)) ] ] = E_{U_N}[ e_α(w̄_ab) e_ξ(H¹_k(w)) E_{τ∈G_k}[ e_ξ(H²_k(τ·w)) ] ],

and, therefore,

(82)  |E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ]| ≤ E_{U_N}[ |E_{τ∈G_k}[ e_ξ(H²_k(τ·w)) ]| ].

By Cauchy–Schwarz,

(83)  E_{U_N}[ |E_{τ∈G_k}[ e_ξ(H²_k(τ·w)) ]| ]² ≤ E_{U_N}[ |E_{τ∈G_k}[ e_ξ(H²_k(τ·w)) ]|² ].

One checks, using the product group structure,

(84)  |E_{τ∈G_k}[ e_ξ(H²_k(τ·w)) ]|² = ∏_{j=1}^{N′} ( 1 + cos(2πξ H*(ŵ_{2j−1}, ŵ_{2j})) ) / 2,

and hence, since the coordinates in w are i.i.d.,

(85)  E_{U_N}[ |E_{τ∈G_k}[ e_ξ(H²_k(τ·w)) ]|² ] = ( (1 + E_{U_N}[ cos(2πξ H*(ŵ₁, ŵ₂)) ]) / 2 )^{N′}.

By Lemma 7 the expectation in the cosine is uniformly bounded in size by 1 − c(µ) for some c(µ) > 0. The claim is completed by using the estimate (1 − x)^{N′} ≤ e^{−N′x}, which is valid for 0 ≤ x ≤ 1. □

The following lemma obtains cancellation in α.

Lemma 17. Let A, ε > 0 and 0 ≤ ‖ξ‖_{R/Z} ≤ C log N / N where C is as in Lemma 16. For all N sufficiently large, if ‖α‖_{R²/Z²} ≥ N^{ε−1/2}, then

(86)  E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ] ≤ N^{−A}.

Proof. Let N′ = ⌊N^{1−ε}⌋. Let w₀ be w truncated at N′ and let w_t be the remainder of w, so that w is the concatenation w₀ ⊕ w_t. Write

(87)  H*(w) = H*(w₀) + H*(w₀, w_t) + H*(w_t)

to bound

(88)  |E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ]| ≤ E_{w_t∼µ^{⊗(N−N′)}}[ |E_{w₀∼µ^{⊗N′}}[ e_α(w̄_{0,ab}) e_ξ( H*(w₀) + H*(w₀, w_t) ) ]| ].

Truncate the outer integral to ‖w̄_t‖ ≤ √N log N, which holds w.h.p. Let E_k(x) denote the degree k Taylor expansion of e₁(x) about 0, and recall that the integral form of Taylor's theorem gives

(89)  |e₁(x) − E_k(x)| = | (2πix)^{k+1}/k! ∫₀¹ (1 − t)^k e₁(xt) dt | ≤ (2π|x|)^{k+1}/(k + 1)!.

Use Lemma 10 to choose k = k(A, ε) odd and sufficiently large so that

(90)  E_{w₀∼µ^{⊗N′}}[ |E_k(ξH*(w₀)) − e_ξ(H*(w₀))| ] ≤ ((2π|ξ|)^{k+1}/(k + 1)!) E_{w₀∼µ^{⊗N′}}[ |H*(w₀)|^{k+1} ] ≤ O_{k,µ}( (|ξ|N′)^{k+1} ) ≤ 1/(2N^A).

It thus suffices to estimate

(91)  E_{U_{N′}}[ e_α(w̄_{0,ab}) e_ξ(H*(w₀, w_t)) E_k(ξH*(w₀)) ].

Expand E_k into Poly(N) terms, each depending on boundedly many indices from w₀. Expectation over the remaining terms factors as a product which is exponentially small in a power of N, hence negligible. □
3.2. Quantitative Gaussian approximation. In the range ‖α‖ ≤ N^{ε−1/2}, |ξ| ≪ log N / N, expectation with respect to µ is replaced with expectation taken over a measure with projection to the abelianization given by a Gaussian of the same covariance matrix as µ_ab. The modified characteristic function in the Gaussian case is evaluated precisely in Theorem 9, which finishes the proof.

Let σ² be the covariance matrix of µ_ab and let η(0, σ) be a centered Gaussian of the same covariance. Set δ = det σ. Taylor expand log µ̂(β) about β = 0 to find a cubic map T(β) such that

(92)  µ̂_ab(β) = η̂(β)( 1 + T(β) ) + O( ‖β‖⁴ ).

In the phase e_α(w̄_ab) e_ξ(H*(w)) let

(93)  α_j(w) = α + (ξ/2) [ ∑_{i≠j} (−1)^{1(i<j)} w_i^{(2)}, ∑_{i≠j} (−1)^{1(i>j)} w_i^{(1)} ]^t

so that α_j(w)·w_j is the part which depends on w_j. The Gaussian replacement scheme is as follows.

Lemma 18. Let 0 < ε < ½ and C > 0 be constants. For ‖α‖ ≤ N^{ε−1/2} and |ξ| ≤ C log N / N,

(94)  E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ] = I( √N σ·α, Nδξ ) + O( N^{−1+O(ε)} ) + E_{η⊗N}[ e_α(w̄_ab) e_ξ(H*(w)) ∑_j T(α_j(w)) ].
Proof. Since E_{η⊗N}[ e_α(w̄_ab) e_ξ(H*(w)) ] = I( √N σα, Nδξ; N ), and since in the stated range of α, ξ,

(95)  I( √N σα, Nδξ; N ) = I( √N σα, Nδξ ) + O( N^{−1+O(ε)} )

by Theorem 9, it suffices to prove

(96)  E_{U_N}[ e_α(w̄_ab) e_ξ(H*(w)) ] + O( N^{−1+O(ε)} ) = E_{η⊗N}[ e_α(w̄_ab) e_ξ(H*(w)) ( 1 + ∑_j T(α_j(w)) ) ].

For convenience, write

(97)  T_j(α, ξ, w) = T(α_j(w))

and, for k ≠ j,

(98)  T_j(α, ξ, w) = T_j^k(α, ξ, w) + T̂_j^k(α, ξ, w)

in which T_j^k collects monomials in T_j which depend on w_k, and T̂_j^k collects monomials which don't depend on w_k.

Since the expectation does not depend upon the third coordinate, write µ_ab^{⊗N} in place of U_N. For 0 ≤ j ≤ N consider the measure µ_j = µ_ab^{⊗j} ⊗ η^{⊗(N−j)} in which the first j coordinates are i.i.d. with measure µ_ab and the last N − j coordinates are independent of the first j and are i.i.d. η.

We prove (96) iteratively by showing that, for each k ≥ 1,

(99)  S_k := E_{µ_k}[ e_α(w̄_ab) e_ξ(H*(w)) ( 1 + ∑_{j>k} T_j(α, ξ, w) ) ] = O( N^{−2+O(ε)} ) + E_{µ_{k−1}}[ e_α(w̄_ab) e_ξ(H*(w)) ( 1 + ∑_{j>k−1} T_j(α, ξ, w) ) ] = O( N^{−2+O(ε)} ) + S_{k−1},

which suffices since (96) may be written as |S_N − S₀| = O( N^{−1+O(ε)} ).

By the triangle inequality, and setting apart expectation in the kth variable as the inner integral,

(100)  |S_k − S_{k−1}| ≤ E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ | ∫_{w_k} e_{α_k(w)}(w_k) ( dµ_ab − (1 + T_k(α, ξ, w)) dη ) | ] + E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ | ∫_{w_k} e_{α_k(w)}(w_k) ( ∑_{j>k} T_j(α, ξ, w) ) ( dµ_ab − dη ) | ].

In the first line of the right hand side, note that T_k(α, ξ, w) does not depend on w_k, so that Taylor expanding the exponential obtains a bound of O(‖α_k(w)‖⁴), which suffices since

(101)  E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ ‖α_k(w)‖⁴ ] = O( N^{−2+4ε} ).

In the second line, write T_j = T_j^k + T̂_j^k. Since T̂_j^k does not depend on w_k, matching the first two moments of µ_ab and η gives

(102)  E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ | ∫_{w_k} e(α_k(w)·w_k) ( ∑_{j>k} T̂_j^k(α, ξ, w) ) ( dµ_ab − dη ) | ] ≪ E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ ‖α_k(w)‖³ | ∑_{j>k} T̂_j^k(α, ξ, w) | ] ≪ N^{−2+6ε}.

Finally, to bound the terms from T_j^k, Taylor expand e(α_k(w)·w_k) to degree 2 to bound

(103)  E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ | ∫_{w_k} e(α_k(w)·w_k) ( ∑_{j>k} T_j^k(α, ξ, w) ) ( dµ_ab − dη ) | ] ≪ E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ | ∫_{w_k} (1 + 2πi α_k(w)·w_k) ∑_{j>k} T_j^k(α, ξ, w) ( dµ_ab − dη ) | ] + E_{µ_ab^{⊗(k−1)}⊗η^{⊗(N−k)}}[ ∫_{w_k} ‖α_k(w)‖² |w_k|² | ∑_{j>k} T_j^k(α, ξ, w) | ( dµ_ab + dη ) ].

Since the first two moments of µ_ab and η match, the only terms which survive the first line here are degree 3 in w_k, and these contribute O(N^{−2+3ε}). In the second line here, keeping in mind that T_j^k contains only monomials that have a factor of ξw_k^{(1)} or ξw_k^{(2)}, one obtains a bound of O(N^{−2+5ε}) by applying Cauchy–Schwarz to separate the integrands. This completes the iterative estimate (99). □
We give two estimates for the error term

(104)  𝒯 = E_{η⊗N}[ e_α(w̄_ab) e_ξ(H*(w)) ∑_j T(α_j(w)) ]

depending on the relative sizes of α and ξ. In this part of the argument we assume that η(0, σ) has identity covariance, which may be achieved by rescaling α and ξ by constant factors.

Lemma 19. There exists c > 0 such that, for ‖α‖ ≤ N^{ε−1/2} and |ξ| ≪ log N / N,

(105)  𝒯 = O( N‖α‖³ + N^{5/2}|ξ|³ exp(−cN|ξ|) ).

Proof. Bound each term

(106)  E_{η⊗N}[ e_α(w̄_ab) e_ξ(H*(w)) T(α_j(w)) ]

individually by setting

(107)  k = ⌊1/(2|ξ|)⌋ if |ξ|N > 1, and k = ⌊N/2⌋ otherwise,

and allowing G_k to act as in Lemma 16. Let G′_k be the subgroup omitting the factor that moves w_j. Then G′_k leaves T(α_j(w)) invariant, so that

(106) = E_{η⊗N}[ e_α(w̄_ab) E_{τ∈G′_k}[ e_ξ(H*(τ·w)) ] T(α_j(w)) ].

By Cauchy–Schwarz,

|(106)|² ≤ E_{η⊗N}[ |E_{τ∈G′_k}[ e_ξ(H*(τ·w)) ]|² ] E_{η⊗N}[ |T(α_j(w))|² ].

Arguing as in Lemma 16 now obtains the estimate, for some c > 0,

(108)  |(104)| ≪ N‖α‖³ + N^{5/2}|ξ|³ exp(−c|ξ|N). □

To obtain decay in ‖α‖ instead of |ξ|, consider the degree 3 polynomial ∑_j T(α_j(w)), which consists of monomials of which

i. Those constant in w and cubic in α have absolute sum of coefficients O(N).
ii. Those linear in ξw and quadratic in α have absolute sum of coefficients O(N²).
iii. Those quadratic in ξw and linear in α have absolute sum of coefficients O(N³). Of these, those with a repeated factor from w have absolute sum of coefficients O(N²).
iv. Those that are cubic in ξw have absolute sum of coefficients O(N⁴). Of these, those with a repeated factor from w have absolute sum of coefficients O(N³).

Write M for the typical monic monomial, so that M is of the form

(109)  1, w_{i₁}^{(ε₁)}, w_{i₁}^{(ε₁)} w_{i₂}^{(ε₂)}, w_{i₁}^{(ε₁)} w_{i₂}^{(ε₂)} w_{i₃}^{(ε₃)}

with ε_j ∈ {1, 2}, according as the case is i., ii., iii. or iv. Given a typical monomial M of 𝒯, write ω(M) for the number of variables from w which are of odd degree in M.

Lemma 20. There is a constant c > 0 such that, for 0 < |ξ| ≪ log N / N and √|ξ| ≤ ‖α‖ ≤ N^{ε−1/2},

(110)  𝒯 = O( ‖α‖ (1 + N‖α‖²)(1 + N³|ξ|³) exp( −c‖α‖² min( N, 1/(N|ξ|²) ) ) ).

Proof. Consider the expectation

(111)  E_M = E_{η⊗N}[ M e_α(w̄_ab) e_ξ(H*(w)) ].

We show that, for some c > 0,

(112)  E_M ≪ ‖α‖^{ω(M)} exp( −c‖α‖² min( N, 1/(N|ξ|²) ) ),

which suffices on summing over the monomials described in i. through iv. above.

Let c₁ > 0 be a small constant, and let

(113)  N′ = ⌊c₁/|ξ|⌋ if N|ξ| > c₁, and N′ = N otherwise.

Let w₀ be the initial string of w of length N′ and assume that this includes any variables from M; the general case may be handled by a straightforward modification. Write w = w₀ ⊕ w_t so that w_t contains the remaining variables. Write

(114)  H*(w) = H*(w₀) + H*(w₀, w_t) + H*(w_t).

Write α̂ = α + (ξ/2) [ w̄_t^{(2)}, −w̄_t^{(1)} ]^t. Bound

(115)  |E_M| ≤ E_{w_t∼η^{⊗(N−N′)}}[ |E_{w₀∼η^{⊗N′}}[ M e_{α̂}(w̄₀) e_ξ(H*(w₀)) ]| ].

Expand e_ξ(H*(w₀)) in Taylor series to degree L := ⌊N^{2ε}⌋. The error in doing so is bounded by

(116)  ((2π|ξ|)^{L+1}/(L+1)!) E_{w₀∼η^{⊗N′}}[ |M| |H*(w₀)|^{L+1} ].

Apply Cauchy–Schwarz to remove the monomial M, then insert the moment bound of Lemma 10 to estimate this by

(117)  ≪ ((2π|ξ|)^{L+1}/(L+1)!) E_{w₀∼η^{⊗N′}}[ H*(w₀)^{2L+2} ]^{1/2} ≤ (2π|ξ|)^{L+1} ((2L+2)!/((L+1)!(L+1)! 2^{2L+2})) (N′)^{L+1} ≤ (2π|ξ|N′)^{L+1} ≤ (2πc₁)^{L+1}.

If c₁ < 1/(2π) then this is bounded by, for some c > 0, exp(−cN^{2ε}).

In the Taylor expansion, expectation over w₀ is bounded by

(118)  ∑_{ℓ=0}^L ((2π|ξ|)^ℓ/ℓ!) |E_{η^{⊗N′}}[ M e_{α̂}(w̄₀) H*(w₀)^ℓ ]| ≤ ∑_{ℓ=0}^L ((2π|ξ|)^ℓ/(2^ℓ ℓ!)) ∑_{m,n∈[N′]^ℓ} |E_{η^{⊗N′}}[ M e_{α̂}(w̄₀) w_{m₁}^{(1)} ··· w_{m_ℓ}^{(1)} w_{n₁}^{(2)} ··· w_{n_ℓ}^{(2)} ]|.

The expectation factors as a product. Those indices of [N′] which do not have a monomial factor contribute, for some c₂ > 0,

(119)  ≤ exp( −2π²(N′ − 2ℓ − 3)‖α̂‖² ) ≤ exp( −c₂N′‖α̂‖² ).

Let E (resp. O) be those indices in [N′] which appear a positive even (resp. odd) number of times among the factors in M and m₁, ..., m_ℓ, n₁, ..., n_ℓ.

For indices which appear a positive even number h_j of times, bound |e_{α̂}(w_j)| ≤ 1, so that the expectation is bounded by the h_j-th moment of a 1-dimensional Gaussian,

(120)  µ_{h_j} = h_j! / ( 2^{h_j/2} (h_j/2)! ).

At indices which appear an odd number h_j of times, set e_{α̂}(w_j) = 1 + O(‖α̂‖‖w_j‖). Expectation against 1 vanishes. The remaining expectation is bounded by

(121)  ≪ ‖α̂‖ µ_{h_j+1}.

The configurations in which no index outside M appears with multiplicity greater than 2, and no more than one of the m_j, n_j fall on an odd degree index of M and none fall on an even degree index of M, make a dominant contribution. Call these base configurations. The type of a base configuration is described by a triple (p, ℓ₁, ℓ₂) where p indicates whether an index from m, n falls on each odd degree index present in M, where ℓ₁ counts the number of indices which appear once, and ℓ₂ counts the number of indices which appear twice. Let |p| be the number of indices which fall on M. Thus

(122)  2ℓ = |p| + ℓ₁ + 2ℓ₂.

Let N(p, ℓ₁, ℓ₂) be the number of base configurations that have a given type. There are

(123)  (2ℓ)! / ( |p|! ℓ₁! ℓ₂! 2^{ℓ₂} )

ways to allot the 2ℓ indices of m, n to belong to p, the ℓ₁ singletons or the ℓ₂ doubles, and ≪ (N′)^{ℓ₁+ℓ₂} ways to place the indices in [N′] once they have been so arranged, so that

(124)  N(p, ℓ₁, ℓ₂) ≪ ( (2ℓ)! / ( |p|! ℓ₁! ℓ₂! 2^{ℓ₂} ) ) (N′)^{ℓ₁+ℓ₂}.

Given m, n of type (p, ℓ₁, ℓ₂),

(125)  E_{η^{⊗N′}}[ M e_{α̂}(w̄₀) w_{m₁}^{(1)} ··· w_{m_ℓ}^{(1)} w_{n₁}^{(2)} ··· w_{n_ℓ}^{(2)} ] ≤ exp( −c₂N′‖α̂‖² ) O(1)^{ℓ₁+ℓ₂} ‖α̂‖^{ω(M)−|p|+ℓ₁}.

Indicating restriction of m, n to base configurations with a ′,

(126)  ∑_{ℓ=0}^L ((2π|ξ|)^ℓ/(2^ℓ ℓ!)) ∑′_{m,n∈[N′]^ℓ} |E_{η^{⊗N′}}[ M e_{α̂}(w̄₀) w_{m₁}^{(1)} ··· w_{n_ℓ}^{(2)} ]| ≪ exp( −c₂N′‖α̂‖² ) ∑_p ∑_{ℓ₁,ℓ₂≥0, |p|+ℓ₁ even} (π|ξ|)^{(|p|+ℓ₁+2ℓ₂)/2} ‖α̂‖^{ω(M)−|p|+ℓ₁} ( (|p|+ℓ₁+2ℓ₂)! / ( ((|p|+ℓ₁+2ℓ₂)/2)! |p|! ℓ₁! ℓ₂! 2^{ℓ₂} ) ) O(N′)^{ℓ₁+ℓ₂}.

Bound

(127)  (|p|+ℓ₁+2ℓ₂)! / ( ((|p|+ℓ₁+2ℓ₂)/2)! ℓ₁! ℓ₂! ) ≤ 4^{|p|+ℓ₁+2ℓ₂} ((|p|+ℓ₁)/2)! ℓ₂! / ( ℓ₁! ℓ₂! ) ≤ 4^{|p|+ℓ₁+2ℓ₂} ((|p|+ℓ₁)/2)! / ℓ₁!.

If the constant c₁ in (113) is chosen sufficiently small, then the sum over ℓ₂ converges to a bounded quantity and the sum over ℓ₁ is bounded by

(128)  ≪ exp( O(1)‖α̂‖²|ξ|(N′)² ).

Since |ξ|N′ ≤ c₁, if c₁ is sufficiently small this obtains a bound,

(129)  ≪ exp( −(c₂/2)N′‖α̂‖² ) ( ‖α̂‖^{ω(M)} + ‖α̂‖^{ω(M)−1}|ξ|^{1/2} + ··· + |ξ|^{ω(M)/2} ).

This bound with α in place of α̂ obtains (112) for the dominant terms, and hence bounds the dominant terms unless |ξ| ≫ 1/N. In the remaining case, the bound is acceptable unless ‖α̂‖ < c₃‖α‖ for a small constant c₃ > 0. In this case one obtains ‖ξw̄_t‖ ≫ ‖α‖. Since w̄_t is a Gaussian with variance of order N − N′ < N, the event ‖ξw̄_t‖ ≫ ‖α‖ occurs with w_t-probability, for some c₄ > 0, ≪ exp( −c₄‖α‖²/(Nξ²) ), which is again satisfactory.

To obtain a bound for all configurations from the bound for base ones, configurations with |O| = ℓ₁, |E| = ℓ₂ may be enumerated by adding a number k of double indices to an existing base configuration. There are O(L)^k ways of choosing the indices where the new doubles will be added, O(L)^{2k} ways of inserting the indices into the list m, n, and the new indices make a gain in the calculated moments of O(L)^k. Meanwhile, a factor of |ξ|^k is saved in the outer sum. Recall L ≤ N^{2ε}. If ε < 1/8 then the sum over k is O(1) for all N sufficiently large. □
Proof of Theorem 2. Combining Lemmas 16 and 17 obtains, for any $A > 0$, $0 < \epsilon < \frac14$, for some $c > 0$,
$$(130)\qquad P_N(n_1,n_2,n_3) + O_A(N^{-A}) = \iint\limits_{\substack{\|\alpha\|\le N^{\epsilon-\frac12}\\ |\xi|\le\frac{\log N}{N}}} e_\alpha\big((n_1,n_2)^t\big)\,e_\xi\Big(n_3-\frac{n_1n_2}{2}\Big)\Big[I\big(\sqrt N\,\sigma\alpha,\,N\delta\xi\big)+O(E)\Big]\,d\alpha\,d\xi,$$
where the error term $E$ satisfies the estimates of Lemmas 18, 19 and 20. Over the range of integration the error integrates to $O\big(N^{-\frac52}\big)$.
Making a change of variables and extending the integral to $\mathbb{R}^3$ obtains
$$(131)\qquad P_N(n_1,n_2,n_3) + O\big(N^{-\frac52}\big) = \frac{1}{\delta^2N^2}\int_{\mathbb{R}^3} e_\alpha\Big(\frac{\sigma^{-1}(n_1,n_2)^t}{\sqrt N}\Big)\,e_\xi\Big(\frac{1}{\delta}\,\frac{n_3-\frac{n_1n_2}{2}}{N}\Big)\,I(\alpha,\xi)\,d\alpha\,d\xi.$$
The right hand side is the Gaussian density of the limit theorem. To obtain the return probability to 0, use $\delta^2 = \frac{4}{25}$ and $\int_{\mathbb{R}^3} I(\alpha,\xi)\,d\alpha\,d\xi = \frac14$ in
$$(132)\qquad P_N(0,0,0) = \frac{1}{\delta^2N^2}\int_{\mathbb{R}^3} I(\alpha,\xi)\,d\alpha\,d\xi + O\big(N^{-\frac52}\big) = \frac{25}{16N^2} + O\big(N^{-\frac52}\big).$$
4. Proof of Theorem 1, Cramér case
Theorem 1 treats measures for which the abelianized walk is non-lattice. In this case the fibered distribution is also dense in $\mathbb{R}$, and when the abelianized distribution satisfies the Cramér condition, the fibered distribution does, also. We assume the Cramér condition in this section and treat the general case in the next section. In this case, after making an arbitrary translation on the left and right, the test functions may be taken to be of the form
$$(133)\qquad f([x,y,z]) = F\Big(x-x_0,\ y-y_0,\ z-\frac{xy}{2}-Ax-By-z_0\Big)$$
where $F$ is a Lipschitz function of compact support and $x_0, y_0, z_0, A, B$ are real parameters.
Let $\rho$ be a smooth, compactly supported bump function on $\mathbb{R}^3$; for $t > 0$, $\rho_t(x) = t^3\rho(tx)$ and $F_t = F * \rho_t$ the convolution on $\mathbb{R}^3$. Since $F$ is Lipschitz, $\|F - F_t\|_\infty \ll \frac1t$ as $t\to\infty$. Set
$$(134)\qquad f_t([x,y,z]) = F_t\Big(x-x_0,\ y-y_0,\ z-\frac{xy}{2}-Ax-By-z_0\Big).$$
Choosing $t = t(N) = N^{\frac52}$,
$$(135)\qquad \langle f,\mu^{*N}\rangle = O\big(N^{-\frac52}\big) + \int_{(\alpha,\xi)\in\mathbb{R}^3}\hat F_t(\alpha,\xi)\,\Big\langle e_\alpha\big((x-x_0,y-y_0)^t\big)\,e_\xi\Big(z-\frac{xy}{2}-Ax-By-z_0\Big),\ \mu^{*N}\Big\rangle\,d\alpha\,d\xi.$$
Using the decay of the Fourier transform of the bump function $\rho_t$, truncate the integral to $\|\alpha\|, |\xi| = O(N^{O(1)})$ with admissible error.
Apply the multiplication rule (31) to write the central coordinate of a product of group elements $w$ as
$$(136)\qquad \tilde z = z - \frac{xy}{2} = H^*(w) + \tilde w^{(3)}.$$
The mean of $\tilde w^{(3)}$ is $N\bar{\tilde z}$. Let $\tilde w_0^{(3)} = \tilde w^{(3)} - N\bar{\tilde z}$. Let $\tilde\alpha = \alpha - \xi\cdot(A,B)^t$. Thus
$$(137)\qquad \Big\langle e_\alpha\big((x-x_0,y-y_0)^t\big)\,e_\xi\Big(z-\frac{xy}{2}-Ax-By-z_0\Big),\ \mu^{*N}\Big\rangle = e_\alpha\big({-(x_0,y_0)^t}\big)\,e_\xi\big(N\bar{\tilde z}-z_0\big)\int_{H(\mathbb{R})^N} e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)+\tilde w_0^{(3)}\big)\,d\mu^{\otimes N}.$$
The argument of Lemma 16 applies as before to truncate to $|\xi| = O\big(\frac{\log N}{N}\big)$. This uses Lemma 7 in the case $|\xi| = O(1)$ and Lemma 6 in the case $|\xi| \gg 1$. The argument of Lemma 17 applies as before to truncate to $\|\tilde\alpha\| \ll N^{-\frac12+\epsilon}$.
A small modification is needed to the application of Lemma 18, which we now describe. Here one can now include in the measure $\mu_{ab}$ a third dimension corresponding to $\tilde z - \bar{\tilde z}$, and make $\eta$ a 3-dimensional Gaussian with the same covariance matrix. The Gaussian replacement scheme goes through essentially unchanged, the addition of the third coordinate evaluated at the small frequency $\xi$ making a negligible change; these terms do not need to be included in $T_j(\tilde\alpha,\xi,w)$. The main term becomes
$$(138)\qquad \mathbb{E}_{\eta^{\otimes N}}\Big[e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)\big)\,e_\xi\big(\tilde w_0^{(3)}\big)\Big].$$
After a linear change of coordinates, the third coordinate is independent of the first two and $\tilde\alpha$ is mapped to $\alpha' = \tilde\alpha + O(\xi)$. Hence
$$(139)\qquad \mathbb{E}_{\eta^{\otimes N}}\Big[e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)\big)\,e_\xi\big(\tilde w_0^{(3)}\big)\Big] = \big(1+O(\xi^2N)\big)\,\mathbb{E}_{\eta^{\otimes N}}\big[e_{\alpha'}(w_{ab})\,e_\xi\big(H^*(w)\big)\big].$$
Note that, since $\|\alpha'\|^2 = \|\tilde\alpha\|^2 + O(\|\tilde\alpha\|\,|\xi|) + O(|\xi|^2)$,
$$(140)\qquad I\big(N^{\frac12}\sigma\alpha',\,N\delta\xi\big) = \frac{\exp\Big({-\frac{2\pi\|\alpha'\|^2}{\delta\xi}\coth N\delta\pi\xi}\Big)}{\cosh N\delta\pi\xi} = I\big(N^{\frac12}\sigma\tilde\alpha,\,N\delta\xi\big)\big(1+O(\|\tilde\alpha\|+|\xi|)\big).$$
In the error term,
$$(141)\qquad \mathbb{E}_{\eta^{\otimes N}}\bigg[e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)\big)\,e_\xi\big(\tilde w_0^{(3)}\big)\sum_j T_j(\tilde\alpha,\xi,w)\bigg],$$
the factor of $e_\xi\big(\tilde w_0^{(3)}\big)$ may be removed by Taylor expanding to degree 1, so that this part of the argument is unchanged.
To complete the argument, integrate as before
$$(142)\qquad \langle f,\mu^{*N}\rangle = \int\limits_{\substack{\|\tilde\alpha\|\ll N^{-\frac12+\epsilon}\\ |\xi|\ll\frac{\log N}{N}}}\hat F_t(\alpha,\xi)\,e_\alpha\big({-(x_0,y_0)^t}\big)\,e_\xi\big(N\bar{\tilde z}-z_0\big)\Big[I\big(N^{\frac12}\sigma\alpha',\,N\delta\xi\big)+O(E)\Big]\,d\alpha\,d\xi + O\big(N^{-\frac52}\big).$$
The argument is now completed essentially as before, to find
$$(143)\qquad \langle f,\mu^{*N}\rangle = \langle f_t,\,d_{\sqrt N}\nu\rangle + O\big(N^{-\frac52}\big) = \langle f,\,d_{\sqrt N}\nu\rangle + O\big(N^{-\frac52}\big).$$
5. Proof of Theorem 1, general case
We now consider the case in which $\mu_{ab}$ does not necessarily satisfy a Cramér condition. In this section the test functions take the form
$$(144)\qquad f([x,y,z]) = F\Big(x-x_0,\ y-y_0,\ z-\frac{xy}{2}-Ax-By-z_0\Big)$$
with $F$ continuous and of compact support. Since we ask only for an asymptotic, it suffices by Selberg's theory of band-limited majorants and minorants [18] to assume that $F$ takes the form
$$(145)\qquad F([x,y,z]) = \phi_{ab}(x,y)\,\phi_3(z)$$
with $\phi_{ab}$ and $\phi_3$ functions with Fourier transform of compact support. In this case, writing $\tilde\alpha = \alpha - \xi\cdot(A,B)^t$,
$$(146)\qquad \langle f,\mu^{*N}\rangle = \int\limits_{\|\alpha\|,|\xi| = O(1)}\hat\phi_{ab}(\alpha)\,\hat\phi_3(\xi)\,e_\alpha\big({-(x_0,y_0)^t}\big)\,e_\xi\big(N\bar{\tilde z}-z_0\big)\int_{H(\mathbb{R})^N} e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)+\tilde w_0^{(3)}\big)\,d\mu^{\otimes N}\,d\alpha\,d\xi.$$
Argue as in Lemma 16 to truncate to $|\xi| \ll \frac{\log N}{N}$. Since $A$ and $B$ are unconstrained, a further difficulty is encountered in applying Lemma 17 to truncate $\alpha$. Let $\epsilon > 0$. For $|\xi| \ll \frac{\log N}{N}$ and $\tilde\alpha \notin \mathrm{Spec}_{N^{-1+\epsilon}}(\mu_{ab})$, Lemma 17 demonstrates that the integral over $H(\mathbb{R})^N$ is, for any $A > 0$, $O_A(N^{-A})$. The following modification of Lemma 18 permits an asymptotic evaluation of
$$(147)\qquad \mathbb{E}_{U_N}\Big[e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)+\tilde w_0^{(3)}\big)\Big]$$
at points $\tilde\alpha$ in the large spectrum of $\mu_{ab}$, $\mathrm{Spec}_{N^{-1+\epsilon}}(\mu_{ab})$.
Lemma 21. Let $\mu_{ab}$ have covariance matrix $\sigma^2$ and set $\delta = \det(\sigma)$. Let $0 < \epsilon < \frac14$, $\vartheta = N^{-1+\epsilon}$, let $\tilde\alpha \in \mathrm{Spec}_{N^{-1+\epsilon}}(\mu_{ab})$, $|\xi| \ll \frac{\log N}{N}$, and let $\alpha_0 \in \mathcal{M}_\vartheta(\mu_{ab})$ satisfy $\|\tilde\alpha - \alpha_0\| \ll \sqrt\vartheta$. Then
$$(148)\qquad \mathbb{E}_{U_N}\Big[e_{\tilde\alpha}(w_{ab})\,e_\xi\big(H^*(w)+\tilde w_0^{(3)}\big)\Big] = O\big(N^{-\frac12+O(\epsilon)}\big) + \hat\mu_{ab}(\alpha_0)^N\,I\big(N^{\frac12}\sigma(\tilde\alpha-\alpha_0),\,N\delta\xi\big).$$
Proof. Write $e_\xi\big(\tilde w_0^{(3)}\big) = 1 + O\big(|\xi|\,\tilde w_0^{(3)}\big)$. Since
$$(149)\qquad \mathbb{E}_{U_N}\Big[|\xi|\,\big|\tilde w_0^{(3)}\big|\Big] = O\big(N^{-\frac12+\epsilon}\big),$$
it suffices to prove that
$$(150)\qquad \mathbb{E}_{U_N}\big[e_{\tilde\alpha}(w_{ab})\,e_\xi(H^*(w))\big] = \hat\mu_{ab}(\alpha_0)^N\,I\big(N^{\frac12}\sigma(\tilde\alpha-\alpha_0),\,N\delta\xi\big) + O\big(N^{-\frac12+O(\epsilon)}\big).$$
Let $\eta = \eta(0,\sigma)$ be a centered two dimensional Gaussian with covariance equal to that of $\mu_{ab}$. Let $X$ be distributed according to $\eta$. Set $\alpha = \tilde\alpha - \alpha_0$. By Lemma 14,
$$(151)\qquad \hat\mu_{ab}(\alpha_0+\alpha) = \hat\mu_{ab}(\alpha_0)\,\mathbb{E}_\eta[e_\alpha(X)] + O\big(\vartheta\|\alpha\|+\|\alpha\|^3\big).$$
In analogy with Lemma 18, define
$$(152)\qquad \alpha_j(w) = \alpha + \frac{\xi}{2}\bigg(\sum_{i\ne j}(-1)^{\delta(i<j)}w_i^{(2)},\ \sum_{i\ne j}(-1)^{\delta(i>j)}w_i^{(1)}\bigg)^{\!t}.$$
Set, for $1 \le k \le N$, $\mu_k = \mu_{ab}^{\otimes k}\otimes\eta^{\otimes(N-k)}$. Also set $w_{ab,k} = \sum_{j=1}^k w_{j,ab}$ and
$$(153)\qquad S_k = \mathbb{E}_{\mu_k}\big[e_\alpha(w_{ab})\,e_{\alpha_0}(w_{ab,k})\,e_\xi(H^*(w))\big],$$
so that $S_N = \mathbb{E}_{U_N}\big[e_{\tilde\alpha}(w_{ab})\,e_\xi(H^*(w))\big]$ and
$$(154)\qquad S_0 = I\big(\sqrt N\,\sigma\alpha,\,N\delta\xi;\,N\big) = I\big(\sqrt N\,\sigma\alpha,\,N\delta\xi\big) + O\big(N^{-1+O(\epsilon)}\big).$$
Holding all but the $k$th variable fixed obtains
$$(155)\qquad |S_k - \hat\mu_{ab}(\alpha_0)S_{k-1}| \ll \mathbb{E}_{\mu_{ab}^{\otimes(k-1)}\otimes\eta^{\otimes(N-k)}}\big[\vartheta\|\alpha_k(w)\|+\|\alpha_k(w)\|^3\big] = O\big(N^{-\frac32+O(\epsilon)}\big).$$
The proof is now complete, since
$$(156)\qquad \big|S_N - \hat\mu_{ab}(\alpha_0)^N S_0\big| \le \sum_{k=0}^{N-1}\big|\hat\mu_{ab}(\alpha_0)^kS_{N-k} - \hat\mu_{ab}(\alpha_0)^{k+1}S_{N-k-1}\big| \le \sum_{k=0}^{N-1}|S_{N-k}-\hat\mu_{ab}(\alpha_0)S_{N-k-1}| \ll N^{-\frac12+O(\epsilon)}.$$
Proof of Theorem 1, general case. Let $0 < \epsilon < \frac14$ and let $\vartheta = N^{-1+\epsilon}$. By Lemma 12 there is a constant $c_1 = c_1(\mu)$ such that
$$\mathrm{Spec}_\vartheta(\mu_{ab}) \subset \bigcup_{\alpha_0\in\mathcal{M}_\vartheta} B_{c_1\vartheta^{1/2}}(\alpha_0).$$
Let $\mathrm{supp}\,\hat\phi_{ab} \subset B_R(0)$. Define the set
$$(157)\qquad \Xi_{\mathrm{good}} = \bigg\{\xi : |\xi| \ll \frac{\log N}{N},\ \xi\cdot(A,B)^t \in \bigcup_{\alpha_0\in\mathcal{M}_\vartheta} B_{R+c_1\vartheta^{1/2}}(\alpha_0)\bigg\}.$$
Assume that $N$ is sufficiently large so that if $\alpha_0,\alpha_1$ are distinct points of $\mathcal{M}_\vartheta$ then $\|\alpha_0-\alpha_1\| \ge 2\big(R+c_1\vartheta^{1/2}\big)$. Given $\xi \in \Xi_{\mathrm{good}}$ let $\alpha_0(\xi)$ be the nearest point to $\xi\cdot(A,B)^t$ in $\mathcal{M}_\vartheta$. Also, define
$$(158)\qquad A_\xi = \big\{\alpha \in B_R(0) : \tilde\alpha = \alpha + \xi\cdot(A,B)^t \in \mathrm{Spec}_\vartheta(\mu_{ab})\big\}.$$
By Lemma 12, $|A_\xi| \ll N^{-1+\epsilon}$.
In the evaluation from above,
$$(159)\qquad \langle f,\mu^{*N}\rangle + O_A(N^{-A}) = \int_{\xi\in\Xi_{\mathrm{good}}}\int_{\alpha:\tilde\alpha\in\mathrm{Spec}_\vartheta(\mu_{ab})}\hat\phi_{ab}(\alpha)\,\hat\phi_3(\xi)\,e_\alpha\big({-(x_0,y_0)^t}\big)\,e_\xi\big(N\bar{\tilde z}-z_0\big)\,\mathbb{E}_{\mu^{\otimes N}}\big[e_{\tilde\alpha}(w_{ab})\,e_\xi(H^*(w))\big]\,d\alpha\,d\xi;$$
insert the asymptotic formula for the expectation from Lemma 21. The error term here is bounded by
$$(160)\qquad \int_{\xi:|\xi|\ll\frac{\log N}{N}}\ \int_{\substack{\alpha\in B_R(0),\\ \tilde\alpha\in\mathrm{Spec}_\vartheta(\mu_{ab})}} N^{-\frac12+O(\epsilon)}\,d\alpha\,d\xi \le \int_{\xi\in\Xi_{\mathrm{good}}} N^{-\frac12+O(\epsilon)}\,|A_\xi|\,d\xi = O\big(N^{-\frac52+O(\epsilon)}\big).$$
This leaves the main term
$$(161)\qquad \int_{\xi\in\Xi_{\mathrm{good}}}\int_{\alpha:\tilde\alpha\in\mathrm{Spec}_\vartheta(\mu_{ab})}\hat\phi_{ab}(\alpha)\,\hat\phi_3(\xi)\,e_\alpha\big({-(x_0,y_0)^t}\big)\,e_\xi\big(N\bar{\tilde z}-z_0\big)\times I\big(\sqrt N\,\sigma(\tilde\alpha-\alpha_0(\xi)),\,N\delta\xi\big)\,d\alpha\,d\xi.$$
The contribution from the part of this integral where $\alpha_0(\xi) = 0$ contributes $\langle f,\,d_{\sqrt N}\nu\rangle + O(N^{-A})$, by extending the integral with the same integrand to all of $\mathbb{R}^3$, so it remains to bound the contribution from $\xi$ for which $\alpha_0(\xi) \ne 0$.
For a fixed $\xi \in \Xi_{\mathrm{good}}$, the formula
$$(162)\qquad I(\alpha,\xi) = \frac{\exp\Big({-\frac{2\pi\|\alpha\|^2}{\xi}\coth\xi}\Big)}{\cosh\pi\xi}$$
gives that integration in $\alpha$ is bounded absolutely by
$$(163)\qquad \int_\alpha I\big(\sqrt N\,\sigma(\tilde\alpha-\alpha_0(\xi)),\,N\delta\xi\big)\,d\alpha \ll \frac{\max\big(\frac1N,|\xi|\big)}{\cosh N\pi\xi}.$$
The contribution of $\xi \in \Xi_{\mathrm{good}}$ for which $\alpha_0(\xi) \ne 0$ is bounded by
$$(164)\qquad \int_{\xi\in\Xi_{\mathrm{good}},\,\alpha_0(\xi)\ne0}\frac{\max\big(\frac1N,|\xi|\big)}{\cosh N\pi\xi}\,d\xi.$$
Since the integrand is decreasing, and since the $\alpha_0(\xi)$ are $F(\vartheta)$ spaced, this is bounded by
$$(165)\qquad \ll \frac{1}{F(\vartheta)}\int_0^\infty\frac{\max\big(\frac1N,|\xi|\big)}{\cosh N\pi\xi}\,d\xi = o\Big(\frac{1}{N^2}\Big).$$
6. Random walk on $N_n(\mathbb{Z})$, proof of Theorem 3
The case $n = 2$ is classical so assume $n \ge 3$. Let $M : \mathbb{Z}^{n-1}\to N_n(\mathbb{Z})$ be the map
$$(166)\qquad M : \mathbb{Z}^{n-1} \ni v = \begin{pmatrix} v^{(1)}\\ v^{(2)}\\ \vdots\\ v^{(n-1)}\end{pmatrix} \mapsto \begin{pmatrix} 1 & v^{(1)} & 0 & \cdots & 0\\ 0 & 1 & v^{(2)} & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ & & & 1 & v^{(n-1)}\\ 0 & \cdots & & 0 & 1\end{pmatrix}.$$
Recall that, given $m \in N_n$ we write $Z(m)$ for the upper right corner. Given a sequence of vectors $v = \{v_i\}_{i=1}^N \in (\mathbb{Z}^{n-1})^N$ the central coordinate satisfies the product rule
$$(167)\qquad Z\Big(\prod_{i=1}^N M(v_i)\Big) = \sum_{1\le i_1<i_2<\dots<i_{n-1}\le N} v_{i_1}^{(1)}v_{i_2}^{(2)}\cdots v_{i_{n-1}}^{(n-1)}.$$
Write
$$(168)\qquad Z_n^N = \sum_{1\le i_1<i_2<\dots<i_{n-1}\le N} e_{i_1}^{(1)}\otimes\cdots\otimes e_{i_{n-1}}^{(n-1)}$$
for the corresponding tensor. $Z_{n,\mu}^N$ denotes the measure on $\mathbb{Z}$ obtained by pushing forward measure $\mu$ on $\mathbb{Z}^{n-1}$ via $M$ to measure $\tilde\mu$ on $N_n(\mathbb{Z})$, then obtaining $\langle Z,\tilde\mu^{*N}\rangle$. Equivalently, $Z_{n,\mu}^N$ is the distribution of $Z_n^N$ evaluated on $N$ vectors $v_i$ drawn i.i.d. from $\mu$.
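As a sanity check of the product rule, the following Python sketch (assuming only numpy; the sizes n, N and the seed are arbitrary test choices) multiplies out M(v_1)...M(v_N) and compares the upper right corner with the sum over increasing index tuples in (167).

# Numerical check of (167): Z of the product equals the ordered sum.
import itertools
import numpy as np

def M(v):
    """Embed v in Z^{n-1} as the superdiagonal of an n x n unipotent matrix."""
    n = len(v) + 1
    m = np.eye(n, dtype=np.int64)
    for i, x in enumerate(v):
        m[i, i + 1] = x
    return m

rng = np.random.default_rng(0)
n, N = 4, 7
vs = [rng.integers(-3, 4, size=n - 1) for _ in range(N)]

prod = np.eye(n, dtype=np.int64)
for v in vs:
    prod = prod @ M(v)

rhs = sum(
    np.prod([vs[i][k] for k, i in enumerate(idx)])
    for idx in itertools.combinations(range(N), n - 1)
)
assert prod[0, -1] == rhs, (prod[0, -1], rhs)
print("Z of product:", prod[0, -1])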
Given a probability measure $\nu$ on $\mathbb{Z}$ and prime $p$, Cauchy-Schwarz and Plancherel give
$$(169)\qquad \sum_{x\bmod p}\Big|\nu(x\bmod p)-\frac1p\Big| \le \Bigg(\sum_{0\not\equiv\xi\bmod p}\Big|\hat\nu\Big(\frac{\xi}{p}\Big)\Big|^2\Bigg)^{\!\frac12}$$
where
$$(170)\qquad \hat\nu(\alpha) = \sum_{n\in\mathbb{Z}} e_{-\alpha}(n)\,\nu(n).$$
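The inequality (169) is easy to check numerically. The following Python sketch does so for a random measure of finite support, assuming the convention $e_\theta(x) = e^{2\pi i\theta x}$, so that $e_{-\alpha}(n) = e^{-2\pi i\alpha n}$ in (170).

# Numerical check of (169) for a random finitely supported measure on Z.
import numpy as np

p = 7
rng = np.random.default_rng(1)
support = np.arange(-10, 11)
weights = rng.random(support.size)
nu = weights / weights.sum()            # a probability measure

# Left side: distance from uniform mod p.
mod_mass = np.zeros(p)
for n, w in zip(support, nu):
    mod_mass[n % p] += w
lhs = np.abs(mod_mass - 1 / p).sum()

# Right side: root of the sum over nonzero frequencies of |nu_hat(xi/p)|^2.
def nu_hat(alpha):
    return np.sum(nu * np.exp(-2j * np.pi * alpha * support))

rhs = np.sqrt(sum(abs(nu_hat(xi / p)) ** 2 for xi in range(1, p)))
assert lhs <= rhs + 1e-12
print(lhs, rhs)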
Theorem 3 thus reduces to the following estimate on the characteristic function of $Z_{n,\mu}^N$.
Proposition 22. Let $n \ge 3$ and let $\mu$ be a measure on $\mathbb{Z}^{n-1}$ satisfying the same conditions as in Theorem 3. There exists a constant $C > 0$ such that for all $N > 0$ and all $0 < |\xi| \le \frac12$,
$$(171)\qquad \big|\hat Z_{n,\mu}^N(\xi)\big| \ll \exp\big({-CN|\xi|^{\frac{2}{n-1}}}\big).$$
Deduction of Theorem 3. Recall $N = cp^{\frac{2}{n-1}}$ and let $c \ge 1$. Apply the upper bound of Proposition 22. By (169),
$$(172)\qquad \Bigg(\sum_{x\bmod p}\Big|Z_{n,\mu}^N(x)-\frac1p\Big|\Bigg)^{\!2} \le \sum_{0<|\xi|<\frac p2}\Big|\hat Z_{n,\mu}^N\Big(\frac{\xi}{p}\Big)\Big|^2 \ll \sum_{0<|\xi|<\frac p2}\exp\big({-Cc|\xi|^{\frac{2}{n-1}}}\big) \ll \exp(-Cc).$$
6.1. Proof of Proposition 22. Let $C_2^{n-2}$ act on blocks of vectors of length $k2^{n-2}$ with the $j$th factor from $C_2^{n-2}$, $j \ge 1$, switching the relative order of the first $k2^{j-1}$ and second $k2^{j-1}$ indices. Thus, for instance, in case $n = 5$, if each of $x_1,\dots,x_8$ represents a block of $k$ consecutive indices and $x = x_1x_2x_3x_4x_5x_6x_7x_8$,
$$(173)\qquad \tau_2 x = x_3x_4x_1x_2x_5x_6x_7x_8,\qquad \tau_1\tau_3 x = \tau_3\tau_1 x = x_5x_6x_7x_8x_2x_1x_3x_4,\qquad \tau_1\tau_2\tau_3 x = x_5x_6x_7x_8x_3x_4x_2x_1.$$
For $k \ge 1$ set $N' = \big\lfloor\frac{N}{k2^{n-2}}\big\rfloor$ and let $G_k = (C_2^{n-2})^{N'}$. $G_k$ acts on sequences of length $N$ with, for $j \ge 1$, the $j$th factor of $G_k$ acting on the contiguous subsequence of indices of length $k2^{n-2}$ ending at $jk2^{n-2}$. For fixed $k$ and fixed $w \in W_N = (\mathrm{supp}\,\mu)^N$, let
$$(174)\qquad Z_k(w) = \mathbb{E}_{\tau\in G_k}\big[\delta_{Z_n^N(\tau\cdot w)}\big].$$
Continue to abbreviate $U_N = \mu^{\otimes N}$. For any $k$,
$$(175)\qquad Z_{n,\mu}^N = \mathbb{E}_{U_N}[Z_k(w)].$$
We introduce a second, dual action of $G_k$ on a linear dual space. Let
$$(176)\qquad I_N = \{i = (i_1,i_2,\dots,i_{n-1}) : 1 \le i_1 < i_2 < \dots < i_{n-1} \le N\}.$$
Given $i \in I_N$ and $k \ge 1$, let $S_{i,k} \subset S_{n-1}$ be the subset of permutations $S_{i,k} = \{\sigma_{\tau,i} : \tau \in G_k\}$ where
$$(177)\qquad \forall\,1 \le j \le n-1,\qquad \sigma_{\tau,i}(j) = \#\{1 \le k \le n-1 : \tau(i_k) \le \tau(i_j)\}.$$
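The block action is easy to simulate. The following Python sketch (a toy with k = 1, treating each x_i as a single block) reproduces the three examples in (173); here a whole element of C_2^{n-2} is given as a bit vector, and its coordinates are applied from tau_1 upwards.

# Toy simulation of the C_2^{n-2} block action of Section 6.1 (k = 1).
def act(bits, x):
    """bits[l] = 1 applies tau_{l+1}: swap the first 2^l blocks with the
    next 2^l blocks; coordinates are applied in order tau_1, tau_2, ..."""
    x = list(x)
    for l, b in enumerate(bits):
        if b:
            h = 2 ** l
            x[:2 * h] = x[h:2 * h] + x[:h]
    return x

x = ["x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8"]
print("".join(act([0, 1, 0], x)))  # tau_2           -> x3x4x1x2x5x6x7x8
print("".join(act([1, 0, 1], x)))  # tau_1 tau_3     -> x5x6x7x8x2x1x3x4
print("".join(act([1, 1, 1], x)))  # tau_1 tau_2 tau_3 -> x5x6x7x8x3x4x2x1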
That is, $\sigma_{\tau,i}(j)$ is the relative position of $\tau\cdot i_j$ when $\tau\cdot i$ is sorted to be in increasing order. Put another way, suppose $\tau$ maps $i_1 < \cdots < i_{n-1}$ to $j_1 < \cdots < j_{n-1}$ in some order (and vice versa, $\tau$ is an involution) and calculate
$$(178)\qquad e_{j_1}^{(1)}\otimes\cdots\otimes e_{j_{n-1}}^{(n-1)}(\tau\cdot w) = e_{\tau\cdot j_1}^{(1)}\otimes\cdots\otimes e_{\tau\cdot j_{n-1}}^{(n-1)}(w) = e_{i_{\sigma^{-1}(1)}}^{(1)}\otimes\cdots\otimes e_{i_{\sigma^{-1}(n-1)}}^{(n-1)}(w) = e_{i_1}^{(\sigma(1))}\otimes\cdots\otimes e_{i_{n-1}}^{(\sigma(n-1))}(w),$$
where $\sigma_{\tau,i}$ is abbreviated $\sigma$. Let
$$(179)\qquad X_{N,k} = \big\{e_{i_1}^{(\sigma(1))}\otimes\cdots\otimes e_{i_{n-1}}^{(\sigma(n-1))} : i \in I_N,\ \sigma \in S_{i,k}\big\}.$$
The action of $\tau \in G_k$ is defined on a representative set within $X_{N,k}$ by, for each $i \in I_N$,
$$(180)\qquad \tau\cdot\big(e_{i_1}^{(1)}\otimes\cdots\otimes e_{i_{n-1}}^{(n-1)}\big) = e_{i_1}^{(\sigma_{\tau,i}(1))}\otimes\cdots\otimes e_{i_{n-1}}^{(\sigma_{\tau,i}(n-1))}.$$
The following lemma justifies that this definition extends to a unique group action of $G_k$ on all of $X_{N,k}$.
Lemma 23. Let $\tau,\tau' \in G_k$ and $i \in I_N$ satisfy $\sigma_{\tau,i} = \sigma_{\tau',i}$. Then for any $\tau'' \in G_k$, $\sigma_{\tau+\tau'',i} = \sigma_{\tau'+\tau'',i}$. In particular, (180) extends to a unique group action on $X_{N,k}$.
Proof. This follows, since, for any $1 \le i < j \le N$ there is at most one factor of $G = C_2^{n-2}$ in $G_k$, and one index $\ell$, $1 \le \ell \le n-2$, of $G$ which exchanges the order of $i$ and $j$. To define the group action in general, given $\tau \in G_k$, $i \in I_N$ and $\sigma \in S_{i,k}$, choose any $\tau_0$ such that $\sigma = \sigma_{\tau_0,i}$. Let $\sigma' = \sigma_{\tau_0+\tau,i}$. Then
$$(181)\qquad \tau\cdot e_{i_1}^{(\sigma(1))}\otimes\cdots\otimes e_{i_{n-1}}^{(\sigma(n-1))} = e_{i_1}^{(\sigma'(1))}\otimes\cdots\otimes e_{i_{n-1}}^{(\sigma'(n-1))}.$$
The definition is clearly unique, since (180) surjects on $X_{N,k}$.
The actions of $G_k$ on $W_N$ and on $X_{N,k}$, although not adjoint, are compatible on $Z_n^N$, in the sense that for any $\tau$,
$$(182)\qquad (\tau\cdot Z_n^N)(w) = Z_n^N(\tau\cdot w),$$
so that
$$(183)\qquad Z_k(w) = \mathbb{E}_{\tau\in G_k}\big[\delta_{(\tau\cdot Z_n^N)(w)}\big].$$
Note that in general, although $G_k$ is a product group, the separate factors $\tau_i$ do not act independently in $\tau\cdot Z_n^N$. For instance, when $n = 5$ and $k = 1$, the change in a tensor of type $e_1^{(1)}\otimes e_2^{(2)}\otimes e_{2^{n-2}+1}^{(3)}\otimes e_{2^{n-2}+2}^{(4)}$ under the first factor of $G_k$ depends upon whether the second factor has been applied. Thus the characteristic function
$$(184)\qquad \chi_k(\xi,w) = \mathbb{E}_{\tau\in G_k}\big[e_{-\xi}\big(Z_n^N(\tau\cdot w)\big)\big]$$
need not factor as a product.
A pleasant feature of the general case is that this difficulty is rectified by estimating, instead of $\chi_k(\xi,w)$, a function $F_k(\xi,w)$ which is the result of applying the Gowers-Cauchy-Schwarz inequality to $\chi_k(\xi,w)$. To describe this, write $G_k = (C_2^{n-2})^{N'} = (C_2^{N'})^{n-2}$, and thus
$$(185)\qquad \mathbb{E}_{\tau\in G_k}[f(\tau)] = \mathbb{E}_{\tau_1\in C_2^{N'}}\cdots\mathbb{E}_{\tau_{n-2}\in C_2^{N'}}\big[f(\tau_1,\dots,\tau_{n-2})\big].$$
Then, setting apart one expectation at a time and applying Cauchy-Schwarz,
$$(186)\qquad |\chi_k(\xi,w)|^{2^{n-2}} \le \mathbb{E}_{\tau_1,\tau_1'\in C_2^{N'}}\cdots\mathbb{E}_{\tau_{n-2},\tau_{n-2}'\in C_2^{N'}}\Bigg[e_{-\xi}\Bigg(\sum_{S\subset[n-2]}(-1)^{n-2-|S|}\,\tau_S\cdot w\Bigg)\Bigg] = \mathbb{E}_{\tau,\tau'\in G_k}\Bigg[e_{-\xi}\Bigg(\sum_{S\subset[n-2]}(-1)^{n-2-|S|}\,\tau_S\cdot w\Bigg)\Bigg] =: F_k(\xi,w),$$
where
$$(187)\qquad \tau_S = (\tau_{S,1},\dots,\tau_{S,n-2}),\qquad \tau_{S,i} = \begin{cases}\tau_i & i\in S,\\ \tau_i' & i\notin S.\end{cases}$$
Lemma 24. $F_k(\xi,w)$ factors as the product
$$(188)\qquad F_k(\xi,w) = \prod_{j=1}^{N'}\bigg(1-\frac{1}{2^{n-2}}+\frac{F_{k,j}(\xi,w)}{2^{n-2}}\bigg)$$
where $F_{k,j}(\xi,w)$ is a function of $w_{j,k} = (\omega_1,\dots,\omega_{2^{n-2}})$ with
$$(189)\qquad \omega_i = \sum_{(2^{n-2}(j-1)+i-1)k < \ell \le (2^{n-2}(j-1)+i)k} w_\ell$$
the sum of consecutive blocks of length $k$ in $w$. Identify $C_2^{n-2}$ with $\{0,1\}^{n-2}$ and write $|\tau| = \sum_{i=1}^{n-2}\mathbf{1}(\tau_i\ne0)$. Then
$$(190)\qquad F_{k,j}(\xi,w) = \mathbb{E}_{\tau\in C_2^{n-2}}\Bigg[e_{-\xi}\Bigg(\sum_{\tau'\in C_2^{n-2}}(-1)^{|\tau'|}\,Z_n^{2^{n-2}}\big((\tau+\tau')\cdot w_{j,k}\big)\Bigg)\Bigg]$$
with the action of $C_2^{n-2}$ on blocks of size 1 in $w_{j,k}$.
Proof. Consider for fixed $\tau,\tau' \in G_k$ the sum
$$(191)\qquad Z_n^N(\tau,\tau')(w) = \sum_{S\subset[n-2]}(-1)^{n-2-|S|}\,\tau_S\cdot Z_n^N(w).$$
After replacing $w$ with $\tau'w$ and $\tau$ with $\tau+\tau'$ it suffices to consider $\tau' = \mathrm{id}$. Consider the action of
$$(192)\qquad \hat\tau = \sum_{S\subset[n-2]}(-1)^{n-2-|S|}\,\tau_S$$
on a tensor
$$(193)\qquad e = e_{i_1}^{(1)}\otimes e_{i_2}^{(2)}\otimes\cdots\otimes e_{i_{n-1}}^{(n-1)},\qquad 1 \le i_1 < i_2 < \cdots < i_{n-1} \le N,$$
appearing in $Z_n^N$. Let $G = C_2^{n-2}$ identified with subsets $S$ of $[n-2]$, let $G_0 = \mathrm{stab}(i) \le C_2^{n-2}$ be the subgroup consisting of $S$ for which $\tau_S\cdot e = e$, and let $G_1 = C_2^{n-2}/G_0$. By the group action property, for all $x \in G_0$, for all $y \in G_1$, $\tau_{x+y}e = \tau_ye$, so that when $G_0 \ne \{1\}$, $\hat\tau\cdot e = 0$. A necessary and sufficient condition for $G_0 = \{1\}$ is that, for some $1 \le j \le N'$,
$$(194)\qquad 2^{n-2}(j-1)k < i_1 \le 2^{n-2}(j-1)k+k,\qquad \forall\,1<\ell\le n-1:\ \ 2^{n-2}(j-1)k+2^{\ell-2}k < i_\ell \le 2^{n-2}(j-1)k+2^{\ell-1}k,$$
and $\tau_j = 1^{n-2} \in C_2^{n-2}$. In words, the indices must all belong to a common block of length $2^{n-2}k$ acted on by a single factor from $G_k$; within this block, the first $2^{\ell-1}k$ elements must contain $i_\ell$ and the second $2^{\ell-1}k$ must contain $i_{\ell+1}$ for $\ell = 1,2,\dots,n-2$; and the factor $\tau_j$ acting on the block must be the element $1^{n-2}$ of the hypercube $C_2^{n-2}$. The product formula given summarizes this condition.
Proof of Proposition 22. Assume without loss that $|\xi| \ge N^{-\frac{n-1}{2}}$. Let $k = \max\big(\lfloor|\xi|^{-\frac{2}{n-1}}\rfloor,\,M(\mu)\big)$, where $M(\mu)$ is a constant depending upon $\mu$. By the triangle inequality and Hölder's inequality,
$$(195)\qquad \big|\hat Z_{n,\mu}^N(\xi)\big| \le \big|\mathbb{E}_{U_N}[\chi_k(\xi,w)]\big| \le \mathbb{E}_{U_N}\big[|\chi_k(\xi,w)|\big] \le \mathbb{E}_{U_N}\big[|\chi_k(\xi,w)|^{2^{n-2}}\big]^{\frac{1}{2^{n-2}}} \le \mathbb{E}_{U_N}[F_k(\xi,w)]^{\frac{1}{2^{n-2}}}.$$
Since disjoint blocks are i.i.d., Lemma 24 implies that the expectation of $F_k(\xi,w)$ factors as a product
$$(196)\qquad \mathbb{E}_{U_N}[F_k(\xi,w)] = \bigg(1-\frac{1}{2^{n-2}}+\frac{\mathbb{E}[F_{k,1}(\xi,w)]}{2^{n-2}}\bigg)^{\!N'}.$$
In the limit as $|\xi|\downarrow0$,
$$(197)\qquad \xi\sum_{\tau'\in C_2^{n-2}}(-1)^{|\tau'|}\,Z_n^{2^{n-2}}\big((\tau+\tau')\cdot w_{j,k}\big)$$
has a continuous limiting distribution, which is a polynomial of degree $n-1$ in independent normal random variables, and hence the characteristic function $\mathbb{E}[F_{k,1}(\xi,w)]$ has size bounded uniformly away from 1 for all $|\xi|$ smaller than a fixed constant $\epsilon(\mu)$. It follows that
$$(198)\qquad \big|\hat Z_{n,\mu}^N(\xi)\big| \le \exp(-C'N') \le \exp\big({-C|\xi|^{\frac{2}{n-1}}N}\big).$$
To handle the remaining range $\epsilon(\mu) \le |\xi| \le \frac12$, choose $N_1, N_2, \dots, N_{n-1}$ minimal such that $\mu^{*N_i}$ gives positive mass to the $i$th standard basis vector in $\mathbb{R}^{n-1}$. Set $k = N_1+\dots+N_{n-1}$ and recall that $\mu$ assigns positive probability to 0. Then with $\mu^{\otimes 2^{n-2}k}$-positive probability, for each $1 \le j \le n-1$, $\omega_{2^{j-1}}$ is the $j$th standard basis vector and all other $\omega_i$ are 0. For this configuration, $Z_n^{2^{n-2}}(\tau\cdot w_{1,k}) = 1$ if $\tau$ is the identity, and 0 otherwise. Again, this gives that the characteristic function is uniformly bounded away from 1. We thus conclude, as before, that
$$(199)\qquad \big|\hat Z_{n,\mu}^N(\xi)\big| \le \exp(-CN).$$
Appendix A. The characteristic function of a Gaussian measure on the Heisenberg group
This section proves Theorem 9, which gives a rate of convergence to the characteristic function of a Gaussian measure on the Heisenberg group when the steps in the walk are normally distributed in the abelianization.
Recall that
$$(200)\qquad I(\alpha,\xi;N) = \int_{(\mathbb{R}^2)^N} e_{-\alpha}\Big(\frac{x}{\sqrt N}\Big)\,e_{-\xi}\Big(\frac{H^*(x)}{N}\Big)\,d\nu_2^{\otimes N}(x),$$
where $\nu_2(x) = \frac{1}{2\pi}\exp\big({-\frac{\|x\|^2}{2}}\big)$.
First consider the case $\alpha = 0$. Integrate away $x^{(1)}$ to obtain
$$(201)\qquad I(0,\xi;N) = \frac{1}{(2\pi)^{\frac N2}}\int_{\mathbb{R}^N}\exp\Big({-\frac12\,y^t\big((1-\xi_0^2)I_N+\xi_0^2H\big)y}\Big)\,dy,$$
where
$$(202)\qquad \xi_0 = \frac{\pi\xi}{N},\qquad H_{i,j} = N-2|i-j|;$$
this follows from $H^*(x) = \frac12\sum_{i\ne j}(-1)^{\delta(j<i)}x_i^{(1)}x_j^{(2)}$, $\hat\eta(\xi) = e^{-2\pi^2\xi^2}$ for a standard one dimensional Gaussian, and
$$(203)\qquad y^t(H-I_N)y = \sum_{i=1}^N\Big(\sum_{j\ne i}(-1)^{\delta(j<i)}y_j\Big)^2.$$
Thus,
$$(204)\qquad I(0,\xi;N) = \frac{1}{\sqrt{\det\big((1-\xi_0^2)I_N+\xi_0^2H\big)}},$$
as may be seen by using an orthogonal matrix to diagonalize the quadratic form.
We perform elementary row operations to simplify the computation of the determinant. Let
$$(205)\qquad U_- = I_N - \sum_{i=1}^{N-1} e_i\otimes e_{i+1}.$$
Thus
$$(206)\qquad \big(U_-^tU_-\big)_{i,j} = \begin{cases}1, & i=j=1,\\ 2, & i=j>1,\\ -1, & |i-j|=1,\\ 0, & \text{otherwise},\end{cases}$$
and
$$(207)\qquad \big(U_-^tHU_-\big)_{i,j} = \begin{cases}N, & i=j=1,\\ -2, & i=1,\,j>1 \text{ or } j=1,\,i>1,\\ 4, & i=j>1,\\ 0, & \text{otherwise},\end{cases}$$
so that
$$(208)\qquad U_-^t\big((1-\xi_0^2)I_N+\xi_0^2H\big)U_- = (1+\xi_0^2)\Bigg[2I_N-\frac{1-\xi_0^2}{1+\xi_0^2}\sum_{i=1}^{N-1}(e_i\otimes e_{i+1}+e_{i+1}\otimes e_i) - \frac{2\xi_0^2}{1+\xi_0^2}\sum_{i=1}^{N}(e_1\otimes e_i+e_i\otimes e_1) + \frac{(N+1)\xi_0^2-1}{1+\xi_0^2}\,e_1\otimes e_1\Bigg].$$
Set $\zeta = \frac{1-\xi_0^2}{1+\xi_0^2}$. We diagonalize the tridiagonal matrix with 2's on the diagonal and $-\zeta$ on the first sub- and super-diagonal by working from the lower right corner and adding up and to the left, and treat the remainder of the matrix as a rank 2 perturbation.
Define sequences
$$(209)\qquad \varepsilon_1 = 2,\quad \forall i\ge1,\ \varepsilon_{i+1} = 2-\frac{\zeta^2}{\varepsilon_i};\qquad \pi_0 = 1,\quad \forall i\ge1,\ \pi_i = \prod_{j=1}^{i}\varepsilon_j;\qquad \delta_1 = 1,\quad \forall i\ge1,\ \delta_{i+1} = 1+\frac{\zeta\delta_i}{\varepsilon_i}.$$
These parameters have the following behavior, with proof postponed until the end of this section.
Lemma 25. For $\xi \in \big(0, N^{\frac12}\big]$ the following asymptotics hold:
$$(210)\qquad \pi_N = N\,\frac{\sinh(2\pi\xi)}{2\pi\xi}\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big),\qquad \varepsilon_N = 1+\frac{2\pi\xi}{N}\coth(2\pi\xi)\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big),$$
$$\delta_N = \frac{N\tanh\pi\xi}{2\pi\xi}\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big),\qquad \sum_{j=1}^{N-1}\frac{\delta_j^2}{\varepsilon_j} = \frac{N^3}{8\pi^3\xi^3}\big[2\pi\xi-2\tanh\pi\xi\big]\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big).$$
Set
$$(211)\qquad L_\varepsilon = I_N+\zeta\sum_{i=1}^{N-1}\frac{e_{i+1}\otimes e_i}{\varepsilon_{N-i+1}},\qquad D_\varepsilon = \frac{1}{1+\xi_0^2}\sum_{i=1}^{N}\frac{e_i\otimes e_i}{\varepsilon_{N+1-i}}.$$
The diagonalization process is summarized in the following matrix equation, in which $D_\varepsilon^{1/2}$ is multiplied on left and right to obtain 1's on the diagonal:
$$(212)\qquad D_\varepsilon^{1/2}L_\varepsilon^tU_-^t\big((1-\xi_0^2)I_N+\xi_0^2H\big)U_-L_\varepsilon D_\varepsilon^{1/2} = I_N+P,$$
where $P$ is the rank two symmetric matrix which results from applying the diagonalization operators to the second line on the right hand side of (208),
$$(213)\qquad P = \frac{-2\xi_0^2}{1+\xi_0^2}\sum_{i=1}^{N}\frac{\delta_{N+1-i}}{\sqrt{\varepsilon_N\,\varepsilon_{N-i+1}}}\,(e_1\otimes e_i+e_i\otimes e_1) + \frac{(N+1)\xi_0^2-1}{\varepsilon_N(1+\xi_0^2)}\,e_1\otimes e_1.$$
Then, for some orthogonal matrix $O$, and $\lambda_+ \ge \lambda_-$,
$$(214)\qquad O^t(I_N+P)O = (\lambda_+\,e_1\otimes e_1+\lambda_-\,e_2\otimes e_2)\oplus I_{N-2}.$$
By direct calculation, expanding by the top row, $\det(I_N+P)$ is equal to the $e_1\otimes e_1$ entry minus the sum of the squares of the $e_1\otimes e_i$ entries, $1 < i \le N$:
$$(215)\qquad \det(I_N+P) = \lambda_+\lambda_- = 1-\frac{1}{\varepsilon_N(1+\xi_0^2)}+\frac{(N+1)\xi_0^2}{\varepsilon_N(1+\xi_0^2)}-\frac{4\xi_0^2\,\delta_N}{(1+\xi_0^2)\,\varepsilon_N}-\frac{4\xi_0^4}{(1+\xi_0^2)^2\,\varepsilon_N}\sum_{j=1}^{N-1}\frac{\delta_j^2}{\varepsilon_j} = \frac{\pi\xi\coth\pi\xi}{N}\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big).$$
Since
$$(216)\qquad \det(D_\varepsilon) = \frac{1}{(1+\xi_0^2)^N\,\pi_N} = \frac{2\pi\xi}{N\sinh2\pi\xi}\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big),$$
$$(217)\qquad \det\big((1-\xi_0^2)I_N+\xi_0^2H\big) = (\cosh\pi\xi)^2\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big).$$
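A direct numerical check of (202), (204) and (217) is straightforward; the following Python sketch assumes only numpy, and the values of N and xi are arbitrary test choices.

# Numerical check: det((1-xi0^2) I + xi0^2 H) is close to cosh(pi xi)^2.
import numpy as np

N, xi = 400, 0.9
xi0 = np.pi * xi / N
i = np.arange(N)
H = N - 2 * np.abs(i[:, None] - i[None, :])      # H_{i,j} = N - 2|i-j|, (202)
M = (1 - xi0**2) * np.eye(N) + xi0**2 * H
print(np.linalg.det(M), np.cosh(np.pi * xi) ** 2)  # (217), agree to O(1/N)
print(1 / np.sqrt(np.linalg.det(M)))               # I(0, xi; N) by (204)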
Now consider the general case in which $\alpha \ne 0$. Treat $x$ as $N$ vectors in $\mathbb{R}^2$. When $SO_2(\mathbb{R})$ acts diagonally on $(\mathbb{R}^2)^N$, rotating each $x_i$ simultaneously, $H^*$ and the Gaussian density are preserved. Thus, $I(\alpha,\xi;N) = I\big((0,\|\alpha\|)^t,\xi;N\big)$. Calculate
$$(218)\qquad [1,1,\cdots,1]\,U_-L_\varepsilon D_\varepsilon^{1/2} = \frac{e_1}{\sqrt{\varepsilon_N(1+\xi_0^2)}}.$$
It follows that after making the change of coordinates $y' := U_-L_\varepsilon D_\varepsilon^{1/2}y$, the phase has magnitude $\frac{2\pi\|\alpha\|}{\sqrt{N\varepsilon_N(1+\xi_0^2)}}$ and is now in the $e_1$ direction. Let $v_+, v_-$ be unit vectors generating the eigenspaces of $\lambda_+, \lambda_-$ respectively. Since $e_1$ lies in the span of $v_+, v_-$ it follows
$$(219)\qquad I(\alpha,\xi;N) = \exp\bigg({-\frac{2\pi^2\|\alpha\|^2}{N\varepsilon_N(1+\xi_0^2)}\Big(\frac{\langle v_+,e_1\rangle^2}{\lambda_+}+\frac{\langle v_-,e_1\rangle^2}{\lambda_-}\Big)}\bigg)\,I(0,\xi).$$
Calculate
$$(220)\qquad T = \lambda_++\lambda_- = 2+e_1^tPe_1 = 1+O\Big(\frac{1+\xi^2}{N}\Big),$$
so that
$$(221)\qquad (\lambda_+,\lambda_-) = \Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big)\Big(1,\ \frac{\pi\xi\coth\pi\xi}{N}\Big).$$
Also,
$$(222)\qquad \langle v_+,e_1\rangle^2+\langle v_-,e_1\rangle^2 = 1,\qquad \lambda_+\langle v_+,e_1\rangle^2+\lambda_-\langle v_-,e_1\rangle^2 = 1+e_1^tPe_1,$$
so that
$$(223)\qquad \langle v_+,e_1\rangle^2 = O\Big(\frac{1+\xi^2}{N}\Big),\qquad \langle v_-,e_1\rangle^2 = 1+O\Big(\frac{1+\xi^2}{N}\Big).$$
It follows that
$$(224)\qquad \frac{\langle v_+,e_1\rangle^2}{\lambda_+}+\frac{\langle v_-,e_1\rangle^2}{\lambda_-} = \frac{N}{\pi\xi\coth\pi\xi}\Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big).$$
In particular
$$(225)\qquad I(\alpha,\xi;N) = \frac{\exp\Big({-\frac{2\pi\|\alpha\|^2}{\xi\coth\pi\xi}}\Big)}{\cosh\pi\xi}\Big(1+O\Big(\frac{(1+\|\alpha\|^2)(1+\xi^2)}{N}\Big)\Big).$$
Proof of Lemma 25. Recall $\zeta = \frac{1-\xi_0^2}{1+\xi_0^2}$. $\pi_n$ satisfies the recurrence
$$(226)\qquad \pi_n = 2\pi_{n-1}-\zeta^2\pi_{n-2},\qquad \pi_0 = 1,\ \pi_1 = 2.$$
The following closed forms hold:
$$(227)\qquad \pi_n = \frac{(1+\xi_0)^{2n+2}-(1-\xi_0)^{2n+2}}{4\xi_0(1+\xi_0^2)^n},\qquad \varepsilon_n = 1+\frac{2\xi_0}{1+\xi_0^2}\,\frac{(1+\xi_0)^{2n}+(1-\xi_0)^{2n}}{(1+\xi_0)^{2n}-(1-\xi_0)^{2n}},\qquad \delta_n = \frac{1}{2\xi_0}\,\frac{\big(\frac{1+\xi_0}{1-\xi_0}\big)^n+\big(\frac{1-\xi_0}{1+\xi_0}\big)^n-2}{\big(\frac{1+\xi_0}{1-\xi_0}\big)^n-\big(\frac{1-\xi_0}{1+\xi_0}\big)^n}+\frac12.$$
The formula for $\pi_n$ is immediate from the recurrence relation, since
$$(228)\qquad \frac{(1+\xi_0)^2}{1+\xi_0^2},\qquad \frac{(1-\xi_0)^2}{1+\xi_0^2}$$
are the two roots of $x^2-2x+\zeta^2 = 0$. The formula for $\varepsilon_n$ follows from $\varepsilon_n = \frac{\pi_n}{\pi_{n-1}}$. The formula for $\delta_n$ is obtained on summing the geometric series
$$(229)\qquad \delta_n = \frac{\zeta^{n-1}}{\pi_{n-1}}\sum_{j=0}^{n-1}\frac{\pi_j}{\zeta^j},$$
and use
$$(230)\qquad \frac{\pi_n}{\zeta^n} = \frac{(1+\xi_0)^{2n+2}-(1-\xi_0)^{2n+2}}{4\xi_0(1-\xi_0^2)^n} = \frac{1-\xi_0^2}{4\xi_0}\Bigg[\Big(\frac{1+\xi_0}{1-\xi_0}\Big)^{n+1}-\Big(\frac{1-\xi_0}{1+\xi_0}\Big)^{n+1}\Bigg].$$
The claimed asymptotics for $\pi, \varepsilon, \delta$ are straightforward. For instance, to obtain the correct relative error in $\pi_n$, write
$$(231)\qquad \frac{(1+\xi_0)^{2n+2}-(1-\xi_0)^{2n+2}}{4\xi_0} = \frac{(1+\xi_0)^{2n+2}-(1-\xi_0)^{2n+2}}{2\big((1+\xi_0)-(1-\xi_0)\big)}$$
as a geometric series of positive terms. In each term of the series, approximate the power with an exponential with acceptable relative error, then sum the sequence of exponentials. The correct relative error may be obtained in the other cases similarly.
Using the exact formulae for $\varepsilon$ and $\delta$ yields
$$(232)\qquad \frac{\delta_j^2}{\varepsilon_j} = \Big(1+O\Big(\frac1j+\frac{\xi^2}{N}\Big)\Big)\,\frac{N^2}{4\pi^2\xi^2}\tanh\Big(\frac{j\pi\xi}{N}\Big)^2.$$
Approximating with a Riemann sum,
$$(233)\qquad \sum_{j=1}^{N-1}\frac{\delta_j^2}{\varepsilon_j} = \Big(1+O\Big(\frac{1+\xi^2}{N}\Big)\Big)\,\frac{N^3}{4\pi^2\xi^2}\int_0^1\tanh(t\pi\xi)^2\,dt,$$
which gives the claimed estimate.
References
[1] Alexopoulos, G. K. “Centered densities on Lie groups of polynomial volume
growth.” Probab. Theory Related Fields 124 (2002), no. 1, 112–150.
[2] Alexopoulos, G. K. “Random walks on discrete groups of polynomial volume
growth.” Ann. Probab. 30 (2002), no. 2, 723–801.
[3] P. Baldi and L. Caramellino. “Large and moderate deviations for random walks
on nilpotent groups.” J. Theoret. Probab. 12 (1999), no. 3, 779–809.
[4] Balog, A. “On the distribution of integers having no large prime factors.”
Journées Arithmétiques, Besançon, Astérisque 147/148 (1985): 27–31.
[5] E. F. Breuillard, Equidistribution of random walks on nilpotent Lie groups and
homogeneous spaces, ProQuest LLC, Ann Arbor, MI, 2004.
[6] Breuillard, Emmanuel. “Local limit theorems and equidistribution of random walks on the Heisenberg group.” Geometric and functional analysis 15.1
(2005):35–82.
[7] Breuillard, Emmanuel. “Equidistribution of dense subgroups on nilpotent Lie
groups.” Ergodic Theory Dynam. Systems 30 (2010), no. 1, 131–150.
[8] Daniel Bump, Persi Diaconis, Angela Hicks, Laurent Miclo, and Harold
Widom. An exercise (?) in Fourier analysis on the Heisenberg group.
arXiv:1502.04160, 2015.
[9] P. Crépel and A. Raugi. “Théorème central limite sur les groupes nilpotents.”
Ann. Inst. H. Poincaré sec B, Prob. and Stat., vol XIV, 2. (1978): 145–164.
[10] Coulhon, Saloff-Coste, and Varopoulos. Analysis and geometry on groups.
Cambridge tracts on mathematics, Cambridge University Press, 1992.
[11] Chung, Fan and Linyuan Lu. “Concentration inequalities and martingale inequalities: a survey.” Internet mathematics 3.1 (2006): 79–127.
[12] P. Diaconis. “Threads through group theory.” In Character theory of finite
groups, 33–47, Contemp. Math., 524, Amer. Math. Soc., Providence, RI (2010).
[13] P. Diaconis and L. Saloff-Coste. “Moderate growth and random walk on finite
groups.” Geom. Funct. Anal. 4 (1994), no. 1, 1–36.
[14] P. Diaconis and L. Saloff-Coste. “An application of Harnack inequalities to
random walk on nilpotent quotients.” J. Fourier Anal. Appl. (1995), Special
Issue, 189–207.
[15] P. Diaconis and L. Saloff-Coste. “Nash inequalities for finite Markov chains.”
J. Theoret. Probab. 9 (1996), no. 2, 459–510.
[16] Lawler, Gregory F., and Vlada Limic. Random walk: a modern introduction.
Vol. 123. Cambridge University Press, 2010.
[17] Gaveau, Bernard. “Principe de moindre action, propagation de la chaleur et
estimées sous elliptiques sur certains groupes nilpotents.” Acta mathematica
139.1 (1977): 95-153.
[18] Gonçalves, Felipe, Michael Kelly, and José Madrid. “One-sided band-limited approximations of some radial functions.” Bulletin of the Brazilian Mathematical
[19] Green, Ben and Tao, Terence. “The quantitative behaviour of polynomial orbits on nilmanifolds.” Ann. of Math. (2) 175 (2012), no. 2, 465–540.
42
PERSI DIACONIS AND BOB HOUGH
[20] Hulanicki, Andrzej. “The distribution of energy in the Brownian motion in the
Gaussian field and analytic-hypoellipticity of certain subelliptic operators on
the Heisenberg group.” Studia Mathematica 2.56 (1976): 165-173.
[21] S. Ishiwata. “A central limit theorem on a covering graph with a transformation
group of polynomial growth.” J. Math. Soc. Japan 55 (2003), no. 3, 837–853.
[22] S. Ishiwata. “A central limit theorem on modified graphs of nilpotent covering
graphs.” In Spectral analysis in geometry and number theory, 59–72, Contemp.
Math., 484, Amer. Math. Soc., Providence, RI.
[23] Y. Peres and A. Sly. “Mixing of the upper triangular matrix walk.” Probab.
Theory Related Fields 156 (2013), no. 3-4, 581–591.
[24] Raugi, A. “Théorème de la limite centrale sur les groupes nilpotents.” Probability Theory and Related Fields 43.2 (1978): 149-172.
[25] L. Saloff-Coste, “Probability on groups: random walks and invariant diffusions.” Notices Amer. Math. Soc. 48 (2001), no. 9, 968–977.
[26] D. W. Stroock and S. R. S. Varadhan. “Limit theorems for random walks on
Lie groups.” Sankhyā Ser. A 35 (1973), no. 3, 277–294.
[27] Tao, T., and V. Vu. Additive combinatorics. Vol. 105. Cambridge University
Press, 2006.
[28] V.N. Tutubalin. “Compositions of measures on the simplest nilpotent group.”
(Russian) Teor. Verojatnost. i Primenen 9, (1964): 531–539.
[29] D. Wehn. “Probabilities on Lie groups.” Proc. Nat. Acad. Sci. U.S.A. 48
(1962): 791–795.
(Persi Diaconis) Department of Mathematics, Stanford University, 450
Serra Mall, Stanford, CA, 94305, USA
E-mail address: diaconis@math.stanford.edu
(Bob Hough) Department of Mathematics, Stanford University, 450
Serra Mall, Stanford, CA, 94305, USA
Current address: School of Mathematics, Institute of Advanced Study, 1 Einstein
Drive, Princeton, NJ, 08540
E-mail address: hough@math.ias.edu
| 4 |
arXiv:1301.4200v1 [cs.DB] 17 Jan 2013
Enabling Operator Reordering in Data Flow Programs
Through Static Code Analysis
Fabian Hueske, Technische Universität Berlin, Germany, fabian.hueske@tu-berlin.de
Aljoscha Krettek, Technische Universität Berlin, Germany, aljoscha.krettek@campus.tu-berlin.de
Kostas Tzoumas, Technische Universität Berlin, Germany, kostas.tzoumas@tu-berlin.de
Abstract
In many massively parallel data management platforms, programs
are represented as small imperative pieces of code connected in a
data flow. This popular abstraction makes it hard to apply algebraic
reordering techniques employed by relational DBMSs and other
systems that use an algebraic programming abstraction. We present
a code analysis technique based on reverse data and control flow
analysis that discovers a set of properties from user code, which
can be used to emulate algebraic optimizations in this setting.
1. Introduction
Motivated by the recent “Big Data” trend, a new breed of massively
parallel data processing systems has emerged. Examples of these
systems include MapReduce [8] and its open-source implementation Hadoop [1], Dryad [11], Hyracks [6], and our own Stratosphere system [5]. These systems typically expose to the programmer a data flow programming model. Programs are composed as
directed acyclic graphs (DAGs) of operators, some of the latter typically being written in a general-purpose imperative programming
language. This model restricts control flow only within the limits
of operators, and permits only dataflow-based communication between operators. Since operators can only communicate with each
other by passing sets of records in a pre-defined hardwired manner,
set-oriented execution and data parallelism can be achieved.
Contrary to these systems, relational DBMSs, the traditional
workhorses for managing data at scale, are able to optimize queries because they adopt an algebraic programming model based
on relational algebra. For example, a query optimizer is able to
transform the expression σ_{R.X<3}(R ⋈ (S ⋈ T)) to the expression (σ_{R.X<3}(R) ⋈ S) ⋈ T, exploiting the associativity and commutativity properties of selections and joins.
While algebraic reordering can lead to orders of magnitude
faster execution, it is not fully supported by modern parallel processing systems, due to their non-algebraic programming models.
Operators are typically written in a general-purpose imperative language, and their semantics are therefore hidden from the system. In
our previous work [10], we bridged this gap by showing that exposure of a handful of operator properties to the system can enable
reorderings that can simulate most algebraic reorderings used by
modern query optimizers. We discovered these properties using a
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. To copy otherwise, to republish, to post on servers or to redistribute
to lists, requires prior specific permission and/or a fee.
XLDI 2012 September 9, 2012, Copenhagen, Denmark.
Copyright c 2012 ACM [to be supplied]. . . $10.00
custom, shallow code analysis pass over the operators’ code. Here
we describe this code analysis in detail, which we believe is of interest by itself as an non-traditional use case of code analysis techniques. We note that our techniques are applicable in the context
of many data processing systems which support MapReduce-style
UDFs such as parallel programming models [5, 8], higher-level languages [2, 7], and database systems [3, 9].
Related work: In our previous work [10] we describe and formally prove the conditions to reorder user-defined operators. That
paper also contains a more complete treatment of related work.
Here, we focus on more directly related research. Manimal [12]
uses static code analysis of MapReduce programs for the purpose
of recommending possible indexes. Our code analysis can be seen
as an example of peephole optimization [4], and some of the concepts may bear similarity to techniques for loop optimization. However, we are not aware of code analysis being used before for the
purpose of swapping imperative blocks of code to improve performance of data-intensive programs.
The rest of this paper is organized as follows. Section 2 describes the programming model of our system, and introduces the
reordering technology. Section 3 discusses our code analysis algorithm in detail. Finally, Section 4 concludes and offers research
directions.
2. Data Flow Operator Reordering
In our PACT programming model [5], a program P is a DAG of
sources, sinks, and operators which are connected by data channels.
A source generates records and passes them to connected operators.
A sink receives records from operators and serializes them into an
output format. Records consist of fields of arbitrary types. To define an operator O, the programmer must specify (i) a second-order
function (SOF) signature, picked from a pre-defined set of system
second-order functions (currently Map, Reduce, Match, Cross, and
CoGroup), and (ii) a first-order function (called user-defined function, UDF) that is used as the parameter of the SOF. The model is
strictly second-order, in that a UDF is not allowed to call SOFs. The
intuition of this model is that the SOF defines a logical mapping of
the operator’s input records into groups, and the UDF is invoked
once for each group. These UDF invocations are independent, and
can be thus scheduled on different nodes of a computing cluster.
Figure 1(a) shows an example PACT program. The data flow
starts with two data sources Src1 and Src2 that provide records
which have the fields [0, 1] and [3, 4] set respectively (the numbering is arbitrary). Src1 feeds its data into a Map operator with
a UDF f1 . The Map SOF creates an independent group for each
input record, and f1 is itself written in Java. UDF f1 reads both
fields of its input record (0 and 1), appends the sum of both fields
as field 2, and emits the record. Similarly, the records of Src2 are
forwarded to a Map operator with UDF f2 which sums the fields 3
[Figure 1 shows three data flows over the sources Src1 (fields [0,1]) and Src2 (fields [3,4]), each ending in the sink Snk1: (a) the original order, in which Map(f1) and Map(f2) feed Match(f3,[0],[3]); (b) a first reordered alternative in which Map(f1) is applied after the Match; (c) a second reordered alternative in which Map(f2) is applied after the Match.]
Figure 1. Example Data Flows: (a) original order, (b) first reordered alternative, (c) second reordered alternative
and 4, appends the sum as field 5 and emits the record. The outputs
of both Map operators are forwarded as inputs to a Match operator
with a UDF f3 and the key field [0] for the first and [3] for the second input. The Match SOF creates a group for each pair of records
from both inputs that match on their key fields. f3 merges the fields
of both input records and emits the result. We give the pseudo-code
of all three user functions in the form of 3-address code [4] below.
10: f1(InRec $ir)
11:   $a := getField($ir, 0)
12:   $b := getField($ir, 1)
13:   $c := $a + $b
14:   $or := copy($ir)
15:   setField($or, 2, $c)
16:   emit($or)

20: f2(InRec $ir)
21:   $x := getField($ir, 3)
22:   $y := getField($ir, 4)
23:   $z := $x + $y
24:   $or := create()
25:   setField($or, 3, $x)
26:   setField($or, 4, $y)
27:   setField($or, 5, $z)
28:   emit($or)

30: f3(InRec $ir1, InRec $ir2)
31:   $or := copy($ir1)
32:   union($or, $ir2)
33:   emit($or)
The pseudo-code shows the UDF API to process PACT records.
The user-functions f1, f2, and f3 receive as input one or two
input records of type InRec. The only way that a user function can emit an output record of type OutRec is by calling the
emit(OutRec) function. Output records can be either initialized
as empty (OutRec create()), or by copying an input record
(OutRec copy(InRec)). Records can be combined via the function void union(OutRec,InRec). Fields can be read to a variable via Object getField(InRec,int) addressed by their position in the input record. The value of a field can be set via void
setField(OutRec,int,Object). Note that our record API is
based on basic operations and similar to other systems’ APIs such
as Apache Pig [2].
Figures 1 (b) and (c) show potential reorderings of the original
data flow (a) where either Map( f1 ) or Map( f2 ) has been reordered
with Match( f3 , [0], [3]). While data flow (b) is a valid reordering,
alternative (c) does not produce the same result as (a). In previous
work, we presented conditions for valid reorderings of data flow
operators centered around conflicts of operators on fields [10]. For
example, since we know that f1 reads fields 0 and 1, and writes field
2, while f3 reads fields 0 and 3, we can conclude that f1 and f3 only
have a read conflict on field 0, and can thus be safely reordered.
UDFs that have write conflicts cannot be reordered. This would be
true if f1 did not append the sum as field 2, but overwrote field 0
with the sum. Additional complications arise from the way output
records are formed. Although on the first sight, f1 and f2 perform a
very similar operation, i. e., summing two fields and appending the
result, there is a fundamental difference. While f1 creates its output
record by copying the input record (line 14), f2 creates an empty
output record (line 24) and explicitly copies the fields of the input
record (lines 25,26). The side effect of creating an empty output
record is that all fields of an input record are implicitly removed
from the output. By reordering Map( f2 ) with Match( f3 , [0], [3]), the
fields 0, 1, and 2 will get lost since Map( f2 ) does not explicitly copy
them into the newly created output record.
The information that needs to be extracted from the user code in order to reason about reordering of operators is as follows. The read set R_f of a UDF f is the set of fields from its input data sets that might influence the UDF's output, i.e., fields that are read and evaluated by f. The write set W_f is the set of fields of the output data set that have different values from the corresponding input field. The emit cardinality bounds ⌊EC_f⌋ and ⌈EC_f⌉ are lower and upper bounds for the number of records emitted per invocation of f. Reference [10] defines these properties more formally, and provides conditions for reordering operators with various SOFs given knowledge of these properties. In addition to changing the order of operators, the optimizer can leverage these properties to avoid expensive data processing operations; e.g., a previously partitioned data set is still partitioned after a UDF was applied if the partitioning fields were not modified by the UDF. Moreover, field projections can be pushed down based on read set information.
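To make the role of these sets concrete, here is a small Python sketch of a conflict test in the spirit of the reordering conditions of [10], simplified to the read and write sets alone (the full conditions in [10] also involve the SOFs and the emit cardinalities). The field numbers are the ones derived for f1 and f3 of the example above.

# Toy reordering test: two UDFs may be swapped if no field is written by one
# and read or written by the other (read-read overlap is harmless).
def can_reorder(r1, w1, r2, w2):
    conflict = (w1 & (r2 | w2)) | (w2 & (r1 | w1))
    return not conflict

f1 = {"read": {0, 1}, "write": {2}}   # reads fields 0, 1; appends field 2
f3 = {"read": {0, 3}, "write": set()} # reads the key fields only
print(can_reorder(f1["read"], f1["write"], f3["read"], f3["write"]))  # True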
While it is very difficult to statically derive the exact properties
by UDF code analysis in the general case, it is possible to conservatively approximate them. In reference [10] we discussed this
static code analysis pass for the simple case of unary operators. In
the next section, we provide the full algorithm that deals with the
additional complexity due to binary operators, and provide detailed
pseudo-code.
3. Code Analysis Algorithm
Our algorithm relies on a static code analysis (SCA) framework
to get the bytecode of the analyzed UDF, for example as typed
three-address code [4]. The framework must provide a control flow
graph (CFG) abstraction, in which each code statement is represented by one node along with a function P REDS(s) that returns
the statements in the CFG that are “true” predecessors of statement s, i.e., they are not both predecessors and descendants. Finally, the framework must provide two methods D EF -U SE(s, $v)
and U SE -D EF(s, $v) that represent the Definition-Use chain of the
variable $v at statement s, and the Use-Definition chain of variable
$v at statement s respectively. Any SCA framework that provides
these abstraction can be used.
The algorithm visits each UDF in a topological order implied by the program DAG, starting from the data sources. For each UDF f, the function VISIT-UDF of Algorithm 1 is invoked. First, we compute the read set R_f of the UDF (lines 7-10). For each statement g of the form $t := getField($ir, n) that results in a valid use of variable $t (DEF-USE(g, $t) ≠ ∅), we add field n to R_f.
Approximating the write set W f is more involved. We compute
four sets of integers that we eventually use to compute an approximation of W f . The origin set O f of UDF f is a set of input ids.
An integer o ∈ O f means that all fields of the o-th input record of
f are copied verbatim to the output. The explicit modification set
E f contains fields that are modified and then included in the output. We generally assume that fields are uniquely numbered within
the program (as in Figure 1). The copy set C f contains fields that
are copied verbatim from one input record to the output. Finally,
the projection set Pf contains fields that are projected from the output, by explicitly being set to null. The write set is computed from
these sets using the function C OMPUTE -W RITE -S ET (lines 1-5).
All fields in E f and Pf are explicitly modified or set to null and
therefore in W f . For inputs that are not in the origin set O f , we
add all fields of that input which are not in C f , i. e., not explicitly
copied.
To derive the four sets, function V ISIT-U DF finds all statements
of the form e: emit($or), which include the output record $or in
Algorithm 1 Code analysis algorithm
 1: function COMPUTE-WRITE-SET(f, O_f, E_f, C_f, P_f)
 2:   W_f = E_f ∪ P_f
 3:   for i ∈ INPUTS(f) do
 4:     if i ∉ O_f then W_f = W_f ∪ (INPUT-FIELDS(f, i) \ C_f)
 5:   return W_f
 6: function VISIT-UDF(f)
 7:   R_f = ∅
 8:   G = all statements of the form g: $t = getField($ir, n)
 9:   for g in G do
10:     if DEF-USE(g, $t) ≠ ∅ then R_f = R_f ∪ {n}
11:   E = all statements of the form e: emit($or)
12:   (O_f, E_f, C_f, P_f) = VISIT-STMT(ANY(E), $or)
13:   for e in E do
14:     (O_e, E_e, C_e, P_e) = VISIT-STMT(e, $or)
15:     (O_f, E_f, C_f, P_f) = MERGE((O_f, E_f, C_f, P_f), (O_e, E_e, C_e, P_e))
16:   return (R_f, O_f, E_f, C_f, P_f)
17: function VISIT-STMT(s, $or)
18:   if VISITED(s, $or) then
19:     return MEMO-SETS(s, $or)
20:   VISITED(s, $or) = true
21:   if s of the form $or = create() then return (∅, ∅, ∅, ∅)
22:   if s of the form $or = copy($ir) then
23:     return ({INPUT-ID($ir)}, ∅, ∅, ∅)
24:   P_s = PREDS(s)
25:   (O_s, E_s, C_s, P_s) = VISIT-STMT(ANY(P_s), $or)
26:   for p in P_s do
27:     (O_p, E_p, C_p, P_p) = VISIT-STMT(p, $or)
28:     (O_s, E_s, C_s, P_s) = MERGE((O_s, E_s, C_s, P_s), (O_p, E_p, C_p, P_p))
29:   if s of the form union($or, $ir) then
30:     return (O_s ∪ {INPUT-ID($ir)}, E_s, C_s, P_s)
31:   if s of the form setField($or, n, $t) then
32:     T = USE-DEF(s, $t)
33:     if all t ∈ T of the form $t = getField($ir, n) then
34:       return (O_s, E_s, C_s ∪ {n}, P_s)
35:     else
36:       return (O_s, E_s ∪ {n}, C_s, P_s)
37:   if s of the form setField($or, n, null) then
38:     return (O_s, E_s, C_s, P_s ∪ {n})
39: function MERGE((O_1, E_1, C_1, P_1), (O_2, E_2, C_2, P_2))
40:   C = (C_1 ∩ C_2) ∪ {x | x ∈ C_1, INPUT-ID(x) ∈ O_2}
41:         ∪ {x | x ∈ C_2, INPUT-ID(x) ∈ O_1}
42:   return (O_1 ∩ O_2, E_1 ∪ E_2, C, P_1 ∪ P_2)
the output (line 11). It then calls for each statement e the recursive function VISIT-STMT that recurses from statement e backwards in the control flow graph (lines 12-15). The function performs a combination of reverse data flow and control flow analysis but does not change the values computed for statements once they have been determined. The function ANY returns an arbitrary element of a set.
The useful work is done in lines 24-38 of the algorithm. First, the algorithm finds all predecessor statements of the current statement, and recursively calls VISIT-STMT. The sets are merged using the MERGE function (lines 39-42). MERGE provides a conservative approximation of these sets, by creating maximal E, P sets, and minimal O, C sets. This guarantees that the data conflicts that will arise are a superset of the true conflicts in the program. When a statement of the form setField($or, n, null) is found (line 37), field n of the output record is explicitly projected, and is thus added to the projection set P. When a statement of the form setField($or, n, $t) is found (line 31), the USE-DEF chain of $t is checked. If the temporary variable $t came directly from field n of the input, it is added to the copy set C, otherwise it is added to the explicit write set E. When we encounter a statement of the form $or = create() (line 21), we have reached the creation point of the output record, where it is initialized to the empty record. The recursion then ends. Another base case is reaching a statement $or = copy($ir) (line 22), where the output record is created by copying all fields of the input record $ir. This adds the input id of record $ir to the origin set O. A union statement (line 29) results in the inclusion of the input id of the input record $ir in the origin set O, and a further recursion for the output record $or. The algorithm maintains a memo table MEMO-SETS to support early exit of the recursion in the presence of loops (line 18). The memo table is implicitly updated at every return statement of VISIT-STMT.
Function VISIT-STMT always terminates in the presence of loops in the UDF code, since it will eventually find the statement that creates the output record, or visit a previously seen statement. This is due to PREDS always exiting a loop after visiting its first statement. Thus, loop bodies are only visited once by the algorithm. The complexity of the algorithm is O(en), where n is the size of the UDF code, and e the number of emit statements. This assumes that the Use-Def and Def-Use chains have been precomputed.
The lower and upper bound on the emit cardinality of the UDF
can be derived by another pass over the UDF code. We determine
the bounds for each emit statement e and combine those to derive
the bounds of the UDF. For the lower bound ⌊EC f ⌋, we check
whether there is a statement before statement e that jumps to a
statement after e. If there is none, the emit statement will always
be executed and we set ⌊EC f ⌋ = 1. If such a statement exists,
statement e could potentially be skipped during execution, so we
set ⌊EC f ⌋ = 0. For the upper bound ⌈EC f ⌉, we determine whether
there is a statement after e that can jump to a statement before
e. If yes, the statement could be executed several times during
the UDF’s execution, so we set ⌈EC f ⌉ = +∞. If such a statement
does not exist, statement e can be executed at most once so we set
⌈EC f ⌉ = 1. To combine the bounds we choose for the lower bound
of the UDF the highest lower bound over all emit statements and for
the upper bound the highest upper bound over all emit statements.
Our previous work [10] compares read and write sets which
are automatically derived by our static code analysis technique and
from manually attached annotations. We show that our technique
yields very precise estimations with only little loss of optimization
potential. However, we note that the estimation quality depends on
the programming style.
4. Conclusions and Future Work
We presented a shallow code analysis technique that operates on
data flow programs composed of imperative building blocks (“operators”). The analysis is a hybrid of reverse data flow and control flow analysis, and determines sets of record fields that express
the data conflicts of operators. These sets can be used to “emulate” algebraic reorderings in the dataflow program. Our techniques
guarantee safety through conservatism and are applicable to many
data processing systems that support UDFs. Future work includes
research on intrusive user-code optimizations, i. e., modifying the
code of UDFs, and on the effects that the use of functional programming languages to specify UDFs has on our approach and possible
optimizations.
Acknowledgments
This research was funded by the German Research Foundation
under grant FOR 1036. We would like to thank Volker Markl,
our coauthors from previous work [10], and the members of the
Stratosphere team.
References
[1] http://hadoop.apache.org/.
[2] http://pig.apache.org/.
[3] http://www.greenplum.com/technology/mapreduce.
[4] A. V. Aho, M. S. Lam, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques and Tools. Pearson, 2006.
[5] D. Battré, S. Ewen, F. Hueske, O. Kao, V. Markl, and D. Warneke. Nephele/PACTs: a programming model and execution framework for web-scale analytical processing. In SoCC, pages 119–130, 2010.
[6] V. R. Borkar, M. J. Carey, R. Grover, N. Onose, and R. Vernica. Hyracks: A flexible and extensible foundation for data-intensive computing. In ICDE, pages 1151–1162, 2011.
[7] R. Chaiken, B. Jenkins, P.-Å. Larson, B. Ramsey, D. Shakib, S. Weaver, and J. Zhou. SCOPE: easy and efficient parallel processing of massive data sets. PVLDB, 1(2):1265–1276, 2008.
[8] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI, pages 137–150, 2004.
[9] E. Friedman, P. M. Pawlowski, and J. Cieslewicz. SQL/MapReduce: A practical approach to self-describing, polymorphic, and parallelizable user-defined functions. PVLDB, 2(2):1402–1413, 2009.
[10] F. Hueske, M. Peters, M. Sax, A. Rheinländer, R. Bergmann, A. Krettek, and K. Tzoumas. Opening the black boxes in data flow optimization. PVLDB, 5(11):1256–1267, 2012.
[11] M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly. Dryad: distributed data-parallel programs from sequential building blocks. Pages 59–72, 2007.
[12] E. Jahani, M. J. Cafarella, and C. Ré. Automatic optimization for MapReduce programs. PVLDB, 4(6):385–396, 2011.
| 6 |
Logical Methods in Computer Science
Vol. 10(1:17)2014, pp. 1–52
www.lmcs-online.org
Submitted
Published
Aug. 27, 2012
Mar. 24, 2014
LINEAR USAGE OF STATE ∗
RASMUS EJLERS MØGELBERG a AND SAM STATON b
a
IT University of Copenhagen, Denmark
e-mail address: mogel@itu.dk
b
Radboud University Nijmegen, Netherlands
e-mail address: s.staton@cs.ru.nl
Abstract. We investigate the phenomenon that every monad is a linear state monad. We
do this by studying a fully-complete state-passing translation from an impure call-by-value
language to a new linear type theory: the enriched call-by-value calculus. The results
are not specific to store, but can be applied to any computational effect expressible using
algebraic operations, even to effects that are not usually thought of as stateful. There is a
bijective correspondence between generic effects in the source language and state access
operations in the enriched call-by-value calculus.
From the perspective of categorical models, the enriched call-by-value calculus suggests
a refinement of the traditional Kleisli models of effectful call-by-value languages. The new
models can be understood as enriched adjunctions.
1. Introduction
1.1. Informal motivation. The state-passing translation transforms a stateful program
into a pure function. As an illustration, consider the following ML program which uses a
single fixed memory cell l : int ref.
- fun f x = let val y = !l in l := x ; y end ;
val f = fn : int -> int
The state-passing translation transforms that program into the following pure function which
takes the state as an argument and returns the updated state as a result.
2012 ACM CCS: [Theory of computation]: Semantics and reasoning—Program semantics—
Denotational semantics / Categorical semantics; Logic—Linear logic / Type theory.
Key words and phrases: Linear type theory, monads, computational effects, categorical semantics, enriched
category theory, state passing translation.
∗
This article expands on a paper presented at the Fourth International Conference on Algebra and Coalgebra
in Computer Science (CALCO 2011).
a
Research supported by the Danish Agency for Science, Technology and Innovation.
b
Research supported by EPSRC Fellowship EP/E042414/1, ANR Projet CHOCO, the Isaac Newton Trust,
and ERC Projects ECSYM and QCLS.
- fun f (x,s) = let val y = s val s’ = x in (y,s’) end ;
val f = fn : int * int -> int * int
The state-passing translation is straightforward if the program only uses a fixed, finite area
of memory of type S: an impure program of type A * B becomes a pure program of type
A × S → B × S.
To what extent does the state-passing translation apply to programs with other effects?
In this article we develop the idea that, from a semantic perspective, all effects can be
understood as state effects. Central to our treatment of state is the idea of linear usage: in
general, computations cannot copy the state and save it for later, nor can they discard the
state and insert a new one instead. In 1972, Strachey wrote [46]:
The state transformation produced by obeying a command is essentially
irreversible and it is, by the nature of the computers we use, impossible to
have more than one version of σ [the state] at any one time.
Historically, the importance of the linearity of state arose in the study of programs with
private store. In this setting, the naive state-passing translation does not preserve contextual
equivalence. For instance, the function ‘snapback’ takes a stateful computation f : A * B
and returns the computation that executes f but then snaps back to the original state:
snapback ≝ λf : (A × S → B × S). λ(a, s) : A × S. ⟨π₁(f(a, s)), s⟩ : (A * B) → (A * B)
The snapback program does not arise as the state-passing translation of an impure program.
No impure program could tamper with the private store in the way that the snapback
program does. In other words, the state-passing translation is not fully complete. One
can use this fact to show that contextual equivalence is not preserved by the state-passing
translation. Sieber [44] insisted that every function be wrapped in snapback to obtain full
abstraction for his simple denotational model.
O’Hearn and Reynolds [32] resolved these difficulties with private store by moving to a
linear typing system. Linear usage of state can be expressed syntactically by considering a
stateful computation of type A * B as a linear map of type !A ⊗ S ⊸ !B ⊗ S. The type of
states S must be used linearly, but the argument type A and the return type B can be used
arbitrarily. The snapback program violates these linear typing constraints.
This notation is reminiscent of Girard’s linear logic [13]. Our starting point is actually a
more refined calculus, the enriched effect calculus, which was developed by Egger, Møgelberg
and Simpson [7, 8, 9, 10] as a way of investigating linear usage of resources such as state.
All effects are state effects. In this paper we develop this linear typing discipline to show
that all effects can be understood as state effects and that there is always a fully-complete
linear-use state-passing translation. Our analysis applies even to effects that do not involve
the memory of a computer, like printing or coin-tossing. (For this reason we use the word
‘store’ to refer to memory related effects and ‘state’ for the general notion.) We now provide
two informal explanations of this general phenomenon.
A first informal explanation is that an inhabitant of the ‘state’ type S is an entire history
of the universe. The history of the universe certainly cannot be discarded or duplicated.
To be slightly more precise, if the effect in question is printing, then a ‘state’ is a string of
everything that has been printed so far.
A second informal explanation involves Jeffrey’s graphical notation [19]. Jeffrey noticed
that a naive graphical notation for composition of impure functions does not work, for it
describes how functions depend on their arguments but it does not describe the order of
side effects:
[Diagram: a dataflow graph built from the operations l := x, print 'world', !l, and print 'hello', with edges recording only how functions depend on their arguments.]
To make the order of evaluation visible in the graphical notation, Jeffrey adds a special kind
of edge which must be treated linearly. This is what we call state.
[Diagram: the same graph with an additional linear state edge threaded through l := x, print 'world', !l, and print 'hello', making the order of side effects explicit.]
Our contribution in this paper is foundational, but let us speculate briefly on possible applications. State plays an important role in many aspects of semantics, including operational
semantics and Hoare logic. The type !A ⊗ S can be thought of as a type of configurations
(program/state pairs) for a general abstract machine. This might pave the way for a general
framework for operational semantics, building on the ideas of Plotkin and Power [35].
We also note that variations on linear-use state-passing style are already used to
accommodate a broad class of effects within pure languages such as Clean [1] and Mercury [45].
1.2. The state-passing translation. The source language of our translation is an impure
functional language with product and function types:
σ, τ ::= 1 | σ × τ | σ * τ | . . .
We also consider sum types and unspecified base types. We adopt the call-by-value calling convention because it is the most natural one for effectful programs. For simplicity we choose
a fine-grain language in which the order of evaluation is totally explicit. The full syntax,
equational theory and framework for denotational semantics is in Section 3. Sum types are
treated in Section 6.
The types of the target language are essentially the minimal fragment of the enriched
effect calculus (EEC, [8, 7]) that is needed for the linear-use state-passing translation. To
enforce linear usage, EEC borrows some techniques and notation from linear logic (!, ⊗, and ⊸) but in a careful way, enforced by the following grammar:

    A, B ::= 1 | A × B | A ⊸ B | . . .        value types
    A, B ::= !A ⊗ B | S | . . .               computation types (underlined)
The full syntax, equational theory and framework for denotational semantics is in Section 2.
In Section 4 we provide a linear-use state-passing translation from the source language to the target language. The translation on types takes a type τ of the source language to a value type τ^S of the target language, by induction on the structure of types:

    1^S ≝ 1        (σ × τ)^S ≝ σ^S × τ^S        (σ * τ)^S ≝ !σ^S ⊗ S ⊸ !τ^S ⊗ S        . . .
Theorem 4.3 says that the linear-use state-passing translation is fully complete: it describes a bijective correspondence between programs in the source language of type τ and programs in the target language of type τ^S. This means that contextual equivalence is preserved by the linear-use state-passing translation.
Enriched category theory. The constructions involved in the target language have an elegant
categorical formulation: they can be modelled in any enriched category with copowers. For
this reason we call the target language the enriched call-by-value calculus.
Recall that in categorical semantics a type A is interpreted as an object [[A]] of a category,
a context Γ is interpreted as an object [[Γ]] of a category, and a term-in-context Γ ` t : A is
interpreted as a morphism [[t]] : [[Γ]] → [[A]]. In the enriched call-by-value calculus, there are
two classes of type: computation types and value types. A categorical analysis must thus
involve two categories: a category V whose objects interpret value types, and a category C
whose objects interpret computation types. The structure of the types dictates the structure
that the categories V and C must have:
• For the product types (1, ×), V must have finite products in the categorical sense.
• For the tensor type (!A ⊗ B), V must act on C. This means that there is a functor (−₁ · −₂) : V × C → C such that 1 · X ≅ X and (A × B) · X ≅ A · (B · X).
• For the linear function type ⊸, the action (−₁ · −₂) must have a right adjoint in its first argument.
The linear function type ⊸ ensures that the space of morphisms X → Y forms an object
of V. So C is enriched in V, and the action structure provides copowers. There are
many examples of enriched models, including categories of algebras and Kleisli categories
(understood as ‘closed Freyd categories’ [27]). Full details are in Section 2.
In Section 5 we explain that the connection with Kleisli categories gives us a categorical
explanation of the linear-use state-passing translation. We use a semantic approach to
prove the full completeness of the translation. We show that the traditional models of
effectful call-by-value languages, using monads and Kleisli constructions, form a coreflective
subcategory of the enriched models, and that the state-passing translation is the unit of the
coreflection.
1.3. Relationship with monads. In an enriched situation, because S ⊸ (−) is right adjoint to !(−) ⊗ S, any computation type S induces a monad on value types:

    S ⊸ !(−) ⊗ S        (1.1)
We call this a linear-use state monad. In Section 3.2 we show that every monad arises as
a linear-use state monad. In brief, the argument is as follows. The right notion of monad
for programming languages is ‘strong monad T on a category V with finite products and
Kleisli exponentials’; for every such monad T , the Kleisli category C is V-enriched and has
copowers and is thus a model of the enriched call-by-value language. The object 1 in C
induces a linear-use state monad on V (1.1) which is isomorphic to the original monad T on
V.
For the simplest case, consider the following monad on the category of sets for storing a
single bit. If we let V = C = Set and let S = 2 (there are two states), then the linear-use
state monad is the usual state monad T X = 2 → (X × 2).
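In Haskell notation (reading V = C as the category of Haskell types, a simplifying assumption), this one-bit example looks as follows; the names are ours:

    type S = Bool                -- two states
    type T x = S -> (x, S)       -- the linear-use state monad at S

    get :: T S
    get s = (s, s)

    put :: S -> T ()
    put s' _ = ((), s')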
In general, the linear-use state monad (1.1) arises from an adjunction that is parameterized in S. Atkey [3] has investigated monads that arise from parameterized adjunctions: in
fact Atkey’s parameterized monads are essentially the same as our enriched models. We
return to this in Section 9.
1.4. Algebraic theories and state access operations. In order to explain the general
connection between effects and state, we turn to the analysis of effects begun by Plotkin
and Power [38]. Categorically, a finitary monad on the category of sets is essentially the
same thing as an algebraic theory. Plotkin and Power proposed to use this connection to
investigate computational effects from the perspective of universal algebra.
The analysis of Plotkin and Power centres around the correspondence between algebraic
operations and generic effects. More precisely, for a monad T the following data are
equivalent:
(1) An algebraic operation: roughly, a functorial assignment of an n-ary function X^n → X to the carrier of each T-algebra T(X) → X;
(2) A generic effect: a function 1 → T(n).
For instance, consider the store monad for a single bit of memory (using the isomorphic
presentation T = ((−) × 2 × (−) × 2)). Each T -algebra ξ : T (X) → X supports an algebraic
operation ?_X : X × X → X in a functorial way: let x ?_X y = ξ(x, 0, y, 1). If we understand elements of T-algebras as computations, then x ?_X y is the computation that reads the bit
in memory and then branches to either x or y depending on what was read.
The corresponding generic effect deref : 1 → T (2) is given by deref() = (0, 0, 1, 1). It
can be thought of as the command that reads the bit and returns the contents of memory.
We have already explained (§1.3) that all monads can be understood as linear-use state monads, T = S ⊸ (!(−) ⊗ S). The data for algebraic operations and generic effects can
equivalently be given in terms of the following structure on the state object S:
(3) A state access operation: a morphism S → !n ⊗ S in the computation category C.
For instance, let V = C = Set and S = 2. This gives a state monad for a single bit of
memory, isomorphic to ((−) × 2 × (−) × 2). The state access operation corresponding to
reading the bit is simply the function read : 2 → 2 × 2 given by read(i) = (i, i), which reads
the store and returns the result along with the store.
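The following Haskell sketch (with V = C as Haskell types and S = Bool, and with names of our choosing) lines up the three equivalent presentations for reading one bit:

    type S = Bool
    type T a = S -> (a, S)            -- the store monad for one bit

    -- (2) the generic effect deref : 1 -> T(2)
    deref :: () -> T Bool
    deref () s = (s, s)

    -- (3) the state access operation read : S -> !2 ⊗ S
    readBit :: S -> (Bool, S)
    readBit i = (i, i)

    -- (1) the algebraic operation x ? y on the free algebra T a:
    -- read the bit and branch to x or y accordingly
    branch :: T a -> T a -> T a
    branch x y s = if s then y s else x s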
In Section 8 we provide more examples and investigate a general framework for algebraic
presentations of theories of effects using algebraic operations, generic effects and state access
operations.
1.5. The enriched effect calculus. The work presented here grew out of work on the
enriched effect calculus by Egger, Møgelberg and Simpson (EEC, [7, 8, 9, 10]). The enriched
call-by-value calculus that we introduce in this paper is a fragment of EEC and our categorical
semantics is based on their work. Every model of EEC contains a monad, and one of the
observations of [9] (see also [7, Example 4.2]) was that this monad can always be represented
as a linear state monad. A special case of the embedding theorem [8, Theorem 4] shows
that given a strong monad T on a category V with finite products and Kleisli exponentials
we can embed V in a model of EEC preserving the monad and all other structure. This
gives the motto: every monad embeds in a linear state monad.
Since the enriched call-by-value calculus is a fragment of EEC (as opposed to full EEC),
we allow ourselves a broader class of models. In contrast to the earlier work on EEC,
we do not include products among the computation types, since they are not needed in
the state-passing translation, and so in our models the category C does not need to have
products. This allows us to build models from Kleisli categories, which typically do not have
products, and this makes the relationship with monad models and closed Freyd categories
much more straightforward. In particular, in our setting every monad is a linear state
monad.
Nonetheless, in Section 10 we show that EEC is a conservative extension of the enriched
call-by-value calculus. This shows that there is a fully-complete linear-use state translation
into EEC. This result is further evidence that EEC is a promising calculus for reasoning
about linear usage of effects. The related papers [9, 10] show how the linear-use continuation
passing translation arises from a natural dual model construction on models of EEC, and
use this to prove a full completeness theorem similar to that proven here for the linear-use
state-passing translation. In fact, from the point of view of EEC the two translations
are surprisingly similar: the linear-use state-passing translation is essentially dual to the
linear-use continuation-passing translation. This observation goes back to the work on EEC
and indeed duality plays a key role in [9] (although the relationship with state wasn’t made
explicit there). We draw it out explicitly in Section 7.
Acknowledgements. We thank Alex Simpson for help and encouragement. Also thanks to
Lars Birkedal, Jeff Egger, Masahito Hasegawa, Shin-ya Katsumata and Paul Levy for helpful
discussions. Diagrams are typeset using the xymatrix package and Paul Taylor’s diagrams
package.
2. Enriched call-by-value: a calculus for enriched categories with copowers
The target language for the linear-use state translation is a new calculus called the enriched
call-by-value calculus (ECBV), which we now introduce. As we will explain, it is an internal
language for enriched categories with copowers.
The enriched call-by-value calculus is a fragment of the enriched effect calculus (EEC),
which was introduced by Egger et al. [8, 7] as a calculus for reasoning about linear usage
in computational effects. The types of ECBV can be understood as a fragment of linear
logic that is expressive enough to describe the linear state monad, S ⊸ !(−) ⊗ S. We will
not dwell on the connection with linear logic here.
2.1. Type theory and equational theory. The enriched call-by-value calculus has two
collections of types: value types and computation types. We use α, β, . . . to range over a
set of value type constants, and α, β, . . . to range over a disjoint set of computation type
constants. We then use upright letters A, B, . . . to range over value types, and underlined
letters A, B, . . . to range over computation types, which are specified by the grammar below:
    A, B ::= α | 1 | A × B | A ⊸ B
    A, B ::= α | !A ⊗ B .

Note that the construction !A ⊗ B is indivisible: the strings !α and α ⊗ β are not well-formed types. The stratification of types means that one cannot chain function types: the string α ⊸ (β ⊸ γ) is not well-formed.
Readers familiar with Levy’s Call-by-Push-Value [26] or EEC [8] should note that there
are no type constructors F and U for shifting between value and computation types and
that computation types are not included in the value types. The only way to shift between
value types and computation types is by using tensor and function types. As we will see,
this is the essence of the state-passing translation.
The enriched call-by-value calculus has two basic typing judgements, written

    Γ | − ⊢ t : B        and        Γ | z : A ⊢ t : B        (2.1)
In both judgements, Γ is an assignment of value types to variables. In the first judgement, B is a value type, and in the second judgement, both A and B need to be computation types. The second judgement should be thought of as a judgement of linearity in the variable z : A. These judgements are defined inductively by the typing rules in Figure 1, which are a restriction of the rules of EEC [7] to this type structure. In the figure, ∆ is an assignment of a computation type to a single variable, as in (2.1). The ideas behind the term language for the function space A ⊸ B and the tensor !A ⊗ B go back to the early work
on linear lambda calculus. In particular, the introduction rule for !A ⊗ B uses pairing, and
the elimination rule uses a pattern matching syntax.
2.2. Enriched call-by-value models. The categorical notion of model for ECBV involves
basic concepts from enriched category theory [21]. In summary, a model of the language
comprises two categories, V and C, interpreting the value and computation types respectively;
the function type A ⊸ B provides an enrichment of C in V, and the universal property of
the tensor type !A ⊗ B is the copower, sometimes called tensor.
We now make this precise. Let us recall some rudiments. Following [18, 14], we
begin with actions of categories. Let V be a category with finite products (by which
we mean that it has a chosen terminal object and chosen binary products). Recall that
an action of V on a category C is a functor · : V × C → C together with two natural isomorphisms, unit (1 · A) ≅ A and associativity ((A × B) · C) ≅ (A · (B · C)), that cohere with the isomorphisms arising from the product structure of V in the following sense: the two triangles

    (A × 1) · D ≅ A · (1 · D) ≅ A · D        (1 × A) · D ≅ 1 · (A · D) ≅ A · D

agree with the maps induced by the product isomorphisms A × 1 ≅ A ≅ 1 × A, and the pentagon

    ((A × B) × C) · D ≅ (A × (B × C)) · D ≅ A · ((B × C) · D) ≅ A · (B · (C · D))

agrees with the composite ((A × B) × C) · D ≅ (A × B) · (C · D) ≅ A · (B · (C · D)).
(We underline objects of C to distinguish them from objects of V.)
An enrichment of a category C in V with copowers is determined by an action of V on C such that each functor (− · B) : V → C has a right adjoint, C(B, −) : C → V. Then A · B is called a copower, and C(B, C) is called the enrichment. We write Hom_C(B, C) for the usual hom-set of C to distinguish it from the enrichment.
Definition 2.1. An enriched call-by-value model (or simply enriched model ) is given by a
category V with finite products and a category C enriched in V with copowers.
Types.
    A, B ::= α | 1 | A × B | A ⊸ B                (value types)
    A, B ::= α | !A ⊗ B .                         (computation types, underlined)

Term formation. (Premises ⟹ conclusion; axioms have no premises.)
    Γ, x : A, Γ′ | − ⊢ x : A        Γ | z : A ⊢ z : A        Γ | − ⊢ ? : 1
    Γ | − ⊢ t : A,  Γ | − ⊢ u : B  ⟹  Γ | − ⊢ ⟨t, u⟩ : A × B
    Γ | − ⊢ t : A₁ × A₂  ⟹  Γ | − ⊢ πᵢ(t) : Aᵢ
    Γ | z : A ⊢ t : B  ⟹  Γ | − ⊢ λz. t : A ⊸ B
    Γ | − ⊢ s : A ⊸ B,  Γ | ∆ ⊢ t : A  ⟹  Γ | ∆ ⊢ s[t] : B
    Γ | − ⊢ t : A,  Γ | ∆ ⊢ u : B  ⟹  Γ | ∆ ⊢ !t ⊗ u : !A ⊗ B
    Γ | ∆ ⊢ t : !A ⊗ B,  Γ, x : A | z : B ⊢ u : C  ⟹  Γ | ∆ ⊢ t to (!x ⊗ z). u : C

Equality. (We elide α-equivalence, reflexivity, symmetry, transitivity and congruence laws.)
    Γ | − ⊢ t : 1  ⟹  Γ | − ⊢ t ≡ ? : 1
    Γ | − ⊢ t₁ : A₁,  Γ | − ⊢ t₂ : A₂  ⟹  Γ | − ⊢ πᵢ(⟨t₁, t₂⟩) ≡ tᵢ : Aᵢ
    Γ | − ⊢ t : A₁ × A₂  ⟹  Γ | − ⊢ ⟨π₁(t), π₂(t)⟩ ≡ t : A₁ × A₂
    Γ | z : A ⊢ t : B,  Γ | ∆ ⊢ u : A  ⟹  Γ | ∆ ⊢ (λz. t)[u] ≡ t[u/z] : B
    Γ | − ⊢ t : A ⊸ B  ⟹  Γ | − ⊢ t ≡ λz. (t[z]) : A ⊸ B
    Γ | − ⊢ t : A,  Γ | ∆ ⊢ u : B,  Γ, x : A | z : B ⊢ v : C  ⟹  Γ | ∆ ⊢ (!t ⊗ u) to (!x ⊗ z). v ≡ v[t/x, u/z] : C
    Γ | ∆ ⊢ t : !A ⊗ B,  Γ | y : !A ⊗ B ⊢ u : C  ⟹  Γ | ∆ ⊢ t to (!x ⊗ z). u[!x ⊗ z/y] ≡ u[t/y] : C

Figure 1: The enriched call-by-value calculus
In Section 2.3 we will illustrate the definition with some examples of enriched models.
First, let us clarify the semantics for ECBV in an enriched model. The interpretation is
similar to the semantics of EEC proposed by Egger et al. [8].
• A value type A is interpreted as an object [[A]] of V, and a computation type A is
interpreted as an object [[A]] of C, as follows. The interpretation is defined by induction
on the structure of types. First, for each value type constant α, an object [[α]] of V is
given, and for each computation type constant α an object [[α]] of C is given. The product
types are interpreted as products in V. The remaining type constructions are interpreted using the enriched structure: we let [[!A ⊗ B]] ≝ [[A]] · [[B]], and [[A ⊸ B]] ≝ C([[A]], [[B]]).
• A value context Γ = (x₁ : A₁, . . . , xₙ : Aₙ) is interpreted as the product [[A₁]] × · · · × [[Aₙ]] in V. A computation context ∆ = (z : A) is interpreted as the object [[A]] in C.
• A judgement Γ | − ` t : A is interpreted as a morphism [[Γ]] → [[A]] in V, and a judgement
Γ | ∆ ` t : A is interpreted as a morphism [[Γ]] · [[∆]] → [[A]] in C. This definition is made
by induction on the structure of typing derivations, making use of the universal properties
of the interpretations of the types. For illustration, we consider the following two rules:

    Γ | z : A ⊢ t : B  ⟹  Γ | − ⊢ λz. t : A ⊸ B
    Γ | ∆ ⊢ t : !A ⊗ B,  Γ, x : A | z : B ⊢ u : C  ⟹  Γ | ∆ ⊢ t to (!x ⊗ z). u : C
In dealing with the linear lambda abstraction rule, the induction principle gives us an
interpretation [[t]] : [[Γ]] · [[A]] → [[B]] in C which we use to form [[λz. t]] : [[Γ]] → C([[A]], [[B]])
in V, using the natural bijection that defines the relationship between the copower and
the enrichment:
    Hom_C(A · B, C) ≅ Hom_V(A, C(B, C)).
For the pattern matching rule, we assume morphisms
[[t]] : [[Γ]] · [[∆]] → [[A]] · [[B]]
[[u]] : ([[Γ]] × [[A]]) · [[B]] → [[C]]
in C and use them to define [[t to (!x ⊗ z). u]] : [[Γ]] · [[∆]] → [[C]] as the following composite:

    [[Γ]] · [[∆]] --(diag · [[∆]])--> ([[Γ]] × [[Γ]]) · [[∆]] ≅ [[Γ]] · ([[Γ]] · [[∆]]) --([[Γ]] · [[t]])--> [[Γ]] · ([[A]] · [[B]]) ≅ ([[Γ]] × [[A]]) · [[B]] --[[u]]--> [[C]]
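For instance, in the Set-based model (cf. example (4) in Section 2.3, where the copower is cartesian product and the enrichment is the function space), the copower/enrichment bijection used above is ordinary currying; in Haskell:

    -- Hom(A × B, C) ≅ Hom(A, C^B), witnessed by the standard functions
    toEnr :: ((a, b) -> c) -> a -> (b -> c)
    toEnr = curry

    fromEnr :: (a -> (b -> c)) -> (a, b) -> c
    fromEnr = uncurry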
Proposition 2.2. The interpretation of the enriched call-by-value calculus in an enriched
model (V, C) is sound:
(1) If Γ | − ` t ≡ u : A then [[t]] = [[u]] : [[Γ]] → [[A]] in V.
(2) If Γ | ∆ ` t ≡ u : A then [[t]] = [[u]] : [[Γ]] · [[∆]] → [[A]] in C.
Proof notes. This is proved by induction on the structure of the derivation of (≡). The
following substitution lemma is helpful:
If Γ | ∆ ⊢ t : A and Γ | z : A ⊢ u : B then [[u[t/z]]] is the following composite morphism in C:

    [[Γ]] · [[∆]] --(diag · [[∆]])--> ([[Γ]] × [[Γ]]) · [[∆]] ≅ [[Γ]] · ([[Γ]] · [[∆]]) --([[Γ]] · [[t]])--> [[Γ]] · [[A]] --[[u]]--> [[B]]
2.3. Examples of enriched models. We now list some examples of enriched models
(Definition 2.1).
(1) If V = Set then a V-enriched category is just a locally small category. The copower
A · B is the A-fold iterated coproduct of B, if it exists. The following three examples
are instances of this example.
(2) Let V = Set and let C be the category of monoids and homomorphisms. The copower A · B, where A is a set and B is a monoid, can be described as a quotient of the free monoid on the product of sets, (A × |B|)*/∼. Here (A × |B|)* is the set of strings built of pairs in (A × |B|), which is a monoid under concatenation with the empty string ε as unit. The equivalence relation (∼) is generated by (a, b).(a, b′) ∼ (a, b.b′) and ε ∼ (a, e), where e is the unit of B. There is a similar description of the copower for any algebraic theory.
(3) We can modify Example (2) to make C the category of free monoids and monoid homomorphisms. That is, the objects are monoids of the form B*. In this situation, the copower satisfies A · B* = (A × B)*. In this example C is the Kleisli category for the free monoid monad; a code sketch follows this list. We will revisit Kleisli categories in Section 3.2.
(4) Let V = C = Set, with C considered with the ordinary enrichment. The copower A · B
is the cartesian product of sets. This situation generalizes to the situation where V = C
is an arbitrary cartesian closed category.
(5) Let V be the category of ω-cpo’s and continuous functions, and let C be the category
of pointed ω-cpo’s and strict functions. The enrichment C(A, B) is the cpo of strict
functions under the pointwise order, and the copower A · B is the smash product A⊥ ⊗ B.
(6) In the next section we will investigate a model built from the syntax of the enriched
call-by-value calculus.
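As promised in example (3), here is a small Haskell sketch of that Kleisli situation, using the list monad as the free-monoid monad; the names are ours:

    -- a morphism B* -> B'* in C, presented by its effect on generators
    type Kl b b' = b -> [b']

    -- the action of V = Set on C: on objects A · B* = (A × B)*;
    -- on morphisms, determined on generators:
    copow :: (a -> a') -> Kl b b' -> Kl (a, b) (a', b')
    copow f g (a, b) = [ (f a, b') | b' <- g b ]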
2.4. The syntactic enriched model. The types and terms of the enriched call-by-value
calculus form an enriched model which we call the syntactic enriched model.
Let V be the category whose objects are value types and where a morphism A → B
is a term in context x : A | − ` t : B modulo the equational theory (Figure 1) and modulo
renaming the free variable x. The identity morphism is the term x : A | − ` x : A, and
composition of morphisms
x : A|− ` t : B
y : B|− ` u : C
A −−−−−−−−→ B −−−−−−−−→ C
def
is given by substitution: u ◦ t = (x : A | − ` u[t/y ] : C). Since morphisms are actually
equivalence classes, the well-definedness of substitution depends on the following substitution
lemma
If x : A | − ⊢ t ≡ t′ : B and y : B | − ⊢ u ≡ u′ : C
then x : A | − ⊢ u[t/y] ≡ u′[t′/y] : C
which is proved by induction on the derivation of u ≡ u0 .
The laws of associativity and identity for composition are immediate. For instance,
associativity amounts to the following syntactic identity:
If x : A | − ⊢ t : B, y : B | − ⊢ u : C and z : C | − ⊢ v : D
then x : A | − ⊢ (v[u/z])[t/y] ≡ v[u[t/y]/z] : D.
The category V has products, given by the product types. The equations at product
types are exactly what is needed to guarantee the universal properties.
Let C be the category whose objects are computation types and where a morphism
A → B is a term in context − | z : A ` t : B modulo the equational theory and modulo
renaming the free variable z. Identities and composition are defined in a similar way to V.
The identity morphisms A → A are − | z : A ` z : A and composition is by substitution.
The action of V on C is given on objects by the tensor type: let A · B ≝ !A ⊗ B. Given morphisms

    A --(x : A | − ⊢ t : A′)--> A′ in V        and        B --(− | z : B ⊢ u : B′)--> B′ in C

we define a morphism t · u : (A · B) → (A′ · B′) in C by

    t · u ≝ (− | z′ : !A ⊗ B ⊢ z′ to (!x ⊗ z). !t ⊗ u : !A′ ⊗ B′).
Functoriality follows from the equational theory of ECBV. The unit and associativity
isomorphisms are straightforward to define. For example, associativity ((A × B) · C ≅ A · (B · C)) is given by exhibiting mutual inverses at all types:

    − | z : !(A × B) ⊗ C ⊢ z to (!x ⊗ z′). !(π₁x) ⊗ (!(π₂x) ⊗ z′) : !A ⊗ (!B ⊗ C)
    − | z : !A ⊗ (!B ⊗ C) ⊢ z to (!x ⊗ z′). z′ to (!y ⊗ z″). !⟨x, y⟩ ⊗ z″ : !(A × B) ⊗ C
It follows from the equational theory of ECBV that these are isomorphisms and are natural
and coherent.
Finally, we discuss the enrichment of C in V. Given types A, B and C we describe a natural bijection

    Hom_C(A · B, C) ≅ Hom_V(A, B ⊸ C)

From left to right the bijection takes a computation term − | z : !A ⊗ B ⊢ t : C to a value term x : A | − ⊢ λb. t[(!x ⊗ b)/z] : B ⊸ C. From right to left the bijection takes a value term x : A | − ⊢ u : B ⊸ C to a computation term − | z : !A ⊗ B ⊢ z to (!x ⊗ y). u[y] : C.
2.5. Universal property of the syntactic enriched model. The syntactic model described in Section 2.4 enjoys a universal property: it is an initial object in a category of
enriched models with structure preserving functors as morphisms. Given any other enriched
model, the unique morphism from the syntactic model is given by interpretation of syntax
in the model.
This semantic characterization of interpretation is standard in categorical semantics,
and it is useful for deriving syntactic results from semantics, as we shall see in Section 5.
However, we shall also see (Section 5.3) that we need to talk about morphisms of models
preserving structure only up to isomorphism, and the syntactic model is not initial with
respect to this collection of morphisms. Rather, interpretation defines a morphism which
is unique only up to unique isomorphism. In order to formulate this kind of initiality, we
need to be able to assess when two morphisms between enriched models are isomorphic.
Thus the enriched models form a groupoid-enriched category, i.e., a 2-category in which
each 2-cell is invertible. The idea of using groupoid-enriched categories of models has been
around for a long time (e.g. [6, §8]) and has been used in previous work on the enriched
effect calculus [8, 9].
A precise definition of the 2-category of enriched models Enr can be found in Appendix A.
Here we just state the initiality property of the syntactic model. First recall the following
definition from 2-category theory (see e.g. [22, §6]).
Definition 2.3. Let K be a 2-category. An object 0 of K is bi-initial if for any object A
the hom-category K(0, A) is equivalent to the terminal category (i.e., the category with one
object and one morphism).
An equivalent way of stating bi-initiality is to say that for any other object A there
exists a 1-cell 0 → A which is unique up to unique 2-isomorphism.
The syntactic model of Section 2.4 is bi-initial in the category Enr, but in this paper
we are much more interested in a category of enriched models (V, C) with a specified state
object S in C (because of the relation to the state-passing translation), and so we formulate
bi-initiality with respect to a category Ecbv of these. Like all other structure, 1-cells of Ecbv
are only required to preserve the state objects up to isomorphism. (See Appendix A.2 for a
full definition). We write (Vecbv , Cecbv , S) for the enriched model obtained as in Section 2.4
from the syntax of the enriched call-by-value calculus extended with a special computation
type constant S.
Theorem 2.4. The model (Vecbv , Cecbv , S) is bi-initial in Ecbv. The unique morphisms
with domain (Vecbv , Cecbv , S) are given by interpretation of syntax into models.
3. Fine-grain call-by-value, a calculus for Kleisli models
The source language for our state-passing translation is a call-by-value language equipped
with an equational theory to be thought of as generated by some operational semantics, as
in [34]. We use a ‘fine grain’ call-by-value language, following Levy et al. [27, 26]. We use α
to range over type constants. The types are given by the grammar
σ, τ ::= α | 1 | σ × τ | σ * τ .
The function space * is a call-by-value one, which takes a value and produces a computation.
The fine-grain call-by-value calculus (FGCBV) has two typing judgements, one for values
and one for producers. These are written Γ `v V : σ and Γ `p M : σ. The latter should be
thought of as typing computations which produce values in the type judged but may also
perform side-effects along the way. In both judgements the variables of the contexts are to
be considered as placeholders for values. Typing rules along with equality rules are given in
Figure 2.
The call-by-value language is called ‘fine grain’ because the order of evaluation is explicit.
Notice that the string (f (x), g(y)) is not well-formed syntax: one must specify the order of
evaluation, for instance, like this:
f(x) to x′. g(y) to y′. (x′, y′).
Translations from a ‘coarser grain’, more natural programming language are given by Levy
et al. ([27, §3], [26, §A.3.3]).
3.1. Interpretation in Kleisli models. The most natural way to interpret fine-grain call-by-value is to have two categories V and C to interpret the judgements ⊢v and ⊢p, but to insist that the two categories have exactly the same objects, since in this language there is only one class of types.
Types.
    σ, τ ::= α | 1 | σ × τ | σ * τ .

Term formation. (Premises ⟹ conclusion; axioms have no premises.)
    Γ, x : σ, Γ′ ⊢v x : σ        Γ ⊢v ? : 1
    Γ ⊢v V₁ : σ₁,  Γ ⊢v V₂ : σ₂  ⟹  Γ ⊢v ⟨V₁, V₂⟩ : σ₁ × σ₂
    Γ ⊢v V : σ₁ × σ₂  ⟹  Γ ⊢v πᵢ(V) : σᵢ
    Γ ⊢v V : σ  ⟹  Γ ⊢p return V : σ
    Γ ⊢p M : σ,  Γ, x : σ ⊢p N : τ  ⟹  Γ ⊢p M to x. N : τ
    Γ, x : σ ⊢p N : τ  ⟹  Γ ⊢v λx. N : σ * τ
    Γ ⊢v V : σ * τ,  Γ ⊢v W : σ  ⟹  Γ ⊢p V W : τ

Equality. (We elide α-equivalence, reflexivity, symmetry, transitivity and congruence laws.)
    Γ ⊢v M : 1  ⟹  Γ ⊢v M ≡ ? : 1
    Γ ⊢v V₁ : σ₁,  Γ ⊢v V₂ : σ₂  ⟹  Γ ⊢v πᵢ(⟨V₁, V₂⟩) ≡ Vᵢ : σᵢ
    Γ ⊢v V : σ₁ × σ₂  ⟹  Γ ⊢v ⟨π₁(V), π₂(V)⟩ ≡ V : σ₁ × σ₂
    Γ, x : σ ⊢p M : τ,  Γ ⊢v V : σ  ⟹  Γ ⊢p (λx. M) V ≡ M[V/x] : τ
    Γ ⊢v V : σ * τ  ⟹  Γ ⊢v λx. (V x) ≡ V : σ * τ
    Γ ⊢v V : σ,  Γ, x : σ ⊢p N : τ  ⟹  Γ ⊢p return V to x. N ≡ N[V/x] : τ
    Γ ⊢p M : σ  ⟹  Γ ⊢p M to x. return x ≡ M : σ
    Γ ⊢p M : σ,  Γ, x : σ ⊢p N : τ,  Γ, y : τ ⊢p P : υ  ⟹  Γ ⊢p (M to x. N) to y. P ≡ M to x. (N to y. P) : υ

Figure 2: Fine-grain call-by-value.
Definition 3.1. An enriched Kleisli model is an enriched call-by-value model (V, C) (Def. 2.1) together with an identity-on-objects functor J : V → C that strictly preserves copowers, which means that J(A × B) = A · J(B) (naturally in A and B) and that the canonical isomorphisms induced by the product structure are the coherent unit and associativity isomorphisms of the copowers:

    1 · JA = J(1 × A) ≅ JA
    (A × B) · JC = J((A × B) × C) ≅ J(A × (B × C)) = A · (B · JC).
We will sometimes say ‘Kleisli model’ for ‘enriched Kleisli model’. We use the name
‘Kleisli’ because this definition captures the situation where C is the Kleisli category for a
strong monad on V. The correspondence is explained in Section 3.2.
Kleisli models have earlier been called ‘closed Freyd categories’ by Levy et al. [27].
Their original definition of closed Freyd category is based on premonoidal categories; the
relationship with actions of categories and Kleisli models is observed by Levy [26, B.10].
A semantics for FGCBV is given in a Kleisli model in a standard way.
• Each base type is given an interpretation [[α]] as an object of V. This interpretation is extended to all types: [[1]] is the terminal object of V; [[σ × τ]] is the product of [[σ]] and [[τ]]; and [[σ * τ]] is defined using the enriched structure of C: [[σ * τ]] ≝ C([[σ]], [[τ]]).
• A context Γ = (x1 : σ1 , . . . , xn : σn ) is interpreted in V as a product [[σ1 ]] × · · · × [[σn ]].
• A value type judgement Γ `v V : σ is interpreted as a morphism [[Γ]] → [[σ]] in V and a
producer type judgement Γ `p M : σ is interpreted as a morphism [[Γ]] → [[σ]] in C. This
is defined by induction on the structure of derivations, using the universal properties of
Kleisli models. For illustration we consider the following rule.
Γ `p M : σ Γ, x : σ `p N : τ
Γ `p M to x. N : τ
The induction hypothesis gives us two morphisms in C,

    [[M]] : [[Γ]] → [[σ]]        and        [[N]] : [[Γ]] × [[σ]] → [[τ]],

and we use these to define a morphism in C that interprets (M to x. N):

    [[Γ]] --J(diag)--> [[Γ]] × [[Γ]] = [[Γ]] · [[Γ]] --([[Γ]] · [[M]])--> [[Γ]] · [[σ]] = [[Γ]] × [[σ]] --[[N]]--> [[τ]] .

As another example, [[return V]] = J([[V]]).
This defines a sound and complete notion of model for FGCBV.
Proposition 3.2 ([27], Prop. 5.1). The interpretation of the fine-grain call-by-value calculus
in a Kleisli model is sound:
(1) If Γ `v V ≡ W : σ then [[V ]] = [[W ]] : [[Γ]] → [[σ]] in V.
(2) If Γ `p M ≡ N : σ then [[M ]] = [[N ]] : [[Γ]] → [[σ]] in C.
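To see the Kleisli semantics concretely, here is a Haskell sketch with V as Haskell types and C as the Kleisli category of a monad m; the helper names are ours:

    -- [[Γ ⊢v V : σ]] is a morphism in V; [[Γ ⊢p M : σ]] is a morphism in C
    type ValSem g a    = g -> a
    type ProdSem m g a = g -> m a

    returnSem :: Monad m => ValSem g a -> ProdSem m g a
    returnSem v = return . v                  -- [[return V]] = J([[V]])

    toSem :: Monad m => ProdSem m g a -> ProdSem m (g, a) b -> ProdSem m g b
    toSem m n g = m g >>= \x -> n (g, x)      -- [[M to x. N]]: duplicate Γ, run M, then N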
3.2. Relationship with monads. We now explain the connection between enriched Kleisli
models and Kleisli categories for a monad. For more detail, see the paper by Levy et al. [27].
From the syntactic point of view, the fine-grain call-by-value language can be thought
of as a variant of Moggi’s λC [31]: the type construction (1 * (−)) is a monad.
From the semantic point of view, recall that a λC -model [31] is a category with finite
products and a strong monad with Kleisli exponentials. We now explain these conditions.
Let V be a category with finite products, and let (T, η, µ) be a monad on V. Let C be
the Kleisli category for T : the objects of C are the objects of V and a morphism A → B in
C is a morphism A → T (B) in V. There is an identity-on-objects functor J : V → C which
takes a morphism f : A → B in V to the morphism (ηB · f ) : A → B in C.
A strength for a monad T is usually expressed as a family of morphisms A × T (B) →
T (A × B) that respect the structure of the monad. In fact, a monad is strong if and only if
there is an action of V on C and the identity-on-objects functor J : V → C preserves it. The
strength is needed to take a morphism f : B → B 0 in C to a morphism (A · f ) : A · B → A · B 0
in C.
The requirement of Kleisli exponentials is normally expressed as the requirement that
for all A and B, the hom-functor Hom_V((−) × A, T B) : V^op → Set is representable. To
endow V with Kleisli exponentials is to give a right adjoint for the action, i.e. an enrichment
of C in V.
Conversely, every enriched Kleisli model (V, C, J) induces a strong monad on V with Kleisli exponentials. The monad is defined using the closed structure: T(A) ≝ C(1, A).
The Kleisli category for this monad is isomorphic to C. On the other hand, if we begin with
a monad, build the Kleisli category and then take the monad C(1, A), we recover a monad
that is isomorphic to the one that we started with. In this sense, enriched Kleisli models and
λC -models are equivalent. Note that they are not exactly the same, for although the hom-set
HomV (A, C(1, B)) is in bijection with HomC (A, B), the sets are typically not identical.
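A Haskell sketch of this back-and-forth between a strong monad and its Kleisli category (standard constructions; the names are ours):

    newtype Kl m a b = Kl { runKl :: a -> m b }   -- hom-sets of the Kleisli category C

    j :: Monad m => (a -> b) -> Kl m a b          -- the identity-on-objects functor J
    j f = Kl (return . f)

    -- recovering the monad as T(A) = C(1, A):
    toT :: m a -> (() -> m a)
    toT ma = \() -> ma

    fromT :: (() -> m a) -> m a
    fromT k = k ()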
3.3. The syntactic Kleisli model. The types and terms of the fine-grain call-by-value
calculus form a syntactic model. We employ the same kinds of technique as for enriched
call-by-value in Section 2.4.
• The objects of both V and C are the types of FGCBV.
• A morphism σ → τ in V is a value judgement x : σ `v V : τ modulo the equational
theory ≡ (Figure 2) and modulo renaming the free variable x. Identities are variables and
composition is by substitution.
• A morphism σ → τ in C is a computation judgement x : σ `p M : τ modulo the equational
theory ≡ and renaming the free variable x. The identity σ → σ is x : σ `p return x : σ.
Composition is not by substitution, since one cannot substitute a producer term for a
variable. Rather, the composite of
    σ --(x : σ ⊢p M : τ)--> τ --(y : τ ⊢p N : υ)--> υ

in C is x : σ ⊢p M to y. N : υ.
• The product structure in V is given by the product types, projections and pairing.
• The action of V on C is given on objects by the binary product types: let σ · τ ≝ σ × τ.
On morphisms, given σ --(x : σ ⊢v V : σ′)--> σ′ in V and τ --(y : τ ⊢p M : τ′)--> τ′ in C, we define

    (σ · τ --(V · M)--> σ′ · τ′) ≝ (z : σ × τ ⊢p M[π₂(z)/y] to y′. return ⟨V[π₁(z)/x], y′⟩ : σ′ × τ′)
• The enrichment is given by C(σ, τ) ≝ (σ * τ).
• The identity-on-objects functor J : V → C takes a morphism σ --(x : σ ⊢v V : τ)--> τ in V to the morphism σ --(x : σ ⊢p return V : τ)--> τ in C.
We write (Vfgcbv , Cfgcbv , J) for the syntactic Kleisli model.
3.4. Universal property of the syntactic Kleisli model. Appendix A.3 defines the
2-category Kleisli of Kleisli models. As was the case for Ecbv, 1-cells are only required to
preserve structure up to isomorphism.
Theorem 3.3. The syntactic Kleisli model (Vfgcbv , Cfgcbv , J) is bi-initial in Kleisli. The
unique morphisms with domain (Vfgcbv , Cfgcbv , J) are given by interpretation of syntax into
models.
4. The linear-use state-passing translation
This section defines the linear-use state-passing translation from the fine-grain call-by-value
calculus to the enriched call-by-value calculus, and states the main syntactic results of this
paper: fullness on types and full completeness. Together these assert that the linear-use
state-passing translation is an equivalence of languages.
We now fix a computation type S of ECBV. For now, it can be an arbitrary computation
type; later we will make it a distinguished basic type to achieve a full completeness result.
We will describe a translation from FGCBV to ECBV. When S is thought of as a type of
states, then this translation reads as a state-passing translation.
We translate FGCBV types σ to ECBV value types σ^S:

    α^S ≝ α        1^S ≝ 1        (σ × τ)^S ≝ σ^S × τ^S        (σ * τ)^S ≝ !(σ^S) ⊗ S ⊸ !(τ^S) ⊗ S
We extend this translation to type contexts, taking an FGCBV type context Γ to an ECBV type context Γ^S.

The translation on terms is syntax-directed. We pick a variable s, completely fresh. The translation takes an FGCBV value type judgement Γ ⊢v V : σ to an ECBV judgement Γ^S | − ⊢ V^S : σ^S, and an FGCBV producer judgement Γ ⊢p M : σ to an ECBV judgement Γ^S | s : S ⊢ M^S_s : !(σ^S) ⊗ S. The translation is defined as follows.
    x^S ≝ x        ?^S ≝ ?        ⟨V, W⟩^S ≝ ⟨V^S, W^S⟩
    (π₁(V))^S ≝ π₁(V^S)        (π₂(V))^S ≝ π₂(V^S)
    (return V)^S_s ≝ !(V^S) ⊗ s
    (M to x. N)^S_s ≝ M^S_s to (!x ⊗ s). N^S_s
    (λx. N)^S ≝ λz. z to (!x ⊗ s). N^S_s
    (V W)^S_s ≝ V^S [!(W^S) ⊗ s]
In the case for λ-abstraction, the z is chosen to be completely fresh.
The translation respects types. For instance, the rule

    Γ, x : σ ⊢p N : τ  ⟹  Γ ⊢v λx. N : σ * τ

becomes

    Γ^S | z : !σ^S ⊗ S ⊢ z : !σ^S ⊗ S,  Γ^S, x : σ^S | s : S ⊢ N^S_s : !τ^S ⊗ S
    ⟹  Γ^S | z : !σ^S ⊗ S ⊢ z to (!x ⊗ s). N^S_s : !τ^S ⊗ S
    ⟹  Γ^S | − ⊢ λz. z to (!x ⊗ s). N^S_s : !σ^S ⊗ S ⊸ !τ^S ⊗ S
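A Haskell sketch of the term translation, under the simplifying assumption that !A ⊗ S ⊸ !B ⊗ S is modelled by the ordinary function type (A, S) -> (B, S), so linearity is not enforced; the names are ours:

    type S = Int                              -- an assumed state object

    type Arr a b = (a, S) -> (b, S)           -- approximates !a ⊗ S ⊸ !b ⊗ S

    retT :: a -> S -> (a, S)                  -- (return V)^S_s = !V^S ⊗ s
    retT v s = (v, s)

    toT :: (S -> (a, S)) -> (a -> S -> (b, S)) -> S -> (b, S)
    toT m n s = let (x, s') = m s in n x s'   -- (M to x. N)^S_s

    lamT :: (a -> S -> (b, S)) -> Arr a b     -- (λx. N)^S
    lamT n (x, s) = n x s

    appT :: Arr a b -> a -> S -> (b, S)       -- (V W)^S_s = V^S [!W^S ⊗ s]
    appT v w s = v (w, s)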
Theorem 4.1. The linear-use state-passing translation is sound:
(1) If Γ ⊢v V ≡ W : σ then Γ^S | − ⊢ V^S ≡ W^S : σ^S.
(2) If Γ ⊢p M ≡ N : σ then Γ^S | s : S ⊢ M^S_s ≡ N^S_s : !(σ^S) ⊗ S.
This result can be proved by induction on the structure of equality (≡) derivations, but
it can also be derived semantically as we shall see in Section 5.4.
4.1. Full completeness. We now state our main theorems: fullness on types and full
completeness on terms. To state fullness on types we need to talk about isomorphism of
types in ECBV. This can be defined in the usual way: for value types, an isomorphism A ≅ B is given by two judgements, x : A | − ⊢ t : B and y : B | − ⊢ u : A, such that u[t/y] ≡ x and t[u/x] ≡ y. For computation types, A ≅ B is witnessed by closed terms of type A ⊸ B and B ⊸ A composing in both directions to identities. We note the following type isomorphisms,
inherited from the enriched effect calculus [8, §3]:
    A ≅ !1 ⊗ A        !A ⊗ (!B ⊗ C) ≅ !(A × B) ⊗ C        (4.1)
Theorem 4.2 (Fullness on types). Let A be a value type of ECBV formed using no computation type constants other than S. Then there exists an FGCBV type σ such that σ^S ≅ A.

Proof. By induction on the structure of types. The interesting case A ⊸ B uses the fact that any computation type not using any α other than S is isomorphic to one of the form !C ⊗ S, which follows from the isomorphisms (4.1).
We now state our main syntactic result.
Theorem 4.3 (Full completeness).
(1) If Γ ⊢v V, W : σ and Γ^S | − ⊢ V^S ≡ W^S : σ^S then Γ ⊢v V ≡ W : σ.
(2) If Γ ⊢p M, N : σ and Γ^S | s : S ⊢ M^S_s ≡ N^S_s : !σ^S ⊗ S then Γ ⊢p M ≡ N : σ.
(3) For any Γ^S | − ⊢ t : σ^S there is a term Γ ⊢v V : σ such that Γ^S | − ⊢ t ≡ V^S : σ^S.
(4) For any Γ^S | s : S ⊢ t : !(σ^S) ⊗ S there is a producer term Γ ⊢p M : σ such that Γ^S | s : S ⊢ t ≡ M^S_s : !(σ^S) ⊗ S.
In Section 5.4 we sketch a semantic proof of Theorems 4.2 and 4.3.
5. A semantic proof of full completeness
In this section we present two constructions on models. The first (§5.1) constructs a Kleisli
model (Def. 3.1) from an enriched model (Def. 2.1) with a specified state object. The second
(§5.2) constructs an enriched model with a state object from a given Kleisli model. The
state-passing translation arises from the first of these constructions. These two constructions
form a bi-adjunction exhibiting the category of Kleisli models as a coreflective subcategory of
the category of enriched models with chosen state objects (§5.3). In Section 5.4 we shall see
how to use these facts to explain full completeness of the linear-use state-passing translation
(Theorem 4.3).
5.1. From enriched models with state to Kleisli models. Given an enriched call-by-value model (V, C) with a state object S in C, we can form an enriched Kleisli model Kl(V, C, S) ≝ (V, Kl_S, J_S), where the category Kl_S has the same objects as V and hom-sets

    Hom_{Kl_S}(A, B) ≝ Hom_C(A · S, B · S)

Composition in Kl_S is just composition as in C. (This is an isomorphic presentation of the Kleisli category for the monad C(S, − · S) on V.) The functor J_S is the identity on objects and maps f : A → B to f · S.
Lemma 5.1. For any enriched call-by-value model with state (V, C, S) the data (V, KlS , JS )
defines an enriched Kleisli model.
Proof. The action (−₁) ·_Kl (−₂) : V × Kl_S → Kl_S is defined on objects as A ·_Kl B = A × B. On morphisms it maps f : A → A′ and g : B · S → B′ · S to the composite

    (A × B) · S ≅ A · (B · S) --(f · g)--> A′ · (B′ · S) ≅ (A′ × B′) · S

which is easily seen to be functorial. The right adjoint to (−) ·_Kl A is Kl_S(A, −) ≝ C(A · S, (−) · S).
The construction Kl described above extends to a 2-functor Kl : Ecbv → Kleisli from
the 2-category of enriched models to the 2-category of Kleisli models. See Appendix C.2 for
details.
5.2. From Kleisli models to enriched models with state. Any Kleisli model is trivially an enriched model, so for the opposite construction we just need to pick a state object in a Kleisli model. We define St(V, C, J) ≝ (V, C, 1), where 1 is the terminal object of V considered as an object of C. This definition extends to a 2-functor St : Kleisli → Ecbv, as shown in Appendix C.
The motivation for this definition is that, as we now show, the 2-category Ecbv can
be seen as a 2-category of enriched adjunctions, and the 2-functor St can be seen as an
inclusion of Kleisli adjunctions into Ecbv.
Let (V, C) be an enriched model. By an enriched adjunction we mean an adjunction F ⊣ U : C → V equipped with a natural coherent isomorphism F(A × B) ≅ A · F(B). When V is cartesian closed, this is equivalent to the usual definition, i.e. a natural isomorphism C(F(−₁), −₂) ≅ V(−₁, U(−₂)) in V (see e.g. [20]).
Any choice of state object gives an enriched adjunction, since (− · S) is left adjoint to
C(S, −) : C → V. The following proposition (first noted for EEC, [8, Proof of Thm. 4], [7])
shows that every enriched adjunction arises in this way:
Proposition 5.2 ([7]). Let (V, C) be an enriched model. If F ⊣ U : C → V is an enriched adjunction then it is naturally isomorphic to the enriched adjunction induced by F(1).
So enriched adjunctions correspond essentially bijectively to state objects. In particular the state object corresponding to a Kleisli model is 1. Monads induced by state objects can be described in ECBV as S ⊸ !(−) ⊗ S. By the correspondence between Kleisli models and strong monads we arrive at the slogan: Every strong monad is a linear-use state monad. More directly, the slogan can be derived from the isomorphism Kl_T(1, A × 1) ≅ T(A), which holds for the Kleisli category Kl_T of any strong monad T.
(Remark: if T is a strong monad on V then, for any object S of KlT , the linear-use
state monad KlT (S, (−) · S) on V is also known as the application of the state monad
transformer to T , as in [28].)
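In Haskell terms the remark reads as follows, using StateT from the standard transformers package; only the aliases are ours:

    import Control.Monad.Trans.State (StateT(..))

    -- Kl_T(S, (−) · S): the linear-use state monad at an object S of Kl_T
    type LinState s t a = s -> t (a, s)

    toStateT :: LinState s t a -> StateT s t a
    toStateT = StateT                          -- StateT s t a wraps s -> t (a, s)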
5.3. A coreflection. The constructions Kl and St form a bi-adjunction between the 2-categories of Kleisli models and of enriched models with state. One intuition for this is that
it expresses the minimality property of Kleisli resolutions of monads.
Theorem 5.3. The 2-functor St : Kleisli → Ecbv is left biadjoint to Kl, i.e., for any pair
of objects (V, C, J) and (V′, C′, S) of Kleisli and Ecbv respectively, there is an equivalence of categories

    Ecbv(St(V, C, J), (V′, C′, S)) ≃ Kleisli((V, C, J), Kl(V′, C′, S))

natural in (V, C, J) and (V′, C′, S). Moreover, the unit of the adjunction η : id_Ecbv → Kl ∘ St
is an isomorphism.
See Appendix C for a proof of Theorem 5.3. Since left bi-adjoints preserve bi-initial
objects we get the following connection between the syntactic enriched model (Vecbv , Cecbv , S)
and the syntactic Kleisli model (Vfgcbv , Cfgcbv , J).
Corollary 5.4. (Vecbv , Cecbv , S) and St(Vfgcbv , Cfgcbv , J) are equivalent as objects of Ecbv.
Since Kl(St(V, C, J)) = (V, Kl₁, J₁), the unit of the adjunction can be described as the pair (id_V, G : C → Kl₁) where G(f : A → B) is the composite

    A · 1 ≅ A --f--> B ≅ B · 1
using the isomorphism J(π) : A · 1 = A × 1 → A. The pair (id V , G) preserves the enrichment
only up to isomorphism, and this is our main motivation for using 2-categories of models
(see also the discussion in Section 2.5).
5.4. A semantic explanation of the state-passing translation. The linear-use state-passing translation is essentially the interpretation of fine-grain call-by-value into the model obtained by applying the construction Kl of Section 5 to the syntactic enriched call-by-value model of Section 2.4. In this model, judgements Γ ⊢v V : σ and Γ ⊢p M : σ are interpreted as judgements of the form

    x : ∏Γ^S | − ⊢ [[V]] : σ^S        − | x : !(∏Γ^S) ⊗ S ⊢ [[M]] : !(σ^S) ⊗ S

respectively, where ∏Γ^S is the product of all the types appearing in Γ^S.
Lemma 5.5. Let (V_ecbv, C_ecbv, S) be the syntactic enriched model of Section 2.4. The interpretation of FGCBV into Kl(V_ecbv, C_ecbv, S) models a type σ as σ^S. Let Γ = x₁ : σ₁, . . . , xₙ : σₙ be a context of FGCBV and let Γ ⊢v V : τ and Γ ⊢p M : τ be judgements of FGCBV. Then V and M are interpreted as the equivalence classes of the terms

    x : ∏Γ^S | − ⊢ V^S[π₁x . . . πₙx / x₁ . . . xₙ] : τ^S
    − | x : !(∏Γ^S) ⊗ S ⊢ x to (!z ⊗ s). (M^S_s[π₁z . . . πₙz / x₁ . . . xₙ]) : !(τ^S) ⊗ S
Soundness of the state-passing translation (Theorem 4.1) follows immediately from
Lemma 5.5. Fullness on types and full completeness (Theorems 4.2 and 4.3) are also
consequences.
Proof of Theorems 4.2 and 4.3. By Theorem 3.3 and Lemma 5.5 the state-passing translation is the unique (up to isomorphism) 1-cell
(F, G) : (Vfgcbv , Cfgcbv , J) → Kl(Vecbv , Cecbv , S)
in Kleisli. It suffices to show that this is an equivalence in Kleisli, because this implies that
F and G are both equivalences of categories, in particular they are essentially full on objects
(proving Theorem 4.2) and full and faithful (proving Theorem 4.3).
By initiality (F, G) must be isomorphic to the composite
    (V_fgcbv, C_fgcbv, J) --η--> Kl(St(V_fgcbv, C_fgcbv, J)) --≃--> Kl(V_ecbv, C_ecbv, S)
of the unit of the adjunction (which is an isomorphism by Theorem 5.3) and Kl applied to
the equivalence of Corollary 5.4. Since this composite is an equivalence, so is (F, G).
6. Sums
It is routine to add sum types to the languages considered in Sections 2 and 3, and the state-passing translation extends straightforwardly. We now summarize the details.

6.1. Sums in the enriched call-by-value calculus. We add sums to the enriched call-by-value calculus, for both value and computation types. The required modifications to the
calculus are given in Figure 3. The resulting calculus is still a fragment of the enriched effect
calculus [7]. We now extend the notion of model to accommodate the sums.
Terminology. Recall that a distributive category is a category with finite products and
coproducts such that the canonical morphisms 0 → A × 0 and ((A × B) + (A × C)) →
(A × (B + C)) are isomorphisms.
If a distributive category V has an action (·) on a category C with finite coproducts
(0, ⊕), then we say that the situation is distributive if the four canonical morphisms 0 → A · 0,
(A · B) ⊕ (A · C) → A · (B ⊕ C), 0 → 0·A and (A·C)⊕(B·C) → (A+B)·C are isomorphisms.
If the action is enriched, i.e. each functor (− · A) : V → C has a right adjoint, then this
definition of distributive coproducts amounts to the usual notion of enriched coproducts.
(Note that when the action is enriched then the third and fourth canonical morphisms are
necessarily isomorphisms since left adjoints preserve colimits.)
Definition 6.1. A distributive enriched model is given by a distributive category V and a
category C enriched in V with copowers and enriched coproducts.
It is straightforward to extend the sound interpretation of the enriched call-by-value
calculus in enriched models (§2.2) to a sound interpretation of enriched call-by-value calculus
with sums in distributive enriched models.
Of the examples in Section 2.3, (1)–(5) are distributive. The syntactic model of the
version of the calculus with sums is a distributive enriched model, and it is bi-initial for the
suitable generalization of morphism.
6.2. Sums in the fine-grain call-by-value calculus. It is equally straightforward to add
sums to the fine-grain call-by-value language. This is summarized in Figure 4.
We only include constructors/destructors as value terms, but from these we can derive
constructors/destructors for producer terms, as follows.
    image_p(M) ≝ M to x. return ?(x)
    in_i^p(M) ≝ M to x. return in_i(x)
    case_p M of (in₁(x₁).N₁ | in₂(x₂).N₂) ≝ M to z. (case z of (in₁(x₁).λw. N₁ | in₂(x₂).λw. N₂))(?)
where w, z are fresh.
These constructions have derived typing rules (6.1):

    Γ ⊢p M : 0  ⟹  Γ ⊢p image_p(M) : σ
    Γ ⊢p M_i : σ_i  ⟹  Γ ⊢p in_i^p(M_i) : σ₁ + σ₂        (i = 1, 2)
    Γ ⊢p M : σ₁ + σ₂,  Γ, x_i : σ_i ⊢p N_i : τ (i = 1, 2)  ⟹  Γ ⊢p case_p M of (in₁(x₁).N₁ | in₂(x₂).N₂) : τ
Types.
    A, B ::= α | 1 | A × B | A ⊸ B | 0 | A + B
    A, B ::= α | !A ⊗ B | 0 | A ⊕ B .

Term formation. The following rules are in addition to the rules in Figure 1.
    Γ | − ⊢ t : 0  ⟹  Γ | − ⊢ ?(t) : A
    Γ | ∆ ⊢ t : 0  ⟹  Γ | ∆ ⊢ ?(t) : A
    Γ | − ⊢ t : A_i  ⟹  Γ | − ⊢ in_i(t) : A₁ + A₂        (i = 1, 2)
    Γ | ∆ ⊢ t : A_i  ⟹  Γ | ∆ ⊢ in_i(t) : A₁ ⊕ A₂        (i = 1, 2)
    Γ | − ⊢ t : A₁ + A₂,  Γ, x₁ : A₁ | − ⊢ u₁ : C,  Γ, x₂ : A₂ | − ⊢ u₂ : C  ⟹  Γ | − ⊢ case t of (in₁(x₁). u₁ | in₂(x₂). u₂) : C
    Γ | ∆ ⊢ s : A₁ ⊕ A₂,  Γ | x₁ : A₁ ⊢ t₁ : C,  Γ | x₂ : A₂ ⊢ t₂ : C  ⟹  Γ | ∆ ⊢ case s of (in₁(x₁). t₁ | in₂(x₂). t₂) : C

Equality. The following rules are in addition to the rules in Figure 1.
    Γ | − ⊢ t : 0,  Γ, x : 0 | − ⊢ u : A  ⟹  Γ | − ⊢ ?(t) ≡ u[t/x] : A
    Γ | ∆ ⊢ t : 0,  Γ | x : 0 ⊢ u : A  ⟹  Γ | ∆ ⊢ ?(t) ≡ u[t/x] : A
    Γ | − ⊢ t : A_i,  Γ, x₁ : A₁ | − ⊢ u₁ : B,  Γ, x₂ : A₂ | − ⊢ u₂ : B  ⟹  Γ | − ⊢ case (in_i(t)) of (in₁(x₁). u₁ | in₂(x₂). u₂) ≡ u_i[t/x_i] : B        (i = 1, 2)
    Γ | ∆ ⊢ t : A_i,  Γ | x₁ : A₁ ⊢ u₁ : B,  Γ | x₂ : A₂ ⊢ u₂ : B  ⟹  Γ | ∆ ⊢ case (in_i(t)) of (in₁(x₁). u₁ | in₂(x₂). u₂) ≡ u_i[t/x_i] : B        (i = 1, 2)
    Γ | − ⊢ t : A₁ + A₂,  Γ, z : A₁ + A₂ | − ⊢ u : B  ⟹  Γ | − ⊢ u[t/z] ≡ case t of (in₁(x₁). u[in₁(x₁)/z] | in₂(x₂). u[in₂(x₂)/z]) : B
    Γ | ∆ ⊢ t : A₁ ⊕ A₂,  Γ | z : A₁ ⊕ A₂ ⊢ u : B  ⟹  Γ | ∆ ⊢ u[t/z] ≡ case t of (in₁(x₁). u[in₁(x₁)/z] | in₂(x₂). u[in₂(x₂)/z]) : B

Figure 3: Additional rules for sum types in Enriched Call-by-Value
Types.
    σ ::= α | 1 | σ × σ | σ * σ | 0 | σ + σ

Term formation. The following rules are in addition to the rules in Figure 2.
    Γ ⊢v V : 0  ⟹  Γ ⊢v ?(V) : σ
    Γ ⊢v V : σ₁  ⟹  Γ ⊢v in₁(V) : σ₁ + σ₂
    Γ ⊢v V : σ₂  ⟹  Γ ⊢v in₂(V) : σ₁ + σ₂
    Γ ⊢v V : σ₁ + σ₂,  Γ, x₁ : σ₁ ⊢v W₁ : τ,  Γ, x₂ : σ₂ ⊢v W₂ : τ  ⟹  Γ ⊢v case V of (in₁(x₁).W₁ | in₂(x₂).W₂) : τ

Equality. The following rules are in addition to the rules in Figure 2.
    Γ ⊢v V : 0,  Γ, x : 0 ⊢v W : σ  ⟹  Γ ⊢v ?(V) ≡ W[V/x] : σ
    Γ ⊢v V : σ_i,  Γ, x₁ : σ₁ ⊢v W₁ : τ,  Γ, x₂ : σ₂ ⊢v W₂ : τ  ⟹  Γ ⊢v case in_i(V) of (in₁(x₁).W₁ | in₂(x₂).W₂) ≡ W_i[V/x_i] : τ        (i = 1, 2)
    Γ ⊢v V : σ₁ + σ₂,  Γ, z : σ₁ + σ₂ ⊢v W : τ  ⟹  Γ ⊢v W[V/z] ≡ case V of (in₁(x₁).W[in₁(x₁)/z] | in₂(x₂).W[in₂(x₂)/z]) : τ

Figure 4: Additional rules for sum types in Fine-Grain Call-by-Value.
For example, the derived typing of case_p is obtained as follows: from Γ, x₁ : σ₁ ⊢p N₁ : τ and Γ, x₂ : σ₂ ⊢p N₂ : τ we get Γ, x₁ : σ₁ ⊢v λw. N₁ : 1 * τ and Γ, x₂ : σ₂ ⊢v λw. N₂ : 1 * τ, hence Γ, z : σ₁ + σ₂ ⊢v case z of (in₁(x₁).λw. N₁ | in₂(x₂).λw. N₂) : 1 * τ. Applying this to Γ, z : σ₁ + σ₂ ⊢v ? : 1 gives Γ, z : σ₁ + σ₂ ⊢p (case z of (in₁(x₁).λw. N₁ | in₂(x₂).λw. N₂)) (?) : τ, and combining with Γ ⊢p M : σ₁ + σ₂ yields Γ ⊢p case_p M of (in₁(x₁).N₁ | in₂(x₂).N₂) : τ.
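The thunking trick in that derivation can also be written down directly in Haskell, modelling producers over a monad m; the names are ours:

    -- case_p, derived: bind, select a thunk () -> m c, and force it with ()
    casep :: Monad m => m (Either a b) -> (a -> m c) -> (b -> m c) -> m c
    casep m n1 n2 = m >>= \z -> (either (\x1 () -> n1 x1) (\x2 () -> n2 x2) z) ()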
We now refine the notion of Kleisli model (Def. 3.1) to accommodate the sums.
Definition 6.2. A distributive enriched Kleisli model (distributive Kleisli model for short)
is a distributive enriched model (Def 6.1) together with an identity-on-objects functor
J : V → C that strictly preserves copowers and coproducts.
Note that in any Kleisli model, J will preserve coproducts because it has a right adjoint,
C(1, −). We insist moreover that it preserves coproducts strictly.
Note that a distributive Kleisli model is what Power [41, Def. 36] calls a distributive
closed Freyd category.
It is straightforward to extend our interpretation of the fine-grain call-by-value calculus
in Kleisli models (§3.1) to an interpretation of the calculus with sums in distributive Kleisli
models. The interpretation is sound and there is a syntactic model.
In Section 3.2 we discussed the equivalence between enriched Kleisli models and strong
monads with Kleisli exponentials. This equivalence extends to an equivalence between
distributive enriched Kleisli models, and strong monads on distributive categories with Kleisli
exponentials. Likewise, the constructions St and Kl of Section 5 extend to constructions
deriving distributive enriched models from distributive Kleisli models and vice versa, and an
extension of Theorem 5.3 states that St and Kl exhibit a 2-category of distributive Kleisli
models as a coreflective subcategory of a 2-category of distributive enriched models.
6.3. Sums and the state-passing translation. It is straightforward to adapt the state-passing translation to accommodate sums. Recall that each type σ of FGCBV is translated to a value type σ^S of ECBV. We set

    0^S ≝ 0        (σ + τ)^S ≝ σ^S + τ^S .
We extend this translation to type contexts, taking an FGCBV type context Γ to an ECBV type context Γ^S.

Recall that the translation on terms takes an FGCBV value type judgement Γ ⊢v V : σ to an ECBV judgement Γ^S | − ⊢ V^S : σ^S, and takes an FGCBV producer judgement Γ ⊢p M : σ to an ECBV judgement Γ^S | s : S ⊢ M^S_s : !(σ^S) ⊗ S. We extend the translation in Section 4 straightforwardly, as follows:
    ?(V)^S ≝ ?(V^S)        in₁(V)^S ≝ in₁(V^S)        in₂(V)^S ≝ in₂(V^S)
    (case V of (in₁(x₁).W₁ | in₂(x₂).W₂))^S ≝ case V^S of (in₁(x₁). W₁^S | in₂(x₂). W₂^S)
The translation remains sound. The full definability results (Theorems 4.2 and 4.3) continue
to hold in the presence of sums.
7. Remarks on the linear-use continuation-passing translation
We now briefly emphasise that the linear-use continuation-passing translation arises as
a formal dual of the linear-use state-passing translation. This is not a new observation:
Hasegawa noticed it in the context of classical linear logic ([15, §8],[33]) and indeed it
informed the earlier work on the enriched effect calculus.
The linear-use continuation-passing translation was first elaborated by Berdine, O’Hearn,
Reddy and Thielecke [4], but our main reference is the more recent work by Egger, Møgelberg
and Simpson [9, 10]. They showed that the linear-use continuation-passing translation can be
extended to an involutive translation of the enriched effect calculus to itself, and derived a full
completeness result from this. That work, in turn, stems from Hasegawa’s full completeness
result [15] for a linear-use continuation-passing translation into dual intuitionistic / linear
logic.
Following [9], our development is fuelled by the following categorical observation. If a category C is enriched in V with copowers, then we can form the dual category C^op, which is also enriched in V, but now with powers instead of copowers. (Recall that an enriched category C has powers Y^X if the functor Hom_V(X, C(−, Y)) : C^op → Set is
Types.
    A, B ::= α | 1 | A × B | A ⊸ B
    A, B ::= α | A → B .

Term formation. (Premises ⟹ conclusion; axioms have no premises.)
    Γ, x : A, Γ′ | − ⊢ x : A        Γ | z : A ⊢ z : A        Γ | − ⊢ ? : 1
    Γ | − ⊢ t : A,  Γ | − ⊢ u : B  ⟹  Γ | − ⊢ ⟨t, u⟩ : A × B
    Γ | − ⊢ t : A₁ × A₂  ⟹  Γ | − ⊢ πᵢ(t) : Aᵢ
    Γ | z : A ⊢ t : B  ⟹  Γ | − ⊢ λz. t : A ⊸ B
    Γ | − ⊢ s : A ⊸ B,  Γ | ∆ ⊢ t : A  ⟹  Γ | ∆ ⊢ s[t] : B
    Γ, x : A | ∆ ⊢ t : B  ⟹  Γ | ∆ ⊢ λx. t : A → B
    Γ | ∆ ⊢ t : A → B,  Γ | − ⊢ u : A  ⟹  Γ | ∆ ⊢ t u : B

Equality. The equations of Figure 1, but with the equations for tensor types replaced by:
    Γ, x : A | ∆ ⊢ t : B,  Γ | − ⊢ u : A  ⟹  Γ | ∆ ⊢ (λx. t) u ≡ t[u/x] : B
    Γ | ∆ ⊢ t : A → B  ⟹  Γ | ∆ ⊢ t ≡ λx. (t x) : A → B

Figure 5: A CPS variant of the enriched call-by-value calculus.
When viewed under this duality, the state-passing translation becomes the continuation-passing translation, as we now explain.
In Figure 5, we provide an internal language for enriched categories with powers. We
call this the ‘CPS variant of ECBV’, because it is the target of the continuation passing
translation (see below). The key difference with Figure 1 is that we have replaced the tensor
type (!A ⊗ B) by a power type (A → B). It is another fragment of the enriched effect calculus.
If V is a category with products and C is a category enriched in V with powers, then we can interpret this CPS variant of ECBV in (V, C) through a variation of the interpretation in Section 2.2. The power type is interpreted using the categorical powers: [[A → B]] := [[B]]^[[A]]. A computation judgement Γ | ∆ ⊢ t : A is interpreted as a morphism [[t]] : [[∆]] → [[A]]^[[Γ]] in C.
Following the categorical analysis above, we define a bijection (−)° between the types of ECBV and those of this CPS variant:

    α° := α        1° := 1        (A × B)° := (A° × B°)
    (A ⊸ B)° := (B° ⊸ A°)        (!A ⊗ B)° := A° → B°        (7.1)
This bijection extends to terms-in-context straightforwardly, and the resulting translation is a restriction of the linear-use CPS involution of the enriched effect calculus studied in [9, 10].
We achieve a linear-use continuation-passing translation by composing the state-passing
translation of Section 4 with this bijection (7.1). For clarity, we now write this out explicitly.
We fix a computation type R of ECBV, thought of as a return type. We translate FGCBV types σ to ECBV value types σ^R:

    α^R := α        (σ × τ)^R := σ^R × τ^R
    1^R := 1        (σ ⇀ τ)^R := ((τ^R) → R) ⊸ ((σ^R) → R)
We extend this translation to type contexts, taking an FGCBV type context Γ to an ECBV type context Γ^R.
The translation on terms is syntax-directed. We pick a variable k, completely fresh. The translation takes an FGCBV value type judgement Γ ⊢v V : σ to an ECBV judgement Γ^R | − ⊢ V^R : σ^R, and it takes an FGCBV producer judgement Γ ⊢p M : σ to an ECBV judgement Γ^R | k : σ^R → R ⊢ M^R_k : R.
    x^R := x                          (return V)^R_k := k (V^R)
    ?^R := ?                          (M to x. N)^R_k := (λk. M^R_k)[λx. N^R_k]
    ⟨V, W⟩^R := ⟨V^R, W^R⟩            (λx. N)^R := λk. λx. N^R_k
    (π1(V))^R := π1(V^R)              (V W)^R_k := (V^R [k]) W^R
    (π2(V))^R := π2(V^R)
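These clauses can be read off in Haskell if one ignores the linearity of k. In the sketch below (ours), K a plays the role of σ^R → R, FunR a b renders the translated function type, and R = Int is an arbitrary answer type; all names are ours.

    -- A sketch (ours) of the linear-use CPS clauses, with linearity erased.
    type R = Int                -- an arbitrary fixed answer type
    type K a = a -> R           -- the translation of 'sigma^R -> R'
    type FunR a b = K b -> K a  -- the translation of a CBV function type

    returnR :: a -> K a -> R            -- (return V)_k = k(V)
    returnR v k = k v

    toR :: (K a -> R) -> (a -> K b -> R) -> K b -> R  -- (M to x. N)_k
    toR m n k = m (\x -> n x k)

    lamR :: (a -> K b -> R) -> FunR a b -- (\x. N)^R = \k. \x. N_k
    lamR n k x = n x k

    appR :: FunR a b -> a -> K b -> R   -- (V W)_k = (V^R [k]) W^R
    appR v w k = v k w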
The continuation-passing translation inherits the following results from the soundness
and full completeness of the state-passing translation (Theorems 4.1 and 4.3).
Proposition 7.1. For any computation type R, the continuation-passing translation is sound:
(1) If Γ ⊢v V ≡ W : σ then Γ^R | − ⊢ V^R ≡ W^R : σ^R.
(2) If Γ ⊢p M ≡ N : σ then Γ^R | k : σ^R → R ⊢ M^R_k ≡ N^R_k : R.
Proposition 7.2 ([9], Corollary 1). Let R be a computation type constant. The continuation-passing translation is fully complete, in the following sense.
(1) If Γ ⊢v V, W : σ and Γ^R | − ⊢ V^R ≡ W^R : σ^R then Γ ⊢v V ≡ W : σ.
(2) If Γ ⊢p M, N : σ and Γ^R | k : σ^R → R ⊢ M^R_k ≡ N^R_k : R then Γ ⊢p M ≡ N : σ.
(3) For any Γ^R | − ⊢ t : σ^R there is Γ ⊢v V : σ such that Γ^R | − ⊢ t ≡ V^R : σ^R.
(4) For any Γ^R | k : σ^R → R ⊢ t : R there is a producer term Γ ⊢p M : σ such that Γ^R | k : σ^R → R ⊢ t ≡ M^R_k : R.
The full completeness result is the same as the one obtained by Egger et al. [9, Corollary 1]
except that loc. cit. describes a translation on the full enriched effect calculus rather than
this fragment of it.
7.1. Sums and products. The CPS variant of the enriched call-by-value calculus can be
extended so that value types are closed under sums and computation types are closed under
products. Thus the types are as follows:
    Value types:        A, B ::= α | 1 | A × B | A ⊸ B | 0 | A + B
    Computation types:  A, B ::= α | A → B | 1 | A × B
(For brevity, we omit the term language, which is a fragment of the enriched effect calculus.)
The type system is designed to be dual to the enriched call-by-value language with sums.
The translation from that language to this one (7.1) is extended as follows: on value types,

    0° := 0        (A + B)° := A° + B°

and on computation types,

    0° := 1        (A ⊕ B)° := A° × B°
and the analysis of Section 6.3 can be converted to a fully-complete linear-use continuation-passing style translation from fine-grain call-by-value with sums.
8. Effect theories
To illustrate the nature of the state-passing translation we endow our calculi with effects. We do this in a general way, by following the programme of Plotkin, Power and others [38] whereby a theory of effects is presented as an algebraic theory.
We discuss how to add effects to the source and target languages of the state-passing
translation, FGCBV and ECBV. Our central observation is that to accommodate an
algebraic theory of effects in the enriched call-by-value calculus it is necessary and sufficient
to supply the chosen state type S with the structure of a comodel. The idea of state being
a comodel, particularly for the theory of store, arose in the work of Plotkin, Power and
Shkaravska [40, 35].
The section is structured as follows. We begin with an overview in which we study the
situation for a particular algebraic effect. We then proceed to look at a general notion of
effect theory (§8.3), its relationship with the state-passing translation (§8.4) and notions of
model and comodel (§8.5,8.6).
8.1. Overview. We give an overview of the situation in terms of a particular algebraic
theory: an algebraic theory for accessing a single bit of memory. This is an algebraic theory
in the classical sense (like monoids, groups, rings, etc.). It has one binary operation (?), a
unary operation (f) and the following four equations:
    (v ? x) ? (y ? z) ≡ v ? z        f(f(x)) ≡ x
    x ≡ x ? x                        f(x ? y) ≡ f(y) ? f(x)        (8.1)
Here is some intuition. If x and y are computations, then x ? y is the computation that first
reads the bit in memory and then branches to x or to y depending on whether the bit was
set. If x is a computation then f(x) is the computation that first flips the bit in memory (0 to 1, 1 to 0) and then continues as x. There are derived operations update0(x) := x ? f(x) and update1(x) := f(x) ? x, which first write 0 (resp. 1) and then continue as x.
We now explain how to accommodate this algebraic theory in the fine-grain and enriched
call-by-value calculi.
8.1.1. Fine-grain call-by-value: algebraic operations and generic effects. In the fine-grain
call-by-value calculus (§3), the algebraic theory (8.1) can be accommodated in two equivalent
ways: by adding algebraic operations and by adding generic effects. In adding the operations,
we add the following term formation rules for each type σ:
    Γ ⊢p M : σ                Γ ⊢p M : σ    Γ ⊢p N : σ
    -----------------         --------------------------        (8.2)
    Γ ⊢p fσ(M) : σ            Γ ⊢p M ?σ N : σ
We also add the equations in (8.1) at each type, and an algebraicity equation for each
operation (e.g. [43, Def 3.14]):
    Γ ⊢p M1 : σ    Γ ⊢p M2 : σ    Γ, x : σ ⊢p N : τ
    -------------------------------------------------------------
    Γ ⊢p (M1 ?σ M2) to x. N ≡ (M1 to x. N) ?τ (M2 to x. N) : τ

    Γ ⊢p M : σ    Γ, x : σ ⊢p N : τ
    ------------------------------------------
    Γ ⊢p fσ(M) to x. N ≡ fτ(M to x. N) : τ
The result is a programming language with higher types and a single bit of storage.
The second way to accommodate the algebraic theory into the fine-grain call-by-value
calculus is by adding generic effects. For this we need sum types (§6.2). The idea is that an
expression in n variables in the algebraic theory corresponds to a ground producer term of
type n (= 1 + · · · + 1). Thus we add the following axioms for term formation:
    Γ ⊢p deref(?) : 2        Γ ⊢p flip(?) : 1        (8.3)
Informally, deref(?) is a computation that returns the boolean value of the memory cell, and
flip(?) is a computation that flips the value in the memory cell. An important observation
of Plotkin and Power [38] is that the algebraic operations can be recovered at all types from
the generic effects, as follows:
    M ?σ N := ifp deref(?) then M else N        fσ(M) := flip(?); M

where we use some shorthand notation:

    ifp deref(?) then M else N := casep deref(?) of (in1(x1).M | in2(x2).N)
    flip(?); M := flip(?) to x. M

Conversely the generic effects can be derived from the algebraic operations:

    deref(?) := in^p_1(?) ?2 in^p_2(?)        flip(?) := f1(?)
(The subscript 1 on f1(?) is the unit type.) We can thus write the four equations (8.1) directly in terms of generic effects:

    ⊢p deref(?) to x. deref(?) to y. return ⟨x, y⟩ ≡ deref(?) to x. return ⟨x, x⟩ : 2 × 2
    ⊢p return (?) ≡ deref(?); return (?) : 1
    ⊢p flip(?); flip(?) ≡ return (?) : 1
    ⊢p flip(?); deref(?) ≡ deref(?) to x. flip(?); return (¬x) : 2        (8.4)
writing ¬x for if x then in2 (?) else in1 (?).
The two derived operations for writing a bit can be combined into a single command assign:
    Γ ⊢v V : 2
    ----------------------------------------------------------------------------------------
    Γ ⊢p assign(V) := ifp (deref(?) to x. return (V xor x)) then flip(?) else return (?) : 1
where xor is the evident binary operation on values of type 2. Using this derived command,
the four equations for accessing the bit of memory can be equivalently written as the three
program equations of Plotkin and Power [37]:

    − ⊢p return (?) ≡ deref(?) to x. assign(x) : 1                    (8.5)
    x : 2 ⊢p assign(x); deref(?) ≡ assign(x); return (x) : 2          (8.6)
    x, y : 2 ⊢p assign(x); assign(y) ≡ assign(y) : 1                  (8.7)
which state that reading a cell and then writing the same value is the same as doing nothing, that writing and then reading yields the value just written, and that the effect of two writes equals that of the second.
8.1.2. Enriched call-by-value and state access operations. How can we accommodate the
algebraic theory for a bit of memory (8.1) in the enriched call-by-value calculus? In this
section we develop the following observation. Whereas in FGCBV each type should be a
model of the theory, in that (8.2) gives terms (?) and (f) at each type σ, in ECBV the
distinguished state type S should be a comodel of the theory, which means that there are
ground value terms

    read : S ⊸ S ⊕ S        and        flip : S ⊸ S        (8.8)

which we call state access operations. (It is called a comodel because the arrows have been reversed and (×) has become (⊕).) Using the isomorphism (S ⊕ S) ≅ (!2 ⊗ S), we can understand the type of read as S ⊸ !2 ⊗ S. The idea is that the read operation takes a state
and returns the value stored in that state. It also returns a state: this is necessary because
state is linear and cannot be discarded or duplicated. Notice that, under the state-passing
translation, the two generic effects (8.3) become the two state access operations.
The four equations (8.1) are also appealing when written in terms of state access
operations.
    − | s : S ⊢ read[s] to (!b ⊗ s′). read[s′] to (!b′ ⊗ s″). !⟨b, b′⟩ ⊗ s″
                ≡ read[s] to (!b ⊗ s′). !⟨b, b⟩ ⊗ s′ : !(2 × 2) ⊗ S
    − | s : S ⊢ s ≡ read[s] to (!b ⊗ s′). s′ : S
    − | s : S ⊢ flip[flip[s]] ≡ s : S                                  (8.9)
    − | s : S ⊢ read[flip[s]] ≡ read[s] to (!b ⊗ s′). !¬b ⊗ flip[s′] : !2 ⊗ S
Notice that the second equation says that the read operation does not change the state.
The two derived operations for writing a bit can be combined into a single state access
operation:
    write := λx. x to (!b ⊗ s). read[s] to (!b′ ⊗ s′). (if (b xor b′) then (λs. flip[s]) else (λs. s))[s′]
          : !2 ⊗ S ⊸ S
Intuitively, write[!b ⊗ s] writes the value b to the state s, returning the updated state.
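A concrete sketch of such a comodel in Haskell (ours; compare §8.6.1 below), taking S = Bool so that the state is literally the stored bit:

    type Bit = Bool
    type S = Bool

    -- read : S -o !2 (x) S; reading does not change the state
    readOp :: S -> (Bit, S)
    readOp s = (s, s)

    -- flip : S -o S
    flipOp :: S -> S
    flipOp = not

    -- write, derived as in the text: compare the stored bit with the
    -- requested one and flip only if they differ
    writeOp :: (Bit, S) -> S
    writeOp (b, s) = let (b', s') = readOp s
                     in if b /= b' then flipOp s' else s'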
In Section 4 we have seen a fully-complete state-passing translation from FGCBV to
ECBV. This translation extends to a fully-complete translation from FGCBV with generic
effects to ECBV with state access operations.
8.1.3. Continuation passing style and algebraic operations. Finally, we turn to the linear-use
continuation-passing style. In this setting, it is natural to require that the distinguished
return type R be a model of the theory. This is dual to the situation with state-passing
style, where the distinguished state type S is a comodel of the theory.
More precisely, we extend the CPS variant of ECBV (§ 7) with the theory of memory
access by adding ground value terms
    (?) : R × R ⊸ R        and        (f) : R ⊸ R
satisfying the following equations:
    − | k : (R × R) × (R × R) ⊢ ((π1(π1 k)) ? (π1(π2 k))) ? ((π2(π1 k)) ? (π2(π2 k)))
                                ≡ (π1(π1 k)) ? (π2(π2 k)) : R
    − | k : R ⊢ k ≡ k ? k : R
    − | k : R ⊢ f[f[k]] ≡ k : R
    − | k : R × R ⊢ f[(π1 k) ? (π2 k)] ≡ (f[π2 k]) ? (f[π1 k]) : R
Thus the generic effects in the source language endow the return type R of the linear-use
continuation-passing translation with the structure of a model for the algebraic theory.
8.1.4. Further simple examples of algebraic theories for computational effects. The theory
of accessing a bit of memory is perhaps the simplest example of a stateful effect. The
connections between algebraic operations, generic effects and state access operations also
work for less state-like effects.
Printing. The algebraic theory of printing a single bit has two unary function symbols, p0
and p1 . For instance, the term p0 (p1 (x)) should be understood as the computation that
first prints 0, then prints 1, then continues as x. There are no equations in this theory.
The generic effects for printing can be grouped together into one producer term

    x : 2 ⊢p print(x) : 1

thought of as a command that prints its argument.
As a state access operation, we have a function print : !2 ⊗ S ⊸ S which, given a bit and a state, returns a new state. Intuitively, S is a list of everything printed so far, and print appends its first argument to its second argument.
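A corresponding sketch in Haskell (ours), with the state the log of bits printed so far:

    type Bit = Bool
    type S = [Bit]

    -- print : !2 (x) S -o S; append the printed bit to the log
    printOp :: (Bit, S) -> S
    printOp (b, s) = s ++ [b]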
Probability. There are different algebraic theories of probabilistic choice. The simplest one is the theory of ‘mean-values’ considered by Heckmann [16] (but perhaps first introduced by Aczél [2]): it has one binary function symbol (∗) and its axioms are the medial law, idempotence and commutativity:

    (u ∗ x) ∗ (y ∗ z) ≡ (u ∗ y) ∗ (x ∗ z)        x ≡ x ∗ x        x ∗ y ≡ y ∗ x

The idea is that a computation x ∗ y tosses a coin, and proceeds as x if heads and y if tails.
The generic effect for (∗) is Γ ⊢p toss(?) : 2 which, intuitively, tosses the coin and returns the result. In this style, the first equation is written

    − ⊢p toss(?) to x. toss(?) to y. return ⟨x, y⟩ ≡ toss(?) to y. toss(?) to x. return ⟨x, y⟩ : 2 × 2

It says that it doesn’t matter which order you toss coins.
The state access operation toss : S ⊸ !2 ⊗ S can be thought of as making the coin an explicit parameter: we can think of S as a type of coins. In this style, the second equation

    − | s : S ⊢ s ≡ toss[s] to (!b ⊗ s′). s′ : S

says that when you toss a coin you get the same coin back. The third equation

    − | s : S ⊢ toss[s] ≡ toss[s] to (!b ⊗ s′). (!(¬b) ⊗ s′) : !2 ⊗ S
says that if you toss a coin it is the same as tossing a coin and turning it once more without
looking. This illustrates that probability is not really stateful and so for this effect the
approach based on algebraic operations is perhaps the most profitable perspective. The point
is that different computational effects are better suited to different approaches (algebraic
operations, generic effects, and state access operations) even though all three approaches
are always available.
8.2. State access operations, algebraic operations, and generic effects. We now
make precise the informal connections made between state access operations, generic effects
and algebraic operations in the previous section. We do this by revisiting the results of
Plotkin and Power [38] in the context of an enriched model (V, C).
In the previous section we focused on the classical situation where arities are natural
numbers. However, from the perspective of state access operations, generic effects and
algebraic operations have little to do with natural numbers per se. It is just as easy to allow
arities to be arbitrary objects of the base category V. In the following result we do not make
an artificial restriction to natural numbers. Instead we consider an operation with arity
[A1 , . . . , An ] and a parameter from B, where A1 , . . . , An , B are objects of V. The classical
case of an n-ary operation is recovered by setting the objects A1 , . . . , An , B to all be the
terminal object 1.
Theorem 8.1. Let (V, C) be an enriched model with sums. Let S be an object of C. Let A1, . . . , An and B be objects of V. The following data are equivalent:
(1) A state access operation: a morphism B · S → A1 · S ⊕ · · · ⊕ An · S in C.
(2) A generic effect: a morphism B → TS(A1 + · · · + An) in V, where TS is the monad C(S, (−) · S).
(3) An algebraic operation: a V-natural family of morphisms in V

    { ∏_{i=1}^n (US X)^{Ai} → (US X)^B }_{X∈C}

where US(X) := C(S, X).
The last point requires some explanation. First, even though V is not cartesian closed, exponentials with base US(X) exist: (US X)^A ≅ C(A · S, X). Second, the constructions

    F := ∏_{i=1}^n (US(−))^{Ai}        G := (US(−))^B

can be understood as V-functors F, G : C → V, since there are families of morphisms

    {FX,Y : C(X, Y) × F(X) → F(Y)}_{X,Y∈C}        {GX,Y : C(X, Y) × G(X) → G(Y)}_{X,Y∈C}

in V that satisfy the laws for functors (respecting identities and composition). Thirdly, a family of morphisms {φX : F(X) → G(X)}_{X∈C} is called V-natural if the following diagram
commutes in V for all X and Y:

    C(X, Y) × F(X) ---C(X,Y)×φX---> C(X, Y) × G(X)
         |                               |
         | FX,Y                          | GX,Y
         v                               v
        F(Y) ------------φY-----------> G(Y)
(It is perhaps more compelling if algebraic operations are defined as structure on a V-category
of TS -algebras, but this V-category cannot be constructed without further assumptions on
V — see [38, §7].)
Proof of Theorem 8.1. To see the connection between (1) and (2), consider the following bijections:

    HomC(B · S, A1 · S ⊕ · · · ⊕ An · S) ≅ HomV(B, C(S, A1 · S ⊕ · · · ⊕ An · S))
                                         ≅ HomV(B, C(S, (A1 + · · · + An) · S))
                                         = HomV(B, TS(A1 + · · · + An)).

To see the connection between (1) and (3), we note that

    ∏_{i=1}^n (US X)^{Ai} ≅ C(A1 · S ⊕ · · · ⊕ An · S, X)    and    (US X)^B ≅ C(B · S, X)

and the enriched Yoneda lemma gives

    V-Nat(C(A1 · S ⊕ · · · ⊕ An · S, −), C(B · S, −)) ≅ HomC(B · S, A1 · S ⊕ · · · ⊕ An · S).

We remark that the equivalence of (2) and (3) is essentially Theorem 2 of [38].
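For intuition, the bijection between (1) and (2) can be written out in the enriched model (Set, Set), where B · S = B × S and TS(A) = S → (A × S). The following Haskell sketch (ours) does this for an operation with two result arities; the type and function names are ours.

    -- A state access operation B . S -> A1 . S (+) A2 . S in Set:
    type StateAccess b s a1 a2 = (b, s) -> Either (a1, s) (a2, s)
    -- The corresponding generic effect B -> T_S (A1 + A2):
    type GenericEffect b s a1 a2 = b -> s -> (Either a1 a2, s)

    toGeneric :: StateAccess b s a1 a2 -> GenericEffect b s a1 a2
    toGeneric op b s = case op (b, s) of
      Left  (x, s') -> (Left x,  s')
      Right (y, s') -> (Right y, s')

    fromGeneric :: GenericEffect b s a1 a2 -> StateAccess b s a1 a2
    fromGeneric g (b, s) = case g b s of
      (Left x,  s') -> Left  (x, s')
      (Right y, s') -> Right (y, s')

The two maps are mutually inverse, which is the content of the first bijection in the proof above.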
8.3. Effect theories. In the previous section we described the connection between state access operations, generic effects and algebraic operations. As we explained, the natural level of generality for this is more sophisticated than the classical setting: the arity of an operation is a list and we allow the operation to take a parameter. This suggests a generalization of
algebraic theories that we call ‘effect theories’, since they are useful from the computational
perspective.
The illustration in Section 8.1 involves storage of a single bit. A motivating example of
effect theory arises from modifying that theory above to allow storage of a more interesting
datatype. In FGCBV, we would like to have an (abstract) type Val of storable values, and
generic effects deref and assign with typing judgements
    Γ ⊢p deref(?) : Val

    Γ ⊢v V : Val
    ------------------        (8.10)
    Γ ⊢p assign(V) : 1
We add to the theory of equality for FGCBV (Fig. 2) the three equations for global store proposed by Plotkin and Power [37], (8.5)–(8.7):

    ⊢p return (?) ≡ deref(?) to x. assign(x) : 1
    x : Val ⊢p assign(x); deref(?) ≡ assign(x); return (x) : Val
    x, y : Val ⊢p assign(x); assign(y) ≡ assign(y) : 1
Our notion of effect theory accommodates the classical kinds of theory in the overview
(§ 8.1) and also the more general kind of theory of memory access illustrated above. It
is roughly the same as that used by Plotkin and Pretnar [36, §3]. The main difference is
in the presentation: we use generic effects rather than algebraic operations. Rather than
introducing a new calculus for expressing the allowable equations of an effect theory, we use
the first-order fragment of FGCBV.
Value theories. Before we introduce effect theories we briefly discuss value theories, which
are simple extensions of the value judgements of FGCBV. By a value signature we shall
simply mean a signature for a many-sorted algebraic theory in the usual sense. This means
a set of type constants ranged over by α, β, and a set of term constants f with a given arity
f : (α1 , . . . , αn ) → β, where the αi , β range over type constants. We can extend FGCBV
along a value signature by adding the type constants and the typing rule
Γ `v ti : αi (i = 1, . . . , n)
(8.11)
Γ `v f (t1 , . . . , tn ) : β
for every term constant f : (α1 , . . . , αn ) → β in the signature. A value theory is a value signature with a set of equations, i.e. pairs of terms typable in the same context Γ `v V = W : β,
where V, W are formed only using variable introduction and the rule (8.11).
Effect theories. An effect signature consists of a value theory and a set of effect constants, each with an assigned arity e : β̄; ᾱ1 + · · · + ᾱn, consisting of a list β̄ of type constants and a formal sum of lists of type constants, ᾱ1 + · · · + ᾱn. (Here we are abbreviating a list (β1 . . . βm) using the notation β̄, etc.) FGCBV can be extended along an effect signature by adding, for every effect constant e : β̄; ᾱ1 + · · · + ᾱn, a typing judgement

    Γ ⊢v V1 : β1    · · ·    Γ ⊢v Vm : βm
    ------------------------------------------        (8.12)
    Γ ⊢p e(V1, . . . , Vm) : ᾱ1 + · · · + ᾱn

where β̄ = (β1, . . . , βm). In the conclusion, the vectors ᾱi should be understood as the product of the types in the vector.
Here are some examples of effect signatures:
• The theory of reading/flipping a bit of memory (§8.1) has no value type constants. It has two effect constants, deref and flip. The effect constant deref has arity 1; 1 + 1 and the effect constant flip has arity 1; 1, where 1 denotes the empty list of type constants.
• The theory for storing an abstract datatype (8.10) has one value type constant Val and a
pair of effect constants (deref : 1; Val) and (assign : Val; 1). In this case term constants
in the value theory can be used to add basic operations manipulating values in Val: we
could ask that the storable values form a ring. (In future, it would be interesting to allow
Val to have more structure, for example as an inductive type such as the natural numbers,
but it is not clear how to extend the proof of Theorem 10.2 to inductive types.)
An effect theory comprises an effect signature and a set of equations. The equations are pairs of producer terms-in-context Γ ⊢p M ≡ N : τ of a restricted kind: they must be built from the first-order fragment of fine-grain call-by-value in Figure 6. This notion of ‘effect theory’ is connected with the classical notion of algebraic theory in Section 8.1 as follows. If the value theory is empty, with no type constants (α) and no function symbols, then the lists of type constants in (8.12) must all be empty, and each generic effect is an operation of arity n. This is the generic effect presentation of an algebraic theory as described in Section 8.1.1.
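As an illustration (ours, not the paper's), an effect signature such as the store signature above can be rendered in Haskell as a GADT whose constructors record the parameter and result of each effect constant:

    {-# LANGUAGE GADTs #-}

    -- deref : 1; Val   and   assign : Val; 1
    data StoreEff val a where
      Deref  :: StoreEff val val        -- no parameter, result Val
      Assign :: val -> StoreEff val ()  -- parameter Val, result 1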
Types.

    σ, τ ::= α | 1 | σ × τ | 0 | σ + τ

Terms. The grammar for terms is as follows. Typing judgements are in Figure 2, Figure 4, and Equations (8.11) and (8.12).

    V ::= x | f(V1, . . . , Vn) | ? | π1(V) | π2(V) | ⟨V1, V2⟩
        | ?(V) | in1(V) | in2(V) | case V of (in1(x1).W1 | in2(x2).W2)
    M ::= return V | M to x. N | casep M of (in1(x1).N1 | in2(x2).N2)

Figure 6: The sub-calculus of fine-grain call-by-value that is used for describing effect theories
8.4. Effect theories and the state-passing translation. An effect theory is an extension of the fine-grain call-by-value language. In Section 4 we explained how the linear-use state-passing translation goes from FGCBV to ECBV. We now explain how ECBV needs to be extended to support this state-passing translation. The restricted nature of the effect theories makes this particularly straightforward and illuminating.
Value theory:
• For each type constant in the value theory, we assume a value type constant.
• For each term constant f : ᾱ → β in the value theory we add the following term formation
rule:
    Γ | − ⊢ t1 : α1    · · ·    Γ | − ⊢ tn : αn
    --------------------------------------------
    Γ | − ⊢ f(t1, . . . , tn) : β
• An equation in the value theory must only be formed from variable introduction and the rule (8.11). Thus each equation Γ ⊢v V ≡ W : β in the value theory can be understood as an equation Γ | − ⊢ V ≡ W : β between value judgements.
Effect signature:
• We assume a chosen computation type constant S.
• For each effect constant e : β̄; ᾱ1 + · · · + ᾱn we add a constant at value type,

    − | − ⊢ e : !β̄ ⊗ S ⊸ !(ᾱ1 + · · · + ᾱn) ⊗ S        (8.13)

For the theory of reading/flipping a bit, this yields the constants read and flip in (8.8). For the theory of storing an abstract datatype, this yields two state access operations:

    read : S ⊸ !Val ⊗ S        write : !Val ⊗ S ⊸ S
State-passing translation:
• Recall that the state-passing translation (§4) takes a producer judgement of FGCBV Γ ⊢p M : σ to a computation judgement of ECBV Γ^S | s : S ⊢ M^S_s : !σ^S ⊗ S. We extend the state-passing translation to operate on effects:

    (e(V1, . . . , Vm))^S_s := e(!(V1^S, . . . , Vm^S) ⊗ s)
• We use this extended state-passing translation to translate the equations in the effect
theory into ECBV, in such a way that the extended translation is sound by construction.
Each equation in the effect theory Γ ⊢p M ≡ N : τ becomes an equation in ECBV:

    Γ | s : S ⊢ M^S_s ≡ N^S_s : !τ ⊗ S.
Notice that we do not need to translate the types in Γ because equations in an effect
theory must be from the first-order fragment of fine-grain call-by-value (Fig. 6) which is
shared with enriched call-by-value. For instance, the equations in the effect theory for
reading/flipping a bit (8.4) give rise to the equations on the state object (8.9). The three equations for storing an abstract datatype (8.5)–(8.7) become the following equations for a state object S:

    − | s : S ⊢ s ≡ write[read[s]] : S
    x : Val | s : S ⊢ read[write[!x ⊗ s]] ≡ !x ⊗ (write[!x ⊗ s]) : !Val ⊗ S
    x, y : Val | s : S ⊢ write[!y ⊗ write[!x ⊗ s]] ≡ write[!y ⊗ s] : S
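These equations can be checked directly against the concrete comodel of §8.6.1 below, where read s = (s, s) and write (v, s) = v. A sketch in Haskell (ours), with the arbitrary choice Val = Int:

    type Val = Int
    type S = Val

    readOp :: S -> (Val, S)
    readOp s = (s, s)

    writeOp :: (Val, S) -> S
    writeOp (v, _) = v

    -- s == write (read s)
    law1 :: S -> Bool
    law1 s = writeOp (readOp s) == s

    -- read (write (x, s)) == (x, write (x, s))
    law2 :: Val -> S -> Bool
    law2 x s = readOp (writeOp (x, s)) == (x, writeOp (x, s))

    -- write (y, write (x, s)) == write (y, s)
    law3 :: Val -> Val -> S -> Bool
    law3 x y s = writeOp (y, writeOp (x, s)) == writeOp (y, s)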
8.5. Models and comodels of effect theories. Our analysis of effect theories in Sections 8.3 and 8.4 has been syntactic. We now provide a model-theoretic treatment. We define
the interpretation of effect theories in Kleisli models (§8.5.2) and enriched models (§8.5.3).
We then define what it means to be a model of an effect theory in general terms (§8.5.4).
8.5.1. Models of value theories. Let V be a distributive category. An interpretation of a value signature in V is given by interpretations of the type constants α as objects [[α]] of V, and interpretations of term constants f : ᾱ → β as morphisms [[f]] : [[ᾱ]] → [[β]]. (Here, if ᾱ = (α1, . . . , αn) then [[ᾱ]] := [[α1]] × · · · × [[αn]].) This interpretation is extended to interpret a term in context Γ ⊢v V : β as a morphism [[V]] : [[Γ]] → [[β]]. An interpretation of a value theory is an interpretation of the signature such that [[V]] = [[W]] for each equation Γ ⊢v V ≡ W : β in the value theory.
8.5.2. Interpreting effect theories in Kleisli models. Let (V, C, J) be a distributive Kleisli
model (Def. 6.2) and suppose an interpretation of the type constants α, β is given. An
interpretation of an effect theory E in (V, C, J) is given by an interpretation of the value
theory in V and an interpretation of each effect constant e : β̄; ᾱ1 + · · · + ᾱn in E as a
morphism [[e]] : [[β̄]] → ([[ᾱ1 ]] + · · · + [[ᾱn ]]) in C, satisfying the equations of the theory.
8.5.3. Comodels of effect theories in enriched models. Let (V, C) be a distributive enriched
model in the sense of Definition 6.1. Thus V is a distributive category and C is a category
enriched in V with copowers and coproducts. A comodel of the effect theory in C is an
object S of C together with a morphism [[e]] : [[β̄]] · S → ([[ᾱ1]] + · · · + [[ᾱn]]) · S in C for every effect constant e : β̄; ᾱ1 + · · · + ᾱn such that for each equation Γ ⊢p M ≡ N : τ in the effect theory, the interpretations of M and N under the state-passing style yield equal morphisms:

    [[M^S_s]] = [[N^S_s]] : [[Γ]] · S → [[τ]] · S.
8.5.4. Models of effect theories in dual enriched models. We now justify our use of the term
‘comodel’ in Section 8.5.3 by showing that it is a generalization of the standard usage, i.e.,
dual to the concept of model of an algebraic theory known from classical algebra. Further
investigations of the notion of effect theory used in this paper along with relations to existing
notions of enriched algebraic theories [23, 39] could be an interesting topic of further research.
To dualize the notion of comodel, let (V, C^op) be a distributive enriched model, i.e., let V be a distributive category and C a category enriched in V with powers and products, as in Section 7. A model of the effect theory in C is a comodel in C^op. Explicitly this amounts to an object R of C together with a morphism [[e]] : R^{[[ᾱ1]] + · · · + [[ᾱn]]} → R^{[[β̄]]} between powers in C for every effect constant e : β̄; ᾱ1 + · · · + ᾱn such that for each equation Γ ⊢p M ≡ N : τ in the effect theory, the interpretations of M and N in continuation-passing style yield equal morphisms:

    [[M^R_k]] = [[N^R_k]] : R^{[[τ]]} → R^{[[Γ]]}.
Because the terms of the effect theory are of a restricted kind, it is straightforward to directly describe the interpretations of effect terms as morphisms between powers of R, by induction on the structure of typing derivations. For instance, consider the casep rule in (6.1). Given interpretations [[M^R_k]] : R^{[[σ1+σ2]]} → R^{[[Γ]]} and [[(Ni)^R_k]] : R^{[[τ]]} → R^{[[Γ,σi]]} (i = 1, 2), the interpretation [[(casep M of (in1(x1).N1 | in2(x2).N2))^R_k]] is the composite

    R^{[[τ]]} --([[(N1)^R_k]], [[(N2)^R_k]])--> R^{[[Γ,σ1]]} × R^{[[Γ,σ2]]} ≅ R^{[[σ1+σ2]]×[[Γ]]} --([[M^R_k]])^{[[Γ]]}--> R^{[[Γ]]×[[Γ]]} --R^∆--> R^{[[Γ]]}.

As another example, [[(return V)^R_k]] := R^{[[V]]} : R^{[[τ]]} → R^{[[Γ]]} if Γ ⊢v V : τ.
We now return to the setting of classical algebra, when the value theory has no type
constants or value constants. We will show that models in the above sense are models of
algebraic theories in the classical sense. If there are no type constants then every type in
the effect language (Figure 6) is isomorphic to one of the form 1 + 1 + · · · + 1. The arity
of an effect constant e : β̄; ᾱ1 + . . . + ᾱn must comprise β̄ as the empty list (since there
are no type constants) and ᾱ1 + . . . + ᾱn must be a sequence of n empty lists. Thus the
interpretation of e in the model is a morphism [[e]] : R^n → R.
We now explain the nature of equations in this restricted setting. In what follows,
we will write a natural number n for the type that is the n-fold sum of 1. We will write
ini (?) for the ith injection of type n, where 1 ≤ i ≤ n, and we will make use of n-ary case
constructions, case V of (in1 (?).W1 | . . . |inn (?).Wn ) to destruct terms V of type n. These
are just syntactic shorthand for terms that can be defined in the language in Figure 6.
We can focus on equations where Γ is empty. This is because every context has a finite number of ground valuations — if a variable x in a context has type n, then it could be valued with in1(?), . . . , inn(?) — and, moreover, an effect equation Γ ⊢p M ≡ N : n is satisfied if and only if it is satisfied at each ground instantiation.
The next step is to note that every effect term − ⊢p M : 1 + 1 + · · · + 1 is equal to one built from the following rules:

    − ⊢p M1 : n    · · ·    − ⊢p Mm : n
    ------------------------------------------------------- (e : −; m)
    − ⊢p casep e(?) of (in1(?).M1 | . . . | inm(?).Mm) : n

    ---------------------------- (1 ≤ i ≤ n)
    − ⊢p in^p_i(return ?) : n
It is informative to look at the interpretation of these normalized terms. Given interpretations [[M1]], . . . , [[Mm]] : R^n → R, we have

    [[casep e(?) of (in1(?).M1 | . . . | inm(?).Mm)]] = R^n --([[M1]],...,[[Mm]])--> R^m --[[e]]--> R,
    [[in^p_i(return ?)]] = R^n --πi--> R.

Thus we see that, in the situation where there are no type constants or value constants, the new general notion of model is the classical notion of model for an algebraic theory.
8.6. Examples of set-theoretic models and comodels. We revisit the simple example
effect theories from Sections 8.1 and 8.3 from the model-theoretic perspective. In each case,
we find that there are comodels that are state-like.
8.6.1. Storage. The category Set is enriched in itself with copowers given by products and
the enrichment given by the function space. The set 2 = {0, 1} is a comodel for the theory of
accessing a bit of memory (§8.1), with read(x) = (x, x) and flip(x) = ¬x. This is a comodel
for the theory in the enriched model (Set, Set). Power and Shkaravska [40] showed that 2
is the final comodel in Set for the theory of accessing a bit of memory.
As an aside, we note that Set is actually equivalent to the category of models for the theory of accessing a bit of memory. The theory of reading a bit of memory is sometimes called the theory of ‘rectangular bands’ because every model is isomorphic to one of the form X × Y, with (x, y) ? (x′, y′) := (x, y′). The anti-involution operation (f) enforces that the model is isomorphic to one of the form X × X, and thus determined by a single set. This phenomenon has been investigated in a more general setting by Métayer [30] and Mesablishvili [29].
We can consider set theoretic models of the theory of storing an abstract datatype (8.10).
What is needed is an interpretation Val of the value sort, which also plays the role of the
state object. We let read(x) = (x, x) and write(v, s) = v. This is a comodel for the theory
for store in the enriched model (Set, Set).
In both cases, the induced monad on Set is the store monad ((−) × S)^S.
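For reference, a sketch of this store monad in Haskell (ours):

    newtype StoreM s a = StoreM { runStoreM :: s -> (a, s) }

    instance Functor (StoreM s) where
      fmap f (StoreM m) = StoreM (\s -> let (a, s') = m s in (f a, s'))

    instance Applicative (StoreM s) where
      pure a = StoreM (\s -> (a, s))
      StoreM mf <*> StoreM ma =
        StoreM (\s -> let (f, s') = mf s; (a, s'') = ma s' in (f a, s''))

    instance Monad (StoreM s) where
      StoreM m >>= k = StoreM (\s -> let (a, s') = m s in runStoreM (k a) s')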
8.6.2. Printing. Let 2∗ -Act be the category of algebras for the theory of printing a bit. The
objects are triples (X, p0 , p1 ) where p0 , p1 : X → X, and the morphisms (X, pX,0 , pX,1 ) →
(Y, pY,0 , pY,1 ) are functions X → Y that commute with the operations. As any ordinary
category, this category is enriched in Set. It has copowers given by
    A · (X, pX,0, pX,1) := (A × X, p(A·X),0, p(A·X),1)    where p(A·X),i(a, x) := (a, pX,i(x)).
Thus (Set, 2∗ -Act) is an enriched model.
The algebra structure of each algebra equips it with the structure of a comodel in the category of algebras. The leading example is the set 2∗ of strings over {0, 1}, with p2∗,i(s) := si. The induced state monad 2∗-Act(2∗, (−) · 2∗) is isomorphic to the monad 2∗ × (−) on Set. We can understand a string in 2∗ as a state: it is the list of things output so far.
8.6.3. Probability. Let MVAlg be the category of mean-value algebras. The objects are pairs (X, ∗) of a set X and a binary operation ∗ satisfying the laws of mean-value algebras (§8.1.4). The pair (Set, MVAlg) is an enriched model.

The one-element set is trivially a mean-value algebra, and it can be given the structure of a comodel in the category of mean-value algebras. We can understand the one-element set as a set of states: this captures the idea that probability is a stateless notion of computation. Nonetheless, this ‘state object’ induces a ‘state monad’ on Set. This can be understood as a monad D of finite dyadic probability distributions. By a finite dyadic probability distribution on a set X, we mean a function p : X → [0, 1] such that supp(p) = {x ∈ X | p(x) ≠ 0} is finite, Σ_{x∈supp(p)} p(x) = 1, and for all x, p(x) has a finite binary representation. The monad D has D(X) as the set of all finite dyadic probability distributions; the unit picks out the Kronecker distributions, and multiplication µX : D(D(X)) → D(X) takes a distribution p : D(X) → [0, 1] on D(X) to a distribution µX(p) : X → [0, 1] on X, given by

    µX(p)(x) := Σ_{q∈supp(p)} (p(q) × q(x)).
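A sketch of this monad in Haskell (ours): we use Rational weights, and the representation enforces neither dyadicity nor normalization, so it only approximates D.

    newtype Dist a = Dist { runDist :: [(a, Rational)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f a, p) | (a, p) <- xs ]

    instance Applicative Dist where
      pure a = Dist [(a, 1)]  -- the Kronecker distribution
      Dist fs <*> Dist xs = Dist [ (f a, p * q) | (f, p) <- fs, (a, q) <- xs ]

    instance Monad Dist where
      -- the multiplication: weight each inner distribution by its own weight
      Dist xs >>= k = Dist [ (b, p * q) | (a, p) <- xs, (b, q) <- runDist (k a) ]

    -- the generic effect for the mean-value operation
    toss :: Dist Bool
    toss = Dist [(False, 1/2), (True, 1/2)]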
In general, when the state object is a terminal object then the induced monad preserves terminal objects. A terminal-object-preserving monad is sometimes called affine [25,
Thm. 2.1] and the corresponding effects are said to be discardable (e.g. [12], [47, Def. 4.2.4])
since the following rule is admissible in the fine-grain call-by-value language.
    Γ ⊢p t : A    Γ ⊢p u : B
    --------------------------    (x not free in u)
    Γ ⊢p t to x. u ≡ u : B
8.7. Relating notions of (co)model for effect theories. We now extend Theorem 8.1 to show, for each effect theory E, a bijective correspondence between the following: comodel structures on S, interpretations of E in the Kleisli model (V, KlS, JS), and algebraic operations equipping each US X with a model structure for E. The latter notion requires some explanation, because the definition of model given in Section 8.5.4 defines only what it means for R in a category C to be a model of E if (V, C^op) is an enriched model, and in the setting of Theorem 8.1 V^op is generally not V-enriched.
To generalize the notion of model, let V be a distributive category, let R be a fixed object of V such that all exponentials of the form R^A exist (i.e. V(− × A, R) is representable for all A) and let an interpretation of the value theory of E be given. We define a model structure for E on R to be an interpretation of E in the Kleisli model (V, Kl_{R^{R^{(−)}}}, J), where Kl_{R^{R^{(−)}}} has the same objects as V, but where a morphism A → B in Kl_{R^{R^{(−)}}} is a morphism R^B → R^A in V. This is isomorphic to the Kleisli category for the strong monad R^{R^{(−)}} on V. By construction, the model structure interprets each effect constant e : β̄; ᾱ1 + · · · + ᾱn as a morphism

    [[e]] : R^{[[ᾱ1]] + · · · + [[ᾱn]]} → R^{[[β̄]]}.

If V is cartesian closed then (V, V^op) is an enriched model and the above definition of a model structure for E on R is equivalent to the one given in Section 8.5.4.
Theorem 8.2. Let (V, C) be an enriched model with sums, let S be an object of C, let E be an effect theory and let an interpretation of the value theory of E in V be given. The following data are equivalent:
(1) A comodel structure for E on S.
(2) An interpretation of E in the Kleisli model (V, KlS, JS).
(3) For each effect constant e : β̄; ᾱ1 + · · · + ᾱn in E an algebraic operation: a V-natural family of morphisms in V

    { ∏_{i=1}^n (US X)^{[[ᾱi]]} → (US X)^{[[β̄]]} }_{X∈C}

equipping each US(X) := C(S, X) with a model structure for E.
Proof. We first prove equivalence of (1) and (2). First note that in both cases, an effect constant e : β̄; ᾱ1 + · · · + ᾱn is modelled as a morphism

    [[e]] : [[β̄]] · S → ([[ᾱ1]] + · · · + [[ᾱn]]) · S.

It thus suffices to show that for any term Γ ⊢p M : σ of the fragment of Figure 6 the morphisms [[M^S_s]], [[M]] : [[Γ]] · S → [[σ]] · S are equal, where [[M^S_s]] is the ECBV term M^S_s interpreted in the enriched model (V, C, S) and [[M]] is the fine-grain call-by-value term M interpreted in the Kleisli model (V, KlS, JS). This can be proved by induction on the structure of M.

To prove equivalence of (1) and (3) we build on the equivalence of state access operations and algebraic operations of Theorem 8.1. First we show that for any term Γ ⊢p M : σ of the fragment of Figure 6 the equation

    C([[M^S_s]], X) = [[M]] : C([[σ]] · S, X) → C([[Γ]] · S, X)        (8.14)

holds. This time the denotation brackets on the left-hand side refer to the interpretation of ECBV in (V, C, S), and the denotation brackets on the right-hand side refer to the interpretation of fine-grain call-by-value in the Kleisli model (V, Kl_{R^{R^{(−)}}}, J), where R = US(X). As above, [[M^S_s]] is simply the interpretation of M in the Kleisli model (V, KlS, JS). Equation (8.14) can be proved by induction on the structure of M. (There is also a categorical perspective on this, based around the construction C(−, X) which gives rise to an identity-on-objects functor KlS → Kl_{R^{R^{(−)}}} that preserves sums and the action of V, although it does not preserve the enrichment.)

From (8.14) we deduce the equivalence of (1) and (3). In fact (1) ⟹ (3) is immediate: suppose S is a comodel, and consider an equation Γ ⊢ M ≡ N : τ in the theory. Since S is a comodel, we have [[M^S_s]] = [[N^S_s]] and so [[M]] = [[N]] as interpreted in (V, Kl_{R^{R^{(−)}}}, J).

For (3) ⟹ (1), suppose that R = C(S, X) is a model for every X, naturally in X. Then by (8.14)

    C([[M^S_s]], X) = C([[N^S_s]], X) : C([[τ]] · S, X) → C([[Γ]] · S, X)

holds for all equations Γ ⊢ M ≡ N : τ and all X. The enriched Yoneda embedding is full and faithful and so [[M^S_s]] = [[N^S_s]] : [[Γ]] · S → [[τ]] · S, proving that S is a comodel.
We remark that the equivalence of (2) and (3) is in the spirit of [38, §6].
8.8. Generalizing full completeness to the case of effects. The full completeness
result of Theorem 4.3 extends verbatim to the case of the calculi augmented with an effect
theory E. The proof is based on an extension of the coreflection theorem (Theorem 5.3)
which we state below.
First, 2-categories dKleisliE and dEcbvE of distributive Kleisli models of E and distributive enriched models of E are defined. These are defined similarly to Kleisli and Ecbv except
morphisms are required to preserve coproducts (up to isomorphism) and the interpretation
of E (strictly). Details can be found in Appendix A.5.
Lemma 8.3. The assignments St(V, C, J) := (V, C, 1) and Kl(V, C, S) := (V, KlS, JS) extend to 2-functors

    St : dKleisliE → dEcbvE        Kl : dEcbvE → dKleisliE
Proof (sketch). We just show that these are well-defined on objects. The case of Kl is simply the implication from (1) to (2) of Theorem 8.2.

In the case of St we must show that 1 carries a comodel structure for E whenever (V, C, J) models E. An effect constant e : β̄; ᾱ1 + · · · + ᾱn can be modelled as the composite

    [[β̄]] · 1 --J(π1)--> [[β̄]] --[[e]]--> [[ᾱ1 + · · · + ᾱn]] --J⟨id, !⟩--> [[ᾱ1 + · · · + ᾱn]] · 1

where [[e]] refers to the interpretation of e in the given E-model structure of (V, C, J). We must show that this defines a comodel, i.e., that the equations are satisfied. To this end one can prove that for any term Γ ⊢p M : σ of the fragment of fine-grain call-by-value used for effect theories (Figure 6) the equation [[M^S_s]] = J(⟨id, !⟩) ∘ [[M]] ∘ J(π1) holds. Here, on the left-hand side the double brackets refer to the interpretation of ECBV in (V, C, 1) and the brackets on the right-hand side refer to the interpretation of fine-grain call-by-value in (V, C, J). This equation is proved by induction on typing derivations. Thus, for any equation Γ ⊢p M ≡ N : τ in E, since (V, C, J) models E we have [[M]] = [[N]], and thus also [[M^S_s]] = [[N^S_s]], proving that 1 is indeed a comodel.
We end this section by stating the coreflection theorem for models of effect theories.
Theorem 8.4. The 2-functor St : dKleisliE → dEcbvE is left biadjoint to Kl, i.e., for any pair of objects (V, C, J) and (V′, C′, S) of dKleisliE and dEcbvE respectively, there is an equivalence of categories

    dEcbvE(St(V, C, J), (V′, C′, S)) ≃ dKleisliE((V, C, J), Kl(V′, C′, S))

natural in (V, C, J) and (V′, C′, S). Moreover, the unit of the adjunction η : id_{dEcbvE} → Kl ∘ St is an isomorphism.
9. Relationship with Atkey’s parameterized monads
Atkey’s work on parameterized monads [3], has proven relevant to functional programming (e.g. [24, §5.2]). In this section we show that parameterized monads are essentially the
same as enriched models.
Recall that, in general category theory, if a functor F : A × S → B is such that F(−, S) : A → B has a right adjoint G(S, −) : B → A for each S, then these right adjoints together form a functor G : S^op × B → A called the parameterized right adjoint. Atkey has carried out a study of a generalized form of monad that arises from parameterized adjunctions: the functor G(−₁, F(−₂, −₃)) : S^op × A × S → A is called a parameterized
monad. Thus a parameterized monad is a functor T : S^op × A × S → A together with extranatural families of morphisms

    ηS,A : A → T(S, A, S)
    µS1,S2,S3,A : T(S1, T(S2, A, S3), S2) → T(S1, A, S3)

satisfying monad laws. A first example of a parameterized monad is the parameterized state monad on the category of sets: T(S1, A, S2) := [S1 ⇒ A × S2].
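This example is easy to render in Haskell. The sketch below (ours) implements T(S1, A, S2) = [S1 ⇒ A × S2], with ireturn and ibind witnessing the parameterized unit and multiplication; the names are ours.

    newtype PState s1 s2 a = PState { runPState :: s1 -> (a, s2) }

    -- the unit eta_{S,A} : A -> T(S, A, S) stays at one state type
    ireturn :: a -> PState s s a
    ireturn a = PState (\s -> (a, s))

    -- bind composes state-type-changing computations, as in the
    -- multiplication T(S1, T(S2, A, S3), S2) -> T(S1, A, S3)
    ibind :: PState s1 s2 a -> (a -> PState s2 s3 b) -> PState s1 s3 b
    ibind (PState m) k = PState (\s1 -> let (a, s2) = m s1 in runPState (k a) s2)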
Every enriched model (V, C) contains a parameterized adjunction, since C(−₁, −₂) : C^op × C → V is by definition a parameterized right adjoint for (−₁) · (−₂) : V × C → C.
Conversely, in the theory of parameterized monads, the following Kleisli construction [3, Prop. 1] plays a key role. Given a parameterized monad T : S^op × A × S → A, the objects of the Kleisli category are pairs (A, S) of an object of A and an object of S, and a morphism (A, S) → (A′, S′) is a morphism A → T(S, A′, S′) in A. This is a first step towards building an enriched model from a parameterized monad.
Plain parameterized monads are not especially relevant to the theory of programming
languages, just as plain monads are not very relevant. In his study of parameterized monads,
Atkey focuses on strong parameterized monads with Kleisli exponentials [3, §2.4.1]. He uses
these to provide semantics for a ‘command calculus’, which is a term language closely related
to our basic enriched call-by-value calculus (Figure 1).
Proposition 9.1. Let V be a category with finite products. The following data are equivalent.
(1) A strong parameterized monad on V with Kleisli exponentials, taking parameters in a
category S [3, §2.4.1].
(2) A category C enriched in V with copowers (i.e., an enriched model – §2.2) with a chosen
subcategory S of C such that every object X of C is of the form X = A · S for A in V
and S in S.
Proof notes. Given a strong parameterized monad T : S^op × V × S → V, we let C be the Kleisli category for T, as above. The pair (V, C) forms an enriched model with A · (B, S) := (A × B, S): this is a rephrasing of what it means for T to be strong and have Kleisli exponentials. Moreover S can be identified with a subcategory of C whose objects are of the form (1, S) and whose morphisms are induced by the morphisms in S.

Conversely, suppose we are given an enriched model (V, C) and a chosen subcategory S of C. We define a parameterized monad T : S^op × V × S → V by

    T(S, A, S′) := C(S, A · S′).

It is routine to check that the two constructions are mutually inverse, up to equivalence of categories.
10. Relationship with the enriched effect calculus
Our enriched call-by-value calculus (ECBV) is a fragment of the enriched effect calculus (EEC,
[8, 7]) which was designed to analyze linear usage in effectful computation. We now show
that ECBV is not only a syntactic fragment of EEC: every model of the enriched call-by-value
calculus embeds in a model of the enriched effect calculus.
The enriched effect calculus extends the enriched call-by-value calculus that we introduced in Section 2 with some type constructions:

    Value types:        A, B ::= α | 1 | A × B | A ⊸ B   (Figure 1)
                               | 0 | A + B                (Figure 3)
                               | A → B | A                (full EEC [8]; here A is a computation type)
    Computation types:  A, B ::= α | !A ⊗ B               (Figure 1)
                               | 0 | A ⊕ B                (Figure 3)
                               | 1 | A × B | A → B | !A   (full EEC [8])
The additional types are: products (A × B) and powers (A → B) of computation types; an
operation to coerce a value type A into a computation type !A; a space of pure functions
(A → B) between value types; and an implicit inclusion of computation types as value types.
These additional types have been used to describe other aspects of effectful computation.
We briefly considered a linear-use CPS translation in Section 7, based on [9, 10], for which
we needed products and powers of computation types. Egger et al. [8] also describe monadic
call-by-name and call-by-value interpretations, for which they use the coercion of value types
into computation types and the implicit inclusion of computation types in value types.
The additional types of EEC do not affect the full completeness of the linear state-passing
translation (Thm. 4.3), for the following reason. In Theorem 10.2 we show that every model
of ECBV embeds in a model of EEC; conservativity of EEC over ECBV then follows from
a strong normalisation result for EEC [7]. Thus the linear-use state-passing translation of
Section 4 can be understood as a fully complete translation into EEC.
Definition 10.1. A closed enriched model is a pair of categories (V, C) such that V is
cartesian closed with coproducts and C is V-enriched with powers and copowers and products
and coproducts.
A model of EEC (V, C, F, U) [8] is a closed enriched model (V, C) together with a V-enriched adjunction F ⊣ U : C → V.
We refer to [8] for the term calculus and interpretation of EEC in EEC models. Here,
we will analyze how the notion of EEC model compares to the other notions of model that
we have discussed so far. One simple observation is that every closed enriched model is a
distributive enriched model in the sense of Definition 6.1. Another observation is that the
adjunction part of an EEC model can be equivalently given by a ‘state’ object of C (see [8,
Proof of Thm. 4] and Section 5.2).
Theorem 10.2. Every enriched model embeds in a closed enriched model.
Proof. The difference between enriched models and closed enriched models is that in an
enriched model (V, C) the value category V need not be cartesian closed nor have coproducts,
and the computation category C need not have coproducts, products and powers.
We use the Yoneda embedding to embed an enriched model in a closed enriched model. For any small category A we consider the category Â of contravariant functors A^op → Set, and natural transformations between them. The Yoneda embedding A ↦ A(−, A) is a functor yA : A → Â that exhibits Â as a cocompletion of A. That is: Â is cocomplete, with colimits computed pointwise, and for any other cocomplete category B and any functor F : A → B there is a colimit-preserving functor F! : Â → B given by

    F!(P) := colim((yA ↓ P) --π--> A --F--> B),

where (yA ↓ P) is the category of elements of P; this colimit-preserving functor is essentially unique such that F ≅ F! · yA.
Let (V, C) be an enriched model. We will show that (V̂, Ĉ) is a closed enriched model, and that (V, C) embeds in it as an enriched model.
We proceed by considering the following 2-categorical situation. Because the construction A ↦ Â is a free cocompletion, it can be understood as a weak 2-functor from the 2-category Cat of small categories, functors and natural transformations to the 2-category Cocomp of categories with all colimits, colimit-preserving functors and natural transformations.
In fact it is necessary to be slightly more general than this: we will understand Cat and
Cocomp as 2-multicategories. Recall that a 2-multicategory is a Cat-enriched multicategory.
So it is like a 2-category except that the domains of the 1-cells are sequences of objects.
• The 2-multicategory Cat is defined as follows. The 0-cells are small categories with finite
coproducts. The 1-cells F : (A1 , . . . , An ) → B in Cat are functors in n arguments, i.e.
functors F : A1 × · · · × An → B. The 2-cells are natural transformations.
• The objects of the 2-multicategory Cocomp are categories with all colimits. The 1-cells
F : (A1 , . . . , An ) → B in Cocomp are functors F : A1 × · · · × An → B that preserve colimits
in each argument, i.e. that for fixed A1 ∈ A1 , . . . , An ∈ An and for 1 ≤ i ≤ n, the functor
F (A1 , . . . , −i , . . . An ) : Ai → B preserves colimits. The 2-cells are natural transformations.
• The construction A ↦ Â extends to a weak morphism of 2-multicategories from Cat to Cocomp. A 1-cell F : (A1, . . . , An) → B in Cat is extended to a 1-cell in Cocomp, i.e. a functor F! : Â1 × · · · × Ân → B̂ which preserves colimits in each argument. This construction is done by iteratively applying the following idea. If an n-ary functor G : (A1, . . . , Ak, . . . , An) → B is such that B is cocomplete, A1 . . . Ak−1 are cocomplete, G preserves colimits in each of the first (k − 1) arguments, and Ak is small, then there is an n-ary functor G!k : (A1, . . . , Âk, . . . , An) → B that preserves colimits in each of the first k arguments such that G ≅ G!k · (A1, . . . , yAk, . . . , An). This is because the n-ary functor G : (A1, . . . , An) → B can be curried to a functor

    Ak → Cocomp(A1, . . . , Ak−1; Cat(Ak+1, . . . , An; B))

whose codomain is cocomplete, and which can thus be extended to a colimit-preserving functor using the universal property of Âk:

    Âk → Cocomp(A1, . . . , Ak−1; Cat(Ak+1, . . . , An; B));

this can be uncurried to give G!k : (A1, . . . , Âk, . . . , An) → B. Ultimately, the extension F! : Â1 × · · · × Ân → B̂ satisfies the following coend formula:

    F!(P1, . . . , Pn)(B) ≅ ∫^{A1,...,An} P1(A1) × · · · × Pn(An) × B(B, F(A1, . . . , An))
• Recall the following consequence of the special adjoint functor theorem: a morphism F : (Â1, . . . , Ân) → B in Cocomp can be equivalently described as a functor that has a right adjoint in each argument, i.e. a right adjoint for each functor F(P1, . . . , −i, . . . , Pn) : Âi → B.
• Aside from size issues, there is a forgetful morphism of 2-multicategories Cocomp → Cat, and the construction A ↦ Â is left biadjoint to it. The Yoneda embedding is the unit for this adjunction.
With the general situation explained, the proof of Theorem 10.2 is straightforward. We begin by considering the evident notion of ‘weak monoid’ in a 2-multicategory K. This comprises an object M of K and two 1-cells: m : (M, M) → M and e : () → M, with three coherence 2-isomorphisms. A morphism of 2-multicategories K → K′ takes weak monoids in K to weak monoids in K′. In particular a monoidal category is a weak monoid in Cat, and the construction A ↦ Â takes it to a weak monoid in Cocomp, which is a cocomplete biclosed monoidal category. The Yoneda embedding M → M̂ preserves the weak monoid structure. This is Day’s convolution construction [5, 17].
In particular, the value category V of our enriched model has products and this exhibits it as a monoidal category. It follows that V̂ is cartesian closed and that the Yoneda embedding V → V̂ preserves the product structure in V. Given a weak monoid M in a 2-multicategory K, we consider the evident notion of weak action for M: an object A of K and a 1-cell (M, A) → A satisfying the laws of monoid actions up to coherent isomorphism. A morphism of 2-multicategories takes weak monoid actions to weak monoid actions. In particular, given a monoidal category M, an action of M on another category A induces an enrichment of Â in M̂ with powers and copowers. The Yoneda embedding A → Â preserves the monoidal action. Moreover since it is 2-natural it preserves any enrichment or powers that already exist in A.

In particular, in our enriched model, V acts on C and so Ĉ is enriched in V̂ with powers and copowers, and the Yoneda embedding C → Ĉ is enriched in V and preserves copowers.
The crux of the proof is that the Yoneda embedding adds closed structure — cartesian closed structure and powers — while preserving the other structure. Although the Yoneda embedding does not freely add the closed structure, it is considerably simpler than the free closure. This is one reason why Yoneda embeddings are a common technique in semantics. For instance, the enriched Yoneda embedding is used by Egger et al. [8] to show that Levy’s call-by-push-value embeds in the enriched effect calculus (but the enriched Yoneda embedding is not appropriate in our proof because it does not preserve copowers).
Theorem 10.2 explains that EEC is conservative over ECBV, but it neglects sum types.
Sum types play an important role in the study of generic effects and state access operations
(§8). We now show that EEC is conservative over ECBV with sum types.
Proposition 10.3. Every distributive enriched model (§6.1) embeds in a closed enriched
model (Def. 10.1).
Proof. Our proof of Proposition 10.3 follows the same outline as our proof of Theorem 10.2.
We must modify that proof because the Yoneda embedding yA : A → Ab does not preserve
coproducts. We use the following variation on the Yoneda embedding. For any category A
with finite coproducts, let FP(Aop , Set) be the category of finite-product-preserving functors
Aop → Set and natural transformations between them. Assuming A has coproducts, the
category FP(Aop , Set) has coproducts too. The Yoneda embedding A 7→ A(−, A) is a functor
A → FP(Aop , Set) which preserves coproducts. In fact, the Yoneda embedding exhibits
FP(Aop , Set) as the cocompletion of A as a category with finite coproducts. The category
FP(Aop , Set) is cocomplete (although not all colimits are computed pointwise), and for any
coproduct-preserving functor F : A → B into a cocomplete category B there is a
colimit-preserving functor F! : FP(Aop, Set) → B given by

F!(P) ≝ colim((yA ↓ P) −π→ A −F→ B);

this colimit-preserving functor is essentially unique such that F ≅ F! · yA (see e.g. [21,
Thms 5.86, 6.11], [42], [11]). Since a distributive enriched model has coproducts, this is the
right variation of the Yoneda embedding to use.
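Equivalently (a standard reformulation, included here for orientation and not spelled out in the paper): F! is the left Kan extension of F along yA, computed by the coend

\[F_!(P) \;\cong\; \int^{A \in \mathbf{A}} P(A) \cdot F(A),\]

where · denotes the copower (the P(A)-fold coproduct) in B.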
We now mimic the proof of Theorem 10.2, replacing the cocompletion construction
(−)̂ with the cocompletion FP((−)op, Set) of a category with coproducts. Consider the
2-multicategory Coprod: the 0-cells are small categories with finite coproducts; the 1-cells
F : (A1, . . . , An) → B are functors F : A1 × · · · × An → B that preserve coproducts in each
argument; the 2-cells are natural transformations. The construction FP((−)op, Set) extends
to a morphism of 2-multicategories from Coprod to Cocomp. By the special adjoint functor
theorem, a morphism (FP(A1op, Set), . . . , FP(Anop, Set)) → B in Cocomp is a functor that
has a right adjoint in each argument.
A weak monoid M in Coprod is a distributive monoidal category, i.e., a monoidal
category with coproducts such that the tensor preserves coproducts in each argument. The
construction FP((−)op, Set) takes it to a weak monoid in Cocomp, which is a cocomplete
biclosed monoidal category. The Yoneda embedding M → FP(Mop, Set) preserves the
weak monoid structure and coproducts.
In particular, if (V, C) is a distributive enriched model then V has distributive products
and this exhibits it as a distributive monoidal category. It follows that FP(Vop , Set)
is cartesian closed with coproducts and that the Yoneda embedding V → FP(Vop , Set)
preserves the coproduct and product structure in V.
An action of a weak monoid in Coprod is the same thing as a distributive action in the
sense of Section 6.1. Given a distributive monoidal category M, a distributive action of M on
a category A with finite coproducts induces an enrichment of FP(Aop , Set) in FP(Mop , Set)
with powers and copowers. The Yoneda embedding A → FP(Aop , Set) preserves coproducts
and the monoidal action. Moreover since it is 2-natural it preserves any enrichment or
powers that already exist in A.
In particular, if (V, C) is a distributive enriched model then V acts on C, and so
FP(Cop , Set) is enriched in FP(Vop , Set) with powers and copowers, and the Yoneda
embedding C → FP(Cop , Set) is enriched in V and preserves coproducts and copowers.
The construction in this proof is related to the following natural situation. Let Setf be
the category of finite sets, and let T be a Lawvere theory. Then (Setf , Top ) is almost an
enriched model, except that the category Top is typically not Setf -enriched. Nonetheless, our
construction applied to (Setf , Top ) yields the basic motivating example of an EEC model:
FP(Setfop, Set) is the category of sets (since Setf is the free category with finite coproducts
on one generator) and FP(T, Set) is the category of algebras of the Lawvere theory. (See
also [41, Thm. 38].)
References
[1] Peter Achten and Marinus J. Plasmeijer. The ins and outs of Clean I/O. J. Funct. Program., 5(1):81–110,
1995.
[2] J. Aczél. On mean values. Bull. Amer. Math. Soc., 54(4):392–400, 1948.
[3] Robert Atkey. Parameterised notions of computation. J. Funct. Program., 19(3–4):335–376, 2009.
[4] Josh Berdine, Peter W. O’Hearn, Uday S. Reddy, and Hayo Thielecke. Linear continuation-passing.
Higher-Order and Symbolic Computation, 15(2-3):181–208, 2002.
[5] Brian Day. On closed categories of functors. In Lect. Notes Math. 137, pages 1–38. Springer, 1970.
[6] Eduardo J. Dubuc and G. M. Kelly. A presentation of topoi as algebraic relative to categories or graphs.
J. Algebra, 81(2):420–433, 1983.
[7] J. Egger, R.E. Møgelberg, and A. Simpson. The enriched effect calculus: syntax and semantics. To
appear in Journal of Logic and Computation.
[8] J. Egger, R.E. Møgelberg, and A. Simpson. Enriching an effect calculus with linear types. In CSL’09,
pages 240–254. Springer, 2009.
[9] J. Egger, R.E. Møgelberg, and A. Simpson. Linearly-used continuations in the enriched effect calculus.
In Proc. FOSSACS’10, volume 6014, pages 18–32. Springer, 2010.
[10] J. Egger, R.E. Møgelberg, and A. Simpson. Linear-use CPS translations in the enriched effect calculus.
Logical Methods in Computer Science, 8(4), 2012.
[11] Marcelo P. Fiore. Enrichment and representation theorems for categories of domains and continuous
functions. Unpublished manuscript, March 1996.
[12] Carsten Führmann. Varieties of effects. In Proc. FOSSACS’02, pages 144–158, 2002.
[13] Jean-Yves Girard. Linear logic. Theor. Comput. Sci., 50:1–102, 1987.
[14] R. Gordon and A.J. Power. Enrichment through variation. J. Pure Appl. Algebra, 120:167–185, 1997.
[15] M. Hasegawa. Linearly used effects: Monadic and CPS transformations into the linear lambda calculus.
In Proc. 6th International Symposium on Functional and Logic Programming (FLOPS), volume 2441 of
LNCS, pages 167–182. Springer, 2002.
[16] Reinhold Heckmann. Probabilistic domains. In Proc. Trees in Algebra and Programming – CAAP’94,
volume 787 of Lecture Notes in Computer Science, pages 142–156. Springer, 1994.
[17] Geun Bin Im and G. M. Kelly. A universal property of the convolution monoidal structure. J. Pure Appl.
Alg., 43:75–88, 1986.
[18] G. Janelidze and G. M. Kelly. A note on actions of a monoidal category. Theory Appl. of Categ.,
9(4):61–91, 2001.
[19] Alan Jeffrey. Premonoidal categories and a graphical view of programs. Available at ftp://outside.cs.bell-labs.com/who/ajeffrey/papers/premonA.pdf, 1997.
[20] G. M. Kelly. Adjunction for enriched categories. In Lect. Notes Math. 106, pages 166–177. Springer,
1969.
[21] G. M. Kelly. Basic Concepts of Enriched Category Theory. Cambridge University Press, 1982.
[22] G. M. Kelly. Elementary observations on 2-categorical limits. Bull. Austral. Math. Soc., 39(2):301–317,
1989.
[23] G. M. Kelly and A. J. Power. Adjunctions whose counits are coequalizers, and presentations of finitary
enriched monads. J. Pure Appl. Algebra, 89:163–179, 1993.
[24] Oleg Kiselyov, Simon Peyton Jones, and C. Shan. Fun with type functions. In Reflections on the Work
of C.A.R. Hoare, pages 301–331. Springer, 2010.
[25] Anders Kock. Bilinearity and cartesian closed monads. Math. Scand., 29:161–174, 1971.
[26] P. B. Levy. Call By Push Value. Kluwer, December 2003.
[27] P.B. Levy, J. Power, and H. Thielecke. Modelling environments in call-by-value programming languages.
Inform. and Comput., 185, 2003.
[28] Sheng Liang, Paul Hudak, and Mark P. Jones. Monad transformers and modular interpreters. In
Proc. POPL 1995, pages 333–343, 1995.
[29] Bachuki Mesablishvili. Monads of effective descent type and comonadicity. Theory and Applications of
Categories, 16(1):1–45, 2006.
[30] F Métayer. State monads and their algebras. arXiv:math/0407251v1, 2004.
[31] E. Moggi. Computational lambda-calculus and monads. In Proceedings of the 4th Annual Symposium on
Logic in Computer Science, pages 14–23, Asilomar, CA, 1989. IEEE Computer Society Press.
[32] P. W. O’Hearn and J. C. Reynolds. From Algol to polymorphic linear lambda-calculus. J. ACM,
47(1):167–223, 2000.
[33] Pierre-Marie Pédrot. On the semantics of the effect calculus. Master’s thesis, ENS Lyon, 2010. Available
at http://perso.ens-lyon.fr/pierremarie.pedrot/reports/rapport-m1-hasegawa.pdf.
[34] G. Plotkin. Call-by-name, call-by-value, and the λ-calculus. Theoret. Comp. Sci., 1:125–159, 1975.
[35] G. Plotkin and J. Power. Tensors of comodels and models for operational semantics. In Proc. MFPS XXIV,
volume 218 of Electr. Notes Theor. Comput. Sci, pages 295–311. Elsevier, 2008.
[36] G. Plotkin and M. Pretnar. Handlers of algebraic effects. In Proc. ESOP’09, volume 5502 of LNCS,
pages 80–94. Springer, 2009.
[37] G. D. Plotkin and J. Power. Notions of computation determine monads. In Proc. FOSSACS’02, volume
2620. Springer, 2002.
[38] G. D. Plotkin and J. Power. Algebraic operations and generic effects. Appl. Categ. Structures, 11(1):69–94,
2003.
[39] Gordon D Plotkin. Some varieties of equational logic. In Essays dedicated to Joseph A. Goguen, volume
4060 of Lect. Notes in Comput. Sci, pages 150–156. Springer, 2006.
[40] A. J. Power and O. Shkaravska. From comodels to coalgebras: State and arrays. In Proc. CMCS’04,
volume 106 of Electr. Notes Theor. Comput. Sci, pages 297–314. Elsevier, 2004.
[41] John Power. Generic models for computational effects. Theoret. Comput. Sci., 364(2):254–269, 2006.
[42] John Power and Edmund Robinson. Premonoidal categories and notions of computation. Math. Structures
Comput. Sci., 7(5):453–468, 1997.
[43] Matija Pretnar. The logic and handling of algebraic effects. PhD thesis, School of Informatics, University
of Edinburgh, 2010.
[44] Kurt Sieber. Full abstraction for the second order subset of an Algol-like language. In Proc. MFCS,
pages 608–617, 1994.
[45] Zoltan Somogyi, Fergus Henderson, and Thomas C. Conway. The execution algorithm of Mercury, an
efficient purely declarative logic programming language. J. Log. Program., 29(1-3):17–64, 1996.
[46] C. Strachey. The varieties of programming language. In Proc. International Computing Symposium,
pages 222–233. Cini Foundation, Venice, 1972. Also Tech. Monograph PRG-10, Univ. Oxford (1973).
[47] Hayo Thielecke. Categorical structure of continuation passing style. PhD thesis, Univ. Edinburgh, 1997.
Appendix A. Categories of models
A.1. The 2-category Enr of enriched models. We first define a notion of morphism of
enriched call-by-value model and transformations between these. This gives a 2-category
Enr. Let (V, C) and (V′, C′) be enriched call-by-value models (Def. 2.1). A morphism from
(V, C) to (V′, C′) is a triple (F, G, λ) such that F : V → V′ and G : C → C′ are functors,
and λ is a natural family of isomorphisms

λA,B : G(A · B) ≅ F(A) · G(B)
The following three conditions must be satisfied:
• F preserves products (up to isomorphism).
• The following two coherence diagrams commute (A.1). The first asserts that the composite

G(1 · B) −λ→ F1 · GB −≅→ 1 · GB −≅→ GB

agrees with the canonical isomorphism G(1 · B) ≅ GB. The second asserts that the two composites

G((A × B) · C) ≅ G(A · B · C) −λ→ FA · G(B · C) −FA·λ→ FA · FB · GC
G((A × B) · C) −λ→ F(A × B) · GC −⟨Fπ1, Fπ2⟩·GC→ (FA × FB) · GC ≅ FA · FB · GC

coincide.
• The mate of λ−1 is an isomorphism F(C(B, C)) ≅ C′(GB, GC). Recall that the mate of λ−1 is the adjoint correspondent of the composite (A.2)

F(C(B, C)) · GB −λ−1→ G(C(B, C) · B) −G(ev)→ G(C)

where ev is the counit of the adjunction (−) · B ⊣ C(B, (−)).
The 2-cells between morphisms (F, G, λ), (F′, G′, λ′) : (V, C) → (V′, C′) are natural
isomorphisms β : F ≅ F′, γ : G ≅ G′ making the following square commute (A.3):

G(A · B) −λA,B→ FA · GB
   γ ↓              ↓ β·γ
G′(A · B) −λ′A,B→ F′A · G′B

The composition of 1-cells is defined as (F′, G′, λ′) ◦ (F, G, λ) = (F′F, G′G, λ′ ◦ G′λ):

G′G(A · B) −G′λ→ G′(FA · GB) −λ′→ F′FA · G′GB

The composition of 2-cells is simply pointwise: (β′, γ′) ◦ (β, γ) = (β′ ◦ β, γ′ ◦ γ).
A.2. The 2-category Ecbv. The objects of the 2-category Ecbv are tuples (V, C, S) where
(V, C) is an enriched call-by-value model and S is a 'state' object in C. A 1-cell (V, C, S) →
(V′, C′, S′) is a quadruple (F, G, λ, δ) such that (F, G, λ) : (V, C) → (V′, C′) is a morphism
of enriched call-by-value models and δ is an isomorphism

δ : GS ≅ S′

A 2-cell from (F, G, λ, δ) to (F′, G′, λ′, δ′) is a 2-cell of enriched call-by-value models (§A.1)

(β, γ) : (F, G) → (F′, G′)

such that the triangle (A.4) commutes:

GS −δ→ S′
 γ ↓   ↗ δ′
 G′S

Composition of 1-cells and 2-cells in Ecbv is defined as in Enr. The state object
isomorphisms are composed as follows:

G′GS −G′δ→ G′S′ −δ′→ S″.
G0 GS
A.3. The 2-category Kleisli. We introduce the 2-category Kleisli whose objects are enriched
Kleisli models (§3.1). Morphisms from (V, C, J) to (V′, C′, J′) are morphisms of enriched
models (F, G, λ) : (V, C) → (V′, C′) such that J′ ◦ F = G ◦ J. Note in particular that this
means that F and G agree on objects, since J and J′ are required to be identities on objects.
A 2-cell (F, G, λ) → (F′, G′, λ′) is a 2-cell of enriched models (β, γ) : (F, G) → (F′, G′) such
that γJ = J′β : GJ → J′F′.
A.4. Sums. The category dEnr is defined to have distributive enriched models as objects,
1-cells as in Enr with the restriction that F and G must both preserve coproducts, and
2-cells as in Enr. The definitions of the 2-categories Ecbv and Kleisli extend to 2-categories
dEcbv and dKleisli of distributive enriched models with state objects and distributive Kleisli
models.
A.5. Effect theories. We now explain how to extend the 2-categories to models that
support effect theories, expanding on Section 8.8. We first define what it means for a functor
to preserve a value theory.
Definition A.1. Let V and V′ be distributive categories with given interpretations of some
fixed value theory (§8.5.1). A functor F : V → V′ preserving products and coproducts
preserves the interpretation of the value theory if F[[α]] = [[α]] for all type constants α of the
theory, and the triangle

F([[ᾱ]]) −F[[f]]→ F([[β]])
⟨Fπ1, . . . , Fπn⟩ ↓      ↗ [[f]]
F[[α1]] × · · · × F[[αn]]

commutes for all term constants f : ᾱ → β of the theory.
A.5.1. The 2-category dEcbvE . For any effect theory E, the objects of the 2-category dEcbvE
are distributive enriched models with state and a chosen E-comodel structure on the state
object (§8.5.3). Morphisms (1-cells) are morphisms (F, G, λ, δ) of dEcbv such that
• F preserves the value theory of E as in Definition A.1.
• The comodel structure is preserved, i.e., for each effect constant e : β̄; ᾱ1 + . . . + ᾱn of E,
the following diagram commutes (A.5):

G([[β̄]] · S) −G[[e]]→ G(([[ᾱ1]] + · · · + [[ᾱn]]) · S)
      ↓                          ↓
[[β̄]] · S′ −[[e]]→ ([[ᾱ1]] + · · · + [[ᾱn]]) · S′

where the vertical maps are constructed using λ, δ and the preservation of products and
coproducts. (This makes sense because F([[α]]) = [[α]] for all type constants α.)
A 2-cell of dEcbvE is a 2-cell (β, γ) : (F, G, δ) → (F′, G′, δ′) of dEcbv such that β[[α]] is the
identity for all type constants α of the theory E.
A.5.2. The 2-category dKleisliE . For any effect theory E, the objects of the 2-category
dKleisliE are distributive Kleisli models with given interpretations of E. Morphisms are
morphisms of dKleisli such that
• F preserves the value theory of E as in Definition A.1
• The interpretation of effect constants is preserved, i.e., for each e : β̄; ᾱ1 + . . . + ᾱn of E,
the following diagram commutes (A.6):

G([[β̄]]) −G[[e]]→ G([[ᾱ1]] + · · · + [[ᾱn]])
⟨GJ(πi)⟩i ↓              ↓
[[β̄]] −[[e]]→ ([[ᾱ1]] + · · · + [[ᾱn]])

where the left vertical map types because G[[βi]] = [[βi]], and the right vertical map is
constructed using the fact that G preserves coproducts.
The 2-cells of dKleisliE are the 2-cells (β, γ) of dKleisli with β[[α]] the identity for all type
constants α of the theory E.
Appendix B. Bi-initiality of syntactic models
We sketch a proof of Theorem 2.4: the syntactic enriched model is bi-initial in the 2-category
Ecbv of enriched models. Bi-initiality of the syntactic monad model (Theorem 3.3) can be
proved in a similar manner.
Lemma B.1. Suppose (F, G, δ) : (V, C, S) → (V′, C′, S′) is a morphism in dEcbvE. Let
[[−]] be the interpretation of ECBV in (V, C), and let [[−]]′ be the interpretation of ECBV in
(V′, C′) (as in §2.2, §8.5.3). The family of isomorphisms given by

id : [[α]]′ → F[[α]]
δ−1 : [[S]]′ = S′ → GS = G[[S]]

extends uniquely to a type-indexed family of isomorphisms [[A]]′ ≅ F([[A]]) and [[B]]′ ≅ G([[B]])
such that if Γ | − ⊢ t : A and Γ | ∆ ⊢ u : B then the following squares commute (B.1):

[[Γ]]′ −[[t]]′→ [[A]]′              [[Γ]]′ · [[∆]]′ −[[u]]′→ [[B]]′
  ≅ ↓           ↓ ≅      and          ≅ ↓                 ↓ ≅
F[[Γ]] −F[[t]]→ F[[A]]             G([[Γ]] · [[∆]]) −G[[u]]→ G[[B]]

Moreover, if (β, γ) is a 2-cell then the following triangles commute (B.2):

[[A]]′ −≅→ F[[A]]                 [[B]]′ −≅→ G[[B]]
    ≅ ↘      ↓ β        and          ≅ ↘      ↓ γ
        F′[[A]]                          G′[[B]]
Proof notes. The isomorphisms are defined by induction on A and B. For example, in the
case of !A ⊗ B we use the composite

[[!A ⊗ B]]′ = [[A]]′ · [[B]]′ −≅→ (F[[A]]) · (G[[B]]) −λ−1→ G([[A]] · [[B]]) = G[[!A ⊗ B]]
Commutativity of (B.1) and (B.2) is proved by induction on structure of terms and types
respectively. We omit the lengthy verifications entirely, but remark that (A.1) is used to
prove the case of linear variable introduction, and diagram (A.2) is used to prove the case of
linear function application and the elimination rule for !A ⊗ B. The case of effect constants
is exactly the requirement (A.5).
Uniqueness is proved by induction on types.
Bi-initiality of the syntactic model (Theorem 2.4) follows from Lemma B.1 as follows.
Given any other model, the unique morphism is given by interpretation of the syntax. To
prove uniqueness, suppose (F, G, δ) is a 1-cell in dEcbvE from the syntactic model to (V, C, S). Then Lemma B.1
gives the natural isomorphism: since interpretation in the syntactic model of terms with one
variable is simply the identity, the diagrams (B.1) prove naturality of the isomorphism. The
required commutative diagrams (A.3) for 2-cells follow directly from definitions.
Appendix C. The adjunction between Kleisli models and enriched models
C.1. The 2-functor St : Kleisli → Ecbv. We define the 2-functor St : Kleisli → Ecbv by
the following data:

St(V, C, J) ≝ (V, C, 1)
St(F, G) ≝ (F, G, J′(!))    for (F, G) : (V, C, J) → (V′, C′, J′)
St(β, γ) ≝ (β, γ)

Note that J′(!) has the right type:

G(1) = G(J(1)) = J′(F(1)) −J′(!)→ J′(1) = 1.
C.2. The 2-functor Kl : Ecbv → Kleisli. Recall that Kl is defined on objects in Section 5
as

Kl(V, C, S) ≝ (V, KlS, JS)

where the category KlS has the same objects as V and homsets

HomKlS(A, B) ≝ HomC(A · S, B · S)

On morphisms we define Kl(F, G, λ) ≝ (F, Kl(F, G), λKl), where Kl(F, G) : KlS → KlS′
is the functor that maps an object A to FA and a morphism f : A · S → B · S to the
composite (C.1):

Kl(F, G)(f) : FA · S′ −FA·δ−1→ FA · GS −λ−1→ G(A · S) −G(f)→ G(B · S) −λ→ FB · GS −FB·δ→ FB · S′

The natural transformation

λKl : Kl(F, G)(A ·Kl B) → F(A) ·Kl Kl(F, G)(B)

has components given by

⟨Fπ1, Fπ2⟩ · S′ : F(A × B) · S′ → (F(A) × F(B)) · S′

On 2-cells we define Kl(β, γ) = (β, Kl(β, γ)), where Kl(β, γ) is the natural transformation
whose components are Kl(β, γ)A = βA · S′ : FA · S′ → F′A · S′. Note that this defines a
morphism in KlS′ from Kl(F, G)(A) = FA to Kl(F′, G′)(A) = F′A as required.
C.3. The unit. If (V, C, J) is a distributive Kleisli model, then

Kl(St(V, C, J)) = (V, Kl1, J1)

where Kl1 has the same objects as V, HomKl1(A, B) = HomC(A × 1, B × 1), and
J1(A) = A, J1(f) = J(f × 1) = f · 1. We define the unit η(V,C,J) as

η(V,C,J) ≝ (id, H, id) : (V, C, J) → (V, Kl1, J1)

where H(A) = A, and H(f : A → B) is the composite

A × 1 −J(π1)→ A −f→ B −J⟨id, !⟩→ B × 1

The third component of η should be a natural transformation H(A · B) → A ·Kl HB. Since
both sides are equal to A × B it makes sense to take the identity transformation.
Lemma C.1. Each η(V,C,J) is an isomorphism in Kleisli.
The next lemma states that the unit is a 2-natural transformation.
Lemma C.2. For each pair of distributive Kleisli models (V, C, J) and (V′, C′, J′), the
following diagram of functors commutes:

Kleisli((V, C, J), (V′, C′, J′)) −Kl◦St→ Kleisli((V, Kl1, J1), (V′, Kl1′, J1′))
        (η(V′,C′,J′))∗ ↘                    ↙ (η(V,C,J))∗
              Kleisli((V, C, J), (V′, Kl1′, J1′))

where f∗ and g∗ are the functors given by pre- and postcomposition by f and g respectively.
C.4. The counit. Let (V, C, S) be a model of enriched call-by-value. Then

St(Kl(V, C, S)) = (V, KlS, 1)

Recall that KlS has the same objects as V and set of morphisms defined as

HomKlS(A, B) = HomC(A · S, B · S)

Define the counit as

ε(V,C,S) ≝ (id, IS, δ) : (V, KlS, 1) → (V, C, S)

where IS is the functor defined by IS(A) = A · S, IS(f) = f, and δ is the isomorphism

δ : IS(1) = 1 · S → S

The counit is not strictly natural, but it does satisfy the following naturality condition.

Lemma C.3. Let (V, C, S), (V′, C′, S′) be two given objects of Ecbv. The following diagram
commutes up to natural isomorphism:

Ecbv((V, C, S), (V′, C′, S′)) −St◦Kl→ Ecbv((V, KlS, 1), (V′, KlS′, 1))
        (ε(V,C,S))∗ ↘                    ↙ (ε(V′,C′,S′))∗
              Ecbv((V, KlS, 1), (V′, C′, S′))
C.5. Triangle equalities. One of the triangle equalities only holds up to isomorphism.

Proposition C.4. Let (V, C, S) be a distributive enriched model with state. The composite
1-cell

Kl(ε(V,C,S)) ◦ ηKl(V,C,S)

is the identity on Kl(V, C, S).

Proposition C.5. Let (V, C, J) be a distributive Kleisli model. The composite 1-cell

εSt(V,C,J) ◦ St(η(V,C,J))

is naturally isomorphic to the identity on St(V, C, J).

C.6. Proof of Theorem 5.3. We now prove Theorem 5.3: η and ε induce an equivalence
of categories

Ecbv(St(V, C, J), (V′, C′, S)) ≃ Kleisli((V, C, J), Kl(V′, C′, S)).
In the following we use X to denote an object of Kleisli (rather than the much longer
(V, C, J)) and Y to denote an object of Ecbv. The required equivalence of categories
consists of the composite functors

Ecbv(St(X), Y) −Kl→ Kleisli(Kl(St(X)), Kl(Y)) −(ηX)∗→ Kleisli(X, Kl(Y))

Kleisli(X, Kl(Y)) −St→ Ecbv(St(X), St(Kl(Y))) −(εY)∗→ Ecbv(St(X), Y)

We need to prove that the two composites of these are naturally isomorphic to identities.
For the first of the composites, consider the following sequence of identities and natural
isomorphisms:

(εY)∗ ◦ St ◦ (ηX)∗ ◦ Kl = (εY)∗ ◦ (St(ηX))∗ ◦ St ◦ Kl
                        = (St(ηX))∗ ◦ (εY)∗ ◦ St ◦ Kl
                        ≅ (St(ηX))∗ ◦ (εSt(X))∗        (Lemma C.3)
                        = (εSt(X) ◦ St(ηX))∗
                        ≅ id∗                          (Proposition C.5)
                        = id

The other composite is isomorphic to the identity for a similar reason.
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view
a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a
letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or
Eisenacher Strasse 2, 10777 Berlin, Germany
Memory Tagging and how it improves
C/C++ memory safety
Kostya Serebryany, Evgenii Stepanov, Aleksey Shlyapnikov, Vlad Tsyrklevich, Dmitry Vyukov
Google
February 2018
Introduction
Memory Safety in C/C++
    AddressSanitizer
Memory Tagging
    SPARC ADI
    AArch64 HWASAN
Compiler And Run-time Support
Overhead
    RAM
    CPU
    Code Size
Usage Modes
    Testing
    Always-on Bug Detection In Production
    Sampling In Production
    Security Hardening
        Strengths
        Weaknesses
    Legacy Code
    Kernel
    Uninitialized Memory
Possible Improvements
    Precision Of Buffer Overflow Detection
    Probability Of Bug Detection
Conclusion
Introduction
Memory safety in C and C++ remains largely unresolved.
A technique usually called “memory tagging” may dramatically improve the situation if
implemented in hardware with reasonable overhead. This paper describes two existing
implementations of memory tagging: one is the full hardware implementation in SPARC; the
other is a partially hardware-assisted compiler-based tool for AArch64. We describe the basic
idea, evaluate the two implementations, and explain how they improve memory safety.
This paper is intended to initiate a wider discussion of memory tagging and to motivate the CPU
and OS vendors to add support for it in the near future.
Memory Safety in C/C++
C and C++ are well known for their performance and flexibility, but perhaps even more for their
extreme memory unsafety. This year we are celebrating the 30th anniversary of the Morris
Worm, one of the first known exploitations of a memory safety bug, and the problem is still not
solved. If anything, it’s more severe due to the exponential increase in the amount and
complexity of modern software and a new focus on client-side applications with rich attack
surfaces.
There are numerous tools (e.g. AddressSanitizer or Valgrind) and techniques (e.g. fuzzing or
concolic execution) that find memory safety bugs during testing. There are also many
techniques and tools that mitigate some aspects of memory safety bugs in production (e.g.
ASLR, CFI). Yet there is no end to memory safety bugs found in shipped software, and to
exploits based on those bugs. Increasingly constrained environments have motivated the
invention of new exploitation techniques (e.g. ROP, DOP, JOP).
AddressSanitizer
AddressSanitizer (ASAN) is a software-only tool based on compiler instrumentation introduced
to the LLVM and GCC compilers in 2011. ASAN finds the following classes of bugs:
● Temporal: heap-use-after-free, stack-use-after-return, stack-use-after-scope.
● Spatial: heap-buffer-overflow, stack-buffer-overflow, global-buffer-overflow,
container-overflow.
● Other (not relevant for this discussion)
Among the bug classes not detected by ASAN are use-of-uninitialized-memory (see
MemorySanitizer) and intra-object-buffer-overflow.
Redzones around heap, stack, and global objects are used to detect spatial bugs, and thus an
access that jumps over a redzone (aka non-linear buffer-overflow) may be undetected by ASAN.
Quarantine for heap and stack objects delays the reuse of deallocated objects and thus helps to
detect temporal bugs. Exhausting the quarantine may lead to undetected temporal bugs.
A shadow memory maps every 8 bytes of the application memory into 1 byte of metadata to
mark memory as either valid or invalid. The shadow is checked by instrumentation code injected
by the compiler (a sketch of such a check follows the list below). The overheads are:
● 1.5x-3x in CPU, from instrumenting all loads and stores and from heavier malloc.
● 1.5x-3x in RAM, from redzones, quarantine, shadow, and internal bookkeeping.
● 1.5x-3x in Code Size, from instrumenting all loads and stores.
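To make the shadow mapping above concrete, here is a minimal C sketch of an ASAN-style check. The offset value and the slow-path behavior are illustrative assumptions (real ASAN's checks and platform offsets differ in detail); check_access stands for what the compiler conceptually injects before a small (≤8-byte) access:

#include <stdint.h>
#include <stdlib.h>

// Assumed shadow placement; 0x7fff8000 is a typical x86-64 Linux offset.
#define SHADOW_OFFSET 0x7fff8000ULL

static inline int8_t *shadow_for(const void *addr) {
    // Every 8 application bytes map to 1 shadow byte.
    return (int8_t *)(((uintptr_t)addr >> 3) + SHADOW_OFFSET);
}

// Conceptual instrumentation before an n-byte access, n <= 8:
static inline void check_access(const void *addr, size_t n) {
    int8_t s = *shadow_for(addr);
    // s == 0: all 8 bytes of the granule are valid;
    // 0 < s < 8: only the first s bytes are valid; s < 0: redzone/freed.
    if (s != 0 && (int64_t)(((uintptr_t)addr & 7) + n) > s)
        abort();  // real ASAN prints a detailed bug report instead
}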
ASAN is heavily used at Google and across the industry, and it has found tens of thousands
of bugs. However, ASAN's usage outside of the development process (testing and fuzzing) is
limited, mostly due to its combined CPU/RAM/Size overhead.
Other limitations of ASAN:
● Does not currently instrument assembly code
● Does not currently detect bugs in pre-compiled binaries
● Does not currently detect when the kernel accesses invalid user-space memory
ASAN is not a security hardening tool: the redzones and quarantine are easy to bypass.
Memory Tagging
The general idea of memory tagging (MT, also known as memory coloring, memory tainting,
lock and key) on 64-bit platforms is as follows:
● Every TG (tagging granularity) bytes of memory aligned by TG are associated with a tag
of TS (tag size) bits. These TG bytes are called the granule.
● TS bits in the upper part of every pointer contain a tag.
● Memory allocation (e.g. malloc) chooses a tag, associates the memory chunk being
allocated with this tag, and sets this tag in the returned pointer.
● Every load and store instruction raises an exception on mismatch between the pointer
and memory tags.
● The memory access and tag check do not necessarily occur atomically with respect to
each other.
The value of TS should be large enough to allow a sufficient number of different tag values (i.e.
at least 4 to provide ~ 16 tag values) and small enough to fit into the unused upper bits of a
pointer (usually up to 8 or 16, depending on the architecture). TS also affects the complexity and
the size of the tag storage.
The value of TG is a balance between the size of the tag storage, the alignment requirement,
and hardware possibilities.
Temporal and spatial bugs are detected probabilistically; if the tags are assigned to objects
randomly, the probability of catching a bug is roughly (2^TS − 1)/2^TS, i.e. even with TS=4 the
probability is 15/16 or ~94%.
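A minimal C model of this pointer/memory tag discipline, assuming TS=8 with the tag in the top byte (as with AArch64 top-byte-ignore) and TG=16; memory_tag_of is a hypothetical stand-in for the hardware or shadow-memory tag lookup:

#include <stdint.h>

#define TG 16  // assumed tagging granularity in bytes

// Hypothetical lookup of the tag stored for a granule (hardware/shadow).
extern uint8_t memory_tag_of(uintptr_t granule_addr);

static inline void *tag_pointer(void *p, uint8_t tag) {
    uintptr_t a = (uintptr_t)p & ~(0xffULL << 56);  // clear the top byte
    return (void *)(a | ((uintptr_t)tag << 56));    // insert the tag
}

static inline uint8_t pointer_tag(const void *p) {
    return (uint8_t)((uintptr_t)p >> 56);
}

// The check that every load/store conceptually performs:
static inline int access_ok(const void *p) {
    uintptr_t addr = (uintptr_t)p & ((1ULL << 56) - 1);  // strip the tag
    uintptr_t granule = addr & ~(uintptr_t)(TG - 1);     // align down to TG
    return pointer_tag(p) == memory_tag_of(granule);
}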
In the following sections we discuss two MT implementations:
● SPARC ADI (TG=64, TS=4)
● AArch64 HWASAN (TG=16, TS=8).
SPARC ADI
The SPARC ADI hardware extension ([1], [2], [3]) is supported on SPARC M7/M8 CPUs running
Solaris OS. There is also some indication that Linux support is in progress.
Main features of SPARC ADI:
● TG=64 and TS=4, i.e. every 64 bytes of memory are associated with a 4-bit tag.
● 2 (in SPARC M8) and 3 (in SPARC M7) tag values are reserved.
● The memory tag for a single 64-byte region is set with a single instruction:
○ stxa %r1, [ %r2 ] (144)
● ADI supports setting a memory tag and zeroing the memory itself with one instruction:
○ stxa %r1, [ %r2 ] (146) # %r1 must contain 0
● Load/store generates SEGV on tag mismatch
● ADI has two modes with regard to handling store instructions: precise (slow, generates
the exception exactly at the faulty instruction) and imprecise (faster, generates the
exception at some later point). Load instructions are always handled precisely.
● Memory tags do not seem to be separately addressable; they are probably stored in
extra ECC memory bits or in some other hidden hardware state. The only way to read
the memory tag is via a special instruction.
● Untagged memory (with tag 0) can be accessed with tagged and untagged pointers.
● Memory has to be mapped with MAP_ADI to be tagged.
● Applying a memory tag to a non-resident (freshly mmap-ed) page makes this page
resident.
● Syscalls return an error code if accessing a user provided buffer causes a tag mismatch.
AArch64 HWASAN
HWASAN (hardware-assisted ASAN) is an AArch64-specific compiler-based tool.
● TG=16, the memory tags are stored in a directly mapped 16=>1 virtual shadow region
allocated with mmap(MAP_NORESERVE), similar to ASAN.
● TS=8, the address tag is stored in the top address byte, leveraging the AArch64
hardware feature top-byte-ignore.
● Heap memory and pointers are tagged by a custom malloc implementation.
● Stack memory and pointers are tagged by compiler-injected code in the function
prologue/epilogue.
● Checks for loads/stores are made by compiler-injected code.
HWASAN can be seen as an improved variant of ASAN available on AArch64; it also serves as
a prototype for fully hardware-assisted memory tagging.
It is theoretically possible to implement HWASAN on architectures that do not have
top-byte-ignore using page aliasing (mmap(MAP_SHARED)) to store the address tag in the
meaningful address bits. However, our experiments indicate that such an implementation would be
impractical due to huge stress on the TLB.
Compiler And Run-time Support
Memory tagging hardware will not magically make C++ safer - it still requires cooperation
between the compiler and the run-time.
Detecting heap-related memory bugs requires changes in malloc and free:
Malloc:
● Align the allocations by TG.
● Choose a tag T for every allocation.
● Tag the memory for the just-allocated chunk with T.
● Return the address tagged with T.
Free:
● Optionally retag the free-d memory with another tag.
Strategies for choosing the tag during malloc may vary. One such strategy is to use pseudo
random tags. Another strategy would ensure that no two adjacent heap chunks have the same
tag (for 100% reliable linear buffer overflow/underflow detection).
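A sketch of these malloc/free changes in C, under assumed parameters TG=16 and TS=8 and a hypothetical platform primitive set_memory_tag (on ADI-like hardware this would be the tagging store instruction; under HWASAN it would write the shadow):

#include <stdint.h>
#include <stdlib.h>

#define TG 16  // assumed tagging granularity

// Hypothetical primitive: tag every TG-byte granule in [p, p + size).
extern void set_memory_tag(void *p, size_t size, uint8_t tag);

void *mt_malloc(size_t size) {
    size_t rounded = (size + TG - 1) & ~(size_t)(TG - 1);
    void *p = aligned_alloc(TG, rounded);     // align the allocation by TG
    if (!p) return NULL;
    uint8_t tag = (uint8_t)(rand() & 0xff);   // pseudo-random tag choice
    set_memory_tag(p, rounded, tag);          // tag the chunk's memory
    return (void *)((uintptr_t)p | ((uintptr_t)tag << 56));  // tag the pointer
}

void mt_free(void *p) {
    void *raw = (void *)((uintptr_t)p & ((1ULL << 56) - 1));  // strip the tag
    // Optionally retag here so stale (old-tag) pointers fault on use;
    // the chunk size needed for retagging is known to a real allocator.
    free(raw);
}

A real allocator would use a better entropy source than rand() and could implement the adjacent-chunk strategy by remembering the tag of the neighboring chunk.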
Detecting stack-related memory bugs requires changes in the compiler: the function prologue
will need to align all local variables by TG, tag their locations with random tags, and the tagged
pointers will need to be stored in separate (virtual) registers. Optionally, the function epilogue
will retag the memory. Variations of this strategy are possible, e.g. to reduce register pressure or
to enable stack-use-after-scope detection.
Detecting buffer overflows in global variables also requires compiler support (the details are
out of scope of this paper).
Overhead
RAM
Major sources of RAM overhead:
● Over-aligning heap objects from the natural alignment (usually, 8 or 16) to TG
● Over-aligning stack objects
● Memory tag storage: TS extra bits per every TG bytes
We have measured overhead from heap over-alignment on a variety of 64-bit applications: one
important server-side Google application, the Google Chrome browser on Linux, a set of 7
Android applications, and the Linux Kernel for arm64. The overheads are given in %, assuming
the base is 8-byte alignment.
Application / Heap alignment     | 16    | 32     | 64
A Google server-side application | 2%    | 5.5%   | 17%
Google Chrome for Linux          | 0%    | 7%     | 28%
7 Android Applications           | 0%-6% | 3%-23% | 6%-42%
Linux kernel for arm64           | 4%    | 4%     | 14%
We have also measured the average stack frame increase due to stack over-alignment on a
subset of the above benchmarks:
Application / Stack alignment | 16   | 32  | 64
Google Chrome                 | 3.5% | 12% | 31%
Linux kernel for arm64        | 9%   | 30% | 52%
Even though larger tagging granularity requires less storage for the tags, it costs more due to
heap and stack over-alignment. Conclusion: the optimal granularity is 16, assuming we need
always-on MT and stack instrumentation. Only tagging the heap, especially with allocation-level
sampling, will work fine with larger granularities.
CPU
Major sources of CPU overhead:
● Tagging the heap/stack objects during allocation/deallocation.
● Increased working set size due to extra RAM consumption.
● Extra instructions to tag stack variables.
● Checking the tags during loads and stores - only for compiler-based instrumentation!
The CPU overhead of HWASAN is roughly 2x (similar to ASAN, which typically has 1.5x-3x
CPU overhead [1]) and is mostly caused by extra instructions inserted at compile-time before
loads and stores.
In contrast, with SPARC ADI the actual tag check during loads/stores is performed by hardware
and introduces relatively low CPU overhead in the imprecise mode. We do not have a toolchain
that instruments stack frames with SPARC ADI, so our measurements reflect heap-only mode.
We have measured the performance of SPARC ADI on the SPECint 2006 benchmarks by
linking them against a simple ADI-enabled malloc, based on scudo malloc. A SPARC S7-2
server was used. We measured 6 configurations of malloc:
1. 16-byte alignment (base)
2. 64-byte alignment
3. 64-byte alignment + ADI tagging in malloc (equivalent to adi_set_version)
4. 64-byte alignment + ADI tagging+zeroing in malloc (equivalent to adi_memset)
5. 64-byte alignment + ADI tagging+zeroing in malloc + retagging in free
6. 64-byte alignment + ADI tagging+zeroing in malloc + precise trap mode for writes
Configurations 3 and 4 have shown the same results, i.e. zeroing out the memory has no
additional cost. The following graph compares the first (base) configuration with configurations 2,
4, and 5. The gcc benchmark is the only benchmark where ADI tagging introduces noticeable
overhead, which comes from a large number of malloc calls with large sizes. Xalancbmk slows
down by ~17% due to 64-byte over-alignment and it also gains ~20% in RSS; ADI tagging adds
very little overhead on top of it. Two benchmarks, astar and omnetpp, are positively affected by
64-byte alignment and are mostly unaffected by ADI.
We have also measured the slowdown introduced by enabling the ADI’s precise trap mode for
stores and confirmed that this mode is not suitable for always-on usage in production.
One more benchmark we used was the clang compiler: ADI slows it down by ~4% in imprecise
mode and by 26% in precise mode.
Conclusion: a fully hardware assisted memory tagging (in heap-only mode) could have
near-zero overhead except for very malloc-intensive applications, where the overhead will still
remain moderate. Another important observation is that zeroing out the memory may have no
additional cost if we are already tagging that memory.
Code Size
The major source of code size overhead is the extra instructions to tag stack variables.
SPARC toolchains do not support stack instrumentation. The current version of HWASAN has a
naive stack instrumentation which increases the code size by 40-50%.
In HWASAN, tagging each stack variable requires several instructions:
eor  x19, x29, x29, lsr #20   << Get a semi-random tag from FP
mov  x8, sp                   << Get the address of the local variable
orr  x0, x8, x19, lsl #56     << Tag the address
lsr  x20, x8, #4              << Compute the shadow memory address
strb w19, [x20]               << Tag the memory
A proper hardware implementation may require fewer, probably two or three, instructions, so we
expect the code size increase to be more moderate. Still, compilers need to learn how to avoid
excessive stack frame tagging by proving that certain address-taken local variables are immune
to temporal and spatial bugs.
Usage Modes
In this section we describe the main usage modes for memory tagging systems and share
Google’s prior experience with ASAN in similar circumstances.
Testing
A system based on MT can be seen as a better ASAN (but: see “Possible Improvements”):
● Much smaller RAM overhead.
● More reliable detection of temporal bugs (on heap and stack). Unlike ASAN, MT does
not rely on a quarantine to detect temporal bugs, and the probability of detecting such
bugs does not drop over time. It is still possible to use small quarantines with MT to
allow non-probabilistic detection of accesses to very recently freed memory.
● More reliable detection of some spatial bugs (non-linear buffer overflows or arbitrary
memory accesses). Unlike ASAN, MT does not rely on redzones and so non-linear
buffer overflows are detected with the same high probability.
● For fully hardware-assisted systems, i.e. for SPARC ADI but not for HWASAN:
○ Smaller CPU and Code Size overhead.
○ Finds buggy accesses in non-instrumented code or inside system calls, without
requiring extra complexity in software.
While these improvements are very welcome, they may not provide enough motivation to
implement full MT in hardware - a much simpler top-byte-ignore feature, if implemented in other
CPU architectures, would provide 80% of benefits for regular testing at 20% of cost.
However, we believe that the other usage modes make it critical to have full MT in hardware.
Always-on Bug Detection In Production
We have strong evidence that testing does not find all memory safety bugs that actually happen
in production.
Despite our large-scale efforts for fuzzing the Chrome browser ([1], [2]) we keep finding memory
safety bugs in the shipped versions of Chrome. A version of Chrome with SyzyASAN
instrumentation used by a small subset of real users on Windows (so-called “canary channel”)
finds two to three bugs every week (not all of these bugs are public; the total number after 5 years
of testing is ~950). The majority of these bugs are heap-use-after-free (SyzyASAN does not find
stack-related bugs). These are the bugs that happen when real users use the Chrome browser
normally, i.e. not necessarily when someone is trying to attack it. There is a skew towards bugs
that require complicated user interaction, since other kinds of bugs are more often detected
during testing & fuzzing. Shipping such instrumented builds to more users leads to more bugs
discovered per month.
Our experience with Google server-side applications is similar. Several production services set
up ASAN canaries (dedicated small clusters running an ASAN build on real traffic). Despite our
massive testing and fuzzing efforts these canaries regularly find memory safety bugs that
evaded testing.
Last but not least, the Android platform has adopted ASAN as the main memory sanitization
tool, replacing the slower Valgrind. However, ASAN’s overhead continues to be problematic. For
one, Android typically runs dozens of concurrent processes ("Android apps" and services) on
constrained devices. At that point, the combined memory overhead starts to negatively affect
performance, often making devices unusable. Besides, ASAN requires specially-built
libraries, which usually overflow the limited system storage of Android devices, necessitating a
non-standard directory layout that complicates practical use outside of a development
environment. Even with these constraints, ASAN has been very successful in finding platform
and app issues.
The large overhead of ASAN makes these efforts extremely painful, costly, and often
prohibitively complex. But a full hardware implementation of MT would solve these problems.
A system similar to SPARC ADI allows shipping a binary to production (web service, desktop,
mobile, etc - as long as the hardware supports MT) that will find the majority of memory safety
bugs actually triggered on real inputs. Given our evaluation of the overhead of SPARC ADI we
believe that many applications can run with MT always on.
For the Android ecosystem a full hardware implementation of MT would also have the benefit of
being applicable to code not found in the stock platform (eg: all additional native code across
OEM devices). This has significant advantages from both stability and security points of view.
For example, external developers routinely use ASAN to find and report security bugs in
Android (many of which can be found in the Pixel Security Bulletins). Being able to trivially
extend this kind of testing to code that ships on non-Pixel devices will greatly increase the
number of issues that are detected (and fixed) across the ecosystem.
Sampling In Production
Some applications will not tolerate even a single-digit % overhead in RAM or CPU. The great
advantage of the hardware-assisted memory tagging is that it allows sampling-based bug
detection for heap on multiple levels.
● Per-device or per-VM sampling: the same binary is shipped to all devices/VMs, but the
feature is enabled only on a small random subset of devices/VMs at any moment. When
MT is disabled on the device the overhead is exactly zero.
● Per-process sampling: for multi-process applications, such as the Chrome browser,
checking could be enabled on a subset of processes. Processes with disabled MT will
have no penalty.
● Per-allocation sampling: malloc may choose to tag a random subset of allocations to
avoid the overhead caused by malloc over-alignment and setting the memory tags. In
this case, the RAM overhead from storing the memory tags will remain, but the other
overheads will decrease proportionally to the sampling rate (see the sketch after this list).
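A sketch of the per-allocation variant in C, reusing the hypothetical mt_malloc above; only a 1-in-SAMPLE_RATE fraction of allocations pays the tagging cost:

#include <stdlib.h>

#define SAMPLE_RATE 100           // assumed sampling rate

void *mt_malloc(size_t size);     // tagging allocator sketched earlier

void *sampling_malloc(size_t size) {
    if (rand() % SAMPLE_RATE == 0)
        return mt_malloc(size);   // tagged: bugs in this chunk are caught
    return malloc(size);          // untagged (tag 0): near-zero overhead
}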
When also instrumenting the stack frames, it will be harder to implement sampling without
maintaining multiple builds. Even if the instructions that tag local variables are correct when MT
is disabled, every process will still have the code size overhead. The possibility to implement
sampling for stack will depend on how compact the tagging instructions are.
Security Hardening
Every Chrome release contains fixes for memory safety bugs reported to Google by external
security researchers. Many of these bugs are found by sophisticated fuzzing or code inspection;
we don’t know how many of them happen during normal usage of Chrome as opposed to only
being triggered by clever attacks. In other words, no amount of testing, including testing in
production, will eliminate all bugs.
Our hypothesis is that always-on memory tagging in production will serve as a significant
obstacle for attackers as it will increase the cost of attacks, and more importantly, reduce their
reliability and stealthiness.
Below we outline some strengths and weaknesses of the general MT scheme as an exploit
mitigation. We invite security practitioners to study this subject in greater detail and suggest
improvements to the general scheme.
Strengths
● MT prevents the root cause of many classes of attacks (as opposed to other mitigations,
e.g. CFI, that prevent the consequences of memory corruption).
● Attackers sensitive to having their exploits detected and reported will be forced to use
only the subset of vulnerabilities/exploit techniques unaffected by MT to evade the
mitigation.
● MT can provide bug reports to the vendor with enough information to allow timely fixes
(unlike most other mitigations).
● Leaking one pointer's tag does not compromise any other tags (unlike with ASLR, where
leaking one pointer allows the attacker to determine the layout of an entire section of
memory).
● The memory tags are hard or impossible to leak (depending on the actual hardware
implementation and whether it's protected from side channel attacks similar to
Meltdown).
● Provides full mitigation against uses of uninitialized memory, see below.
Weaknesses
● Probabilistic nature of bug detection. Given a sufficient number of attempts MT can be
bypassed (but if an exploit requires a chain of bugs, those low probabilities will multiply
for every bug detected by MT).
● The address tag is stored in the upper pointer bits, potentially allowing the attacker to
change the address tag via an integer overflow on the address. Protecting from this
attack will require more compiler instrumentation.
● The address tag is stored in a fixed position in the high bits of the pointer. A buffer
overflow not prevented by MT (e.g. an intra-object overflow) on little-endian processors
could change the low bits of a pointer without changing the tag, so that it continues to
point into the same allocation (i.e. with the same tag) but increases the attacker's access
in some way.
● The address tag may be leaked and/or modified by some remaining unprotected
memory corruption classes, such as intra-object-buffer-overflow or type confusion. But if
the attacker has those primitives, in many cases they won't need to bypass MT at all.
● Since the accesses to the memory and their tags are not atomic with respect to each
other, racy use-after-frees may fail to be detected (but observing a racy use-after-free is
probabilistic anyway).
● If the attacker can leak pointers (and hence their tags) for arbitrary objects, they may be
able to repeatedly cause the allocation/deallocation of new heap/stack objects until
they've found two with matching tags to use in a use-after-free, linear overflow, etc.
Legacy Code
Hardware memory tagging (e.g. SPARC ADI), allows testing legacy code without recompiling it
-- only changes in malloc/free are required. This mode only allows finding heap-related bugs.
Mixed modes are also possible where parts of the code are instrumented (e.g. to also find
stack-related bugs) and other parts are not.
Kernel
This document mostly discusses memory safety of user-space applications. However everything
here equally applies to the OS kernels, such as Linux. We have been testing Linux with KASAN
(Kernel-ASAN) for several years and the situation with memory safety there is at least as bad as
in user space. KASAN has found at least 400 heap-use-after-free, 250 heap-buffer-overflow,
and 5 stack-buffer-overflow bugs ([1], [2], [3], also: git log | grep KASAN). These bugs indicate
that the Linux kernel will benefit from low-overhead memory tagging, optionally with
per-machine or per-allocation sampling.
Besides, using the MT for user space applications will require OS kernel support. For example,
HWASAN needs Linux to strip tags from user space pointers passed as system call arguments;
the patches will be sent upstream shortly.
Uninitialized Memory
Use of uninitialized memory is another important, yet frequently overlooked, memory safety
problem (e.g. CVE-2017-1000380 and CVE-2017-14954 in Linux, more Linux info leaks, Linux
privilege escalations, 700+ bugs in Chrome, info leaks in Yahoo, 100+ bugs found by oss-fuzz).
Detecting uses of uninitialized memory is often not trivial, but mitigating them is. The compiler
and run-time should simply initialize all memory on allocation. Typically, this is considered to be
not worth the overhead, but if we already tag the memory, initializing the same memory may
come for free.
Our evaluation of SPARC ADI demonstrates that initializing the heap memory to zero while
tagging it has no additional cost. This important property allows applications that use always-on
MT to also mitigate uses of uninitialized heap memory (and for stack memory too, if the stack is
instrumented).
Possible Improvements
Precision Of Buffer Overflow Detection
A disadvantage of the memory tagging scheme described here compared to ASAN and similar
tools is the coarse precision of buffer overflow detection. Example:
// Assuming TG=16
uint8_t *p = (uint8_t*)malloc(10);
p[12] = 0; // same 16-byte granule, goes undetected
p[16] = 0; // will be detected
The bigger the TG, the bigger this disadvantage is. Some software strategies may soften the
problem to some extent (e.g. periodically allocating objects right-aligned within the granule,
instead of left-aligned, where permitted by the ABI) but won’t eliminate it. Many such bugs will
remain undetected.
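For illustration, one such strategy in C (a hypothetical helper, assuming TG=16 and that alignment requirements permit): placing the object at the end of its granules makes a one-byte linear overflow cross into the next, differently tagged granule.

#include <stddef.h>

#define TG 16  // assumed tagging granularity

// Place an object of obj_size bytes at the *end* of its TG-byte granules,
// so the byte after the object falls in the next (differently tagged) granule.
void *right_align(void *granule_base, size_t obj_size) {
    size_t slack = (TG - (obj_size % TG)) % TG;
    return (char *)granule_base + slack;
}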
One way to support better buffer overflow precision is to use more bits in the memory tag to
represent the size within the granule: log2(TG) bits are needed to represent the full size, fewer
bits could provide a compromise in detection precision. However, this will use too many
precious memory tag bits.
A more complicated scheme may reserve one memory tag value to indicate that the
first N bytes (0<N<TG) of the granule are valid and the rest TG-N bytes are not. Additional
information (e.g. N and the real memory tag to match against the pointer tag) could be stored in
the right-most byte(s) of the granule. It’s unclear how viable this idea is for full hardware
implementations.
For applications that always use MT this is less of a problem because such intra-granule
overflows will not cause memory corruption or information leaks (but will remain logical bugs).
Probability Of Bug Detection
The probability of bug detection with MT primarily depends on the tag size. With TS=4, the
probability is borderline tolerable (15/16=94%), with TS=8 it is already very good (255/256 =
99.6%). But larger values of TS incur larger RAM overheads. Clever software strategies will
need to be invented to increase the probability of bug detection with small values of TS.
Conclusion
This paper describes memory tagging (MT) - a general scheme for hardware-assisted C/C++
memory safety, and evaluates two implementations: SPARC ADI (full hardware assistance) and
AArch64 HWASAN (minimal hardware assistance).
We discuss the possible usage modes for memory tagging and potential improvements. The
major benefit from MT is finding bugs in production, either with always-on MT or by sampling.
Traditional pre-production testing and fuzzing will also benefit from less costly and more reliable
bug detection.
Memory tagging will not eliminate all memory safety bugs; however, our analysis indicates that
memory tagging, when widely supported by hardware, will help significantly reduce the number
of such bugs and is likely to complicate exploitation of the few remaining ones.
We call for a wider discussion of memory tagging and encourage the CPU and OS vendors to
support it.
Traffic Models of Periodic Event-Triggered Control
Systems
arXiv:1711.03599v1 [] 9 Nov 2017
Anqi Fu and Manuel Mazo, Jr.
Abstract—Periodic event-triggered control (PETC) [13] is a
version of event-triggered control (ETC) that only requires to
measure the plant output periodically instead of continuously. In
this work, we present a construction of timing models for these
PETC implementations to capture the dynamics of the traffic they
generate. In the construction, we employ a two-step approach.
We first partition the state space into a finite number of regions.
Then in each region, the event-triggering behavior is analyzed
with the help of LMIs. The state transitions among different
regions result from computing the reachable state set starting
from each region within the computed event time intervals.
Index Terms—systems abstractions; periodic event-triggered
control; LMI; formal methods; reachability analysis.
I. INTRODUCTION
Wireless networked control systems (WNCS) are control
systems that employ wireless networks as feedback channels.
In such systems, the physically distributed components are
co-located with their own wireless nodes and communicate
via a wireless network. These components can be designed
with great mobility once the nodes are supported by batteries.
Besides, each component can be established and updated
easily. Therefore, WNCS have great adaptability on obtaining
different control objectives and have been attracting much
attention. However, there are two major issues that must be
considered while designing such a system: limited bandwidth
and energy supply.
Most often, control tasks are designed to be executed
periodically. This periodic strategy, also named time-triggered
control (TTC), does not regard the system’s current state and
thus may waste bandwidth and energy. Alternatively, event-triggered control (ETC) strategies have been proposed to reduce
bandwidth occupation, see e.g. [5], [17], [19], [22], [25], [26],
and references therein. In ETC, the control tasks only execute
when necessary, e.g. when some pre-designed performance indicator is about to be violated. Thus the system is frugal in
communication. However, to validate the pre-designed event-triggering conditions, sensors are required to sample the plant
output continuously. This continuous monitoring can consume
large amounts of energy. To reduce this energy consumption,
naturally one may want to replace the continuously sampling
by a discrete time sampling.
(The authors are with the Delft Center for Systems and Control, Delft University of Technology, Delft 2628 CD, The Netherlands; e-mail: A.Fu-1, M.Mazo@tudelft.nl. This work is partly funded by the China Scholarship Council (CSC).)

When applying discrete time sampling, to compensate for the delay caused by the
discretization, one can either design a stricter event-triggering condition based on the
system dynamics, as e.g. [18], or modify the Lyapunov function, as e.g.
[13]. In [13], Heemels et al. present a periodic event-triggered
control (PETC) mechanism. In a PETC implementation, the
sensors are only required to measure the plant output and
validate the event conditions periodically. Only when some
pre-designed conditions are satisfied, fresh measurements are
employed to recompute the controller output. Therefore, PETC
enjoys the benefits of both cautious communication and discrete time measurement. Compared to [18], the event conditions can be less conservative to further reduce communications. Thus the energy consumed and bandwidth occupied are
reduced. Furthermore, the transmissions of the control input
from the controller to the plant are also included in the PETC
mechanism.
To further reduce the resource consumption and to fully
extract the potential gains from ETC, one can also consider
scheduling approaches. By efficiently scheduling listening
times on wireless communications and medium access time in
general, the energy consumption in a WNCS can be reduced
and bandwidth can be more efficiently reused. To enable
such scheduling, a model for the traffic generated by ETC
is required. In [16], Kolarijani and Mazo propose a type of
approximate power quotient system, to derive models that
capture the timing behaviors of ETC systems applying the
triggering mechanism from [22]. They first partition the state
space into finite cones. In each cone, they analyze the timing
behavior by over-approximation methods (see e.g. [3], [4], [6],
[11], [14], [20], [21]), linear matrix inequality (LMI) methods,
and reachability analysis (see e.g. [1] and [2]).
Similarly, in order to fully extract the potential gains from
PETC with scheduling approaches, a model for the traffic
generated by PETC is necessary. In this work, we present a
construction of the timing models of the PETC implementations from [13]. First of all, we modify the PETC mechanism
by giving an upper bound time such that if no event happens
within that interval, the system will be forced to generate an
event by the end of it. When constructing the models, the
approach has two steps. We first divide the state space into
a finite number of partitions. For a 2-dimensional system, the
partition looks like a dartboard. Then we construct a set of
LMIs to compute the output map. Transition relations among
different regions are derived by computing the reachable state
set starting from each region. Compared with the work from
[9], we do not require that the perturbation should vanish as
the state converges. Instead, we only assume the perturbation
to be both L2 and L∞ .
This paper is organized as follows. Section II presents the
necessary notation and definitions. The problem to be solved
is defined in Section III. Section IV shows all the details to
construct a power quotient system to model the traffic of a
centralized PETC implementation. A numerical example is
shown in Section V. Section VI summarizes the contributions
of this paper and discusses future work. To ease the readability,
the proofs are collected in the Appendix.
II. NOTATION AND PRELIMINARIES

We denote the n-dimensional Euclidean space by R^n, the positive real numbers by R^+,
and R^+_0 = R^+ ∪ {0}. The natural numbers including zero are denoted by N; when zero
is not included, we denote the natural numbers as N^+. I_{N^+} is the set of all closed
intervals [a, b] such that a, b ∈ N^+ and a ≤ b. For any set S, 2^S denotes the set of all
subsets of S, i.e. the power set of S. M_{m×n} and M_n are the set of all m × n real-valued
matrices and the set of all n × n real-valued symmetric matrices, respectively. A symmetric
matrix M ∈ R^{n×n} is said to be positive (negative) definite, denoted by M ≻ 0 (M ≺ 0),
whenever x^T M x > 0 (x^T M x < 0) for all x ≠ 0, x ∈ R^n. M ⪰ 0 (M ⪯ 0) means M is a
positive (negative) semidefinite matrix. When Q ⊆ Z × Z is an equivalence relation on a
set Z, [z] denotes the equivalence class of z ∈ Z and Z/Q denotes the set of all equivalence
classes. For a locally integrable signal w : R^+ → R^n, we denote by

‖w‖_{L2} = (∫_0^∞ |w(t)|^2 dt)^{1/2}

its L2-norm, and by ‖w‖_{L∞} = sup_{t≥0} ‖w(t)‖ < ∞ its L∞-norm. Furthermore, we
define the space of all locally integrable signals with a finite L2-norm as L2, and the space
of all signals with a finite L∞-norm as L∞.
Now we review some notions from the field of system
theory.
Definition 2.1 (Metric [7]): Consider a set $T$; $d : T \times T \to \mathbb{R} \cup \{+\infty\}$ is a metric (or a distance function) if the following three conditions are satisfied for all $x, y, z \in T$:
• $d(x, y) = d(y, x)$;
• $d(x, y) = 0 \leftrightarrow x = y$;
• $d(x, y) \leq d(x, z) + d(y, z)$.
The ordered pair $(T, d)$ is said to be a metric space.
Definition 2.2 (Hausdorff distance [7]): Assume $X$ and $Y$ are two non-empty subsets of a metric space $(T, d)$. The Hausdorff distance $d_H(X, Y)$ is given by:
$$d_H(X, Y) = \max\left\{\sup_{x \in X} \inf_{y \in Y} d(x, y),\; \sup_{y \in Y} \inf_{x \in X} d(x, y)\right\}. \qquad (1)$$
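For finite point sets, (1) can be evaluated directly. The following NumPy sketch is only an illustration of the definition, not part of [7]:

```python
import numpy as np

def hausdorff(X, Y):
    """Hausdorff distance (1) between finite point sets X (m, d) and Y (n, d)."""
    # Pairwise Euclidean distances d(x, y).
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # max of sup_x inf_y d(x, y) and sup_y inf_x d(x, y).
    return max(D.min(axis=1).max(), D.min(axis=0).max())

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.5], [2.0, 0.0]])
print(hausdorff(X, Y))
```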
Definition 2.3 (System [23]): A system is a sextuple $(X, X_0, U, \longrightarrow, Y, H)$ consisting of:
• a set of states $X$;
• a set of initial states $X_0 \subseteq X$;
• a set of inputs $U$;
• a transition relation $\longrightarrow \subseteq X \times U \times X$;
• a set of outputs $Y$;
• an output map $H : X \to Y$.
The term finite-state (infinite-state) system indicates that $X$ is a finite (an infinite) set. A system is said to be autonomous if the cardinality of $U$ is smaller than or equal to one.
Definition 2.4 (Metric system [23]): A system $S$ is said to be a metric system if the set of outputs $Y$ is equipped with a metric $d : Y \times Y \to \mathbb{R}_0^+$.
Definition 2.5 (Approximate simulation relation [23]): Consider two metric systems $S_a$ and $S_b$ with $Y_a = Y_b$, and let $\varepsilon \in \mathbb{R}_0^+$. A relation $R \subseteq X_a \times X_b$ is an $\varepsilon$-approximate simulation relation from $S_a$ to $S_b$ if the following three conditions are satisfied:
• $\forall x_{a0} \in X_{a0}$, $\exists x_{b0} \in X_{b0}$ such that $(x_{a0}, x_{b0}) \in R$;
• $\forall (x_a, x_b) \in R$ we have $d(H_a(x_a), H_b(x_b)) \leq \varepsilon$;
• $\forall (x_a, x_b) \in R$, $(x_a, u_a, x'_a) \in \longrightarrow_a$ in $S_a$ implies $\exists (x_b, u_b, x'_b) \in \longrightarrow_b$ in $S_b$ satisfying $(x'_a, x'_b) \in R$.
We denote the existence of an $\varepsilon$-approximate simulation relation from $S_a$ to $S_b$ by $S_a \preceq^S_\varepsilon S_b$, and say that $S_b$ $\varepsilon$-approximately simulates $S_a$, or that $S_a$ is $\varepsilon$-approximately simulated by $S_b$. Whenever $\varepsilon = 0$, the inequality $d(H_a(x_a), H_b(x_b)) \leq \varepsilon$ implies $H_a(x_a) = H_b(x_b)$ and the resulting relation is called an (exact) simulation relation.
We introduce the notion of a power quotient system and a corresponding lemma for later analysis.
Definition 2.6 (Power quotient system [16]): Let $S = (X, X_0, U, \longrightarrow, Y, H)$ be a system and $R$ an equivalence relation on $X$. The power quotient of $S$ by $R$, denoted by $S_{/R}$, is the system $(X_{/R}, X_{/R,0}, U_{/R}, \longrightarrow_{/R}, Y_{/R}, H_{/R})$ consisting of:
• $X_{/R} = X/R$;
• $X_{/R,0} = \{x_{/R} \in X_{/R} \mid x_{/R} \cap X_0 \neq \emptyset\}$;
• $U_{/R} = U$;
• $(x_{/R}, u, x'_{/R}) \in \longrightarrow_{/R}$ if $\exists (x, u, x') \in \longrightarrow$ in $S$ with $x \in x_{/R}$ and $x' \in x'_{/R}$;
• $Y_{/R} \subset 2^Y$;
• $H_{/R}(x_{/R}) = \bigcup_{x \in x_{/R}} H(x)$.
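For finite systems, Definition 2.6 can be turned into code almost literally. The sketch below is an illustration for finite autonomous systems; the encoding of transitions as pairs and the `blocks` map from states to class labels are our own choices, not part of [16]:

```python
def power_quotient(X, X0, trans, H, blocks):
    """Power quotient of a finite autonomous system (Definition 2.6).

    X: iterable of hashable states; X0: iterable of initial states;
    trans: set of (x, x') pairs; H: dict mapping states to outputs;
    blocks: dict mapping each state to its equivalence-class label.
    """
    Xq = {blocks[x] for x in X}
    X0q = {blocks[x] for x in X0}
    # A quotient transition exists iff some representatives are related.
    transq = {(blocks[x], blocks[xp]) for (x, xp) in trans}
    # H_/R(x_/R) is the union of the outputs over the class.
    Hq = {b: set() for b in Xq}
    for x in X:
        Hq[blocks[x]].add(H[x])
    return Xq, X0q, transq, Hq
```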
Lemma 2.7 (Lemma 1 in [16]): Let $S$ be a metric system, $R$ an equivalence relation on $X$, and let the metric system $S_{/R}$ be the power quotient system of $S$ by $R$. For any
$$\varepsilon \geq \max_{x_{/R} \in X_{/R}} \max_{x \in x_{/R}} d\left(H(x), H_{/R}(x_{/R})\right), \qquad (2)$$
with $d$ the Hausdorff distance over the set $2^Y$, $S_{/R}$ $\varepsilon$-approximately simulates $S$, i.e. $S \preceq^S_\varepsilon S_{/R}$.
The definition of Minkowski addition is introduced here for the computation of the reachable sets.
Definition 2.8 (Minkowski addition): The Minkowski addition of two sets of vectors $A$ and $B$ in Euclidean space is formed by adding each vector in $A$ to each vector in $B$:
$$A \oplus B = \{a + b \mid a \in A, b \in B\},$$
where $\oplus$ denotes the Minkowski addition.
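For finite sets of vectors, the operation is a one-liner; the following illustration (our own, using tuples as vectors) is included only to fix the notation:

```python
def minkowski_sum(A, B):
    """A (+) B for finite sets of vectors represented as tuples."""
    return {tuple(a + b for a, b in zip(va, vb)) for va in A for vb in B}

print(minkowski_sum({(0, 0), (1, 0)}, {(0, 1)}))  # {(0, 1), (1, 1)}
```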
III. PROBLEM DEFINITION
The centralized PETC presented in [13] is reviewed here.
Consider a continuous linear time-invariant (LTI) plant of the form:
$$\begin{cases} \dot{\xi}_p(t) = A_p \xi_p(t) + B_p \hat{v}(t) + E w(t) \\ y(t) = C_p \xi_p(t), \end{cases} \qquad (3)$$
where $\xi_p(t) \in \mathbb{R}^{n_p}$ denotes the state vector of the plant, $y(t) \in \mathbb{R}^{n_y}$ denotes the plant output vector, $\hat{v}(t) \in \mathbb{R}^{n_v}$ denotes the input applied to the plant, and $w(t) \in \mathbb{R}^{n_w}$ denotes the perturbation. The plant is controlled by a discrete-time controller, given by:
$$\begin{cases} \xi_c(t_{k+1}) = A_c \xi_c(t_k) + B_c \hat{y}(t_k) \\ v(t_k) = C_c \xi_c(t_k) + D_c \hat{y}(t_k), \end{cases} \qquad (4)$$
where $\xi_c(t_k) \in \mathbb{R}^{n_c}$ denotes the state vector of the controller, $v(t_k) \in \mathbb{R}^{n_v}$ denotes the controller output vector, and $\hat{y}(t_k) \in \mathbb{R}^{n_y}$ denotes the input applied to the controller. A periodic sampling sequence is given by:
$$T_s := \{t_k \mid t_k := kh,\; k \in \mathbb{N}\}, \qquad (5)$$
where $h > 0$ is the sampling interval. Define two vectors:
$$u(t) := \begin{bmatrix} y^T(t) & v^T(t) \end{bmatrix}^T \in \mathbb{R}^{n_u}, \quad \hat{u}(t_k) := \begin{bmatrix} \hat{y}^T(t_k) & \hat{v}^T(t_k) \end{bmatrix}^T \in \mathbb{R}^{n_u}, \qquad (6)$$
with $n_u := n_y + n_v$. Here $u(t)$ is the output of the implementation and $\hat{u}(t)$ is the input of the implementation. A zero-order hold mechanism is applied to the input between samplings. At each sampling time $t_k \in T_s$, the input applied to the implementation is updated as:
$$\hat{u}(t_k) = \begin{cases} u(t_k), & \text{if } \|u(t_k) - \hat{u}(t_k)\| > \sigma \|u(t_k)\| \\ \hat{u}(t_{k-1}), & \text{if } \|u(t_k) - \hat{u}(t_k)\| \leq \sigma \|u(t_k)\|, \end{cases} \qquad (7)$$
where $\sigma > 0$ is a given constant. Reformulating the event condition as a quadratic form, the event sequence can be defined by:
$$T_e := \{t_b \mid b \in \mathbb{N},\; t_b \in T_s,\; \xi^T(t_b) Q \xi(t_b) > 0\}, \qquad (8)$$
where $\xi(t) := \begin{bmatrix} \xi_p^T(t) & \xi_c^T(t) & \hat{y}^T(t) & \hat{v}^T(t) \end{bmatrix}^T \in \mathbb{R}^{n_\xi}$, with $n_\xi := n_p + n_c + n_y + n_v$, and:
$$Q = \begin{bmatrix} Q_1 & Q_2 \\ Q_2^T & Q_4 \end{bmatrix},$$
in which:
$$Q_1 = \begin{bmatrix} (1-\sigma) C_p^T C_p & 0 \\ 0 & (1-\sigma) C_c^T C_c \end{bmatrix}, \quad Q_2 = \begin{bmatrix} -C_p^T & 0 \\ (1-\sigma) C_c^T D_c & -C_c^T \end{bmatrix}, \quad Q_4 = \begin{bmatrix} I + (1-\sigma) D_c^T D_c & -D_c^T \\ -D_c & I \end{bmatrix},$$
$0$ is a zero matrix of proper dimensions, and $I$ is an identity matrix of appropriate dimensions.
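The assembly of $Q$ is mechanical once $(C_p, C_c, D_c, \sigma)$ are given. The following NumPy sketch is our own illustration, with block sizes inferred from the matrix shapes:

```python
import numpy as np

def build_Q(Cp, Cc, Dc, sigma):
    """Assemble Q following the block structure above.

    Shapes: Cp (ny, np), Cc (nv, nc), Dc (nv, ny); the resulting Q acts on
    xi = [xi_p; xi_c; y_hat; v_hat]."""
    ny, n_p = Cp.shape
    nv, n_c = Cc.shape
    Q1 = np.block([[(1 - sigma) * Cp.T @ Cp, np.zeros((n_p, n_c))],
                   [np.zeros((n_c, n_p)), (1 - sigma) * Cc.T @ Cc]])
    Q2 = np.block([[-Cp.T, np.zeros((n_p, nv))],
                   [(1 - sigma) * Cc.T @ Dc, -Cc.T]])
    Q4 = np.block([[np.eye(ny) + (1 - sigma) * Dc.T @ Dc, -Dc.T],
                   [-Dc, np.eye(nv)]])
    return np.block([[Q1, Q2], [Q2.T, Q4]])
```

A state $\xi$ then triggers an event exactly when $\xi^T Q \xi > 0$.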
It is obvious that $T_e \subseteq T_s$. According to Theorem V.2 in [13], if the hypotheses therein are satisfied, then the system (3)-(8):
1) is globally exponentially stable (GES), i.e. $\exists c > 0$ and $\rho > 0$ s.t. $\forall \xi(0) \in \mathbb{R}^{n_\xi}$ with $w = 0$, $\|\xi(t)\| \leq c e^{-\rho t} \|\xi(0)\|$ for all $t \in \mathbb{R}^+$;
2) has an $L_2$-gain from $w$ to $z$ smaller than or equal to $\gamma$, i.e. $\exists \sigma : \mathbb{R}^{n_\xi} \to \mathbb{R}^+$ s.t. $\forall w \in L_2$, $\xi(0) \in \mathbb{R}^{n_\xi}$, the corresponding solution to the system with $z(t) := g(\xi(t), w(t))$ satisfies $\|z\|_{L_2} \leq \sigma(\xi(0)) + \gamma \|w\|_{L_2}$.
To model the timing behaviour of a PETC system, we aim at
constructing a power quotient system for this implementation.
Remark 3.1: Because of the uncertainty brought by the perturbation, it may happen that the perturbation compensates for the effect of sampling, helping the state of the implementation to converge. In that case the event condition in (8) may never be satisfied along the timeline, and hence there may not be an upper bound on the event intervals. However, such an upper bound is necessary for constructing a useful power quotient system.
Remark 3.2: To apply scheduling approaches, an online scheduler is required. The model we are going to construct is non-deterministic, meaning that after an event the system may end up in several possible regions. Since those regions are defined in terms of $\xi_p$, it is not always clear from a simple output measurement in which region the system is; the online scheduler therefore cannot infer the current region from output measurements alone and should instead be able to access the region the system is in.
Assumption 3.3: The current state region at each event-triggered time $t_b$ can be obtained in real time.
Because of the observation in Remark 3.1, we use instead the following event condition:
$$t_{b+1} = \inf\left\{t_k \;\middle|\; t_k \in T_s,\; t_k > t_b,\; \xi^T(t_k) Q \xi(t_k) > 0 \;\vee\; t_k \geq t_b + \bar{\tau}_{R(\xi(t_b))}\right\}, \qquad (9)$$
where $R(\xi(t_b))$ is the state region in the state space $\mathbb{R}^{n_\xi}$ at the last sampling time $t_b$, and $\bar{\tau}_{R(\xi(t_b))}$ is a regional maximum allowable event interval (MAEI), which depends on $R(\xi(t_b))$. According to Assumption 3.3, $R(\xi(t_b))$ is obtainable. If this value cannot be accessed by the triggering mechanism, one can always employ a global upper bound $\bar{\tau} \geq \bar{\tau}_{R(\xi(t_b))}$. We will discuss the computation of $\bar{\tau}_{R(\xi(t_b))}$ in later sections. Note that, if the PETC implementation employing (8) can guarantee some pre-designed stability and performance, then the PETC implementation employing (9) can guarantee the same stability and performance.
Consider a period:
$$\tau(x) := t_{b+1} - t_b = k_x h. \qquad (10)$$
By definition $\hat{u}(t)$ is constant $\forall t \in [t_b, t_{b+1}[$ and depends on $\xi_p(t_b)$ and $\xi_c(t_b)$. The input $\hat{u}(t)$ can be expressed as:
$$\hat{u}(t) = C_E x, \quad C_E := \begin{bmatrix} C_p & 0 \\ D_c C_p & C_c \end{bmatrix},$$
where
$$x := \begin{bmatrix} \xi_p^T(t_b) & \xi_c^T(t_b) \end{bmatrix}^T.$$
Let $\xi_x(k) := \begin{bmatrix} \xi_p^T(t_b + kh) & \xi_c^T(t_b + kh) \end{bmatrix}^T$ be the state evolution with initial state $x = \begin{bmatrix} \xi_p^T(t_b) & \xi_c^T(t_b) \end{bmatrix}^T$, and $k \in \mathbb{N}$. Now $\xi_x(k)$ can be computed as:
$$\xi_x(k) = M(k)x + \Theta(k), \qquad (11)$$
where
$$M(k) := \begin{bmatrix} M_1(k) \\ M_2(k) \end{bmatrix}, \quad \Theta(k) := \begin{bmatrix} \Theta_1(k) \\ 0 \end{bmatrix},$$
$$M_1(k) := \begin{bmatrix} I & 0 \end{bmatrix} + \int_0^{kh} e^{A_p s}\,ds \left(A_p \begin{bmatrix} I & 0 \end{bmatrix} + B_p \begin{bmatrix} D_c C_p & C_c \end{bmatrix}\right),$$
$$M_2(k) := A_c^k \begin{bmatrix} 0 & I \end{bmatrix} + \sum_{i=0}^{k-1} A_c^{k-1-i} \begin{bmatrix} B_c C_p & 0 \end{bmatrix},$$
$$\Theta_1(k) := \int_0^{kh} e^{A_p(kh-s)} E w(s)\,ds.$$
Figure 1. An example of the state space partition: (a) finite polyhedral cones, (b) finite homocentric spheres, and (c) the resulting finite state-space partition.
Thus, from the event condition in (9), $k_x$ in (10) can be computed by:
$$k_x = \min\{\hat{k}_x, \bar{k}_x\}, \qquad (12)$$
where $\bar{k}_x := \bar{\tau}_{R(x)}/h$ and
$$\hat{k}_x := \inf\left\{k \in \mathbb{N}^+ \;\middle|\; \begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix}^T Q \begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix} > 0\right\}. \qquad (13)$$
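For the unperturbed case ($w = 0$, so $\Theta(k) = 0$), $k_x$ can be evaluated by direct simulation. The sketch below is our own illustration; it assumes $w = 0$, uses the standard augmented-matrix (Van Loan) trick to evaluate $\int_0^{kh} e^{A_p s}\,ds$, and the helper names are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

def M_of_k(k, h, Ap, Bp, Ac, Bc, Cp, Cc, Dc):
    """M(k) from (11), assuming w = 0."""
    n_p, n_c = Ap.shape[0], Ac.shape[0]
    # Upper-right block of this exponential equals int_0^{kh} e^{Ap s} ds.
    blk = expm(np.block([[Ap, np.eye(n_p)],
                         [np.zeros((n_p, 2 * n_p))]]) * (k * h))
    J = blk[:n_p, n_p:]
    Ip0 = np.hstack([np.eye(n_p), np.zeros((n_p, n_c))])
    M1 = Ip0 + J @ (Ap @ Ip0 + Bp @ np.hstack([Dc @ Cp, Cc]))
    M2 = np.linalg.matrix_power(Ac, k) @ np.hstack([np.zeros((n_c, n_p)), np.eye(n_c)])
    for i in range(k):
        M2 = M2 + np.linalg.matrix_power(Ac, k - 1 - i) @ np.hstack([Bc @ Cp, np.zeros((n_c, n_c))])
    return np.vstack([M1, M2])

def k_x(x, h, Q, CE, kbar_x, sys):
    """Smallest step violating (13) with w = 0, saturated by the MAEI bound (12)."""
    for k in range(1, kbar_x + 1):
        v = np.concatenate([M_of_k(k, h, *sys) @ x, CE @ x])
        if v @ Q @ v > 0:
            return k
    return kbar_x
```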
Now we present the main problem to be solved in this paper. Consider the system:
$$S = (X, X_0, U, \longrightarrow, Y, H), \qquad (14)$$
where
• $X = \mathbb{R}^{n_x}$, $n_x = n_p + n_c$;
• $X_0 \subseteq \mathbb{R}^{n_x}$;
• $U = \emptyset$;
• $\longrightarrow \subseteq X \times U \times X$ such that $\forall x, x' \in X$: $(x, x') \in \longrightarrow$ iff $\xi_x(H(x)) = x'$;
• $Y \subset \mathbb{N}^+$;
• $H : \mathbb{R}^{n_x} \to \mathbb{N}^+$, where $H(x) = k_x$.
$S$ is an infinite-state system. The output set $Y$ of system $S$ contains all the possible numbers of sampling steps $\frac{t_{b+1} - t_b}{h} \in \mathbb{N}$, $b \in \mathbb{N}$, that the system (3)-(7), (9) may exhibit. Once the sampling time $h$ is chosen, the event interval can be computed as $k_x h$.
Problem 3.4: Construct a finite abstraction of system $S$ capturing enough information for scheduling.
Inspired by [16], we solve this problem by constructing a power quotient system $S_{/P}$ based on an adequately designed equivalence relation $P$ defined over the state set $X$ of $S$. The constructed systems $S_{/P}$ are semantically equivalent to timed automata, which can be used for automatic scheduler design [15].
In particular, the system $S_{/P}$ to be constructed is as follows:
$$S_{/P} = (X_{/P}, X_{/P,0}, U_{/P}, \longrightarrow_{/P}, Y_{/P}, H_{/P}), \qquad (15)$$
where
• $X_{/P} = \mathbb{R}^{n_x}/P := \{R_1, \cdots, R_q\}$;
• $X_{/P,0} = \mathbb{R}^{n_x}/P$;
• $(x_{/P}, x'_{/P}) \in \longrightarrow_{/P}$ if $\exists x \in x_{/P}$, $\exists x' \in x'_{/P}$ such that $\xi_x(H(x)) = x'$;
• $Y_{/P} \subset 2^Y \subset I_{\mathbb{N}^+}$;
• $H_{/P}(x_{/P}) = \left[\min_{x \in x_{/P}} H(x),\; \max_{x \in x_{/P}} H(x)\right] := \left[\underline{k}_{x_{/P}}, \bar{k}_{x_{/P}}\right]$.
$S_{/P}$ is a finite-state system. Compared with the power quotient system constructed in [16], a main difference is that, since we focus on PETC, there is no timing uncertainty.
IV. CONSTRUCTION OF THE QUOTIENT SYSTEM
A. State set
From the results in [8], we remark the following fact:
Remark 4.1: When $w = 0$, excluding the origin, all the states that lie on a line going through the origin have an identical triggering behavior.
We also make the following assumption:
Assumption 4.2: The perturbation $w$ satisfies $w \in L_2$ and $w \in L_\infty$. Besides, an upper bound $W > 0$ for $\|w\|_{L_\infty}$, i.e. $\|w\|_{L_\infty} \leq W$, is assumed to be known.
Based on Remark 4.1 and Assumption 4.2, we propose a state-space partition as follows:
$$R_{s_1,s_2} = \left\{x \in \mathbb{R}^{n_x} \;\middle|\; \bigwedge_{i=1}^{n_x-1} x^T \Xi_{s_1,(i,i+1)} x \geq 0 \;\wedge\; W_{s_2-1} \leq |x| < W_{s_2}\right\}, \qquad (16)$$
where $s_1 \in \{1, \cdots, q_1\}$, $s_2 \in \{1, \cdots, q_2\}$, and $q_1, q_2 \in \mathbb{N}$ are pre-designed scalars. $\Xi_{s_1,(i,j)}$ is a constructed matrix; $\{W_i \mid i \in \{0, \cdots, q_2\}\}$ is a sequence of scalars. Note that $W_0 = 0$, $W_{q_2} = +\infty$, and the remaining $W_{s_2}$ are bounded and lie strictly between $0$ and $+\infty$. It is obvious that $\mathbb{R}^{n_x} = \bigcup_{s_1 \in \{1,\cdots,q_1\},\, s_2 \in \{1,\cdots,q_2\}} R_{s_1,s_2}$.
This state-space partition combines a partition of the state space into finitely many polyhedral cones (named isotropic covering in [8]) with a partition into finitely many homocentric spheres. From (16) we can see that the isotropic covering describes the relation between the entries of the state vector, while the homocentric spheres capture the relation between the norm of the state vector and the $L_\infty$-norm of the perturbation, as will be shown later in Theorem 4.4. If $w = 0$, the homocentric spheres can be omitted. Details on the isotropic covering can be found in the Appendix; Figure 1 shows a 2-dimensional example. A membership test for the regions in (16) is sketched below.
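The sketch is our own illustration and assumes the matrices $\Xi$ and the radii $W$ have already been constructed:

```python
import numpy as np

def in_region(x, Xi_list, W_lo, W_hi):
    """Membership test for R_{s1,s2} in (16)."""
    # Cone part: every pairwise quadratic form must be non-negative.
    if any(x @ Xi @ x < 0 for Xi in Xi_list):
        return False
    # Spherical shell part (W_hi may be np.inf for the outermost shell).
    return W_lo <= np.linalg.norm(x) < W_hi

# Example with one 2-dimensional cone (x1 * x2 >= 0) and the shell [1, 10).
Xi = np.array([[0.0, 0.5], [0.5, 0.0]])
print(in_region(np.array([1.0, 2.0]), [Xi], 1.0, 10.0))  # True
```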
B. Output map
We first free the system dynamics from the uncertainty brought by the perturbation.
Lemma 4.3: Consider the system (3)-(7) and (9), and suppose Assumption 4.2 holds. If there exist a scalar $\mu \geq 0$ and a symmetric matrix $\Psi$ such that $(Q_1 + \Psi)_1 \preceq \mu I$, then $\hat{k}_x$ generated by (13) is lower bounded by:
$$k_x^0 := \inf\{k \in \mathbb{N}^+ \mid \Phi(k) \npreceq 0\}, \qquad (17)$$
where
$$Q_1 + \Psi = \begin{bmatrix} (Q_1 + \Psi)_1 & (Q_1 + \Psi)_2 \\ (Q_1 + \Psi)_3 & (Q_1 + \Psi)_4 \end{bmatrix}, \quad (Q_1 + \Psi)_1 \in \mathbb{R}^{n_p \times n_p},$$
$$\Phi(k) := \begin{bmatrix} \Phi_1(k) & \Phi_2(k) & 0 \\ \Phi_2^T(k) & -\Psi & 0 \\ 0 & 0 & \Phi_3(k) \end{bmatrix}, \qquad (18)$$
$$\begin{aligned} \Phi_1(k) &= M^T(k) Q_1 M(k) + M^T(k) Q_2 C_E + C_E^T Q_3 M(k) + C_E^T Q_4 C_E, \\ \Phi_2(k) &= M^T(k) Q_1 + C_E^T Q_3, \\ \Phi_3(k) &= kh\mu\lambda_{\max}(E^T E)\, d_{A_p}(k), \end{aligned} \qquad (19)$$
with $Q_3 := Q_2^T$, and
$$d_{A_p}(k) = \begin{cases} \dfrac{e^{kh\lambda_{\max}(A_p + A_p^T)} - 1}{\lambda_{\max}(A_p + A_p^T)}, & \text{if } \lambda_{\max}(A_p + A_p^T) \neq 0, \\ kh, & \text{if } \lambda_{\max}(A_p + A_p^T) = 0. \end{cases}$$
Next we construct LMIs that bridge Lemma 4.3 and the state-space partition.
Theorem 4.4 (Regional lower bound): Consider a scalar $\underline{k}_{s_1,s_2} \in \mathbb{N}$ and regions with $s_2 > 1$. If all the hypotheses in Lemma 4.3 hold and there exist scalars $\varepsilon_{k,(s_1,s_2),(i,i+1)} \geq 0$, $i \in \{1, \cdots, n_x - 1\}$, such that for all $k \in \{0, \cdots, \underline{k}_{s_1,s_2}\}$ the following LMIs hold:
$$\begin{bmatrix} H & \Phi_2(k) \\ \Phi_2^T(k) & -\Psi \end{bmatrix} \preceq 0, \qquad (20)$$
where
$$H = \Phi_1(k) + \Phi_3(k) W^2 W_{s_2-1}^{-2} I + \sum_{i \in \{1,\cdots,n_x-1\}} \varepsilon_{k,(s_1,s_2),(i,i+1)} \Xi_{s_1,(i,i+1)},$$
with $\Phi_1(k)$, $\Phi_2(k)$, and $\Phi_3(k)$ defined in (19) and $\Psi$ from Lemma 4.3, then the inter-event times (9) for the system (3)-(7) are regionally bounded from below by $(\underline{k}_{s_1,s_2} + 1)h$. For the regions with $s_2 = 1$, the regional lower bound is $h$.
Remark 4.5: In Theorem 4.4, we distinguish the situations $s_2 > 1$ and $s_2 = 1$, since for all regions with $s_2 > 1$ it holds that $W_{s_2-1} \neq 0$, while for all regions with $s_2 = 1$, $W_{s_2-1} = 0$ holds. When $W_{s_2-1} \neq 0$, one can easily validate the feasibility of the LMI (20); when $W_{s_2-1} = 0$, $H$ has infinite diagonal entries, making the LMI (20) infeasible for $k > 0$. However, according to the property of PETC, i.e. $t_{b+1} \in T_s$ and $t_{b+1} > t_b$, the regional lower bound exists and is equal to $h$.
Following similar ideas to Theorem 4.4, we next present lower and upper bounds starting from each state partition when $w = 0$. Consider the following event condition:
$$k_x = \inf\left\{k \in \mathbb{N}^+ \;\middle|\; \begin{bmatrix} M(k)x \\ C_E x \end{bmatrix}^T Q \begin{bmatrix} M(k)x \\ C_E x \end{bmatrix} > 0\right\}. \qquad (21)$$
Remark 4.6: Since (21) does not consider perturbations, when computing the lower and upper bound for each region, according to Remark 4.1, applying only the isotropic covering is enough.
We define $R_{s_1,\bullet}$ to represent $R_{s_1,s_2}$, $\forall s_2 \in \{1, \cdots, q_2\}$.
Corollary 4.7 (Regional lower bound when $w = 0$): Consider a scalar $\underline{k}_{s_1,\bullet} \in \mathbb{N}$. If there exist scalars $\varepsilon_{k,s_1,(i,i+1)} \geq 0$, $i \in \{1, \cdots, n_x - 1\}$, such that for all $k \in \{0, \cdots, \underline{k}_{s_1,\bullet}\}$ the following LMIs hold:
$$\Phi_1(k) + \sum_{i \in \{1,\cdots,n_x-1\}} \varepsilon_{k,s_1,(i,i+1)} \Xi_{s_1,(i,i+1)} \preceq 0, \qquad (22)$$
with $\Phi_1(k)$ defined in (19), then the inter-event times (8) of the system (3)-(7) with $w = 0$ are regionally bounded from below by $(\underline{k}_{s_1,\bullet} + 1)h$.
Corollary 4.8 (Regional upper bound when $w = 0$): Let $\bar{l} \in \mathbb{N}$ be a large enough scalar. Consider a scalar $\bar{k}_{s_1,\bullet} \in \{\underline{k}_{s_1,\bullet}, \cdots, \bar{l}\}$. If there exist scalars $\bar{\varepsilon}_{k,s_1,(i,i+1)} \geq 0$, $i \in \{1, \cdots, n_x - 1\}$, such that for all $k \in \{\bar{k}_{s_1,\bullet}, \cdots, \bar{l}\}$ the following LMIs hold:
$$\Phi_1(k) - \sum_{i \in \{1,\cdots,n_x-1\}} \bar{\varepsilon}_{k,s_1,(i,i+1)} \Xi_{s_1,(i,i+1)} \succ 0, \qquad (23)$$
with $\Phi_1(k)$ defined in (19), then the inter-event times (8) of the system (3)-(7) with $w = 0$ are regionally bounded from above by $\bar{k}_{s_1,\bullet} h$.
Remark 4.9: For the choice of $\bar{l}$, we follow Remark 2 in [16] and apply a line search: increase $\bar{l}$ until $\Phi_1(\bar{l}) \succ 0$. This results in $\bar{l}$ being a global upper bound for the inter-event times (8) of the system (3)-(7) with $w = 0$.
It is obvious that $\bar{l} \geq \bar{k}_{s_1,\bullet} > \underline{k}_{s_1,\bullet} \geq \underline{k}_{s_1,s_2}$, $\forall s_2$. We can now set the regional MAEI $\bar{\tau}_{R(\xi(t_b))}$ in (9) as $\bar{\tau}_{R(\xi(t_b))} := \bar{k}_{s_1,\bullet} h$, $\forall x \in R_{s_1,\bullet}$.
C. Transition relation
In this subsection, we discuss the construction of the transition relation and of the reachable state sets. Denote the initial state set by $X_{0,(s_1,s_2)}$; after the $k$-th sample without an update, the reachable state set is denoted by $X_{k,(s_1,s_2)}$. According to (11), the following relation holds:
$$X_{k,(s_1,s_2)} = M(k) X_{0,(s_1,s_2)} + \Theta(k). \qquad (24)$$
It is obvious that $X_{k,(s_1,s_2)}$ cannot be computed directly, because the perturbation is uncertain and the state region may not be convex. Therefore, we aim to find sets $\hat{X}_{k,(s_1,s_2)}$ such that:
$$X_{k,(s_1,s_2)} \subseteq \hat{X}_{k,(s_1,s_2)}.$$
To compute $\hat{X}_{k,(s_1,s_2)}$, we take the following steps:
1) Partition the dynamics: According to (24), $\hat{X}_{k,(s_1,s_2)}$ can be computed by:
$$\hat{X}_{k,(s_1,s_2)} = \hat{X}^1_{k,(s_1,s_2)} \oplus \hat{X}^2_{k,(s_1,s_2)},$$
where $\oplus$ is the Minkowski addition, and $\hat{X}^1_{k,(s_1,s_2)}$ and $\hat{X}^2_{k,(s_1,s_2)}$ are sets to be computed.
2) Compute $\hat{X}^1_{k,(s_1,s_2)}$: One can compute $\hat{X}^1_{k,(s_1,s_2)}$ by:
$$\hat{X}^1_{k,(s_1,s_2)} = M(k) \hat{X}_{0,(s_1,s_2)},$$
where $\hat{X}_{0,(s_1,s_2)}$ is a polytope that over-approximates $X_{0,(s_1,s_2)}$, i.e. $X_{0,(s_1,s_2)} \subseteq \hat{X}_{0,(s_1,s_2)}$. $\hat{X}_{0,(s_1,s_2)}$ can be computed as in the optimization problem (1) in [2].
3) Compute $\hat{X}^2_{k,(s_1,s_2)}$: For the computation of $\hat{X}^2_{k,(s_1,s_2)}$, it follows that:
$$\hat{X}^2_{k,(s_1,s_2)} = \{x \in \mathbb{R}^{n_x} \mid |x| \leq |\Theta(k)|\},$$
where
$$|\Theta(k)| = \left|\int_0^{kh} e^{A_p(kh-s)} E w(s)\,ds\right| \leq \int_0^{kh} \left|e^{A_p(kh-s)} E w(s)\right| ds \leq \int_0^{kh} \left|e^{A_p(kh-s)}\right| ds\, |E|\, \|w\|_{L_\infty} \leq \int_0^{kh} e^{\lambda_{\max}\left(\frac{A_p^T + A_p}{2}\right)(kh-s)} ds\, |E|\, W,$$
in which the last inequality holds according to (2.2) in [24].
Figure 2. State-space partition and the labeling of each region.
Thus the reachable set $X_{\{\underline{k}_{s_1,s_2}, \cdots, \bar{k}_{s_1,\bullet}\},(s_1,s_2)}$ of the system (3)-(7), (9) starting from $X_{0,(s_1,s_2)}$ can be computed by:
$$X_{\{\underline{k}_{s_1,s_2}, \cdots, \bar{k}_{s_1,\bullet}\},(s_1,s_2)} \subseteq \hat{X}_{\{\underline{k}_{s_1,s_2}, \cdots, \bar{k}_{s_1,\bullet}\},(s_1,s_2)} = \bigcup_{k \in \{\underline{k}_{s_1,s_2}, \cdots, \bar{k}_{s_1,\bullet}\}} \hat{X}_{k,(s_1,s_2)}.$$
To compute the transitions in $S_{/P}$, one can check the intersection between the over-approximation of the reachable state set and all the state regions $R_{s'_1,s'_2}$, $\forall s'_1 \in \{1, \cdots, q_1\}$, $s'_2 \in \{1, \cdots, q_2\}$. More specifically, one can check whether the following feasibility problem holds for each state region:
$$R_{s'_1,s'_2} \cap \hat{X}_{\{\underline{k}_{s_1,s_2}, \cdots, \bar{k}_{s_1,\bullet}\},(s_1,s_2)} \neq \emptyset,$$
in which case $(R_{s_1,s_2}, R_{s'_1,s'_2}) \in \longrightarrow_{/P}$.
Figure 3. Computed result of the regional lower bound with W = 2.
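As a quick sanity check of a computed transition relation, the intersection test can also be approximated by sampling. The following Monte-Carlo sketch is our own illustration, not the polytopic computation above: it propagates sampled initial states, inflates them by the ball bounding $\Theta(k)$, and records every region hit. All callables (`M_of_k`, `theta_radius`, the membership predicates in `regions`) are assumed to be supplied by the user.

```python
import numpy as np

def sampled_transitions(x0_samples, k_range, M_of_k, theta_radius, regions, n_dirs=16):
    """Record region labels reachable from sampled initial states.

    regions: dict mapping a label (s1', s2') to a membership predicate;
    theta_radius(k): scalar bound on |Theta(k)| derived in step 3."""
    hits = set()
    for x in x0_samples:
        for k in k_range:
            xk = M_of_k(k) @ x
            r = theta_radius(k)
            for _ in range(n_dirs):
                # Random perturbation inside the ball of radius r around xk.
                d = np.random.randn(xk.size)
                d *= r / max(np.linalg.norm(d), 1e-12)
                for label, member in regions.items():
                    if member(xk + d):
                        hits.add(label)
    return hits
```

Note that sampling can only confirm non-empty intersections, so it under-approximates the transition relation; the feasibility check above remains the sound way to construct $\longrightarrow_{/P}$.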
D. Main result
Now we summarize the main result of the paper in the following theorem.
Theorem 4.10: The metric system $S_{/P} = (X_{/P}, X_{/P,0}, U_{/P}, \longrightarrow_{/P}, Y_{/P}, H_{/P})$ $\varepsilon$-approximately simulates $S$, where $\varepsilon = \max d_H(y, y')$, $y = H(x) \in Y$, $y' = H_{/P}(x') \in Y_{/P}$, $\forall (x, x') \in P$, and $d_H(\cdot, \cdot)$ is the Hausdorff distance.
V. NUMERICAL EXAMPLE
In this section, we consider a system employed in [13] and [22]. The plant is given by:
$$\dot{\xi}(t) = \begin{bmatrix} 0 & 1 \\ -2 & 3 \end{bmatrix} \xi(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} v(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} w(t),$$
and the controller is given by:
$$K = \begin{bmatrix} 1 & -4 \end{bmatrix}.$$
This plant is chosen since it is easy to show the feasibility of the presented theory in 2-dimensional plots. The state-space partition is presented in Figure 2. We set $W = 2$, the convergence rate $\rho = 0.01$, the $L_2$-gain $\gamma = 2$, the sampling time $h = 0.005$ s, and the event condition parameter $\sigma = 0.1$.
By checking the LMI presented in [13], we can see that a feasible solution exists; thus the stability and performance can be guaranteed. The regional lower bound computed by Theorem 4.4 is shown in Figure 3, and Figure 4 shows a zoomed-in version. The upper bound computed by Corollary 4.8 is shown in Figure 5. The resulting abstraction precision is $\varepsilon = 0.15$ s. The simulation results of the system evolution and event intervals with perturbations are shown in Figure 6. The upper bound triggered 6 events during the 10 s simulation. Note that increasing the number of subdivisions can lead to less conservative lower and upper bounds on the inter-event times. The conservativeness can also be reduced by decreasing $W$. The reachable state regions starting from each region are shown in Figure 7. As an example, the reachable state region of the initial region $(s_1, s_2) = (4, 6)$ is shown in Figure 8. We also present a simulation with $w = 0$. The lower bound is shown in Figure 9. The evolution of the system is shown in Figure 10, which shows that the inter-event intervals are within the computed bounds. The reachable state regions starting from each region are shown in Figure 11.
Figure 4. Zoomed-in result of the regional lower bound with W = 2.
Figure 5. Computed result of the regional upper bound with w = 0.
Figure 6. System evolution and event intervals when w = 2 sin(πt), t ∈ [3, 8]: state evolution and perturbation, event intervals with the bounds.
VI. CONCLUSION
In this paper, we presented a construction of a power quotient system for the traffic model of the PETC implementations from [13]. The constructed models can be used to estimate the next event time and the state set when the next event occurs. These models allow designing schedulers that improve the listening times of wireless communications and the medium access times, increasing the efficiency of energy consumption and bandwidth occupation.
Figure 7. Reachable regions starting from each state region, with labeling from Figure 2.
In this paper, we consider an output feedback system with a dynamic controller. However, the state partition is still based on the states of the system and controller, and the system state may not always be obtainable. Therefore, estimating the system state in an ETC implementation from output measurements is a very important extension to make this work more practical. The periodic asynchronous event-triggered control (PAETC) presented in [10] is an extension of PETC considering quantization. One can either treat the quantization error as part of the perturbations, or analyze it separately to increase the abstraction precision, since the dynamics of the quantization error depends on the states. This is also an interesting future investigation. Another interesting extension is the construction of traffic models for each sensor node to capture the local timing behavior in a decentralized PETC implementation, using either global or only local information.
Figure 8. Flow pipe of (s1, s2) = (4, 6): initial state set (red), reachable state set (blue), and reachable regions (cyan).
Figure 9. Computed result of the regional lower bound with w = 0.
Figure 10. System evolution and event intervals when w = 0: state evolution and event intervals vs computed bounds.
APPENDIX
Isotropic covering: Consider $x = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^T \in \mathbb{R}^n$. We first present the case when $x \in \mathbb{R}^2$. Let $\Theta = [-\frac{\pi}{2}, \frac{\pi}{2}[$ be an interval. Split this interval into $q$ sub-intervals and let $\Theta_s = [\underline{\theta}_s, \bar{\theta}_s[$ be the $s$-th sub-interval. Then for each sub-interval, one can construct a cone pointed at the origin:
$$R_s = \{x \in \mathbb{R}^2 \mid x^T \tilde{\Xi}_s x \geq 0\},$$
where
$$\tilde{\Xi}_s = \begin{bmatrix} -\sin\underline{\theta}_s \sin\bar{\theta}_s & \frac{1}{2}\sin(\underline{\theta}_s + \bar{\theta}_s) \\ \frac{1}{2}\sin(\underline{\theta}_s + \bar{\theta}_s) & -\cos\underline{\theta}_s \cos\bar{\theta}_s \end{bmatrix}.$$
Remark 4.1 shows that $x$ and $-x$ have the same behaviour; therefore it is sufficient to consider only half of the state space.
Figure 11. Reachable regions starting from each conic region, with labeling from Figure 2.
Now we derive the case when $x \in \mathbb{R}^n$, $n > 2$. Define $(x)_{i,j} = (x_i, x_j)$ as the projection of the point on its $i$-$j$ coordinates. Now a polyhedral cone $R_s$ can be defined as:
$$R_s = \left\{x \in \mathbb{R}^n \;\middle|\; \bigwedge_{i=1}^{n-1} (x)_{(i,i+1)}^T \tilde{\Xi}_{s,(i,i+1)} (x)_{(i,i+1)} \geq 0\right\},$$
where $\tilde{\Xi}_{s,(i,i+1)}$ is a constructed matrix. The relation between $\tilde{\Xi}_{s,(i,i+1)}$ and $\Xi_{s,(i,i+1)}$ from (16) is given by:
$$[\Xi_{s,(i,i+1)}]_{(i,i)} = [\tilde{\Xi}_{s,(i,i+1)}]_{(1,1)}, \quad [\Xi_{s,(i,i+1)}]_{(i,i+1)} = [\tilde{\Xi}_{s,(i,i+1)}]_{(1,2)}, \quad [\Xi_{s,(i,i+1)}]_{(i+1,i)} = [\tilde{\Xi}_{s,(i,i+1)}]_{(2,1)}, \quad [\Xi_{s,(i,i+1)}]_{(i+1,i+1)} = [\tilde{\Xi}_{s,(i,i+1)}]_{(2,2)}, \quad [\Xi_{s,(i,i+1)}]_{(k,l)} = 0 \text{ otherwise},$$
where $[M]_{(i,j)}$ is the $i$-th row, $j$-th column entry of the matrix $M$, and $(k, l)$ ranges over the remaining index pairs.
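The sector matrix is a direct transcription of the formula above; the following sketch is our own illustration, with a sanity check on a direction inside the sector:

```python
import numpy as np

def cone_matrix(theta_lo, theta_hi):
    """Xi_tilde_s for the planar sector [theta_lo, theta_hi)."""
    return np.array([
        [-np.sin(theta_lo) * np.sin(theta_hi), 0.5 * np.sin(theta_lo + theta_hi)],
        [0.5 * np.sin(theta_lo + theta_hi), -np.cos(theta_lo) * np.cos(theta_hi)],
    ])

# A direction at angle pi/8 lies inside the sector [0, pi/4) and
# therefore satisfies x^T Xi x >= 0.
Xi = cone_matrix(0.0, np.pi / 4)
x = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
assert x @ Xi @ x >= 0
```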
Proof of Lemma 4.3: We decouple the event triggering mechanism in (13) first:
$$\begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix}^T Q \begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix} = x^T \Phi_1(k) x + x^T \Phi_2(k)\Theta(k) + \Theta^T(k)\Phi_2^T(k)x + \Theta^T(k) Q_1 \Theta(k) \leq x^T\left(\Phi_1(k) + \Phi_2(k)\Psi^{-1}\Phi_2^T(k)\right)x + \Theta^T(k)(Q_1 + \Psi)\Theta(k), \qquad (25)$$
where the last inequality comes from Lemma 6.2 in [12]. Now for the uncertainty part, we have:
$$\Theta^T(k)(Q_1 + \Psi)\Theta(k) = \begin{bmatrix} \Theta_1(k) \\ 0 \end{bmatrix}^T \begin{bmatrix} (Q_1+\Psi)_1 & (Q_1+\Psi)_2 \\ (Q_1+\Psi)_3 & (Q_1+\Psi)_4 \end{bmatrix} \begin{bmatrix} \Theta_1(k) \\ 0 \end{bmatrix} = \Theta_1^T(k)(Q_1 + \Psi)_1 \Theta_1(k).$$
From the hypothesis of the lemma that there exists $\mu$ such that $(Q_1 + \Psi)_1 \preceq \mu I$, together with Jensen's inequality [12], inequality (2.2) in [24], and Assumption 4.2, i.e. $w \in L_\infty$, $\Theta^T(k)(Q_1 + \Psi)\Theta(k)$ can be bounded from above by:
$$\begin{aligned} \Theta^T(k)(Q_1+\Psi)\Theta(k) &= \Theta_1^T(k)(Q_1+\Psi)_1\Theta_1(k) \\ &\leq \mu\left(\int_0^{kh} e^{A_p(kh-s)}Ew(s)\,ds\right)^T \left(\int_0^{kh} e^{A_p(kh-s)}Ew(s)\,ds\right) && \text{(by } (Q_1+\Psi)_1 \preceq \mu I) \\ &\leq kh\mu\int_0^{kh} \left(e^{A_p(kh-s)}Ew(s)\right)^T\left(e^{A_p(kh-s)}Ew(s)\right)ds && \text{(by Jensen's inequality)} \\ &\leq kh\mu\int_0^{kh} e^{(kh-s)\lambda_{\max}(A_p+A_p^T)}\, w^T(s)E^TEw(s)\,ds && \text{(by (2.2) in [24])} \\ &\leq kh\mu\lambda_{\max}(E^TE)\int_0^{kh} e^{(kh-s)\lambda_{\max}(A_p+A_p^T)}ds\, \|w\|_{L_\infty}^2 && \text{(by } w \in L_\infty) \\ &= kh\mu\lambda_{\max}(E^TE)\, d_{A_p}(k)\, \|w\|_{L_\infty}^2. \end{aligned} \qquad (26)$$
With (26), (25) can be further bounded as:
$$\begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix}^T Q \begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix} \leq x^T\left(\Phi_1(k) + \Phi_2(k)\Psi^{-1}\Phi_2^T(k)\right)x + \Phi_3(k)\|w\|_{L_\infty}^2. \qquad (27)$$
From the hypothesis of the lemma, if $\Phi(k) \preceq 0$ holds, then by applying the Schur complement to (18), the following inequality holds:
$$x^T\left(\Phi_1(k) + \Phi_2(k)\Psi^{-1}\Phi_2^T(k)\right)x + \Phi_3(k)\|w\|_{L_\infty}^2 \leq 0,$$
which indicates:
$$\begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix}^T Q \begin{bmatrix} M(k)x + \Theta(k) \\ C_E x \end{bmatrix} \leq 0. \qquad (28)$$
Therefore, $\hat{k}_x$ generated by (13) is lower bounded by $k_x^0$ generated by (17). This ends the proof.
Proof of Theorem 4.4: We first consider the regions with $s_2 > 1$. If all the hypotheses of the theorem hold, by applying the Schur complement to (20), one has:
$$x^T\left(H + \Phi_2(k)\Psi^{-1}\Phi_2^T(k)\right)x \leq 0. \qquad (29)$$
From (16), applying the S-procedure, it holds that:
$$x^T\left(\Phi_1(k) + \Phi_3(k)W^2W_{s_2-1}^{-2}I + \Phi_2(k)\Psi^{-1}\Phi_2^T(k)\right)x \leq 0. \qquad (30)$$
From (16) we also have:
$$x^T x \geq W_{s_2-1}^2. \qquad (31)$$
Since $\Phi_3(k)$, $W$, and $W_{s_2-1}$ are non-negative scalars and $W_{s_2-1} > 0$, we have the following inequality:
$$x^T\Phi_3(k)W^2W_{s_2-1}^{-2}Ix = \Phi_3(k)W^2W_{s_2-1}^{-2}\,x^Tx \geq \Phi_3(k)W^2W_{s_2-1}^{-2}W_{s_2-1}^2 = \Phi_3(k)W^2 \geq \Phi_3(k)\|w\|_{L_\infty}^2, \qquad (32)$$
in which the last inequality comes from the definition of $W$. Now inserting (32) into (30) results in:
$$x^T\left(\Phi_1(k) + \Phi_2(k)\Psi^{-1}\Phi_2^T(k)\right)x + \Phi_3(k)\|w\|_{L_\infty}^2 \leq 0,$$
which, together with applying the Schur complement to (18), provides the regional lower bound. When $s_2 = 1$ and $k > 0$, $H$ has infinite diagonal entries, so the LMI (20) is infeasible. According to the event-triggered condition (9), which indicates that $t_{b+1} \in T_s$ and $t_{b+1} > t_b$, the regional lower bound for those regions with $s_2 = 1$ is $h$. This finishes the proof.
Proof of Corollary 4.7: The result can be easily obtained from Theorem 4.4 considering $E = 0$.
Proof of Corollary 4.8: The result can be obtained analogously to Theorem 4.4 considering $E = 0$: if all the hypotheses of this Corollary hold, then according to (23), $\Phi_1(k) \succ 0$ for $k \in \{\bar{k}_{s_1,\bullet}, \cdots, \bar{l}\}$. According to the definition of $\Phi_1(k)$ in (19), for all $k \geq \bar{k}_{s_1,\bullet}$ it holds that:
$$\begin{bmatrix} M(k)x \\ C_E x \end{bmatrix}^T Q \begin{bmatrix} M(k)x \\ C_E x \end{bmatrix} > 0,$$
which together with the event condition (21) provides the regional upper bound.
Proof of Theorem 4.10: The result follows from Lemma 2.7 and the construction described in this section.
REFERENCES
[1] Alongkrit Chutinan and Bruce H Krogh. Computing polyhedral approximations to flow pipes for dynamic systems. In Decision and Control,
1998. Proceedings of the 37th IEEE Conference on, volume 2, pages
2089–2094. IEEE, 1998.
[2] Alongkrit Chutinan and Bruce H Krogh. Computational techniques for
hybrid system verification. IEEE transactions on automatic control,
48(1):64–75, 2003.
[3] Marieke BG Cloosterman, Laurentiu Hetel, Nathan Van de Wouw,
WPMH Heemels, Jamal Daafouz, and Henk Nijmeijer. Controller
synthesis for networked control systems. Automatica, 46(10):1584–
1594, 2010.
[4] Marieke BG Cloosterman, Nathan Van de Wouw, WPMH Heemels,
and Hendrik Nijmeijer. Stability of networked control systems with
uncertain time-varying delays. IEEE Transactions on Automatic Control,
54(7):1575–1580, 2009.
10
[5] MCF Donkers and WPMH Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering. Automatic Control, IEEE Transactions on, 57(6):1362–1376, 2012.
[6] MCF Donkers, WPMH Heemels, Nathan Van de Wouw, and Laurentiu
Hetel. Stability analysis of networked control systems using a switched
linear systems approach. IEEE Transactions on Automatic control,
56(9):2101–2115, 2011.
[7] Günter Ewald. Combinatorial convexity and algebraic geometry, volume
168. Springer Science & Business Media, 2012.
[8] Christophe Fiter, Laurentiu Hetel, Wilfrid Perruquetti, and Jean-Pierre
Richard. A state dependent sampling for linear state feedback. Automatica, 48(8):1860–1867, 2012.
[9] Christophe Fiter, Laurentiu Hetel, Wilfrid Perruquetti, and Jean-Pierre Richard. A robust stability framework for LTI systems with time-varying sampling. Automatica, 54:56–64, 2015.
[10] Anqi Fu and Manuel Mazo Jr. Periodic asynchronous event-triggered
control. CoRR, abs/1703.10073, 2017.
[11] Rob H Gielen, Sorin Olaru, Mircea Lazar, WPMH Heemels, Nathan
van de Wouw, and S-I Niculescu. On polytopic inclusions as a
modeling framework for systems with time-varying delays. Automatica,
46(3):615–619, 2010.
[12] Keqin Gu, Jie Chen, and Vladimir L Kharitonov. Stability of time-delay
systems. Springer Science & Business Media, 2003.
[13] WPMH Heemels, MCF Donkers, and Andrew R Teel. Periodic event-triggered control for linear systems. Automatic Control, IEEE Transactions on, 58(4):847–861, 2013.
[14] Laurentiu Hetel, Jamal Daafouz, and Claude Iung. Stabilization of
arbitrary switched linear systems with unknown time-varying delays.
IEEE Transactions on Automatic Control, 51(10):1668–1674, 2006.
[15] Arman Sharifi Kolarijani, Dieky Adzkiya, and Manuel Mazo. Symbolic
abstractions for the scheduling of event-triggered control systems. In
Decision and Control (CDC), 2015 IEEE 54th Annual Conference on,
pages 6153–6158. IEEE, 2015.
[16] Arman Sharifi Kolarijani and Manuel Mazo Jr. A formal traffic characterization of LTI event-triggered control systems. IEEE Transactions on Control of Network Systems, 2016.
[17] Manuel Mazo Jr. and Ming Cao. Asynchronous decentralized event-triggered control. Automatica, 50(12):3197–3203, 2014.
[18] Manuel Mazo Jr and Anqi Fu. Decentralized event-triggered controller
implementations. Event-Based Control and Signal Processing, page 121,
2015.
[19] Manuel Mazo Jr. and Paulo Tabuada. Decentralized event-triggered
control over wireless sensor/actuator networks. Automatic Control, IEEE
Transactions on, 56(10):2456–2461, 2011.
[20] Joëlle Skaf and Stephen Boyd. Analysis and synthesis of state-feedback
controllers with timing jitter. IEEE Transactions on Automatic Control,
54(3):652–657, 2009.
[21] Young Soo Suh. Stability and stabilization of nonuniform sampling
systems. Automatica, 44(12):3222–3226, 2008.
[22] Paulo Tabuada. Event-triggered real-time scheduling of stabilizing
control tasks. Automatic Control, IEEE Transactions on, 52(9):1680–
1685, 2007.
[23] Paulo Tabuada. Verification and control of hybrid systems: a symbolic
approach. Springer Science & Business Media, 2009.
[24] Charles Van Loan. The sensitivity of the matrix exponential. SIAM
Journal on Numerical Analysis, 14(6):971–981, 1977.
[25] Xiaofeng Wang and Michael D Lemmon. Event-triggering in distributed
networked control systems. Automatic Control, IEEE Transactions on,
56(3):586–601, 2011.
[26] Xiaofeng Wang and Michael D Lemmon. On event design in event-triggered feedback systems. Automatica, 47(10):2319–2322, 2011.
Accurate Brain Extraction using Active Shape Model and Convolutional Neural Networks
Nguyen Ho Minh Duy (a), Nguyen Manh Duy (a), Mai Thanh Nhat Truong (b), Pham The Bao (a), Nguyen Thanh Binh (a,*)
(a) Department of Mathematics and Computer Science, Ho Chi Minh City University of Science, Ho Chi Minh City, Viet Nam
(b) Department of Electrical, Electronic and Control Engineering, Hankyong National University, Anseong, Gyeonggi, Republic of Korea
Abstract
Brain extraction or skull stripping is a fundamental procedure in most of neuroimaging processing systems.
The performance of this procedure has had a critical impact on the success of neuroimaging analysis. After
several years of research and development, brain extraction still remains a challenging problem. Brain
morphology and intensity characteristics are variable and complex, usually because of the variability in
conditions of image data acquisition, or abnormalities in data such as tumor regions. These difficulties
prevent brain extraction methods from producing acceptable results. In this paper, we propose an effective
method for skull stripping in Magnetic Resonance Imaging (MRI) scans named ASM-CNN. Our system is a
combination of Active Shape Model (ASM) and Convolutional Neural Network (CNN), taking full advantage
of these two methods to achieve remarkable results. Instead of working with 3D structures, we process 2D
image sequences in sagittal plane. First, we divide images into different groups such that, in each group,
the shapes and structures of brain boundaries have similar appearances. This allows developing precise
algorithms for each group in order to produce high performance segmentation results. Second, a modified
version of ASM is used to detect the brain boundary in images by utilizing prior knowledge of each group.
Finally, CNN and the post-processing methods such as Conditional Random Field, Gaussian Process and
some special rules are applied to refine segmentation contour produced by ASM. We compared ASM-CNN
with the latest version of five state-of-the-art, publicly available methods, namely BET, BSE, 3DSS, ROBEX
and BEAST. The evaluation was carried out using three public datasets: IBSR, LPBA, and OASIS. The experimental results show that the proposed method outperforms the five state-of-the-art algorithms, surpassing all the other methods by a significant margin in all experiments.
Keywords: skull stripping, brain extraction, convolutional neural network, active shape model, conditional
random field, gaussian process
∗ Corresponding
author
Email address: ngtbinh@hcmus.edu.vn (Nguyen Thanh Binh)
Preprint submitted to arXiv
1. Introduction
Whole brain segmentation is the problem of extracting brain regions from volumetric data, such as
Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans. The result of this process is a segmentation map indicating brain regions after removing non-brain tissue such as eyes, fat, bone, marrow, and dura. Brain extraction is the first step in most neuroimaging analysis pipelines, which usually consist
of brain tissue classification and volumetric measurement [1], template construction [2], and cortical and
sub-cortical surface analysis [3]. Early preprocessing steps such as bias field correction can also benefit from
brain extraction [4]. Therefore, there is a need for high performance brain extraction methods that can
produce accurate segmentation results.
Automatic skull stripping is a solution to replace manual brain delineation for the purpose of reducing
processing time and preventing any kind of human bias in the results. This is especially true in large scale
studies, where thousands of images with different characteristics and significant anatomical variations are
examined. Most skull stripping methods are optimized and validated for MRI T1-weighted images, since
high resolution T1-weighted structural images are prevalent in clinical studies [5]. Furthermore, T1-weighted images provide excellent contrast between different brain tissues, making this modality the leading imaging standard for volumetric measurements [6]. However, segmentation on T1-weighted data is generally a daunting task due to the complex nature of the images (ill-defined boundaries, low contrast) and the lack of a standard for image intensity. Several methods have been proposed in recent years; however, these methods show good performance only on certain datasets and produce low quality results when the acquisition conditions or study populations change [5].
Existing brain extraction methods can be divided into four categories, namely edge-based, template-based, label fusion with atlas-based, and non-local patch based [6]. Edge-based methods focus on detecting edges between brain and non-brain regions by considering differences in appearance of these structures. Several techniques have been employed, such as watershed [7], level set [8], histogram analysis [9], morphological filtering [10], region growing [11], edge detection [12], graph cuts [13], and convolutional neural networks [14]. Although these methods have proven their effectiveness and achieved comparable results, their performance tends to be less accurate when working with pathology, different sites, scanners, and imaging acquisition protocols.
Template-based methods register the subject to a template via affine transform [15] or deformable models
[16] to create an initial estimate for the brain mask. After that, the boundary of brain mask is segmented
again by a classifier, which helps increase the accuracy of final result. Templates can involve one or more
distinctive atlases. Template-based methods are robust, stable in different conditions and highly accurate.
Label fusion with atlas-based techniques, such as Multi-Atlas Propagation and Segmentation (MAPS)
[17], Advanced Normalization Tools (ANTs) [18], and Pincram [19], implement registration of multiple atlases
to a target subject by using deformable models. After being registered to the target space, the brain masks in
all atlases are combined together by using Simultaneous Truth And Performance Level Estimation (STAPLE)
[20] or joint label fusion. Because the main operation of these approaches is registration, their performance depends on the accuracy of registration and the quality of the brain mask in each atlas. In addition, representing the variability in brain anatomy usually requires a large number of atlases; hence these methods are usually time-consuming and computationally intensive.
The last category is non-local patch based methods. First, these methods transform atlases to the subject space by using affine registration in order to estimate the initial brain mask of the subject. Then, a neighborhood searching process is performed for each small patch around the initial estimate of the brain boundary. Patches are derived from the registered atlases and located within the neighborhood of the target patch. The patches are then associated together and similarity weights are computed to generate the final brain mask. Several methods such as Brain Extraction using non-local Segmentation Technique (BEAST) [21] and Multi-cONtrast brain STRipping (MONSTR) [6] are inspired by these ideas and achieve remarkable performance in terms of both accuracy and robustness. However, one difficulty of this approach is pinpointing optimal settings for parameters such as the number of atlases or the window size. Additionally, in research related to disease studies, the brain regions in MRI scans usually contain lesion tissues. Therefore, atlases from T1-weighted images may not be optimal for detecting brain boundaries, since intensity values of structures such as hemorrhages, tumors, or lesions may be similar to those of non-brain tissues. To overcome this problem, non-local based methods use different MR acquisition protocols to obtain complementary information. As a result, it is a complex task and requires more processing time.
In this research, we present a novel approach for brain extraction in T1-weighted MRI data. Our system
is a combination of Active Shape Model (ASM) [22] and Convolutional Neural Network (CNN) [23], hence the
name ASM-CNN. Unlike existing methods, our approach considers brain extraction as a segmentation task for 2D image sequences in the sagittal plane instead of working with the 3D structure. This approach has several benefits. First, it allows developing specific algorithms when dealing with images which have different brain boundary silhouettes. Along image sequences, the shapes of brain boundaries and brain sizes vary significantly. Especially for sagittal slices located at the beginning and ending parts of 3D MRI volumes, the brain regions are small and the boundaries are very complex. Based on prior knowledge about brain structures, we developed specific rules representing the relationships between brain tissues. These rules were applied directly to control the segmentation process effectively. Second, images in the sagittal plane are symmetric across the two hemispheres. By utilizing this property, we are able to predict the general shape of the brain mask based on the positions of slices. This property also enables us to establish more extensive and accurate rules for segmentation.
ASM-CNN comprises three main stages: dividing images into groups, applying ASM to detect the brain boundaries, and finally using CNN together with post-processing methods based on Conditional Random Field (CRF), Gaussian Process (GP) and special rules to refine the segmentation contour. In the first stage, all images in the sagittal plane are divided into three groups, namely groups I, II, and III. The number of groups was decided according to our analysis of the boundaries of brain masks. The division of images is performed by a Support Vector Machine (SVM) classifier and a set of rates learned from data, for the purpose of determining the position of each group. In groups II and III, images have similar shapes and structures, while group I consists of images which have small brain sizes and complex structures. In the next stage, ASM is applied to groups II and III to estimate initial contours for the brain mask. After that, these contours are refined using CNN before being fed into post-processing based on CRF to obtain the final segmentation. For group I, because the shapes of the brain are complex and the sizes of brain regions are smaller than those in the other groups, we take the segmentation result of the slice in group II which is right next to the beginning position of group I and use it as an initial brain mask. This brain mask is then fed into CNN for further analysis with specific rules and GP to produce high quality results. The flowchart of our system is given in Figure 1. The main contributions of the proposed method are:
1. Our approach is based on processing 2D images, which helps increase accuracy in segmenting small-sized brain regions while preserving stable performance in overall segmentation.
2. A CNN with high-level feature representations is utilized to refine voxels around brain boundaries. In addition, the global spatial information of each voxel is combined with features from the CNN by using deep neural networks. In this way, the proposed method is able to represent global and local features simultaneously.
3. ASM is applied to the different groups. The similarity of brain contours in each group allows ASM to detect the general properties of boundaries, guaranteeing the geometric attributes of the object and thus enhancing the accuracy of segmentation results.
4. Finally, our framework does not depend on any specific MRI format, so it can be applied effectively in various acquisition conditions.
The remainder of this paper is organized as follows. In Section 2, we introduce several notations and present the design and algorithms of our system. In Section 3, we report the experimental results, where the proposed method was applied to three public datasets, namely IBSR, LPBA, and OASIS, and compared with five state-of-the-art brain extraction methods. Section 4 gives further analysis regarding important issues of the proposed method and discussions for future research. The conclusions of this research are presented in Section 5.
Figure 1: Flowchart of proposed brain extraction algorithm.
Figure 2: The illustration of the order of three groups in Sagittal plane.
2. Methodology
In neuroimaging, a three-plane coordinate system is used to describe the standard anatomical position of the human body. These imaging planes are the transverse plane, sagittal plane, and coronal plane. The transverse plane is an X-Z plane, parallel to the ground. The sagittal plane is a Y-Z plane, perpendicular to the transverse plane, separating the left side and the right side. The coronal plane is a Y-X plane, perpendicular to the sagittal plane, separating the front and the back. In our method, we choose to process 2D sagittal slices because human brains are nearly symmetrical with respect to the mid-sagittal plane. This property enables us to predict the general shapes of brain regions more accurately. In the following subsections, we describe the three main stages of the proposed method for brain extraction.
2.1. Dividing sagittal slices into groups
The first stage of the algorithm is dividing sagittal slices into groups. The rules for division are based on our analysis of the shapes of brain regions. Slices whose brain regions have similar shapes will be put in the same group. Figure 3 shows some images in group I: the areas of the brains in this group are small compared to the sizes of the images. Figures 4 and 5 illustrate images in groups II and III, respectively. Brain sizes in these groups are relatively large, and in group III, brain regions extend out at the lower left corner of the images. Due to the symmetry of the brain, from the first to the last sagittal slice, the assigned groups of slices run from group I to group II, then group III, to group II again and then group I. Figure 2 depicts how the three groups are divided along the sagittal axis. The main goal of this stage is to group images which have similar brain shapes for the purpose of maximizing the performance of ASM in the next stage.
Before presenting our method in this stage, we introduce the notations used to describe the algorithms:
• P = {p1 , p2 , ..., pm }: the set of people in each dataset, pi (1 ≤ i ≤ m) is the ith person.
• N = {n1 , n2 , ..., nm }: the set indicating the numbers of 2D images of the people in P , where nj (1 ≤ j ≤ m) is the number of 2D images of pj .
6
• Ipj = {Ipj 1 , Ipj 2 , ..., Ipj nj }: the set of images of person pj where Ipj k (1 ≤ k ≤ nj ) is the kth image of
pj .
• G = {G1 , G2 , G3 }: image groups where Gq (1 ≤ q ≤ 3) is the set of images in group q for all people.
• G1ph = {G1ph 1 , G1ph 2 , ..., G1ph a },
G2ph = {G2ph 1 , G2ph 2 , ..., G2ph b },
G3ph = {G3ph 1 , G3ph 2 , ..., G3ph c },
represent the set of images in group I, group II, and group III respectively for person ph where a+b+c =
nh .
• M12 , M23 : the models trained by SVM for classify images between group I and group II, group II and
group III.
• R1 , R2 , R3 , R4 : the rates used for estimating the positions of slices in each group.
In this stage, we train an SVM classifier for the dividing task (Algorithm 1), in which the feature used for
training is the histogram of oriented gradients (HOG) [24] extracted from a sub-rectangle. The sub-rectangle
in Algorithm 1 is illustrated in Figure 6, which indicates the differences between brain shapes in different
groups. Figures 6a and 6b are two adjacent slices from group II and group I, respectively. As shown in
this figure, the brain in 6a has a small tail at the lower left part, while the brain in 6b does not. It is
apparent that the brain shapes are significantly different even though the slices are adjacent. Similarly, 6c
and 6d are two adjacent slices from, respectively, group II and group III. The brain shape in group II is more
convex, while that in group III is concave at the lower right part. We utilize these differences to maximize
the classification performance of SVM. Algorithm 2 is used to divide sagittal slices of any MRI volumes into
groups, using the SVM model produced by Algorithm 1. In the next stage, we use ASM to estimate initial
contours for brain masks of slices in group II and group III. For group I, because the shapes of brain are
complex and the sizes of brain regions are smaller than that in other groups, we need specific algorithms for
this group. The algorithms for brain extraction in group I are detailed in subsection 2.4.
Figure 3: Typical appearances of slices in group I.
7
Algorithm 1: Training classifiers for dividing images to groups
Input: P
Output: M12, M23, R1, R2, R3, R4
foreach person pj ∈ P do
    select k1, k2, k3, k4 such that:
        A = {Ipj,1, Ipj,2, ..., Ipj,k1} ∈ G1pj
        B = {Ipj,k1+1, Ipj,k1+2, ..., Ipj,k2} ∈ G2pj
        C = {Ipj,k2+1, Ipj,k2+2, ..., Ipj,k3} ∈ G3pj
        D = {Ipj,k3+1, Ipj,k3+2, ..., Ipj,k4} ∈ G2pj
    insert Gx,pj into vector Gx (x ∈ {1, 2, 3})
    insert ky/nj into vector Ratey (y ∈ {1, 2, 3, 4})
end
foreach group Gi ∈ G do
    select constant k
    foreach image I ∈ Gi do
        Rec ← extract the rectangle containing the skull of I using Algorithm 5
        SubRec ← extract the sub-rectangle from Rec with k
        feature ← calculate the HOG feature for I in SubRec
        insert feature into vector Fi
    end
end
M12 ← create model using the SVM algorithm, F1, F2
M23 ← create model using the SVM algorithm, F2, F3
Rid ← mean(Rateid), id ∈ {1, 2, 3, 4}
return M12, M23, R1, R2, R3, R4
8
Algorithm 2: Dividing images to three groups
Input: M12, M23, R1, R2, R3, R4, Ipj
Output: G1pj, G2pj, G3pj
Ri,new ← Ri · nj (i ∈ {1, 2, 3, 4}), where nj is the number of images of pj
for id ← R1,new − 10 to R1,new + 10 do
    if M12(Ipj,id) == 0 then
        insert A = {Ipj,1, Ipj,2, ..., Ipj,id−1} into vector G1pj
        R1,new ← id − 1
        break
    end
end
for id ← R2,new − 10 to R2,new + 10 do
    if M23(Ipj,id) == 0 then
        insert B = {Ipj,R1,new+1, Ipj,R1,new+2, ..., Ipj,id−1} into vector G2pj
        R2,new ← id − 1
        break
    end
end
for id ← R3,new − 10 to R3,new + 10 do
    if M23(Ipj,id) == 1 then
        insert C = {Ipj,R2,new+1, Ipj,R2,new+2, ..., Ipj,id−1} into vector G3pj
        R3,new ← id − 1
        break
    end
end
for id ← R4,new − 10 to R4,new + 10 do
    if M12(Ipj,id) == 1 then
        insert D = {Ipj,R3,new+1, Ipj,R3,new+2, ..., Ipj,id−1} into vector G2pj
        R4,new ← id − 1
        insert E = {Ipj,R4,new+1, Ipj,R4,new+2, ..., Ipj,nj} into vector G1pj
        break
    end
end
return G1pj, G2pj, G3pj
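For illustration, the HOG-plus-SVM training step can be sketched as follows. This is not the full pipeline of Algorithms 1-2: it assumes scikit-image and scikit-learn, and `slices`, `labels`, the crop coordinates, and the HOG cell sizes are hypothetical placeholders:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(slices, crop):
    """HOG descriptors from the sub-rectangle crop = (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = crop
    return np.array([
        hog(s[r0:r1, c0:c1], orientations=9,
            pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for s in slices
    ])

# Train one boundary classifier, e.g. M12 (group I vs. group II), on
# hypothetical slices/labels; M23 would be trained the same way.
X = hog_features(slices, crop=(0, 64, 0, 64))
clf = SVC(kernel="linear").fit(X, labels)
```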
9
Figure 4: Typical appearances of slices in group II.
Figure 5: Typical appearances of slices in group III.
Figure 6: Sub-rectangles and differences between brain shapes in different groups.
10
2.2. Active shape model with optimal feature
Recently, ASM has been applied successfully in several studies related to medical imaging [25, 26, 27]. In our system, we utilize the similarity of brain shapes of images in the same group by applying ASM with a specific model for each group, producing rough segmentations of brain regions which will be refined later. To enhance the modeling capacity of the gray-level variations around the border of the object, we use a modified version of ASM called Active Shape Model with optimal features (ASM-OF) [22]. In particular, the optimal displacements for landmarks in the original ASM are found using a nonlinear k-NN classifier instead of the linear Mahalanobis distance [28]. We introduce some notation, used in the sections below, for describing how ASM-OF is applied to segment each brain image in each group:
• s: the number of training images.
• n: the number of landmark points used to represent each image.
• ns : the number of new positions evaluated during searching.
• k: the number of points in the profile located on either side (inside and outside) of the landmark point.
• lmax : the number of resolution levels for the appearance model.
• ngrid : the size of the ngrid × ngrid grid of points sampled in the training process.
• nmax : the number of iterations per resolution level.
• kN N : number of neighbors used in k-NN classifier during searching and feature selection.
• fv : part of variance to be explained by the shape model, determining number of modes.
This stage comprises three subroutines. First, ASM-OF creates a shape model from the landmarks in the training set. Second, a k-nearest neighbor classifier (k-NN, [29]) is constructed using a gray-level appearance model. Finally, the shape model and the k-NN classifier are combined to produce preliminary estimations of brain masks. The details of each subroutine are described below.
2.2.1. Shape Model
Each brain image in the $s$ training images is represented by a vector $x_i$, obtained by stacking the $n$ landmarks $((x_1, y_1), \ldots, (x_n, y_n))$:
$$x_i = (x_1, y_1, \ldots, x_n, y_n)^T, \quad i \in [1, s]. \qquad (1)$$
Principal component analysis (PCA) is used to calculate the mean shape $\bar{x}_s$ and covariance matrix $C$; the eigenvectors $\phi_m$ $(m = 1, \ldots, t)$ correspond to the first $t$ largest eigenvalues $\lambda_m$ (the respective variances) of the covariance matrix:
$$\bar{x}_s = \frac{1}{s}\sum_{i=1}^s x_i, \quad C = \frac{1}{s-1}\sum_{i=1}^s (x_i - \bar{x}_s)(x_i - \bar{x}_s)^T, \qquad (2)$$
where $\phi_m$ and $\lambda_m$ are calculated by singular value decomposition. A brain shape in the training set can then be approximated by
$$x \approx \bar{x}_s + \Phi_s b_s, \qquad (3)$$
where $\Phi_s = (\phi_1 \ldots \phi_t)$ contains the first $t$ eigenvectors, and $b_s$ is a set of shape parameters calculated as
$$b_s = \Phi_s^T(x_i - \bar{x}_s). \qquad (4)$$
In experiments, $b_s$ is usually bounded by
$$-q\sqrt{\lambda_i} \leq b_{s,i} \leq q\sqrt{\lambda_i} \quad (i = 1, \ldots, t), \qquad (5)$$
where $q$ varies in the interval $[2, 3]$. The number $t$ of eigenvalues to retain is chosen so as to explain a certain proportion $f_v$ of the variance in the training shapes, usually ranging from 90% to 99.5%. The desired number of modes is given by the smallest $t$ such that
$$\sum_{i=1}^{t} \lambda_i \geq f_v \sum_{i=1}^{2n} \lambda_i. \qquad (6)$$
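Equations (1)-(6) translate directly into NumPy. The following sketch is our own illustration (helper names are hypothetical), using the SVD of the centered data to obtain the eigenstructure of $C$:

```python
import numpy as np

def build_shape_model(X, fv=0.95):
    """PCA shape model from (1)-(6). X: s x 2n matrix of stacked landmarks."""
    xbar = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - xbar, full_matrices=False)
    lam = S**2 / (X.shape[0] - 1)                 # eigenvalues of C
    t = np.searchsorted(np.cumsum(lam) / lam.sum(), fv) + 1  # smallest t in (6)
    return xbar, Vt[:t].T, lam[:t]                # mean, Phi_s (2n x t), variances

def fit_shape(x, xbar, Phi, lam, q=3.0):
    """Project a shape onto the model (4), clamp b_s as in (5), rebuild via (3)."""
    b = Phi.T @ (x - xbar)
    b = np.clip(b, -q * np.sqrt(lam), q * np.sqrt(lam))
    return xbar + Phi @ b
```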
2.2.2. Gray-level appearance model with optimal features
The gray-level appearance model that describes the typical image structure around each landmark is obtained by applying a k-NN classifier to pixel profiles, sampled using linear interpolation around each landmark, perpendicular to the contour. From each training image and for each landmark, a square grid of $n_{grid} \times n_{grid}$ points is defined, where $n_{grid}$ is an odd integer and the landmark point lies at the center of the grid. In our experiments, $n_{grid}$ is set to 5; hence a feature vector has 60 elements, calculated at 25 points. The output for a feature vector is 1 if the point is inside the object and 0 if outside. In the k-NN classifier, the weight given to each vote is $\exp(-d^2)$, where $d$ is the Euclidean distance to each neighbor in the feature space. For selecting the best features, the Mann-Whitney algorithm is used [30].
Given an input image, each position along the profile is evaluated, yielding 60 feature images which are processed again to create the optimal features. These features are then fed into the k-NN classifier to determine the probability of the pixel being inside the object. Then we determine the point $g_i$ in the set of points of the profile $g$ such that the objective function $f(g)$ is minimized:
$$f(g) = \sum_{i=-k}^{-1} g_i + \sum_{i=0}^{+k} (1 - g_i), \qquad (7)$$
where $g$ is oriented from the outside to the inside of the object and $i$ runs from $-k$ to $+k$.
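One way to read (7) is as a search over split positions along the profile: points before the split should have low inside-probability and points after it should have high inside-probability. The following sketch is our own illustration with a hypothetical helper name:

```python
import numpy as np

def best_split(g):
    """Minimize the objective (7) over candidate boundary positions.

    g: k-NN inside-probabilities along the profile, ordered from
    outside to inside; returns the index of the best split."""
    g = np.asarray(g, dtype=float)
    costs = [g[:j].sum() + (1.0 - g[j:]).sum() for j in range(len(g) + 1)]
    return int(np.argmin(costs))

print(best_split([0.1, 0.2, 0.1, 0.8, 0.9]))  # 3: boundary between points 2 and 3
```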
2.2.3. Evolution of the models
After both the shape model and the k-NN classifiers are constructed from training images, ASM-OF can be applied to segment objects by performing the procedure listed below.
• Step 1. Initialize the shape model with $\bar{x}_s$ using (2).
• Step 2. For each landmark, evaluate $2n_s + 1$ candidate locations using (7) with the k-NN classifier, and move it to the best new position, denoted $x_{New}$.
• Step 3. Fit the shape model by calculating $b_{sNew}$ using (4) as in (8), and limit the values of $b_{sNew}$ using (5):
$$b_{sNew} = \Phi_s^T(x_{New} - \bar{x}_s), \qquad (8)$$
where $x_{New} = (x_{1New}, x_{2New}, \ldots, x_{nNew})$.
• Step 4. Update the shape landmarks using (3) as $x_{sNew} \approx \bar{x}_s + \Phi_s b_{sNew}$.
• Step 5. Iterate steps 2 to 4 up to a predefined number $n_{max}$ of times.
• Step 5. Iterate steps 2 and 4 up to predefined nmax times
Because ASM-OF is only able to capture the general shapes of brain region so the derived boundaries are
very smooth. Therefore, in the next stage, we apply convolutional neural networks to refine the preliminary
estimations from ASM-OF for the purpose of localizing accurate brain boundaries.
2.3. Convolutional neural networks
Convolutional neural networks (CNNs) are developed from multi-layer perceptrons, which allows exploiting spatial information by using localized convolutional kernels. The main advantage of CNNs is their ability to learn complex, nonlinear, and high-dimensional mappings from large data. This attribute has led CNNs to remarkable successes in 2D medical image analysis [31, 32]. The typical structure of a CNN consists of several pairs of convolutional and sub-sampling layers followed by a multi-layer perceptron. The convolutional layers take as input receptive fields in the previous layer and calculate features while spatial information is preserved.
Denote by $h_j^l$ the $j$-th feature map of the $l$-th layer and by $h_n^{l-1}$ $(n = 1, \ldots, N)$ the $n$-th feature map of the $(l-1)$-th layer; the feature map $h_j^l$ is computed by:
$$h_j^l = \sigma\left(\sum_{n=1}^{N} W_{jn}^l * h_n^{l-1} + b^l\right), \qquad (9)$$
where $W_{jn}^l$ is the convolutional kernel associated with the $n$-th feature map of the previous layer, $b^l$ is the bias at the $l$-th layer, and $\sigma$ is the non-linear activation function.
The main operation of sub-sampling is pooling over a local neighborhood to reduce the resolution of the feature maps. Max-pooling layers are a special case of sub-sampling in which non-maximum values are suppressed and the resolution of the feature maps is down-sampled. A multi-layer perceptron with fully connected layers usually follows several convolutional and sub-sampling layers. The final layer of the multi-layer perceptron outputs the posterior probability for each class via the softmax function.
In our approach, a CNN is used as a classifier to refine the brain contour produced by ASM-OF. Unlike
recent approaches using CNNs, our method focuses on the pixels located around the boundaries of the preliminary
brain masks produced by ASM-OF in the previous stage, instead of processing all pixels in the image. In
addition, we also exploit global spatial information [33] and combine it with the feature maps learned by
the CNN as input for the multi-layer perceptron. The details of the construction of our CNN are described below.
2.3.1. Input and output space
Denote by N the number of slices of an MRI volume, by I1 = {I11, ..., I1p}, I2 = {I21, ..., I2q}, I3 =
{I31, ..., I3r} the sets of images in groups I, II, III, respectively, where p + q + r = N, and by M2 = {M21, ..., M2q}, M3 =
{M31, ..., M3r} the sets of brain masks produced by ASM for groups II and III. For each image in groups II
and III, the positions of pixels around the boundary of Mi (i ∈ {2, 3}) within a distance of 5 pixels are detected using
Algorithm 4 (Figure 7). Features are then extracted at these positions based on Ij (j ∈ {2, 3}). For images
in I1, all pixels within the rectangle containing the skull are used as features. The aim of employing a
CNN is to classify pixels into brain and non-brain classes.
We extract two types of features to describe each pixel. The first type is local features:
three adjacent image slices of size 11 × 11 centered around the pixel. The second type is
global features: the position (x, y) of the pixel combined with the index z of the image to which the pixel belongs
(1 ≤ z ≤ N). Figure 8 illustrates the feature extraction step and the combination of the two feature types in the network structure.
In the training process, the total numbers of brain and non-brain samples are approximately 4,000,000,
9,000,000 and 18,000,000 for the IBSR, LPBA and OASIS datasets, respectively. The difference in the number of samples
between datasets is mainly caused by the number of subjects in each dataset.
Figure 7: Preliminary brain boundary produced by ASM (green) and surrounding pixels that will be classified by CNN (red).
2.3.2. CNN architecture
Figure 8 illustrates our network structure. It includes three convolutional layers followed by four fully
connected layers, the final one being a two-node output layer. The convolutional kernels have size 3 × 3 and
max pooling uses a 2 × 2 kernel. We configure the depths of the first, second and third convolutional layers
to 13, 26 and 39, respectively. The ReLU activation function is applied to the outputs of the convolutional
layers:
f(x) = max(0, x)    (10)
The three vectors obtained from the convolutional layers for the three image slices are concatenated with the
normalized coordinates to form a new vector, which is then fed into the fully connected layers. The
depths of the four fully connected layers are 574, 300, 50, and 2, respectively. The final layer has size two,
indicating the probability of the input pixel belonging to the brain or non-brain class.
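A schematic Keras version of this architecture is given below. It is our reading of the description, not the released code: exact tensor sizes (e.g. how the 574-wide layer arises), the padding choices and the handling of the three slices are assumptions.

```python
from tensorflow.keras import layers, models

def build_branch():
    # One branch per 11x11 slice: three 3x3 conv layers (depths 13/26/39),
    # each followed by 2x2 max pooling.
    inp = layers.Input(shape=(11, 11, 1))
    x = inp
    for depth in (13, 26, 39):
        x = layers.Conv2D(depth, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D(2, padding='same')(x)
    return inp, layers.Flatten()(x)

branches = [build_branch() for _ in range(3)]
slice_inputs = [b[0] for b in branches]
slice_feats = [b[1] for b in branches]
coords = layers.Input(shape=(3,))               # normalized (x, y, z) position
h = layers.Concatenate()(slice_feats + [coords])
for width in (574, 300, 50):                    # fully connected layers
    h = layers.Dense(width, activation='relu')(h)
    h = layers.Dropout(0.5)(h)
out = layers.Dense(2, activation='softmax')(h)  # brain / non-brain probabilities
model = models.Model(slice_inputs + [coords], out)
```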
2.3.3. Training and Optimization
The aim of the CNN is to minimize the cost function:

L(w) = (1/n) Σ_{i=1}^{n} l(zi, f(xi, w)) + (η/2) ‖w‖²    (11)

for a labeled training set (xi, zi), i ∈ (1, n), and weights (w1, . . . , wL), comprising the convolutional kernel weights and the bias
weights, with a loss function l. The loss function l is defined as:
l(z, f(x, w)) = −(1/N) Σ_{n=1}^{N} Σ_{m=1}^{M} Bm,n log(pm,n)    (12)

where N is the number of training images in the batch, M is the number of classes, and pm,n is the probability
of the n-th example being classified into the m-th class. If the n-th example belongs to the m-th class,
Bm,n equals 1; otherwise Bm,n equals 0.
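For concreteness, the batch loss of equation (12) can be computed as follows; the small constant added inside the logarithm is our own numerical safeguard.

```python
import numpy as np

def batch_loss(p, B):
    # p: (N, M) predicted class probabilities; B: (N, M) one-hot labels, eq. (12).
    return -np.mean(np.sum(B * np.log(p + 1e-12), axis=1))
```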
In equation (11), L2 regularization is used to penalize the size of w, with η = 0.005 the regularization
coefficient. The weights wt+1 at step t + 1 are updated by applying the Adam algorithm [34]:
mt+1 = β1 mt + (1 − β1) ∇L(wt)    (13)
vt+1 = β2 vt + (1 − β2) (∇L(wt))²    (14)
m̂t+1 = mt+1 / (1 − β1^(t+1))    (15)
v̂t+1 = vt+1 / (1 − β2^(t+1))    (16)
wt+1 = wt − α m̂t+1 / (√v̂t+1 + ε)    (17)

where t is the iteration index, α = 1e−04, β1 = 0.9, β2 = 0.999, ε = 1e−08, m0 = 0, and v0 = 0.
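One Adam step, written out directly from equations (13)-(17), is shown below as a plain NumPy sketch, not tied to any particular framework.

```python
import numpy as np

def adam_step(w, grad, m, v, t, alpha=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    # One update of eqs. (13)-(17); grad = gradient of L at w_t, t = iteration index.
    m = beta1 * m + (1 - beta1) * grad                 # eq. (13)
    v = beta2 * v + (1 - beta2) * grad ** 2            # eq. (14)
    m_hat = m / (1 - beta1 ** (t + 1))                 # eq. (15)
    v_hat = v / (1 - beta2 ** (t + 1))                 # eq. (16)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)     # eq. (17)
    return w, m, v
```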
The weights in each convolutional layer are initialized from a normal distribution N(0, 0.1),
while the weights in the multi-layer perceptron layers are drawn from a normal distribution N(0, β) with

β = 1 / nhidden    (18)

where nhidden is the number of hidden units in the previous layer.
The CNN is trained using mini-batch stochastic gradient descent with a batch size of 128. To prevent
over-fitting, we use a dropout layer with rate 0.5 after each fully connected layer.
2.4. Special Rules for Group I
The brain regions in group I are small and noisy, and there are cases where the CNN produces
two regions as segmentation results, making it difficult to automatically determine which one is the brain
region. In this paper, we utilize a Gaussian Process [35] to overcome this situation.
Figure 8: Feature extraction step and our deep neural network structure.
The Gaussian Process is used to learn a curve describing how the center position of the brain changes from the
first image to the last. Based on this curve we can predict the centers of the cerebral images in group I; the
connected component closest to the predicted center of gravity is then selected for processing. This curve is learned with a
Gaussian Process because, with a suitable kernel function, it is able to model complex functions.
Figure 9 illustrates the Gaussian Process training results on the IBSR dataset [36]. The blue lines
illustrate the change of the cerebral ventricles for each subject from the start to the end position. The training set of
IBSR consists of 12 people, hence there are 12 blue lines. It is worth noting that the positions of the starting and
ending slices in MRI volumes are not constant. For example, the scans of subject 1 start at slice 25 and end
at slice 200, while the scans of subject 2 start at slice 20 and end at slice 220. Therefore, we normalized the starting
and ending slices to [1, 100]. The red line illustrates the result of the Gaussian Process.
Figure 10 illustrates the prediction results of the Gaussian Process. The red line is the Gaussian
Process's initial prediction for a new subject. After adjusting this prediction, we obtain the blue line, which
is the final result of the Gaussian Process. The brown line is the ground truth. The adjustment from the initial
prediction to the final result is based on two factors:
• Because of the image acquisition conditions, the images of a subject can be shifted; however, the rate
of change of the image center along the slices remains unchanged. Therefore, it is possible to
correctly predict the centers of the brain regions by translating the initial prediction by a value α.
• The value α is estimated as follows: for each subject, we use the combination of ASM-OF,
CNN, and CRF to find the positions of regions II and III based on the proposed algorithm. We then
consider the centers of all the images in regions II and III; by comparing their deviation from the result of
the Gaussian Process and taking the mean value, we obtain the desired value of α. (A sketch of fitting this trajectory is given below.)
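A possible realization of this trajectory model with scikit-learn is sketched below; the kernel choice and the shift variable alpha_shift are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# t: normalized slice indices in [1, 100] over all training subjects, shape (n, 1)
# c: corresponding brain-center coordinates, shape (n,)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel())
# gp.fit(t, c)
# c_init = gp.predict(np.linspace(1, 100, 100)[:, None])  # initial prediction
# c_final = c_init + alpha_shift   # translate by the estimated value of alpha
```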
Figure 9: Illustration of the training process of the Gaussian Process on the IBSR dataset.
Figure 10: Prediction results of the Gaussian Process on the IBSR dataset.
After the translation and obtaining the blue line, for each slice in group I we calculate the distance
between each connected component in the slice and the value predicted by the Gaussian Process, and finally we
select the component with the shortest distance. Algorithm 3 describes in detail how images in group I are
processed. The sub-algorithms used in Algorithm 3, namely CheckCenter (Algorithm 6), CheckArea (Algorithm 7), ConvertRange
(Algorithm 5) and CheckDistance (Algorithm 8), are presented in the Appendices, and the CRF is presented in Section 3.
3. Post-processing
Because the main procedures are pixel-wise approaches, the segmentation maps are considerably noisy,
which significantly decreases the overall accuracy of the brain extraction process. To overcome this problem, we utilize
a post-processing technique based on a conditional random field (CRF) for validating the extraction results
from ASM-OF and the CNN, as well as for finally refining the segmentation maps. Originally, the CRF was designed
for text segmentation and labeling [37]; however, there have been several successes in applying CRFs to vision-based
applications [38, 39, 40], especially medical imaging [41, 42, 43]. In this research, we use a fully connected CRF
as the main framework for post-processing because of its capability to produce highly refined segmentation
results.
Let I be an image of size N and x be a segmentation map of I. The Gibbs energy of x is given by

E(x) = Σ_i ϕu(xi) + Σ_{i<j} ϕp(xi, xj)    (19)
where xi, xj are the labels assigned to pixels i, j, respectively. In our system, the segmentation
maps satisfy x ∈ L^N, L ∈ {0, 1}, corresponding to “brain” (label 1) and “non-brain” (label 0) regions. The
unary potential is given by ϕu(xi) = − log P(xi), where P(xi) is the probability of pixel i being classified
as “brain”. This probability is obtained from the output of the main procedure.
Algorithm 3: Process Images in Group I
Input : Cb → Cm, Cq+1 → Ce: CNN result images of group I
        Fm+1, Fq: the two final segmentation images in group II
        GPM: Gaussian Process model
Output: Final images in group I: Fb → Fm, Fq+1 → Fe

    Image1 = Fm+1
    Image2 = Fq
    α = 0.4
    β = 1.75
    for i ∈ [m, m−1, ..., b+1, b] do
        Ci ← Denoise(Ci)
        Ci = CheckCenter(Ci, Image1)
        Ci = CheckArea(Ci, α)
        iGPM = ConvertRange(b, e, 1, 100, i)
        di ← GPM(iGPM)
        Ci = CheckDistance(Ci, di, β)
        Fi ← CRF(Ci)
        Image1 = Fi
    end
    for j ∈ [q+1, q+2, ..., e−1, e] do
        Cj ← Denoise(Cj)
        Cj = CheckCenter(Cj, Image2)
        Cj = CheckArea(Cj, α)
        jGPM = ConvertRange(b, e, 1, 100, j)
        dj ← GPM(jGPM)
        Cj = CheckDistance(Cj, dj, β)
        Fj ← CRF(Cj)
        Image2 = Fj
    end
    return Fb → Fm, Fq+1 → Fe
The pairwise potential ϕp(xi, xj) is given by

ϕp(xi, xj) = µ(xi, xj) Σ_{m=1}^{K} wm km(fi, fj)    (20)
where µ(xi, xj) follows the Potts model:

µ(xi, xj) = 1 if xi ≠ xj, and 0 otherwise    (21)
The weights are denoted wm, and km(fi, fj) is a Gaussian kernel depending on the feature vectors fi and fj.
Similarly to [39], we employ an appearance kernel and a smoothness kernel, which are respectively defined as

k1(fi, fj) = exp( −|pi − pj|²/(2σα²) − |Ii − Ij|²/(2σβ²) )    (22)
k2(fi, fj) = exp( −|pi − pj|²/(2σγ²) )    (23)

where pi, pj are the positions and Ii, Ij the intensity values of pixels i, j. The parameters σα, σβ and σγ control
the widths of the Gaussian kernels.
The most probable segmentation map x∗ is calculated by

x∗ = argmin_{x∈L^N} E(x)    (24)
Because finding the exact minimum is infeasible, the CRF distribution is approximated by a mean-field
approximation; the detailed algorithm is presented in [39]. Besides performing the mean-field approximation,
we need to determine the values of the model parameters. In our system, we hard-coded the values of w2
and σγ because their effect on segmentation accuracy was insignificant. The other parameters (w1, σα and
σβ) were determined by random search with cross-validation.
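As an illustration, mean-field inference for this fully connected CRF can be run with the pydensecrf package, a wrapper of [39]. The parameter values below are placeholders, not the tuned values used in our system, and the image is assumed to be an 8-bit RGB array.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(prob_brain, image, n_iters=5):
    # prob_brain: (H, W) probability of "brain" from the main procedure
    # image: (H, W, 3) uint8 array (the bilateral kernel expects 8-bit RGB)
    h, w = prob_brain.shape
    probs = np.stack([1.0 - prob_brain, prob_brain]).astype(np.float32)
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))       # phi_u = -log P
    d.addPairwiseGaussian(sxy=3, compat=3)            # smoothness kernel, eq. (23)
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=image, compat=5)  # eq. (22)
    q = np.array(d.inference(n_iters))                # mean-field approximation
    return q.argmax(axis=0).reshape(h, w)             # approximate eq. (24)
```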
Figures 11 and 12 illustrate all processing stages of the proposed brain extraction system.
4. Experiments
4.1. Methods for comparison and datasets
In this paper, we compare our proposed system with five well-known methods, namely the Brain
Extraction Tool (BET) [44], Brain Surface Extractor (BSE) [45], 3DSkullStrip (3DSS) [46], ROBEX [15],
and Brain Extraction based on nonlocal Segmentation Technique (BEAST) [21]. The implementations of all
algorithms are optimized to work with T1-weighted data. BET is specifically designed for brain extraction
tasks. This method utilizes deformable models based on locally adaptive forces to produce brain mask
results. In the first stage, the center of the head is estimated. Then, the deformable model with a spherical
mesh is applied to fit brain boundaries around this position. BET is very fast and simple but requires manual
parameter settings to produce good results. In the experiment, we used BET 2.0, the latest version with
several improvements.
Figure 11: Processing steps for group I.
The next method for comparison is BSE. This method comprises several procedures: anisotropic diffusion
filtering, edge detection, and morphological operations. Anisotropic diffusion filtering is used to reduce noise
in images while preserving edge boundaries. Marr-Hildreth edge detection is employed to identify the border
of the brain regions. Finally, morphological operators such as erosion and dilation are applied to ensure that
the result is correct. Although BSE can provide a high level of accuracy for whole-brain segmentation, fine
parameter tuning is usually required for the method to work effectively in specific studies. In this study, we used
the BSE in the BrainSuite15c package, which was released in January 2016.
3DSS is a modification of BET, included in the AFNI package [46]. It uses the spherical surface expansion
paradigm, with several modifications to avoid the eye and ventricle regions, reducing misclassification.
Furthermore, not only points inside the surface but also points located outside are combined in
order to support the evolution of the mesh. In our experiment, we used the 3DSS in the AFNI package released
in January 2016.

Figure 12: Processing steps for group II and group III.
[15] proposed a method called ROBEX. This approach is a combination of a discriminative and a generative model. A random forest classifier is used as the discriminative model to detect voxels located on the
brain boundary. The generative model is based on a point distribution model to ensure that the shape of the
mask is reasonable. In the first stage, ROBEX registers the subject to a template via an affine transform. The
signal intensities are then normalized and bias-corrected before being fed into the discriminative
model. ROBEX is able to work effectively across multiple datasets without requiring any parameter tuning.
However, ROBEX uses an adult template as the standard for training the discriminative model, and the
target subject is assumed to be aligned to it. This limits the flexibility of ROBEX in working with different
conditions, such as other imaging modalities and young populations. For evaluating the performance of ROBEX,
we used ROBEX version 1.2, released in November 2013.
The last method used for comparison is BEAST. The idea of this technique is based on non-local patch
matching using multiple atlases. The sum of squared differences metric is used to determine suitable patches.
For optimization, the input data and the prior atlas library need to be normalized in terms of space and intensity.
An important characteristic of BEAST is that the atlas library needs to be representative of the given data,
and therefore the user is able to add custom priors to the library. When the library sets are selected appropriately,
the method can achieve state-of-the-art performance on several datasets, even when the data are related to
a disease, as in the Alzheimer's Disease Neuroimaging Initiative [47].
Three publicly available datasets were used for evaluation. In each dataset, the samples were randomly
divided into training and testing groups. The first dataset is the Internet Brain Segmentation Repository (IBSR)
[36]. This dataset comprises T1-weighted scans (0.94 × 1.5 × 0.94 mm) of 20 healthy subjects (10 males and
10 females) whose ages vary from 24 to 34. The scans are provided along with manual segmentations
performed by experts. Two different scanners were used to acquire the scans. The first, a 1.5 Tesla
Siemens Magnetom MR system, was used to acquire scans from 4 males and 6 females with a Fast Low
Angle SHot (FLASH) pulse sequence and the following parameters: TR/TE = 40/8 ms, flip angle 50◦, slice
thickness 3.1 mm, in-plane resolution 1 × 1 mm. The scans of the remaining subjects were acquired with a
1.5 Tesla General Electric Signa MR system with a 3D-CAPRY pulse sequence and the following parameters:
TR/TE = 50/9 ms, flip angle 50◦, slice thickness 3.0 mm, in-plane resolution 1 × 1 mm. Owing to severe
striation artifacts in some scans, this dataset is challenging to segment.
The second dataset is the LONI Probabilistic Brain Atlas (LPBA40) [48], acquired from healthy
volunteers. It contains 40 T1-weighted scans (0.86 × 1.5 × 0.86 mm) of 20 males and 20 females with an
average age of 29.2 years. The scans were acquired with a 3D spoiled gradient echo sequence on a GE 1.5 T system
with TR 10.0-12.5 ms, TE 4.22-4.5 ms, and flip angle 20◦. Coronal slices were acquired 1.5 mm apart
with an in-plane resolution of 0.86 mm (38 subjects) or 0.78 mm (2 subjects).
The third dataset was obtained from the Open Access Series of Imaging Studies (OASIS) project [49].
It is a cross-sectional and longitudinal collection of 416 subjects aged from 18 to 96. The data were acquired
using a 1.5 T Siemens scanner with an MP-RAGE sequence, TR/TE/TI/TD = 9.7 ms/4.0 ms/20 ms/200 ms,
flip angle 10◦. Sagittal slices were acquired 1.5 mm apart with an in-plane resolution of 1 mm. We only used 77
T1-weighted scans (1 × 1 × 1 mm) acquired from 55 females and 22 males aged 51.64 ± 24.67 years, to make
our results comparable to [15]. Differently from IBSR and LPBA40, 20 out of the 77 subjects were clinically
diagnosed with very mild to moderate Alzheimer's disease. Even though the brain masks were not created
manually but by a customized method based on registration to an atlas, the results of this method were
carefully checked by experts before release. This dataset is worthwhile by virtue of including scans from
very diverse subjects with an expansive age range, in addition to diseased brains. On this account, it is
well suited to demonstrating the effectiveness and robustness of our approach.
In this research, 6 out of 20 subjects in the IBSR dataset were used for training and the 14 remaining subjects
for evaluation. Similarly, 12/28 is the number of training/testing subjects for the LPBA40 dataset, and
22/55 is the ratio for the OASIS dataset.
4.2. Setup
The proposed method was implemented in MATLAB and Python. Applying ASM-OF as well as extracting features for the CNN were done using MATLAB. The deep neural networks used for the training and testing
processes were implemented in Python because of its various supported deep learning libraries, e.g., TensorFlow.
In these experiments, we used the same deep neural network structure for all three datasets, as shown
in Figure 8. In the training process, the values of all network weights were randomly generated from
a normal distribution with mean 0 and standard deviation 0.1. To overcome the overfitting problem, we also
added dropout layers with rate 0.5 after each layer in the deep neural network. The ReLU function was
selected as the activation function in our experiments. The processing time of the training phase varies
depending on the size of the dataset: it took approximately 3.2 hours for IBSR, 6.4 hours for LPBA40, and 12.8
hours for OASIS to complete the training phase. This process was performed on a workstation with an Intel(R)
Core(TM) i7-6700K CPU running at 4.00 GHz and an NVIDIA GeForce GTX 980 GPU. The average time
for processing one MRI volume in the testing phase is roughly 4 minutes.
4.3. Evaluation Metrics
We evaluated the performance of our brain extraction method by comparing the segmentation results
with the ground truth in each dataset. Among the several popular metrics for measuring the distance or
similarity between two images, we used the Dice coefficient, the Jaccard index and the Average Hausdorff
Distance (AHD). Furthermore, sensitivity and specificity scores were also calculated.

The aforementioned metrics can be derived from the four basic cardinalities of the confusion matrix:
true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). TP are voxels
correctly classified as brain tissue, while TN are voxels correctly predicted as non-brain tissue. FN
are brain-tissue voxels in the ground truth that were misclassified by the methods. In contrast, FP are voxels
incorrectly identified as brain tissue.
The Dice coefficient is commonly used to measure repeatability, in order to directly compare machine-generated and ground truth segmentations. It is possibly the most widely used measure for validating medical
segmentations. The Dice coefficient is calculated as

D = 2TP / (2TP + FP + FN)    (25)

Its values lie in the interval [0, 1], where the maximum value indicates that the two segmentation
maps are identical.
The Jaccard index is another widely used similarity measure, closely related to the Dice coefficient.
The relationship between the two metrics is

D = 2J / (1 + J)    (26)

hence

J = TP / (TP + FP + FN)    (27)
Sensitivity and specificity are similar to each other: sensitivity essentially measures how good a prediction is at
detecting true positives, while specificity measures how accurate a prediction is against false positives.
However, these two measures are not commonly used for evaluating medical image segmentation because
of their sensitivity to segment size. They are defined as follows:

Sens = TP / (TP + FN)    (28)
Spec = TN / (TN + FP)    (29)
Differently from the other metrics, the AHD is a spatial distance based metric measuring the dissimilarity
between two segmentations, where one is the predicted result to be evaluated and the other is the respective
ground truth. Such metrics are usually used when the overall accuracy of the result, for instance the contour of the
segmentation, is of importance. Because the standard Hausdorff distance is sensitive to outliers,
it is recommended to use the AHD, which is known to be more stable. The AHD between two segmentation
maps X and Y is defined by

AHD(X, Y) = max(d(X, Y), d(Y, X))    (30)

where d(X, Y) is the directed average distance, given by

d(X, Y) = (1/N) Σ_{x∈X} min_{y∈Y} ‖x − y‖    (31)
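All five metrics can be computed directly from boolean masks and boundary point sets; a compact sketch (our own helper code, not part of the evaluation tools cited above) follows.

```python
import numpy as np
from scipy.spatial.distance import cdist

def overlap_metrics(pred, gt):
    # pred, gt: boolean masks; returns Dice, Jaccard, sensitivity, specificity
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    dice = 2 * tp / (2 * tp + fp + fn)                     # eq. (25)
    jaccard = tp / (tp + fp + fn)                          # eq. (27)
    return dice, jaccard, tp / (tp + fn), tn / (tn + fp)   # eqs. (28)-(29)

def average_hausdorff(X, Y):
    # X, Y: (n, d) and (m, d) arrays of boundary-point coordinates
    D = cdist(X, Y)
    d_xy = D.min(axis=1).mean()                            # directed distance, eq. (31)
    d_yx = D.min(axis=0).mean()
    return max(d_xy, d_yx)                                 # eq. (30)
```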
5. Results
5.1. Qualitative evaluation
The segmentation results of all methods, including our proposed one, in the sagittal plane for IBSR, LPBA and
OASIS are illustrated in Figures 13-15, respectively. Each figure includes 6 typical testing scans from all
three groups (2 scans per group). Although ASM-CNN operates on the sagittal plane, it also works well
and gives correct segmentations in the two other planes, transverse and coronal. Figure 16 shows
the comparison between our approach and the other methods on each dataset for these two planes.
BET generally gives good results for almost all testing samples. Missing the cerebellum in its
segmentation is the crucial problem of this method. Even for scans in group III, which contain large
brains with a not-so-complex structure, BET sometimes produces falsely segmented results (see Figure 13 and Figure
15). This issue appears in IBSR and mostly in the OASIS dataset, which can be clearly seen in the coronal plane in
Figure 16. For the LPBA samples the situation seems to improve and performance is better. However, compared
with the other methods, this happens more frequently with BET. Moreover, Figure 16 shows that BET, along with BSE and 3DSS, often leaves
the lateral ventricles out of its segmented result. Although it fails in groups II and
III, this method can produce acceptable segmentations for images in group I (see Figure 15).
BSE seems to perform similarly to BET, also failing at cerebellum segmentation
and not including the ventricles. Unlike the other methods, instability is the main disadvantage
of BSE: it can give excellent results on the LPBA dataset but very poor ones on IBSR and OASIS. This can be
seen clearly in two scans from the same group III and the same dataset in Figure 13. BSE works well on the first
scan, although the dura mater and the cranium are oversegmented. However, for the second scan, which has
higher contrast, this method completely fails to produce a correct result. One of the
reasons for the unstable segmentation is that we employed the default parameters; hence, carefully tuning
the parameters is necessary when using BSE. Besides, Figure 13 also shows that this method sometimes gives
good segmentations for images in group I.
The extraction results obtained by 3DSS, which is a modified version of BET, are slightly better than those of BET
and BSE. 3DSS produces great segmentations on LPBA, while its performance decreases on IBSR and OASIS.
Unlike BET, segmentation by 3DSS can avoid the eyes (see the second scan in Figure 13) and reduce leakage
into the skull, although there is some minor oversegmentation at the temporal lobe of the cerebral hemisphere (see
the first scan in Figure 14). Like BSE, this method leaves the ventricles out of its segmentation.
However, the results are extremely sharp at the boundary, which sometimes loses the interhemispheric tissue
in the segmentation (see Figure 16). Furthermore, the brains in group I extracted by 3DSS are also not as good
as those from BSE.
Both BEAST and ROBEX perform better than the above methods. They produce
precise segmentations of the cerebellum, at which BET and BSE often fail. Besides, the ventricles, which are
usually left out by 3DSS or BSE, are included in their segmentations. However, BEAST gives results as sharp as
those from 3DSS, which decreases its accuracy. Moreover, the main problem of ROBEX is that it gives
oversmoothed results, leading to inclusion of dura and loss of gray matter [6] (see the second scan in Figure
16). Generally, both can provide accurately extracted brains, but ROBEX seems to be slightly better than
BEAST. Unfortunately, they both work badly for the small brains in group I, even worse than the above
methods.
Finally, ASM-CNN provides extremely accurate segmentations on all three datasets. The results obtained
by our proposed method are similar to those of ROBEX, with smoothed boundaries. However, we can still keep
the gray and dura matter in our extracted brains in most of the cases in which they are usually left out by
ROBEX (see the second scans in Figure 16). Although there is some minor leakage into the skull in ASM-CNN, it happens less often than with ROBEX and BEAST and with smaller oversegmentation. The critical advantage of our
method is that it can work precisely for the small brains in group I, where the other methods usually fail.
We observe in the scans from this group in Figures 13-15 that our extractions are identical to the ground truth
even for tiny brains (see the two last scans in Figure 13). However, the final result still has a few false
negatives and false positives in some cases because of the complex structure in this group. In spite of some
minor incorrect segmentations, ASM-CNN still performs better than the others, with higher accuracy.
5.2. Quantitative evaluation
We use five evaluation metrics, namely the Dice coefficient, Jaccard index, average Hausdorff distance,
sensitivity and specificity, to compare our method with the others. For the sake of easy comparison, we evaluate all
methods in 2D (sagittal plane) and in 3D for each dataset. Tables 1-3 display the averages and standard
deviations (SD) of the metrics for the IBSR, LPBA and OASIS datasets in the 2D plane, respectively. Similarly,
Tables 4-6 show the accuracy of the results computed on the 3D structures. The results of the different metrics for
the evaluated methods on the three datasets are depicted by box plots in Figures 17 to 22.
Because our approach works with 2D scans, we converted the 3D results of the other
algorithms into 2D image sequences in the sagittal plane for reliable evaluation. Based on the results, all
methods are prone to achieving higher scores on the LPBA dataset; in particular, BSE changes significantly when
working on this dataset, its average Dice coefficient increasing by about 9%
(85% to 94.27%) on the LPBA scans. BEAST has the highest mean specificity with small SD on all three
datasets, although there is no big difference between the six methods on this metric. Surpassing all
other algorithms, ASM-CNN achieves the best performance with the highest average Dice overlap (0.55-1.63% higher than the second best), Jaccard index (0.8-2.45% higher) and AHD on the three datasets, and the
best sensitivity on LPBA and OASIS, while ROBEX produces the highest sensitivity on IBSR. Besides, BSE also
achieves the same AHD score on LPBA as ASM-CNN, but its SD is higher than ours (0.32 compared with
0.19). Furthermore, ASM-CNN not only has outstanding mean scores but also the
smallest SD values for the Dice coefficient, Jaccard index and sensitivity, which shows the consistency of our
method.
Although we process 2D planes, we still create 3D results by combining sequences of scans to make ASM-CNN more comparable with the other algorithms. There is an increase in the accuracy of the five remaining methods
in this type of evaluation, with a great improvement in standard deviation. Among the other algorithms,
BEAST shows major changes in its accuracy for several evaluated metrics on the three datasets; for instance, it
gains approximately 3.47-5.1% more on the Dice, Jaccard and sensitivity metrics. Furthermore, BEAST still
provides the highest specificity, which is almost 100% for all datasets. Meanwhile, ROBEX continually obtains
the best sensitivity scores on IBSR and LPBA. Although 3D is not our main focus, the 3D results
produced by ASM-CNN are remarkable and competitive with the others: it preserves its impressive
performance, remaining the best on all three datasets in Dice coefficient, Jaccard index and AHD, and on OASIS in
sensitivity.
The boxplots in Figures 17 and 20 show that BSE has a big interquartile range (IQR) on IBSR, which
means its results are more dispersed and have larger variance than those of the other algorithms. However,
this method has the best performance on LPBA (Figures 18 and 21), where it gives extremely accurate results,
better than the others, although it has one terrible outlier (about 72% Dice overlap in the 2D evaluation).
Indeed, this issue was mentioned above: BSE can provide very accurate segmentations but may also give
atrocious results, depending on the employed parameters. By contrast, ASM-CNN shows its robustness
across datasets, outperforming the others with high accuracy and small IQR as well as SD
on the three datasets, especially on OASIS (Figures 19 and 22). Unfortunately, its specificity is worse than that of
several methods; however, this can be improved by employing a suitable post-processing method to mitigate
the false positives in the segmentations.
Figure 13: Comparison between ASM-CNN and other methods on the IBSR dataset.

Figure 14: Comparison between ASM-CNN and other methods on the LPBA dataset.

Figure 15: Comparison between ASM-CNN and other methods on the OASIS dataset.

Figure 16: Result images in the transverse and coronal planes for the IBSR, LPBA and OASIS datasets.

Figure 17: 2D box plots of Dice coefficient, Jaccard index, average Hausdorff distance, sensitivity and specificity for the IBSR dataset.

Figure 18: 2D box plots of Dice coefficient, Jaccard index, average Hausdorff distance, sensitivity and specificity for the LPBA dataset.

Figure 19: 2D box plots of Dice coefficient, Jaccard index, average Hausdorff distance, sensitivity and specificity for the OASIS dataset.

Figure 20: 3D box plots of Dice coefficient, Jaccard index, average Hausdorff distance, sensitivity and specificity for the IBSR dataset.

Figure 21: 3D box plots of Dice coefficient, Jaccard index, average Hausdorff distance, sensitivity and specificity for the LPBA dataset.

Figure 22: 3D box plots of Dice coefficient, Jaccard index, average Hausdorff distance, sensitivity and specificity for the OASIS dataset.
Method   | Dice          | Jaccard       | Average Hausdorff | Sensitivity   | Specificity
BET      | 87.99 ± 10.49 | 79.68 ± 12.27 | 1.16 ± 0.55       | 80.76 ± 12.25 | 99.53 ± 0.43
BSE      | 85.23 ± 16.56 | 77.17 ± 20.61 | 0.85 ± 0.41       | 78.90 ± 21.63 | 99.36 ± 0.8
3DSS     | 91.42 ± 13.31 | 85.99 ± 14.9  | 0.72 ± 0.22       | 88.21 ± 15.55 | 99.1 ± 1.04
ROBEX    | 94.32 ± 8.13  | 90.02 ± 10.37 | 0.61 ± 0.24       | 95.22 ± 9.43  | 98.38 ± 1.57
BEAST    | 90.00 ± 16.51 | 84.46 ± 17.97 | 0.61 ± 0.15       | 84.63 ± 18.02 | 99.93 ± 0.1
ASM-CNN  | 94.89 ± 6.33  | 90.82 ± 9.15  | 0.57 ± 0.15       | 93.24 ± 8.53  | 99.36 ± 0.72

Table 1: 2D results of different methods on the IBSR dataset.
Method   | Dice          | Jaccard       | Average Hausdorff | Sensitivity   | Specificity
BET      | 93.25 ± 8.35  | 88.17 ± 10.76 | 0.72 ± 0.28       | 89.69 ± 10.71 | 99.34 ± 0.62
BSE      | 94.37 ± 10.32 | 90.63 ± 13.51 | 0.57 ± 0.32       | 92.41 ± 13.55 | 99.33 ± 0.66
3DSS     | 92.47 ± 13.89 | 87.93 ± 15.27 | 0.75 ± 0.21       | 91.46 ± 16.1  | 98.22 ± 1.6
ROBEX    | 93.85 ± 12.74 | 90.12 ± 14.45 | 0.62 ± 0.17       | 92.53 ± 14.82 | 99.08 ± 0.56
BEAST    | 93.00 ± 13.50 | 88.83 ± 15.37 | 0.62 ± 0.16       | 89.65 ± 15.38 | 99.67 ± 0.64
ASM-CNN  | 95.39 ± 7.02  | 91.87 ± 10.25 | 0.57 ± 0.19       | 94.72 ± 7.01  | 99.33 ± 1.01

Table 2: 2D results of different methods on the LPBA dataset.
Method   | Dice          | Jaccard       | Average Hausdorff | Sensitivity   | Specificity
BET      | 85.34 ± 13.24 | 76.17 ± 15.41 | 1.91 ± 0.77       | 78.17 ± 15.78 | 98.9 ± 1.05
BSE      | 85.53 ± 11.68 | 76.18 ± 14.48 | 1.94 ± 0.7        | 78.45 ± 14.96 | 98.80 ± 1.34
3DSS     | 87.30 ± 14.49 | 79.64 ± 17.25 | 1.81 ± 0.67       | 82.91 ± 17.8  | 98.48 ± 1.36
ROBEX    | 93.44 ± 8.75  | 88.59 ± 11.16 | 1.27 ± 0.27       | 92.1 ± 10.43  | 98.44 ± 1.05
BEAST    | 88.56 ± 14.17 | 81.49 ± 16.27 | 1.59 ± 0.37       | 82.71 ± 16.41 | 99.26 ± 0.99
ASM-CNN  | 95.07 ± 5.36  | 91.04 ± 8.46  | 1.13 ± 0.42       | 94.36 ± 7.1   | 98.53 ± 1.46

Table 3: 2D results of different methods on the OASIS dataset.
Method   | Dice          | Jaccard       | Average Hausdorff | Sensitivity   | Specificity
BET      | 89.63 ± 3.86  | 81.39 ± 6.42  | 34.99 ± 7.2       | 82.53 ± 6.09  | 99.57 ± 0.17
BSE      | 88.48 ± 9.84  | 80.50 ± 15.79 | 32.06 ± 12.88     | 82.09 ± 17.18 | 99.42 ± 0.53
3DSS     | 94.56 ± 2.11  | 89.75 ± 3.78  | 24.05 ± 2.2       | 92.02 ± 5.26  | 99.18 ± 0.63
ROBEX    | 96.24 ± 1.28  | 92.78 ± 2.34  | 22.72 ± 3.55      | 97.14 ± 1.63  | 98.47 ± 1.34
BEAST    | 94.46 ± 2.27  | 89.57 ± 4.07  | 22.27 ± 3.07      | 89.76 ± 4.2   | 99.93 ± 0.05
ASM-CNN  | 96.51 ± 1.03  | 93.27 ± 1.92  | 20.63 ± 1.15      | 95.04 ± 2.92  | 99.39 ± 0.39

Table 4: 3D results of different methods on the IBSR dataset.
Method   | Dice          | Jaccard       | Average Hausdorff | Sensitivity   | Specificity
BET      | 95.19 ± 1.81  | 90.87 ± 3.23  | 26.52 ± 5.1       | 92.33 ± 3.34  | 99.40 ± 0.24
BSE      | 96.19 ± 5.56  | 93.09 ± 8.96  | 21.17 ± 8.75      | 94.66 ± 9.46  | 99.39 ± 0.32
3DSS     | 95.85 ± 0.49  | 92.04 ± 0.91  | 24.52 ± 2.44      | 95.90 ± 0.75  | 98.41 ± 0.60
ROBEX    | 96.91 ± 0.18  | 94.00 ± 0.34  | 21.88 ± 1.07      | 96.18 ± 0.88  | 99.13 ± 0.28
BEAST    | 96.47 ± 0.61  | 93.20 ± 1.14  | 21.11 ± 2.08      | 93.93 ± 1.09  | 99.72 ± 0.43
ASM-CNN  | 97.14 ± 0.75  | 94.46 ± 1.40  | 20.15 ± 2.19      | 96.14 ± 1.65  | 99.33 ± 0.34

Table 5: 3D results of different methods on the LPBA dataset.
Method   | Dice          | Jaccard       | Average Hausdorff | Sensitivity   | Specificity
BET      | 87.99 ± 6.55  | 79.09 ± 9.72  | 46.42 ± 10.03     | 80.73 ± 10.51 | 98.98 ± 0.82
BSE      | 88.27 ± 4.88  | 79.32 ± 7.63  | 46.16 ± 7.61      | 81.15 ± 8.84  | 98.87 ± 1.12
3DSS     | 90.26 ± 7.15  | 82.93 ± 10.99 | 40.54 ± 10.54     | 85.39 ± 12.37 | 98.57 ± 1.02
ROBEX    | 95.64 ± 0.86  | 91.66 ± 1.57  | 30.31 ± 1.97      | 94.36 ± 2.55  | 98.49 ± 0.79
BEAST    | 92.72 ± 1.47  | 86.45 ± 2.52  | 35.65 ± 2.71      | 87.59 ± 3.03  | 99.33 ± 0.82
ASM-CNN  | 96.50 ± 0.91  | 93.25 ± 1.68  | 26.17 ± 2.28      | 95.82 ± 1.93  | 98.58 ± 0.65

Table 6: 3D results of different methods on the OASIS dataset.
6. Discussion
In this research, we proposed a system for brain extraction using an active shape model and a convolutional
neural network, hence the name ASM-CNN. The idea behind using ASM is to explore the general structure
of the brain regions, ensuring the consistency of the object's shape. However, in medical imaging we
demand high-precision boundaries for segmentation problems, while ASM only provides general and quite smooth shapes.
Therefore, we implemented a CNN to refine the boundaries produced by ASM. After this step,
the results from the CNN may still be flawed, especially for small and complex brain regions. To overcome this
challenge, we proposed some post-processing techniques based on CRF and a Gaussian Process as the final
step. CRF, with its ability to model the relationships between the pixels being predicted, is used to produce
a structured output effectively. The Gaussian Process is applied to learn a non-linear function describing the
change of the cerebral images from the first image to the last, owing to its power in learning complex
functions with a suitable kernel function.
Instead of using whole 3D MRI volumes, we decided to work with 2D scans. This approach allows
us to design special rules, adapted to identifying small and complex brain regions, which are often ignored
by techniques that process the 3D volume. In addition, we chose to process 2D scans in the sagittal plane.
Harnessing the symmetry of this plane, we can group brain images into pre-defined groups such
that the images in each group have similar shapes. This property enables ASM to learn shape and
appearance models properly, in place of learning all shapes, which change significantly throughout the
images.
The proposed method has been compared with five state-of-the-art brain extraction methods. By achieving the highest average Dice coefficient and Jaccard index with smaller standard deviations, ASM-CNN
surpassed all other methods in all experiments. The proposed method also obtained remarkable scores on the other
metrics, in 2D as well as in 3D. Despite the inferior result in specificity, it is still
sufficient, since there is no significant difference between the top method and ours. ROBEX has shown
itself to be better than the others, with many great segmentations. However, the segmentation contours produced
by ROBEX were not as sharp as the ground truth, leading to an increase in false positives; this is also the
main weakness of ROBEX when tested on all three datasets. As for BEAST and 3DSS, both methods performed well in general. Their segmentation results were regularly the same as the ground truth in
some cases, and their average scores are approximately equal. Nevertheless, it seemed that BEAST
produced more stable results than 3DSS, whose scores fluctuated. For instance, in
some cases when 3DSS was run on the OASIS dataset, it missed some regions of the cerebellum, as illustrated in
Figure 15. It is worth noting that BEAST was at the top in specificity, with the highest average scores on all
datasets. For BET and BSE, the quality of their segmentations is worse compared with the others but still
acceptable. When dealing with the diversity of brain shapes and contrasts in the scans, BET and BSE produced
some falsely segmented regions from the cerebellum to the brain stem, which can easily be seen in Figure 13. Their
results were also unstable, especially when tested on the OASIS dataset. Moreover, BSE
has not been stable on IBSR, where it had the highest standard deviation. On the other hand, BSE was surprisingly
good on LPBA, albeit with one outlier of extremely low accuracy. This can be caused by
parameter tuning, because BSE is extremely sensitive to these values. For example, when default parameter
values were used on LPBA, this tool obtained superb results, but the outcome was totally different when BSE was
applied to other datasets like IBSR or OASIS.
It is worth noting that the properties of the datasets affected the performance of the brain extraction methods.
Due to the use of two scanners to acquire the MR images, IBSR comprises many heterogeneous scans with many
obvious artifacts. Besides, this dataset also has the lowest resolution and the most anisotropic voxels. LPBA
has better resolution images with less noise, but it is also fairly anisotropic. OASIS does not have much
noise and is isotropic, but it includes some subjects diagnosed with Alzheimer's disease. Therefore, all methods
performed well on LPBA and obtained results without clear differences. But on the other
datasets, IBSR and OASIS, ASM-CNN showed remarkable performance compared with
the others, especially on OASIS. This is because the contour refinement by the CNN and the post-processing by CRF and
the Gaussian Process did their duty impressively, producing brain boundaries as sharp as the ground truth.
Furthermore, combining them with specific rules for particular groups helped our method overcome the small-brain-region problem; our result for this phenomenon is demonstrated in Figure 13. It has to be noted that the five
other methods were run with their default parameter values; hence, all experimental evaluations can be
reproduced.
In our system, ASM is used to keep the original structures. However, ASM can only work effectively when
the shapes of the objects are similar to the trained active appearance model. Therefore, the algorithm will
likely produce poor results when processing unusual shapes. Unfortunately, in medical imaging the data are
usually obscured by noise due to limitations of the image acquisition systems, or the shapes and distributions of
the images are not included in the training data. In such cases, ASM may produce severely imprecise boundaries that even
the CNN cannot verify or refine. In future work, we intend to study techniques based on deep learning
for the purpose of constructing better shape and appearance models. In addition, with improvements
in the future, such as GPU optimization, we believe that the proposed approach can be applied widely in
clinical applications because of its fine and robust performance.
7. Conclusion
In this article, we proposed a novel method for brain extraction in magnetic resonance images, namely
ASM-CNN. The method was named after its main components: Active Shape Model and Convolutional
Neural Network. Unlike existing approaches, ASM-CNN processes 2D sequences of images instead of 3D
brain structures. For each 2D scan, we first use an improved version of ASM, the Active Shape Model with
optimal features, to produce a rough estimate of the brain region; its boundary is then refined by a CNN, which
is constructed and trained with several special rules. Finally, the brain region is post-processed by a conditional
random field and a Gaussian Process. The proposed approach has shown consistent performance: it can
produce highly accurate segmentations in all cases, even when the brain regions are small and scattered. In
the experiments, our method achieved remarkable Dice coefficients and Jaccard indexes on all three
datasets (IBSR, LPBA and OASIS), for 2D scans as well as 3D structures, surpassing the performance of five
other state-of-the-art methods (BET, BSE, 3DSS, BEAST and ROBEX).
Acknowledgment
We would like to thank the National Foundation for Science and Technology Development (NAFOSTED),
the University of Science VNU-HCM, and the Business Intelligence LAB at the University of Economics and Law, Viet
Nam, for supporting us throughout this work.
Appendices
Algorithm 4: MergeSlice(X, Y, α)
Input : Two binary images X, Y; open distance α
Output: Binary image Z created by combining X and Y

    O ← OpenBoundary(Y, α)
    Core ← Y \ O and Boundary ← X ∩ O
    Z ← Core ∪ Boundary
    Apply morphology to fill small holes in Z (if they exist)
    return Z
Algorithm 5: ConvertRange – Convert a value xAB in range [A, B] to range [C, D]
Input : A, B, C, D, xAB
Output: xCD

    xCD = C + (xAB − A) · (D − C)/(B − A)
    return xCD
Algorithm 6: CheckCenter – Remove components of image X whose center is not in image Y, producing image Z
Input : X, Y
Output: Z

    Z = X
    bbox ← the smallest bounding box surrounding all components in Y
    foreach component ci in Z do
        centeri ← coordinates of the center of ci
        if centeri is not in bbox then
            remove ci from Z
        end
    end
    return Z
Algorithm 7: CheckArea – Remove small components of image X with coefficient α, producing image Y
Input : X, α
Output: Y

    Y = X
    foreach component ci in Y do
        ai ← area of ci
    end
    amax ← max {ai}
    athreshold = α · amax
    foreach component ci in Y do
        if ai < athreshold then
            remove ci from Y
        end
    end
    return Y
Algorithm 8: CheckDistance – Remove components of image X whose distance from the predicted value d deviates too much, with coefficient β, producing image Y
Input : X, d, β
Output: Y

    Y = X
    foreach component ci in Y do
        centeri ← coordinates of the center of ci
        difi = |d − ‖centeri‖2|
    end
    difmin ← min {difi}
    difthreshold = β · difmin
    foreach component ci in Y do
        if difi > difthreshold then
            remove ci from Y
        end
    end
    return Y
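For reference, CheckArea (Algorithm 7) and CheckDistance (Algorithm 8) admit a direct realization with SciPy's connected-component tools. This is an illustrative sketch under our own naming, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def check_area(mask, alpha):
    # Keep components whose area is at least alpha times the largest area (Alg. 7).
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.flatnonzero(areas >= alpha * areas.max()) + 1
    return np.isin(labels, keep)

def check_distance(mask, d, beta):
    # Keep components whose |d - ||center||_2| is within beta times the minimum (Alg. 8).
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    centers = ndimage.center_of_mass(mask, labels, index=range(1, n + 1))
    difs = np.abs(d - np.linalg.norm(np.asarray(centers), axis=1))
    keep = np.flatnonzero(difs <= beta * difs.min()) + 1
    return np.isin(labels, keep)
```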
References
[1] S. Jiang, W. Zhang, Y. Wang, Z. Chen, Brain extraction from cerebral MRI volume using a hybrid
level set based active contour neighborhood model, BioMedical Engineering OnLine 12 (1) (2013) 31.
doi:10.1186/1475-925X-12-31.
[2] J. A. Maldjian, J. B. Daunais, D. P. Friedman, C. T. Whitlow, Vervet MRI atlas and label map
for fully automated morphometric analyses, Neuroinformatics 12 (4) (2014) 543–550. doi:10.1007/
s12021-014-9231-8.
[3] A. M. Dale, B. Fischl, M. I. Sereno, Cortical surface-based analysis: I. segmentation and surface reconstruction, NeuroImage 9 (2) (1999) 179–194. doi:10.1006/nimg.1998.0395.
[4] S. Sharma, V. Noblet, F. Rousseau, F. Heitz, L. Rumbach, J.-P. Armspach, Evaluation of brain atrophy
estimation algorithms using simulated ground-truth data, Medical Image Analysis 14 (3) (2010) 373–389.
doi:10.1016/j.media.2010.02.002.
[5] P. Kalavathi, V. B. S. Prasath, Methods on skull stripping of MRI head scan images—a review, Journal
of Digital Imaging 29 (3) (2016) 365–379. doi:10.1007/s10278-015-9847-8.
[6] S. Roy, J. A. Butman, D. L. Pham, Robust skull stripping using multiple MR image contrasts insensitive
to pathology, NeuroImage 146 (2017) 132–147. doi:10.1016/j.neuroimage.2016.11.017.
[7] R. Beare, J. Chen, C. Adamson, T. Silk, D. Thompson, J. Yang, V. Anderson, M. Seal, A. Wood,
Brain extraction using the watershed transform from markers, Frontiers in Neuroinformatics 7 (2013)
32. doi:10.3389/fninf.2013.00032.
[8] J. Hwang, Y. Han, H. Park, Skull-stripping method for brain MRI using a 3D level set with a speedup
operator, Journal of Magnetic Resonance Imaging 34 (2) (2011) 445–456. doi:10.1002/jmri.22661.
[9] A. G. Balan, A. J. Traina, M. X. Ribeiro, P. M. Marques, C. T. Jr., Smart histogram analysis applied
to the skull-stripping problem in t1-weighted MRI, Computers in Biology and Medicine 42 (5) (2012)
509–522. doi:10.1016/j.compbiomed.2012.01.004.
[10] O. Gambino, E. Daidone, M. Sciortino, R. Pirrone, E. Ardizzone, Automatic skull stripping in MRI
based on morphological filters and fuzzy c-means segmentation, in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE Computer Society, Washington,
DC, USA, 2011, pp. 5040—5043.
[11] K. Somasundaram, P. Kalavathi, Brain segmentation in magnetic resonance human head scans using multi-seeded region growing, The Imaging Science Journal 62 (5) (2014) 273–284. doi:10.1179/
1743131X13Y.0000000068.
[12] T. K. Sreeja, A. Mohammed, J. J. Kumari, A skull-stripping method based on modified morphological
processing, in: 2011 International Conference on Signal Processing, Communication, Computing and
Networking Technologies, IEEE Computer Society, Washington, DC, USA, 2011, pp. 313–316.
[13] S. A. Sadananthan, W. Zheng, M. W. Chee, V. Zagorodnov, Skull stripping using graph cuts, NeuroImage 49 (1) (2010) 225–239. doi:10.1016/j.neuroimage.2009.08.050.
[14] J. Kleesiek, G. Urban, A. Hubert, D. Schwarz, K. Maier-Hein, M. Bendszus, A. Biller, Deep MRI brain
extraction: A 3D convolutional neural network for skull stripping, NeuroImage 129 (Supplement C)
(2016) 460–469. doi:10.1016/j.neuroimage.2016.01.024.
[15] J. E. Iglesias, C. Y. Liu, P. M. Thompson, Z. Tu, Robust brain extraction across datasets and comparison
with publicly available methods, IEEE Transactions on Medical Imaging 30 (9) (2011) 1617–1634.
doi:10.1109/TMI.2011.2138152.
[16] Y. Wang, J. Nie, P.-T. Yap, F. Shi, L. Guo, D. Shen, Robust deformable-surface-based skull-stripping for
large-scale studies, in: Proceedings of the 14th International Conference on Medical Image Computing
and Computer-assisted Intervention - Volume Part III, Springer-Verlag, Berlin, Heidelberg, 2011, pp.
635–642.
[17] K. K. Leung, J. Barnes, M. Modat, G. R. Ridgway, J. W. Bartlett, N. C. Fox, S. Ourselin, Brain MAPS:
an automated, accurate and robust brain extraction technique using a template library, NeuroImage
55 (3) (2011) 1091–1108. doi:10.1016/j.neuroimage.2010.12.067.
[18] B. B. Avants, N. J. Tustison, M. Stauffer, G. Song, B. Wu, J. C. Gee, The insight toolkit image
registration framework, Frontiers in Neuroinformatics 8 (2014) 44. doi:10.3389/fninf.2014.00044.
[19] R. A. Heckemann, C. Ledig, K. R. Gray, P. Aljabar, D. Rueckert, J. V. Hajnal, A. Hammers, Brain
extraction using label propagation and group agreement: Pincram, PLOS ONE 10 (7) (2015) 1–18.
doi:10.1371/journal.pone.0129211.
[20] S. K. Warfield, K. H. Zou, W. M. Wells, Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation, IEEE Transactions on Medical Imaging
23 (7) (2004) 903–921. doi:10.1109/TMI.2004.828354.
[21] S. F. Eskildsen, P. Coupé, V. Fonov, J. V. Manjón, K. K. Leung, N. Guizard, S. N. Wassef, L. R. Østergaard, D. L. Collins, BEaST: brain extraction based on nonlocal segmentation technique, NeuroImage
59 (3) (2012) 2362–2373. doi:10.1016/j.neuroimage.2011.09.012.
[22] T. Cootes, C. Taylor, D. Cooper, J. Graham, Active shape models-their training and application,
Computer Vision and Image Understanding 61 (1) (1995) 38–59. doi:10.1006/cviu.1995.1004.
[23] Y. Lecun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition,
Proceedings of the IEEE 86 (11) (1998) 2278–2324. doi:10.1109/5.726791.
[24] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: 2005 IEEE Computer
Society Conference on Computer Vision and Pattern Recognition (CVPR’05), IEEE Computer Society,
Washington, DC, USA, 2005, pp. 886–893 vol. 1.
[25] M. Esfandiarkhani, A. H. Foruzan, A generalized active shape model for segmentation of liver in
low-contrast CT volumes, Computers in Biology and Medicine 82 (2017) 59–70. doi:10.1016/j.
compbiomed.2017.01.009.
[26] H. El-Rewaidy, E. S. Ibrahim, A. S. Fahmy, Segmentation of the right ventricle in MRI images using a
dual active shape model, IET Image Processing 10 (10) (2016) 717–723. doi:10.1049/iet-ipr.2016.
0073.
[27] C. Santiago, J. C. Nascimento, J. S. Marques, A new robust active shape model formulation for cardiac MRI segmentation, in: 2016 IEEE International Conference on Image Processing (ICIP), IEEE
Computer Society, Washington, DC, USA, 2016, pp. 4112–4115.
[28] B. Van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, M. A. Viergever, Active shape
model segmentation with optimal features, IEEE transactions on medical imaging 21 (8) (2002) 924–933.
doi:10.1109/TMI.2002.803121.
[29] N. S. Altman, An introduction to kernel and nearest-neighbor nonparametric regression, The American
Statistician 46 (3) (1992) 175–185. doi:10.1080/00031305.1992.10475879.
[30] H. B. Mann, D. R. Whitney, On a test of whether one of two random variables is stochastically
larger than the other, The Annals of Mathematical Statistics 18 (1) (1947) 50–60. doi:10.1214/
aoms/1177730491.
[31] L. K. Tan, Y. M. Liew, E. Lim, R. A. McLaughlin, Convolutional neural network regression for shortaxis left ventricle segmentation in cardiac cine mr sequences, Medical Image Analysis 39 (Supplement
C) (2017) 78–86. doi:10.1016/j.media.2017.04.002.
[32] M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P.-M. Jodoin,
H. Larochelle, Brain tumor segmentation with deep neural networks, Medical Image Analysis 35 (Supplement C) (2017) 18–31. doi:10.1016/j.media.2016.05.004.
[33] X. Liu, F. Chen, Automatic segmentation of 3-d brain mr images by using global tissue spatial structure
information, IEEE Transactions on Applied Superconductivity 24 (5) (2014) 1–5. doi:10.1109/TASC.
2014.2347316.
[34] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
[35] C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA, 2006.
[36] IBSR dataset, http://www.nitrc.org/projects/ibsr.
[37] J. D. Lafferty, A. McCallum, F. C. N. Pereira, Conditional random fields: Probabilistic models for
segmenting and labeling sequence data, in: Proceedings of the Eighteenth International Conference on
Machine Learning, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2001, pp. 282–289.
[38] X. He, R. S. Zemel, M. A. Carreira-Perpiñán, Multiscale conditional random fields for image labeling,
in: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern
Recognition, IEEE Computer Society, Washington, DC, USA, 2004, pp. 695–703.
[39] P. Krähenbühl, V. Koltun, Efficient inference in fully connected CRFs with gaussian edge potentials, in:
Proceedings of the 24th International Conference on Neural Information Processing Systems, Curran
Associates Inc., USA, 2011, pp. 109–117.
[40] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, P. H. S. Torr,
Conditional random fields as recurrent neural networks, in: Proceedings of the 2015 IEEE International
Conference on Computer Vision (ICCV), IEEE Computer Society, Washington, DC, USA, 2015, pp.
1529–1537.
[41] C. Bhole, C. Pal, D. Rim, A. Wismüller, 3D segmentation of abdominal CT imagery with graphical
models, conditional random fields and learning, Machine Vision and Applications 25 (2) (2014) 301–325.
doi:10.1007/s00138-013-0497-x.
[42] M. G. Uzunbas, C. Chen, D. Metaxas, An efficient conditional random field approach for automatic
and interactive neuron segmentation, Medical Image Analysis 27 (2016) 31–44. doi:10.1016/j.media.
2015.06.003.
[43] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert,
B. Glocker, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation, Medical Image Analysis 36 (2017) 61–78. doi:10.1016/j.media.2016.10.004.
[44] S. M. Smith, Fast robust automated brain extraction, Human Brain Mapping 17 (3) (2002) 143–155.
doi:10.1002/hbm.10062.
[45] D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, R. M. Leahy, Magnetic resonance
image tissue classification using a partial volume model, NeuroImage 13 (5) (2001) 856–876. doi:
10.1006/nimg.2000.0730.
[46] R. W. Cox, AFNI: software for analysis and visualization of functional magnetic resonance neuroimages,
Computers and Biomedical Research 29 (3) (1996) 162–173. doi:10.1006/cbmr.1996.0014.
[47] S. G. Mueller, M. W. Weiner, L. J. Thal, R. C. Petersen, C. Jack, W. Jagust, J. Q. Trojanowski, A. W.
Toga, L. Beckett, The alzheimer’s disease neuroimaging initiative, Neuroimaging Clinics 15 (4) (2005)
869–877. doi:10.1016/j.nic.2005.09.008.
[48] LPBA dataset, http://sve.loni.ucla.edu/.
[49] OASIS dataset, http://www.oasis-brains.org/.
Under review as a conference paper at ICLR 2016
WHAT HAPPENED TO MY DOG IN THAT NETWORK:
UNRAVELING TOP-DOWN GENERATORS
IN CONVOLUTIONAL NEURAL NETWORKS
arXiv:1511.07125v1 [] 23 Nov 2015
Patrick W. Gallagher & Shuai Tang & Zhuowen Tu
Department of Cognitive Science
University of California, San Diego
{pwgallag,shuaitang93,ztu}@ucsd.edu
ABSTRACT
Top-down information plays a central role in human perception, but plays relatively little role in many current state-of-the-art deep networks, such as Convolutional Neural Networks (CNNs). This work seeks to explore a path by which
top-down information can have a direct impact within current deep networks. We
explore this path by learning and using “generators” corresponding to the network
internal effects of three types of transformation (each a restriction of a general
affine transformation): rotation, scaling, and translation. We demonstrate how
these learned generators can be used to transfer top-down information to novel
settings, as mediated by the “feature flows” that the transformations (and the associated generators) correspond to inside the network. Specifically, we explore
three aspects: 1) using generators as part of a method for synthesizing transformed
images — given a previously unseen image, produce versions of that image corresponding to one or more specified transformations; 2) “zero-shot learning” —
when provided with a feature flow corresponding to the effect of a transformation
of unknown amount, leverage learned generators as part of a method by which
to perform an accurate categorization of the amount of transformation, even for
amounts never observed during training; and 3) (inside-CNN) “data augmentation” — improve the classification performance of an existing network by using
the learned generators to directly provide additional training “inside the CNN”.
1 INTRODUCTION
Deep learning has many recent successes; for example, deep learning approaches have made strides
in automatic speech recognition (Hinton et al., 2012), in visual object recognition (Krizhevsky et al.,
2012), and in machine translation (Sutskever et al., 2014). While these successes demonstrate the
wide-ranging effectiveness of deep learning approaches, there yet remains useful information that
current deep learning is less able to bring to bear.
To take a specific example, consider that much of current deep learning practice is dominated by
approaches that proceed from input to output in a fundamentally bottom-up fashion. While current
performance is extremely impressive, these strongly bottom-up characteristics leave room for one
to ask whether providing deep learning with the ability to also incorporate top-down information
might open a path to even better performance.
The demonstrated role of top-down information in human perception (Stroop, 1935; Cherry, 1953;
Hill & Johnston, 2007; Ames Jr, 1951) provides a suggestive indication of the role that top-down
information could play in deep learning. Visual illusions (such as the “Chaplin mask”) provide
the clearest examples of the strong effect that top-down/prior information can have on human perception; the benefits of top-down information in human perception are widespread but subtler to
notice: prominent examples include color constancy (Kaiser & Boynton, 1996) and the interpretation of visual scenes that would otherwise be relatively meaningless (e.g. the “Dalmatian” image
(Marr, 1982)). Another particularly common experience is the human ability to focus on some specific conversation in a noisy room, distinguishing the relevant audio component among potentially
overwhelming interference.
Motivated by the importance of top-down information in human perception, as well as by the successful incorporation of top-down information in non-deep approaches to computer vision (Borenstein & Ullman, 2008; Tu et al., 2005; Levin & Weiss, 2009), we pursue an approach to bringing top-
down information into current deep network practice. The potential benefits from incorporating topdown information in deep networks include improved prediction accuracy in settings where bottomup information is misleading or insufficiently distinctive as well as generally improved agreement
when multiple classification predictions are made in a single image (such as in images containing
multiple objects). A particularly appealing direction for future work is the use of top-down information to improve resistance to “adversarial examples” (Nguyen et al., 2015; Szegedy et al., 2013).
1.1 Related Work
The incorporation of top-down information in visual tasks stands at the intersection of three fields:
cognitive science, computer vision, and deep learning. Succinctly, we find our inspiration in cognitive science, our prior examples in computer vision, and our actual instantiation in deep learning.
We consider these each in turn.
Cognitive science Even before Stroop’s work (Stroop, 1935) it has been noted that human perception of the world is not a simple direct path from, e.g., photons reaching the retina to an interpretation
of the world around us. Researchers have established a pervasive and important role for top-down
information in human perception (Gregory, 1970). The most striking demonstrations of the role of
top-down information in human perception come in the form of “visual illusions”, such as incorrectly perceiving the concave side of a plastic Chaplin mask to be convex (Hill & Johnston, 2007).
The benefits of top-down information are easy to overlook, simply because top-down information
is often playing a role in the smooth functioning of perception. To get a sense for these benefits,
consider that in the absence of top-down information, human perception would have trouble with
such useful abilities as the establishment of color constancy across widely varying illumination
conditions (Kaiser & Boynton, 1996) or the interpretation of images that might otherwise resemble
an unstructured jumble of dots (e.g., the “Dalmatian” image (Marr, 1982)).
Non-deep computer vision Observations of the role of top-down information in human perception have inspired many researchers in computer vision. A widely-cited work on this topic that
considers both human perception and machine perception is (Kersten et al., 2004). The chain of
research stretches back even to the early days of computer vision research, but more recent works
demonstrating the performance benefits of top-down information in tasks such as object perception
include (Borenstein & Ullman, 2008; Tu et al., 2005; Levin & Weiss, 2009).
Deep computer vision Two recent related works in computer vision are (Cohen & Welling, 2015;
Jaderberg et al., 2015). There are distinct differences in goal and approach, however. Whereas spatial
transformer networks (Jaderberg et al., 2015) pursue an architectural addition in the form of what one
might describe as “learned standardizing preprocessing” inside the network, our primary focus is on
exploring the effects (within an existing CNN) of the types of transformations that we consider. We
also investigate a method of using the explored effects (in the form of learned generators) to improve
vanilla AlexNet performance on ImageNet. On the other hand, (Cohen & Welling, 2015) state that
their goal is “to directly impose good transformation properties of a representation space” which
they pursue via a group theoretic approach; this is in contrast to our approach centered on effects
on representations in an existing CNN, namely AlexNet. They also point out that their approach
is not suitable for dealing with images much larger than 108x108, while we are able to pursue
an application involving the entire ImageNet dataset. Another recent work is (Dai & Wu, 2014),
modeling random fields in convolutional layers; however, they do not perform image synthesis, nor
do they study explicit top-down transformations.
Image generation from CNNs As part of our exploration, we make use of recent work on generating images corresponding to internal activations of a CNN. A special purpose (albeit highly intriguing) method is presented in (Dosovitskiy et al., 2015). The method of (Mahendran & Vedaldi, 2014)
is generally applicable, but the specific formulation of their inversion problem leads to generated
images that significantly differ from the images the network was trained with. We find the technique of (Dosovitskiy & Brox, 2015) to be most suited to our purposes and use it in our subsequent
visualizations.
Feature flows One of the intermediate steps of our process is the computation of “feature flows” —
vector fields computed using the SIFTFlow approach (Liu et al., 2011), but with CNN features used
in place of SIFT features. Some existing work has touched on the usefulness of vector fields derived
from “feature flows”. A related but much more theoretical diffeomorphism-based perspective is
(Joshi et al., 2000). Another early reference touching on flows is (Simard et al., 1998); however,
the flows here are computed from image pixels rather than from CNN features. (Taylor et al., 2010)
uses feature flow fields as a means of visualizing spatio-temporal features learned by a convolutional
2
Under review as a conference paper at ICLR 2016
gated RBM that is also tasked with an image analogy problem. The “image analogy” problem is
also present in the work (Memisevic & Hinton, 2007) focusing on gated Boltzmann machines; here
the image analogy is performed by a “field of gated experts” and the flow-fields are again used for
visualizations. Rather than pursue a special purpose re-architecting to enable the performance of
such “zero-shot learning”-type “image analogy” tasks, we pursue an approach that works with an
existing CNN trained for object classification: specifically, AlexNet (Krizhevsky et al., 2012).
2 Generator Learning
We will focus our experiments on a subset of affine image operations: rotation, scaling, and translation. In order to avoid edge effects that might arise when performing these operations on images
where the object is too near the boundary of the image, we use the provided meta information to
select suitable images.
2.1 Pipeline for Generator Learning
We select from within the 1.3M images of the ILSVRC2014 CLS-LOC task (Russakovsky et al.,
2014). In our experiments, we will rotate/scale/translate the central object; we wish for the central
object to remain entirely in the image under all transformations. We will use bounding box information to ensure that this will be the case: we select all images where the bounding box is centered,
more square than rectangular, and occupies 40-60% of the pixels in the image. We find that 91
images satisfy these requirements; we will subsequently refer to these 91 images as our “original
images”.
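For concreteness, the selection criteria just described can be sketched as a filter over the ILSVRC bounding-box annotations (Pascal-VOC-style XML). This is our illustrative reconstruction, not the authors' code; the tolerance for "centered" and the threshold for "more square than rectangular" are assumptions, since the paper states only the 40-60% coverage bound.

```python
import xml.etree.ElementTree as ET

def is_suitable(annotation_path, center_tol=0.1, aspect_min=0.75):
    """Filter one ILSVRC bounding-box annotation by the three criteria."""
    root = ET.parse(annotation_path).getroot()
    W = int(root.find("size/width").text)
    H = int(root.find("size/height").text)
    box = root.find("object/bndbox")
    xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
    xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
    bw, bh = xmax - xmin, ymax - ymin
    # "centered": box center within center_tol of the image center (assumed tolerance)
    centered = (abs((xmin + xmax) / 2.0 - W / 2.0) <= center_tol * W
                and abs((ymin + ymax) / 2.0 - H / 2.0) <= center_tol * H)
    # "more square than rectangular": aspect ratio near 1 (assumed threshold)
    squarish = min(bw, bh) / float(max(bw, bh)) >= aspect_min
    # "occupies 40-60% of the pixels in the image" (stated in the paper)
    coverage = 0.40 <= bw * bh / float(W * H) <= 0.60
    return centered and squarish and coverage
```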
2.1.1 Generating Transformed Image Pairs
We use rotation as our running example. For ease of reference, we will use the notation Ij [θ] to
denote a transformed version of original image Ij in which the central object has been rotated to an
orientation of θ degrees; the original image is Ij [0◦ ] = Ij . Using this notation, we can consider
image pairs in which the difference between one image and the other is the amount of rotation of the
central object. For example, in the pair (Ij [θinit ] , Ij [θinit + ∆θ]), the central object is at orientation
θinit in the first image and at orientation θinit + ∆θ in the second.
[Figure 1 diagram: an original image (with the area to be transformed marked) is rotated by 10 degrees, scaled by 1.3x, or translated up by 10 pixels to give a transformed image; the remaining panels show the induced conv1, conv2, conv3, and conv5 flow fields.]
Figure 1: Illustration of AlexNet feature flow fields associated with the specified transformations. Best viewed
in color, on-screen.
To begin, we will consider 72 regularly-spaced values of the initial orientation angle, θinit ∈
{0◦ , 5◦ , 10◦ , . . . , 355◦ } , but only one value of rotation amount, ∆θ = 10◦ . This means that for
each of the 91 original images, we will have 72 pairs of the form (Ij [θinit ] , Ij [θinit + ∆θ]). These
6,552 total pairs will be the focus of our subsequent processing in the rotation experiments.
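A minimal sketch of the pair-generation step, assuming PIL and whole-image rotation about the center (the paper rotates the central object; the exact rendering procedure is not specified):

```python
from PIL import Image

def rotation_pairs(path, init_angles=range(0, 360, 5), dtheta=10):
    """All (I_j[theta_init], I_j[theta_init + dtheta]) pairs for one
    original image: 72 initial angles x 1 rotation amount."""
    img = Image.open(path)
    pairs = []
    for theta in init_angles:
        a = img.rotate(theta, resample=Image.BILINEAR)
        b = img.rotate(theta + dtheta, resample=Image.BILINEAR)
        pairs.append((a, b))
    return pairs  # 91 images x 72 pairs = 6,552 pairs in total
```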
2.1.2 Computing AlexNet Features
Next, for each transformed image pair, we use the Caffe (Jia et al., 2014) library’s pretrained reference AlexNet model to compute the AlexNet features associated with each image in the pair. Using
the notation Fj [θ] to denote the collection of all AlexNet feature values resulting when image Ij [θ]
is the input, this means that we now have 6,552 “collected AlexNet features” pairs of the form
3
Under review as a conference paper at ICLR 2016
(Fj [θinit ] , Fj [θinit + ∆θ]). AlexNet has 8 layers with parameters: the first 5 of these are convolutional layers (henceforth referred to as conv1, conv2, . . ., conv5); the final 3 are fully connected
layers (henceforth referred to as fc6, fc7, fc8). Our attention will be focused on the convolutional
layers rather than the fully connected layers, since the convolutional layers retain "spatial layout"
that corresponds to the original image while the fully connected layers lack any such spatial layout.
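Feature extraction as described here can be sketched with pycaffe and the pretrained reference AlexNet; the file names below are placeholders for the standard model files:

```python
import caffe  # pycaffe, assuming the Caffe reference AlexNet is available

# Placeholder paths for the standard Caffe reference model files.
net = caffe.Net('deploy.prototxt', 'bvlc_alexnet.caffemodel', caffe.TEST)

def alexnet_conv_features(preprocessed_image):
    """Return F[theta]: conv-layer activations for one input, keyed by
    layer name. Assumes the input is already in Caffe's expected
    (1, 3, 227, 227) BGR, mean-subtracted format."""
    net.blobs['data'].data[...] = preprocessed_image
    net.forward()
    return {layer: net.blobs[layer].data.copy()
            for layer in ('conv1', 'conv2', 'conv3', 'conv4', 'conv5')}
```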
2.1.3 Computing Per-Layer Feature Flows
For ease of reference, we introduce the notation Fj,` [θinit ] to refer to the AlexNet features
at layer ` when the input image is Ij [θinit ] . From the “entire network image features” pair
(Fj [θinit ] , Fj [θinit + ∆θ]), we focus attention on one layer at a time; for layer ` the relevant
pair is then (Fj,` [θinit ] , Fj,` [θinit + ∆θ]) . In particular, at each convolutional layer, for each such
(Fj,` [θinit ] , Fj,` [θinit + ∆θ]) pair we will compute the “feature flow” vector field that best describes
the “flow” from the values Fj,` [θinit ] to the values Fj,` [θinit + ∆θ].
We compute these feature flow vector fields using the SIFTFlow method (Liu et al., 2011) —
however, instead of computing “flow of SIFT features”, we compute “flow of AlexNet features”. See Figure 1 for an illustration of these computed feature flow vector fields. For a
layer ` feature pair (Fj,` [θinit ] , Fj,` [θinit + ∆θ]) , we refer to the corresponding feature flow as
Vj,` [θinit , θinit + ∆θ]. Recalling that we only compute feature flows for convolutional layers, collecting the feature flow vector fields Vj,` [θinit , θinit + ∆θ] for ` ∈ {conv1, . . . , conv5} results in a
total¹ of 8,522 values; we collectively refer to the collected-across-conv-layers feature flow vector
fields as Vj,: [θinit , θinit + ∆θ]. If we flatten/vectorize these feature flow vector field collections for
each pair and then row-stack these vectorized flow fields, we obtain a matrix with 6,552 rows (one
row per image pair) and 8,522 columns (one column per feature flow component value in conv1
through conv5).
2.1.4 Feature Flow PCA
In order to characterize the primary variations in the collected feature flow vector fields, we perform
PCA on this matrix of 6,552 rows/examples and 8,522 columns/feature flow component values. We
retain the first 10 eigenvectors/principal component directions (as “columns”); each of these contains
8,522 feature flow vector field component values.
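The PCA step amounts to centering the 6,552 × 8,522 flow matrix and keeping the top 10 principal directions; a numpy sketch (ours, using an SVD rather than any particular PCA library):

```python
import numpy as np

# X: one row per image pair, one column per flow component value,
# i.e. X.shape == (6552, 8522), built by flattening and row-stacking
# the per-pair collected feature flow fields V_{j,:}.
def flow_pca(X, n_components=10):
    """Mean flow field M and leading 'eigen' flow fields U_1..U_10."""
    M = X.mean(axis=0)
    # Economy-size SVD of the centered data; rows of Vt are the
    # principal component directions.
    _, _, Vt = np.linalg.svd(X - M, full_matrices=False)
    return M, Vt[:n_components]       # shapes: (8522,), (10, 8522)
```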
[Figure 2 panels: rows show the mean, PC 1, and PC 2 flow fields; columns show conv1, conv2, conv3, and conv5.]
Figure 2: PCA components of the CNN feature flow fields associated with 10◦ of rotation. The first, second,
and third rows show, respectively, the mean, first, and second principal components. The first, second, third,
and fourth columns show, respectively, the results in conv1, conv2, conv3, and conv5. Best viewed on screen.
We denote these "eigen" feature flow fields as {U1 , . . . , U10 } , with each Ui ∈ R^{8,522} . Here the use
of an upper case letter is intended to recall that, after reshaping, we can plot these flow fields in a
spatial layout corresponding to that of the associated AlexNet convolutional layers. We also recall
¹ 8,522 = 2 × 55² + 2 × 27² + 3 × 2 × 13²
4
Under review as a conference paper at ICLR 2016
that these "eigen" feature flow fields were computed based on feature pairs with a 10◦ rotation. Together with the mean feature flow field, subsequently denoted M ∈ R^{8,522} , these 10 "stacked and
flattened/vectorized ‘eigen’ feature flow fields” (as “columns”) of 8,522 feature flow component values provide us the ability to re-represent each of the 6,552 “per-pair stacked and flattened/vectorized
feature flow fields” in terms of 11 = 1+10 coefficients. When we re-represent the 6,552 example
“stacked and flattened/vectorized ’pair’ feature flow fields”, the coefficient associated with the mean
will always be equal to 1; however, we will shortly consider a setting in which we will allow the
coefficient associated with the mean to take on values other than 1.
2.1.5 Using Feature Flow PCA to Obtain Bases for Expressing Generators
Taken together, the mean feature flow field M and the ‘eigen’ feature flow fields {U1 , . . . , U10 } ,
provide us with the ability to produce (up to some minimized value of mean squared error) “rerepresentations” of each of the 6,552 example “stacked and flattened/vectorized ‘10◦ rotation’ feature flow fields”.
These 11 vectors were determined from the case of feature flow fields associated with 10◦ rotations.
We next seek to use these 11 vectors (together with closely related additions described shortly) as
bases in terms of which we seek to determine alternative representations of flow fields associated
with other amounts of rotation. In particular, we will seek to fit regression coefficients for the representation of flow fields associated with feature pairs derived when there has been rotation of varying
amounts of the central object. Specifically, we will follow the steps of the feature flow computation
process detailed earlier in this Section, but now using ∆θ ∈ {10◦ , 20◦ , 30◦ , 40◦ , 50◦ , 60◦ } together
with the previous θinit ∈ {0, 5, 10, . . . , 355} .
The regression equation associated with each feature flow example will be of the form
    U [∆θ] · w = Vj,: [θinit , θinit + ∆θ] ,        (1)
where U [∆θ] ∈ R^{8,522×33} is a matrix containing 11 groups of 3 columns, each group of the form (Ui , (∆θ/10) Ui , (∆θ/10)² Ui ); there is one such group for each of {M, U1 , . . . , U10 }. We do this so that the basis vectors provided in U [∆θ] will be different in different rotation conditions, enabling better fits. The vector w ∈ R^{33} can similarly be regarded as containing 11 groups of 3 coefficient values, say w = (aM , bM , cM , . . . , aU10 , bU10 , cU10 )^T . Finally, the right hand side
Vj,: [θinit , θinit + ∆θ] is an instance of the collected-across-conv-layers feature flow vector fields described earlier. We have one of the above regression expressions for each “transform image” pair
(Ij [θinit ] , Ij [θinit + ∆θ]); since in our current setting we have 91 original images, 72 initial orientation angles θinit ∈ {0, 5, 10, . . . , 350, 355}, and 6 rotation amounts ∆θ ∈ {10, 20, . . . , 60}, we have a total of 39,312 such pairs. For ease of reference, we will refer to the vertically stacked basis matrices (each of the form U [∆θ] , with ∆θ being the value used in computing the associated "transform image" pair as in the example regression described in Eqn. 1) as U ∈ R^{3.4×10^8 × 33}. Similarly, we will refer to the vertically stacked "feature flow vector field" vectors, each of the form Vj,: [θinit , θinit + ∆θ] ∈ R^{8,522}, as V.
Our "modified basis" regression problem² is thus succinctly expressed as

    minimize_{w ∈ R^{33}}  (1/2) ‖Uw − V‖₂²        (2)

We will refer to the minimizing argument as w_lsq ∈ R^{33}.
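Putting the pieces together, the basis construction of Eqn. 1 (as reconstructed above) and the least-squares fit of Eqn. 2 can be sketched as follows; for the full 3.4 × 10^8-row system one would accumulate normal equations rather than materialize U:

```python
import numpy as np

def basis_matrix(M, U, dtheta):
    """U[dtheta] in R^{8522 x 33}: one group of 3 columns per basis flow
    field in {M, U_1, ..., U_10}, scaled by powers 0, 1, 2 of dtheta/10
    (our reconstruction of the basis definition above)."""
    s = dtheta / 10.0
    cols = []
    for u in [M] + list(U):
        cols += [u, s * u, (s ** 2) * u]
    return np.stack(cols, axis=1)                      # (8522, 33)

def fit_wlsq(M, U, examples):
    """examples: list of (dtheta, V) pairs, V a flattened flow field."""
    U_big = np.concatenate([basis_matrix(M, U, d) for d, _ in examples])
    V_big = np.concatenate([v for _, v in examples])
    w_lsq, *_ = np.linalg.lstsq(U_big, V_big, rcond=None)
    return w_lsq                                       # (33,)

def generator(M, U, w_lsq, dtheta):
    """G[dtheta] = U[dtheta] . w_lsq, the predicted generator flow field."""
    return basis_matrix(M, U, dtheta) @ w_lsq          # (8522,)
```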
3 Illustration of Use of Learned Generators
3.1 Transformations via Learned Generators
We can use these "least squares" coefficient values w_lsq ∈ R^{33} to "predict" feature flow fields associated with a specified number of degrees of rotation. More particularly, we can do this for rotation
degree amounts other than the ∆θ ∈ {10◦ , 20◦ , 30◦ , 40◦ , 50◦ , 60◦ } degree amounts used when we
generated the 39, 312 “transform image training pairs” used in our least-squares regression calibration Eqn. 2. To obtain the desired generator, we decide what specific “number of degrees of rotation”
² For specificity, we describe the row dimension of U: a total of 3.4 × 10^8 rows that come from 39,312 = 91 × 72 × 6 vertically stacked one-per-image-pair matrices, each with 8,522 rows. Thus, the total number of rows is 335,016,864 (≈ 3.4 × 10^8) = 91 original images × 72 initial orientations × 6 rotation amounts × 8,522 entries in the feature flow field collection per image pair.
is desired; using this specified degree amount and the 11 basis vectors (learned in the 10 degree rotation case we performed PCA on previously), generate the corresponding “33 column effective basis
matrix" U [∆θ] . Our sought-for generator is then U [∆θ] · w_lsq , an element of R^{8,522}. For specificity, we will refer to the generator arising from a specified rotation angle of ∆θ as G [∆θ] = U [∆θ] · w_lsq .
We could describe generators as “predicted specified feature flows”; however, since we use these to
generate novel feature values in the layers of the network (and since this description is somewhat
lengthy), we refer to them as “generator flow fields”, or simply “generators”.
[Figure 3 panels: (a) Input; (b) "Inverted"; (c) "rotate 30"; (d) "rotate -30"; (e) "scale 1.3x"; (f) "scale 0.75x"; (g) "left 30"; (h) "up 30".]
Figure 3: Illustration of applying learned generators to CNN features of a novel input image. (a) Input image.
(b) “Inverted image” (Dosovitskiy & Brox, 2015) from conv1 features of (a). (c), (d), (e), (f), (g), and (h)
show, respectively, “inverted images” from conv1 features obtained by applying learned generator flow fields
associated with -30◦ rotation, 30◦ rotation, scaling by a factor of 1.3, scaling by a factor of 0.75, translation 30
pixels to the left, and translation 30 pixels up.
We may describe the use of these learned generators as follows: given a collection of CNN features,
we can apply a learned generator to obtain (approximations to) the feature values that would have
arisen from applying the associated transformation to the input image.
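Concretely, "applying" a generator means warping each conv-layer feature map along the predicted flow. A numpy sketch (ours); the sampling convention (pull-back with nearest-neighbour rounding, flow stored as (dy, dx)) is an assumption:

```python
import numpy as np

def warp_features(F, flow):
    """Apply a generator flow field to one conv layer.
    F: (C, H, W) activations; flow: (dy, dx) displacements, each (H, W)."""
    C, H, W = F.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.rint(ys - flow[0]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs - flow[1]).astype(int), 0, W - 1)
    return F[:, src_y, src_x]
```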
We now seek to investigate the use of generator flow fields, generically G [∆θ], in order to produce
an approximation of the exact CNN features that would be observed if we were to e.g. rotate an original image and compute the resulting AlexNet features. As a specific example, consider a “transform
image pair” (Ij [θinit ] , Ij [θinit + ∆θ]). In our notation, the corresponding AlexNet feature response
map pair is (Fj [θinit ] , Fj [θinit + ∆θ]) . We seek to use our generator, G [∆θ] to provide an estimate
of Fj [θinit + ∆θ] given only Fj [θinit ] .
Visualizations of the CNN features are often difficult to interpret. To provide an interpretable evaluation of the quality of the learned generators, we use the AlexNet inversion technique of (Dosovitskiy
& Brox, 2015). Applying our learned generators, the results in Fig. 3 indicate that the resulting
CNN features (arrived at using information learned in a top-down fashion) closely correspond to
those that would have been produced via the usual bottom-up process. As a more quantitative evaluation, we also check the RMS error and mean absolute deviation between network internal layer
features “generated” using our learned generators and the corresponding feature values that would
have arisen through “exact” bottom-up processing. For example, when looking at the 256 channels
of AlexNet conv5, the RMS of the difference between generator-produced features and bottom-up
features associated with “translate left by 30” is 4.69; the mean absolute deviation is 1.042. The
RMS of the difference between generator-produced and bottom-up associated with “scale by 1.3x”
is 1.63; the mean absolute deviation is 0.46.
3.2 Zero-Shot Learning
We have seen that the learned generators can be used to produce CNN features corresponding to
various specified transformations of provided initial CNN features. We next seek to explore the use
of these learned generators in support of a zero-shot learning task. We will again use our running
example of rotation.
A typical example of “zero-shot learning”: “If it walks like a duck...” We first describe a typical
example of zero-shot learning. Consider the task of classifying central objects in e.g. ImageNet
images of animals. A standard zero-shot learning approach to this task involves two steps. In the first
step, we learn a mapping from raw input data (for example, a picture of a dog) to some intermediate
representation (for example, scores associated with “semantic properties” such as “fur is present”,
“wings are present”, etc.). In the second step, we assume that we have (from Wikipedia article text,
for example) access to a mapping from the intermediate “semantic property” representation to class
label. For example, we expect Wikipedia article text to provide us with information such as “a zebra
is a hoofed mammal with fur and stripes”.
If our training data is such that we can produce accurate semantic scores for “hoofs are present”,
“stripes are present”, “fur is present”, we can potentially use the Wikipedia-derived association
between “zebra” and its “semantic properties” to bridge the gap between the “semantic properties”
predicted from the raw input image and the class label associated with “zebra”; significantly, so long
as the predicted “semantic properties” are accurate, the second part of the system can output “zebra”
whether or not the training data ever contained a zebra. To quote the well-known aphorism: “If it
walks like a duck, swims like a duck, and quacks like a duck, then I call that thing a duck.”
Zero-shot learning in our context In the typical example of zero-shot learning described above,
the task was to map raw input data to a vector of predicted class label probabilities. This task was
broken into two steps: first map the raw input data to an intermediate representation (“semantic
property scores”, in the animal example), then map the intermediate representation to a vector of
predicted class label probabilities. The mapping from raw input data to intermediate representation
is learned during training; the mapping from intermediate representation to class label is assumed to
be provided by background information or otherwise accessible from an outside source (determined
from Wikipedia, in the animal example).
In our setting, we have (initial image, transformed image) pairs. Our overall goal is to determine
a mapping from (initial image, transformed image) to “characterization of specific transformation
applied”. A specific instance of the overall goal might be: when presented with, e.g., an input pair
(image with central object, image with central object rotated 35◦ ) return output “35◦ ”. Analogous
to the animal example discussed above, we break this overall mapping into two steps: The first
mapping step takes input pairs (initial image, transformed image) to “collected per-layer feature
flow vector fields”. The second mapping step takes an input of “collected per-layer feature flow
vector fields” to an output of “characterization of specific transformation applied”. Note that, in
contrast to the animal example, our context uses learning in the second mapping step rather than
the first. A specific example description of this two-part process: Take some previously never-seen
new image with a central object. Obtain or produce another image in which the central object has
been rotated by some amount. Push the original image through AlexNet and collect the resulting
AlexNet features for all layers. Push the rotated-central-object image through AlexNet and collect
the resulting AlexNet features for all layers. For each layer, compute the “feature flow” vector field;
this is the end of the first mapping step. The second mapping step takes the collection of computed
“per-layer feature flow vector fields” and predicts the angle of rotation applied between the pair of
images the process started with. In our context, we use our learned generator in this second mapping
step. We now discuss the details of our approach to “zero-shot learning”.
Details of our "zero-shot learning" task The specific exploratory task we use to evaluate the feasibility of zero-shot learning (mediated by top-down information distilled from the observed behavior of network internal feature flow fields) can be described as follows: We have generated (image-with-central-object, image-with-central-object-rotated) pairs. We have computed feature flows for
these pairs. We have performed PCA on these feature flows to determine U [∆θ] ∈ R^{8,522×33} , an "effective basis matrix" associated with a rotation angle of ∆θ = 10◦ . We have fit a calibration regression, resulting in w_lsq ∈ R^{33} , the vector of least-squares coefficients with which we can make
feature flow predictions in terms of the “effective basis matrix”. Our initial “zero-shot learning”
prediction task will be to categorize the rotation angle used in the image pair as “greater than 60◦ ”
or “less than 60◦ ”.
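A sketch of this zero-shot categorization, reusing the `generator` helper from the earlier sketch. The candidate grid and the least-squares matching criterion are our assumptions; the paper does not spell this step out:

```python
import numpy as np

def categorize_rotation(V_obs, M, U, w_lsq, candidates=range(5, 365, 5)):
    """Match the observed feature flow against predicted generator flows
    and threshold the best-matching angle at 60 degrees."""
    errs = [np.linalg.norm(generator(M, U, w_lsq, d) - V_obs)
            for d in candidates]
    best = list(candidates)[int(np.argmin(errs))]
    return "greater than 60" if best > 60 else "less than 60"
```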
Figure 4: Desired categorization output: “Rotated less than 60◦ ”.
We compare the performance of our approach to a more standard approach: We train a CNN in
a "Siamese" configuration to take as input pairs of the form (image, image-with-central-object-rotated) and to produce as output a prediction of the angle of rotation between the images in the pair.
One branch of the “Siamese” network receives the initial image as input; the other branch receives
input with the central object rotated. Each branch is structured to match AlexNet layers from conv1
up to pool5 — that is, up to but not including fc6. We then stack the channels from the respective
pool5 layers from each branch. The resulting stack is provided as input to a fully-connected layer,
fc6, with 4096 units; fc7 takes these 4096 units of input and produces 4096 units of output; finally,
fc8 produces a single scalar output: the probability that the rotation angle used in the image pair was
“greater than 60◦ ” or “less than 60◦ ”.
On a test set of 1,600 previously unseen image pairs with orientation angles ranging through 360◦ ,
our initial zero-shot learning approach yields correct categorization 74% of the time. We structure
our comparison question as “How many image pairs are required to train the ‘Siamese’ network to
a level of prediction performance comparable to the zero-shot approach?” We observe that with 500
training pairs, the “Siamese” network attains 62% correct categorization; with 2,500 pairs performance improves to 64%; with 12,500 pairs, to 86%; and finally with 30,000 pairs, to 96%.
3.3 (Network Internal) "Data Augmentation"
We have previously illustrated our ability to use learned generators to produce a variety of “predicted” CNN feature response maps, each of which corresponds to some exact CNN feature response map that would have arisen in a standard bottom-up approach; we will now describe how
we can use these learned generators to perform (network internal) "data augmentation". To ground our
discussion, consider an initial image Ij [θinit ] . If one were to perform standard “data augmentation”,
one might apply a variety of rotations to the initial image, say from a possible collection of n rotation angle amounts {(∆θ)1 , (∆θ)2 , . . . , (∆θ)n } , where we have chosen our notation to emphasize
that the index is over possible “∆θ” rotation angle values. The “data augmentation” process would
involve n corresponding images, {Ij [θinit + (∆θ)1 ] , . . . , Ij [θinit + (∆θ)n ]} . Network training then
proceeds in the usual fashion: for whichever transformed input image, the corresponding AlexNet
feature collection in {Fj [θinit + (∆θ)1 ] , . . . , Fj [θinit + (∆θ)n ]} would be computed and used to produce loss values and backpropagate updates to the network parameters.
Our observation is that we can use our learned generators to produce, in a network-internal fashion, AlexNet internal features akin to those listed in {Fj [θinit + (∆θ)1 ] , . . . , Fj [θinit + (∆θ)n ]}.
Specifically, in our running rotation example we learned to produce predictions (for each layer of
the network) of the flow field associated with a specified rotation angle. As mentioned previously,
we refer to the learned generator at a layer ` associated with a rotation angle of ∆θ as G` [∆θ] . We
regard the process of applying a learned generator, for example G` [(∆θ)1 ], to the layer ` AlexNet
features, Fj,` [θinit ], as a method of producing feature values akin to Fj,` [θinit + (∆θ)1 ] . To emphasize this notion, we will denote the values obtained by applying the learned generator flow field
G` [(∆θ)1 ] to the layer ` AlexNet features, Fj,` [θinit ] as Φj,` [θinit ; (∆θ)1 ] (with the entire collection of layers denoted as Φj [θinit ; (∆θ)1 ]). Using our newly-established notation, we can express
our proposed (network internal) “data augmentation” as follows: From some initial image Ij [θinit ] ,
compute AlexNet features Fj [θinit ] . For any desired rotation, say ∆θ, determine the associated
learned generator flow field G` [∆θ] . Apply this generator flow field to, for example, Fj,` [θinit ] to
obtain “predicted feature values” Φj,` [θinit ; ∆θ] . The standard feedforward computation can then
proceed from layer ` to produce a prediction, receive a loss, and begin the backpropagation process
by which we can update our network parameters according to this (network internal) “generated
feature example”.
Table 1: ImageNet validation set accuracy (in %).

    Method                                                    top-1 (m-view)   top-5 (m-view)
    AlexNet (Krizhevsky et al., 2012)                         60.15            83.93
    AlexNet after 5 additional epochs of generator training   60.52            84.35
The backpropagation process involves a subtlety. Our use of the generator means that the forward path
through the network experiences a warp in the features. To correctly propagate gradients during the
backpropagation process, the path that the gradient values follow should experience the “(additive)
inverse” of the forward warp. We can describe the additive inverse of our gridded vector field fairly
simply: Every vector in the initial field should have a corresponding vector in the inverse field; the
component values should be negated and the root of the “inverse vector” should be placed at the head
of the “forward vector”. The “inverse field” thus cancels out the “forward field”. Unfortunately, the
exact “inverse vector” root locations will not lie on the grid used by the forward vector field. We
obtain an approximate inverse vector field by negating the forward vector field components. In tests,
we find that this approximation is often quite good; see Fig. A2. Using this approximate inverse
warp, our learned generator warp can be used in the context of network internal data augmentation
during training. See Fig. 5 for an illustration.
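The forward warp together with its negated backward warp can be sketched as a custom autograd operation. We use PyTorch here as a stand-in for the paper's Caffe implementation; this is our illustration, not the authors' code:

```python
import torch

class GeneratorWarp(torch.autograd.Function):
    """Warp features by a generator flow field on the forward pass and
    warp gradients by the negated ("approximately inverse") flow on the
    backward pass."""

    @staticmethod
    def _warp(x, flow):
        # Nearest-neighbour pull-back along flow = (dy, dx), each (H, W).
        n, c, h, w = x.shape
        ys, xs = torch.meshgrid(torch.arange(h, device=x.device),
                                torch.arange(w, device=x.device),
                                indexing="ij")
        sy = (ys - flow[0]).round().long().clamp(0, h - 1)
        sx = (xs - flow[1]).round().long().clamp(0, w - 1)
        return x[:, :, sy, sx]

    @staticmethod
    def forward(ctx, x, flow):
        ctx.save_for_backward(flow)
        return GeneratorWarp._warp(x, flow)

    @staticmethod
    def backward(ctx, grad_out):
        (flow,) = ctx.saved_tensors
        # Approximate inverse: negate the flow components, as in the text.
        return GeneratorWarp._warp(grad_out, -flow), None
```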
We now discuss the use of our proposed network internal data augmentation to improve the performance of AlexNet on ImageNet. We train using the 1.3M images of the ILSVRC2014 CLS-LOC
[Figure 5 diagram: conv5 features, a module applying a random choice of learned generator (e.g. "scale by 1.3x"), warped conv5, continuing up to fc8.]
Figure 5: Illustration of network internal data augmentation, more succinctly described as “generator training”.
On the left, we show a schematic of the modified AlexNet architecture we use. The primary difference is the
incorporation at conv5 of a module applying a randomly selected learned generator flow field. On the right,
we provide comparison between five selected conv5 channels: (lower row) before applying the “scale 1.3x”
learned generator flow field; (upper row) after applying the generator flow field.
task. For each batch of 256 images during training, we randomly select one of six of our learned
generator flow fields to apply to the initial features in conv5. Specifically, we randomly select from
one of +30◦ rotation, -30◦ rotation, 1.3x scaling, 0.75x scaling, translation 30 pixels left, or translation 30 pixels up. We apply the (approximate) inverse warp when backpropagating through conv5.
We evaluate performance on the 50k images of ILSVRC2014 CLS-LOC validation set; see Table 1.
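Training with the warp module then needs only a small change to the forward pass; `lower_layers` and `upper_layers` are hypothetical handles to the network split around conv5:

```python
import random

# To be filled with the six learned conv5 generator flow fields.
FLOWS = []

def forward_with_generator_training(lower_layers, upper_layers, images):
    conv5 = lower_layers(images)                  # data -> conv5
    if FLOWS:
        flow = random.choice(FLOWS)               # one of six per batch
        conv5 = GeneratorWarp.apply(conv5, flow)  # backward uses -flow
    return upper_layers(conv5)                    # conv5 -> fc8
```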
Acknowledgments This work is supported by NSF IIS-1216528 (IIS-1360566), NSF award IIS-0844566 (IIS-1360568), and a Northrop Grumman Contextual Robotics grant. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for
this research. We thank Chen-Yu Lee and Jameson Merkow for their assistance, and Saining Xie
and Xun Huang for helpful discussion.
References
Ames Jr, Adelbert. Visual perception and the rotating trapezoidal window. Psychological Monographs: General and Applied, 65(7):i, 1951.
Borenstein, Eran and Ullman, Shimon. Combined top-down/bottom-up segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(12):2109–2125, 2008.
Cherry, E Colin. Some experiments on the recognition of speech, with one and with two ears. The
Journal of the acoustical society of America, 25(5):975–979, 1953.
Cohen, T and Welling, M. Transformation Properties of Learned Visual Representations. In
ICLR2015, 2015.
Dai, Jifeng and Wu, Ying-Nian. Generative Modeling of Convolutional Neural Networks. arXiv
preprint arXiv:1412.6296, 2014.
Dosovitskiy, Alexey and Brox, Thomas. Inverting convolutional networks with convolutional networks. arXiv preprint arXiv:1506.02753, 2015.
Dosovitskiy, Alexey, Tobias Springenberg, Jost, and Brox, Thomas. Learning to Generate Chairs
With Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, pp. 1538–1546, 2015.
Gregory, Richard Langton. The intelligent eye. 1970.
Hill, Harold C and Johnston, Alan. The hollow-face illusion: Object specific knowledge, general
assumptions or properties of the stimulus. 2007.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George, Mohamed, Abdel-Rahman, Jaitly, Navdeep,
Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara, and Kingsbury, Brian. Deep
neural networks for acoustic modeling in speech recognition: The shared views of four research
groups. In IEEE Signal Processing Magazine, 2012.
Jaderberg, Max, Simonyan, Karen, Zisserman, Andrew, and Kavukcuoglu, Koray. Spatial transformer networks. arXiv preprint arXiv:1506.02025, 2015.
Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross,
Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, 2014.
Joshi, Sarang C, Miller, Michael, et al. Landmark matching via large deformation diffeomorphisms.
Image Processing, IEEE Transactions on, 9(8):1357–1370, 2000.
Kaiser, Peter K and Boynton, Robert M. Human color vision. 1996.
Kersten, Daniel, Mamassian, Pascal, and Yuille, Alan. Object perception as Bayesian inference.
Annu. Rev. Psychol., 55:271–304, 2004.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet Classification with Deep
Convolutional Neural Networks. In NIPS, 2012.
Levin, Anat and Weiss, Yair. Learning to combine bottom-up and top-down segmentation. International Journal of Computer Vision, 81(1):105–118, 2009.
Liu, Ce, Yuen, Jenny, and Torralba, Antonio. Sift flow: Dense correspondence across scenes and its
applications. PAMI, 33(5):978–994, 2011.
Mahendran, Aravindh and Vedaldi, Andrea. Understanding Deep Image Representations by Inverting Them. IJCV, 2(60):91–110, 2014.
Marr, David. Vision: A computational approach, 1982.
Memisevic, Roland and Hinton, Geoffrey. Unsupervised learning of image transformations. In
Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pp. 1–8. IEEE,
2007.
Nguyen, Anh, Yosinski, Jason, and Clune, Jeff. Deep Neural Networks Are Easily Fooled: High
Confidence Predictions for Unrecognizable Images. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pp. 427–436, 2015.
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang,
Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei,
Li. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2014.
Simard, Patrice Y, LeCun, Yann A, Denker, John S, and Victorri, Bernard. Transformation invariance
in pattern recognition—tangent distance and tangent propagation. In Neural networks: tricks of
the trade, pp. 239–274. Springer, 1998.
Stroop, J Ridley. Studies of interference in serial verbal reactions. Journal of experimental psychology, 18(6):643, 1935.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to Sequence Learning with Neural
Networks. In NIPS, 2014.
Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow,
Ian, and Fergus, Rob. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199,
2013.
Taylor, Graham W, Fergus, Rob, LeCun, Yann, and Bregler, Christoph. Convolutional learning of
spatio-temporal features. In Computer Vision–ECCV 2010, pp. 140–153. Springer, 2010.
Tu, Zhuowen, Chen, Xiangrong, Yuille, Alan L, and Zhu, Song-Chun. Image parsing: Unifying
segmentation, detection, and recognition. IJCV, 63(2):113–140, 2005.
A1 Supplementary Materials
[Figure A1 panels: rows conv1, pool1, conv2; columns "rotate 10", "scale by 1.3x", "scale by 0.75x".]
Figure A1: Visualizations of the mean “feature flow” as computed across the 91 “original images” whose
selection is described in Section 2.1. The leftmost column contains visualizations computed from features
arrived at with input image pairs that differ by 10◦ rotation of the central object; the center column, with input
image pairs that differ by the central object being scaled by 1.3x; the rightmost column, with input image pairs
that differ by the central object being scaled by 0.75x. Moving from top to bottom within each column, the
feature flow fields are shown, respectively, for conv1, then pool1, then conv2.
[Figure A2 panels: "Inverted images" from conv1 features with generators applied: none; "rotate 30" and its negation; "scale by 1.3x" and its negation; "scale by 0.75x" and its negation; "translate left by 30" and its negation; "translate up by 30" and its negation.]
Figure A2: Here we confirm that the “negation” of a generator flow field is (both qualitatively and quantitatively) a good approximation to the additive inverse of that generator flow field. Since the inverted images from
conv1 have more detail, we perform our qualitative evaluation with conv1. Since our actual training uses conv5,
we perform our quantitative evaluation with conv5. In each entry above, we begin with AlexNet conv1 features
from the original “taco cat” image. The “inverted image” (Dosovitskiy & Brox, 2015) corresponding to these
untouched features is found in the top left. In each other entry we apply a different learned generator followed
by its “negation”. The close correspondence between the images “inverted” from the resulting features and
the image “inverted” from the untouched features confirms the quality of the approximation. Moving on to
quantitative evaluation, we find that the feature values arising from applying a generator flow field followed by
its negation differs from the original AlexNet conv5 feature values across the 256 channels of conv5 as follows:
approximate inverse of “rotate -30” yields 0.37 RMS (0.09 mean absolute difference); approximate inverse of
“scale by 1.3x” yields 0.86 RMS (0.19 mean absolute difference); for “translation 30 left”, the approximation
incurs error at the boundaries of the flow region, yielding 3.96 RMS (but 0.9 mean absolute deviation).
Heap Abstractions for Static Analysis
Vini Kanvar and Uday P. Khedker
arXiv:1403.4910v5 [] 13 May 2015
Department of Computer Science and Engineering
Indian Institute of Technology Bombay
Email: {vini,uday}@cse.iitb.ac.in
14 May, 2015
Abstract
Heap data is potentially unbounded and seemingly arbitrary. As a consequence, unlike
stack and static memory, heap memory cannot be abstracted directly in terms of a fixed
set of source variable names appearing in the program being analysed. This makes it an
interesting topic of study and there is an abundance of literature employing heap
abstractions. Although most studies have addressed similar concerns, their formulations
and formalisms often seem dissimilar and sometimes even unrelated. Thus, the insights
gained in one description of heap abstraction may not directly carry over to some other
description. This survey is a result of our quest for a unifying theme in the existing
descriptions of heap abstractions. In particular, our interest lies in the abstractions and
not in the algorithms that construct them.
In our search of a unified theme, we view a heap abstraction as consisting of two
features: a heap model to represent the heap memory and a summarization technique for
bounding the heap representation. We classify the models as storeless, store based, and
hybrid. We describe various summarization techniques based on k-limiting, allocation sites,
patterns, variables, other generic instrumentation predicates, and higher-order logics. This
approach allows us to compare the insights of a large number of seemingly dissimilar heap
abstractions and also paves the way for creating new abstractions by mix-and-match of models
and summarization techniques.
1 Heap Analysis: Motivation
Heap data is potentially unbounded and seemingly arbitrary. Although there is a plethora of
literature on heap, the formulations and formalisms often seem dissimilar. This survey is a
result of our quest for a unifying theme in the existing descriptions of heap.
1.1 Why Heap?
Unlike stack or static memory, heap memory allows on-demand memory allocation based on
the statements in a program (and not just variable declarations). Thus it facilitates creation
of flexible data structures which can outlive the procedures that create them and whose sizes
can change during execution. With processors becoming faster and memories becoming larger
as well as faster, the ability of creating large and flexible data structures increases. Thus the
role of heap memory in user programs as well as design and implementation of programming
languages becomes more significant.
1.2 Why Heap Analysis?
The increasing importance of the role of heap memory naturally leads to a myriad requirements
of its analysis. Although heap data has been subjected to static as well as dynamic analyses,
in this paper, we restrict ourselves to static analysis.
Heap analysis, at a generic level, provides useful information about heap data, i.e. heap
pointers or references. Additionally, it helps in discovering control flow through dynamic
dispatch resolution. Specific applications that can benefit from heap analysis include program
understanding, program refactoring, verification, debugging, enhancing security, improving
performance, compile time garbage collection, instruction scheduling, parallelization etc.
Further, some of the heap related questions asked during various applications include
whether a heap variable points to null, does a program cause memory leaks, are two pointer
expressions aliased, is a heap location reachable from a variable, are two data structures
disjoint, and many others. Section 8 provides an overview of applications of heap analysis.
1.3 Why Heap Abstraction?
Answering heap related questions using compile time heap analysis is a challenge because of
the temporal and spatial structure of heap memory characterized by the following aspects.
• Unpredictable lifetime. The lifetime of a heap object may not be restricted to the scope
in which it is created. Although the creation of a heap object is easy to discover in a
static analysis, the last use of a heap object, and hence the most appropriate point of its
deallocation, is not easy to discover.
• Unbounded number of allocations.
Heap locations are created on-demand as a
consequence of the execution of certain statements. Since these statements may appear
in loops or recursive procedures, the size of a heap allocated data structure may be
unbounded. Further, since the execution sequence is not known at compile time, heap
seems to have an arbitrary structure.
• Unnamed locations. Heap locations cannot be named in programs, only their pointers
can be named. A compile time analysis of a heap manipulating program therefore,
needs to create appropriate symbolic names for heap memory locations. This is nontrivial because unlike stack and static data, the association between symbolic names and
memory locations cannot remain fixed.
In principle, a program that is restricted only to stack and static data, can be rewritten
without using pointers. However, the use of pointers is unavoidable for heap data because
the locations are unnamed. Thus a heap analysis inherits all challenges of a pointer
analysis of stack and static data1 and adds to them because of unpredictable lifetimes
and unbounded number of allocations.
Observe that none of these aspects are applicable to stack or static memory because their
temporal and spatial structures are far easier to discover. Thus an analysis of stack and static
data does not require building sophisticated abstractions of the memory. Analysis of heap
requires us to create abstractions to represent unbounded allocations of unnamed memory
locations which have statically unpredictable lifetimes. As described in Section 3, two features
common to all heap abstractions are:
¹ Pointer analysis is undecidable [13, 67]. It is inherently difficult because a memory location can be accessed in
more than one way i.e. via pointer aliases. Therefore, pointer analysis requires uncovering indirect manipulations
of data and control flow. Additionally, modern features such as dynamic typing, field accesses, dynamic field
additions and deletions, implicit casting, pointer arithmetic, etc., make pointer analysis even harder.
• models of heap which represent the structure of heap memory, and
• summarization techniques to bound the representations.
We use this theme to survey the heap abstractions found in the static analysis literature.
1.4 Organization of the paper
Section 2 presents the basic concepts. Section 3 defines heap abstractions in terms of models
and summarization techniques. We categorize heap models as storeless, store based, or hybrid
and describe various summarization techniques. These are generic ideas which are then used
in Sections 4, 5, and 6 to describe the related investigations in the literature in terms of
the interactions between the heap models and summarization techniques. Section 7 compares
the models and summarization techniques to explore the design choices and provides some
guidelines. Section 8 describes major heap analyses and their applications. Section 9 mentions
some notable engineering approximations used in heap analysis. Section 10 highlights some
literature survey papers and book chapters on heap analysis. Section 11 concludes the paper
by observing the overall trend. Appendix A compares the heap memory view of C/C++ and
Java.
2 Basic Concepts
In this section, we build the basic concepts required to explain the heap abstractions in later
sections. We assume Java like programs, which use program statements: x := new, x := null,
x := y, x.f := y, and x := y.f. We also allow program statements x.f := new and x.f := null as
syntactic sugar. The dot followed by a field represents field dereference by a pointer variable.
For ease of understanding, we draw our programs as control flow graphs. Inn and Outn denote
the program point before and after program statement n respectively.
2.1 Examples of Heap Related Information
Two most important examples of heap information are aliasing and points-to relations because
the rest of the questions are often answered using them.
• In alias analysis, two pointer expressions are said to be aliased to each other if they
evaluate to the set of same memory locations. There are three possible cases of aliases
between two pointer expressions:
– The two pointer expressions cannot alias in any execution instance of the program.
– The two pointer expressions must alias in every execution instance of the program.
– The two pointer expressions may alias in some execution instances but not
necessarily in all execution instances.
• A points-to analysis attempts to determine the addresses that a pointer holds. A points-to information also has three possible cases: must-points-to, may-points-to, and cannot-points-to.
An analysis is said to perform a strong update if in some situations it can remove some
alias/points-to information on processing an assignment statement involving indirections on
the left hand side (for example, *x or x->n in C, or x.n in Java). It is said to perform a weak
update if no information can be removed.
Strong updates require the use of
[Figure 1: control flow graph of the example program. One branch executes 1: x := new; 2: x.g := null; 3: y := new; 4: y.f := null; 5: y.g := null, while the other executes 6: y := x; the branches join at 7: x.f := new, followed by statement 8, at which null-dereferences are queried.]
Figure 1. Example to illustrate soundness and precision of information computed by may
and must analyses.
must-alias/must-points-to information whereas weak updates can be performed using
may-alias/may-points-to information in a flow-sensitive analysis2 .
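To make the distinction concrete, here is a small sketch (ours, not taken from the literature surveyed) of processing the statement x.f := y on a may-points-to map, with a strong update applied only when x must point to a single concrete (non-summary) object:

```python
def update_field(pts, x, f, y):
    """Process  x.f := y.  pts maps a variable, or an (object, field)
    pair, to the set of objects it may point to."""
    targets = pts[x]
    if len(targets) == 1:
        # x must point to one object; assuming it is a concrete
        # (non-summary) node, we may strongly update, killing the old
        # points-to set of that field.
        (o,) = tuple(targets)
        pts[(o, f)] = set(pts[y])
    else:
        # Otherwise only a weak update is sound: accumulate.
        for o in targets:
            pts.setdefault((o, f), set()).update(pts[y])
    return pts
```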
2.2 Soundness and Precision of Heap Analysis
A static analysis computes information representing the runtime behaviour of the program
being analysed. Two important considerations in a static analysis of a program are soundness
and precision. Soundness guarantees that the effects of all possible executions of the program
have been included in the information computed. Precision is a qualitative measure of the
amount of spurious information which is the information that cannot correspond to any
execution instance of the program; lesser the spurious information, more precise is the
information.
Applications involving program transformations require sound analyses because the
transformations must be valid for all execution instances. Similarly applications involving
verification require a sound approximation of the behaviour of all execution instances. On
the other hand error detection or validation applications can afford to compromise on
soundness and may not cover all possible execution paths.
When an analysis computes information that must hold for all execution instances of a
program, soundness is ensured by under-approximation of the information. When it
computes information that may hold in some execution instances, soundness is ensured by
over-approximation of the information. Precision is governed by the extent of over- or
under-approximation introduced in the process.
Consider the program in Figure 1. Let us consider a may-null (must-null) analysis whose
result is a set of pointers that may (must) be null in order to report possible (guaranteed)
occurrences of null-dereference at statement 8. Assume that we restrict ourselves to the set
{x.f, x.g, y.f, y.g}. We know that both x.g and y.g are guaranteed to be null along all executions
of the program. However, x.f is guaranteed to be non-null because of the assignment in
statement 7 and y.f may or may not be null depending on the execution of the program.
² A flow-sensitive heap analysis computes, at each program point, an abstraction of the memory, which is a safe approximation of the memory created along all control flow paths reaching the program point.
(a) Consider the set {x.g, y.g} reported by an analysis at statement 8. This set is:
• Sound for a must-null analysis because it includes all pointers that are guaranteed to
be null at statement 8. Since it includes only those pointers that are guaranteed to
be null, it is also precise. Any under-approximation of this set (i.e. a proper subset
of this set) is sound but imprecise for a must-null analysis. An over-approximation of
this set (i.e. a proper superset of this set) is unsound for must-null analysis because it
would include a pointer which is not guaranteed to be null as explained in (b) below.
• Unsound for a may-null analysis because it excludes y.f which may be null at
statement 8.
(b) On the other hand, the set {x.g, y.g, y.f} reported at statement 8 is:
• Sound for a may-null analysis because it includes all pointers that may be null at
statement 8. Since it includes only those pointers that may be null, it is also precise.
Any over-approximation of this set (i.e. a proper superset of this set) is sound but
imprecise for a may-null analysis. Any under-approximation of this set (i.e. a proper
subset of this set) is unsound for a may-null analysis because it would exclude a
pointer which may be null as explained in (a) above.
• Unsound for a must-null analysis because it includes y.f which is not guaranteed be
null at statement 8.
3 Heap Abstractions
In this section we define some generic ideas which are then used in the subsequent sections to
describe the work reported in the literature.
3.1 Defining Heap Abstractions
The goal of static analysis of heap memory is to abstract it at compile time to derive useful
information. We define a heap abstraction as the heap modeling and summarization of the heap
memory, which are introduced below.
• Let a snapshot of the runtime memory created by a program be called a concrete memory.
A heap model is a representation of one or more concrete memories. It abstracts away less
useful details and retains information that is relevant to an application or analysis [59].
For example, one may retain only the reachable states in the abstract memory model.
We categorize the models as storeless, store based, and hybrid. They are defined in
Section 3.2.
• Deriving precise runtime information of non-trivial programs, in general, is not
computable within finite time and memory (Rice theorem [70]). For static analysis of
heap information, we need to summarize the modeled information. Summarization
should meet the following crucial requirements: (a) it should make the problem
computable, (b) it should compute a sound approximation of the information
corresponding to any runtime instance, and (c) it should retain enough precision
required by the application.
The summarizations are categorized based on using allocation sites, k-limiting, patterns,
variables, other generic instrumentation predicates, or higher-order logics. They are
defined in Section 3.3.
[Figure 2 diagram: unbounded heap memory is represented by heap models (store based, hybrid, storeless) and bounded by summarizations (k-limiting, allocation sites, patterns, variables, generic instrumentation predicates, higher-order logics).]
Figure 2. Heap memory can be modeled as storeless, store based, or hybrid. These models are
summarized using allocation sites, k-limiting, patterns, variables, other generic instrumentation
predicates, or higher-order logics.
Some combinations of models and summarization techniques in common heap abstractions
are illustrated in Figure 2.
3.2 Heap Models
Heap objects are dynamically allocated, are unbounded in number, and do not have fixed
names. Hence, various schemes are used to name them at compile time. The choice of naming
them gives rise to different views of the heap. We define the resulting models and explain them
using a running example in Figure 3. Figure 4 associates the models with the figures that
illustrate them for our example program.
• Store based model. A store based model explicates heap locations in terms of their
addresses and generally represents the heap memory as a directed graph [7, 10, 15, 18,
26, 37, 61, 68, 77, 84, 87]. The nodes of the graph represent locations or objects in the
memory. An edge x → o1 in the graph denotes the fact that the pointer variable x may
hold the address of object o1 . Since objects may have fields that hold the addresses,
f
we can also have a labelled edge x → o1 denoting the fact that the field f of object x
may hold the address of object o1 . Let V be the set of root variables, F be the set of field names, and O be the set of heap objects. Then a concrete heap memory graph can be viewed as a collection of two mappings: V → O and O × F → O. Observe that this formalization assumes that O is not fixed and is unbounded. It is this feature that warrants summarization techniques.
An abstract heap memory graph³ is an approximation of a concrete heap memory graph which collects together all addresses that a variable or a field may hold
– at all execution instances of the same program point, or
– across all execution instances of all program points.
³ In the rest of the paper, we refer to an abstract heap memory graph simply as a memory graph.
1 x := new
2 y := x
3 y.f := new
4 y := y.f
5 y.f := new
6 y := y.f
(a) Example program.
(b) Execution snapshot showing an unbounded heap graph at Out6 of the program in Figure 3a. Here we have shown the heap graph after iterating twice over the loop. Stack locations x and y point to heap locations l3 and l7, respectively. Heap locations l3, l4, l5, and so on point to heap locations l4, l5, l6, and so on, respectively.
Figure 3. Running example to illustrate heap models and summarizations, which have been
shown in Figures 5, 6, and 7. In the program we have purposely duplicated the program
statements in order to create a heap graph where variable y is at an even number of indirections
from variable x after each iteration of the loop. Not all summarization techniques are able to
capture this information.
Hence the ranges in the mappings have to be extended to 2^O for an abstract memory graph. Thus a memory graph can be viewed as a collection of mappings⁴ V → 2^O and O × F → 2^O.
⁴ In principle a graph may be represented in many ways. We choose a collection of mappings for convenience.
Figure 3 shows our running example and an execution snapshot of the heap memory
created and accessed by it. The execution snapshot shows stack locations x and y and
heap locations with the addresses l3 , l4 , l5 , l6 , and l7 . The address inside each box
denotes the location that the box points to. This structure is represented using a store
based model in Figure 5. Here the root variable y points to a heap location that is at an even number of indirections via f from x after each iteration of the loop in the program in Figure 3a.
• Storeless model. The storeless model (originally proposed by Jonkers [40]) views the
heap as a collection of access paths [8, 17, 24, 40, 43, 49, 60, 63]. An access path consists of
a pointer variable which is followed by a sequence of fields of a structure. The desired
properties of both a concrete and an abstract heap memory are stored as relations on
access paths. The storeless model does not explicate the memory locations or objects
corresponding to these access paths. Given V as the set of root variables and F as the set
of field names, the set of access paths is defined as V × F*. For example, access
path x.f.f.f.f represents a memory location reachable from x via four indirections of field
f. Observe that the number of access paths is potentially infinite and the length of each
access path is unbounded. It is this feature that warrants summarization techniques.
The heap memory at Out6 of our running example (Figure 3) is represented using
a storeless model in Figure 6. The alias information is stored as a set of equivalence classes containing access paths that are aliased. Access paths x.f.f.f.f and y are put in the same equivalence class at Out6 because they are aliased at that point during some execution of the program.
[Figure 4 diagram: the store based model (Figure 5) is summarized using k-limiting (Figure 5b), allocation sites (Figure 5c), and variables (Figure 5d); the storeless model (Figure 6) using k-limiting (Figure 6b) and patterns (Figure 6c); and the hybrid model (Figure 7) using variables (Figure 7b). The unbounded memory itself is shown in Figure 3b.]
Figure 4. Figures illustrating various heap models and their summarizations for the program in Figure 3.
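The storeless view just described can also be sketched directly as equivalence classes of access-path strings. The fragment below is our own hedged illustration (the particular paths are assumptions taken from Figure 6a), not code from the storeless-model literature.

# A hedged sketch of a storeless model: may-alias information kept as
# equivalence classes of access paths, as in Figure 6a.
alias_classes = [
    {"x.f.f", "y"},        # after one iteration of the loop in Figure 3a
    {"x.f.f.f.f", "y"},    # after two iterations, and so on
]

def may_alias(p, q):
    # Two access paths may alias if some equivalence class contains both.
    return any(p in c and q in c for c in alias_classes)

print(may_alias("x.f.f.f.f", "y"))   # True
print(may_alias("x.f", "y"))         # False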
• Hybrid model. Chakraborty [14] describes a hybrid heap model which represents heap
structures using a combination of store based and storeless models [16, 25, 50, 72]. Heap
memory of Figure 3b is represented using the hybrid model in Figure 7. The model stores
both objects (as in a store based model) and access paths (as in a storeless model).
3.3 Heap Summarization Techniques
In the presence of loops and recursion, the size of the graphs in a store based model and the lengths of the access paths (and hence their number) in a storeless model are potentially unbounded.
For fixpoint computation of heap information in a static analysis, we need to approximate the
potentially unbounded heap memory in terms of summarized heap locations called summarized
objects. A summarized object is a compile time representation of one or more runtime (aka
concrete) heap objects.
3.3.1 Summarization
Summarized heap information is formally represented as Kleene closures or wild cards in regular expressions, summary nodes in heap graphs, or recursive predicates.
• Summarized access paths are stored as regular expressions [17] of the form r.e, where r is a root variable and e is a regular expression over field names defined in terms of concatenation (.), Kleene closure (∗ and + used as superscripts), and wild card (∗ used inline) operators. For example, access path x.f.∗ represents the access path x.f followed by zero or more dereferences of any field. Access path x(.f)∗ represents an access path x followed by any number of dereferences of field f.
[Figure 5 appears here: (a) unbounded store based model; (b) k-limiting (k = 2) summarization; (c) allocation site based summarization; (d) variable based summarization.]
Figure 5. Store based heap graphs at Out6 for the program in Figure 3a. Figures 5b, 5c, and 5d are bounded representations of the heap information in Figure 5a. The numbers inside the graph nodes indicate the objects’ allocation sites in the program in Figure 3a.
• Summarized heap graphs are stored by associating each graph node with a boolean
predicate indicating whether it is a summary node representing more than one concrete
heap location [15]. Observe that summary nodes may result in spurious cycles in the
graph if two objects represented by a summary node are connected by an edge.
• Summarized collection of paths in the heap can also be stored in the form of recursive
predicates [26, 63].
3.3.2 Materialization
A collection of concrete nodes with the same property is summarized as a summary node.
However, after creation of a summary node, a program statement could make a root variable
point to one of the heap locations represented by the summary node.
Traditional
summarization techniques [15, 50] do not “un-summarize” this heap location from the
summary node. Thus in traditional summarization techniques, a property discovered for a
summarized node may be satisfied by some of the represented heap locations and not
necessarily by all. For example, when determining which pointer expressions refer to the
same heap location, all pointer expressions pointing to the same summarized object will be
recognized as possible candidates, even though some of them may have been changed by new
assignments. Therefore, a heap analysis using this traditional summarization technique has a
serious disadvantage: it can answer only may-pointer questions. As a result traditional
summarization techniques cannot allow strong updates. In order to compute precise
must-pointer information, Sagiv et al. [75] materialize (“un-summarize”) summarized objects
(explained in Section 5.2). Since this allows the locations that violate the common property
to be removed from the summary node and be represented by a newly created node, this opens up the possibility that a summary node could represent a must property satisfied by all locations represented by the summary node. Performing strong updates is an example of increased precision that can be facilitated by materialization. The literature contains many approaches for must-pointer analysis, ranging from relatively simple abstractions such as recency abstraction [4] to sophisticated shape analysis [75]. An analysis involving materialization is expensive because of the additional examination required and the possible increase in the size of the graph.
{⟨x.f.f, y⟩, ⟨x.f.f.f.f, y⟩, . . . }  (a) Unbounded storeless model.
{⟨x.f.f, y⟩, ⟨x.f.f.∗, y⟩}  (b) k-limiting (k = 2).
{⟨x(.f.f)+, y⟩}  (c) Pattern based.
Figure 6. Storeless model of the heap graph at Out6 of the program in Figure 3a. Figures 6b and 6c are the bounded representations of the heap information in Figure 6a. An equivalence class of aliased access paths is denoted by ⟨ and ⟩.
[Figure 7 appears here: (a) the unbounded hybrid model, whose graph nodes carry access-path annotations ⟨x⟩, ⟨x.f⟩, ⟨x.f.f, y⟩, and so on; (b) its variable based summarization, with nodes ⟨x⟩, ⟨x(.f)+⟩, and ⟨x(.f)+.f, y⟩.]
Figure 7. Hybrid model of the heap graph at Out6 of the program in Figure 3a. Figure 7b is the bounded representation of the heap information in Figure 7a. Although the access paths in the nodes can be inferred from the graph itself, they have been denoted for simplicity.
3.3.3 Summarization Techniques
We introduce below the six commonly found summarization techniques using our running
program of Figure 3a. The figures illustrating these techniques have been listed in Figure 4.
Note that our categorization is somewhat arbitrary in that some techniques can be seen as special cases of other techniques, but we have chosen to list them separately because of their prevalence.
The main distinction between various summarization techniques lies in how they map a
heap of potentially unbounded size to a bounded size. An implicit guiding principle is to find
a balance between precision and efficiency without compromising on soundness.
1. k-limiting summarization distinguishes between the heap nodes reachable by a sequence of up to k indirections from a variable (i.e. it records paths of length up to k in the memory graph) and over-approximates the paths longer than k.
k-limiting summarization has been performed on a store based model [50]. Figure 5b represents a k-bounded representation of the store based model in Figure 5a. For k = 2, heap nodes beyond two indirections are not stored. A self loop is created on the second indirection (the node corresponding to x.f.f) to over-approximate this information. This stores spurious aliases for access paths with more than k = 2 indirections (for example, x.f.f.f and y are spuriously marked as aliases at Out6 ).
k-limiting summarization has also been performed on a storeless model [39, 49]. This was
proposed by Jones and Muchnick [39]. Figure 6b represents a k-bounded representation
of the storeless model in Figure 6a. This also introduces the same spurious alias pairs as
in Figure 5b.
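The truncation step itself is tiny. The following Python sketch is our own hedged illustration of k-limiting on access paths (k is passed explicitly), not the algorithm of [39, 49, 50]; it shows why everything at more than k indirections collapses onto one wild-card path, which is exactly where the spurious aliases above come from.

# Hedged sketch of k-limiting: an access path keeps at most k field
# dereferences; anything longer is over-approximated with a wild card.
def k_limit(path, k):
    root, *fields = path.split(".")
    if len(fields) <= k:
        return path
    return ".".join([root] + fields[:k]) + ".*"   # paths beyond k are merged

print(k_limit("x.f", 2))          # x.f
print(k_limit("x.f.f.f.f", 2))    # x.f.f.*  (now indistinguishable from x.f.f.f)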
2. Summarization using allocation sites merges heap objects that have been allocated at the same program site. This technique has been used for approximating a store based heap model [4, 61] and a hybrid model [50]. It gives the same name to all objects allocated in a
given program statement. The summarization is based on the premise that nodes
allocated at different allocation sites are manipulated differently, while the ones
allocated at the same allocation site are manipulated similarly. Figure 5c represents the allocation site based summarization of the store based model in Figure 5a.
Here all objects allocated at program statements 3 and 5 are respectively clustered
together. This summarization on the given example does not introduce any spurious
alias pairs. We will show spuriousness introduced due to this summarization in
Section 6.1.
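The naming step is a one-line table lookup. The following is a minimal sketch of our own (the site-derived object names are assumptions), not an implementation from [4, 50, 61].

# Hedged sketch of allocation-site naming: every object allocated at the
# same statement gets the same abstract name, so the number of abstract
# objects is bounded by the number of allocation sites.
abstract_heap = {}

def new_object(site):
    # Abstract semantics of 'new' at a given statement number.
    return abstract_heap.setdefault(site, f"o{site}")

# The loop of Figure 3a allocates at statements 3 and 5 on every iteration,
# but the abstraction only ever has two heap nodes:
for _ in range(10):
    new_object(3)
    new_object(5)
print(sorted(abstract_heap.values()))   # ['o3', 'o5']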
3. Summarization using patterns merges access paths based on some chosen patterns of
occurrences of field names in the access paths. Pattern based summarization has been
used to bound the heap access paths [17, 43, 60]. Figure 6c represents pattern based
summarization of the storeless model in Figure 6a. For this example, it marks every
second dereference of field f (along the chain rooted at x) as aliased with y, which is precise.
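One simple way to obtain such a pattern, sketched below under the assumption that the analysis can observe how one loop iteration extends an access path, is to take the per-iteration growth as the repeating unit. This is our own illustration, not the algorithm of [17, 43, 60].

# Hedged sketch of pattern-based summarization: the repeating unit is taken
# from the suffix by which one loop iteration extends an access path.
def summarize(before, after):
    assert after.startswith(before), "path must grow monotonically"
    unit = after[len(before):]     # suffix added by one iteration, e.g. ".f.f"
    return f"{before}({unit})+"

# y is reached from x via x.f.f after one iteration, x.f.f.f.f after two:
print(summarize("x", "x.f.f"))     # x(.f.f)+  (precise, as in Figure 6c)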
4. Summarization using variables merges those heap objects that are pointed to by the
same set of root variables. For this, Sagiv et al. [78] use the predicate pointed-to-by-x on
nodes for all variables x to denote whether a node is pointed to by variable x. Thus, all
nodes with the same pointed-to-by-x predicate values are merged and represented by a
summary node. Variable based summarization has been performed on store based heap
model [7,15,75,76]. Figure 5d represents variable based summarization of the store based
model in Figure 5a. After the first iteration of the loop of the program in Figure 3a,
there are three nodes—the first pointed to by x and the third pointed to by y. In the
second iteration of the loop, nodes reachable by access paths x.f, x.f.f, and x.f.f.f are
not pointed to by any variable (as shown in Figure 3b). Therefore, they are merged
together as a summary node represented by dashed lines in Figure 5d which shows the
graphs after the first and the second iterations of the loop. The dashed edges to and from
summary nodes denote indefinite connections between nodes. This graph also records
x.f.f.f and y as aliases at Out6 , which is spurious.
Figure 7b is a variable based summarized representation of the unbounded hybrid
model in Figure 7a. A summary node (shown with a dashed boundary in the figure) is
created from nodes that are not pointed to by any variable. Summarized access paths
are appropriately marked on the nodes in the hybrid model.
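The partitioning underlying this technique fits in a few lines. The sketch below is our own (the concrete node ids n1..n5 are assumptions), with the key function being exactly the pointed-to-by set; swapping in any other boolean node property yields the generic instrumentation predicate summarization of item 5 below.

# Hedged sketch of variable based summarization: concrete nodes pointed to
# by the same set of root variables collapse into one abstract node; the
# node keyed by the empty set is the summary node.
from collections import defaultdict

def summarize(nodes, points_to):
    # points_to: dict var -> node id (the concrete store for root variables).
    key = lambda n: frozenset(v for v, m in points_to.items() if m == n)
    groups = defaultdict(list)
    for n in nodes:
        groups[key(n)].append(n)
    return {n: k for k, ns in groups.items() for n in ns}

# Second loop iteration of Figure 3a: x -> n1, y -> n5, n2..n4 unpointed.
abstract = summarize(["n1", "n2", "n3", "n4", "n5"], {"x": "n1", "y": "n5"})
print(abstract["n2"] == abstract["n3"])   # True: both fall into the summary node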
5. Summarization using other generic instrumentation predicates merges those heap objects that satisfy a given predicate [4, 24, 37, 68, 72, 77, 78, 87, 90].
Note that the summarization techniques introduced above are all based on some
predicate, as listed below:
• k-limiting predicate: Is the heap location at most k indirections from a root variable?
• Allocation site based predicate: Is the heap location allocated at a particular
program site?
• Pattern based predicate: Does the pointer expression to a heap location have a
particular pattern?
• Variable based predicate: Is a heap location pointed to by a root variable?
Since the above four are very commonly used predicates, we have separated them out in
our classification.
Apart from these common predicates, summarization may be based on other predicates
too depending on the requirements of a client analysis. Some examples of these predicates
are: is a heap location part of a cycle, is a heap location pointed to by more than one
object, is a heap location allocated most recently at a particular allocation site, does the
data in a heap node belong to a given type. We group such possible predicates under
generic instrumentation predicates. A shape analysis framework [77, 78, 90] accepts any set of predicates as a parameter to control the degree of efficiency and precision in the summarization technique.
6. Summarization using higher-order logics includes those logics that have more expressive
power than first order (predicate) logic. Classical logics, like Hoare logic [34], fail when
they are used to reason about programs that manipulate the heap. This is because
classical logics assume that each storage location has a distinct variable name [82], i.e.
there are no aliases in the memory. However, heap memory contains aliases of variables and thus becomes difficult to analyse. Therefore, heap specialized logics, such as those listed
below, have been used for heap abstraction.
• Separation Logic [10, 18, 26],
• Pointer Assertion Logic (PAL) [63],
• Weak Alias Logic (wAL) [8],
• Flag Abstraction Language [46, 47].
These are extensions of classical logics. For example, separation logic adds separating
connectives to classical logic to allow separate reasoning for independent parts of
heap [9, 82]. Summarizations using higher-order logics differ from summarizations using
generic instrumentation predicates in the following sense: the former use formal
reasoning in logics specialized for heap memory. Unlike the latter, these techniques may
be highly inefficient and even include undecidable logics; therefore, in order to ensure
their termination, they generally need support with program annotations in the form of
assertions and invariants [38]. In the following sections, we illustrate separation logic,
which is generally based on store based heap models, and PAL, wAL, and Flag Abstraction Language, which have been used on storeless heap models.
These heap summarization techniques can be combined judiciously. Most investigations
indeed involve multiple summarization techniques and their variants by using additional
ideas. Section 7 outlines the common factors influencing the possible choices and some broad
guidelines.
1 y := x
2 z := w
3 t := y.g
4 z.g := t
5 y := y.f
6 y := y.f
7 z.f := new
8 z := z.f
9 u := new
10 y.f := u
11 v := y.f
(a) Example program.
(b) Execution snapshot showing an unbounded heap graph at Out8 of the program in Figure 8a. y points to x.f.f and z points to w.f in the first iteration of the loop. In the second iteration, y points to x.f.f.f.f and z points to w.f.f.
Figure 8. Running example to illustrate various heap summarization techniques. We assume that all variables are predefined in the program. Summarized representations of the heap memory in Figure 8b are shown on a storeless model in Figure 9, and on a hybrid model in Figures 18 and 19.
4 Summarization in Storeless Heap Model
As described in Section 3.2, a storeless heap model views the heap memory as a collection of
access paths. By contrast, the store based model views the memory as a graph in which nodes
are heap objects and edges are fields containing addresses. The view of the storeless model may
seem to be a secondary view of memory that builds on a primary view of memory created by
the store based model. In this section we present the summarization techniques for a storeless
model. Sections 5 and 6 present techniques for a store based model and a hybrid model,
respectively.
4.1 k-Limiting Summarization
May-aliases have been represented as equivalence classes of k-limited access paths [49]. For the
program in Figure 8a, with its unbounded memory graph in Figure 8b, information bounded
using k-limiting summarization of access paths is shown in Figure 9a (alias pairs of variables
y and z are not shown for simplicity). The method records alias pairs precisely up to k
indirections and approximates beyond that. For k = 3, fields up to three indirections from the
root variables in the access paths are recorded; those beyond three indirections are summarized
with a wild card (symbol ∗). Observe that this summarization induces the spurious alias
relationship ⟨x.f.f.f, w.f.f⟩.
[Figure 9 panels:
(a) Alias pairs for variables x and w at Out8 for k = 3 [49]: ⟨x.g, w.g⟩, ⟨x.f.f.g, w.f.g⟩, ⟨x.f.f.f.∗, w.f.f.∗⟩.
(b) Aliases at Out8 [60]: ⟨y, x(.f.f)+⟩, ⟨z, w(.f)+⟩, ⟨x(.f.f)+.g, w(.f)+.g⟩.
(c) Parameterised alias pairs for variables x and w at Out8 [17]: ⟨x.f^{2i}.g, w.f^{i}.g⟩.
(d) Live access graph at In1 when variable t is live at Out11 [43]: nodes x, f5, f6, and g3.
(e) Direction and interference matrices for variables x and y at Out8 [24]: direction matrix rows x: (1, 1) and y: (0, 1); interference matrix rows x: (1, 1) and y: (1, 1).
(f) Path matrix for variables x and y at Out8 [31]: p[x, x] = {S}, p[x, y] = {f+}, p[y, x] = ∅, p[y, y] = {S}.]
Figure 9. Summarization techniques on a storeless model for the program in Figure 8a: k-limiting (Figure 9a), pattern based (Figures 9b, 9c, and 9d), and other generic instrumentation predicate based (Figures 9e and 9f) summarization techniques are shown. (An equivalence class of aliased access paths is denoted by ⟨ and ⟩ in Figures 9a, 9b, and 9c.)
4.2 Summarization Using Patterns
A common theme in the literature has been to construct expressions consisting of access paths
approximated and stored either as a regular expression or a context free language.
• Consider the possibility of representing access paths in terms of regular expressions [60].
For example, let p be the initial access path outside a program loop. After each iteration
of the loop, if the value of p is advanced into the heap relative to its previous value via
the field left or right, then the access path can be represented as p(.left | .right)∗ .
The bounded alias information for the unbounded memory graph of Figure 8b is shown
in Figure 9b. The example illustrates that the method is able to identify (.f.f) as the
repeating sequence of dereferences in the access path rooted at x and (.f) as the repeating
sequence of dereferences in the access path rooted at w. The alias ⟨x(.f.f)∗.g, w(.f)∗.g⟩ at Out8 indicates that x.f.f.g is aliased to w.g, which is spurious.
The general problem of detecting possible iterative accesses can be undecidable in the
worst case [60]. This is because a repeated advance into the heap may arise from an
arbitrarily long cycle of pointer relations. Therefore, the focus in the work by Matosevic
and Abdelrahman [60] remains on detecting only consecutive repetitions of the same type
of field accesses. For efficiency, finite state automata are used to compactly represent
sets of access paths that share common prefixes.
• On similar lines, repetition of field dereferences in program loops can be identified more
efficiently and precisely by using the statement numbers where the field dereference
has occurred [43]. This has been used to perform liveness based garbage collection by
computing live access graphs of the program. A live access graph is a summarized
representation of the live access paths (an access path is live at a program point if it is possibly used after that point) in the form of a graph; here a node denotes both
a field name and the statement number where the field dereference has occurred; the
edges are used to identify field names in an access path.
A live access graph is illustrated in Figure 9d for the program in Figure 8a. Let us assume
that variable t is live at Out11 in the program, i.e. it is being used after statement 11.
This implies that access path y.g (or x(.f.f)∗ .g) is live at In3 since it is being accessed
via variable t in the program loop. Therefore, access paths x(.f.f)∗ .g are live at In1 .
These access paths are represented as a summarized live access graph in Figure 9d. The
cycle over nodes f5 and f6 denotes the Kleene closure in the access paths x(.f.f)∗ .g.
This illustrates that the method is able to identify (.f.f) as a repeating sequence in the
live access paths at In1 .
Basically, this is achieved by assigning the same name to the objects that are dereferenced
by a field at the same statement number. For example, the last field in each of the access
paths, x.f, x.f.f.f, and so on, is dereferenced in statement 5; therefore, all these fields
f (dereferenced in statement 5) are represented by the same node f5 in Figure 9d.
Similarly, the last fields f in each of the access paths, x(.f.f)∗ .g, are represented by
the same node f6 because each of them is dereferenced in statement 6. With the use of
statement numbers, unlike the method by Matosevic and Abdelrahman [60], this method
can identify even non-consecutive repetitions of fields efficiently.
In somewhat similar lines, liveness based garbage collection for functional programs has
been performed using a store based model by Inoue et al. [37] and Asati et al. [2] (see
Section 5.3).
• More precise expressions of access paths compared to those in the above methods are
constructed by parameterising the expressions with a counter to denote the number
of unbounded repetitions of the expression [17]. Right-regular equivalence relation on
access paths helps in performing an exact summary of the may-aliases of the program.
The precisely bounded information for the unbounded memory graph of Figure 8b is
illustrated in Figure 9c. The key idea of the summarization is to represent the position
of an element in a recursive structure by counters denoting the number of times each
recursive component of the structure has to be unfolded to give access to this element.
This records the fact that the object reached after dereferencing 2i number of f fields on
access path x is aliased with the object reached after dereferencing i number of f fields
on the access path w. Due to the parameterisation with 2i and i on field f of both aliased
access paths which are rooted at variables x and w respectively, the method excludes the
spurious alias pairs derived from the alias information in Figure 9b.
4.3 Summarization Using Generic Instrumentation Predicates
Since the identification of patterns can be undecidable in the worst case [60], the power of
summarization using patterns is limited by the set of patterns that the algorithm chooses to
identify. Instead of using a fixed set of patterns, summarization using generic instrumentation
predicates enables a richer set of possibilities. We review this approach in this section.
A digression on shape analysis
The use of heap analysis to determine shapes of the heap memory dates back to the work by
Jones and Muchnick [39]. Some of the notable works which also determine shapes are listed below.
– Analysis to determine shape using a storeless model has been presented by Jones and
Muchnick [39], Hendren and Nicolau [31], Ghiya and Hendren [24], and others (presented
in this section).
– Analysis to determine shape using a store based model has been presented by Chase et
al. [15], Sagiv et al. [75,77,78], Distefano et al. [18], Gotsman et al. [26], Calcagno et al. [10],
and others (see Section 5).
– Analysis to determine shape using a hybrid model has been presented by Rinetzky et al. [72]
and others (see Section 6).
This study of the structure and shape of the heap has been called shape analysis. Below
we discuss shape analysis techniques used on a storeless model.
• Hendren and Nicolau [31] and Ghiya and Hendren [24] classify the shapes of the heap into
tree, DAG, and cyclic graph, and choose to use the following predicates on a storeless
model.
(a) Direction relationship, which is true from pointer x to pointer y, if x can reach y via
field indirections.
(b) Interference relationship, which is true for pointers x and y, if a common heap object
can be accessed starting from x and y. This is a symmetric relationship.
Direction and interference relationships are stored in terms of matrices as shown in
Figure 9e for the program in Figure 8a. Here, the heap has been encoded as access
paths in path matrices (direction and interference) at each program statement. Direction
relationship between pointers x and y is true (represented by 1 in the direction matrix),
since x reaches y via indirections of field f at Out8 of the program in Figure 8a. Since
y cannot reach a node pointed to by x at Out8 , 0 is marked in the corresponding entry
of the direction matrix. Here, from the direction relationship, we can derive that the objects pointed to by x and y are not part of a cycle, since x has a path to y, but not vice versa.
Interference relationship between pointers x and y is true, since a common heap object
can be accessed starting from x and y.
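Both relationships reduce to reachability queries over a points-to graph. The following sketch is our own hedged illustration on a simplified graph (node names n1..n3 are assumptions); it reproduces the x and y entries of Figure 9e.

# Hedged sketch of direction and interference matrices over a points-to graph.
def reachable(start, edges):
    # All nodes reachable from 'start' via labelled field edges.
    seen, work = set(), [start]
    while work:
        n = work.pop()
        if n in seen:
            continue
        seen.add(n)
        work.extend(m for (src, _f, m) in edges if src == n)
    return seen

def matrices(var_points_to, edges):
    pvars = list(var_points_to)
    reach = {v: reachable(var_points_to[v], edges) for v in pvars}
    direction = {(a, b): var_points_to[b] in reach[a] for a in pvars for b in pvars}
    interfere = {(a, b): bool(reach[a] & reach[b]) for a in pvars for b in pvars}
    return direction, interfere

# x points to n1, y to n3, and n1 -f-> n2 -f-> n3:
D, I = matrices({"x": "n1", "y": "n3"}, [("n1", "f", "n2"), ("n2", "f", "n3")])
print(D[("x", "y")], D[("y", "x")], I[("x", "y")])   # True False True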
• A storeless heap abstraction using reachability matrices can also be summarized using regular expressions of path relationships between pointer variables [31]. This is used to
identify tree and DAG shaped heap data structures by discovering definite and possible
path relationships in the form of path matrices at each program point. For variables x
and y, an entry in the path matrix, denoted as p[x, y], describes the path relationship
from x to y. In other words, each entry in the path matrix is a set of path expressions of
field dereferences made for pointer x to reach pointer y. Figure 9f shows the summarized
path matrix for pointers x and y at Out8 of the program in Figure 8a. Entry p[x, x] = {S}
denotes that source and destination pointer variables are the same. Entry p[x, y] = {f+ }
denotes that there exists a path from x to y via one or more indirections of field f. An
empty entry p[y, x] denotes that there is no path from pointer y to pointer x.
This analysis calculates the part of the data structure that is between two variables at
each program point. The analysis can differentiate between a tree and a DAG by the
number of paths to a variable calculated in the path matrix. The information is used for
interference detection and parallelism extraction. This approach is, however, restricted
to acyclic data structures. Some follow-up methods [28–30] also use path matrices for
alias analysis of heap allocated data structures.
4.4 Summarization Using Higher-Order Logics
To describe heap specific properties, various formalisms like Pointer Assertion Logic, Weak
Alias Logic, and Flag Abstraction Language have been proposed in the literature.
• PALE (Pointer Assertion Logic Engine) [63] is a tool that provides a technique to check
the partial correctness of programs annotated manually by the programmer using PAL
(Pointer Assertion Logic). The programmer encodes everything in PAL including the
program code, heap data structures, pre- and post-conditions of various modules of the
program, and loop invariants. PAL is an assertion language which is a monadic
second-order logic, or more precisely, WS2S (weak monadic second-order logic with two
successors). Unlike first-order logic, ordinary second-order logic allows quantification
(existential/universal) over predicates. “Monadic” means that quantification of only
monadic predicates, i.e. sets, is allowed. “Weak” means that quantification over finite
sets is allowed. “With two successors” means that the formulae are interpreted over a
tree domain (which is infinite). Although it is technically “two” successors, it is trivial
to encode any fan-out.
Here is an example [63] of a specification of type binary tree using PAL.
type Tree = {
data left,right:Tree;
pointer root:Tree[rooth(left+right)∗ithis
& empty(rootˆTree.left union rootˆTree.right)];
}
A memory graph consists of a “backbone”, which represents a spanning tree of the underlying heap data structure. The memory links of the backbone are encoded using data fields in PAL. Other memory links of the data structure are encoded in PAL using pointer fields that are defined on top of the backbone (Anders Møller, personal communication, 4 May 2015).
heap location of type Tree which consists of left, right, and root links. The root
link is an extra pointer which points to the root of the tree. It is defined with a formula
specified between the square brackets which is explained below.
– The formula root<(left+right)*>this specifies that the root location reaches this location via a sequence of left or right fields. The Kleene closure in this regular expression helps in summarizing unbounded information.
– In PAL, the formula x^T.p can be read as x^(T.p), where ^(T.p) represents a step upwards in the backbone, i.e. backwards along field p from a location of type T, in order to reach a location pointed to by x. In the above example, the formulae root^Tree.left and root^Tree.right denote that location root can be reached by moving a step upwards in the backbone along the left and right fields from a location of type Tree. The empty() formula above specifies that the set of locations having left or right pointers to the root location must be empty.
Once the data structures, loop invariants, pre- and post-conditions are specified by the
programmer in PAL, PALE passes these PAL annotations to the MONA tool [62] for
automatic verification of the program. MONA reports null-pointer dereferences, memory
leaks, violations of assertions, graph type errors, and verifies shape properties of data
structures.
Let us take an example of the predicates used in MONA logic. Consider statement 5, z := y.f, which is executed on the linked list of the program in Figure 10a. The linked list can be specified in PAL as: type Node = {data f: Node;}. For program points i = In5 and j = Out5 , the MONA code [63] generated for this statement is
memfailed_j() = memfailed_i() | null_y_i()
ptr_z_j(v) = ex2 w: ptr_y_i(w) & succ_Node_f_i(w,v)
null_z_j() = ex2 w: ptr_y_i(w) & null_Node_f_i(w)
The MONA code uses the following predicates in the above formula:
– memfailed() is true if a null dereference has occurred.
– null_p() is true if pointer variable p is null.
– ptr_p(v) is true if the destination of pointer variable p is object v.
– succ_t_f(v, w) is true if object v of type t reaches location w via pointer field f.
– null_t_f(v) is true if object v of type t reaches a null location via pointer field f.
The predicates in the above MONA code for statement 5 have been indexed with the program points. For example, for program point i, the value of predicate memfailed() is memfailed_i(). Also, ex2 is an existential quantifier used for the Node object w in the above MONA code.
In the above MONA code, the first line specifies that program point j is in a state of
memory failure if either there was a memory failure at program point i or variable y was
null at i. The second line specifies that if object w is the destination of variable y, and w
reaches object v via pointer field f, then v is the destination of variable z. The third line
specifies that if object w is the destination of variable y, and w reaches a null location
via pointer field f, then variable z is null.
Since MONA’s logic is decidable, PALE will definitely reach a fixpoint. Due to the
overhead of manually adding annotations to the program, the technique is suited for
small input programs only.
• Unlike PAL, which describes a restricted class of graphs, Weak Alias Logic (wAL) deals with unrestricted graphs [8].
The user annotates the program with pre- and
post-conditions and loop invariants using wAL. The annotations are then automatically
verified for correctness. wAL is an undecidable monadic second order logic that can
describe the shapes of most recursive data structures like lists, trees, and dags. Let X
and Y be heap structures represented as a set of regular expressions or access paths,
and let ρ be a regular expression or a set of regular expressions. In wAL, ⟨X⟩ρ specifies that X is bound to the heap, which is described by ρ. Also, the formula X⁻¹Y in wAL denotes all the paths from X to Y. Given below are some predicates in wAL [8].
reach(X, Y) = ⟨Y⟩XΣ+
share(X, Y) = ∃Z.reach(X, Z) ∧ reach(Y, Z)
tree(root) = ∀X.⟨X⟩root ⇒ ∀Y, Z.(reach(X, Y) ∧ reach(X, Z) ⇒ ¬share(Y, Z))
These predicates are explained below.
– Predicate reach(X, Y ) states that location Y is reachable from location X via a
non-empty path Σ+ . The Kleene closure over the set of pointer fields Σ helps in
summarizing unbounded information.
– Predicate share(X, Y) states that locations X and Y both reach a common location via non-empty paths.
– Predicate tree(root) describes the shape of a tree structure pointed to by a variable
root. It states that sharing is absent in a tree structure.
Let us derive the pre-condition for statement 3, y.f := x of the program in Figure 10a,
when its post-condition is given.
pre-condition: {aclist(x) ∧ ∀X, Y [⟨X⟩x ∧ ⟨Y⟩y ⇒ X⁻¹Y = ∅]}
assignment statement: y.f := x
post-condition: {aclist(y)}
The post-condition for the assignment statement y.f := x specifies that variable y points
to an acyclic linked list (denoted by predicate aclist(y)). The pre-condition for the
assignment statement is that variable x should be an acyclic linked list and that there
should be no path from x to y (otherwise the assignment statement would create a cycle,
invalidating the post-condition aclist(y)).
Bozga et al. [8] have also designed pAL (Propositional Alias Logic), which is a decidable
subset of wAL. However, pAL can describe only finite graphs and does not have the
ability to describe properties like list-ness, circularity, and reachability.
• Hob [48] is a program analysis framework, which allows a developer to use multiple
analysis plugins for the same program. Each procedure can be verified by a different
analysis plugin; therefore, an efficient analysis plugin can be chosen for each procedure
depending on the properties of the procedure that the developer wishes to verify. The
Hob project has been plugged with the following three analysis plugins [46]:
1. Flag Abstraction Language plugin [47] uses first order boolean algebra extended
with cardinality constraints. It is used to infer loop invariants.
2. PALE plugin [63] uses monadic second order logic to verify properties of tree like
data structures.
3. Theorem proving plugin uses higher-order logic to handle all data structures.
5 Summarization in Store Based Heap Model
It is easier to visualize a memory graph as heap objects connected through fields. This is
the view of a store based heap model as introduced in Section 3.2. The following sections
summarize this unbounded view using techniques involving a combination of allocation sites,
variables, some other generic instrumentation predicates, and higher-order logics.
5.1 Summarization Using Allocation Sites and Variables
Chase et al. [15] were the first to summarize heap nodes using techniques involving allocation
sites and variables. In their method, heap nodes with the following properties are summarized:
1. heap nodes created at the same program point (i.e. allocation site) such that
2. they have the same pointed-to-by-x predicate values for each pointer variable x.
1 x := null
2 y := new
3 y.f := x
4 x := y
5 z := y.f
6 y := z
(a) Example program.
(b) Execution snapshot showing an unbounded heap graph at Out4 of the program in Figure 10a.
(c) Execution snapshot showing an unbounded heap graph at Out6 of the program in Figure 10a.
Figure 10. Running example to illustrate various heap summarization techniques. Summarized representations of the heap memories in Figures 10b and 10c are shown on a store based model in Figures 11, 12, 13, 16, and 17.
We illustrate this for the program in Figure 10a. The unbounded memory graphs at Out4
and Out6 are shown in Figures 10b and 10c, respectively. The corresponding summarized
graphs created using this method [15] at Out4 and Out6 are shown in Figures 11a and 11b,
respectively. In Figure 11a, we see that nodes have been named by their allocation site, i.e.
statement 2. Also, since this method keeps nodes apart on the basis of pointer variables, we
get two abstract nodes—one node pointed to by pointer variables x and y, and the other node
not pointed to by any variable. The self loop on the second node denotes the presence of
unbounded number of nodes that are not pointed to by any pointer variable.
This method analyses Lisp like programs and constructs shape graphs for heap variables.
It can determine the shape of the allocated heap as tree, simple cycle, and doubly linked list.
In case of lists and trees, if all the nodes are allocated at the same site then the shape graph
would contain a single summary node with a self loop, making all the nodes aliased to each
other. For example, from the graph in Figure 11a, it cannot be inferred whether the structure
is a linear list or it contains a cycle in the concrete heap memory. To avoid this, each node
is augmented with a reference count i.e. the number of references to the corresponding heap
location from other heap locations (and not from stack variables). For example, the reference
count of the summary node not pointed to by any variable in Figure 11a is one. A reference
count of less than or equal to one for each node indicates that the data structure is a tree or a
list; whereas, a reference count of more than one indicates that the data structure is a graph
with sharing or cycles. Therefore, this method can identify at Out4 that the program creates
a linear list.
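On a concrete memory snapshot, the reference-count refinement amounts to a heap-only in-degree computation, as in this sketch of ours (the node names are assumptions); the analysis itself tracks these counts abstractly on summary nodes during summarization.

# Hedged sketch of the reference-count refinement: count, for each node,
# the references reaching it from other heap locations (not from stack
# variables), and use the counts to distinguish lists/trees from graphs.
def ref_counts(heap_edges):
    counts = {}
    for _src, _field, tgt in heap_edges:
        counts[tgt] = counts.get(tgt, 0) + 1
    return counts

def classify(counts):
    # A count of at most one everywhere indicates a tree or a list;
    # a larger count indicates sharing or cycles.
    return "tree or list" if all(c <= 1 for c in counts.values()) \
        else "graph with sharing or cycles"

# Concrete list at Out4 of Figure 10a: n1 -f-> n2 -f-> n3.
print(classify(ref_counts([("n1", "f", "n2"), ("n2", "f", "n3")])))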
However, the method cannot perform materialization of summary nodes. For example,
after analysing statements 5 and 6 of the program in Figure 10a, the abstract graph obtained
at Out6 is shown in Figure 11b.
[Figure 11 appears here: (a) summarized shape graph at Out4; (b) summarized shape graph at Out6. Nodes are labelled with their allocation site, statement 2.]
Figure 11. Summarization using allocation sites and variables [15] for the program in Figure 10a.
It can be seen that the summary node (not pointed to by
any variable) in the graph at Out4 in Figure 11a has not been materialized when y and z
point to the heap locations corresponding to this summary node. The graph in Figure 11b,
therefore, indicates that y and z may possibly point to two different heap locations on a list
which is never true at Out6 of the program. Additionally, due to the lack of materialization, this method is not able to handle list reversal and list insertion programs. Finally, Sagiv et
al. [75] highlight, “this method does not perform strong updates for a statement of the form
x.f := null, except under very limited circumstances.”
5.2 Summarization Using Variables
Variable based summarization technique has been used in shape analysis. Shape analysis
encompasses all algorithms that compute the structure of heap allocated storage with varying
degrees of power and complexity [78]. Heap nodes not pointed to by any root variable are
summarized as a single summary node. When a program statement creates a pointer from a
new root variable to one of the heap locations represented by the summary node, the algorithm
materializes the summary node. It creates two nodes—one representing a single materialized
heap node pointed to by the new root variable and the other representing the remaining
summary nodes not pointed to by any root variable. We describe below some shape analysis
techniques that summarize using variables.
• Sagiv et al. [75, 76] distinguish between heap locations by their pointed-to-by-x predicate values for all variables x in the program. (A generalized approach of this shape analysis [75] is TVLA [77], which uses summarization using generic instrumentation predicates; see Section 5.3 and Figures 12d and 12e.) We use the running program in Figure 10a
of the program are shown in Figure 10b and Figure 10c. Fixpoint computation of the
bounded shape graph [75] at Out6 is shown in Figure 12c. Intermediate steps are shown
in Figures 12a and 12b. Let us see how these are obtained. Figure 12a shows a shape
graph at Out4 which contains a node pointed to by both x and y. This node in turn points
to a summary node through link f representing an unbounded number of dereferences
of field f. At Out5 , z points to a node y.f of Figure 12a. For this, a node (pointed
to by z) is created by materializing the summary node y.f. At Out6 , y points to this
materialized node (pointed to by z) (shown in Figure 12b). In the subsequent iteration
of the loop, y and z point to a subsequent node (shown in Figure 12c). The remaining
nodes (not pointed to by any of x, y, and z—those between x and y and those beyond
y) get summarized (represented using dashed lines) as shown in Figure 12c. Here we see
that node pointed to by x either directly points to the node pointed to by y (or z) via
field f or points to an unbounded number of nodes before pointing to the node pointed to by y (or z) via field f.
[Figure 12 panels: (a) shape graph at Out4 [75, 77]; (b) shape graph at Out6 after one iteration of statements 5 and 6 [75, 77]; (c) shape graph at Out6 after fixpoint [75]; (d) shape graph at Out6 after two iterations of statements 5 and 6 [77]; (e) shape graph at Out6 after fixpoint [77], where the two summary nodes are distinguished based on whether they are reachable from the root variables x, y, and z.]
Figure 12. Summarization using variables [75] is shown in Figures 12a, 12b, and 12c. Summarization using generic instrumentation predicates [77] is shown in Figures 12a, 12b, 12d, and 12e for the program in Figure 10a. Predicate rx denotes whether variable x can transitively reach the node. It can be seen that variable z materializes the summary node pointed to by y.f in Figures 12a and 12b.
Let us compare the shape graphs produced by Sagiv et al. [75] (Figures 12a and 12c)
with those of Chase et al. [15] (Figures 11a and 11b). The graphs at Out4 shown in
Figure 11a and Figure 12a store identical information. However, the graph at Out6
shown in Figure 12c is more precise than the graph at Out6 in Figure 11b—unlike the
latter, the former is able to indicate that y and z always point to the same location on
the list due to materialization.
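The graph surgery performed by materialization can be sketched in a few lines. The following is our own minimal illustration of the step z := y.f on the shape graph of Figure 12a (the node names u, s, and m are assumptions), not the full materialization operation of [75].

# Hedged sketch of materialization: when z := y.f targets a summary
# node, one concrete representative is split off for z while a residual
# summary node keeps representing the remaining locations.
def materialize(graph, pred, summary, new_name):
    # Redirect pred's f-edge from the summary node to the new node; the
    # new node may still point into the residual summary (a may-edge).
    graph[pred] = {n for n in graph[pred] if n != summary} | {new_name}
    graph[new_name] = {summary}
    return new_name

# Figure 12a: node u (pointed to by x and y) points into summary node s.
g = {"u": {"s"}, "s": {"s"}}
materialize(g, "u", "s", "m")       # effect of z := y.f; z now points to m
print(g)                            # {'u': {'m'}, 's': {'s'}, 'm': {'s'}}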
• An imprecision in shape analysis is that its summary nodes do not remember the exact number of concrete nodes represented by a summary node in an abstract heap graph. These counts are useful in checking termination of programs that need to consider the size of the list being accessed. An interesting solution to this problem is
the use of a counter with every such summary node in the heap graph in order to denote
the number of concrete heap locations represented by the summary node [7]. This is
used to define a counter automaton abstraction of the state transition behaviour of heap
manipulating programs. This is illustrated in Figure 13 for the program in Figure 10a.
With the use of variables i, j, and k for counters, the algorithm ensures that the analysis
is not unbounded. The automaton starts with a heap graph containing one summary
node (with counter i), pointed to by x and y at In5 . It proceeds to Out5 if counter i > 1,
and materializes the node into a unique node (with a new counter j = 1) pointed to by x
and y, and the remaining summary node (with counter i) pointed to by z. Here counter i
used at In5 is decremented at Out5 . The graph at Out5 is then transformed to Out6 under the influence of program statement 6. To further transform this graph from Out6 to Out5 in the loop, if counter i > 1, it materializes the summary node pointed to by y at Out6 into a new node (with a new counter k = 1) pointed to by y, and the remaining summary node (with counter i) pointed to by z. Here the counter i used at Out6 is decremented by one at Out5 . In the transformation from Out5 to Out6 , since y will start to point to the node pointed to by z, the node with counter k will not be pointed to by any variable. Therefore, nodes with counters k and j are merged, and their counter values updated (added up) at Out6 . Bouajjani et al. [7] have used these counters for verifying safety and termination of some sorting programs.
[Figure 13 appears here: a counter automaton whose states are the abstract heaps at In5 , Out5 , and Out6 . Transitions are guarded by conditions such as [i > 1] and perform counter updates such as i := i − 1, j := 1, k := 1, and j := j + k.]
Figure 13. Summarization using variables: the counter automaton [7] for program statements 5 to 6 in Figure 10a is shown. States of the automaton denote the abstract heaps at the program points shown. Edges of the automaton denote the conditions of transition in the automaton. Counter variables (i, j, and k) corresponding to each abstract node in the heap are depicted inside the node itself.
5.3 Summarization Using Generic Instrumentation Predicates
We describe below some other generic instrumentation predicates based summarization
techniques, including TVLA, type propagation analyses, acyclic call paths, and context free
grammars that have been used for a store based heap model.
• As an improvement over the summarization technique using only variables [75] (see
Section 5.2), the following predicates are used in order to summarize heap nodes more
precisely [77, 78, 90].
– The pointed-to-by-x property denotes whether a heap node is pointed to directly by variable x.
– reachable-from-x-via-f property denotes whether variable x can transitively reach
a heap node via field f .
We use the running program in Figure 10a to illustrate the summarization. Unbounded
memory graphs at Out4 and Out6 of the program are shown in Figures 10b and 10c.
Fixpoint computation of a bounded shape graph using predicates pointed-to-by-x and
reachable-from-x-via-f for summarization at Out6 is shown in Figure 12e. Intermediate
steps are shown in Figures 12a, 12b, and 12d. We have already explained the bounded
shape graph obtained using only pointed-to-by-x predicate [75] for summarization at
Out6 in Figure 12c (see Section 5.2). Compare Figures 12c and 12e to observe that the
bounded shape graphs obtained are the same with respect to the nodes pointed to by a
root pointer variable; however, they differ with respect to the summary nodes not
pointed to by any root pointer variable. This is because of the use of the additional
predicate reachable-from-x-via-f ; this predicate is denoted as rx , ry , and rz in
Figures 12d and 12e. To see how Figure 12e is obtained, further observe the following
in the intermediate step shown in Figure 12d: the node pointed to by rx is kept
separate from the summary node pointed to by rx , ry , and rz . Therefore, the shape
graph in Figure 12e represents unbounded dereferences of field f following root node x
and another sequence of unbounded dereferences of field f following root node y (or z).
This work builds a parametric framework, which allows the designer of a shape analysis algorithm to identify any desired heap property. The designer can specify different
predicates in order to obtain more useful and finer results, depending on the kind of
data structure used in a program. For example, the use of predicate “is shared” gives
more precise sharing information, and the use of predicate “lies on cycle” gives more
precise information about cycles in the heap memory. Further, 3-valued predicates
(TVLA) [77, 78, 90] help in describing properties of the shape graph using three values,
viz. false, true, and don’t know. Therefore, both may and must pointer information can
be stored. Shape analysis stores and summarizes heap information precisely, but at the
same time, it is expensive due to the use of predicates for each node [10].
• Another way of summarizing unbounded heap locations is based on the types of the heap
locations. Sundaresan et al. [87] merge unnamed heap nodes if the types reaching the
heap locations are the same. For example, for some variables x and y containing field f,
heap locations x.f and y.f are merged and represented as C.f if x and y point to objects
whose class name is C. This method has been used in literature to determine at compile
time which virtual functions may be called at runtime. This involves determining the
runtime types that reach the receiver object of the virtual function. This requires data
flow analysis to propagate types of the receiver objects from allocation to the method
invocation. These techniques that perform data flow analysis of types are called type
propagation analyses [19].
• Lattner and Adve [51] point out that if heap objects are distinguished by allocation sites with a context-insensitive analysis (a context-sensitive analysis, in contrast, examines a given procedure separately for different calling contexts), precision is lost. This is because it cannot segregate distinct data structure instances that have been created by the same function, i.e. at the
Lattner and Adve [51, 52] propose to name heap objects by the entire acyclic call paths
through which the heap objects were created. They compute points-to graphs called
Data Structure graphs, which use a unification-based approach [86]. Here, all heap nodes
pointed to by the same pointer variable via the same field are merged; in other words,
every pointer field points to at most one heap node. The use of acyclic call paths and the unification-based approach help in summarizing the potentially infinite number of heap nodes that can be created in recursive function calls and loops.
APPEND(x,y) := if (null x) then y
               else cons(car(x), APPEND(cdr(x), y))
(a) Functional language program.
APPEND1 → cons1.car1 | cons2.APPEND1.cdr1
APPEND2 → ε | cons2.APPEND2
(b) Context free grammar for the program in Figure 14a [37]. Fi denotes the ith argument of function F.
(c) [Graph panel: computing the result of APPEND from arguments x and y. The two edges in each rectangle denote the car and cdr fields, respectively. Dashed locations depict nodes unreachable from the result of APPEND and can therefore be garbage collected.]
Figure 14. Computing a context free grammar for a functional language program in order to garbage collect unreachable nodes [37].
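Returning to the unification-based Data Structure graphs of Lattner and Adve: the merging step is naturally implemented with a union-find structure. The sketch below is our own minimal illustration (the node names, which embed call paths, are assumptions), not the actual implementation of [51, 52].

# Hedged sketch of unification-based merging: whenever a pointer (or field)
# is found to point to two nodes, the two nodes are unified, so every
# pointer field ends up pointing to at most one abstract node.
parent = {}

def find(n):
    parent.setdefault(n, n)
    while parent[n] != n:
        parent[n] = parent[parent[n]]    # path halving
        n = parent[n]
    return n

def unify(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
    return find(a)

# x.f is observed to point to nodes created along two different call paths:
unify("o<main,site7>", "o<main,helper,site7>")
print(find("o<main,site7>") == find("o<main,helper,site7>"))   # True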
• Another way of summarizing is to build a context free grammar of the heap [37]. This has
been done for functional programs, which consist of primitive functions like cons, car,
and cdr. This grammar has been used to detect garbage cells in a functional program
through compile time analysis. It is based on the idea that the unshared locations passed as parameters to a function that are not part of the final result of the function can be garbage collected after the function call. We describe this by reproducing from the paper
the definition of function APPEND in Figure 14a. Data structures pointed to by variables
x and y (shown in Figure 14c) are passed as arguments to APPEND function. The circular
nodes are reachable from the root of the result of the APPEND function; these circular
nodes can be identified as x(.cdr)∗ .car and y. However, the dashed locations, which
belong to x, are not reachable from the root of the result of the APPEND function; these
dashed locations can be identified as x(.cdr)∗ . These dashed locations can, therefore, be
garbage collected.
In order to identify argument locations that are unreachable from the result of the
called function, the paper analyses the usage of each argument of the called function by
constructing a context free language of each argument. The grammar constructed is
shown in Figure 14b. Each argument of a function, say F (x1 , x2 , . . . , xi , . . . ), is
represented by a non-terminal in a context free grammar. Derivation rule for the ith
argument xi of function F is Fi → s1 | s2 | · · · | sk , where s1 , s2 , . . . , sk are all the strings
obtained from the function body. The first and the second lines in Figure 14b are the
context free grammars of APPEND1 and APPEND2, which denote arguments x and y of the
program in Figure 14a. The strings on the right hand side of the grammar consist of
car1 , cdr1 , cons1 , and cons2 , and user defined functions. Each function name is used
with a subscript indicating the position of the argument in the function. Let us study the
grammar of the function APPEND shown in Figure 14b. APPEND2 in the second line denotes
the usage of the second argument of APPEND, y, in the function definition. It is either used
as it is or passed as the second argument to cons (denoted by cons2). APPEND1 in the first
line denotes the usage of the first argument of APPEND, x, in the function definition. The
strings generated by the APPEND1 grammar are of the form cons2^k.cons1.car1.cdr1^k.
Reading such a string in reverse order, we can see that APPEND decomposes the list x
k times by applying cdr, then car selects the element at that position, and cons1 makes
that element the left child of a new location, which is itself acted on by cons2 the same k
times. The context free grammar is used to identify the paths reachable from the argument.
For example, using the grammar APPEND1, i.e., argument x, it can be seen that the string
(cdr1)^k.car1 (obtained by reversing the string cons2^k.cons1.car1.cdr1^k) denotes the
locations x(.cdr)∗.car, which are reachable from the result of APPEND. The rest of the
locations in argument x are unreachable and can be garbage collected.
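This reasoning can be mechanized by enumerating bounded derivations of the grammar. The
sketch below is our own illustration of the idea in [37] (derive and the grammar encoding
are hypothetical, not the paper's algorithm); reversing each derived string of APPEND1
yields a prefix of the form (cdr1)^k.car1, which reads off the locations x(.cdr)∗.car
reachable from the result.

    def derive(grammar, sym, depth):
        """Enumerate the terminal strings derivable from sym using at
        most `depth` nested expansions of non-terminals."""
        if sym not in grammar:              # terminal symbol
            return {(sym,)}
        if depth == 0:
            return set()
        strings = set()
        for production in grammar[sym]:
            parts = {()}
            for s in production:
                parts = {p + q for p in parts
                               for q in derive(grammar, s, depth - 1)}
            strings |= parts
        return strings

    grammar = {
        "APPEND1": [["cons1", "car1"], ["cons2", "APPEND1", "cdr1"]],
        "APPEND2": [[], ["cons2", "APPEND2"]],   # [] encodes the empty string
    }
    for s in sorted(derive(grammar, "APPEND1", 3), key=len):
        print(".".join(reversed(s)))
    # car1.cons1
    # cdr1.car1.cons1.cons2
    # cdr1.cdr1.car1.cons1.cons2.cons2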
Grammars have also been used for liveness-based garbage collection by Asati et
al. [2], who create the notion of a demand that the execution of an expression makes
on the heap memory. Along somewhat similar lines, liveness-based garbage collection for
imperative programs has been performed using a storeless model by Khedker et al. [43]
(see Section 4.2).
• Another way of building a context free grammar of heap access paths is by posing shape
analysis as a CFL reachability problem. This has been done for Lisp-like languages that do
not support strong updates [68]. A CFL reachability problem differs from the ordinary graph
reachability problem in that a path between two variables is formed only if the
concatenation of the labels on the edges of the path is a word in the specified context free
language. An equation dependence graph is constructed by marking all program variables
at each program point in the program’s control flow graph. The edges between these
variables are labelled with head, tail, head−1, and tail−1.

Figure 15. A control flow graph of a program (1: x := cons(y, z); 2: v := car(x);
3: w := cdr(x)) and its equation dependence graph [graph omitted]. Edges in the equation
dependence graph are labelled with head, tail, head−1, and tail−1; those shown without
labels represent the identity relation (label id) [68].
We illustrate the use of these labels in the equation dependence graph in Figure 15. For
statement 1, x := cons(y, z), label head is marked on the edge from y before statement
1 to x after statement 1. Similarly, label tail is marked on the edge from z before
statement 1 to x after statement 1. This denotes that x derives its head from y and tail
from z. For program statement 2, v := car(x), label head−1 is marked on the edge from
x before statement 2 to v after statement 2. This denotes that v gets its value using the
head of x. Similarly, tail−1 is the label for statement 3, w := cdr(x).
The heap language in terms of access paths is identified by concatenating, in order, the labels
of the edges on the paths of the equation dependence graph. For example, the path from
z before statement 1 to w after statement 3 shows that w gets the value z.tail.id.tail−1 ,
which is simply z. Heap properties can be obtained by solving CFL reachability problems
on the equation dependence graph using the following context free grammars [68]:
– id_path → id_path id_path | head id_path head−1 | tail id_path tail−1 | id | ε
This grammar represents paths in which the occurrences of head−1 (tail−1) are
balanced by matching occurrences of head (tail), implying that the heap was dereferenced
through head−1 (tail−1) as much as it was constructed using head (tail).
– head_path → id_path head id_path
tail_path → id_path tail id_path
These grammars represent paths in which the number of head (tail) labels is more than
the number of head−1 (tail−1) labels, implying that the amount of heap allocated using
head (tail) is more than the amount of heap dereferenced using head−1 (tail−1).
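Since id_path is a Dyck-style (balanced bracket) language, membership of a path's label
string can be decided with a simple stack. The sketch below is our own encoding (head_inv
and tail_inv stand for head−1 and tail−1), not the solver of [68]; a head_path or
tail_path string is one that leaves exactly one unmatched head or tail.

    def is_id_path(labels):
        """Check membership in id_path: head/tail act as opening
        brackets matched by head_inv/tail_inv; id is neutral."""
        matches = {"head_inv": "head", "tail_inv": "tail"}
        stack = []
        for lab in labels:
            if lab == "id":
                continue
            if lab in ("head", "tail"):
                stack.append(lab)
            elif lab in matches:
                if not stack or stack.pop() != matches[lab]:
                    return False
            else:
                raise ValueError("unknown label: " + lab)
        return not stack

    # Path from z before statement 1 to w after statement 3 in Figure 15:
    print(is_id_path(["tail", "id", "tail_inv"]))   # True: w is simply z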
5.4 Summarization Using Allocation Sites and Other Generic Instrumentation Predicates
As an attempt to reduce the cost of shape analysis, recency-abstraction [4] is used as an
approximation of heap allocated storage. This approach does not use the TVLA tool;
however, it uses concepts from 3-valued logic shape analysis [77]. Here, only the most
recently allocated node at an allocation site is kept materialized, representing a unique node.
Therefore, its precision level is intermediate between (a) one summary node per allocation
site and (b) complex shape abstractions [77]. Note that for the program in Figure 10a,
Figure 16a shows that summarization based only on allocation sites creates a summary node
for objects allocated at site 2. Here the summary node is not materialized; therefore,
variables x and y point to the summary node itself at Out4 . Consequently, allocation site
based summarization cannot derive that x and y are must-aliased. Recency-abstraction is
illustrated in Figure 16b for the unbounded graph of Figure 10b. Due to materialization of
the most recently allocated node, the method is able to precisely mark x and y as
must-aliases at Out4. However, materializing only once is not enough and introduces
imprecision at Out6. This is shown in Figure 16c, where y and z are marked as may-aliases
(instead of the precise must-alias, as shown by the unbounded runtime memory graph in
Figure 10c).

Figure 16. Summarization using allocation sites and other generic instrumentation predicates
[4] for the program in Figure 10a is shown in Figures 16b and 16c; for comparison,
summarization using only allocation sites is shown in Figure 16a. [Alias graphs omitted.]
(a) Summarization using only allocation sites does not materialize the summary node Site 2;
the graph shows the aliases at Out4.
(b) Alias graph at Out4: with the materialization of the most recent Site 2 node, ⟨x, y⟩ are
marked as must-aliases [4].
(c) Alias graph at Out6: node Site 2 is not materialized further; dashed edges denote
may-aliases [4].
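The allocation step of recency-abstraction can be sketched as follows; this is our own
simplified encoding of the idea in [4] (RecencyAbstraction and allocate are hypothetical
names), not the implementation evaluated there. Each site keeps one materialized
most-recent node, on which strong updates remain possible, and every older object is
demoted into the site's summary.

    import itertools

    _fresh = itertools.count()   # supplies fresh node identities

    class RecencyAbstraction:
        def __init__(self):
            self.most_recent = {}    # site -> materialized unique node
            self.summaries = {}      # site -> set of demoted older nodes

    def allocate(ra, site):
        """Abstract transformer for an allocation at `site`: the previous
        most-recent node joins the site's summary (its points-to facts
        become weak), and a fresh unique node is materialized."""
        old = ra.most_recent.get(site)
        if old is not None:
            ra.summaries.setdefault(site, set()).add(old)
        fresh = ("node", site, next(_fresh))
        ra.most_recent[site] = fresh
        return fresh

    ra = RecencyAbstraction()
    n1 = allocate(ra, "site 2")      # first loop iteration
    n2 = allocate(ra, "site 2")      # second iteration demotes n1
    print(n2 == ra.most_recent["site 2"], n1 in ra.summaries["site 2"])
    # True True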
5.5 Summarization Using Higher-Order Logics
The heap can be abstracted as logical structures of a specialized logic like separation logic,
which is more powerful than simple predicate logic. Also, the efficiency of shape analysis can
be boosted by representing independent portions of the heap using formulae in separation
logic [69]. To elaborate, this exploits the spatial locality of code, i.e., the fact that each
program statement
accesses only a very limited portion of the concrete state. Using separation logic, the portion
of the heap that is not accessed by the statement(s) can easily be separated from the rest and
later recombined with the modified heap after analysing the statement(s). This dramatically
reduces the amount of reasoning that must be performed, especially if the statement is a
procedure call.
Assertions expressed in separation logic may produce infinite sets of concrete states. A
fixpoint computation can be achieved using finitely represented inductive predicate
assertions [10, 26] like list(), tree(), and dlist(), which represent an unbounded number of
concrete states shaped like a linked list, a tree, and a doubly linked list, respectively. The
abstraction comes from not tracking the precise number of inductive unfoldings from the
base case. Note that unlike logics on the storeless model, which use access paths and hide
locations in their modeling, separation logic explicates heap locations; therefore, separation
logic is categorized under the store based model.
In separation logic, the assertion A ↦ B denotes memory containing heap location A, which
points to heap location B. The assertion A ∗ B denotes memory represented as a union of two
disjoint heaps (i.e., with no common heap locations), one satisfying A and the other satisfying
B. The assertion A = B denotes that A and B have equal values. The assertion A ∧ B denotes
a heap that satisfies both A and B.
We work out the assertions using separation logic for the program in Figure 10a. In
Figure 17a, we have shown the heap graph and also the assertions in separation logic at Out4
over three iterations of statements 2, 3, and 4 in the loop. The assertion in the first iteration
says that x and y hold the same value, which points to a null value. The assertion in the
second iteration says that x and y hold the same value, which points to a new variable X′;
separation logic introduces the variable X′, which is not used anywhere in the program code,
and this X′ points to a null value. The assertion in the third iteration says that x and y hold
the same value, which points to another new variable X′′, which further points to X′; X′
points to a null value. If we continue in this way, we will get ever longer formulae. This
unboundedness is abstracted using
the predicate list(), where list(u, v) says that there is a linked list segment of unbounded
length from u to v. This predicate has the following recursive definition (here emp denotes an
empty heap):
list(u, v) ⇔ emp ∨ ∃w. u ↦ w ∗ list(w, v)
With this, we obtain the abstraction by using the following operation in the second iteration
at Out4 .
replace   x = y ∧ x ↦ X′ ∗ X′ ↦ null   with   x = y ∧ list(x, null)

Using a similar synthesis, the assertion at Out6 (shown in Figure 17b) can be obtained as
y = z ∧ list(x, z) ∗ list(z, null).

Figure 17. Summarization using separation logic [10, 26] for the program in Figure 10a.
[Heap graphs omitted.]
(a) Heap at Out4 after one, two, and three iterations of the loop, respectively; X′ and X′′ are
new variables not used anywhere in the program code:
Iteration 1:  x = y ∧ x ↦ null
Iteration 2:  x = y ∧ x ↦ X′ ∗ X′ ↦ null  ≡  x = y ∧ list(x, null)
Iteration 3:  x = y ∧ x ↦ X′′ ∗ X′′ ↦ X′ ∗ X′ ↦ null  ≡  x = y ∧ list(x, null)
(b) Heap at Out6 after fixpoint computation; X′′′ is a new variable not used anywhere in the
program code:
y = z ∧ x ↦ X′′′ ∗ X′′′ ↦ y ∗ list(z, null)  ≡  y = z ∧ list(x, z) ∗ list(z, null)
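This folding step can be sketched as rewriting over a set of spatial facts. In the encoding
below (our own, not the representation used in [10, 26]), ("pt", a, b) stands for a ↦ b and
("ls", a, b) for list(a, b); the rewriting uses the entailments a ↦ b ⊨ list(a, b) and
list(a, b) ∗ list(b, null) ⊨ list(a, null), under the simplifying assumption that the
intermediate primed variables occur nowhere else.

    def fold_into_lists(facts):
        """Abstract chains x |-> X' * X' |-> ... |-> null to list(x, null)."""
        # Every points-to fact is (soundly) weakened to a one-cell segment.
        segs = {("ls", a, b) for (_, a, b) in facts}
        changed = True
        while changed:
            changed = False
            for (_, a, b) in sorted(segs):
                # Concatenate a segment with the null-terminated segment
                # starting where it ends.
                if b != "null" and ("ls", b, "null") in segs:
                    segs -= {("ls", a, b), ("ls", b, "null")}
                    segs.add(("ls", a, "null"))
                    changed = True
                    break
        return segs

    # Iteration 2 at Out4: x |-> X' * X' |-> null becomes list(x, null).
    print(fold_into_lists({("pt", "x", "X'"), ("pt", "X'", "null")}))
    # {('ls', 'x', 'null')}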
The SpaceInvader tool [18] also uses separation logic. The tool works on a subset of
separation logic for inferring basic properties of linked list programs.
6 Summarization in Hybrid Heap Model
For heap applications that need to capture both points-to related properties (using a store
based model) and alias related properties (using a storeless model), the heap memory is best
viewed as a hybrid model combining the storeless and the store based heap model. This model
can also be summarized using various techniques, like allocation sites, k-limiting, variables,
and other generic instrumentation predicates.
6.1 Summarization Using Allocation Sites and k-Limiting
Using the hybrid model, alias graphs record may-aliases [50]. Let us study the abstract memory
graph for the program in Figure 8a. We assume that variable x is initialised before statement
1 to point to an unbounded memory graph shown in Figure 8b. The bounded representation
of this unbounded memory graph is illustrated in Figure 18 using this technique. This method
labels each node with an access path reaching the node. If there is more than one access path
reaching a node, then this method arbitrarily chooses any one of the paths as a label for the
node. For example, access paths x.g and w.g reach the same node; this node is arbitrarily
labelled as x.g. It can be seen in the summarized graph in Figure 18 that nodes reachable
from x via fields f and g have been summarized using k-limiting; the value of k has been set to
4; therefore, the last node pointed to by variable x via field f has the label x.f.f.f(.f)+ . This
node has a self loop, which denotes that the node is a summary node of unbounded locations.
Larus and Hilfinger [50] also proposed allocation site based summarization as a way of
naming the nodes. For this, let us study locations pointed to by z and w for the program in
Figure 8a. Memory locations z(.f)∗ (or w(.f)∗ ) are allocated at program statement 7. Figure 18
shows that these nodes are summarized using allocation sites. A self loop around the node
marked Site 7 denotes unbounded dereferences of field f. However, this summarization
spuriously stores the alias relationship ⟨x.f.f.g, w.f.f.g⟩.

Figure 18. Summarization using allocation sites and k-limiting (k = 4) on a hybrid model [50]
at Out8 for the program in Figure 8a; pointer variables y and z are not shown for simplicity.
[Graph omitted: nodes reachable from x carry access-path labels such as x.f, x.g, x.f.f,
x.f.f.g, x.f.f.f, and the summary label x.f.f.f(.f)+, while the nodes allocated at statement 7
are merged into a single summary node labelled Site 7.]
To handle this imprecision in summarization using allocation sites, Larus and Hilfinger [50]
distinguish nodes allocated at the same site by labeling each newly allocated node with an
aggregate of arguments passed to the allocation function (cons in Lisp). This hybrid approach
of labeling allocation sites with access paths (arguments of the allocation function) improves
the precision of the graphs. In order to limit the abstract graph to a finite size, summary nodes
are created using the concept of s-l limiting in which no node has more than s outgoing edges
(other than the nodes representing the bottom element), and no node has a label longer than
l.
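The k-limited naming of Figure 18 can be sketched as a small function (our own
reconstruction; the exact cutoff convention is an assumption): access paths with fewer than
k field dereferences name their node exactly, and all longer paths collapse into one summary
name.

    def k_limited_name(root, fields, k=4):
        """Name the node reached from `root` via the `fields` sequence,
        e.g. five f-dereferences from x yield "x.f.f.f(.f)+"."""
        if len(fields) < k:
            return root + "".join("." + f for f in fields)
        prefix = root + "".join("." + f for f in fields[:k - 1])
        return prefix + "(." + fields[k - 1] + ")+"

    print(k_limited_name("x", ["f"] * 3))   # x.f.f.f   (named exactly)
    print(k_limited_name("x", ["f"] * 7))   # x.f.f.f(.f)+   (summary node)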
6.2 k-Limiting Summarization
De and D’Souza [16] highlight an imprecision in saving pointer information as graphs. We
illustrate this imprecision using Figure 19a for statements 9, 10, and 11 of our running program
in Figure 8a. The problem is caused by the fact that a summarized object node may represent
multiple concrete objects; therefore, the analysis cannot perform a strong update on such
objects. At In9 of the program, y is aliased to the summary node x.f.f.f(.f)+ . Therefore,
a strong update cannot be performed at statement 10, i.e., the existing pointer of y.f cannot be removed.
Hence, at Out11 , v will point to all the objects previously pointed to by y.f as well as the new
location pointed to by u. Observe that the former is imprecise.
De and D’Souza [16] believe that this imprecision is caused by storing points-to information
as graphs. Therefore, instead of using graphs, they use access paths. Their technique maps
k-limited access paths (storeless model) to sets of summarized objects (store based model)
(represented as o⟨n⟩ in Figures 19b and 19c). For example, x → {o1} means that the
access path x points to (is mapped to) the object named o1. Since the access paths are precise
up to length k, like any k-limiting abstraction, the technique can also perform strong updates
up to length k.

Figure 19. Summarization using k-limiting (k = 4) on a hybrid model [16] for the program in
Figure 8a is shown in Figures 19b and 19c. Here o⟨n⟩ represents an object name and the
symbol → denotes the points-to relation. For easy visualization, a summarization on a store
based model at Out11 is shown in Figure 19a.
(a) Illustrating imprecision in the store based model: k-limited (k = 4) summarized graph at
Out11 [graph omitted]. Corresponding to statements 9, 10, and 11, u points to o12, and y.f
points to both o4 and o12; therefore, v also points to both o4 and o12. Here v is imprecisely
aliased to x.f.f.f.
(b) k-limited (k = 4) points-to information at In9 [16]; x.g and w.g are aliased:
x → {o1}, x.f → {o2}, x.f.f → {o3}, x.f.f.f → {o4}, x.f.f.f.f → {o5}, x.g → {o9},
x.f.f.g → {o10}, w → {o6}, w.f → {o7}, w.f.f → {o8}, w.g → {o9}, w.f.g → {o10},
y → {o3, o5}, z → {o7, o8}, . . .
(c) k-limited (k = 4) points-to information at Out11 [16]; v precisely points only to o12
(pointed to by u) and is not aliased to x.f.f.f:
x → {o1}, x.f → {o2}, x.f.f → {o3}, x.f.f.f → {o4, o12}, x.f.f.f.f → {o5}, x.g → {o9},
x.f.f.g → {o10}, w → {o6}, w.f → {o7}, w.f.f → {o8}, w.g → {o9}, w.f.g → {o10},
y → {o3, o5}, y.f → {o12}, u → {o12}, v → {o12}, z → {o7, o8}, . . .
In Figure 19b at In9 , y points to a summarized object {o3, o5} (pointed to by x.f.f and
x.f.f.f(.f)+ , respectively), as shown in Figure 19a. Program statement 10 updates the pointer
information of y.f. Therefore, if u points to object o12, then it is sound to say that y.f will
point only to object o12 at Out10. However, it is not sound to say that x.f.f.f (an alias of
y.f) will point only to object o12, since y corresponds to multiple access paths, viz. x.f.f and
x.f.f.f(.f)+. Therefore, in Figure 19c, at Out10, the method strongly updates y.f to {o12}
(pointed to by u), even though y points to multiple objects (o3 and o5) at In10. Also, for
sound results, x.f.f.f is not strongly updated: x.f.f.f points to o12 in addition to the
previously pointed-to object o4. Since y.f points only to o12 at Out10, the access path v
also precisely points only to the new object o12 (pointed to by u) at Out11.
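The transfer function of statement 10 can be sketched as follows (our own simplified
encoding of the idea in [16]; assign_field is a hypothetical name). Here env maps
k-limited access paths to sets of abstract objects: the written path y.f is strongly updated,
while a may-aliased path q.f (one whose base q can share an object with y) is weakly
updated.

    def assign_field(env, y, f, u):
        """Abstract transformer for y.f := u over an access-path map."""
        targets = set(env[u])
        env[y + "." + f] = targets                 # strong update on y.f
        for q in list(env):
            p = q + "." + f
            # q.f may denote the overwritten cell only if q and y can
            # point to a common object; old and new targets are kept.
            if q != y and p in env and env[q] & env[y]:
                env[p] = env[p] | targets          # weak update
        return env

    env = {"x.f.f": {"o3"}, "x.f.f.f": {"o4"}, "x.f.f.f.f": {"o5"},
           "y": {"o3", "o5"}, "u": {"o12"}}
    assign_field(env, "y", "f", "u")
    print(sorted(env["y.f"]), sorted(env["x.f.f.f"]))
    # ['o12'] ['o12', 'o4']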
6.3 Summarization Using Variables and Other Generic Instrumentation Predicates
We describe below some application specific predicates that have been used in a hybrid model.
• In order to remove unreachable parts of the heap across functions in interprocedural
analysis, cutpoints are marked on the heap [72]. Cutpoints are objects which separate
the local heap of the invoked function from the rest of the heap. 3-valued logic shape
analysis (classified under the store based model) is used for summarization [77]. Each
cutpoint is identified by an access path (a feature of a storeless model) which is not
relevant to the function being called. When the function returns, the access path of
the cutpoint object is used to update the caller’s local heap with the effect of the call.
Therefore, irrelevant parts of abstract states that will not be used during the analysis
are removed by modeling the heap using both storeless and store based representations.
For example, an acyclic list pointed to by x is passed to the reverse() function, which
reverses the list performing strong updates. Let us say, before the function call, y.g.g
and x.f are aliased and y is not in scope of function reverse(). On return of the function,
we should be able to derive that y.g.g.f and x are aliased. To capture this kind of a
relationship, the effect of the function on cutpoints is tracked. In this example, the second
node of list x is a cutpoint, and inside the function reverse() it can be identified with a new
alias relationship between access paths, ⟨C, x.f⟩, where C is the access path used to label the
second node (the cutpoint) of the list. On return of the function reverse(), we will derive
⟨x, C.f⟩ as the alias relationship. Thus, we will be able to restore the alias relationship
between x and y as ⟨x, y.g.g.f⟩ in the calling function.
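The restoration step can be sketched as a substitution over access paths (our own
illustration with a hypothetical helper name, not the machinery of [72]): on return, the
cutpoint label is re-anchored to the caller's access path.

    def restore_alias(callee_alias, cutpoint_bindings):
        """Rewrite a callee-level alias pair into caller terms, e.g.
        ("x", "C.f") with C bound to "y.g.g" becomes ("x", "y.g.g.f")."""
        def subst(path):
            root, _, rest = path.partition(".")
            caller = cutpoint_bindings.get(root)
            if caller is None:
                return path
            return caller + ("." + rest if rest else "")
        return tuple(subst(p) for p in callee_alias)

    print(restore_alias(("x", "C.f"), {"C": "y.g.g"}))   # ('x', 'y.g.g.f')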
• Connection analysis (similar to access paths used in a storeless model) along with store
based points-to analysis has been used as an abstraction [25]. This method first
resolves all pointer relationships on the stack using a store based points-to analysis,
which abstracts all heap locations as a single symbolic location called heap. All
pointers reported to be pointing to heap are then further analysed via a storeless heap
analysis, called connection analysis, and shape analysis.
7 Design Choices in Heap Abstractions
Given a confounding number of possibilities of combining heap models and summarization
techniques for heap abstractions, it is natural to ask the question “which heap abstraction
should I use for my analysis?” This question is one of the hardest questions to answer
because there is no one right answer and the final choice would depend on a wide range of
interdependent, and often conflicting, requirements of varying importance.
This section attempts to provide some guidelines based on
• the properties of heap abstractions,
• the properties of underlying analyses, and
• the properties of programs being analysed.
The properties of heap abstractions are dominated by the properties of summarization
techniques with the properties of heap models playing a relatively minor role. Among the
properties of summarization, we explore the tradeoffs between precision and efficiency on the
one hand and expressiveness and automatability on the other. The properties of analyses
include flow- and context-sensitivity, bottom up vs. top down traversals over call graphs,
partial soundness, and demand driven nature.
These guidelines are admittedly incomplete and somewhat abstract. Because of the very
nature of heap abstractions and a large variety of uses they can be put to, these guidelines
may need deeper examination and may not be applicable directly.
7.1 Properties of Heap Models
We believe that in general,
• client analyses that explore points-to related properties are easier to model as store
based [18, 72], whereas
• analyses that explore alias related properties are easier to model as storeless [9, 18, 72].
This is because in points-to related properties, heap locations and addresses contained in
locations are important. Store based models are more natural in such situations because they
explicate all locations. On the other hand, alias related properties can leave the locations
implicit, which is the case in a storeless model. Metrics like precision and efficiency are
generally decided not by the choice of heap model but by the summarization technique used.
7.2 Properties of Heap Summarization Techniques
In this section, we compare the summarization techniques with respect to efficiency, precision,
expressiveness, and automatability.
7.2.1 Precision vs. Efficiency
In general, if a client analysis requires computing complex heap properties, like shape of the
heap memory, then summarization techniques using variables, generic instrumentation
predicates, and higher-order logics are more precise. On the other hand for computing
simpler heap properties, like finding the pointer expressions that reach a particular heap
location, a client can choose more efficient summarization techniques like those based on
k-limiting and allocation sites.
We describe below other considerations in the precision-efficiency tradeoff for specific
summarization techniques.
• k-limiting. This technique does not yield very precise results for programs that
manipulate heap locations that are k indirections from some pointer variable of the
program as illustrated in Figures 5b and 6b. k-limiting merges the access paths that
are longer than a fixed constant k. Thus the tail of even a non-circular linked list will
be (conservatively) represented as a possibly cyclic data structure. Due to the
summarization of heap locations that are beyond k indirections from pointer variables,
this technique lacks strong update operations on these heap locations. Consequently,
Sagiv et al. [75] observe, “k-limiting approach cannot determine that either list-ness or
circular list-ness is preserved by a program that inserts an element into a list.” However,
k-limiting gives reasonably precise results if the user program being analysed does not
need strong updates.
The efficiency of the analysis is heavily dependent on the value of k; larger values improve
the precision but may slow down the analysis significantly [3]. The analysis may be
extremely expensive because, as observed by Sagiv et al. [75], “the number of possible shape
graphs is doubly exponential in k.” This is because heap locations beyond k indirections
from some pointer variable have to be (conservatively) assumed to be aliased to every
other heap location. Hence, k-limiting is practically feasible only for small values such as
k ≤ 2 [79]. The price to pay is reduced precision, as shown by Chase et al. [15]. In general,
it is difficult for a client analysis to know the best value of k a priori, and the choice should be
guided by empirical observations on representative programs.
• Allocation sites.
This technique may be imprecise when memory allocation is
concentrated within a small number of user written procedures. In such situations,
nodes allocated at the same allocation site but called from different contexts are
merged even though they may have different properties. Figure 18 contains an example
of imprecision using allocation sites. Chase et al. [15] state that “allocation site based
method cannot determine that list-ness is preserved for either the insert program or the
reverse program on a list” because of merging of nodes.
However, regarding efficiency, Sagiv et al. [76] note, “the techniques based on allocation
sites are more efficient than k-limiting summarizations, both from a theoretical
perspective [15] and from an implementation perspective [3].” The size of an allocation
site based graph is bounded by the number of allocation sites in the program.
Therefore, a majority of client analyses are likely to find this technique space efficient on
most practical programs.
• Patterns. Identifying precise repeating patterns is undecidable in the most general case
because a repeated advance into the heap may arise from an arbitrarily long cycle of field
dereferences [60]. Therefore, generally the focus remains on detecting only consecutive
repetitions of the same type of field accesses, which may be imprecise. Also, it seems
difficult for an analysis to determine if an identified repetition will occur an unbounded
number of times or only a bounded number of times. This approach has been found to
be more efficient than TVLA based shape analysis techniques for discovering liveness of
heap data [43].
• Variables. For complex shape graphs, summarization using variables may be more precise
than k-limiting. Chase et al. [15] observe that two nodes need not have similar properties
just because they occur k indirections away from the root variable in an access path. On
the other hand, two nodes which are pointed to by the same set of variables are more likely
to have similar properties. Further, summarization using variables can perform strong
nullification in a larger number of cases; therefore, it may be more precise. However,
there are situations where summarization using variables can also be imprecise: since
it merges nodes not pointed to by any root variable, sometimes nodes are abstracted
imprecisely as illustrated in Figure 5d. Contrast this with the precise summarization of
Figure 5c.
In general this technique has been found to be inefficient. Since each shape graph node is
labelled with a set of root variables in this technique, Sagiv et al. [75] state, “the number
of shape nodes is bounded by 2^|Var|, where Var is the number of root pointer variables
in the program.” They further note, “unfortunately for some pathological programs the
number of shape nodes can actually grow to be this large, although it is unlikely to arise
in practice.”
• Generic instrumentation predicates. Both the precision and efficiency of a client analysis
depend on the chosen predicate. By identifying one or more suitable predicates, a client
analysis can strike a balance between precision and efficiency.
The implementation of generic instrumentation predicates using TVLA [77] has
potentially exponential runtime in the number of predicates. Therefore, it is not
suitable for large programs [10].
• Higher-order logics. These techniques have the capability of computing complex heap
properties. With the use of program annotations in the form of assertions and loop
invariants, they can compute surprisingly detailed heap properties [38]. Unlike TVLA,
they can also produce counter examples for erroneous programs [63]. However, these
techniques are generally used to verify restricted data structures [8], without considering
the full behaviour of the program and have to be made less detailed for large programs [63]
since they are highly inefficient. An analysis needs to use simpler and less precise logics in
order to improve scalability. For example, Distefano et al. [18] use a subset of separation
logic as the domain of their analysis; the domain is less powerful because it does not
allow nesting of ∗ and ∧ operators.
These techniques may be highly inefficient as they include higher-order and undecidable
logics. For example, quantified separation logic is undecidable [11]. For termination,
these techniques require program annotations in the form of assertions and loop
invariants [8, 38, 63]. Consequently, analyses based on higher-order logics cannot be
made fully automatic. Since the effort of annotating the program can be significant,
these techniques can work efficiently only on small programs [38]. Therefore, these are
mostly used for teaching purposes [38] in order to encourage formal reasoning of small
programs. Again, since they are inefficient, these are considered useful to verify only
safety critical applications [63] where the effort of annotating the program is justified
by the complex properties that these techniques can derive. However, as compared to
TVLA, these techniques are sometimes more scalable due to the use of loop invariants;
empirical measurements show high speedup in these techniques where the use of loop
invariants is more efficient than a fixpoint computation required by TVLA [63]. An
advantage of separation logic is its efficiency due to the following: once the program has been
analysed for a part of the memory, the result can directly be used to derive properties for the
extended memory [82].
7.2.2 Expressiveness vs. Automatability
Here we discuss the degree of expressive power and automation offered by heap summarization
techniques using predicates (for example, k-limiting, allocation sites, variables, patterns, and
other user-defined predicates) and those using higher-order logics.
• Predicates. Parameterised frameworks like TVLA summarize heap data based on any
desired user-defined predicate. Therefore, they achieve good expressiveness as per the
user’s requirements. However, the predefined predicates (for example, k-limiting,
allocation sites, variables, pattern) lack this expressiveness.
Automation of summarization techniques using user-defined predicates in TVLA is not
difficult since TVLA allows only simple predicates. Also, several automated tools are
already available for predefined predicates. For example, LFCPA [42] performs automatic
heap analysis using allocation site based summarization.
• Higher-order logics. Unlike summarizations based on predicates, summarizations based
on higher-order logics do not need to work with a predefined user predicate; with the
use of heap specialized operators and rules, the latter can build upon basic predicates
to be able to compute complex properties of the heap. Depending on the underlying
logic, a client may find these summarization techniques to be more powerful and easier
to express.
However, summarization techniques using higher-order logics are not fully automated and
need user intervention for the inference of non-trivial properties, especially if the technique is
based on undecidable logics.
7.3 Properties of Underlying Heap Analysis
The choice of heap summarization technique is sometimes dependent on the design dimensions
of the underlying analysis that the client uses. We describe some such dependencies.
• Flow-sensitive analysis. The precision benefits of a flow-sensitive analysis can be
increased by
• using TVLA, whose 3-valued logic enables a more precise meet operation by
distinguishing between the may (i.e., along some paths), must (i.e., along all paths),
and cannot (i.e., along no path) nature of the information discovered;
• using techniques that aid strong updates: summarization techniques based on
variables [75, 77] and k-limiting [16], and the materialization [75, 77] of summary
nodes.
• Context-sensitive analysis. A context-sensitive analysis examines a given procedure
separately for different calling contexts. If such a procedure contains an allocation
statement, the allocation site based summarization should be able to distinguish
between the nodes representing different calling contexts. This can be achieved by heap
cloning [91]. In the absence of replication of allocation site based nodes for different
calling contexts, the precision of the analysis reduces significantly [65].
• Bottom-up analysis. A bottom-up interprocedural analysis traverses the call graph
bottom up by processing callees before callers. It constructs a summary of the callee
procedures that may access data structures whose allocation is done in the callers.
Thus the allocation site information may not be available in a callee’s heap summary.
Therefore, allocation site based summarization cannot be used with bottom-up
analyses; instead summarization using patterns has been used for computing procedure
summaries [22, 58, 60].
• Partially sound analysis and demand driven analysis. Soundness of an analysis requires
covering behaviours of all (possibly an infinite number of) execution paths. In many
situations such as debugging, useful information may be obtained by covering the
behaviour of only some execution paths. Such partially sound analyses9 are often
demand driven. The other flavour of demand driven analyses (such as assertion
verification) may need to cover all execution paths reaching a particular program point
but not all program points. In either case, these analyses examine a smaller part of the
input program and hence may be able to afford expensive summarization techniques.
Here, k-limiting and higher-order-logic based summarization techniques permit the
client to choose a larger value of k and a more complex logic, respectively, thereby
improving precision. Likewise, parametric frameworks like TVLA can also be used with
more complex predicates. Observe that allocation site and variable based techniques
do not have any inherent parameter through which the analysis may be improved.

9 Not to be confused with “soundy” analyses, which refer to partially unsound analyses that ignore some
well-identified hard-to-analyse constructs [56].
7.4 Properties of Programs
The suitability of a technique depends on various properties of the input program. These are
discussed below.
• k-limiting. If the input program contains a small number of indirections from pointer
variables, k-limiting summarization based on a suitable choice of empirically observed k
would give reasonable results.
• Allocation sites. For input programs where allocations are made from sites that are
distributed over the program, rather than being made from a small set of procedures,
summarization using allocation sites will be able to preserve heap properties efficiently.
• Patterns. For input programs containing simple repeating patterns, summarization
techniques based on patterns can produce useful summaries.
• Variables. In our opinion, summarizations based on variables are precise for generally all
types of programs using the heap; however, they are usually not as efficient as techniques
using k-limiting, allocation sites, and patterns.
• Higher-order logics. Techniques based on logics are inefficient and need manual
intervention. Therefore, their usefulness may be limited to small input programs.
8 Heap Analyses and Their Applications
In this section, we categorize applications of heap analyses and list common heap analyses in
terms of the properties that they discover.
8.1 Applications of Heap Analyses
We present the applications of heap analyses under the following three broad categories:
– Program understanding. Software engineering techniques based on heap analysis are used to
maintain or reverse engineer programs for understanding and debugging them. Heap related
information like shape, size, reachability, cyclicity, and others are collected for this purpose.
Program slicing of heap manipulating programs [44] can help in program understanding by
extracting the relevant part of a program.
– Verification and validation. Heap analysis is used for detecting memory errors at compile
time (for example, dereferencing null pointers, dangling pointers, memory leaks, freeing
a block of memory more than once, and premature deallocation) [25, 36, 57, 80]. Sorting
programs that use linked lists have been verified using heap analyses [53].
– Optimization. Modern compilers use heap analysis results to produce code that maximizes
performance. An optimization of heap manipulating programs is the garbage collection of
accessible yet unused objects [2, 43] which are otherwise beyond the scope of garbage
collection that depends purely on runtime information. Transformation of sequential heap
manipulating programs for better parallel execution involves heap analysis [5]. Heap
analysis also helps in performing data prefetching based on future uses and updates on
heap data structures in the program [25]. Data locality of dynamically allocated data has
been identified and exploited using heap analysis by Castillo et al. [12].
8.2 Heap Analyses
A compile time program analysis that needs to discover and verify properties of heap data
could perform one or more of the following analyses.
• Shape analysis [24, 77, 90], also called storage analysis, discovers invariants that describe
the data structures in a program and identifies alias relationships between paths in the
heap. Its applications include program understanding and debugging [20], compile time
detection of memory and logical errors, establishing shape properties, code optimizations,
and others.
• Liveness analysis of heap data statically identifies last uses of objects in a program to
discover reachable but unused heap locations to aid garbage collection performed at
runtime [2, 37, 43, 80].
• Escape analysis is a method for determining whether an object is visible outside a given
procedure. It is used for (a) scalar replacement of fields, (b) removal of synchronization,
and (c) stack allocation of heap objects [45].
• Side-effect analysis finds the heap locations that are used (read from or written to) by
a program statement. This analysis can optimize code by eliminating redundant loads
and stores [61].
• Def-use analysis finds point pairs of statements that initialize a heap location and then
read from that location. This analysis is used to check for the uses of undefined variables
and unused variables [61].
• Heap reachability analysis finds whether a heap object can be reached from a pointer
variable via field dereferences for detecting memory leaks at compile time [6].
• Call structure analysis disambiguates virtual calls in object-oriented languages and
function pointers. The presence of heap data makes this disambiguation non-trivial. Instead of
relying on a call graph constructed with a relatively less precise points-to analysis, the
program call graph can be constructed on-the-fly with pointer analysis [66, 85, 89].
Receiver objects of a method call can also be disambiguated in order to distinguish
between calling contexts using object-sensitivity [61, 84] and type propagation
analysis [87].
9 Engineering Approximations for Efficiency
Given the vital importance of pointer analysis and the inherent difficulty of performing precise
pointer analysis for practical programs [13, 35, 49, 67], a large number of investigations involve
a significant amount of engineering approximations [41]. A detailed description of these is
beyond the scope of this paper because its focus is on building the basic concepts of various
modeling and summarization techniques for heap. Here we merely list some notable efforts in
engineering approximations used in heap analysis.
Since heap data is huge at compile time, Calcagno et al. [10] perform
compositional/modularized analysis, i.e., analysis using function summaries. Heap data can
also be restricted by propagating only the part of the heap that is sufficient for a procedure
[10, 18, 26, 72]. The amount of heap data collected can be controlled by a demand-driven
analysis using client intervention [27, 85]. Rountev et al. [73] restrict the scope of the
program where high precision is required. For example, they determine program fragments
where accuracy is vital (like regions of code or pointer variables) and find ways to make the
results precise only for those critical regions. They have also performed safe analysis for
incomplete programs. Limiting
the analysis to live and defined variables of the program has also helped in achieving
scalability without any loss of precision [1, 16, 42]. An inexpensive flow-insensitive heap
analysis over an SSA form [21] of a program seeks a middle ground between a flow-sensitive
and a flow-insensitive heap analysis. Incremental computations [88] and efficient encoding of
information by using BDDs [89] are amongst other engineering techniques employed for
efficient heap analysis.
Given a large body of work on building efficient approximations, Michael Hind observes that
although the problem of pointer analysis is undecidable, “fortunately many approximations
exist” and goes on to note that “unfortunately too many approximations exist” [32]. We
view this trend as unwelcome because a large fraction of the pointer analysis community seems to
believe that compromising on precision is necessary for scalability and efficiency. Amer Diwan
adds, “It is easy to make pointer analysis that is very fast and scales to large programs. But
are the results worth anything?” [32].
In our opinion, a more desirable approach is to begin with a careful and precise modeling
of the desired heap properties, even if it is not computable. Then the analysis can be
gradually refined into a computable version, which can be further refined to make it scalable
and efficient, and hence practically viable. Tom Reps notes that “There are some interesting
precision/efficiency trade-offs: for instance, it can be the case that a more precise pointer
analysis runs more quickly than a less precise one” [32]. Various implementations [42, 54, 84]
show that this top-down approach does not hinder efficiency. In fact, increased precision in
pointer information not only causes a subsequent (dependent) analysis to produce more
precise results, it also causes the subsequent analysis to run faster [81].
10 Related Surveys
We list below some investigations that survey heap abstractions, either as the main goal or as
one of the important subgoals of the paper.
Hind [32], Ryder [74], and Smaragdakis and Balatsouras [83] present a theoretical discussion
on some selective pointer analysis metrics like efficiency, precision, client requirements, demand
driven approaches, handling of incomplete programs, and others. They also discuss some chosen
dimensions that influence the precision of heap analyses like flow-sensitivity, context-sensitivity,
field-sensitivity, heap modeling, and others. Smaragdakis and Balatsouras [83] present some of
these aspects in the form of a tutorial. Hind [32] provides an excellent compilation of the
literature on pointer analysis, presented without describing the algorithms.
Sridharan et al. [85] present a high-level survey of alias analyses that they have found useful
from their industrial experiences. Hind and Pioli [33] give an empirical comparison of precision
and efficiency of five pointer analysis algorithms. Ghiya [23] provides a collection of literature
on stack and heap pointer analyses and highlights their key features. Sagiv et al. [78] and
Nielson et al. [64] have a detailed chapter on shape analysis and abstract interpretation.
There are short sections on literature surveys [14, 71], which categorize a variety of heap
analyses into storeless and store based models. Chakraborty [14] points out that heap models
cannot always be partitioned into storeless and store based only; some works use a hybrid
model.
We have not come across a comprehensive survey which seeks a unifying theme among a
plethora of heap abstractions.
11 Conclusions
A simplistic compile time view of heap memory consists of an unbounded number of
unnamed locations relating to each other in a seemingly arbitrary manner. On the theoretical
side, this offers deep intellectual challenges for building suitable abstractions of heap for more
sophisticated compile time views of the heap memory. On the practical side, the quality of
the result of a heap analysis is largely decided by the heap abstraction used. It is not
surprising, therefore, that heap abstraction is a fundamental and vastly studied component of
heap analysis. What is surprising, however, is that a quest of a unifying theme in heap
abstractions has not received adequate attention which, in our opinion, it deserves.
This paper is an attempt to fill this void by separating the heap model as a representation
of heap memory, from a summarization technique used for bounding it. This separation has
allowed us to explore and compare a comprehensive list of algorithms used in the literature,
making it accessible to a large community of researchers. We observe that the heap models
can be classified as storeless, store based, and hybrid. The summarization techniques use
k-limiting, allocation sites, patterns, variables, other generic instrumentation predicates, and
higher-order logics.
We have also studied the design choices in heap abstractions by comparing and
contrasting various techniques used in literature with respect to client requirements like
efficiency, precision, expressiveness, automatability, dimensions of the underlying analysis,
and user program properties. We hope that these comparisons can be helpful for a client to
decide which abstraction to use for designing a heap analysis. It is also expected to pave the
way for creating new abstractions by mixing and matching models and summarization techniques.
We observe in passing that, as program analysts, we still face the challenge of creating
summarizations that are efficient, scale to large programs, and yield results that are precise
enough to be practically useful.
Acknowledgements
An invigorating discussion in the Dagstuhl Seminar on Pointer Analysis [55] sowed the seeds
of this survey paper. We would like to thank Amitabha Sanyal, Supratik Chakraborty, and
Alan Mycroft for their comments on this paper as also for enlightening discussions related to
heap analysis from time to time. Anders Møller helped us in improving the description of
Pointer Assertion Logic Engine. Rohan Padhye, Alefiya Lightwala, and Prakash Agrawal gave
valuable feedback on the paper, helped in rewording some text, and pointed out some errors
in the examples. We would also like to thank the anonymous reviewers for their rigorous and
extensive reviews and thought-provoking questions and suggestions.
Vini Kanvar is partially supported by TCS Fellowship.
References
[1] Gilad Arnold, Roman Manevich, Mooly Sagiv, and Ran Shaham. Combining shape
analyses by intersecting abstractions. In Proceedings of the 7th International Conference
on Verification, Model Checking, and Abstract Interpretation, VMCAI’06, pages 33–48.
Springer-Verlag, 2006.
[2] Rahul Asati, Amitabha Sanyal, Amey Karkare, and Alan Mycroft. Liveness-based garbage
collection. In Proceedings of the 23rd International Conference on Compiler Construction,
CC’14. Springer-Verlag, 2014.
[3] Uwe Aßmann and Markus Weinhardt. Interprocedural heap analysis for parallelizing
imperative programs. In Proceedings of Programming Models for Massively Parallel
Computers, pages 74–82. IEEE Computer Society, September 1993.
[4] Gogul Balakrishnan and Thomas Reps. Recency-abstraction for heap-allocated storage.
In Proceedings of the 13th International Conference on Static Analysis, SAS’06, pages
221–239. Springer-Verlag, 2006.
[5] Barnali Basak, Sandeep Dasgupta, and Amey Karkare. Heap dependence analysis for
sequential programs. In PARCO, pages 99–106, 2011.
[6] Sam Blackshear, Bor-Yuh Evan Chang, and Manu Sridharan. Thresher: Precise
refutations for heap reachability. In Proceedings of the 34th ACM SIGPLAN Conference
on Programming Language Design and Implementation, PLDI ’13, pages 275–286. ACM,
2013.
[7] Ahmed Bouajjani, Marius Bozga, Peter Habermehl, Radu Iosif, Pierre Moro, and Tomáš
Vojnar. Programs with lists are counter automata. In Proceedings of the 18th International
Conference on Computer Aided Verification, CAV’06, pages 517–531. Springer-Verlag,
2006.
[8] Marius Bozga, Radu Iosif, and Yassine Lakhnech. On logics of aliasing. In SAS, pages
344–360, 2004.
[9] Marius Bozga, Radu Iosif, and Yassine Laknech. Storeless semantics and alias logic.
SIGPLAN Not., 38(10):55–65, June 2003.
[10] Cristiano Calcagno, Dino Distefano, Peter W. O’Hearn, and Hongseok Yang.
Compositional shape analysis by means of bi-abduction. J. ACM, 58(6):26:1–26:66,
December 2011.
[11] Cristiano Calcagno, Hongseok Yang, and Peter W. O’Hearn. Computability and
complexity results for a spatial assertion language for data structures. In Proceedings
of the 21st Conference on Foundations of Software Technology and Theoretical Computer
Science, FST TCS ’01, pages 108–119, London, UK, UK, 2001. Springer-Verlag.
[12] R. Castillo, A. Tineo, F. Corbera, A. Navarro, R. Asenjo, and E. L. Zapata. Towards a
versatile pointer analysis framework. In Proceedings of the 12th International Conference
on Parallel Processing, Euro-Par’06, pages 323–333. Springer-Verlag, 2006.
[13] Venkatesan T. Chakaravarthy. New results on the computability and complexity of points–
to analysis. In Proceedings of the 30th ACM SIGPLAN-SIGACT Symposium on Principles
of Programming Languages, POPL ’03, pages 115–125. ACM, 2003.
[14] Supratik Chakraborty. Reasoning about Heap Manipulating Programs using Automata
Techniques. In Deepak D’Souza and Priti Shankar, editors, Modern Applications of
Automata Theory. IISc-World Scientific Review Volume, May 2012.
[15] David R. Chase, Mark Wegman, and F. Kenneth Zadeck. Analysis of pointers and
structures. In Proceedings of the ACM SIGPLAN 1990 Conference on Programming
Language Design and Implementation, PLDI ’90, pages 296–310. ACM, 1990.
[16] Arnab De and Deepak D’Souza. Scalable flow-sensitive pointer analysis for java with
strong updates. In Proceedings of the 26th European Conference on Object-Oriented
Programming, ECOOP’12, pages 665–687. Springer-Verlag, 2012.
[17] Alain Deutsch. Interprocedural may-alias analysis for pointers: Beyond k-limiting. In
Proceedings of the ACM SIGPLAN 1994 Conference on Programming Language Design
and Implementation, PLDI ’94, pages 230–241. ACM, 1994.
[18] Dino Distefano, Peter W. O’Hearn, and Hongseok Yang. A local shape analysis based
on separation logic. In Proceedings of the 12th International Conference on Tools and
Algorithms for the Construction and Analysis of Systems, TACAS’06, pages 287–302,
Berlin, Heidelberg, 2006. Springer-Verlag.
[19] Amer Diwan, J. Eliot B. Moss, and Kathryn S. McKinley. Simple and effective analysis of
statically-typed object-oriented programs. In Proceedings of the 11th ACM SIGPLAN
Conference on Object-oriented Programming, Systems, Languages, and Applications,
OOPSLA ’96, pages 292–305, New York, NY, USA, 1996. ACM.
[20] Nurit Dor, Michael Rodeh, and Mooly Sagiv. Detecting memory errors via static pointer
analysis (preliminary experience). In Proceedings of the 1998 ACM SIGPLAN-SIGSOFT
Workshop on Program Analysis for Software Tools and Engineering, PASTE ’98, pages
27–34. ACM, 1998.
[21] Stephen J. Fink, Kathleen Knobe, and Vivek Sarkar. Unified analysis of array and object
references in strongly typed languages. In Proceedings of the 7th International Symposium
on Static Analysis, SAS ’00, pages 155–174, London, UK, UK, 2000. Springer-Verlag.
[22] Manuel Geffken, Hannes Saffrich, and Peter Thiemann. Precise interprocedural side-effect analysis. In Theoretical Aspects of Computing - ICTAC 2014 - 11th International
Colloquium, Bucharest, Romania, September 17-19, 2014. Proceedings, pages 188–205,
2014.
[23] Rakesh Ghiya. Putting Pointer Analysis to Work. PhD thesis, McGill University,
Montreal, 1998.
[24] Rakesh Ghiya and Laurie J. Hendren. Is it a tree, a DAG, or a cyclic graph? A shape
analysis for heap-directed pointers in C. In Proceedings of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’96, pages 1–15.
ACM, 1996.
[25] Rakesh Ghiya and Laurie J. Hendren. Putting pointer analysis to work. In Proceedings of
the 25th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,
POPL ’98, pages 121–133. ACM, 1998.
[26] Alexey Gotsman, Josh Berdine, and Byron Cook. Interprocedural shape analysis with
separated heap abstractions. In Proceedings of the 13th International Conference on Static
Analysis, SAS’06, pages 240–260. Springer-Verlag, 2006.
[27] Samuel Z. Guyer and Calvin Lin. Client-driven pointer analysis. In Proceedings of the
10th International Conference on Static Analysis, SAS’03, pages 214–236. Springer-Verlag,
2003.
[28] L. Hendren, C. Donawa, M. Emami, G. Gao, Justiani, and B. Sridharan. Designing
the McCAT compiler based on a family of structured intermediate representations. In
Utpal Banerjee, David Gelernter, Alex Nicolau, and David Padua, editors, Languages and
Compilers for Parallel Computing, volume 757 of Lecture Notes in Computer Science,
pages 406–420. Springer Berlin Heidelberg, 1993.
[29] L. J. Hendren and A. Nicolau. Parallelizing programs with recursive data structures.
IEEE Trans. Parallel Distrib. Syst., 1(1):35–47, January 1990.
[30] Laurie J. Hendren. Parallelizing Programs with Recursive Data Structures. PhD thesis,
Cornell University, January 1990.
[31] Laurie J. Hendren and Alexandru Nicolau. Interference analysis tools for parallelizing
programs with recursive data structures. In Proceedings of the 3rd International
Conference on Supercomputing, ICS ’89, pages 205–214. ACM, 1989.
[32] Michael Hind. Pointer analysis: Haven’t we solved this problem yet? In Proceedings of
the 2001 ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools
and Engineering, PASTE ’01, pages 54–61. ACM, 2001.
[33] Michael Hind and Anthony Pioli. Which pointer analysis should I use? SIGSOFT Softw.
Eng. Notes, 25(5):113–123, August 2000.
[34] C. A. R. Hoare. An axiomatic basis for computer programming. Commun. ACM, 12(10):576–580, October 1969.
[35] Susan Horwitz. Precise flow-insensitive may-alias analysis is NP-hard. ACM Trans. Program. Lang. Syst., 19(1):1–6, January 1997.
[36] David Hovemeyer, Jaime Spacco, and William Pugh. Evaluating and tuning a static
analysis to find null pointer bugs. In Proceedings of the 6th ACM SIGPLAN-SIGSOFT
Workshop on Program Analysis for Software Tools and Engineering, PASTE ’05, pages
13–19. ACM, 2005.
[37] Katsuro Inoue, Hiroyuki Seki, and Hikaru Yagi. Analysis of functional programs to detect
run-time garbage cells. ACM Trans. Program. Lang. Syst., 10(4):555–578, October 1988.
[38] Jakob L. Jensen, Michael E. Jørgensen, Michael I. Schwartzbach, and Nils Klarlund.
Automatic verification of pointer programs using monadic second-order logic. In
Proceedings of the ACM SIGPLAN 1997 Conference on Programming Language Design
and Implementation, PLDI ’97, pages 226–234. ACM, 1997.
[39] Neil D. Jones and Steven S. Muchnick. Flow analysis and optimization of lisp-like
structures. In Proceedings of the 6th ACM SIGACT-SIGPLAN Symposium on Principles
of Programming Languages, POPL ’79, pages 244–256. ACM, 1979.
[40] H. B. M. Jonkers. Abstract storage structures. In De Bakker and Van Vliet, editors,
Algorithmic Languages, pages 321–343. IFIP, 1981.
[41] Uday P. Khedker. The approximations vs. abstractions dilemma in pointer analysis. In
Ondřej Lhoták, Yannis Smaragdakis, and Manu Sridharan, editors, Pointer Analysis,
volume 3 of Dagstuhl Reports, pages 91–113. Schloss Dagstuhl – Leibniz-Zentrum für
Informatik, Dagstuhl Publishing, April 2013.
[42] Uday P. Khedker, Alan Mycroft, and Prashant Singh Rawat. Liveness-based pointer
analysis. In Proceedings of the 19th International Conference on Static Analysis, SAS’12,
pages 265–282. Springer-Verlag, 2012.
[43] Uday P. Khedker, Amitabha Sanyal, and Amey Karkare. Heap reference analysis using
access graphs. ACM Trans. Program. Lang. Syst., 30(1), November 2007.
[44] Raghavan Komondoor. Precise slicing in imperative programs via term-rewriting and
abstract interpretation. In SAS, pages 259–282, 2013.
[45] Thomas Kotzmann and Hanspeter Mössenböck. Escape analysis in the context of dynamic
compilation and deoptimization. In Proceedings of the 1st ACM/USENIX International
Conference on Virtual Execution Environments, VEE ’05, pages 111–120. ACM, 2005.
[46] Viktor Kuncak, Patrick Lam, Karen Zee, and Martin C. Rinard. Modular pluggable
analyses for data structure consistency. IEEE Trans. Softw. Eng., 32(12):988–1005,
December 2006.
[47] Patrick Lam, Viktor Kuncak, and Martin Rinard. Generalized typestate checking
for data structure consistency. In Proceedings of the 6th International Conference on
Verification, Model Checking, and Abstract Interpretation, VMCAI’05, pages 430–447,
Berlin, Heidelberg, 2005. Springer-Verlag.
[48] Patrick Lam, Viktor Kuncak, and Martin Rinard. Hob: A tool for verifying data
structure consistency. In Proceedings of the 14th International Conference on Compiler
Construction, CC’05, pages 237–241, Berlin, Heidelberg, 2005. Springer-Verlag.
[49] William Landi and Barbara G. Ryder. A safe approximate algorithm for interprocedural
aliasing. In Proceedings of the ACM SIGPLAN 1992 Conference on Programming
Language Design and Implementation, PLDI ’92, pages 235–248. ACM, 1992.
[50] J. R. Larus and P. N. Hilfinger. Detecting conflicts between structure accesses. In
Proceedings of the ACM SIGPLAN 1988 Conference on Programming Language Design
and Implementation, PLDI ’88, pages 24–31. ACM, 1988.
[51] Chris Lattner and Vikram Adve. Data structure analysis: A fast and scalable
context-sensitive heap analysis. Technical report, University of Illinois at Urbana-Champaign,
2003.
[52] Chris Lattner, Andrew Lenharth, and Vikram Adve. Making context-sensitive points-to
analysis with heap cloning practical for the real world. In Proceedings of the 2007 ACM
SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’07,
pages 278–289, New York, NY, USA, 2007. ACM.
[53] Tal Lev-Ami, Thomas Reps, Mooly Sagiv, and Reinhard Wilhelm. Putting static analysis
to work for verification: A case study. In Proceedings of the 2000 ACM SIGSOFT
International Symposium on Software Testing and Analysis, ISSTA ’00, pages 26–38.
ACM, 2000.
[54] Ondřej Lhoták and Kwok-Chiang Andrew Chung. Points-to analysis with efficient strong
updates. In Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages, POPL ’11, pages 3–16. ACM, 2011.
[55] Ondřej Lhoták, Yannis Smaragdakis, and Manu Sridharan. Pointer Analysis (Dagstuhl
Seminar 13162). Dagstuhl Reports, 3(4):91–113, 2013.
[56] Benjamin Livshits, Manu Sridharan, Yannis Smaragdakis, Ondřej Lhoták, J. Nelson
Amaral, Bor-Yuh Evan Chang, Samuel Z. Guyer, Uday P. Khedker, Anders Møller, and
Dimitrios Vardoulakis. In defense of soundiness: A manifesto. Commun. ACM, 58(2):44–
46, January 2015.
[57] Ravichandhran Madhavan and Raghavan Komondoor. Null dereference verification
via over-approximated weakest pre-conditions analysis. In Proceedings of the 2011
ACM International Conference on Object Oriented Programming Systems Languages and
Applications, OOPSLA ’11, pages 1033–1052. ACM, 2011.
[58] Ravichandhran Madhavan, Ganesan Ramalingam, and Kapil Vaswani. Purity analysis: An
abstract interpretation formulation. In Proceedings of the 18th International Conference
on Static Analysis, SAS’11, pages 7–24, Berlin, Heidelberg, 2011. Springer-Verlag.
[59] Mark Marron, Cesar Sanchez, Zhendong Su, and Manuel Fahndrich. Abstracting runtime
heaps for program understanding. IEEE Trans. Softw. Eng., 39(6):774–786, June 2013.
[60] Ivan Matosevic and Tarek S. Abdelrahman. Efficient bottom-up heap analysis for symbolic
path-based data access summaries. In Proceedings of the Tenth International Symposium
on Code Generation and Optimization, CGO ’12, pages 252–263. ACM, 2012.
[61] Ana Milanova, Atanas Rountev, and Barbara G. Ryder. Parameterized object sensitivity
for points-to and side-effect analyses for Java. SIGSOFT Softw. Eng. Notes, 27(4):1–11,
July 2002.
[62] Anders Møller. Mona project home page, 2014.
[63] Anders Møller and Michael I. Schwartzbach. The pointer assertion logic engine. In
Proceedings of the ACM SIGPLAN 2001 Conference on Programming Language Design
and Implementation, PLDI ’01, pages 221–231. ACM, 2001.
[64] Flemming Nielson, Hanne R. Nielson, and Chris Hankin. Principles of Program Analysis.
Springer-Verlag New York, Inc., 1999.
[65] Erik M. Nystrom, Hong-Seok Kim, and Wen-mei W. Hwu. Importance of heap
specialization in pointer analysis. In Proceedings of the 5th ACM SIGPLAN-SIGSOFT
Workshop on Program Analysis for Software Tools and Engineering, PASTE ’04, pages
43–48, New York, NY, USA, 2004. ACM.
[66] Rohan Padhye and Uday P. Khedker. Interprocedural data flow analysis in soot using
value contexts. In Proceedings of the 2Nd ACM SIGPLAN International Workshop on
State Of the Art in Java Program Analysis, SOAP ’13, pages 31–36. ACM, 2013.
[67] G. Ramalingam. The undecidability of aliasing. ACM Trans. Program. Lang. Syst.,
16(5):1467–1471, September 1994.
[68] Thomas Reps. Program analysis via graph reachability. In Proceedings of the 1997
International Symposium on Logic Programming, ILPS ’97, pages 5–19. MIT Press, 1997.
[69] John C. Reynolds. Separation logic: A logic for shared mutable data structures. In
Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science, LICS
’02, pages 55–74. IEEE Computer Society, 2002.
[70] H. G. Rice. Classes of recursively enumerable sets and their decision problems.
Transactions of the American Mathematical Society, 74(2):358–366, 1953.
[71] Noam Rinetzky. Interprocedural and Modular Local Heap Shape Analysis. PhD thesis, Tel
Aviv University, June 2008.
[72] Noam Rinetzky, Jörg Bauer, Thomas Reps, Mooly Sagiv, and Reinhard Wilhelm. A
semantics for procedure local heaps and its abstractions. In Proceedings of the 32Nd
ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL
’05, pages 296–309. ACM, 2005.
[73] Atanas Rountev, Barbara G. Ryder, and William Landi. Data-flow analysis of program
fragments. In Proceedings of the 7th European Software Engineering Conference Held
Jointly with the 7th ACM SIGSOFT International Symposium on Foundations of Software
Engineering, ESEC/FSE-7, pages 235–252. Springer-Verlag, 1999.
[74] Barbara G. Ryder. Dimensions of precision in reference analysis of object-oriented
programming languages. In Proceedings of the 12th International Conference on Compiler
Construction, CC’03, pages 126–137, Berlin, Heidelberg, 2003. Springer-Verlag.
[75] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Solving shape-analysis problems in
languages with destructive updating. In Proceedings of the 23rd ACM SIGPLAN-SIGACT
Symposium on Principles of Programming Languages, POPL ’96, pages 16–31. ACM, 1996.
[76] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Solving shape-analysis problems
in languages with destructive updating. ACM Trans. Program. Lang. Syst., 20(1):1–50,
January 1998.
[77] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Parametric shape analysis via
3-valued logic. In Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages, POPL ’99, pages 105–118. ACM, 1999.
[78] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Shape analysis and applications. In
Y. N. Srikant and P. Shankar, editors, Compiler Design Handbook: Optimizations and
Machine Code Generation, chapter 12. CRC Press, Inc, 2007.
[79] Damien Sereni. Termination analysis of higher-order functional programs. PhD thesis,
Oxford University, 2006.
[80] Ran Shaham, Eran Yahav, Elliot K. Kolodner, and Mooly Sagiv. Establishing local
temporal heap safety properties with applications to compile-time memory management.
In Proceedings of the 10th International Conference on Static Analysis, SAS’03, pages
483–503. Springer-Verlag, 2003.
[81] Marc Shapiro and Susan Horwitz. The effects of the precision of pointer analysis. In
Proceedings of the 4th International Symposium on Static Analysis, SAS ’97, pages 16–34.
Springer-Verlag, 1997.
[82] Elodie-Jane Sims. Pointer analysis and separation logic. PhD thesis, Kansas State
University, 2007.
[83] Yannis Smaragdakis and George Balatsouras. Pointer analysis. Foundations and Trends
in Programming Languages, 2(1), 2015.
[84] Yannis Smaragdakis, Martin Bravenboer, and Ondřej Lhoták. Pick your contexts well:
Understanding object-sensitivity. In Proceedings of the 38th Annual ACM SIGPLAN-SIGACT
Symposium on Principles of Programming Languages, POPL ’11, pages 17–30. ACM, 2011.
[85] Manu Sridharan, Satish Chandra, Julian Dolby, Stephen J. Fink, and Eran Yahav.
Alias analysis for object-oriented programs. In Dave Clarke, James Noble, and Tobias
Wrigstad, editors, Aliasing in Object-Oriented Programming, pages 196–232. Springer-Verlag, 2013.
[86] Bjarne Steensgaard. Points-to analysis in almost linear time. In Proceedings of the 23rd
ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL
’96, pages 32–41, New York, NY, USA, 1996. ACM.
[87] Vijay Sundaresan, Laurie Hendren, Chrislain Razafimahefa, Raja Vallée-Rai, Patrick Lam,
Etienne Gagnon, and Charles Godin. Practical virtual method call resolution for Java.
In Proceedings of the 15th ACM SIGPLAN Conference on Object-oriented Programming,
Systems, Languages, and Applications, OOPSLA ’00, pages 264–280, New York, NY, USA,
2000. ACM.
[88] Frédéric Vivien and Martin Rinard. Incrementalized pointer and escape analysis. In
Proceedings of the ACM SIGPLAN 2001 Conference on Programming Language Design
and Implementation, PLDI ’01, pages 35–46. ACM, 2001.
[89] John Whaley and Monica S. Lam. Cloning-based context-sensitive pointer alias analysis
using binary decision diagrams. In Proceedings of the ACM SIGPLAN 2004 Conference
on Programming Language Design and Implementation, PLDI ’04, pages 131–144. ACM,
2004.
[90] Reinhard Wilhelm, Shmuel Sagiv, and Thomas W. Reps. Shape analysis. In Proceedings of
the 9th International Conference on Compiler Construction, CC ’00, pages 1–17. Springer-Verlag, 2000.
[91] Guoqing Xu and Atanas Rountev. Merging equivalent contexts for scalable heap-cloning-based context-sensitive points-to analysis. In Proceedings of the 2008 International
Symposium on Software Testing and Analysis, ISSTA ’08, pages 225–236, New York, NY,
USA, 2008. ACM.
A Heap and Stack Memory in C/C++ and Java
In this section, we briefly compare the programming constructs related to pointer variables in
C/C++ and Java programs.
Referencing variables on stack and heap. In C/C++, pointer variables may live on both the
stack and the heap. Java does not allow stack-directed pointers. C/C++ allows pointers to
variables on the stack through the address-of operator &; Java has no such operator. Both
C/C++ and Java allow pointers/references to objects on the heap, via the malloc function (in
C/C++) and the new operator (in C++ and Java).
Dereferencing pointers. Every variable on the stack, whether it contains a reference or a
value, has a name, because all objects allocated on the stack have compile-time names
associated with them. Heap-allocated data items possess no names and are anonymous; the
only way to access heap items is through pointer dereferences. C/C++ has explicit pointers.
Pointer variables in C/C++ are dereferenced using the star operator (∗), for example,
y := ∗x. Fields of a pointer to an aggregate data type (struct, union, or class) can be
accessed using the star operator (∗) together with the dot operator (.), for example, (∗x).f,
or using the arrow operator (->), for example, x->f; both are equivalent dereferences of the
member field f of pointer variable x. In Java, fields are dereferenced using the dot operator
(.), for example, x.f.
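The equivalence of the two C/C++ field-dereference forms can be sketched as follows (Pair
and f are illustrative names of our choosing):

    struct Pair { int f; };

    void demo(Pair *x) {
        int u = (*x).f;   // star operator followed by dot operator
        int v = x->f;     // arrow operator: the same dereference of member field f
        // In Java, with a reference x to a Pair object, one would write: int w = x.f;
    }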
[Figure omitted: panels “C/C++ Stack”, “C/C++ Heap”, “Java Stack”, and “Java Heap”, with
pointer edges among the C/C++ stack variables w, x, y, z and the Java stack variables A, B, C, D.]
Figure 20. C/C++ memory framework modeled as a Java memory framework.
Analysis of scalar and aggregate pointers. In Java, a pointer variable cannot point to an
object of a scalar data type such as an integer or a floating point number; pointer variables
point only to objects of aggregate data types such as structures and classes. C/C++, however,
allows pointers to both scalars and aggregates. In C/C++, pointer analysis of scalar variables
is comparatively straightforward (due to type restrictions) compared to the pointer analysis
of aggregate variables. For example, the program statement x := ∗x is syntactically invalid
for a scalar pointer x, because the pointer cannot advance to a location of a different data
type. On the other hand, an aggregate pointer can be advanced subject to its type
compatibility, which makes it difficult to infer properties of such pointers. For example,
the program statement x := x->f executed in a loop allows the aggregate pointer x to point to
any location reachable from x through field f. Further, cycles in recursive data structures
cause an infinite number of access paths that refer to the same memory location. This makes
the analysis of aggregate pointers more challenging than that of scalar pointers, as sketched
below.
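A small C++ sketch of the aggregate case (Node, f, and advance are illustrative names; the
loop body is the x := x->f pattern discussed above):

    struct Node { Node *f; };

    // After k iterations, x may point to any node reachable from the original
    // x through field f; if the structure is cyclic, infinitely many access
    // paths x->f->...->f denote the same memory location.
    Node *advance(Node *x, int k) {
        for (int i = 0; i < k && x != nullptr; ++i)
            x = x->f;   // aggregate pointer advanced; type-compatible at every step
        return x;
    }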
Mapping C/C++ memory to Java memory. As explained before, C/C++ stack and heap pointers can
point to locations on both the stack and the heap, whereas Java stack pointers can point only
to Java heap locations. In spite of this difference, the stack and heap memory of C/C++ can
be modeled like Java memory. To achieve this, C/C++ memory is viewed as consisting of two
partitions: the addresses of variables, and the rest of the memory (stack and heap together)
[43]. The first partition (the addresses of variables) plays the role of the Java stack; the
second partition (the rest of the memory, stack and heap together) plays the role of the Java
heap.
Figure 20 illustrates a C/C++ memory snapshot modeled as Java memory (in dotted lines).
Pointer variables w, x, y, and z are on the C/C++ stack, and pointer variables A, B, C, and D
are on the modeled Java stack. In the figure, C/C++ pointers point to the stack variables x
and z. The stack and heap of C/C++ together are represented as the Java heap, and the Java
stack is the set of addresses of the C/C++ locations w, x, y, and z, stored in A, B, C, and D,
respectively. To overcome the difference caused by the pointer dereference (∗) and address-of
(&) operators of C/C++, which are absent in Java, Khedker et al. [43] model these two
constructs as follows:
• Pointer dereference (∗) is treated as a dereference of a special field deref that is not
used elsewhere in the program. For example [43], (∗x).f in C/C++ is viewed as x.deref.f in
Java.
• The addresses of C/C++ variables are represented by the Java stack (as shown in
Figure 20, where A denotes &w, B denotes &x, C denotes &y, and D denotes &z). For
example [43], the C/C++ access y.f is modeled as &y.deref.f, i.e., as an access starting from
the Java stack variable that holds &y (C in Figure 20).
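As a compact sketch of this modeling (our notation; deref is the special field name
introduced above):

    // C/C++ statement          Java-model view (after [43])
    //   p = &a;            =>  P = A;           // A is the Java stack cell holding &a
    //   x = *p;            =>  X = P.deref;
    //   y = (*q).f;        =>  Y = Q.deref.f;
    //   z = q->f;          =>  Z = Q.deref.f;   // -> is the same dereference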
Computation of the first Chow group of
a Hilbert scheme of space curves
arXiv:1103.0122v2 [math.AG] 20 Jun 2014
Gerd Gotzmann
Abstract
An earlier wrong formula for the dimension of A1 (Hd,g ) ⊗ Q is corrected.
Introduction
The results stated in ([T4], p. 1) have to be corrected as follows: Let H = H_{d,g} =
Hilb^P(P^3_C) be the Hilbert scheme which parametrizes the curves in P^3_C of degree d and
genus g (i.e., the closed subschemes of P^3_C with Hilbert polynomial P(T) = dT − g + 1).
It is always assumed that d ≥ 3 and that g is not maximal, i.e. that g < (d − 1)(d − 2)/2.
Theorem 0.1. Let g(d) := (d − 2)^2/4. Then dim_Q A_1(H_{d,g}) ⊗ Q = 3 (resp. = 4)
if g ≤ g(d) (resp. if g > g(d)).
Corollary 0.1. NS(H) ≃ Z^ρ and Pic(H) ≃ Z^ρ ⊕ C^r, where r := dim_C H^1(H, O_H)
and ρ = 3 if g ≤ g(d). If g > g(d), then ρ = 3 or ρ = 4.
Theorem 0.2. Let C ↪ H × P^3 be the universal curve over H. Then
dim_Q A_1(C) ⊗_Z Q = dim_Q A_1(H) ⊗_Z Q + 1.
Corollary 0.2. NS(C) ≃ Z^{ρ+1} and Pic(C) ≃ Z^{ρ+1} ⊕ C^s, where s := dim_C H^1(C, O_C)
and ρ is defined as in Corollary 0.1.
That means, the formula (d − 2)(d − 3)/2 for the bound g(d) in ([T4], p.1) is wrong
and has to be replaced by the above formula.
March 2, 2011
Contents
Chapter 1. Summary of earlier results  1
Chapter 2. Subschemes of points in P^2 and their Hilbert functions  5
Chapter 3. A rough description of ideals invariant under Γ · T(ρ)  27
Chapter 4. The α-grade  43
Chapter 5. Estimates of the α-grade in the case ρ_1 < 0, ρ_2 > 0  55
Chapter 6. Estimates of the α-grade in the case ρ_1 > 0, ρ_2 > 0 and r ≥ 1  65
Chapter 7. Estimates of the α-grade in the case ρ_1 > 0, ρ_2 > 0 and r = 0  79
Chapter 8. Borel-invariant surfaces and standard cycles  87
Chapter 9. Relations between B-invariant 1-cycles  95
Chapter 10. Proof of Theorem 1.1  101
Chapter 11. Surfaces in H invariant under G_a · T(4; k)  103
Chapter 12. Surfaces in H invariant under B(4; k)  107
Chapter 13. Relations in B(4; k)-invariant surfaces  115
Chapter 14. Necessary and sufficient conditions  119
Chapter 15. The case d ≥ 5  123
Chapter 16. The cases d = 3 and d = 4  125
Chapter 17. Correction of the results of [T4] and summary  129
Appendix A. Notations  131
Appendix B. Hilbert functions without Uniform Position Property  135
Appendix C. Ideals with many monomials  137
Appendix D. Unipotent groups acting on polynomial rings  139
Appendix E. Standard bases  143
Bibliography  145
CHAPTER 1
Summary of earlier results
1.1. Description of the starting situation
The notations are the same as in [T1]–[T4] and are summed up in Appendix A. The
ground field is C, and H = Hd,g is the Hilbert scheme which parametrizes the curves
C ⊂ P3C of degree d and genus g (i.e. the closed subschemes of P3C with Hilbert polynomial
P(T) = dT − g + 1). According to F. S. Macaulay, H_{d,g} is not empty if and only if the
“complementary” Hilbert polynomial $Q(T) = \binom{T+3}{3} - P(T)$ either has the form
$$Q(T) = \binom{T-1+3}{3} + \binom{T-a+2}{2},$$
where a is an integer ≥ 1, or the form
$$Q(T) = \binom{T-1+3}{3} + \binom{T-a+2}{2} + \binom{T-b+1}{1},$$
where a and b are integers (Macaulay coefficients) such that 2 ≤ a ≤ b. Between the degree
and genus on the one hand and the Macaulay coefficients on the other hand, one has the
following relations: d = a and g = (d − 1)(d − 2)/2 if $Q(T) = \binom{T-1+3}{3} + \binom{T-a+2}{2}$;
and d = a − 1 and g = (a^2 − 3a + 4)/2 − b if $Q(T) = \binom{T-1+3}{3} + \binom{T-a+2}{2} + \binom{T-b+1}{1}$.
One sees that the first case occurs if and only if one is dealing with plane curves, in which
case the groups A_1(H) and NS(H) both have rank 2 (cf. [T1], Satz 2a, p. 91). Therefore in
the following we always suppose that d ≥ 3 and g < (d − 1)(d − 2)/2; that means the
complementary Hilbert polynomial has the form $Q(T) = \binom{T-1+3}{3} + \binom{T-a+2}{2} + \binom{T-b+1}{1}$,
where 4 ≤ a ≤ b.
We also write HQ instead of Hd,g in order to express that this Hilbert scheme likewise
parametrizes the ideals I ⊂ OP3 with Hilbert polynomial Q(T ), or equivalently, the
saturated graded ideals in C[x, y, z, t] with Hilbert polynomial Q(T ).
In [T1]–[T4] it was attempted to describe the first Chow group A_1(H), where we always
take rational coefficients, and we write A_1(H) instead of A_1(H) ⊗_Z Q. The starting point
is the following consideration: If the Borel group B = B(4; k) operates on H = HQ
in the obvious way, then one can deform each 1-cycle on H into a 1-cycle whose prime
components are B-invariant, irreducible, reduced and closed curves on H. It follows that
A1 (H) is generated by such B-invariant 1-prime cycles on H. This is a partial statement
of a theorem of Hirschowitz. (Later on we will have to use the general statement, whereas
the partial statement can be proved in a simple way, see [T1], Lemma 1, p. 6.) Now such
a B-invariant 1-prime cycle (i.e. closed, irreducible and reduced curve) C on H can be
formally described as follows: Either each point of C is invariant under ∆ := U(4; k), or
one has C = G^i_a · η, where η is a closed point of H which is invariant under T = T(4; k)
and the group G^i, i ∈ {1, 2, 3}. Here G^i_a is the group G_a, acting by
ψ^1_α : x ↦ x, y ↦ y, z ↦ z, t ↦ αz + t,
ψ^2_α : x ↦ x, y ↦ y, z ↦ αy + z, t ↦ t,
ψ^3_α : x ↦ x, y ↦ αx + y, z ↦ z, t ↦ t,
respectively, on P = k[x, y, z, t], and G^i is the subgroup of ∆ which is complementary to
G^i_a; that means, one defines
$$G^1 := \begin{pmatrix} 1 & * & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},\quad
G^2 := \begin{pmatrix} 1 & * & * & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 1 \end{pmatrix},\quad
G^3 := \begin{pmatrix} 1 & 0 & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
If C has this form, then C is called a curve or a 1-cycle of type i, where i ∈ {1, 2, 3}.
A(H) := Im(A_1(H^∆) → A_1(H)) is called the “algebraic part” and Ā_1(H) := A_1(H)/A(H)
is called the “combinatorial part” of the first Chow group of H. Here H∆ denotes the
fixed point scheme which, just as all other fixed point schemes that will occur later on,
is supposed to have the induced reduced scheme structure. (This convention is valid also
for the Hilbert scheme H d := Hilbd (P2C ), see below.)
In order to formulate the results obtained so far, one has to introduce the following
“tautological” 1-cycles on H:
C_1 = {(x, y^a, y^{a−1} z^{b−a} (αz + t)) | α ∈ k}^−
C_2 = {(x, y^{a−1}(αy + z), y^{a−2} z^{b−a+1} (αy + z)) | α ∈ k}^−
C_3 = {(x^a, αx + y, x^{a−1} z^{b−a+1}) | α ∈ k}^−
D = {(x^2, xy, y^{a−1}, z^{b−2a+4}(y^{a−2} + αxz^{a−3})) | α ∈ k}^−
E = {(x^2, xy, xz, y^a, y^{a−1} z^{b−a+1}, x t^{b−2} + α y^{a−1} z^{b−a}) | α ∈ k}^−
For the sake of simplicity, we now suppose d ≥ 5 (i.e. a ≥ 6). (The cases d = 3 and d = 4
will be treated separately in Chapter 16.) Then one has the following results:
1. If b < 2a − 4, i.e. if g > γ(d) := (d − 2)(d − 3)/2, then A(H) is generated by E, and
A1 (H) is generated by E, C1 , C2 , C3 .
2. If b ≥ 2a − 4, i.e. if g ≤ γ(d), then A(H) is generated by E and D, and A_1(H) is
generated by E, D, C_2 and C_3 (see [T1], Satz 2, p. 91; [T3], Proposition 4, p. 22; [T4],
Satz 1 and Proposition 2, p. 26).
For reasons of degree, [C_2] cannot lie in the vector space spanned by
[E], [D], [C_3], so the problem is to decide whether [C_3] ∈ A(H).
In ([T4], Proposition 3, p. 32) it was erroneously claimed that [C3 ] ∈ A(H), if b ≥
2a − 4. (The error is the wrong computation of the degree in [T4], p. 28, lines 21 to 30.)
Therefore the bound for the genus in ([T4], p. 1) is wrong.
Actually, in ([T2], 3.3.2) it had been proved that [C_3] ∈ A(H) if a ≥ 6 is even and
b ≥ a^2/4, i.e. if d ≥ 5 is odd and g ≤ (d − 1)(d − 3)/4. In the case of even d ≥ 6, it
will follow in Conclusion 14.3 that [C_3] ∈ A(H) if g ≤ (d − 2)^2/4. (This means the bound
of [T2], 3.3.3 is already valid for d ≥ 6.) One sees that the condition on g in both cases
can be summed up as g ≤ (d − 2)^2/4.
The major part of the following text serves for the proof that this sufficient condition
is a necessary condition, too (cf. Conclusion 14.1).
1.2. Technical tools
The formulas in ([T2], p. 134) and of ([T3], Anhang 2, p. 50) show that it is not
possible to decide by means of the computation of degrees, whether [C3 ] lies in A(H).
Therefore we try to get a grasp of the relations among the B-invariant 1-cycles on H with
the help of the theorem of Hirschowitz ([Hi], Thm. 1, p. 87). We sketch the procedure.
1.2.1. The Theorem of Hirschowitz. There is a closed and reduced subscheme
Z = Z(H) of H, such that Z(k) = {x ∈ H(k)| dim ∆ · x ≤ 1} (cf. [Ho], p. 412 and [T3],
Lemma 1, p. 35). Then one can show, with the help of the theorem of Hirschowitz, that
A_1(Z) ≅ A_1(H) (cf. [T2], Lemma 24, p. 121). As was explained in (1.1), A_1(H) has a
generating system consisting of B-invariant 1-cycles, which automatically lie in Z. As ∆
is normalized by B, B operates on Z, and therefore one can form the so-called equivariant
Chow group A_1^B(Z), which is isomorphic to A_1(Z) ([Hi], loc. cit.). And the relations
among B-invariant 1-cycles on Z are generated by relations among such cycles, which lie
on B-invariant surfaces V ⊂ Z ( see [Hi], Mode d’emploi, p. 89).
1.2.2. The Restriction morphism. Let Ut ⊂ H be the open subset consisting of
the ideals I ⊂ OP3 with Hilbert polynomial Q, such that t is not a zero divisor of OP3 /I.
Then there is a so-called restriction morphism h : U_t → H^d := Hilb^d(P^2_C), defined by
I ↦ I′ := (I + tO_{P^3}(−1))/tO_{P^3}(−1). E.g., if
$$G := \begin{pmatrix} 1 & 0 & 0 & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 1 \end{pmatrix} < \Delta,$$
then the fixed point scheme F := H^G is contained in U_t, and the restriction of h to F is
again denoted by h.
(i) There is a finite set F of Hilbert functions of ideals of colength d on P2 such that
S
Im(h) = {H≥ϕ |ϕ ∈ F }.
(ii) If k = k and if I ⊂ OP2k is an ideal of colength d and Hilbert function ϕ, then
d
P
I ∈ Im(h) ⇐⇒ g ∗ (ϕ) :=
ϕ(n) − d+3
+ d2 + 1 ≥ g.
3
0
(iii) If ϕ ∈ F and if ψ is the Hilbert function of an ideal on P2 of colength d such that
ϕ(n) ≤ ψ(n) for all n ∈ N, then ψ ∈ F .
(iv) Let ϕ ∈ F and let I ⊂ O_{P^2_k} be an ideal with Hilbert function ϕ. Let I^* be the
ideal in O_{P^3_k} defined by $H^0(I^*(n)) = \bigoplus_{i=0}^{n} t^{n-i} H^0(I(i))$; then
V_+(I^*) ⊂ P^3_k is a curve of degree d and genus g^*(ϕ).
Here Hϕ ⊂ H d is the locally closed subscheme (with the reduced induced scheme
structure), which parametrizes the ideals I ⊂ OP2 of colength d with Hilbert function
ϕ, and $H_{\geq\varphi} := \bigcup \{ H_\psi \mid \psi \geq \varphi \}$ is a closed subscheme
(with the induced reduced scheme structure). The image of C_3 under h is the 1-cycle
c_3 := {(x^d, αx + y) | α ∈ k}^−. One has
Theorem 1.1. Let d ≥ 5, let $H := \bigcup \{ H_\varphi \subset H^d \mid g^*(\varphi) > g(d) \}$,
and let A(H) := Im(A_1(H^{U(3;k)}) → A_1(H)). Then [c_3] ∉ A(H).
The proof extends over Chapters 2 to 10 and essentially rests on the apparently strong
condition that an ideal I has a Hilbert function ϕ with g^*(ϕ) > g(d).
1.2.3. Standard cycles on H^d. It has been shown, or will be shown below, that
[C_3] ∈ A(H_{d,g}) if g ≤ g(d) (cf. 1.1). Therefore we can suppose that g > g(d). If
J ∈ Ut and the restriction ideal I := J ′ has the Hilbert function ϕ, then from (ii) in
(1.2.2) it follows that g ∗ (ϕ) > g(d). It will be shown in Chapter 2 that this implies there
is a linear form ℓ ∈ S1 − (0), an ideal K ⊂ OP2 of colength c and a form f ∈ H 0 (K(m))
such that I = ℓK(−1) + f OP2 (−m), c + m = d and m ≥ c + 2.
Let C = G_a · η ⊂ H be a 1-cycle of type 3 and let J ↔ η be the corresponding
ideal in O_{P^3} with Hilbert polynomial Q. Then the ideal I := J′ ↔ η′ := h(η) is
invariant under T(3; k) and
$$\Gamma := \begin{pmatrix} 1 & 0 & * \\ 0 & 1 & * \\ 0 & 0 & 1 \end{pmatrix} < U(3; k).$$
It follows that either
I = xK(−1) + y m OP2 (−m) or I = yK(−1) + xm OP2 (−m), where K is a monomial
ideal. We say that I has x-standard form or y-standard form, respectively, and we call
C′ := G_a · η′ an x-standard cycle or a y-standard cycle on H^d, respectively. With the help
of the theorem of Hirschowitz one can again try to describe the relations between
B(3; k)-invariant y-standard cycles on H, and one obtains that such relations cannot make the
y-standard cycle c_3 disappear modulo A(H) (cf. Proposition 9.1), from which Theorem 0.1 will
follow.
1.2.4. 1-cycles of proper type 3. Let C = G_a · η, η ↔ J, be a 1-cycle of type 3 on
H = H_{d,g}, such that d ≥ 5 and g > g(d). C is called a 1-cycle of proper type 3 if
C′ = G_a · η′ is a y-standard cycle on H^d. Corresponding to Hirschowitz’s theorem one has
to consider B(4; k)-invariant surfaces V ⊂ Z(H), which contain a 1-cycle of proper type
3. It turns out that then V is pointwise invariant under G3 and therefore V is contained
in Ut . Then one can map relations between B-invariant 1-cycles on V by h∗ into relations
between B(3; k)-invariant 1-cycles on h(V ), and one obtains with the aid of Proposition
9.1 the main result of the second part of the paper ( Theorem 14.1), which corresponds
to Theorem 0.1. In Chapter 15 there is complete description of A1 (Hd,g ) if d ≥ 5, and in
Chapter 16 this is done in the cases d = 3 and d = 4 (Theorem 15.1 and Theorem 16.1,
respectively).
CHAPTER 2
Subschemes of points in P2 and their Hilbert functions
2.1. General properties
The ground field is C and k denotes an extension field. A closed subscheme Z ⊂ P2k of
− d. If
length d > 0 is defined by an ideal I ⊂ OP2k with Hilbert polynomial Q(n) = n+2
2
0
0
2
the Hilbert function h (I(n)) = dimk H (Pk ; I(n)), n ∈ N, of I is denoted by ϕ(n), then
ϕ′ (n) := ϕ(n) − ϕ(n − 1), n ∈ N, denotes the difference function. If ϕ : N −→ N is any
function, such that ϕ(n) = n+2
− d for n ≫ 0, then the ideals I ⊂ OP2k with Hilbert
2
function ϕ form a locally closed subset Hϕ of the Hilbert scheme H d = Hilbd (P2C ), and we
take Hϕ as a subscheme of H d with the induced reduced scheme structure.
Iarrobino has shown ([I], Lemma 1.3, p. 8) that H_ϕ ≠ ∅ if and only if the difference
function fulfils the following two conditions:
(a) ϕ′(n) ≤ n + 1 for all n ∈ N, and
(b) ϕ′(n) ≤ max(ϕ′(n + 1) − 1, 0) for all n ∈ N.
If α = α(ϕ) := min{n ∈ N | ϕ(n) > 0}, then (b) is equivalent to:
(b′) ϕ′(n) + 1 ≤ ϕ′(n + 1) for all n ≥ α.
The (Mumford-)regularity e of an ideal I with Hilbert function ϕ as before is characterized by e = reg(ϕ) = min{n ∈ N | ϕ′ (n + 1) = n + 1} (cf. Appendix B, Lemma
2). In principle, the graph of ϕ′ has the shape of Fig. 2.1. If ∅ ≠ H_ϕ ⊂ H^d, then d is
determined by the condition $\varphi(n) = \binom{n+2}{2} - d$, n ≫ 0, and we call d the
colength of ϕ.
It is known that reg(ϕ) ≤ d ([G1], Lemma 2.9, p. 65), and reg(ϕ) = d is equivalent to I
being generated by a linear form ℓ ∈ S_1 and a form f ∈ S_d not divisible by ℓ. Here
S = k[x, y, z] is the graded polynomial ring. Another characterization of reg(ϕ) = d is
that the graph of ϕ′ has the shape of Fig. 2.2. One notes that the colength of ϕ is equal to
the number of “monomials” between the graph of ϕ′ and the line y = x + 1. (For this and
other properties, see [T1]–[T4].) In the following we write P^2 instead of P^2_k and denote
by I an ideal in O_{P^2}, whose finite colength (resp. Hilbert function) usually is denoted
by d (resp. ϕ).
2.2. Numerical and algebraic properties
Lemma 2.1. Let k = k̄ and let I ⊂ O_{P^2} be an ideal with colength d, Hilbert function ϕ,
and regularity m. We assume that there is a number ε ∈ N, 0 ≤ ε < m − 2, such that
ϕ′(n) = n for all n ∈ N with ε + 1 ≤ n ≤ m − 1. Then there is a linear form ℓ ∈ S_1, an
ideal K ⊂ O_{P^2} of colength c, and a form f ∈ H^0(K(m)) such that
I = ℓK(−1) + f O_{P^2}(−m). If ℓ_1, ℓ_2 are any linear forms in S_1 such that ℓ, ℓ_1, ℓ_2 is
a basis of the k-vector space S_1 and if R := k[ℓ_1, ℓ_2] is the subring of S isomorphic to
k[x, y], then d = c + m and
$$H^0(I(n)) = \begin{cases} \ell H^0(K(n-1)) & \text{if } n < m,\\ \ell H^0(K(n-1)) \oplus f R_{n-m} & \text{if } n \geq m.\end{cases}$$
Proof. By assumption, the graph of ϕ′ has the shape as in Fig. 2.3. Then there is an
ℓ ∈ S_1 − (0) and an ideal K ⊂ O_{P^2} of regularity ≤ ε such that H^0(I(n)) = ℓH^0(K(n − 1))
for all n ≤ m − 1 (cf. [G4] and Appendix B, Lemma 1). If ψ is the Hilbert function of
K, then ϕ′ (n) = ψ ′ (n − 1) for 1 ≤ n ≤ m − 1, and because of the shape of the graphs
of ϕ′ and ψ ′ it follows that ϕ(n) = ψ(n − 1) + (n − m + 1) for all n ≥ m. Therefore
H 0 (I(m)) = ℓH 0 (K(m − 1)) ⊕ f · k, where f ∈ H 0 (I(m)) is a suitable section. Because
of the m-regularity of I it follows that H 0 (I(n)) = ℓH 0 (K(n − 1)) + f Sn−m , n ≥ m. If
n = m + 1, then from ϕ(m + 1) = ψ(m) + 2 it follows that S1 f ∩ ℓH 0 (K(m)) has the
dimension 1. Thus there is an h ∈ S_1 − (0) such that hf ∈ ℓH^0(K(m)). If ℓ were a
divisor of f, then it would follow that I ⊂ ℓO_{P^2}(−1) and thus I would not have a finite
colength in O_{P^2}. Therefore we may suppose that h = ℓ, and it follows that f ∈ H^0(K(m)).
We choose ℓ_1, ℓ_2 ∈ S_1 such that ℓ, ℓ_1, ℓ_2 are linearly independent and we put R := k[ℓ_1, ℓ_2].
If there were an r ∈ R_ν − (0) such that rf ∈ ℓH^0(K(m + ν − 1)), then it would follow
that ℓ is a divisor of f, a contradiction. Between the graph of ψ′(n − 1) and the line y = x
there are exactly c := colength(ψ) monomials, and therefore d = c + m (cf. Fig. 2.3).
Corollary 2.1. The assumptions and notations are as in Lemma 2.1. Then one has:
(i) κ := reg(K) ≤ ε; in particular κ ≤ m − 3.
(ii) K is uniquely determined, and the linear form ℓ is uniquely determined up to a nonzero
factor from k.
(iii) f is uniquely determined up to a nonzero factor from k, modulo ℓH^0(K(m − 1)).
(iv) κ and ε are uniquely determined by ϕ.
Proof. (i) ε + 1 = ϕ′(ε + 1) = ψ′(ε). From (Appendix B, Lemma 2) it follows that κ ≤ ε.
(ii) The regularity only depends on the Hilbert function, and therefore
κ = reg(K_1) = reg(K_2) < m − 1. Thus from ℓ_1 H^0(K_1(m − 2)) = ℓ_2 H^0(K_2(m − 2)) it
follows that ℓ_1 K_1(−1) = ℓ_2 K_2(−1). If ℓ_1 were not a divisor of ℓ_2, then one would have
K_2 ⊂ ℓ_1 O_{P^2}(−1), a contradiction. Assertion (ii) follows from this, and (iii) and (iv)
are clear.
Remark 2.1. If ϕ and ψ are two Hilbert functions of colength d, then from ϕ < ψ
(that means ϕ(n) ≤ ψ(n) for all n ∈ N and ϕ(n) < ψ(n) for at least one n ∈ N) it follows
that g ∗ (ϕ) < g ∗ (ψ). This follows immediately from the definition of g ∗(ϕ) in (1.2.2).
Remark 2.2. If e := reg(ϕ), d := colength(ϕ), and $s(\varphi) := \sum_{i=0}^{e-2} \varphi(i)$,
then
$$g^*(\varphi) = s(\varphi) - \binom{e+1}{3} + d(e-2) + 1.$$
Because $\varphi(n) = \binom{n+2}{2} - d$ for all n ≥ e − 1, this follows from a simple
computation with binomial coefficients.
2.2.1. Hilbert functions of colength ≤ 4. We use the formula of Remark 2.2 and orient
ourselves by Figures 2.4–2.7.
d = 1: There is only one Hilbert function (cf. Fig. 2.4):
e = 1, s(ϕ) = 0, $g^*(\varphi) = 0 - \binom{2}{3} + 1 \cdot (1-2) + 1 = 0$.
d = 2: There is again only one Hilbert function (cf. Fig. 2.5):
e = 2, s(ϕ) = 0, $g^*(\varphi) = 0 - \binom{3}{3} + 2 \cdot 0 + 1 = 0$.
d = 3: There are two Hilbert functions (Fig. 2.6a and Fig. 2.6b):
e_1 = 2, s(ϕ_1) = 0, $g^*(\varphi_1) = 0 - \binom{3}{3} + 3 \cdot 0 + 1 = 0$;
e_2 = 3, s(ϕ_2) = 1, $g^*(\varphi_2) = 1 - \binom{4}{3} + 3 \cdot 1 + 1 = 1$.
d = 4: There are two Hilbert functions (Fig. 2.7a and Fig. 2.7b):
e_1 = 3, s(ϕ_1) = 0, $g^*(\varphi_1) = 0 - \binom{4}{3} + 4 \cdot 1 + 1 = 1$;
e_2 = 4, s(ϕ_2) = 4, $g^*(\varphi_2) = 4 - \binom{5}{3} + 4 \cdot 2 + 1 = 3$.
2.2.2. Two special ideals. First case: If d ≥ 6 is even, let e := d/2 + 1 and
I := (x^2, xy^{e−2}, y^e). The Hilbert function χ can be read from Fig. 2.8. One notes that
colength(I) and reg(I) really are equal to d and e, respectively, and
$\chi(n) = \sum_{i=1}^{n-1} i = \binom{n}{2}$ if 1 ≤ n ≤ e − 2. Therefore
$s(\chi) = \sum_{i=1}^{e-2} \binom{i}{2} = \binom{e-1}{3}$, and it follows that
$$g^*(\chi) = \binom{e-1}{3} - \binom{e+1}{3} + 2(e-1)(e-2) + 1 = \binom{e-1}{3} - \binom{e+1}{3} + 2e^2 - 6e + 5$$
$$= -\tfrac{1}{2}(e-1)(e-2) - \tfrac{1}{2}e(e-1) + 2e^2 - 6e + 5 = e^2 - 4e + 4 = (e-2)^2 = \tfrac{1}{4}(d-2)^2.$$
Second case: If d ≥ 5 is odd, let e := (d + 1)/2 and I := (x^2, xy^{e−1}, y^e).
The Hilbert function χ can be read from Fig. 2.9. One notes that colength(I) and reg(I)
are equal to d and e, respectively, and $\chi(n) = \binom{n}{2}$ if 1 ≤ n < e.
Therefore $s(\chi) = \sum_{i=2}^{e-2} \binom{i}{2} = \binom{e-1}{3}$, and it follows that
$$g^*(\chi) = \binom{e-1}{3} - \binom{e+1}{3} + (2e-1)(e-2) + 1 = -\binom{e-1}{2} - \binom{e}{2} + (2e-1)(e-2) + 1$$
$$= -(e-1)^2 + 2e^2 - 5e + 3 = e^2 - 3e + 2 = \tfrac{1}{4}(d+1)^2 - \tfrac{3}{2}(d+1) + 2 = \tfrac{1}{4}(d^2 - 4d + 3).$$
Definition 1. If d ≥ 5, then we set
$$g(d) := \begin{cases} \tfrac{1}{4}(d-2)^2 & \text{if } d \geq 6 \text{ is even},\\ \tfrac{1}{4}(d-1)(d-3) & \text{if } d \geq 5 \text{ is odd}.\end{cases}$$
g(d) is called the deformation bound for ideals in O_{P^2} of colength d.
The rest of the article serves to justify this notation.
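For orientation, here are the first few values of g(d), computed from Definition 1 (our
arithmetic); in both parities one has g(d) = ⌊(d − 2)^2/4⌋, which is the form of the bound
appearing in Theorem 0.1:
$$g(5) = \tfrac14 \cdot 4 \cdot 2 = 2,\quad g(6) = \tfrac14 \cdot 4^2 = 4,\quad g(7) = \tfrac14 \cdot 6 \cdot 4 = 6,\quad g(8) = \tfrac14 \cdot 6^2 = 9.$$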
2.2.3.
Lemma 2.2. Let k = k̄, let I ⊂ O_{P^2} be an ideal of colength d ≥ 5 and regularity m,
and let ϕ be the Hilbert function of I. If g^*(ϕ) > g(d), then the assumptions of Lemma 2.1
are fulfilled by I.
Proof. Let χ be the Hilbert function defined by Fig. 2.8 and Fig. 2.9, respectively, and
let m = reg(ϕ). If ϕ′(m) − ϕ′(m − 1) > 2, then ϕ′(i) ≤ χ′(i) and therefore ϕ(i) < χ(i)
for all i, and it would follow that g^*(ϕ) ≤ g^*(χ) = g(d) (Remark 2.1), a contradiction.
If ϕ′(m) − ϕ′(m − 1) = 1, then ϕ′(m − 1) = ϕ′(m) − 1 = (m + 1) − 1 = (m − 1) + 1, therefore
reg(I) ≤ m − 1 (cf. Appendix B, Lemma 2). It follows that ϕ′(m) − ϕ′(m − 1) = 2, therefore
ϕ′(m − 1) = m − 1. If ϕ′(m − 2) = m − 2 as well, then the assumptions of Lemma 2.1 are
fulfilled with ε := m − 3, for instance. Thus without restriction of generality one can
assume ϕ′(m − 2) ≤ m − 3.
Case 1: ϕ′(m − 2) < m − 3. Figure 2.10 represents the Hilbert function ϕ as well as the
B(3; k)-invariant ideal M with Hilbert function ϕ. Then one makes the deformation
E(H^0(M(m))) ↦ E(H^0(M(m))) − u ∪ v =: E(H^0(N(m))),
where N is a B(3; k)-invariant ideal with Hilbert function ψ > ϕ. But then it follows that
g^*(ϕ) < g^*(ψ) ≤ g^*(χ) = g(d), a contradiction.
Case 2: ϕ′(m − 2) = m − 3. If the graph of ϕ′ had a shape different from that in
Fig. 2.11a, i.e., if the graph of ϕ′ had a “jumping place” n < m − 2 (cf. the terminology in
[T1], p. 72), then as in the first case one could make a deformation
E(H^0(M(m))) ↦ E(H^0(M(m))) − u ∪ v =: E(H^0(N(m)))
(cf. Fig. 2.11b) and would again get a contradiction. There only remains the possibility
represented in Fig. 2.11a. But then ϕ = χ, which contradicts the assumption g^*(ϕ) > g(d).
2.3. Numerical conclusions from g ∗ (ϕ) > g(d)
2.3.1. At first we describe the starting situation: In this section we suppose that
g(d) is defined, i.e. d ≥ 5. Moreover, let ϕ be a Hilbert function such that H_ϕ ≠ ∅,
colength(ϕ) = d, reg(ϕ) = m, and g^*(ϕ) > g(d). Then the assumptions of Lemma 2.2 are
fulfilled for an ideal I, which can be supposed to be monomial. Therefore the assumption
k = k̄ is superfluous. As m and d are uniquely determined by ϕ, c := d − m is uniquely
determined, too.
The aim in this section is to prove the inequality m ≥ c + 2. By Lemma 2.1 and
Lemma 2.2, respectively, one can write I = ℓK(−1) + f O_{P^2}(−m), and c is equal to the
colength of the Hilbert function ψ of K. (As I is monomial, K is monomial, too, and without
restriction one has ℓ ∈ {x, y, z}.)
Lemma 2.3. Let ψ be the Hilbert function of an ideal K of colength c ≥ 5, κ := reg(ψ),
and let m ≥ κ + 2 be an integer. If one defines ϕ by ϕ′(n) := ψ′(n − 1) for 0 ≤ n ≤ m − 1
and ϕ′(n) := n + 1 for n ≥ m, then H_ϕ ≠ ∅, colength(ϕ) = c + m, reg(ϕ) = m, and
g^*(ϕ) = g^*(ψ) + ½·m(m − 3) + c.
Proof. We orient ourselves by Figure 2.3, but the weaker assumption m ≥ κ + 2 takes the
place of the assumption m ≥ κ + 3. Without restriction one can assume that K is
B(3; k)-invariant. Then y^m ∈ H^0(K(m)) and I := xK(−1) + y^m O_{P^2}(−m) has the Hilbert
function ϕ, the regularity m, and the colength c + m. This follows by considering the figure
mentioned above (and has been shown in a more general situation in [G4], Lemma 4, p. 660).
We compute g^*(ϕ) (cf. Remark 2.2):
$$s(\varphi) = \sum_{i=0}^{m-2} \varphi(i) = \sum_{i=0}^{\kappa-1} \psi(i-1) + \sum_{i=\kappa}^{m-2} \psi(i-1) = \sum_{i=0}^{\kappa-2} \psi(i) + \sum_{i=\kappa-1}^{m-3} \left[ \binom{i+2}{2} - c \right] = s(\psi) + \sum_{i=\kappa-1}^{m-3} \binom{i+2}{2} - (m-\kappa-1)c.$$
By Remark 2.2 it follows that:
$$g^*(\varphi) = s(\psi) + \binom{m}{3} - \binom{\kappa+1}{3} - (m-\kappa-1)c - \binom{m+1}{3} + (c+m)(m-2) + 1$$
$$= s(\psi) - \binom{\kappa+1}{3} + c(\kappa-2) + 1 + \binom{m}{3} - \binom{m+1}{3} - mc + c + (c+m)m - 2m$$
$$= g^*(\psi) - \binom{m}{2} + m^2 - 2m + c = g^*(\psi) + \tfrac{1}{2} m(m-3) + c.$$
2.3.2. The cases c ≤ 4. By Lemma 2.2 (resp. by Corollary 2.1 of Lemma 2.1) one
has ϕ′ (n) = ψ ′ (n − 1), 0 ≤ n ≤ m − 1, ϕ′ (n) = n + 1, n ≥ m, and κ = reg(K) ≤ m − 3.
Then the assumptions of Lemma 2.3 are fulfilled.
We use the formula given there and orient ourselves by Figures 2.4–2.7. The regularity of
the Hilbert function considered in each case will now be denoted by κ.
If c ∈ {0, 1}, then because of d = m + c it follows that m ≥ 4.
If c = 2, then κ = 2 and m ≥ κ + 3 = 5.
If c = 3 and κ = 2 or κ = 3, then m ≥ κ + 3 ≥ 5.
If c = 4, then κ = 3 or κ = 4 and m ≥ κ + 3 ≥ 6.
Thus in the cases 0 ≤ c ≤ 4 one has m ≥ c + 2.
2.3.3. The case g^*(ψ) ≤ g(c). This notation implies that c ≥ 5. If κ is the regularity
and c is the colength of any Hilbert function, then because of 1 + 2 + ⋯ + κ ≥ c one always
has $\binom{\kappa+1}{2} \geq c$, and therefore $\kappa \geq \sqrt{2c} - 1$. By Lemma 2.2 the
assumptions of Lemma 2.1 are fulfilled, therefore by Corollary 2.1 it follows that
m ≥ κ + 3 > 5.16.
and
therefore
κ
≥
2
Lemma 2.1 are fulfilled, therefore by Corollary 2.1 it follows that m ≥ κ + 3 > 5.16.
1st case: c and m are even.
By the formulas for g(d) and g ∗ (ϕ) it follows that:
1 2
(c
4
− 4c + 4) + 12 m(m − 3) + c > 41 [(c + m)2 − 4(c + m) + 4]
⇐⇒
1
m(m
2
− 3) + c > 14 [2cm + m2 − 4m]
⇐⇒ m2 − 2(c + 1)m + 4c > 0.
The solutions of the corresponding quadratic equation are 0 and 2c.
Therefore m ≥ 2c + 1 > c + 2.
2nd case: c is even, m is odd.
One obtains the inequality:
1 2
(c
4
− 4c + 4) + 12 m(m − 3) + c > 41 [(c + m)2 − 4(c + m) + 3]
m2 − 2(c + 1)m + 4c + 1 > 0.
√
The solutions of the corresponding quadratic equation are m = c + 1 ± c2 − 2c ≥ 0.
√
√
Because of c + 1 − c2 − 2c < 3, if c ≥ 5, it follows that m ≥ c + 1 + c2 − 2c. Because of
√
c + 1 + c2 − 2c > 2c − 1, if c ≥ 5, it follows that m ≥ 2c, therefore m ≥ 2c + 1 > c + 2.
⇐⇒
3rd case: c is odd, m is even.
One obtains the inequality :
1 2
(c
4
− 4c + 3) + 21 m(m − 3) + c > 14 [(c + m)2 − 4(c + m) + 4]
⇐⇒
m2 − 2(c + 1)m + 4c > 0.
It follows that m ≥ 2c + 1 > c + 2.
4th case: c and m are odd.
One obtains the inequality:
1 2
(c
4
⇐⇒
− 4c + 3) + 21 m(m − 3) + c > 14 [(c + m)2 − 4(c + m) + 4]
m2 − 2(c + 1)m + 4c − 1 > 0.
10
p
The solutions of thepcorresponding quadratic equation are m = c +p1 ± (c − 1)2 + 1.
Because of c + 1 − (c − 1)2 + 1 < 2 it follows that m ≥ c + 1 + (c − 1)2 + 1 > 2c,
therefore m ≥ 2c + 1 ≥ c + 2.
2.3.4. The case g^*(ψ) ≥ g(c). As in the proof of Lemma 2.3 one can write
I = xK(−1) + y^m O_{P^2}(−m), K a B(3; k)-invariant ideal of colength c, κ = reg(K),
d = colength(I) = c + m, and again m ≥ κ + 3 (Corollary 2.1). We represent the Hilbert
function ψ by the ideal K = xJ(−1) + y^κ O_{P^2}(−κ), J a B(3; k)-invariant ideal of
colength b and of regularity ε, where κ ≥ ε + 3 (cf. Corollary 2.1). If the Hilbert function
of J is denoted by ϑ, then in principle one has the situation represented by Fig. 2.12. If
one assumed that (m − 1) − κ ≤ colength(J) = b, then one could move the monomials denoted by
1, 2, 3, … into the correspondingly marked positions (cf. Fig. 2.12). In the course of this
the Hilbert function increases, and therefore g^*(ϕ) < g^*(ϕ_1) < ⋯ < g(d), a contradiction.
Thus one has (m − 1) − κ > b, i.e., m ≥ κ + b + 2 = c + 2.
2.3.5. Summary.
Lemma 2.4. (Notations as in Lemma 2.1 and Lemma 2.2) If g(d) < g ∗(ϕ), then
m ≥ c + 2.
From the proof of Lemma 2.4 we can conclude one more statement:
Corollary 2.2. Let g^*(ψ) ≤ g(c) (which notation implies c ≥ 5). Then m ≥ 2c + 1.
2.4. Additional group operations
2.4.1. General auxiliary lemmas. The group Gl(3; k) operates on S = k[x, y, z], and
therefore on H^d = Hilb^d(P^2_k). If ρ = (ρ_0, ρ_1, ρ_2) ∈ Z^3 is a vector such that
ρ_0 + ρ_1 + ρ_2 = 0, then
$$T(\rho) := \{ (\lambda_0, \lambda_1, \lambda_2) \in (k^*)^3 \mid \lambda_0^{\rho_0} \lambda_1^{\rho_1} \lambda_2^{\rho_2} = 1 \}$$
is a subgroup of T = T(3; k), and
$$\Gamma := \begin{pmatrix} 1 & 0 & * \\ 0 & 1 & * \\ 0 & 0 & 1 \end{pmatrix}$$
is a subgroup of U(3; k). We let the additive group G_a operate on S by
ψ_α : x ↦ x, y ↦ αx + y, z ↦ z, α ∈ k,
and σ : G_m → T nearly always denotes the operation σ(λ) : x ↦ x, y ↦ y, z ↦ λz, λ ∈ k^*.
Auxiliary Lemma 1. If V ⊂ Sd is a vector space, invariant under G := Γ · T (ρ)
where ρ = (0, −1, 1) or ρ = (−1, 0, 1), then V is monomial, i.e. invariant under T .
Proof. We first consider the case ρ = (0, −1, 1). We take a standard basis of V
consisting of T (ρ)-semi-invariants (see Appendix E). Assuming that the assertion above
is wrong, we conclude that there is a T(ρ)-semi-invariant f ∈ V such that the monomials
occurring in f are not in V . Then there is such a form with smallest z-degree. From the
invariance of V under T (ρ) it follows that V is invariant under the Gm -action τ (λ) : x 7→
λx, y ↦ y, z ↦ z, λ ∈ k^*. We write f = Mp, where M = x^ℓ y^m,
$p = \sum_{i=0}^{n} a_i y^{n-i} z^i$, ℓ + m + n = d, n ≥ 1 and a_n ≠ 0. It follows that
$y\,\partial f/\partial z = yM \sum_{i=1}^{n} i a_i y^{n-i} z^{i-1} \in V$. Now y∂f/∂z is
also a T(ρ)-semi-invariant with smaller z-degree than f. According to the choice of f it
follows that g := M y z^{n−1} ∈ V, therefore y∂g/∂z = (n − 1) M y^2 z^{n−2} ∈ V, etc. One
gets M y^i z^{n−i} ∈ V for 1 ≤ i ≤ n, therefore M y^n ∈ V, a contradiction.
In the case ρ = (−1, 0, 1) we write $f = x^\ell y^m \sum_{i=0}^{n} a_i x^{n-i} z^i$. Because
of $x\,\partial f/\partial z = xM \sum_{i=1}^{n} i a_i x^{n-i} z^{i-1}$ we can argue as
before.
Auxiliary Lemma 2. Let I ⊂ O_{P^2} be an ideal of colength d which is invariant under
G := Γ · T(ρ). If ρ_0 + ρ_1 + ρ_2 = 0, ρ_0 < 0 and ρ_1 < 0, then I is invariant under T.
Proof. Let n be the smallest natural number such that H^0(I(n)) is not T-invariant.
Without restriction we have n ≥ 1. As H^0(I(n)) has a standard basis, there is a proper
semi-invariant in H^0(I(n)), i.e. a form f ∈ H^0(I(n)) of the shape
f = M(1 + a_1 X^ρ + a_2 X^{2ρ} + ⋯ + a_r X^{rρ}), M a monomial, a_r ≠ 0, r ≥ 1, such that no
monomial M X^{iρ} with a_i ≠ 0 is in H^0(I(n)). If M were divisible by z, then
g := z^{−1} f ∈ H^0(I(n − 1)) would be a proper semi-invariant, too, because from
z^{−1} M X^{iρ} ∈ H^0(I(n − 1)) it follows that M X^{iρ} ∈ H^0(I(n)). Therefore M is not
divisible by z. From the proper semi-invariants of H^0(I(n)) we choose one, say f, such that
the z-degree is minimal. Now from f ∈ H^0(I(n)), because of the Γ-invariance, it follows
that x∂f/∂z = xM(a_1 ρ_2 z^{−1} X^ρ + ⋯ + r a_r ρ_2 z^{−1} X^{rρ}) and
y∂f/∂z = yM(a_1 ρ_2 z^{−1} X^ρ + ⋯ + r a_r ρ_2 z^{−1} X^{rρ}) are in H^0(I(n)), i.e.,
g := xM X^ρ z^{−1} p and h := yM X^ρ z^{−1} p are in H^0(I(n)), where
p(X^ρ) := a_1 ρ_2 + 2 a_2 ρ_2 X^ρ + ⋯ + r a_r ρ_2 X^{(r−1)ρ}. As the z-degrees of g and h are
smaller than the z-degree of f, g and h are no longer proper semi-invariants, i.e. the
monomials which occur in g or in h are all in H^0(I(n)). It follows that
u := z^{−1} x M X^{rρ} and v := z^{−1} y M X^{rρ} are in H^0(I(n)). From the Γ-invariance it
follows, by applying the operators x∂/∂z and y∂/∂z repeatedly, that
$(x/z)^{|\rho_0|} \cdot (y/z)^{|\rho_1|} \cdot M X^{r\rho} \in H^0(I(n))$. Now
X^ρ = x^{−|ρ_0|} y^{−|ρ_1|} z^{ρ_2} and ρ_0 + ρ_1 + ρ_2 = 0, therefore
M X^{(r−1)ρ} ∈ H^0(I(n)). Applying the operators mentioned before again gives
M X^{(r−2)ρ} ∈ H^0(I(n)), etc. It follows that M X^{iρ} ∈ H^0(I(n)) for 0 ≤ i ≤ r − 1, and
therefore M X^{rρ} ∈ H^0(I(n)), a contradiction.
2.4.2. Successive construction of Γ-invariant ideals. At first we consider a general
situation: Let K ⊂ O_{P^2} be an ideal of colength c and of regularity e; z is supposed to
be a non-zero divisor of O_{P^2}/K. Let R := k[x, y] and let ℓ ∈ R_1 be a linear form. Let
m > e be an integer and f ∈ H^0(K(m)) a section whose leading term is not divisible by ℓ,
i.e., if one writes f = f^0 + zf^1 + ⋯ + z^m f^m, where f^i ∈ R_{m−i}, then f^0 is not
divisible by ℓ.
Lemma 2.5. The ideal I := ℓK(−1) + f O_{P^2}(−m) has the following properties:
(i) z is not a zero-divisor of O_{P^2}/I.
(ii) H^0(I(n)) = ℓH^0(K(n − 1)) if n < m, and
H^0(I(n)) = ℓH^0(K(n − 1)) ⊕ f k[x, z]_{n−m} if n ≥ m and ℓ = αx + y
(respectively H^0(I(n)) = ℓH^0(K(n − 1)) ⊕ f k[y, z]_{n−m} if n ≥ m and ℓ = x).
(iii) colength(I) = c + m, reg(I) = m.
Proof. If ℓ = x, these are the statements of ([G4], Lemma 4, p. 660). If ℓ = αx + y,
α ∈ k^*, then let u be the automorphism x ↦ ℓ, y ↦ y, z ↦ z of S. By applying (loc. cit.)
to K̄ := u^{−1}(K), Ī := u^{−1}(I), f̄ := u^{−1}(f), one gets
H^0(Ī(n)) = xH^0(K̄(n − 1)) ⊕ f̄ k[y, z]_{n−m}.
Now applying u gives
H^0(I(n)) = ℓH^0(K(n − 1)) ⊕ f k[y, z]_{n−m}.
As k[y, z] = k[ℓ − αx, z] and ℓf ∈ ℓH^0(K(m)), statement (ii) follows if α ≠ 0.
If α = 0, we take the automorphism x ↦ y, y ↦ x, z ↦ z and argue as before.
We would like to put the section f in a certain normal form. We first consider the case
ℓ = αx + y. Then we can write f^0 = x^m + ℓu, u ∈ R_{m−1}, without restriction. As
m − 1 ≥ e, there is v = v^0 + zv^1 + z^2 v^2 + ⋯ ∈ H^0(K(m − 1)) such that v^0 = u. As f is
determined only modulo ℓH^0(K(m − 1)), we can replace f by f̃ := f − ℓv; therefore we can
assume without restriction that f = f^0 + zf^1 + ⋯ + z^m f^m, where f^0 = x^m.
We now suppose that K is invariant under Γ, and will formulate conditions ensuring that I
is Γ-invariant, too. This is equivalent to the condition that f is Γ-invariant modulo
ℓH^0(K(m − 1)). By ([T2], Hilfssatz 1, p. 142) this is equivalent to the condition that
⟨x, y⟩∂f/∂z ⊂ ℓH^0(K(m − 1)). It follows that ℓ is a divisor of f^i, 1 ≤ i ≤ m, i.e., one
has f = x^m + ℓzg, g ∈ S_{m−2}.
Write g = g^0 + zg^1 + z^2 g^2 + ⋯, where g^i ∈ R_{m−2−i}, and choose
u = u^0 + zu^1 + ⋯ ∈ H^0(K(m − 2)) such that u^0 = g^0. This is possible if m − 2 ≥ e. As f
is determined only modulo ℓH^0(K(m − 1)), one can replace f by f̃ = f − ℓzu. It follows that
one can assume without restriction f = x^m + ℓz^2 g, where g ∈ S_{m−3}.
Choose u ∈ H^0(K(m − 3)) with u^0 = g^0; this is possible if m − 3 ≥ e. If this is the
case, replace f by f̃ = f − ℓz^2 u. It follows that one can assume without restriction
f = x^m + ℓz^3 g, where g ∈ S_{m−4}, etc. Finally one obtains f = x^m + z^{m−e}ℓg,
ℓ = αx + y, g ∈ S_{e−1}, and the Γ-invariance of f modulo ℓH^0(K(m − 1)) is equivalent to
⟨x, y⟩[(m − e)z^{m−e−1}ℓg + z^{m−e}ℓ∂g/∂z] ⊂ ℓH^0(K(m − 1)), i.e. equivalent to:
(2.1) ⟨x, y⟩[(m − e)g + z∂g/∂z] ⊂ H^0(K(e))
In the case ℓ = x, because of R_m = xR_{m−1} ⊕ y^m · k, one can write f^0 = y^m + xu, and
the same argumentation shows that one can write f = y^m + z^{m−e}xg, g ∈ S_{e−1}, and the
Γ-invariance can again be expressed by the inclusion (2.1).
2.4.3. Standard forms. Let I ⊂ O_{P^2} have colength d and Hilbert function ϕ, and let I
be invariant under G := Γ · T(ρ), where ρ_2 > 0. Moreover, we assume that g^*(ϕ) > g(d). By
Lemma 2.2 it follows that I = ℓK(−1) + f O_{P^2}(−m), if k = k̄ is supposed. As
H^0(I(m − 1)) = ℓH^0(K(m − 2)) is then invariant under G and m − 1 > e = reg(K), it follows
that ⟨ℓ⟩ and K are G-invariant. Assume that ⟨ax + by + z⟩ is Γ-invariant. But then
⟨ax + by + z⟩ = ⟨(a + α)x + (b + β)y + z⟩ for all α, β ∈ k, which is not possible. Thus we
have ℓ = ax + by. From ⟨λ_0 ax + λ_1 by⟩ = ⟨ax + by⟩ for all (λ_0, λ_1, λ_2) ∈ T(ρ) it would
follow that λ_0/λ_1 = 1 for all (λ_0, λ_1, λ_2) ∈ T(ρ), if a and b were both different from
0. But then it would follow that T(ρ) ⊂ T(1, −1, 0), and therefore ρ_2 = 0, a contradiction.
Therefore we have ℓ = x or ℓ = y, without restriction.
We consider the case ℓ = x, for example. As was shown in (2.4.2), we can write
f = y^m + z^{m−e}xg, e = reg(K), g ∈ S_{e−1}. From Appendix E it follows that
xH^0(K(m − 1)) has a standard basis of T(ρ)-semi-invariants f_i = m_i p_i(X^ρ), i.e., m_i is
a monomial, p_i is a polynomial in one variable with constant term 1, and m_i does not occur
in f_j if i ≠ j. Now each f_i is divisible by x, therefore y^m does not occur in f_i. If the
initial monomial m_i of f_i appears in f, then m_i has to appear in z^{m−e}xg. By choosing
α ∈ k suitably, one can achieve that m_i does not occur in f̃ := f − αf_i. As ρ_2 > 0 and
f_i is divisible by x, f̃ still has the shape y^m + z^{m−e}xg̃, g̃ ∈ S_{e−1}. By repeating
this procedure one can achieve that none of the m_i occurs in f = y^m + z^{m−e}xg (and f is
still invariant under Γ modulo xH^0(K(m − 1))). The same argumentation as in the proof of
the lemma in Appendix E then shows that f is automatically a T(ρ)-semi-invariant with initial
monomial y^m, and f together with the f_i forms a standard basis of H^0(I(m)). We summarize:
Lemma 2.6. Let I ⊂ O_{P^2_k} be an ideal of colength d, with Hilbert function ϕ, which is
invariant under G = Γ · T(ρ), where ρ_2 > 0. Assume that g(d) < g^*(ϕ). (It is not assumed
that k = k̄.) Then I = xK(−1) + f O_{P^2}(−m) or I = yK(−1) + f O_{P^2}(−m), where K is a
G-invariant ideal with colength(K) = c, reg(K) = e, and c + m = d. Moreover, f ∈ H^0(K(m))
can be written in the form f = y^m + z^{m−e}xg or in the form f = x^m + z^{m−e}yg,
respectively, where g ∈ S_{e−1}. We have ⟨x, y⟩∂f/∂z ⊂ xH^0(K(m − 1)) or
⟨x, y⟩∂f/∂z ⊂ yH^0(K(m − 1)), respectively, and each of these inclusions is equivalent to
the inclusion (2.1) in Section 2.4.2. One has
$$H^0(I(n)) = \begin{cases} xH^0(K(n-1)) & \text{if } n < m,\\ xH^0(K(n-1)) \oplus f\,k[y,z]_{n-m} & \text{if } n \geq m,\end{cases}$$
respectively
$$H^0(I(n)) = \begin{cases} yH^0(K(n-1)) & \text{if } n < m,\\ yH^0(K(n-1)) \oplus f\,k[x,z]_{n-m} & \text{if } n \geq m.\end{cases}$$
If one chooses a standard basis {f_i} of xH^0(K(m − 1)) or of yH^0(K(m − 1)), respectively,
then one can choose f in such a way that f has the form and the properties mentioned above
and together with the f_i forms a standard basis of H^0(I(m)).
Proof. If k = k̄ this follows from the foregoing discussion. In general one has, e.g.,
I ⊗_k k̄ = yK̄(−1) + f̄ O_{P^2 ⊗ k̄}(−m), where K̄ ⊂ O_{P^2 ⊗ k̄} and f̄ ∈ H^0(K̄(m)) have
the properties mentioned above. One sees that K̄ = I ⊗ k̄ : yO_{P^2 ⊗ k̄}(−1), and therefore
one has the exact sequence:
0 → (O_{P^2 ⊗ k̄}/K̄)(−1) → O_{P^2 ⊗ k̄}/I ⊗ k̄ → O_{P^2 ⊗ k̄}/(I ⊗ k̄ + yO_{P^2 ⊗ k̄}(−1)) → 0.
If K := I : yO_{P^2}(−1), then the sequence
0 → (O_{P^2}/K)(−1) → O_{P^2}/I → O_{P^2}/(I + yO_{P^2}(−1)) → 0
is exact, too. Tensoring this sequence with k̄ one obtains a commutative diagram with exact
rows,
0 → (O_{P^2 ⊗ k̄}/K ⊗ k̄)(−1) →^{·y} O_{P^2 ⊗ k̄}/I ⊗ k̄ → O_{P^2 ⊗ k̄}/(I ⊗ k̄ + yO_{P^2 ⊗ k̄}(−1)) → 0
0 → (O_{P^2 ⊗ k̄}/K̄)(−1) →^{·y} O_{P^2 ⊗ k̄}/I ⊗ k̄ → O_{P^2 ⊗ k̄}/(I ⊗ k̄ + yO_{P^2 ⊗ k̄}(−1)) → 0,
where the first vertical arrow is obtained from the canonical injection K ⊗ k̄ ↪ K̄ by
tensoring with O_{P^2 ⊗ k̄}(−1). It follows from this that K ⊗ k̄ ≅ K̄. Because of
H^0(I(m)) ⊗ k̄ ≅ H^0(I(m) ⊗ k̄), one obtains a standard basis of T(ρ)-semi-invariants of
H^0(I(m) ⊗ k̄) by tensoring a standard basis of H^0(I(m)) with ⊗_k 1_{k̄}. As the elements
of a standard basis are uniquely determined up to constant factors, it follows that
f̄ = f ⊗_k 1_{k̄}, where f ∈ H^0(K(m)). Therefore f has the form x^m + z^{m−e}yg,
g ∈ S_{e−1}, if f̄ has the form x^m + z^{m−e}yḡ, ḡ ∈ S_{e−1} ⊗ k̄. For reasons of dimension
it follows that H^0(I(n)) = yH^0(K(n − 1)) for n < m, and
H^0(I(n)) = yH^0(K(n − 1)) + f k[x, z]_{n−m} for n ≥ m. As the G-invariance of K follows
from the G-invariance of I, the remaining statements of Lemma 2.6 follow by the same
argumentation as in the case k = k̄.
Definition 2. The (uniquely determined) decomposition I = xK(−1) + f O_{P^2}(−m) or
I = yK(−1) + f O_{P^2}(−m) of Lemma 2.6 is called the x-standard form or the y-standard form
of I, respectively. (N.B. For the Hilbert function ϕ of I this definition implies that
g(d) < g^*(ϕ) is fulfilled.)
Corollary 2.3. Let R = k[x, y]. If I has x-standard form (resp. y-standard form), then
xR_{m−2} (resp. yR_{m−2}) is contained in H^0(I(m − 1)), and thus xR_{m−1} (resp. yR_{m−1})
is contained in H^0(I(m)).
Proof. One has m − 2 ≥ c by Lemma 2.4, and thus xR_{m−2} ⊂ xH^0(K(m − 2)) (resp.
yR_{m−2} ⊂ yH^0(K(m − 2))) by Appendix C, Remark 2.
Remark 2.3. If I has x-standard form or y-standard form, respectively, and if G_m acts on
S by σ(λ) : x ↦ x, y ↦ y, z ↦ λz, λ ∈ k^*, then I_0 := lim_{λ→0} σ(λ)I again has x-standard
form or y-standard form, respectively. This follows from h^0(I_0(n)) = h^0(I(n)), n ∈ N
(cf. [G2], Lemma 4, p. 542), because then H^0(I_0(n)) is generated by the initial monomials
of a standard basis of H^0(I(n)), n ∈ N. Figures 2.13a and 2.13b show the typical shape of
the pyramid formed by the initial monomials.
Remark 2.4. An ideal cannot have x-standard form and y-standard form at the same time.
For m = reg(I) and c = colength(I) are determined by the Hilbert function ϕ of I. If I had
x-standard form as well as y-standard form, then E(H^0(I_0(d))) would have the form shown in
Figure 2.14a. As I and I_0 have the same Hilbert function, ϕ has the form shown in
Figure 2.14b, and therefore g^*(ϕ) ≤ g^*(χ) = g(d), where χ is the Hilbert function of
(2.2.2).
Remark 2.5. (Notations and assumptions as in Remark 2.3.) I_∞ := lim_{λ→∞} σ(λ)I again
has x-standard form or y-standard form, respectively. The reasoning is as follows: The
Hilbert function ϑ of I_∞ (respectively the regularity μ of I_∞) is ≥ ϕ (respectively ≥ m).
This follows from the semicontinuity theorem. By Lemma 2.4 it follows that m ≥ c + 2,
therefore R_{m−2} ⊂ H^0(K(m − 2)) (cf. Appendix C, Remark 2). If I has x-standard form
(respectively y-standard form), then xR_{m−2} ⊂ H^0(I(m − 1)) (respectively
yR_{m−2} ⊂ H^0(I(m − 1))) follows. Therefore xR_{m−2} ⊂ H^0(I_∞(m − 1)) (respectively
yR_{m−2} ⊂ H^0(I_∞(m − 1))). It follows that xR_{μ−2} ⊂ H^0(I_∞(μ − 1)) (respectively
yR_{μ−2} ⊂ H^0(I_∞(μ − 1))). Now I_∞ has x-standard form or y-standard form in any case,
and the above inclusions show that I and I_∞ have the same kind of standard form.
2.5. The type of a G-invariant ideal
Let I ⊂ O_{P^2_k} be an ideal of colength d, with Hilbert function ϕ, invariant under
G = Γ · T(ρ), where ρ_2 > 0 and k is an extension field of C.
2.5.1.
Definition 3. 1° I has type (−1) if one of the following cases occurs:
1st case: g^*(ϕ) ≤ g(d), where d ≥ 5 by convention.
2nd case: 0 ≤ d ≤ 4.
2° We now assume g^*(ϕ) > g(d), which notation implies d ≥ 5. Then one has
I = ℓ_0 I_1(−1) + f_0 O_{P^2}(−m_0), where (ℓ_0, I_1, f_0, m_0) is as in Lemma 2.6. If I_1
has type (−1), then we say I has type 0. If I_1 is not of type (−1), then
I_1 = ℓ_1 I_2(−1) + f_1 O_{P^2}(−m_1), where (ℓ_1, I_2, f_1, m_1) is as in Lemma 2.6. If I_2
has type (−1), then we say I has type 1, etc. As d = colength(I) = colength(I_1) + m_0,
etc., the colengths of the ideals in question decrease and the procedure terminates. We have
Lemma 2.7. If I does not have type (−1), then one has a sequence of decompositions

(2.2)
I =: I0 = ℓ0 I1(−1) + f0 OP2(−m0),
I1 = ℓ1 I2(−1) + f1 OP2(−m1),
·································
I_{r−1} = ℓ_{r−1} Ir(−1) + f_{r−1} OP2(−m_{r−1}),
Ir = ℓr K(−1) + fr OP2(−mr).

For a given ideal Ii, (ℓi, Ii+1, fi, mi) is defined as in Lemma 2.6, where 0 ≤ i ≤ r and I_{r+1} := K. If di and ϕi are the colength and the Hilbert function, respectively, of Ii, then the inequality g(di) < g^*(ϕi) is fulfilled. The ideal K has type (−1). The colength (the Hilbert function, the regularity) of K is denoted by c (by ψ and κ, respectively).
3◦ We have already noted in Corollary 2.1 that the decompositions of I0, I1, · · · are essentially uniquely determined. Therefore the number r is uniquely determined. It is called the type of I. The types of the ideals occurring in (2.2), their Hilbert functions and the numbers m0, · · · , mr are uniquely determined by the Hilbert function ϕ.
2.5.2. Properties of an ideal of type r ≥ 0. The assumptions and notations are as before, and we assume that I has type r ≥ 0.
Lemma 2.8. In the decompositions (2.2) of I one has:
(a) colength(Ii) = colength(Ii+1) + mi, i.e., colength(Ii) = c + mr + · · · + mi, 0 ≤ i ≤ r.
(b) If r = 0, then m0 ≥ c + 2, where c = colength(K).
(c) If r ≥ 1, then m0 ≥ c + 2 + mr + · · · + m1 = colength(I1) + 2.
(d) If r ≥ 0, then m0 ≥ 2^r(c + 2).
Proof. (a) follows from the decompositions (2.2) and from Lemma 2.6.
(b) follows from Lemma 2.4.
(c) As the statement only depends on the Hilbert functions of I0, I1, · · · , I_{r+1} = K, one can assume without restriction that ℓi = x, 0 ≤ i ≤ r, and that Ii is B(3; k)-invariant, 0 ≤ i ≤ r + 1. Then one has
I = xI1(−1) + y^{m0} OP2(−m0) and I1 = xI2(−1) + y^{m1} OP2(−m1),
where I2 = K if r = 1.
We argue as in (2.3.4) and orient ourselves by Figure 2.15. If one had m0 − 1 − m1 ≤ colength(I2), then one could make the deformations
1 ↦ 1, . . . , m0 − 1 − m1 ↦ m0 − 1 − m1.
Then we would get g^*(ϕ) < · · · ≤ g(d), a contradiction, because from type r ≥ 0 it follows that g^*(ϕ) > g(d). Therefore one has m0 − 1 − m1 > colength(I2). Now colength(I2) = c if r = 1 (resp. colength(I2) = c + mr + · · · + m2 if r ≥ 2), as was shown in (a).
(d) If r = 0, this is statement (b). If r = 1, then by (b) and (c) it follows that m0 ≥ (c + 2) + m1 ≥ 2(c + 2). Now we assume r ≥ 2. We argue by induction and assume that mr ≥ c + 2, m_{r−1} ≥ 2(c + 2), . . . , m1 ≥ 2^{r−1}(c + 2). By (c) it follows that
m0 ≥ (c + 2) + (c + 2) + 2(c + 2) + · · · + 2^{r−1}(c + 2) = (c + 2)(1 + (2^r − 1)) = 2^r(c + 2).
In the case r = 1, the statement (c) is valid even if K = OP2, i.e., if c = 0. For then one can again assume without restriction that
I = xI1(−1) + y^{m0} OP2(−m0) and I1 = xOP2(−1) + y^{m1} OP2(−m1),
and then, because of colength(I1) = m1, it follows from Lemma 2.4 that m0 ≥ m1 + 2, i.e., one gets statement (c).
Corollary 2.4 (of the proof of Lemma 2.8). If I has type r ≥ 1, then mj + j < mi + i − 1 for all 0 ≤ i < j ≤ r.
Proof. We refer to Lemma 2.8 and use the same notations. If the sequence of decompositions (2.2) in (2.5.1) begins with Ii instead of I0, then from Lemma 2.8c it follows that m_{i−1} ≥ c + 2 + mr + · · · + mi. It follows that m_{i−1} − mi ≥ 2, therefore m_{i−1} + (i − 1) > mi + i. One gets
mr + r < m_{r−1} + (r − 1) < · · · < m1 + 1 < m0.
If mj + j = mi + (i − 1), then it would follow that j = i + 1 and therefore m_{i+1} = mi − 2. We will show that this is not possible. The ideal Ii has type r − i ≥ 1, colength c + mr + · · · + mi = di, and the Hilbert function ϕi of Ii fulfils the inequality g(di) < g^*(ϕi) (cf. Lemma 2.7). As to the Hilbert function ϕi, one obtains the situation of Figure 2.16, which corresponds to the Hilbert function ϕ. But then it follows that g^*(ϕi) ≤ g^*(χ′) = g(di), where the graph of χ′ is denoted by a dotted line (cf. the first case in (2.2.2), Figure 2.8, and the argument in the proof of Lemma 2.2).
[Figs. 2.1–2.8: graphs of Hilbert functions.]
[Figs. 2.9, 2.10: initial monomials, among them y^e, xy^{e−2}, x^2.]
[Figs. 2.11a, 2.11b: positions of the monomials u and v.]
[Fig. 2.12: graphs of ψ′(n − 1) and ϑ′(n − 2).]
[Figs. 2.13a, 2.13b: typical pyramids of initial monomials, with y^m and x^m, respectively.]
[Figs. 2.14a, 2.14b: the shapes of E(H^0(I0(d))) and of the Hilbert function ϕ in Remark 2.4.]
[Fig. 2.15: graphs of ϕ′_1(n − 1) and ϕ′_2(n − 2); #{monomials between the graph of ϕ′_2(n − 1) and the line y = x − 1} = colength(I2).]
[Fig. 2.16: the Hilbert function ϕi near the columns mi and m_{i+1} + 1.]
CHAPTER 3
A rough description of ideals invariant under Γ · T (ρ)
It seems impossible to characterize the ideals of colength d in OP2 which are invariant under G := Γ · T(ρ). If I is an ideal of type r in the sense of the definition in (2.5.1), however, then one can give a rough description of the forms f0, · · · , fr which occur in the decomposition (2.2) in (2.5.1). This description will be used later on for estimating the so-called α-grade of I, which will be defined in the next chapter.
3.1. Assumptions
Let I ⊂ OP2 be an ideal of colength d, invariant under Γ · T(ρ), where ρ2 > 0, as usual. We assume that I has type r ≥ 0. Then according to Lemma 2.7 one has a decomposition:

(Z)
I =: I0 = ℓ0 I1(−1) + f0 OP2(−m0),
I1 = ℓ1 I2(−1) + f1 OP2(−m1),
······························
Ii = ℓi Ii+1(−1) + fi OP2(−mi),
······························
Ir = ℓr I_{r+1}(−1) + fr OP2(−mr).

Using the notations of (loc. cit.), the ideal I_{r+1} := K has colength c and regularity κ.
If ℓi = x, then
H^0(Ii(n)) = xH^0(Ii+1(n − 1)) + fi k[y, z]_{n−mi},
and if ℓi = y, then
H^0(Ii(n)) = yH^0(Ii+1(n − 1)) + fi k[x, z]_{n−mi}
(cf. Lemma 2.6). From (Z) it follows that
H^0(I1(mi + i − 1)) = ℓ1 H^0(I2(mi + i − 2)),
H^0(I2(mi + i − 2)) = ℓ2 H^0(I3(mi + i − 3)),
·································
H^0(I_{i−1}(mi + 1)) = ℓ_{i−1} H^0(Ii(mi)),
H^0(Ii(mi)) = ℓi H^0(Ii+1(mi − 1)) + ⟨fi⟩,
for by Corollary 2.4 one has mi + 2 < m_{i−1}, i = 1, . . . , r. If one starts with H^0(I1(mi + i − 2)), then one obtains a similar system of equations, whose last line is
H^0(Ii(mi − 1)) = ℓi H^0(Ii+1(mi − 2)).
Conclusion 3.1. If 2 ≤ i ≤ r (resp. 1 ≤ i ≤ r), then H^0(I1(mi + i − 1)) = ℓ1 · · · ℓ_{i−1} H^0(Ii(mi)) (resp. H^0(I1(mi + i − 2)) = ℓ1 · · · ℓi H^0(Ii+1(mi − 2))).
3.2. Notations
We orient ourselves by Figure 3.1, which shows the initial monomials of H^0(I(m0)). The set of all monomials in S_{m0} with z-degree ≥ m0 − (c + r) (resp. with z-degree ≤ m0 − (c + r + 1)) is called the left domain and is denoted by LB (resp. is called the right domain and is denoted by RB). The monomials in S_{c−1} − in(H^0(K(c − 1))) form a basis of S_{c−1}/H^0(K(c − 1)). If we put ℓ = ℓ0 · · · ℓr, then ℓ[S_{c−1}/H^0(K(c − 1))] has a basis consisting of the c monomials in ℓS_{c−1} − ℓ·in(H^0(K(c − 1))).
The initial monomial of ℓ0 · · · ℓ_{i−1} fi · z^{m0−mi−i} is Miup := x^{i−ι(i)} y^{mi+ι(i)} z^{m0−mi−i} or Midown := x^{mi+i−ι(i)} y^{ι(i)} z^{m0−mi−i}, if ℓi = x or if ℓi = y, respectively. Here we have put, for 1 ≤ i ≤ r + 1, ι(i) := number of indices 0 ≤ j < i such that ℓj = y. We also put ι(0) = 0, so the formulas give the right result for i = 0, too.
For instance, in Figure 3.1 we have
r = 5, ℓ0 = x, ℓ1 = y, ℓ2 = ℓ3 = x, ℓ4 = ℓ5 = y,
and therefore
ι(1) = 0, ι(2) = ι(3) = ι(4) = 1, ι(5) = 2, ι(6) = 3.
(N.B. ℓ0 · · · ℓ_{i−1} = x^{i−ι(i)} y^{ι(i)}, if 0 < i ≤ r + 1.)
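For instance, with the data of Figure 3.1 one gets ℓ0ℓ1ℓ2ℓ3 = x · y · x · x = x^3 y = x^{4−ι(4)} y^{ι(4)}, in accordance with ι(4) = 1.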
As always we assume ρ2 > 0. If Midown occurs, then Ii = yIi+1(−1) + fi OP2(−mi). If Miup occurs, then Ii = xIi+1(−1) + fi OP2(−mi). By Lemma 2.8 one has mi ≥ (c + 2) + mr + · · · + m_{i+1} if r ≥ 1 (m0 ≥ c + 2 if r = 0). The colength of Ii+1 equals c + mr + · · · + m_{i+1}, and therefore Rn ⊂ H^0(Ii+1(n)) if n ≥ mi − 2 ≥ c + mr + · · · + m_{i+1} in the case r ≥ 1 (Rn ⊂ H^0(K(n)) if n ≥ c in the case r = 0). This follows from Remark 2 in Appendix C. The next generating element fi has the initial monomial Miup or Midown.
Conclusion 3.2. Suppose ρ2 > 0 and ρ1 is arbitrary. Then the columns of total degree mi + i − 2 and mi + i − 1 (in the variables x and y) which occur in in(H^0(I(m0))) also occur in H^0(I(m0)). Thus xMiup or yMidown, when it exists, is contained in I.
Proof. This follows from Conclusion 3.1, the decompositions (Z) and the foregoing discussion.
As always we assume ρ2 > 0. We have to distinguish between two cases (cf. 2.4.1
Auxiliary Lemma 1 and Auxiliary Lemma 2):
Main Case I: ρ0 > 0 and ρ1 < 0
Main Case II: ρ0 < 0 and ρ1 > 0
3.3. Description of f0 in Case I
If the initial monomial of f0 equals x^{m0}, then f0 = x^{m0} and ℓ0 = y. Therefore we may assume that f0 is a proper semi-invariant with initial monomial y^{m0}. Then I = I0 = xI1(−1) + f0 OP2(−m0), and we write f0 = y^{m0} + z^µ x·G, µ ∈ N maximal, G a T(ρ)-semi-invariant (cf. 2.4.3). If G0 is the initial monomial of G, then N := z^µ · x · G0 is called the vice-monomial of f0, which by assumption is not equal to zero.
The inclusion (2.1) of (2.4.2) reads
(3.1) ⟨x, y⟩[µG + z ∂G/∂z] ⊂ H^0(I1(m0 − µ)).
As for the initial monomial G0 of G, one has
(3.2) ⟨x, y⟩G0 ⊂ in(H^0(I1(m0 − µ))) = H^0(in(I1(m0 − µ))),
where in denotes the subspace or the initial ideal, respectively, generated by the initial monomials. (Equality follows, for example, from in(I) = lim_{λ→0} σ(λ)I and [G2], Lemmas 3 and 4.) Without restriction we may assume that G0 ∉ in(H^0(I1(m0 − µ − 1))), because we can reduce f0 modulo xH^0(I1(m0 − 1)).
(N.B.: The same statements are analogously valid in Case II, that is, if ρ1 > 0, I = yI1(−1) + f0 OP2(−m0), f0 = x^{m0} + z^µ yG, etc.)
It follows that one of the following cases occurs, where 1 ≤ i ≤ r:
1◦ N = Nidown := (z/x)Midown = x^{mi+i−ι(i)−1} y^{ι(i)} z^{m0−mi−i+1}
2◦ N = Niup := (z/y)Miup = x^{i−ι(i)} y^{mi+ι(i)−1} z^{m0−mi−i+1}
3◦ ⟨x, y⟩N ⊂ ℓ·in(H^0(K(m0 − r))), where ℓ := ℓ0 · · · ℓr and N is a monomial in
L := ℓS_{m0−(r+1)} − ℓ·in(H^0(K(m0 − r − 1))).
Notabene: One has L = [ℓS_{c−1} − ℓ·in(H^0(K(c − 1)))]·z^{m0−r−c}, because of
⊕_{ν=c}^{m0−r−1} z^{m0−r−1−ν} Rν ⊂ H^0(K(m0 − r − 1)).
As µ is maximal and ρ2 > 0, G0 is not divisible by z. In the cases 1◦ and 2◦ we therefore have µ = m0 − mi − i + 1. The case 3◦ will be treated in (3.3.6); we now assume case 1◦ or 2◦. Then (3.1) can be written as
⟨x, y⟩[µG + z ∂G/∂z] ⊂ H^0(I1(mi + i − 1)).
If we put h := ℓ1 · · · ℓ_{i−1}, then h = x^{i−1−ι(i)} y^{ι(i)}; because of ℓ0 = x one has ι(i) = number of indices j such that 0 < j < i and ℓj = y. If i = 1, then h := 1. By Conclusion 3.1 it follows from (3.1) that G is divisible by h, that is, one has G = hg, g ∈ S_{mi−1}. Therefore we can write
f0 = y^{m0} + z^µ xhg,
and (3.1) is equivalent to:
(3.3) ⟨x, y⟩[µg + z ∂g/∂z] ⊂ H^0(Ii(mi)).
3.3.1. We assume case 1◦. As Nidown occurs, Ii = yIi+1(−1) + x^{mi} OP2(−mi). If g^0 is the initial monomial of the form g (cf. the inclusion (3.3)), then z^µ xG0 = hg^0 xz^µ = x^{i−ι(i)} y^{ι(i)} z^µ g^0 = Nidown. Because of µ = m0 − mi − i + 1 it follows that g^0 = x^{mi−1}. Representing the initial monomials of H^0(Ii(mi)) in the same way as the initial monomials of H^0(I0(m0)), one sees that southwest of Nidown there is no further monomial which occurs in f0.
Conclusion 3.3. Let ρ1 < 0. If the vice-monomial of f0 equals Nidown, then f0 = y^{m0} + αNidown, where α ∈ k. As fi = Midown is a monomial, by Conclusion 3.2 it follows that all monomials which have the same z-degree as Mi and are elements of in(H^0(I(m0))) are also elements of H^0(I(m0)). Therefore we have (x, y)y^{m0} ⊂ I.
3.3.2. We assume case 2◦. Then y^{m0} X^{νρ} = Niup, where ν > 0 and 1 ≤ i ≤ r. This statement is equivalent to the equations:
(3.4) νρ0 = i − ι(i), νρ1 = mi + ι(i) − 1 − m0, νρ2 = m0 − mi − i + 1.
Lemma 3.1. If one has f0 = y^{m0} + αNiup + · · · , where 1 ≤ i ≤ r and α ∈ k − {0}, then ρ2 > m_{i+1} = reg(Ii+1), where we put I_{r+1} := K and m_{r+1} := κ = reg(K). In particular, Ij is a monomial ideal for all j > i.
Proof. As we have assumed ρ1 < 0, by (2.4.1, Auxiliary Lemma 2) one has ρ0 > 0. Therefore ν ≤ i. On the other hand, ρ2 = (m0 − mi − i + 1)/ν ≥ (m0 − mi − i + 1)/i; thus it suffices to show (m0 − mi − i + 1)/i > m_{i+1}, i.e.
(3.5) m0 > mi + i · m_{i+1} + (i − 1).
We start with i = r, in which case one has to show m0 > mr + rκ + (r − 1). This is valid if r = 0, so we may assume r ≥ 1. By Lemma 2.8 one has m0 ≥ c + 2 + mr + · · · + m1, thus it suffices to show c + 2 + mr + · · · + m1 > mr + rκ + (r − 1). If r = 1, then this inequality is equivalent to c + 2 > κ, and this is true. Therefore one can assume without restriction that r > 1, and one has to show
c + 2 + m_{r−1} + · · · + m1 > rκ + (r − 1).
Because of κ ≤ c it suffices to show 2 + m_{r−1} + · · · + m1 > (r − 1)(κ + 1), which is true because of κ ≤ c < m_{r−1} < · · · < m1 (cf. Lemma 2.4 and Corollary 2.4).
Now we assume 1 ≤ i < r, in which case it suffices to show:
c + 2 + mr + · · · + m1 > mi + i · m_{i+1} + (i − 1)
⇐⇒ (c + 2) + (mr + · · · + m_{i+2}) + (m_{i−1} + · · · + m1) > (i − 1)m_{i+1} + (i − 1).
On the left side of this inequality the second summand (resp. the third summand) does not occur if i + 1 = r (resp. if i = 1). If i = 1 and r = 2, the inequality reads c + 2 > 0. If i = 1 and r ≥ 3, the inequality reduces to (c + 2) + mr + · · · + m3 > 0. Thus the case i ≥ 2 remains, and it suffices to show m_{i−1} + · · · + m1 > (i − 1)(m_{i+1} + 1), which is true because of m_{i+1} < mi < · · · < m1 (loc. cit.).
3.3.3. Description of the ideal Ii. The assumptions are the same as in Lemma 3.1, but we slightly change the notations and write I = Ii, K = Ii+1, m = mi, e = reg(K). (Thus e = m_{i+1} or e = κ.) We have the following situation: I = xK(−1) + f OP2(−m) is Γ · T(ρ)-invariant, K is monomial, ρ1 < 0, ρ2 > e = reg(K). From the results of (2.4.2) and Lemma 2.6 it follows that f = y^m + z^{m−e} xg, where g ∈ S_{e−1} and
(3.6) ⟨x, y⟩[(m − e)g + z ∂g/∂z] ⊂ H^0(K(e)),
where f ∈ H^0(K(m)) is determined only modulo xH^0(K(m − 1)), and we can assume that f is a T(ρ)-semi-invariant. Then g is a T(ρ)-semi-invariant, too. Thus we can write g = N(1 + a1 X^ρ + · · · ), where N ∈ S_{e−1} is a monomial. If we had a1 ≠ 0, for example, then NX^ρ ∈ S_{e−1} would have z-degree ≥ ρ2. As ρ2 > e (cf. Lemma 3.1), this is impossible.
Conclusion 3.4. Under the assumptions mentioned above, the form g in (3.6) equals a monomial N ∈ S_{e−1} − H^0(K(e − 1)) such that ⟨x, y⟩N ⊂ H^0(K(e)). It follows that (x, y)y^m ⊂ I.
3.3.4. We go back to the notations of Lemma 3.1, i.e. one has N = Niup. As in case 1◦ one has µ = m0 − mi − i + 1. As y^{m0} is the initial monomial of f0, ℓ0 equals x and h := ℓ1 · · · ℓ_{i−1} equals x^{i−1−ι(i)} y^{ι(i)} as before. If one puts ḡ := µg + z∂g/∂z, then (3.3) can be written as
(3.7) ⟨x, y⟩ḡ ⊂ xH^0(Ii+1(mi − 1)) + ⟨fi⟩,
where fi has the initial monomial y^{mi}; thus
(3.8) ḡ ∈ H^0(Ii+1(mi − 1)).
The initial monomial of ḡ equals the initial monomial of g; by construction this is equal to y^{mi−1}. (Proof: The initial monomial of xhgz^µ = xGz^µ is equal to Niup by assumption. Thus the initial monomial g^0 of g fulfils the equation
xhg^0 z^µ = x^{i−ι(i)} y^{mi+ι(i)−1} z^{m0−mi−i+1}.
As xh = x^{i−ι(i)} y^{ι(i)} and µ = m0 − mi − i + 1, it follows that g^0 = y^{mi−1}.) From (3.7) it follows that
(3.9) yḡ = xF + αfi, F ∈ H^0(Ii+1(mi − 1)), α ∈ k^*.
Now we can write fi = y^{mi} + z^{mi−m_{i+1}} xu (cf. the last sentence in (2.4.2)), where u is either a monomial in S_{m_{i+1}−1} − H^0(Ii+1(m_{i+1} − 1)) (cf. Conclusion 2.4), or u = 0.
We consider the first case. Then it follows that z^{mi−m_{i+1}} xu = βy^{mi} X^{νρ}, where β ∈ k^* and ν ≥ 1. As fi can be reduced modulo xH^0(Ii+1(mi − 1)), we have z^{mi−m_{i+1}} xu ∉ xH^0(Ii+1(mi − 1)) without restriction. From (3.9) it follows that in ḡ, besides y^{mi−1}, the monomial ū := z^{mi−m_{i+1}} xu/y occurs.
Suppose there is another monomial v in ḡ. Then one would have yv ∈ xH^0(Ii+1(mi − 1)), and from (3.8) it would follow that v is an element of the monomial subspace H^0(Ii+1(mi − 1)). Figure 3.2 shows H^0(Ii(mi)) = xH^0(Ii+1(mi − 1)) + ⟨fi⟩ marked in black and H^0(Ii+1(mi)) marked in blue. Suppose that v ∈ H^0(Ii(mi − 1)) = xH^0(Ii+1(mi − 2)). By construction v occurs in ḡ, therefore xhv = ℓ0 · · · ℓ_{i−1} v occurs in xG and z^µ xhv occurs in z^µ xG. On the other hand, z^µ xhv ∈ z^µ xℓ1 · · · ℓ_{i−1} xH^0(Ii+1(mi − 2)) = z^µ xH^0(I1(mi + i − 2)) ⊂ xH^0(I1(m0 − 1)), because ℓi = x and so Conclusion 3.1 can be applied. As f0 can be reduced modulo xH^0(I1(m0 − 1)), one can assume without restriction that v does not occur in ḡ and therefore does not occur in the inclusions (3.7) or (3.8), which have to be fulfilled. Thus we can assume without restriction that v ∈ H^0(Ii+1(mi − 1)) − H^0(Ii(mi − 1)). From yv ∈ xH^0(Ii+1(mi − 1)) it follows that v is equal to one of the monomials in Figure 3.2 which are denoted by ?. Therefore the z-degree of v is ≥ mi − m_{i+1}.
By construction, the z-degree of ū is ≥ mi − m_{i+1}, too. As ū and v occur in the semi-invariant ḡ, the two monomials differ by a factor of the form X^{νρ}. As ρ2 > m_{i+1} (Lemma 3.1), it follows that ν = 0, i.e., ū and v differ by a constant factor. Therefore one has
ḡ = βy^{mi−1} + γz^{mi−m_{i+1}} xu/y; β, γ ∈ k^*.
We have to describe the position of xu more exactly. From xu ∉ xH^0(Ii+1(m_{i+1} − 1)) but ⟨x, y⟩xu ⊂ xH^0(Ii+1(m_{i+1})) and from the Γ-invariance of Ii+1 it follows that z^{m0−m_{i+1}} xu equals Njdown or Njup, where j is an index with r ≥ j > i, or equals a monomial L ∈ L (cf. 3.3).
Suppose z^{m0−m_{i+1}}·xu = Njdown. Then ū = z^{mi−m_{i+1}} xu/y is southwest of Njdown/z^{m0−mi} and does not occur in the monomial subspace H^0(Ii+1(mi − 1)), which contradicts (3.8). Finally we note that the monomials of ḡ agree with the monomials of g.
Conclusion 3.5. Assume that f0 has the vice-monomial Niup and that fi is not a monomial. Then (up to a constant factor) fi = Miup + αNjup, where 1 ≤ i < j ≤ r and α ∈ k^*, or fi = Miup + αL, where L ∈ L is a monomial such that (x, y)L ⊂ ℓK(−r − 1), and α ∈ k^*.
It then follows that f0 = y^{m0} + βNiup + γNjup·(z/y) or f0 = y^{m0} + βNiup + γL·(z/y), respectively. Here β and γ are elements of k^*. Finally, all monomials which occur in xf0 or in y^2 f0 also occur in I.
Proof. The shape of f0 and of fi results from the foregoing argument. Conclusion 3.4 gives (x, y)L ⊂ ℓK(−r − 1). The statements concerning the monomials in xf0 follow from Conclusion 3.2. By Lemma 3.1, Ij is monomial, thus yNjup = zMjup ∈ I and therefore yMiup ∈ I. Furthermore y^2 Njup·(z/y) = yNjup z ∈ I, and likewise y^2 L·(z/y) = yLz ∈ I. From this the assertion concerning the monomials of y^2 f0 follows.
Now we come to the case that fi = y^{mi}. The above reasoning has shown that y^{mi−1} occurs in ḡ. Suppose that another monomial v occurs in ḡ. The same argument as before shows that v equals one of the monomials in Figure 3.2 denoted by ?. Thus v is equal to Njup or Njdown for an index i < j ≤ r, or is equal to a monomial L ∈ L such that (x, y)L ⊂ ℓK(−r − 1). Furthermore, there can be only one such monomial v.
Conclusion 3.6. If f0 has the vice-monomial Niup and if fi = y^{mi}, then f0 = y^{m0} + αNiup + βNjup or f0 = y^{m0} + αNiup + βNjdown, where 1 ≤ i < j ≤ r, or f0 = y^{m0} + αNiup + βL, where L ∈ L is a monomial such that (x, y)L ⊂ ℓK(−r − 1). All monomials occurring in xf0 or yf0 also occur in I.
Proof. The statements concerning the shape of f0 follow from the foregoing discussion. As Ii+1 is monomial by Lemma 3.1 and as fi is a monomial, Ii is monomial and (x, y)L, (x, y)Niup and (x, y)Njdown are contained in I.
3.3.5. Suppose that in the forms f0 and fj, where 1 ≤ j ≤ r, as in Conclusion 3.5 or Conclusion 3.6, there occur three monomials with coefficients different from zero. We call f0 (resp. fj) a trinomial, and we then have f0 = y^{m0} + αNiup + βE0 and fj = Mjup + γNkup + δEj, where the “final monomials” E0 and Ej have the shape described in Conclusion 3.5 and Conclusion 3.6, and where α, β, γ, δ ∈ k^*. If i > k, then mi ≤ m_{k+1} < ρ2 (Lemma 3.1). As we are in the case ρ0 > 0, ρ2 > 0, it follows that |ρ1| > mi, and as Niup and E0 both occur in the semi-invariant f0, one has E0 = Niup · X^{νρ}, where ν ≥ 1. Looking at Figure 3.2, one sees that then E0 cannot be an element of S_{m0}, a contradiction. In the same way it follows that i < k is not possible, and thus we have i = k. As f0 and fj are T(ρ)-semi-invariants, it follows that y^{m0} · X^{νρ} = Mjup, where ν ≥ 1. We conclude from this that
νρ0 = j − ι(j), νρ1 = mj + ι(j) − m0, νρ2 = m0 − mj − j,
where 1 ≤ j ≤ r. We want to show that this implies ρ2 > m_{j+1}. Similarly as in the proof of Lemma 3.1, it suffices to show m0 > mj + jm_{j+1} + j. We start with the case j = r. Then again m_{r+1} := κ, and one has to show m0 > mr + rκ + r. The case r = 0 cannot occur. If r = 1, the inequality reads m0 > m1 + κ + 1, and because of m0 ≥ c + 2 + m1 (Lemma 2.8) and κ ≤ c this is true. Therefore we can assume r ≥ 2.
Because of (loc. cit.) it suffices to show c + 2 + mr + · · · + m1 > mr + rκ + r. As κ ≤ c it suffices to show 2 + m_{r−1} + · · · + m1 > (r − 1)κ + r ⇐⇒ 1 + m_{r−1} + · · · + m1 > (r − 1)(κ + 1). Because of κ ≤ c < mr < · · · < m1 this is true (cf. Lemma 2.4 and Corollary 2.4).
We now assume 1 ≤ j < r, in which case it suffices to show:
c + 2 + mr + · · · + m1 > mj + jm_{j+1} + j.
If j = 1 this inequality reads c + 2 + mr + · · · + m1 > m1 + m2 + 1, and this is true because r ≥ 2. Thus we can assume 2 ≤ j < r, and the inequality which has to be shown is equivalent to
(c + 1) + (mr + · · · + m_{j+2}) + (m_{j−1} + · · · + m1) > (j − 1)(m_{j+1} + 1).
Because of m_{j+1} < mj < · · · < m1 (loc. cit.), this is true.
We have thus proved that y^{m0} X^{νρ} = Mjup implies
(3.10) ρ2 > m_{j+1}.
As k > j by definition, we have mk ≤ m_{j+1} < ρ2, and the same argument as before, carried out with Niup and E0, shows that Nkup and Ej cannot occur simultaneously in fj.
Conclusion 3.7. Among the forms fi, 0 ≤ i ≤ r, which have the shape described in Conclusion 3.5 and Conclusion 3.6, there occurs at most one trinomial.
3.3.6. We now consider the case 3◦. The following argument is, first of all, independent of the sign of ρ1. We write f0 = f^0 + z^µ g, where f^0 = y^{m0} if ℓ0 = x and f^0 = x^{m0} if ℓ0 = y. If we choose µ maximal, then we have µ ≥ m0 − (κ + r). For as Rκ ⊂ in(H^0(K(κ))), the initial monomial of g has z-degree > m0 − (κ + r + 1), i.e., the initial monomial occurs in a column of total degree in x and y smaller than or equal to κ + r.
From the Γ-invariance of f0 modulo ℓ0 H^0(I1(m0 − 1)) it follows that ⟨x, y⟩∂f0/∂z = ⟨x, y⟩[µz^{µ−1} g + z^µ ∂g/∂z] ⊂ ℓ0 H^0(I1(m0 − 1)). From the decompositions in (Z) (cf. 3.1) we conclude that H^0(Ii(n)) = ℓi H^0(Ii+1(n − 1)) if n < mi. Now
m0 − µ < κ + r + 1 < mr + r < · · · < m1 + 1 < m0
(cf. Corollary 2.4) and thus
H^0(I1(m0 − µ)) = ℓ1 H^0(I2(m0 − µ − 1)),
H^0(I2(m0 − µ − 1)) = ℓ2 H^0(I3(m0 − µ − 2)),
················································
H^0(I_{r−1}(m0 − µ − r + 2)) = ℓ_{r−1} H^0(Ir(m0 − µ − r + 1)),
H^0(Ir(m0 − µ − r + 1)) = ℓr H^0(K(m0 − µ − r)).
It follows that H^0(I1(m0 − µ)) = ℓ1 · · · ℓr H^0(K(m0 − µ − r)) and therefore
⟨x, y⟩[µg + z ∂g/∂z] ⊂ ℓH^0(K(m0 − µ − r)), where ℓ := ℓ0 · · · ℓr = x^a y^b
and a (resp. b) is the number of ℓi = x (resp. the number of ℓi = y). This implies that g is divisible by ℓ, and changing the notation we can write f0 = f^0 + ℓz^µ g, where ℓ = ℓ0 · · · ℓr, µ ≥ m0 − (κ + r) is maximal, g ∈ S_{m0−µ−r−1} and
⟨x, y⟩[µg + z ∂g/∂z] ⊂ H^0(K(m0 − µ − r)).
Now f0 ∈ H^0(K(m0)) or f0 ∈ H^0(I1(m0)), and m0 ≥ colength(K) + 2 if r = 0, or m0 ≥ colength(I1) + 2 if r ≥ 1, respectively (cf. Lemma 2.6 and Lemma 2.8). Therefore R_{m0} ⊂ H^0(K(m0)) or R_{m0} ⊂ H^0(I1(m0)), respectively (cf. Appendix C, Remark 2). It follows that ℓz^µ g ∈ H^0(K(m0)) (or ℓz^µ g ∈ H^0(I1(m0)), respectively) and thus ℓg ∈ H^0(K(m0 − µ)) (or ℓg ∈ H^0(I1(m0 − µ)) = ℓ1 · · · ℓr H^0(K(m0 − µ − r)), respectively).
From this we conclude that ℓ0 g ∈ H^0(K(m0 − µ − r)) in any case. We also note that g ∉ H^0(K(m0 − µ − r − 1)) without restriction, because otherwise
ℓg ∈ ℓH^0(K(m0 − µ − r − 1)) = ℓ0 H^0(I1(m0 − µ − 1)),
and thus z^µ ℓg ∈ ℓ0 H^0(I1(m0 − 1)) would follow. But then ℓg could be deleted.
Conclusion 3.8. If the vice-monomial of f0 is different from all monomials Niup and Njdown, then we can write
f0 = f^0 + z^µ ℓg, where µ ≥ m0 − (κ + r) is maximal, ℓ := ℓ0 · · · ℓr,
f^0 = y^{m0} if ℓ0 = x (or f^0 = x^{m0} if ℓ0 = y, respectively),
g ∈ S_{m0−µ−r−1} − H^0(K(m0 − µ − r − 1)), ℓ0 g ∈ H^0(K(m0 − µ − r)),
and ⟨x, y⟩[µg + z ∂g/∂z] ⊂ H^0(K(m0 − µ − r)).
3.4. Order of f0 in the case 3◦
We keep the notations of Conclusion 3.8, but now we write h := ℓ0 · · · ℓr. In order to simplify the notations, the cases ρ1 > 0 and ρ1 < 0 will be treated separately. As always we assume ρ2 > 0.
3.4.1. Let ρ1 > 0 (and thus ρ0 < 0). Conclusion 3.8 is still valid if x and y are interchanged. Thus one can write f0 = x^{m0} + z^µ hg, ℓ0 = y and
H^0(I(d)) = yH^0(I1(d − 1)) ⊕ ⟨{f0 x^n z^{d−m0−n} | 0 ≤ n ≤ d − m0}⟩,
where I1 := K if r = 0 (cf. Lemma 2.6).
Suppose there is a ν with 0 ≤ ν ≤ d − m0 such that x^{ν+1} f0 ≡ x^{m0+ν+1} modulo hH^0(K(m0 + ν − r)). Because of
hH^0(K(m0 + ν − r)) = yℓ1 · · · ℓr H^0(K(m0 + ν − r)) ⊂ yH^0(I1(m0 + ν))
it then follows that
H^0(I(d)) = yH^0(I1(d − 1)) ⊕ ⟨{f0 x^n z^{d−m0−n} | 0 ≤ n ≤ ν}⟩ ⊕ ⟨{x^{m0+n} z^{d−m0−n} | ν + 1 ≤ n ≤ d − m0}⟩.
If one wants to determine the so-called α-grade of ∧ψα(H^0(I(d))) (cf. Chapter 4), then it is easier to estimate the contribution coming from the monomials in H^0(I(d)). In the following definition we assume ρ2 > 0, and the sign of ρ1 is arbitrary.
Definition 4. The order of f0 is the smallest natural number ν such that x^{ν+1} f0 ≡ x^{m0+ν+1} modulo hH^0(K(m0 + ν − r)) (such that y^{ν+1} f0 ≡ y^{m0+ν+1} modulo hH^0(K(m0 + ν − r)), respectively) if ρ1 > 0 (if ρ1 < 0, respectively). Here we have put h := ℓ0 · · · ℓr (cf. (Z) in 3.1).
Remark 3.1. If ρ1 > 0, then ρ0 < 0, and from Lemma 2.6 it follows that f0 has the initial monomial M0down = x^{m0}. Then Conclusion 3.2 gives yx^{m0} ∈ I. On the other hand, if ρ1 < 0, the same argument shows that f0 has the initial monomial y^{m0} and xy^{m0} ∈ I.
We now take up again the assumption ρ1 > 0 from the beginning of (3.4.1).
Subcase 1: ℓ0 = x. As ρ0 < 0 one has f0 = y^{m0}.
Subcase 2: ℓ0 = y. Putting h := x^a y^b one can write
f0 = x^{m0} + z^µ hg = x^{m0}(1 + x^{a−m0} y^b z^µ g).
As f0 is a T(ρ)-semi-invariant, one has
x^{a−m0} y^b z^µ g = X^{γρ} p(X^ρ), where γ ∈ N − {0}, p(X^ρ) = a0 + a1 X^ρ + . . . + as X^{sρ} and ai ∈ k.
We now assume f0 is not a monomial. If both µ and γ are chosen maximal, then a0 ≠ 0, and we can write g = Mp(X^ρ), where M := x^{m0−a+γρ0} y^{γρ1−b} and µ = γρ2. From the Γ-invariance of f0 modulo ℓ0 H^0(I1(m0 − 1)) it follows that
⟨x, y⟩hz^{µ−1}[µg + z∂g/∂z] ⊂ ℓ0 H^0(I1(m0 − 1)),
thus
⟨x, y⟩ℓ1 · · · ℓr [µg + z∂g/∂z] ⊂ H^0(I1(m0 − µ)).
We had already obtained H^0(I1(m0 − µ)) = ℓ1 · · · ℓr H^0(K(m0 − µ − r)) in (3.3.6). We conclude that
⟨x, y⟩[µg + z∂g/∂z] ⊂ H^0(K(m0 − µ − r))
and therefore:
xg ≡ −(1/µ) xM[ρ2 a1 X^ρ + 2ρ2 a2 X^{2ρ} + · · · + sρ2 as X^{sρ}] modulo H^0(K(m0 − µ − r)).
Assume that s = degree(p) > 0. Then as ≠ 0 and j := min{i > 0 | ai ≠ 0} is an integer between 1 and s inclusive. Then one can write:
xg ≡ −(1/µ) xMX^{jρ}[jρ2 aj + (j + 1)ρ2 a_{j+1} X^ρ + · · · + sρ2 as X^{(s−j)ρ}] modulo H^0(K(m0 − µ − r)).
From this it follows that
xf0 = x^{m0+1} + hz^µ xg ≡ f̃ modulo hH^0(K(m0 − r)),
where f̃ := x^{m0+1} + hz^{µ̃} M̃ p̃(X^ρ) and µ̃ := µ + jρ2 > µ, M̃ := xMx^{jρ0} y^{jρ1},
p̃(X^ρ) := ã0 + ã1 X^ρ + · · · + ã_{s−j} X^{(s−j)ρ},
and finally ã0 := −(1/µ) jρ2 aj ≠ 0, · · · , ã_{s−j} := −(1/µ) sρ2 as ≠ 0.
Remark 3.2. f̃ is again a T(ρ)-semi-invariant, because
x^{−(m0+1)} hz^{µ̃} M̃ = x^{−m0−1} · x^a y^b · z^{µ+jρ2} · x · x^{m0−a+γρ0} · y^{γρ1−b} · x^{jρ0} · y^{jρ1} = x^{(γ+j)ρ0} y^{(γ+j)ρ1} z^{(γ+j)ρ2}.
Remark 3.3. f̃ is only determined modulo ℓ0 H^0(I1(m0)), as hH^0(K(m0 − r)) ⊂ ℓ0 H^0(I1(m0)).
We continue the above discussion and get at once xf0 ≡ x^{m0+1} modulo hH^0(K(m0 − r)) if s = 0. In any case we have degree(p̃) < degree(p), and continuing in this way, the polynomial part vanishes after at most s + 1 steps. This means there is an integer 0 ≤ ν ≤ s such that
(3.11) x^{ν+1} f0 ≡ x^{m0+ν+1} modulo hH^0(K(m0 − r + ν)).
3.4.2. We now assume ρ1 < 0. If f0 is a monomial, then the order of f0 is zero by definition. Thus we may assume without restriction that ρ0 > 0 (cf. 2.4.1, Auxiliary Lemma 2).
Subcase 1: ℓ0 = y. Then f0 = x^{m0} (Lemma 2.6).
Subcase 2: ℓ0 = x. With notations analogous to those of (3.4.1) one has: f0 = y^{m0}(1 + x^a y^{b−m0} z^µ g) is a semi-invariant, therefore x^a y^{b−m0} z^µ g = X^{γρ} p(X^ρ), γ ∈ N − {0}. If f0 is not a monomial and µ and γ are chosen maximal, then one can write g = Mp(X^ρ), where µ = γρ2, M = x^{γρ0−a} y^{m0−b+γρ1} and p(X^ρ) = a0 + a1 X^ρ + · · · + as X^{sρ}. Furthermore one gets yf0 ≡ f̃ modulo hH^0(K(m0 − r)), where f̃ := y^{m0+1} + hz^{µ̃} M̃ p̃(X^ρ), µ̃ := µ + jρ2, M̃ := yMx^{jρ0} y^{jρ1}, and p̃ is defined as in (3.4.1). It is clear that one gets the statements corresponding to those of (3.4.1) if x and y as well as a and b are interchanged. This means there is an integer 0 ≤ ν ≤ s such that
(3.12) y^{ν+1} f0 ≡ y^{m0+ν+1} modulo hH^0(K(m0 − r + ν)).
3.4.3. Estimate of the order of f0. The order has been defined as the smallest natural number ν such that x^{ν+1} f0 ≡ x^{m0+ν+1} (resp. y^{ν+1} f0 ≡ y^{m0+ν+1}) modulo hH^0(K(m0 − r + ν)). We keep the notations of (3.4.1) and (3.4.2). From γρ2 = µ ≥ m0 − (κ + r) (cf. 3.3.6) it follows that γ ≥ [m0 − (κ + r)]/ρ2. On the other hand, x^{m0−a+γρ0} · x^{sρ0} has to be a monomial in case ρ1 > 0 (y^{m0−b+γρ1} · y^{sρ1} has to be a monomial in case ρ1 < 0, respectively). That means m0 − a + (γ + s)ρ0 ≥ 0 (m0 − b + (γ + s)ρ1 ≥ 0, respectively). Now one has ρ0 < 0 if ρ1 > 0, and ρ0 > 0 if ρ1 < 0 (cf. 2.4.1, Auxiliary Lemma 2). It follows that
γ + s ≤ (m0 − a)/|ρ0| if ρ1 > 0,
γ + s ≤ (m0 − b)/|ρ1| if ρ1 < 0.
Here a (resp. b) is the number of ℓi, 0 ≤ i ≤ r, such that ℓi = x (resp. ℓi = y).
In the case ρ1 > 0 we obtain, because of |ρ0| = ρ1 + ρ2:
s ≤ (m0 − a)/(ρ1 + ρ2) − (m0 − (κ + r))/ρ2.
And in the case ρ1 < 0, because of |ρ1| = ρ0 + ρ2, we obtain
s ≤ (m0 − b)/(ρ0 + ρ2) − (m0 − (κ + r))/ρ2.
From the congruences (3.11) and (3.12) we get
Conclusion 3.9. The order s of f0 fulfils the inequality:
s ≤ κ/ρ2 − m0(1/ρ2 − 1/(ρ1 + ρ2)) + r/ρ2 − a/(ρ1 + ρ2), if ρ1 > 0,
s ≤ κ/ρ2 − m0(1/ρ2 − 1/(ρ0 + ρ2)) + r/ρ2 − b/(ρ0 + ρ2), if ρ1 < 0.
3.5. Summary in the Case I
We recall that this means ρ0 > 0, ρ2 > 0 and ρ1 < 0.
Proposition 3.1. Let ρ2 > 0 and ρ1 < 0.
(a) The following cases can a priori occur:
1st case: One of the fi, say f_{i0}, has as vice-monomial an “upper empty corner” Njup. Then ρ2 > κ, thus K is monomial, and the fi have one of the following forms:
(1) fi = Midown
(2) fi = Miup + αNjdown
(3) fi = Miup + αNjup + βNkup
(4) fi = Miup + αNjup + βNkdown
(5) fi = Miup + αNjup + βL, L ∈ L a monomial such that (x, y)·L ⊂ ℓK(−r − 1), ℓ := ℓ0 · · · ℓr
(6) fi = Miup + αNjup + βNkup·(z/y), α, β ∈ k^*
(7) fi = Miup + αNjup + βL·(z/y), L ∈ L a monomial such that (x, y)·L ⊂ ℓK(−r − 1), α, β ∈ k^*.
Here α and β are (a priori) arbitrary elements of k and 0 ≤ i < j < k ≤ r.
2nd case: None of the fi has as vice-monomial an “upper empty corner” Njup. Then each of the fi has one of the following forms:
(1) fi = Miup + αNjdown
(2) fi = Midown
(3) fi = Miup + F, F = Σ αj Lj, Lj ∈ L monomial, αj ∈ k^*, such that xF ∈ ℓK(−r − 1), and for suitable βj ∈ k^* and G := Σ βj Lj one has (x, y)·G ⊂ ℓK(−r − 1).
(b) In the 1st case there is at most one trinomial, i.e. there is at most one fi which has the shape as in one of the cases 1.3–1.7 with α and β different from zero.
(c) Let M := ∪_{n≥0} {monomials in H^0(I(n))} and let ⟨M⟩ be the subspace of S generated by these monomials. In the 1st case one has:
• xfi ∈ ⟨M⟩ for all 0 ≤ i ≤ r;
• yfi ∈ ⟨M⟩ if fi has the shape as in one of the cases 1.1–1.5;
• y²fi ∈ ⟨M⟩ if fi has the shape as in one of the cases 1.6 and 1.7.
(d) In the 2nd case one has ⟨x, y⟩fi ⊂ ⟨M⟩ if fi has the shape as in case 2.1. If fi has the shape as in case 2.3, then xMiup ∈ M, and the order s of fi fulfils the inequality
s ≤ κ/ρ2 − mi(1/ρ2 − 1/(ρ0 + ρ2)) + r/ρ2,
where 0 ≤ i ≤ r.
Proof. (1.1) and (1.2) result from Conclusion 3.2 and Conclusion 3.3; (1.3)–(1.5) result from Conclusion 3.6; and (1.6) and (1.7) result from Conclusion 3.5. (2.1) results again from Conclusion 3.2 and Conclusion 3.3; (2.2) is clear, and (2.3) results from Conclusion 3.8, for if Miup occurs, then ℓi = x, and this corresponds to the linear form ℓ0 in Conclusion 3.8.
(b) results from Conclusion 3.7.
(c) results from the Conclusions 3.3, 3.5 and 3.6.
(d) results in the case (2.1) from the fact that fj = Mjdown is a monomial and therefore all monomials with the same z-degree as Mjdown which occur in in(H^0(I(m0))) also occur in H^0(I(m0)) (Conclusion 3.2). In the case (2.3) the first part of the statement results from Conclusion 3.2, and the statement concerning the order results from Conclusion 3.9.
Remark 3.4. If r = 0, then only the cases (2.2) and (2.3) can occur, i.e., one has either f0 = x^{m0} or f0 = y^{m0} + F, where F has the properties mentioned above.
3.6. Summary in the Case II
We recall that this means ρ0 < 0, ρ1 > 0, ρ2 > 0.
We translate the results of (3.5) formally: As always ρ2 > 0 is assumed, and one has ρ0 < 0. By definition ι(i) = #{ℓj = y | 0 ≤ j < i} and therefore i − ι(i) = #{ℓj = x | 0 ≤ j < i}. We conclude that if one interchanges x and y in the monomial Miup (resp. Midown) and simultaneously replaces ι(i) by i − ι(i) and vice versa i − ι(i) by ι(i), then one obtains Midown (resp. Miup). Therefore the statements of Proposition 3.1 can be translated to Case II by interchanging x and y as well as “up” and “down”:
Proposition 3.2. Let ρ2 > 0 and ρ1 > 0.
(a) The following cases can a priori occur:
1st case: One of the fi, say f_{i0}, has as vice-monomial a “lower empty corner” Njdown. Then ρ2 > κ, thus K is monomial, and the fi have one of the following forms:
(1) fi = Miup
(2) fi = Midown + αNjup
(3) fi = Midown + αNjdown + βNkdown
(4) fi = Midown + αNjdown + βNkup
(5) fi = Midown + αNjdown + βL, L ∈ L a monomial such that (x, y)·L ⊂ ℓK(−r − 1), ℓ := ℓ0 · · · ℓr
(6) fi = Midown + αNjdown + βNkdown·(z/x), α, β ∈ k^*
(7) fi = Midown + αNjdown + βL·(z/x), L ∈ L a monomial such that (x, y)·L ⊂ ℓK(−r − 1), α, β ∈ k^*.
2nd case: None of the fi has as vice-monomial a “lower empty corner” Njdown. Then each of the fi has one of the following forms:
(1) fi = Midown + αNjup
(2) fi = Miup
(3) fi = Miup + F, F = Σ αj Lj, Lj ∈ L monomial, αj ∈ k^*, such that yF ∈ ℓK(−r − 1), and for suitable βj ∈ k^* and G := Σ βj Lj one has (x, y)·G ⊂ ℓK(−r − 1).
(b) The same statement as in Proposition 3.1.
(c) The same statement as in Proposition 3.1, if x and y are interchanged.
(d) The same statement as in Proposition 3.1, if one replaces xMiup by yMidown. And the order s of fi fulfils the inequality
s ≤ κ/ρ2 − mi(1/ρ2 − 1/(ρ1 + ρ2)) + r/ρ2,
where 0 ≤ i ≤ r.
Remark 3.5. If r = 0, then only the cases (2.2) and (2.3) can occur.
[Fig. 3.1: the initial monomials of H^0(I(m0)) for r = 5, with the monomials M0, M1, . . . , M5 and N1, . . . , N5 marked; the columns of total degree m0, m1 + 1, m2 + 2, m3 + 3, m4 + 4, m5 + 5 and κ + 6 are indicated.]
Explanation: reg(K) ≤ c ⇒ Rn ⊂ H^0(K(n)), n ≥ c. S_{c−1}/H^0(K(c − 1)) has a basis of c monomials, namely the monomials of S_{c−1} − in(H^0(K(c − 1))) ⇒ ℓ[S_{c−1}/H^0(K(c − 1))] has a basis of c monomials, namely the monomials of ℓS_{c−1} − ℓ·in(H^0(K(c − 1))). They generate a subspace L. ℓ := ℓ0 · · · ℓr = x^{r+1−ι(r+1)} y^{ι(r+1)}.
[Fig. 3.2: the subspace H^0(Ii(mi)) = xH^0(Ii+1(mi − 1)) + ⟨fi⟩ for Ii = xIi+1(−1) + fi OP2(−mi), with y^{mi} and Niup marked and the monomials denoted by ? between the columns m_{i+1} and mi.]
CHAPTER 4
The α-grade.
4.1. Notations.
We let Gm (resp. Ga) operate on S = k[x, y, z] by σ(λ): x ↦ x, y ↦ y, z ↦ λz (resp. by ψα: x ↦ x, y ↦ αx + y, z ↦ z).
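For instance, ψα(y²z) = (αx + y)²z = α²x²z + 2αxyz + y²z, whereas σ(λ)(y²z) = λy²z.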
Let G = Grass_m(Sn). If V ∈ G(k), then ∧^m V has dimension 1, and V ↦ ∧^m V defines a closed immersion p: G → P(∧^m Sn) = P^N, N := dim ∧^m Sn − 1, the so-called Plücker embedding.
If one numbers the monomials in Sn, then one gets a basis {e1, e2, · · · } of Sn, and therefore e_{(i)} = e_{i1} ∧ · · · ∧ e_{im} is a basis of ∧^m Sn, where (i) = (i1, · · · , im) runs through all sequences of natural numbers such that 1 ≤ i1 < · · · < im ≤ (n+2 choose 2). If one puts ψα(e_{(i)}) := ψα(e_{i1}) ∧ · · · ∧ ψα(e_{im}), then Ga operates in an equivariant manner on G and P^N, and the same is valid for the operation σ of Gm. If ξ ∈ G(k) corresponds to the vector space V ⊂ Sn, then Cξ := {∧^m ψα(V) | α ∈ k}^− is a point or a curve in P^N, and there are polynomials f0, · · · , fN in one variable with coefficients in k such that Cξ = {(f0(α) : · · · : fN(α)) | α ∈ k}^−. At least one of the fi is equal to 1, and if ξ is not invariant under Ga, then Cξ is a Ga-invariant closed curve in P^N of degree equal to max{deg(fi) | 0 ≤ i ≤ N} (cf. [T1], Bemerkungen 2 und 3, p. 11). This number is denoted by α-grade(V).
Now let ei, 1 ≤ i ≤ ℓ := (n+2 choose 2), be the monomials in Sn, ordered in the inverse lexicographic manner. If fi = Σ_{j=1}^ℓ a_{ji} ej, 1 ≤ i ≤ m, is a basis of V, then f1 ∧ · · · ∧ fm = Σ_{(i)} P_{(i)} e_{(i)}, where
P_{(i)} = det(a_{i_ν j})_{1≤ν,j≤m}
is the Plücker coordinate for the index (i) = (i1, · · · , im). It follows that ∧^m ψα(V) = ⟨Σ_{(i)} P_{(i)} ψα(e_{(i)})⟩, and we conclude from this:
(4.1) α-grade(V) ≤ max_{(i)} {α-grade ψα(e_{(i)}) | P_{(i)} ≠ 0},
where we define the α-grade of ψα(e_{(i)}) to be the α-grade of the monomial subspace ⟨e_{i1}, · · · , e_{im}⟩ of Sn. This can be computed as follows: Write ψα(e_{iν}) = Σ_{j=1}^ℓ p_{jν}(α) ej, 1 ≤ ν ≤ m, where the p_{jν} are polynomials in one variable with coefficients in Z. Then ψα(e_{(i)}) = Σ_{(j)} P_{(j)}(α) e_{(j)}, where the P_{(j)}(α) are the Plücker coordinates of the vector space ⟨ψα(e_{i1}), · · · , ψα(e_{im})⟩. The P_{(j)} are polynomials in one variable with coefficients in Z. As P_{(i)}(α) = 1, the α-grade of ⟨e_{i1}, · · · , e_{im}⟩ is equal to max_{(j)} {deg(P_{(j)})}.
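As a small illustration, let n = 2, m = 1 and V = ⟨y²⟩. Then ψα(y²) = (αx + y)² = α²x² + 2αxy + y², so the P_{(j)}(α) are, up to the numbering of the monomials, α², 2α and 1, and the α-grade of ⟨y²⟩ equals 2.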
Whereas it seems practically impossible to find a formula for the α-grade of an arbitrary vector space V ⊂ Sn, the α-grade of a monomial subspace V ⊂ Sn can be computed as follows: Write V = ⊕_{i=0}^n z^{n−i} Vi, where Vi ⊂ Ri is a monomial subspace and R = k[x, y]. If we put m(i) := dim Vi, then Vi has a basis of the form {x^{i−a_{ij}} y^{a_{ij}} | 1 ≤ j ≤ m(i)}, where 0 ≤ a_{i1} < · · · < a_{im(i)} ≤ i is a sequence of integers. As α-grade(V) = Σ_{i=0}^n α-grade(Vi), we can consider V as a graded vector space in R which has a basis of monomials of different degrees. In ([T1], 1.3, p. 12f.) it was shown:
(4.2) If 0 ≤ c1 < · · · < cr ≤ i are integers and W := ⟨x^{i−c1} y^{c1}, · · · , x^{i−cr} y^{cr}⟩ ⊂ Ri, then α-grade(W) = (c1 + · · · + cr) − (1 + · · · + (r − 1)).
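For instance, for i = 5 and W = ⟨x³y², xy⁴, y⁵⟩, i.e. (c1, c2, c3) = (2, 4, 5), one gets α-grade(W) = (2 + 4 + 5) − (1 + 2) = 8.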
Later on we will need an estimate of the α-grade of an ideal I ⊂ OP2 of colength d which is invariant under G = Γ · T(ρ). This will be done by estimating the α-grade of the vector space V = H^0(I(n)) if n is sufficiently large. By means of (4.1) the estimate will be reduced to the computation of the α-grade of monomial subspaces, and because of (4.2) this can be regarded as a combinatorial problem, the formulation of which needs some more notations.
4.2. The weight of a pyramid.
Definition 5. A pyramid with frame c and colength d, 1 ≤ d ≤ c, is a set P of monomials in R = k[x, y] with the following properties: The i-th “column” Si of P consists of monomials x^{i−a_{ij}} y^{a_{ij}}, 0 ≤ a_{i1} < . . . < a_{im(i)} ≤ i, for all 0 ≤ i ≤ c − 1, such that the following conditions are fulfilled:
(1) ∪_{i=0}^{c−1} Si = P
(2) #({x^{i−j} y^j | 0 ≤ j ≤ i ≤ c − 1} \ P) = d
Then w(Si) := (a_{i1} + · · · + a_{im(i)}) − (1 + · · · + (m(i) − 1)) is called the weight of the i-th column Si, and w(P) := Σ_{i=0}^{c−1} w(Si) is called the weight of the pyramid P.
Example. Let I ⊂ OP2 be an ideal of colength d which is invariant under T(3; k). Then reg(I) ≤ d, therefore h^0(I(d − 1)) = (d+1 choose 2) − d, and we can write H^0(I(d − 1)) = ⊕_{i=0}^{d−1} z^{d−1−i} Vi, where Vi ⊂ Ri are monomial subspaces. If Si is the set of monomials in Vi, then P := ∪_{i=0}^{d−1} Si is a pyramid with frame and colength equal to d. From (4.2) one concludes that w(P) = α-grade(H^0(I(d − 1))). (N.B. One has Rn ⊂ H^0(I(n)) if n ≥ d.)
Remark 4.1. Let 0 ≤ c1 < · · · < cr ≤ i be a sequence of integers. w(c1, · · · , cr) := (c1 + · · · + cr) − (1 + · · · + (r − 1)) is maximal if and only if cν = i − r + ν, 1 ≤ ν ≤ r, i.e. if (c1, · · · , cr) = (i − r + 1, · · · , i).
The aim is to determine those pyramids P of type (c, d), i.e., with frame c and colength d, for which w(P) is maximal. Because of Remark 4.1 we consider without restriction only pyramids with Si = {x^{i−a(i)} y^{a(i)}, · · · , xy^{i−1}, y^i}, where a(i) := i + 1 − m(i) is a number between 0 and i inclusive. We call x^{i−a(i)} y^{a(i)} the initial monomial and a(i) the initial degree of Si. For simplicity we write Si = (a(i), a(i) + 1, · · · , i) and P = ⟨x^{i−a(i)} y^{a(i)} | 0 ≤ i ≤ c − 1⟩.
Remark 4.2. w(Si) = ia(i) + a(i) − a(i)².
Proof. w(Si) = (a(i) + · · · + i) − (1 + · · · + (i − a(i))) = (1 + · · · + i) − (1 + · · · + (a(i) − 1)) − (1 + · · · + (i − a(i))) = (i+1 choose 2) − (a(i) choose 2) − (i−a(i)+1 choose 2), and a direct computation gives the assertion.
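For instance, for i = 5 and a(i) = 2 one has Si = (2, 3, 4, 5) and w(Si) = (2 + 3 + 4 + 5) − (1 + 2 + 3) = 8, in accordance with ia(i) + a(i) − a(i)² = 10 + 2 − 4 = 8.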
Taking away x^{i−a(i)} y^{a(i)} from Si and adding x^{j−a(j)+1} y^{a(j)−1} to Sj, where Si ≠ ∅, a(j) > 0 and j ≠ i, gives a pyramid
P′ = P − {x^{i−a(i)} y^{a(i)}} ∪ {x^{j−a(j)+1} y^{a(j)−1}}.
We express this as Si(P) = (a(i), · · · , i), Sj(P) = (a(j), · · · , j), Si(P′) = (a(i) + 1, · · · , i), Sj(P′) = (a(j) − 1, · · · , j), and we get w(Si(P)) = ia(i) + a(i) − a(i)²; w(Si(P′)) = i(a(i) + 1) + (a(i) + 1) − (a(i) + 1)²; w(Sj(P)) = ja(j) + a(j) − a(j)²; w(Sj(P′)) = j(a(j) − 1) + a(j) − 1 − (a(j) − 1)². It follows that w(P′) − w(P) = [w(Si(P′)) − w(Si(P))] + [w(Sj(P′)) − w(Sj(P))] = [i + 1 − 2a(i) − 1] + [−j − 1 + 2a(j) − 1]. We get the following formula:
(4.3) w(P′) − w(P) = 2(a(j) − a(i)) − (j − i) − 2,
where we have made the assumption that i ≠ j, Si(P) ≠ ∅, and a(j) > 0.
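For instance, for i = 3, j = 5, a(3) = 2 and a(5) = 3, formula (4.3) gives w(P′) − w(P) = 2(3 − 2) − (5 − 3) − 2 = −2, so this deformation strictly decreases the weight.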
Now let P be a pyramid of type (c, d) such that w(P) is maximal. Then for all deformations P ↦ P′ as above, the right side of (4.3) is ≤ 0, i.e. a(j) − a(i) ≤ (j − i)/2 + 1. If i < j (if i > j, respectively) this is equivalent to
(a(j) − a(i))/(j − i) ≤ 1/2 + 1/(j − i) and (a(j) − a(i))/(j − i) ≥ 1/2 + 1/(j − i), respectively.
Putting j = i + 1 (resp. j = i − 1) gives a(i + 1) − a(i) ≤ 1.5 (resp. a(i) − a(i − 1) ≥ −0.5). As the left sides of these inequalities are integers, it follows that a(i + 1) − a(i) ≤ 1 (resp. a(i) − a(i − 1) ≥ 0) for all 0 ≤ i < c − 1 (resp. for all 0 < i ≤ c − 1). Note that we can apply (4.3) only under the assumption a(i + 1) > 0 (resp. a(i − 1) > 0). But if a(i + 1) = 0 (resp. a(i − 1) = 0), then the two last inequalities are true, too. So we get
Remark 4.3. If the pyramid P of type (c, d) has maximal weight, then one has
a(i) ≤ a(i + 1) ≤ a(i) + 1, for all 0 ≤ i ≤ c − 2.
Remark 4.4. If P = {x^{i−a_{ij}} y^{a_{ij}} | i, j} is a pyramid of type (c, d), then P′ = yP := {x^{i−a_{ij}} y^{a_{ij}+1} | i, j} is called the shifted pyramid, and w(P′) = w(P) + #P.
Proof. The column Si = (a_{i1}, · · · , a_{im(i)}) goes over to Si′ = (a_{i1} + 1, · · · , a_{im(i)} + 1), and w(Si′) = (a_{i1} + 1) + · · · + (a_{im(i)} + 1) − (1 + · · · + (m(i) − 1)) = w(Si) + m(i). Thus w(P′) = Σ_{i=0}^{c−1} (w(Si) + m(i)) = w(P) + #P.
Remark 4.5. A pyramid with maximal weight does not contain a step of breadth ≥ 4, i.e. it is not possible that a(j) = · · · = a(i) > 0 if i − j ≥ 3.
Proof. If one had the situation shown in Figure 4.1, then one could make the deformation x^{i−a(i)} y^{a(i)} ↦ x^{j−a(j)+1} y^{a(j)−1}, and then from (4.3) one would get w(P′) − w(P) = 2 · 0 − (j − i) − 2 = i − j − 2 ≥ 1, a contradiction.
Remark 4.6. In any pyramid two consecutive “normal” steps can be replaced by a step of breadth 3 without a change of weight. In the course of this, the sum over the x-degrees of all monomials in the pyramid increases by 2, however.
Proof (cf. Fig. 4.2). By assumption one has a(i) + 1 = a(i + 1) and a(i + 1) + 1 = a(i + 2). If one makes the deformation x^{i−a(i)} y^{a(i)} ↦ x^{j−a(j)+1} y^{a(j)−1}, where j = i + 2, then one gets w(P′) − w(P) = 2 · 2 − 2 − 2 = 0.
N.B. The possibility a(i) = 0 is not excluded.
Remark 4.7. There is a pyramid of maximal weight which does not contain two consecutive normal steps, that is, there is no index i such that a(i + 1) = a(i) + 1, a(i + 2) = a(i + 1) + 1. Such a pyramid is called a “prepared” pyramid. (N.B. a(i) may equal zero.)
Proof. Apply Remark 4.6 several times.
Remark 4.8. A prepared pyramid of maximal weight does not contain two steps of breadth 3.
Proof. We consider two cases:
1st case: The steps of breadth 3 are situated side by side. Then one makes the deformation described in Fig. 4.3 and gets w(P′) − w(P) = 2[a(i − 5) − a(i)] + 5 − 2 = 1, a contradiction.
2nd case: Between the steps of breadth 3 there are ν ≥ 1 double steps. One then makes the deformation described in Fig. 4.4. Putting j = i − 2(ν + 1) one gets w(P′) − w(P) = 2(a(j) − a(i)) − (j − i) − 2 = 2(−ν) + 2(ν + 1) − 2 = 0. Then P′ would have maximal weight, too. But P′ contains a step of breadth 4, contradicting Remark 4.5.
Remark 4.9. Each positive natural number d can be uniquely represented either in the form d = n(n + 1) − r, 0 ≤ r < n (1st case), or in the form d = n² − r, 0 ≤ r < n (2nd case). The two cases exclude each other.
Proof. Choosing n sufficiently large, the sequence n(n + 1), n(n + 1) − 1, · · · , n(n + 1) − (n − 1), n(n + 1) − n = n², n² − 1, · · · , n² − (n − 1), n² − n = (n − 1)n, · · · contains any given set {1, · · · , m} ⊂ N.
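For instance, d = 10 = 3 · 4 − 2 is of the 1st kind (n = 3, r = 2), whereas d = 7 = 3² − 2 is of the 2nd kind (n = 3, r = 2).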
We now assume that P is a prepared pyramid of type (c, d) with d ≥ 3 and maximal weight. If P does not contain a step of breadth 3, then P has the form described either in Fig. 4.5a or in Fig. 4.5b, according to whether d = n(n + 1) or d = n². (One has Σ_{ν=1}^n 2ν = n(n + 1) and Σ_{ν=1}^n (2ν − 1) = n².) If there is one step of breadth 3, then from the Remarks 4.3, 4.5, 4.6, 4.7 and 4.8 it follows that P has the form described either in Fig. 4.6a or in Fig. 4.6b. Here the step of breadth 3 may lie at the far left or far right in Fig. 4.6a or Fig. 4.6b, respectively. One sees that Fig. 4.6a and Fig. 4.6b result from Fig. 4.5a and Fig. 4.5b, respectively, by removing the monomials marked by −.
We first compute the weights of the pyramids shown in Fig. 4.5a and Fig. 4.5b, and then the weights of the “reduced” pyramids P̄ in Fig. 4.6a and Fig. 4.6b.
1st case (Fig. 4.5a):
i          a(i)    w(Si) = ia(i) − a(i)(a(i) − 1)
c−1        n       (c−1)n − n(n−1)
c−2        n       (c−2)n − n(n−1)
. . .      . . .   ........................................
c−2ν−1     n−ν     (c−2ν−1)(n−ν) − (n−ν)(n−ν−1)
c−2ν−2     n−ν     (c−2ν−2)(n−ν) − (n−ν)(n−ν−1)
. . .      . . .   ........................................
c−2n+1     1       (c−2n+1)·1 − 1·0
c−2n       1       (c−2n)·1 − 1·0
2nd case (Fig. 4.5b):
i          a(i)    w(Si) = ia(i) − a(i)(a(i) − 1)
c−1        n       (c−1)n − n(n−1)
c−2        n−1     (c−2)(n−1) − (n−1)(n−2)
c−3        n−1     (c−3)(n−1) − (n−1)(n−2)
. . .      . . .   ........................................
c−2ν       n−ν     (c−2ν)(n−ν) − (n−ν)(n−ν−1)
c−2ν−1     n−ν     (c−2ν−1)(n−ν) − (n−ν)(n−ν−1)
. . .      . . .   ........................................
c−2n+2     1       (c−2n+2)·1 − 1·0
c−2n+1     1       (c−2n+1)·1 − 1·0
1st case: We sum up w(Si) and w(S_{i−1}) for i = c − 2ν − 1 and get:
Σ_{ν=0}^{n−1} (n − ν)(2c − 2n − 2ν − 1) = Σ_{ν=1}^{n} ν(2c − 4n + 2ν − 1)
= (2c − 4n − 1) · (1/2)n(n + 1) + (1/3)n(n + 1)(2n + 1)
= n(n + 1)(c − 2n − 0.5 + (2/3)n + 1/3)
= n(n + 1)(c − (4/3)n − 1/6).
In the reduced pyramid the initial degrees ā(i) of the columns S̄i for i = c−2, c−4, · · · , c−2r are equal to n − 1, n − 2, · · · , n − r. This means ā(i) = n − ν if i = c − 2ν, and w(S̄i) = (c − 2ν)(n − ν) − (n − ν)(n − ν − 1), 1 ≤ ν ≤ r. For i = c − 2ν we get:
w(S̄i) − w(Si) = (c − 2ν)(n − ν) − (n − ν)(n − ν − 1) − [(c − 2ν)(n − ν + 1) − (n − ν + 1)(n − ν)] = −(c − 2ν) + 2(n − ν) = 2n − c.
It follows that w(P̄) − w(P) = r(2n − c).
2nd case: We sum up w(Si) and w(S_{i−1}) for i = c − 2ν and get:
Σ_{ν=1}^{n−1} (n − ν)(2c − 2n − 2ν + 1) = Σ_{ν=1}^{n−1} ν(2c − 4n + 2ν + 1)
= (2c − 4n + 1) · (1/2)(n − 1)n + (1/3)(n − 1)n(2n − 1)
= (n − 1)n(c − 2n + 0.5 + (2/3)n − 1/3)
= n(n − 1)(c − (4/3)n + 1/6).
Besides this, we have to add the weight (c − 1)n − n(n − 1) = n(c − n) for i = c − 1, and we get w(P) = n[(c + 0.5)n − (4/3)n² − 1/6].
In the reduced pyramid the initial degrees ā(i) of the columns S̄i for i = c−1, c−3, · · · , c−2r+1 are equal to n − 1, n − 2, · · · , n − r. This means ā(i) = n − ν if i = c − 2ν + 1, and w(S̄i) = (c − 2ν + 1)(n − ν) − (n − ν)(n − ν − 1), 1 ≤ ν ≤ r. For i = c − 2ν + 1 we get:
w(S̄i) − w(Si) = (c − 2ν + 1)(n − ν) − (n − ν)(n − ν − 1) − [(c − 2ν + 1)(n − ν + 1) − (n − ν + 1)(n − ν)] = 2n − c − 1, 1 ≤ ν ≤ r. We get w(P̄) − w(P) = r(2n − c − 1).
Remark 4.10. The maximal weight of a pyramid P of type (c, d) is equal to
(4.4) n[(c − 1.5)n + (c + 2r − 1/6) − (4/3)n²] − rc
if d = n(n + 1) − r and 0 ≤ r < n, and it is equal to
(4.5) n[(c + 0.5)n + (2r − 1/6) − (4/3)n²] − r(c + 1)
if d = n² − r and 0 ≤ r < n.
Proof. If d ≥ 3, this follows from the foregoing computation. If d = 2 or d = 1, then n = 1 and r = 0. Formula (4.4) and formula (4.5) give the weights 2c − 3 and c − 1, which is confirmed by Fig. 4.7a and Fig. 4.7b, respectively.
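For instance, for c = 4 and d = 3 = 2² − 1 (n = 2, r = 1), formula (4.5) gives 2[(4 + 0.5)·2 + (2 − 1/6) − (4/3)·4] − 1·(4 + 1) = 11 − 5 = 6, in accordance with the table in Remark 4.14 below.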
Remark 4.11. The formulas (4.4) and (4.5) agree at the ends of the intervals. This is shown by putting r = n in (4.4) and r = 0 in (4.5), and by putting n − 1 instead of n together with r = 0 in (4.4) and r = n in (4.5), respectively, and then checking equality.
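Indeed, putting r = n in (4.4) as well as r = 0 in (4.5) yields n[(c + 0.5)n − 1/6 − (4/3)n²] in both cases.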
We denote the maximal weight of a pyramid of type (c, d) by w(Pc,d).
Remark 4.12.
w(P_{c,d}) = −(4/3)n³ − 1.5n² + (2r − 1/6)n + dc, if d = n(n + 1) − r,
w(P_{c,d}) = −(4/3)n³ + 0.5n² + (2r − 1/6)n − r + dc, if d = n² − r.
Thus w(P_{c,d}) is a strictly increasing function of c ≥ d, if d is fixed.
Remark 4.13. Fixing the integer c ≥ 5, w(P_{c,d}) is a strictly increasing function of 1 ≤ d ≤ c.
Proof. w(P_{c,d}) = −(4/3)n³ − 1.5n² − (1/6)n + cn(n + 1) + r(2n − c), if d = n(n + 1) − r, 0 ≤ r ≤ n, and w(P_{c,d}) = −(4/3)n³ + 0.5n² − (1/6)n + cn² + r(2n − c − 1), if d = n² − r, 0 ≤ r ≤ n. From n(n + 1) − r = d ≤ c and from n² − r = d ≤ c it follows that n ≤ √c. From c ≥ 5 we conclude 2n − c < 0. From d ≥ 1 it follows that n ≥ 1. As a function of r, both expressions for w(P_{c,d}) are strictly decreasing in the interval 0 ≤ r ≤ n, and the assertion follows from Remark 4.11 and Remark 4.12.
Remark 4.14. Fixing the integer 1 ≤ c ≤ 4, w(P_{c,d}) is an increasing function of 1 ≤ d ≤ c.
Proof. By drawing the possible patterns one finds:
d          : 1  2
w(P_{2,d}) : 1  1
d          : 1  2  3
w(P_{3,d}) : 2  3  3
d          : 1  2  3  4
w(P_{4,d}) : 3  5  6  7
We define Pc := Pc,c and w(∅) = 0.
Proposition 4.1. w(Pc ) ≤ (c − 1)2 for all c ∈ N.
Proof. 1st case: c = d = n(n + 1) − r, 0 ≤ r < n. Because of Remark 4.12 one has to show:
−(4/3)n³ − 1.5n² + (2r − 1/6)n + c² ≤ (c − 1)²
⇔ (4/3)n³ + 1.5n² − (2r − 1/6)n − 2[n(n + 1) − r] + 1 ≥ 0
⇔ (4/3)n³ − 0.5n² − (2r + 11/6)n + 2r + 1 ≥ 0
⇐ (4/3)n³ − 0.5n² − (2n + 11/6)n + 2n ≥ 0
⇔ (4/3)n³ − 2.5n² + (1/6)n ≥ 0.
This is true if n ≥ 2. If n = 1, then r = 0, and by substituting one can convince oneself that the inequality is true in this case, too.
2nd case: c = d = n² − r, 0 ≤ r < n. One has to show:
(4.6) −(4/3)n³ + 0.5n² + (2r − 1/6)n − r + c² ≤ (c − 1)²
⇔ (4/3)n³ − 0.5n² − (2r − 1/6)n + r − 2[n² − r] + 1 ≥ 0
⇔ (4/3)n³ − 2.5n² − (2r − 1/6)n + 3r + 1 ≥ 0.
Assuming n ≥ 2, this inequality follows from
(4/3)n³ − 2.5n² − (2n − 1/6)n + 3n + 1 ≥ 0
⇐ (4/3)n³ − 4.5n² + (19/6)n ≥ 0
⇔ (4/3)n² − 4.5n + 19/6 ≥ 0.
This is true if n ≥ 3. Putting n = 1 and n = 2 in (4.6) gives the inequalities r ≥ 0 and 2 − r > 0, respectively, which are true by assumption. As P_{1,1} = ∅, the assertion is true if c = 0 or c = 1, too.
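For instance, w(P4) = w(P_{4,4}) = 7 ≤ 9 = (4 − 1)², in accordance with the table in Remark 4.14.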
4.3. Preview
Let V ⊂ Sn be an m-dimensional subspace and ξ ∈ G(k) the corresponding point in G = Grass_m(Sn). We let Gm and Ga operate on S as described in (4.1). We assume that V is invariant under neither Gm nor Ga. Then C := {ψα(ξ) | α ∈ k}^− is a closed irreducible curve, which is given the induced reduced scheme structure and which we regard as a curve in P^N by means of the Plücker embedding p. Let h be the Hilbert polynomial of p(C) ⊂ P^N, let X := Hilb^h(G) ↪ Hilb^h(P^N), and let σ: Gm → X be the morphism λ ↦ σ(λ)C. It has an extension σ̄: P^1 → X, which induces a family of curves 𝒞 ⊂ G × P^1 with f := p₂|_𝒞 : 𝒞 → P^1, such that f is flat and 𝒞_λ := f^{−1}(λ) = σ(λ)C for all λ ∈ P^1 − {0, ∞}. As f is dominant, we get (cf. [Fu], p. 15): if C̄_{0/∞} := p₁(𝒞_{0/∞}), then [C̄_0] = [C̄_∞] in A₁(G).
Let ξ_{0/∞} := lim_{λ→0/∞} σ(λ)ξ and C_{0/∞} := {ψα(ξ_{0/∞}) | α ∈ k}^−. The central theme of [T1]–[T4] is the question of the connection between [C_0] and [C_∞]. The essential tool, which was already used in [T1], is the α-grade of V, which is nothing else than the degree of C, embedded in P^N by means of p (cf. [T1], 1.3).
We paraphrase the recipe for estimating the α-grade of V: Let M1 < · · · < Mℓ, ℓ = (n+2 choose 2), be the inverse-lexicographically ordered monomials of Sn, and let fi = Σ_{j=1}^ℓ a_{ij} Mj, 1 ≤ i ≤ m, be a basis of V. If Mj1 < · · · < Mjm is a sequence of monomials in Sn, then the Plücker coordinate PV(Mj1, · · · , Mjm) is defined to be the determinant of (a_{ijν})_{1≤i,ν≤m}. In the following, V is a T(ρ)-invariant subspace of Sn, and fi = Mi(1 + a_{1i} X^ρ + a_{2i} X^{2ρ} + · · · + a_{ν(i)i} X^{ν(i)ρ}), 1 ≤ i ≤ m, is a basis of T(ρ)-semi-invariants. From formula (4.1) in Section 4.1 it follows that
α-grade(V) ≤ max_{(j)} {α-grade ⟨M1 X^{j(1)ρ}, · · · , Mm X^{j(m)ρ}⟩},
where (j) = (j(1), · · · , j(m)) runs through all sequences such that
PV(M1 X^{j(1)ρ}, · · · , Mm X^{j(m)ρ}) ≠ 0.
It is possible to choose the semi-invariants fi so that the initial monomials Mi (the final monomials Mi X^{ν(i)ρ} =: Ni, respectively) are linearly independent, i.e. different from each other (cf. Appendix E or the proof of Hilfssatz 6 in [T2], Anhang 1, p. 140). As the Plücker coordinates of a subvector space do not depend, up to a factor different from zero, on the basis one has chosen, one has
(4.7) PV(M1, · · · , Mm) ≠ 0 and PV(N1, · · · , Nm) ≠ 0.
Define V0 := ⟨M1, · · · , Mm⟩ ↔ ξ_0 and V∞ := ⟨N1, · · · , Nm⟩ ↔ ξ_∞. As the function deg is constant on flat families of curves, we get
α-grade(V) = deg(C) = deg(C̄_0) = deg(C̄_∞).
As C_{0/∞} ⊂ C̄_{0/∞} we conclude that
(4.8) α-grade(V) ≥ max(α-grade(V0), α-grade(V∞)).
Now it is scarcely possible to see whether PV(M1 X^{j(1)ρ}, · · · , Mm X^{j(m)ρ}) is different from zero. Therefore we introduce the number
max-α-grade(V) := max_{(j)} {α-grade ⟨a_{j(1)1} M1 X^{j(1)ρ}, . . . , a_{j(m)m} Mm X^{j(m)ρ}⟩},
where (j) runs through all sequences (j(1), · · · , j(m)) ∈ N^m such that 0 ≤ j(i) ≤ ν(i) for all 1 ≤ i ≤ m, and a_{0i} := 1.
Remark 4.15. (a) Clearly α-grade(V) ≤ max-α-grade(V).
(b) In the definition, the monomials need not be ordered.
(c) If one coefficient a_{j(i)i} is equal to zero, or if two of the monomials Mi X^{j(i)ρ} are equal for different indices i, then the m-fold exterior product of the monomial space and its α-grade are zero (cf. 4.1).
To say it differently: take from each semi-invariant fi a monomial Mi X^{j(i)ρ} whose coefficient a_{j(i)i} is different from zero, form ∧_{i=1}^m ψα(Mi X^{j(i)ρ}), and determine the highest power of α occurring in such an exterior product. Finally, define max-α-grade(V) to be the maximum of these degrees, where (j) runs through all such sequences.
Accordingly, one defines
min-α-grade(V) := min_{(j)} {α-grade ⟨a_{j(1)1} M1 X^{j(1)ρ}, · · · , a_{j(m)m} Mm X^{j(m)ρ}⟩},
where (j) runs through all sequences of the kind described above which give an α-grade different from zero.
As α-grade(V_{0/∞}) = deg(C_{0/∞}), from (4.7) we conclude that
(4.9) min-α-grade(V) ≤ min(deg C_0, deg C_∞).
Later on, the vector space V will always be equal to H^0(I(n)), where I ⊂ OP2 is a G = Γ · T(ρ)-invariant ideal of y-standard form (cf. 2.4.3, Definition 2). We will see that max-α-grade(I) := max-α-grade(H^0(I(n))) and min-α-grade(I) := min-α-grade(H^0(I(n))) not only are independent of n ≥ colength(I), but also can be computed with the help of smaller numbers n. The real aim of the following estimates is to prove the following inequality: If I ⊂ OP2 is an ideal of y-standard form and if reg(I) = m, then
(!) Q(m − 1) + min-α-grade(I) > max-α-grade(I).
From this it will follow that C̄_0 and C̄_∞ do not contain any y-standard cycle besides C_0 and C_∞, respectively (cf. Lemma 9.2).
Fig. 4.1, Fig. 4.2, Fig. 4.3 [diagrams of pyramids; not reproducible from the text extraction].
Fig. 4.4 [diagram with column labels a(i), a(j), a(i+2), a(i−5)].
Fig. 4.5a, Fig. 4.5b [diagrams with column labels n, n−1, n−ν and c−1, c−2, . . . , c−2ν−1, c−2ν−2, . . . , c−2n+1, c−2n].
Fig. 4.6a, Fig. 4.6b [diagrams with column labels e−1, e−2, . . . , e−2n+1, e−2n].
Fig. 4.7a, Fig. 4.7b [diagrams with column labels e−1, e−2, . . . , e−2n+2, e−2n+1, and c−1].
CHAPTER 5
Estimates of the α-grade in the case ρ1 < 0, ρ2 > 0.
5.1. Preliminary remarks.
We refer to Proposition 3.1 in Section 3.5 and treat case (I.1) first. If in f_i the vice-monomial N_j^up occurs, then I_k is monomial for all k ≥ j + 1. In particular, f_{j+1}, · · · , f_r are monomials, which do not cause a deformation of the pyramid.
We show that the z-degree of such a monomial N_j^up cannot be equal to the z-degree of an initial monomial M_k. For then it would follow that m_j + j − 1 = m_k + k for another index k, which is not possible by Corollary 2.4. For the same reason it is not possible that the z-degree of N_j^up · (z/y) is equal to the z-degree of N_k^down or of M_k. The corresponding statements are true if “up” and “down” are exchanged, as follows from the corresponding definitions in (3.2) and (3.3).
Finally, if there occurs a deformation of the form (1.6) of Proposition 3.1, then it can happen that the final monomial N_k^up(z/y) of f_i has the same z-degree as the initial monomial M_ℓ of f_ℓ. But then ℓ > k > j > i, and therefore I_ℓ is a monomial ideal by Lemma 3.1. But then f_ℓ does not define a deformation at all. It follows from these remarks that one can consider separately the deformations defined by the different f_i, if one wants to determine the changes of α-grade caused by these deformations. (N.B. This statement is analogously valid in the situation described by Proposition 3.2, too.)
First we determine the change of the α-grade if in one f_i the initial monomial M_i^up is replaced by another monomial occurring in f_i:
1◦ M_i^up ↦ N_j^down, if 0 ≤ i < j ≤ r;
2◦ M_i^up ↦ N_j^up, if 0 ≤ i < j ≤ r;
3◦ M_i^up ↦ L, L ∈ L a monomial such that (x, y)L ⊂ ℓK(−r − 1), 0 ≤ i ≤ r;
4◦ M_i^up ↦ N_k^up · (z/y) = M_k^up · (z/y)², 0 ≤ i < k ≤ r;
5◦ M_i^up ↦ L · (z/y), L ∈ L a monomial such that (x, y)L ⊂ ℓK(−r − 1), 0 ≤ i ≤ r.
The deformation 4◦ (resp. 5◦) comes from the case 1.6 (resp. 1.7) of Proposition 3.1, and therefore there is at most one such deformation, whereas in the deformations 1◦ and 2◦ (resp. 3◦) the index i may a priori run through all integers 0 ≤ i < r (resp. 0 ≤ i ≤ r). Then for the index j in the cases 1◦ and 2◦ (resp. for the monomial L in the case 3◦) there are several possibilities. But once one has chosen i, one has to decide on an index j (resp. on a monomial L), and we will give a uniform estimate of the corresponding changes of α-grade.
We identify the set L, which was introduced in Section (3.3), with the vector space generated by the monomials in this set.
We denote by LB (left domain) the vector space generated by all monomials in S_{m_0} with z-degree ≥ m_0 − (c + r), i.e., generated by all monomials x^a y^b z^{m_0−(a+b)}, where a + b ≤ c + r.
As to the deformation 4◦ (resp. 5◦), there is still the possibility yM_i^up ↦ yM_k^up · (z/y)² (resp. yM_i^up ↦ yL · (z/y)). This is because in the case 1.6 (resp. 1.7) of Proposition 3.1, f_i has the order 1, whereas in the remaining cases f_i has the order 0.
Remember that (cf. Figures 3.1 and 5.1):
M_i^up = x^{i−ι(i)} y^{m_i+ι(i)} z^{m_0−m_i−i}
N_i^up = M_i^up · (z/y) = x^{i−ι(i)} y^{m_i+ι(i)−1} z^{m_0−m_i−i+1}
M_i^down = x^{m_i+i−ι(i)} y^{ι(i)} z^{m_0−m_i−i}
N_i^down = M_i^down · (z/x) = x^{m_i+i−ι(i)−1} y^{ι(i)} z^{m_0−m_i−i+1}
E_k^up := M_k^up · (z/y)² = x^{k−ι(k)} y^{m_k+ι(k)−2} z^{m_0−m_k−k+2}
E_k^down := M_k^down · (z/x)² = x^{m_k+k−ι(k)−2} y^{ι(k)} z^{m_0−m_k−k+2}.
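In computational experiments it is convenient to record these six monomials by their exponent triples; a minimal sketch (function names ours), with the built-in check that each has total degree m_0:

    # exponent triples (a, b, e) of x^a y^b z^e for the six monomials above;
    # ti, tk stand for iota(i), iota(k)
    def M_up(i, ti, mi, m0):   return (i - ti, mi + ti, m0 - mi - i)
    def N_up(i, ti, mi, m0):   return (i - ti, mi + ti - 1, m0 - mi - i + 1)
    def M_down(i, ti, mi, m0): return (mi + i - ti, ti, m0 - mi - i)
    def N_down(i, ti, mi, m0): return (mi + i - ti - 1, ti, m0 - mi - i + 1)
    def E_up(k, tk, mk, m0):   return (k - tk, mk + tk - 2, m0 - mk - k + 2)
    def E_down(k, tk, mk, m0): return (mk + k - tk - 2, tk, m0 - mk - k + 2)

    # sanity check: all six are monomials of S_{m0}
    for f in (M_up, N_up, M_down, N_down, E_up, E_down):
        assert sum(f(3, 1, 4, 12)) == 12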
5.2. Estimates in the case I.
We determine one after the other the changes of the α-grade under the deformations:
1◦ M_i^up → N_j^down.
First we note ϕ′(m_i + i) = m_i + 1 and ϕ′(m_j + j − 1) = m_j − 1 (see Fig. 5.2). The α-grade of the column in which M_i^up occurs changes by −(m_i + ι(i)) + ϕ′(m_i + i) − 1 = −ι(i) (cf. formula (4.2) in 4.1). The α-grade of the column to which N_j^down is added changes by ι(j) − ϕ′(m_j + j − 1) = ι(j) − m_j + 1 (loc. cit.). Therefore the α-grade changes by −m_i − ι(i) + ϕ′(m_i + i) − 1 + ι(j) − ϕ′(m_j + j − 1) = ι(j) − ι(i) − m_j + 1. As 0 ≤ ι(i) ≤ ι(j) ≤ j, the absolute value of this difference is ≤ max(j, m_j − 1), where 0 ≤ i < j ≤ r.
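A quick brute-force confirmation of this bound (our own check, not part of the text): for all admissible values 0 ≤ ι(i) ≤ ι(j) ≤ j and m_j ≥ 1 one indeed has |ι(j) − ι(i) − m_j + 1| ≤ max(j, m_j − 1).

    assert all(
        abs(tj - ti - mj + 1) <= max(j, mj - 1)
        for j in range(1, 12)
        for tj in range(j + 1)      # iota(j) <= j
        for ti in range(tj + 1)     # 0 <= iota(i) <= iota(j)
        for mj in range(1, 12)
    )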
2◦ M_i^up → N_j^up.
The α-grade of the column to which M_i^up belongs changes by −(m_i + ι(i)) + ϕ′(m_i + i) − 1 = −ι(i); the α-grade of the column to which N_j^up is added changes by m_j + ι(j) − 1 − ϕ′(m_j + j − 1) = ι(j). Therefore the change of α-grade is equal to ι(j) − ι(i), with 0 ≤ ι(j) − ι(i) ≤ j, where 0 ≤ i < j ≤ r.
3◦ M_i^up ↦ L ∈ L.
The α-grade of the column to which M_i^up belongs changes by −ι(i). From the domain L the monomial L is removed, so that by Proposition 4.1 one gets the following estimate of the α-grade: the α-grade, after the deformation 3◦, of that part of the pyramid which belongs to the left domain is ≤ (c − 1)² + ι(r + 1)[\binom{c+1}{2} − (c − 1)]. For one has #L = c, and there are \binom{c+1}{2} − c initial monomials of ℓH^0(K(c − 1)) in the left domain LB. Therefore the expression in the bracket gives the number of monomials in the corresponding part of the pyramid after the deformation. Besides this one has to take into account that the pyramid is pushed upwards by ι(r + 1) units (cf. Remark 4.4). We recall that LB (resp. RB) is the vector space generated by the monomials of total degree ≤ c + r (resp. of total degree ≥ c + r + 1) in x and y (cf. Fig. 5.3). The change of α-grade caused by the deformation 3◦ can thus be expressed as follows: the change in the domain RB is −ι(i); the α-grade of the left domain of the pyramid after the deformation is estimated as given above.
4◦ M_i^up ↦ E_k^up.
At first we consider the case that E_k^up belongs to the right domain, i.e. m_k + k − 2 ≥ c + r + 1, and we orient ourselves by Figure 5.1. The deformation 4◦ changes the α-grade of the column of M_i^up by −ι(i) (cf. 3◦), and the α-grade of the column to which E_k^up is added changes by m_k + ι(k) − 2 − ϕ′(m_k + k − 2). As we have remarked above, ϕ′(m_k + k) = m_k + 1 and therefore ϕ′(m_k + k − 2) = m_k − 2. Therefore the α-grade of the column of E_k^up changes by ι(k). Altogether the deformation 4◦ causes a change of α-grade by ι(k) − ι(i), with 0 ≤ ι(k) − ι(i) ≤ k, where 0 ≤ i < k ≤ r.
This deformation occurs only once, yet one has to take into account the deformation 4◦bis: (y/z)M_i^up ↦ N_k^up (Proposition 3.1c). In the column of yM_i^up this gives a change of the α-grade by −(m_i + ι(i) + 1) + ϕ′(m_i + i + 1) − 1 = −m_i − ι(i) − 1 + m_i + 2 − 1 = −ι(i). In the column of N_k^up the α-grade changes by m_k + ι(k) − 1 − ϕ′(m_k + k − 1) = m_k + ι(k) − 1 − (m_k − 1) = ι(k). Altogether the deformation 4◦bis gives a change of α-grade by ι(k) − ι(i), with 0 ≤ ι(k) − ι(i) ≤ k.
Now to the case m_k + k − 2 ≤ c + r. Due to the deformation 4◦ (resp. 4◦bis) the α-grade in the right domain of the pyramid changes by −ι(i). In any case the deformation 4◦ (resp. 4◦bis) gives a change of α-grade in the right domain of absolute value ≤ r.
5◦ M_i^up ↦ L · (z/y).
Removing M_i^up (resp. (y/z)M_i^up) gives a change of α-grade by −ι(i) in the corresponding column (cf. case 4◦). The changes in the left domain will be estimated later on.
The deformations 1◦–5◦ are mutually exclusive, i.e., there are at most r + 1 such deformations plus the two deformations 4◦bis and 5◦bis. The changes in the right domain can be estimated in the cases 1◦ and 2◦ by max(j, m_j − 1) ≤ r + m_{i+1}, where i runs through the numbers 0, · · · , r − 1. The absolute value of the change in the case 3◦ can be estimated by r, and the same is true for the deformations 4◦, 4◦bis, 5◦ and 5◦bis.
We now consider possible trinomials.
6◦ We assume there is a trinomial of the form 1.3. We want to determine the change of α-grade if N_j^up is replaced by N_k^up, where we start from a pyramid containing N_j^up instead of M_i^up. The changes of α-grade in the following diagram follow from the computation in 2◦.

                  M_i^up
    ι(j) − ι(i) ւ        ց ι(k) − ι(i)
         N_j^up   —— δ ——→   N_k^up

The change of α-grade is therefore 0 ≤ δ := ι(k) − ι(j) ≤ r.
7◦ The trinomial has the form 1.4.

                  M_i^up
    ι(j) − ι(i) ւ        ց ι(k) − ι(i) − m_k + 1
         N_j^up   —— δ ——→   N_k^down

(cf. 1◦ and 2◦.) Therefore δ = ι(k) − ι(j) − m_k + 1, and as in 1◦ we obtain the estimate |δ| ≤ max(k, m_k − 1) ≤ m_k + k < m_{i+1} + r.
8◦ The trinomial has the form (1.5).

                  M_i^up
    ι(j) − ι(i) ւ        ց −ι(i)
         N_j^up   —— δ ——→   L

(cf. 3◦.) Therefore δ = −ι(j) and |δ| ≤ r.
9◦ The trinomial has the form (1.6).

                  M_i^up
    ι(j) − ι(i) ւ        ց ι(k) − ι(i) (resp. −ι(i))
         N_j^up   —— δ ——→   N_k^up · (z/y)

(cf. 4◦.) It follows that δ = ι(k) − ι(j) (resp. δ = −ι(j)) and therefore |δ| ≤ r.
10◦ The trinomial has the form (1.7).

                  M_i^up
    ι(j) − ι(i) ւ        ց −ι(i)
         N_j^up   —— δ ——→   L · (z/y)

(cf. 5◦.) It follows that δ = −ι(j) and |δ| ≤ r.
N.B. Because of N_j^up · (y/z) = M_j^up the cases 9◦bis and 10◦bis do not occur.
Summarizing the cases 1◦–10◦ one sees that the total change of α-grade in the right domain has an absolute value ≤ (r + 1)r + 2r + Σ_{i=1}^{r} m_i. In order to formulate this result in a suitable manner, we have to introduce some notations.
We take up the decomposition (Z) of Section (3.1) and choose a standard basis of H^0(K(c)). Then we multiply the elements in this basis as well as the forms f_i by monomials in the variables x, y, z to obtain a basis of T(ρ)-semi-invariants of H^0(I(n)) with different initial monomials. By forming linear combinations one can achieve that the initial monomial of each semi-invariant does not appear in any other of these semi-invariants, i.e., one gets a standard basis. (Fig. 3.1 shows these initial monomials.) The set of all monomials which occur in this basis forms a pyramid, which is denoted by P. Here n ≫ 0, e.g. n ≥ d.
From each element of the basis we take a monomial whose coefficient is different from zero and such that the monomials are different from each other. Then we compute the α-grade of the vector space generated by these monomials. The maximum and the minimum of the α-grades which one obtains in this way have been denoted by max-α-grade(V) and min-α-grade(V), respectively. From such a sequence of monomials which gives the maximal α-grade (which gives the minimal α-grade, respectively) one chooses those monomials whose total degree in x and y is ≥ c + (r + 1). Then one forms the α-grade of the subspace generated by these monomials and denotes it by max-α-grade(P ∩ RB) (by min-α-grade(P ∩ RB), respectively). If one chooses from such sequences of monomials those monomials whose total degree in x and y is ≤ c + r, then max-α-grade(P ∩ LB) and min-α-grade(P ∩ LB) are defined analogously.
Of course this is valid in the case ρ_1 > 0, too, but the assumption ρ_2 > 0 is essential. We make the
Definition 6. A := max-α-grade(P ∩ RB) − min-α-grade(P ∩ RB).
Then we can formulate the result obtained above as
Conclusion 5.1. In the case I.1 one has
A ≤ r(r + 3) + Σ_{i=1}^{r} m_i.
N.B. If r = 0 one actually has A = 0.
Now to the case I.2 (cf. Proposition 3.1, 2nd case).
1◦ M_i^up ↦ N_j^down gives a change of α-grade of absolute value ≤ max(r, m_{i+1}), where 0 ≤ i ≤ r − 1 (cf. the case I.1).
2◦ M_i^up ↦ L ∈ L gives a change of α-grade in the right domain by −ι(i) (see above). Further possible deformations are yM_i^up ↦∈ L, y²M_i^up ↦∈ L, · · · , y^ν M_i^up ↦∈ L, as long as m_i + i + ν < m_{i−1} + (i − 1) − 1 (cf. Conclusion 3.2). This gives in the column of yM_i^up (of y²M_i^up, · · · , y^ν M_i^up, respectively) a change of α-grade by
−(m_i + ι(i) + 1) + [ϕ′(m_i + i + 1) − 1] = −(m_i + ι(i) + 1) + [m_i + 1] = −ι(i)
(by −(m_i + ι(i) + 2) + [ϕ′(m_i + i + 2) − 1] = −(m_i + ι(i) + 2) + [m_i + 2] = −ι(i),
· · · ,
−(m_i + ι(i) + ν) + [ϕ′(m_i + i + ν) − 1] = −(m_i + ι(i) + ν) + [m_i + ν] = −ι(i), respectively),
as long as m_i + i + ν < m_{i−1} + (i − 2) (see above). This procedure can be repeated at most c times, until L is full. As ι(r) ≤ r, the total change of α-grade in the right domain caused by deformations 2◦ has an absolute value ≤ cr. If A is defined as before, one gets
Conclusion 5.2. In the case I.2 one has
A ≤ r(r + c) + Σ_{i=1}^{r} m_i.
N.B. If r = 0, then one really has A = 0. For removing M_0^up = y^{m_0} does not change the α-grade of the column of z-degree 0 (this α-grade is equal to 0), or one has f_0 = x^{m_0}.
5.3. The case r ≥ 1.
We start with an ideal I of type r ≥ 0 such that ℓ_0 = y, and we refer to Proposition 3.1 again. The aim is to prove the inequality (!) in (4.3), where now V = H^0(I(n)). In the course of the following computations it will turn out that α-grade(V) is independent of n, if n is sufficiently large, e.g. if n ≥ d.
If one simply writes α-grade(I) instead of α-grade(H^0(I(n))), where n is sufficiently large, then one has to show:
(!) Q(m_0 − 1) + min-α-grade(I) > max-α-grade(I).
We orient ourselves by Fig. 5.4. From the Remarks 4.4, 4.13, 4.14 and Proposition 4.1 it follows that
min-α-grade(LB ∩ P) ≥ ι(r + 1)[\binom{c+1}{2} − c]
max-α-grade(LB ∩ P) ≤ (c − 1)² + ι(r + 1)[\binom{c+1}{2} − (c − s − 1)]
where s + 1 = total number of all deformations M_i^up ↦∈ L, · · · , y^ν M_i^up ↦∈ L, even with different indices i (s = −1 means there are no such deformations). For proving (!) it is sufficient to show
(*) Q(m_0 − 1) > (c − 1)² + (s + 1) · ι(r + 1) + A,
where A = max-α-grade(P ∩ RB) − min-α-grade(P ∩ RB) by definition (cf. Section 5.2 for the notations).
N.B. If c = 0 there are no deformations into the left domain of the pyramid. Therefore the inequality (*) reduces to
(*bis) Q(m_0 − 1) > A.
These statements are independent of whether ρ_1 < 0 (Case I) or ρ_1 > 0 (Case II). Unfortunately one has to distinguish these two cases in the following estimates, and we start with Case I.1 (cf. Proposition 3.1).
Because of 0 ≤ s + 1 ≤ c, 1 ≤ ι(r + 1) ≤ r + 1 and the estimates of A above (as well as Lemma 2.8) it is sufficient to show:
\binom{m_0+1}{2} − (c + Σ_{i=0}^{r} m_i) > (c − 1)² + c(r + 1) + r(r + 3) + Σ_{i=1}^{r} m_i
(5.1) ⇐⇒ (1/2) m_0(m_0 − 1) > c² + cr + r(r + 3) + 1.
The case r = 0 has to be treated separately (see below, Section 5.4), so that we can assume r ≥ 1. If c = 0, then m_0 ≥ 2^{r+1} (Lemma 2.8), and (5.1) follows from 2^r(2^{r+1} − 1) > r(r + 3) + 1, which inequality is true if r ≥ 1. Therefore we can assume c > 0. Because of m_0 ≥ 2^r(c + 2) the inequality follows from 2^{r−1}(c + 2) · 2^r(c + 1) > c² + cr + r(r + 3) + 1 ⇐⇒ 2^{2r−1}(c² + 3c + 2) > c² + cr + r(r + 3) + 1, which is true if r ≥ 1 and c > 0.
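Both closing inequalities are easy to confirm numerically over a large range (our own check, not part of the text):

    # c = 0:  2^r (2^{r+1} - 1) > r(r + 3) + 1            for r >= 1
    # c > 0:  2^{2r-1}(c^2 + 3c + 2) > c^2 + cr + r(r+3) + 1
    for r in range(1, 20):
        assert 2**r * (2**(r + 1) - 1) > r * (r + 3) + 1
        for c in range(1, 60):
            assert 2**(2*r - 1) * (c*c + 3*c + 2) > c*c + c*r + r*(r + 3) + 1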
Conclusion 5.3. In the case I.1 the inequality (*) is valid, if r ≥ 1.
Now to the case I.2. Because of Conclusion 5.2 one has to replace r(r + 3) by r(r + c) in the inequality above, i.e. one has to show that 2^{2r−1}(c² + 3c + 2) > c² + 2rc + r² + 1 if r ≥ 1. This is true if c ≥ 0.
Conclusion 5.4. In the case I.2 the inequality (*) is valid, if r ≥ 1.
5.4. The case r = 0.
As always we suppose g*(ϕ) > g(d). As ℓ_0 = y and ρ_1 < 0 by assumption, one has f_0 = x^{m_0} (cf. Fig. 5.5). Therefore there are no deformations into L, i.e. A = 0, and (*) reads Q(m_0 − 1) > (c − 1)². If for simplicity one writes m instead of m_0, then one has to show
(5.2) m(m − 1) > 2c² − 2c + 2.
Now c = colength(K), K an ideal of type (−1), and the following cases can occur:
1. If ψ is the Hilbert function of K, then g*(ψ) ≤ g(c). In particular one has c ≥ 5, and by Corollary 2.2 m ≥ 2c + 1, from which (5.2) follows.
2. One has c ≤ 4. If c = 0, then I is monomial and there are no deformations at all. It follows that (*bis) is fulfilled.
A little consideration shows that because of m ≥ c + 2 the inequality (5.2) is fulfilled in the cases 1 ≤ c ≤ 4, too; indeed (c + 2)(c + 1) > 2c² − 2c + 2 ⇔ 5c > c², i.e. c < 5. Thus (*) and (*bis) are proved in the case r = 0, also.
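A quick check of (5.2) at the minimal admissible values of m (our own verification, not part of the text): m = c + 2 for 1 ≤ c ≤ 4 and m = 2c + 1 for c ≥ 5.

    for c in range(1, 5):
        m = c + 2
        assert m * (m - 1) > 2*c*c - 2*c + 2
    for c in range(5, 60):
        m = 2*c + 1
        assert m * (m - 1) > 2*c*c - 2*c + 2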
Using Conclusion 5.4 gives
Proposition 5.1. Assume that I has the type r ≥ 0 and has y-standard form; assume ρ_1 < 0 and ρ_2 > 0. Then (*) and (*bis), respectively, are fulfilled.
N.B. Hence the inequality (!) follows (see the corresponding argumentation in 5.3).
Fig. 5.1 [diagram: the monomials M_k^up, N_k^up, E_k^up in the columns of z-degrees m_k + k down to m_k + k − 2].
Fig. 5.2 [diagram of the function ϕ′, with ϕ′(m_i + i) = m_i + 1, over the columns 0, 1, . . . , m_2 + 2, . . . , m_1 + 1, . . . , m_0].
Fig. 5.3 [diagram of the pyramid: the column with c + 1 monomials is complete; left domain LB (total degree ≤ c + r in x and y), right domain RB (total degree ≥ c + r + 1); #{monomials in the left domain} = h^0(K(c − 1)) = \binom{c+1}{2} − c, as K has the Hilbert polynomial \binom{n+2}{2} − c; #{monomials in the right domain} = Q(m_0 − 1) − #{monomials in the left domain} = \binom{m_0+1}{2} − d − (\binom{c+1}{2} − c)].
Fig. 5.4a, Fig. 5.4b [diagrams of the columns · · · m_1 + 1 · · · m_0].
Fig. 5.5 (r = 0) [diagram: one column filled with monomials, #L = c, boundary between LB and RB at c, c + 1].
CHAPTER 6
Estimates of the α-grade in the case ρ1 > 0, ρ2 > 0 and r ≥ 1.
We refer to Proposition 3.2 in (3.6) and take over the notations from there. As was remarked in (5.1), one can compute the changes of α-grade caused by the single deformations f_i separately.
6.1. Estimates in the case II.
First we compute the changes of the α-grade if we replace the initial monomial M_i of f_i by another monomial occurring in f_i (cf. 5.1):
1◦ M_i^down ↦ N_j^up, if 0 ≤ i < j ≤ r.
In the column of M_i^down there is a change of the α-grade by −ι(i) + ϕ′(m_i + i) − 1 = m_i − ι(i). In the column of N_j^up there is a change of the α-grade by m_j + ι(j) − 1 − ϕ′(m_j + j − 1) = m_j + ι(j) − 1 − (m_j − 1) = ι(j). Therefore the deformation 1◦ gives a change of the α-grade by m_i + ι(j) − ι(i), 0 ≤ i < j ≤ r.
2◦ M_i^down ↦ N_j^down, 0 ≤ i < j ≤ r.
In the column of M_i^down there is a change of the α-grade by m_i − ι(i); in the column of N_j^down there is a change of α-grade by ι(j) − ϕ′(m_j + j − 1) = ι(j) − m_j + 1. Therefore the deformation 2◦ gives a change of α-grade whose absolute value is |ι(j) − ι(i) + m_i − m_j + 1| ≤ max(|m_i + ι(j)|, |m_j + ι(i) − 1|) ≤ m_i + j, where 0 ≤ i < j ≤ r.
3◦ M_i^down ↦ L ∈ L.
In the column of M_i^down there is a change of α-grade by m_i − ι(i), with 0 < m_i − ι(i) ≤ m_i + i, 0 ≤ i ≤ r. The change of α-grade in the left domain can be estimated as in Case I.
4◦ M_i^down ↦ E_k^down = M_k^down · (z/x)², 0 ≤ i < k ≤ r.
At first we consider the case that E_k^down belongs to the right domain, i.e. m_k + k − 2 ≥ c + r + 1. The change of α-grade in the column of M_i^down is m_i − ι(i); the change of α-grade in the column of E_k^down is ι(k) − ϕ′(m_k + k − 2) = ι(k) − (m_k − 2). Therefore the absolute value of the change of α-grade by the deformation 4◦ is |m_i − ι(i) + ι(k) − m_k + 2| ≤ max(|m_i + ι(k)|, |m_k + ι(i) − 2|) = m_i + ι(k) ≤ m_i + k, if 0 ≤ i < k ≤ r. This deformation can occur only once, yet one has to take into account the deformation
4◦bis: (x/z)M_i^down ↦ N_k^down (cf. Proposition 3.2c).
In the column of xM_i^down this gives a change of the α-grade by −ι(i) + ϕ′(m_i + i + 1) − 1 = m_i − ι(i) + 1. In the column of N_k^down the α-grade changes by ι(k) − ϕ′(m_k + k − 1) = ι(k) − m_k + 1. Thus the absolute value of the change of α-grade in the right domain due to 4◦bis is |m_i − ι(i) + 1 + ι(k) − m_k + 1| ≤ max(|m_i + ι(k)|, |m_k + ι(i) − 2|) = m_i + ι(k) ≤ m_i + k, where 0 ≤ i < k < r. This deformation occurs only once.
Now to the case that m_k + k − 2 ≤ c + r. Removing M_i^down (resp. (x/z)M_i^down) gives a change of α-grade by m_i − ι(i) (by m_i − ι(i) + 1, respectively), whose absolute value is bounded by m_i + r.
5◦ M_i^down ↦ L · (z/x).
Removing M_i^down (resp. (x/z)M_i^down) causes a change of α-grade of the corresponding column by m_i − ι(i) (resp. by m_i − ι(i) + 1), which is estimated by m_i + i (resp. m_i + i + 1), where 0 ≤ i ≤ r. The deformation 5◦ (resp. 5◦bis) can occur only once. The changes in the left domain will be estimated later on.
The deformations 1◦–5◦ are mutually exclusive, i.e. there are at most r + 1 such deformations plus the two deformations 4◦bis and 5◦bis. The changes of α-grade in the right domain in the cases 1◦–3◦ have an absolute value ≤ m_i + r, 0 ≤ i ≤ r. The same estimate is valid for the deformations 4◦ and 4◦bis, even if E_k belongs to the left domain, as we have assumed r ≥ 1. As for the deformations 5◦ (resp. 5◦bis), we estimate the change of the α-grade by m_i + r (resp. m_i + r + 1).
We now consider possible trinomials.
6◦ We assume there is a trinomial of the form 1.3. Similarly as in the Case I in Chapter 5, we have a diagram

                  M_i^down
          γ(j) ւ          ց γ(k)
       N_j^down   —— δ ——→   N_k^down

where γ(j) := m_i − m_j − ι(i) + ι(j) + 1 and γ(k) := m_i − m_k − ι(i) + ι(k) + 1 (cf. 2◦). Therefore δ = m_j − m_k − ι(j) + ι(k). It follows that |δ| ≤ max(|m_j + ι(k)|, |m_k + ι(j)|) ≤ m_i + r.
7◦ The trinomial has the form 1.4.

                  M_i^down
          γ(j) ւ          ց β
       N_j^down   —— δ ——→   N_k^up

where β := m_i − ι(i) + ι(k) (cf. 1◦ and 2◦). It follows that δ = m_j − ι(j) + ι(k) − 1 and |δ| ≤ m_i + r.
8◦ The trinomial has the form 1.5.

                  M_i^down
          γ(j) ւ          ց β
       N_j^down   —— δ ——→   L

where β := m_i − ι(i) (cf. 3◦). It follows that δ = m_j − ι(j) − 1 and |δ| ≤ m_i + r.
9◦ The trinomial has the form 1.6.

                  M_i^down
          γ(j) ւ          ց β
       N_j^down   —— δ ——→   N_k^down · (z/x)

where β := m_i − m_k − ι(i) + ι(k) + 2, respectively β := m_i − ι(i) (cf. 4◦). It follows that δ = m_j − m_k − ι(j) + ι(k) + 1 (resp. δ = m_j − ι(j) − 1) and |δ| ≤ max(|m_j − ι(k)|, |m_k + ι(j) − 1|) ≤ m_j + r.
10◦ The trinomial has the form 1.7.

                  M_i^down
          γ(j) ւ          ց m_i − ι(i)
       N_j^down   —— δ ——→   L · (z/x)

(cf. 5◦). It follows that δ = m_j − ι(j) − 1 and |δ| < m_i + r.
Notabene. Because of N_j^down · (x/z) = M_j^down the cases 9◦bis and 10◦bis do not occur.
Summarizing the cases 1◦–10◦ one sees that the total change of α-grade in the right domain has an absolute value ≤ (r + 1)r + Σ_{i=0}^{r} m_i. If one estimates the changes in the cases 4◦bis and 5◦bis by m_i + r and m_i + r + 1, respectively, one obtains A ≤ r(r + 3) + 1 + 3m_0 + Σ_{i=1}^{r} m_i. As we have assumed that r ≥ 1, we have m_0 ≥ c + 2 + m_r + · · · + m_1 (Lemma 2.8) and we obtain
Conclusion 6.1. If r ≥ 1 is assumed, in the case II.1 one has A ≤ 4m_0 + r(r + 3) − c − 1.
Now we come to the case II.2 (cf. Proposition 3.2, 2nd case).
1◦ M_i^down ↦ N_j^up again gives a change of the α-grade in the right domain whose absolute value is ≤ m_i + r, 0 ≤ i ≤ r − 1 (see above).
2◦ M_i^down ↦ L ∈ L gives a change of α-grade in the right domain by m_i − ι(i), 0 ≤ i ≤ r (see above). Further possible deformations are xM_i^down ↦∈ L, x²M_i^down ↦∈ L, · · · , x^ν M_i^down ↦∈ L, as long as m_i + i + ν < m_{i−1} + (i − 2) (cf. Conclusion 3.2). This gives in the column of xM_i^down (of x²M_i^down, · · · , x^ν M_i^down, respectively) a change of α-grade by −ι(i) + ϕ′(m_i + i + 1) − 1 = −ι(i) + (m_i + 2) − 1 = m_i − ι(i) + 1 (by −ι(i) + ϕ′(m_i + i + 2) − 1 = m_i − ι(i) + 2, · · · , −ι(i) + ϕ′(m_i + i + ν) − 1 = m_i − ι(i) + ν, respectively), as long as m_i + i + ν < m_{i−1} + (i − 2) and ν ≤ c − 1.
Remark 6.1. One has |m_i − ι(i)| ≤ m_0 for all 0 ≤ i ≤ r.
Proof. The inequality m_i − ι(i) ≤ m_i ≤ m_0 is true, and ι(i) − m_i ≤ i − m_i ≤ r − m_i ≤ r ≤ m_0 is true if r = 0. If r ≥ 1 one has m_0 ≥ c + 2 + m_r + · · · + m_1 > r (Lemma 2.8).
From this we conclude: replacing M_i^down, xM_i^down, · · · , x^{ν(i)} M_i^down by monomials in L, even with different indices i, as long as ν(i) ≤ c − 1, gives a change of α-grade in the right domain whose absolute value is ≤ Σ_{j=0}^{ν(i)} (m_0 + j), because |m_i − ι(i) + j| ≤ m_0 + j by the remark above. One gets A ≤ Σ_{i=0}^{r−1} (m_i + r) + Σ_{i=0}^{r} Σ_{j=0}^{ν(i)} (m_0 + j). As (ν(0) + 1) + · · · + (ν(r) + 1) ≤ c, it follows that A ≤ Σ_{i=0}^{r−1} (m_i + r) + Σ_{j=0}^{c−1} (m_0 + j). As m_0 ≥ c + 2 + m_r + · · · + m_1 and m_r ≥ c + 2, we obtain Σ_{i=1}^{r−1} m_i ≤ m_0 − 2(c + 2). This estimate is valid if r ≥ 1. In the case r = 0 one only has the deformations M_0^down ↦∈ L, · · · , x^s M_0^down ↦∈ L, and s can be estimated as in Proposition 3.2. If M_0^down occurs, the α-grade in the columns of M_0^down, · · · , x^s M_0^down increases by m_0, · · · , m_0 + s, respectively.
Conclusion 6.2. In the case II.2 one has A ≤ (c + 1)m_0 + r² − 2(c + 2) + \binom{c}{2}, if r ≥ 1. If r = 0, if I has y-standard form and κ := reg(K), then A ≤ (s + 1)m_0 + \binom{s+1}{2}, where s ≤ κ/ρ_2 − m_0(1/ρ_2 − 1/(ρ_1 + ρ_2)).
6.2. The case r ≥ 2.
We recall that we started from an ideal I of type r ≥ 0 with y-standard form, and the aim was to show the inequalities
(*) Q(m_0 − 1) > (c − 1)² + (s + 1)ι(r + 1) + A
and
(*bis) Q(m_0 − 1) > A,
respectively (cf. Section 5.3).
At first we treat the case II.1, where A ≤ 4m_0 + r(r + 3) − c − 1, if r ≥ 1.
Auxiliary Lemma 1. If I has the type r = 2 (the type r = 1, respectively), then m_0 ≥ 14 (m_0 ≥ 7, respectively).
Proof. We use the results of Lemma 2.8 and write in the case r = 2: I = I_0 = yI_1(−1) + f_0 O_{P²}(−m_0), I_1 = ℓ_1 I_2(−1) + f_1 O_{P²}(−m_1), I_2 = ℓ_2 K(−1) + f_2 O_{P²}(−m_2). I_2 has the type 0, therefore colength(I_2) = c + m_2 ≥ 5, and it follows that m_2 ≥ 5 − c. As I_1 has the type 1, we get m_1 ≥ m_2 + c + 2 ≥ 7. Because of m_0 ≥ c + 2 + m_2 + m_1 it follows that m_0 ≥ c + 2 + 5 − c + 7 = 14.
To begin with, let c = 0. Then (*bis) reads (1/2) m_0(m_0 + 1) − (m_r + · · · + m_0) > 4m_0 + r(r + 3) − 1. Because of m_0 − (c + 2) ≥ Σ_{i=1}^{r} m_i it is sufficient to prove
(6.1) m_0(m_0 − 11) > 2r(r + 3) − 6.
If r = 2 (respectively r = 3) the right side of (6.1) is equal to 14 (equal to 30, respectively). As m_0 ≥ 14 by the Auxiliary Lemma 1 (m_0 ≥ 2³(c + 2) = 16 by Lemma 2.8, respectively), (6.1) is true in these cases.
Let r ≥ 4. Then m_0 ≥ 2^{r+1}, and (6.1) follows from 2^r(2^{r+1} − 11) > r(r + 3). Now 2^r > r if r ≥ 2, and 2^{r+1} − 11 > r + 3 is equivalent to 2^{r+1} > r + 14, which is true if r ≥ 4.
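The numerical steps behind (6.1) can be confirmed directly (our own check, not part of the text):

    # r = 2: m0 >= 14, right side 2r(r+3) - 6 = 14;  r = 3: m0 >= 16, right side 30
    assert 14 * (14 - 11) > 14 and 16 * (16 - 11) > 30
    # r >= 4: 2^{r+1} > r + 14, hence 2^r (2^{r+1} - 11) > r(r + 3)
    for r in range(4, 30):
        assert 2**(r + 1) > r + 14
        assert 2**r * (2**(r + 1) - 11) > r * (r + 3)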
Thus we can assume c ≥ 1 in the following. Because of 1 ≤ ι(r + 1) ≤ r + 1, 0 ≤ s + 1 ≤ c, Σ_{i=1}^{r} m_i ≤ m_0 − (c + 2), the inequality (*) will follow from:
(6.2) (1/2) m_0(m_0 + 1) − (2m_0 − 2) > (c − 1)² + c(r + 1) + 4m_0 + r(r + 3) − c − 1
⇐⇒ m_0(m_0 − 11) > 2c² + 2(r − 2)c + 2r(r + 3) − 4.
Because of m_0 ≥ 2^r(c + 2) it suffices to show:
(6.3) 2^r(c + 2)[2^r c + 2^{r+1} − 11] > 2c² + 2(r − 2)c + 2r(r + 3) − 4.
If r = 2 this inequality reads 4(c + 2)(4c − 3) > 2c² + 16 ⇔ 7c² + 10c − 20 > 0, and this is true if c ≥ 2.
In the case r = 2, c = 1, the inequality (6.2) reads m_0(m_0 − 11) > 18, which is true because m_0 ≥ 14 (cf. Auxiliary Lemma 1). Therefore we can now suppose without restriction r ≥ 3, c ≥ 1. But 2^{r+1} > 11, and thus (6.3) follows from 2^r(c + 2) · 2^r c > 2c² + 2cr + 2r(r + 3), which is equivalent to:
(6.4) (2^{2r} − 2)c² + (2^{2r+1} − 2r)c > 2r(r + 3).
The left side of (6.4) is a monotone function of c, and if c = 1, then (6.4) reads 2^{2r} − 2 + 2^{2r+1} − 2r > 2r(r + 3) ⇔ 2^{2r} + 2^{2r+1} > 2r² + 8r + 2 ⇔ 2^{2r−1} + 2^{2r} > r² + 4r + 1. This is true if r ≥ 3. Summarizing all subcases we obtain
Conclusion 6.3. In the case II.1 the inequality (∗) is fulfilled for all r ≥ 2.
We now consider the case II.2 and assume r ≥ 2. With the help of Conclusion 6.2 and the estimates Σ_{i=1}^{r} m_i ≤ m_0 − (c + 2), s + 1 ≤ c, ι(r + 1) ≤ r + 1 one sees that (*) follows from
(1/2) m_0(m_0 + 1) − (2m_0 − 2) ≥ (c − 1)² + c(r + 1) + (c + 1)m_0 + r² − 2(c + 2) + \binom{c}{2}.
A simple computation shows that this is equivalent to
(6.5) m_0(m_0 − 2c − 5) > 3c² + 2cr − 7c − 10 + 2r².
Now we have m_0 ≥ 2^r(c + 2) ≥ 4(c + 2). If c = 0, then (6.5) follows from 2^{r+1} > −10 + r², which is true for all r ≥ 2. Therefore we can assume c > 0. Then (6.5) follows from 2^r(c + 2)(2c + 3) > 3c² + 2cr + 2r² ⇔ 2^r(2c² + 7c + 6) > 3c² + 2cr + 2r², which is true for all c ≥ 1 and r ≥ 2. We get
Conclusion 6.4. In the case II.2 the inequality (*) is fulfilled for all r ≥ 2.
6.3. The case r = 1.
Then I = yI_1(−1) + f_0 O_{P²}(−m_0), I_1 = ℓ_1 K(−1) + f_1 O_{P²}(−m_1), where I_1 has the type 0 and K has the type −1.
6.3.1. We start with the case II.1 of Proposition 3.2.
Subcase 1: ℓ_1 = y. Then one has the situation shown in Figure 6.1 and there are the following possibilities (case II 1.5 and II 1.7, respectively):
1◦ f_0 = x^{m_0} + αN_1^down + βL, L ∈ L a monomial such that (x, y)L ⊂ ℓK(−2).
2◦ f_0 = x^{m_0} + αN_1^down + βL · (z/x), L ∈ L a monomial such that (x, y)L ⊂ ℓK(−2).
We treat the case 1◦. At first, one has the possibility x^{m_0} ↦ N_1^down. The α-grade of the column of x^{m_0} changes by m_0; the α-grade of the column of N_1^down changes by ι(1) − ϕ′(m_1) = 1 − (m_1 − 1) = 2 − m_1. Therefore the change of α-grade in the right domain is m_0 − m_1 + 2. The deformation x^{m_0} ↦ L gives a change of α-grade by m_0 in the right domain. As the order of f_0 is equal to 0 in the case II 1.5 (cf. Proposition 3.2c), there are no other changes of α-grade caused by f_0.
By Proposition 3.2 again, it follows that f_1 has the form of case II.1.5, where α = 0, and the order of f_1 is equal to 0. The deformation M_1^down ↦∈ L gives a change of α-grade by −ι(1) + ϕ′(m_1 + 1) − 1 = −ι(1) + m_1 = m_1 − 1. Thus in the case 1◦ one has A ≤ max(m_0 − m_1 + 2, m_0) + m_1 − 1 = m_0 + m_1 − 1, because m_1 ≥ c + 2 (Lemma 2.4).
2◦ At first, f_0 defines a deformation as in the case 1◦ and gives a change of α-grade ≤ max(m_0, m_0 − m_1 + 2) = m_0 in the right domain. But as f_0 has the order ≤ 1 by Proposition 3.2c, there is still the possibility x^{m_0+1} ↦∈ L, which gives a change of α-grade by m_0 + 1 in the right domain. As f_1 again has the same form as in the case 1◦, it follows that A ≤ 2m_0 + m_1.
Because of s + 1 ≤ c the inequality (*) follows from
(1/2) m_0(m_0 + 1) − (c + m_0 + m_1) > (c − 1)² + 2c + 2m_0 + m_1.
As m_1 ≤ m_0 − (c + 2), this inequality follows from m_0(m_0 − 9) > 2c² − 2c − 6. Because of m_0 ≥ 2c + 4 it suffices to show (2c + 4)(2c − 5) > 2c² − 2c − 6 ⇔ 2c² > 14. This inequality is fulfilled if c ≥ 3. The cases c = 0, 1, 2 will be treated later on (see below).
Subcase 2: ℓ_1 = x. Then from Figure 6.2 we conclude that only the second case of Proposition 3.2 can occur, and we have
Conclusion 6.5. In the case II.1 the inequality (*) is fulfilled except if ℓ_0 = y, ℓ_1 = y and c ∈ {0, 1, 2}.
6.3.2. We now treat the case II.2 of Proposition 3.2.
Subcase 1: ℓ_1 = y. Figure 6.1 shows that only the case II.2.3 is possible. Then there are s + 1 deformations M_0 ↦∈ L, · · · , x^s M_0 ↦∈ L (and t + 1 deformations M_1 ↦∈ L, · · · , x^t M_1 ↦∈ L, respectively). The changes of α-grade in the columns of M_0, · · · , x^s M_0 (of M_1, · · · , x^t M_1, respectively) are m_0, · · · , m_0 + s (and m_1 − 1, m_1, · · · , m_1 + t − 1, respectively). Here s and t fulfil the inequalities of Proposition 3.2d. Thus the total change of α-grade in the right domain fulfils the inequality A ≤ (s + 1)m_0 + \binom{s+1}{2} + (t + 1)m_1 + \binom{t}{2} − 1, where
s ≤ κ/ρ_2 − m_0(1/ρ_2 − 1/(ρ_1 + ρ_2)) + 1/ρ_2 and
t ≤ κ/ρ_2 − m_1(1/ρ_2 − 1/(ρ_1 + ρ_2)).
Estimate of s: Because of κ ≤ c and m_0 ≥ 2(c + 2) one obtains:
s ≤ c/ρ_2 − 2(c + 2)(1/ρ_2 − 1/(ρ_1 + ρ_2)) + 1/ρ_2
= (c + 1)/ρ_2 − 2(c + 1)(1/ρ_2 − 1/(ρ_1 + ρ_2)) − 2(1/ρ_2 − 1/(ρ_1 + ρ_2))
= (c + 1)(2/(ρ_1 + ρ_2) − 1/ρ_2) − 2(1/ρ_2 − 1/(ρ_1 + ρ_2)).
We first consider the possibility s ≥ 0. This implies 2/(ρ_1 + ρ_2) − 1/ρ_2 > 0, i.e. ρ_2 > ρ_1.
Let f_a(x) = 2/(x + a) − 1/x; here x corresponds to ρ_2 and a corresponds to ρ_1, therefore 1 ≤ a < x. One has f_a′(x) = −2/(x + a)² + 1/x² < 0 ⇔ 2x² > (x + a)² ⇔ √2·x > x + a ⇔ x > a(1 + √2). It follows that f_a(x) attains its maximum at x = a(1 + √2), and f_a(a(1 + √2)) = 0.171···/a. Therefore s ≤ 0.172(c + 1).
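Numerically (our own check, not part of the text): the maximum of f_a is (3 − 2√2)/a = 0.17157···/a, attained at x = a(1 + √2), which justifies the constant 0.172.

    from math import sqrt

    def f(x, a=1.0):
        return 2.0 / (x + a) - 1.0 / x

    x_star = 1.0 + sqrt(2.0)                                 # argmax for a = 1
    assert abs(f(x_star) - (3.0 - 2.0*sqrt(2.0))) < 1e-12    # 0.17157...
    assert all(f(x_star) >= f(0.5 + 0.01*k) for k in range(1, 3000))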
Estimate of t: Because of m_1 ≥ c + 2 (cf. Lemma 2.4) one has
t ≤ c/ρ_2 − (c + 2)(1/ρ_2 − 1/(ρ_1 + ρ_2))
= c/ρ_2 − c(1/ρ_2 − 1/(ρ_1 + ρ_2)) − 2(1/ρ_2 − 1/(ρ_1 + ρ_2))
= c/(ρ_1 + ρ_2) − 2(1/ρ_2 − 1/(ρ_1 + ρ_2)).
Therefore t ≤ c/2, if ρ_2 ≤ ρ_1, and t ≤ c/3, if ρ_2 > ρ_1.
First possibility: ρ_2 ≤ ρ_1. Then s < 0, i.e. there are no deformations defined by f_0, and A ≤ (c/2 + 1)m_1 + \binom{c/2}{2} − 1.
Second possibility: ρ_1 < ρ_2. Then s ≤ 0.172(c + 1), t ≤ c/3 and A ≤ (0.172c + 1.172)m_0 + \binom{0.172c+1.172}{2} + (c/3 + 1)m_1 + \binom{c/3}{2} − 1.
As m_1 ≤ m_0 − (c + 2) (Lemma 2.8), one obtains the following estimates:
First possibility: A ≤ (c/2 + 1)[m_0 − (c + 2)] + c²/8 − 1 ⇒ A ≤ (0.5c + 1)m_0 − (3/8)c² − 2c − 3.
Second possibility: A ≤ (0.172c + 1.172)m_0 + 0.5(0.172c + 1.172)² + (c/3 + 1)[m_0 − (c + 2)] + c²/18 − 1 ⇒ A ≤ (0.51c + 2.172)m_0 − 0.25c² − 1.46c − 2.3.
As we have assumed that ℓ_0 = y, ℓ_1 = y, it follows that ι(r + 1) = ι(2) = 2. As mentioned above, only the case II.2.3 of Proposition 3.2 can occur. If c = 0, there are no deformations at all, so that one can assume c ≥ 1. Replacing s + 1 by c, one sees that it suffices to show:
(6.6) Q(m_0 − 1) > c² + 1 + A.
First possibility: A ≤ (0.5c + 1)m_1 + \binom{c/2}{2} − 1 (see above). One sees that it suffices to show:
(1/2) m_0(m_0 + 1) − (c + m_0 + m_1) > c² + 1 + (0.5c + 1)m_1 + (1/8)c² − 1 ⇐⇒ (1/2) m_0(m_0 + 1) − m_0 − (0.5c + 2)m_1 > (9/8)c² + c.
Because of m_1 ≤ m_0 − (c + 2) this follows from (1/2) m_0(m_0 + 1) − (0.5c + 3)m_0 + (0.5c + 2)(c + 2) > (9/8)c² + c ⇐⇒ m_0(m_0 − c − 5) > 1.25c² − 4c − 8.
As m_0 ≥ 2c + 4 this follows from (2c + 4)(c − 1) > 1.25c² − 4c − 8 ⇐⇒ 0.75c² + 6c + 4 > 0. This inequality is fulfilled for all c.
Second possibility: A ≤ (0.51c + 2.172)m_0 − 0.25c² − 1.46c − 2.3. Putting this into (6.6), one has to show m_0(m_0 + 1) − 2(c + m_0 + m_1) > 1.5c² − 2.92c − 2.6 + (1.02c + 4.344)m_0. Because of m_1 ≤ m_0 − (c + 2) this follows from m_0(m_0 + 1) − 2(2m_0 − 2) − (1.02c + 4.344)m_0 > 1.5c² − 2.92c − 2.6 ⇐⇒ m_0(m_0 − 1.02c − 7.344) > 1.5c² − 2.92c − 6.6.
As m_0 ≥ 2c + 4 this follows from (2c + 4)(0.98c − 3.344) > 1.5c² − 2.92c − 6.6 ⇐⇒ 0.46c² + 0.152c − 6.776 > 0.
This inequality is fulfilled if c > 3.676, and the cases c = 0, 1, 2, 3 will be treated later on in (6.3.3).
Subcase 2: ℓ_1 = x. Figure 6.2 shows that in Proposition 3.2 f_1 = M_1^up has to be a monomial and that for f_0 one of the two cases 2.1 or 2.3 can occur. We first treat the case 2.1, i.e., one has the deformation x^{m_0} ↦ N_1^up. The α-grade of the column of x^{m_0} changes by m_0 and the α-grade of the column of N_1^up changes by 1. Therefore A ≤ m_0 + 1. There are no further deformations in the right domain. Now in the inequality (*) one has s = 0 and ι(2) = 1, and therefore has to show:
(1/2) m_0(m_0 + 1) − (c + m_0 + m_1) > (c − 1)² + 1 + m_0 + 1.
Because of m_1 ≤ m_0 − (c + 2) this follows from m_0(m_0 − 5) > 2(c − 1)². As m_0 ≥ 2c + 4 this follows from (2c + 4)(2c − 1) > 2(c − 1)² ⇐⇒ 2c² + 10c − 6 > 0. This inequality is fulfilled if c ≥ 1. In the case c = 0 one has (*bis), i.e., one has to prove Q(m_0 − 1) > A, i.e. to prove
(1/2) m_0(m_0 + 1) − (c + m_1 + m_0) > m_0 + 1.
One sees that this follows from m_0(m_0 − 5) > −2. As one has m_0 ≥ 7 by Auxiliary Lemma 1 in 6.2, this inequality is fulfilled.
Now we treat the case 2.3, that means f_0 = M_0 + F as in Proposition 3.2. The only possible deformations are M_0 ↦∈ L, xM_0 ↦∈ L, · · · , x^s M_0 ↦∈ L. As c = 0 implies that I is monomial, we can assume c > 0.
We again distinguish two cases:
First possibility: ρ_2 ≤ ρ_1. Then s + 1 = 0, i.e. there is no deformation at all, and therefore one has A = 0. Then (*) reads (1/2) m_0(m_0 + 1) − (c + m_0 + m_1) > (c − 1)². Because of m_1 ≤ m_0 − (c + 2) this follows from m_0(m_0 − 3) > 2c² − 4c − 2. Because of m_0 ≥ 2c + 4 it suffices to show (2c + 4)(2c + 1) > 2c² − 4c − 2 ⇐⇒ 2c² + 14c + 6 > 0, which is true for all c.
Second possibility: ρ_1 < ρ_2. Then s ≤ 0.172(c + 1), as was shown above. We have already remarked at the beginning of (6.3.2) that the s + 1 deformations noted above give a change of α-grade A ≤ (s + 1)m_0 + \binom{s+1}{2}. It follows that A ≤ (0.172c + 1.172)m_0 + 0.5(0.172c + 1.172)².
In order to prove (*) it suffices to show Q(m_0 − 1) > (c − 1)² + (0.172c + 1.172)[1 + m_0 + 0.5·(0.172c + 1.172)]. Because of m_1 ≤ m_0 − (c + 2) this follows from m_0(m_0 + 1) − 2(2m_0 − 2) > 2(c − 1)² + (0.344c + 2.344)(m_0 + 0.086c + 1.586) ⇐⇒ m_0(m_0 − 0.344c − 5.344) > 2.029584c² − 3.252832c + 1.717584.
Now c ≥ 1 and m_0 ≥ 2c + 4, so that it suffices to show (2c + 4)(1.656c − 1.344) > 2.029584c² − 3.252832c + 1.717584 ⇐⇒ 1.282416c² + 7.188832c − 7.093584 > 0.
This is true if c ≥ 1. Therefore the inequality (*) is fulfilled in Subcase 2.
Conclusion 6.6. In the case II, if r = 1, then (*) is fulfilled except in the case II.1 if ℓ_0 = y, ℓ_1 = y and c ∈ {0, 1, 2}, or in the case II.2 if ℓ_0 = y, ℓ_1 = y and c ∈ {0, 1, 2, 3}.
6.3.3. The cases 0 ≤ c ≤ 4.
c = 0: From Figure 6.3 it follows that M_0 ↦ N_1 is the only possible deformation, which is the case II.1. The change of α-grade is A = m_0 + m_1 − 2(m_1 − 1) = m_0 − m_1 + 2, and the inequality (*bis) reads Q(m_0 − 1) > m_0 − m_1 + 2 ⇐⇒ m_0(m_0 − 3) > 4, and this is true, as m_0 ≥ 7 by Auxiliary Lemma 1 in 6.2.
c = 1: At first, we note that K is monomial. From Figure 6.4 it follows that three deformations can occur, with the “total” change of α-grade B:
1. M_0 ↦ L, B = m_0 + 2;
2. M_0 ↦ N_1 and M_1 ↦ L, B = (m_0 − m_1 + 2) + (2m_1 − m_1 − 1) + 2 = m_0 + 3;
3. M_1 ↦ L, B = m_1 + 1.
Then (*bis) follows from Q(m_0 − 1) > m_0 + 3 ⇐⇒ (1/2) m_0(m_0 + 1) − (1 + m_0 + m_1) > m_0 + 3. As m_1 ≤ m_0 − 3 (Lemma 2.8), one sees that it suffices to show m_0(m_0 − 5) > 2. This is true because of m_0 ≥ 2(c + 2) = 6 (loc. cit.).
c = 2: Then K is monomial, and the possible deformations are shown in Figures 6.5a, b. If the case II.1 occurs, then f_0 = M_0 + αN_1 + βF and f_1 = M_1 + γL. One sees that the change of α-grade in the right domain becomes maximal if the deformations M_0 ↦ F and M_1 ↦ L occur. Therefore A ≤ m_0 + 2m_1 − m_1 − 1 = m_0 + m_1 − 1. The inequality (*) follows from Q(m_0 − 1) > 1 + 2 · 2 + (m_0 + m_1 − 1) ⇐⇒ (1/2) m_0(m_0 + 1) − (2 + m_0 + m_1) > 4 + m_0 + m_1 ⇐⇒ (1/2) m_0(m_0 + 1) − 2m_0 − 2m_1 > 6. Because of m_1 ≤ m_0 − (c + 2) = m_0 − 4 it suffices to show (1/2) m_0(m_0 + 1) − 4m_0 > −2 ⇐⇒ m_0(m_0 − 7) > −4. This is true as m_0 ≥ 2c + 4 = 8.
If Figure 6.5a occurs, then only the case 1◦ of (6.3.1) is possible and the change of α-grade in the right domain is A ≤ m_0 + m_1 − 1. One gets the same estimate of A if Figure 6.5a occurs in the case II.2, because xF and xL are elements of K and thus the order of f_0 and of f_1 is equal to 0. Then (*) reads Q(m_0 − 1) > 1 + 2 · 2 + (m_0 + m_1 − 1), and this is true as was shown just before.
If Figure 6.5b occurs, then the order of f_0 can be equal to 1, but it is not possible that f_1 = M_1 + αF, where α ≠ 0, because K is monomial and F ∉ K, whereas f_1 ∈ K by Lemma 2.6.
We want to sharpen the estimate of (6.3). M_0 ↦ F and xM_0 ↦ L yield A = 2m_0 + 1; M_0 ↦ F, M_1 ↦ L yield A = m_0 + m_1 − 1; and M_0 ↦ N_1, M_1 ↦ L yield A = (m_0 − m_1 + 2) + m_1 − 1. Therefore A ≤ 2m_0 + 1, and (*) reads (1/2) m_0(m_0 + 1) − (2 + m_0 + m_1) > 1 + 2 · 2 + 2m_0 + 1. Because of m_1 ≤ m_0 − 4 it suffices to show m_0(m_0 − 7) > 8.
As m_0 ≥ 2c + 4 = 8, the case c = 2, m_1 = 4, m_0 = 8 remains (cf. Fig. 6.5c). One sees that the deformations M_0 ↦ F, xM_0 ↦ L give the maximal change of “total” α-grade B = (8 + 9) + 1 + 2 = 20. As Q(7) = \binom{9}{2} − 14 = 22, the inequality (!) in (5.3) is fulfilled.
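The arithmetic of this exceptional case is easily checked (our own verification, not part of the text):

    from math import comb
    Q7 = comb(9, 2) - 14          # Q(7) = 36 - 14 = 22
    B  = (8 + 9) + 1 + 2          # maximal "total" change of alpha-grade
    assert Q7 == 22 and B == 20 and Q7 > B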
c = 3: Because of Conclusion 6.6 one has only to consider deformations of the form f_0 = M_0 + F, f_1 = M_1 + G, where F, G ∈ L (cf. Proposition 3.2). From the computations in (6.3.2) it follows that the order of f_0 is ≤ 0.172(3 + 1) < 1 and the order of f_1 is ≤ 1. Therefore the change of α-grade in the right domain is A ≤ m_0 + (m_1 − 1) + m_1. If one replaces m_1 by m_0 − 5 ≥ m_1 in the inequality (*) of (5.3), then one has to show:
(1/2) m_0(m_0 + 1) − (2m_0 − 2) > 2² + 3 · 2 + m_0 + 2(m_0 − 5) − 1 ⇐⇒ m_0(m_0 − 9) > −6.
This is true because of m_0 ≥ 2 · (3 + 2) = 10 (cf. Lemma 2.8).
Conclusion 6.7. Even in the cases c ∈ {0, 1, 2, 3} the inequalities (*) and (!), respectively, are fulfilled.
Summarizing the Conclusions 6.1–6.7, we obtain:
Proposition 6.1. If ρ_1 > 0, ρ_2 > 0, and I has the type r ≥ 1 and y-standard form, then the inequality (!) of Section (4.3) is fulfilled.
Fig. 6.1 [diagram: the monomials N_1, M_1 in the columns 0, 1, . . . , m_1, m_1 + 1, . . . , and M_0 in column m_0; boundary c, c + 1 between LB and RB].
Fig. 6.2 [diagram: M_1, N_1; columns 0, 1, . . . , c + 1, c + 2, . . . (LB) and m_1, m_1 + 1, . . . , m_0 (RB)].
Fig. 6.3 [diagram: N_1, M_1, M_0 in the columns 0, . . . , 4, m_1, m_1 + 1, . . . , m_0].
Fig. 6.4 [diagram: L, N_1, M_1 in the columns 0, 1, 2, . . . , m_1 + 1, . . . , m_0].
Fig. 6.5a, Fig. 6.5b, Fig. 6.5c [diagrams: L, F, N_1, M_1, M_0 in the columns 0, . . . , 8].
CHAPTER 7
Estimates of the α-grade in the case ρ1 > 0, ρ2 > 0 and r = 0.
Let I be an ideal of type r = 0 with y-standard form. Then one has by definition: I = yK(−1) + f O_{P²}(−m), colength(K) = c, reg(K) = κ, colength(I) = d = c + m, I and K invariant under G = Γ · T(ρ); and if the Hilbert functions of K and I are ψ and ϕ, respectively, then g*(ϕ) > g(d). By definition, K has the type (−1), that means the following cases can occur:
1st case: g*(ψ) ≤ g(c), where c ≥ 5 by convention.
2nd case: 0 ≤ c ≤ 4.
As usual the aim is to prove (*) in (5.3), and we write m and f instead of m_0 and f_0.
7.1. The case g ∗ (ψ) ≤ g(c).
7.1.1. We first assume that there is no deformation into L at all. That means f = x^m (cf. Fig. 7.1), A = 0, and one has to prove the inequality (*), i.e. Q(m − 1) > (c − 1)². Because of Q(m − 1) = (1/2) m(m + 1) − (c + m), this is equivalent to m(m − 1) > 2c² − 2c + 2. As m ≥ 2c + 1 by Corollary 2.2, it suffices to show (2c + 1) · 2c > 2c² − 2c + 2 ⇐⇒ c² + 2c − 1 > 0, and this is true for c ≥ 5.
7.1.2. We now assume that there are deformations M_0 = x^m ↦∈ L, · · · , x^s M_0 ↦∈ L. Then by Proposition 3.2d, s ≤ κ/ρ_2 − m(1/ρ_2 − 1/(ρ_1 + ρ_2)).
Auxiliary lemma 1. If g*(ψ) ≤ g(c) and if there is the deformation M_0 ↦∈ L, then ρ_1 < ρ_2.
Proof. Because M_0 X^{νρ} = L is a monomial in L, the slope of the line connecting M_0 and L is ≤ the slope of the line connecting M_0 and L_0 (see Figure 7.1; take into account the inclusion (3.1) in 3.3 and the following remark concerning the vice-monomial of f). It follows that ρ_1/ρ_2 ≤ κ/(m − κ). Now κ/(m − κ) < 1 is equivalent to 2κ < m, and because of κ ≤ c and Corollary 2.2 this is true.
Auxiliary lemma 2. If g*(ψ) ≤ g(c), then κ = reg(K) ≤ c − 2.
Proof. One has κ ≤ c, and from κ = c it would follow that g*(ψ) = (c − 1)(c − 2)/2 > g(c) (cf. [T1], p. 92 and Anhang 2e, p. 96). By considering Figures 7.2a and 7.2b one can convince oneself that κ = c − 1 also implies g*(ψ) > g(c).
It is clear that the change of α-grade in the right domain caused by the deformations mentioned above is equal to
(7.1) A = m + (m + 1) + · · · + (m + s) = (s + 1)m + \binom{s+1}{2}.
Because of κ ≤ c − 2 and m ≥ 2c + 1 (Corollary 2.2) one has
s ≤ (c − 2)/ρ_2 − (2c + 1)(1/ρ_2 − 1/(ρ_1 + ρ_2)).
It follows that
s ≤ (c − 2)/ρ_2 − 2(c − 2)(1/ρ_2 − 1/(ρ_1 + ρ_2)) − 5(1/ρ_2 − 1/(ρ_1 + ρ_2)),
and therefore
(7.2) s < (c − 2)(2/(ρ_1 + ρ_2) − 1/ρ_2).
Auxiliary lemma 3. If g*(ψ) ≤ g(c) and if there is a deformation M_0 ↦∈ L, then s < (1/6)(c − 2).
Proof. As 1 ≤ ρ_1 < ρ_2 (cf. Auxiliary lemma 1), one has to find an upper bound for the function f: [ℕ − {0}] × [ℕ − {0}] → ℝ, f(x, y) := 2/(x + y) − 1/y, on the domain 1 ≤ x < y. One can convince oneself that f(1, 2) = 1/6 is the maximal value.
In the case r = 0 the inequality (*) reads (1/2) m(m + 1) − (c + m) > (c − 1)² + (s + 1) + A. Putting in the expression (7.1), one gets (1/2) m(m + 1) − m > c² − c + s + 2 + (s + 1)m + \binom{s+1}{2} ⇐⇒ m(m + 1) − 2m > 2c² − 2c + 2(s + 2) + 2(s + 1)m + s(s + 1) ⇐⇒ m(m − 1) > 2c² − 2c + 2(s + 2) + 2(s + 1)m + s(s + 1). Because of the Auxiliary lemma 3 it suffices to show m(m − (1/3)c − 7/3) > (73/36)c² − (29/18)c + 28/9. As m ≥ 2c + 1 (Corollary 2.2) this follows from (2c + 1)((5/3)c − 4/3) > (73/36)c² − (29/18)c + 28/9 ⇐⇒ (47/36)c² + (11/18)c − 40/9 > 0. As this is true if c ≥ 2, we have proven (*) in the 1st case.
7.2. The cases 0 ≤ c ≤ 4.
If c = 0, then I = (y, x^m) is monomial and (*) is true, obviously. Unfortunately, one has to consider the cases c = 1, · · · , 4 separately. The ideal K can have the Hilbert functions noted in (2.2.2).
7.2.1. c = 1. From Figure 7.3 we see that deg C_0 = 2 + · · · + (m − 1) = \binom{m}{2} − 1 and deg C_∞ = 1 + · · · + m = \binom{m+1}{2}. The deformation of C_0 into C_∞ is defined by f_0 = x^m + yz^{m−1}, and this is a simple deformation in the sense of ([T1], 1.3). One has α-grade(I) = deg(C) = max(deg C_0, deg C_∞) (loc. cit., Hilfssatz 2, p. 12). In order to prove (*) one has to show Q(m − 1) > deg C_∞ − deg C_0 = m + 1 ⇐⇒ (1/2) m(m + 1) − (1 + m) > m + 1 ⇐⇒ m² − 3m − 4 > 0. This is true if m ≥ 5.
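The two degree formulas are elementary sums, quickly confirmed (our own check, not part of the text):

    from math import comb
    for m in range(3, 40):
        assert sum(range(2, m)) == comb(m, 2) - 1          # deg C_0
        assert sum(range(1, m + 1)) == comb(m + 1, 2)      # deg C_infinity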
Conclusion 7.1. If c = 1, then (*) is fulfilled except in the case c = 1, m = 4, which
will be treated in (9.3).
7.2.2. c = 2. There are two subcases, which are shown in Fig. 7.4a and Fig. 7.4b. Here again only simple deformations occur.
1st subcase (Fig. 7.4a). One gets deg C_0 = 1 + 3 + 4 + · · · + (m − 1) = \binom{m}{2} − 2; deg C_∞ = 2 + 3 + · · · + m = \binom{m+1}{2} − 1. The same argumentation as in (7.2.1) shows that one has to show Q(m − 1) > m + 1, i.e. m² − 3m − 6 > 0. This is fulfilled if m ≥ 5.
In the case m = 4, by the formula of Remark 2.2, it follows that g*(ϕ) = 4. As g(6) = 4 and g*(ϕ) > g(d) by assumption, the case m = 4 cannot occur.
2nd subcase (Fig. 7.4b). deg C_0 = 2 + · · · + (m − 1), deg C_∞ = 2 + · · · + m; Q(m − 1) > deg C_∞ − deg C_0 = m ⇐⇒ m² − 3m − 4 > 0. This is true if m ≥ 5, and the case m = 4 cannot occur.
Conclusion 7.2. If c = 2, then (*) is fulfilled.
7.2.3. c = 3. Here 5 deformations are possible, which are all simple (Fig. 7.5a–7.5e).
1st subcase (Fig. 7.5a). deg C_0 = 3 + · · · + (m − 1) = \binom{m}{2} − 3; deg C_∞ = 2 + 4 + 4 + · · · + (m − 1) = \binom{m}{2}. Q(m − 1) > 3 ⇐⇒ m² − m > 12. This is true because of m ≥ c + 2 = 5.
2nd subcase (Fig. 7.5b). deg C_0 = 3 + · · · + (m − 1); deg C_∞ = 1 + 3 + 4 + · · · + m. Q(m − 1) > deg C_∞ − deg C_0 = m + 1 ⇐⇒ m² − 3m − 8 > 0. This is true because of m ≥ 5.
3rd subcase (Fig. 7.5c). deg C_0 = 3 + · · · + (m − 1); deg C_∞ = 2 + · · · + m. Q(m − 1) > deg C_∞ − deg C_0 = m + 2 ⇐⇒ m² − 3m − 10 > 0. This is fulfilled if m ≥ 6. As m ≥ c + 2 = 5, the case m = 5 remains. But if m = 5, then from Remark 2.2 it follows that g*(ϕ) = 8 < g(8) = 9, which contradicts the assumption.
4th subcase (Fig. 7.5d). deg C_0 = 1 + 2 + 4 + · · · + (m − 1); deg C_∞ = 1 + 3 + · · · + m; Q(m − 1) > deg C_∞ − deg C_0 = m + 1 ⇐⇒ m² − 3m − 8 > 0. This is fulfilled if m ≥ 5.
5th subcase (Fig. 7.5e). deg C_0 = 2 + 4 + 4 + · · · + (m − 1); deg C_∞ = 2 + · · · + m; Q(m − 1) > deg C_∞ − deg C_0 = m − 1 ⇐⇒ m² − 3m > 4. This is fulfilled if m ≥ 5.
Conclusion 7.3. If c = 3, then (*) is fulfilled.
7.2.4. c = 4. There are 8 subcases, which are shown in Fig. 7.6a₁–7.6e. At first we make the additional assumption that m ≥ 7. The slope ρ_1/ρ_2 of the line connecting the monomials denoted by + and − is equal to 1/2 in Fig. 7.6b₁, is ≤ 1/4 in Fig. 7.6b₂ and is ≤ 2/5 in Fig. 7.6b₃. It follows that the deformations of Fig. 7.6b₁ and of Fig. 7.6b₂ (respectively the deformations of Fig. 7.6b₁ and of Fig. 7.6b₃) cannot occur simultaneously; that means we have only simple deformations.
1st subcase (Fig. 7.6a₁). deg C_0 = 2 + 4 + · · · + (m − 1); deg C_∞ = 1 + 2 + 4 + · · · + m; Q(m − 1) > deg C_∞ − deg C_0 = m + 1 ⇐⇒ m² − 3m − 10 > 0. This is fulfilled if m ≥ c + 2 = 6.
2nd subcase (Fig. 7.6a₂). deg C_0 = 2 + 4 + · · · + (m − 1); deg C_∞ = 3 + 4 + · · · + m; deg C_∞ − deg C_0 = m + 1, etc., as in the first subcase.
3rd subcase (Fig. 7.6b₁). deg C_0 = 4 + 4 + 5 + · · · + (m − 1) = \binom{m}{2} − 2; deg C_∞ = 2 + 4 + 6 + 5 + · · · + (m − 1); Q(m − 1) > deg C_∞ − deg C_0 = 4 ⇐⇒ m(m − 1) > 16. This is fulfilled if m ≥ 5.
4th subcase (Fig. 7.6b₂). deg C_0 = 4 + 4 + 5 + · · · + (m − 1); deg C_∞ = 3 + · · · + m; Q(m − 1) > deg C_∞ − deg C_0 = m − 1 ⇐⇒ m² − 3m − 6 > 0. This is fulfilled if m ≥ 5.
5th subcase (Fig. 7.6b₃). deg C_0 = 4 + 4 + 5 + · · · + (m − 1); deg C_∞ = 2 + 4 + 4 + · · · + m; Q(m − 1) > m + 2 ⇐⇒ m² − 3m − 12 > 0. This is fulfilled if m ≥ 6.
6th subcase (Fig. 7.6c). deg C_0 = 3 + · · · + (m − 1); deg C_∞ = 3 + · · · + m; Q(m − 1) > m ⇐⇒ m² − 3m − 8 > 0. This is fulfilled if m ≥ 5.
7th subcase (Fig. 7.6d). deg C_0 = 1 + 2 + 3 + 5 + . . . + (m − 1); deg C_∞ = 1 + 2 + 4 + · · · + m; Q(m − 1) > m + 1 ⇐⇒ m² − 3m − 10 > 0. This is fulfilled if m ≥ 6.
8th subcase (Fig. 7.6e). deg C_0 = 2 + 4 + 6 + 5 + · · · + (m − 1) = \binom{m}{2} + 2; deg C_∞ = 2 + 4 + 4 + · · · + m = \binom{m+1}{2}; Q(m − 1) > m − 2 ⇐⇒ m² − 3m − 4 > 0. This is fulfilled if m ≥ 5.
As m ≥ c + 2 = 6, the case m = 6 remains. All inequalities are fulfilled if m ≥ 6; therefore the possibility remains that in Fig. 7.6b₁ and Fig. 7.6b₃, if m = 6 and ρ = (−3, 1, 2), the deformations f = (x³y + ay²z²) · z² and g = x⁶ + by²z⁴ occur simultaneously. Now f ∧ g = x³yz² ∧ x⁶ + b x³yz² ∧ y²z⁴ + a y²z⁴ ∧ x⁶, and therefore
max-α-grade(I) ≤ max(deg C_0 in Fig. 7.6b₁, deg C_∞ in Fig. 7.6b₃, deg C_∞ in Fig. 7.6b₁) = max(\binom{m}{2} − 2, \binom{m+1}{2}, \binom{m}{2} + 2)
and min-α-grade(I) = min(\binom{m}{2} − 2, \binom{m+1}{2}, \binom{m}{2} + 2).
Now \binom{m+1}{2} > \binom{m}{2} + 2 if m ≥ 3, and therefore max-α-grade(I) = \binom{m+1}{2} and min-α-grade(I) = \binom{m}{2} − 2. Thus (*) follows from Q(m − 1) > \binom{m+1}{2} − \binom{m}{2} + 2 ⇐⇒ m² − 3m − 12 > 0, and this is true if m ≥ 6.
Conclusion 7.4. If c = 4, then (*) is fulfilled.
7.2.5. Summarizing the Conclusions 7.1–7.4 we obtain
Proposition 7.1. If ρ_1 > 0, ρ_2 > 0 and r = 0, then (*) is fulfilled except in the case c = 1, m = 4, I = (y(x, y), x⁴ + yz³).
Notabene. Hence the inequality (!) in Section 4.3 is fulfilled, with one exception.
Fig. 7.1 [diagram: first column filled with monomials of H^0(K(c)); the monomial L_0; the monomial domain; columns 0, 1, . . . , κ, κ + 1, . . . ; M_0 in column m_0; boundary c, c + 1 between LB and RB].
Fig. 7.2a [diagram: colength(K) = 10, reg(K) = 9], Fig. 7.2b [diagram: colength(K) = 11, reg(K) = 10].
Fig. 7.3, Fig. 7.4a, Fig. 7.4b [diagrams: the monomials marked − and +, and M_0, in the columns 0, 1, 2, 3, . . . , m].
Fig. 7.5a–7.5e [diagrams: the monomials marked − and + in the columns 0, 1, 2, 3, . . . , m].
Fig. 7.6a₁–7.6e [diagrams: the monomials marked − and + in the columns 0, 1, 2, 3, 4, . . . , m].
CHAPTER 8
Borel-invariant surfaces and standard cycles
8.1. Preliminary remarks.
The group GL(3; k) operates on S by matrices, and thus GL(3; k) operates on H^d = Hilb^d(P²_k). We recall the subgroups
Γ = { (1 0 ∗; 0 1 ∗; 0 0 1) } < U(3; k) and T(ρ) < T := T(3; k),
already introduced in (2.4.1). As each subspace U ⊂ S_n is invariant under D := {(λ, λ, λ) | λ ∈ k*}, each ideal I ⊂ O_{P²} and each point of H^d is invariant under D. Instead of the operation of T on H^d one can consider the operation of the group T/D, which is isomorphic to T(2; k).
According to the theory of Hirschowitz, the 1-cycles in H^d which are invariant under B = B(3; k) form a generating system of the first Chow group of H^d, and relations among them are realized in B-invariant surfaces V ⊂ H^d ([Hi], Mode d’emploi, p. 89).
We distinguish between the cases whether V is pointwise invariant under the G_m-operation σ(λ): x ↦ x, y ↦ y, z ↦ λz, or not. We call these the homogeneous case and the inhomogeneous case, respectively.
8.2. The inhomogeneous case.
We let T = T(3; k) operate by diagonal matrices and let G_a operate by ψ_α: x ↦ x, y ↦ αx + y, z ↦ z on S. Then the group G = G_a · T operates on H^d. Let V ⊂ H^d be a G-invariant, closed, two-dimensional variety which is not pointwise invariant under G_a and not pointwise invariant under the G_m-operation σ(λ): x ↦ x, y ↦ y, z ↦ λz. (For abbreviation we speak of an inhomogeneous surface.)
We suppose k = k̄. If ξ ∈ H^d(k) is a closed point, then T_ξ and G_ξ denote the inertia groups of ξ in T and G, respectively.
8.2.1. Auxiliary lemma 1. There is a point ξ ∈ V(k) such that V = T · ξ.
Proof. If dim T · ξ < 2 for all ξ ∈ V(k), then ξ is T-invariant or T · ξ is a T-invariant, irreducible curve, for all ξ ∈ V(k). The first case can occur for only finitely many points; in the second case one has T_ξ = T(ρ), ρ ∈ ℤ³ − (0), such that ρ_0 + ρ_1 + ρ_2 = 0 ([T1], Bemerkung 1, p. 2).
There are only finitely many ρ such that there exists an ideal I ⊂ O_{P²} of fixed colength d which is invariant under T(ρ) but not invariant under T (see [T2], Hilfssatz 6, p. 140). In other words, there are only finitely many ρ such that (H^d)^{T(ρ)} ⊉ (H^d)^T.
From the assumption it follows that V = ∪{V^{T(ρ_i)} | i ∈ I}, where I is a finite set of indices. As V is irreducible, it follows that V = V^{T(ρ)} for a certain ρ. Now one has
G_{g(ξ)} = g G_ξ g^{−1} for all ξ ∈ V(k) and all g ∈ G(k),
and therefore
G_{g(ξ)} ⊃ T(ρ) ∪ g T(ρ) g^{−1} for all ξ ∈ V(k) and all g ∈ G(k).
We show there are λ_0 ≠ λ_1 in k* such that τ = (λ_0, λ_1, 1) ∈ T(ρ). Assume that (λ_0, λ_1, λ_2) ∈ T(ρ) implies λ_0/λ_2 = λ_1/λ_2, i.e. λ_0 = λ_1. Then each element of T(ρ) has the form (λ, λ, λ_2), thus T(ρ) ⊂ G_m · D, where D = {(λ, λ, λ) | λ ∈ k*}. But then ρ_2 = 0 and V is pointwise G_m-invariant, a contradiction.
We thus take τ = (λ_0, λ_1, 1) ∈ T(ρ) such that λ_0 ≠ λ_1, and take
g = (1 α 0; 0 1 0; 0 0 1) ∈ G_a < G, where α ≠ 0.
Then gτg^{−1} = (λ_0 (λ_1 − λ_0)α 0; 0 λ_1 0; 0 0 1) is an element of G_{g(ξ)}, and thus
τ^{−1} g τ g^{−1} = (1 β 0; 0 1 0; 0 0 1), β := λ_0^{−1}(λ_1 − λ_0)α ≠ 0,
is an element of G_{g(ξ)}, too. It follows that g(ξ) is invariant under ψ_β, thus invariant under G_a, and therefore ξ is invariant under G_a. This is valid for all ξ ∈ V(k), contradicting the assumption.
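The matrix identity used in the proof can be verified symbolically; a minimal sympy sketch (ours, not from the text):

    import sympy as sp

    l0, l1, a = sp.symbols('lambda0 lambda1 alpha', nonzero=True)
    tau = sp.diag(l0, l1, 1)
    g = sp.Matrix([[1, a, 0], [0, 1, 0], [0, 0, 1]])
    m = tau**-1 * g * tau * g**-1
    beta = (l1 - l0) * a / l0
    target = sp.Matrix([[1, beta, 0], [0, 1, 0], [0, 0, 1]])
    assert sp.simplify(m - target) == sp.zeros(3, 3)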
8.2.2. If one lets T(2; k) operate on S by x ↦ λ_0 x, y ↦ λ_1 y, z ↦ z, then one sees that G = T · G_a operates on H^d in the same way as the group T(2; k) · G_a, which for simplicity is again denoted by G (and is equal to B(2; k)). If ξ is as in the Auxiliary lemma 1, then from V = G · ξ it follows that G_ξ < G is 1-dimensional. There is the decomposition G_ξ = ∪ g_i H, H := G_ξ^0, into finitely many cosets. As H is a 1-dimensional connected group, two cases can occur:
1st case: H ≃ G_a. Now the unipotent elements in B(2; k) are just the matrices ψ_α = (1 α; 0 1). It follows that there is α ≠ 0 such that ψ_α ∈ G_ξ. But then ξ is invariant under G_a. As G_a is normalized by T, T · ξ is pointwise G_a-invariant. Because of V = T · ξ, V is pointwise G_a-invariant, a contradiction.
2nd case: H ≃ G_m. As each element of G_m is semi-simple, so is each element of the isomorphic image H. Thus the commutative group H < GL(2; k) consists of semi-simple elements. Then there is a g ∈ GL(2; k) such that g^{−1}Hg consists of diagonal matrices ([K], Lemma 1, p. 150). Because of G_m ≃ H ≅ g^{−1}Hg < T(2; k) one has a 1-psg f: G_m → T(2; k). Let p: T(2; k) → G_m be the projection onto the first component.
Then p ∘ f: G_m → G_m has the form λ ↦ λ^n, n a suitable integer. Thus
g^{−1}Hg = { (λ^a 0; 0 λ^b) | λ ∈ k* } =: T(a, b), a and b suitable integers.
It follows that H = g T(a, b) g^{−1} ⊂ G = G_a · T(2; k) = B(2; k). Writing g = (a_{11} a_{12}; a_{21} a_{22}) gives g^{−1} = D^{−1}(a_{22} −a_{12}; −a_{21} a_{11}), D := det(g). We compute:
(a_{11} a_{12}; a_{21} a_{22}) (λ^a 0; 0 λ^b) = (λ^a a_{11} λ^b a_{12}; λ^a a_{21} λ^b a_{22}), and
g (λ^a 0; 0 λ^b) g^{−1} = D^{−1} (λ^a a_{11} λ^b a_{12}; λ^a a_{21} λ^b a_{22}) (a_{22} −a_{12}; −a_{21} a_{11})
= D^{−1} (λ^a a_{11}a_{22} − λ^b a_{12}a_{21}  (λ^b − λ^a) a_{11}a_{12};  (λ^a − λ^b) a_{21}a_{22}  λ^b a_{11}a_{22} − λ^a a_{12}a_{21}).
This matrix is an upper triangular matrix if and only if (λ^a − λ^b) a_{21}a_{22} = 0 for all λ ∈ k*.
Subcase 1. a = b ⇒ H = T(a, a).
Subcase 2. a ≠ b and a_{21} = 0. Write
g = (a_{11} a_{12}; 0 a_{22}) = uτ, where τ = (a_{11} 0; 0 a_{22}), u = (1 c; 0 1) and c := a_{12}/a_{22}.
Then H = g T(a, b) g^{−1} = u T(a, b) u^{−1}.
Subcase 3. a ≠ b and a_{21} ≠ 0. Then a_{22} = 0 and
g (λ^a 0; 0 λ^b) g^{−1} = D^{−1} (−λ^b a_{12}a_{21}  (λ^b − λ^a) a_{11}a_{12};  0  −λ^a a_{12}a_{21}).
As D = −a_{12}a_{21}, this matrix equals (λ^b  (λ^a − λ^b)c;  0  λ^a), where c := a_{11}a_{12}/a_{12}a_{21} = a_{11}/a_{21}.
Thus H = g T(a, b) g^{−1} = { (λ^b  (λ^a − λ^b)c; 0  λ^a) | λ ∈ k* }, where c := a_{11}/a_{21}. If u := (1 −c; 0 1), then u (λ^b  (λ^a − λ^b)c; 0  λ^a) u^{−1} = (λ^b 0; 0 λ^a), and thus H = u^{−1} T(b, a) u.
We have proved
Auxiliary lemma 2. There is an element u ∈ G_a such that H = u T(a, b) u^{−1}, where a and b are suitable integers.
8.2.3. Let ξ and u be as in Auxiliary lemmas 1 and 2, respectively. Set ζ := u^{−1}(ξ). Then G_ζ = u^{−1} G_ξ u ⊃ u^{−1} H u = T(a, b) < T(2; k), and thus dim T(2; k) · ζ ≤ 1. If this dimension were equal to 0, then G_ζ and G_ξ would have dimension 2. Because of V = G · ξ the dimension of V would be 1, a contradiction. Thus dim T · ζ = 1, and (Appendix E, Corollary) gives
Auxiliary lemma 3. The inertia group T_ζ of ζ in T(3; k) has the form T(ρ), where ρ = (ρ_0, ρ_1, ρ_2) and ρ_2 ≠ 0.
8.2.4. Summary.
Proposition 8.1. Let V ⊂ H^d be a closed 2-dimensional subvariety, invariant under G = G_a · T(3; k), where G_a operates by ψ_α: x ↦ x, y ↦ αx + y, z ↦ z and T(3; k) operates by diagonal matrices on S. We suppose that V is not pointwise invariant under this G_a-operation and is not pointwise invariant under the G_m-operation σ(λ): x ↦ x, y ↦ y, z ↦ λz. Then there is a point ξ ∈ V(k) and u ∈ G_a such that:
(i) V = T(3; k) · ξ;
(ii) the inertia group T_ζ of ζ := u(ξ) in T(3; k) has the form T(ρ), where ρ = (ρ_0, ρ_1, ρ_2) and ρ_2 ≠ 0;
(iii) V = G_m · G_a · ζ.
Proof. The statements (i) and (ii) follow from (8.2.1)–(8.2.3). Put G := G_m · G_a = G_m × G_a. If the statement (iii) were wrong, then the inertia group G_ζ would have dimension ≥ 1 and thus would contain a subgroup H isomorphic to G_m or G_a. If H ≃ G_m, then ζ would be invariant under p_1(H) = {(λ^n, 1) | λ ∈ k*}, n ∈ ℤ − (0). Then ζ and ξ would be invariant under G_m, and thus V would be pointwise G_m-invariant, a contradiction.
If H ≃ G_a, then ζ and ξ would be invariant under G_a. As G_a is normalized by T(3; k), T(3; k) · ξ would be pointwise G_a-invariant, and the same would follow for V.
8.3. The homogeneous case.
We now assume that V ⊂ H^d is a 2-dimensional subvariety, invariant under G := G_a · T(3; k), not pointwise invariant under G_a, but now pointwise invariant under the G_m-operation σ as in (8.1). As there are only finitely many T(3; k)-fixed points in H^d, it follows that V is not pointwise fixed by the G_m-operation τ(λ): x ↦ λx, y ↦ y, z ↦ z.
Let ξ ∈ V(k) be not G_a-invariant and not G_m-invariant. We define a morphism f by f: G_a × G_m → V, (α, λ) ↦ ψ_α τ(λ)ξ.
Assume that f has an infinite fibre. Then there is an element (β, µ) ∈ G_a × G_m such that ψ_α τ(λ)ξ = ψ_β τ(µ)ξ, i.e.
(8.1) ψ_{(α−β)λ^{−1}}(ξ) = τ(λ^{−1}µ)ξ
for infinitely many (α, λ) in G_a × G_m.
By assumption, C := {ψ_α(ξ) | α ∈ k}^− and D := {τ(λ)(ξ) | λ ∈ k*}^− are curves in V. If one assumes that only finitely many different λ can occur in (8.1), then on the left side of (8.1) also only finitely many α can occur. For ξ is not a fixed point of G_a, so that from ψ_α(ξ) = ψ_β(ξ) it follows that α = β. This contradicts the last assumption. Thus C and D have infinitely many points in common and hence are equal (as subschemes with the induced reduced scheme structure).
The fixed-point set of C under G_m consists of the two points ξ_{0/∞} := lim_{λ→0/∞} τ(λ)ξ. As C has a unique G_a-fixed point, and as G_a is normalized by G_m, one of the two points, say ξ_∞, is fixed by G_a. Thus C = {ψ_α(ξ_0) | α ∈ k}^− and ξ_0 corresponds to a monomial ideal. There are only finitely many T(3; k)-fixed points ξ_i, 1 ≤ i ≤ n, in H^d. Set M := ∪_{i=1}^{n} G_a · ξ_i. Then C \ M is open and non-empty, and choosing ξ ∈ C \ M it follows that f as defined above has no infinite fibre.
Proposition 8.2. Let V ⊂ H d be a closed 2-dimensional subvariety, invariant under
G = Ga · T (3; k), not pointwise Ga -invariant, but pointwise invariant under the Gm operation σ(λ) : x 7→ x, y 7→ y, z 7→ λz. Then V is not pointwise invariant under the
Gm -operation τ (λ) : x 7→ λx, y 7→ y, z 7→ z. And for all closed points ζ in an open non
empty subset of V one has V = Ga · Gm · ζ.
8.4. Standard cycles.
In the following we suppose d ≥ 5 and we recall to memory the closed subscheme
[
H = {Hϕ ⊂ H d | g ∗ (ϕ) > g(d)}
of H d (cf. 1.2.2). As usual, we let Ga operate by ψα : x 7→ x, y 7→ αx + y, z 7→ z and we
let Gm operate by σ(λ) : x 7→ x, y 7→ y, z 7→ λz (by σ(λ) : x 7→ λx, y 7→ y, z 7→ z) on S in
the inhomogeneous case (in the homogeneous case, respectively).
8.4.1.
Definition 7. Let C ⊂ H be a B = B(3; k) -invariant curve, which is not pointwise
−
Ga -invariant. Then C={ψα (ξ)|α ∈k}
, where ξ ↔ I is an ideal, invariant under
1 0 ∗
T = T (3; k) and Γ = 0 1 ∗ < ∆ := U(3; k), which is uniquely determined by
0 0 1
C (cf. [T1], Proposition 0, p. 3). We call C a x-standard cycle respectively a y-standard
cycle, if I has x-standard form respectively y-standard form (see 2.4.3 Definition 2).
8.4.2. Let V ⊂ Z = Z(H) be a 2-dimensional, B-invariant subvariety, where Z is
defined as in (1.2.1). We suppose, that V contains a y-standard cycle. Then V is not
pointwise Ga -invariant, so that we can write V = Gm · Ga · ζ, where ζ ∈ V (k) is as in
Proposition 8.1 or Proposition 8.2 , respectively. By the definition of Z, the orbit ∆ · ζ
has a dimension
ζ is not ∆-invariant,the inertia group ∆ζ of ζ in ∆ has the form
≤ 1. As
1 α ∗
G(a : b) = 0 1 β ∈ ∆|aα + bβ = 0 , where a, b ∈ k and not both elements are
0 0 1
zero (cf. Appendix D, Lemma 1). Let I be the ideal corresponding to ζ. If ϕ is the Hilbert
function of I, then g ∗ (ϕ) > g(d) by the definition of H and thus I = ℓK(−1)+f OP2 (−m),
where K ⊂ OP2 has the colength c, f ∈ Sm , c+m = d and m = reg(I) ≥ c+2 ( see Lemma
2.1 – Lemma 2.4). If ν := min{n|H 0(I(n)) 6= (0)}, then ν < m . This follows from Lemma
2.2 and Corollary 2.1 . As G(a : b) is unipotent, there is an eigenvector f ∈ H 0(I(ν)).
From x ∂f /∂z ∈ hf i and bx ∂f /∂y − ay∂f /∂z ∈ hf i we conclude that f = xν , if we
assume b 6= 0, which we do now. Let be η ∈ V (k) any point. If L is the corresponding
91
ideal, then xν ∈ H 0 (L(ν)). ( Proof : This is first of all true for all η ∈ W := Gm · Ga · ζ.
By means of the mapping J 7→ H 0 (J (d)) we can regard H d as a closed subscheme of
G = GrassQ(d) (Sd ). If J ∈ H d is any ideal, the condition xν ∈ H 0 (J (ν)) is equivalent to
the condition xν Sd−ν ⊂ H 0 (J (d)). An element of G(Spec A) is a subbundle L ⊂ Sd ⊗ A
of rank Q(d), and the condition xν Sd−ν ⊂ L defines a closed subscheme of G. Thus the
condition xν ∈ H 0 (L(ν)) defines a closed subset of V . As V = W , this condition is fulfilled
for all points of V .) Assume that L has y-standard form. Then L = y·M(−1)+gOP2 (−n),
where e := colength (M), n := reg(L) ≥ m, and e + n = d. As ν < m ≤ n we get
xν ∈ H 0 (L(ν)) = yH 0(M(ν − 1)), contradiction.
Lemma 8.1. If V ⊂ Z(H) is a B-invariant surface which contains a y-standard cycle,
then V is point-wise invariant under Γ.
Proof. From the foregoing reasoning it follows that b = 0, i.e., ζ is invariant under
Γ = G(1 : 0). As Γ is normalized by Ga and T , it follows that Gm · Ga · ζ is point-wise
invariant under Γ, and the same is true for V .
8.4.3. We suppose that V ⊂ Z(H) is a B-invariant surface, which contains a ystandard cycle. We represent V in the form V = Gm · Ga · ζ, according to Proposition
8.1 or Proposition 8.2, respectively.
Lemma 8.2. (a) One can assume without restriction that J ↔ ζ has y-standard form.
(b) I0/∞ are monomial ideals and have y-standard form.
Proof. Let be ζ ↔ I as in Proposition 8.1 (resp. in Proposition 8.2 ).
(a) By Lemma 2.6), I has x- or y-standard form. First we consider the inhomogeneous
case . From I = xK(−1) + f OP (−m) and the Γ-invariance of I (Lemma 8.1), the Γinvariance of K follows. By (Appendix C, Remark 2) we have Rc ⊂ H 0 (K(c)), and
because of m ≥ c + 2 we get Rm−2 ⊂ H 0 (K(m − 2)), thus xRm−2 ⊂ xH 0 (K(m − 2)) =
H 0 (I(m − 1)). We conclude that xRm−2 ⊂ H 0 (J (m − 1)), hence xRm−2 · Sd−m+1 ⊂
H 0 (J (d)), if J ↔ η is any point of Gm · Ga · ζ. The same reasoning as in (8.4.2) shows
that xRm−2 · Sd−m+1 ⊂ H 0 (J (d)) and hence xRm−2 ⊂ H 0 (J (m − 1)) is true for all
J ↔ η ∈ V . As J = ℓM(−1) + gOP2 (−n), e = colength (M), n = reg(J ) ≥ m, e + n = d
and ℓ ∈ R1 − (0), it follows that H 0 (J (m − 1)) = ℓH 0 (M(m − 2)) ⊂ xRm−2 , hence ℓ = x.
It follows, that no ideal in V has y-standard form, contradiction. Thus the statement (a)
is proved in the inhomogeneous case. If the homogeneous occurs and if we assume that
ζ ↔ I would have x-standard form, the same argumentation as in the inhomogeneous case
gives a contradiction. By Lemma 2.2 we have a representation I = ℓK(−1) + f OP2 (−m),
and if the homogeneous case occurs, because of the Γ-invariance of I, we conclude that
ℓ = −βx + y, where β ∈ k. Replacing I by ψβ (I) = yψβ (K)(−1) + ψβ (f )OP2 (−m), we
can assume without restriction that I has y-standard form.
(b) If I = yK(−1) + f OP2 (−m), colength (K) = c, reg(I) = m, c + m = d, m ≥ c + 2,
then Rm−2 ⊂ H 0 (K(m − 2)) (Appendix C, Remark 2), thus yRm−2 ⊂ H 0 (I(m − 1)),
hence yRm−2 ⊂ H 0 (σ(λ)I(m − 1)), λ ∈ k ∗ . As yRm−2 ⊂ H 0 (J (m − 1)) is equivalent
to yRm−2 · Sd−m+1 ⊂ H 0 (J (d)), the condition yRm−2 ⊂ H 0 (J (m − 1)) defines a closed
92
subscheme of V , which is invariant under the Gm -operation σ, as one sees by the same
reasoning as in the proof of Lemma 8.1 . It follows that yRm−2 ⊂ H 0(I0/∞ (m − 1)). By
Lemma 2.2 we can write I0 = ℓM(−1) + gOP2 (−n), where n = reg(I0 ) ≥ m. But then
yRm−2 ⊂ H 0 (I0 (m − 1)) = ℓH 0 (M(m − 2)), showing that ℓ = y. The same argument
shows that I∞ has y-standard form, too.
In the inhomogeneous case ζ is fixed by T (ρ), where ρ2 6= 0 ( Proposition 8.1) , hence
ζ0/∞ are fixed by T (ρ) · Gm = T (3; k)
In the homogeneous case ζ0/∞ are invariant under the two Gm -operations σ and τ , hence
are invariant under T (3; k).
93
CHAPTER 9
Relations between B-invariant 1-cycles
9.1. Generalities on relations between 1-cycles
We describe the method of Hirschowitz in our special situation. B = B(3; k) operates
on H d and we take a closed subscheme X ⊂ H d , which is B-invariant. Z1B (X) is the
free group generated by B-invariant curves in X. It is easy to see that the canonical
homomorphism Z1B (X) → A1 (X) is surjective (cf. [T1], Lemma 1, p. 6). By ([Hi],
Theorem 1, p.87) the kernel RatB
1 (X) can be described as follows. One has to consider all
1
operations of B on P and all two dimensional subvarieties B ⊂ X × P1 with the following
k
properties:
(i) p2 : B → P1 is dominant, hence surjective and flat.
(ii) The operation of B on P1 fixes 0 and ∞.
(iii) B is invariant under the induced operation (ξ, µ) 7→ (gξ, gµ), ξ ∈ X, µ ∈ P1 , g ∈ B,
of B on X ×k P1 .
N.B. According to a general theorem of Fogarty, the fixed-point scheme (P1 )U (3;k) is
connected; hence from (iii) it follows that U(3; k) operates trivially on P1 .
If Bµ := p−1
2 (µ) is the fibre and Bµ := p1 (Bµ ) is the isomorphic image in X, then one
has:
g(Bµ ) = { (gξ, gµ) | ξ ∈ X, such that (ξ, µ) ∈ B }
= { (ξ, gµ) | ξ ∈ X, such that (ξ, gµ) ∈ B }
= Bgµ , for all g ∈ B, µ ∈ P1
We conclude that
(9.1)
gBµ = Bgµ , for all g ∈ B, µ ∈ P1 .
From property (ii) it follows that B0 and B∞ are B-invariant 1-cycles in X. Then
B
RatB
1 (X) is defined to be the subgroup of Z1 (X) generated by all elements of the form B0 −
B
B
B∞ , and the theorem of Hirschowitz says that AB
1 (X) := Z1 (X)/ Rat1 (X) is canonically
isomorphic to A1 (X) (loc.cit.). We consider V := p1 (B) as a closed subscheme of X with
the induced reduced scheme structure. As B ⊂ V ×k P1 , dim V ≥ 1. If dim V = 1, then
B = V × P1 and B0 − B∞ is equal to zero in Z1B (X). Thus we can assume without
restriction that V is a B-invariant, 2-dimensional subvariety of X.
95
9.2. The standard construction
Let X ⊂ H d be a B(3; k)-invariant subvariety. Start with a subvariety B ⊂ X ×k P1 as
in (9.1), such that V := p1 (B) ⊂ H d is a 2-dimensional subvariety, which is automatically
B(3; k)-invariant. Assume V is not pointwise invariant under the Ga -operation as in 8.2.
Write
(9.2)
V = Ga · Gm · ξ
where ξ ∈ V (k) is a suitable point and Gm operates by σ(λ) : x 7→ x, y 7→ y, z 7→ λz or by
σ(λ) : x 7→ λx, y 7→ y, z 7→ z, respectively (cf. Proposition 8.1 and Proposition 8.2). Then
Gm is a subgroup of T and fixes 0 = (0 : 1) and ∞ = (1 : 0) by assumption. Assume
that σ operates trivially on Gm . From (9.1) it follows that σ(λ)Bµ = Bµ , ∀λ ∈ k ∗ .
If ξ is the point of (9.2), then one chooses µ in such a way that (ξ, µ) ∈ Bµ . Then
ψα (ξ, µ) = (ψα (ξ), µ) ∈ Bµ , ∀α ∈ k, hence ψα (ξ) ∈ Bµ , ∀α ∈ k. Because of (9.1) and (9.2)
we would get V ⊂ Bµ . Then the closed subscheme Bµ ⊂ B has the dimension two and
it follows Bµ = B, contradiction, as p2 is dominant by assumption. This argumentation
also shows ξ ∈
/ B0 ∪ B∞ , i.e. there is µ ∈ P1 − {0, ∞} such that (ξ, µ) ∈ Bµ . Thus one
can find λ ∈ k ∗ such that σ(λ)µ = (1 : 1). Replacing ξ and µ by ξ ′ = σ(λ)ξ and σ(λ)µ,
respectively, then (9.2) is fulfilled with ξ ′ instead of ξ. Thus one can assume without
restriction that µ = (1 : 1). As A1 = P1 − {∞} is fixed by σ, Gm operates by σ on A1 and
fixes (0 : 1). Then there is m ∈ Z such that σ(λ)(a : 1) = (λm a : 1) for all a ∈ k, λ ∈ k ∗ .
As the action of Gm on P1 is not trivial, m 6= 0.
C := {ψα (ξ)|α ∈ k}− ⊂ B(1:1) is a curve in V ; let h be its Hilbert polynomial
with respect to the usual embedding of H d in a projective space (cf. [T2], 4.1.1). The
association λ 7→ σ(λ)C defines a morphism Gm → X := Hilbh (V ). It has an extension to
a morphism P1 → X , which defines a flat family of curves
C ֒→ V ×k P1
↓ p2
P1
∗
If Cλ := p−1
2 (λ), then Cλ := p1 (Cλ ) = σ(λ)C, ∀λ ∈ k and
(9.3)
[p1 (C0 )] = [C] = [p1 (C∞ )] ∈ A1 (V ).
The finite morphism A1 − (0) → A1 − (0) defined by λ 7→ λm has an extension
f : P1 → P1 such that (λ : 1) 7→ (λm : 1), ∀λ ∈ k, and (1 : 0) 7→ (1 : 0). For simplicity, the
morphism 1V × f : V × P1 → V × P1 is denoted by f , again. By construction C ⊂ B(1:1)
and hence σ(λ)C ⊂ σ(λ)B(1:1) = Bσ(λ)(1:1) . The fibre Cλ = σ(λ)C ×{(λ : 1)} is mapped by
f into σ(λ)C × {(λm : 1)} = σ(λ)(C × {(1 : 1)}) ⊂ σ(λ)(B(1:1) × {(1 : 1)}) = σ(λ)B(1:1) =
Bσ(λ)(1:1) , ∀λ ∈ k ∗ .
This construction of the family C is called standard construction and the proof in
([T1], Lemma 1, p.6) shows that C ⊂ V ×k P1 is a subvariety. As C0 and C∞ are closed in
o
C, C := C − (C0 ∪ C∞ ) is open in C. Because C and B are irreducible and f is projective,
96
o
from f (C) ⊂ B we conclude that f (C) = B. As V × {0} and V × {∞} are mapped by f
into itselve, C0 ⊂ V × {0} and C∞ ⊂ V × {∞} are mapped by f into B ∩ V × {0} = B0
and B ∩ V × {∞} = B∞ , respectively. As f (C) = B, we get f (C0 ) = B0 and f (C∞ ) = B∞ ,
i.e. C0 = B0 and C∞ = B∞ . The curves p1 (C0 ) and p1 (C∞ ) are called the limit curves (of
the standard construction) and are denoted by C0 = lim σ(λ)C and C∞ = lim σ(λ)C,
λ→0
respectively, and one has
B0 − B∞ = C0 − C∞
(9.4)
λ→∞
in Z1B (X).
We note that the relation (9.4) was derived under the assumption that V is not pointwise invariant under Ga . If V is pointwise invariant under Ga , the standard construction
cannot be carried out. We get
P
Lemma 9.1. Let X be as before. Elements of RatB
qi Ci ,
1 (X) are either of the form
Ga
where qi ∈ Q and Ci ⊂ X is a B(3; k)-invariant 1-prime cycle in X or they occur in
the following way: One considers all B(3; k)-invariant 2-dimensional subvarieties V ⊂ X,
which are not pointwise invariant under Ga . Then there are points ξ ∈ V (k) such that
V = Gm · Ga · ξ, where Gm operates by σ(λ) : x 7→ x, y 7→ y, z 7→ λz on S (inhomogeneous
case) or by σ(λ) : x 7→ λx, y 7→ y, z 7→ z (homogeneous case). C := {ψα (ξ)|α ∈ k}− is
a curve in V (with the reduced induced scheme structure). By means of the standard
construction it defines a family of curves C ⊂ V ×k P1 , which is flat over P1 and with the
Ga
fibres Cλ = σ(λ)C for all λ ∈ P1 − {0, ∞}. RatB
1 (X) is generated by the relations in X
noted above as well as by the relations C0 − C∞ , where C0/∞ = lim σ(λ)C are the limit
λ→0/∞
curves of C in V and these are Ga -invariant 1-cycles.
Proof. As σ(λ)ψα = ψαλ σ(λ) in the homogeneous case, σ(λ)C = {ψα σ(λ)ξ|α ∈ k}−
is Ga -invariant, ∀λ ∈ k ∗ . And the same is true in the inhomogeneous case .
9.3. Relations containing a y-standard cycle
Suppose d ≥ 5. The closed subscheme H ⊂ H d (cf. Section 8.4) is clearly B-invariant.
As U(3; k) is normalized by T (3; k), the closed subscheme X = Z(H) is B-invariant,
too (cf.Section 1.2). From Lemma 8.1 it follows that relations in Z1B (X) containing a
y-standard cycle are defined by 2-dimensional B-invariant surfaces V ⊂ X, which are
pointwise Γ-invariant but not pointwise Ga -invariant.
We can write V = Gm · Ga · ζ, where ζ ∈ V (k) is a point as in Proposition 8.1
or Proposition 8.2. By Lemma 8.2 we can assume that I ↔ ζ has y-standard form.
Let be ζ0/∞ = lim σ(λ)ζ , where σ(λ) denotes one of the two Gm - operations menλ→0/∞
tioned in Lemma 9.1 .Then carry out the standard construction by using the curve
C := { ψα (ζ) | α ∈ k }− . By Lemma 8.2 C0/∞ are y-standard cycles. We conclude from
Proposition 8.1 and Proposition 8.2 that W := Gm · Ga · ζ is B-invariant and therefore
V − W is a union of B-invariant curves and points. As C0/∞ is invariant under Gm and
Ga , from W ∩ C0/∞ 6= ∅ it would follow that V ⊂ C0/∞ . Hence C0/∞ ⊂ V − W , and
C0/∞ is a union of B-invariant curves, too. As ζ0/∞ ∈ C0/∞ , one has C0/∞ ⊂ C0/∞ . Now
97
from the Propositions 5.1, 6.1 and 7.1 it follows that the inequality (!) in ( 4.3) is fulfilled
with one exception (cf. Proposition 7.1 ). Let be m = reg(I). Because of max −α-grade
(I) ≥ α-grade (I) = deg C = deg C0 = deg C∞ and min(deg C0 , deg C∞ ) ≥ min −αgrade (I) (cf. the definitions and the inequality (4.9) in Section 4.3), from (!) it follows
that
Q(m − 1) + min(deg C0 , deg C∞ ) > deg C0 = deg C∞ .
(**)
As V = W , each ideal J in V has a regularity n ≥ m. If J is monomial and has ystandard form , then from Remark 4.4 it follows that the y-standard cycle generated by
J has a degree ≥ Q(n − 1) ≥ Q(m − 1).
Lemma 9.2. C0 and C∞ are the only y-standard cycles contained in the B-invariant
cycles C0 and C∞ , respectively, and they both occur with the multiplicity 1.
Proof. Except when c = 1, m = 4 the statements follow from (**) and the foregoing
discussion. In the remaining exceptional case (cf. Proposition 7.1 ) we carry out the
standard construction with I = (y(x, y), x4 + yz 3 ) ↔ ζ. Then ζ0 ↔ (y(x, y), x4), ζ∞ ↔
(y, x5 ) and C0 and C∞ are y-standard cycles of degrees 5 and 10, respectively, as one can
see from Figure 9.1.
If η := lim ψα (ζ), the corresponding ideal L is invariant under Ga , hence invariant
λ→∞
under U(3; k). One sees that xz 3 ∈ L, and hence x ∈ L follows. One concludes from this
that L = (x, y 5 ).
As σ and ψα commute, it follows that η is the only point in σ(λ)C for all λ, which is
S
S
U(3; k)-invariant. From V = p1 (C) = {p1 (Cλ )|λ ∈ P1 } = {σ(λ)C|λ ∈ k ∗ } ∪ C0 ∪ C∞
it follows that
V = W ∪ {η} ∪ C0 ∪ C∞
where W := Gm · Ga · ζ. Now η0 := lim ψα (ζ0 ) ↔ (x(x, y), y 4) and η∞ := lim ψα (ζ∞ ) ↔
α→∞
α→∞
(x, y 5 ) are the only B(3; k)-invariant points in C0 and C∞ , respectively. By the theorem
of Fogarty V U (3;k) is connected, hence is a union of pointwise U(3; k)-invariant, closed
and irreducible curves. If E is such a curve, then E ⊂ C0 ∪ C∞ follows. Now deg C0 =
deg C∞ = deg C = 10. Hence C∞ = C∞ and E ⊂ C0 follows. As each y-standard cycle
in V has a degree ≥ Q(3) = 5, C0 is the only y-standard cycle contained in C0 , and C0
occurs with multiplicity 1.
Proposition 9.1. Let be X = Z(H). Relations in RatB
1 (X) , which are defined by a
B-invariant surface in X and which contain a y-standard cycle, are generated by relations
of the form C1 − C2 + Z where C1 and C2 are y-standard cycles and Z is a 1-cycle in X,
whose prime components are B-invariant but are not y-standard cycles.
Proof. This follows from Lemma 9.1 and Lemma 9.2 .
98
Fig. 9.1
Fig. 9.2
C∞
C0
−
0
1
2
3
+
4
0
1
2
3
99
4
5
CHAPTER 10
Proof of Theorem 1.1
This had been formulated in Chapter 1 and we repeat the notations and assumptions
S
introduced there: d ≥ 5, g(d) = (d − 2)2 /4, H = {H≥ϕ ⊂ H d |g ∗(ϕ) > g(d)}, A(H) =
Im(A1 (HU (3;k) ) → A1 (H)), c3 = {(αx + y, xd )|α ∈ k}− .
P
We make the assumption [c3 ] ∈ A(H). Then [c3 ] =
qi [Ui ] in A1 (H), where qi ∈ Q,
and Ui ⊂ H are T -invariant curves, which are pointwise U(3; k)-invariant. (A1 (HU (3;k) )
is generated by such curves, as follows from the theorem of Hirschowitz or more directly
∼
from an application of Lemma 1 in [T1], p. 6.) If X := Z(H), then A1 (X) → A1 (H)
∼
([T2], Lemma 24, p.121) and AB
1 (X) → A1 (X) ([Hi], Theorem 1, p.87).
P
Hence we can regard α := c3 − qi Ui as a 1-cycle in Z1B (X) ,whose canonical imB
B
B
age in AB
1 (X) := Z1 (X)/ Rat1 (X) vanishes. (We recall that Z1 (X) is the free group
generated by all closed, reduced and irreducible curves in X.) Define B(X) to be the subgroup of Z1B (X), which is generated by all B(3; k)-invariant curves in X (closed, reduced,
irreducible), which are not y-standard cycles. Define:
F1 (X) = Z1B (X)/B(X), R1 (X) = RatB
1 (X) + B(X)/B(X)
F1 (X) is the free group generated by the y-standard cycles, and by Proposition 9.1 R1 (X)
is generated by elements of F1 (X) of the form C1 − C2 , where C1 and C2 are (different)
y-standard cycles. It follows that the canonical image of α in F1 (X)/R1 (X) on the one
hand vanishes and on the other hand is equal to the canonical image of c3 in this Qvector space. Hence c3 ∈ R1 (X). We will show that this is not possible. To each of the
finitely many y-standard cycles we associate a canonical basis element ei ∈ Qn , 1 ≤ i ≤ n.
Especially, we associate c3 to the element en = (0, . . . , 1). From c3 ∈ R1 (X) it follows
that en ∈ h{ei − ej |1 ≤ i < j ≤ n}i, which is not possible. All in all we have
Theorem 10.1. [c3 ] ∈
/ A(H).
Corollary 10.1. dimQ A1 (H) ≥ 3.
Proof. E := {(x2 , xy, y d−1 + αxz d−2 )|α ∈ k}− and F := {(x, y d−1 (αy + z))|α ∈ k}−
are 1-cycles in H, which are pointwise invariant under U(3; k) and G(0, 1), respectively
(see the notation in Appendix D). If [c3 ] = q1 [E] + q2 [F ], q1 , q2 ∈ Q, we compute the
intersection numbers with the tautological line bundles and get
d
= q1 + q2 (n − d + 1)
2
for all n ≥ d − 1. Hence q2 = 0. If q1 6= 0, then [c3 ] ∈ A(H) would follow.
101
CHAPTER 11
Surfaces in H invariant under Ga · T (4; k)
We let Ga operate by ψα : x 7→ x, y 7→ αx + y, z 7→ z, t 7→ z and let T (4; k) operate by
diagonal matrices on P = k[x, y, z, t], hence on H = HQ . Let V ⊂ H be a 2-dimensional
subvariety, which is invariant under Ga · T (4; k) but not pointwise invariant under Ga .
11.1. The inhomogeneous case
We suppose that V is not pointwise invariant under the Gm -operation σ(λ) : x 7→
x, y 7→ y, z 7→ z, t 7→ λt.
11.1.1. Auxiliary Lemma 1. There is a ξ ∈ V (k) such that V = T (4; k) · ξ.
Proof. If dim T (4; k) · ξ ≤ 1, then the inertia group Tξ ⊂ T (4; k) has the dimension
≥ 3. If the dimension is 4, then ξ corresponds to a monomial ideal with Hilbert polynomial
Q, and there are only finitely many such ideals. If dim Tξ = 3, then Tξ = T (ρ) ([T2],
Hilfssatz 7, p.141). If ξ corresponds to the ideal I with Hilbert polynomial Q, then
H 0 (I(b)) has a basis of the form fi = mi pi (X ρ ), where m is a monomial and pi is a
polynomial in 1 variable. At least for one index i the polynomial pi (X ρ ) contains a
positive power of X ρ . Then mi X ρ ∈ Pb . As mi ∈ Pb and ρ = (ρ0 , . . . , ρ3 ) ∈ Z4 − (0) is a
vector such that ρ0 + · · ·+ ρ3 = 0, this implies |ρi | ≤ b. Hence there are only finitely many
such vectors ρ such that the fixed-point scheme HT (ρ) does not consist of only finitely
many T (4; k)-fixed points.
n
S
Suppose that dim T (4; k) · ξ ≤ 1 for all ξ ∈ V (k). Then V = V T (ρi ) , ρi ∈ Z4 − (0),
1
hence V = V T (ρ) , ρ ∈ Z4 − (0) suitable vector. We show there is τ = (λ0 , . . . , λ3 ) ∈ T (ρ)
such that λ0 6= λ1 . If not, it follows that T (ρ) ⊂ T (1, −1, 0, 0), hence νρ = (1, −1, 0, 0), ν ∈
Z. But then ρ3 = 0 and V would be pointwise invariant under Gm , contradiction.
1 α(λ−1
−1
−1
0 λ1 − 1)
= ψβ ,
Take this τ and an arbitrary α 6= 0. Then τ ψα τ ψα =
0
1
where β := α(λ−1
0 λ1 −1) 6= 0. The same argumentation as in the proof of ( 8.2.1 Auxiliary
Lemma 1) gives a contradiction.
11.1.2. If 0 ≤ i < j ≤ 3 define Tij = {(λ0 , λ1 , λ2 , λ3 ) ∈ T (4; k)|λℓ = 1 if ℓ 6= i and
ℓ 6= j}. We let G := Ga · T23 operate von V . If ξ ∈ V (k) is as in Auxiliary Lemma 1,
let Gξ be the inertia group of ξ in G. From 1 ≤ dim Gξ ≤ 2 it follows that there is a
1-dimensional connected subgroup H < G0ξ . Then H ≃ Ga or H ≃ Gm . In the first case it
103
follows that H = Ga and hence T (4; k) · ξ is pointwise Ga -invariant, contradiction. Hence
H ≃ Gm . In the diagram
i
Gm ≃ H ֒→ G = Ga × T23
p1 ւ
ց p2
Ga
T23
p1 ◦ i is the trivial map, hence H = {(1, 1, λr , λs )|λ ∈ k ∗ } =: T23 (r, s), where r, s ∈ Z are
not both equal to zero.
λ α
∗
11.1.3. Let be G :=
λ, µ ∈ k , α ∈ k and let ξ be as in Auxiliary
0 µ
Lemma 1. Then there is again a subgroup H < G0ξ isomorphic to Gm . It has the form
a
λ (λb − λa )c
∗
H=
λ ∈ k , where a, b ∈ Z are not both equal to zero and c ∈ k
0
λb
1 −c 0 0
0 1 0 0
is a fixed element (cf. 8.2.2). Let be u =
0 0 1 0.
0 0 0 1
a
λ
b
λ
∗
−1
=: T01 (a, b). If ζ := u(ξ) one gets
λ
∈
k
Then uHu =
1
1
Tζ ⊃ T01 (a, b). From (11.1.2) it follows Tζ ⊃ T23 (r, s), too. As Tζ contains the diagonal
group, dim Tζ ≥ 3 follows. If ζ were fixed by T (4; k), then ξ = u−1 (ζ) would be fixed by
T23 , contradiction. It follows that Tζ = T (ρ). Here ρ3 6= 0, because ξ is not invariant
under σ, for otherwise V would be pointwise Gm -invariant. If c = 0, then Tξ = T (ρ),
which contradicts the choice of ξ.
Conclusion 11.1. The element u is different from 1, and putting ζ := u(ξ) one has
V = Ga · Gm · ζ.
Proof. Put G := Ga · Gm = Ga × Gm . If dim Gζ ≥ 1, then G0ζ would contain a
subgroup H isomorphic to Ga or Gm . But then ζ would be invariant under Gm or Ga ,
and then the same would be true for ξ, which gives a contradiction as above.
Conclusion 11.2. V \ Ga · Gm · ζ is a union of points and curves which are invariant
under Ga · T (4; k).
Proof. If τ ∈ T (ρ), then τ · Ga · Gm · ζ = Ga · τ · Gm · ζ = Ga · Gm · τ · ζ = Ga · Gm · ζ,
and if τ ∈ Gm , the same is true.
11.2. The homogeneous case
11.2.1. We first suppose that V is pointwise invariant under the Gm -operation σ(λ) :
x 7→ x, y 7→ y, z 7→ z, t 7→ λt, but not pointwise invariant under the Gm -operation τ (λ) :
x 7→ x, y 7→ y, z 7→ λz, t 7→ t. Then V is invariant under T (3; k) ≃ {(λ0 , λ1 , λ2 , 1)|λi ∈ k ∗ }
and we have the same situation as in Chapter 8. We use the same notations introduced
104
there
ξ (8.2.1 Auxiliary Lemma 1). We let G := Ga · T (2; k) =
and get V = T (2; k) ·
λ α
λ, µ ∈ k ∗ , α ∈ k operate on V . Then there is a subgroup H of Gξ , which
0 µ
is isomorphic to Ga or Gm . In the first case ξ would
bea Ga -invariant,
hence Vwould
λ (λb − λa )c
be pointwise Ga -invariant. In the second case H =
λ ∈ k ∗ . The
0
λb
same argumentation as in (11.1.3) shows that the inertia group of ζ = u(ξ) in T (3; k) has
a dimension ≥ 2. If Tζ = T (3; k), then ζ would be invariant under the Gm -operation τ ,
hence ξ invariant under τ , too, and V would be pointwise invariant under τ . It follows
that Tζ = T (ρ), where ρ = (ρ0 , ρ1 , ρ2 , 0) and ρ2 6= 0. We note that u 6= 1, for otherwise
the inertia group of ξ in T (4; k) would have a dimension ≥ 3.
Conclusion 11.3. The element u is different from 1 and putting ζ = u(ξ) one has
V = Ga · Gm · ζ.
The same argumentation as in (11.1.3) gives
Conclusion 11.4. V \ Ga · Gm · ζ is a union of points and curves which are invariant
under Ga · T (4; k).
11.2.2. We now suppose V is pointwise invariant under T23 = { (1, 1, λ2, λ3 ) | λi ∈ k ∗ }.
Then V is not pointwise invariant under the Gm -operation σ(λ) : x 7→ λx, y 7→ y, z 7→
Gm
z, t 7→ t. Let ξ ∈ V \ (V
∪ V Ga ) be a closed point, and put G := Ga ⋉ Gm =
λ α
λ ∈ k∗ , α ∈ k .
0 1
Assume that dim Gξ ≥ 1. Then H := G0ξ is 1-dimensional and connected. As ξ is not
a
λ (1 − λa )c
∗
, a ∈ Z − (0). Putting
Ga -invariant, H ≃ Gm hence H =
λ∈k
0
1
1 −c 0 0
0 1 0 0
−1
a
∗
u =
0 0 1 0 and ζ = u(ξ) we get uHu = { (λ , 1, 1, 1) | λ ∈ k }. It follows
0 0 0 1
that ζ is Gm -invariant, hence T (4; k)-invariant. Thus ξ corresponds to an ideal I such
that H 0 (I(b)) has a generating system of the form Mi (y − cx)ni , where Mi is a monomial
without y and ni ∈ N. There are only finitely many points ζi ∈ V (k) which are T (4; k)
S
invariant and it follows that ξ ∈ Ga · ζi .
S
Conclusion 11.5. If ξ ∈ V \ [ Ga · ζi ∪ V Ga ∪ V Gm ] is a closed point, then V =
Ga · Gm · ξ.
11.3. Summary
Let V ⊂ HQ be a 2-dimensional subvariety, invariant under G := Ga · T (4; k) but not
pointwise invariant under Ga . We distinguish between three cases:
1. V is not pointwise invariant under the Gm -operation σ(λ) : x 7→ x, y 7→ y, z 7→ z, t 7→
λt.
105
2. V is pointwise invariant under the Gm -operation σ as in the first case, but not pointwise
invariant under the Gm -operation τ (λ) : x 7→ x, y 7→ y, z 7→ λz, t 7→ t.
3. V is pointwise invariant under the Gm -operations σ and τ as in the 1th and 2nd case.
Then V is not pointwise invariant under the Gm -operation ω(λ) : x 7→ λx, y 7→ y, z 7→
z, t 7→ t.
Lemma 11.1. (a) In the 1st case (respectively in the 2nd case) there is ζ ∈ V (k) with
the following properties:
(i) The inertia group Tζ of ζ in T (4; k) has the form T (ρ) where ρ3 > 0 (respectively
ρ2 > 0 and ρ3 = 0).
(ii) V = Ga · Gm · ζ and V \ Ga · Gm · ζ is a union of points and curves invariant under
Ga · T (4; k).
(iii) There is u ∈ Ga , different from 1, such that V = T (4; k) · ξ, where ξ := u(ζ).
(b) In the 3rd case, if one chooses ζ ∈ V (k) such that ζ is neither Ga – nor Gm -invariant
and does not lie in the set ξ ∈ V (k) ∃u ∈ Ga and ∃µ ∈ V T (4;k) (k) such that ξ = u(µ) ,
then V = Ga · Gm · ζ.
106
CHAPTER 12
Surfaces in H invariant under B(4; k)
12.1. The operation of the unipotent group on H
12.1.1.
Let be p = (a : b : c) ∈ P2 (k)
1 α ∗
0 1 β
G(p) :=
0
0 1
0 0 0
and
∗
∗
aα + bβ + cγ = 0 .
γ
1
This is a 5-dimensional subgroup of ∆ := U(4 : k) and each 5-dimensional subgroup of ∆
has this form, where p ∈ P2 (k) is uniquely determined (Appendix D, Lemma 1).
Especially one has the groups
G
: 0
i = G(pi ), where
p1 = (0
1
1
∗
∗
∗
0
0 1 ∗ ∗
, G2 =
0), p3 = (1 : 0 : 0) and G1 =
0
0
0
1
0
0
0 0 0 1
1
0
∗
∗
0
1
∗
∗
.
0 0 1 ∗
0 0 0 1
1 α 0 0
0 1 0 0
−1
Remark 12.1. If ψα =
0 0 1 0, then ψα G(p)ψα = G(p).
0 0 0 1
: 1), p2
=(0 : 1 :
∗ ∗ ∗
1 0 ∗
, G3 =
0 1 ∗
0 0 1
Remark 12.2. If τ = (λ0 , λ1 , λ2 , λ3 ) ∈ T (4; k), then τ G(p)τ −1 = G(τ p), where τ p :=
−1
−1
(aλ−1
0 λ1 : bλ1 λ2 : cλ2 λ3 ).
12.1.2. In ([T2], 3.2.1) and ([T3], 10.2) we had introduced a closed, reduced subscheme Z = Z(HQ ) of HQ such that Z(k) = {x ∈ HQ (k)| dim ∆ · x ≤ 1}. From the
∼
theorem of Hirschowitz it follows that A1 (Z) −→ A1 (HQ ) ([T2], Lemma 24, p. 121). In
the following we consider a surface V ⊂ Z (i.e. a closed 2-dimensional subvariety) which
is B(4; k)-invariant, but not pointwise invariant under ∆ = U(4; k).
12.1.3. Auxiliary Lemma 1. Let be ξ ∈ Z(k) but ξ not invariant under ∆ ,hence
∆ξ = G(p) . If τ ∈ Tξ , then τ p = p .
Proof. If τ ∈ Tξ , then τ G(p)τ −1 τ ξ = τ ξ and thus G(τ p)ξ = ξ. If τ p 6= p , then
G(τ p) 6= G(p) and ξ would be fixed by the subgroup of ∆ , which is generated by G(p)
and G(τ p) , i.e. fixed by ∆ .
107
12.1.4. Auxiliary Lemma 2. Let ξ be as in Auxiliary Lemma 1 . If p = (a : b : 0)
and a, b 6= 0 , then Tξ ⊂ T (1, −2, 1, 0) .
Proof. This follows from Remark 12.2 and Auxiliary Lemma 1 .
12.2. The case p = (a : b : c) where a, b, c 6= 0.
Let be V ⊂ Z a B-invariant surface and ξ ∈ V (k) a point such that ∆ξ = G(a : b : c),
−2 3
2
∗
where a, b, c 6= 0. Then Tξ ⊂ Λ := {(λ0 , λ1 , λ−1
0 λ1 , λ0 λ1 )|λ0 , λ1 ∈ k }. As dim T (4; k)·ξ ≤
2, one has dim Tξ ≥ 2 and thus Tξ = Λ.
λ
α
0
0
0
0
λ
0
0
1
∗
operate on V .
α ∈ k, λ0 , λ1 ∈ k
We let G := Ga · T01 =
0 0 1 0
0 0 0 1
Remark 12.3. T (4; k)ξ ∩ G = (1).
−2 3
−1 2
−2 3
2
Proof. If (λ0 , λ1 , λ−1
0 λ1 , λ0 λ1 ) ∈ G, then λ0 λ1 = λ0 λ1 = 1 which implies λ0 /λ1 =
1, and then λ0 = λ1 = 1 follows.
Remark 12.4. V = T01 · ξ.
Proof. Because of Remark (12.3) dim T01 · ξ < 2 is not possible.
From Remark (12.4) it follows that Gξ < G is 1-dimensional. If H := G0ξ would be
isomorphic to Ga
, then
be invariant
under
Ga , hence invariant under ∆, contradic mξ would
λ
(λn − λm )c
tion. Thus H =
λ ∈ k ∗ , m, n ∈ Z not both equal to zero, c ∈ k
0
λn
m
1 −c
λ
0
−1
∗
(see 8.2.2 Auxiliary Lemma 2). If u :=
, then uHu =
λ ∈ k =:
0 1
0 λn
T01 (m, n). Putting ζ := u(ξ) one gets Tζ ⊃ T01 (m, n). Because of Remark (12.1) from
∆ξ = G(p) it follows that ∆ζ = G(p), too. Hence Tζ = Λ ⊃ T01 (m, n), which is not
possible. We have proved
Lemma 12.1. If V ⊂ Z is a B(4; k)-invariant surface, which is not pointwise∆ invariant , then there is no point ξ ∈ V (k), whose inertia group ∆ξ in ∆ has the form
G(a : b : c), where a, b, c 6= 0.
12.3. 1-cycles of proper type 3.
12.3.1. Recalling the restriction morphism h. The ideals I ⊂ OP3 with Hilbert
polynomial Q such that t is not a zero divisor of OP3 /I form an open non empty subset
108
Ut ⊂ HQ , and I 7→ I ′ := I + tOP3 (−1)/tO
P3 (−1)
1 0
0 1
phism h : Ut → H d = Hilbd (P2 ). If Γ :=
0 0
0 0
in Ut .
defines
the so called restriction mor0 ∗
0 ∗
< ∆, then HΓ is contained
Q
1 ∗
0 1
12.3.2. 1-cycles of proper type 3. We recall, respectively introduce, the notations:
An ideal J ⊂ OP3k with Hilbert polynomial Q corresponds to a point ξ ∈ HQ (k). J has
the type 3, if J is invariant under T (4; k) and G3 = G(1 : 0 : 0), but not invariant under
∆. The curve C = Ga · ξ = {ψα (ξ)|α ∈ k}− in HQ is called the 1-cycle of type 3 defined
by ξ. We say C is a 1-cycle of proper type 3, if I := J ′ = h(J ) has y-standard form (cf.
2.4.3 Definition 2). If ϕ is the Hilbert function of I ↔ ξ ′ ∈ H d (k), then g ∗(ϕ) > g(d)
m
by definition, and one has I = yK(−1) + x
O
K ⊂ OP2 has the colength
P2 (−m),where
1 0 ∗
′
c and is invariant under T (3; k) and G3 := 0 1 ∗ < U(3; k). Moreover one has
0 0 1
d = m + c and m ≥ c + 2.
T −a+2
T −b+1
If Q(T ) = T −1+3
+
+
is the Hilbert polynomial of J , then the
3
2
1
3
closed subscheme V+ (J ) ⊂ P defined by J has the “complementary” Hilbert polynomial
P (n) = dn − g + 1, where d = a − 1, g = g(J ) = (a2 − 3a + 4)/2 − b (cf. [T1], p. 92).
Lemma 12.2. Let J be of proper type 3 and put ν := min{n|H 0 (J (n)) 6= (0)}. If
xν ∈ H 0 (J (ν)), then g(J ) < 0.
Proof. We start with an ideal J fulfilling these conditions. There are subspaces
Ui ⊂ Si , invariant under T (3; k)·G′3 such that S1 Ui ⊂ Ui+1 , i = 0, 1, 2, . . . and H 0 (J (n)) =
n
L
tn−i Ui , for all n ∈ N. Besides this, Un = H 0 (I(n)), at least if n ≥ b, where I = J ′
i=0
is the restriction ideal of J (cf. [G78], Lemma 2.9, p. 65). We replace Un by H 0 (I(n))
for all ν ≤ n ≤ b − 1 and we get an ideal J ∗ ⊃ J such that H 0(J ∗ (n)) = (0), if
n
L
n < ν, and H 0 (J ∗ (n)) =
tn−i H 0 (I(i)), if n ≥ ν. (N.B.: xν ∈ H 0 (J (ν)) ⇒ xν ∈
0
−→
i=ν
0
Im(H (J (ν)) res H (I(ν)))). Then (J ∗ )′ is equal to I, J ∗ is of proper type 3, the
n−a+2
n−b∗ +1
Hilbert polynomial of J ∗ is equal to Q∗ (n) = n−1+3
+
+
, the length of
3
2
1
∗
∗
∗
∗
2
J /J is equal to Q (n) − Q(n) = b − b ≥ 0, and because of g(J ) = (a − 3a + 4)/2 − b∗
one has g(J ∗ ) ≥ g(J ). Thus it suffices to show g(J ∗ ) < 0 We thus can assume without
n
L
restriction J = J ∗ , i.e. H 0 (J (n)) =
tn−i H ( I(i)) for all n ≥ ν, and xν ∈ H 0(I(ν)).
i=ν
Using the terminology of [T1]–[T4], one can say the pyramid E(J ) is complete up to the
level ν over the platform H 0 (I(n)), where n ≥ b, for instance (cf. Fig. 12.1). Further
note that
(12.1)
H 0 (I(n)) = yH 0(K(n − 1)) ⊕ xm k[x, z]n−m
for all n ∈ N (cf. Lemma 2.6). We associate to I the ideal Ĩ represented in Figure
12.2; that means Ĩ arises from I by shifting yH 0(K(n − 1)) into yH 0(K̃(n − 1)), where
109
the Hilbert functions of K and K̃ agree and hence I and Ĩ have the same Hilbert function
n
L
ϕ. Then J˜ is defined by H 0 (J˜(n)) =
tn−i H 0 (Ĩ(i)) and H 0 (J˜(n)) = (0), if n < ν.
i=ν
Then Ĩ and J˜ fulfil the same assumptions as I and J do, and g(J ) = g(J˜). Thus we
can assume without restriction that I has the shape as represented by Fig. 12.2. Then
•
•
one makes the graded deformations • 7→ 1 (or • → 2 , etc.) in E(J ). Each of the
orbits which are to be exchanged, have the same length. One gets an ideal J˜ with the
same Hilbert polynomial, which is again of proper type 3. If ϕ̃ is the Hilbert function of
Ĩ := (J˜)′ , then ϕ̃ > ϕ, hence g ∗(ϕ̃) > g ∗ (ϕ) > g(d) (cf. Remark 2.1). As J and J˜ have
the same Hilbert polynomial, one has g(J˜) = g(J ). The colength of Ĩ in OP2 is the same
as the colenght d of I in OP2 , hence the coefficient a, remains unchanged. Now one can
again pass to (J˜)∗ and has again xν ∈ H 0((J˜)∗ (ν)), ν = min{n|H 0 ((J˜)∗ (n)) 6= (0)}, (J˜)∗
of proper type 3. Thus it suffices to show g((J˜)∗ ) < 0. Continuing in this way one sees
that one can assume without restriction: J is of proper type 3, I = J ′ has the shape of
n
L
Figure 12.4, H 0 (J (n)) =
tn−i H 0 (I(i)), for all n ≥ ν, H 0 (J (n)) = (0), if n < ν and
i=ν
xν ∈ H 0 (J (ν)), i.e., xν ∈ H 0 (I(ν)). From (12.1) it follows that ν ≥ m. As m ≥ c + 2, we
have h0 (K(n)) = n+2
− c, n ≥ m − 1, hence h0 (I(n)) = h0 (K(n − 1)) + (n − m + 1) =
2
n−1+2
n−1+2
n−1+1
n+2
−
c
+
(n
−
m
+
1)
=
+
+
1
−
(c
+
m)
=
− d, n ≥ m. But then
2
2
1
2
n
n
X
X
i+2
−d
h0 (J (n)) =
h0 (I(i)) =
2
i=ν
i=ν
X
ν−1
n
X
i+2
i+2
− (n − ν + 1)d
−
=
2
2
i=0
i=0
ν+2
n+3
− (n − ν + 1)d.
−
=
3
3
From this we get P (n) = (n − ν + 1)d + ν+2
, thus:
3
(12.2)
g(J ) = (ν − 1)d − ν+2
+1.
3
We regard g(J ) as a function of ν ≥ m, and we
1
1
g(x) := (x − 1)d − x3 − x2 −
6
2
have to determine the maximum of
1
x + 1, x ≥ m.
3
q
1 2
1
′
We have g (x) = − 2 x − x + (d − 3 ) = 0 ⇔ x = −1 ± 2d + 31 , and we show
q
m ≥ −1 + 2d + 31 . This last inequality is equivalent to (m + 1)2 > 2d + 31 ⇔ m2 + 23 ≥ 2c
(because of d = c + m). As m ≥ c + 2, this is true.
It follows that g ′(x) ≤ 0, if x ≥ m, hence g(x) is monotone decreasing if x ≥ m. Now
1
1
1
g(m) = (m − 1)d − m3 − m2 − m + 1
6
2
3
1 3 1 2 1
= (m − 1)(c + m) − m − m − m + 1
6
2
3
1 3 1 2 1
≤ (m − 1)(2m − 2) − m − m − m + 1.
6
2
3
110
The right side of this inequality is smaller than 0 if and only if 18 < m3 − 9m2 + 26m,
which is true as m ≥ c + 2 ≥ 2.
12.4. B(4; k)-invariant surfaces containing a 1-cycle of proper type 3.
Let be V ⊂ Z = Z(HQ ) a B(4; k)-invariant surface containing a 1-cycle D of proper
type 3. Then V is not pointwise invariant under the Ga -operation ψα : x 7→ x, y 7→
αx+y, z 7→ z, t 7→ t. According to Lemma 11.1 one can write V = Ga · Gm · ζ. The inertia
group ∆ζ has the form G(p) and by Lemma 12.1 it follows that p = (a : b : c), a, b, c 6= 0,
is not possible.
12.4.1. The case p = (a : 0 : c), a, c 6= 0. Then V is not pointwise invariant under
the Gm -operation σ (notations as in Lemma 11.1), as the following argumentation will
show: Let J ↔ ζ be the corresponding ideal, let P be an associated prime ideal, which is
G(p)-invariant. If t ∈ P, then from (Appendix D, Lemma 2) it follows that P = (x, y, z, t),
contradiction. Thus t is a non zero divisor of OP3 /J . If J would be invariant under σ,
then J would be generated by elements of the form f · tn , where f ∈ Sm , m, n ∈ N. It
follows that f ∈ J and thus J is invariant under Γ (cf. 12.3.1). But by assumption
∆ζ = G(a : 0 : c) 6⊃ Γ.
The inertia group Tζ ⊂ T (4; k) has the form T (ρ), where ρ3 > 0 ( Lemma 11.1 a). By
Appendix E H 0 (J (b)) has a standard basis fi = Mi pi (X ρ ).
Let be W := Ga · Gm · ζ The morphism his defined on V ∩ Ut ⊃ W . As W = V , it
follows that h(W ) = {ψα (ζ ′)|α ∈ k} is dense in h(V ∩ Ut ), where ζ ′ = h(ζ) ↔ J ′ . As
ζ ∈ Ut , it follows that ζ0 = lim σ(λ)ζ ∈ Ut too (see [G88], Lemma 4, p. 542, and [G89],
λ→0
1.6, p.14). As ρ3 > 0 it follows that ζ0′ = ζ ′. Now write D = {ψα (η)|α ∈ k}− , where
η ∈ V (k) is invariant under T (4; k) and G3 , hence C ⊂ V ∩Ut and h(η) ∈ {ψα (ζ ′)|α ∈ k}− .
′
As η is not Ga -invariant , h(η) is not Ga -invariant, hence
h(η)
∈ k}. Then
∈ {ψα (ζ )|α
1 0 ∗ ∗
0 1 ∗ ∗
′
′
< G(p) is
(Appendix D, Lemma 3) shows h(η) = ζ = ζ0 . H :=
0 0 1 0
0 0 0 1
normalized by σ, hence ζ0 is H-invariant. As ζ0 ∈ Ut is Gm -invariant , it follows that ζ0
is Γ-invariant. But then ζ0 is invariant under G3 , and as ζ0′ = η ′ corresponds to an ideal
in y-standard form, J0 ↔ ζ0 is of proper type 3.
As G(p) is unipotent there is an eigenvector f 6= 0 in H 0 (J (ν)), ν := min{n|H 0(J (n)) 6=
(0)}. From x∂f /∂t ∈ hf i it follows that ∂f /∂t = 0. From y∂f /∂z ∈ hf i it follows
∂f /∂z = 0 and from cx∂f /∂y − az∂f /∂t ∈ hf i we deduce f = xν (cf. Appendix D,
Lemma 2). But then xν ∈ J0 , too. Now h0 (J0 (n)) = h0 (J (n)), n ∈ Z (cf. [G88], Lemma
4, p.542 and [G89], 1.6, p.14). But then from Lemma 12.2 it follows that g < 0.
Conclusion 12.1. In the case p = (a : 0 : c), a, c 6= 0, V does not contain a 1-cycle
of proper type 3, if g > g(d) is supposed.
111
12.4.2. The case p = (a : b : 0), a, b 6= 0. As Tζ ⊂ T (1, −2, 1, 0) (cf. Auxiliary
Lemma 2 ), it follows from Lemma 11.1 that V is pointwise invariant under σ(λ) : x 7→
x, y 7→ y, z 7→ z, t 7→ λt. As V is pointwise invariant under Γ, one has V ⊂ GΦ , where Φ
is the Hilbert function of J ↔ ζ and GΦ is the corresponding “graded Hilbert scheme”.
This had been defined in ([G4], Abschnitt 2) as follows: Gm operates on HQ by σ. Then
it is shown in (loc. cit.) that G := (HQ )Gm ∩ Ut is a closed subscheme of HQ , and G is a
disjunct union of closed subschemes GΦ , where GΦ parametrizes the corresponding ideals
with Hilbert function Φ.
Now suppose D = {ψα (η)|α ∈ k}− ⊂ V is a 1-cycle of proper type 3, where η
corresponds to an ideal L of proper type 3. Then L and J ↔ ζ have the same Hilbert
function.
Let ν and f ∈ H 0 (J (ν)) be defined as in (12.4.1). From x∂f /∂z ∈ hf i and y∂f /∂t ∈
hf i it follows that ∂f /∂z = ∂f /∂t = 0. But then from bx∂f /∂y − ay∂f /∂z ∈ hf i it
follows that ∂f /∂y = 0, hence f = xν . (cf. Appendix D, Lemma 2). If I corresponds to
any point ξ ∈ Ga · Gm · ζ, then xν ∈ H 0 (I(ν)), and the same argumentation as in (8.4.2)
shows this is true for all points in V . But from xν ∈ H 0 (L(ν)) it follows that g < 0.
Conclusion 12.2. In the case p = (a : b : 0), a, b 6= 0, V does not contain a 1-cycle
of proper type 3, if g > g(d) is supposed.
12.4.3. If V contains a 1-cycle of proper type 3, then V cannot be pointwise Ga invariant, so the case p = (0 : b : c) cannot occur, at all.
Lemma 12.3. Suppose g > g(d). If V ⊂ Z(HQ ) is a B(4; k)-invariant surface containing a 1-cycle of proper type 3, then V is pointwise invariant under G3 = G(1 : 0 : 0).
Fig.: 12.1
c monomials are missing
xm z n−m
112
Fig.: 12.2
xm z n−m
Fig.: 12.3
2
1
xm z n−m
113
Fig.: 12.4
c monomials are missing
xm z n−m
114
CHAPTER 13
Relations in B(4; k)-invariant surfaces
We suppose g > g(d) and let V ⊂ X = Z(HQ ) be a B(4; k)-invariant surface, which
contains a 1-cycle of proper type 3. From it follows that V is pointwise G3 -invariant. Then
from ([T1], Proposition 0, p.3) we conclude that any B-invariant 1-prime cycle D ⊂ V is
either pointwise ∆-invariant or a 1-cycle of type 3. The aim is to describe the relations in
Z1B (X) defined by the standard construction of Section 9.2 carried out with regard to V .
13.1. First case : V is not pointwise invariant under the Gm -operation σ
We use the notation of (11.1). According to Lemma 11.1 V = Gm · Ga · ζ, where
Tζ = T (ρ) and ρ3 > 0, for otherwise ζ would be Gm -invariant and hence V would be
pointwise Gm -invariant. Let J ↔ ζ and C := {ψα (ζ)|α ∈ k}− . If one chooses a standard
basis of H 0 (J (b)) consisting of T (ρ)-semi-invariants, then one sees that h(V ) = C ′ :=
{ψα (ζ ′)|α ∈ k}− , where ζ ′ = h(ζ). As V contains a 1-cycle of proper type 3, C ′ is a
y-standard cycle, generated by ζ ′ ↔ J ′ .
Now if D ⊂ V is a 1-cycle of proper type 3, then h(D) = C ′ . If D is of type 3, but
not of proper type 3, then h(D) is not a y-standard cycle. Write D = Ga · η, η ∈ V (k)
invariant under T (4; k). It follows that η ′ = h(η) is one of the two T (3; k)-fixed points of
C ′ (cf. Appendix D, Lemma 3). If η ′ = ζ ′, then D would be of proper type 3. Hence η ′
is the unique point of C ′ , invariant under B(3; k) and h(D) equals the point η ′ .
If D is pointwise ∆-invariant, then h(D) is pointwise invariant under U(3; k) and hence
equals the point η ′ , too .
Let C0/∞ = lim σ(λ)C be the B-invariant limit curves of C coming from the stanλ→0/∞
dard construction. We write in Z1B (V ):
X
C0 =
mi Ai + Z0
and C∞ =
X
nj Bj + Z∞
where Ai and Bj are the 1-cycles of proper type 3, occurring in C0 and C∞ , respectively,
and mi , nj ∈ N − (0). All the other prime cycles occurring in C0 and C∞ are summed up
in Z0 and Z∞ , respectively.
Let Mn be a tautological line bundle on HQ . Then (Mn · Ai ) = δn + ai , (Ln · Bj ) =
δn + bj . Here δ ∈ N − (0) is independent of i and j, as h(Ai ) = h(Bj ) = C ′ for all i, j,
whereas the constants ai and bj still depend on Ai and Bj , respectively. From [C0 ] = [C∞ ]
P
P
it follows that
mi (δn + ai ) =
nj (δn + bj ) + c for all n ≫ 0, where c depends only
P
P
on Z0 and Z∞ (see Appendix D, Lemma 4) . It follows that
mi = nj . Therefore we
115
can write
(13.1)
C0 − C∞ =
X
k
(Ek − Fk ) +
X
Gk
k
where (E1 , E2 , . . . ) := (A1 , . . . , A1 , A2 , . . . , A2 , . . . ) and (F1 , F2 , . . . ) := (B1 , . . . , B1 , B2 ,
. . . , B2 , . . . ). Here A1 (respectively B1 ) are to occur m1 -times (respectively n1 -times) etc.
P
P
By the way, the arbitrary association Ek 7→ Fk is possible because of
mi =
nj . If
Ek = Fk , the summand Ek − Fk is left out. Gk is composed of either pointwise U(4; k)invariant curves or 1-cycles of type 3, which are not proper.
13.2. Second case: V is pointwise invariant under the Gm -operation σ, but
not pointwise invariant under the Gm -operation τ
We use the same notations as in (11.3).
1st subcase: h(V ) is not 2-dimensional. By assumption V contains a 1-cycle D of proper
type 3, hence h(V ) = h(D) =: D ′ is a y-standard cycle and all the other 1-cycles in V ,
which are of proper type 3, are mapped by h onto D ′ . Then one carries out the standard
construction by means of the operation τ and one gets formally the same relations as
(13.1).
′
2nd subcase: h(V
H d is a B(3; k)-invariant surface, which is pointwise invariant
) = V ⊂
1 0 ∗
under G′3 = 0 1 ∗ < U(3; k). As V ′ contains a y-standard cycle, V ′ is not
0 0 1
pointwise Ga -invariant.
At first, the standard construction is carried out in V = Gm · Ga · ζ (cf. Lemma
11.1): C := {ψα (ζ)|α ∈ k}− is a closed curve in V with Hilbert polynomial p and
λ 7→ τ (λ)C defines a morphism Gm → Hilbp (V )Ga , whose extension to P1 defines a
∗
flat family C ֒→ V × P1 over P1 such that Cλ := p−1
2 (λ) = τ (λ)C × {λ}, if λ ∈ k .
C0/∞ =: p1 (C0/∞ ) are called the limit curves of C and [C0 ] − [C∞ ] is the “standard
relation” defined by V .
Put U := {τ (λ)ψα (ζ)|α ∈ k, λ ∈ k ∗ }.
Put U ′ := {τ (λ)ψα (ζ ′ )|α ∈ k, λ ∈ k ∗ }.
Then U = V and U ′ = V ′ . Carrying out the standard construction by means of C ′ :=
p2
{ψα (ζ ′)|α ∈ k}− one gets a flat family C ′ ֒→ V ′ × P1 −→ P1 . One has a morphism
h×id
C ֒→ V × P1 −→ V ′ × P1 , which is denoted by f .
Put U := {(τ (λ)ψα (ζ), λ)|α ∈ k, λ ∈ k ∗ } ⊂ C.
Put U′ := {(τ (λ)ψα (ζ ′), λ)|α ∈ k, λ ∈ k ∗ } ⊂ C ′ .
C and C ′ are reduced and irreducible (see [T1], proof of Lemma 1, p.6). Hence U = C and
U′ = C ′ . As f (U) = U′ and f is projective, f (C) = C ′ follows. As the diagram
C
f
p2 ց
−→
P1
116
ւ p2
C′
′
is commutative, f (C0 ) = C0′ and f (C∞ ) = C∞
follows. As the diagram
h×id
V × P1 −→ V ′ × P1
p1 y
V
h
−→
yp1
V′
′
is commutative, it follows that C′0/∞ := p1 (C0/∞
) = p1 f (C0/∞ ) = h(C0/∞ ). Let be
′
′
ζ0/∞ := lim τ (λ)ζ and ζ0/∞ := lim τ (λ)ζ .
λ→0/∞
λ→0/∞
′
′
Now from Lemma 9.2, it follows that C0/∞
:= {ψα (ζ0/∞
)|α ∈ k}− are the only ystandard cycles in C′0/∞ and they both occur with multiplicity 1. We want to show that
C0/∞ := {ψα (ζ0/∞ )|α ∈ k}− are the only 1-cycles of proper type 3 in C0/∞ , and they both
occur with multiplicity 1. In order to show this, we consider the extension of τ : λ 7→ τ (λ)ζ
to a morphism τ : A1 → V . Then ζ0 = τ (0). As there is a commutative diagram
Gm
τ
−→
τ′ ց
V
yh
V′
where τ ′ (λ) := τ (λ)ζ ′ and as the extensions are determined uniquely, it follows that
τ ′ = h ◦ τ , hence h(ζ0 ) = ζ0′ . By constructing the corresponding extensions to P1 , it
′
′
follows in the same way that h(ζ∞ ) = ζ∞
. Hence we get h(C0/∞ ) = C0/∞
. Therefore C0
and C∞ are 1-cycles of proper type 3, and from ζ0/∞ ∈ C0/∞ we conclude C0/∞ ⊂ C0/∞ .
Assume there is another 1-cycle D of proper type 3 contained in C0 , or assume that
C0 occurs with multiplicity ≥ 2 in C0 . Then there is an equation [C0 ] = [C0 ] + [D] + · · ·
in Z1B (V ), where D = C0 , if C0 occurs with multiplicity ≥ 2. From Lemma 9.2 it follows
that h(D) = C0′ . It follows that
h∗ ([C0 ]) = h∗ ([C0 ]) + h∗ ([D]) + · · ·
= deg(h|C0 )[h(C0 )] + deg(h|D)[h(D)] + · · ·
= h∗ ([C]) = deg(h|C)[C ′ ].
As C = {ψα (ζ)|α ∈ k}− and α 7→ ψα (ζ) is injective and the same is true for C ′ and
α 7→ ψα (ζ ′ ), h|C is an isomorphism. The same argumentation shows that h|C0 and h|C∞
are isomorphisms. Hence we get [C0′ ] + [C0′ ] + · · · = [C ′ ] = [C′0 ], which means that C0′
occurs with multiplicity ≥ 2 in C′0 , contradiction.
The same argumentation shows that C∞ is the only 1-cycle of proper type 3 in C∞ , and
it occurs with multiplicity 1. This gives a relation of the form
(13.2)
C0 − C∞ = C 0 − C ∞ − Z
where C0 and C∞ are 1-cycles of proper type 3 and Z is a 1-cycle whose components are
either pointwise U(4; k)-invariant curves or 1-cycles of type 3, which are not proper.
117
13.3. Third case:V is pointwise invariant under σ and τ
Then we write V = Gm · Ga · ζ as in the 3rd case of Lemma 11.1.
1st subcase : h(V ) is not 2-dimensional. The same argumentation as in the first subcase
of (13.2), using the Gm -operation ω as in the third case of Lemma 11.1 instead of the
Gm -operation τ , gives relations of the form (13.1).
2nd subcase: If h(V ) is 2-dimensional, the same argumentation as in the second subcase
of 13.2 , with ω instead of τ , gives relations of the form (13.2).
Proposition 13.1. Assume g > g(d) and let V ⊂ X := Z(HQ ) be a B(4; k)-invariant
surface containing a 1-cycle of proper type 3. Then each relation in Z1B (X) defined by V ,
which contains a 1-cycle of proper type 3 is a sum of relations of the form C1 − C2 + Z.
Here C1 and C2 are 1-cycles of proper type 3, and Z is a 1-cycle whose prime components
either are pointwise ∆-invariant or 1-cycles of type 3 , which are not proper .
Proof. This follows from the foregoing discussion.
118
CHAPTER 14
Necessary and sufficient conditions
We take up the notations introduced in Chapter 1. Let be d ≥ 5, g(d) = (d −
2) /4, H = Hd,g , A(H) = Im(A1 (HU (4;k) ) −→ A1 (H)). Obviously, the cycle C3 =
{(xa , αx + y, xa−1 z b−a+1 )|α ∈ k}− , where d = a − 1, g = (a2 − 3a + 4)/2 − b, is a 1cycle of proper type 3.
2
14.1. The necessary condition.
In the proof of Theorem 1.1 in Chapter 10 we replace H, U(3; k), B(3; k) and “ystandard cycle” by H, U(4; k), B(4; k) and “1-cycle of proper type 3”, respectively. Then
using Proposition 13.1 instead of Proposition 9.1 , the same reasoning as in the proof of
Theorem 1.1 gives the
Conclusion 14.1. If d ≥ 5 and g > g(d), then [C3 ] ∈
/ A(H).
14.2. The sufficient condition.
14.2.1. The case d ≥ 5 and d odd. In ([T2], 3.3.2, Folgerung, p.129) it had been
shown [C3 ] ∈ A(H), if a ≥ 6 and b ≥ a2 /4. We express this condition by means of the
formulas in (1.1):
1
1
1
b ≥ a2 /4 ⇔ g ≤ [(d + 1)2 − 3(d + 1) + 4] − (d + 1)2 = (d2 − 4d + 3).
2
4
4
1
2
As d is odd, this is equivalent to g ≤ g(d) = 4 (d − 2) .
Conclusion 14.2. If d ≥ 5, d is odd and g ≤ g(d), then [C3 ] ∈ A(Hd,g ).
14.2.2. The case d ≥ 6 and d even. We will show that the bound given in
([T2], 3.3.3 Folgerung, p.129) is already valid, if a ≥ 7.
1◦ We consider the Hilbert function ϕ of the monomial ideal (x2 , xy e−2 , y e), e := d/2 + 1 ≥
4, which is represented in Fig. 14.1. In Section (2.2.3) this Hilbert function had been
denoted by χ and it had been shown that g ∗ (ϕ) = 41 (d−2)2 =: g(d). The figures 14.1 - 14.6
show all possible monomial ideals with Hilbert function ϕ. Besides this, we consider the
ideal I := (xy, xe , y e, xe−1 + y e−1) which corresponds to a closed point ξ in the Iarrobino
variety Iϕ . This variety parametrizes all sequences (U0 , U1 , . . . ) of subspaces Ui ⊂ Ri with
dimension ϕ′ (i) = ϕ(i) − ϕ(i − 1), such that R1 Ui ⊂ Ui+1 , i ∈ N . Here R is the polyno mial ring in two variables .
2◦ We show that V = Ga · Gm · ξ is 2-dimensional, where Gm operates by τ (λ) : x 7→
f
λx, y 7→ y, z 7→ z. If this would not be the case, then Ga × Gm −→ Iϕ defined by
119
(α, λ) 7→ ψα τ (λ)ξ would have a fibre with infinitely many points. The argumentation in
(8.3) then showed that one of the points ξ0/∞ = lim τ (λ)ξ is invariant under Ga . As
λ→0/∞
ξ0 ↔ I0 = (xy, xe , y e−1) and ξ∞ ↔ I∞ = (xy, xe−1, y e ), this is wrong as e ≥ 4. Thus we
can carry out the standard construction with C = {ψα (ξ)|α ∈ k}− , if e ≥ 4. As we have
already noted in the proof of Lemma 9.1 , the curves τ (λ)C are Ga -invariant, hence the
construction of the family C takes place in (Hilbp (Iϕ ))Ga . Therefore the limit curves C0/∞
are invariant under B(3; k).
As usual, we put C0/∞ = {ψα (ξ0/∞ )|α ∈ k}− .
3◦ Let I ↔ (U0 , U1 , · · · ), where Ui = xyRi−2 , if 0 ≤ i ≤ e − 2, Ue−1 = xyRe−3 + hxe−1 +
y e−1i, Ui = Ri , i ≥ e. We get
ψα (Ue−1 ) = x(αx + y)Re−3 + hxe−1 + (αx + y)(αx + y)e−2i
⇒
= x(αx + y)Re−3 + hxe−1 + (αx + y)y e−2i
˙ α (Ue−1 ) = αxe−1 ∧ αxe−2 y ∧ · · · ∧ αx2 y e−3 ∧ αxy e−2
∧ψ
+ terms with smaller powers of α.
Hence α-grade (Ue−1 ) = e − 1. On the other hand, by formula (4.2) in 4.1 we get α-grade
(hxe−2 y, . . . , y e−1i) = e − 1, too. From this it follows that C0 = C0 .
4◦ Besides C∞ , the limit cycle C∞ contains other prime components D1 , D2 , . . . which
are B(3; k)-invariant curves in Iϕ .
As char(k) = 0 is supposed, if m ≤ n, G := Grassm (Rn ) has only one Ga -fixed point,
namely the subspace hxn , · · · , xn−m+1 y m−1 i. Then ([T1], Proposition 0, p.3) shows that
each B(2; k)-invariant curve in G has the form Ga · η, where η ∈ G(k) corresponds to a
monomial subspace of Rn . It follows that each B(3; k)-invariant curve in Iϕ also has the
form Ga · η, where η ∈ Iϕ (k) corresponds to a monomial ideal.
5◦ Figure 14.1 - Figure 14.6 show all possible monomial ideals with Hilbert function ϕ,
and using formula (4.2) we compute the degrees of the corresponding 1-cycles:
e−2
e−2
e−2
deg c2 = e−2
,
deg
c
=
+
(e
−
1),
deg
c
=
2
+
(e
−
1),
deg
c
=
2
+ (e −
3
4
5
2
2
2
2
2), deg c6 = 1.
We see that c2 (respectively c3 ) is equal to C∞ (respectively C0 ). If C0 (respectively
Di ) occurs in C∞ with the multiplicity n (respectively ni ), then from deg C∞ = deg C0 =
e−2
deg C0 it follows that e−2
+
(e
−
1)
=
n
+ n1 deg D1 + · · · where n > 0, ni ≥ 0.
2
2
e−2
1st case: e ≥ 6. Then 2 > e − 1, hence n = 1. From D1 ∈ {c4 , c5 , c6 } and e − 1 ≥
n1 deg D1 we conclude that D1 = c6 and
[C0 ] = [C∞ ] + (e − 1)[c6 ]
2nd case: e = 5. Then deg C0 = 7, deg C∞ = 3, deg c4 = 10, deg c5 = 9, and we get
[C0 ] = 2[C∞ ] + [c6 ] or [C0 ] = [C∞ ] + 4[c6 ].
3rd case: e = 4. Then deg C0 = 4, deg C∞ = 1, deg c4 = 5, deg c5 = 4. Therefore one gets
[C0 ] = n[C∞ ] + m[c6 ]
where n, m ∈ N, n ≥ 1, n + m = 4.
6◦ One has a closed immersion Iϕ → Hd,g(d) defined by I 7→ I ∗ , where I ∗ is the ideal
120
generated by I in OP3 . That means H 0 (P3 ; I ∗ (n)) =
case, one gets an equation
∗
[C0∗ ] = n[C∞
] + m[c∗6 ],
n
L
i=0
tn−i H 0 (P2 ; I(i)), n ∈ N. In any
m, n ∈ N, n ≥ 1.
∗
Now from ([T1], Lemma 5, p.45) it follows that one can deform C0∗ (respectively C∞
) by
a sequence of graded deformations of type 3 (cf. [T1], p.44) modulo A(H) in the cycle C3
(respectively in the zero cycle). The cycle c∗6 is equal to the cycle Dα , α = (d + 2)/2 = e,
which had been introduced in ([T4], p.20f). By ([T4], Lemma 5, p.25) one gets De ≡ D2
modulo A(H), and in ([T4], Abschnitt 3.3, p.25) it had been noted that [D2 ] = [D], where
D is the cycle introduced in (1.1). From ([T4], Satz 1, p.26) it follows [C3 ] ∈ A(Hd,g(d) ).
Applying the shifting morphism (cf. [T3], Folgerung, p.55 and Proposition, p.56) we get:
Conclusion 14.3. If d ≥ 6 is even and g ≤ g(d), then [C3 ] ∈ A(Hd,g ).
14.3. Summary.
Theorem 14.1. Let be d ≥ 5. Then [C3 ] ∈ A(Hd,g ) if and only if g ≤ g(d).
Proof. This follows from Conclusion 14.1- 14.3.
Fig.: 14.2
Fig.: 14.1
c2
0
1
2
3
4 ... ... e
0
121
1
2
3
4 ... ... e
Fig.: 14.3
Fig.: 14.4
c3
0
1
2
3
c4
4 ... ... e
0
1
2
3
4 ... ... e
Fig.: 14.6
Fig.: 14.5
c5
c6
0
1
2
3
4 ... ... e
0
122
1
2
3
4 ... ... e
CHAPTER 15
The case d ≥ 5
We suppose d ≥ 5, i.e. a ≥ 6, and g <
d−1
2
, i.e. g not maximal.
From ([T1], Satz 2, p.91) and ([T4], Proposition 2, p.26) it follows that A1 (Hd,g )/A(Hd,g )
is generated by the so-called combinatorial cycles C1 , C2 , C3 . Using the formulas in (1.1),
one shows that g ≤ γ(d) := d−2
is equivalent to b ≥ 2a − 4.
2
1st case: g > γ(d). This is equivalent to b < 2a − 4. By ([T4], Satz 1, p.26) one has
A(Hd,g ) = hEi. Assume there is a relation q0 [E] + q1 [C1 ] + q2 [C2 ] + q3 [C3 ] = 0, qi ∈ Q.
Computing the intersection numbers with the tautological line bundles Ln by means of
the formulas (1)-(5) in ([T2], p.134f) and the formula in ([T3], Hilfssatz 1, p.50), one
sees that q2 = 0. Put r := 2a − 4 − b and denote the r-times applied shifting morphism by f (see [T3], p.52 ). The images of E, C1 , C3 under f in Hd,γ(d) are denoted
by e, c1 , c3 . By ([T2], 3.2.2 Folgerung, p.124) and ([T3], Anhang 2, p.54f) it follows that
[f (E)] ≡ [e], hf (C1)i ≡ hc1 i and hf (C3 )i ≡ hc3 i modulo A(Hd,γ(d) ). By ([T3], Proposition
4, p. 22) [c1 ] ∈ A(Hd,γ(d) . As γ(d) > g(d) if d ≥ 5, from Theorem 14.1 it follows that
[c3 ] ∈
/ A(Hd,γ(d) ). Hence f (C3 ) is not a single point, hence deg(f |C3 ) 6= 0. Applying f∗
to the relation q0 [E] + q1 [C1 ] + q3 [C3 ] = 0 then gives q3 deg(f |C3 ) · [c3 ] ∈ A(Hd,γ(d) ). If
q3 6= 0 it would follow that [c3 ] ∈ A(Hd,γ(d) ), contradiction. But from q0 [E] + q1 [C1 ] = 0
it follows q0 = q1 = 0 (cf. [T2], 4.1.3).
2nd case: g(d) < g ≤ γ(d). Then b ≥ 2a − 4 and in (1.1) it was explained that in this case
A1 (Hd,g ) is generated by E, D, C2 and C3 . If q0 [E]+q1 [D]+q2 [C2 ]+q3 [C3 ] = 0, then q2 = 0
by ([T2], loc.cit.). If one assumes that q3 6= 0, it follows that [C3 ] ∈ hE, Di ⊂ A(Hd,g ),
contradicting Theorem 14.1. As in the first case we get q0 = q1 = 0.
3rd case: g ≤ g(d). Then A(Hd,g ) = hE, Di, [C1] ∈ A(Hd,g ) and [C3 ] ∈ A(Hd,g )
(see.[T3], Proposition 4, p.22 ; [T4], Satz 1, p.26 ; and finally Theorem 14.1 ). Thus
A1 (Hd,g ) = hE, D, C2 i.
All in all we get: Suppose that d ≥ 5 and g < d−1
, i.e. g is not maximal. Put
2
d−2
2
g(d) := (d − 2) /4 and γ(d) := 2 .
Theorem 15.1. (i) If g > γ(d), then A1 (Hd,g ) is freely generated by E, C1 , C2 , C3.
(ii) If g(d) < g ≤ γ(d), then A1 (Hd,g ) is freely generated by E, D, C2 , C3 .
(iii) If g ≤ g(d), then A1 (Hd,g ) is freely generated by E, D, C2.
N.B.: If g = d−1
, then A1 (Hd,g ) is freely generated by C1 := {(x, y d (αy + z))|α ∈ k}−
2
and C2 := {(αx + y, xd+1 )|α ∈ k}− ([T1], Satz 2, p.91).
123
CHAPTER 16
The cases d = 3 and d = 4
16.1. The case d = 3.
T −4+2
T −b+1
If g is not maximal, i.e., if g ≤ 0, then Q(T ) = T −1+3
+
+
, where
3
2
1
b ≥ 4. If b = 4, then in ([T2], p.137) it had been shown that [C3 ] ∈ A(H). Applying the
shifting morphism , (see [T3] ,p .54 ) then shows that this statement is true for all g < 0
(cf. [T3], Anhang 2, Folgerung, p.55 and Proposition, p.56). By ([T1], Satz, p.91, [T3],
Prop. 4, p.22 and Satz 1, p.26) it follows that A1 (H3,g ) is generated by [E], [D], [C2], if
g ≤ 0. The three cycles are linear independent, as already was noted in Chapter 15.
16.2. The case d = 4
If g is not maximal, i.e., if g < 3, then Q(T ) = T −1+3
+ T −5+2
+ T −b+1
and b ≥ 5.
3
2
1
4
If b = 5, then g = 2, and then A1 (H) = Q , as had been shown in ([T3], Anhang 1).
We now treat the case b = 6, i.e., g = 1. As the condition b ≥ 2a − 4 is fulfilled,
A(H) = hE, Di by ([T4], Satz 1, p.26). The quotient A1 (H)/A(H) is generated by C1 , C2
and further cycles C3 , C4, C5 of type 3 (cf. [T1], Satz 2c, p.92). By ([T3], Proposition 4,
p.22) [C1 ] ∈ A(H), and we are going to simplify the reductions of C3 , C4 , C5 described in
([T3], Abschnitt 8).
16.2.1. The cycle C3 . By definition a cycle of type 3 has the form C = Ga · ξ, where
ξ corresponds to a monomial ideal J with Hilbert polynomial Q, which is invariant under
the subgroup G3 < U(4; k) (cf. Chapter 1). Here Ga operates as usual by ψα : x 7→
x, y 7→ αx + y, z 7→ z, t 7→ t. Let I := J ′ ∈ H 4 (k) be the image of J under the restriction
morphism. If I is Ga -invariant, then I is B(3; k)-invariant, hence deg(Ln |C) = constant
and [C] ∈ A(H) (cf. [T3], Anhang 2, Hilfssatz 2, p.50). Therefore we can assume without
restriction that I is not Ga -invariant. As the colength of I in OP2 is equal to 4, the Hilbert
function of I is equal to one of the functions represented in Figure 2.7a and Figure 2.7b.
In (2.2.1) we had obtained g ∗ (ϕ1 ) = 1 and g ∗ (ϕ2 ) = 3. The Hilbert function ϕ1 leads to
two possible 1-cycles of proper type 3, namely to
F1 := {ψα (ξ1 )|α ∈ k}− ,
F2 := {ψα (ξ2 )|α ∈ k}− ,
ξ1 ↔ (xy, y 2, x3 ),
and
ξ2 ↔ (x2 , y 2).
The Hilbert function ϕ2 leads to different 1-cycles of proper type 3, which however by
means of admissible G3 -invariant deformations all can be deformed modulo A(H) in
C3 = {ψα (ξ3 )|α ∈ k}− , ξ3 ↔ (x5 , y, x4z 2 ).
125
With such admissible G3 -invariant deformations we can deform C3 in the cycle generated
by (x4 , xy, y 2, yz 2 ), and afterwards this cycle can be deformed by the graded deformation
(yz 2 , yz 3 , . . . ) 7→ (x3 , x3 z, . . . ) in the cycle F1 .
We deform F1 into F2 in the following way: Put K := (x3 , x2 y, xy 2, y 3). If L ⊂
R2 = k[x, y]2 is a 2-dimensional subspace, then hx, yiL ⊂ H 0 (K(3)) = R3 and z n L ∩
H 0 (K(n + 2)) = (0) for all n ≥ 0. It follows that the ideal (L, K) ⊂ OP3 has the Hilbert
polynomial Q, and L 7→ (L, K) defines a closed immersion P2 ≃ Grass2 (R2 ) → HQ . Let
hxy, y 2i ↔ η1 ∈ P1 , hx2 , y 2i ↔ η2 ∈ P2 , ℓi := {ψα (ηi )|α ∈ k}− , i = 1, 2. Because of
reasons of degree one has [ℓ1 ] = 2[ℓ2 ] in A1 (P2 ), hence [F1 ] = 2[F2 ] in A1 (HQ ). Now
F2 = {(x2 , xy 2, y 3 , (αx + y)2 |α ∈ k}− = {(x2 , xy 2 , y 3, λxy + µy 2 )|(λ : µ) ∈ P1 } is equal to
the cycle D3 which had been introduced in ([T4], Abschnitt 3.2, p.20). By Lemma 5 in
(loc.cit.) D3 ≡ D2 modulo A(H), where D2 := {(x2 , xy, y 4, λxz 2 + µy 3)|(λ : µ) ∈ P1 } is
the cycle , which had been denoted by D in Chapter 1. It follows that [C3 ] ∈ A(H), and
applying the shifting morphism gives
Conclusion 16.1. [C3 ] ∈ A(H4,g ) for all g ≤ 1.
Remark 16.1. If d = 3 and g > g(3) = 1/4, C3 does not occur at all. If d = 4 and
g > g(4) = 1, then A(H4,2 ) = hEi ([T4], Satz 1, p.26) and hence [C3 ] ∈
/ A(H4,2 ).
16.2.2.
[C4 ] ∈ A(H) is true for arbitrary d and g ([T4], Proposition 2, p.26).
16.2.3. The cycle C5 . In (loc.cit.) [C5 ] ∈ A(H) was shown, too, but the proof
required tedious computations, which can be avoided by the following argumentation.
One has only to treat the cases g = 0 and g = −2, that means, the cases b = 7 and b = 9
([T1], Satz 2c(iii), p.92).
We start with b = 7. Then C5 = Ga · ξ0 , where ξ0 ↔ J0 := (y 2, K), K := (x3 , x2 y, xy 2,
y 3, x2 z, y 2 z). Put J := (x2 + y 2, K) ↔ ξ. As J0 and J have the same Hilbert function, ξ0
and ξ both lie in the same graded Hilbert scheme Hϕ (cf. Appendix A). If Gm operates
by τ (λ) : x 7→ λx, y 7→ y, z 7→ z, t 7→ t, then one sees that indeed ξ0 = lim τ (λ)ξ and
λ→0
ξ∞ := lim τ (λ)ξ ↔ J∞ := (x2 , K). Put C := {ψα (ξ)|α ∈ k}− , C0/∞ := {ψα (ξ0/∞ )|α ∈
λ→∞
k}− . One sees that α-grade H 0 (J (n)) = α-grade H 0 (J0 (n)) = α-grade H 0 (J∞ (n)) + 2,
for all n ≥ 2, hence deg C = deg C0 = deg C∞ + 2, relative to an appropriate imbedding
of Hϕ into PN .
Put V := Gm · Ga · ξ. As ξ is invariant under G := G3 · T23 and as G3 and T23 =
{(1, 1, λ2, λ3 )|λ2 , λ3 ∈ k ∗ } are normalized by Gm · Ga , V is pointwise G-invariant. Let p
be the Hilbert polynomial of C ⊂ PN . Then the standard construction by means of V
takes place in X := Hilbp (V )Ga , as C is Ga -invariant. Thus the limit curves C0/∞ are
pointwise G-invariant and invariant under Ga and Gm , hence they are B(4; k)-invariant.
Now C0/∞ ⊂ C0/∞ and the above computation of the degrees shows that C0 = C0 ,
whereas C∞ except from C∞ has to contain a further 1-cycle Z of degree 2, i.e., C∞
has to contain an irreducible curve F of degree ≥ 1, which is B(4; k)-invariant. As V is
pointwise G-invariant, F is either pointwise U(4; k)-invariant or an 1-cycle of type 3. As
126
deg(Ln |F ) is constant it follows that [F ] ∈ A(H) (see [T3], Anhang 2, Hilfssatz 2, p.50).
It follows that Z ∈ A(H).
From [C5 ] = [C0 ] = [C∞ ] + Z and C∞ = C4 (cf. [T1], Satz 2, p. 91), because of
(16.2.2) it follows that [C5 ] ∈ A(H).
We now treat the case b = 9. Here C5 = Ga · η, where η ↔ (x3 , x2 y, y 2, x2 z 3 ). By
the G3 -admissible deformation y 2 7→ x2 z 2 C5 is deformed in the cycle C5′ := Ga · ξ0 ,
where ξ0 ↔ J0 := (x3 , x2 y, xy 2, y 3 , y 2z, x2 z 2 ). By ([T1], 1.4.4) [C5 ] = q[C5′ ] modulo
A(H). Hence we can argue with C5′ and J0 in the same way as in the case b = 7: Let
K := (x3 , x2 y, xy 2, y 3, x2 z 2 , y 2z 2 ) and J := (x2 z + y 2 z, K) ↔ ξ. If ξ0/∞ := lim τ (λ)ξ ,
λ→0/∞
then ξ0 is as above and ξ∞ ↔ J∞ = (x3 , x2 y, xy 2, y 3, x2 z, y 2 z 2 ).
Obviously, one has the same computation of degree as in the case b = 7 and the
same argumentation shows that [C5′ ] = [C∞ ] modulo A(H). C∞ is deformed by the G3 admissible deformation y 2 z 2 7→ x2 in the cycle Ga · ζ, where ζ ↔ (x2 , xy 2 , y 3, y 2z 3 ) As
this is the cycle C4 , by 16.2.2 we get [C5 ] ∈ A(H).
Conclusion 16.2. [C5 ] ∈ A(H4,g ) if g = 0 or g = −2.
16.2.4. Summary. The results of 16.1 and 16.2 give
Theorem 16.1. (i) If g ≤ 0, then A1 (H3,g ) is freely generated by [E], [D], [C2 ].
(ii) A1 (H4,2 ) ≃ Q4 and if g ≤ 1, then A1 (H4,g ) is freely generated by [E], [D], [C2 ].
127
CHAPTER 17
Correction of the results of [T4] and summary
In [T4] the computation of the degree on page 28 is wrong, and hence Conclusions 1-3
and Proposition 3 on the pages 29-32 are wrong. As well ([T4], Lemma 7, p.33) is wrong.
The statement of ([T4], Proposition 4, p.33) is right, but with regard to Theorem 15.1(i) it
is irrelevant. The statement in ([T4], Satz 2, p.35) is wrong and has to be replaced by the
statements of Theorem 15.1 and Theorem 16.1 , respectively. The results of the sections 8
and 9 in [T4] remain valid, if one replaces the old bound by the bound g(d) = (d − 2)2 /4.
Furthermore, ([T4], Satz 3, p.35) has to be replaced by:
Theorem 17.1. Let be d ≥ 3 and g not maximal, let C be the universal curve with
degree d and genus g over Hd,g .
(i) If g > γ(d) := d−2
, then A1 (C) is freely generated by E ∗ , C1∗ , C2∗ , C3∗ and L∗ .
2
(ii) If g(d) < g ≤ γ(d), then A1 (C) is freely generated by E ∗ , D ∗ , C2∗ , C3∗ and L∗ .
(iii) If g ≤ g(d), then A1 (C) is freely generated by E ∗ , D ∗ , C2∗ and L∗ .
The statements of ([T4], Satz 4 and Satz 5, p. 36) are correct, if the bound mention
there is replaced by g(d) = (d − 2)2 /4. The reason is that the arguments used in (loc.cit.)
formally do not depend on the bound.
All in all, one gets the results which had been stated in the introduction.
Concluding Remark: Having arrived at this point, it is not so difficult any more to
explicitly determine the cone of curves and the ample cone of Hd,g (and of C).
129
APPENDIX A
Notations
The ground field is C; all schemes are of finite type over C; k denotes an extension
field of C. P = k[x, y, z, t], S = k[x, y, z], R = k[x, y] are the graded polynomial rings.
T = T (4; k) group of diagonal matrices
∆ = U(4; k) unitriangular group
B = B(4; k) Borel group
T (ρ)
subgroup
of T
(3;k) or of T (4; k) (cf. 2.4.1 and [T1], p.2).
1
0
0
∗
0
1
0
∗
< U(4; k)
Γ=
0 0 1 ∗
0 0 0 1
G1 , G2 , G3 subgroups of U(4; k) (cf. 1.1)
H = Hd,g Hilbert scheme of curves in P3 with degree d ≥ 1 and genus g, i.e. H =
Hilbp (P3k ), where P (T ) = dT − g + 1.
Q(T ) = T +3
− P (T ) complementary Hilbert polynomial
3
HQ = Hilbert scheme of ideals I ⊂ OP3 with Hilbert polynomial Q(T ), i.e. H = Hd,g =
HQ .
HQ 6= ∅ if and only if Q(T ) = T −1+3
+ T −a+2
or Q(T ) = T −1+3
+ T −a+2
+ T −b+1
,
3
2
3
2
1
where a and b are natural numbers 1 ≤ a ≤ b. The first case is equivalent with d = a
and g = (d − 1)(d − 2/2, i.e., equivalent with the case of plane curves. We consider
only the case g < (d − 1)(d − 2)/2. In this case we have the relations d = a − 1 and
g = (a2 − 3a + 4)/2 − b.
It was not possible to reserve the letter d for denoting the degree of a curve. If
necessary d denotes a number large enough, e.g. d ≥ b = bound of regularity of all ideals
in OP3 with Hilbert polynomial Q (cf. [G1], Lemma 2.9, p.65).
G = Grassm (Pd ) Grassmann scheme of m-dimensional subspaces of Pd .
Let ϕ : N → N be a function with the following properties: There is an ideal I ⊂ OP2
of finite colength with Hilbert function h(n) = h0 (I(n)), such that 0 ≤ ϕ(n) ≤ h(n) for
all n ∈ N and ϕ(n) = h(n) for n ≥ d, where n is large enough, e.g. n ≥ d := colength(I).
On the category of k-schemes a functor is defined by
Hϕ (Spec A) = {(U0 , · · · , Ud )|Un ⊂ Sn ⊗ A subbundle of rank ϕ(n) such that S1 Un−1 ⊂
Un , 1 ≤ n ≤ d}
Hϕ is a closed subscheme of a suitable product of Grassmann schemes; it is called
graded Hilbert scheme.
131
To each ideal J ⊂ OP3k with Hilbert polynomial Q corresponds a point ξ ∈ H(k),
which we denote by ξ ↔ J .
h(J ) denotes the Hilbert function of J , that means h(J )(n) = dimk H 0 (J (n)), n ∈ N.
If ϕ is the Hilbert function of an ideal in OP2k of colength d, then
Hϕ := {I ⊂ OP2k |h0 (I(n)) = ϕ(n), n ∈ N}
is a locally closed subset of Hilbd (P2 ), which we regard to have the induced reduced scheme
structure.
If G is a subgroup of GL(4; k), then HG denotes the fixed-point scheme, which is to
have the induced reduced scheme structure. The same convention is to be valid for all
fixed-point subschemes of H d = Hilbd (P2 ) .
If C ֒→ H is a curve, then by means of the Grothendieck-Plücker embedding H −→ PN
we can regard C as a curve in a projective space, whose Hilbert polynomial has the form
deg(C) · T + c. Here deg(C) is defined as follows : If I is the universal sheaf of ideals
on X = H × P3k , then F := OX /I is the structure sheaf of the universal curve C over
H, and the direct image π∗ (F (n)) is locally free on H of rank P (n) for all n ≥ b. The
˙ ∗ (F (n)) are called the tautological line bundles on H,which are
line bundles Mn := ∧π
very ample and thus define the Grothendieck - Plücker embeddings in suitable projective
spaces. Here ∧˙ is to denote the highest exterior power. Then deg(C) is the intersection
number deg(Mn |C) := (Mn · C). (If C is a so called tautological or basis cycle one can
compute this intersection number directly, see [T2], Section 4.1.)
After these more or less conventional notations we introduce some notations concerning
monomial ideals. If J ⊂ OP3 is T -invariant, then H 0 (P3k ; J (d)) ⊂ Pd is generated by
monomials . To each monomial xd−(a+b+c) y a z b tc in H 0 (J (d)) we associate the cube [a, a +
1] × [b, b + 1] × [c, c + 1] in an y − z − t - coordinate system, and the union of these
cubes gives a so called pyramid, which is denoted by E(J (d)). Usually we assume that
d
L
J is invariant under ∆ or Γ. Then we can write H 0 (J (d)) =
td−n Un , where Un ⊂ Sn
n=0
are subspaces such that S1 · Un ⊂ Un+1 , 0 ≤ n ≤ d − 1, which we call the layers of the
pyramid. (In [T1]–[T4] we made extensive use of this concept, but here it occurs only
once in 12.3.3.)
A1 (−) denotes the group of rational equivalence classes with coefficients in Q.
132
Fig.: A.1
1
0
0
1
2 ... ... ... ... ... ...
133
APPENDIX B
Hilbert functions without Uniform Position Property
Lemma B.1. Let be k be an algebraically closed field, I ⊂ OP2k an ideal of finite
colength with Hilbert function ϕ and difference function ϕ′ (n) = ϕ(n) − ϕ(n − 1). Let m
be a natural number such that ϕ′ (m + 1) = ϕ′ (m) + 1. The ideal J ⊂ OP2k generated by
H 0 (I(m)) has the following properties:
(i) J is m-regular;
(ii) H 0 (J (n)) = H 0 (I(n)) for all n ≦ m + 1;
(iii) If δ := m + 1 − ϕ′ (m) > 0, then there is a form f ∈ Sδ and an ideal L ∈ OP2 of finite
colength such that J = f · L(−δ).
Proof. Let be In := H 0 (I(n)), I :=
∞
L
In . The ideal I is called Borel-normed, if
n=0
in(I) is invariant under B(3; k), where in(I) is the ideal generated by the initial monomials
of all forms in I. According to a theorem of Galligo, there is a g ∈ GL(3; k) such
that g(I) is Borel-normed. (In [G4], Anhang IV, in the special case of three variables,
there is an ad-hoc-proof.) Therefore we can assume without restriction that I is Borelnormed . Then Fig.A.1 shows not only the graph of ϕ′ , but also the monomials in
H 0 (I0 (n)), where I0 := [in(I)]∼ (cf. [G4], Anhang V, Hilfssatz 1, p.116). One sees that
S1 in(Im ) = in(Im+1 ) and this implies the statements (i) and (ii) (cf. loc.cit., Lemma, p.
116 or [Gre], Proposition 2.28, p. 41).
Let ψ be the Hilbert function of J . Then ψ is also the Hilbert function of J0 = in(J )
(cf. [G4], Hilfssatz 1, p.114), and the further development of the graph of ψ ′ is marked
by · · · in Fig. A.1. The line l : y = x − δ + 1 is marked by - - - - . If c is the number
of monomials between the graphs of ψ ′ and l, then ψ(n) = n−δ+2
− c, n ≥ m. Then the
2
n+2
Hilbert polynomial of OP2 /J is equal to p(n) = 2 − ψ(n) = δn + 1.5δ − 0.5δ 2 + c.
Hence V+ (J ) ⊂ P2k is 1- codimensional and there is an irreducible component which is
equal to a hypersurface V (f ), f ∈ Sν an irreducible form (Hauptidealsatz of Krull). From
J ⊂ Rad(J ) ⊂ (f ) it follows J = f · K(−ν), where K ⊂ OP2 is an ideal with Hilbert
function χ(n) := ψ(n + ν) = n−(δ−ν)+2
− c. If δ − ν > 0, one argues with K in the same
2
way as with J , and one finally gets the statement (iii) .
Lemma B.2. Let the assumptions and the notations be as before. Then reg(I) =
min { n ∈ Z | ϕ′ (n) = n + 1 }.
Proof. As in the proof of Lemma 1, we can assume that I is Borel-normed. We let
2
Gm operate on S by σ(λ) : x 7→ x, y 7→ λg y, z 7→ λg z, where g is a high enough natural
number. Then lim σ(λ)I is equal to the ideal I0 as in the proof of Lemma 1, I and I0
λ→0
have the same Hilbert function, and reg(I) = reg(I0 ) ( cf. [Gre], Theorem 2.27). Hence
135
one can assume without restriction that I is monomial. But then the statement follows
from ([T1], Anhang 2), for instance.
136
APPENDIX C
Ideals with many monomials
If k is a field, let be S = k[x1 , . . . , xr , t] and R = k[x1 , . . . , xr ]. Gm operates on S by
σ(λ) : xi 7→ xi , 1 ≤ i ≤ r, and t 7→ λt, λ ∈ k ∗ . Let H be the Hilbert scheme of ideals
− Q(n) is
I ⊂ OPr with Hilbert polynomial Q, i.e., H = HilbP (Prk ), where P (n) = n+r
r
r
the complementary Hilbert polynomial of the subscheme V+ (I) ⊂ P . We suppose that
H is not empty. Then the ideals I ⊂ OPr with Hilbert polynomial Q, for which t is a
non-zero divisor of OP r /I, form an open, non-empty subset Ut ⊂ H.
If K/k is an extension field and if I ∈ H(K), then the limit ideals I0/∞ := lim σ(λ)I
λ→0/∞
are in H(K) again, and if I ∈ Ut , then I0 ∈ Ut , too (cf. [G2], Lemma 4). We say that I
fulfils the limit condition, if I∞ ∈ Ut .
Remark C.1. If I is fixed by the subgroup Γ : xi 7→ xi , t 7→ α1 x1 + · · · + αr xr + t of
U(r + 1; k), then I does fulfil the limit condition (cf. [G2], proof of Lemma 3, p. 541).
If I ∈ Ut , then I ′ := I + tOPr (−1)/tOPr (−1) can be regarded as an ideal in OPr−1
with Hilbert polynomial Q′ (T ) = Q(T ) − Q(T − 1).
C.0.5. Lemma. Let I ∈ H(k) ∩ Ut be an ideal which fulfils the limit condition.
(i) If d ≥ max(reg(I0 ), reg(I∞ )), then H 0 (Prk , I(d)) ∩ Rd has the dimension Q′ (d).
(ii) If d ≥ reg(I ′ ) and H 0 (I(d))∩Rd has a dimension ≥ Q′ (d), then d ≥ max(reg(I0 ), reg(I∞ )).
Proof. (i) There is a basis of M := H 0 (I(d)) of the form gi = tei gi0 + tei −1 gi1 + · · · ,
such that 0 ≤ e1 ≤ e2 ≤ · · · ≤ em , m := Q(d), gij ∈ R and gi0 ∈ Rd−ei , 1 ≤ i ≤ m, linear
independent. Then M∞ := lim σ(λ)M = h{tei gi0 |1 ≤ i ≤ m}i (limit in Grassm (Sd )) has
λ→∞
the dimension m. As d ≥ reg(I∞ ) by assumption, it follows that Q(d) = h0 (I∞ (d)), and
L 0
hence M∞ = H 0 (I∞ (d)). Now t is a non-zero divisor of S/
H (I∞ (n) by assumption,
n≥0
Thus it follows that H 0 (I∞ (n)) = h{tei −(d−n) gi0 |ei ≥ d − n}i for all 0 ≤ n ≤ d. If n = d−1
one gets H 0 (I∞ (d −1)) = h{tei −1 gi0 |ei ≥ 1}i, hence Q(d −1) = |{i|ei ≥ 1}|. It follows that
Q′ (d) = |{i|ei = 0}|. Thus M ∩ Rd ⊃ h{gi0|ei = 0}i has a dimension ≥ Q′ (d). Because
of reg(I ′ ) ≤ reg(I) one has h0 (I ′ (d)) = Q′ (d) and the canonical restriction mapping
ρd : M = H 0 (I(d)) −→ H 0 (I ′ (d)) is injective on M ∩ Rd . It follows that the dimension
of M ∩ Rd can not be greater than Q′ (d).
(ii) From the exact sequence
(1)
·t
0 −→ I(−1) −→ I −→ I ′ −→ 0
137
it follows that H i (I(n − i)) = (0) if i ≥ 2 and n ≥ e := reg(I ′ ) (see [M], p.102). The
sequence
ρ
d
0 −→ H 0 (I(d − 1)) −→ H 0 (I(d)) −→
H 0 (I ′ (d)) −→ H 1 (I(d − 1)) −→ H 1 (I(d)) −→ 0
is exact as d ≥ e, where ρ is induced by the canonical restriction mapping S −→
S/tS(−1) = R. As ρd is injective on H 0 (I(d)) ∩ Rd and h0 (I ′ (d)) = Q′ (d), it follows
from the assumption that ρd is surjective. From the e - regularity of I ′ it follows that
R1 H 0 (I ′ (n)) = H 0 (I ′ (n + 1)), for all n ≥ e. Hence ρn is surjective for all n ≥ d. Hence
0 −→ H 1 (I(n − 1)) −→ H 1 (I(n)) −→ 0 is exact for all n ≥ d, thus H 1 (I(n − 1)) = (0)
for all n ≥ d. It follows that reg(I) ≤ d. One again has the exact sequences:
(2)
·t
′
0 −→ I0/∞ (−1) −→ I0/∞ −→ I0/∞
−→ 0
As (I ′ )0/∞ = (I0/∞ )′ ⊃ I ′ and all these ideals have the Hilbert polynomial Q′ , it follows
that (I ′ )0/∞ = I ′ . As H 0 (I(d)) ∩ Rd is fixed by σ(λ), it follows that H 0 (I(d)) ∩ Rd ⊂
H 0 (I0/∞ (d)). Then one argues as before, using (2) instead of (1).
Remark C.2. Let I ⊂ OP2 be an ideal of colength d, let be S = k[x, y, z], R = k[x, y],
and let Gm operate by σ(λ) : x 7→ x, y 7→ y, z 7→ λz . We assume I to be invariant under
Γ (see above). As d ≥ reg(I) for all ideals I ⊂ OP2 of colength d, the assumption of part
(i) of the lemma is fulfilled, hence H 0 (I(n)) ∩ Rn has the dimension Q′ (n) = n+1
for all
1
n ≥ d and therefore:
H 0 (I(n)) ⊃ Rn
(3)
for all n ≥ d.
This inclusion has been used in the text for several times, e.g. in Section (2.2).
Remark C.3. Let I ⊂ OP2 be an ideal of finite colength, with Hilbert function ϕ,
which is invariant under Γ · T (ρ). Let Gm operate on S by σ(λ) : x 7→ x, y 7→ y, z 7→ λz. If
in(I) is the initial ideal with regard to the inverse lexicographical order, then in(I) is equal
to the limit ideal I0 = lim σ(λ)I. As h0 (I0 (n)) = ϕ(n), it follows that in(H 0 (I(n))) =
λ→0
H 0 (I0 (n)), for all n ∈ N (cf. Appendix E and [G2], Lemma 3 and Lemma 4, pp. 541).
Thus the number of the initial monomials and of the monomials in H 0 (I0 (n)), which are
represented in our figures, can be determined by means of the Hilbert function, alone.
138
APPENDIX D
Unipotent groups acting on polynomial rings
Lemma D.1. The 5-dimensional subgroups of ∆ = U(4; k) have the form
1 α ∗ ∗
0
1
β
∗
G(p) :=
aα + bβ + cγ = 0
0 0 1 γ
0 0 0 1
where p = (a : b : c) ∈ P2 (k) is uniquely determined.
∗
∗
⊂ ∆ is a normal subgroup. Let G be a 5-dimensional
0
1
−→ ∆/N
is
an injective homomorphism
and ∆/N ≃ G3a .
1 0 x y
0
1
0
z
x, y, z ∈ k, ax + by + cz = 0
First case: dim G∩N = 2. Then G∩N =
0 0 1 0
0 0 0 1
1 α x y
0 1 β z
2
, where
where (a : b : c) ∈ P (k) is a suitable point. It follows that G =
0 0 1 γ
0 0 0 1
α,
β, γ are any element
of k, and x, y, z ∈ k have to fulfil the conditions noted above. If
′
′
′
1
α
x
y
′
′
0
1
β
z
0 0 1 γ ′ is any other element of G, then
0 0 0 1
1
0
Proof. N :=
0
0
0 ∗
1 0
0 1
0 0
subgroup of ∆. Then G/G ∩ N
a(x′ + αβ ′ + x) + b(y ′ + αz ′ + xγ ′ + y) + c(z ′ + βγ ′ + z) = 0.
As α, β, γ, α′, β ′ , γ ′ are any elements of k, we conclude that a = b = c = 0, contradiction.
Second case: N ⊂ G. Then G/N ֒→ G3a is 2-dimensional, and one concludes from this
that G has the form noted above and that p ∈ P2 (k) is uniquely determined. Furthermore
it is easy to see that G(p) is a subgroup.
Lemma D.2. Let be P = k[x, y, z, t], let V ⊂ P be a subspace which is invariant under
G(p). If f ∈ P is invariant under G(p) modulo V , then the polynomials
x∂f /∂z, y∂f /∂t, x∂f /∂t, bx∂f /∂y − ay∂f /∂z, cx∂f /∂y − az∂f /∂t, cy∂f /∂z − bz∂f /∂t all
lie in V .
139
1 α 0 0
0 1 β 0
r s m n
Proof. If g =
0 0 1 γ ∈ G(p) and M = x y z t , then g(M) − M =
0 0 0 1
r
s
m
x (αx+y) (βy +z) (γz +t)n −M = αsxr+1 y s−1z m tn +βmxr y s+1z m−1 tn +γnxr y s z m+1 tn−1
+ terms containing αi , β i , γ i , where i ≥ 2.
If g(f )−f ∈ V , then it follows that αx∂f /∂y +βy∂f /∂z +γz∂f /∂t+ terms containing
α , β i , γ i , where i ≥ 2, lies in V .
First case: c 6= 0. Then γ = −(aα/c + bβ/c), hence αx∂f /∂y + βy∂f /∂z − (aα/c +
bβ/c)z∂f /∂t + terms containing α2 , αβ, β 2 · · · ∈ V . It follows that α(cx∂f /∂y−az∂f /∂t)+
β(cy∂f /∂z − bz∂f /∂t)+ terms containing α2 , αβ, β 2, · · · ∈ V . Put α = 0 and β = 0,
respectively. It follows that cy∂f /∂z − bz∂f /∂t ∈ V and cx∂f /∂y − az∂f /∂t ∈ V . Multiplication by a and b, respectively, and then subtracting the two polynomials from each
other gives c(bx∂f /∂y − ay∂f /∂z) ∈ V . As c 6= 0, the last three statements of the assertion follow.
Second case : c = 0, b 6= 0. Then we can choose γ ∈ k arbitrarily and −aα/b = β.
It follows that αx∂f /∂y − (aα/b)y∂f /∂z + γz∂f /∂t + terms containing α2 , γ 2 , · · · ∈ V .
Putting α = 0 gives z∂f /∂t ∈ V . Putting γ = 0 gives bx∂f /∂y − ay∂f /∂z ∈ V .
Third case: b = 0, c = 0. Then a = 1 and β and
γare any elements
of k, whereas α = 0.
1 0 ∗ ∗
0 1 0 ∗
⊂ G(p), the same reaThis gives y∂f /∂z and z∂f /∂t ∈ V . As N =
0 0 1 0
0 0 0 1
i
soning as in the proof of ([T2], Hilfssatz 1, p. 142) shows that x∂f /∂z, x∂f /∂t, y∂f /∂t ∈
V.
Lemma D.3. Let I ⊂ OP2 be a monomial ideal of colength d > 0 and let ξ be the
corresponding point in H d (k). If ξ is not invariant under the Ga -operation ψα : x 7→
x, y 7→ αx + y, z 7→ z, then C := Ga · ξ contains exactly two fixed points under T (3; k),
namely the point ξ and the Ga - fixed point ψ∞ (ξ) := lim ψα (ξ).
α−→∞
Proof. Embedding H d in Grassq (Sd ), where q := d+2
−d, one sees that it is sufficient
2
to prove the corresponding statement for a T (3; k)-invariant q - dimensional subspace
d
L
U ⊂ Sd and the corresponding point in Grassq (Sd ). As one can write U =
z d−i Ui ,
i=0
where Ui ⊂ Ri , is a subspace, it suffices to prove the corresponding statement for an
r-dimensional subspace U ⊂ Rn , which is invariant under T (2; k), but not invariant under
Ga . As lim ψα (U) is a Ga - invariant subspace and as char(k) = 0, it follows that this
α−→∞
subspace is equal to hxn , xn−1 y, . . . , xn−r+1 y r−1i. It follows that C has two fixed-points
under T (2; k). In order to prove there are no more fixed-points, it suffices to show the
following: If there is an element α 6= 0 in k such that ψα (U) is T (2; k)-invariant, then
ψα (U) is T (2; k)-invariant for all α 6= 0. If one takes a monomial M = xn−r y r ∈ U, then
ψα (M) = xn−r (αx + y)r ∈ ψα (U). As this is a monomial subspace by assumption, it
140
follows that xn−r xi y r−i ∈ ψα (U), 0 ≤ i ≤ r. Thus one has M ∈ ψα (U) and it follows
that U = ψα (U), But this implies ψnα (U) = U for all n ∈ N and thus ψα (U) = U for all
α ∈ k.
Lemma D.4. Suppose C ⊂ HQ is a B-invariant, irreducible, reduced curve, which is
pointwise invariant under Γ such that the image of C under the restriction morphism h
is a single point. Then (Mn · C) is constant for n ≫ 0 .
Proof. From ([T1], Proposition 0, p.3) it follows that C is either pointwise ∆invariant or a 1-cycle of type 3. We consider the first case. Then C = {σ(λ)ξ|λ ∈ k ∗ }− ,
where σ is a suitable Gm -operation and ξ ∈ H(k) corresponds to a ∆-invariant ideal J .
Let be I = h(J ) and I ∗ the ideal on P3 ,which is generated by I (cf. 1.2.2). Then
n
H 0 (J (n)) ⊂ ⊕ tn−i H 0 (I(i)) for all n (cf. [G5], Hilfssatz 3.2, p.295). Now the assertion
i=0
follows, as I is fixed by σ.
In the second case on has C = {ψα (ξ)|α ∈ k}− , where ψα is the usual Ga -operation
and ξ ∈ H(k) corresponds to a monomial ideal J . Then one argues as in the first case,
with ψα instead of σ.
141
APPENDIX E
Standard bases
Let k be an extension field of C. If ρ = (ρ0 , . . . , ρr ) ∈ Zr+1 − (0), then T (ρ) :=
{ λ = (λ0 , · · · , λr ) | λi ∈ k ∗ and λρ := λρ00 · · · λρr r = 1 } is a subgroup of dimension r of
Gr+1
m .
Auxiliary lemma: If σ, τ ∈ Zr+1 − (0) such that T (σ) ⊂ T (τ ), then there is an integer
n such that τ = n · σ.
Proof. Write σ = (a0 , · · · , ar ), τ = (b0 , · · · , br ) . As the dimension of T (σ) is equal
to r, there is an index i such that ai 6= 0 and bi 6= 0.Without restriction one can assume
that a0 6= 0 and b0 6= 0 . Choose p, q ∈ Z such that pa0 = qb0 and (p, q) = 1. Then
1 −qb1
r −qbr
λpa
· · · λpa
= 1 for all λ ∈ T (σ) follows. Because of dim T (σ) = r one gets
1
r
pai − qbi = 0, 0 ≤ i ≤ r, and thus σ = qρ, τ = pρ, where ρ ∈ Zr+1 − (0) is a suitable
vector. If ǫ is any q-th root of unity in C, one can choose λ ∈ (C∗ )r+1 such that λρ = ǫ.
From λqρ = λσ = 1 it follows that ǫp = λpρ = λτ = 1, too, and q = 1 follows.
We let Gr+1
operate on S = k[X0 , · · · , Xr ] by Xi 7→ λi Xi . If ρ = (ρ0 , . . . , ρr ), then
m
ρ0
ρ
ρr
X := X0 · · · Xr .
Lemma E.1. Let V ⊂ Sd be a T (ρ)-invariant subspace. Then V has a basis of the
form fi = mi · pi (X ρ ), where the mi are different monomials, pi is a polynomial in one
variable with constant term 1 and mi does not appear in mj · pj (X ρ ) if i 6= j.
Proof. By linearly combining any basis of V one obtains a basis fi = mi + gi , where
the mi are different monomials, each gi is a sum of monomials, each of which is greater
than mi in the inverse lexicographic order, and mi does not appear in gj . If g ∈ T (ρ),
then g(fi ) contains the same monomials as fi and from g(fi ) ∈ h{fi }i we conclude that
each fi is a semi-invariant, i.e, hg(fi )i = hfi i for all g ∈ T (ρ) .
P
P
ai λαi X αi =
Now let be f =
ai X αi any T (ρ)-semi-invariant. Let be λ ∈ T (ρ). Then
P
α (0)
α (r)
c(λ) · ai X αi , where λαi := λ0 i . . . λr i . It follows that λαi = c(λ), if ai 6= 0, and
therefore λαi −αj = 1 for all i, j, such that ai 6= 0 and aj 6= 0. Thus T (ρ) ⊂ T (αi − αj ), and
the Auxiliary lemma gives αi − αj = nij ρ, nij ∈ Z, if ai 6= 0 and aj 6= 0. One sees that
P
there is an exponent α0 ∈ {αi } and natural numbers ni , such that f = X α0 · ai X ni ρ .
Corollary E.1. Let V ⊂ Sd be a m-dimensional subspace, let x ∈ Grassm (Sd ) be
the closed point of Grassm (Sd ) defined by V . If the orbit T · x has the dimension 1, then
the inertia group Tx of x has the form T (ρ), where ρ ∈ Zr+1 − (0).
143
Proof. This follows by similar argumentations as before (see[T2], Hilfssatz 7,p.141).
144
Bibliography
[F] Fogarty, J.: Algebraic families on an algebraic surface. Amer. J. Math. 90, 511–521 (1968).
[Fu] Fulton, W.: Intersection theory. Springer-Verlag 1984.
[G1] Gotzmann, G.: Eine Bedingung fr die Flachheit und das Hilbertpolynom eines graduierten Ringes.
Math. Z. 158, 61–70 (1978).
: A stratification of the Hilbert scheme of points in the projective plane. Math. Z. 199,
[G2]
539–547 (1988),
: Einfacher Zusammenhang der Hilbertschemata von Kurven im komplex-projectiven Raum.
[G3]
Invent.math. 99, 655–675 (1990).
[G4]
: Topologische Eigenschaften von Hilbertfunktion–Strata. Habilitationsschrift, Universität
Münster, 1993.
: Einige einfach-zusammenhängende Hilbertschemata. Math. Z. 180, 291–305 (1982)
[G5]
[Gre] Green, M.: Generic initial ideals. In: Six lectures on commutative algebra. Progress in Mathematics
166, 119–186, Birkhäuser, Basel (1998).
[HM] Harris, J., Morrison, I.: Moduli of curves, Springer–Verlag 1998.
[Ha] Hartshorne, R.: Algebraic geometry, Springer–Verlag 1977.
[Hi] Hirschowitz, A.: Le group de Chow équivariant. C.R. Acad, Sc. Paris, t.298, Serie I, no. 5, 87–89
(1984).
[Ho] Horrocks, G.: Properties of schemes inherited from orbits. J. Alg. 40, 819–823 (1976).
[I]
Iarrobino, A.: Punctual Hilbert schemes. Mem. Amer. Math. Soc. Vol.10, N. 188 (1977).
[K] Kraft, H. : Geometrische Methoden in der Invariantentheorie, Verlag Vieweg 1985.
[L] Lella, P.: A network of rational curves on the Hilbert scheme. Preprint, available at
http://arxiv.org./abs/1006.5020.
[LR] Lella, P., Roggero, M.: Rational components of Hilbert schemes. Preprint, available at
http://arxiv.org/abs/0903.1029.
[Mu] Mumford, D.: Lectures on curves on an algebraic surface, Pinceton, 1966.
[T1] Gotzmann, G.: Der kombinatorische Teil der ersten Chowgruppe eines Hilbertschemas von
Raumkurven, Schriftenreihe des Mathematischen Instituts der Universität Münster, 3. Serie, Heft
13, September 1994.
: Der algebraische Teil der ersten Chowgruppe eines Hilbertschemas von Raumkurven, ibid.,
[T2]
Heft 19, Februar 1997.
: Die Néron–Severi–Gruppe eines Hilbertschemas von Raumkurven und der universellen
[T3]
Kurve, ibid., Heft 23, Januar 1999.
: Die erste Chowgruppe eines Hilbertschemas von Raumkurven, ibid., Heft 25, März 2000.
[T4]
145
| 0 |
DeepTransport: Learning Spatial-Temporal Dependency
for Traffic Condition Forecasting
Xingyi Cheng†∗ , Ruiqing Zhang† , Jie Zhou, Wei Xu
Baidu Research - Institute of Deep Learning
{chengxingyi,zhangruiqing01,zhoujie01,wei.xu}@baidu.com
Abstract
Predicting traffic conditions has been recently explored as
a way to relieve traffic congestion. Several pioneering approaches have been proposed based on traffic observations
of the target location as well as its adjacent regions, but
they obtain somewhat limited accuracy due to lack of mining road topology. To address the effect attenuation problem, we propose to take account of the traffic of surrounding
locations(wider than adjacent range). We propose an endto-end framework called DeepTransport, in which Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain spatial-temporal traffic
information within a transport network topology. In addition,
attention mechanism is introduced to align spatial and temporal information. Moreover, we constructed and released a
real-world large traffic condition dataset with 5-minute resolution. Our experiments on this dataset demonstrate our
method captures the complex relationship in temporal and
spatial domain. It significantly outperforms traditional statistical methods and a state-of-the-art deep learning method.
Introduction
With the development of location-acquisition and wireless
device, a vast amount of data with spatial transport networks
and timestamps can be collected by mobile phone map app.
The majority of map apps can tell users real-time traffic conditions, as shown in Figure 1. However, only the current
traffic conditions are not enough for making effective route
planing, a traffic system to predict future road condition may
be more valuable.
In the past, there are mainly two approaches for traffic
prediction: time-series analysis based on classical statistics
and data-driven methods based on machine learning. Most
former methods are univariate; they predict the traffic of a
place at a certain time. The fundamental work was Auto
Regressive Integrated Moving Average (ARIMA) (Ahmed
and Cook 1979) and its variations (Pan, Demiryurek, and
Shahabi 2012; Williams and Hoel 1999). Motivated by the
fact (Williams 2001) that traffic evolution is a temporalspatial phenomenon, multivariate methods with both temporal and spatial features was proposed. (Stathopoulos
and Karlaftis 2003) developed a model that feeds on data
∗
†
Xingyi Cheng is the corresponding author.
main contribution
Figure 1: A real-time traffic network example from a commercial map app, the networks including many locations and
the color(green, yellow, red, dark red) depth illustrated the
condition of a location(a stretch of road).
from upstream detectors to improve the predictions of downstream locations. However, many statistics are needed
in such methods. On the other hand, data-driven methods (Jeong et al. 2013; Vlahogianni, Karlaftis, and Golias
2005) fit a single model from vector-valued observations including historical scalar measurements with the trend, seasonal, cyclical, and calendar variations. For instance, (Deng
et al. 2016) expressed traffic pattern by mapping road attributes to a latent space. However, the linear model here is
limited in its ability to extract effective features.
Neural networks and deep learning have been demonstrated as a unified learning framework for feature extraction and data modeling. Since its applicability in this topic,
significant progress has been made in related work. Firstly,
both temporal and spatial dependencies between observations in time and space are complex and can be strongly
nonlinear. While the statistics frequently fail when dealing with nonlinearity, neural networks are powerful to capture very complex relations (LeCun, Bengio, and Hinton
2015). Secondly, neural networks can be trained with raw
data in an end-to-end manner. Apparently, hand-crafted
engineered features that extract all information from data
spread in time and space is laborious. Data-driven based
neural networks extracts features without the need of statistical feature, e.g., mean or variance of all adjacent lo-
cations of the current location. The advantage of neural
networks for traffic prediction has long been discovered
by researchers. Some early work (Chang and Su 1995;
Innamaa 2000) simply put observations into input layer, or
take sequential feature into consideration (Dia 2001a) to
capture temporal patterns in time-series. Until the last few
years, some works of deep learning was applied. For instance, Deep Belief Networks (DBN) (Huang et al. 2014)
and Stack Autoencoders (SAEs) (Lv et al. 2015). However,
input data in these works are directly concatenated from different locations, which ignored the spatial relationship. In
general, the existing methods either concerns with the time
series or just a little use of the spatial information. Depending on traffic condition of a “narrow” spatial range will undoubtely degrades prediction accuracy. To achieve a better
undestanding of spatial information, we propose to solve this
problem by taking the intricate topological graph as a key
feature in traffic condition forecasting, especially for long
prediction horizon.
To any target location as the center of radiation, surrounding locations with same order form a “width” region, and
regions with different order constitute a “depth” sequence.
We propose a double sequential deep learning model to explore the traffic condition pattern. This model adopts a combination of convolutional neural networks (CNN) (LeCun,
Bengio, and others 1995) and recurrent networks with long
short-term memory (LSTM) units (Hochreiter and Schmidhuber 1997) to deal with spatial dependencies. CNN is
responsible for maintaining the “width” structure, while
LSTM for the “depth” structure. To depict the complicated spatial dependency, we utilize attention mechanism to
demonstrate the relationships between time and space.
The main contribution of the paper is summarized as follows:
• We introduce a novel deep architecture to enable temporal and dynamical spatial modeling for traffic condition
forecasting.
• We propose the necessity of aligning spatial and temporal
information and introduce attention mechanism into the
model to quantify their relationship. The obtained attention weight is helpful for daily traveling and path planning.
• Experiment results demonstrate that the proposed model
significantly outperforms existing methods based on deep
learning and time series forecasting methods.
• We also release a real large (millions) traffic dataset with
topological networks and temporal traffic condition 1 .
Preliminary
In this section, we briefly revisit the traffic prediction problem and introduce notations in this work.
Common Notations and Definition
A traffic network can be represented in a graph in two ways.
Either monitoring the traffic flow of crossings, take crossing
1
https://github.com/cxysteven/MapBJ
(a) A plain graph at a time point
(b) A graph with time-series
Figure 2: Traffic condition. Five colors in this graph denote five states for visually displaying: green(1, fluency),
yellow(2, slow), red(3, congestion) and dark red(4, extreme
congestion). “27180” is the ID number of a location(road
section).
as node and road as an edge of graph, or conversely, monitoring the condition of roads, take roads as nodes and crossings
as connecting edges. The latter annotation is adopted in our
work. Taking figure 2(a) as an example, each colored node
corresponds to a stretch of road in a map app.
We consider a graph consists of weighted vertices and
directed edges. Denote the graph as G = hV, Ei. V is
the set of vertices and E ⊆ {(u, v)|u ∈ V, v ∈ V } is
the set of edges, where (u, v) is an ordered pair. A location(vertex) v at any time point t have five traffic condition states c(v, t) ∈ {0, 1, 2, 3, 4}, expressing not-released,
fluency, slow, congestion, extreme congestion respectively.
Figure 2(b) presents an example of road traffic at three time
points in an area.
Observations: Each vertex in the graph is associated with
a feature vector, which consists of two parts, time-varying
O and time-invariant variables F . Time-varying variables
that characterize the traffic network dynamically are traffic
flow observation aggregated by a 5-minutes interval. Timeinvariant variables are static features as natural properties
which do not change with time s, such as the number of
input and output degrees of a road, its length, limit speed
and so forth.
In particular, the time-varying and time-invariant variables are denoted as:
c(v, t)
fv,1
c(v, t − 1)
fv,2
Fv = .
Ov,t =
(1)
..
..
.
c(v, t − p)
fv,k
where c(v, t) is traffic condition of vertex v at time t, p is
the length of historical measurement. fv,k are time-invariant
features.
Order Slot: In a path of the directed graph, the number of edges required to take from one vertex to another is
called order. Vertices of the same order constitute an order
slot. Directly linked vertices are termed first-order neighbors. Second-order spatial neighbors of a vertex are the firstorder neighbors of its first-order neighbors and so forth. For
any vertex in our directed graph, we define the incoming
L4
L
L1
[ [L2 L6], [L5 L5] ]
L
L
2nd-order
L
Downstream neighbors
Upstream neighbors
1st-order
L4
[ [L3 L3], [L1 L2] ]
1st-order
2nd-order
L
(a) The direct graph of
a location.
(b) Upstream flow and downstream flow neighbor of L4 with
in order 2.
Figure 3: An example of directed graph and order slot notation in DeepTransport
Convolutional Layer CNN is used to extract temporal
and “width” spatial information. As demonstrated in the example of figure 3, when feeding into our model, L4 ’s first
upstream neighbor L5 should be copied twice, because there
are two paths to L4 , that are [L6 , L5 ] and [L2 , L5 ]. With
the exponential growth of paths, the model suffers from the
high dimension and intensive computation. Therefore, we
employ a convolution operation with multiple encoders and
shared weights (LeCun, Bengio, and others 1995). To further reduce the parameter space while maintaining independence among vertices with the same order, we set the convolution stride to the convolution kernel window size, which is
equal to the length of a vertex’s observation representation.
The non-linear convolutional feature is obtained as follows:
erup,q
traffic flow as its upstream flow and the outflow as its downstream flow. Take figure 3(a) as an example, L4 is the target
location to be predict. L3 is the first-order downstream vertex of L4 . L1 , L2 is the first order downstream set of L3 and
they constitute the second order slot of L4 . Each vertex in
the traffic flow that goes in one direction is affected by its
upstream flow and downstream flow. The first and second
order slots of L4 is shown in Figure 3(b). Introducing the
dimension of time series, any location Lv,t is composed of
two vectors, Ov,t and Fv . Any order slot consists of some
locations:
T
Lu1 ,t
LTu ,t
Ov,t
2
Lv,t =
Xjv,t = .
(2)
Fv
..
LTuk ,t
where location index u· is one of the jth order neighbors
of v.
Perceptive Radius: The maximum ordered number controls the perceptive scope of the target location. It is an important hyperparameter describing spatial information, we
call it perceptive radius and denote it as r.
Problem Definition: According to the above notation,
we define the problem as follows: Predict a sequence of
traffic flow Lv,t+h for prediction horizon h given the historical observations of Lv0 ,t0 , where v 0 ∈ neighbor(v, r),
t0 ∈ {t − p, · · · , t}, r ∈ {0, · · · , R} is perceptive radius and
p is the length of historical measurement.
Model
As shown in Figure 4, our model consists of four
parts: upstream flow observation(left), target location module(middle), downstream flow observation(right), and training cost module(top). In this section, we detail the work
process of each module.
erdown,q
(3)
= σ(Wdown,q ∗ Dv,t + bdown,q ),
(4)
[X1v,t , · · ·
where Uv,t =
, Xrv,t ](only upstream neighbors) is
denoted as upstream input matrix, while Dv,t is downstream
input matrix. The er·,q is at rth order vector of upstream
or downstream module where q ∈ {1, 2...m} and m is the
number of feature map. We set erup = [erup,1 , · · · , erup,m ]
and erup ∈ Rl×m , l is the number of observations in a slot.
Similarly, we can get the erdown . The weights W and bias
b composes parameters of CNN subnetworks. σ represents
nonlinear activation, we empirically adopt the tanh function
here.
Recurrent Layer RNN is utilized to represent each path
that goes to the target location(upstream path) or go out
from the target location(downstream path). The use of
RNN have been investigated for traffic prediction for a long
time, (Dia 2001b) used a Time-Lag RNN for short-term
speed prediction(from 20 seconds to 15 minutes) and (Lint,
Hooqendoorn, and Zuvlen 2002) adopted RNN to model
state space dynamics for travel time prediction. In our proposed method, since the upstream flow from high-order to
low-order, while the downstream flow is contrary, the output of the CNN layer in upstream module and downstream
module are fed into RNN layer separately.
The structure of vehicle flow direction uses LSTM with
“peephole” connections to encode a path as a sequential representation. In LSTM, the forget gate f controls memory cell
c to erase, the input gate i helps to ingest new information,
and the output gate o exposes the internal memory state outward. Specifically, given a rth slot matrix erdown ∈ Rl×m ,
map it to a hidden representation hrdown ∈ Rl×d with LSTM
as follows:
r
c̃
tanh
r
r
e
o σ
=
W
+
b
(5)
ir σ
p
p ,
hr−1
σ
fr
cr = c̃r
Spatial-temporal Relation Construction
Since the traffic condition of a road is strongly impacted by
its upstream and downstream flow, we use a convolutional
subnetwork and a recurrent subnetwork to maintain the road
topology in the proposed model.
= σ(Wup,q ∗ Uv,t + bup,q ),
r
h = [o
r
l×m
r
ir + cr−1
r
f r,
T
tanh (c )] ,
(6)
(7)
where e ∈ R
is the input at the rth order step; Wp ∈
R4d×(m+d) and bp ∈ R4d are parameters of affine trans-
Mul8-task training
15min
Square error
45min
Square error
30min
Square error
45min
Square error
Downstream module
Upstream module
α3
Max-pooling
+
α2
+
α1
Max-pooling
α'1
Max-pooling
Max-pooling
LSTM
α'2
Max-pooling
α'3
Max-pooling
LSTM
Conv
Conv
Conv
FC
Conv
Conv
Conv
3rd –order
upstream
slot
2nd –order
upstream
slot
1st –order
upstream
slot
Target
local8on
1rd –order
upstream
slot
2nd –order
upstream
slot
3st –order
upstream
slot
Figure 4: An example of the model architecture. There are three slots in upstream and downstream module respectively, each
with input vertices length of two. The convolution operation has four sharing feature map. The middle block demonstrates that
the target location is propagated by the fully-connected operation. A multi-task module that with four cost layers on the top
block. Conv: Convolution; FC: Fully-connection.
formation; σ denotes the logistic sigmoid function and
denotes elementwise multiplication.
The update of upstream and downstream LSTM unit can
be written precisely as follows:
r
hrdown = LSTM(hr−1
down , edown , θp ).
(8)
r
hrup = LSTM(hr+1
up , eup , θp ).
(9)
The function LSTM(·, ·, ·) is a shorthand for Eq. (5-7),
in which θp represents all the parameters of LSTM.
Slot Attention To get the representation of each order slot,
max-pooling is performed on the output of LSTM. As hr
represents the status sequence of the vertices in the corresponding order slot, we pool on each order slot to get
r number of slot embeddings Sup = [s1up , · · · , srup ] and
Sdown = [s1down , · · · , srdown ]. Since different order slot
have different effects on target prediction, we introduce
attention mechanisms to align these embeddings. Given
the target location hidden representation g, we get the jth
slot attention weights (Bahdanau, Cho, and Bengio 2014;
Rocktschel et al. 2015) as follows:
exp a(g, sj )
.
αj = Pr
k
k=1 exp a(g, s )
(10)
We parametrize the model a as a Feedforward Neural Networks that is used to compute the relevance between target
location and corresponding order slot. The weight αj is normalized by a softmax function. To write it precisely, we let
ATTW(sj ) is a shorthand for Eq.(10), we get the upstream
and downstream hidden representation by weighting sum of
these slots:
r
X
zdown =
ATTW(sjdown )sjdown .
(11)
j=1
zup =
r
X
ATTW(sjup )sjup .
(12)
j=1
Lastly, we concatenate the zup , zdown and target location’s hidden representation g and then sent them to cost
layer.
Top Layers with Multi-task Learning
The choice of cost function on the top layer is tightly coupled with the choice of the output unit. We simply use square
error to fit the future conditions of the target locations.
Multi-task learning is first introduced by (Huang et al.
2014) for traffic forecasting task. It is considered as soft
constraints imposed on the parameters arising out of several
tasks (Evgeniou and Pontil 2004). These Additional training examples put more pressure on the parameters of the
model towards values that generalize well when part of a
model is shared across tasks. Forecasting traffic future condition is a multi-task problem as time goes on and different time points correspond to different tasks. In DeepTransport model, in addition to the computation of the attention
weights and affine transformations of the output layer, all
other parameters are shared.
Experiments
Dataset
We adopt snowball sampling method (Biernacki and Waldorf 1981) to collect an urban areal dataset in Beijing from
a commercial map app and named it “MapBJ”. The dataset
provides traffic condition in {fluency, slow, congestion, extreme congestion}. The dataset contains about 349 locations
which are collected from March 2016 to June for every five
minutes. We select the first two months data for training
and the remaining half month for testing. Besides traffic
topological graph and time-varying traffic condition, we also
provide the limit speed of each road. Since the limit speed of
different roads may be very distinct, and locations segmentations method regards this as an important reference index.
We introduce a time-invariable feature called limit level and
discretize it into four classes.
Evaluation
Evaluation is ranked based on quadratic weighted Cohen’s
Kappa (Ben-David 2008), a criterion for evaluating the performance of categorical sorting.
In our problem, quadratic weighted Cohen’s Kappa is
characterized by three 4 × 4 matrices: observed matrix
O, expected matrix E and weight matrix w. Given Rater
A(ground truth) and Rater B(prediction), Oi,j denotes the
number of records rating i in A while rating j in B, Ei,j
indicates how many samples with label i is expected to be
rated as j by B and wi,j is the weight of different rating,
wi,j =
(i − j)2
,
(N − 1)2
(13)
where N is the number of subjects, we have N = 4 in our
problem. From these three matrices, the quadratic weighted
kappa is calculated as:
κ=1−
Σi,j wi,j Oi,j
.
Σi,j wi,j Ei,j
(14)
This metric typically in the range of 0 (random agreement
between raters) to 1 (complete agreement between raters).
Implementation Details
Since the condition value ranges in {1, 2, 3, 4}, the multiclassification loss can be treated as the objective function.
However, cost layer with softmax cross-entropy for does not
take into account the magnitude of the rating. Thus, square
error loss is applied as the training objective. But another
disadvantage straightforward use linear regression is that the
predicted value may be out of the range in {1, 2, 3, 4}. However, we can avoid this problem by label projection as follows:
We have a statistical analysis on the state distribution of
training data. Fluency occupies 88.2% of all records, fluency and slower occupies about 96.7%, fluency, slower and
congestion occupies about 99.5%, the extreme congestion
is very rare that it accounts for only 0.5%. Therefore, we
rank the prediction result in ascending order and set the first
88.2% to fluency, 88.2%-96.7% to slower, 96.7%-99.5% to
congestion, 99.5%-100% to extreme congestion.
We put the all the observation into 32 dimension continuous vectors. The training optimization is optimized by backpropagation using Adam (Kingma and Ba 2014). Parameters
are initialized with uniformly distributed random variables
and we use batch size 1100 for 11 CPU threads, with each
thread processes 100 records. All models are trained until
convergence. Besides, there are two important hyperparameters in our model, the length of historical measurement p
and perceptive radius r that controls temporal and spatial
magnitude respectively.
Choosing Hyperparamerters
We intuitively suppose that expanding perceptive radius
would improve prediction accuracy, but also increase the
amount of computation, so it is necessary to explore the correlation between the target location and its corresponding
rth order neighbors.
Mutual Infomation(MI) measures the degree of correlation between two random variables. When MI is 0, it means
the given two random variables are completely irrelevant.
When MI reaches the maximum value, it equals to the entropy of one of them, and the uncertainty of the other variable can be eliminated. MI is defined as
M I(X; Y) = H(X) − H(X|Y)
X
p(x, y)
=
p(x, y) log
, (15)
p(x)p(y)
x∈X,y∈Y
where H(X) and H(X|Y) are marginal entropy and conditional entropy respectively. MI describes how much uncertainty is reduced.
With MI divided by the average of entropy of the given
two variables, we get Normalized mutual information(NMI)
in [0, 1]:
M I(X, Y)
N M I(X; Y) = 2
.
(16)
H(X) + H(Y)
We calculated NMI between observation of each vertex and
its rth neighbors over all time points. The NMI gradually decreases as the order increases, it values 0.116, 0.052, 0.038,
0.035, 0.034 for r in {1, 2, 3, 4, 5} respectively and hardly
change after r > 5.
Therefore, we set the two hyperparameters as: p ∈
{3, 6, 12, 18} (corresponding to 15, 30, 60, 90 minutes
past measurements as 5-minutes record interval) and r ∈
{1, 2, 3, 4, 5}.
Effects of Hyperparameters
Figure 5 shows the averaged quadratic weighted kappa of
corresponding prediction horizon. Figure 5(a) illustrates 1)
closer prediction horizon always performs better; 2) As r increases, its impaction on the prediction also increases. This
can be seen from the slope between r = 1 and r = 5, the
slope at 60-min is greater than the same segment of 15-min.
Figure 5(b) takes 60-min estimation as an example, indicating that the predictive effect is not monotonically increasing
as the length of measurement p, and the same result can be
obtained at other time points. This is because the increase in
p brings an increase in the amount of parameter, which leads
to overfitting.
15min
30min
0.6829
0.6858
p=3
60min
0.687
0.67
0.62
0.57
p=6
p = 12
p = 18
0.6047
0.5388
0.6114
0.5494
0.6192
0.5611
0.5079
0.52
0.479
0.6233
0.5684
0.5152
Model
RW
ARIMA
FNN-P12
SAEs
DeepTransport-R1P12
DeepTransport-R5p12
0.689
0.6267
0.5724
0.5259
0.4925
Quadra2c weighted kappa
0.6787
Quadra2c weighted kappa
45min
0.69
0.72
0.685
0.68
0.675
0.67
Quadratic Weighted Kappa
15-min 30-min 45-min 60-min
0.5106 0.4474 0.3917 0.3427
0.6716 0.5943 0.5389 0.4545
0.6729
0.596
0.5292 0.4689
0.6782 0.6157 0.5553 0.4919
0.6787 0.6114 0.5494 0.4925
0.6889 0.6267 0.5724 0.5259
Avg.
0.4231
0.5648
0.5667
0.5852
0.5841
0.6035
0.665
0.47
1
2
3
4
Percep2ve radius
5
(a) Prediction with p = 12
1
2
3
Percep2ve radius
4
5
(b) 60-minute prediction
Figure 5: Averaged quadratic weighted kappa over the perceptive radius r and the length of historical measurement p
on validation data. The left figure illustrates that as a function of perceptive radius r increase, the longer horizon prediction has more growth. The right figure shows that the
optimal p should be chosen by observing the perceptive radius.
Table 1: Models performance comparison at various future
time points.
any couple locations directly connect each other so that it neglects the topology structure of transport networks. On the
contrary, DeepTransport considers traffic structure results
into higher performance than these baselines, demonstrating that our proposed model has good generalization performance.
0.2
0.15
0.15
0.13
0.14
0.14
0.12
0.11
0.13
0.16
0.15
0.17
0.2
0.19
0.27
0.3
0.33
1
0.29
0.34
0.34
0.35
0.35
Perceptive Radius
4
3
2
1
0.31
Perceptive Radius
4
3
2
0.22
0.18
0.17
0.15
0.24
0.22
0.22
0.22
0.093
0.11
0.11
0.1
0.1
0.16
0.16
0.18
15
30
45
Prediction Minutes
0.30
0.30
0.25
0.20
0.25
0.20
0.15
0.15
5
We compared DeepTransport with four representative
approaches: Random Walk(RW), Autoregressive Integrated Moving Average(ARIMA) and Stacked AutoEncoders(SAEs).
RW: In this baseline, the traffic condition at the next moment is estimated as a result of the random walk at the current moment condition that adds a white noise(a normal variable with zero mean and variance one).
ARIMA: It (Ahmed and Cook 1979) is a common statistical method for learning and predicting future values with
time series data. We take a grid search over all admissible
values of p, d and q which are less than p = 5, d = 2 and q =
5.
FNN: We also implemented Feed-forward Neural Networks (FNN), with a single hidden layer and an output layer
with regression cost. Hidden layer has 32 neurons, and four
output neurons refer to the prediction horizon. Hyperbolic
tangent function and linear transfer function are used for activation function and output respectively.
SAEs: We also implemented SAEs (Lv et al. 2015), one
of the most effective deep learning based methods for traffic
condition forecasting. It concatenates observations of all locations to a large vector as inputs. SAEs also can be viewed
as a pre-training version of FNN with large input vector that
proposed by (Polson and Sokolov 2017). The stacked autoencoder is configured with four layers with [256, 256, 256,
256] hidden units for pre-train. After that, a multi-task linear
regression model is trained on the top layer.
Besides, we also provides the result of DeepTransport with two configurations, with r = 1, p = 12
(DeepTransport-R1P12) and r = 5, p = 12 (DeepTransportR5P12).
Table 1 shows the results of our model and other baselines
on MapBJ. In summary, the models that use spatial information(SAEs, DeepTransport) significantly have higher performance than those that do not use(RW, ARIMA, FNN), especially in longer prediction horizon. On the other hand, SAEs
is a the fully-connected form, meaning that it assumes that
0.36
5
0.35
Comparison with Other Methods
0.10
15
30
45
Prediction Minutes
60
60
(a) Downstream attention weights (b) Upstream attention weights
Figure 6: Average attention weights alignments. It quantifies the spatial-temporal dependency relationships. The left
figure is downstream alignments; it captures our intuition
that as predict time increased, the attention weights shifts
from low order slot to higher ones. The right figure is upstream alignments; the model pay more attention to lower
orders because traffic flow in higher order is dispersed.
Slot Attention Weights
DeepTransport can also expose the influence of each slot on the target location through the slot attention weights. Figure 6 shows the attention weights over prediction minutes and perceptive radius, averaged over all target locations. For the downstream order slots, as shown in Figure 6(a), the attention weights shift from low-order slots to higher ones as the prediction time increases. On the other side, Figure 6(b) shows that the upstream first-order slot has the largest impact on the target location at every future time. To capture this intuition, we use a sandglass as a metaphor for the spatial-temporal dependencies of traffic flow. The flowing sand passes through the aperture of the sandglass just as traffic flows through the target location. For the downstream part, the sand first sinks to the bottom; after a period, the accumulated sand affects the aperture, just like congestion propagating from the higher orders back to the lower orders. Thus, when predicting the long-period condition of the target location, our model prefers to refer to the current conditions at higher orders. The upstream part, on the other hand, is a little different. Higher-order slots are
no longer important references, because traffic flow at higher orders is dispersed: the target location may not be the only channel for upstream traffic. The nearest locations are the ones that directly affect the target location, just as the sand gathers at the aperture of the sandglass, so the future condition of the target location puts more attention on the lower orders. Although the higher-order rows receive less attention in the upstream module, there is still a gradual change as the prediction minutes increase.
Case Study
For office workers, it might be more valuable to tell when traffic congestion will arrive and when the traffic condition will ease. We analyze the model performance over time in Figure 7, which shows the Root Mean Square Error (RMSE) between the ground truth and the predictions of RW, ARIMA, SAEs, and DeepTransport-R5P12. The error has two peak periods, during the morning and evening rush hours. We summarize three points from this figure:
1. During flat periods, especially in the early morning, there is almost no difference between the models, as almost all roads flow freely.
2. Rush hours are usually used to test the effectiveness of models. When the prediction horizon is 15 minutes, DeepTransport has lower errors than the other models, and its advantage is more obvious when predicting the far point in time (the 60-minute prediction).
3. After a traffic peak, it is helpful to tell when the traffic condition will ease. The results just after the traffic peaks show that DeepTransport predicts better over these periods.
Related Works
There is a long thread of statistical models with solid mathematical foundations for traffic prediction. In particular, ARIMA (Ahmed and Cook 1979) and its many variants (Kamarianakis and Vouton 2003; Kamarianakis and Prastacos 2005; Kamarianakis, Shen, and Wynter 2012) have played a central role due to their effectiveness and interpretability. However, these statistical methods rely on a set of constraining assumptions that may fail when dealing with complex and highly nonlinear data. (Karlaftis and Vlahogianni 2011) compare the differences and similarities between statistical methods and neural networks in transportation research.
To our knowledge, the first deep learning approach to traffic prediction was published by (Huang et al. 2014), who used a hierarchical structure with a Deep Belief Network (DBN) at the bottom and a (multi-task) regression layer on top. Afterward, (Lv et al. 2015) used a deep stacked autoencoder (SAEs) model for traffic prediction. A comparison between SAEs and DBN for traffic flow prediction was investigated by (Tan et al. 2016). More recently, (Polson and Sokolov 2017) concatenated all observations into a large input vector and fed it to Feed-forward Neural Networks (FNN) that predicted future traffic conditions at each location.
Figure 7: Model comparison by RMSE over time when the prediction horizon equals 3 (the 15-minute prediction, panel a) and 12 (the 60-minute prediction, panel b)
On other spatial-temporal tasks, several recent deep learning works attempt to capture both time and space information. DeepST (Zhang et al. 2016) uses convolutional neural networks to predict citywide crowd flows. Meanwhile, ST-ResNet (Zhang, Zheng, and Qi 2016) uses a residual neural network framework to forecast the crowds in each region of a city. These works partition a city into an I × J grid map based on longitude and latitude (Lint, Hooqendoorn, and Zuvlen 2002), where a grid cell denotes a region. However, MapBJ provides the traffic network in the form of traffic sections rather than longitude and latitude, and the road partitioning should consider the speed-limit level rather than cutting roads into equal lengths. Due to these differences in data granularity, we do not follow these methods for traffic forecasting.
Conclusions
In this paper, we demonstrate the importance of using temporal and spatial road information in traffic condition forecasting. We propose a novel deep learning model (DeepTransport) to learn the spatial-temporal dependency. The model not only adopts two sequential models (CNN and RNN) to capture the spatial-temporal information but also employs an attention mechanism to quantify the spatial-temporal dependency relationships. We further release a large real-world traffic condition dataset containing millions of recordings. Our experiments show that DeepTransport significantly outperforms previous statistical and deep learning methods for traffic forecasting.
References
[Ahmed and Cook 1979] Ahmed, M. S., and Cook, A. R.
1979. Analysis of freeway traffic time-series data by using
Box-Jenkins techniques. Number 722.
[Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.;
and Bengio, Y. 2014. Neural machine translation by jointly
learning to align and translate. Computer Science.
[Ben-David 2008] Ben-David, A. 2008. Comparison of classification accuracy using Cohen's weighted kappa. Expert
Systems with Applications 34(2):825–832.
[Biernacki and Waldorf 1981] Biernacki, P., and Waldorf, D.
1981. Snowball sampling: Problems and techniques of
chain referral sampling. Sociological methods & research
10(2):141–163.
[Chang and Su 1995] Chang, G.-L., and Su, C.-C. 1995.
Predicting intersection queue with neural network models.
Transportation Research Part C: Emerging Technologies
3(3):175–191.
[Deng et al. 2016] Deng, D.; Shahabi, C.; Demiryurek, U.;
Zhu, L.; Yu, R.; and Liu, Y. 2016. Latent space model for
road networks to predict time-varying traffic. arXiv preprint
arXiv:1602.04301.
[Dia 2001a] Dia, H. 2001a. An object-oriented neural network approach to short-term traffic forecasting. European
Journal of Operational Research 131(2):253–261.
[Dia 2001b] Dia, H. 2001b. An object-oriented neural network approach to short-term traffic forecasting. European
Journal of Operational Research 131(2):253–261.
[Evgeniou and Pontil 2004] Evgeniou, T., and Pontil, M.
2004. Regularized multi–task learning. In Proceedings of
the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 109–117. ACM.
[Hochreiter and Schmidhuber 1997] Hochreiter, S., and
Schmidhuber, J. 1997. Long short-term memory. Neural
computation 9(8):1735–1780.
[Huang et al. 2014] Huang, W.; Song, G.; Hong, H.; and Xie,
K. 2014. Deep architecture for traffic flow prediction: Deep
belief networks with multitask learning. IEEE Transactions
on Intelligent Transportation Systems 15(5):2191–2201.
[Innamaa 2000] Innamaa, S. 2000. Short-term prediction of
traffic situation using mlp-neural networks. In Proceedings
of the 7th world congress on intelligent transport systems,
Turin, Italy, 6–9.
[Jeong et al. 2013] Jeong, Y.-S.; Byon, Y.-J.; Castro-Neto,
M. M.; and Easa, S. M. 2013. Supervised weightingonline learning algorithm for short-term traffic flow prediction. IEEE Transactions on Intelligent Transportation Systems 14(4):1700–1707.
[Kamarianakis and Prastacos 2005] Kamarianakis, Y., and
Prastacos, P. 2005. Space-time modeling of traffic flow.
Computers & Geosciences 31(2):119–133.
[Kamarianakis and Vouton 2003] Kamarianakis, Y., and
Vouton, V. 2003. Forecasting traffic flow conditions
in an urban network: Comparison of multivariate and
univariate approaches. Transportation Research Record
1857(1):74–84.
[Kamarianakis, Shen, and Wynter 2012] Kamarianakis, Y.;
Shen, W.; and Wynter, L. 2012. Real-time road traffic
forecasting using regime-switching space-time models and
adaptive lasso. Applied stochastic models in business and
industry 28(4):297–315.
[Karlaftis and Vlahogianni 2011] Karlaftis, M. G., and Vlahogianni, E. I. 2011. Statistical methods versus neural networks in transportation research: Differences, similarities
and some insights. Transportation Research Part C: Emerging Technologies 19(3):387–399.
[Kingma and Ba 2014] Kingma, D., and Ba, J. 2014. Adam:
A method for stochastic optimization. arXiv preprint
arXiv:1412.6980.
[LeCun, Bengio, and Hinton 2015] LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444.
[LeCun, Bengio, and others 1995] LeCun, Y.; Bengio, Y.;
et al. 1995. Convolutional networks for images, speech,
and time series. The handbook of brain theory and neural
networks 3361(10):1995.
[Lint, Hooqendoorn, and Zuvlen 2002] Lint, J. W. C. V.; Hooqendoorn, S. P.; and Zuvlen, H. J. V. 2002. Freeway travel time prediction with state-space neural networks: Modeling state-space dynamics with recurrent neural networks. Transportation Research Record Journal of the Transportation Research Board 1811(1):347–369.
[Lv et al. 2015] Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; and
Wang, F.-Y. 2015. Traffic flow prediction with big data:
a deep learning approach. IEEE Transactions on Intelligent
Transportation Systems 16(2):865–873.
[Pan, Demiryurek, and Shahabi 2012] Pan, B.; Demiryurek,
U.; and Shahabi, C. 2012. Utilizing real-world transportation data for accurate traffic prediction. In Data Mining
(ICDM), 2012 IEEE 12th International Conference on, 595–
604. IEEE.
[Polson and Sokolov 2017] Polson, N. G., and Sokolov,
V. O. 2017. Deep learning for short-term traffic flow prediction. Transportation Research Part C Emerging Technologies 79:1–17.
[Rocktäschel et al. 2015] Rocktäschel, T.; Grefenstette, E.; Hermann, K. M.; Kočiský, T.; and Blunsom, P. 2015. Reasoning about entailment with neural attention.
[Stathopoulos and Karlaftis 2003] Stathopoulos, A., and
Karlaftis, M. G. 2003. A multivariate state space approach for urban traffic flow modeling and prediction.
Transportation Research Part C: Emerging Technologies
11(2):121–135.
[Tan et al. 2016] Tan, H.; Xuan, X.; Wu, Y.; Zhong, Z.; and
Ran, B. 2016. A comparison of traffic flow prediction methods based on dbn. In CICTP 2016. 273–283.
[Vlahogianni, Karlaftis, and Golias 2005] Vlahogianni,
E. I.; Karlaftis, M. G.; and Golias, J. C. 2005. Optimized
and meta-optimized neural networks for short-term traffic
flow prediction: A genetic approach.
Transportation
Research Part C: Emerging Technologies 13(3):211–234.
[Williams and Hoel 1999] Williams, B. M., and Hoel, L. A.
1999. Modeling and forecasting vehicular traffic flow as a
seasonal stochastic time series process. Technical report.
[Williams 2001] Williams, B. 2001. Multivariate vehicular traffic flow prediction: Evaluation of arimax modeling.
Transportation Research Record: Journal of the Transportation Research Board.
[Zhang et al. 2016] Zhang, J.; Zheng, Y.; Qi, D.; Li, R.;
and Yi, X. 2016. Dnn-based prediction model for spatiotemporal data. In Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic
Information Systems, 92. ACM.
[Zhang, Zheng, and Qi 2016] Zhang, J.; Zheng, Y.; and
Qi, D. 2016. Deep spatio-temporal residual networks
for citywide crowd flows prediction.
arXiv preprint
arXiv:1610.00081.
arXiv:1403.4200v1 [] 17 Mar 2014
A family of quotients of the Rees algebra
V. Barucci ∗, M. D’Anna †, F. Strazzanti
‡
Dedicated to Marco Fontana on occasion of his 65-th birthday
Abstract
A family of quotient rings of the Rees algebra associated to a commutative ring is studied. This family generalizes both the classical
concept of idealization by Nagata and a more recent concept, the
amalgamated duplication of a ring. It is shown that several properties
of the rings of this family do not depend on the particular member.
MSC: 20M14; 13H10; 13A30.
Introduction
Let R be a commutative ring and let M be an R-module; the idealization,
also called trivial extension, is a classical construction introduced by Nagata
(see [15, page 2], [11, Chapter VI, Section 25] and [8]) that produces a new
ring containing an ideal isomorphic to M. Recently, D’Anna and Fontana
introduced the so-called amalgamated duplication (see [4], [2], studied also in,
e.g., [5], [14] and [1]), that, starting with a ring R and an ideal I, produces a
new ring that, if M = I, has many properties coinciding with the idealization
(e.g., they have the same Krull dimension and if I is a canonical ideal of a
local Cohen-Macaulay ring R, both of them give a Gorenstein ring). On the
other hand, while the idealization is never reduced, the duplication can be
∗ email: barucci@mat.uniroma1.it
† email: mdanna@dmi.unict.it
‡ email: strazzanti@mail.dm.unipi.it
reduced, but is never an integral domain. Looking for a unified approach to these two constructions, D’Anna and Re in [6] observed that it is possible to present both of them as quotients of the Rees algebra modulo particular ideals. This observation led to the subject of this paper, where we study a more general construction that produces a ring which, in some cases, is an integral domain.
More precisely, given a monic polynomial t^2 + at + b ∈ R[t] and denoting by R+ the Rees algebra associated to the ring R with respect to the ideal I, i.e. R+ = ⊕_{n≥0} I^n t^n, we study the quotient ring R+/(I^2(t^2 + at + b)), where (I^2(t^2 + at + b)) is the contraction to R+ of the ideal generated by t^2 + at + b in R[t]. We denote this ring by R(I)a,b.
In the first section we introduce the family of rings R(I)a,b , show that
idealization and duplication are particular cases of them (cf. Proposition
1.4) and study several general properties such as Krull dimension, total ring
of fractions, integral closure, Noetherianity and spectrum. In Section 2 we
assume that R is local; in this case we prove that the rings R(I)a,b have the
same Hilbert function and that they are Cohen-Macaulay if and only if I
is a maximal Cohen-Macaulay R-module. We conclude this section by proving
that, if R is a Noetherian integral domain of positive dimension, there exist
infinitely many choices of b such that the ring R(I)0,−b is an integral domain.
Finally in the last section we study the one-dimensional case. If R is local,
Noetherian and I a regular ideal we find a formula for the CM type of R(I)a,b
(cf. Theorem 3.2) and prove that it is Gorenstein if and only if I is a canonical
ideal of R. Moreover, we show the connection of the numerical duplication
of a numerical semigroup (see [7]) with R(I)0,−b , where R is a numerical
semigroup ring or an algebroid branch and b has odd valuation (see Theorems
3.4 and 3.6).
1
Basic properties
Let R be a commutative ring with unity and I a proper ideal of R; let t be
an indeterminate. The Rees algebra (also called Blow-up algebra) associated
to R and I is defined as the following graded subring of R[t]:
R+ = ⊕_{n≥0} I^n t^n ⊆ R[t].
Lemma 1.1. Let f(t) ∈ R[t] be a monic polynomial of degree k. Then
f(t)R[t] ∩ R+ = {f(t)g(t) ; g(t) ∈ I^k R+}.
Proof. Observe first that I^k R+ = {Σ_{i=0}^n b_i t^i ; b_i ∈ I^{k+i}}. It is trivial that
each element of the form f(t)g(t), with g(t) ∈ I^k R+, is in f(t)R[t] ∩ R+. Conversely, if g(t) ∈ R[t] and if f(t)g(t) ∈ R+, we prove by induction on the degree of g(t) that g(t) ∈ I^k R+. If the degree of g(t) is zero, i.e. g(t) = r ∈ R, and if f(t)r ∈ R+, then the leading term of f(t)r is rt^k and r ∈ I^k ⊂ I^k R+. The inductive step: suppose that the leading term of g(t) is h_n t^n; thus the leading term of f(t)g(t) is h_n t^{k+n}. If f(t)g(t) ∈ R+, then h_n ∈ I^{k+n} and so f(t)h_n t^n ∈ R+. It follows that, if f(t)g(t) ∈ R+, then f(t)g(t) − f(t)h_n t^n = f(t)ḡ(t) ∈ R+, where deg(ḡ(t)) < n = deg(g(t)). By inductive hypothesis ḡ(t) ∈ I^k R+, hence g(t) = ḡ(t) + h_n t^n ∈ I^k R+.
We denote the ideal of the previous lemma by (I^k f(t)).
Lemma 1.2. Let f(t) ∈ R[t] be a monic polynomial of degree k > 0. Then each element of the factor ring R+/(I^k f(t)) is represented by a unique polynomial of R+ of degree < k.
Proof. The euclidean division of an element g(t) of R+ by the monic polynomial f(t) is always possible and gives g(t) = f(t)q(t) + r(t), with deg(r(t)) < k. Moreover, an easy calculation shows that q(t) ∈ I^k R+ and r(t) ∈ R+. Thus g(t) ≡ r(t) (mod (I^k f(t))). Finally, if r1(t) and r2(t) are distinct polynomials of R+ of degree < k, then also deg(r1(t) − r2(t)) < k and they represent different classes.
It follows from Lemma 1.2 that the ring R is a subring of R+/(I^k f(t)).
Proposition 1.3. The ring extensions R ⊆ R+/(I^k f(t)) ⊆ R[t]/(f(t)) are both integral and the three rings have the same Krull dimension.
Proof. By the two lemmas above we have the two inclusions. Moreover, the class of t in R[t]/(f(t)) is integral over R and over R+/(I^k f(t)) as well. It follows that all the extensions are integral. By a well known theorem on integral extensions, we get that the three rings have the same dimension.
We observe now that, for particular choices of the polynomial f (t) above,
we get known concepts.
Recall that Nagata's idealization, or simply idealization, of R with respect to an ideal I of R (which could be defined for any R-module M) is defined as the R-module R ⊕ I endowed with the multiplication (r, i)(s, j) = (rs, rj + si); it is denoted by R ⋉ I.
The duplication of R with respect to I is defined as follows:
R ✶ I = {(r, r + i) | r ∈ R, i ∈ I} ⊂ R × R;
note that R ✶ I ≅ R ⊕ I endowed with the multiplication (r, i)(s, j) = (rs, rj + si + ij).
Proposition 1.4. We have the following isomorphisms of rings:
1) R+/(I^2 t^2) ≅ R ⋉ I
2) R+/(I^2(t^2 − t)) ≅ R ✶ I
Proof. 1) For each residue class modulo (I^2 t^2), let r + it ∈ R+, with r ∈ R and i ∈ I, be its unique representative; the map
α : R+/(I^2 t^2) → R ⋉ I
defined by setting α(r + it + (I^2 t^2)) = (r, i) is an isomorphism of rings: as a matter of fact, α preserves sums and, if r, s ∈ R and i, j ∈ I, we have α((r + it + (I^2 t^2))(s + jt + (I^2 t^2))) = α(rs + (rj + si)t + ijt^2 + (I^2 t^2)) = α(rs + (rj + si)t + (I^2 t^2)) = (rs, rj + si) = (r, i)(s, j).
2) Similarly to 1), the map
β : R+/(I^2(t^2 − t)) → R ✶ I
defined by setting β(r + it + (I^2(t^2 − t))) = (r, r + i) is an isomorphism of rings. As for the product, we have β((r + it + (I^2(t^2 − t)))(s + jt + (I^2(t^2 − t)))) = β(rs + (rj + si)t + ijt^2 + (I^2(t^2 − t))) = β(rs + (rj + si + ij)t + ij(t^2 − t) + (I^2(t^2 − t))) = β(rs + (rj + si + ij)t + (I^2(t^2 − t))) = (rs, rs + rj + si + ij) = (r, r + i)(s, s + j).
The previous proposition makes it natural to consider the family R(I)a,b = R+/(I^2(t^2 + at + b)), where a, b ∈ R. As an R-module R(I)a,b ≅ R ⊕ I and the natural injection R ֒→ R(I)a,b is a ring homomorphism; however 0 ⊕ I in general (if b ≠ 0) is not an ideal of R(I)a,b, although this happens for idealization and duplication.
Both idealization and duplication can also be realized in other cases.
Proposition 1.5. 1) If t^2 + at + b = (t − α)^2, with α ∈ R, then R(I)a,b ≅ R ⋉ I.
2) If t^2 + at + b = (t − α)(t − β), with (t − α) and (t − β) comaximal ideals of R[t], then R(I)a,b ≅ R ✶ I.
Proof. 1) It is enough to consider the automorphism of R[t] induced by t ↦ t − α.
2) By the Chinese Remainder Theorem, the map Φ : R[t]/(t^2 + at + b) → R × R, defined by Φ(r + st) = (r + αs, r + βs), is an isomorphism of rings, as is the map Ψ : R[t]/(t^2 − t) → R × R, defined by Ψ(r + st) = (r, r + s), so that
Ψ^{−1} ∘ Φ : R[t]/(t^2 + at + b) → R[t]/(t^2 − t),
where (Ψ^{−1} ∘ Φ)(r + st) = (r + αs) + (β − α)st, is also an isomorphism. If we fix an ideal I of R and restrict Ψ^{−1} ∘ Φ to the subring R(I)a,b, i.e. to the elements r + it with r ∈ R and i ∈ I, we get
R(I)a,b ≅ {(r + αi) + (β − α)it ; r ∈ R, i ∈ I} = {r′ + (β − α)it ; r′ ∈ R, i ∈ I};
the last ring is R ✶ J, where J = (β − α)I. To finish the proof we show that β − α is invertible. In the automorphism of R[t] induced by t ↦ t + β, the ideal (t − α, t − β) corresponds to (t − α + β, t) = (β − α, t), and this last ideal is R[t] if and only if β − α is invertible.
Example 1.6. Let R = Z and t^2 + at + b = t^2 − 5t + 6 = (t − 2)(t − 3). Then for each ideal I of Z, Z(I)−5,6 ≅ Z ✶ I.
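A quick sanity check of this factorization (our own illustration, assuming the sympy library):

```python
from sympy import symbols, factor

t = symbols('t')
print(factor(t**2 - 5*t + 6))   # (t - 2)*(t - 3), so alpha = 2, beta = 3
# beta - alpha = 1 is invertible in Z, as required in the proof of Proposition 1.5(2)
```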
In this paper we study the family of rings of the form R(I)a,b, showing that many relevant properties are independent of the member of the family. From now on, we denote each element of R(I)a,b simply by r + it (r ∈ R, i ∈ I).
Proposition 1.7. Let Q be the total ring of fractions of R(I)a,b. Then each element of Q is of the form (r + it)/u, where u is a regular element of R.
Proof. Assume that (s + jt) is a regular element of R(I)a,b and that (r + it)/(s + jt) ∈ Q. Since (s + jt) is regular, x(s + jt) ≠ 0 for every x ∈ R \ {0}. Hence, xj = 0 implies xs ≠ 0.
Consider, now, the element (ja − s + jt). To prove the Proposition, it is
enough to show that:
i) the product u = (s + jt)(ja − s + jt) is a regular element of R,
ii) (ja − s + jt) is a regular element of R(I)a,b .
In fact, in this case we can write (r + it)/(s + jt) = (r + it)(ja − s + jt)/u. Observing that −at − t^2 = b ∈ R, we have u = s(ja − s) − j^2 b ∈ R. If x(ja − s + jt) = 0 (for some x ∈ R \ {0}), then xj = 0, which gives 0 = x(ja − s + jt) = −xs, i.e. xs = 0, a contradiction. Hence (ja − s + jt) is not killed by any nonzero element of R; it follows that u is regular in R, since otherwise there would exist x ∈ R \ {0} such that ux = 0, which would make (s + jt) not regular in R(I)a,b, being killed by (ja − s + jt)x ≠ 0. Thus i) is proved.
ii): if (ja − s + jt) is not regular in R(I)a,b, there exists (h + kt) ≠ 0 such that (ja − s + jt)(h + kt) = 0. Hence u(h + kt) = (s + jt)(ja − s + jt)(h + kt) = 0, so u is not regular in R(I)a,b. But then it is not regular in R; contradiction.
Corollary 1.8. Assume that I is a regular ideal; then the rings R(I)a,b and
R[t]/(t2 + at + b) have the same total ring of fractions and the same integral
closure.
Proof. Each element of R[t]/(t2 + at + b), let’s say r + r1 t with r, r1 ∈ R,
is in Q, in fact if i is an element of I regular in R, i is also regular in
R[t]/(t2 + at + b) and r + r1 t = (ir + ir1 t)/i ∈ Q. Moreover, if r + r1 t is
regular in R[t]/(t2 + at + b), it is also regular in Q. In fact, according to
Proposition 1.7, an element of Q is of the form (s + jt)/u (s ∈ R, j ∈ I,
u ∈ R and regular); if (r + r1 t)(s + jt)/u = 0, then (s + jt)/u = 0. It follows
that, if (r + r1 t)/(s + s1 t) is an element of Q′ , the total ring of fractions of
R[t]/(t2 + at + b), then r + r1 t and s + s1 t belong to Q and s + s1 t is regular
in Q, so (r + r1 t)/(s + s1 t) ∈ Q. On the other hand, if (r + it)/u ∈ Q,
with u ∈ R and regular in R, then u is also regular in R[t]/(t2 + at + b) and
(r + it)/u ∈ Q′ .
By Corollary 1.8, it follows that the integral closure of R(I)a,b contains R̄[t]/(t^2 + at + b), where R̄ is the integral closure of R, but it may be strictly larger. For example, for R = Z and t^2 + at + b = t^2 + 4, we have that Z[t]/(t^2 + 4) is not integrally closed; in fact (t/2)^2 + 1 = 0.
Using the chain of inclusions R ⊆ R(I)a,b ⊆ R[t]/(t2 + at + b) and the fact
that these extensions are integral, we can get information on Spec(R(I)a,b )
with respect to Spec(R).
Proposition 1.9. For each prime ideal P of R, there are at most two prime
ideals of R(I)a,b lying over P . Moreover if t2 + at + b is irreducible on R/m
for any maximal ideal m of R, then there is exactly one prime ideal of R(I)a,b
lying over P .
Proof. Every prime ideal of R(I)a,b , lying over P has to be the contraction
of a prime ideal of R[t]/(t2 + at + b). It is well known (see e.g. [9, Chapter
6]) that for every prime ideal P of R, P [t] is a prime of R[t] lying over P
and there exist infinitely many other primes in R[t], lying over P , all of them
containing P [t] and with no inclusions among them. In particular, there is a
bijection between these ideals and the nonzero prime ideals of (Q(R/P ))[t]
(here Q(R/P ) denotes the field of fractions of R/P ); therefore the image of all
these prime ideals J in (Q(R/P ))[t] is of the form (f (t)), for some irreducible
polynomial f(t); hence J = ϕ_P^{−1}((f(t))), where ϕ_P is the composition of the canonical homomorphisms R[t] → (R/P)[t] ֒→ (Q(R/P))[t]. Thus the prime
ideals of R[t]/(t2 + at + b) lying over P are of the form J/(t2 + at + b), with
J ⊇ (t2 + at + b). This means that the polynomial f (t), corresponding to
J, divides the image of t2 + at + b in Q(R/P )[t]. Hence, if t2 + at + b is
irreducible in Q(R/P )[t], there is only one prime of R[t]/(t2 + at + b) lying
over P ; on the other hand, if t2 + at + b has two distinct irreducible factors in
(Q(R/P ))[t], there exist exactly two prime ideals in R[t]/(t2 + at + b) lying
over P . Hence there are at most two primes in R(I)a,b lying over P and the
first part of the proposition is proved.
Suppose that J/(t^2 + at + b) ∈ Spec(R[t]/(t^2 + at + b)) and J/(t^2 + at + b) ∩ R = P. We know that J = ϕ_P^{−1}((f(t))), where f(t) is an irreducible
factor of t2 + at + b in Q(R/P )[t]. If P ′ ∈ Spec(R), P ′ ⊂ P , then the prime
ideals of R[t]/(t2 + at + b) lying over P ′ correspond to the irreducible factors
of t2 + at + b in Q(R/P ′ )[t]; since the factorization of t2 + at + b in Q(R/P ′)[t]
induces a factorization in Q(R/P )[t], f (t) is irreducible also in Q(R/P ′)[t]
and we have a prime ideal of R[t]/(t^2 + at + b) lying over P′ of the form J′/(t^2 + at + b), with J′ = ϕ_{P′}^{−1}((f(t))) ⊂ J. In particular, if m is a maximal
ideal of R containing P and t2 + at + b is irreducible on R/m, then there
is one and only one prime ideal of R[t]/(t2 + at + b) lying over P and the
same happens for R(I)a,b because the extension R(I)a,b ⊆ R[t]/(t2 + at + b)
is integral.
Remark 1.10. 1) Notice that, for particular a and b, the factorization of
t2 + at + b in Q(R/P )[t] may not depend on P . For example, in the case of
the idealization, the equality t2 = t · t, implies that there is only one prime
lying over P , both in R[t]/(t2 ) and in the idealization. As for the case of the
duplication, the equality t2 − t = t · (t − 1), implies that there are two primes
in R[t]/(t2 − t) lying over P , namely (P, t) and (P, t − 1). Contracting these
primes to the duplication we get the same prime if and only if P ⊇ I (see,
e.g., [4]).
2) By the proof of Proposition 1.9 we see that the extension R ⊆ R[t]/(t2 +
at + b) and the extension R ⊆ R(I)a,b as well fulfill the going down property.
In particular a minimal prime of R(I)a,b lies over a minimal prime P of R.
3) The proof of the previous proposition also implies that a sufficient
condition for R(I)a,b to be an integral domain is that R is an integral domain
and t2 + at + b is irreducible in Q(R)[t]. We will see in the next section
that, under particular assumptions on R, we can prove the existence of such
polynomials.
We conclude this section by characterizing the rings R(I)a,b which are Noetherian.
Proposition 1.11. The following conditions are equivalent:
(i) R is a Noetherian ring;
(ii) R(I)a,b is a Noetherian ring for all a, b ∈ R;
(iii) R(I)a,b is a Noetherian ring for some a, b ∈ R.
Proof. If R is Noetherian, then the Rees algebra R+ is also Noetherian; hence it is straightforward that R(I)a,b is Noetherian for every a, b ∈ R, being a quotient of a Noetherian ring.
Since condition (iii) is a particular case of (ii), we need only prove that (iii) implies (i). Assume by contradiction that R is not a Noetherian ring; then there exists an ideal J = (f_1, f_2, ...) of R that is not finitely generated, and we can assume that f_{i+1} ∉ (f_1, ..., f_i) for any i. Consider the ideal J R(I)a,b of R(I)a,b; by hypothesis, it is finitely generated and its generators can be chosen from those of J (regarded as elements of R(I)a,b). Hence we can assume that J R(I)a,b = (f_1, ..., f_s). This implies f_{s+1} = Σ_{k=1}^{s} f_k (r_k + i_k t) for some r_k ∈ R and i_k ∈ I, and therefore f_{s+1} = Σ_{k=1}^{s} f_k r_k; contradiction.
2
The local case
Assume that R is local, with maximal ideal m. Then it is known that both R ✶ I and R ⋉ I are local with maximal ideals m ⊕ I (in the first case under the isomorphism R ✶ I ≅ R ⊕ I). More generally:
Proposition 2.1. R is local if and only if R(I)a,b is local. In this case the
maximal ideal of R(I)a,b is m ⊕ I (as R-module).
Proof. Let R be local; we claim that all the elements r + it with r ∉ m are invertible in R(I)a,b. As a matter of fact, looking for s + jt such that (r + it)(s + jt) = 1, we obtain the linear system
rs − ibj = 1
is + (r − ia)j = 0
which has determinant δ = r^2 − iar + i^2 b ∈ r^2 + m. Thus δ is invertible in R; moreover, it is easy to check that if (s, j) is the solution of the system, then j ∈ I; hence s + jt ∈ R(I)a,b and it is the inverse of r + it.
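For convenience, the solution of the system by Cramer's rule is (a routine computation, spelled out here):

```latex
s = \frac{r - ia}{\delta}, \qquad j = \frac{-i}{\delta},
\qquad \text{where } \delta = r^2 - iar + i^2 b;
```

in particular j ∈ I because i ∈ I and δ is a unit of R, which confirms the claim above.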
Conversely, if R(I)a,b is local, R has to be local, since R ⊆ R(I)a,b is an
integral extension (cf. Proposition 1.3).
It is also clear that, if (R, m) is local and if we denote by M the maximal ideal of R(I)a,b, then k = R/m ≅ R(I)a,b/M. In the sequel, we will always denote by k the common residue field of R and R(I)a,b.
Remark 2.2. Since R(I)a,b is an R-algebra, every R(I)a,b-module N is also an R-module, and then λ_{R(I)a,b}(N) ≤ λ_R(N) (where λ(·) denotes the length of a module).
If we consider an R(I)a,b-module N annihilated by M, we have that, as an R-module, N is annihilated by m. Hence it is naturally both an R(I)a,b/M-vector space and an R/m-vector space; in particular, λ_{R(I)a,b}(N) = dim_k(N) = λ_R(N) (where k = R/m ≅ R(I)a,b/M).
For a Noetherian local ring (R, m), we denote by ν(I) the cardinality of a minimal set of generators of the ideal I. The embedding dimension of R, ν(R), is by definition ν(m), and the Hilbert function of R is H(n) = λ_R(m^n/m^{n+1}).
Proposition 2.3. Let (R, m) be a Noetherian local ring. Then, for every
a, b ∈ R the rings R(I)a,b have the same Hilbert function. In particular, they
have the same embedding dimension ν(R(I)a,b ) = ν(R) + ν(I) and the same
multiplicity.
Proof. First of all, let us consider M^2; we have M^2 = m^2 + mIt (and hence, as an R-module, it is isomorphic to m^2 ⊕ mI): in fact, if (r + it) and (s + jt) are in M, then their product rs − bij + (rj + si − aij)t ∈ m^2 ⊕ mI. Conversely, pick an element of m^2 ⊕ mI of the form rs + uit (with r, s, u ∈ m and i ∈ I); we have rs + uit = rs + u(it) ∈ M^2; since m^2 ⊕ mI is generated by elements of this form we have the equality. Arguing similarly for any n ≥ 2, we immediately obtain that M^n = m^n + m^{n−1}I and, as an R-module, it is isomorphic to m^n ⊕ m^{n−1}I. It follows that, as R-modules, M^n/M^{n+1} ≅ m^n/m^{n+1} ⊕ Im^{n−1}/Im^n. By the previous remark the length of M^n/M^{n+1} as an R(I)a,b-module coincides with its dimension as a k-vector space and with its length as an R-module. The thesis follows immediately.
Remark 2.4. Let R be a Noetherian local ring. By Propositions 1.3 and
2.3 we get
dim R(I)a,b = dim R ≤ ν(R) ≤ ν(R) + ν(I) = ν(R(I)a,b ).
The first inequality is an equality if and only if R is regular and the second
if and only if ν(I) = 0, that is equivalent to I = 0, by Nakayama’s lemma.
This means that R(I)a,b is regular if and only if R is regular and I = 0;
clearly if I = 0 one has R(I)a,b = R.
We want to show that, if R is a local Noetherian integral domain, we can
always find integral domains in the family of rings R(I)a,b . The following
proposition was proved in [6] and we publish it with the permission of the
second author.
Proposition 2.5. Let (R, m) be a local Noetherian integral domain with dim R ≥ 1 and Q(R) its field of fractions. Then for any integer n > 1 that is not a multiple of 4, there exist infinitely many elements b ∈ R such that the polynomial t^n − b is irreducible over Q(R).
Proof. We will use the following well-known criterion of irreducibility: if b is not a p-th power for any prime p | n, and b ∉ −4Q(R)^4 if 4 | n, then t^n − b is irreducible (see [13, Chapter VI, Theorem 9.1]). In particular, if 4 does not divide n and b is not a d-th power for any integer d > 1 such that d | n, then t^n − b is irreducible.
Taking a prime ideal P ⊂ R such that ht P = 1, we have dim R_P = 1 and, by the Krull-Akizuki Theorem, the integral closure R̄_P of R_P in Q(R_P) = Q(R) is Noetherian (see, e.g., [12, Theorem 4.9.2]), hence it is a Dedekind ring. So there is at least one discrete valuation v : Q(R)* → Z with v((R̄_P)_M) = N (with M a maximal ideal of R̄_P). Since R ⊆ R̄_P ⊆ (R̄_P)_M have the same field of fractions, it follows that v(R) ⊆ N is a semigroup containing two consecutive integers; so there exists c > 0 such that any x ∈ N with x ≥ c belongs to v(R).
In particular, there exist infinitely many elements b ∈ R such that v(b) is prime to n, so b cannot be a d-th power in Q(R) for any d > 1 such that d | n. Hence we can find infinitely many b ∈ R such that t^n − b ∈ Q(R)[t] is irreducible.
Corollary 2.6. Let R be a local Noetherian integral domain with dim R ≥ 1,
let Q(R) be its field of fractions and let I ⊂ R be an ideal. Then there exist
infinitely many elements b ∈ R such that R(I)0,−b is an integral domain.
Proof. By Proposition 2.5 we can find b such that (t2 − b) is irreducible in
Q(R)[t]. The thesis now follows by point 3) of Remark 1.10.
Now we want to investigate the Cohen-Macaulayness of R(I)a,b .
Assume that R is a CM ring; we set dim R = depth R = d; moreover,
Ann(R(I)a,b ) = (0), hence the dimension of R(I)a,b as R-module (i.e., since
R(I)a,b is a finite R-module, dim(R/Ann(R(I)a,b ))) equals the Krull dimension of R(I)a,b . We can assume that d ≥ 1, otherwise both R and R(I)a,b are
trivially CM.
Given a regular sequence x = x1 , x2 , . . . , xd of the ring R, it is not difficult
to check that it is an R(I)a,b -regular sequence if and only if its image in
R(I)a,b is a regular sequence of R(I)a,b as a ring. Moreover, since x is a
system of parameters of R, then it is a system of parameters of R(I)a,b (since R ⊆ R(I)a,b is an integral extension) and x is a system of parameters for
the R-module R(I)a,b . Hence, arguing exactly as in [2] we have that R(I)a,b
is a CM ring if and only if it is a CM R-module.
Since R(I)a,b ≅ R ⊕ I as an R-module, it follows that depth(R ⊕ I) = min{depth I, depth R} = depth I, and therefore R(I)a,b is a CM R-module if and only if I is a CM R-module of dimension d (that is, if and only if I is a maximal CM R-module).
Hence we can state the following:
Proposition 2.7. Assume that R is a local CM ring of dimension d. Then
R(I)a,b is CM if and only if I is a CM R-module of dimension d. In particular, the Cohen-Macaulayness of R(I)a,b depends only on the ideal I.
Remark 2.8. We notice that if I is a canonical ideal of R, since R(I)a,b ≅ R ⊕ I, we can apply a result of Eisenbud (stated and proved in [2]) to get that R(I)a,b is Gorenstein for every a, b ∈ R. We will see that in the one-dimensional case we can determine the CM type of R(I)a,b and deduce that it is a Gorenstein ring if and only if I is a canonical ideal.
Remark 2.9. In [3, Corollary 5.8], under the assumption that (R, m) is a local CM ring with infinite residue field, the following formula for the multiplicity of the duplication has been proved: e(R ✶ I) = e(R) + λ_R(I/IJ) (where J is any minimal reduction of m); in particular, if dim R = 1, then e(R ✶ I) = 2e(R).
By Proposition 2.3 we can state that, under the same assumptions, the same formulas hold for the multiplicity of R(I)a,b, for every a, b ∈ R.
3
One-dimensional case
Assume throughout this section that (R, m) is a one-dimensional, Noetherian, local ring and that I is a regular ideal; we determine the CM type of R(I)a,b.
Since I is regular, it is a maximal Cohen-Macaulay R-module and R is
a CM ring; therefore R(I)a,b is also CM by Proposition 2.7. In this case the
type of R(I)a,b equals the length of (R(I)a,b : M)/R(I)a,b as R(I)a,b -module,
where M is the maximal ideal of R(I)a,b ; so we start studying R(I)a,b : M.
Lemma 3.1. For any a, b ∈ R, the R(I)a,b-module R(I)a,b : M is equal to
{ r/s + (i/s)t ; i/s ∈ I : m, r/s ∈ (I : I) ∩ (R : m) }.
Proof. Consider a generic element r/s + (i/s)t of Q(R(I)a,b), where r, s ∈ R, i ∈ I and s is regular (cf. Proposition 1.7). It is an element of R(I)a,b : M if and only if
(r/s + (i/s)t)(m + jt) = rm/s + (im/s)t + (rj/s)t + (ij/s)t^2 = rm/s − ijb/s + (im/s + rj/s − ija/s)t
is an element of R(I)a,b for any m ∈ m and any j ∈ I, that is, (rm/s − ijb/s) ∈ R and (im/s + rj/s − ija/s) ∈ I.
Suppose that r/s + (i/s)t ∈ R(I)a,b : M; in particular, if j = 0 we have
rm/s ∈ R and im/s ∈ I, that is r/s ∈ R : m and i/s ∈ I : m. Moreover
since ja ∈ I ⊆ m and i/s ∈ I : m, we have im/s, ija/s ∈ I, hence rj/s ∈ I
for any j ∈ I and then r/s ∈ I : I.
Conversely, suppose that i/s ∈ I : m and r/s ∈ (I : I) ∩ (R : m). Then
rm/s − ijb/s ∈ R + I = R and im/s + rj/s − ija/s ∈ I + I + I = I,
consequently r/s + (i/s)t ∈ R(I)a,b : M.
Theorem 3.2. The CM type of R(I)a,b is
t(R(I)a,b) = λ_R( ((I : I) ∩ (R : m)) / R ) + λ_R( (I : m) / I );
in particular, it does not depend on a and b.
Proof. Consider the homomorphism ϕ of R-modules
ϕ : R(I)a,b : M → ((I : I) ∩ (R : m))/R × (I : m)/I,    r/s + (i/s)t ↦ ( r/s + R, i/s + I ).
Thanks to the previous lemma, ϕ is well defined and surjective; moreover, its kernel is given by the elements r/s + (i/s)t with r/s ∈ R and i/s ∈ I, that is, ker ϕ = R(I)a,b; hence
(R(I)a,b : M)/R(I)a,b ≅ ((I : I) ∩ (R : m))/R × (I : m)/I.
Consequently, using Remark 2.2, we have
t(R(I)a,b) = λ_{R(I)a,b}( (R(I)a,b : M)/R(I)a,b ) = λ_R( (R(I)a,b : M)/R(I)a,b ) = λ_R( ((I : I) ∩ (R : m))/R × (I : m)/I ) = λ_R( ((I : I) ∩ (R : m))/R ) + λ_R( (I : m)/I ).
Corollary 3.3. The ring R(I)a,b is Gorenstein if and only if I is a canonical
ideal of R.
Proof. Recall first that a ring is Gorenstein if and only if it has CM type 1. Recall also that I is a canonical ideal of a one-dimensional CM local ring R, i.e. an ideal I such that I : (I : J) = J for each regular ideal J of R, if and only if λ_R((I : m)/I) = 1 (cf. [10], Satz 3.3). Notice that, for any regular and proper ideal I, λ_R((I : m)/I) ≥ 1.
Thus, by the formula of Theorem 3.2 we get: if R(I)a,b is Gorenstein, then t(R(I)a,b) = 1 = 0 + 1; since the second summand is at least 1, this forces λ_R(((I : I) ∩ (R : m))/R) = 0 and λ_R((I : m)/I) = 1, i.e. I is a canonical ideal. Conversely, if I is a canonical ideal, then I : I = R and λ_R((I : m)/I) = 1; by the same formula we get t(R(I)a,b) = 0 + 1 = 1, i.e. R(I)a,b is Gorenstein.
We conclude this section studying two particular cases of one dimensional
rings: numerical semigroup rings and algebroid branches; in both cases we
show the connection with the numerical duplication of a numerical semigroup
(see [7]).
Recall that a numerical semigroup is a submonoid of N with finite complement in N; it can be described by its minimal set of generators, S = ⟨n_1, ..., n_ν⟩, with GCD(n_1, ..., n_ν) = 1. A semigroup ideal E is a subset of S such that S + E ⊆ E. We set 2·E = {2s | s ∈ E}. According to [7], the numerical duplication of S with respect to a semigroup ideal E of S and an odd integer m ∈ S is the numerical semigroup
S ✶^m E = 2·S ∪ (2·E + m).
A numerical semigroup ring is a ring R of the form k[[S]] = k[[X^{n_1}, ..., X^{n_ν}]], where k is a field and X an indeterminate. Such a ring is a one-dimensional, Noetherian, local integral domain; moreover, it is analytically irreducible (i.e. its integral closure R̄ is a DVR, which is a finite R-module) and in this case R̄ = k[[X]], the ring of formal power series. The valuation v induced by k[[X]] on k[[S]] is given by the order of a formal power series and, if I is an ideal of k[[S]], v(I) = {v(i) ; i ∈ I, i ≠ 0} is a semigroup ideal of S.
Theorem 3.4. Let R = k[[S]] be a numerical semigroup ring, let b = X^m ∈ R, with m odd, and let I be a proper ideal of R. Then R(I)0,−b is isomorphic to the semigroup ring k[[T]], where T = S ✶^m v(I).
Proof. If S = ⟨n_1, ..., n_ν⟩, an element of R(I)0,−b is of the form r(X) + i(X)t, where r(X) = r(X^{n_1}, ..., X^{n_ν}) ∈ k[[S]] and i(X) = i(X^{n_1}, ..., X^{n_ν}) ∈ I. Taking into account that we are factoring out the ideal (I^2(t^2 − X^m)), we can easily check that the map Φ : R(I)0,−b → k[[T]], defined by
Φ(r(X) + i(X)t) = r(X^2) + i(X^2)X^m,
is an isomorphism of rings.
Example 3.5. If R = k[[X^2, X^3]], b = X^5 and I = X^3 k[[X^2, X^3]], then R(I)0,−b ≅ k[[X^4, X^6, X^11]]. According to Corollary 3.3, we get a Gorenstein ring (in fact the semigroup ⟨4, 6, 11⟩ is symmetric), because the ideal I is a canonical ideal of R.
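A small computational check of this example (our own illustration in Python; the truncation bound is arbitrary):

```python
def semigroup(gens, bound=60):
    """Elements of <gens> up to bound, by exhaustive closure."""
    S, changed = {0}, True
    while changed:
        changed = False
        for s in list(S):
            for g in gens:
                if s + g <= bound and s + g not in S:
                    S.add(s + g)
                    changed = True
    return S

S = semigroup([2, 3])                          # v(k[[X^2, X^3]]) = <2, 3>
E = {3 + s for s in S}                         # v(I) for I = X^3 k[[X^2, X^3]]
m = 5                                          # v(b) for b = X^5
T = {2 * s for s in S} | {2 * e + m for e in E}
print(sorted(x for x in T if x <= 25))         # the elements of <4, 6, 11> up to 25
```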
We now consider the case of algebroid branches, i.e. local rings (R, m) of the form k[[X_1, ..., X_n]]/P, where P is a prime ideal of height n − 1 and k is algebraically closed. Then (R, m) is a one-dimensional, Noetherian, complete, local integral domain; moreover, R is analytically irreducible with integral closure isomorphic to k[[X]], and k ⊂ R. If we consider the valuation v induced by k[[X]] on R, we get again that v(R) = {v(r) ; r ∈ R, r ≠ 0} is a numerical semigroup and that v(I) = {v(i) ; i ∈ I, i ≠ 0} is a semigroup ideal of v(R).
Theorem 3.6. Let R be an algebroid branch, let I be a proper ideal of R, and let b ∈ R be such that m = v(b) is odd. Then R(I)0,−b is an algebroid branch and its value semigroup is v(R) ✶^m v(I).
Proof. Since v(b) is odd, b is not a square in Q(R), so, as in Proposition 2.5, t^2 − b is irreducible in Q(R)[t] and R(I)0,−b is an integral domain. Moreover, applying the results of the previous sections, we know that R(I)0,−b is local (we will denote by M its maximal ideal), Noetherian and one-dimensional. It is not difficult to check that the m-adic topology on the R-module R(I)0,−b coincides with the M-adic topology, hence R(I)0,−b is complete. Since R(I)0,−b contains its residue field k, by the Cohen structure theorem it is of the form k[[Y_1, ..., Y_l]]/Q, for some prime ideal Q of height l − 1; so it is an algebroid branch.
Let V = k[[Y]] be the integral closure of R(I)0,−b in its quotient field Q(R(I)0,−b) = Q(R)(t) = k((Y)). We denote by v′ the valuation associated to k[[Y]]; in particular v′(Y) = 1. Since Q(R) = k((X)), we have k((Y)) = k((X))(t); moreover, t^2 = b implies that 2v′(t) = v′(b) = m v′(X). In order to obtain v′(Y) = 1 it is necessary that v′(t) = m and v′(X) = 2.
Now, it is straightforward that v′(R(I)0,−b) = v(R) ✶^m v(I).
References
[1] A. Bagheri, M. Salimi, E. Tavasoli, S. Yassemi, A construction of
quasi-Gorenstein rings, J. Algebra Appl. 11 (2012), no. 1. DOI:
10.1142/s0219498811005361.
[2] M. D’Anna, A construction of Gorenstein rings, J. Algebra 306 (2006), no.
2, 507-519.
[3] M. D’Anna, C. A. Finocchiaro, M. Fontana Algebraic and topological properties
of an amalgamated algebra along an ideal, preprint.
[4] M. D’Anna, M. Fontana, An amalgamated duplication of a ring along an ideal:
basic properties, J. Algebra Appl. 6 (2007), no. 3, 443-459.
[5] M. D’Anna, M. Fontana, The amalgamated duplication of a ring along a
multiplicative-canonical ideal, Arkiv Mat. 45 (2007), 241–252.
[6] M. D’Anna, R. Re, On the amalgamated duplication of a curve singularity
along an ideal, private communication.
[7] M. D’Anna, F. Strazzanti, The numerical duplication of a numerical semigroup, Semigroup Forum, to appear, DOI: 10.1007/s00233-012-9451-x.
[8] R. Fossum, Commutative extensions by canonical modules are Gorenstein
rings, Proc. Am. Math. Soc. 40 (1973), 395–400.
[9] A. V. Geramita, C. Small, Introduction to homological methods in commutative rings, Queen’s Papers in Pure and Applied Mathematics, No. 43. Queen’s
University, Kingston, Ont., 1976.
[10] J. Herzog, E. Kunz, Der kanonische Modul eines Cohen-Macaulay-Rings, Springer Lecture Notes in Math. 238, 1971.
[11] J. Huckaba, Commutative rings with zero divisors, M. Dekker, New York,
1988.
[12] C. Huneke, I. Swanson, Integral Closure of ideals, rings and modules, London Mathematical Society Lecture Note Series vol. 336, Cambridge University
Press, Cambridge 2006.
[13] S. Lang, Algebra, 3rd edition, Addison-Wesley, New Haven 1993.
[14] H. R. Maimani, S. Yassemi, Zero-divisor graphs of amalgamated duplication
of a ring along an ideal J. Pure Appl. Algebra, 212 (2008), 168–174.
[15] M. Nagata, Local Rings, Interscience, New York, 1962.
Security Type Systems as Recursive Predicates⋆
Andrei Popescu
arXiv:1308.3472v1 [cs.CR] 15 Aug 2013
Technische Universität München
Abstract. We show how security type systems from the literature of language-based noninterference can be represented more directly as predicates defined by
structural recursion on the programs. In this context, we show how our uniform
syntactic criteria from [7,8] cover several previous type-system soundness results.
1 Security type systems
As in Example 2 from [7, 8], we assume that atomic statements and tests are built by
means of expressions applied to variables taken from a set var, ranged over by x, y, z.
Thus, exp, ranged over by e, is the set of arithmetic expressions (e.g., x + 1, x ∗ y + 5).
Then atomic commands atm ∈ atom are assignment statements x := e and tests tst ∈ test
are Boolean expressions built from exp (e.g., x > 0, x + 1 = y + z). For any expression
e and test tst, Vars e and Vars tst denote their sets of variables.
States are assignments of integers to variables, i.e., the set state is var → int. Variables are classified as either low (lo) or high (hi) by a fixed security level function
sec : var → {lo, hi}. We let L be the lattice {lo, hi}, where lo < hi.1 We shall use the standard infima and suprema notations for L. Then ∼ is defined as follows: s ∼ t ≡ ∀x ∈
var. sec x = lo =⇒ s x = t x.
We shall look into type systems from the literature, ::, assigning security levels l ∈
{lo, hi}, or pairs of security levels, to expressions and commands. All have in common
the following:
Typing of expressions:
e :: lo if ∀x ∈ Vars e. sec x = lo
e :: hi always
Typing of tests (similar):
tst :: lo if ∀x ∈ Vars tst. sec x = lo
tst :: hi always
The various type systems shall differ in the typing of commands.
But first let us look more closely at their aforementioned common part. We note
that, if an expression or a test has type l and l ≤ k, then it also has type k. In other
words, the following covariant subtyping rules for tests and expressions hold:
⋆ This work was supported by the DFG project Ni 491/13–2, part of the DFG priority program Reliably Secure Software Systems (RS3).
1 One can also consider the more general case of multilevel security, via an unspecified lattice of security levels L—however, this brings neither much additional difficulty, nor much additional insight, so here we focus on this 2-level lattice.
tst :: l    l ≤ k
----------------- (SUBTYPE-TST)
tst :: k

e :: l    l ≤ k
----------------- (SUBTYPE-EXP)
e :: k
Thus, the typing of an expression or test is uniquely determined by its minimal type,
defined as follows:
minTp e = ⋁{sec x. x ∈ Vars e}        minTp tst = ⋁{sec x. x ∈ Vars tst}
The minimal typing operators can of course recover the original typing relation ::
as follows:
Lemma 1. The following hold:
(1) e :: l iff minTp e ≤ l.
(2) tst :: l iff minTp tst ≤ l.
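For instance, with the two-point lattice encoded as booleans (hi = True, so that suprema become disjunctions), minTp and the characterization of Lemma 1 read as follows (a sketch; the encoding and names are ours):

```python
HI, LO = True, False          # the lattice {lo, hi}, with lo < hi

def min_tp(vars_of, sec):
    """Minimal type: the supremum of the levels of the occurring variables."""
    return any(sec[x] == HI for x in vars_of)

def has_type(vars_of, sec, l):
    """Lemma 1: e :: l  iff  minTp e <= l  (on booleans, x <= y is 'not x or y')."""
    return (not min_tp(vars_of, sec)) or l == HI
```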
1.1 Volpano-Smith possibilistic noninterference
In [11, §4], the typing of commands (which we denote by ::1 ) is defined inductively as
follows:
sec x = l    e :: l
-------------------- (ASSIGN)
(x := e) ::1 l

c1 ::1 l    c2 ::1 l
-------------------- (COMPOSE)
(Seq c1 c2) ::1 l

tst :: l    c1 ::1 l    c2 ::1 l
-------------------------------- (IF)
(If tst c1 c2) ::1 l

tst :: lo    c ::1 l
-------------------- (WHILE)
(While tst c) ::1 lo

c1 ::1 l    c2 ::1 l
-------------------- (PAR)
(Par c1 c2) ::1 l

c ::1 l    k ≤ l
---------------- (SUBTYPE)
c ::1 k
We think of c ::1 l as saying:
– There is no downwards flow in c.
– l is a lower bound on the level of the variables that the execution of c writes to.
(This intuition is accurately reflected by Lemma 2 below.)
Actually, [11] does not explicitly consider a rule like (PAR), and in fact uses parallel composition only at the top level. However, it does require that the thread pool (which can be viewed as consisting of a number of parallel compositions) has well-typed threads, which is the same as typing the pool to the minimum of the types of its threads—this is precisely what (PAR) does. (Also, in [11], the rule (WHILE) has the assumption c ::1 lo rather than c ::1 l—this alternative is of course equivalent, thanks to (SUBTYPE).)
Due to the subtyping rule, here we have a phenomenon dual to the one for expressions and tests: if a command has type l and k ≤ l, then it also has type k—thus, the
typing of a command, if any, is uniquely determined by its maximal type. The difference
from expressions and tests is that such a type may not exist, making it necessary to keep a “safety” predicate during the computation of the maximal type. For example, consider the computation of the maximal type of If tst c1 c2 according to the (IF) rule: Assume l0
is the minimal type of tst and l1 , l2 are the maximal types of c1 and c2 , respectively. The
rule (IF) requires the three types involved in the hypothesis to be equal, and therefore
we need to upcast l0 and downcast l1 and l2 so that we obtain a common type l—thus,
we need l0 ≤ l ≤ l1 ∧ l2 . Moreover, l has to be as high as possible. Such an l of course
only exists if l0 ≤ l1 ∧ l2 , and in this case the maximal l is l1 ∧ l2 . In summary, the rule
(IF) tells us the following:
– If tst c1 c2 is safe (i.e., type checks) iff c1 and c2 are safe and l0 ≤ l1 ∧ l2 .
– If safe, the maximal type of If tst c1 c2 is l1 ∧ l2 .
Applying this reasoning to all the rules for ::1 , we obtain the function maxTp1 :
com → L and the predicate safe1 : com → bool defined recursively on the structure of
commands:2
Definition 1.
– safe1 (x := e) = (minTp e ≤ sec x)
– maxTp1 (x := e) = sec x
– safe1 (Seq c1 c2 ) = (safe1 c1 ∧ safe1 c2 )
– maxTp1 (Seq c1 c2 ) = (maxTp1 c1 ∧ maxTp1 c2 )
– safe1 (If tst c1 c2 ) = (safe1 c1 ∧ safe1 c2 ∧ (minTp tst ≤ (maxTp1 c1 ∧ maxTp1 c2 )))
– maxTp1 (If tst c1 c2 ) = (maxTp1 c1 ∧ maxTp1 c2 )
– safe1 (While tst c) = (safe1 c ∧ (minTp tst = lo))
– maxTp1 (While tst c) = lo
– safe1 (Par c1 c2 ) = (safe1 c1 ∧ safe1 c2 )
– maxTp1 (Par c1 c2 ) = (maxTp1 c1 ∧ maxTp1 c2 )
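Definition 1 is literally a functional program; here is a sketch in Python, with an ad-hoc encoding of commands as nested tuples and of levels as booleans (hi = True, so that the lattice infimum is the logical and), reusing min_tp from the sketch after Lemma 1. The encoding, names and the example program are ours:

```python
# Commands: ('assign', x, vars_of_e), ('seq', c1, c2), ('par', c1, c2),
#           ('if', vars_of_tst, c1, c2), ('while', vars_of_tst, c).

def max_tp1(c, sec):
    """Maximal type of a command (hi = True, lo = False)."""
    tag = c[0]
    if tag == 'assign':
        return sec[c[1]]
    if tag in ('seq', 'par'):
        return max_tp1(c[1], sec) and max_tp1(c[2], sec)        # infimum
    if tag == 'if':
        return max_tp1(c[2], sec) and max_tp1(c[3], sec)
    if tag == 'while':
        return False                                            # lo
    raise ValueError(tag)

def safe1(c, sec):
    """The safety predicate of Definition 1 (x <= y is 'not x or y')."""
    tag = c[0]
    if tag == 'assign':
        return (not min_tp(c[2], sec)) or sec[c[1]]             # minTp e <= sec x
    if tag in ('seq', 'par'):
        return safe1(c[1], sec) and safe1(c[2], sec)
    if tag == 'if':
        return (safe1(c[2], sec) and safe1(c[3], sec)
                and ((not min_tp(c[1], sec))
                     or (max_tp1(c[2], sec) and max_tp1(c[3], sec))))
    if tag == 'while':
        return safe1(c[2], sec) and not min_tp(c[1], sec)       # minTp tst = lo
    raise ValueError(tag)

# Example: a high test guarding assignments to a high variable type-checks.
sec = {'l': False, 'h': True}
prog = ('if', ['h'], ('assign', 'h', ['l']), ('assign', 'h', ['h']))
print(safe1(prog, sec), max_tp1(prog, sec))   # True True
```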
Lemma 2. The following are equivalent:
(1) c ::1 l
(2) safe1 c and l ≤ maxTp1 c.
Proof idea: (1) implies (2): By easy induction on the definition of ::1 .
(2) implies (1): By easy structural induction on c. ⊓⊔
Now, let us write:
– low e, for the sentence minTp e = lo
– low tst, for the sentence minTp tst = lo
– fhigh c (read “c finite and high"), for the sentence maxTp1 c = hi
(Thus, low : exp → bool, low : test → bool and fhigh : com → bool.)
Then, immediately from the definitions of minTp and maxTp1 (taking advantage of
the fact that L = {hi, lo}) we have the following:
– low e = (∀x ∈ Vars e. sec x = lo)
– low tst = (∀x ∈ Vars tst. sec x = lo)
– safe1 (x := e) = ((sec x = hi) ∨ low e)
– fhigh (x := e) = (sec x = hi)
– safe1 (Seq c1 c2 ) = (safe1 c1 ∧ safe1 c2 )
2 Notice the overloaded, but consistent, usage of the infimum operator ∧ in both the lattice L = {lo, hi} and the lattice of truth values bool (in the latter it is simply the logical “and”).
– fhigh (Seq c1 c2 ) = (fhigh c1 ∧ fhigh c2 )
– safe1 (If tst c1 c2 ) = safe1 c1 ∧ safe1 c2 , if low tst; safe1 c1 ∧ safe1 c2 ∧ fhigh c1 ∧ fhigh c2 , otherwise
– fhigh (If tst c1 c2 ) = (fhigh c1 ∧ fhigh c2 )
– safe1 (While tst c) = (low tst ∧ safe1 c)
– fhigh (While tst c) = False
– safe1 (Par c1 c2 ) = (safe1 c1 ∧ safe1 c2 )
– low (Par c1 c2 ) = (low c1 ∧ low c2 )
Notice that the above clauses characterize the predicates safe1 : com → bool and fhigh : com → bool uniquely, i.e., they could act as their definitions (recursively on the structure of commands). Since the predicate safe1 is stronger than fhigh (as its clauses are strictly stronger), we can remove safe1 c1 ∧ safe1 c2 from the “otherwise” case of the If clause for safe1 , obtaining:
– safe1 (If tst c1 c2 ) = (safe1 c1 ∧ safe1 c2 , if low tst; fhigh c1 ∧ fhigh c2 , otherwise) = (safe1 c1 ∧ safe1 c2 , if low tst; fhigh (If tst c1 c2 ), otherwise)
The clauses for safe1 and fhigh are now seen to coincide with our [7, 8, §6] clauses
for ≈WT and discr ∧ mayT, respectively, with the following variation: in [7, 8, §6] we do
not commit to particular forms of tests or atomic statements, and therefore replace:
– low tst with cpt tst
– fhigh atm with pres atm (where atm is an atom, such as x := e)
– safe1 atm with cpt atm
Note that the predicates cpt and pres, as defined in [7, 8, §4], are semantic conditions expressed in terms of state indistinguishability, while low, fhigh and safe1 are syntactic checks—the syntactic checks are easily seen to be stronger, i.e., we have low tst =⇒ cpt tst, fhigh atm =⇒ pres atm and safe1 atm =⇒ cpt atm.
The main concurrent noninterference result from [11], Corollary 5.7, states (something
slightly weaker than) the following: if c ::1 l for some l ∈ L, then c ≈WT c. In the light of
Lemma 2 and the above discussion, this result is subsumed by our Prop. 4 from [7, 8],
taking χ to be ≈WT.
For the rest of the type systems we discuss, we shall proceed with similar transformations at a higher pace.
1.2 Volpano-Smith scheduler-independent noninterference
In [11, §7], another type system is defined, ::2 , which has the same typing rules as ::1
except for the rule for If, which is weakened by requiring the typing of the test to be lo:3
tst :: lo    c1 ::2 l    c2 ::2 l
--------------------------------- (IF)
(If tst c1 c2) ::2 l

3 The same type system (except for the (PAR) rule) is introduced in [12] for a sequential language with the purpose of preventing leaks through the covert channels of termination and exceptions.
Definition 2. We define safe2 just like safe1 , except for the case of If, which becomes:
– safe2 (If tst c1 c2 ) = ((minTp tst = lo) ∧ safe2 c1 ∧ safe2 c2 )
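In the running Python sketch (reusing min_tp and the tuple encoding from the sketch after Definition 1; again our own illustration), this is a one-clause change:

```python
def safe2(c, sec):
    """Like safe1, except that If additionally requires a low test."""
    tag = c[0]
    if tag == 'assign':
        return (not min_tp(c[2], sec)) or sec[c[1]]
    if tag in ('seq', 'par'):
        return safe2(c[1], sec) and safe2(c[2], sec)
    if tag == 'if':
        return (not min_tp(c[1], sec)) and safe2(c[2], sec) and safe2(c[3], sec)
    if tag == 'while':
        return safe2(c[2], sec) and not min_tp(c[1], sec)
    raise ValueError(tag)
```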
Similarly to Lemma 2, we can prove:
Lemma 3. The following are equivalent:
(1) c ::2 l
(2) safe2 c and l ≤ maxTp1 c.
The inferred clauses for safe2 are the same as those for safe1 , except for the one for If, which becomes:
– safe2 (If tst c1 c2 ) = (low tst ∧ safe2 c1 ∧ safe2 c2 )
Then safe2 is seen to coincide with siso from [7, 8, §6].
In [11] it is proved (via Theorem 7.1) that the soundness result for ::1 also holds for ::2 .
In fact, one can see that Theorem 7.1 can be used to prove something much stronger:
if c ::2 l for some l ∈ L, then siso c. This result is subsumed by our Prop. 4 from [7, 8],
taking χ to be siso.
1.3 Boudol-Castellani termination-insensitive noninterference
As we already discussed in [7, 8], Boudol and Castellani [3, 4] work on improving the harsh Volpano-Smith typing of While (which requires low tests), but they pay a (comparatively small) price in terms of typing sequential composition, where what the first command reads is required to be below what the second command writes. (Essentially the same type system is introduced independently by Smith [9, 10] for studying probabilistic noninterference in the presence of uniform scheduling. Boudol and Castellani, as well as Smith, consider parallel composition only at the top level. Barthe and Nieto [1] lift this restriction, allowing Par to be nested inside other language constructs, as we do here.)
To achieve this, they type commands c to a pair of security levels (l, l ′ ): the contravariant “write" type l (similar to the Volpano-Smith one) and an extra covariant
“read" type l ′ .
sec x = l    e :: l
-------------------- (ASSIGN)
(x := e) ::3 (l, l′)

c1 ::3 (l1, l1′)    c2 ::3 (l2, l2′)    l1′ ≤ l2
------------------------------------------------ (COMPOSE)
(Seq c1 c2) ::3 (l1 ∧ l2, l1′ ∨ l2′)

tst :: l0    c1 ::3 (l, l′)    c2 ::3 (l, l′)    l0 ≤ l
-------------------------------------------------------- (IF)
(If tst c1 c2) ::3 (l, l0 ∨ l′)

tst :: l′    c ::3 (l, l′)    l′ ≤ l
------------------------------------ (WHILE)
(While tst c) ::3 (l, l′)

c1 ::3 l    c2 ::3 l
-------------------- (PAR)
(Par c1 c2) ::3 l

c ::3 (l1, l1′)    l2 ≤ l1    l1′ ≤ l2′
---------------------------------------- (SUBTYPE)
c ::3 (l2, l2′)

We think of c ::3 (l, l′) as saying:
– There is no downwards flow in c.
– l is a lower bound on the level of the variables that the execution of c writes to.
– l ′ is an upper bound on the level of the variables that c reads, more precisely, that
the control flow of the execution of c depends on.
(This intuition is accurately reflected by Lemma 4 below.)
In [3, 4], the rule for While is slightly different, namely:

  tst :: l0    c ::3 (l, l′)    l0 ∨ l′ ≤ l
  ------------------------------------------ (WHILE')
  (While tst c) ::3 (l, l0 ∨ l′)
However, due to subtyping, it is easily seen to be equivalent to the one we listed. Indeed:
– (WHILE) is an instance of (WHILE’) taking l0 = l ′ .
– Conversely, (WHILE’) follows from (WHILE) as follows: Assume the hypotheses
of (WHILE’). By subtyping, we have tst :: l0 ∨ l ′ and c ::3 (l, l0 ∨ l ′ ), hence, by
(WHILE), we have (While tst c) ::3 (l, l0 ∨ l ′ ), as desired.
Following for ::3 the same technique as in the cases of ::1 and ::2 , we define the
functions maxWtp : com → L (read "maximum writing type") and minRtp : com → L
(read "minimum reading type") and the predicate safe3 : com → bool:
Definition 3. – safe3 (x := e) = (minTp e ≤ sec x)
– maxWtp (x := e) = sec x
– minRtp (x := e) = lo
– safe3 (Seq c1 c2 ) = (safe3 c1 ∧ safe3 c2 ∧ (minRtp c1 ≤ maxWtp c2 ))
– maxWtp (Seq c1 c2 ) = (maxWtp c1 ∧ maxWtp c2 )
– minRtp (Seq c1 c2 ) = (minRtp c1 ∨ minRtp c2 )
– safe3 (If tst c1 c2 ) = (safe3 c1 ∧ safe3 c2 ∧ (minTp tst ≤ (maxWtp c1 ∧ maxWtp c2 )))
– maxWtp (If tst c1 c2 ) = (maxWtp c1 ∧ maxWtp c2 )
– minRtp (If tst c1 c2 ) = (minTp tst ∨ minRtp c1 ∨ minRtp c2 )
– safe3 (While tst c) = (safe3 c ∧ ((minTp tst ∨ minRtp c) ≤ maxWtp c))
– maxWtp (While tst c) = maxWtp c
– minRtp (While tst c) = (minTp tst ∨ minRtp c)
– safe3 (Par c1 c2 ) = (safe3 c1 ∧ safe3 c2 )
– maxWtp (Par c1 c2 ) = (maxWtp c1 ∧ maxWtp c2 )
– minRtp (Par c1 c2 ) = (minRtp c1 ∨ minRtp c2 )
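Definition 3 transcribes directly into code over the toy AST introduced in §1.2: on the two-point lattice, meet (∧) is min and join (∨) is max. Only the AST encoding is ours; the clauses themselves are exactly those above.

```python
# Definition 3 over the toy AST: maxWtp, minRtp and safe3.
def maxWtp(c) -> int:
    if isinstance(c, Assign): return sec[c.x]
    if isinstance(c, (Seq, If, Par)):                 # ∧ = min on {lo, hi}
        return min(maxWtp(c.c1), maxWtp(c.c2))
    if isinstance(c, While): return maxWtp(c.c)
    raise TypeError(c)

def minRtp(c) -> int:
    if isinstance(c, Assign): return LO
    if isinstance(c, (Seq, Par)):                     # ∨ = max on {lo, hi}
        return max(minRtp(c.c1), minRtp(c.c2))
    if isinstance(c, If):
        return max(minTp(c.tst), minRtp(c.c1), minRtp(c.c2))
    if isinstance(c, While):
        return max(minTp(c.tst), minRtp(c.c))
    raise TypeError(c)

def safe3(c) -> bool:
    if isinstance(c, Assign): return minTp(c.e) <= sec[c.x]
    if isinstance(c, Seq):
        return safe3(c.c1) and safe3(c.c2) and minRtp(c.c1) <= maxWtp(c.c2)
    if isinstance(c, If):
        return (safe3(c.c1) and safe3(c.c2)
                and minTp(c.tst) <= min(maxWtp(c.c1), maxWtp(c.c2)))
    if isinstance(c, While):
        return safe3(c.c) and max(minTp(c.tst), minRtp(c.c)) <= maxWtp(c.c)
    if isinstance(c, Par):
        return safe3(c.c1) and safe3(c.c2)
    raise TypeError(c)

# e.g. a low assignment guarded by a high loop test is rejected:
# safe3(While(Exp(frozenset({"h"})), Assign("l", Exp(frozenset())))) == False
```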
Furthermore, similarly to the cases of safe1 and safe2 , we have that:
Lemma 4. The following are equivalent:
(1) c ::3 (l, l ′ )
(2) safe3 c and l ≤ maxWtp c and minRtp c ≤ l ′ .
Now, let us write:
– high c, for the sentence maxWtp c = hi
– low c, for the sentence minRtp c = lo
Then, immediately from the definitions of maxWtp and minRtp, we have the following:
– safe3 (x := e) = ((sec x = hi) ∨ low e)
– high (x := e) = (sec x = hi)
– low (x := e) = True
– safe3 (Seq c1 c2 ) = (safe3 c1 ∧ safe3 c2 ∧ (low c1 ∨ high c2 ))
– high (Seq c1 c2 ) = (high c1 ∧ high c2 )
– low (Seq c1 c2 ) = (low c1 ∧ low c2 )
– safe3 (If tst c1 c2 ) = (safe3 c1 ∧ safe3 c2 ∧ (low tst ∨ (high c1 ∧ high c2 )))
– high (If tst c1 c2 ) = (high c1 ∧ high c2 )
– low (If tst c1 c2 ) = (low tst ∧ low c1 ∧ low c2 )
– safe3 (While tst c) = (safe3 c ∧ ((low tst ∧ low c) ∨ high c))
– high (While tst c) = high c
– low (While tst c) = (low tst ∧ low c)
– safe3 (Par c1 c2 ) = (safe3 c1 ∧ safe3 c2 )
– high (Par c1 c2 ) = (high c1 ∧ high c2 )
– low (Par c1 c2 ) = (low c1 ∧ low c2 )
Then high and low are stronger than safe3 , and hence we can rewrite the Seq, If and
While clauses for safe3 as follows:
– safe3 (Seq c1 c2 ) = ((low c1 ∧ safe3 c2 ) ∨ (safe3 c1 ∧ high c2 ))
– safe3 (If tst c1 c2 ) = (safe3 c1 ∧ safe3 c2 , if low tst; high c1 ∧ high c2 , otherwise)
  = (safe3 c1 ∧ safe3 c2 , if low tst; high (If tst c1 c2 ), otherwise)
– safe3 (While tst c) = ((low tst ∧ low c) ∨ high c) = (low (While tst c) ∨ high (While tst c))
The clauses for safe3 , high and low are now seen to coincide with our clauses from [7, 8, §6]
for ≈01, discr and siso, respectively.
The main concurrent noninterference result from [3, 4] (Theorem 3.13 in [3] and Theorem 3.16 in [4]) states (something slightly weaker than) the following: if c ::3 (l, l′) for
some l, l′ ∈ L, then c ≈01 c. In the light of Lemma 4 and the above discussion, this result
is subsumed by our Prop. 4 from [7, 8], taking χ to be ≈01.
1.4 Matos and Boudol’s further improvement
Matos and Boudol [2, 5, 6] study a richer language than the one we consider here,
namely, an ML-like language. Moreover, they also consider a declassification construct. We shall ignore these extra features and focus on the restriction of their results
to our simple while language. Moreover, they parameterize their development by a set
of strongly terminating expressions (commands in our setting)—here we fix this set to
be that of commands not containing while loops.
The type system ::4 from [2,5,6] is based on a refinement of ::3 , noticing that, as far
as the reading type goes, one does not care about all variables a command reads (i.e.,
the variables that affect the control flow of its execution), but can restrict attention to
those that may affect the termination of its execution.
The typing rules of ::4 are identical to those of ::3 , except for the If rule, which
becomes:

  tst :: l0    c1 ::4 (l, l′)    c2 ::4 (l, l′)    l0 ≤ l
  -------------------------------------------------------- (IF)
  (If tst c1 c2) ::4 (l, k)

where k = lo if c1 , c2 do not contain While subexpressions, and k = l0 ∨ l′ otherwise.
We think of c ::4 (l, l′) as saying:
– There is no downwards flow in c.
– l is a lower bound on the level of the variables that the execution of c writes to.
– l′ is an upper bound on the level of the variables that c termination-reads, i.e., that
termination of the execution of c depends on.
(In [2, 5, 6], While is not a primitive, but is derived from higher-order recursion—
however, the effect of the higher-order typing system on While is the same as that of our
::3 , as shown in [6]. Moreover, due to working in a functional language with side effects,
[2, 5, 6] record not two, but three security types: in addition to our l and l ′ (called there
the writing and termination effects, respectively), they also record l ′′ (called there the
reading effect) which represents an upper bound on the security levels of variables the
returned value of c depends on—here, this information is unnecessary, since c returns
no value.)
Definition 4. We define the function minTRtp : com → L (read "minimum termination-reading type") and the predicate safe4 : com → bool as follows: minTRtp is defined using
the same recursive clauses as minRtp, except for the clause for If, which becomes:
– minTRtp (If tst c1 c2 ) = (lo, if c1 , c2 do not contain While subexpressions;
  minTp tst ∨ minTRtp c1 ∨ minTRtp c2 , otherwise)
safe4 is defined using the same clauses as safe3 , with minTRtp replacing minRtp.
Lemma 5. The following are equivalent:
(1) c ::4 (l, l ′ )
(2) safe4 c and l ≤ maxWtp c and minTRtp c ≤ l ′ .
Now, let us write:
– wlow c (read "c has low tests on top of while subexpressions"), for the sentence
minTRtp c = lo
– noWhile c, for the sentence "c contains no While subexpressions"
We obtain:
– safe4 (x := e) = ((sec x = hi) ∨ low e)
– wlow (x := e) = True
– safe4 (Seq c1 c2 ) = (safe4 c1 ∧ safe4 c2 ∧ (wlow c1 ∨ high c2 ))
– wlow (Seq c1 c2 ) = (wlow c1 ∧ wlow c2 )
– safe4 (If tst c1 c2 ) = (safe4 c1 ∧ safe4 c2 ∧ (low tst ∨ (high c1 ∧ high c2 )))
– wlow (If tst c1 c2 ) = (low tst ∧ wlow c1 ∧ wlow c2 ) ∨ (noWhile c1 ∧ noWhile c2 )
– safe4 (While tst c) = (safe4 c ∧ ((low tst ∧ wlow c) ∨ high c))
– wlow (While tst c) = (low tst ∧ wlow c)
– safe4 (Par c1 c2 ) = (safe4 c1 ∧ safe4 c2 )
– wlow (Par c1 c2 ) = (wlow c1 ∧ wlow c2 )
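Definition 4 likewise transcribes over the same toy AST; safe4 reuses the safe3 clauses with minTRtp in place of minRtp, and noWhile is the evident syntactic check. Again, only the encoding is ours.

```python
# Definition 4 over the toy AST: noWhile, minTRtp, safe4 and wlow.
def noWhile(c) -> bool:
    if isinstance(c, Assign): return True
    if isinstance(c, While):  return False
    return noWhile(c.c1) and noWhile(c.c2)            # Seq, If, Par

def minTRtp(c) -> int:
    if isinstance(c, Assign): return LO
    if isinstance(c, (Seq, Par)):
        return max(minTRtp(c.c1), minTRtp(c.c2))
    if isinstance(c, If):                             # the modified clause
        if noWhile(c.c1) and noWhile(c.c2):
            return LO
        return max(minTp(c.tst), minTRtp(c.c1), minTRtp(c.c2))
    if isinstance(c, While):
        return max(minTp(c.tst), minTRtp(c.c))
    raise TypeError(c)

def safe4(c) -> bool:                                 # safe3 with minTRtp
    if isinstance(c, Assign): return minTp(c.e) <= sec[c.x]
    if isinstance(c, Seq):
        return safe4(c.c1) and safe4(c.c2) and minTRtp(c.c1) <= maxWtp(c.c2)
    if isinstance(c, If):
        return (safe4(c.c1) and safe4(c.c2)
                and minTp(c.tst) <= min(maxWtp(c.c1), maxWtp(c.c2)))
    if isinstance(c, While):
        return safe4(c.c) and max(minTp(c.tst), minTRtp(c.c)) <= maxWtp(c.c)
    if isinstance(c, Par):
        return safe4(c.c1) and safe4(c.c2)
    raise TypeError(c)

def wlow(c) -> bool:
    return minTRtp(c) == LO

# A While-free If on a high test followed by a low write is safe4 but not
# safe3, illustrating the refinement:
# c = Seq(If(Exp(frozenset({"h"})), Assign("h", Exp(frozenset())),
#            Assign("h", Exp(frozenset({"h"})))), Assign("l", Exp(frozenset())))
# safe4(c) == True, safe3(c) == False
```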
We can prove by induction on c that safe1 c = (safe4 c ∧ wlow c). Using this, we can
rewrite the Seq, If and While clauses for safe4 as follows:
– safe4 (Seq c1 c2 ) = ((safe1 c1 ∧ safe4 c2 ) ∨ (safe4 c1 ∧ high c2 ))
– safe4 (If tst c1 c2 ) = (safe4 c1 ∧ safe4 c2 , if low tst; high (If tst c1 c2 ), otherwise)
– safe4 (While tst c) = (safe1 (While tst c) ∨ high (While tst c))
Then safe4 turns out to coincide with our ≈W from [7, 8, §6].
The main noninterference result from [2, 5, 6] (in [2], the soundness theorem in §5)
states the following: if c ::4 (l, l′) for some l, l′ ∈ L, then c ≈W c. In the light of Lemma 5
and the above discussion, this result is subsumed by our Prop. 4 from [7, 8], taking χ to be
≈W.
References
1. G. Barthe and L. P. Nieto. Formally verifying information flow type systems for concurrent
and thread systems. In FMSE, pages 13–22, 2004.
2. G. Boudol. On typing information flow. In ICTAC, pages 366–380, 2005.
3. G. Boudol and I. Castellani. Noninterference for concurrent programs. In ICALP, pages
382–395, 2001.
4. G. Boudol and I. Castellani. Noninterference for concurrent programs and thread systems.
Theoretical Computer Science, 281(1-2):109–130, 2002.
5. A. A. Matos and G. Boudol. On declassification and the non-disclosure policy. In CSFW,
pages 226–240, 2005.
6. A. A. Matos and G. Boudol. On declassification and the non-disclosure policy. Journal of
Computer Security, 17(5):549–597, 2009.
7. A. Popescu, J. Hölzl, and T. Nipkow. Proving concurrent noninterference. In CPP, pages
109–125, 2012.
8. A. Popescu, J. Hölzl, and T. Nipkow. Formal verification of concurrent noninterference.
Journal of Formalized Reasoning, 2013. Extended version of [7]. To appear.
9. G. Smith. A new type system for secure information flow. In IEEE Computer Security
Foundations Workshop, pages 115–125, 2001.
10. G. Smith. Probabilistic noninterference through weak probabilistic bisimulation. In IEEE
Computer Security Foundations Workshop, pages 3–13, 2003.
11. G. Smith and D. Volpano. Secure information flow in a multi-threaded imperative language.
In ACM Symposium on Principles of Programming Languages, pages 355–364, 1998.
12. D. M. Volpano and G. Smith. Eliminating covert flows with minimum typings. In CSFW,
pages 156–169, 1997.
Power-Traffic Coordinated Operation for Bi-Peak Shaving and Bi-Ramp Smoothing
–A Hierarchical Data-Driven Approach
Huaiguang Jiang, Member, IEEE, Yingchen Zhang, Senior Member, IEEE, Yuche Chen, Changhong
Zhao, Member, IEEE, Jin Tan, Member, IEEE
arXiv:1712.00211v1 [] 1 Dec 2017
National Renewable Energy Laboratory, Golden, CO 80401
Abstract—With the rapid adoption of distributed photovoltaics
(PVs) in certain regions, issues such as a lower net-load valley during the day and steeper ramping of the demand after sunset
start to challenge normal operations at utility companies. Urban
transportation systems also have high peak congestion periods
and steep ramping because of traffic patterns. We propose using
the emerging electric vehicles (EVs) and the charging/discharging
stations (CDSs) to coordinate the operation of the power
distribution system (PDS) and the urban transportation system
(UTS), so that the operation challenges in each system can be
mitigated by utilizing the flexibility of the other system.
The proposed operation approach is designed hierarchically
and consists of a higher and a lower level. In the higher level, we
assume integrated operation of both the PDS and UTS, and the target
of the operation is to minimize the social cost. Meanwhile, the
target of the EVs and the CDSs as customers is to minimize their
own expenditures. Then, there exists an equilibrium between
the two targets that determines the optimal charging/discharging price.
In the lower level, the temporal and spatial models of the PDS
and UTS are developed to provide a detailed analysis of the
power-traffic system. Specifically, the PDS is built with a three-phase unbalanced AC power flow model; the optimal power flow
(OPF) problem is relaxed with semidefinite programming (SDP) and solved with the alternating direction method
of multipliers (ADMM). A dynamic user equilibrium (DUE)
problem is formulated for the UTS, which is based on the static
user equilibrium (SUE) with additional constraints to ensure
a temporally continuous path of flow. The EVs and the CDSs
function as reserves for both the PDS and UTS, and the state of
charge (SOC) is considered to optimize the charging/discharging
schedule and reduce the impacts to the PDS. We conducted the
simulation and numerical analysis using the IEEE 8,500-bus system for
the PDS and the Sioux Falls system with about 10,000 cars for
the UTS. The two systems are simulated jointly to demonstrate the
feasibility and effectiveness of the proposed approach.
Index terms—Power distribution system, duck curve, urban
transportation system, electric vehicle, charging/discharging
station, state of charge, dynamic user equilibrium, traffic
congestion, traffic pattern, optimal power flow, distributed
computation
NOMENCLATURE

Abbreviations
PDS – Power distribution system
UTS – Urban transportation system
CDS – Charging/discharging station
SUE – Static user equilibrium
DUE – Dynamic user equilibrium
OPF – Optimal power flow
SDP – Semidefinite relaxation programming
ADMM – Alternating direction method of multipliers
EV – Electric vehicle
SOC – State of charge

Functions
F_UTY^t – Objective function of the utility and customer in time interval t, t ∈ Dt
F_CSO^t – Objective function of the customer in time interval t, t ∈ Dt
F_CDS^t – Objective function of the smart charging/discharging in a CDS
f_PS^t – Cost function of the PDS in time interval t
f_UTS^t – Cost function of the UTS in time interval t
f_WT^t – Cost function of the charging/discharging waiting time
Q_PDS^t – Netload of the PDS in time interval t
Q_DPDS^t – Total charging/discharging power of all CDSs
Q_DUTS^t – Total number of EVs for charging/discharging
F_PDS^t – Objective function of the OPF in the PDS
f_ka(x_ka^t) – Travel time function on link ka in time interval t, where the traffic flow is x_ka^t

Parameters
G^UTS – The graph for the UTS with a node set and a link set: G^UTS = [V^UTS, E^UTS], where V^UTS = {1, 2, ..., n_V^UTS} and E^UTS = {1, 2, ..., m_E^UTS}
P_{ru su} – A set of paths used to connect the OD pair (ru, su), where ru ∈ V_ru^UTS and su ∈ V_su^UTS, and V_ru^UTS and V_su^UTS are the origin node set and destination node set, respectively
C_ka – The traffic flow capacity on link ka
G^PDS – The graph for the PDS with a node set and a link set: G^PDS = [V^PDS, E^PDS], where V^PDS = {1, 2, ..., n_V^PDS} and E^PDS = {1, 2, ..., m_E^PDS}
Ῡ, Υ – The upper bound and lower bound of the SOC
γ1 – Weight factor between the PDS and the UTS
γ2 – Iteration step coefficient for gradient descent
̺1, ̺2 – The electrical power price and the congestion fee
̺3, ̺4 – The EV parking ratios to the CDSs
χ_i1 – The capacity of CDS i1

Variables
θ_ka^t – The traffic flow on link ka in time interval t
π^{*t} – The optimal charging/discharging price in time interval t
q_{ru su}^d – Number of vehicles from ru to su departing in time interval d via any path, d ∈ Dt, where Dt is the time interval set
h_{p_ru su}^{d1} – Number of vehicles assigned to path p_{ru su} with departing time interval d1
δ_{p_ru su, ka}^{d1 t} – A 0-1 variable indicating whether, in time interval t, the trip from ru to su departing in time interval d1 is assigned to path p_{ru su} via link ka
V_i^{φt} – Complex voltage on bus i with phase φ, φ ⊆ {a, b, c}
I_i^{φt} – Complex current on bus i with phase φ, φ ⊆ {a, b, c}
s_i^{φt} – Power injection on bus i, where s_i^{φt} = p_i^{φt} + iq_i^{φt}
z_i^φ – The complex impedance matrix, z_i^φ = r_i^φ + ix_i^φ
S_i^{φt} – The complex power from bus i to bus i1, where bus i1 is the ancestor of bus i; S_i^{φt} = V_i^{φt}(I_i^{φt})^H, with H the Hermitian transpose
v_i^{φt}, l_i^{φt} – v_i^{φt} = V_i^{φt}(V_i^{φt})^H, l_i^{φt} = I_i^{φt}(I_i^{φt})^H
x^{t,k}, y^{t,k}, λ^{t,k} – The kth iterative variables of the ADMM for distributed OPF computation
C_{ev,i3}^{t1} – The discharging (positive) or charging (negative) speed of EV i3 at time t1
I. INTRODUCTION

Rooftop photovoltaic (PV) generation has gained a foothold in many power
systems because of its continuously declining cost [1]–[3].
It brings many benefits to customers, such as reduced energy
cost; however, it also presents unprecedented challenges to
power system operation and control. The upper right-hand
plot in Fig. 1 shows the so-called "duck curve", which has
been recorded in some regions with high distributed-PV penetration, such as
California [4]–[6]. The peak solar generation in the middle
of the day sinks the netload to a lower valley, and then, when
the peak load occurs right after sunset, a huge volume of
energy demand ramps up in a short time frame. This creates
an artificial peak and ramp that are costly to balance using
current power system assets.

It is not surprising that other man-made complex systems
also suffer similar issues. As shown in the lower right-hand
plot in Fig. 1, the transportation system also has high peaks
and steep ramps due to fairly constrained traffic patterns at
rush hours in urban areas [7]–[9]. Thanks to the proliferation of EVs
and the widely deployed CDSs, the two originally independent
systems, the PDS and the UTS, can be coupled together.

Specifically, the increasing number of EVs can be seen
as a significant amount of distributed and highly manipulatable small reserves that can be used to provide demand
response, enhance system reliability, and provide other
services for the power systems [10]–[16]. In these studies,
the geographical information and transportation information
are ignored, because the EVs are treated as aggregated
loads/reserves. In [17], the optimal placement of charging stations is studied with the geographical
and transportation information. In [18], based on locational
marginal pricing (LMP), an optimal deployment of charging
stations is proposed for EVs. In [19], an optimal traffic-power
flow is proposed with wireless charging technology and
congestion toll manipulation.

In this paper, a two-step power-traffic coordinated operation approach is proposed to address the bi-peak and bi-ramp problems for both the PDS and the UTS. It also provides a
multifunction platform for research on topics such as emission
reduction, multisystem integration, and future city planning.
The main contributions of this paper are:
1) Considering the complexity of the power-traffic system,
a two-step coordinated approach is designed from the
higher level to the lower level to operate the system
hierarchically in the spatial and temporal domains. In the higher
level, both the PDS and the UTS are regulated and
treated together as a utility to minimize the social
cost. The EVs and CDSs are treated as customers to
minimize their expenditure. Then, an equilibrium exists
between the utility and the customers to determine the
operation variables such as the optimal charging/discharging
price, total electrical demand, and total available EVs. In
the lower level, the detailed models of the PDS, UTS, EVs
and CDSs are considered to specifically determine these
variables in the spatial and temporal domains.
2) In the lower level, the PDS is built with the branch
flow model, which is equivalent to the classic bus-injection model and more suitable for a PDS with
radial topology [20]. There are several developed approaches for solving an optimization problem associated
with a PDS, usually called OPF, such as DC OPF [21],
genetic algorithms [22], [23], and OPF relaxation with the
second-order cone program (SOCP) [24]. Based on [25],
[26], we relax the OPF with the SDP for the three-phase unbalanced PDS. Meanwhile, several distributed
algorithms have been designed to solve the OPF, such as dual
decomposition [27] and the predictor corrector proximal
multiplier (PCPM) [28]. Based on [25], [29], the ADMM
is applied to decompose the OPF problem and reduce
computation time.
3) In the lower level, for the UTS, the DUE is widely
used to estimate the traffic volume, congestion time,
and travel cost for each link [30]. Based on the assumption that all travelers choose the best route
to minimize their transportation cost, in the Wardrop
user equilibrium (UE), the travel times of all used paths are
equal for each origin-destination (OD) pair and less
than that of any unused path [31], [32]. In [19], a static user
equilibrium assignment is used to integrate with the
PDS. Considering that EV behaviors can be impacted
by the charging/discharging prices, a DUE is applied to
keep the temporal domain continuous for the UTS.
4) In this paper, the EVs and the CDSs are designed as
the "reserves" for both the PDS and the UTS.
Considering the SOC [33], [34], an optimal charging/discharging schedule is designed to meet the requirements of the PDS and UTS and reduce the impacts to the
PDS. The test bench consists of real systems: the PDS is
the IEEE 8,500-bus distribution system, and the UTS is
the Sioux Falls transportation system with about 10,000
EVs, to demonstrate the proposed approach.

Based on Fig. 1, we acknowledge that cybersecurity is
a critical aspect to consider in the proposed power-traffic
system operation. Also, the large volume of data generated
by the proposed power-traffic system requires a high-speed,
flexible and reliable communication network. Because the
proposed power-traffic system contains two complex systems,
many industrial communication network infrastructures for
PDS and UTS operations need to be attached, such as 3G,
4G, and WiFi [35]. The multi-network integration brings many
challenges for 1) real-time monitoring and anomaly
detection [36], [37], 2) data transmission, storage, and security,
and 3) attack analysis and mitigation [38]–[41]. In this paper, we
assume that all messages, such as the electrical power price,
traffic congestion information, and various control signals, can
be transmitted correctly in real time without any cybersecurity
issues.

The paper is organized as follows. In Section II, the
flowchart of the proposed approach is introduced. In Section III, the equilibrium between the utility and the customers
is designed to determine the optimal system variables. In
Section IV, based on the SUE, a DUE is applied to model the
UTS and compute the dynamic traffic flow. In Section V, based
on the branch flow model, the OPF problem is relaxed with
SDP and solved with ADMM. In Section VI, considering the
SOC and EV behaviors, an optimal charging/discharging
schedule is designed to reduce the impacts to the PDS. In
Section VII, the numerical results are presented to validate
the proposed approach. The conclusion is presented in Section VIII.

Fig. 1. The main idea and the bi-peak and bi-ramp problems in a power distribution system and a transportation system in an urban area.
II. THE FLOWCHART OF THE PROPOSED APPROACH

The proposed two-step power-traffic operation approach is
shown in Fig. 2. In the higher level, the power-traffic system
consists of two major parts: the utility part (taking the PDS
and the UTS as a whole), shown in green, and the customer
part (taking the CDSs and the related EVs), shown in yellow.
At time interval t, the duck curve of the netload is recorded
with the renewable energy generation data and end-user load
data in the PDS. Meanwhile, the travel data is collected with the
traffic monitoring data and congestion model in the UTS. Then,
as shown in the two green blocks, the PDS and UTS are
treated as a regulated utility to minimize the social cost. In the
yellow block, the EVs and CDSs are treated as the customers
to minimize their expenditures. There exists an equilibrium
that determines the system operation variables, such as the optimal
charging/discharging prices, total electrical power demand, and
total required EVs. These variables are fed into the lower
level as inputs.

In the lower level, the power-traffic system consists of three
major parts: the PDS part, the EVs & CDSs part, and the UTS
part, which determine the system operation variables in the spatial
and temporal domains with detailed models. First, in the UTS,
to accurately simulate the dynamic traffic system in the spatial
and temporal domains, the DUE is built based on the SUE with
temporally continuous constraints. The proposed DUE problem
can be transformed into a convex problem and solved to
optimality. Second, in the PDS, the objective function
F_PDS is designed to minimize the cost of the PDS, which
is built with a three-phase unbalanced branch flow model.
Based on the SDP relaxation, F_PDS can be minimized in an
OPF problem, which is solved with ADMM. Third, with the
specified electrical power demands and available EVs for each
CDS, an objective function F_CDS is designed to minimize the
charging/discharging impacts to the PDS while considering the
SOC.

In addition, in this paper it is assumed that all the drivers
know the traffic information very well and can get the optimal
charging/discharging price through wireless communication
immediately. The user equilibrium in the UTS network can be
reached in a very short time.

Fig. 2. Flowchart of the proposed power-traffic coordinated operation approach.
III. EQUILIBRIUM DESIGN BETWEEN UTILITIES AND CUSTOMERS (HIGHER LEVEL)

A. Equilibrium Design

In this paper, we focus on a small city with tens of thousands
of cars, which means that any single user's behavior has a very weak
impact on the equilibrium design in the higher level. The utility
consists of the PDS and the UTS. A customer consists of a CDS and
its related EVs.

Utility Objective (Minimize Social Cost): F_UTY^t in (1) is the objective function of the
utility, designed to minimize the social cost over the
variables P_i1^t and N_i1^t, where t ∈ Dt is a time interval, P_i1^t is the
electrical power charging (negative) or discharging (positive)
at CDS i1, γ1 is the weight coefficient of the UTS, and N_i1^t is the
number of available EVs at CDS i1:

F_UTY^t = min_{P_i1^t, N_i1^t} [ f_PS^t(Q_PDS^t − Q_DPDS^t(P_i1^t)) + γ1 f_UTS^t(Q_UTS^t − Q_DUTS^t(N_i1^t)) + f_WT^t(τ1) Q_DUTS^t(N_i1^t) ].   (1)

The constraints of (1) are illustrated as follows:

P̲_i1^t ≤ P_i1^t ≤ P̄_i1^t,   (2a)
0 ≤ γ1 ≤ γ̄1,   (2b)
0 ≤ N_i1^t ≤ N̄_i1^t.   (2c)

f_PS^t is defined as a convex function [42] that determines the PDS cost in time interval t,
Q_PDS^t is the netload of the PDS, and Q_DPDS^t(P_i1^t) is the total charging/discharging power of
all CDSs, which can be defined as

Q_DPDS^t(P_i1^t) = Σ_{i1 ∈ N_CDS} P_i1^t,   (3)

where N_CDS is the set of CDSs. f_UTS^t is defined as a convex
function that determines the UTS cost in time interval
t, Q_UTS^t is the traffic load, and Q_DUTS^t(N_i1^t) is the total
number of EVs for charging/discharging. f_WT^t(τ1) is the time cost with
charging/discharging time τ1. Considering that the number of EVs
is very large, Q_DUTS^t(N_i1^t) can be estimated as

Q_DUTS^t(N_i1^t) = Σ_{i1 ∈ N_CDS} N_i1^t = Σ_{i1 ∈ N_CDS} P_i1^t / C_1^t,   (4)

where C_1^t is the average charging/discharging speed of all
the EVs, and f_WT^t is the waiting cost when an EV is parked
for charging/discharging, which is also defined as a convex
function. In summary, the proposed utility objective function
F_UTY^t is convex, which indicates that an optimal point exists.

Customer Objective (Minimize Own Expenditure): Based
on (4), the objective function of a customer (a CDS and its related
EVs) can be designed as follows:

F_CSO^t = min_{P_i1^t} f_WT^t(τ1 Σ_{i1} P_i1^t / C_1^t) − π^t P_i1^t   (5a)
s.t. (2a), π̲^t ≤ π^t ≤ π̄^t,   (5b)

where the objective function of the customer consists of two
parts: the time consumption in the CDSs and the benefit/cost
of charging/discharging. π^t is the electrical power price, and
π^t P_i1^t is the benefit (by discharging, positive) or cost (by
charging, negative) at CDS i1. As discussed above, the objective
function of the customer F_CSO^t is also convex.

B. Solutions

An equilibrium (π^{*t}, P_i1^{*t}) exists, and the optimal price
π^{*t} can be computed as [43], [44]

π^{*t} = −f'_PS^t(Q_PDS^t − Σ_{i1} P_i1^{*t}) + (γ1 / C_1^t) f'_UTS^t(Q_UTS^t − Σ_{i1} P_i1^{*t} / C_1^t),   (6)

where the optimal price depends on the PDS part f'_PS^t and the
UTS part f'_UTS^t. Based on gradient descent, a distributed
algorithm is proposed to optimize the utility and the customers
jointly toward the equilibrium. Specifically, the objective function of
each customer can be computed independently given the posted
price. At the k1-th iteration:

π^{k1,t} = −f'_PS^t(Q_PDS^t − Σ_{i1} P_i1^{k1,t}) + (γ1 / C_1^t) f'_UTS^t(Q_UTS^t − Σ_{i1} P_i1^{k1,t} / C_1^t),   (7a)
P_i1^{k1+1,t} = P_i1^{k1,t} − γ2 ((τ1 / C_1^t) f'_WT^t(τ1 Σ_{i1} P_i1^{k1,t} / C_1^t) − π^{k1,t}),   (7b)
(π^{k1,t}, P_i1^{k1+1,t}) = [π^{k1,t}, P_i1^{k1+1,t}]_ϖ,   (7c)

where the optimization step size satisfies 0 ≤ γ2, and the
algorithm converges when γ2 is small enough [45]. [·]_ϖ
means that the results are projected onto the set ϖ defined by
(5b).

In summary, in this section the optimal price π^{*t} is
computed in the higher level, and the optimal result P_i1^{*t} gives
the upper bound of the power injection in (20). In the following
sections, the models of the UTS, PDS, and CDSs are built
to provide detailed information in both the temporal and spatial
domains about how to operate the power-traffic system in the
lower level.
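To make the equilibrium computation concrete, the following is a minimal numerical sketch of the gradient iteration (7). The quadratic stand-ins for f_PS, f_UTS and f_WT, all coefficients, and the simple box projection are our own illustrative assumptions; the paper only requires these functions to be convex.

```python
# Minimal sketch of the price/injection iteration (7); all numbers invented.
import numpy as np

n_cds  = 11      # number of CDSs (as in the test bench)
Q_pds  = 60.0    # PDS netload in this interval [MW]
Q_uts  = 50.0    # aggregate UTS traffic load
C1     = 0.05    # average charging/discharging speed per EV
gamma1 = 1.0     # PDS/UTS weight factor
gamma2 = 1e-3    # gradient step size
tau1   = 1.0     # charging/discharging time

# Derivatives of assumed quadratic convex costs f(q) = a*q^2, f'(q) = 2*a*q.
df_ps  = lambda q: 2 * 0.10 * q
df_uts = lambda q: 2 * 0.08 * q
df_wt  = lambda q: 2 * 0.02 * q

P = np.zeros(n_cds)                      # P_i1^t, discharging > 0
pi = 0.0
for k1 in range(200):
    # price update, cf. (6)/(7a)
    pi = -df_ps(Q_pds - P.sum()) + (gamma1 / C1) * df_uts(Q_uts - P.sum() / C1)
    # per-customer gradient step on (5a), cf. (7b)
    grad = (tau1 / C1) * df_wt(tau1 * P.sum() / C1) - pi
    # box projection standing in for [.]_ϖ, cf. (7c)
    P = np.clip(P - gamma2 * grad, -5.0, 5.0)

print(f"equilibrium price ~ {pi:.2f}, total injection ~ {P.sum():.2f} MW")
```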
IV. DYNAMIC USER EQUILIBRIUM ASSIGNMENT (LOWER LEVEL)

Compared with the SUE, the DUE can accurately describe
the traffic dynamics in a set of successive time intervals. In
this paper, the DUE is based on the SUE with a temporal
generalization [32], [46].

A. Basic Model of UTS

The UTS can be represented as a graph with a node
set and a link set: G^UTS = [V^UTS, E^UTS], where V^UTS
= {1, 2, ..., n_V^UTS} and E^UTS = {1, 2, ..., m_E^UTS}. The
origin and destination (OD) pair can be defined as ru ∈ V_ru^UTS
and su ∈ V_su^UTS; a set of paths P_{ru su} is used to connect the
OD pair; and the traffic flow between the OD pair can be
represented as q_{ru su}^d with departing time d ∈ Dt, where Dt
is the set of all time intervals. Then, the congestion model can
be presented as follows [47]:

f_ka(θ_ka^t) = f_ka^0 (1 + 0.15 (θ_ka^t / C_ka)^4),   (8)

where f_ka(θ_ka^t) is the travel time (travel impedance) when the
traffic flow at link ka ∈ KA is θ_ka^t in time interval t ∈ Dt, f_ka^0
is the initial travel time, and C_ka is the traffic flow capacity.

B. Dynamic User Equilibrium

Based on the SUE, the DUE can be built as follows [46]:

J_DUE = min_{θ_ka^t} Σ_{ka ∈ KA} Σ_{t ∈ Dt} ∫_0^{θ_ka^t} f_ka^t(w) dw   (9a)
s.t. θ_ka^t = Σ_{p_ru su ∈ P_ru su} Σ_{d1 ∈ Dt} h_{p_ru su}^{d1} δ_{p_ru su, ka}^{d1 t},   (9b)
q_{ru su}^{d1} = Σ_{p_ru su ∈ P_ru su} h_{p_ru su}^{d1},   (9c)
b_{p_ru su}^{d1} = Σ_{t ∈ Dt} Σ_{ka ∈ KA} f_ka^t(θ_ka^t) δ_{p_ru su, ka}^{d1 t},   (9d)

where h_{p_ru su}^{d1} is the number of vehicles on path p_{ru su} with
departing time interval d1, b_{p_ru su}^{d1} is the travel time from ru
to su via path p_{ru su}, and δ_{p_ru su, ka}^{d1 t} is a 0-1 variable that can
be defined as

δ_{p_ru su, ka}^{d1 t} = 1 if, in time interval t, link ka is in path p_{ru su} with departing time d1; 0 otherwise.   (10)

In time interval t, the traffic flow θ_ka^t on link ka is given as
in (9b), which equals the sum of the traffic flows h_{p_ru su}^{d1} ≥ 0
via link ka for any departing time interval d1 ∈ Dt and any path
p_{ru su} ∈ P_{ru su}. The number of vehicles from ru to su is given
as in (9c), which equals the sum of the traffic flows h_{p_ru su}^{d1} on
any assigned path p_{ru su} ∈ P_{ru su}. The total time consumption
of path p_{ru su} from ru to su is given as in (9d), which keeps the
traffic flow temporally continuous for all time intervals t ∈ Dt,
with a temporal uniqueness constraint:

Σ_{t ∈ Dt} δ_{p_ru su, ka}^{d1 t} = 1,   (11)

which ensures that a vehicle cannot be assigned to two links
in a certain time interval t.

C. Solutions

Considering the temporally continuous traffic flow, in time
interval t, take the initial state f_ka^0 in (8) to be f_ka^{t−1}. In this
paper, the test bench is based on the Sioux Falls network, and the
departing time is the same during each time interval t. Then,
in time interval t, the DUE problem can be transformed into an
SUE problem. The SUE is based on (9a), (9b), and (9c), and
the Lagrangian can be generated as

L1(θ, λ1, µ1) = Σ_{ka ∈ KA} Σ_{t ∈ Dt} ∫_0^{θ_ka^t} f_ka^t(w) dw + λ_{1,ka}^t (θ_ka^t − Σ_{p_ru su} Σ_{d1 ∈ Dt} h_{p_ru su}^{d1} δ_{p_ru su, ka}^{d1 t}) + µ_{1,ru su}^{d1} (q_{ru su}^{d1} − Σ_{p_ru su} h_{p_ru su}^{d1}).   (12)

The Hessian matrix of (9a) is positive definite, which
indicates convexity. The optimal traffic flow θ_ka^{*t} can be
computed with the KKT conditions [32], [43]. Therefore, the
solution of the UTS in time interval t can be designed as in
Algorithm 1.

Algorithm 1 DUE solution
Step 1: Initialization. Set the iteration number n1 = 1 and the
threshold ε1. Update the new demand as q_{ru su}^{dt} = q_{ru su}^{d(t−1)}.
Then use all-or-nothing assignment [32], [48] to generate
θ_ka^t.
Step 2: Update the travel time with θ_ka^t: f_ka^{t,n1} = f_ka^t(θ_ka^{t,n1}),
which is the beginning of the loop (Step 2 to Step 6).
Step 3: Determine the descent direction. Find the shortest
path with f_ka^{t,n1} using all-or-nothing assignment to generate
φ^{t,n1}, which is the shortest-route pattern.
Step 4: Determine the iteration step ζ^{n1}, which can be
designed as:

min_{ζ^{n1}} Σ_{ka} ∫_0^{η} f_ka^t(ω) dω   (13a)
s.t. η = θ_ka^{t,n1} + ζ^{n1} (φ^{t,n1} − θ_ka^{t,n1}),   (13b)
0 ≤ ζ^{n1} ≤ 1.   (13c)

The golden-section search is employed to determine ζ^{n1} in
a short time.
Step 5: Update θ_ka^{t,n1+1} with

θ_ka^{t,n1+1} = θ_ka^{t,n1} + ζ^{n1} (φ^{t,n1} − θ_ka^{t,n1}).   (14)

Step 6: Compare the result to the threshold ε1 as:

sqrt(Σ_ka (θ_ka^{t,n1+1} − θ_ka^{t,n1})^2) / Σ_ka θ_ka^{t,n1} ≤ ε1,   (15)

where, if (15) is fulfilled, the resulting traffic flow in time
interval t is θ_ka^{t,n1+1}; otherwise, go back to Step 2 with
n1 = n1 + 1.
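As an illustration of Algorithm 1, the following sketch runs a Frank-Wolfe-style loop on a made-up two-link, single-OD instance with the BPR congestion model (8). A coarse grid line search stands in for the golden-section search of Step 4, and all network data are invented for the example.

```python
# Toy instance of Algorithm 1 (SUE via a Frank-Wolfe-style loop).
import numpy as np

f0  = np.array([10.0, 12.0])     # free-flow travel times f_ka^0
cap = np.array([300.0, 500.0])   # link capacities C_ka
q   = 600.0                      # OD demand in this time interval

bpr = lambda x: f0 * (1.0 + 0.15 * (x / cap) ** 4)          # eq. (8)

def all_or_nothing(times):
    """Assign the whole demand to the currently cheapest link."""
    y = np.zeros_like(times)
    y[np.argmin(times)] = q
    return y

def beckmann(z):                 # sum_ka of the integral in (13a)
    return np.sum(f0 * (z + 0.15 * z ** 5 / (5 * cap ** 4)))

theta = all_or_nothing(f0)                                   # Step 1
for n1 in range(50):
    phi = all_or_nothing(bpr(theta))                         # Steps 2-3
    zetas = np.linspace(0.0, 1.0, 101)                       # Step 4 (grid
    zeta = min(zetas, key=lambda z: beckmann(theta + z * (phi - theta)))
    new = theta + zeta * (phi - theta)                       # Step 5, (14)
    done = np.linalg.norm(new - theta) / theta.sum() <= 1e-4 # Step 6, (15)
    theta = new
    if done:
        break

print("equilibrium flows:", theta, "travel times:", bpr(theta))
```

At the fixed point the two link travel times equalize, which is exactly the Wardrop condition the SUE encodes.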
V. OPTIMAL POWER FLOW OF POWER DISTRIBUTION SYSTEM (LOWER LEVEL)

A. Branch Flow Model

The PDS can be represented as a radial graph: G^PDS =
[V^PDS, E^PDS], where V^PDS = {1, 2, ..., n_V^PDS} and E^PDS
= {1, 2, ..., m_E^PDS}. Similar to the UTS, the PDS also carries
the temporal superscript (·)^t, t ∈ Dt. Considering the
three-phase unbalanced system, the phase configuration is defined
as φ ⊆ {a, b, c}. For a radial distribution system, there is a
unique ancestor bus i1 for bus i, and the branch between buses
i1 and i can be named i [25], [49]. The branch flow model in
time interval t can be defined as:

s_i^{φt} = diag(S_i^{φt} − Σ_{j ∈ Ci} (S_j^{φt} − z_j^{φt} l_j^{φt})),   (16a)
v_{i1}^{φt} = v_i^{φt} − 2(r_i^φ P_i^{φt} + x_i^φ Q_i^{φt}) + |z_i^φ|^2 l_i^{φt},   (16b)

where s_i^{φt} = p_i^{φt} + iq_i^{φt} indicates the power injection on bus
i, S_i^{φt} = V_i^{φt}(I_i^{φt})^H is the complex power from bus i to bus
i1, v_i^{φt} = V_i^{φt}(V_i^{φt})^H, l_i^{φt} = I_i^{φt}(I_i^{φt})^H, z_i^φ = r_i^φ + ix_i^φ, Ci is
the set of buses that have the common ancestor bus i, and H
indicates the Hermitian transpose.

In addition, according to [50], the matrix M^t is defined as

M^t = [ v_i^{φt}   S_i^{φt} ;
        (S_i^{φt})^H   l_i^{φt} ],   (17)

where in (17), M^t ⪰ 0 (M^t is positive semidefinite) and
Rank(M^t) = 1.

B. Objective Function and SDP Relaxation of OPF in PDS

The objective function contains two major parts, the generation cost and the system line loss, which can be designed as
follows:

F_PDS^t = Σ_{i ∈ V^PDS} Σ_{φi} [α1 (p_i^{φt})^2 + β1 (p_i^{φt})],   (18)

where α1 and β1 are two weight coefficients for the generation
cost and the system line loss, respectively.

The constraints on the voltage v_i^{φt} and current l_i^{φt} are
illustrated as follows:

v̲_i^φ ≤ v_i^{φt} ≤ v̄_i^φ,   (19a)
l̲_i^φ ≤ l_i^{φt} ≤ l̄_i^φ.   (19b)

In this paper, the injection power s_i^{φt} plays an important
role in the active and reactive control for the OPF:

s̲_i^φ ≤ s_i^{φt} ≤ P_i^{*t},   (20)

where P_i^{*t} is the optimal power injection in (7b). Considering
the relationship with the UTS, the optimal injection power s_{i1}^{*t}
decides the number of EVs at CDS i1 (here, for a CDS, i1 = i
for the PDS and UTS, which can be explained via the yellow
circles in Fig. 3). The OPF problem is

min_{S_i^t, s_i^t, v_i^t, l_i^t} F_PDS^t   s.t. (16), (19), (20), M^t ⪰ 0, Rank(M^t) = 1.

According to the SDP relaxation introduced in [25], [50],
the constraint Rank(M^t) = 1 of (17) can be removed, and the
relaxed problem is shown as follows:

min_{s_i^{φt}, v_i^{φt}, l_i^{φt}, S_i^{φt}} F_PDS^t   (21a)
s.t. (16), (19), (20), and M^t ⪰ 0,   (21b)

where (21) is a convex problem and can be solved with
ADMM.

C. OPF Solution with ADMM

In the power-traffic system, the OPF computation of the
three-phase unbalanced PDS is the bottleneck of the whole
system operation. First, a real PDS usually contains many
feeders, which brings high computational loads for the system.
Second, the PDS is a highly dynamic system, which requires a
short computation time. Therefore, the ADMM, a distributed
algorithm based on the Lagrange multiplier method with an
additional penalty (augmentation) term, is proposed to solve
this problem in a short time.

The standard form of the ADMM can be formulated as
follows:

min_{x,y} fA(x) + gA(y)   (22a)
s.t. x ∈ Kx, y ∈ Ky, and x = y,   (22b)

where fA and gA are convex, and Kx and Ky are convex sets.
With an additional penalty term (ρ/2)||x − y||_2^2, the
augmented Lagrangian function is shown as follows [29]:

Lρ(x, y, λ) = fA(x) + gA(y) + <λ, x − y> + (ρ/2)||x − y||_2^2,   (23)

where 0 ≤ ρ is a coefficient for convergence, and (ρ/2)||x − y||_2^2
is the quadratic penalty term that increases the convergence speed.

Therefore, the relaxed problem (21) can be formulated as

F_PDS^t = min Σ_{i ∈ V^PDS} F_PDS,i^t   (24a)
s.t. S_{i,i1}^{φt,(x)} = S_i^{φt,(y)}, l_{i,i1}^{φt,(x)} = l_i^{φt,(y)}, v_{i,i1}^{φt,(x)} = v_i^{φt,(y)},   (24b)
S_i^{φt,(x)} = S_i^{φt,(y)}, l_i^{φt,(x)} = l_i^{φt,(y)},   (24c)
s_i^{φt,(x)} = s_i^{φt,(y)},   (24d)
v_i^{φt,(x)} = v_i^{φt,(y)},   (24e)

where the variables (S_i^t, s_i^t, v_i^t, l_i^t) are maintained at bus i,
which can be seen as a local agent. In the ADMM, the
x-update and y-update are indicated by (·)^{(x)} and (·)^{(y)}.

Then, as in [25], [29], the augmented Lagrangian of (24)
can be derived as

L_ρ^t(x^t, y^t, λ^t) = Σ_{i ∈ V^PDS} ( F_PDS,i^t + <λ^t, x_i^t − y_i^t> + Σ_{j ∈ Ci} <µ_j^t, x_{j,i}^t − y_j^t> + <γ_i^t, v_{i1,i}^{(x)t} − v_{i1}^{(y)t}> + (ρ/2) f_PDS,i,y^t )   (25a)
= Σ_{i ∈ V^PDS} ( F_PDS,i^t + <λ^t, x_i^t − y_i^t> + Σ_{j ∈ Ci} <γ_j^t, v_{i,j}^{(x)t} − v_i^{(y)t}> + (ρ/2) f_PDS,i,x^t ).   (25b)

The detailed formulations of f_PDS,i,y^t and f_PDS,i,x^t can be
found in [25]. At iteration k, the variables x, y, and λ are
updated as follows:

x^{t,k+1} ∈ arg min_{x ∈ Kx} Lρ(x^t, y^{t,k}, λ^{t,k}),   (26a)
y^{t,k+1} ∈ arg min_{y ∈ Ky} Lρ(x^{t,k+1}, y^t, λ^{t,k}),   (26b)
λ^{t,k+1} = λ^{t,k} + ρ(x^{t,k+1} − y^{t,k+1}).   (26c)

Considering the time intervals of the UTS, the ADMM-based
distributed OPF computation architecture is built to
dramatically reduce the time consumption of the proposed
power-traffic coordinated operation approach.

Therefore, in addition to the designed user equilibrium in
the higher level, the detailed models of the UTS and PDS are built
to provide the spatial and temporal information in the lower
level. In the next section, a smart charging/discharging strategy
is proposed for the EVs and bidirectional CDSs to further
reduce the impact to the PDS.
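The ADMM template (22)-(26) can be exercised on a deliberately tiny instance. In the sketch below, fA and gA are arbitrary scalar convex quadratics of our choosing (in the paper, x and y collect the per-bus OPF variables and the augmented Lagrangian is (25)); the closed-form argmin steps follow directly from differentiating (23).

```python
# Scalar ADMM demo of (22)-(23) iterated as in (26a)-(26c).
rho = 1.0
fA = lambda x: (x - 3.0) ** 2          # convex, minimized at 3
gA = lambda y: 2.0 * (y + 1.0) ** 2    # convex, minimized at -1

x = y = lam = 0.0
for k in range(100):
    # (26a): argmin_x fA(x) + lam*(x - y) + rho/2*(x - y)^2
    x = (6.0 - lam + rho * y) / (2.0 + rho)
    # (26b): argmin_y gA(y) - lam*y + rho/2*(x - y)^2
    y = (-4.0 + lam + rho * x) / (4.0 + rho)
    # (26c): dual update
    lam = lam + rho * (x - y)

print(f"x ~ {x:.4f}, y ~ {y:.4f} (consensus), lambda ~ {lam:.4f}")
```

The iterates drive x and y to the consensus minimizer of fA + gA (here 1/3), mirroring how the per-bus copies in (24) are driven to agree.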
VI. SMART EV CHARGING/DISCHARGING (LOWER LEVEL)

In this paper, the EVs and CDSs act as the reserves for both
the PDS and the UTS. In this section, we take CDS i1 as an example;
the other CDSs behave in the same way. In time interval
t, n_{i1}^t EVs are parked at CDS i1 with parking time τ2, as
defined in (1). In the real world, n_{i1}^t depends on many factors,
for example, the different habits of the drivers, which are beyond
the scope of this paper. Here, n_{i1}^t is determined as follows:

n_{i1}^t = ̺3 · min{s_{i1}^{*t}/C_1^t, θ_ka^t}, if π^{*t} ≥ ̺1 + ̺2;
        ̺4 · min{s_{i1}^{*t}/C_1^t, θ_ka^t}, if ̺1 ≤ π^{*t} ≤ ̺1 + ̺2;
        0, if ̺1 ≥ π^{*t},   (27)

where s_{i1}^{*t} is the optimal power injection determined in the
OPF solution (26), C_1^t is the average discharging/charging speed
of the EVs, ̺1 is the electrical power price, ̺2 is the congestion
fee, and ̺3, ̺4 are the ratios of EVs parking at the CDSs.
Moreover, n_{i1}^t < χ_{i1}, where χ_{i1} is the capacity of CDS i1.

During the parking time τ2, stochastic EV charging/discharging
can impact the stability of the PDS. Considering the SOC, a
smart EV charging/discharging approach is proposed to reduce
the impact to the PDS as follows:

F_CDS^t = min_{C_{ev,i3}^{t1}} Σ_{t1 ∈ τ2} ( P_CDS^{t1+1}(C_{ev,i3}^{t1+1}) − P_CDS^{t1}(C_{ev,i3}^{t1}) )^2   (28a)
s.t. C̲_ev ≤ C_{ev,i3}^{t1} ≤ C̄_ev,   (28b)
P_CDS^{t1}(C_{ev,i3}^{t1}) = Q_PDS^{t1} − Σ_{i3}^{n_{i1}^t} C_{ev,i3}^{t1},   (28c)
Q_{ev,i3}^{τ2} = Σ_{t1 ∈ τ2} C_{ev,i3}^{t1}, 0 ≤ i3 ≤ n_{i1}^t,   (28d)
Υ ≤ SOC ≤ Ῡ,   (28e)

where C_{ev,i3}^{t1} is the discharging (positive) or charging (negative)
speed of EV i3 at time t1. In (28c), P_CDS^{t1}(C_{ev,i3}^{t1}) is
the netload at CDS i1, which equals the original load of the PDS,
Q_PDS^{t1}, minus the total charging/discharging power. In (28d),
Q_{ev,i3}^{τ2} is the total charging/discharging demand of EV i3,
which carries the SOC constraint (28e); Υ is the ratio of
SOC.
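The smoothing objective (28a) is a small quadratic program; the sketch below solves an illustrative aggregate version of it by projected gradient descent. The load profile, bounds, and energy target are invented numbers, and treating all EVs at one CDS as a single aggregate schedule (rather than per-EV SOC dynamics) is a simplifying assumption.

```python
# Sketch of the smart schedule (28): flatten the CDS netload over tau_2.
import numpy as np

Q = np.array([40., 55., 70., 65., 50., 45.])   # Q_PDS^t1 over tau_2 [kW]
n = 30                                          # EVs parked at this CDS
lo, hi = -2.0, 2.0                              # per-EV speed bounds (28b)
E = 20.0                                        # total discharge target (28d)

C = np.zeros_like(Q)                            # aggregate schedule sum_i3 C_ev
step = 0.05
for _ in range(2000):
    P = Q - C                                   # netload (28c)
    d = np.diff(P)                              # successive deviations in (28a)
    g = np.zeros_like(C)                        # gradient of sum (P[t+1]-P[t])^2
    g[:-1] += 2 * d
    g[1:]  -= 2 * d
    C = np.clip(C - step * g, n * lo, n * hi)   # aggregate speed bounds (28b)
    C += (E - C.sum()) / C.size                 # keep the energy budget (28d)

print("smoothed netload:", np.round(Q - C, 1))
```

Because the energy-budget projection shifts the whole schedule uniformly, it leaves the successive differences in (28a) unchanged, so the two projections do not fight each other.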
VII. NUMERICAL SIMULATION AND RESULTS

As shown in Fig. 3, the test bench is based on the IEEE
8,500-bus PDS and the Sioux Falls UTS [51], [52], which
cover similar geographical areas. It is assumed that a small
city with 10,000 EVs has the bi-peak and bi-ramp problem;
the "duck curve" data are based on [53], and the traffic behavior
data are based on [7]. Eleven CDSs are located at nodes 2, 3, 5, 8,
10, 11, 17, 18, 20, 22, and 23, which are shown with yellow circles
and illustrated in both the PDS and the UTS, respectively. The time
interval t is 15 minutes for the power-traffic systems. The
PDS cost function f_PDS^t is defined as a quadratic function as
in [42], and the UTS cost function f_UTS^t is defined similarly.
Both f_PDS^t and f_UTS^t are convex, and the quadratic form
indicates that the utility cost increases quickly with increasing
PDS load and UTS load. As discussed above, the utility
objective function F_UTY^t is convex. The weight factor γ1
equals 1, which indicates the same importance of the PDS
and UTS. Similarly, the customer objective function F_CSO^t
is also set as a convex function. The electrical power price is
̺1 = 45 $/MWh, and the congestion fee is ̺2 = 2 $/h [54],
[55]. The EV parking ratios are ̺3 = 0.8 and ̺4 = 0.3. The parking
capacity for each CDS is χ1 = 100. The simulations are
executed using a server with a 3.60-GHz Intel Xeon CPU and 32
GB RAM, and the software environment comprises Python, MATLAB,
the MATLAB global optimization toolbox, and the parallel computing
toolbox. The communication network is assumed to be good enough
to transmit all the information in real time without any cybersecurity
issues.

Fig. 3. The test bench consists of a PDS and a UTS.

A. Flowchart of the Simulation

The simulation flowchart is shown in Fig. 4, and the
corresponding description is shown in Algorithm 2. In Step 2,
a distributed algorithm is used to reduce the time consumption
of the equilibrium computation. In Step 3, for each time
interval t, the DUE problem is transformed into an SUE problem.
The golden-section search is employed to determine the
iteration step ζ^{n1} in a short time for the UTS with about 10,000
EVs and 11 CDSs. In Step 4, because the test bench is based
on the IEEE 8,500-bus PDS, the ADMM is implemented to
compute the OPF in a distributed manner with the constraints
from Step 3. In Step 5, the charging/discharging of each CDS is
independent, so the smart EV charging/discharging can be
computed in parallel, which also helps to reduce the computation
time.

Algorithm 2 Simulation Process
Step 1: Initialization and data collection. Collect the data
and parameters of the PDS, UTS, CDSs, and EVs, such
as the topology information, electrical power price ̺1, and
congestion fee ̺2, and set the simulation time interval t = 15
min.
Step 2: Higher level: equilibrium computation for the
defined utility part (PDS & UTS) and customer part (CDSs
& EVs). Build the utility and customer objective functions
as in (1) and (5). Then, compute the equilibrium π^{*t} and P_{i1}^{*t}
with (6) and (7).
Step 3: Lower level: DUE in the UTS. Receive the information
such as π^{*t}, N_{i1}^t, q_{ru su}^{d1 t}, and P_{i1}^{*t}; build the DUE objective
function with constraints as in (9); solve it with Algorithm
1; and generate the traffic information such as θ_ka^t and b_{ru su}^{d1}.
Step 4: Lower level: OPF in the PDS. Receive the information
such as π^{*t}, θ_ka^t, and P_{i1}^{*t}; build the OPF objective function
as in (18) with the constraints (19) and (20); relax it as (21);
and solve it as (25) and (26). Then, send the resulting s_{i1}^{*t} to the
CDSs.
Step 5: Lower level: smart EV charging/discharging. Receive
the information such as s_{i1}^{*t}, π^{*t}, and N_{i1}^t; compute
the parked EVs for each CDS as in (27); build the smart
charging/discharging schedule as in (28); then generate the feedback
information n_{i1}^t for the UTS and Σ_{i3}^{n_{i1}^t} C_{ev,i3}^{t1} for the PDS.
Step 6: Update the power-traffic system with the new UTS
and PDS information, then go back to Step 2 with t = t + 1.

Fig. 4. The test bench consists of a PDS and a UTS.
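Algorithm 2 can be summarized as the following skeletal loop. Every helper name below is ours (the paper's implementation is not public), and each stub returns dummy values so the control flow runs end to end.

```python
# Skeletal rendering of Algorithm 2's outer loop; all helpers are stubs.
def solve_equilibrium(state, t):      return 30.0, [1.0] * 11   # eqs. (6)-(7)
def solve_due(state, price, t):       return {"theta": []}      # Algorithm 1
def solve_opf_admm(state, fl, P, t):  return [0.5] * 11         # eqs. (24)-(26)
def smart_charging(state, s, price):  return {"C_ev": []}       # eqs. (27)-(28)
def update_system(state, fl, sched):  return state              # Step 6

state = {}
for t in range(96):                   # one day of 15-minute intervals
    price, P_star = solve_equilibrium(state, t)        # Step 2
    flows = solve_due(state, price, t)                 # Step 3
    s_star = solve_opf_admm(state, flows, P_star, t)   # Step 4
    schedule = smart_charging(state, s_star, price)    # Step 5
    state = update_system(state, flows, schedule)      # Step 6
```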
The computation time analysis is shown in Fig. 5. The two curves
indicate that the average computation times of the proposed
approach are less than 50 s over 100 scenarios. Compared with
the 15-minute time interval, the proposed approach is quick
enough to support continuous operation of the power-traffic
systems.

Fig. 5. Computation time comparison with different traffic and load factors.

B. Bi-Peak Shaving and Bi-Ramp Smoothing

As shown in Fig. 6(a), the blue curve is the netload peak of
the PDS from 17:00 to 22:00, and the orange curve is the result of
the proposed approach. The peak load decreases from 78 MW to
72 MW with 15% of the EVs discharging. The yellow curve is the
traffic delay peak in the UTS, and the purple curve is the reduced
traffic delay with the proposed approach. The total peak traffic
delay time of the UTS decreases from about 4,100 hours to 2,900
hours. In Fig. 6(b), it is also clear that from 8:00 to 15:00,
the netload of the PDS increases from 48 MW to 51 MW, and the
total traffic delay of the UTS decreases from 3,500 hours to 2,800
hours. From Fig. 6, it is clear that the proposed approach can
benefit both the PDS and the UTS by reducing the bi-peak and
smoothing the bi-ramp.

Fig. 6. (a) The bi-peak shaving and bi-ramp smoothing for PDS and UTS
from 17:00 to 22:00, (b) The peak-shaving and ramp-smoothing for UTS and
over-generation compensation for PDS from 8:00 to 15:00.

In Fig. 7, the detailed traffic congestion information of
a congestion scenario is shown with the traffic maps. The
OD pairs of this scenario are shown in Table I. The red and
pink arrows indicate the traffic congestion on the roads. The
traffic congestion information without the proposed approach
is shown in Fig. 7(a) at 18:30. The red arrows indicate that
the traffic delays on these roads are larger than 15 minutes.
As shown in Fig. 7(b), with the proposed approach, the pink
arrows indicate that the traffic delays on these roads are less
than 15 minutes. The number of pink arrows (6) is much less
than the number of red arrows (16), which also indicates the
alleviation of traffic congestion with the proposed approach.
The total congestion delay reduces from 4,100 hours to 2,900
hours.

In summary, from Fig. 6 and Fig. 7, the bi-peak and bi-ramp
problems can be reduced and smoothed by the proposed
approach, which benefits both the PDS and the UTS.

Fig. 7. (a) The traffic congestion scenario at 18:30; (b) the traffic congestion
scenario with the proposed approach at 18:30.

TABLE I
THE OD PAIRS OF THE UTS

Nodes | 13  | 14  | 18  | 20  | 21  | 23
1     | 200 | 360 | 250 | 270 | 256 | 345
2     | 350 | 200 | 320 | 250 | 200 | 310
3     | 240 | 200 | 260 | 200 | 270 | 345
5     | 300 | 300 | 450 | 200 | 230 | 345
6     | 270 | 220 | 300 | 345 | 300 | 300
10    | 210 | 345 | 210 | 345 | 200 | 300

C. Social Cost Analysis

In Fig. 8, the blue curve with circles and the green curve with
squares form one group, which indicates the social costs with
different numbers of EVs on the road. The load of the PDS is 1.0
p.u., which means the total load of the PDS is 60 MW. From
Fig. 8, the cost of the proposed approach is lower, and it is
clear that the proposed approach benefits the social cost more
as the number of EVs increases. In Fig. 8, the red curve and
the cyan curve indicate the social cost with different power
factors; the EV number is 8,000 in the UTS. From Fig. 8, it is
also clear that the proposed approach benefits the social cost
more with an increasing power factor. In summary, the proposed
approach can benefit both the PDS and the UTS under different
traffic scenarios and load factors. In addition, the social cost
increases faster with increasing PDS load or EV number, which
reflects the quadratic objective functions of the PDS and UTS.
This design helps the power-traffic system reduce the total
social cost.

Fig. 8. Comparison of social costs with different traffic and load factors.

D. Smart EV Charging/Discharging

To test the robustness of the proposed approach, we select
the load data of two consecutive intervals, which contain the
biggest deviations within 30 minutes. In Fig. 9, EV discharging
is taken as an example, and 50 EVs are employed to test
the proposed approach. As shown in Fig. 9(a), without the
proposed approach, the stochastic EV discharging behaviors
increase the deviations of the netload. In Fig. 9(b), with the
proposed smart charging/discharging approach, the deviations
of the netload are smoothed, which decreases the impacts to
the PDS during EV discharging. At the same time, the
discharging speeds are controlled to meet the constraint (28b).

Fig. 9. (a) Normal EV discharging at CDS 3 without considering the
impacts to the PDS; (b) smart EV discharging at CDS 3 considering the
impacts to the PDS.
VIII. CONCLUSION

In this paper, a hierarchical approach is proposed to shave
the bi-peak and smooth the bi-ramp problem in the power-traffic system from the higher level to the lower level. The higher
level gives an “overview” for the whole system, and the lower
level gives the detailed description for each part. The EVs
and CDSs function as reserves for both the PDS and UTS
to utilize the flexibility and optimize the operations of the
power-traffic system. In the higher level, the PDS and UTS are
treated together to minimize the social cost, and the EVs and
CDSs are treated as customers to minimize their expenditure.
Then, an equilibrium is designed to determine the optimal
charging/discharging prices, total demand electrical power, and
total required EVs. In the lower level, considering the spatial
and time domain, the detail models of PDS and UTS are
built to specifically determine the power injection and EVs
behaviors. In each CDS, a smart EVs charging/discharging
approach is proposed to further reduce the impacts to the PDS.
In the numerical results, the test bench consists of the IEEE
8,500-bus PDS and Sioux Falls UTS with about 10,000 EVs
and 11 CDSs, which are used to demonstrate the feasibility
and effectiveness of the proposed approach.
In real-world implementation, the weather, human behavior,
and social issues bring stochastic impacts to the power-traffic system, which increase the uncertainties and bring
more challenges for system operation. In addition, the other
systems such as the natural gas delivery system and the water
system can also impact the proposed system and result in
nonnegligible consequences. In the next step, other factors
such as the stochasticity of renewable energies, the departure times of
EVs, cybersecurity, multi-energy system, and human behaviors
will be taken into consideration.
REFERENCES
[1] C.-M. Cheng, S.-L. Tsao, and P.-Y. Lin, “Seeds: A solar-based energy-efficient distributed server farm,” IEEE Transactions on Systems, Man,
and Cybernetics: Systems, vol. 45, no. 1, pp. 143–156, 2015.
[2] M. Cui, J. Zhang, A. Florita, B.-M. Hodge, D. Ke, and Y. Sun,
“Solar power ramp events detection using an optimized swinging door
algorithm,” in Proc. ASME Int. Design Eng. Tech. Conf. Comput. Inf.
Eng. Conf, 2015.
[3] Y. Gu, H. Jiang, Y. Zhang, J. J. Zhang, T. Gao, and E. Muljadi,
“Knowledge discovery for smart grid operation, control, and situation
awareness: a big data visualization platform,” in North American Power
Symposium (NAPS), 2016. IEEE, 2016, pp. 1–6.
[4] Y. A. Katsigiannis, P. S. Georgilakis, and G. J. Tsinarakis, “A novel colored fluid stochastic petri net simulation model for reliability evaluation
of wind/pv/diesel small isolated power systems,” IEEE Transactions on
Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 40,
no. 6, pp. 1296–1309, 2010.
[5] Y. Tian and C.-Y. Zhao, “A review of solar collectors and thermal energy
storage in solar thermal applications,” Applied energy, vol. 104, pp. 538–
553, 2013.
[6] D. Connolly, H. Lund, B. V. Mathiesen, and M. Leahy, “A review of
computer tools for analysing the integration of renewable energy into
various energy systems,” Applied Energy, vol. 87, no. 4, pp. 1059–1082,
2010.
[7] E. J. Gonzales and C. F. Daganzo, “The evening commute with cars and
transit: Duality results and user equilibrium for the combined morning
and evening peaks,” Procedia-Social and Behavioral Sciences, vol. 80,
pp. 249–265, 2013.
[8] Y.-H. Cheng, Y.-H. Chang, and I. Lu, “Urban transportation energy and
carbon dioxide emission reduction strategies,” Applied Energy, vol. 157,
pp. 953–973, 2015.
[9] Z. Huang, X. Xu, H. He, J. Tan, and Z. Sun, “Parameterized batch
reinforcement learning for longitudinal control of autonomous land vehicles,” IEEE Transactions on Systems, Man, and Cybernetics: Systems,
2017.
[10] A. Foley, B. Tyther, P. Calnan, and B. Ó. Gallachóir, “Impacts of electric
vehicle charging under electricity market operations,” Applied Energy,
vol. 101, pp. 93–102, 2013.
[11] C. Liu, J. Wang, A. Botterud, Y. Zhou, and A. Vyas, “Assessment
of impacts of PHEV charging patterns on wind-thermal scheduling by
stochastic unit commitment,” IEEE Transactions on Smart Grid, vol. 3,
no. 2, pp. 675–683, 2012.
[12] D. Wu, D. C. Aliprantis, and L. Ying, “Load scheduling and dispatch for
aggregators of plug-in electric vehicles,” IEEE Transactions on Smart
Grid, vol. 3, no. 1, pp. 368–376, 2012.
[13] E. Sortomme and M. A. El-Sharkawi, “Optimal scheduling of vehicle-togrid energy and ancillary services,” IEEE Transactions on Smart Grid,
vol. 3, no. 1, pp. 351–359, 2012.
[14] H. Jiang, Y. Zhang, J. J. Zhang, D. W. Gao, and E. Muljadi,
“Synchrophasor-based auxiliary controller to enhance the voltage stability of a distribution system with high renewable energy penetration,”
IEEE Transactions on Smart Grid, vol. 6, pp. 2107–2115, 2015.
[15] C. Heymans, S. B. Walker, S. B. Young, and M. Fowler, “Economic
analysis of second use electric vehicle batteries for residential energy
storage and load-levelling,” Energy Policy, vol. 71, pp. 22–30, 2014.
[16] X. Geng and L. Xie, “Learning the lmp-load coupling from data: A
support vector machine based approach,” IEEE Transactions on Power
Systems, vol. 32, no. 2, pp. 1127–1138, 2017.
[17] P. Sadeghi-Barzani, A. Rajabi-Ghahnavieh, and H. Kazemi-Karegar,
“Optimal fast charging station placing and sizing,” Applied Energy, vol.
125, pp. 289–299, 2014.
[18] F. He, D. Wu, Y. Yin, and Y. Guan, “Optimal deployment of public
charging stations for plug-in hybrid electric vehicles,” Transportation
Research Part B: Methodological, vol. 47, pp. 87–101, 2013.
[19] W. Wei, S. Mei, L. Wu, M. Shahidehpour, and Y. Fang, “Optimal
traffic-power flow in urban electrified transportation networks,” IEEE
Transactions on Smart Grid, vol. 8, no. 1, pp. 84–95, 2017.
[20] B. Subhonmesh, S. H. Low, and K. M. Chandy, “Equivalence of
branch flow and bus injection models,” in Communication, Control, and
Computing (Allerton), 2012 50th Annual Allerton Conference on. IEEE,
2012, pp. 1893–1899.
[21] B. Stott, J. Jardim, and O. Alsaç, “Dc power flow revisited,” IEEE
Transactions on Power Systems, vol. 24, no. 3, pp. 1290–1300, 2009.
[22] A. G. Bakirtzis, P. N. Biskas, C. E. Zoumas, and V. Petridis, “Optimal
power flow by enhanced genetic algorithm,” IEEE Transactions on
power Systems, vol. 17, no. 2, pp. 229–236, 2002.
[23] Y. Gu, H. Jiang, Y. Zhang, and D. W. Gao, “Statistical scheduling of
economic dispatch and energy reserves of hybrid power systems with
high renewable energy penetration,” in 2014 48th Asilomar Conference
on Signals, Systems and Computers, 2014, pp. 530–534.
[24] Q. Peng and S. H. Low, “Distributed algorithm for optimal power flow
on a radial network,” in 2014 IEEE 53rd Annual Conference on Decision
and Control (CDC). IEEE, 2014, pp. 167–172.
[25] ——, “Distributed algorithm for optimal power flow on an unbalanced
radial network,” in Decision and Control (CDC), 2015 IEEE 54th Annual
Conference on. IEEE, 2015, pp. 6915–6920.
[26] E. Dall’Anese, H. Zhu, and G. B. Giannakis, “Distributed optimal power
flow for smart microgrids,” IEEE Transactions on Smart Grid, vol. 4,
no. 3, pp. 1464–1475, 2013.
[27] A. Y. Lam, B. Zhang, and N. T. David, “Distributed algorithms for
optimal power flow problem,” in Decision and Control (CDC), 2012
IEEE 51st Annual Conference on. IEEE, 2012, pp. 430–437.
[28] N. Li, L. Chen, and S. H. Low, “Demand response in radial distribution
networks: Distributed algorithm,” in Signals, Systems and Computers
(ASILOMAR), 2012 Conference Record of the Forty Sixth Asilomar
Conference on. IEEE, 2012, pp. 1549–1553.
[29] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed
optimization and statistical learning via the alternating direction method
of multipliers,” Foundations and Trends R in Machine Learning, vol. 3,
no. 1, pp. 1–122, 2011.
[30] M. Patriksson, The traffic assignment problem: models and methods.
Courier Dover Publications, 2015.
[31] N. Jiang, C. Xie, J. C. Duthie, and S. T. Waller, “A network equilibrium
analysis on destination, route and parking choices with mixed gasoline
[32]
[33]
[34]
[35]
[36]
[37]
[38]
[39]
[40]
[41]
[42]
[43]
[44]
[45]
[46]
[47]
[48]
[49]
[50]
[51]
[52]
[53]
[54]
[55]
12
and electric vehicular flows,” EURO Journal on Transportation and
Logistics, vol. 3, no. 1, pp. 55–92, 2014.
Y. Sheffy, “Urban transportation networks: equilibrium analysis with
mathematical programming methods,” Traffic engineering control.
Prentice-Hall, ISBN 0-13-93-972, 1985.
Y. He, X. Liu, C. Zhang, and Z. Chen, “A new model for state-of-charge
(soc) estimation for high-power li-ion batteries,” Applied Energy, vol.
101, pp. 808–814, 2013.
B. Pattipati, C. Sankavaram, and K. Pattipati, “System identification and
estimation framework for pivotal automotive battery management system
characteristics,” IEEE Transactions on Systems, Man, and Cybernetics,
Part C (Applications and Reviews), vol. 41, no. 6, pp. 869–884, 2011.
H. Sedjelmaci, S. M. Senouci, and N. Ansari, “A hierarchical detection
and response system to enhance security against lethal cyber-attacks in
uav networks,” IEEE Transactions on Systems, Man, and Cybernetics:
Systems, 2017.
H. Jiang, J. J. Zhang, W. Gao, and Z. Wu, “Fault detection, identification,
and location in smart grid based on data-driven computational methods,”
IEEE Transactions on Smart Grid, vol. 5, pp. 2947 – 2956, 2014.
H. Jiang, X. Dai, W. Gao, J. Zhang, Y. Zhang, and E. Muljadi, “Spatialtemporal synchrophasor data characterization and analytics in smart
grid fault detection, identification and impact causal analysis,” IEEE
Transactions on Smart Grid, vol. 7, no. 5, pp. 2525–2536, 2016.
C.-W. Ten, G. Manimaran, and C.-C. Liu, “Cybersecurity for critical
infrastructures: Attack and defense modeling,” IEEE Transactions on
Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 40,
no. 4, pp. 853–865, 2010.
K. Islam, W. Shen, and X. Wang, “Wireless sensor network reliability
and security in factory automation: A survey,” IEEE Transactions on
Systems, Man, and Cybernetics, Part C (Applications and Reviews),
vol. 42, no. 6, pp. 1243–1256, 2012.
X. Geng and L. Xie, “A data-driven approach to identifying system
pattern regions in market operations,” in Power & Energy Society
General Meeting, 2015 IEEE. IEEE, 2015, pp. 1–5.
H. Jiang, Y. Li, Y. Zhang, J. J. Zhang, D. W. Gao, E. Muljadi, and Y. Gu,
“Big data-based approach to detect, locate, and enhance the stability of
an unplanned microgrid islanding,” Journal of Energy Engineering, vol.
143, no. 5, p. 04017045, 2017.
J.-B. Park, K.-S. Lee, J.-R. Shin, and K. Y. Lee, “A particle swarm
optimization for economic dispatch with nonsmooth cost functions,”
IEEE Transactions on Power systems, vol. 20, no. 1, pp. 34–42, 2005.
S. Boyd and L. Vandenberghe, Convex optimization.
Cambridge
university press, 2004.
N. Li, L. Chen, and S. H. Low, “Optimal demand response based on
utility maximization in power networks,” in Power and Energy Society
General Meeting, 2011 IEEE. IEEE, 2011, pp. 1–8.
D. P. Bertsekas and J. N. Tsitsiklis, Parallel and distributed computation:
numerical methods. Prentice hall Englewood Cliffs, NJ, 1989, vol. 23.
B. N. Janson, “Dynamic traffic assignment for urban road networks,”
Transportation Research Part B: Methodological, vol. 25, no. 2, pp.
143–161, 1991.
T. A. Manual, “Bureau of public roads,” US Department of Commerce,
1964.
C. Chekuri, S. Khanna, and F. B. Shepherd, “The all-or-nothing multicommodity flow problem,” in Proceedings of the thirty-sixth annual
ACM symposium on Theory of computing. ACM, 2004, pp. 156–165.
S. H. Low, “Convex relaxation of optimal power flowłpart i: Formulations and equivalence,” IEEE Transactions on Control of Network
Systems, vol. 1, no. 1, pp. 15–27, 2014.
L. Gan and S. H. Low, “Convex relaxations and linear approximation for
optimal power flow in multiphase radial networks,” in Power Systems
Computation Conference (PSCC), 2014. IEEE, 2014, pp. 1–9.
Distribution
test
feeder.
[Online].
Available:
http://ewh.ieee.org/soc/pes/dsacom/testfeeders/index.html
W. H. Lam, H. Shao, and A. Sumalee, “Modeling impacts of adverse
weather conditions on a road network with uncertainties in demand and
supply,” Transportation research part B: methodological, vol. 42, no. 10,
pp. 890–910, 2008.
California ISO today’s outlook details. [Online]. Available:
http://www.caiso.com/Pages/Today’s-Outlook-Details.aspx
Eia
us
energy
information
administration
regional
wholesale
markets.
[Online].
Available:
https://www.eia.gov/electricity/monthly/update/wholesale markets.php
A. de Palma and R. Lindsey, “Traffic congestion pricing methodologies
and technologies,” Transportation Research Part C: Emerging Technologies, vol. 19, no. 6, pp. 1377–1399, 2011.
| 3 |
arXiv:1109.2368v2 [math.AG] 24 Mar 2013
COMPUTING TROPICAL RESULTANTS
ANDERS JENSEN AND JOSEPHINE YU
Abstract. We fix the supports A = (A1 , . . . , Ak ) of a list of tropical polynomials and define the tropical resultant T R(A) to be the set of choices of
coefficients such that the tropical polynomials have a common solution. We
prove that T R(A) is the tropicalization of the algebraic variety of solvable
systems and that its dimension can be computed in polynomial time. The
tropical resultant inherits a fan structure from the secondary fan of the Cayley configuration of A, and we present algorithms for the traversal of T R(A) in
this structure. We also present a new algorithm for recovering a Newton polytope from the support of its tropical hypersurface. We use this to compute the
Newton polytope of the sparse resultant polynomial in the case when T R(A)
is of codimension 1. Finally we consider the more general setting of specialized
tropical resultants and report on experiments with our implementations.
1. Introduction
We study generalizations of the problem of computing the Newton polytope of
the sparse resultant combinatorially, without first computing the resultant polynomial. The input is a tuple A = (A1 , A2 , . . . , Ak ) of integer point configurations
in Zn . The sparse resultant R(A) of A, or the variety of solvable systems, is the
closure in (C∗ )A1 × (C∗ )A2 × · · · × (C∗ )Ak of the collection of tuples of polynomials
(f1 , f2 , . . . , fk ) such that f1 = f2 = · · · = fk = 0 has a solution in (C∗ )n and each
fi has support Ai . This variety is irreducible and defined over Q [Stu94]. If R(A)
is a hypersurface, then it is defined by a polynomial, unique up to scalar multiple,
called the (sparse) resultant polynomial of A. Its Newton polytope is called the
resultant polytope of A.
In the hypersurface case, Sturmfels gave a combinatorial description of the resultant polytope [Stu94], giving rise to a combinatorial algorithm for computing
its vertices from the vertices of the secondary polytope of the Cayley configuration
Cay(A). A drawback of this construction is that the secondary polytope typically has far more vertices than the resultant polytope. There have been attempts
to compute the resultant polytopes without enumerating all vertices of the secondary polytope [EFK10]. A main contribution of our paper is an algorithm (Section 2.5) for traversing the tropicalization of R(A) as a subfan of the secondary
fan of Cay(A). This approach allows us to compute tropicalizations of resultant
varieties of arbitrary codimension.
The tropical resultant T R(A) consists of tuples of tropical polynomials having
a common solution. We show in Theorem 2.4 that T R(A) coincides with the
tropicalization of R(A). The tropical resultant is combinatorial in nature, and we
Date: March 26, 2013.
2010 Mathematics Subject Classification: 14T05, 13P15, 14M25, 52B20, 52B55
Keywords: tropical geometry, resultant, Newton polytope, computational geometry.
present in Theorem 2.9 a simple description of it as a union of polyhedral cones,
each of which is the sum of a positive orthant and a linear space.
In [DFS07], the tropical discriminant is described as a sum of a tropical linear
space and an ordinary linear space. This description carries over to the tropical
resultant when A is essential, and in particular R(A) is a hypersurface. Our description in Theorem 2.9 is different and also works for non-essential cases and
non-hypersurface cases. Moreover, it is simpler, and we do not need to compute a
nontrivial tropical linear space.
The tropicalization of a variety is a polyhedral fan of the same dimension as
the original variety. We derive a new formula for the codimension of the (tropical)
resultant in Theorem 2.23 and show that it can be computed in polynomial time
using the cardinality matroid intersection algorithm.
Specialized resultants are obtained by fixing some coefficients of fi ’s and considering the collection of other coefficients giving a polynomial system solvable in
the algebraic torus. In other words, the specialized resultants are intersections of
sparse resultants and subspaces parallel to coordinate subspaces. When the specialized coefficient values are generic, the tropicalization T RS (A) of the specialized
resultant is the stable intersection of the tropical resultant T R(A) with a coordinate subspace. This is a subfan of the restriction of the secondary fan of Cay(A)
to the subspace and can be computed by a fan traversal. The algorithms are significantly more complex and are described in Section 3. Moreover, using the results
from our concurrent work on tropical stable intersections [JY], we describe the specialized tropical resultant as a union of cones, each of which is the intersection of a
coordinate subspace and the sum of a positive orthant and a linear space.
Computation of resultants and specialized resultants, of which the implicitization
problem is a special case, is a classical problem in commutative algebra that remains
an active area. In the concurrent work [EFKP11] an algorithm for computing
Newton polytopes of specialized resultant polynomials using Sturmfels’ formula
and the beneath-beyond method is presented and implemented, and the work is
therefore highly relevant for our project. While the main focus of [EFKP11] is
the efficiency of the computation of the Newton polytopes of specialized resultant
polynomials, our main interest has been the geometric structure of secondary fans
which allows traversal of tropical resultants of arbitrary codimension.
The tropical description of a polytope P is a collection of cones whose union
is the support of the codimension one skeleton of the normal fan of P , with multiplicities carrying lengths of the edges of P . That is, the union is the tropical
hypersurface defined by P . For example, the tropical hypersurface of a zonotope is
the union of the dual hyperplanes (zones), and the tropical hypersurface of the secondary polytope of a point configuration contains codimension one cones spanned
by vectors in the Gale dual. See Section 2.3. The tropical description uniquely
identifies the polytope up to translation, and we consider it to be an equally important representation of a polytope as the V- and H-descriptions. Furthermore,
the conversion algorithms between these representations deserve the same attention
as other fundamental problems in convex geometry. A contribution of this paper
is an algorithm (Algorithm 4.1) for reconstructing normal fans of polytopes from
their tropical descriptions. We apply the algorithm to the tropical description of
resultant polytopes in Theorem 2.9 to recover the combinatorics of the resultant
polytope. From the normal fan, we can efficiently obtain the V-description of the
polytope.
All the algorithms described in this paper have been implemented in the software
Gfan [Jen]. Computational experiments and examples are presented in Section 5.
A list of open problems is presented in Section 6.
2. Resultants
Let A = (A1 , A2 , . . . , Ak ) where each Ai = {ai,1 , ai,2 , . . . , ai,mi } is a multi-subset
of Zn , and let m = m1 + m2 + · · · + mk . Throughout this paper, we assume that
mi ≥ 2 for all i. However, the points in Ai need not be distinct. This is important
for some applications such as implicitization. Let Q1, Q2, . . . , Qk be the convex hulls of A1, A2, . . . , Ak respectively. Let (C∗)^{Ai} denote the set of polynomials of the form Σ_{j=1}^{mi} cj x^{aij} in C[x1, x2, . . . , xn], where each cj is in C∗ := C\{0}. Let Z ⊆ Π_{i=1}^{k} (C∗)^{Ai} be the set consisting of tuples (f1, f2, . . . , fk) such that the system of equations f1 = f2 = · · · = fk = 0 has a solution in (C∗)^n.
Definition 2.1. The resultant variety, or the variety of solvable systems, is the closure Z̄ of Z in Π_{i=1}^{k} (C∗)^{Ai} and is denoted R(A).
The resultant variety is usually defined as a subvariety of Π_{i=1}^{k} C^{Ai} or its projectivization [GKZ94, Stu94], but we chose to work in Π_{i=1}^{k} (C∗)^{Ai} as tropicalizations are most naturally defined for subvarieties of tori.
2.1. A simple description of the tropical resultant and its multiplicities.
The tropical semiring T = (R, ⊕, ⊙) is the set of real numbers with minimum as
tropical addition ⊕ and usual addition as tropical multiplication ⊙. A tropical
(Laurent) polynomial F in n variables x = (x1, x2, . . . , xn) is a multiset of terms (c, a) or c ⊙ x^a where c ∈ R is the coefficient and a = (a1, a2, . . . , an) ∈ Z^n is the exponent. We will also write F = ⊕_{(c,a)∈F} (c ⊙ x^a). The support of F is the multiset of a's, and the Newton polytope of F is the convex hull of its support.
The tropical solution set T(F) of a tropical polynomial F is the locus of points x ∈ R^n such that the minimum is attained at least twice in the expression
⊕_{(c,a)∈F} (c ⊙ x^a) = min_{(c,a)∈F} (c + a1x1 + a2x2 + · · · + anxn).
In other words, a point x ∈ R^n is in T(F) if and only if the minimum for (1, x) · (c, a) is attained for two terms in F, which may be repeated elements. Therefore, T(F) is a (not necessarily pure dimensional) subcomplex of a polyhedral complex dual to the marked regular subdivision of the support of F induced by the coefficients c, consisting of duals of cells with at least two marked points. See Section 2.2 for definitions of subdivisions and marked points.
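To make the definition concrete, here is a minimal Python sketch (our illustration, not part of the paper or of the Gfan implementation): a tropical polynomial is stored as a list of (coefficient, exponent) terms, and a point x lies in T(F) exactly when the minimum is attained by at least two terms.

def tropical_eval(F, x):
    # Evaluate F at x in the min-plus sense: min over terms of c + a.x.
    return min(c + sum(ai * xi for ai, xi in zip(a, x)) for c, a in F)

def in_solution_set(F, x, tol=1e-9):
    # x is in T(F) iff the minimum is attained at least twice (repeated
    # terms counted separately, as in the definition above).
    values = [c + sum(ai * xi for ai, xi in zip(a, x)) for c, a in F]
    m = min(values)
    return sum(1 for v in values if abs(v - m) <= tol) >= 2

# F = 0 (+) X (+) Y, i.e. min(0, x, y); terms are (coefficient, exponent).
F = [(0, (0, 0)), (0, (1, 0)), (0, (0, 1))]
assert in_solution_set(F, (0.0, 0.0))      # all three terms are minimal
assert not in_solution_set(F, (1.0, 2.0))  # unique minimum: the constant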
When F contains no repeated elements, the tropical solution set coincides with the non-smooth locus of the piecewise-linear function from R^n to R given by x ↦ F(x) = ⊕_{(c,a)∈F} (c ⊙ x^a), which is also called a tropical hypersurface. In particular, if all coefficients of F are the same and if F contains no repeated elements, then the tropical hypersurface is the codimension one skeleton of the inner normal fan of the Newton polytope of F.
Let A = (A1, A2, . . . , Ak) be as before, and let R^{Ai} denote the set of tropical polynomials of the form ⊕_{j=1}^{mi} (cij ⊙ x^{⊙aij}).
Definition 2.2. The tropical resultant T R(A) of A is the subset of Rm , or RA1 ×
RA2 × · · · × RAk , consisting of tuples (F1 , F2 , . . . , Fk ) such that the tropical solution
sets of F1 , F2 , . . . , Fk have a nonempty common intersection in Rn .
We can also consider the tropical resultant as a subset of Π_{i=1}^{k} R^{Ai}/(1, 1, . . . , 1)R, but we prefer to work with R^m in this paper.
For two univariate tropical polynomials, the term “tropical resultant” had been
used by other authors to describe a tropical polynomial analogous to ordinary
resultants. In [Oda08] it is defined as the tropical determinant of the tropical
Sylvester matrix. In [Tab08] it is defined as the tropicalization of the ordinary
resultant polynomial. In this paper the term “tropical resultant” always refers to
a fan and never a tropical polynomial.
Definition 2.3. Let k be a field and I ⊆ k[x1 , . . . , xn ] an ideal. The tropical
variety T (I) of I, or the tropicalization of V (I), is a polyhedral fan with support
T (I) := {ω ∈ Rn : the initial ideal inω (I) contains no monomials}.
For ω in the relative interior of a cone Cω ∈ T (I) we define its multiplicity as
multω(T(I)) := dim_k( k[Z^n ∩ Cω^⊥] / ⟨inω(I)⟩ )
when the right hand side is finite, in particular when Cω is a Gröbner cone of the
same dimension as T (I).
In this definition we refer to the “constant coefficient” initial ideal as in [BJS+ 07],
where we disregard any valuation of the ground field even if it is non-trivial, except that we are picking out the terms with smallest ω-degree. If the ideal I is
homogeneous, T (I) gets a fan structure from the Gröbner fan of I. When Cω is
the smallest Gröbner cone in T (I) containing ω, the initial ideal inω (I) is homogeneous with respect to any weight in the linear span of Cω . Hence after multiplying
each homogeneous element of inω (I) by a Laurent monomial they generate an ideal
⟨inω(I)⟩ in the ring k[Z^n ∩ Cω^⊥] of Laurent polynomials in x1, x2, . . . , xn which are
of degree zero with respect to the weight vector ω.
The following is the first main result toward a combinatorial description of the
tropicalization of R(A).
Theorem 2.4. The support of the tropicalization of the resultant variety R(A)
coincides with the tropical resultant T R(A).
A consequence is that we may identify T R(A) with the tropicalization of R(A)
and we define its multiplicities accordingly.
We will use incidence varieties to give a proof of Theorem 2.4. Let the incidence
variety be
(1)  W := {(f1, f2, . . . , fk, x) : fi(x) = 0 for all i} ⊆ Π_{i=1}^{k} (C∗)^{Ai} × (C∗)^n,
and let the tropical incidence variety be the set
T W := {(F1, F2, . . . , Fk, X) : X ∈ T(Fi) for all i} ⊆ Π_{i=1}^{k} R^{Ai} × R^n.
The tropical incidence variety is the tropical prevariety [BJS+07] defined by the
tropicalization of the polynomials f1 , f2 , . . . , fk , where fi is considered as a polynomial in mi + n variables whose support in the n variables is Ai and whose mi terms
have indeterminate coefficients. Even if Ai contains repeated points, the support
of fi in mi + n variables has no repeated points.
Lemma 2.5. The polynomials f1 , f2 , . . . , fk form a tropical basis for the incidence
variety W , i.e. the tropical incidence variety coincides with the support of the tropicalization of the incidence variety.
Proof. Let K be the field of Puiseux series in t with complex coefficients. By
the Fundamental Theorem of Tropical Geometry [JMM08, MS], ω ∈ T (I) ∩ Qn
if and only if ω = val(x) for some K-valued point x in the variety of I. Since
our fans are rational, it suffices to check that they agree on rational points. Let
(F1 , F2 , . . . , Fk , X) be a rational point in the tropical prevariety, i.e. F1 , F2 , . . . , Fk
are (coefficient vectors of) tropical polynomials with support sets A1 , A2 , . . . , Ak ,
and X ∈ Q^n is a tropical solution for each Fi. We will show that this tuple can be lifted to a K-valued point in W, by first lifting X, then F1, F2, . . . , Fk. Let x0 = (t^{X1}, t^{X2}, . . . , t^{Xn}) ∈ (K∗)^n. Then Fi ∈ Q^{mi} is contained in the tropical hypersurface of fi(x0) considered as a polynomial in the indeterminate coefficients.
By the hypersurface case of the Fundamental Theorem (also known as Kapranov’s
Theorem) there is a tuple ci ∈ (K ∗ )mi of coefficients of fi with val(ci ) = Fi giving
fi (x0 ) = 0. Therefore (F1 , F2 , . . . , Fk , X) can be lifted to the incidence variety and
lies in the tropicalization of the incidence variety.
A consequence of Lemma 2.5 is that we may identify the tropical incidence
variety with the support of the tropicalization of W and we define its multiplicities
accordingly.
The following lemma follows immediately from the definitions. It is a tropical
counterpart of an analogous statement for classical resultants.
Lemma 2.6. The tropical resultant is the projection of the tropical incidence variety onto the first factor.
Let π be the projection from Rm × Rn , where the incidence variety lies, to the
first factor Rm . We can now prove Theorem 2.4.
Proof of Theorem 2.4. The resultant variety R(A) is obtained from the incidence
variety W by projecting onto the first factor Π_{i=1}^{k} (C∗)^{Ai} and taking the closure.
This proves the first of the following equalities.
T (R(A)) = T (π(W )) = π(T (W )) = π(T W ) = T R(A)
The second follows from [ST08] which says that the tropicalization of the closure
of a projection of W is the projection of the tropicalization of W . The third is
Lemma 2.5, and the last is Lemma 2.6.
For each i = 1, 2, . . . , k, let P̃i be the Newton polytope of fi in R^{mi} × R^n, which is in turn embedded in R^m × R^n. The tropical incidence variety is equal to the intersection T(P̃1) ∩ · · · ∩ T(P̃k), which is a union of normal cones of P̃1 + · · · + P̃k associated to faces that are Minkowski sums of faces of dimension at least one. The vertices of P̃1, . . . , P̃k together linearly span an m-dimensional subspace in R^m × R^n. Projecting this onto R^m takes each P̃i isomorphically onto the standard simplex in R^{mi} which is embedded in R^m. In particular, the Minkowski sum P̃1 + · · · + P̃k projects isomorphically onto the Minkowski sum of standard simplices lying in orthogonal subspaces. It follows that every maximal cone in the tropical incidence variety appears uniquely as the intersection of some normal cones to edges of P̃1, P̃2, . . . , P̃k.
The tropical incidence variety is
(2)  ⋃_{(E1,E2,...,Ek)} ( ⋂_{i=1}^{k} N(Ẽi) )
where the union runs over all choices of pairs Ei of points from Ai and N(Ẽi) denotes the inner normal cone of the corresponding edge Ẽi in P̃i. Even if the pair Ei does not form an edge in the convex hull Qi of Ai, the pair Ẽi is always an edge of the simplex P̃i, so N(Ẽi) has the right dimension.
Lemma 2.7. Every maximal cone in the tropical incidence variety T W = T(W) has multiplicity one.
Proof. Since every vertex of every P̃i has its own coordinate, the dimension of a face of the Minkowski sum P̃1 + P̃2 + · · · + P̃k minimizing a vector ω ∈ R^m × R^n is the sum of the dimensions of the faces of each P̃i with respect to ω. The dimension of the incidence variety is m + n − k and therefore, for a generic ω ∈ T(W), the face of P̃1 + P̃2 + · · · + P̃k minimizing ω has dimension k and must be a zonotope. Consequently the forms inω(f1), inω(f2), . . . , inω(fk) are binomials, each with an associated edge vector vi ∈ Z^{m+n}. The vectors v1, v2, . . . , vk generate Cω^⊥ and after multiplying each inω(fi) by a monomial it ends up in ⟨inω(I)⟩ ⊆ C[Z^{m+n} ∩ Cω^⊥]. Hence using the binomials to rewrite modulo ⟨inω(I)⟩ we get that dim_C( C[Z^{m+n} ∩ Cω^⊥] / ⟨inω(I)⟩ ) is bounded by the index of the sublattice generated by v1, v2, . . . , vk in Z^{m+n} ∩ Cω^⊥. If we write the edge vectors as columns of a matrix, then the matrix contains a full-rank identity submatrix, so the sublattice has index one.
The tropical resultant is the projection of the tropical incidence variety, so
T R(A) = ⋃_{(E1,E2,...,Ek)} π( ⋂_{i=1}^{k} N(Ẽi) ).
The Cayley configuration Cay(A) of a tuple A = (A1 , A2 , . . . , Ak ) of point configurations in Zn is defined to be the point configuration
Cay(A) = ({e1 } × A1 ) ∪ · · · ∪ ({ek } × Ak )
in Zk × Zn . We will also use Cay(A) to denote a matrix whose columns are points
in the Cayley configuration. See Example 2.10.
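As a quick illustration (our own helper function, not taken from Gfan), the matrix Cay(A) is assembled by stacking the indicator vector ei on top of each point of Ai:

def cayley(configs):
    # configs is a list of point configurations A_1, ..., A_k in Z^n;
    # returns the (k + n) x m Cayley matrix as a list of rows.
    k = len(configs)
    n = len(configs[0][0])
    columns = []
    for i, Ai in enumerate(configs):
        for a in Ai:
            e = [0] * k
            e[i] = 1
            columns.append(e + list(a))
    return [[col[r] for col in columns] for r in range(k + n)]

# The tuple A of Example 2.10 below reproduces the 5 x 9 matrix shown there.
A = [[(0, 0), (0, 1), (1, 0)],
     [(0, 0), (1, 0), (2, 1)],
     [(0, 0), (0, 1), (1, 2)]]
for row in cayley(A):
    print(row)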
Lemma 2.8. Let E = (E1, E2, . . . , Ek) be a tuple of pairs from A1, A2, . . . , Ak respectively. Then the following cones coincide:
π( ⋂_{i=1}^{k} N(Ẽi) ) = R≥0{eij : aij ∉ Ei} + rowspace(Cay(A)).
Proof. Let E be fixed. The left hand side consists of tuples of tropical polynomials (F1, F2, . . . , Fk) ∈ Π_{i=1}^{k} R^{Ai} for which there is a point w ∈ R^n attaining the minimum for Fi at Ei for every i.
On the other hand, the cone R≥0{eij : aij ∉ Ei} consists of all F = (F1, F2, . . . , Fk) such that the minimum for Fi evaluated at the point 0 ∈ R^n has value 0 and is attained at Ei for every i. The tropical solution sets remain the same if the coefficients of Fi are changed by a tropical scalar multiple, which corresponds to adding to F a multiple of the i-th row of Cay(A). For w ∈ R^n and F ∈ R^A,
F(x − w) = min_{(c,a)∈F} ( c + a · (x − w) ) = (F − w · A)(x),
where A denotes the matrix whose columns are points in A1 ∪ · · · ∪ Ak, i.e. A consists of the last n rows of Cay(A), so
T(F) + w = T(F − wA).
Therefore, changing the coefficients (F1, F2, . . . , Fk) by an element in the row space of Cay(A) has the effect of tropically scaling the Fi's and translating all the tropical solution sets together. Thus the set on the right hand side consists of all tuples (F1, F2, . . . , Fk) having a point w ∈ R^n achieving the minimum for Fi at Ei for every i.
(F1 , F2 , . . . , Fk ) having a point w ∈ Rn achieving the minimum for Fi at Ei for
every i.
The following result gives a simple description of the tropical resultant as a union
of cones with multiplicities.
Theorem 2.9. The tropical resultant of A is the set
(3)  T R(A) = ⋃_E ( R≥0{eij : aij ∉ Ei} + rowspace(Cay(A)) )
where E = (E1, E2, . . . , Ek) and each Ei consists of two elements in Ai. The multiplicity of the cone associated to E is the index of the lattice spanned by the rows of Cay(E) in rowspace(Cay(E)) ∩ Z^m.
The set described in the right hand side of (3) may not have a natural fan structure.
See Example 2.18(b).
For a generic ω ∈ T R(A), we can compute the multiplicity of T R(A) at ω as
follows. We say that ω is generic if all the cones on the right hand side of (3)
that contain ω are maximal-dimensional, contain ω in their relative interior, and
have the same span. Then the multiplicity at ω is the sum of multiplicities of the
cones that contain ω. The generic points form a dense open set in T R(A), and the
lower-dimensional cones on the right hand side do not contribute to the multiplicity.
Proof of Theorem 2.9. The set theoretic statement follows immediately from (2) and Lemmas 2.6 and 2.8.
Let σ = ⋂_{i=1}^{k} N(Ẽi) be the cone corresponding to E in the incidence variety, and τ = π(σ). Using the refinement in [CTY10] of the multiplicity formula from tropical elimination theory [ST08], the multiplicity of τ in the tropical resultant is the lattice index [Lτ : π(Lσ)], where Lτ = Rτ ∩ Z^m and Lσ = Rσ ∩ Z^{m+n}. The lattice Lσ is defined by the following equations on (c, x) ∈ Z^{m+n}:
c · (eij − eik) + x · (aij − aik) = 0 for {aij, aik} = Ei,
and is spanned by the integer points in the lineality space of the tropical incidence variety and the standard basis vectors eij for aij ∉ Ei. The lattice points in the lineality space of the incidence variety are spanned by the rows of the (k + n) × (m + n) matrix [ Cay(A) | B ], where the block B in the last n columns has zeros in the first k rows and −I_n in the last n rows.
Hence π(Lσ) is spanned by the rows of Cay(A) and the eij's for aij ∉ Ei.
The first summand in (3) plus the linear span of the first k rows of Cay(A)
is a tropical linear space obtained as a Cartesian product of tropical hyperplanes.
Hence Theorem 2.9 can be rephrased as follows. Let C be the matrix consisting of
the first k rows of Cay(A), so the kernel of C is defined by equations of the form
ci,1 + ci,2 + · · · + ci,mi = 0 for i = 1, 2, . . . , k. Then the tropical resultant is the set
(4)  T R(A) = T(ker(C)) + rowspace([A1 | A2 | · · · | Ak]).
The tropical linear space here is trivial to compute, as it is described by the first
summand of (3). By contrast the tropical linear space computation required for
tropical discriminants in [DFS07] can be challenging. The state of the art in computing tropical linear spaces is the work of Rincón [Rin].
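Since each cone in (3) is determined by the tuple E alone, the description is straightforward to enumerate. The Python sketch below (our illustration, not Gfan's traversal code) lists, for each tuple of pairs E, the coordinates (i, j) whose unit vectors eij span the orthant part of CE; the common lineality space rowspace(Cay(A)) is left implicit.

from itertools import combinations, product

def resultant_cone_rays(configs):
    # Yield (E, rays): E picks a pair of indices from each A_i; rays lists
    # the coordinates (i, j) with a_ij not in E_i, as in Theorem 2.9.
    pair_choices = [list(combinations(range(len(Ai)), 2)) for Ai in configs]
    for E in product(*pair_choices):
        rays = [(i, j) for i, Ai in enumerate(configs)
                for j in range(len(Ai)) if j not in E[i]]
        yield E, rays

# For the tuple A of Example 2.10 below (three configurations of three
# points each) there are 3 * 3 * 3 = 27 cones, the count quoted there.
A = [[(0, 0), (0, 1), (1, 0)],
     [(0, 0), (1, 0), (2, 1)],
     [(0, 0), (0, 1), (1, 2)]]
print(sum(1 for _ in resultant_cone_rays(A)))   # 27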
Example 2.10. Consider the tuple A = (A1 , A2 , A3 ) of the following point configurations in Z2 :
(5)  A1 = {(0, 0), (0, 1), (1, 0)},  A2 = {(0, 0), (1, 0), (2, 1)},  A3 = {(0, 0), (0, 1), (1, 2)}.
The Cayley configuration Cay(A) consists of the columns of the following matrix, which we also denote Cay(A):

Cay(A) =
1 1 1 0 0 0 0 0 0
0 0 0 1 1 1 0 0 0
0 0 0 0 0 0 1 1 1
0 0 1 0 1 2 0 0 1
0 1 0 0 0 1 0 1 2

The corresponding system of polynomials consists of
(6)  f1 = c11 + c12 y + c13 x,
     f2 = c21 + c22 x + c23 x^2 y,
     f3 = c31 + c32 y + c33 x y^2.
The point
(0, 0, 0, 0, 1, 5, 0, 1, 5)
is in the tropical resultant variety because the tropical hypersurfaces of the three
tropical polynomials
(7)  F1 = 0 ⊕ X ⊕ Y,
     F2 = 0 ⊕ (1 ⊙ X) ⊕ (5 ⊙ X^{⊙2} ⊙ Y),
     F3 = 0 ⊕ (1 ⊙ Y) ⊕ (5 ⊙ X ⊙ Y^{⊙2})
contain the common intersection points (−1, −1) and (−2, −2). See Figure 1.
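This membership is easy to check numerically; the following short Python verification (ours) confirms that each Fi from (7) attains its minimum at least twice at both points.

def attains_min_twice(F, x):
    # F is a list of (coefficient, exponent) terms of a tropical polynomial.
    vals = sorted(c + a[0] * x[0] + a[1] * x[1] for c, a in F)
    return vals[0] == vals[1]

F1 = [(0, (0, 0)), (0, (1, 0)), (0, (0, 1))]
F2 = [(0, (0, 0)), (1, (1, 0)), (5, (2, 1))]
F3 = [(0, (0, 0)), (1, (0, 1)), (5, (1, 2))]
for x in [(-1, -1), (-2, -2)]:
    assert all(attains_min_twice(F, x) for F in (F1, F2, F3))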
Consider the incidence variety defined by the ideal
I = ⟨f1, f2, f3⟩ ⊆ C[c^{±1}, x^{±1}, y^{±1}].
Figure 1. A tropical hypersurface arrangement and its dual regular mixed subdivision (RMS) of the Minkowski sum of point configurations. The mixed cells are shaded. See Examples 2.10 and 2.18(a).
The resultant variety is obtained by eliminating x and y from the system, i.e. it is
defined by the ideal I ∩ C[c±1 ]. In this case, the resultant variety is a hypersurface
defined by the resultant polynomial
c12^3 c23^3 c31^3 − 2 c11 c12^2 c23^3 c31^2 c32 − c12^2 c13 c22 c23^2 c31 c32^2 + c11^2 c12 c23^3 c31 c32^2 −
c12 c13^2 c21 c23^2 c32^3 + c11 c12 c13 c22 c23^2 c32^3 + 3 c12^2 c13 c21 c23^2 c31^2 c33 + c11 c12^2 c22 c23^2 c31^2 c33 +
2 c12^2 c13 c22^2 c23 c31 c32 c33 − c11 c12 c13 c21 c23^2 c31 c32 c33 − c11^2 c12 c22 c23^2 c31 c32 c33 +
2 c12 c13^2 c21 c22 c23 c32^2 c33 − 2 c11 c12 c13 c22^2 c23 c32^2 c33 − c12^2 c13 c22^3 c31 c33^2 +
3 c12 c13^2 c21^2 c23 c31 c33^2 − c11 c12 c13 c21 c22 c23 c31 c33^2 − c11^3 c21 c23^2 c31 c33^2 −
c12 c13^2 c21 c22^2 c32 c33^2 + c11 c12 c13 c22^3 c32 c33^2 + c11 c13^2 c21^2 c23 c32 c33^2 −
c11^2 c13 c21 c22 c23 c32 c33^2 + c13^3 c21^3 c33^3 − 2 c11 c13^2 c21^2 c22 c33^3 + c11^2 c13 c21 c22^2 c33^3.
It is homogeneous with respect to the rows of Cay(A). Its Newton polytope is
four-dimensional, has f-vector (15, 40, 38, 13, 1) and lies in an affine space parallel
to the kernel of Cay(A).
The tropical resultant is an eight-dimensional fan in R9 with a five-dimensional
lineality space rowspace(Cay(A)). As a subfan of the secondary fan of Cay(A),
it consists of 89 (out of 338) eight-dimensional secondary cones, which can be
coarsened to get the 40 normal cones dual to edges of the resultant polytope. In
other words, the 40 normal cones can be subdivided to obtain the 89 secondary
cones.
In this example, the point configuration is essential, so T R(A) is equal to the
tropical discriminant of Cay(A), which is described in [DFS07] as
T (ker Cay(A)) + rowspace(Cay(A)).
With the Gröbner fan structure, the tropical linear space T(ker Cay(A)) is a 4-dimensional fan with f-vector (1, 15, 66, 84), so in the reconstruction of the Newton
polytope, we have to process 84 maximal cones, compared with 27 cones from
our description in (3) or (4). For larger examples, computing tropical linear spaces
becomes a challenging problem, while our description remains simple. In both cases,
however, the main computational difficulty is the reconstruction of the Newton
polytope from the tropical hypersurface.
2.2. Secondary fan structure and links in tropical resultants. Let A ∈
Zd×m be an integer matrix with columns a1 , a2 , . . . , am ∈ Zd . We will also denote
by A the point configuration {a1 , a2 , . . . , am }. We allow repeated points in A, as
we consider the points to be labeled by the set {1, 2, . . . , m}, and every column of
A gets a distinct label.
Following [GKZ94, Section 7.2A], a subdivision of A is defined as a family ∆ =
{Ci ⊆ A : i ∈ I} of subsets of A such that
(1) dim(conv(Ci)) = dim(conv(A)) for each i ∈ I,
(2) conv(A) = ⋃_{i∈I} conv(Ci), and
(3) for every i, j ∈ I, the intersection of conv(Ci ) and conv(Cj ) is a face of
both, and Ci ∩ conv(Cj ) = Cj ∩ conv(Ci ).
This notion is also called a marked subdivision by some authors, as it depends not only on the polyhedra conv(Ci) but also on the labeled sets Ci. The elements in ⋃_{i∈I} Ci are called marked. If F is a face of conv(Ci) for some Ci ∈ ∆, then the labeled set Ci ∩ F is called a cell of the subdivision. The sets Ci are the maximal cells.
For two subdivisions ∆ and ∆′ of A, we say that ∆ refines ∆′ or ∆′ coarsens ∆
if every Ci ∈ ∆ is contained in some Cj′ ∈ ∆′ . A subdivision is a triangulation if no
proper refinement exists, and equivalently, if every maximal cell contains exactly
dim(conv(A)) + 1 elements.
Let ω : A → R be an arbitrary real valued function on A, called a weight vector.
We can define a subdivision of A induced by ω as follows. Consider the unbounded
polyhedron P = conv{(a, ω(a))} + R≥0 {ed+1 } in Rd+1 , and let {Fi : i ∈ I} be its
bounded facets. Then the induced subdivision is {Ci : i ∈ I} where Ci = {a ∈
A : (a, ω(a)) ∈ Fi}. A subdivision of A is regular or coherent if it is induced by some
weight vector ω. The partition of the space of weight vectors RA according to
induced subdivisions is a fan, called the secondary fan of A.
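The induced subdivision can be computed exactly as described, by collecting the lower facets of the lifted configuration. The Python sketch below (ours) uses scipy's Qhull wrapper; since Qhull returns simplicial facets, a weight inducing a triangulation yields exactly the maximal cells, while coarser subdivisions come back refined into simplices.

import numpy as np
from scipy.spatial import ConvexHull

def regular_subdivision(points, weights):
    # Lift each point a to (a, w(a)) and keep the facets whose outer
    # normal has negative last coordinate, i.e. the lower facets.
    lifted = np.column_stack([np.asarray(points, dtype=float), weights])
    hull = ConvexHull(lifted)
    return [sorted(simplex.tolist())
            for simplex, eq in zip(hull.simplices, hull.equations)
            if eq[-2] < 0]

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(regular_subdivision(pts, [0, 0, 0, 1]))
# two triangles, e.g. [[0, 1, 2], [1, 2, 3]]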
Following [GKZ94, Section 7.1D], we can construct the secondary polytope of A
as follows. For a triangulation T of a point configuration A, define the GKZ-vector
φT ∈ RA as
φT(a) := Σ_{σ∈T : a∈σ} vol(σ)
where the summation is over all maximal cells σ of T containing a.
Definition 2.11. The secondary polytope Σ(A) is the convex hull in RA of the
vectors φT where T runs over all triangulations of A.
Theorem 2.12. [GKZ94, § 7.1, Theorem 1.7] The vertices of Σ(A) are precisely
the vectors φT for which T is a regular triangulation of A. The normal fan of the
secondary polytope Σ(A) is the secondary fan of A. The normal cone of Σ(A) at
φT is the closure of the set of all weights w ∈ RA which induce the triangulation T .
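As a one-dimensional illustration (ours), the configuration A = {0, 1, 2} on a line has exactly two triangulations, and their GKZ-vectors are the two vertices of the secondary polytope, a segment:

A = [0, 1, 2]
triangulations = [
    [(0, 1), (1, 2)],   # uses the middle point
    [(0, 2)],           # the middle point is unmarked
]
for T in triangulations:
    # vol of a 1-dimensional cell is its length; phi_T(a) sums the
    # volumes of the maximal cells containing the point with index a.
    phi = [sum(abs(A[s[1]] - A[s[0]]) for s in T if a in s)
           for a in range(len(A))]
    print(phi)
# prints [1, 2, 1] and [2, 0, 2]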
The link of a cone σ ⊆ Rm at a point v ∈ σ is
linkv (σ) = {u ∈ Rm | ∃δ > 0 : ∀ε between 0 and δ : v + εu ∈ σ}.
The link of a fan F at a point v in the support of F is the fan
linkv (F ) = {linkv (σ) | v ∈ σ ∈ F }.
For any cone τ ∈ F , any two points in the relative interior of τ give the same link
of the fan, denoted linkτ (F ). If a maximal cone τ ∈ F has an assigned multiplicity,
then we let linkv (τ ) ∈ linkv (F ) inherit it.
We will first show that the link of the secondary fan at a point is a common
refinement of secondary fans, or, more precisely, that a face of a secondary polytope
is a Minkowski sum of secondary polytopes. For a sub-configuration C ⊆ A, we
can consider the secondary polytope of C as embedded in RA by setting φT (a) = 0
for a ∈ A\C for every triangulation T of C. On the other hand, the secondary
fan of C embeds in RA with lineality space containing the coordinate directions
corresponding to a ∈ A\C.
Lemma 2.13. Let A be a point configuration, ω ∈ RA , and ∆ω be the regular
subdivision of A induced by ω. Then the face Fω of the secondary polytope of A
supported by ω is the Minkowski sum of secondary polytopes of maximal cells in
∆ω .
Proof. Let ω ′ ∈ RA be a generic weight vector and p be the vertex of the Minkowski
sum picked out by ω ′ . For all sufficiently small ε > 0, the triangulation ∆ω+εω′
refines the subdivision ∆ω . Let pi be the GKZ-vector of the triangulation of the
i-th maximal cell induced by (the restriction of) the vector ω + εω ′ , which is the
same as the triangulation induced by ω′ because ω induces the trivial subdivision on each cell of ∆ω. Then the GKZ-vector of ∆_{ω+εω′} is the sum Σ_i pi. Hence the vertex of Fω in direction ω′ is Σ_i pi. We can then conclude that the two polytopes
are the same since they have the same vertex in each generic direction.
We now define mixed subdivisions as in [DLRS10]. For point configurations
A1 , A2 , . . . , Ak in Rn , with Ai = {ai,j : 1 ≤ j ≤ mi }, the Minkowski sum
Σ_{i=1}^{k} Ai = {a1,j1 + a2,j2 + · · · + ak,jk : 1 ≤ ji ≤ mi}
is a configuration of m1 m2 · · · mk points labeled by [m1 ] × [m2 ] × · · · × [mk ].
Definition 2.14. A subset of labels is a mixed cell if it is a product of labels
J1 × J2 × · · · × Jk where Ji is a nonempty subset of [mi ], and it is fully mixed if in
addition Ji contains at least two elements for every i = 1, 2, . . . , k. A subdivision
of the Minkowski sum Σ_{i=1}^{k} Ai is mixed if every maximal cell is labeled by a mixed
cell.
A mixed subdivision of Σ_{i=1}^{k} Ai is also referred to as a mixed subdivision of the
tuple A = (A1 , A2 , . . . , Ak ). Our definition of fully mixed cell differs from that of
[DFS07, Section 6] where it is required that conv(ai,j : j ∈ Ji ) has affine dimension
at least one, while we only require that Ji contains at least two elements. These
two definitions coincide if none of the Ji ’s contains repeated points.
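In code (a trivial helper of ours), a cell recorded by its label (J1, . . . , Jk) is fully mixed precisely when every Ji has at least two elements:

def is_fully_mixed(cell):
    # cell is a tuple of index sets (J_1, ..., J_k), one per configuration.
    return all(len(Ji) >= 2 for Ji in cell)

assert is_fully_mixed(({0, 1}, {1, 2}))
assert not is_fully_mixed(({0}, {1, 2}))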
A mixed subdivision is called regular if it is induced by a weight vector
w : Σ_{i=1}^{k} Ai → R, where w : Σ_{i=1}^{k} ai,ji ↦ Σ_{i=1}^{k} wi,ji
for some (w1 , w2 , . . . , wk ) ∈ Rm1 × Rm2 × · · · × Rmk . In [Stu94] a regular mixed
subdivision (RMS) is also called a coherent mixed decomposition.
Theorem 2.15. [Stu94, Theorem 5.1] For a subdivision ∆ of Cay(A), the collection of mixed cells of the form Σ_{i=1}^{k} Ci such that Ci ⊆ Ai and ⋃_{i=1}^{k} Ci is a maximal cell of ∆ forms a mixed subdivision of Σ_{i=1}^{k} Ai. This gives a one-to-one correspondence between the regular subdivisions of Cay(A) and RMSs of Σ_{i=1}^{k} Ai. Moreover the
partition of weight vectors (w1 , w2 , . . . , wk ) ∈ Rm1 × Rm2 × · · · × Rmk according to
the induced RMS coincides with the secondary fan of Cay(A).
From our description of tropical resultants, we get the following result which was
proven for the resultant hypersurfaces in [Stu94, Theorem 5.2] and stated for the
essential configurations with no repeated points in [DFS07, Proposition 6.8]. See
Remark 2.25 for a definition of essential.
Theorem 2.16. The tropical resultant is a subfan of the secondary fan of the Cayley configuration Cay(A1 , A2 , . . . , Ak ), consisting of the cones whose corresponding
mixed subdivision contains a fully mixed cell.
The multiplicities of secondary cones in the tropical resultant will be computed
in Proposition 2.20 below.
Proof. For a tropical polynomial F ∈ RA the tropical solution set T (F ) is dual to
the cells with at least two elements in the subdivision of A induced by the coefficients
of F . More precisely, by the definition of tropical solution sets, w ∈ T (F ) if
and only if (1, w) is an inner-normal vector for the convex hull of lifted points
{(c, a) ∈ Rn+1 : c ⊙ xa is a term in F } supporting at least two points of A. The
two points supported need not have distinct coordinates a.
Let (F1, F2, . . . , Fk) ∈ R^{A1} × R^{A2} × · · · × R^{Ak}. The union of tropical solution sets ⋃_{i=1}^{k} T(Fi) inherits a polyhedral complex structure from the common refinement of the completions of the T(Fi) to R^n, which is dual to the RMS of A induced by the
coefficients of (F1 , F2 , . . . , Fk ). The tuple (F1 , F2 , . . . , Fk ) is in the tropical resultant
if and only if the tropical solution sets have a common intersection, which holds if
and only if there is a fully mixed cell in the dual RMS.
The tropical resultant is a subfan of the secondary fan. It is pure and connected
in codimension one, so we can compute it by traversing, as in [BJS+ 07]. To traverse
the resultant fan, we need to know how to find the link of the fan at a cone.
Proposition 2.17. Let A = (A1 , A2 , . . . , Ak ). The support of the link at a point
ω of the tropical resultant T R(A) is a union of tropical resultants corresponding to
sub-configurations of fully mixed cells in the RMS ∆ω of A induced by ω.
Proof. By definition, a point u is in the link if and only if ω + εu induces a RMS
with a fully mixed cell for all sufficiently small ε > 0. This happens if and only if at
least one of the fully mixed cells in ∆ω is subdivided by u into a RMS with a fully
mixed cell, i.e. u is in the tropical resultant of the sub-configurations of fully mixed
cells.
Example 2.18. Let A be as in Example 2.10.
(a) The link at the point (0, 0, 0, 0, 1, 5, 0, 1, 5) of the tropical resultant is a
union of two hyperplanes whose normal vectors are:
(0, −1, 1, −1, 1, 0, 1, −1, 0) and (0, 0, 0, 0, 1, −1, 0, −1, 1)
respectively. They are the resultant varieties of the sub-configurations of
the two fully mixed cells in Figure 1.
(b) The link at the point (0, 0, 0, 0, −1, −1, 0, 0, 1) consists of four rays modulo
lineality space. Figure 2 shows the induced mixed subdivision, which contains two fully mixed cells. The resultant of one fully mixed cell consists
of three rays (modulo lineality), and the resultant of the other fully mixed cell consists of two rays. They overlap along a common ray.
Figure 2. The tropical solution sets at (0, 0, 0, 0, −1, −1, 0, 0, 1) and the corresponding dual RMS in Example 2.18(b).
The following lemma follows immediately from the definition of induced or regular subdivisions and shows that the description of the tropical resultant as a union
of cones in Theorem 2.9 is somewhat compatible with the secondary fan structure.
For any tuple E = (E1, E2, . . . , Ek) of pairs Ei ⊂ Ai, let CE := R≥0{eij : aij ∉ Ei} + rowspace(Cay(A)) be the cone as in Theorem 2.9.
Lemma 2.19. For each tuple E as above, the cone CE is a union of secondary cones of Cay(A) corresponding to mixed subdivisions of Σ_{i=1}^{k} Ai having a mixed cell containing Σ_{i=1}^{k} Ei.
Let σ be a secondary cone of Cay(A) which is a maximal cone in the tropical
resultant T R(A), and let ∆σ be the corresponding regular mixed subdivision. Then
all the fully mixed cells in ∆σ are of the form Σ_{i=1}^{k} Ei where E is a tuple of pairs as above. Otherwise σ is not maximal in T R(A).
Proposition 2.20. The multiplicity of the tropical resultant T R(A) at a secondary
cone σ of Cay(A) is the sum of multiplicities of cones CE (given in Theorem 2.9)
over all tuples E of pairs forming a mixed cell in ∆σ.
Proof. By Lemma 2.19, for each tuple E of pairs, the cone CE contains σ if and
only if Σ_{i=1}^{k} Ei is a mixed cell in ∆σ. Otherwise CE is disjoint from the interior of
σ. The multiplicity of σ is the sum of multiplicities of CE ’s containing σ.
The edges of the resultant polytope are normal to the maximal cones in the
tropical resultant, and Proposition 2.20 can be used to find the lengths of the edges.
From this description, one can derive Sturmfels’ formula [Stu94, Theorem 2.1] for
the vertices of the resultant polytope.
2.3. Tropical description of secondary polytopes. We will give a tropical
description of secondary polytopes of arbitrary point configurations and show how
tropical resultants fit in.
Figure 3. A projective drawing of the tropical hypersurface of the secondary polytope of the Cayley configuration of two 1-dimensional configurations in Example 2.22. The tropical resultant is shown in color (or in bold). A vertex labeled ij represents the vector eij in R^6 = R^{A1} × R^{A2}, and an edge between ij and kl represents the cone R≥0{eij, ekl} + rowspace(Cay(A)). Compare with dual pictures in [Stu94, Figure 2] and [EFK10, Figure 3].
Proposition 2.21. Let A be a d × m integer matrix whose columns affinely span
an r-dimensional space. The tropical hypersurface of the secondary polytope of the
columns of A is the set
⋃_{I ⊂ {1,...,m}, |I| = r+2} ( R≥0{ei : i ∉ I} + rowspace(A) + R{1} ),
where 1 denotes the all-ones vector in R^m.
Proof. Let ω ∈ Rm , and let ∆ω be the regular subdivision of the columns of A
induced by ω. The weight vector ω is in the tropical hypersurface of the secondary polytope if and only if ∆ω is not a triangulation, which happens if and only if there exists a maximal cell of ∆ω containing at least r + 2 points of A. For an r + 2-subset I of {1, . . . , m}, the cone R≥0{ei : i ∉ I} + rowspace(A) + R{1} consists
of all ω such that a cell of ∆ω contains I. Note that rowspace(A) + R{1} consists
precisely of the weight vectors that induce the trivial subdivision of A where there
is a single maximal cell and all points are marked.
Comparing with Theorem 2.9, we see that in the tropical description of the
secondary polytope of Cay(A), the tropical resultant of A is the union of the cones
corresponding to the I’s with two points from each Ai .
Example 2.22. Let A1 = A2 = {0, 1, 2} in Z. For A = (A1 , A2 ), the tropical
hypersurface of the secondary polytope of Cay(A) and the tropical resultant of A are
depicted as graphs in Figure 3. The resultant polytope has f-vector (6, 11, 7, 1). The
secondary polytope in this case is combinatorially equivalent to the 3-dimensional
associahedron and has f-vector (14, 21, 9, 1). The first entry, the number of vertices
of the polytope, is the number of connected components in the complement of the
graph (the tropical hypersurface). The third entry, the number of facets of the
polytope, can be seen as the number of crossings in this case.
2.4. Codimension of the resultant variety. In this section we discuss how to
determine the codimension of the tropical resultant variety T R(A). By the Bieri–
Groves Theorem [BG84] this is also the codimension of R(A).
Theorem 2.23. The codimension of the tropical resultant equals
k − Max_E dim( Σ_{i=1}^{k} conv(Ei) )
where each Ei runs through all cardinality two subsets of Ai .
Proof. The tropical resultant variety is the collection of all lifts of all points in A which give a fully mixed cell in the subdivision. Therefore it is the closure of the collection of lifts which give a zonotope in the mixed subdivision that is a sum of convex hulls of two points from each Ai. Let P be such a zonotope and E = (E1, . . . , Ek) the k pairs of points. We wish to find the dimension of the (relatively open) cone CP of lifts which induces P. The height of the first point of each Ei may be chosen freely. The remaining k points of E must be lifted to the same subspace of dimension dim(P), whose lift may be chosen with dim(P) degrees of freedom. Finally, the heights of the points not in E may be chosen generically as long as they are sufficiently large. The codimension of CP is therefore k − dim(P). The
theorem follows since there are only finitely many choices for E.
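The formula can be evaluated directly by brute force, as in the Python sketch below (ours; exponential in general, whereas Theorem 2.29 below achieves polynomial time via matroid intersection):

from itertools import combinations, product
import numpy as np

def codim(configs):
    # Theorem 2.23: k minus the largest dimension of a zonotope spanned
    # by one edge (pair of points) from each configuration.
    k = len(configs)
    best = 0
    pair_choices = [list(combinations(Ai, 2)) for Ai in configs]
    for E in product(*pair_choices):
        edges = np.array([np.subtract(b, a) for a, b in E])
        best = max(best, np.linalg.matrix_rank(edges))
    return k - best

A = [[(0, 0), (0, 1), (1, 0)],
     [(0, 0), (1, 0), (2, 1)],
     [(0, 0), (0, 1), (1, 2)]]
print(codim(A))   # 1: the resultant of Example 2.10 is a hypersurface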
Lemma 2.24. Let Li denote the subspace affinely spanned by Ai . The codimension
of R(A) only depends on the Li ’s and equals
k − Max_{v ∈ Π_i Li} dim(span(v1, . . . , vk)).
Proof. Since conv(Ei) ⊆ Li, the quantity of the lemma is smaller than or equal to that of Theorem 2.23. Conversely, if we have a collection v ∈ Π_i Li, we now show
how we can perform a sequence of changes to v to make it only consist of vectors vi
which are each differences between points of Ai without lowering the dimension of
span(v1 , . . . , vk ). Consider a vector vi . It is a linear combination of some uj where
each uj is of the form ais − ait. If all uj belong to W := span(v1, . . . , v̂i, . . . , vk)
then so will vi and it may be substituted by an arbitrary uj without lowering the
dimension. If some uj does not belong to W then substituting uj for vi will not
lower the dimension.
The proof also shows that instead of considering all line segments in Theorem 2.23
it suffices to consider only a basis for the affine span for each Ai . This is useful
while computing the codimension with this formula.
Remark 2.25. We can define a matroid on a set of polytopes as follows. A
set of polytopes is independent if they contain independent line segments. It is
straightforward to check that the base exchange axiom holds. The rank of the
matroid is the maximal dimension of a fully mixed cell (a zonotope) spanned by two
element subsets, one subset from each polytope. The codimension of the tropical
resultant equals the corank of the matroid, i.e. the number of polytopes minus
the largest dimension of such a zonotope. The (tropical) resultant variety is a
hypersurface if and only if the matroid has corank one, which holds if and only if
there is a unique circuit in the matroid. The tuple A is essential [Stu94] if and only
if this matroid of k polytopes is uniform of rank k − 1, that is, the unique circuit
of the matroid consists of the entire ground set.
Using Theorem 2.23, we get a new proof of Sturmfels’ formula for the codimension of R(A). Recall that Qi is the convex hull of Ai .
Theorem 2.26. [Stu94, Theorem 1.1] The codimension of the resultant variety
R(A) in Π_{i=1}^{k} (C∗)^{mi} is the maximum of the numbers |I| − dim( Σ_{i∈I} Qi ) where I
runs over all subsets of {1, . . . , k}.
By the Bieri–Groves Theorem [BG84] and Theorem 2.4, the codimension of
Theorem 2.23 equals that of Theorem 2.26. In the following we explain how the
equality of the two combinatorial quantities of Theorems 2.23 and 2.26 can also be
seen as a consequence of Perfect’s generalization (Theorem 2.27) of Hall’s marriage
theorem and Rado’s theorem on independent transversals.
Let S be the ground set of a matroid with rank function ρ. Let U = {Si : 1 ≤
i ≤ k} be a family of subsets of S. A subset S ′ of S is called an independent
partial transversal of U if S ′ is independent and there exists an injection θ : S ′ →
{1, 2, . . . , k} with s ∈ Sθ(s) for each s ∈ S ′ .
Theorem 2.27. (Perfect’s Theorem [Per69, Theorem 2]) With the notation above,
for every positive integer d, the family U has an independent partial transversal of
cardinality d if and only if
d ≤ ρ(∪i∈I Si ) + k − |I|
for every I ⊆ {1, 2, . . . , k}.
In particular, the maximum cardinality of an independent partial transversal is
equal to the minimum of the numbers on the RHS of the inequality.
Proof of Theorem 2.26. Let Si = {a − b : a, b ∈ Ai}, S = ⋃_{i=1}^{k} Si, and U = {Si : 1 ≤ i ≤ k}. Consider the vector matroid on S given by linear independence. Then the quantity Max_E dim( Σ_{i=1}^{k} conv(Ei) ) is the cardinality of the maximal independent partial transversal of U. By Perfect's Theorem,
Max_E dim( Σ_{i=1}^{k} conv(Ei) ) = Min_{I ⊆ {1,2,...,k}} ( dim( Σ_{i∈I} Qi ) + k − |I| ).
Hence the two quantities from Theorems 2.23 and 2.26 are equal.
Straightforward evaluation of the formulas in Theorems 2.23 and 2.26 requires time exponential in the size of the input. Moreover, the maximal bipartite matching problem is a special case of this codimension problem.
Lemma 2.28. The maximal bipartite matching problem is reducible in polynomial
time to the problem of computing codimension of resultants.
Proof. Let G be a bipartite graph with vertices U ⊔ V and edges E ⊂ U × V . Let
{eu : u ∈ U} be the standard basis for R^U. For each v ∈ V, let Av = {eu : (u, v) ∈ E}. Then the maximal cardinality of a bipartite matching in G is equal to the
dimension of the resultant variety of A = ({0} ∪ Av : v ∈ V ), and the size of A is
polynomial in the size of G.
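The reduction is easy to carry out explicitly. The self-contained sketch below (ours) builds the configurations from a small bipartite graph and recovers the maximum matching size as the largest zonotope dimension from Theorem 2.23, i.e. as k − codim(R(A)).

import numpy as np
from itertools import combinations, product

def max_matching_via_resultant(num_u, edges_by_v):
    # edges_by_v[v] lists the U-neighbours of v; A_v = {0} union {e_u}.
    zero = tuple([0] * num_u)
    configs = [[zero] + [tuple(1 if i == u else 0 for i in range(num_u))
                         for u in nbrs] for nbrs in edges_by_v]
    best = 0
    for E in product(*[list(combinations(Av, 2)) for Av in configs]):
        edges = np.array([np.subtract(b, a) for a, b in E])
        best = max(best, np.linalg.matrix_rank(edges))
    return best

# V = {v0, v1}, U = {u0, u1}; v0 ~ u0 and v1 ~ u0, u1: matching size 2.
print(max_matching_via_resultant(2, [[0], [0, 1]]))   # 2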
We use Theorem 2.23 to construct an efficient algorithm for computing the codimension of a resultant.
Theorem 2.29. The codimension of the resultant can be computed in polynomial
time in the size of the input.
Proof. Let A = (A1 , A2 , . . . , Ak ) where each Ai is a point configuration in Zn . By
Lemma 2.24, the codimension of R(A) depends only on the linear spaces L1 , L2 , . . . ,
Lk affinely spanned by A1 , A2 , . . . , Ak respectively. Choose a basis Bi for each linear
space Li. Let B = {B1, B2, . . . , Bk} and S = ⋃_{i=1}^{k} Bi. A subset S′ of S is called a
partial transversal of B if there is an injection θ : S ′ → {1, 2, . . . , k} with s ∈ Bθ(s) .
The collection of partial transversals form an independent system of a matroid M1
on ground set S, called the transversal matroid of B. Let M2 be the vector matroid
on S defined by linear independence. By Theorem 2.23, computing the codimension
of the resultant is equivalent to computing the maximum cardinality of a linearly
independent partial transversal, i.e. the largest subset of S which is independent in
both M1 and M2 .
We can use the cardinality matroid intersection algorithm [Sch03, Section 41.2]
to find the maximum cardinality of a set independent in two matroids on the same
ground set. This algorithm is polynomial in the size of S and the time for testing
independence in the matroids. Testing independence in M1 can be reduced to the
maximal bipartite matching problem and can be solved in polynomial time. Testing
linear independence in M2 can be reduced to finding the rank of a matrix, which
also takes polynomial time.
Alternatively, in the proof above, we can take S to be the disjoint union of
B1 , B2 , . . . , Bk , then the transversal matroid M1 can be replaced by the partition
matroid. This was done in [DGH98] to prove that whether a mixed volume is
zero can be decided in polynomial time. This problem reduces to the codimension
problem by observing that for a tuple A = (A1 , A2 , . . . , An ) of point configurations
in Zn , the mixed volume of the convex hulls of A1 , A2 , . . . , An is non-zero if and
only if the codimension of R(A) is zero, by Bernstein’s Theorem.
The algorithm described in Theorem 2.29 is rather complex, but there is a simpler
probabilistic or numerical algorithm. For generic vectors vi ∈ Li for i = 1, 2, . . . , k,
the codimension of the resultant is equal to k − rank([v1 |v2 | · · · |vk ]). The challenge
of turning this into a deterministic algorithm lies in making sure that the choices for
vi are generic. Our naive attempts at symbolic perturbations resulted in matrices
whose ranks cannot be computed in polynomial time.
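In floating point, the probabilistic variant takes only a few lines (our sketch; correct with probability one over continuous random choices, up to the numerical rank tolerance):

import numpy as np

def codim_probabilistic(configs, trials=5, seed=0):
    # Pick a random v_i in the span L_i of the differences A_i - A_i and
    # return k - rank([v_1 | ... | v_k]), maximized over a few trials.
    rng = np.random.default_rng(seed)
    k, best = len(configs), 0
    for _ in range(trials):
        vs = []
        for Ai in configs:
            D = np.array([np.subtract(a, Ai[0]) for a in Ai[1:]], dtype=float)
            vs.append(rng.standard_normal(len(D)) @ D)
        best = max(best, np.linalg.matrix_rank(np.array(vs)))
    return k - best

A = [[(0, 0), (0, 1), (1, 0)],
     [(0, 0), (1, 0), (2, 1)],
     [(0, 0), (0, 1), (1, 2)]]
print(codim_probabilistic(A))   # 1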
The polynomial time algorithm of Theorem 2.29 can be used for finding a generic
point in T R(A) in polynomial time. Simply remove points from A as long as the
dimension does not drop. When we can no longer remove points, we have exactly
two points left from each configuration of A. We then compute a generic point in
T R(A) using Theorem 2.9, possibly using a symbolic ε. It is unclear if a polynomial
time algorithm exists for finding a generic point in specialized tropical resultants
that we will see in Section 3.
2.5. Traversing tropical resultants. Tropical resultants are pure and connected
in codimension 1. This allows the facets (maximal faces) to be enumerated via the
well-known adjacency decomposition approach. By this we mean traversing the
connected bipartite graph encoding the facet-ridge incidences of the fan. Three
operations are essential. We must be able to find some maximal cone in the fan,
find the link at a ridge, and compute an adjacent maximal cone given a ray of
the link at the ridge. In [Jen10] these subcomputations were isolated in an oracle,
and the author discussed a general algorithm for traversing a polyhedral fan (up
to symmetry) represented only through oracle calls. In the following paragraphs
we will describe how to walk locally in the tropical resultant. More details can
be found in the next section for the more general setting of specialized tropical
resultant.
To find a starting cone for the traversal, we use the description of the tropical
resultant as a union of orthants plus a linear space, as described in Theorem 2.9.
Alternatively, a generic vector in a maximal cone of a resultant fan can be found
in polynomial time using the algorithms for the codimension, as noted at the end
of Section 2.4.
To find ridges, we compute facets of maximal cones. To find the link of the
tropical resultant at a point, we use the fact that the link at a point ω is a union
of smaller tropical resultants associated to the fully mixed cells in the mixed subdivision of A induced by ω, as shown in Proposition 2.17.
In the tropical resultant, as a subfan of the secondary fan of Cay(A), each cone
can be represented by a regular subdivision of Cay(A). The smallest secondary
cone containing a given vector ω can be constructed from the regular subdivision
induced by ω as explained in [DLRS10, Section 5.2].
In our implementation we represent the regular subdivision ∆ induced by ω by ω
and the triangulation induced by a “placing” or “lexicographic” perturbation of ω.
From this triangulation, we can easily recover the subdivision ∆ by comparing the
normal vectors of the maximal cells of the triangulation lifted by ω. For this to work,
it is important to perturb ω in such a way that all the marked points in ∆ remain
marked in the refined triangulation. A full triangulation of Cay(A) is computed
from scratch only once at the beginning. There are standard methods for computing
a placing triangulation of any point configuration; see [DLRS10, Section 8.2.1]. To
obtain a desired triangulation from a known triangulation, we find a path in the
flip graph of regular triangulations and perform flips as in [DLRS10, Section 8.3.1].
This is the analogue of a Gröbner walk in the setting of secondary fans.
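The recovery step admits a short illustration. The following sketch (ours, not the Gfan implementation, and ignoring the marked-point bookkeeping described above) tests whether two adjacent maximal simplices of the refining triangulation lie in the same cell of ∆, namely whether their ω-lifted affine spans coincide.

import numpy as np

def lifted_hyperplane(points, heights):
    """Affine function x -> a.x + b interpolating the lifted vertices,
    returned as the vector (a, b). points: d+1 affinely independent rows."""
    P = np.hstack([points, np.ones((len(points), 1))])
    sol, *_ = np.linalg.lstsq(P, heights, rcond=None)
    return sol

def same_cell(simplex1, simplex2, config, omega, tol=1e-9):
    """True iff the two maximal simplices (tuples of point indices into
    config) lift to the same hyperplane under omega."""
    h1 = lifted_hyperplane(config[list(simplex1)], omega[list(simplex1)])
    h2 = lifted_hyperplane(config[list(simplex2)], omega[list(simplex2)])
    return np.allclose(h1, h2, atol=tol)

Merging the simplices into cells of ∆ is then a union–find pass over adjacent pairs.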
To find the secondary cone in the link at u given by a ray v, we compute the
subdivision induced by u + εv for sufficiently small ε > 0. Such a vector u + εv is
represented symbolically in a way similar to a matrix term order in the theory of
Gröbner bases.
3. Resultants with specialized coefficients
For some applications such as implicitization we need to compute resultant varieties while specializing some of the coefficients. This problem was studied in
[EKP07, EFKP11] for the case when the resultant variety is a hypersurface. In
that case, the Newton polytope of the specialized resultant is the projection of the
resultant polytope, and the authors computed the projection of resultant polytopes
using Sturmfels’ formula for vertices of resultant polytopes [Stu94, Theorem 2.1]
and beneath-beyond or gift-wrapping methods for computing convex hulls. In our
language, computing a projection of a polytope is equivalent to computing the
restriction of the normal fan to a subspace.
In tropical geometry, specialization of certain coefficients amounts to taking the
stable intersection of the tropical resultant with certain coordinate hyperplanes.
In this section we first define the specialized tropical resultants and then present
algorithms for computing them.
A polyhedral complex in Rn is called locally balanced if it is pure dimensional
and the link at every codimension one face positively spans a linear subspace of Rn .
Definition 3.1. Let F1 and F2 be locally balanced fans in Rn . We define the
stable intersection as the fan
F1 ∩st F2 := {C1 ∩ C2 :(C1 , C2 ) ∈ F1 × F2 and
supp(linkC1 (F1 )) − supp(linkC2 (F2 )) = Rn }
with support
supp(F1 ∩st F2 ) = {ω ∈ Rn : supp(linkω (F1 )) − supp(linkω (F2 )) = Rn }.
If in addition F1 and F2 are balanced then the stable intersection inherits multiplicities from linkω (F1 ) and linkω (F2 ) as follows:
multω (F1 ∩st F2 ) := Σ_{C1 ,C2 } multC1 (linkω F1 ) · multC2 (linkω F2 ) · [Zn : (Zn ∩ RC1 ) + (Zn ∩ RC2 )]
where the sum runs over C1 ∈ linkω (F1 ) and C2 ∈ linkω (F2 ) such that ω ′ ∈ C1 −C2
for a fixed generic vector ω ′ ∈ Rn .
Notice that the support of F1 ∩st F2 depends only on supp(F1 ) and supp(F2 ).
We will therefore extend the definition of stable intersections to intersections of
supports of locally balanced fans and regard them as subsets of Rn .
For proofs of the following six statements, of which some are known to the
community already, we refer to the upcoming paper [JY].
Orthogonally projecting a polytope onto a linear space is equivalent to stably
intersecting the tropical hypersurface of the polytope with the linear space. This
is consistent with the fact that the Newton polytope of the specialized resultant is
a projection of the resultant polytope onto a suitable coordinate subspace.
Theorem 3.2. Let P ⊂ Rn be a polytope, L ⊂ Rn be a linear subspace, and
π : Rn → L be the orthogonal projection. Then
T (π(P )) = (T (P ) ∩st L) + L⊥ .
Lemma 3.3. For any locally balanced fans F1 , F2 , and F3 , we have
(1) (F1 ∩st F2 ) ∩st F3 = F1 ∩st (F2 ∩st F3 )
(2) (supp(F1 ) ∪ supp(F2 )) ∩st supp(F3 ) = supp(F1 ∩st F3 ) ∪ supp(F2 ∩st F3 ).
Lemma 3.4. For locally balanced fans F1 and F2
supp(F1 ∩st F2 ) = ⋃_{C1 ∈F1 , C2 ∈F2 , codim(C1 +C2 )=0} C1 ∩ C2 .
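The transversality condition codim(C1 + C2 ) = 0 appearing in Lemma 3.4 is a plain rank computation. A minimal sketch, assuming each cone is presented by a matrix whose rows generate it (our convention):

import numpy as np

def spans_ambient(C1, C2, n):
    """C1, C2: matrices whose rows generate the cones (including lineality
    generators). True iff the cone C_1 + C_2 has codimension 0 in R^n."""
    return np.linalg.matrix_rank(np.vstack([C1, C2])) == n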
Corollary 3.5. For locally balanced fans F1 and F2
linkω (F1 ) ∩st linkω (F2 ) = linkω (F1 ∩st F2 ).
Proposition 3.6. The stable intersection of two locally balanced fans is either
empty or a locally balanced fan whose codimension is the sum of the codimensions.
Lemma 3.7. Let I be an ideal in k[x1 , x2 , . . . , xn ]. Then
supp(T (I)) ∩st {x : x1 = 0} = supp(T (⟨I⟩ + ⟨x1 − α⟩))
where ⟨I⟩ is the ideal in k(α)[x1 , x2 , . . . , xn ] generated by I.
Definition 3.8. Let S = (S1 , . . . , Sk ) where each Si ⊆ {1, . . . , mi } represent a
choice of points in the configuration A. The coefficients of the monomials indexed
by S are called specialized. Let Ui := {x ∈ Rmi : xj = 0 for all j ∈ Si } and
US := ∏_{i=1}^{k} Ui . We define the specialized tropical resultant as
T RS (A) := T R(A) ∩st {US }.
We will use the following proposition to justify the word “specialized.” Let I be
the ideal of R(A) and J be a new ideal generated by I together with the binomials
cj − γj for specialized coefficients cj , where γj ’s are parameters. We define the
specialized resultant variety RS (A) := V (J) ⊆ ∏_{i=1}^{k} (K ∗ )mi as the variety defined
by J where K is the field of rational functions in γj ’s with coefficients in C.
When all the containments Si ⊂ {1, . . . , mi } are strict, i.e. not all coefficients are
specialized in any of the polynomials fi , then the variety RS (A) is irreducible. To
see this, consider the specialized incidence variety WS cut out by the polynomials
f1 , . . . , fk with some, but not all, coefficients specialized in each fi . For a fixed
x ∈ (C∗ )n each fi gives an affine constraint on the non-specialized coefficients
in fi . Such constraints are solvable since each xj ≠ 0 and they are simultaneously
solvable since they concern different sets of coefficients. Hence WS is a vector bundle
over (C∗ )n and is irreducible. Therefore its projection RS (A) is also irreducible.
In general, for a prime ideal I ⊂ k[x1 , . . . , xn ], with k algebraically closed, and
a generic α, specializing a variable x1 to α may not preserve primality, i.e. the
ideal I1 := I + hx1 − αi ⊆ k(α)[x1 , . . . , xn ] need not be prime. However, all
its irreducible components have the same tropical variety. To see this, note that
T (I1 ) = {0}1 × T (I2 ) where I2 ⊆ k(α)[x2 , . . . , xn ] is obtained from I by
substituting α for x1 . The ideal I3 := I ⊆ k(x1 )[x2 , . . . , xn ] is prime because
I remains prime under extension from k[x1 ] to k(x1 ), as primality is preserved
under localization. Deciding whether a point is in a tropical variety can be done
with reduced Gröbner bases, which are independent of the field extension, so we
have {0}1 × T (I3 ) = {0}1 × T (I2 ) = T (I1 ). Furthermore, since I3 is prime, by
[CP13, Proposition 4], all irreducible components of I2 have
the same tropicalization. Since the tropical varieties of the irreducible components
of I3 are the same as those of the irreducible components of I2 , the conclusion
follows.
Proposition 3.9. The tropicalization of RS (A) is T RS (A).
Proof. The statement follows from Lemmas 3.7 and 3.3(1).
The computation of the tropicalization of RS (A) can be performed using Buchberger’s algorithm as explained in [BJS+ 07] over the field of rational functions in
the γj ’s. During this computation finitely many polynomials in the γj ’s appear
as numerators and denominators of the coefficients. Substituting constant values
for the γj ’s will give the same computation unless one of these polynomials vanishes. Hence specializing γj ’s to values outside a hypersurface in (C∗ )S will lead to a
specialized tropical resultant variety. This explains the word “specialized.”
If T RS (A) is nonempty, then its codimension can be computed using Proposition 3.6 and the codimension formulas from Section 2.4. Thus it remains to give an
algorithm for checking if the specialized resultant is empty. Recall that m := Σi mi
is the total number of points in A.
Lemma 3.10. Let A and S be as in Definition 3.8. Define the extended tuple
B = (B1 , . . . , Bk ) where Bi consists of bi,j = (ai,j , vi,j ) ∈ Zn × Zm−|S| , with vi,j ∈
Zm−|S| being 0 if j ∈ Si and a standard basis vector otherwise. If the standard
basis vector is chosen differently for every non-specialized coefficient, then
T RS (A) ≠ ∅ ⇔ T R(B) = Rm .
Proof. According to Lemma 3.4, T RS (A) ≠ ∅ if and only if there exists a cone
C ⊆ T R(A) such that US + C = Rm where US is as in Definition 3.8. According
to the simple description of tropical resultants in Theorem 2.9 we may assume that
C has the form R≥0 {eij : aij ∉ Ei } + rowspace(Cay(A)). Equivalently, the stable
intersection is nonempty if and only if there exists a choice E such that R≥0 {eij :
aij ∉ Ei } + rowspace(Cay(A)) + US has dimension m. Applying Theorem 2.9 to B,
this is equivalent to T R(B) being full dimensional, since rowspace(Cay(A)) + US =
rowspace(Cay(B)).
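The construction of B in Lemma 3.10 is pure bookkeeping; the following sketch makes it explicit, with input conventions of our own choosing (configurations as lists of integer tuples, Si as 0-based index sets).

def extend_tuple(A, S):
    """A: list of configurations, each a list of integer tuples in Z^n.
    S: list of sets S_i of specialized indices. Returns the tuple B of
    Lemma 3.10: each point gets a tag in Z^(m-|S|) that is zero for
    specialized coefficients and a fresh standard basis vector otherwise."""
    tag_count = sum(len(Ai) for Ai in A) - sum(len(Si) for Si in S)
    B, next_tag = [], 0
    for Ai, Si in zip(A, S):
        Bi = []
        for j, a in enumerate(Ai):
            tag = [0] * tag_count
            if j not in Si:            # non-specialized: distinct basis vector
                tag[next_tag] = 1
                next_tag += 1
            Bi.append(tuple(a) + tuple(tag))
        B.append(Bi)
    return B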
Combining Lemma 3.10 and the results from Section 2.4 about computation of
codimension, we get a polynomial time algorithm for deciding if a specialized resultant
is nonempty. Another consequence of the lemma is the following algorithm for
checking membership of a point in a specialized tropical resultant.
Algorithm 3.11. (SpecializedResultantContains(A, S, ω))
Input: A tuple A of point configurations and a choice S of specialized coefficients.
A vector ω ∈ Rm .
Output: “True” if ω ∈ T RS (A), “False” otherwise.
• Compute the mixed subdivision of A induced by ω by computing the regular
subdivision of Cay(A) induced by ω. (See Section 2.5).
• For each fully mixed cell:
– Construct the subconfiguration A′ of points involved in the cell.
– Return “True” if the specialized resultant of A′ is nonempty.
• Return “False”.
Proof. By Lemma 3.3(2), Proposition 2.17 and Corollary 3.5, we have that the support of linkω (T RS (A)) is the union of supports of T RS (A′ ), under the appropriate
identification of T RS (A′ ) as a subset of Rm , where A′ runs over all fully mixed
cells of the mixed subdivision of A induced by ω. Hence ω ∈ T RS (A) if and only
if one of the T RS (A′ ) is nonempty.
Algorithm 3.12. (NonTrivialVectorInSpecializedResultant(A, S))
Input: A tuple A of configurations, a choice S of specialized coefficients such that
US ∩ rowspace(Cay(A)) ⊊ T RS (A).
Output: A vector ω ∈ T RS (A) \ rowspace(Cay(A)).
• For each E = (E1 , E2 , . . . , Ek ) : Ei is a two-element subset of Ai ,
– Let C = R≥0 {ei,j : ai,j ∉ Ei } + rowspace(Cay(A)).
– If codim(C + US ) = 0 and US ∩ C ≠ US ∩ rowspace(Cay(A)) then
∗ Find among the generators of US ∩ C a vector v outside the
subspace US ∩ rowspace(Cay(A)).
∗ Return v.
The following recursive algorithm finds a perturbed point in a starting cone for
the specialized tropical resultant T RS (A).
Algorithm 3.13. (StartingPoint(A, S))
Input: A tuple A of configurations, a choice S of specialized coefficients such that
T RS (A) ≠ ∅.
Output: A vector ωε ∈ Q(ε)m such that for every fan structure of T RS (A) defined
over Q it holds that for ε > 0 sufficiently small, ωε is in a maximal cone of T RS (A).
• If dim(T RS (A)) = dim(US ∩ rowspace(Cay(A))), then return b1 + εb2 +
· · · + ε^(t−1) bt where b1 , b2 , . . . , bt is some basis of US ∩ rowspace(Cay(A)).
• Compute an ω ∈ T RS (A) \ rowspace(Cay(A)) using Algorithm 3.12.
• Compute the subdivision ∆ω of Cay(A) induced by ω.
• For every fully mixed cell in ∆ω :
– Let A′ be the subconfiguration of the involved points.
– Let S ′ be the restriction of S to A′ .
– If codimension(T RS ′ (A′ )) = codimension(T RS (A)) then
∗ Return ω + ε · StartingPoint(A′ , S ′ ).
Proof. The correctness of the algorithm follows from the facts that the link at ω
of the tropical resultant is the union of tropical resultants corresponding to the
fully mixed cells in ∆ω (Proposition 2.17), that taking links commutes with taking
stable intersections (Corollary 3.5), and that the returned value from the recursive
call (after expansion with zeros) is a generic vector in the link of T RS (A) at ω and
in particular lies outside the secondary cone of ω.
We now turn to the problem of enumerating all maximal cones in T RS (A)
considered as a subfan of the restriction of the secondary fan of Cay(A) to the
subspace US . While connectedness in codimension 1 is not preserved under stable
intersections in general, a specialized tropical resultant T RS (A) is connected in
codimension 1 because it coincides with the tropical variety of a prime ideal, as
shown in the paragraph above Proposition 3.9. The proof in [BJS+ 07] that the
tropical varieties of prime ideals are connected in codimension 1 contained some
mistakes, which were later corrected in [CP13].
The output of Algorithm 3.13 can be converted into a secondary cone in T R(A)
containing ωε in its relative interior, for example by computing a maximal secondary
cone containing ωε and taking the face containing ωε in its relative interior. For
sufficiently small ε > 0, this secondary cone would not change with ε.
Following the approach of [Jen10] discussed in Section 2.5, we are left with the
problem of computing the link at a ridge in T RS (A). If the subspace US is generic
enough such that
codim(US ∩ rowspace(Cay(A))) = codim(US ) + codim(rowspace(Cay(A))),
then the link of T RS (A) is combinatorially equivalent to the link of T R(A) and
the support of the link is a union of resultant fans of subconfigurations (Proposition 2.17) where each fan can be found using Theorem 2.9. If US is not generic,
then computing a stable intersection with US is required for finding the link in
T RS (A) (Corollary 3.5). This is Algorithm 3.14 below. Recall that the dimension
of T RS (A) = T R(A) ∩st {US } can be computed using Proposition 3.6 and the
codimension formulas from Section 2.4.
Algorithm 3.14. StableLink(A, S, ω)
Input: A tuple A of configurations, a choice S of specialized coefficients, a vector
ω ∈ Rm in the relative interior of a ridge R of T RS (A).
Output: A vector in each facet of linkω (T RS (A)).
• Let d be the dimension of T R(A) ∩st {US }.
• Compute the subdivision ∆ω of Cay(A) induced by ω.
• l := ∅.
• For every fully mixed cell in ∆ω
– Let A′ be the subconfiguration of involved points in the cell.
– For each E = (E1 , E2 , . . . , Ek ) : Ei is a two-element subset of A′i ,
∗ Let C = R≥0 {ei,j : ai,j ∉ Ei } + rowspace(Cay(A)).
∗ If dim(US + C) = m and dim(US ∩ C) = d then
· Let V be a set of one or two vectors in US ∩ C such that
(US ∩ C) + span(R) is positively spanned by V ∪ span(R).
· l := l ∪ V
• Return l.
Another approach to computing a link at a point of the stable intersection is to
compute the restriction of the secondary fan of each fully mixed subconfiguration
to US . We then get the resultant fan as certain rays of the secondary fan. This is
Algorithm 3.15.
Algorithm 3.15. StableLink(A, S, ω)
Input: A tuple A of configurations, a choice S of specialized coefficients, a vector
ω ∈ Rm in the relative interior of a ridge R of T RS (A).
Output: A vector in each facet of linkω (T RS (A)).
• Let d be the dimension of T R(A) ∩st {US }.
• Compute the subdivision ∆ω of Cay(A) induced by ω.
• l := ∅.
• For every fully mixed cell in ∆ω
– Let A′ be the subconfiguration of the involved points of the cell.
– If the codimension of the lineality space of the restriction F of the
secondary fan of Cay(A′ ) to US is m − d, then
∗ Choose v such that v extends span(R) ∩ US to a generating set
of the lineality space of F .
∗ If SpecializedResultantContains(A′ , S, v) then l := l ∪ {v, −v}.
– else
∗ Compute all maximal cones in F (by traversal).
∗ For each ray v in F , if SpecializedResultantContains(A′ , S, v)
then l := l ∪ {v}.
• Return l.
The above algorithm is to be read with proper identifications. Namely, when
restricting to A′ the vectors in Rm need to be truncated accordingly, and so does
the set S, and v needs to be expanded when adding it to l. When adding vectors
to l, it is advantageous to choose the vectors as primitive vectors orthogonal to the
span of the ridge so that duplicates can be removed easily.
If US is of high dimension, a typical situation is that each subconfiguration
is a number of edges and a triangle. In this case there are only few choices E
to run through in Algorithm 3.14. For lower dimensional US there can be many
choices of E but with many of the contributions to the stable intersection being
the same. See Example 3.16. In such a case Algorithm 3.15 performs better than
Algorithm 3.14. In general it is difficult to predict which algorithm is better. In
our implementation we use mostly Algorithm 3.15, and Algorithm 3.14 only when
there is no specialization.
Example 3.16. Let A = (A1 , A2 , A3 ) with
A1 = {(0, 0), (0, 1), (0, 3), (1, 0), (3, 0)}
A2 = {(0, 0), (0, 1), (0, 3), (1, 0), (3, 0)}
A3 = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 3), (2, 0), (3, 1), (3, 3)}.
Choosing the specialization S of every coefficient except the coefficient of the point
(0, 0) in each configuration, we get that T RS (A) is a two-dimensional fan with
f-vector (1, 13, 17) living inside R3 ⊆ R18 . The link at e11 ∈ R18 consists of 4 rays.
The traversal of T RS (A) takes 79 seconds if Algorithm 3.14 is used but only 5
seconds if Algorithm 3.15 is used for computing the links. Algorithm 3.14 needs
to iterate through 2100 vertex pair choices at e11 , but much fewer for many of the
other links.
3.1. Implicitization using specialized resultants. In this section we will show
that the tropicalization of a variety parameterized by polynomials with generic coefficients can be computed using specialized tropical resultants. Let f1 , f2 , . . . , fk ∈
C[x1±1 , x2±1 , . . . , xn±1 ] be polynomials parameterizing a variety X in Ck . Let Γ be
the graph of the parameterizing map, defined by ⟨y1 − f1 , y2 − f2 , . . . , yk − fk ⟩ in
C[x1±1 , x2±1 , . . . , xn±1 , y1 , y2 , . . . , yk ]. When f1 , f2 , . . . , fk have generic coefficients,
the tropical variety of Γ is the stable intersection of the tropical hypersurfaces of
the polynomials y1 − f1 , y2 − f2 , . . . , yk − fk . Since X is the closure of the projection of Γ ⊂ (C∗ )n × Ck onto Ck , by tropical elimination theory, we can compute
the tropical variety T (X) as a projection of T (Γ). This approach was used in
[STY07, SY08].
Another way to compute T (X) is by using specialized resultants. Let A =
(A1 , A2 , . . . , Ak ) where Ai = supp(fi ) ⊔ {0} for each i = 1, 2, . . . , k. Let S =
(supp(f1 ), supp(f2 ), . . . , supp(fk )) be the sets of points to specialize, and let VS be
the subspace of ∏_{i=1}^{k} RAi × Rn defined by setting the coordinates in S to 0.
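Assembling A and S from the supports is straightforward. A small sketch, with supports given as lists of exponent tuples (our convention); note that the origin added for the term yi is kept as a separate point even when fi has a constant term, matching the disjoint union ⊔ above.

def implicitization_input(supports):
    """supports: list of supp(f_i), each a nonempty list of exponent tuples.
    Returns (A, S): A_i = supp(f_i) with an extra origin appended, and
    S_i = indices of the points coming from supp(f_i)."""
    A, S = [], []
    for supp in supports:
        A.append(list(supp) + [(0,) * len(supp[0])])  # A_i = supp(f_i) ⊔ {0}
        S.append(set(range(len(supp))))               # specialize supp(f_i)
    return A, S

# E.g. for a single f_1 = x^2 + x*y: supports = [[(2, 0), (1, 1)]].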
Proposition 3.17. With the notation above, T (X) = T RS (A), i.e. the tropicalization of a variety parameterized by polynomials with generic coefficients coincides
with a specialized resultant.
Proof. Let W be the incidence variety in ∏_{i=1}^{k} (C∗ )Ai × (C∗ )n as in (1), defined by
equations of the form yi − gi where gi is a polynomial with the same support as fi
but with indeterminate coefficients. Then the graph Γ is obtained by specializing
the coefficients of gi to those of fi . Since the coefficients of fi were assumed to be
generic, we get T (Γ) = T (W ) ∩st VS . By tropical elimination, T (X) + ({0} × Rn) =
T (Γ) + ({0} × Rn ) in Rk × Rn , which is in turn embedded in ∏_{i=1}^{k} RAi × Rn . By
the following lemma, T (Γ) + ({0} × Rn ) = (T (W ) + ({0} × Rn )) ∩st VS . After
quotienting out both sides by {0} × Rn , which is in the lineality space, we obtain
T (X) = T RS (A).
Lemma 3.18. Let F be a locally balanced fan in RN . Let L and L′ be linear
subspaces of RN such that L′ ⊂ L. Then
(F ∩st L) + L′ = (F + L′ ) ∩st L
In other words, stable intersection with a linear space commutes with Minkowski
sum with a smaller linear space.
Proof. Both (F ∩st L) + L′ and (F + L′ ) ∩st L are empty if F + L has dimension less
than N . Suppose this is not the case. Then both sets contain L′ in their lineality
space and consist of points of the form u + v ∈ RN where u ∈ L′ and v ∈ F ∩ L are
such that dim(linkv (F ) + L) = N .
Since the tropical variety of the graph Γ only depends on the extreme monomials
of the parameterizing polynomials, the next result follows immediately.
Corollary 3.19. When using specialized resultants for implicitization, the extreme
monomials of the input polynomials determine the tropical variety of the parameterized variety, so we can safely disregard the non-extreme terms.
Using specialized resultants for implicitization instead of the approach in [STY07,
SY08] has the advantage that the computation of T (Γ) as a stable intersection can
be avoided. Experiments show that the resultant description may speed up the
reconstruction of the Newton polytope in some cases. See Section 5 for examples.
Moreover, when the variety X is not a hypersurface, our resultant description
gives a fan structure of T (X) derived from the restriction of a secondary fan to a
linear subspace, which is the normal fan of a fiber polytope. Tropical elimination
does not give a fan structure for varieties of codimension more than one.
3.2. Tropical elimination for specialized tropical resultants. As before, let
A be a tuple of point configurations in Zn and S be the tuple of subsets to be
specialized. Let W be the incidence variety and T W its tropicalization as in
Section 2.1. Let WS be a variety cut out by polynomials fi where the coefficients
of monomials in S have been specialized. Then f1 , f2 , . . . , fk may no longer form a
tropical basis, but the tropicalization of WS can be computed as the stable intersection of tropical hypersurfaces of f1 , f2 , . . . , fk because the coefficients are assumed
to be generic (or indeterminates). The incidence variety W is irreducible because it
is a vector bundle over (C∗ )n , and although specializing coefficients may make WS
reducible, all the irreducible components have the same tropical variety as seen in
the paragraph above Proposition 3.9. Hence any stable intersection of tropical hypersurfaces is connected in codimension 1, and we can use fan traversal to compute
the stable intersection of hypersurfaces.
The specialized resultant is the projection of WS onto the non-specialized coefficient variables, and we can compute this using tropical elimination theory, which
gives the tropical variety as a union of cones. When the specialized tropical resultant is a tropical hypersurface, then we can reconstruct the normal fan of the dual
Newton polytope using the methods in the next section.
The tropical hypersurface of fi only depends on the Newton polytope Pi of fi .
The non-specialized points in Ai always contribute as vertices of Pi , but some
specialized points of Ai may not. From this observation, we obtain the following
result, which is not obvious from the resultant point of view.
Lemma 3.20. If aij ∈ Ai is a specialized point lying in the convex hull of other
specialized points in Ai , then removing aij from Ai does not change the specialized
tropical resultant.
In other words, we may disregard the non-vertices among the specialized points
because the Newton polytope and the tropical hypersurface of fi remain the same.
Using this lemma, we may be able to reduce the amount of work for computing
specialized tropical resultants or specialized resultant polytopes.
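The pruning permitted by Lemma 3.20 reduces to convex-hull membership tests, each of which is one feasibility LP; a sketch assuming scipy is available:

import numpy as np
from scipy.optimize import linprog

def in_convex_hull(p, points):
    """True iff p is a convex combination of the given points (rows):
    find lambda >= 0 with P.lambda = p and sum(lambda) = 1."""
    P = np.array(points, dtype=float).T           # columns are the points
    n, s = P.shape
    A_eq = np.vstack([P, np.ones((1, s))])
    b_eq = np.append(np.asarray(p, dtype=float), 1.0)
    res = linprog(c=np.zeros(s), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0                        # 0 = feasible and solved

A specialized point a_{ij} may then be dropped whenever in_convex_hull(a_{ij}, other specialized points of A_i) holds.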
4. Polytope reconstruction
In this section we describe an algorithm for finding a fan structure on a tropical
hypersurface T ⊆ Rn . Recall that the tropical hypersurface of a polytope P ⊂ Rn
is the set of ω ∈ Rn for which there exist distinct p, q ∈ P such that for any r ∈ P ,
ω · p = ω · q ≤ ω · r. In other words, the tropical hypersurface of a polytope is
the union of the normal cones to the polytope at the edges. The multiplicity of
a point in the relative interior of such a normal cone is the (lattice) length of the
edge. The tropical hypersurface of a polynomial is the tropical hypersurface of its
Newton polytope.
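For a polygon this definition can be made completely explicit. A toy sketch (min convention, consistent with this paper) listing one generator per edge of the ray of vectors ω whose minimum over P is attained along that edge; these turn out to be the negated outer edge normals.

import numpy as np

def tropical_curve_rays(vertices):
    """vertices: 2D points listed counterclockwise. Returns one generator
    per edge of the ray {ω : min_{x in P} ω·x is attained on that edge}."""
    V = np.array(vertices, dtype=float)
    rays = []
    for i in range(len(V)):
        d = V[(i + 1) % len(V)] - V[i]      # edge direction (ccw)
        outer = np.array([d[1], -d[0]])     # outward normal for ccw order
        rays.append(-outer)                 # ω minimizing on this edge
    return rays

# The Newton polytope of x + y + 1 yields the three rays of a tropical line:
print(tropical_curve_rays([(0, 0), (1, 0), (0, 1)]))   # (0,1), (-1,-1), (1,0)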
The tropical hypersurface T will be presented to us as a finite collection of codimension 1 cones which may overlap badly but whose union is T . What we wish
to compute is a collection of codimension 1 cones such that the collection of all
their faces is a polyhedral fan with support T . This fan is not unique unless we
require it to be the coarsest — that is, that it is the normal fan of the polytope
defining T with its maximal cones removed. If the codimension 1 cones come with
a multiplicity then an advantage of having the fan structure is that it is straightforward to reconstruct the 1-skeleton of the polytope defining T , hence the vertices
of the polytope, up to translation. Therefore we will consider the computations of
the vertices of a polytope, the normal fan, and the tropical hypersurface with the
coarsest fan structure to be equivalent in what follows.
One way to perform the polytope reconstruction is to use the beneath-beyond
method for computing convex hulls. The key observation is that for any generic
ω ∈ Rn the vertex faceω (New(f )) can be computed using “ray shooting.” See
[DFS07] and [CTY10]. The method we present in this paper uses the adjacency
decomposition approach (see Section 2.5) and the following algorithm for computing
normal cones at vertices of the polytope defining T .
Algorithm 4.1 (Region(S,ω)).
Input: A collection S of codimension 1 cones in Rn such that T := ∪C∈S C is the
support of a tropical hypersurface. A vector ω ∈ Rn \ T .
Output: The (open) connected component of Rn \ T containing ω.
• R := Rn .
• For each C ∈ S:
– While R ∩ C ≠ ∅:
∗ Find a point p ∈ R ∩ C.
∗ Introduce the parameter ε > 0 and let h be the open half line
from ω through p + Σ_{i=1}^{n} ε^i ei .
∗ The set of cones which intersect h is the same for all ε > 0
sufficiently small. Furthermore, the ordering of the intersection
points along h is fixed for ε > 0 sufficiently small. Among the
cones that intersect h, let D be a cone whose intersection point is
closest to ω. (The choice of D is not unique because the cones in
S need not form a fan and may overlap each other arbitrarily).
∗ Let the halfspace H ⊂ Rn be the connected component of Rn \
span(D) containing ω.
∗ R := R ∩ H.
• Return R.
Proof. The set R stays open and convex throughout the computation. At the
end R ∩ T = ∅. Each added constraint H for R is necessarily satisfied by the
connected component because of its convexity. The symbolic perturbation of p and
the convexity of R ensures that H is independent of the choice of D, as all the
possible choices of cones must be parallel. In fact, the set of constraints gives an
irredundant inequality description of the returned cone.
In computational geometry a standard way of handling the parameter ε > 0 is to
pass to the ordered field R(ε). Since perturbed values are never multiplied together,
there is no exponent growth. Indeed, the implementation is relatively simple.
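A minimal sketch of this representation (ours, for illustration), exploiting that perturbed values are only added and compared, never multiplied: an element a0 + a1 ε + a2 ε² + · · · is stored as its coefficient list, and "for all sufficiently small ε > 0" comparisons are decided by the first differing coefficient.

from fractions import Fraction

class Eps:
    def __init__(self, coeffs):
        self.c = [Fraction(x) for x in coeffs]   # [a0, a1, ...] ~ sum ai ε^i

    def __add__(self, other):
        n = max(len(self.c), len(other.c))
        pad = lambda v: v + [Fraction(0)] * (n - len(v))
        return Eps([x + y for x, y in zip(pad(self.c), pad(other.c))])

    def __lt__(self, other):
        diff = (other + Eps([-x for x in self.c])).c   # other - self
        for d in diff:                                 # first nonzero decides
            if d != 0:
                return d > 0
        return False

# 1 < 1 + ε for all sufficiently small ε > 0:
assert Eps([1]) < Eps([1, 1])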
Proposition 4.2. Let a be the number of facets of the closure of the returned cone
of Algorithm 4.1. The number of checks “R ∩ C ≠ ∅” performed in the algorithm is
|S| + a while the number of interior point computations “p ∈ R ∩ C” is a.
Proof. The check is done for every cone C ∈ S. In addition, whenever the
algorithm enters the body of the while loop, a facet constraint H is added to R,
and an additional check “R ∩ C ≠ ∅” and a computation of p are performed.
The condition that the generic h intersects a given polyhedral cone C can be
phrased as a condition on the ordering in which h intersects the defining hyperplanes
of C. We can imagine moving a point starting from ω and along the half-line
h, keeping track of which equations and inequalities defining C are satisfied and
updating when a defining hyperplane of C is crossed. Hence the implementation
reduces to a check of the order in which h intersects two given hyperplanes. The
perturbation in such a check is not difficult to handle symbolically. The check can
be used again to actually find a D in the algorithm with intersection point closest
to ω.
To apply the adjacency decomposition approach we must be able to compute a
starting cone and move across ridges to find neighboring cones, while computing
links at ridges is trivial for complete fans. To find a starting cone we guess a vector
outside T and apply Algorithm 4.1. Suppose now that C is a full dimensional cone
in the normal fan and u is a relative interior point on a facet of C with outer
normal vector v. For ε > 0 sufficiently small, calling Algorithm 4.1 with argument
u + εv will give us the desired neighboring cone. In our implementation we again
use comparison of intersection points on line segments to find an ε sufficiently small
to avoid all hyperplanes appearing in the description of T .
If we precompute generators for the cones in S then most of the checks for empty
intersection with R can be done without linear programming: instead, for each
defining hyperplane of R, one checks whether the cone generators are entirely
contained on the wrong side. In our current implementation the time spent on
finding first intersections along the half-lines is comparable to the time spent on
linear programming. We present two examples to illustrate the usability of our
algorithm. These examples appeared earlier in the literature.
Example 4.3. The f-vector of the tropical hypersurface of the 2 × 2 × 2 × 2 hyperdeterminant was computed in [HSYY08]. The support of the hypersurface is the
sum of a tropical linear space and a classical linear space in R16 and is easy to write
as a union of cones. We reconstruct the 25448 normal cones of the Newton polytope of the defining equation in 163 minutes. Exploiting the symmetry group of order 384
as explained in [Jen10] we reduce the running time to 7 minutes for computing the
111 orbits of maximal cones. With suitable input files the following Gfan command
[Jen] will compute the f-vector. See also Section 5 for further details.
anders@gureko:~$ gfan_tropicalhypersurfacereconstruction -i troplinspc.fan
--sum --symmetry <claslinspc_and_symmetry.txt | grep -A1 F_VECTOR
F_VECTOR
1 268 5012 39680 176604 495936 927244 1176976 1005946 555280 178780 25448
Example 4.4. The implicitization challenge solved in [CTY10] is to reconstruct
the Newton polytope of the defining equation of a tropical variety given as a union
of 6865824 cones. This 11-dimensional polytope lives in R16 and has a symmetry
group of order 384. Its vertices come in 44938 orbits. In [CTY10], a modified
version of the ray-shooting method was used to produce coordinates of the vertices
at a rate of a few (2–5) minutes per vertex. Each round took about 45 minutes
and typically found 10–20 vertices. However, a lot more computation, with some human
interaction and parallelization, over a period of a few months was required to make
sure that all the vertices were discovered, and this was done by computing the
tangent cone at each found vertex, up to symmetry. During the process most
vertices were re-discovered multiple times.
On this example our new implementation in Gfan spends approximately 1 minute
for each call of Algorithm 4.1. We estimate that the enumeration of the 44938 orbits
would finish after 30 days of computation. With the new method, we do not need to
process a vertex more than once, and we obtain all the facet directions as the rays
in the normal fan and all the tangent cones as duals of the normal cones. Moreover,
there is no post-processing needed to certify that all vertices have been found.
The method we just described does not make use of multiplicities. In fact, it is
not necessary that the fan is polytopal, or even locally balanced. We only require
that each connected component of the complement of T is convex.
Before settling with Algorithm 4.1 we also experimented with storing the codimension one cones in a binary space partitioning tree (BSP tree). See [TN87] for
a definition of BSP trees and an application to a computational geometry problem in arbitrary dimension. The tree would be built at initialization, and the
connected components of the complement could be computed by gathering convex
regions stored in the tree. This method worked as well as Algorithm 4.1 in small
dimensions, but in higher dimensions, like the examples above, Algorithm 4.1 would
always perform better. In Example 4.3 the difference would be a factor of five without exploiting symmetry. But in Example 4.4 the number of required nodes of the
tree would grow too large to have any chance of fitting in memory. The intuition
behind the explosion in complexity is that cones (for example, simplicial cones of
codimension one) in a higher dimensional space have larger chances of intersecting
a fixed hyperplane. Therefore in the process of building the BSP tree, a codimension one cone from the input will meet many other hyperplanes coming from other
cones, causing an explosion in the number of nodes in the BSP tree.
5. Comparison of algorithms
In this section, we consider various algorithms and compare the combinatorial
complexity of the output (e.g. f-vector) and running time (recorded on a laptop
computer with a 2.66 GHz Intel Core i5 processor and 8GB of memory). All implementations are single threaded, done in C++ using cddlib [Fuk05] and SoPlex
[Wun96], and will be part of Gfan in its next release, unless otherwise noted. The
combinatorial complexity of the output is essential for a fair comparison since different amounts of effort went into making each of the implementations fast. We
mostly concentrated on the implementation of Algorithm 4.1 and the secondary fan
computation because of their broad range of applications, while less optimization
effort has gone into algorithms specific to tropical resultants.
In general, the software Gfan uses the max convention for tropical varieties and
Gröbner fans. However, because the secondary fan of a point configuration
is a coarsening of the Gröbner fan of the associated binomial (lattice) ideal, we need
the subdivisions to be defined with respect to min if the initial ideals are defined
with respect to max. Therefore Gfan uses min for secondary fans. As tropical
resultants are subfans of secondary fans, we chose to use min in this paper for
tropical addition.
Hypersurfaces. Let us first consider the case where the resultant variety R(A) is
a hypersurface. Following is a list of different methods for computing the resultant
polytope (or its tropical hypersurface or its normal fan).
(1) Enumerating the vertices of the secondary polytope of Cay(A), and then
using Sturmfels’ formula [Stu94, Theorem 2.1] to obtain the vertices of the
resultant polytope. We did not make an implementation but list only the
time spent computing the secondary fan with the Gfan command
gfan_secondaryfan <cayley.txt
(2) Computing the tropical hypersurface of the resultant as a subfan of the
secondary fan by fan traversal using the methods described in Section 2.5.
gfan_resultantfan --vectorinput <tuple.txt
(3) Constructing the normal fan of the resultant polytope from the simple
description of the tropical resultant as a union of cones as in Theorem 2.9.
Our implementation in Gfan uses Algorithm 4.1 for this.
gfan_resultantfan --vectorinput --projection <tuple.txt
(4) Using Sturmfels’ formula [Stu94, Theorem 2.1] for finding the optimal
vertex of the resultant polytope in a generic direction together with the
beneath-beyond convex hull algorithm for recovering the whole polytope.
The software ResPol [EFKP11] is a recent implementation of this method
using the CGAL library.
For the third approach, one can also use other methods for reconstructing a
polytope from its tropical hypersurface, such as ray-shooting/beneath-beyond and
BSP trees, as discussed in Section 4, although we found Algorithm 4.1 to perform
better, especially for polytopes of dimension 5 or more (compared to beneath-beyond in iB4e [Hug06] and BSP).
For Example 2.10 above, each of the first three methods finished in under one
second in Gfan. We present more challenging examples below. In the examples
each matrix represents the point configuration consisting of its columns.
Example (a).
(The point-configuration matrices for this example could not be recovered from the extraction.)

Method/fan | F-vector of output | Timing
(1) secondary fan | 1 10432 55277 106216 88509 27140 | 467 s
(2) traversing tropical resultant | 1 5152 21406 28777 12614 | 733 s
(3) normal fan from simple description | 1 78 348 570 391 93 | 1.4 s
(4) beneath-beyond (ResPol) | 1 - - - - 93 | 2.7 s

Example (b).
(The point-configuration matrices for this example could not be recovered from the extraction.)

Method/fan | F-vector of output | Timing
(1) secondary fan | 1 3048 38348 178426 407991 494017 304433 75283 | 506 s
(2) tropical resultant | 1 2324 26316 106083 197576 173689 58451 | 1238 s
(3) normal fan | 1 56 497 1779 3191 3018 1412 249 | 6 s
(4) beneath-beyond (ResPol) | 1 - - - - - - 249 | 35 s

Example (c).
(The point-configuration matrices for this example could not be recovered from the extraction.)

Method/fan | F-vector of output | Timing
(3) normal fan from simple descr. | 1 937 5257 11288 11572 5589 985 | 55 s
(4) beneath-beyond (ResPol) | 1 - - - - - 985 | 236 s
In Example (c) we were not able to compute the secondary fan and the resultant fan
with the secondary fan structure due to integer overflow in intermediate polyhedral
computations. Gfan has been designed to work well for Gröbner fans, where the
degrees of the polynomials are never very large, since that would prevent us from
computing a single Gröbner basis anyway (except for binomial ideals). In Example
(c), a primitive normal vector of a codimension 1 cone of the normal fan of the resultant is (−32, 0, 32, 27, 0, −27, 25, −25, 0, 0, 51, −51, −87, 0, 87), showing that the
resultant has degree at least 32+27+25+51+87=222. On such examples overflows
typically arise when trying to convert an exactly computed rational generator of a
ray to a primitive vector of 32-bit integers. Algorithm 4.1 will show similar behavior
on other examples, for example when converting “p ∈ R ∩ C” to a vector of 32-bit
integers. We intend to fix these implementation problems in the future.
Hypersurfaces with Specialization. If the specialized resultant is a hypersurface, then we can compute its tropical variety using the following methods.
(1) Compute T RS (A) as a subfan of the restriction of the secondary fan to a
subspace US by fan traversal using the algorithms in Section 3.
gfan_resultantfan --vectorinput --special <tuple_and_spcvec.txt
(2) Compute the stable intersection T RS (A) = T R(A) ∩st {US } as a union of
cones, using the simple description from Theorem 2.9 and the characterization of stable intersections from Lemma 3.4. Then reconstruct the normal
fan of the dual polytope using Algorithm 4.1.
gfan_resultantfan --vectorinput --special --projection <tup_and_sv.txt
(3) Compute the specialized tropical resultant as a union of cones using stable intersection of hypersurfaces and tropical elimination theory as in Section 3.2 and reconstruct the normal fan of the dual polytope using Algorithm 4.1. We combine the commands (see also [SY08]):
gfan_tropicalstartingcone --stable >startingcone.txt
gfan_tropicaltraverse --stable <startingcone.txt >stable.fan
gfan_tropicalhypersurfacereconstruction --sum -i stable.fan <lnspc.txt
(4) For a generic direction, Sturmfels’ formula [Stu94, Theorem 2.1] gives the
optimal vertex of the resultant polytope in that direction, which can then be
projected to get a point in the Newton polytope of the specialized resultant
polynomial. This can be combined with the beneath-beyond convex hull
algorithm for recovering the whole polytope. The software ResPol was used
in the timings below.
In [EKP07], the authors proposed computing a silhouette or a projection of the
secondary polytope. This is dual to computing the restriction of the secondary fan
to a subspace. We provide the results and timings of this dual computation for
comparison.
In the following examples specialized points are shown in non-black color.
Example (d).
A =
0 1 1 2    0 0 1 1    0 1 1 2
0 1 0 1 ,  1 0 1 2 ,  0 1 2 1

Method/fan | F-vector | Timing
Restriction of secondary fan | 1 372 2514 5829 5661 1976 | 26 s
(1) traversing tropical resultant | 1 126 476 561 212 | 14 s
(2) normal fan from stable intersection | 1 25 127 250 211 65 | 0.7 s
(3) normal fan from tropical elimination | 1 25 127 250 211 65 | 1.4 s
(4) beneath-beyond (ResPol) | 1 - - - - 65 | 0.5 s

Example (e).
(The point-configuration matrices for this example could not be recovered from the extraction.)

Method/fan | F-vector | Timing
Restriction of secondary fan | 1 709 6955 24354 39464 30226 8870 | 116 s
(1) traversing tropical resultant | 1 469 3993 11296 12853 5040 | 320 s
(2) normal fan from stbl. inters. | 1 29 209 597 792 485 110 | 1.3 s
(3) normal fan from trop. elim. | 1 29 209 597 792 485 110 | 3.2 s
(4) beneath-beyond (ResPol) | 1 - - - - - 110 | 2.3 s

Example (f ).
(The point-configuration matrices for this example could not be recovered from the extraction.)

Method | F-vector | Timing
(2) | 1 1566 19510 98143 265202 424620 413455 238425 73741 9156 | 798 s
(3) | 1 1566 19510 98143 265202 424620 413455 238425 73741 9156 | 974 s
The current version of ResPol could not complete the computation for this example.
Furthermore, we could not apply method (1) because of 32-bit integer overflows as
explained in Example (c).
Implicitization of hypersurfaces. Implicitization is a special case of the specialized resultants, and we compare the three methods as before.
Example (g). (Implicitization of a bicubic surface [EK05, Example 3.4])
A =
0 0 0 0 1 2 3    0 0 0 1 2 3
0 1 2 3 0 0 0 ,  0 1 3 0 0 0 ,  (third configuration not recoverable from the extraction)

Method/fan | F-vector | Timing | No interior points
Restriction of secondary fan | 1 26 66 42 | 5 s | 2 s
(1) traversing tropical resultant | 1 13 17 | 16 s | 4 s
(2) normal fan from stable inters. | 1596 | 171 s | 9 s
(3) normal fan from tropical elim. | 1596 | 0.4 s | 0.4 s
(4) beneath-beyond (ResPol) | 1596 | < 0.1 s | < 0.1 s
As we saw in Corollary 3.19, removing the non-extreme monomials from the parameterizing polynomials does not change the resultant polytope, and in this example,
this also does not change the restriction of the secondary fan. However, doing so
speeds up the computations, as seen in the rightmost column.
Example (h). (Implicitization of a hypersurface in four dimensions)
A =
0 1 2 3    0 2 3 4    0 0 4 4    0 0 2 4
0 2 4 1 ,  0 2 2 0 ,  0 4 0 1 ,  0 2 2 3
0 2 4 1    0 1 4 1    0 2 4 2    0 4 2 3

Method/fan | F-vector | Timing
(1) traversing tropical resultant | 1 10665 24204 13660 | 2 h 10 m
(2) normal fan from stable intersection | 1 111 358 368 121 | 9 s
(3) normal fan from tropical elimination | 1 111 358 368 121 | 2.6 s
(4) beneath-beyond (ResPol) | 1 111 358 368 121 | 1.5 s
For (3), computing the polytope from the tropical hypersurface using ray-shooting
and beneath-beyond took 47 s in the TrIm implementation [SY08] using the library
iB4e [Hug06] on a slightly slower machine.
Example (i). (Implicitization of a hypersurface in five dimensions)
(The point-configuration matrices for this example could not be recovered from the extraction.)

Method/fan | F-vector | Timing
(2) normal fan from stable inters. | 1 5932 23850 35116 22289 5093 | 351 s
(3) normal fan from tropical elim. | 1 5932 23850 35116 22289 5093 | 184 s
(4) beneath-beyond (ResPol) | 1 5932 23850 35116 22289 5093 | 898 s
For (3), timing includes 17 seconds for computing the specialized tropical incidence
variety. The normal fan reconstruction computation in TrIm with iB4e took 3375
seconds on a slightly slower machine.
Non-hypersurfaces. When R(A) is not a hypersurface, the only method we know
for computing T R(A) with a fan structure without knowing the defining ideal is
to traverse the secondary fan of Cay(A), enumerating just the secondary cones
whose RMS contains a fully mixed cell. There are other descriptions of tropical
resultants as a set, such as Theorem 2.9, but none gives a fan structure.
Example (j).
A =
0 2 4    3 5 5    3 4 5    0 1 2
4 1 1 ,  1 0 4 ,  1 5 2 ,  4 3 5

Method/fan | F-vector | Timing
Secondary fan | 1 8876 72744 222108 322303 225040 60977 | 478 s
Traversing tropical result. | 1 968 4495 6523 3000 | 81 s
We used, respectively, the commands:
gfan_secondaryfan <cayley.txt
gfan_resultantfan --vectorinput <tuple.txt
Non-hypersurfaces with Specialization. The only method here is to traverse
T RS (A) as a subfan of a restriction of the secondary fan using the algorithms in
Section 3.
Example (k).
A =
0 2 4    3 5 5    3 4 5    0 1 2
4 1 1 ,  1 0 4 ,  1 5 2 ,  4 3 5

Method/fan | F-vector | Timing
Restriction of secondary fan | 1 4257 23969 48507 42260 13467 | 256 s
Traversing spec. tropical result. | 1 310 831 533 | 81 s
We used, respectively, the commands:
gfan_secondaryfan --restrictingfan subspace.fan <cayley.txt
gfan_resultantfan --vectorinput --special <tup_and_sv.txt
5.1. Conclusion. The new method of using adjacency decomposition with Algorithm 4.1 for constructing the normal fan of a polytope from its tropical hypersurface
works very well in practice. Our implementation is much faster than any existing
implementation of the beneath-beyond method with ray-shooting for polytope reconstruction, and we think the gap will widen even more in higher dimension since
this new method scales well: multi-linearly with respect to the number of cones
in the input and the number of vertices and edges of the output polytope, as shown in
Proposition 4.2.
The normal fan reconstruction method can be used together with either the
simple description of tropical resultants (Theorem 2.9) or tropical elimination (Section 3.2) for computing resultant polytopes efficiently. Traversing the (specialized)
tropical resultant as a subfan of (a restriction of) the secondary fan of the Cayley
configuration is combinatorially interesting but not computationally competitive.
For implicitization, the beneath-beyond method from [EFKP11] works faster
than any of our “tropical” methods when the output polytope is low dimensional,
while our methods seem to have an advantage in higher dimension (5 or more).
However, the method of [EFKP11] may have an advantage when there are many
specialized points in the input configurations, as the number of cones in the tropical
description increases rapidly. See the last problem in Section 6 below.
For resultant varieties of codimension higher than one, whether specialized or
not, we only know of one method for computing the tropicalization as a fan, without
knowing the defining polynomials, which is to traverse the secondary fan of the
Cayley configuration or a restriction of it to a subspace.
6. Open problems
Combinatorial classification of resultant polytopes: For 1-dimensional
point configurations, the combinatorics of the resultant polytope only depend on the (partial) order of the (not necessarily distinct) points in each Ai
[GKZ94], so a combinatorial classification is easy to obtain. No such classification is known even for point configurations in Z2 . A concrete problem is
to classify 4-dimensional resultant polytopes combinatorially. This
was done for 3-dimensional resultant polytopes by Sturmfels [Stu94], and
only one-dimensional point configurations were needed for this case. To
understand the 4-dimensional resultant polytopes, we need to work with
the case A = (A1 , A2 , A3 ) where each Ai consists of three points in Z2 that
are not necessarily distinct. How can we stratify the space of tuples A’s
according to the combinatorial type of the resultant polytope?
Using symbolic perturbation: At the end of Section 2.4, we gave a probabilistic algorithm for computing codimension of resultants. Can we turn
this into a polynomial time deterministic algorithm using symbolic perturbation?
Finding a point in the specialized tropical resultant: For non-specialized
tropical resultants, the polynomial time algorithm for computing codimension from Section 2.4 can also be used to find a generic point, by Theorem 2.9. Is there a polynomial time algorithm for finding a generic vector
ω ∈ Q(ε)m in the specialized tropical resultant?
Improved description of specialized tropical resultants: By combining
the descriptions of tropical resultants in Theorem 2.9 and stable intersections in Lemma 3.4, we get a specialized tropical resultant as a union of
cones. In computations, we need to go through a list of ∏_{i=1}^{k} (mi choose 2) choices
of tuples of pairs from Ai , many of which do not contribute to a facet of
specialized tropical resultant. Give a combinatorial characterization
for the choices of the tuples of pairs that contribute to a facet.
Corollary 3.19 and Lemma 3.20 are results in this direction.
Acknowledgments
We thank MSRI (Berkeley, USA) and Institut Mittag-Leffler (Djursholm, Sweden) for their support and hospitality. The first author was partially supported
by the German Research Foundation (Deutsche Forschungsgemeinschaft (DFG))
through the Institutional Strategy of the University of Göttingen, partially by the
DFG grant MA 4797/3-1 (SPP 1489) and partially by the Danish Council for Independent Research, Natural Sciences (FNU). The second author was partially
supported by a Postdoctoral Research Fellowship and DMS grant #1101289 from
the National Science Foundation (USA). We would like to express our gratitude to
the anonymous referees, especially Reviewer #1, for exceptionally careful reading
and detailed comments.
References

[BG84] Robert Bieri and J. R. J. Groves, The geometry of the set of characters induced by valuations, J. Reine Angew. Math. 347 (1984), 168–195. MR 733052 (86c:14001)
[BJS+07] T. Bogart, A. N. Jensen, D. Speyer, B. Sturmfels, and R. R. Thomas, Computing tropical varieties, J. Symbolic Comput. 42 (2007), no. 1-2, 54–73. MR 2284285 (2007j:14103)
[CP13] Dustin Cartwright and Sam Payne, Connectivity of tropicalizations, Mathematical Research Letters 19 (2013), no. 5.
[CTY10] María Angélica Cueto, Enrique A. Tobis, and Josephine Yu, An implicitization challenge for binary factor analysis, J. Symbolic Comput. 45 (2010), no. 12, 1296–1315. MR 2733380
[DFS07] Alicia Dickenstein, Eva Maria Feichtner, and Bernd Sturmfels, Tropical discriminants, J. Amer. Math. Soc. 20 (2007), no. 4, 1111–1133 (electronic). MR 2328718 (2008j:14095)
[DGH98] Martin Dyer, Peter Gritzmann, and Alexander Hufnagel, On the complexity of computing mixed volumes, SIAM J. Comput. 27 (1998), no. 2, 356–400. MR 1616544 (99f:68092)
[DLRS10] Jesús A. De Loera, Jörg Rambau, and Francisco Santos, Triangulations, Algorithms and Computation in Mathematics, vol. 25, Springer-Verlag, Berlin, 2010, Structures for algorithms and applications. MR 2743368
[EFK10] Ioannis Z. Emiris, Vissarion Fisikopoulos, and Christos Konaxis, Regular triangulations and resultant polytopes, Proceedings of the European Workshop on Computational Geometry (EuroCG) (Dortmund, Germany), 2010, pp. 137–140.
[EFKP11] Ioannis Z. Emiris, Vissarion Fisikopoulos, Christos Konaxis, and Luis Peñaranda, Efficient computation of Newton polytopes of specialized resultants, arXiv:1108.5985v1, 2011.
[EK05] Ioannis Z. Emiris and Ilias S. Kotsireas, Implicitization exploiting sparseness, Geometric and algorithmic aspects of computer-aided design and manufacturing (Ravi Janardan, Michiel Smid, and Debasis Dutta, eds.), DIMACS Ser. Discrete Math. Theoret. Comput. Sci., vol. 67, Amer. Math. Soc., Providence, RI, 2005, pp. 281–297. MR 2200413 (2006j:65028)
[EKP07] Ioannis Z. Emiris, Christos Konaxis, and Leonidas Palios, Computing the Newton polytope of specialized resultants, Proceedings of the MEGA 2007 conference, 2007.
[Fuk05] Komei Fukuda, cddlib reference manual, cddlib version 094b, Swiss Federal Institute of Technology, Lausanne and Zürich, Switzerland, 2005, http://www.inf.ethz.ch/personal/fukudak/cdd_home/.
[GKZ94] I. M. Gel'fand, M. M. Kapranov, and A. V. Zelevinsky, Discriminants, resultants, and multidimensional determinants, Mathematics: Theory & Applications, Birkhäuser Boston Inc., Boston, MA, 1994. MR 1264417 (95e:14045)
[HSYY08] Peter Huggins, Bernd Sturmfels, Josephine Yu, and Debbie S. Yuster, The hyperdeterminant and triangulations of the 4-cube, Math. Comp. 77 (2008), no. 263, 1653–1679. MR 2398786 (2009c:52021)
[Hug06] Peter Huggins, iB4e: A software framework for parametrizing specialized LP problems, Mathematical Software – ICMS 2006, Lecture Notes in Computer Science, vol. 4151, Springer, 2006, pp. 245–247.
[Jen] Anders N. Jensen, Gfan, a software system for Gröbner fans and tropical varieties, available at http://home.imf.au.dk/jensen/software/gfan/gfan.html.
[Jen10] Anders N. Jensen, Traversing symmetric polyhedral fans, Proceedings of the International Congress on Mathematical Software (Kobe, Japan, 2010), 2010.
[JMM08] Anders Nedergaard Jensen, Hannah Markwig, and Thomas Markwig, An algorithm for lifting points in a tropical variety, Collect. Math. 59 (2008), no. 2, 129–165. MR 2414142 (2009a:14077)
[JY] Anders N. Jensen and Josephine Yu, Stable intersection of tropical varieties, in preparation.
[MS] Diane Maclagan and Bernd Sturmfels, Introduction to tropical geometry, book draft available at http://homepages.warwick.ac.uk/staff/D.Maclagan/papers/papers.html.
[Oda08] Shinsuke Odagiri, The tropical resultant, Proc. Japan Acad. Ser. A Math. Sci. 84 (2008), no. 7, 93–96. MR 2450058 (2009i:14075)
[Per69] Hazel Perfect, A generalization of Rado's theorem on independent transversals, Proc. Cambridge Philos. Soc. 66 (1969), 513–515. MR 0244065 (39 #5382)
[Rin] Felipe Rincón, Computing tropical linear spaces, arXiv:1109.4130. To appear in Journal of Symbolic Computation.
[Sch03] Alexander Schrijver, Combinatorial optimization. Polyhedra and efficiency. Vol. B, Algorithms and Combinatorics, vol. 24, Springer-Verlag, Berlin, 2003.
[ST08] Bernd Sturmfels and Jenia Tevelev, Elimination theory for tropical varieties, Math. Res. Lett. 15 (2008), no. 3, 543–562. MR 2407231 (2009f:14124)
[Stu94] Bernd Sturmfels, On the Newton polytope of the resultant, J. Algebraic Combin. 3 (1994), no. 2, 207–236. MR 1268576 (95j:52024)
[STY07] Bernd Sturmfels, Jenia Tevelev, and Josephine Yu, The Newton polytope of the implicit equation, Mosc. Math. J. 7 (2007), no. 2, 327–346, 351. MR 2337885 (2008f:14073)
[SY08] Bernd Sturmfels and Josephine Yu, Tropical implicitization and mixed fiber polytopes, Software for algebraic geometry (Jan Verschelde, Michael Stillman, and Nobuki Takayama, eds.), IMA Vol. Math. Appl., vol. 148, Springer, New York, 2008, pp. 111–131. MR 2410718 (2009m:14089)
[Tab08] Luis Felipe Tabera, Tropical resultants for curves and stable intersection, Rev. Mat. Iberoam. 24 (2008), no. 3, 941–961. MR 2490204 (2010b:14123)
[TN87] William C. Thibault and Bruce F. Naylor, Set operations on polyhedra using binary space partitioning trees, SIGGRAPH Comput. Graph. 21 (1987), no. 4, 153–162.
[Wun96] Roland Wunderling, Paralleler und objektorientierter Simplex-Algorithmus, Ph.D. thesis, Technische Universität Berlin, 1996, http://www.zib.de/Publications/abstracts/TR-96-09/.
Institut for Matematik, Aarhus Universitet, Aarhus, Denmark
E-mail address: jensen@imf.au.dk
School of Mathematics, Georgia Institute of Technology, Atlanta GA, USA
E-mail address: jyu@math.gatech.edu
arXiv:1802.01884v2 [] 13 Feb 2018
Asymptotic invariants of ideals with Noetherian
symbolic Rees algebra and applications to cover ideals
Benjamin Drabkin∗
Lorenzo Guerrieri†
February 14, 2018
Abstract
Let I be an ideal whose symbolic Rees algebra is Noetherian. For m ≥ 1, the m-th
symbolic defect, sdefect(I, m), of I is defined to be the minimal number of generators
of the module I (m) /I m . We prove that sdefect(I, m) is eventually quasi-polynomial as
a function in m. We compute the symbolic defect explicitly for certain monomial
ideals arising from graphs, termed cover ideals. We go on to give a formula for the
Waldschmidt constant, an asymptotic invariant measuring the growth of the degrees of
generators of symbolic powers, for ideals whose symbolic Rees algebra is Noetherian.
MSC: 13F20; 05C25.
Keywords: Symbolic powers, Symbolic defect, Waldschmidt constant, Monomial ideals,
Cover ideals of graphs.
1 Introduction
Let I be a homogeneous ideal in a commutative Noetherian graded ring R. Given m ∈ N,
the m-th symbolic power of I is defined to be
I^(m) = ⋂_{p ∈ Ass(I)} (I^m R_p ∩ R).
From the definition it is clear that I^m ⊆ I^(m). The opposite containment, however, does not
hold in general. Much effort has been invested into determining for which values of r the
containment I (r) ⊆ I m holds. An overview of this topic, often called the “containment
problem”, can be found in papers like [13], [7], in the survey paper [20], and in [4].
While the containment problem is the most-explored line of inquiry into the relationship between symbolic and ordinary powers, there are other avenues to investigate this
relationship. One such method is to study the module I (m) /I m . This module is relatively
unexplored: Herzog in [10] studies the module using homological methods when I is a prime
∗ University of Nebraska – Lincoln, Lincoln, Nebraska. Email address: benjamin.drabkin@huskers.unl.edu
† Università di Catania, Dipartimento di Matematica e Informatica, Viale A. Doria, 6, 95125 Catania, Italy. Email address: guerrieri@dmi.unict.it
ideal of height two in a three-dimensional local Noetherian ring. Arsie and Vatne study the
Hilbert function of I (m) /I m in [1], giving examples for ideals of coordinate planes and sets of
points in Pn . More recently, in [9] Galetto, Geramita, Shin, and Van Tuyl study this module
by focusing on the m-th symbolic defect, defined by
sdefect(I, m) := µ(I^(m)/I^m),
that is, the number of minimal generators of I (m) /I m . The symbolic defect measures how
different I (m) is from I m in that it counts the number of generators which must be added to
I^m to make I^(m). For instance, the equality I^(m) = I^m is equivalent to sdefect(I, m) = 0. In
studying symbolic defect, there are many interesting questions which arise.
1. For which ideals is sdefect(I, m) bounded as a function of m?
2. For which ideals does sdefect(I, m) attain the value t, for given m, t ∈ N?
3. How does sdefect(I, m) grow as m grows?
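A standard first example (it reappears in Section 4 as the cover ideal of the triangle, where Theorem 4.9 confirms the count): for I = (xy, xz, yz) ⊆ k[x, y, z], the minimal primes are (x, y), (x, z), (y, z), so I^(2) = (x, y)^2 ∩ (x, z)^2 ∩ (y, z)^2. The monomial xyz lies in I^(2), but deg(xyz) = 3 < 4 = α(I^2), so xyz ∉ I^2; in fact I^(2) = I^2 + (xyz), and hence sdefect(I, 2) = 1.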
In [9], the first two questions are analyzed for the defining ideal of a general set of points
in P2 or a set of points forming a star configuration. Interesting results about the symbolic
defect of edge ideals of graphs can also be found in the recent work [15]. In this paper, we
concern ourselves primarily with the asymptotic behavior of the symbolic defect, i.e., we
focus on the question of how the symbolic defect sdefect(I, m) grows as a function of m for
a fixed ideal I.
Another asymptotic invariant important for the study of the symbolic powers of a homogeneous ideal I is the Waldschmidt constant, defined as
α̂(I) = lim_{m→∞} α(I^(m))/m,
where α(I) = min {deg f | f ∈ I, f ≠ 0}. This constant was introduced in [4] and is related
to work done by Waldschmidt in [23]. The Waldschmidt constant of squarefree monomial
ideals can be computed as the optimum value of a linear optimization problem as shown in
[3]. The Waldschmidt constant is connected to the containment problem by results of Bocci
and Harbourne [4]. They use it to find a lower bound for the resurgence of an ideal I, which
is defined as
ρ(I) = sup { m/r | I^(m) ⊄ I^r }.
In this paper, we study asymptotic invariants pertaining to the symbolic powers of I in
the case where I is an ideal with Noetherian symbolic Rees algebra. A notable class of ideals
having this property are the monomial ideals, as proven by Herzog, Hibi and Trung [12]. In
particular, we focus on characterizing the growth of sdefect(I, m) for any ideal satisfying the
assumption that the symbolic Rees algebra is Noetherian. We then specialize to the case in
which I is the cover ideal of a graph to obtain more refined results. In addition, we give a
formula for the Waldschmidt constant α̂(I) in this setting.
We now describe our main results and the structure of this paper. In Section 2, we prove
in Theorem 2.4 that, when I is an ideal with Noetherian symbolic Rees algebra, sdefect(I, m)
is eventually quasi-polynomial as a function of m.
In Section 3, we prove that the Waldschmidt constant of an ideal can be computed by
considering only the first few terms of the sequence (α(I^(m))/m)_{m∈N}, as long as I has
Noetherian symbolic Rees algebra. In particular, we prove in Theorem 3.6 that the Waldschmidt
constant is equal to
α̂(I) = min_{m≤n} α(I^(m))/m,
where n is the highest degree of a generator of the symbolic Rees algebra of I.
In Section 4, we consider the cover ideal J(G) of a graph G. Cover ideals, which are
generated by monomials corresponding to vertex covers of graphs, are an interesting family of
squarefree monomial ideals. Their structure has been studied in the recent years due to the
relationships between their algebraic and combinatorial properties. After some preliminary
results, we state, in Theorem 4.15, a recursive formula which computes the symbolic defect
of cover ideals for certain graphs. Moreover, in Theorem 4.8 we describe a family of graphs
achieving arbitrarily large symbolic defects.
In Section 5, we compute the degree as a quasi-polynomial of the symbolic defect for
several families of graphs including complete, circulant and cyclic graphs.
Throughout this paper, for all the standard notations we refer to classical commutative
algebra textbooks such as [2] and [8] and for the theory of monomial ideals in polynomial
rings we refer to the book [11]. For an introductory text about cover and edge ideals we
refer to [22].
2 Asymptotic behavior of the symbolic defect
We start by considering the growth of sdefect(I, m) when I is an ideal (or homogeneous
ideal) in a Noetherian local (or graded-local) ring R.
Definition 2.1. A quasi-polynomial in Q is a function f : N → Q such that for some
n ∈ N there exist rational polynomials f0 , . . . fn−1 having the property that for all m ∈ N,
f (m) = fr (m) whenever m ≡ r mod n. The value n is called a quasi-period of f .
Definition 2.2. The Rees algebra of an ideal I is defined to be the graded ring
R(I) = ⊕_{i=0}^∞ I^i t^i ⊆ R[t]
and the symbolic Rees algebra of I is defined to be
R_s(I) = ⊕_{i=0}^∞ I^(i) t^i ⊆ R[t]
where t is an indeterminate tracking the grading in the graded families of the powers and
symbolic powers of I, respectively.
We recall the useful fact that the property of a function being eventually quasi-polynomial
can be read off its generating function.
Lemma 2.3. [18, 4.4.1] If a numerical function φ : N → Z has generating function
∑_{n=0}^∞ φ(n) z^n = q(z) / ∏_{i=1}^s (1 − z^{d_i})
for some d_1, d_2, . . . , d_s ∈ N, and some polynomial q(z) ∈ Z[z], then φ(n) = Q(n) for n sufficiently large, where Q(n) is a quasi-polynomial with quasi-period given by the least common
multiple of d_1, . . . , d_s.
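For instance (a toy illustration, not an example from this paper), if ∑_{n≥0} φ(n) z^n = 1/((1 − z)(1 − z^2)), then φ(n) = ⌊n/2⌋ + 1, a quasi-polynomial of quasi-period lcm(1, 2) = 2: φ(n) = (n + 2)/2 for n even and φ(n) = (n + 1)/2 for n odd.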
One of our main results is that sdefect(I, m) grows quasi-polynomially in m if the symbolic
Rees algebra Rs(I) of I is a Noetherian ring.
Theorem 2.4. Let R be a Noetherian local or graded-local ring with maximal (or homogeneous maximal) ideal m and residue field R/m = k. Let I be an ideal (or homogeneous ideal)
of R such that Rs (I) is a Noetherian ring, and let d1 , . . . , ds be the degrees of the generators
of Rs (I) as an R-algebra. Then sdefect(I, m) is eventually a quasi-polynomial in m with
quasi-period d = lcm(d1 , . . . , ds ).
Proof. We first note that for each m ∈ N,
0 → I^m → I^(m) → I^(m)/I^m → 0
is an exact sequence of R-modules. Taking direct sums, this gives a short exact sequence of
R(I)-modules
0 −→ R(I) −→ Rs (I) −→ C(I) −→ 0
where
C(I) = ⊕_{m=0}^∞ (I^(m)/I^m) t^m.
Tensoring with \overline{R(I)} = R(I)/mR(I) gives the exact sequence
0 → K → \overline{R(I)} → \overline{R_s(I)} → \overline{C(I)} → 0
of \overline{R(I)}-modules, where
\overline{R_s(I)} = ⊕_{m=0}^∞ (I^(m)/mI^(m)) t^m
and
\overline{C(I)} = ⊕_{m=0}^∞ (I^(m)/(I^m + mI^(m))) t^m.
In particular, each strand of the above sequence is of the form
0 → K_m → I^m/mI^m → I^(m)/mI^(m) → I^(m)/(I^m + mI^(m)) → 0
where K_m = (I^m ∩ mI^(m))/mI^m.
Let h_{\overline{C(I)}}(z) be the Hilbert series for \overline{C(I)}. We note that
h_{\overline{C(I)}}(z) = ∑_{m=0}^∞ sdefect(I, m) z^m
since, by Nakayama's Lemma,
dim_k I^(m)/(I^m + mI^(m)) = µ(I^(m)/I^m).
Furthermore,
h_{\overline{C(I)}}(z) = h_{\overline{R_s(I)}}(z) + h_K(z) − h_{\overline{R(I)}}(z)
where h_{\overline{R_s(I)}}(z), h_K(z), and h_{\overline{R(I)}}(z) are the Hilbert series of \overline{R_s(I)}, K, and \overline{R(I)} respectively.
Since \overline{R(I)} is a Noetherian k-algebra generated in degree 1 and K is a homogeneous ideal
of this ring, h_{\overline{R(I)}} and h_K are rational functions of z with denominators which are powers of
(1 − z). Since \overline{R_s(I)} is a Noetherian ring, it follows that h_{\overline{R_s(I)}} is a rational function with
denominator given by ∏_{i=1}^s (1 − z^{d_i}), where d_1, d_2, . . . , d_s are the degrees of the generators of
\overline{R_s(I)} as a Noetherian k-algebra. Thus h_{\overline{C(I)}}(z) is a rational function in z, and Lemma 2.3
shows both that sdefect(I, m) is eventually quasi-polynomial in m and that the quasi-period d is given
by lcm(d_1, . . . , d_s).
We note that, in particular, Theorem 2.4 holds for monomial ideals. Sections 4 and 5 of
this paper are dedicated to sharpening the results in Theorem 2.4 in the case of monomial
ideals arising from vertex covers of graphs.
3
Computing the Waldschmidt constant
In this section we prove a decomposition for the symbolic powers of an ideal I having
Noetherian symbolic Rees algebra and we give an useful application. In order to do this, we
use information about the relationships between different symbolic powers of I captured in
the symbolic Rees algebra. In particular, when Rs (I) is Noetherian, its maximum generating
degree can be used to understand the structure of I (m) , as shown in the following lemma.
Lemma 3.1. Let R be a positive-integer-graded Noetherian ring, and let I ⊆ R be a homogeneous ideal such that Rs (I) is generated in degree at most n. Then
I (m) = I m + I (2) I (m−2) + · · · + I (n) I (m−n)
for all m > n.
Proof. Since the symbolic powers of I form a graded family of ideals, we have that
I (m) ⊇ I m + I (2) I (m−2) + · · · + I (n) I (m−n) .
As the symbolic Rees algebra is generated in degree at most n, Rs(I) = R[It, I^(2)t^2, . . . , I^(n)t^n].
Therefore we have that
I^(m) = ∑_{i_1 + 2i_2 + ··· + ni_n = m} I^{i_1} (I^(2))^{i_2} · · · (I^(n))^{i_n}.
Thus
I (m) ⊆ I m + I (2) I (m−2) + · · · + I (n) I (m−n) .
Decompositions of symbolic powers of ideals along the lines of Lemma 3.1 offer a method
for computing the Waldschmidt constant of ideals with finitely generated symbolic Rees
algebra in terms of the initial degrees of finitely many symbolic powers.
Definition 3.2. Given an ideal I ⊆ k[x1 , . . . , xn ], we define
α(I) = min{deg f | f ∈ I, f 6= 0}.
Definition 3.3. Given an ideal I, the Waldschmidt constant, α̂(I), is defined by
α̂(I) = lim_{m→∞} α(I^(m))/m.
For many details about the Waldschmidt constant of squarefree monomial ideals see the paper
[3]. The resurgence of an ideal is another constant related to the containment problem.
Definition 3.4. Given an ideal I, its resurgence, ρ(I), is defined by
ρ(I) = sup { m/r | I^(m) ⊄ I^r }.
The resurgence ρ(I) can be bounded below using the Waldschmidt constant. In particular
ρ(I) ≥ α(I)/α̂(I)
(see [4], Theorem 1.2).
Lemma 3.5. Let I be a homogeneous ideal such that Rs(I) is generated in degree at most
n. For each i ∈ {1, . . . , n}, let α_i := α(I^(i)). Then
α̂(I) = lim_{m→∞} [α_1 m + (α_2 − 2α_1)y_2 + · · · + (α_n − nα_1)y_n]/m
where y_2, . . . , y_n are positive integers minimizing α_1 m + (α_2 − 2α_1)y_2 + · · · + (α_n − nα_1)y_n
with respect to the constraint 2y_2 + 3y_3 + · · · + ny_n ≤ m.
Proof. Since Rs(I) = R[It, I^(2)t^2, . . . , I^(n)t^n], we have that
I^(m) = ∑_{2y_2 + 3y_3 + ··· + ny_n ≤ m} (I^(2))^{y_2} (I^(3))^{y_3} · · · (I^(n))^{y_n} I^{m − 2y_2 − ··· − ny_n}.
Thus
α(I^(m)) = min { α((I^(2))^{y_2} (I^(3))^{y_3} · · · (I^(n))^{y_n} I^{m − 2y_2 − ··· − ny_n}) | 2y_2 + 3y_3 + · · · + ny_n ≤ m },
and setting α_i = α(I^(i)) this gives
α(I^(m)) = min { α_1 m + (α_2 − 2α_1)y_2 + · · · + (α_n − nα_1)y_n | 2y_2 + 3y_3 + · · · + ny_n ≤ m }.
We proceed to give a formula for the Waldschmidt constant in terms of finitely many
symbolic powers.
Theorem 3.6. Let I be a homogeneous ideal such that Rs(I) is generated in degree at most
n. Then
α̂(I) = min_{m≤n} α(I^(m))/m.
Proof. By Lemma 3.5, α(I^(m)) is the minimum value of
α_1 m + (α_2 − 2α_1)y_2 + · · · + (α_n − nα_1)y_n
subject to the condition 2y_2 + 3y_3 + · · · + ny_n ≤ m, where α_i = α(I^(i)). Equivalently,
assuming that n! | m and setting z_i := iy_i, we see that α(I^(m)) is the minimum value of
α_1 m + ((α_2 − 2α_1)/2) z_2 + · · · + ((α_n − nα_1)/n) z_n        (3.1)
subject to z_2 + z_3 + · · · + z_n ≤ m, and for each i, z_i is a multiple of i. Let c ∈ {2, . . . , n}
be such that (α_c − cα_1)/c is minimal. Then
α_1 m + ((α_2 − 2α_1)/2) z_2 + · · · + ((α_n − nα_1)/n) z_n ≥ mα_1 + ∑_{i=2}^n ((α_c − cα_1)/c) z_i.        (3.2)
Since I^i ⊆ I^(i), we have α_i − iα_1 ≤ 0 for all i. Thus Equation 3.2 is minimized when
z_2 + · · · + z_n = m. Hence
mα_1 + ∑_{i=2}^n ((α_c − cα_1)/c) z_i ≥ mα_1 + m(α_c − cα_1)/c.
Thus, when n! | m, (3.1) is minimized at z_c = m and z_i = 0 for i ≠ c, with a value of mα_c/c.
Therefore
lim_{m→∞} α(I^(m))/m = α_c/c.
We note that, in particular, Theorem 3.6 holds for monomial ideals.
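For the cover ideals studied in Section 4, α(J(G)^(m)) is the least total degree of a vertex m-cover, so the formula of Theorem 3.6 can be evaluated by direct search on small graphs. The Python sketch below is our illustration (not the authors' code); the bound n on the generating degree of the symbolic Rees algebra must be supplied (n = 2 for cover ideals, by Theorem 4.2 below).

from itertools import product
from fractions import Fraction

def alpha(n_vertices, edges, m):
    # alpha(J(G)^(m)): minimize sum(a) subject to a_i + a_j >= m on each edge.
    return min(sum(a) for a in product(range(m + 1), repeat=n_vertices)
               if all(a[i] + a[j] >= m for i, j in edges))

def waldschmidt(n_vertices, edges, n=2):
    # Theorem 3.6, assuming R_s(J(G)) is generated in degree at most n.
    return min(Fraction(alpha(n_vertices, edges, m), m) for m in range(1, n + 1))

print(waldschmidt(3, [(0, 1), (0, 2), (1, 2)]))   # triangle: 3/2

For the triangle this returns 3/2 = α(I^(2))/2, in line with Corollary 4.4 below.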
4 Symbolic defect of cover ideals of graphs
Definition 4.1. Let G be a graph with vertex set {x1 , . . . , xn } and edge set E. The cover
ideal of G is defined to be
J(G) := ⋂_{{x_i,x_j} ∈ E} (x_i, x_j) ⊆ R.
For m ≥ 1, we say that a monomial g ∈ R is an m-cover of G if for every edge {xi , xj } ∈ E,
there exists one monomial of the form hij = xai xbj with a + b ≥ m such that hij divides g.
The generators of J(G) are the monomials which correspond to the vertex 1-covers of G.
For any m ∈ N, we see that
J(G)^(m) = ⋂_{{x_i,x_j} ∈ E} (x_i, x_j)^m
is generated by the monomials which correspond to vertex m-covers of G and J(G)m is
generated by the monomials which correspond to vertex m-covers which decompose into the
product of m vertex 1-covers. We say that a vertex m-cover is minimal if it is not divisible
by any other vertex m-cover. Thus, in the context of cover ideals, sdefect(J(G), m)
counts the number of minimal vertex m-covers of G which cannot be decomposed as a
product of m vertex 1-covers. We call such m-covers indecomposable.
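Since everything here is finite and combinatorial, sdefect(J(G), m) can be computed directly for small graphs. The following Python sketch is our illustration (not code from the paper, and the helper names are hypothetical); it enumerates minimal vertex m-covers as exponent vectors and counts the indecomposable ones.

from itertools import product

def minimal_m_covers(n, edges, m):
    # Exponent vectors a with a_i + a_j >= m on every edge {i, j}; exponents
    # above m are never needed for a *minimal* m-cover, so we cap them at m.
    covers = [a for a in product(range(m + 1), repeat=n)
              if all(a[i] + a[j] >= m for i, j in edges)]
    cover_set = set(covers)
    def minimal(a):
        return not any(a[i] > 0 and
                       a[:i] + (a[i] - 1,) + a[i + 1:] in cover_set
                       for i in range(n))
    return [a for a in covers if minimal(a)]

def decomposable(a, ones, m):
    # Is x^a divisible by a product of m minimal 1-covers, i.e., is x^a in J(G)^m?
    if m == 0:
        return True
    return any(all(ai >= ci for ai, ci in zip(a, c)) and
               decomposable(tuple(ai - ci for ai, ci in zip(a, c)), ones, m - 1)
               for c in ones)

def sdefect(n, edges, m):
    ones = minimal_m_covers(n, edges, 1)   # the minimal vertex covers of G
    return sum(1 for a in minimal_m_covers(n, edges, m)
               if not decomposable(a, ones, m))

For example, for the triangle C3 (vertices 0, 1, 2), sdefect(3, [(0, 1), (0, 2), (1, 2)], 2) returns 1, matching the example in the introduction.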
Herzog, Hibi and Trung proved the next important result:
Theorem 4.2 ([12], Theorem 5.1). Let G be a graph and let I = J(G) be its cover ideal.
Then, the symbolic Rees algebra Rs (I) is generated in degree at most 2.
As a corollary to this result and Theorem 2.4, we have the following:
Corollary 4.3. Let G be a graph and let I = J(G) be its cover ideal. Then sdefect(I, m) is
eventually quasi-polynomial with quasi-period at most 2.
As another consequence of Theorem 4.2 and of Theorem 3.6, we have the following
information about the Waldschmidt constant of cover ideals.
Corollary 4.4. Let I = J(G) be the cover ideal of a graph G with n vertices. Then
α̂(I) = α(I^(2))/2.
Proof. By Theorem 4.2, Rs(I) is generated in degree 2. Hence, Theorem 3.6 implies that
α̂(I) = min_{m≤2} α(I^(m))/m. Now the result follows, since α(I^(2))/2 ≤ 2α(I)/2 = α(I).
Definition 4.5. A graph G = (V, E) is bipartite if there is a partition V = V1 ∪ V2 on the
vertex set, such that V1 ∩ V2 = ∅ and for any edge {xi , xj } of G, xi ∈ V1 and xj ∈ V2 .
The following theorem by Dupont and Villarreal [5] characterizes the minimal indecomposable vertex covers of bipartite graphs and the minimal indecomposable vertex 0-, 1-, and
2-covers of non-bipartite graphs. In the latter case, since Rs (I) is generated in degree 2,
these vertex covers generate all the vertex m-covers. Given a graph G = (V, E), the set of
neighbors of a vertex xi is the set of vertices xj adjacent to xi , which means {xi , xj } is an
edge of G. Given a set of vertices S ⊆ V in G, the induced subgraph on S is the graph with
vertex set S, and edge set {{x, y} ∈ E|x, y ∈ S}.
Theorem 4.6 ([5], Theorem 1.7). Let f = x_1^{a_1} · · · x_n^{a_n} ∈ k[x_1, . . . , x_n].
(i) If G is bipartite, then f is an indecomposable minimal vertex m-cover of G if and only
if m = 0 and f = x_i for some 1 ≤ i ≤ n, or m = 1 and f = x_{j_1} . . . x_{j_l} is such that
f x_{j_i}^{-1} is not a vertex 1-cover of G for every i = j_1, . . . , j_l. For m ≥ 2, all the vertex
m-covers of G are divisible by m vertex 1-covers.
(ii) If G is non-bipartite, then the minimal indecomposable vertex 0,1 and 2-covers are of
the form:
(a) (0-covers) m = 0 and f = xi for some i,
(b) (1-covers) m = 1 and f = x_{j_1} . . . x_{j_l} is such that f x_{j_i}^{-1} is not a vertex 1-cover of
G for every i = j_1, . . . , j_l,
(c) (2-covers) m = 2 and f = x1 x2 · · · xn is the product of all the variables,
(d) (2-covers) m = 2 and
f = x_{i_1}^0 · · · x_{i_s}^0 x_{i_{s+1}}^2 · · · x_{i_{s+t}}^2 x_{i_{s+t+1}} · · · x_{i_{s+t+u}}
is such that:
(1) each x_{i_j} is distinct,
(2) s + t + u = n,
(3) {x_{i_{s+1}}, . . . , x_{i_{s+t}}} is the set of neighbors of {x_{i_1}, . . . , x_{i_s}} in G,
(4) g = x_{i_{s+1}} · · · x_{i_{s+t}} is not a vertex cover of G, and u ≠ 0,
(5) the induced subgraph on {x_{i_{s+t+1}}, . . . , x_{i_{s+t+u}}} has no isolated vertices and is
not bipartite.
An important consequence for the theory of cover ideals of graphs is the following result
which is also a corollary of work by Sullivant in [19]:
Corollary 4.7. Let G be a bipartite graph, and let I = J(G). Then sdefect(I, m) = 0 for
all m ∈ N.
Proof. As G is bipartite, by Theorem 4.6(i), the graph G does not have indecomposable
m-covers for m > 1. Thus sdefect(I, m) = 0.
In contrast to this fact, we show that in general sdefect(J(G), 2) can be arbitrarily large.
For n ∈ N, let Tn be the graph on vertices {x1 , x2 , x3 , y1 , . . . , yn } such that the induced
subgraph on {x1 , x2 , x3 } is C3 , the induced subgraph on {y1 , . . . , yn } is a path of length n−1,
and x3 is adjacent to y1 . In the case of n = 3, this graph is pictured below:
[Figure: T3 — the triangle on x1 , x2 , x3 with the path y1 , y2 , y3 attached at x3 .]
Theorem 4.8. For all n ≥ 5, sdefect(J(Tn), 2) = sdefect(J(Tn−1), 2) + µ(J(Pn−4)^2), where
Pi is the path of length i.
Proof. Let C = x_1^{a_1} x_2^{a_2} x_3^{a_3} y_1^{b_1} y_2^{b_2} · · · y_n^{b_n} be a minimal indecomposable vertex 2-cover of Tn.
Since every non-bipartite induced subgraph of Tn contains the vertices x1 , x2 , x3 , by Theorem
4.6(ii.5), we have that a1 = a2 = a3 = 1. Thus, b1 is nonzero. If b1 = 1, then b2 > 0. Consider
C′ = x_1^{a_1} x_2^{a_2} x_3^{a_3} y_1^{b_2} y_2^{b_3} · · · y_{n−1}^{b_n}.
We claim that C ′ is an indecomposable vertex 2-cover of Tn−1 . Suppose that bi = 2 for
some i. Then, as C is a minimal indecomposable vertex 2-cover for Tn , we know that either
bi−1 = 0 or bi+1 = 0. Thus, either yi−2 or yi has exponent equal to zero in C′. Thus C′ is a
vertex 2-cover of Tn−1 .
On the other hand, if b1 = 2, then by Theorem 4.6(ii) we know that b2 = 0 and b3 = 2.
The vertices y4 , . . . , yn form the path of length n − 4, that is Pn−4 , and y_4^{b_4} · · · y_n^{b_n} is a vertex
2-cover of this path. Thus, we see that
sdefect(J(Tn), 2) ≥ sdefect(J(Tn−1), 2) + µ(J(Pn−4)^2).
To show the other inequality, let G = x_1 x_2 x_3 y_1^{a_1} · · · y_{n−1}^{a_{n−1}} be an indecomposable 2-cover
for T_{n−1}. Then C = x_1 x_2 x_3 y_1 y_2^{a_1} · · · y_n^{a_{n−1}} is a vertex 2-cover for Tn . Moreover, we note that
the set {y | y^2 divides C} is exactly the set of neighbors of {y | y does not divide C}. Thus,
again by Theorem 4.6, we see that C is an indecomposable 2-cover of Tn .
Let D = y_4^{d_4} · · · y_n^{d_n} be a minimal 2-cover of Pn−4 . We claim that H = x_1 x_2 x_3 y_1^2 y_2^0 y_3^2 D is
an indecomposable 2-cover for Tn . Certainly H is a 2-cover of Tn . As {y | y^2 divides H} is
exactly the set of neighbors of {y | y does not divide C} and H is divisible by x1 x2 x3 , we see
that H is indeed indecomposable by Theorem 4.6.
It is interesting to characterize which graphs have cover ideal I with sdefect(I, 2) = 1. A
similar characterization of such graphs using the language of symbolic Rees algebras can be
found in [12] (Proposition 5.3). Our proof is different from the proof presented in [12], so we
include it below.
Recall that a graph is not bipartite if and only if it contains an odd cycle, which means
that there is an odd integer l ≥ 3 and l vertices xi1 , xi2 , . . . , xil such that for 1 ≤ j < l,
{xij , xij+1 } are edges and {xil , xi1 } is an edge.
Theorem 4.9. Let G be a graph with n vertices and let I = J(G). Then sdefect(I, 2) = 1 if
and only if G is non-bipartite and every vertex in G is adjacent to every odd cycle in G.
Proof. First assume sdefect(I, 2) = 1. Then G is not bipartite by Corollary 4.7, and hence
there is at least one odd cycle contained in G. Assume by way of contradiction that some vertex
xi of G is not adjacent to an odd cycle and let xj1 , . . . , xjc be the vertices of G adjacent
to xi . Clearly F = x1 x2 · · · xn is an indecomposable vertex 2-cover by Theorem 4.6(ii), but
in this case, by the same characterization of vertex 2-covers of graphs, we also have that
F x_i^{-1} x_{j_1} · · · x_{j_c} is an indecomposable vertex 2-cover and hence sdefect(I, 2) ≥ 2.
Conversely, assume G is non-bipartite and every vertex in G is adjacent to every odd
cycle in G, and let f = x_1^{a_1} · · · x_n^{a_n} ∈ I^(2) \ I^2 be an indecomposable minimal vertex 2-cover.
The set {xi : ai = 1} is not bipartite again by Theorem 4.6(ii), hence the induced graph on
this set contains an odd cycle. Since any vertex of the graph is adjacent to this odd cycle,
then ai 6= 0 for every i = 1, . . . , n. But this also implies ai 6= 2 for every i since f is a
minimal vertex 2-cover. Hence f = F and sdefect(I, 2) = 1.
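For instance, C5 is non-bipartite and every vertex lies on its unique odd cycle, so the theorem predicts symbolic defect 1; with the brute-force sketch from earlier in this section (an illustration):

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(sdefect(5, C5, 2))   # 1: the unique generator is x_0 x_1 x_2 x_3 x_4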
Remark 4.10. From the proof of Theorem 4.9 it is clear that when sdefect(I, 2) = 1, where
I = J(G), the unique generator of I^(2)/I^2 is F = x_1 x_2 . . . x_n. Hence, for m ≥ 3, using the
decomposition of I^(m) given in Lemma 3.1 together with the fact that the symbolic Rees
algebra Rs(I) is generated in degree 2 (see Theorem 4.2), we obtain
I (m) = (F )I (m−2) + I m .
A consequence of Remark 4.10 is a lower bound on the resurgence of the cover ideal of
this kind of graphs. Such a bound is interesting when the degree of F is strictly smaller than
2α(I), where α(I) is the minimal degree of an element of I.
Corollary 4.11. Let I = J(G) be the cover ideal of a graph G with n vertices such that
sdefect(I, 2) = 1. Then
ρ(I) ≥ 2α(I)/n if n/2 < α(I), and ρ(I) ≥ 1 if n/2 ≥ α(I).
Proof. By Remark 4.10, since the degree of F = x_1 x_2 . . . x_n is n and F is the unique generator
of I^(2)/I^2, we get α(I^(2)) = min{n, 2α(I)}. Hence the result follows from Definition 3.4 and
Corollary 4.4.
For some of the graphs having sdefect(J(G), 2) = 1 we can give an explicit recursive
formula to compute the symbolic defect of any symbolic power of the cover ideal.
Using the same notation as before, for a graph G on n vertices, we set F = x1 · · · xn
and I = J(G) = (g1 , . . . , gt ). Since by Remark 4.10, I (m) = (F )I (m−2) + I m , the possible
elements of I (m) \ I m are of the form F k gi1 · · · gis for some integers k, s such that m = 2k + s.
It is possible to find an exact formula for the symbolic defect for the graphs in which none of the
elements of this form are in the ordinary power I^m. For this reason we give the following
definition:
Definition 4.12. Let G be a graph on n vertices and I = J(G) be its cover ideal. Assume
I = (g1 , . . . , gt ) and sdefect(I, 2) = 1 and let F = x1 · · · xn . We say that G satisfies the
Indecomposability Property if for any integers k ≥ 1 and s ≥ 0, the monomial
F^k g_{i_1} · · · g_{i_s} ∉ I^{2k+s}.
The following lemma describes some graphs satisfying the Indecomposability Property.
Lemma 4.13. Let I = J(G) ⊆ R = k[x1 . . . xn ] be the cover ideal of a graph G. Assume
sdefect(I, 2) = 1 and let F = x1 · · · xn . Then G satisfies the Indecomposability Property if at
least one of the following conditions is satisfied:
1. I = (g1 , . . . , gt ), deg F < 2α(I) and deg gi = α(I) for every i = 1, . . . , t.
2. I = (g1 , . . . , gt , h1 , . . . , hs ) is generated in two different degrees deg gi = α1 < α2 =
deg hl for all i, l and deg F < α1 + α2 . Moreover there exists j ∈ {1, . . . , n} such that
the variable xj divides gi for every i but does not divide hl for any l.
3. I = (g1 , . . . , gt , h1 , . . . , hs ) is generated in two different degrees deg gi = α1 < α2 =
deg hl for all i, l. Moreover deg F < α1 + α2 , there are c indices j1 , . . . , jc , such that
the variables xj1 , . . . , xjc divide hl for every l and do not divide gi for any i, and
α2 − α1 ≤ c.
Proof. 1) The claim follows since deg(F^k g_{i_1} · · · g_{i_s}) < (2k + s)α(I) = α(I^{2k+s}).
2) Take an element of the form
q = F^k g_{i_1} · · · g_{i_{s_1}} h_{i_1} · · · h_{i_{s_2}}.
It follows that x_j^{k+s_1} divides q but no bigger power of x_j divides q. By way of
contradiction suppose q ∈ I^{2k+s}. Hence q is divisible by at most k + s_1 generators of the
form gi and by at least k + s2 generators of the form hl . Hence there are two integers a, b
such that a ≤ k + s1 , b ≥ k + s2 and a + b = 2k + s and q is divisible by a generators of the
form gi and by b generators of the form hl . Then,
deg q = k deg F + s1 α1 + s2 α2 < (k + s1 )α1 + (k + s2 )α2 ,
but there exists an integer c ≥ 0 such that (k + s1 )α1 + (k + s2 )α2 = (a + c)α1 + (b − c)α2
and hence, since α1 < α2 ,
deg q < (k + s1 )α1 + (k + s2 )α2 ≤ aα1 + bα2 ≤ deg q
which is a contradiction, given the assumptions on a and b.
3) As in the proof of (2), take
q = F^k g_{i_1} · · · g_{i_{s_1}} h_{i_1} · · · h_{i_{s_2}}.
Let X := x_{j_1} · · · x_{j_c}. Thus X^{k+s_2} divides q and no bigger power of X divides it. Now,
assuming by way of contradiction q ∈ I^{2k+s}, we get that q is divisible by at least k + s_1
generators of the form g_i and by at most k + s_2 generators of the form h_l. As before there
are two integers a, b such that a ≥ k + s_1, b ≤ k + s_2, a + b = 2k + s and q is divisible by a
generators of the form g_i and by b generators of the form h_l.
Hence there exists an integer d ≥ 0 such that k + s_1 = a − d and k + s_2 = b + d, and we
can write deg q = aα_1 + bα_2 + w with w ≥ dc, since we need X^{k+s_2} to divide q. It
follows that
deg q = k deg F + s_1 α_1 + s_2 α_2 < (k + s_1)α_1 + (k + s_2)α_2 = aα_1 + bα_2 + d(α_2 − α_1) ≤ deg q
since α_2 − α_1 ≤ c, and this is a contradiction.
We can give examples of graphs satisfying each one of the conditions of Lemma 4.13.
Recall that a graph is complete if for every two distinct vertices xi , xj , the pair {xi , xj } is
an edge. We denote the complete graph with n vertices by Kn . A graph is a cycle with n
vertices x1 , x2 , . . . , xn if its edges are of the form {xi , xi+1 } modulo n (i.e., {xn , x1 } is also an
edge). We denote the cycle with n vertices by Cn . Every cyclic graph with an even number
of vertices is bipartite.
In the next section, complete graphs and the cycles C3 , C5 , C7 will be shown to satisfy
condition 1 of Lemma 4.13. The following graph satisfies condition 2 of Lemma 4.13.
[Figure: a graph satisfying condition 2 of Lemma 4.13 — 3-cycles joined at the common vertex x.]
Similarly, any graph consisting of 3-cycles all joined at a single vertex satisfies Lemma
4.13. The following graph satisfies condition 3 of Lemma 4.13:
[Figure: a graph on x1 , x2 , x3 , x4 satisfying condition 3 of Lemma 4.13.]
We now concern ourselves with the computation of the symbolic defect for graphs satisfying the Indecomposability Property.
Lemma 4.14. Let I = J(G) ⊆ R = k[x1 , . . . , xn ] be a cover ideal of a graph, G, on n
vertices and let f ∈ k[x1 , . . . , xn ] be such that
f ∏_{i≠a,b} x_i ∈ I^(m)
for all a, b ∈ {1, . . . , n}. Then f ∈ I^(m).
Proof. Suppose f = x_1^{a_1} x_2^{a_2} · · · x_n^{a_n} ∉ I^(m). Then there exists an edge {x_s , x_t} such that
a_s + a_t < m. But then f ∏_{i≠s,t} x_i ∉ (x_s , x_t)^m, and thus f ∏_{i≠s,t} x_i ∉ I^(m).
Theorem 4.15. Let I = J(G) ⊆ R = k[x1 . . . xn ] where G is a graph such that sdefect(I, 2) =
1 and let F = x1 · · · xn . Assume that the graph G satisfies the Indecomposability Property.
Let ν(I, m) be the number of minimal generators of I m which are not divisible by F .
Then for m ≥ 3,
sdefect(I, m) = sdefect(I, m − 2) + ν(I, m − 2).
Proof. By Remark 4.10 and by assuming sdefect(I, 2) = 1, we have I (m) = (F )I (m−2) + I m .
Then the minimal elements of I (m) \ I m are of the form F k gi1 · · · gis where gi are minimal
generators of I and m = 2k+s. Conversely, since the graph G satisfies the Indecomposability
Property, any such monomial is in I (m) \ I m . Hence, for g such that g is a minimal generator
of I^(m−2)/I^{m−2}, we have that F g is in I^(m) \ I^m.
The module I (m) /I m is generated by the images of the minimal generators of (F )I (m−2) ,
which are either generators of (F )(I (m−2) /I m−2 ) or images of generators of (F )I m−2 . It
follows that sdefect(I, m) is equal to sdefect(I, m − 2) plus the number of minimal generators
of (F )I m−2 which are not already multiples of F g for some g ∈ I (m−2) \ I m−2 .
Dividing by F on both sides, we have that the last number coincides with the number of
minimal generators of I^{m−2} which are not multiples of any element of I^(m−2) \ I^{m−2}, and we
will show that this number is ν(I, m − 2). Let f be a minimal generator of I m−2 . If f 6∈ (F ),
then f is not a multiple of an element of I (m−2) \ I m−2 since we showed, in Remark 4.10,
that I (m−2) \ I m−2 ⊆ (F ).
On the other hand, assume f ∈ (F ). Then f = F h for some h ∈ R. Since F h ∈ I m−2 , we
know F h ∈ (x_s, x_t)^{m−2} for all x_s, x_t where {x_s, x_t} is an edge of G. Thus (F/(x_s x_t)) h ∈ (x_s, x_t)^{m−4}
and so h ∈ I (m−4) by Lemma 4.14.
Then f ∈ F I (m−4) , and so by Remark 4.10, we get
f ∈ (F)I^(m−4) = (F)I^{m−4} + (F^2)I^{m−6} + . . . + (F)^{⌊m/2⌋} J^∗
where J ∗ = R for m even and J ∗ = I for m odd. Notice now that the minimal generators
of all the summands of the previous equation are contained in I^(m−2) \ I^{m−2} (since F ∉ I^2).
Therefore f is divisible by some element in I (m−2) \ I m−2 . This proves the formula and the
theorem.
Remark 4.16. The formula for the symbolic defect presented in Theorem 4.15 does not
apply to every graph. For instance consider the graph G = (V, E), pictured below.
[Figure: the graph G of Remark 4.16 on vertices x1 , x2 , x3 , y1 , y2 , y3 .]
For this graph, the cover ideal is
I = (g1 , g2 , g3 , g4) = (x1 x2 x3 , x1 x2 y3 , x1 x3 y2 , x2 x3 y1 )
and we observe that, since F = x1 x2 x3 y1 y2 y3 has degree 6, the ideal I does not satisfy any
of the conditions of Lemma 4.13. Moreover we see that F g1 = g2 g3 g4 ∈ I 3 and therefore
sdefect(I, m) is less than predicted by the formula of Theorem 4.15 for m ≥ 3.
5 Computing symbolic defect for certain cover ideals
Recall that by Theorem 2.4, the symbolic defect of the cover ideal, I, of a graph grows quasipolynomially in m, since Rs (I) is Noetherian (see Theorem 4.2). After some preliminary
results, we discuss some examples of cover ideals of graphs for which we are able to compute
the symbolic defect or its degree as quasi-polynomial in m. In the simple case of complete
graphs it is possible to find the explicit value of the symbolic defect using Theorem 4.15.
Theorem 5.1. Let Kn be the complete graph with n vertices and let I be its cover ideal.
Call F = x1 x2 · · · xn . Then sdefect(I, m) grows as a linear polynomial in m.
Proof. Observe that I = (g_1, . . . , g_n) where g_i = F x_i^{-1}. Since I satisfies condition 1 of
Lemma 4.13, it satisfies the Indecomposability Property.
Since F divides g_i g_j for i ≠ j, the minimal generators of I^m which are not divisible by F
are g_1^m, g_2^m, . . . , g_n^m.
It follows that ν(I, m) = n for every m ≥ 1 and by Theorem 4.15, sdefect(I, m) = nk + 1
for m = 2k + 2 and sdefect(I, m) = nk for m = 2k + 1. Thus sdefect(I, m) is eventually a
linear polynomial in m.
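With the brute-force sketch from Section 4, this closed form can be spot-checked on K4 (illustrative only; sdefect is the hypothetical helper defined there):

from itertools import combinations
K4 = list(combinations(range(4), 2))          # all pairs are edges
# Theorem 5.1 with n = 4: sdefect = 4k + 1 for m = 2k + 2 and 4k for m = 2k + 1.
print([sdefect(4, K4, m) for m in (2, 3, 4, 5)])   # [1, 4, 5, 8]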
For the computation in other cases of graphs with sdefect(J(G), 2) = 1, we need some
further results about the degree in m of ν(I, m). For an ideal I ⊆ R = k[x_1, . . . , x_n], the
number of minimal generators µ(I^m) of the powers of I grows as a polynomial in m for m ≫ 0. This is
clear since the fiber cone of I is a standard-graded Noetherian k-algebra and thus its Hilbert
function is eventually given by a polynomial (see [16], Theorem 13.2). We call the degree of
this polynomial the degree of µ(I m ).
In Proposition 5.2 the degree in m of ν(I, m) is related to the degree of µ(I^m).
In the case in which the generators of I are algebraically independent over the base field k
we recall how to compute µ(I m ) in Proposition 5.3.
Proposition 5.2. Let I = J(G) ⊆ R = k[x_1, . . . , x_n] where G is a graph such that sdefect(I, 2) = 1, and assume that G satisfies the Indecomposability Property.
Then for every i = 1, . . . , n we have
µ(((I + x_i R)/x_i R)^m) ≤ ν(I, m) ≤ ∑_{j=1}^n µ(((I + x_j R)/x_j R)^m).        (5.1)
Moreover, take i such that µ((I + x_i R)/x_i R) ≥ µ((I + x_j R)/x_j R) for all j, and let d be the degree of µ(((I + x_i R)/x_i R)^m), seen as a polynomial in m for m ≫ 0. Then for m ≫ 0, sdefect(I, m) is a quasi-polynomial
in m of degree d + 1.
Proof. Recall from Remark 4.10 that when sdefect(I, 2) = 1, the unique
generator of I^(2)/I^2 is F = x_1 x_2 · · · x_n. Hence ν(I, m) is the number of minimal generators of I^m
not divisible by at least one variable. Since µ(((I + x_i R)/x_i R)^m) is the number of minimal generators
of I^m not divisible by x_i, we can easily see the inequalities in (5.1).
Now if µ((I + x_i R)/x_i R) ≥ µ((I + x_j R)/x_j R) for all j, then µ(((I + x_i R)/x_i R)^m) and ∑_{j=1}^n µ(((I + x_j R)/x_j R)^m) both have
degree d as polynomials in m.
The sequence {ν(I, m)}m≥1 is non-decreasing since F is not in the set of minimal generators of I, and it is bounded by a polynomial. Hence it admits a (possibly infinite) limit as
m grows to infinity. Therefore, dividing both sides of (5.1) by md and passing to the limit
for m going to infinity, we can see that ν(I, m) is quasi-polynomial in m of degree d.
For a given m, let k = ⌊(m − 2)/2⌋. By Theorem 4.15 we have
sdefect(I, m) = ∑_{i=0}^k ν(I, 2i + c)
where c = 0 if m is even and c = 1 if m is odd (recall also that ν(I, 0) = 1 and ν(I, 1) = µ(I)).
It follows that sdefect(I, m) is quasi-polynomial in m of degree one more than the degree of
ν(I, m).
Proposition 5.3. Let I = (g_1, . . . , g_s) be a squarefree monomial ideal in a polynomial ring
over a field k. Then:
1. The generators of I are algebraically independent over k if the matrix M_{ij} = {∂g_i/∂x_j} has
maximal rank.
2. If the generators of I are algebraically independent over k, then
µ(I^m) = \binom{m+s−1}{s−1}
is a polynomial in m of degree s − 1.
Proof. We note that 1 follows immediately from Jacobi's criterion [14].
To prove 2, we first note that, since g_1, . . . , g_s are algebraically independent, g_1^{a_1} · · · g_s^{a_s} =
g_1^{b_1} · · · g_s^{b_s} if and only if a_i = b_i for 1 ≤ i ≤ s. Thus the number of minimal generators of I^m
is equal to the number of ways to distribute m objects between s sets. Therefore
µ(I^m) = \binom{m+s−1}{s−1} = (m + s − 1) · · · (m + 1)/(s − 1)!
which is a polynomial in m of degree s − 1.
We can use Propositions 5.2 and 5.3 in order to explicitly compute the degree in m of
sdefect(I, m) when I is the cover ideal of a cycle in Theorem 5.6.
Let Cn be the n-cycle and let I be its cover ideal. Since an even cycle is a bipartite
graph, it is enough to consider n = 2k + 1 to be an odd number. For every i = 1, . . . , n the
monomials
g_i = x_i x_{i+2} x_{i+4} · · · x_{i+n−1} (indices mod n)
are vertex 1-covers of Cn and their degree is minimal among the degrees of the minimal
generators of I. Indeed, any other generator of I, if it exists, has degree greater than α(I). We
prove the following lemmas:
Lemma 5.4. Let n = 2k + 1 and let Cn be the n-cycle. Let I be the cover ideal of Cn and
call F = x_1 x_2 · · · x_n. Assume G is a minimal generator of I such that deg G > α(I). Then:
1. There exists a minimal generator H of I and a positive even integer s < n such that
deg H = deg G − 1 and
G = H x_j x_{j+1}^{-1} x_{j+2} x_{j+3}^{-1} · · · x_{j+s−1}^{-1} x_{j+s}.
2. F G ∈ I 3 . In particular F G is equal to the product of three minimal generators of I.
Proof. 1) All the generators of I of degree α(I) are of the form g_i = x_i x_{i+2} x_{i+4} · · · x_{i+n−1} (indices mod n),
as mentioned in the paragraph above. Hence, if we assume deg G > α(I), we have that there
are two different monomials xi xi+1 and xj xj+1 dividing G, with i + 1 < j. We can also
assume without loss of generality that, for every i + 1 < h < j, xh divides G if and only if
xh−1 and xh+1 do not divide G. Clearly, since G is a minimal 1-cover of Cn , xi+2 and xj−1
do not divide G. Thus, we have a sequence of indices i + 1, i + 2, i + 3, . . . , j − 2, j − 1, j such
that the variables xi+1 , xi+3 , . . . , xj−3 , xj divide G and all the others in the sequence do not
divide G. It follows that j − i is an odd number, otherwise we would have two consecutive
variables xh xh+1 not dividing G and this is impossible since {h, h + 1} is an edge of Cn .
Hence the monomial
H = G x_{i+1}^{-1} x_{i+2} x_{i+3}^{-1} · · · x_{j−1} x_j^{-1}
is a well defined 1-cover of Cn and deg H = deg G − 1. This proves item 1, with initial index i + 1 and s = j − i − 1.
2) We claim that
F G = F H x_j x_{j+1}^{-1} x_{j+2} x_{j+3}^{-1} · · · x_{j+s−1}^{-1} x_{j+s} = H g_j g_{j+s+1}
and hence F G ∈ I^3. Indeed, this is equivalent to proving
g_j g_{j+s+1} = F x_j x_{j+1}^{-1} x_{j+2} x_{j+3}^{-1} · · · x_{j+s−1}^{-1} x_{j+s}.
But this follows since, in the case j is odd, j + s + 1 is even and by definition
g_j = x_j x_{j+2} x_{j+4} · · · x_n x_2 x_4 · · · x_{j−3} x_{j−1}
and
g_{j+s+1} = x_{j+s+1} x_{j+s+3} · · · x_{n−1} x_1 x_3 · · · x_{j+s−2} x_{j+s},
and hence in the products, all the variables with odd index between j and j + s appear twice
and all the variables with even index in the same interval do not appear. If j is even, the
situation is reversed and the result follows in the same way.
Lemma 5.5. Let n = 2k + 1 and let Cn be the n-cycle. Let I be the cover ideal of Cn and
call F = x_1 x_2 · · · x_n. For m ≥ 2, take q = F^h f_1 · · · f_s where the f_i are minimal generators of I
and m = 2h + s.
Then, q ∈ I (m) \ I m if and only if deg fi = α(I) for every i. Moreover, all the monomials
generating I (m) /I m are of this form.
Proof. A cyclic graph of odd order has sdefect(I, 2) = 1 by Theorem 4.9. Hence, by Remark
4.10, we have I (m) = (F )I (m−2) + I m and the possible minimal generators of I (m) /I m are all
of the form q = F^h f_1 · · · f_s described in the statement above. It is clear that q ∈ I^(m). By
item 2 of Lemma 5.4, if some fi has degree greater than α(I) , we note that F fi is a product
of three minimal generators of I, and thus we can rewrite q as a product of F^{h−1} and s + 2
minimal generators of I. Iterating this process as far as we can, we find two possibilities for
q:
i) Either q = F^a f_1 · · · f_b with 2a + b = m and deg f_i = α(I) for every i = 1, . . . , b, or
ii) q = F f1 · · · fb with b + 2 = m and there exists at least one fi such that deg fi > α(I).
In the first case, since deg F = n < n + 1 = 2α(I), we have
deg q = a deg F + bα(I) < (2a + b)α(I) = α(I^m),
and therefore q ∉ I^m.
In the second case, assuming deg f1 > α(I), we have again by item 2 of Lemma 5.4,
q = h1 h2 h3 f2 . . . fb with hi ∈ I, and hence q ∈ I b+2 = I m . This proves the lemma.
Theorem 5.6. Let n = 2k + 1 and let Cn be the n-cycle. Let I be the cover ideal of Cn and
call F = x_1 x_2 · · · x_n. Define I⋆ = (g_1, . . . , g_n) where g_i = x_i x_{i+2} x_{i+4} · · · x_{i+n−1} (indices mod n) and let
ν(I ⋆ , m) be the number of minimal generators of (I ⋆ )m which are not divisible by F . Then:
1. For m ≥ 3, sdefect(I, m) = sdefect(I, m − 2) + ν(I ⋆ , m − 2).
2. The degree of sdefect(I, m) as a quasipolynomial in m is k.
Proof. 1) In the case n ≤ 7, it is possible to observe that all the minimal generators of I have
the same degree, equal to k + 1. Hence the graph satisfies the Indecomposability Property,
because it satisfies condition 1 of Lemma 4.13. In this case, by definition I = I ⋆ and thus
the result follows applying Theorem 4.15.
Now suppose that n ≥ 9. We proceed along the method used to prove Theorem 4.15. Let
m ≥ 2 and let q ∈ I^(m) \ I^m be a minimal generator of I^(m)/I^m. By Lemma 5.5, q = F^h f_1 · · · f_s
where the f_i are minimal generators of I, m = 2h + s, and deg f_i = α(I) for every i. Hence
for such a monomial q ∈ I (m−2) \ I m−2 , we have F q ∈ I (m) \ I m , again by Lemma 5.5.
Thus we have that, as in the proof of Theorem 4.15, the symbolic defect sdefect(I, m) is
equal to sdefect(I, m − 2) plus the number of minimal generators of (F )(I ⋆ )m−2 which are
not multiples of F g for some g ∈ I (m−2) \ I m−2 .
Again as in the proof of Theorem 4.15, we have that the last number coincides with
the number of minimal generators of (I ⋆ )m−2 which are not multiples of any element of
I (m−2) \ I m−2 and we can prove that this number is ν(I ⋆ , m − 2) and this proves the formula
in this case, using the same argument.
2) Applying the proof of Proposition 5.2 to the ideal I⋆, we see that this fact is equivalent
to showing that µ(((I⋆ + x_i R)/x_i R)^m) is a quasi-polynomial of degree k − 1 for some variable x_i which
maximizes such degree. The symmetry of cyclic graphs tells us that µ(((I⋆ + x_i R)/x_i R)^m) =
µ(((I⋆ + x_1 R)/x_1 R)^m) for every i. Hence we just consider the ideal (I⋆ + x_1 R)/x_1 R = (f_3, f_5, . . . , f_n).
Notice that µ(I⋆/x_1) = k and we want to conclude the proof by applying item 2 of Proposition 5.3. In order to do this, by item 1 of Proposition 5.3, we need to show that the matrix
M = M_{ij} = {∂g_i/∂x_j} has maximal rank.
The matrix M_{ij} has k rows and n − 3 columns. Consider the maximal square submatrix
H obtained taking the columns 3, 5, . . . , n − 2, n − 1 of M and let 3 ≤ j ≤ n − 2 be an odd
number. Then, by definition of g_i for i ≠ 1 and odd, we have
∂g_i/∂x_j = 0 ⟺ x_j does not divide g_i ⟺ j < i.
It is also easy to see that ∂g_i/∂x_{n−1} = 0 for i ≠ 1 odd and ∂g_n/∂x_{n−1} ≠ 0. Hence the matrix H is upper
triangular with nonzero elements on the diagonal and hence the rank of M is maximal.
A natural generalization of complete graphs and cyclic graphs is given by the class of
circulant graphs. Results about ideals related to circulant graphs are given for instance in
[17], [21], [6]. Here we state a result about their cover ideals.
Definition 5.7. Let n be a positive integer and let 1 ≤ s ≤ ⌊n/2⌋. Denote by Zn the cyclic
group with n elements and let S = {1, 2, . . . , s} ⊆ Zn.
The circulant graph Cn (1, 2, . . . , s) is defined as the graph with vertex set {x1 , . . . , xn }
and with the edge set such that, for i > j, {xi , xj } is an edge if and only if i − j ∈ S. Note
that the cycle Cn is equal to Cn (1) and the complete graph Kn is Cn (1, 2, . . . , ⌊n/2⌋).
The graph in the next picture is the circulant graph C6 (1, 2).
[Figure: the circulant graph C6 (1, 2) on vertices x1 , . . . , x6 .]
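Circulant examples are easy to generate for the brute-force sketch of Section 4 (an illustration; sdefect is the hypothetical helper defined there):

def circulant_edges(n, s):
    # {x_i, x_j} is an edge iff the circular distance between i and j is <= s.
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if min(j - i, n - (j - i)) <= s]

print(sdefect(6, circulant_edges(6, 2), 2))   # C_6(1,2): 1, as in Theorem 5.8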
We now characterize which circulant graphs have cover ideals with sdefect(I, 2) = 1 and compute
sdefect(I, m) in such cases. In the next proposition we will consider circulant graphs which are
neither cyclic nor complete since we have already dealt with those cases.
Theorem 5.8. Let n ≥ 6 and 2 ≤ s < k := ⌊n/2⌋ be two integers. Let G = Cn(1, 2, . . . , s) and
let I = J(G). Then:
1. sdefect(I, 2) = 1 if and only if s = k − 1.
2. If s = k − 1, then for m >> 0, sdefect(I, m) grows as a quasi-polynomial in m of
degree 1 if m is even and of degree 2 if m is odd.
Proof. 1) We will use the characterization of graphs such that sdefect(J(G), 2) = 1 given
in Theorem 4.9. Since s ≥ 2, every three consecutive vertices of G form an odd cycle. By
definition of circulant graph, for every i = 1, . . . , n, {xi , xi+j } and {xi , xi−j } are edges of G for
all positive j ≤ s (notice that i + j and i − j are considered modulo n).
For simplicity, using the symmetry of G we can assume without loss of generality that
i = 1. Then, the vertices xs+2 , xs+3 , xs+4 form an odd cycle and hence, if sdefect(I, 2) = 1,
it follows that x1 has to be adjacent to such an odd cycle. This means that xs+4 has to be
adjacent to x1 and thus is of the form xn+1−j for 1 ≤ j ≤ s. It follows that s + 4 ≥ n − s + 1
which implies n ≤ 2s + 3.
Conversely, assume n ≤ 2s + 3. Hence, for every vertex v of G there are at most two
vertices not adjacent to v and this implies that v is adjacent to every odd cycle of G.
It is easy to see that if n ≤ 2s + 3, then (n − 3)/2 ≤ s < k and hence s = k − 1.
2) In this case, we first show that the graph Cn (1, 2, . . . , k − 1) satisfies the Indecomposability Property and hence sdefect(I, m) is given by the formula of Theorem 4.15. As usual
let F = x1 x2 · · · xn .
The generators of I are of the form F x_i^{-1} x_{k+i}^{-1} if n is even, and of the form F x_i^{-1} x_{k+i}^{-1} and
F x_i^{-1} x_{k+i+1}^{-1} if n is odd, with i ∈ {1, 2, . . . , n} and where the sums k + i and k + i + 1 are
considered modulo n. Hence every generator of I has degree n − 2 and the graph satisfies
condition 1 of Lemma 4.13 and therefore the Indecomposability Property.
As we did in Theorem 5.6 for cyclic graphs, we use the symmetry of the graph and
Proposition 5.2 to see that the degree of sdefect(I, m) as a quasi-polynomial in m is one more
than the degree as a quasi-polynomial in m of µ(((I + x_1 R)/x_1 R)^m).
When n is even, since there is only one vertex x_{k+1} not adjacent to x_1, we have that
(I + x_1 R)/x_1 R is principal, generated by F x_1^{-1} x_{k+1}^{-1}, and then µ(((I + x_1 R)/x_1 R)^m) is constant. If instead n
is odd, there are exactly two vertices x_{k+1} and x_{k+2} not adjacent to x_1, and hence (I + x_1 R)/x_1 R is
generated by F x_1^{-1} x_{k+1}^{-1} and F x_1^{-1} x_{k+2}^{-1}, which are monomials algebraically independent over
k, and hence the degree in m of µ(((I + x_1 R)/x_1 R)^m) is 1 by item 2 of Proposition 5.3.
We conclude this section by showing how the symbolic defect of a cover ideal can behave
under a graph operation.
Let G = (V = {x1 , . . . , xn }, E) be a graph. Consider the graph
G′ = (V ∪ {y}, E ∪ ⋃_{i=1}^n {{x_i, y}})
obtained by adding one vertex y to G and connecting all the vertices of G to the new
vertex y. We can compute the symbolic defect of G′ assuming that both G and G′ satisfy
the Indecomposability Property. For instance, this happens if the cover ideal I of G fulfills
condition 1 of Lemma 4.13, so that it is generated in one degree, and therefore the cover
ideal I′ of G′ satisfies condition 2 of Lemma 4.13.
Theorem 5.9. Let G = (V = {x1 , . . . , xn }, E) be a graph and let
G′ = (V ∪ {y}, E ∪ ⋃_{i=1}^n {{x_i, y}}).
Assume both G and G′ satisfy the Indecomposability Property.
Then, for m ≥ 3, sdefect(I′, m) = sdefect(I, m) + ⌊(m − 1)/2⌋.
Proof. Clearly by Theorem 4.9, if sdefect(I, 2) = 1, then also sdefect(I ′ , 2) = 1.
Following the formula given in Theorem 4.15, it suffices to compute ν(I ′ , m) in terms of
ν(I, m). First we observe that I ′ = yI + (F ) where F = x1 · · · xn . Hence
(I′)^m = y^m I^m + F y^{m−1} I^{m−1} + · · · + F^{m−1} y I + (F^m).
All the terms except the first and the last are divisible by F y and therefore by definition they
are not counted in ν(I ′ , m). Moreover, F m is not divisible by F y and a generator y m g of y m I m
is divisible by F y if and only if g is divisible by F . It follows that ν(I ′ , m) = ν(I, m) + 1.
Acknowledgements
Most of this work was carried out at the PRAGMATIC workshop at the Università di Catania
in the summer of 2017. We would like to thank the workshop organizers, Alfio Ragusa, Elena
Guardo, Francesco Russo, and Giuseppe Zappalà, as well as the lecturers and collaborators,
Brian Harbourne, Adam Van Tuyl, Enrico Carlini, and Tài Huy Hà. Special thanks go
to Adam Van Tuyl, Alexandra Seceleanu, Tài Huy Hà, Jonathan Montaño, and Giancarlo
Rinaldo for helpful conversations and guidance.
References
[1] A. Arsie, J.E. Vatne, A note on symbolic and ordinary powers of homogeneous ideals.
Ann. Univ. Ferrara Sez. VII (N.S.) 49 (2003), 19–30.
[2] M.F. Atiyah, I.G. Macdonald, Introduction to commutative algebra. Addison-Wesley
Publishing Co., Reading, Mass.-London-Don Mills, Ont. 1969.
[3] C. Bocci, S. Cooper, E. Guardo, B. Harbourne, M. Janssen, U. Nagel, A. Seceleanu,
A. Van Tuyl, T. Vu, The Waldschmidt constant for squarefree monomial ideals. J.
Algebraic Combin. 44 (2016), no. 4, 875–904.
[4] C. Bocci, B. Harbourne, Comparing powers and symbolic powers of ideals. J. Algebraic
Geom. 19 (2010), no. 3, 399–417.
[5] L. Dupont, R. Villarreal, Symbolic Rees algebras, vertex covers and irreducible representations of Rees cones. Algebra Discrete Math. 10 (2010), no. 2, 64–86 (2011).
[6] J. Earl, K. N. Vander Meulen, A. Van Tuyl, Independence complexes of well-covered
circulant graphs. Experimental Mathematics 25 (2016) 441-451.
[7] L. Ein, R. Lazarsfeld, K. Smith, Uniform behavior of symbolic powers of ideals. Invent.
Math. 144 (2001), 241–252.
[8] D. Eisenbud, Commutative Algebra: with a View Toward Algebraic Geometry. Springer
Verlag, New York, 1995.
[9] F. Galetto, A. V. Geramita, Y.-S. Shin, A. Van Tuyl, The symbolic defect of an ideal.
Preprint (2016). arXiv:1610.00176
[10] J. Herzog, A homological approach to symbolic powers. Springer, Commutative Algebra
Proceedings of a Workshop held in Salvador, Brazil, Aug. 8–17, 1988 32-46.
[11] J. Herzog, T. Hibi, Monomial ideals. Graduate Texts in Mathematics, 260. Springer-Verlag London, Ltd., London, 2011.
[12] J. Herzog, T. Hibi, N. V. Trung, Symbolic powers of monomial ideals and vertex cover
algebras. Advances in Mathematics 210 (2007) 304-322.
[13] M. Hochster, C. Huneke, Comparison of symbolic and ordinary powers of ideals. Invent.
Math. 147 (2002), no. 2, 349–369.
[14] C.G.J. Jacobi, De determinantibus functionalibus. J. Reine Angew. Math. 22 (1841),
319–359.
[15] M. Janssen, T. Kamp, J. Vander Woude, Comparing Powers of Edge Ideals.
arXiv:1709.08701
[16] H. Matsumura (trans. M. Reid), Commutative Ring Theory. Cambridge University Press,
Cambridge, UK, 1986.
[17] G. Rinaldo, Some algebraic invariants of edge ideal of circulant graphs. arXiv:1701.01357
[18] R. Stanley, Enumerative Combinatorics, Second Edition. Cambridge University Press,
1997.
[19] S. Sullivant, Combinatorial symbolic powers, Journal of Algebra 319 (2008) 115-142.
[20] T. Szemberg, J. Szpond, On the containment problem, Rend. Circ. Mat. Palermo, II.
Ser (2017) 66, 233-245
[21] K. N. Vander Meulen, A. Van Tuyl, C. Watt, Cohen-Macaulay circulant graphs, Communications in Algebra 42 (2014), 1896–1910.
[22] A. Van Tuyl, A beginner’s guide to edge and cover ideals In Monomial Ideals, Computations and Applications. Lecture Notes Math. 2083, Springer, 2013, 63–94.
[23] M. Waldschmidt, Propriétés arithmétiques de fonctions de plusieurs variables. II. In
Séminaire P. Lelong (Analyse), 1975/76, Lecture Notes Math. 578, Springer, 1977,
108–135.
Slope Stability Analysis with Geometric Semantic Genetic Programming
Juncai Xu1,2, Zhenzhong Shen1, Qingwen Ren1, Xin Xie3, and Zhengyu Yang3
1. College of Water Conservancy and Hydropower Engineering, Hohai University, Nanjing, China
2. State Key Laboratory of Simulation and Regulation of Water Cycle in River Basin, China Institute of Water Resources and
Hydropower Research, Beijing, China
3. Department of Electrical and Computer Engineering, Northeastern University, Boston, MA USA
E-mail: xujc@hhu.edu.cn
Abstract:
Genetic programming has been widely used in the engineering field. Compared with the conventional genetic
programming and artificial neural network, geometric semantic genetic programming (GSGP) is superior in
convergence and computing efficiency. In this paper, GSGP is adopted for the classification and regression analysis of
a sample dataset. Furthermore, a model for slope stability analysis is established on the basis of geometric semantics.
According to the results of the study based on GSGP, the method can analyze slope stability objectively and is highly
precise in predicting slope stability and safety factors. Hence, the predicted results can be used as a reference for slope
safety design.
Key words: genetic programming; artificial neural network; geometric semantics; slope stability; safety factor
1 Introduction
Slope stability is an important issue in slope safety evaluation. At present, the widely used evaluation
methods include the limit equilibrium method and the numerical analysis method [1, 2, 17-26]. Although these methods are
supported by the perfect theories of mechanics and scientific systems, they are nonetheless based on
assumptions. Hence, given the nonlinear relations between the various factors that influence slope systems, these methods
are ineffective in predicting slope stability. They also have other weaknesses, such as a complicated
computing process and large computing scale.
To address the aforementioned problems, machine learning algorithms, including the neural net algorithm, genetic
programming, and adaptive fuzzy inference, are applied in slope stability prediction. Studies have demonstrated that
these artificial intelligence methods can be used in slope stability analysis [3, 4, 27-40]. Despite these methods, some
problems remain. For example, when the neural net algorithm is adopted for prediction, slow convergence speed, local
convergence, and overfitting usually occur because the relationship between the input and output variables is
treated as a black box. Although genetic programming can solve for a formula for slope stability prediction, the
problems of a low convergence rate and divergent results still exist, except at a large computing scale [5–7]. A support
vector machine (SVM) based on minimum risk probability and a least squares SVM (LSSVM) were recently applied
in slope stability analysis. According to the analysis results, SVM is relatively effective in predicting safety factors
but is deficient in classifying the stability state of slopes. Thus, we use a geometric semantic algorithm to improve
computing efficiency and reduce the errors by improving the evolution strategy in genetic programming. As revealed
by the application of GSGP in the prediction of the strength of high-performance concrete and the effect of drugs,
GSGP can solve regression problems well [8–11]. Thus, to further investigate the slope stability analysis, we introduce
GSGP. However, such analyses involve not only the safety factor regression problem but also the classification of
slope stability status; thus, the model based on GSGP must also solve the classification problem. Evidently, this
research is significant in addressing the slope stability analysis problem and in developing GSGP theory.
2 Geometric Semantic Genetic Programming Algorithm
2.1 Genetic Programming Regression Algorithm
Genetic programming algorithms are based on a structuring process using an evolutionary function. The
algorithm uses genetic manipulation, including reproduction, crossover, and mutation, to derive the solution over
successive iterations. The optimal solution is retained as the result of the problem. The solution of genetic programming
can be represented by a tree-structured function expression built from the function and terminator
sets. However, the conventional genetic programming algorithm does not consider the actual meaning of the function.
The semantic genetic programming algorithm was developed from the conventional genetic programming algorithm.
For crossover and mutation operations, the GSGP algorithm uses the geometric semantic approach instead of the binary
tree manipulation of the conventional genetic algorithm.
The specific implementation steps of the GSGP are as follows (Fig. 1):
(1) In the initialization process, individuals consist of the function set F and terminator set T to generate the initial population. The function and terminator sets are expressed as follows:
F = {f1, f2, …, fn},        (1)
where fi denotes the mathematical operation symbols, including +, −, ×, ÷;
T = {t1, t2, …, tn},        (2)
where ti is the variable comprising the terminator set.
(2) The fitness function is used to evaluate the quality standards for each individual in the population and to calculate the fitness of each individual. This function is the driving process of the evolution. The measures of fitness usually include original fitness, fitness, and standard normalized fitness. When using the error index, the original fitness can be defined as follows:
e(i, t) = ∑_{j=1}^{n} |s(i, j) − c(j)|,        (3)
where s(i, j) is the computational result of individual i in instance j, n is the number of instances, and c(j) is the actual result of instance j.
(3) The genetic operation consists of parent individual copy (reproduction), crossover, and mutation operations. Using parent individuals produced by the geometric semantic method, the crossover operation generates individual T_C and the mutation operation generates individual T_M, which are expressed as follows:
T_C = (T_1 × T_R) + (1 − T_R) × T_2,        (4)
where T_1 and T_2 are two parent individuals, and T_R is a real random number;
T_M = T + ms × (T_{R1} − T_{R2}),        (5)
where T is the parent individual, T_{R1} and T_{R2} are two real random numbers, and ms is the mutation step.
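A minimal Python sketch of these operators, under the assumption that individuals are represented as callables on the input vector (our illustration; the function names are not from the paper):

import random

def gs_crossover(t1, t2):
    # Eq. (4): T_C = (T_1 * T_R) + (1 - T_R) * T_2, with T_R a random real.
    r = random.random()
    return lambda x: r * t1(x) + (1.0 - r) * t2(x)

def gs_mutation(t, ms):
    # Eq. (5): T_M = T + ms * (T_R1 - T_R2), with two random reals.
    r1, r2 = random.random(), random.random()
    return lambda x: t(x) + ms * (r1 - r2)

def raw_fitness(individual, cases):
    # Eq. (3): summed absolute error over the training instances.
    return sum(abs(individual(x) - y) for x, y in cases)

Note that in the wider GSGP literature T_R, T_R1, and T_R2 are often random functions rather than constants; the sketch follows the simpler formulation given above.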
[Flowchart — Fig. 1: Start (Gen := 0); create an initial random population; evaluate the fitness of each individual; if the termination criterion is satisfied, designate the result and end; otherwise perform reproduction, geometric semantic crossover, and geometric semantic mutation according to the probabilities p1, p2, p3; set Gen := Gen + 1 and repeat.]
Fig. 1. Operational structure of the GSGP.
2.2 Genetic Programming Classification Algorithm
In sample classification by using genetic programming, the operations for achieving prediction results include copying, exchange (crossover), and mutation. Unlike in regression problems, all category boundaries have to be determined in classification problems, and the fitness function is defined by the classification correctness [13]. For a multi-category problem, we need to determine multiple boundaries. By contrast, the bi-class classification problem requires the determination of only one cut-off point. When the objective function value is beyond the cut-off point, the result is set to 1; otherwise, the result is set to -1.
In the classification problem under genetic programming, the fitness can be defined as follows [14]:

    fitness = 1 - R_num / S_num    (6)

where R_num is the number of times that the individual classification is correct and S_num is the number of classification samples. This fitness, which is the classification error, reflects the sample classification accuracy.
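A minimal Python sketch of this decision rule and of the fitness of Eq. (6) (the cut-off point is taken as zero, as described above; all names are illustrative):

def classify(individual, x, cutoff=0.0):
    # Output beyond the cut-off point -> 1, otherwise -1.
    return 1 if individual(x) > cutoff else -1

def classification_fitness(individual, samples):
    # samples: list of (inputs, label) pairs with label in {1, -1};
    # returns 1 - Rnum/Snum, i.e. the classification error to be minimised.
    r_num = sum(1 for x, label in samples if classify(individual, x) == label)
    return 1.0 - r_num / len(samples)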
3 Slope Stability Analysis Model for Genetic Programming
According to the references and the conclusions therein, the effect factors for the slope stability analysis problem include unit weight (γ), cohesion (c), angle of internal friction (Φ), slope angle (β), slope height (H), and pore water pressure coefficient (ru) as input variables [15]. Either the status of slope (S) or factor of safety (FS) is included as the output variable. The network structure is composed of six input units and one output unit. Under GSGP, given that the slope stability state can only be either stable or unstable, we can consider the problem a binary classification problem. We take the result as 1 when the slope is stable; otherwise, we take the result as -1. Fitness is the classification error of the slope stability state. By contrast, under GSGP, the prediction of the slope safety factor can be considered a typical nonlinear regression problem. Therefore, in the solution process, the prediction error of the slope safety factor can be directly defined as the fitness.
A series of evolution parameters for the GSGP has to be determined in the solution process. In the established model, the function set is {+, -, ×, ÷} and the terminator set has six variables {x1, x2, ..., x6} corresponding to γ, c, Φ, β, H, ru. Furthermore, we set the size of the population, the number of iterations, and the variation coefficient according to the computational conditions of the genetic programming algorithm. Given the training samples, the GSGP algorithm can then predict the slope status and safety factors of the test samples.
4 Engineering Application and Effect Analysis
4.1 Engineering Application
By using a slope experimental dataset for testing (Table 1) [6, 16], the established model is applied to predict the slope status and safety factors.
Table 1. Dataset used in the study

No.  γ (kN/m³)  c (kPa)  Φ (°)   β (°)   H (m)  ru    S (actual)  FS (actual)  S (computed)  FS (computed)
 1   18.80      14.40    25.02   19.98   30.6   0      1          1.876        -1            1.473
 2   18.77      30.01     9.99   25.02   50     0.1    1          1.400         1            1.313
 3   19.97      19.96    36      45      50     0.5   -1          0.829        -1            0.963
 4   22.38      10.05    35.01   45      10     0.4   -1          0.901        -1            0.890
 5   18.77      30.01    19.98   30      50     0.1    1          1.460         1            1.359
 6   28.40      39.16    37.98   34.98   100    0      1          1.989         1            2.000
 7   19.97      10.05    28.98   34.03   6      0.3    1          1.340         1            1.257
 8   13.97      12.00    26.01   30      88     0     -1          1.021        -1            0.848
 9   18.77      25.06    19.98   30      50     0.2   -1          1.210        -1            1.213
10   18.83      10.35    21.29   34.03   37     0.3   -1          1.289        -1            1.227
11   28.40      29.41    35.01   34.98   100    0      1          1.781         1            1.673
12   18.77      25.06     9.99   25.02   50     0.2   -1          1.180        -1            1.173
13   16.47      11.55     0      30      3.6    0     -1          1.000        -1            0.982
14   20.56      16.21    26.51   30      40     0     -1          1.250        -1            1.199
15   18.66      26.41    14.99   34.98   8.2    0     -1          1.111        -1            1.154
16   13.97      12.00    26.01   30      88     0.5   -1          0.626        -1            0.848
17   25.96      150.1    45      49.98   200    0      1          1.199         1            1.271
18   18.46      25.06     0      30      6      0     -1          1.090        -1            1.059
19   19.97      40.06    30.02   30      15     0.3    1          1.841         1            1.956
20   20.39      24.91    13.01   22      10.6   0.4    1          1.400         1            1.439
21   19.60      12.00    19.98   22      12.2   0.4   -1          1.349        -1            1.341
22   20.96      19.96    40.01   40.02   12     0      1          1.841         1            1.786
23   17.98      24.01    30.15   45      20     0.1   -1          1.120        -1            1.205
24   20.96      45.02    25.02   49.03   12     0.3    1          1.529         1            1.502
25   22.38      99.93    45      45      15     0.3    1          1.799         1            1.838
26   18.77      19.96    19.98   30      50     0.3   -1          1.000        -1            1.072
27   21.78       8.55    32      27.98   12.8   0.5   -1          1.030        -1            1.151
28   21.47       6.90    30.02   31.01   76.8   0.4   -1          1.009        -1            1.007
29   21.98      19.96    22.01   19.98   180    0.1   -1          0.991        -1            1.006
30   18.80      57.47    19.98   19.98   30.6   0      1          2.044         1            1.930
31   21.36      10.05    30.33   30      20     0      1          1.700         1            1.572
32   18.80      14.40    25.02   19.98   30.6   0.5   -1          1.111        -1            1.473
33   15.99      70.07    19.98   40.02   115    0     -1          1.111        -1            1.130
34   21.98      19.96    36      45      50     0     -1          1.021        -1            1.018
35   19.08      10.05     9.99   25.02   50     0.4   -1          0.649        -1            0.699
36   19.08      10.05    19.98   30      50     0.4   -1          0.649        -1            0.754
37   17.98      45.02    25.02   25.02   14     0.3    1          2.091         1            2.009
38   24.96      120.0    45      53      120    0      1          1.301         1            1.273
39   20.39      33.46    10.98   16.01   45.8   0.2   -1          1.280        -1            1.289
40   17.98       4.95    30.02   19.98   8      0.3    1          2.049         1            1.931
41   18.97      30.01    35.01   34.98   11     0.2    1          2.000         1            1.726
42   21.98      19.96    22.01   19.98   180    0     -1          1.120        -1            1.006
43   20.96      30.01    35.01   40.02   12     0.4    1          1.490         1            1.492
44   20.96      34.96    27.99   40.02   12     0.5    1          1.430         1            1.487
45   18.46      12.00     0      30      6      0     -1          0.781        -1            0.996
46   19.97      40.06    40.01   40.02   10     0.2    1          2.310         1            1.935
47   19.97      19.96    36      45      50     0.3   -1          0.961        -1            0.963
48   18.77      19.96     9.99   25.02   50     0.3   -1          0.970        -1            1.011
49   18.83      24.76    21.29   29.2    37     0.5   -1          1.070        -1            1.200
50   19.03      11.70    27.99   34.98   21     0.1   -1          1.090        -1            1.199
51   22.38      10.05    35.01   30      10     0      1          2.000        -1            1.564
52   18.80      15.31    30.02   25.02   10.6   0.4    1          1.631         1            1.747
In model testing, from the dataset of 52 samples shown in Table 1, the first 40 samples are taken as the training set and the remaining 12 samples as the test set. The size of the initial population is set to 500, the maximum number of iterations is set to 50, and the genetic mutation step is set to 0.1. By using the GSGP algorithm, we can obtain the slope status and safety factors of the training and testing datasets. The results correspond to the last two columns in Table 1. As shown in the table, the slope status and safety factors can be determined when the six parameters (γ, c, Φ, β, H, ru) are given.
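The experimental protocol above can be summarised by the following illustrative Python fragment (the variable and function names are hypothetical, not taken from an actual GSGP implementation):

POPULATION_SIZE = 500   # size of the initial population
MAX_GENERATIONS = 50    # maximum number of iterations
MUTATION_STEP   = 0.1   # genetic mutation step ms
N_TRAIN         = 40    # samples 1-40 for training, 41-52 for testing

def split_dataset(dataset):
    # dataset: list of 52 rows (gamma, c, phi, beta, H, ru, S, FS) as in Table 1
    return dataset[:N_TRAIN], dataset[N_TRAIN:]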
4.2 Effect Analysis
To evaluate the performance of the GSGP algorithm, the results for predicting the slope status and safety factors
need to be analyzed. Slope status prediction is a classification problem. However, safety factor prediction is a
regression problem. The resulting classification accuracy and correlation coefficients are used to evaluate the
performance of the algorithm.
The classification prediction accuracy of the GSGP is defined as follows:

    Accuracy (%) = (Number of data predicted accurately by GSGP / Total data) × 100    (7)
By using the classification prediction results and actual values in Table 1, the accuracies for the training and
testing datasets can be obtained by using Eq. (7). The classification accuracies for the training and testing datasets
were 97.5% and 91.7%, respectively. On the basis of the existing references, the accuracy of the GSGP algorithm is
superior to that of other algorithms such as the ANN and SVM.
With regard to the safety factor, the index correlation coefficient between the true and computational values is
defined as follows [10]:
    R = [n Σ y·y′ − (Σ y)(Σ y′)] / √{[n Σ y² − (Σ y)²][n Σ y′² − (Σ y′)²]}    (8)

where y is the true value, y′ is the computational value, and n is the sample size.
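Both evaluation indices are straightforward to compute; a short Python sketch (ours, for illustration only):

from math import sqrt

def accuracy(predicted, actual):
    # Eq. (7): percentage of correctly predicted labels.
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return 100.0 * correct / len(actual)

def correlation(y, yp):
    # Eq. (8): correlation coefficient between true values y and predictions yp.
    n = len(y)
    sy, syp = sum(y), sum(yp)
    num = n * sum(a * b for a, b in zip(y, yp)) - sy * syp
    den = sqrt((n * sum(a * a for a in y) - sy ** 2) *
               (n * sum(b * b for b in yp) - syp ** 2))
    return num / den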
In the same way, the correlation coefficient can be obtained from Eq. (8) by using the predicted safety factors and the actual values in Table 1. Given the FS of the training and testing datasets, the calculation results are shown in Figs. 2-3.
Fig. 2. Performance for the training dataset using GSGP
Fig. 3. Performance for the testing dataset using GSGP
In Figures 2–3, the results show that the correlation coefficient of the predicted and actual values of FS for the
training dataset was 95.8%. The correlation coefficient of the predicted and actual values of FS for the testing dataset
was 93.4%. The precision for the training dataset is slightly higher than that for the testing dataset. Nonetheless, high
correlation exists for both datasets.
To further study the accuracy of GSGP, we compared its RMSE with those of two other algorithms: the support vector machine (SVM) and standard genetic programming (STGP). Based on the dataset of Table 1, the three algorithms were each run 50 times for the prediction of the slope safety factor. Fig. 4 shows the statistical results of the RMSE for the three algorithms.
Fig. 4. Box-whisker plots of the three algorithms' errors on the test dataset
As shown in Fig. 4, the error range and interquartile range (IQR) of the GSGP algorithm are the narrowest of the three methods, whereas the error range of the STGP algorithm is the widest. In the Wilcoxon rank-sum analysis, the p-value obtained for the GSGP algorithm is also the lowest of the three algorithms. Thus, the solutions produced by the GSGP algorithm are significantly better than those of the other two algorithms.
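For reference, such a comparison can be carried out with the Wilcoxon rank-sum test of SciPy; the sketch below assumes three lists of 50 per-run test RMSE values (hypothetical variable names):

from scipy.stats import ranksums

def compare_errors(rmse_gsgp, rmse_other, alpha=0.05):
    # Two-sided Wilcoxon rank-sum test on the per-run RMSE samples;
    # a p-value below alpha indicates a significant difference.
    stat, p = ranksums(rmse_gsgp, rmse_other)
    return p, p < alpha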
5 Conclusion
Given the variety of complex factors in slope stability analysis, the use of purely mechanical methods is excessively difficult, and the predictions of slope status and safety factors obtained with such methods are often poor.
In this paper, GSGP was introduced into slope stability analysis. The established slope stability analysis method based on the GSGP algorithm successfully predicted slope status and safety factors. Furthermore, upon application of the model in slope stability analysis, the following conclusions were made:
(1) By adapting the GSGP algorithm with different settings, the classification or regression analysis of datasets results in different functions;
(2) When the unit weight (γ), cohesion (c), angle of internal friction (Φ), slope angle (β), slope height (H), and pore water pressure coefficient (ru) are considered input variables, GSGP can predict the slope status and safety factors;
(3) Within the allowable GSGP prediction error, the method can be applied in slope stability analysis. The results can be used as a reference in the design process for slopes.
Acknowledgement
This research was funded by the Open Research Fund of State Key Laboratory of Simulation and Regulation of
Water Cycle in River Basin (Grant No. IWHRSKL-201518) and A Project Funded by the Priority Academic Program
Development of Jiangsu Higher Education Institutions (Grant No.3014-SYS1401).
References
[1] Hungr O. Discussion: An extension of Bishop's simplified method of slope stability analysis to three dimensions[J]. Géotechnique. 1987.
[2] Jun-Feng Z, Hua D. Generalized 3D limit-equilibrium method for slope stability analysis and its application[J]. Chinese Journal of Rock Mechanics and Engineering. 2005, 24(3): 365-370.
[3] Sakellariou M G, Ferentinou M D. A study of slope stability prediction using neural networks[J]. Geotechnical and Geological Engineering. 2005, 23(4): 419-445.
[4] Jin-Li Q, Bo L, Yan-Yan L I, et al. The prediction of the safety factor of the slope stability based on genetic programming[J]. Journal of China Coal Society. 2010, 35(9): 1466-1469.
[5] Zhang Z, Liu Z, Zheng L, et al. Development of an adaptive relevance vector machine approach for slope stability inference[J]. Neural Computing & Applications. 2014, 25(7-8): 2025-2035.
[6] Samui P, Kothari D P. Utilization of a least square support vector machine (LSSVM) for slope stability analysis[J]. Scientia Iranica. 2011, 18(1): 53-58.
[7] Zhan-You L, Xiao-Jun Y, Xiao-Nan G. Support vector machine model in slope stability evaluation[J]. Chinese Journal of Rock Mechanics and Engineering. 2005, 24(1): 144-148.
[8] Moraglio A, Krawiec K, Johnson C. Geometric semantic genetic programming[C]. Lecture Notes in Computer Science. 2012, 7491: 21-31.
[9] Vanneschi L, Castelli M, Manzoni L, et al. A new implementation of geometric semantic GP and its application to problems in pharmacokinetics[M]. Genetic Programming, 2013.
[10] Erdal H I. Two-level and hybrid ensembles of decision trees for high performance concrete compressive strength prediction[J]. Engineering Applications of Artificial Intelligence. 2013, 26(7): 1689-1697.
[11] Castelli M, Vanneschi L, Silva S. Prediction of high performance concrete strength using genetic programming with geometric semantic genetic operators[J]. Expert Systems with Applications. 2013, 40(17): 6856-6862.
[12] Koza J R. Genetic Programming: On the Programming of Computers by Means of Natural Selection[M]. Cambridge, MA: The MIT Press. 1992.
[13] Kishore J K, Patnaik L M, Mani V, et al. Application of genetic programming for multicategory pattern classification[J]. IEEE Transactions on Evolutionary Computation. 2000, 4(3): 242-258.
[14] Loveard T, Ciesielski V. Representing classification problems in genetic programming[C]. 2001.
[15] Di S, Gao Q. Intelligent identification of rock mass parameters based on evolutionary algorithm[J]. Journal of the China Coal Society. 2011, 36(1): 34-38.
[16] Zhao H, Yin S, Ru Z. Relevance vector machine applied to slope stability analysis[J]. International Journal for Numerical and Analytical Methods in Geomechanics. 2012, 36(5): 643-652.
Comparative Study of Eight Formal Specifications
of the Message Authenticator Algorithm
Hubert Garavel
Lina Marsso
INRIA Grenoble, France
Univ. Grenoble Alpes, LIG, F-38000 Grenoble, France
CNRS, LIG, F-38000 Grenoble, France
Hubert.Garavel@inria.fr
Lina.Marsso@inria.fr
The Message Authenticator Algorithm (MAA) is one of the first cryptographic functions for computing a Message Authentication Code. Between 1987 and 2001, the MAA was adopted in international
standards (ISO 8730 and ISO 8731-2) to ensure the authenticity and integrity of banking transactions. In 1990 and 1991, three formal, yet non-executable, specifications of the MAA (in VDM, Z,
and LOTOS) were developed at NPL. Since then, five formal executable specifications of the MAA
(in LOTOS, LNT, and term rewrite systems) have been designed at INRIA Grenoble. This article provides an overview of the MAA and compares its formal specifications with respect to common-sense
criteria, such as conciseness, readability, and efficiency of code generation.
1 Introduction
To handle real problems, formal methods should be capable of describing the different facets of a system: data structures, sequential algorithms, concurrency, real time, probabilistic and stochastic aspects,
hybrid systems, etc. In the present article, we address the two former points. In most case studies, the
data structures and their algorithms are relatively simple, the most complex ones being trees, which are
explored using breadth-first or depth-first traversals, etc. Contrary to such commonplace examples, cryptographic functions exhibit more diverse behaviour, as they rather seek to perform irregular computations
than linear ones.
To explore this dimension, we consider the Message Authenticator Algorithm (MAA, for short), a
pioneering cryptographic function designed in the mid-80s at the National Physical Laboratory (NPL,
United Kingdom). The MAA was adopted in two international standards (ISO 8730 and ISO 8731-2)
and served, between 1987 and 2001, to secure the authenticity and integrity of banking transactions. The
MAA also played a role in the history of formal methods, as the NPL developed, in the early 90s, three
formal specifications of the MAA in VDM, Z, and LOTOS abstract data types.
The present article revives these early efforts by examining, twenty-five years later, how the new
generation of formal methods can cope with the MAA case study. The article is organized as follows.
Section 2 presents the MAA from both an historical and technical perspective. Section 3 introduces
the eight formal specifications of the MAA we are aware of. Section 4 discusses some key modelling
issues that arise when specifying the MAA. Section 5 precises how the formal specifications have been
validated and which issues have been uncovered. Section 6 gives concluding remarks. Annexes A and
B report errors found in the MAA test vectors prescribed by ISO standards 8730 and 8731-2. Finally,
Annexes C and D provide two formal specifications of the MAA in LOTOS and LNT, which are novel
contributions.
2 The Message Authenticator Algorithm (MAA)
In data security, a Message Authentication Code (MAC) is a short sequence of bits that is computed
from a given message; the MAC ensures both the authenticity and integrity of the message, i.e., that
the message sender is the stated one and that the message contents have not been altered. A MAC is
more than a mere checksum, as it must be secure enough to defeat attacks; its design usually involves
cryptographic keys shared between the message sender and receiver. One of the first MAC algorithms to
gain widespread acceptance was the MAA, which we now present in more detail.
2.1 History of the MAA
The MAA was designed in 1983 by Donald Watts Davies and David Clayden at NPL, in response to a request of the UK Bankers Automated Clearing Services [3] [2]. Its authors were formerly involved in the detailed design and development of Pilot ACE (Automatic Computing Engine), an early computer based on original designs of Alan Turing. Donald Watts Davies (1924–2000) is a founding father of computer science, also well known for his pioneering work on computer networks and packet switching in the mid-60s¹. Shortly after its design, the MAA became standardized at the international level in two
complementary ISO banking standards:
• The ISO international standard 8730 (published in 1986 [14] and revised in 1990 [16]) specifies
methods and procedures for protecting messages exchanged between financial institutions. Such
a protection is based on secret keys symmetrically shared between these institutions and on the
computation of a MAC for each message exchanged.
The 1986 version of this standard [14] was independent from any particular algorithm for MAC
computation. Such independence was slightly undermined by the 1990 revision of this standard
[16], which added two annexes D and E providing test vectors (i.e., MAC values for a few sample
messages and given keys) computed using two specific algorithms (DEA and MAA) presented
hereafter. A technical corrigendum was later issued in 1999 [18] to address the Year-2000 problem,
without any impact of the MAC computation itself.
• The ISO international standard 8731 has two distinct parts, each devoted to an approved algorithm
for MAC computation that can be used in the security framework specified by ISO 8730. Both algorithms are mutually exclusive, in the sense that using only one of them is deemed to be sufficient
for authenticating messages:
– Part 1 (i.e., ISO 8731-1) describes the DEA (Data Encryption Algorithm), which is a CBC-MAC (Cipher Block Chaining Message Authentication Code) based on the DES standard cryptographic algorithm. The DEA is not addressed in the present article.
– Part 2 (i.e., ISO 8731-2, published in 1987 [15] and slightly revised in 1992 [17]) describes
the MAA itself. An equivalent, freely available specification of the MAA can also be found
in a 1988 NPL technical report written by the designers of the MAA [4].
Later, cryptanalysis of MAA revealed several weaknesses, including feasible brute-force attacks,
existence of collision clusters, and key-recovery techniques [29] [30] [33] [27] [32] [31]. After such
discoveries, MAA ceased to be considered as secure enough and was withdrawn from ISO standards in
2002 [28].
1 Biographic information about D. W. Davies can be found from http://en.wikipedia.org/wiki/Donald_Davies and
http://thelinuxmaniac.users.sourceforge.net/docs/be/chc61.
H. Garavel & L. Marsso
43
2.2 Overview of the MAA
Nowadays, Message Authentication Codes are computed using different families of algorithms based
on either cryptographic hash functions (HMAC), universal hash functions (UMAC), or block ciphers
(CMAC, OMAC, PMAC, etc.). Contrary to these modern approaches, the MAA was designed as a
standalone algorithm that does not rely on any preexisting hash function or cipher.
In this section, we briefly explain the principles of the MAA. More detailed explanations can be
found in [2], [4] and [23, Algorithm 9.68].
The MAA was intended to be implemented in software and to run on 32-bit computers. Hence, its
design intensively relies on 32-bit words (called blocks) and 32-bit machine operations.
The MAA takes as inputs a key and a message. The key has 64 bits and is split into two blocks J and
K. The message is seen as a sequence of blocks. If the number of bytes of the message is not a multiple
of four, extra null bytes are added at the end of the message to complete the last block. The size of the
message should be less than 1,000,000 blocks; otherwise, the MAA result is said to be undefined; we
believe that this restriction, which is not inherent to the algorithm itself, was added in the ISO 8731-2
standard to provide MAA implementations with an upper bound (four megabytes) on the size of memory
buffers used to store messages.
The MAA produces as output a block, which is the MAC value computed from the key and the
message. The fact that this result has only 32 bits proved to be a major weakness enabling cryptographic
attacks; MAC values computed by modern algorithms now have a much larger number of bits. Apart
from the aforementioned restriction on the size of messages, the MAA behaves as a totally-defined
function; its result is deterministic in the sense that, given a key and a message, there is only a single
MAC result, which neither depends on implementation choices nor on hidden inputs, such as nonces or
randomly-generated numbers.
The MAA calculations rely upon conventional 32-bit logical and arithmetic operations, among which: AND (conjunction), OR (disjunction), XOR (exclusive disjunction), CYC (circular rotation by one bit to the left), ADD (addition), CAR (carry bit generated by 32-bit addition), MUL (multiplication, sometimes decomposed into HIGH_MUL and LOW_MUL, which denote the most- and least-significant blocks in the 64-bit product of a 32-bit multiplication). On this basis, more involved operations are defined, among which MUL1 (result of a 32-bit multiplication modulo 2³² − 1), MUL2 (result of a 32-bit multiplication modulo 2³² − 2), MUL2A (faster version of MUL2), FIX1 and FIX2 (two unary functions² respectively defined as x → AND(OR(x, A), C) and x → AND(OR(x, B), D), where A, B, C, and D are four hexadecimal block constants A = 02040801, B = 00804021, C = BFEF7FDF, and D = 7DFEFBFF); a sketch of some of these operations is given after the list below. The MAA operates in three successive phases:
• The prelude takes the two blocks J and K of the key and converts them into six blocks X0 , Y0 , V0 ,
W , S, and T . This phase is executed once. After the prelude, J and K are no longer used.
• The main loop successively iterates on each block of the message. This phase maintains three
variables X , Y , and V (initialized to X0 , Y0 , and V0 , respectively), which are modified at each
iteration. The main loop also uses the value of W , but neither S nor T .
• The coda adds the blocks S and T at the end of the message and performs two more iterations on
these blocks. After the last iteration, the MAA result, noted Z, is XOR(X ,Y ).
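For illustration, some of these block operations can be transcribed directly into Python, with unbounded integers standing in for 32-bit blocks (this is our sketch of the definitions above, not the standardized C code):

MASK32 = 0xFFFFFFFF
A, B, C, D = 0x02040801, 0x00804021, 0xBFEF7FDF, 0x7DFEFBFF

def CYC(x):            # circular rotation by one bit to the left
    return ((x << 1) | (x >> 31)) & MASK32

def FIX1(x):           # x -> AND(OR(x, A), C)
    return (x | A) & C

def FIX2(x):           # x -> AND(OR(x, B), D)
    return (x | B) & D

def MUL1(x, y):        # 32-bit multiplication modulo 2**32 - 1
    p = x * y
    u, l = p >> 32, p & MASK32    # HIGH_MUL and LOW_MUL
    s = (u + l) & MASK32          # ADD
    c = (u + l) >> 32             # CAR (carry bit, 0 or 1)
    return (s + c) & MASK32       # final ADD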
In 1987, the ISO 8731-2 standard [15, Sect. 5] introduced an additional feature (called mode of
operation), which concerns messages longer than 256 blocks (i.e., 1024 bytes) and which, seemingly,
2 The names FIX1 and FIX2 are borrowed from [24, pages 36 and 77].
was not present in the early MAA versions designed at NPL. Each message longer than 256 blocks must
be split into segments of 256 blocks each, with the last segment possibly containing less than 256 blocks.
The above MAA algorithm (prelude, main loop, and coda) is applied to the first segment, resulting in a
value noted Z1 . This block Z1 is then inserted before the first block of the second segment, leading to a
257-block message to which the MAA algorithm is applied, resulting in a value noted Z2 . This is done
repeatedly for all the n segments, the MAA result Zi computed for the i-th segment being inserted before
the first block of the (i + 1)-th segment. Finally, the MAC for the entire message is the MAA result Zn
computed for the last segment.
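The mode of operation can be sketched as follows in Python, with maa(key, blocks) standing for the basic prelude/main-loop/coda computation described above (the function names are ours, and a non-empty message is assumed):

def maa_mode_of_operation(maa, key, blocks):
    # Split the message into segments of 256 blocks each.
    segments = [blocks[i:i + 256] for i in range(0, len(blocks), 256)]
    z = maa(key, segments[0])          # Z1, MAA result of the first segment
    for segment in segments[1:]:
        z = maa(key, [z] + segment)    # insert Zi before the next segment
    return z                           # Zn is the MAC of the entire message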
2.3 Informal Specifications of the MAA
We consider the 1988 NPL technical report [4] to be the reference document for the MAA definition in
natural language. Indeed, this technical report is freely available from the NPL library or can be downloaded from the web, whereas the (withdrawn) ISO standards 8730 and 8731-2 need to be purchased.
The algorithm described in [4] is identical to the MAA definition given in ISO 8731-2.
Moreover, [4] provides the source code of two different implementations of the MAA, in BASIC (227 lines³) and C (182 lines⁴). None of these implementations supports the aforementioned “mode of operation”; we therefore added 31 lines of C code implementing this missing functionality. Although the C code was written in 1987 for the Turbo C and Zorland compilers, it still compiles and executes properly today after a few simple corrections, provided that long integers are set to 32 bits⁵.
2.4 Test Vectors for the MAA
There are two official sources of test vectors for the MAA:
• [4, Sections 15 to 20] provides a series of test vectors contained in six tables, which can also
be found in [17, Annex A]. These test vectors specify, for a few given keys and messages, the
expected values of intermediate calculations (e.g., MUL1, MUL2, MUL2A, prelude, main loop, etc.)
and the expected MAA results for the entire algorithm. The “mode of operation” is not tested as
the messages considered contain either 2 or 20 blocks, i.e., less than 256 blocks.
• Another series of test vectors that take into account the “mode of operation” can be found in [16,
Annex E]. More precisely, Annex E.3.3 gives expected values for an execution of the prelude,
Annex E.3.4 gives results for an 84-block message, and Annex E.4 gives results for a 588-block
message.
In both series of test vectors, we found mistakes, which we document and for which we give corrections
in Annexes A and B of the present article.
3 Formal Specifications of the MAA
As far as we know, not less than eight formal specifications have been produced for the MAA. We
present each of them in turn, drawing a clear distinction between non-executable and executable specifications. To unambiguously designate these specifications, we adopt the following naming convention: LANG-XX refers to the formal specification written in language LANG during year XX.
3 In the present paper, when counting lines of code, we exclude blank lines, comments, as well as predefined libraries. Concerning the MAA implementation in BASIC, we also exclude all PRINT statements, mostly intended for debugging purposes.
4 We exclude all printf statements, as well as five non-essential functions (menu1, inmain, mainloop1, fracsec, and guesstime), only retaining case 5 in the main function, and unfolding multiple instructions present on the same lines.
5 For instance, using GCC with options -m32 -std=c90.
3.1 Non-Executable Formal Specifications
For cryptographic protocols, an informal specification is often not precise enough, and the MAA is no exception. For instance, G. I. Parkin and G. O’Neill devoted four pages in [25, Sect. 3] and [26,
Sect. 3] to discuss all possible interpretations of the MAA definition in natural language. The need for
unambiguous specifications was certainly felt by stakeholders, as three formal specifications of the MAA
were developed at NPL in the early 90s, as part of a comparative study in which common examples were
modelled using various formal methods. All these specifications were non-executable, in the sense that
MAA implementations had to be developed manually and could not be automatically derived from the
formal specifications — at least, using the software tools available at that time. Let us briefly review
these specifications:
• VDM-90 : In 1990, G. I. Parkin and G. O’Neill designed a formal specification of the MAA
in VDM [25] [26]. To our knowledge, their work was the first attempt at applying formal methods to the MAA. This attempt was clearly successful, as the VDM specification became a (nonauthoritative) annex in the 1992 revision of the ISO standard defining the MAA [17, Annex B].
This annex is concise (9 pages, 275 lines) and its style is close to functional programming. Due
to the lack of VDM tools, its correctness could only be checked by human proofreading. Three
implementations in C [25, Annex C], Miranda [25, Annex B], and Modula-2 [21] were written by
hand along the lines of this VDM specification.
• Z-91 : In 1991, M. K. F. Lai formally specified the MAA using the set-theoretic Z notation. Based
upon Knuth’s “literate programming” approach, this work resulted in a 57-page technical report
[20], in which formal fragments (totalling 608 lines of Z code) are inserted in the natural-language
description of the MAA. This formal specification was designed to be as abstract as possible, not
to constrain implementations unduly, and it was lightly verified using a type-checking tool.
• LOTOS-91: In 1991, Harold B. Munster produced another formal specification of the MAA in LOTOS, presented in a 94-page technical report [24]⁶. This specification (16 pages, 438 lines) uses
only the data part of LOTOS (namely, abstract data types inspired from the ACT ONE language
[7] [22]), importing the predefined LOTOS libraries for octets, strings, natural numbers in unary,
binary, and decimal notation; the behavioural part of LOTOS, which serves to describe concurrent
processes, is not used at all. This specification is mostly declarative, and not directly executable,
for at least two reasons:
– Many computations are specified using the predefined type Nat that defines natural numbers
in unary notation, i.e., numbers built using the two Peano constructor operations 0 :→ Nat
and succ : Nat → Nat. On this basis, the usual arithmetic operations (addition, multiplication, etc.) are defined equationally. In practice, such a simple encoding for Nat cannot
feasibly implement the large 32-bit numbers manipulated in MAA calculations.
– The full expressiveness of LOTOS equations is used in an unconstrained manner, making it
necessary at many places to invert non-trivial user-defined functions. For instance, given a
6 This report and its LOTOS specification are available on-line from ftp://ftp.inrialpes.fr/pub/vasy/publications/others/Munster-91-a.pdf and ftp://ftp.inrialpes.fr/pub/vasy/demos/demo_12/LOTOS/maa_original.lotos, respectively.
conditional equation of the form x = g(y) ⇒ f(x) = y, evaluating f(z) requires computing g⁻¹(z). Such situations arise, in a more involved way, with f = NAT and g = NatNum, f =
MUL1 and g = NatNum, f = PAT and g = BitString, f = BYT and g = BitString, f = MAC
and g = Flatten, etc.
Interestingly, such executability issues are not discussed in [24]. Instead, the report stresses the intrinsic difficulty of describing partial or incompletely-specified functions in LOTOS, the equational
semantics of which requires functions to be totally defined. Such difficulty is stated to be a major
limitation of LOTOS compared to VDM and Z, although the report claims that LOTOS is clearly
superior to these methods as far as the description of communication protocols is concerned.
3.2 Executable Formal Specifications
As a continuation of the work undertaken at NPL, five formal specifications of the MAA have been
developed at INRIA Grenoble. These specifications are executable, in the sense that all expressions
that contain neither free variables nor infinite recursion can be given to some interpretation engine and
evaluated to produce relevant results. But executable also means that these specifications can be compiled
automatically (e.g., using the translators of the CADP toolbox [10]) into some executable program that
will be run to generate the expected results. Let us review these five specifications:
• LOTOS-92 : In 1992, Hubert Garavel and Philippe Turlier, taking LOTOS-91 as a starting
point, gradually transformed it to obtain an executable specification from which the CÆSAR.ADT
compiler [8] [13] could generate C code automatically. Their goal was to remain as close as
possible to the original LOTOS specification of Harold B. Munster, so as to demonstrate that
limited changes were sufficient to turn a non-executable LOTOS specification into an executable
one. The aforementioned causes of non-executability in LOTOS-91 were addressed by fulfilling
the additional semantic constraints set on LOTOS by the CÆSAR.ADT compiler to make sure that
LOTOS specifications are executable:
– The algebraic equations, which are not oriented in standard LOTOS, were turned into term
rewrite rules, which are oriented from left to right and, thus, more amenable to efficient
translation.
– A distinction was made between constructor and non-constructor operations, and the discipline of “free” constructors required by CÆSAR.ADT [8] was enforced: namely, each rule
defining a non-constructor f must have the form either “ f (p1 , ..., pn ) → e” or “c1 , ..., cm ⇒
f (p1 , ..., pn ) → e”, where each pi is a term containing only constructors and free variables,
and where c1 , ..., cm , and e are terms whose variables must be also present in some pi .
– To avoid issues with the unary notation of natural numbers, the Nat sort was implemented
manually as a C type (32-bit unsigned integer). Similarly, a few operations on sort Nat
(integer constants, addition, multiplication, etc.) were also implemented by manually written
C functions — the ability to import externally defined C types and functions, and to combine
them with automatically generated C code being a distinctive feature of the CÆSAR.ADT
compiler. Additionally, all occurrences of the sort BitString used for the binary notation
of natural numbers, octets, and blocks were eliminated from the MAA specification.
This resulted in a 641-line LOTOS specification, together with two C files (63 lines in total) implementing the LOTOS sorts and operations defined externally. The CÆSAR.ADT compiler translated this LOTOS specification into C code that, combined with a small handwritten main program
(161 lines of C code), could compute the MAC value corresponding to a message and a key.
• LNT-16: In February 2016, Wendelin Serwe manually translated LOTOS-92 into LNT [1], which is the most recent specification language supported by the CADP toolbox and the state-of-the-art replacement for LOTOS [11]. This translation was done in a systematic way, the goal being
to emphasize common structure and similarities between the LOTOS and LNT specifications. The
resulting 543-line LNT specification thus has the style of algebraic specifications and functional
programs, relying massively on pattern matching and recursive functions. The handwritten C code
imported by the LOTOS specification was reused, almost as is, for the LNT specification.
• REC-17: Between September 2016 and February 2017, Hubert Garavel and Lina Marsso undertook the translation of LOTOS-92 into a term rewrite system⁷. This system was encoded in the
simple language REC proposed in [6, Sect. 3] and [5, Sect. 3.1], which was lightly enhanced to
distinguish between free constructors and non-constructors.
Contrary to higher-level languages such as LOTOS or LNT, REC is a purely theoretical language
that does not allow to import external fragments of code written in a programming language. Thus,
all types (starting by the most basic ones, such as Bit and Bool) and their associated operations
were exhaustively defined “from scratch” in the REC language. To address the aforementioned
problem with natural numbers, two different types were defined: a Nat used for “small” counters,
the values of which do not exceed a few thousands, and a Block type that represents the 32bit machine words used for MAA calculations. The Nat was defined in the Peano-style unary
notation, while the Block sort was defined in binary notation (as a tuple sort containing or four
octets, each composed of eight bits). To provide executable definitions for the modular arithmetic
operations on type Block, the REC specification was equipped with 8-bit, 16-bit, and 32-bit adders
and multipliers, somehow inspired from the theory of digital circuits. To check whether the MAA
calculations are correct or not, the REC specification was enriched with 203 test vectors [12,
Annexes B.18 to B.21] originating from diverse sources.
The resulting REC specification has 1575 lines and contains 13 sorts, 18 constructors, 644 nonconstructors, and 684 rewrite rules. It is minimal, in the sense that each sort, constructor, and
non-constructor is actually used (i.e., the specification does not contain “dead” code). As far as
we are aware, it is one of the largest handwritten term rewrite systems publicly available. Parts of
this specification (e.g., the binary adders and multipliers) are certainly reusable for other purposes.
However, it is fair to mention that term rewrite systems are a low-level theoretical model that
does not scale well to large problems, and that it took considerable effort to come up with a REC
specification that is readable and properly structured.
Using a collection of translators⁸ developed at INRIA Grenoble, the REC specification was automatically translated into various languages: AProVE (TRS), Clean, Haskell, LNT, LOTOS,
Maude, mCRL2, OCaml, Opal, Rascal, Scala, SML, Stratego/XT, and Tom. Using the interpreters, compilers, and checkers available for these languages, it was shown [12, Sect. 5] that
the REC specification terminates, that it is confluent, and that all the 203 tests pass successfully.
Also, the most involved components (namely, the binary adders and multipliers) were validated
separately using more than 30,000 test vectors.
The two remaining formal specifications of the MAA are novel contributions of the present paper:
7 Actually, it is a conditional term rewrite system with only six conditional rewrite rules that, if needed, can easily be turned into non-conditional rewrite rules as explained in [12].
8 http://gforge.inria.fr/scm/viewvc.php/rec/2015-CONVECS
• LOTOS-17 : Between January and February 2017, Hubert Garavel and Lina Marsso performed
a major revision of LOTOS-92 based upon the detailed knowledge of the MAA acquired during the development of REC-17 . Their goal was to produce an executable LOTOS specification as simple as possible, even if it departed from the original specification LOTOS-91 written
by Harold B. Munster. Many changes were brought: the two sorts AcceptableMessage and
SegmentedMessage were removed, and the Nat sort was replaced almost everywhere by the
Block sort; about seventy operations were removed, while a dozen new operations were added;
the Block constructor evolved by taking four octets rather than thirty-two bytes; the constructors
of sort Message were replaced by standard list constructors; the equations defining various operations (FIX1, FIX2, BYT, PAT, etc.) were shortened; each message is now processed in a single pass
without first duplicating it to build a list of segments; the Prelude operation is executed only once
per message, rather than once per segment; the detection of messages larger than 1,000,000 blocks
is now written directly in C. These changes led to a 266-line LOTOS specification (see Annex C)
with two companion C files (157 lines in total) implementing the basic operations on blocks⁹. Interestingly, all these files taken together are smaller than the original specification LOTOS-91,
demonstrating that executability and conciseness are not necessarily antagonistic notions.
• LNT-17 : Between December 2016 and February 2017, Hubert Garavel and Lina Marsso entirely rewrote LNT-16 in order to obtain a simpler specification. First, the same changes as
for LOTOS-17 were applied to the LNT specification. Also, the sorts Pair, TwoPairs, and
ThreePairs, which had been introduced by Harold B. Munster to describe functions returning
two, four, and six blocks, have been eliminated; this was done by having LNT functions that return
their computed results using “out” or “in out” parameters (i.e., call by result or call by valueresult) rather than tuples of values; the principal functions (e.g., MUL1, MUL2, MUL2A, Prelude,
Coda, MAC, etc.) have been simplified by taking advantage of the imperative style of LNT, i.e., mutable variables and assignments; many auxiliary functions have been gathered and replaced by a
few larger functions (e.g., PreludeJ, PreludeK, PreludeHJ, and PreludeHK) also written in the
imperative style. These changes resulted in a 268-line LNT specification with a 136-line companion C file, which have nearly the same size as LOTOS-17 , although the LNT version is more
readable and closer to the original MAA specification [4], also expressed in the imperative style.
Taken alone, the LNT code has approximately the same size as VDM-90 , the non-executable
specification that was included as a formal annex in the MAA standard [17].
As for REC-17 , the LNT specification was then enriched with a collection of “assert” statements
implementing: (i) the test vectors listed in Tables 1 to 6 of [17, Annex A] and [4]; (ii) the test
vectors of [16, Annex E.3.3]; (iii) supplementary test vectors intended to specifically check for
certain aspects (byte permutations and message segmentation) that were not enough covered by
the above tests; this was done by introducing a makeMessage function acting as a pseudo-random
message generator.
Consequently, the size of the LNT files grew up to 1334 lines in total (see Annex D)¹⁰. Finally,
the remaining test vectors of [16, Annexes E.3.4 and E.4], which were too lengthy to be included
in REC-17 , have been stored in text files and can be checked by running the C code generated
from the LNT specification. This makes of LNT-17 the most complete formal specification of
the MAA as far as validation is concerned.
9 The most recent version of these files is available from ftp://ftp.inrialpes.fr/pub/vasy/demos/demo_12/LOTOS.
10 The most recent version of these files is available from ftp://ftp.inrialpes.fr/pub/vasy/demos/demo_12.
4 Modelling issues
In this section, we investigate some salient issues faced when modelling the MAA using diverse formal
methods. We believe that such issues are not specific to the MAA, but are likely to arise whenever
non-trivial data structures and algorithms are to be described formally.
4.1 Local variables in function definitions
Local variables are essential to store computed results that need to be used several times, thus avoiding repeating identical calculations. LNT allows local variables to be freely defined and assigned in an imperative-programming style; the existence of a formal semantics is guaranteed by static semantic constraints [9] ensuring that each variable is duly assigned before being used. For instance, the MUL1 function¹¹ is expressed in LNT as follows:
function MUL1 (X, Y : Block) : Block is
var U, L, S, C : Block in
U := HIGH_MUL (X, Y);
L := LOW_MUL (X, Y);
S := ADD (U, L);
C := CAR (U, L);
assert (C == x00000000) or (C == x00000001);
return ADD (S, C)
end var
end function
In VDM, which enjoys a “let” operator, the definition of MUL1 is very similar to the LNT one [25,
page 11] [26, Sect. 2.2.5]. The situation is quite different for term rewrite systems and abstract data
types, which lack a “let” operator in their rewrite rules or equations. Interestingly, LOTOS-91 tries to
emulate such a “let” operator by (ab)using the premises of conditional equations [24, pages 37 and 78]:
opns MUL1 : Block, Block -> Block
forall X, Y, U, L, S, P: Block, C: Bit
NatNum (X) * NatNum (Y) = NatNum (U ++ L),
NatNum (U) + NatNum (L) = NatNum (S) + NatNum (C),
NatNum (C + S) = NatNum (P)
=> MUL1 (X, Y) = P;
These premises define and compute¹² the variables (U, L), (S, C), and P, respectively.
most languages and tools for term rewriting forbid such free variables in premises, requiring that only the
parameters of the function under definition (here, X and Y for the MUL1 function) can occur in premises.
Instead, LOTOS-17 and REC-17 adopt a more conventional style in which auxiliary operations
are introduced, the parameters of which are used to store computed results that need to be used more
than once:
opns MUL1    : Block, Block -> Block
     MUL1_UL : Block, Block -> Block
     MUL1_SC : Block, Block -> Block
forall X, Y, U, L, S, C : Block
MUL1 (X, Y)    = MUL1_UL (HIGH_MUL (X, Y), LOW_MUL (X, Y));
MUL1_UL (U, L) = MUL1_SC (ADD (U, L), CAR (U, L));
MUL1_SC (S, C) = ADD (S, C);
11 The same discussion is also valid for MUL2, MUL2A, and many other MAA functions.
12 These premises silently require the computation of inverse functions for NatNum, +, and ++ (bit string concatenation).
In comparison, the imperative-programming style of LNT is clearly more concise, more readable, and
closer to the original description of MUL1. Moreover, LNT permits successive assignments to the same
variable, which proved to be useful in, e.g., the MainLoop and MAC functions.
4.2 Functions returning multiple results
Another point in which the various MAA specifications differ is the handling of functions that compute
more than one result. There are several such functions in the MAA; let us consider the Prelude function,
which takes two block parameters J and K and returns six block parameters X, Y, V, W, S, and T.
The simplest description of this function is achieved in LNT-17 , which exploits the fact that LNT
functions, like in imperative programming languages, may return a result and/or have “out” parameters.
In LNT, the Prelude function can be defined this way:
function Prelude (in J, K : Block, out X, Y, V, W, S, T : Block) is
...
end function
and invoked as follows:
Prelude (J, K, ?X0, ?Y0, ?V0, ?W, ?S, ?T)
Although this approach is the simplest one, most formal methods do not support procedures or functions
with “out” parameters¹³. In such languages where functions return only a single result, there are two
different options for describing functions with multiple results such as Prelude.
The first option is to return a unique result of some compound type (record, tuple, array, etc.). For
instance, both VDM-90 and Z-91 describe Prelude as a function taking a pair of blocks and returning
a result of a new type (called Key-Constant [25, Sections 2.2.2 and 2.2.7] or DerivedSpace [20, pages 45–
46]) defined as a sextuple of blocks. LOTOS-91 and LOTOS-17 adopt a similar approach by defining
Prelude to return a result of a new sort ThreePairs, which is a triple of Pair values, where sort Pair
is itself defined as a pair of blocks. Other examples can be found in the binary adders and multipliers of
REC-17 ; for instance, the 8-bit adder returns a result of sort OctetSum that is a pair gathering a sum
(of sort Octet) and a carry (of sort Sum).
The drawbacks of this first option are numerous: (i) new types have to be introduced — potentially
one type per defined function in the worst case; (ii) each of these types introduces in turn a constructor
and, often, equality and projection functions as well; (iii) the specification gets obscured by tupling/detupling operations, with the aggravating circumstance that detupling can be performed in different ways
(pattern matching, destructuring “let”, or projection functions), which makes it difficult to follow the flow
of a particular variable embedded in a tuple of values; (iv) tupling complicates the efforts of compilers
and garbage collector to allocate memory efficiently.
The second option is to split a function returning N > 1 results into N separate functions. For instance, REC-17 has split Prelude into three operations: preludeXY, which computes the pair (X0, Y0),
preludeVW, which computes the pair (V0, W), and preludeST, which computes the pair (S, T). This
transformation applied to Prelude and to the main-loop functions enabled the sorts TwoPairs and
ThreePairs introduced in LOTOS-91 to be entirely removed from REC-17 .
13 Besides LNT, the only other language we know to offer “out” parameters is the synchronous dataflow language Lustre.
The drawbacks of this second option are two-fold: (i) splitting a function with multiple results might
be difficult if the calculations for these results are tightly intertwined; this was not the case with the six
Prelude results, each of which does not depend on the five other ones¹⁴; (ii) splitting may require to
duplicate identical calculations, and thus create inefficiencies that in turn may require the introduction of
auxiliary functions to be avoided.
5 Validation of MAA Specifications
The two most recent specifications of the MAA have been validated as follows:
• LOTOS-17 : The specification was validated by the CÆSAR.ADT compiler, which implements
all the syntactic and semantic checks stated in the definition of LOTOS [19]. The C code generated
from the LOTOS specification passed the test vectors specified in [16, Annexes E.3.4 and E.4].
• LNT-17 : The specification was validated by the LNT2LOTOS translator, which implements the
syntactic checks and (part of) the semantic checks stated in the definition of LNT [1] and generates
LOTOS code, which is then validated by the CÆSAR.ADT compiler, therefore performing the
remaining semantics checks of LNT. The C code generated by the CÆSAR.ADT compiler passed
the test vectors specified in [17, Annex A], in [16, Annex E.3.3], in [16, Annexes E.3.4 and E.4],
and the supplementary test vectors based on the MakeMessage function.
Due to these checks, various mistakes were discovered in prior (informal and formal) specifications
of the MAA: (i) Annex A corrects the test vectors given in [16, Annex E]; (ii) Annex B corrects the test
vectors given for function PAT in [17, Annex A] and [4]; (iii) an error was found in the main C program,
which computed an incorrect MAC value, as the list of blocks storing the message was built in reverse
order; (iv) another error was found in the external implementation in C of the function HIGH_MUL, which
computes the highest 32 bits of the 64-bit product of two blocks and is imported by the LOTOS and LNT
specifications — this illustrates the risks arising when formal and non-formal codes are mixed.
6 Conclusion
Twenty-five years later, we revisited the Message Authenticator Algorithm (MAA), which used to be a
pioneering case study for cryptography in the 80s and for formal methods in the early 90s. The three
MAA specifications VDM-90 , Z-91 , and LOTOS-91 developed at NPL in 1990–1991 were clearly
leading-edge, as can be seen from the adoption of the VDM specification as part of the ISO international
standard 8731-2 in 1992. However, they also faced limitations: these were mostly pen-and-pencil formal methods that lacked automated validation tools and that required implementations to be developed
manually, thus raising the difficult question of the compatibility between the formal specification and the
handwritten implementation code.
A different path has been followed at INRIA Grenoble since the early 90s, with an emphasis on executable formal methods, from which implementations can be generated automatically. Five specifications
have been successively developed: LOTOS-92 , LNT-16 , REC-17 , LOTOS-17 , and LNT-17 .
Retrospectively, heading towards executable formal methods proved to be a successful bet:
14 This was pointed out as a cryptographic weakness of the MAA in [33, Sect. 6].
• It turns out that executable specifications are not necessarily longer than non-executable ones:
LNT-17 and LOTOS-17 (345 and 423 lines, respectively, including the external C code fragments) are half way between the non-executable specifications VDM-90 (275 lines) and Z-91
(608 lines). Also, LNT-17 is only 60% larger than the direct implementation in C given in [4].
• One might argue that the LOTOS and LNT specifications are not entirely formal, as they import a
few C types and functions to implement blocks and arithmetic operations on blocks. We see this
as a strength, rather than a weakness, of our approach. Moreover, nothing prevents such external
types and functions to be instead defined in LOTOS or in LNT, as this was the case with the
REC-17 specification, which was then automatically translated to self-contained, fully-formal
LOTOS and LNT specifications that were successfully compiled and executed.
• The insight gained by comparing the eight formal specifications of the MAA confirms that LNT
is a formal method of choice for modelling complex algorithms and data structures. Compared
to other formalisms, LNT offers an imperative specification style (based on mutable variables and
assignments) that proved to be simpler to write, easier to read, more concise, and closer to the
MAA description in natural language [4], from which specifications based on term rewrite systems and abstract data types significantly depart due to picky technical restrictions in these latter
formalisms. LNT also favors a more disciplined specification style that, we believe, is of higher
quality because of the numerous static-analysis checks (e.g., unused variables, useless assignments, etc.) performed by the LNT2LOTOS translator; such strict controls are, to the best of our
knowledge, absent from most other specification languages.
• The application of executable formal methods to the MAA case study was fruitful in several respects: (i) it detected errors in the reference test vectors given in ISO standards 8730 and 8731-2;
(ii) the LOTOS specification of the MAA, due to its size and complexity, was helpful in improving
early versions of the CÆSAR.ADT compiler; (iii) similarly, the LNT specification of the MAA
revealed in the LNT2LOTOS translator a few defects and performance issues, which have been
dealt with in 2016 and 2017.
• Moreover, executable formal methods benefit from significant progress in their compiling techniques. In 1990, a handwritten implementation of the MAA in Miranda took 60 seconds to process
an 84-block message and 480 seconds to process a 588-block message [25, page 37]. Today, the
implementations automatically generated from the LNT and LOTOS specifications of the MAA
take 0.65 and 0.37 second, respectively, to process a one-million-block message¹⁵. As it appears,
“formal” and “executable” are no longer mutually exclusive qualities.
Acknowledgements
We are grateful to Philippe Turlier who, in 1992, helped turning the non-executable LOTOS specification
of Harold B. Munster into an executable one, to Wendelin Serwe, who, in 2016, produced the first
LNT specification of the MAA, and to Frédéric Lang, who, in 2016–2017, improved the LNT2LOTOS
translator to address the issues pointed out. Acknowledgements are also due to Keith Lockstone for his
advice and his web site¹⁶ giving useful information about the MAA, and to Sharon Wilson, librarian
of the National Physical Laboratory, who provided us with valuable early NPL reports that cannot be
fetched from the web.
15 The C code generated from LNT and LOTOS by the CADP translators was compiled using “gcc -O3” and ran on a Dell Latitude E6530 laptop.
16 http://www.cix.co.uk/~klockstone
References
[1] David Champelovier, Xavier Clerc, Hubert Garavel, Yves Guerte, Christine McKinty, Vincent Powazny, Frédéric Lang, Wendelin Serwe & Gideon Smeding (2017): Reference Manual of the LNT to LOTOS Translator (Version 6.7). Available at http://cadp.inria.fr/publications/Champelovier-Clerc-Garavel-et-al-10.html. INRIA/VASY and INRIA/CONVECS, 130 pages.
[2] Donald W. Davies (1985): A Message Authenticator Algorithm Suitable for a Mainframe Computer. In G. R.
Blakley & David Chaum, editors: Advances in Cryptology – Proceedings of the Workshop on the Theory
and Application of Cryptographic Techniques (CRYPTO’84), Santa Barbara, CA, USA, Lecture Notes in
Computer Science 196, Springer, pp. 393–400, doi:10.1007/3-540-39568-7_30.
[3] Donald W. Davies & David O. Clayden (1983): A Message Authenticator Algorithm Suitable for a Mainframe
Computer. NPL Report DITC 17/83, National Physical Laboratory, Teddington, Middlesex, UK.
[4] Donald W. Davies & David O. Clayden (1988): The Message Authenticator Algorithm (MAA) and its Implementation. NPL Report DITC 109/88, National Physical Laboratory, Teddington, Middlesex, UK. Available
at http://www.cix.co.uk/~klockstone/maa.pdf.
[5] Francisco Durán, Manuel Roldán, Jean-Christophe Bach, Emilie Balland, Mark van den Brand, James R.
Cordy, Steven Eker, Luc Engelen, Maartje de Jonge, Karl Trygve Kalleberg, Lennart C. L. Kats, Pierre-Etienne
Moreau & Eelco Visser (2010): The Third Rewrite Engines Competition. In Peter Csaba Ölveczky, editor:
Proceedings of the 8th International Workshop on Rewriting Logic and Its Applications (WRLA’10), Paphos,
Cyprus, Lecture Notes in Computer Science 6381, Springer, pp. 243–261, doi:10.1007/978-3-642-16310-4_16.
[6] Francisco Durán, Manuel Roldán, Emilie Balland, Mark van den Brand, Steven Eker, Karl Trygve Kalleberg,
Lennart C. L. Kats, Pierre-Etienne Moreau, Ruslan Schevchenko & Eelco Visser (2009): The Second Rewrite
Engines Competition. Electronic Notes in Theoretical Computer Science 238(3), pp. 281–291,
doi:10.1016/j.entcs.2009.05.025.
[7] Hartmut Ehrig & Bernd Mahr (1985): Fundamentals of Algebraic Specification 1 – Equations and
Initial Semantics. EATCS Monographs on Theoretical Computer Science 6, Springer,
doi:10.1007/978-3-642-69962-7.
[8] Hubert Garavel (1989): Compilation of LOTOS Abstract Data Types. In Son T. Vuong, editor: Proceedings of
the 2nd International Conference on Formal Description Techniques FORTE’89 (Vancouver B.C., Canada),
North-Holland, pp. 147–162. Available at http://cadp.inria.fr/publications/Garavel-89-c.html.
[9] Hubert Garavel (2015): Revisiting Sequential Composition in Process Calculi. Journal of Logical and Algebraic Methods in Programming 84(6), pp. 742–762, doi:10.1016/j.jlamp.2015.08.001.
[10] Hubert Garavel, Frédéric Lang, Radu Mateescu & Wendelin Serwe (2013): CADP 2011: A Toolbox for
the Construction and Analysis of Distributed Processes. Springer International Journal on Software Tools for
Technology Transfer (STTT) 15(2), pp. 89–107, doi:10.1007/s10009-012-0244-z. Available at
http://cadp.inria.fr/publications/Garavel-Lang-Mateescu-Serwe-13.html.
[11] Hubert Garavel, Frédéric Lang & Wendelin Serwe (2017): From LOTOS to LNT. In Joost-Pieter Katoen,
Rom Langerak & Arend Rensink, editors: ModelEd, TestEd, TrustEd – Essays Dedicated to Ed Brinksma on
the Occasion of His 60th Birthday, Lecture Notes in Computer Science 10500, Springer, pp. 3–26,
doi:10.1007/978-3-319-68270-9_1.
[12] Hubert Garavel & Lina Marsso (2017): A Large Term Rewrite System Modelling a Pioneering Cryptographic
Algorithm. In Holger Hermanns & Peter Höfner, editors: Proceedings of the 2nd Workshop on Models
for Formal Analysis of Real Systems (MARS’17), Uppsala, Sweden, Electronic Proceedings in Theoretical
Computer Science 244, pp. 129–183, doi:10.4204/EPTCS.244.6.
[13] Hubert Garavel & Philippe Turlier (1993): CÆSAR.ADT : un compilateur pour les types abstraits algébriques
du langage LOTOS. In Rachida Dssouli & Gregor v. Bochmann, editors: Actes du Colloque Francophone
pour l’Ingénierie des Protocoles (CFIP’93), Montréal, Canada, Hermès, Paris, pp. 325–339. Available at
http://cadp.inria.fr/publications/Garavel-Turlier-93.html.
[14] ISO (1986): Requirements for Message Authentication (Wholesale). International Standard 8730, International Organization for Standardization – Banking, Geneva.
[15] ISO (1987): Approved Algorithms for Message Authentication – Part 2: Message Authenticator Algorithm
(MAA). International Standard 8731-2, International Organization for Standardization – Banking, Geneva.
[16] ISO (1990): Requirements for Message Authentication (Wholesale). International Standard 8730, International Organization for Standardization – Banking, Geneva.
[17] ISO (1992): Approved Algorithms for Message Authentication – Part 2: Message Authenticator Algorithm.
International Standard 8731-2, International Organization for Standardization – Banking, Geneva.
[18] ISO (1999): Requirements for Message Authentication (Wholesale). Technical Corrigendum 1 8730, International Organization for Standardization – Banking, Geneva.
[19] ISO/IEC (1989): LOTOS – A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour. International Standard 8807, International Organization for Standardization – Information
Processing Systems – Open Systems Interconnection, Geneva.
[20] M. K. F. Lai (1991): A Formal Interpretation of the MAA Standard in Z. NPL Report DITC 184/91, National
Physical Laboratory, Teddington, Middlesex, UK.
[21] R. P. Lampard (1991): An Implementation of MAA from a VDM Specification. NPL Technical Memorandum
DITC 50/91, National Physical Laboratory, Teddington, Middlesex, UK.
[22] Jan de Meer, Rudolf Roth & Son Vuong (1992): Introduction to Algebraic Specifications Based on
the Language ACT ONE. Computer Networks and ISDN Systems 23(5), pp. 363–392, doi:10.1016/
0169-7552(92)90013-G.
[23] Alfred Menezes, Paul C. van Oorschot & Scott A. Vanstone (1996): Handbook of Applied Cryptography.
CRC Press, doi:10.1201/9781439821916. Available at http://cacr.uwaterloo.ca/hac.
[24] Harold B. Munster (1991): LOTOS Specification of the MAA Standard, with an Evaluation of LOTOS. NPL
Report DITC 191/91, National Physical Laboratory, Teddington, Middlesex, UK. Available at
ftp://ftp.inrialpes.fr/pub/vasy/publications/others/Munster-91-a.pdf.
[25] Graeme I. Parkin & G. O’Neill (1990): Specification of the MAA Standard in VDM. NPL Report DITC
160/90, National Physical Laboratory, Teddington, Middlesex, UK.
[26] Graeme I. Parkin & G. O’Neill (1991): Specification of the MAA Standard in VDM. In Søren Prehn &
W. J. Toetenel, editors: Formal Software Development – Proceedings (Volume 1) of the 4th International
Symposium of VDM Europe (VDM’91), Noordwijkerhout, The Netherlands, Lecture Notes in Computer
Science 551, Springer, pp. 526–544, doi:10.1007/3-540-54834-3_31.
[27] Bart Preneel (1997): Cryptanalysis of Message Authentication Codes. In Eiji Okamoto, George I. Davida &
Masahiro Mambo, editors: Proceedings of the 1st International Workshop on Information Security (ISW’97),
Tatsunokuchi, Japan, Lecture Notes in Computer Science 1396, Springer, pp. 55–65,
doi:10.1007/BFb0030408. Available at http://www.cosic.esat.kuleuven.be/publications/article-61.pdf.
[28] Bart Preneel (2011): MAA. In Henk C. A. van Tilborg & Sushil Jajodia, editors: Encyclopedia of Cryptography and Security (2nd Edition), Springer, pp. 741–742, doi:10.1007/978-1-4419-5906-5_591.
[29] Bart Preneel & Paul C. van Oorschot (1995): MDx-MAC and Building Fast MACs from Hash Functions. In
Don Coppersmith, editor: Advances in Cryptology – Proceedings of 15th Annual International Cryptology
Conference (CRYPTO’95), Santa Barbara, CA, USA, Lecture Notes in Computer Science 963, Springer,
pp. 1–14, doi:10.1007/3-540-44750-4_1. Available at
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.490.8595&rep=rep1&type=pdf.
[30] Bart Preneel & Paul C. van Oorschot (1996): On the Security of Two MAC Algorithms. In Ueli M. Maurer,
editor: Advances in Cryptology – Proceedings of the International Conference on the Theory and Application
of Cryptographic Techniques (EUROCRYPT’96), Saragossa, Spain, Lecture Notes in Computer Science
1070, Springer, pp. 19–32, doi:10.1007/3-540-68339-9_3.
[31] Bart Preneel & Paul C. van Oorschot (1999): On the Security of Iterated Message Authentication Codes.
IEEE Transactions on Information Theory 45(1), pp. 188–199, doi:10.1109/18.746787.
[32] Bart Preneel, Vincent Rijmen & Paul C. van Oorschot (1997): Security Analysis of the Message Authenticator
Algorithm (MAA). European Transactions on Telecommunications 8(5), pp. 455–470,
doi:10.1002/ett.4460080504.
[33] Vincent Rijmen, Bart Preneel & Erik De Win (1996): Key Recovery and Collision Clusters for MAA. In: Proceedings of the 1st International Conference on Security in Communication Networks (SCN’96). Available
at https://www.cosic.esat.kuleuven.be/publications/article-437.pdf.
A  Errata Concerning Annex E of the ISO-8730:1990 Standard
After reading and checking carefully the test vectors given in [16, Annex E], we discovered a number of
errors17 . Here is the list of errors found and their corrections:
• In Annex E.2, some characters of the text message differ from the corresponding ASCII code
given (in hexadecimal) below in Annex E.3.2. Precisely, the string "BE CAREFUL" should read
"BE\n\n\ \ \ Careful", where "\n" and "\ " respectively denote line-feed and white space.
The corresponding hexadecimal values are indeed 42 45 0A 0A 20 20 20 43 61 72 65 66 75 6C (see the C check after this list).
• Annex E.3.2 and Annex E.3.4 state that this text message has 86 blocks. Actually, it has 84 blocks
only. This is confirmed by the table of hexadecimal values in Annex E.3.2 (42 lines × 2 blocks per
line give 84 blocks) and by the iterations listed in Annex E.3.4, in which the number of message
blocks (i.e., variable N) ranges between 1 and 84.
• Annex E.4 states that the long message is obtained by repeating six times the message of 86 blocks,
leading to a message length of 516 blocks. Actually, it is obtained by repeating seven times the
message of 84 blocks, leading to a message length of 588 blocks. This can be seen from the
iterations listed in Annex E.4 where variable N ranges between 1 and 588, and by the fact that
588 = 7 × 84. Moreover, computing the MAA result on the 588-block long message with the same
key J = E6 A1 2F 07 and K = 9D 15 C4 37 as in Annex E.3.3 indeed gives the expected MAC
value C6 E3 D0 00.
B  Errata Concerning Annex A of the ISO-8731-2:1992 Standard
After checking carefully all the test vectors contained in the original NPL report defining the MAA [4]
and in the 1992 version of the MAA standard [17], we believe that there are mistakes18 in the test vectors
given for function PAT.
More precisely, the three last lines of Table 3 [4, page 15] — identically reproduced in Table A.3 of
[17, Sect. A.4] — are written as follows:
{X0,Y0}    0103 0703 1D3B 7760    PAT{X0,Y0}    EE
{V0,W}     0103 050B 1706 5DBB    PAT{V0,W}     BB
{S,T}      0103 0705 8039 7302    PAT{S,T}      E6
17 We used the French version of that standard, which we acquired from AFNOR, but have no reason to believe that
the same errors are absent from other translations of this standard.
18 Again, we used the French version of this standard, but we believe that this plays no role, as the same mistakes were
already present in the 1988 NPL report.
Actually, the inputs of function PAT should not be {X0,Y0}, {V0,W}, {S,T} but rather {H4,H5},
{H6,H7}, {H8,H9}, the values of H4, ..., H9 being those listed above in Table 3. Notice that the confusion
was probably caused by the following algebraic identities:
{X0,Y0} = BYT (H4, H5)
{V0,W} = BYT (H6, H7)
{S,T}
= BYT (H8, H9)
If one gives {X0,Y0}, {V0,W}, {S,T} as inputs to PAT, then the three results of PAT are equal to 00
and thus cannot be equal to EE, BB, E6, respectively.
But if one gives {H4,H5}, {H6,H7}, {H8,H9} as inputs to PAT, then the results of PAT are the
expected values EE, BB, E6.
Thus, we believe that the three last lines of Table 3 should be modified as follows:
{H4,H5}    0000 0003 0000 0060    PAT{H4,H5}    EE
{H6,H7}    0003 0000 0006 0000    PAT{H6,H7}    BB
{H8,H9}    0000 0005 8000 0002    PAT{H8,H9}    E6
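These corrected values can be reproduced programmatically. The following C sketch of PAT (our own rendering of the function defined in [4], with each block packed into a uint32_t and helper names of our choosing) returns 00 for the original table inputs and EE, BB, E6 for the corrected ones:

#include <assert.h>
#include <stdint.h>

/* 1 iff the octet is degenerate (x00 or xFF), cf. function NeedAdjust of [4] */
static unsigned need_adjust (uint8_t o) { return o == 0x00 || o == 0xFF; }

/* PAT: one bit per octet of the pair (X, Y), most significant octet first */
static uint8_t pat (uint32_t x, uint32_t y) {
   uint8_t p = 0;
   int i;
   for (i = 3; i >= 0; i--) p = (uint8_t) ((p << 1) | need_adjust ((uint8_t) (x >> (8 * i))));
   for (i = 3; i >= 0; i--) p = (uint8_t) ((p << 1) | need_adjust ((uint8_t) (y >> (8 * i))));
   return p;
}

int main (void) {
   assert (pat (0x01030703, 0x1D3B7760) == 0x00);   /* {X0,Y0}: PAT cannot be EE */
   assert (pat (0x0103050B, 0x17065DBB) == 0x00);   /* {V0,W}:  PAT cannot be BB */
   assert (pat (0x01030705, 0x80397302) == 0x00);   /* {S,T}:   PAT cannot be E6 */
   assert (pat (0x00000003, 0x00000060) == 0xEE);   /* {H4,H5} */
   assert (pat (0x00030000, 0x00060000) == 0xBB);   /* {H6,H7} */
   assert (pat (0x00000005, 0x80000002) == 0xE6);   /* {H8,H9} */
   return 0;
}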
C  Formal Specification of the MAA in LOTOS
This annex presents the specification LOTOS-17 of the MAA in LOTOS. This specification uses several
predefined libraries of LOTOS, namely: the libraries for Booleans and natural numbers, which we do
not reproduce here, and the libraries for bits, octets, and octet values, of which we only display excerpts
needed for understanding the MAA specification.
C.1  The BIT library
This predefined LOTOS library defines the Bit type with its related operations. Only a simplified version
of this library is presented here.
type Bit is Boolean, NaturalNumber
sorts
Bit
opns
0 (∗! constructor ∗), 1 (∗! constructor ∗) : -> Bit
not : Bit -> Bit
_and_, _or_, _xor_ : Bit, Bit -> Bit
_eq_, _ne_ : Bit, Bit -> Bool
eqns
forall x,y : Bit
ofsort Bit
not (0) = 1;
not (1) = 0;
x and 0 = 0;
x and 1 = x;
x or 0 = x;
x or 1 = 1;
x xor 0 = x;
x xor 1 = not (x);
ofsort Bool
x eq x = true;
0 eq 1 = false;
1 eq 0 = false;
x ne y = not (x eq y);
endtype
C.2  The OCTET library
This predefined LOTOS library defines the Octet type (i.e., an 8-bit word) with its related operations.
Only an excerpt of this library is presented here.
type Octet is Bit, Boolean
sorts
Octet
opns
Octet (∗! constructor ∗) : Bit, Bit, Bit, Bit, Bit, Bit, Bit, Bit -> Octet
Bit1, Bit2, Bit3, Bit4, Bit5, Bit6, Bit7, Bit8 : Octet -> Bit
not : Octet -> Octet
_and_, _or_, _xor_ : Octet, Octet -> Octet
_eq_, _ne_ : Octet, Octet -> Bool
eqns
ofsort Bit
forall b1, b2, b3, b4, b5, b6, b7, b8 : Bit
Bit1 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b1;
Bit2 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b2;
Bit3 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b3;
Bit4 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b4;
Bit5 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b5;
Bit6 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b6;
Bit7 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b7;
Bit8 (Octet (b1, b2, b3, b4, b5, b6, b7, b8)) = b8;
ofsort Octet
forall x, y : Octet
not (x) = Octet (not (Bit1 (x)), not (Bit2 (x)),
not (Bit3 (x)), not (Bit4 (x)),
not (Bit5 (x)), not (Bit6 (x)),
not (Bit7 (x)), not (Bit8 (x)));
x and y = Octet (Bit1 (x) and Bit1 (y), Bit2 (x) and Bit2 (y),
Bit3 (x) and Bit3 (y), Bit4 (x) and Bit4 (y),
Bit5 (x) and Bit5 (y), Bit6 (x) and Bit6 (y),
Bit7 (x) and Bit7 (y), Bit8 (x) and Bit8 (y));
x or y = Octet (Bit1 (x) or Bit1 (y), Bit2 (x) or Bit2 (y),
Bit3 (x) or Bit3 (y), Bit4 (x) or Bit4 (y),
Bit5 (x) or Bit5 (y), Bit6 (x) or Bit6 (y),
Bit7 (x) or Bit7 (y), Bit8 (x) or Bit8 (y));
x xor y = Octet (Bit1 (x) xor Bit1 (y), Bit2 (x) xor Bit2 (y),
Bit3 (x) xor Bit3 (y), Bit4 (x) xor Bit4 (y),
Bit5 (x) xor Bit5 (y), Bit6 (x) xor Bit6 (y),
Bit7 (x) xor Bit7 (y), Bit8 (x) xor Bit8 (y));
ofsort Bool
forall x, y : Octet
x eq y = (Bit1 (x) eq Bit1 (y)) and (Bit2 (x) eq Bit2 (y)) and
(Bit3 (x) eq Bit3 (y)) and (Bit4 (x) eq Bit4 (y)) and
(Bit5 (x) eq Bit5 (y)) and (Bit6 (x) eq Bit6 (y)) and
(Bit7 (x) eq Bit7 (y)) and (Bit8 (x) eq Bit8 (y));
x ne y = not (x eq y);
endtype
C.3  The OCTETVALUES library
This predefined LOTOS library defines 256 constant functions x00, ..., xFF that provide shorthand notations for octet values. Only an excerpt of this library is presented here.
type OctetValues is Bit, Octet
opns
x00, x01, ... xFE, xFF : -> Octet
eqns
ofsort Octet
x00 = Octet (0, 0, 0, 0, 0, 0, 0, 0);
x01 = Octet (0, 0, 0, 0, 0, 0, 0, 1);
...
xFE = Octet (1, 1, 1, 1, 1, 1, 1, 0);
xFF = Octet (1, 1, 1, 1, 1, 1, 1, 1);
endtype

C.4  The MAA specification
specification MAA: noexit
library
X_NATURAL, BIT, BOOLEAN, OCTET, OCTETVALUES
endlib
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
type Block is Boolean, Bit, Octet
sorts
Block (∗! implementedby BLOCK printedby PRINT_BLOCK ∗)
opns
Block (∗! implementedby BUILD_BLOCK constructor ∗) :
Octet, Octet, Octet, Octet -> Block
_eq_ (∗! implementedby EQUAL_BLOCK ∗) : Block, Block -> Bool
AND : Block, Block -> Block
OR : Block, Block -> Block
XOR : Block, Block -> Block
CYC : Block -> Block
ADD (∗! implementedby ADD external ∗) : Block, Block -> Block
CAR (∗! implementedby CAR external ∗) : Block, Block -> Block
HIGH_MUL (∗! implementedby HIGH_MUL external ∗) : Block, Block -> Block
LOW_MUL (∗! implementedby LOW_MUL external ∗) : Block, Block -> Block
eqns
ofsort Bool
forall X, Y : Block
X eq X = true;
X eq Y = false; (∗ assuming priority between equations ∗)
ofsort Block
forall O1, O2, O3, O4, P1, P2, P3, P4 : Octet
AND (Block (O1, O2, O3, O4), Block (P1, P2, P3, P4)) =
Block (O1 and P1, O2 and P2, O3 and P3, O4 and P4);
OR (Block (O1, O2, O3, O4), Block (P1, P2, P3, P4)) =
Block (O1 or P1, O2 or P2, O3 or P3, O4 or P4);
XOR (Block (O1, O2, O3, O4), Block (P1, P2, P3, P4)) =
Block (O1 xor P1, O2 xor P2, O3 xor P3, O4 xor P4);
ofsort Block
forall B1, B2, B3, B4, B5, B6, B7, B8,
B9, B10, B11, B12, B13, B14, B15, B16,
B17, B18, B19, B20, B21, B22, B23, B24,
B25, B26, B27, B28, B29, B30, B31, B32 : Bit
CYC (Block (Octet (B1, B2, B3, B4, B5, B6, B7, B8),
Octet (B9, B10, B11, B12, B13, B14, B15, B16),
Octet (B17, B18, B19, B20, B21, B22, B23, B24),
Octet (B25, B26, B27, B28, B29, B30, B31, B32))) =
Block (Octet (B2, B3, B4, B5, B6, B7, B8, B9),
Octet (B10, B11, B12, B13, B14, B15, B16, B17),
Octet (B18, B19, B20, B21, B22, B23, B24, B25),
Octet (B26, B27, B28, B29, B30, B31, B32, B1));
endtype
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
type Pairs is Block
sorts
Pair,
TwoPairs,
ThreePairs
opns
Pair (∗! constructor ∗) : Block, Block -> Pair
TwoPairs (∗! constructor ∗) : Pair, Pair -> TwoPairs
ThreePairs (∗! constructor ∗) : Pair, Pair, Pair -> ThreePairs
endtype
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
type Message is Block, Natural
sorts
Message (∗! implementedby MESSAGE ∗)
opns
nil (∗! implementedby NIL_MESSAGE constructor ∗) : -> Message
_++_ (∗! implementedby PLUS_MESSAGE constructor ∗) :
Block, Message -> Message
Reverse (∗! implementedby REVERSE external ∗) : Message -> Message
(∗ Reverse is not invoked in "maa.lotos" but in "main.c" ∗)
(∗ for efficiency reasons, Reverse is implemented directly in C ∗)
endtype
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
type Functions is Block, OctetValues, Pairs
opns
A_CONSTANT : -> Block
B_CONSTANT : -> Block
C_CONSTANT : -> Block
D_CONSTANT : -> Block
FIX1 : Block -> Block
FIX2 : Block -> Block
MUL1 : Block, Block -> Block
MUL1_UL : Block, Block -> Block
MUL1_SC : Block, Block -> Block
MUL2 : Block, Block -> Block
MUL2_UL : Block, Block -> Block
MUL2_DEL : Block, Block, Block -> Block
MUL2_FL : Block, Block -> Block
MUL2_SC : Block, Block -> Block
MUL2A : Block, Block -> Block
MUL2A_UL : Block, Block -> Block
MUL2A_DL : Block, Block -> Block
MUL2A_SC : Block, Block -> Block
NeedAdjust : Octet -> Bool
AdjustCode : Octet -> Bit
Adjust : Octet, Octet -> Octet
PAT : Block, Block -> Octet
BYT : Block, Block -> Pair
AuxBYT : Block, Block, Octet -> Pair
eqns
ofsort Block
A_CONSTANT = Block (x02, x04, x08, x01);
B_CONSTANT = Block (x00, x80, x40, x21);
C_CONSTANT = Block (xBF, xEF, x7F, xDF);
D_CONSTANT = Block (x7D, xFE, xFB, xFF);
ofsort Block
forall X : Block
FIX1 (X) = AND (OR (X, A_CONSTANT), C_CONSTANT);
FIX2 (X) = AND (OR (X, B_CONSTANT), D_CONSTANT);
ofsort Block
forall X, Y, U, L, S, C : Block
MUL1 (X, Y) = MUL1_UL (HIGH_MUL (X, Y), LOW_MUL (X, Y));
MUL1_UL (U, L) = MUL1_SC (ADD (U, L), CAR (U, L));
MUL1_SC (S, C) = ADD (S, C);
ofsort Block
forall X, Y, U, L, D, F, E, S, C : Block
MUL2 (X, Y) = MUL2_UL (HIGH_MUL (X, Y), LOW_MUL (X, Y));
MUL2_UL (U, L) = MUL2_DEL (ADD (U, U), CAR (U, U), L);
MUL2_DEL (D, E, L) = MUL2_FL (ADD (D, ADD (E, E)), L);
MUL2_FL (F, L) = MUL2_SC (ADD (F, L), CAR (F, L));
MUL2_SC (S, C) = ADD (S, ADD (C, C));
ofsort Block
forall X, Y, U, L, D, S, C : Block
MUL2A (X, Y) = MUL2A_UL (HIGH_MUL (X, Y), LOW_MUL (X, Y));
MUL2A_UL (U, L) = MUL2A_DL (ADD (U, U), L);
MUL2A_DL (D, L) = MUL2A_SC (ADD (D, L), CAR (D, L));
MUL2A_SC (S, C) = ADD (S, ADD (C, C));
ofsort Bool
forall O: Octet
NeedAdjust (O) = (O eq x00) or (O eq xFF);
ofsort Bit
forall O: Octet
NeedAdjust (O) => AdjustCode (O) = 1;
not (NeedAdjust (O)) => AdjustCode (O) = 0;
ofsort Octet
forall O, P: Octet
NeedAdjust (O) => Adjust (O, P) = O xor P;
not (NeedAdjust (O)) => Adjust (O, P) = O;
ofsort Octet
forall X, Y: Block, O1, O2, O3, O4, O5, O6, O7, O8: Octet
PAT (Block (O1, O2, O3, O4), Block (O5, O6, O7, O8)) =
Octet (AdjustCode (O1), AdjustCode (O2),
AdjustCode (O3), AdjustCode (O4),
AdjustCode (O5), AdjustCode (O6),
AdjustCode (O7), AdjustCode (O8));
ofsort Pair
forall B1, B2, B3, B4, B5, B6, B7, B8 : Bit,
O1, O2, O3, O4, O5, O6, O7, O8: Octet,
J, K : Block
BYT (J, K) = AuxBYT (J, K, PAT (J, K));
AuxBYT (Block (O1, O2, O3, O4), Block (O5, O6, O7, O8),
Octet (B1, B2, B3, B4, B5, B6, B7, B8)) =
Pair (Block (Adjust (O1, Octet (0, 0, 0, 0, 0, 0, 0, B1)),
Adjust (O2, Octet (0, 0, 0, 0, 0, 0, B1, B2)),
Adjust (O3, Octet (0, 0, 0, 0, 0, B1, B2, B3)),
Adjust (O4, Octet (0, 0, 0, 0, B1, B2, B3, B4))),
Block (Adjust (O5, Octet (0, 0, 0, B1, B2, B3, B4, B5)),
Adjust (O6, Octet (0, 0, B1, B2, B3, B4, B5, B6)),
Adjust (O7, Octet (0, B1, B2, B3, B4, B5, B6, B7)),
Adjust (O8, Octet (B1, B2, B3, B4, B5, B6, B7, B8))));
endtype
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
type Prelude is Functions
opns
Q : Octet -> Block
SQUARE : Block -> Block
J1_2, J1_4, J1_6, J1_8 : Block -> Block
J2_2, J2_4, J2_6, J2_8 : Block -> Block
K1_2, K1_4, K1_5, K1_7, K1_9 : Block -> Block
K2_2, K2_4, K2_5, K2_7, K2_9 : Block -> Block
H4, H6, H8, H0, H7, H9 : Block -> Block
H5 : Block, Octet -> Block
Prelude : Block, Block -> ThreePairs
AuxPrelude : Pair, Octet -> ThreePairs
eqns
ofsort Block
forall P: Octet
Q (P) = SQUARE (ADD (Block (x00, x00, x00, P),
Block (x00, x00, x00, x01)));
ofsort Block
forall B: Block
SQUARE (B) = LOW_MUL (B, B);
ofsort Block
forall J: Block
J1_2 (J) = MUL1 (J, J);
J1_4 (J) = MUL1 (J1_2 (J), J1_2 (J));
J1_6 (J) = MUL1 (J1_2 (J), J1_4 (J));
J1_8 (J) = MUL1 (J1_2 (J), J1_6 (J));
J2_2 (J) = MUL2 (J, J);
J2_4 (J) = MUL2 (J2_2 (J), J2_2 (J));
J2_6 (J) = MUL2 (J2_2 (J), J2_4 (J));
J2_8 (J) = MUL2 (J2_2 (J), J2_6 (J));
ofsort Block
forall K: Block
K1_2 (K) = MUL1 (K, K);
K1_4 (K) = MUL1 (K1_2 (K), K1_2 (K));
K1_5 (K) = MUL1 (K, K1_4 (K));
K1_7 (K) = MUL1 (K1_2 (K), K1_5 (K));
K1_9 (K) = MUL1 (K1_2 (K), K1_7 (K));
K2_2 (K) = MUL2 (K, K);
K2_4 (K) = MUL2 (K2_2 (K), K2_2 (K));
K2_5 (K) = MUL2 (K, K2_4 (K));
K2_7 (K) = MUL2 (K2_2 (K), K2_5 (K));
K2_9 (K) = MUL2 (K2_2 (K), K2_7 (K));
ofsort Block
forall J, K: Block, P: Octet
H4 (J) = XOR (J1_4 (J), J2_4 (J));
H6 (J) = XOR (J1_6 (J), J2_6 (J));
H8 (J) = XOR (J1_8 (J), J2_8 (J));
H0 (K) = XOR (K1_5 (K), K2_5 (K));
H5 (K, P) = MUL2 (H0 (K), Q (P));
H7 (K) = XOR (K1_7 (K), K2_7 (K));
H9 (K) = XOR (K1_9 (K), K2_9 (K));
ofsort ThreePairs
forall J, K: Block, P: Octet
Prelude (J, K) = AuxPrelude (BYT (J, K), PAT (J, K));
ofsort ThreePairs
forall J, K: Block, P: Octet
AuxPrelude (Pair (J, K), P) =
ThreePairs (BYT (H4 (J), H5 (K, P)),
BYT (H6 (J), H7 (K)),
BYT (H8 (J), H9 (K)));
endtype
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
type MAA is Prelude, Message
opns
ComputeXY : Pair, Block, Block -> Pair
MainLoop : TwoPairs, Block -> TwoPairs
ComputeZ : TwoPairs -> Block
Coda : TwoPairs, Pair -> Block
255 (∗! implementedby N255 external ∗) : -> Nat
MAC (∗! implementedby MAC ∗) : Block, Block, Message -> Block
MAAstart : ThreePairs, Message -> Block
MAA : Message, Nat, TwoPairs, Pair, Pair, Pair -> Block
MAAjump : Message, Block, Pair, Pair, Pair -> Block
eqns
ofsort Pair
forall X, Y, M, E: Block
ComputeXY (Pair (X, Y), M, E) =
Pair (MUL1 (XOR (X, M), FIX1 (ADD (XOR (Y, M), E))),
MUL2A (XOR (Y, M), FIX2 (ADD (XOR (X, M), E))));
ofsort TwoPairs
forall XY: Pair, B, V, W: Block
MainLoop (TwoPairs (XY, Pair (V, W)), B) =
TwoPairs (ComputeXY (XY, B, XOR (CYC (V), W)), Pair (CYC (V), W));
ofsort Block
forall X, Y: Block, VW: Pair
ComputeZ (TwoPairs (Pair (X, Y), VW)) = XOR (X, Y);
ofsort Block
forall XYVW: TwoPairs, S, T: Block
Coda (XYVW, Pair (S, T)) = ComputeZ (MainLoop (MainLoop (XYVW, S), T));
ofsort Block
forall J, K : Block, M: Message
MAC (J, K, M) = MAAstart (Prelude (J, K), M);
ofsort Block
forall X0Y0, V0W, ST : Pair, M: Message
MAAstart (ThreePairs (X0Y0, V0W, ST), M) =
MAA (M, 255, TwoPairs (X0Y0, V0W), X0Y0, V0W, ST);
ofsort Block
forall X0Y0, V0W, ST : Pair, XYVW: TwoPairs, B : Block, N: Nat, M: Message
MAA (nil, N, XYVW, X0Y0, V0W, ST) = Coda (XYVW, ST);
MAA (B ++ M, 0, XYVW, X0Y0, V0W, ST) =
MAAjump (M, Coda (MainLoop (XYVW, B), ST), X0Y0, V0W, ST);
MAA (B ++ M, succ (N), XYVW, X0Y0, V0W, ST) =
MAA (M, N, MainLoop (XYVW, B), X0Y0, V0W, ST);
ofsort Block
forall X0Y0, V0W, ST : Pair, B, Z : Block, M: Message
MAAjump (nil, Z, X0Y0, V0W, ST) = Z;
MAAjump (B ++ M, Z, X0Y0, V0W, ST) =
MAA (B ++ M, 255, MainLoop (TwoPairs (X0Y0, V0W), Z), X0Y0, V0W, ST);
endtype
(∗ −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− ∗)
behaviour
stop
endspec
D  Formal Specification of the MAA in LNT
This annex presents the specification LNT-17 of the MAA in LNT. This specification uses several
predefined libraries of LNT, namely: the libraries for Booleans and natural numbers, which we do not
reproduce here, and the libraries for bits, octets, and octet values, of which we only display excerpts
needed for understanding the MAA specification. It also defines two new libraries for blocks and block
values, which we display hereafter.
D.1  The BIT library
This predefined LNT library defines the Bit type with its related operations. Only an excerpt of this
library is presented here.
module BIT is
type BIT is
0, 1
with "eq", "ne", "lt", "le", "ge", "gt"
end type
function not (B : Bit) : Bit is
case B in
0 -> return 1
| 1 -> return 0
end case
end function
function _and_ (B1, B2 : Bit) : Bit is
case B1 in
0 -> return 0
| 1 -> return B2
end case
end function
function _or_ (B1, B2 : Bit) : Bit is
case B1 in
0 -> return B2
| 1 -> return 1
end case
end function
function _xor_ (B1, B2 : Bit) : Bit is
case B1 in
0 -> return B2
| 1 -> return not (B2)
end case
end function
end module
D.2  The OCTET library
This predefined LNT library defines the Octet type (i.e., an 8-bit word) with its related operations. Only
an excerpt of this library is presented here.
module OCTET (BIT) is
type Octet is
Octet (B1, B2, B3, B4, B5, B6, B7, B8 : Bit)
with "eq", "ne", "get"
end type
function not (O : Octet) : Octet is
return Octet (not (O.B1), not (O.B2),
not (O.B3), not (O.B4),
not (O.B5), not (O.B6),
not (O.B7), not (O.B8))
end function
function _and_ (O1, O2 : Octet) : Octet is
return Octet (O1.B1 and O2.B1, O1.B2 and O2.B2,
O1.B3 and O2.B3, O1.B4 and O2.B4,
O1.B5 and O2.B5, O1.B6 and O2.B6,
O1.B7 and O2.B7, O1.B8 and O2.B8)
end function
function _or_ (O1, O2 : Octet) : Octet is
return Octet (O1.B1 or O2.B1, O1.B2 or O2.B2,
O1.B3 or O2.B3, O1.B4 or O2.B4,
O1.B5 or O2.B5, O1.B6 or O2.B6,
O1.B7 or O2.B7, O1.B8 or O2.B8)
end function
function _xor_ (O1, O2 : Octet) : Octet is
return Octet (O1.B1 xor O2.B1, O1.B2 xor O2.B2,
O1.B3 xor O2.B3, O1.B4 xor O2.B4,
O1.B5 xor O2.B5, O1.B6 xor O2.B6,
O1.B7 xor O2.B7, O1.B8 xor O2.B8)
end function
end module
D.3  The OCTETVALUES library
This predefined LNT library defines 256 constant functions x00, ..., xFF that provide shorthand notations
for octet values. Only an excerpt of this library is presented here.
module OCTETVALUES (BIT, OCTET) is
function x00 : Octet is
return Octet (0, 0, 0, 0, 0, 0, 0, 0)
end function
function x01 : Octet is
return Octet (0, 0, 0, 0, 0, 0, 0, 1)
end function
...
function xFE : Octet is
return Octet (1, 1, 1, 1, 1, 1, 1, 0)
end function
function xFF : Octet is
return Octet (1, 1, 1, 1, 1, 1, 1, 1)
end function
end module
D.4  The BLOCK library
This library defines the Block type (i.e., a 32-bit word) with its logical and arithmetical operations, the
latter being implemented externally as a set of functions written in the C language.
module BLOCK (BIT, OCTET) is
type Block is
Block (O1, O2, O3, O4 : Octet)
with "get", "=="
end type
function _==_ (O1, O2 : Octet) : Bool is
return eq (O1, O2)
end function
function _AND_ (X, Y : Block) : Block is
return Block (X.O1 and Y.O1, X.O2 and Y.O2, X.O3 and Y.O3, X.O4 and Y.O4)
end function
function _OR_ (X, Y : Block) : Block is
return Block (X.O1 or Y.O1, X.O2 or Y.O2, X.O3 or Y.O3, X.O4 or Y.O4)
end function
function _XOR_ (X, Y : Block) : Block is
return Block (X.O1 xor Y.O1, X.O2 xor Y.O2, X.O3 xor Y.O3, X.O4 xor Y.O4)
end function
function CYC (X : Block) : Block is
return Block (
Octet (X.O1.B2, X.O1.B3, X.O1.B4, X.O1.B5, X.O1.B6, X.O1.B7, X.O1.B8, X.O2.B1),
Octet (X.O2.B2, X.O2.B3, X.O2.B4, X.O2.B5, X.O2.B6, X.O2.B7, X.O2.B8, X.O3.B1),
Octet (X.O3.B2, X.O3.B3, X.O3.B4, X.O3.B5, X.O3.B6, X.O3.B7, X.O3.B8, X.O4.B1),
Octet (X.O4.B2, X.O4.B3, X.O4.B4, X.O4.B5, X.O4.B6, X.O4.B7, X.O4.B8, X.O1.B1))
end function
function ADD (X, Y : Block) : Block is
!implementedby "ADD" !external
null −− sum modulo 2ˆ32 of X and Y
end function
function CAR (X, Y : Block) : Block is
!implementedby "CAR" !external
null −− carry of the sum of X and Y; result is either x00000000 or x00000001
end function
function HIGH_MUL (X, Y : Block) : Block is
!implementedby "HIGH_MUL" !external
null −− 32 most significant bits of the 64−bit product of X and Y
end function
function LOW_MUL (X, Y : Block) : Block is
!implementedby "LOW_MUL" !external
null −− 32 least significant bits of the 64−bit product of X and Y
end function
end module
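All five Block operations of this module have compact C counterparts. The sketches below are ours (the handwritten C files distributed with the specification may differ in details), but they agree with the comments above: CYC is a left rotation by one bit, and the four external functions rely on 64-bit intermediate arithmetic.

#include <stdint.h>

static uint32_t CYC (uint32_t x) {                  /* circular left shift by one bit */
   return (x << 1) | (x >> 31);
}

static uint32_t ADD (uint32_t x, uint32_t y) {      /* sum modulo 2^32 */
   return x + y;
}

static uint32_t CAR (uint32_t x, uint32_t y) {      /* carry of the sum: 0 or 1 */
   return (uint32_t) (((uint64_t) x + (uint64_t) y) >> 32);
}

static uint32_t HIGH_MUL (uint32_t x, uint32_t y) { /* 32 most significant bits of x * y */
   return (uint32_t) (((uint64_t) x * (uint64_t) y) >> 32);
}

static uint32_t LOW_MUL (uint32_t x, uint32_t y) {  /* 32 least significant bits of x * y */
   return (uint32_t) ((uint64_t) x * (uint64_t) y);
}

For instance, CYC (0xC4EB1AEB) yields 0x89D635D7, matching the Annex E.3.3 test vector replayed in the CHECK function of Section D.6.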
D.5  The BLOCKVALUES library
This library defines constant functions x00000000, ..., xFFFFFFFF that provide shorthand notations for
block values. Only the useful constants (207 among 2^32) are defined. An excerpt of this library is
presented here.
module BLOCKVALUES (OCTETVALUES, BLOCK) is
function x00000000 : Block is
return Block (x00, x00, x00, x00)
end function
function x00000001 : Block is
return Block (x00, x00, x00, x01)
end function
...
function xFFFFFFFE : Block is
return Block (xFF, xFF, xFF, xFE)
end function
function xFFFFFFFF : Block is
return Block (xFF, xFF, xFF, xFF)
end function
end module
D.6  The MAA specification
module MAA (BIT, OCTET, OCTETVALUES, BLOCK, BLOCKVALUES) is
!nat bits 32
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
type Message is
list of Block
with "==", "!=", "head", "tail", "append", "reverse"
end type
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function MakeMessage (N : Nat, in var INIT : Block, INCR : Block) : Message is
assert N > 0;
var I : Nat, RESULT : Message in
RESULT := {};
for I := 1 while I <= N by I := I + 1 loop
RESULT := append (INIT, RESULT);
INIT := ADD (INIT, INCR)
end loop;
return RESULT
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function FIX1 (X : Block) : Block is
return (X OR x02040801) AND xBFEF7FDF −− A = x02040801, C = xBFEF7FDF
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function FIX2 (X : Block) : Block is
return (X OR x00804021) AND x7DFEFBFF −− B = x00804021, D = x7DFEFBFF
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function MUL1 (X, Y : Block) : Block is
var U, L, S, C : Block in
U := HIGH_MUL (X, Y);
L := LOW_MUL (X, Y);
S := ADD (U, L);
C := CAR (U, L);
assert (C == x00000000) or (C == x00000001);
return ADD (S, C)
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function MUL2 (X, Y : Block) : Block is
var U, L, D, F, S, E, C : Block in
U := HIGH_MUL (X, Y);
L := LOW_MUL (X, Y);
D := ADD (U, U);
E := CAR (U, U);
assert (E == x00000000) or (E == x00000001);
F := ADD (D, ADD (E, E));
S := ADD (F, L);
C := CAR (F, L);
assert (C == x00000000) or (C == x00000001);
return ADD (S, ADD (C, C))
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function MUL2A (X, Y : Block) : Block is
var U, L, D, S, C : Block in
U := HIGH_MUL (X, Y);
L := LOW_MUL (X, Y);
D := ADD (U, U);
S := ADD (D, L);
C := CAR (D, L);
assert (C == x00000000) or (C == x00000001);
return ADD (S, ADD (C, C))
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function NeedAdjust (O : Octet) : Bool is
return (O == x00) or (O == xFF)
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function AdjustCode (O : Octet) : Bit is
if NeedAdjust (O) then
return 1
else
return 0
end if
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function Adjust (O, P : Octet) : Octet is
if NeedAdjust (O) then
return O xor P
else
return O
end if
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function PAT (X, Y : Block) : Octet is
return Octet (AdjustCode (X.O1), AdjustCode (X.O2),
AdjustCode (X.O3), AdjustCode (X.O4),
AdjustCode (Y.O1), AdjustCode (Y.O2),
AdjustCode (Y.O3), AdjustCode (Y.O4))
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function BYT (X, Y : Block, out U, L : Block) is
var P : Octet in
P := PAT (X, Y);
U := Block (
Adjust (X.O1, Octet (0, 0, 0, 0, 0, 0, 0, P.B1)),
Adjust (X.O2, Octet (0, 0, 0, 0, 0, 0, P.B1, P.B2)),
Adjust (X.O3, Octet (0, 0, 0, 0, 0, P.B1, P.B2, P.B3)),
Adjust (X.O4, Octet (0, 0, 0, 0, P.B1, P.B2, P.B3, P.B4)));
L := Block (
Adjust (Y.O1, Octet (0, 0, 0, P.B1, P.B2, P.B3, P.B4, P.B5)),
Adjust (Y.O2, Octet (0, 0, P.B1, P.B2, P.B3, P.B4, P.B5, P.B6)),
Adjust (Y.O3, Octet (0, P.B1, P.B2, P.B3, P.B4, P.B5, P.B6, P.B7)),
Adjust (Y.O4, Octet (P.B1, P.B2, P.B3, P.B4, P.B5, P.B6, P.B7, P.B8)))
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function PreludeJ (J1 : Block,
out var J12, J14, J16 : Block, out J18 : Block,
out var J22, J24, J26 : Block, out J28 : Block) is
J12 := MUL1 (J1, J1);
J14 := MUL1 (J12, J12);
J16 := MUL1 (J12, J14);
J18 := MUL1 (J12, J16);
J22 := MUL2 (J1, J1);
J24 := MUL2 (J22, J22);
J26 := MUL2 (J22, J24);
J28 := MUL2 (J22, J26)
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function PreludeK (K1 : Block,
out var K12, K14, K15, K17 : Block, out K19 : Block,
out var K22, K24, K25, K27 : Block, out K29 : Block) is
K12 := MUL1 (K1, K1);
K14 := MUL1 (K12, K12);
K15 := MUL1 (K1, K14);
K17 := MUL1 (K12, K15);
K19 := MUL1 (K12, K17);
K22 := MUL2 (K1, K1);
K24 := MUL2 (K22, K22);
K25 := MUL2 (K1, K24);
K27 := MUL2 (K22, K25);
K29 := MUL2 (K22, K27)
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function Q (O : Octet) : Block is
var B : Block in
B := ADD (Block (x00, x00, x00, O), x00000001);
return LOW_MUL (B, B)
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function PreludeHJ (J14, J16, J18, J24, J26, J28 : Block,
out H4, H6, H8 : Block) is
H4 := XOR (J14, J24);
H6 := XOR (J16, J26);
H8 := XOR (J18, J28)
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function PreludeHK (K15, K17, K19, K25, K27, K29 : Block, P : Octet,
out var H0 : Block, out H5, H7, H9 : Block) is
H0 := XOR (K15, K25);
H5 := MUL2 (H0, Q (P));
H7 := XOR (K17, K27);
H9 := XOR (K19, K29)
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function Prelude (in J, K : Block, out X, Y, V, W, S, T : Block) is
var P : Octet,
J1, J12, J14, J16, J18, J22, J24, J26, J28 : Block,
K1, K12, K14, K15, K17, K22, K24, K25, K27, K19, K29 : Block,
H4, H0, H5, H6, H7, H8, H9 : Block in
BYT (J, K, ?J1, ?K1);
P := PAT (J, K);
PreludeJ (J1, ?J12, ?J14, ?J16, ?J18, ?J22, ?J24, ?J26, ?J28);
use J12;
use J22;
PreludeK (K1, ?K12, ?K14, ?K15, ?K17, ?K19, ?K22, ?K24, ?K25, ?K27, ?K29);
use K12;
use K22;
use K14;
use K24;
PreludeHJ (J14, J16, J18, J24, J26, J28, ?H4, ?H6, ?H8);
PreludeHK (K15, K17, K19, K25, K27, K29, P, ?H0, ?H5, ?H7, ?H9);
use H0;
BYT (H4, H5, ?X, ?Y);
BYT (H6, H7, ?V, ?W);
BYT (H8, H9, ?S, ?T)
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function MainLoop (in out X, Y, V : Block, W, B : Block) is
V := CYC (V);
var E, X1, Y1 : Block in
E := XOR (V, W);
X1 := MUL1 (XOR (X, B), FIX1 (ADD (XOR (Y, B), E)));
Y1 := MUL2A (XOR (Y, B), FIX2 (ADD (XOR (X, B), E)));
X := X1;
Y := Y1
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function Coda (in var X, Y, V : Block, W, S, T : Block, out Z : Block) is
−− Coda (two more iterations with S and T)
MainLoop (!?X, !?Y, !?V, W, S);
MainLoop (!?X, !?Y, !?V, W, T);
use V;
Z := XOR (X, Y)
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function MAC (J, K : Block, in var M : Message) : Block is
!implementedby "MAC"
−− this function is invoked externally from handwritten C code
assert M != {};
var X, X0, Y, Y0, V, V0, W, S, T, Z : Block, N : Nat in
Prelude (J, K, ?X0, ?Y0, ?V0, ?W, ?S, ?T);
X := X0;
Y := Y0;
V := V0;
N := 0;
loop
MainLoop (!?X, !?Y, !?V, W, head (M));
M := tail (M);
N := N + 1;
if M == {} then
Coda (X, Y, V, W, S, T, ?Z);
return Z
elsif N == 256 then
Coda (X, Y, V, W, S, T, ?Z);
X := X0;
Y := Y0;
V := V0;
N := 0;
MainLoop (!?X, !?Y, !?V, W, Z)
end if
end loop
end var
end function
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
function CHECK : int is
!implementedby "CHECK"
−− this function is invoked externally from handwritten C code
−− it checks the official test vectors given in [ISO 8730:1990] on the one
−− hand, and [ISO 8731−2:1992] and [Davies−Clayden−88] on the other hand
−− test vectors for function MUL1 − cf. Table 1 of [ISO 8731−2:1992]
assert MUL1 (x0000000F, x0000000E) == x000000D2;
assert MUL1 (xFFFFFFF0, x0000000E) == xFFFFFF2D;
assert MUL1 (xFFFFFFF0, xFFFFFFF1) == x000000D2;
−− test vectors for function MUL2 − cf. Table 1 of [ISO 8731−2:1992]
assert MUL2 (x0000000F, x0000000E) == x000000D2;
assert MUL2 (xFFFFFFF0, x0000000E) == xFFFFFF3A;
assert MUL2 (xFFFFFFF0, xFFFFFFF1) == x000000B6;
−− test vectors for function MUL2A − cf. Table 1 of [ISO 8731−2:1992]
assert MUL2A (x0000000F, x0000000E) == x000000D2;
assert MUL2A (xFFFFFFF0, x0000000E) == xFFFFFF3A;
assert MUL2A (x7FFFFFF0, xFFFFFFF1) == x800000C2;
assert MUL2A (xFFFFFFF0, x7FFFFFF1) == x000000C4;
−− test vectors for function BYT − cf. Table 2 of [ISO 8731−2:1992]
var U, L : Block in
BYT (x00000000, x00000000, ?U, ?L);
assert U == x0103070F;
assert L == x1F3F7FFF;
BYT (xFFFF00FF, xFFFFFFFF, ?U, ?L);
assert U == xFEFC07F0;
assert L == xE0C08000;
BYT (xAB00FFCD, xFFEF0001, ?U, ?L);
assert U == xAB01FCCD;
assert L == xF2EF3501
end var;
−− test vectors for function PAT − cf. Table 2 of [ISO 8731−2:1992]
assert PAT (x00000000, x00000000) == xFF;
assert PAT (xFFFF00FF, xFFFFFFFF) == xFF;
assert PAT (xAB00FFCD, xFFEF0001) == x6A;
var J1, J12, J14, J16, J18, J22, J24, J26, J28 : Block,
K1, K12, K14, K15, K17, K19, K22, K24, K25, K27, K29 : Block,
H0, H4, H5, H6, H7, H8, H9 : Block, P : Octet in
J1 := x00000100;
K1 := x00000080;
P := x01;
PreludeJ (J1, ?J12, ?J14, ?J16, ?J18, ?J22, ?J24, ?J26, ?J28);
PreludeK (K1, ?K12, ?K14, ?K15, ?K17, ?K19, ?K22, ?K24, ?K25, ?K27, ?K29);
PreludeHJ (J14, J16, J18, J24, J26, J28, ?H4, ?H6, ?H8);
PreludeHK (K15, K17, K19, K25, K27, K29, P, ?H0, ?H5, ?H7, ?H9);
−− test vectors for J1i values − cf. Table 3 of [ISO 8731−2:1992]
assert J12 == x00010000;
assert J14 == x00000001;
assert J16 == x00010000;
assert J18 == x00000001;
−− test vectors for J2i values − cf. Table 3 of [ISO 8731−2:1992]
assert J22 == x00010000;
assert J24 == x00000002;
assert J26 == x00020000;
assert J28 == x00000004;
−− test vectors for Hi values − cf. Table 3 of [ISO 8731−2:1992]
assert H4 == x00000003;
assert H6 == x00030000;
assert H8 == x00000005;
−− test vectors for K1i values − cf. Table 3 of [ISO 8731−2:1992]
assert K12 == x00004000;
assert K14 == x10000000;
assert K15 == x00000008;
assert K17 == x00020000;
assert K19 == x80000000;
−− test vectors for K2i values − cf. Table 3 of [ISO 8731−2:1992]
assert K22 == x00004000;
assert K24 == x10000000;
assert K25 == x00000010;
assert K27 == x00040000;
assert K29 == x00000002;
−− test vectors for Hi values − cf. Table 3 of [ISO 8731−2:1992]
assert H0 == x00000018;
assert Q (P) == x00000004;
assert H5 == x00000060;
assert H7 == x00060000;
assert H9 == x80000002;
−− test vectors for function PAT − cf. Table 3 of [ISO 8731−2:1992]
assert PAT (H4, H5) == xEE;
assert PAT (H6, H7) == xBB;
assert PAT (H8, H9) == xE6;
−− test vectors for function BYT − logically inferred from Table 3
var U, L : Block in
BYT (H4, H5, ?U, ?L);
assert U == x01030703;
assert L == x1D3B7760;
BYT (H6, H7, ?U, ?L);
assert U == x0103050B;
assert L == x17065DBB;
BYT (H8, H9, ?U, ?L);
assert U == x01030705;
assert L == x80397302
end var
end var;
−− test vectors for function Main Loop − cf. Table 4 of [ISO 8731−2:1992]
var A, B, C, D, E, F, G, M, V, W, X0, X, Y0, Y, Z : Block in
−− first single−block message
−− input values given in Table 4
A := x00000004; −− fake ”A” constant
B := x00000001; −− fake ”B” constant
C := xFFFFFFF7; −− fake ”C” constant
D := xFFFFFFFB; −− fake ”D” constant
V := x00000003;
W := x00000003;
X0 := x00000002;
Y0 := x00000003;
M := x00000005;
−− loop iteration described page 10 of [ISO 8731−2:1992]
V := CYC (V); assert V == x00000006;
E := XOR (V, W); assert E == x00000005;
X := XOR (X0, M); assert X == x00000007;
Y := XOR (Y0, M); assert Y == x00000006;
F := ADD (E, Y); assert F == x0000000B;
G := ADD (E, X); assert G == x0000000C;
F := OR (F, A); assert F == x0000000F;
G := OR (G, B); assert G == x0000000D;
F := AND (F, C); assert F == x00000007;
G := AND (G, D); assert G == x00000009;
X := MUL1 (X, F); assert X == x00000031;
Y := MUL2A (Y, G); assert Y == x00000036;
Z := XOR (X, Y); assert Z == x00000007
end var;
var A, B, C, D, E, F, G, M, V, W, X0, X, Y0, Y, Z : Block in
−− second single−block message
−− input values given in Table 4
A := x00000001; −− fake ”A” constant
B := x00000004; −− fake ”B” constant
C := xFFFFFFF9; −− fake ”C” constant
D := xFFFFFFFC; −− fake ”D” constant
V := x00000003;
W := x00000003;
X0 := xFFFFFFFD;
Y0 := xFFFFFFFC;
M := x00000001;
−− loop iteration described page 10 of [ISO 8731−2:1992]
V := CYC (V); assert V == x00000006;
E := XOR (V, W); assert E == x00000005;
X := XOR (X0, M); assert X == xFFFFFFFC;
Y := XOR (Y0, M); assert Y == xFFFFFFFD;
F := ADD (E, Y); assert F == x00000002;
G := ADD (E, X); assert G == x00000001;
F := OR (F, A); assert F == x00000003;
G := OR (G, B); assert G == x00000005;
F := AND (F, C); assert F == x00000001;
G := AND (G, D); assert G == x00000004;
X := MUL1 (X, F); assert X == xFFFFFFFC;
Y := MUL2A (Y, G); assert Y == xFFFFFFFA;
Z := XOR (X, Y); assert Z == x00000006
end var;
var A, B, C, D, E, F, G, M, V, W, X0, X, Y0, Y, Z : Block in
−− third single−block message
−− input values given in Table 4
A := x00000001; −− fake ”A” constant
B := x00000002; −− fake ”B” constant
C := xFFFFFFFE; −− fake ”C” constant
D := x7FFFFFFD; −− fake ”D” constant
V := x00000007;
W := x00000007;
X0 := xFFFFFFFD;
Y0 := xFFFFFFFC;
M := x00000008;
−− loop iteration described page 10 of [ISO 8731−2:1992]
V := CYC (V); assert V == x0000000E;
E := XOR (V, W); assert E == x00000009;
X := XOR (X0, M); assert X == xFFFFFFF5;
Y := XOR (Y0, M); assert Y == xFFFFFFF4;
F := ADD (E, Y); assert F == xFFFFFFFD;
G := ADD (E, X); assert G == xFFFFFFFE;
F := OR (F, A); assert F == xFFFFFFFD;
G := OR (G, B); assert G == xFFFFFFFE;
F := AND (F, C); assert F == xFFFFFFFC;
G := AND (G, D); assert G == x7FFFFFFC;
X := MUL1 (X, F); assert X == x0000001E;
Y := MUL2A (Y, G); assert Y == x0000001E;
Z := XOR (X, Y); assert Z == x00000000
end var;
var A, B, C, D, E, F, G, M, V, W, X0, X, Y0, Y, Z : Block in
−− three−block message: first block
−− input values given in Table 4
A := x00000002; −− fake ”A” constant
B := x00000001; −− fake ”B” constant
C := xFFFFFFFB; −− fake ”C” constant
D := xFFFFFFFB; −− fake ”D” constant
V := x00000001;
W := x00000001;
X0 := x00000001;
Y0 := x00000002;
M := x00000000;
−− loop iteration described page 10 of [ISO 8731−2:1992]
V := CYC (V); assert V == x00000002;
E := XOR (V, W); assert E == x00000003;
X := XOR (X0, M); assert X == x00000001;
Y := XOR (Y0, M); assert Y == x00000002;
F := ADD (E, Y); assert F == x00000005;
G := ADD (E, X); assert G == x00000004;
F := OR (F, A); assert F == x00000007;
G := OR (G, B); assert G == x00000005;
F := AND (F, C); assert F == x00000003;
G := AND (G, D); assert G == x00000001;
X := MUL1 (X, F); assert X == x00000003;
Y := MUL2A (Y, G); assert Y == x00000002;
Z := XOR (X, Y); assert Z == x00000001;
−− three−block message: second block
−− input values given in Table 4
A := x00000002; −− fake ”A” constant
B := x00000001; −− fake ”B” constant
C := xFFFFFFFB; −− fake ”C” constant
D := xFFFFFFFB; −− fake ”D” constant
V := x00000002;
W := x00000001;
X0 := x00000003;
Y0 := x00000002;
M := x00000001;
−− loop iteration described page 10 of [ISO 8731−2:1992]
V := CYC (V); assert V == x00000004;
E := XOR (V, W); assert E == x00000005;
X := XOR (X0, M); assert X == x00000002;
Y := XOR (Y0, M); assert Y == x00000003;
F := ADD (E, Y); assert F == x00000008;
G := ADD (E, X); assert G == x00000007;
F := OR (F, A); assert F == x0000000A;
G := OR (G, B); assert G == x00000007;
F := AND (F, C); assert F == x0000000A;
G := AND (G, D); assert G == x00000003;
X := MUL1 (X, F); assert X == x00000014;
Y := MUL2A (Y, G); assert Y == x00000009;
Z := XOR (X, Y); assert Z == x0000001D;
−− three−block message: third block
−− input values given in Table 4
A := x00000002; −− fake ”A” constant
B := x00000001; −− fake ”B” constant
C := xFFFFFFFB; −− fake ”C” constant
D := xFFFFFFFB; −− fake ”D” constant
V := x00000004;
W := x00000001;
X0 := x00000014;
Y0 := x00000009;
M := x00000002;
−− loop iteration described page 10 of [ISO 8731−2:1992]
V := CYC (V); assert V == x00000008;
E := XOR (V, W); assert E == x00000009;
X := XOR (X0, M); assert X == x00000016;
Y := XOR (Y0, M); assert Y == x0000000B;
F := ADD (E, Y); assert F == x00000014;
G := ADD (E, X); assert G == x0000001F;
F := OR (F, A); assert F == x00000016;
G := OR (G, B); assert G == x0000001F;
F := AND (F, C); assert F == x00000012;
G := AND (G, D); assert G == x0000001B;
X := MUL1 (X, F); assert X == x0000018C;
Y := MUL2A (Y, G); assert Y == x00000129;
Z := XOR (X, Y); assert Z == x000000A5
end var;
−− test vectors of Annex E.3.3 of [ISO 8730:1990]
var A, B, C, D, E, F, G, M, V0, V, W, X0, X, Y0, Y : Block in
A := x02040801; −− true ”A” constant
B := x00804021; −− true ”B” constant
C := xBFEF7FDF; −− true ”C” constant
D := x7DFEFBFF; −− true ”D” constant
X0 := x21D869BA;
Y0 := x7792F9D4;
V0 := xC4EB1AEB;
W := xF6A09667;
M := x0A202020;
−− loop iteration on the first block M
V := CYC (V0); assert V == x89D635D7;
E := XOR (V, W); assert E == x7F76A3B0;
X := XOR (X0, M); assert X == x2BF8499A;
Y := XOR (Y0, M); assert Y == x7DB2D9F4;
F := ADD (E, Y); assert F == xFD297DA4;
G := ADD (E, X); assert G == xAB6EED4A;
F := OR (F, A); assert F == xFF2D7DA5;
G := OR (G, B); assert G == xABEEED6B;
F := AND (F, C); assert F == xBF2D7D85;
G := AND (G, D); assert G == x29EEE96B;
X := MUL1 (X, F); assert X == x0AD67E20;
Y := MUL2A (Y, G); assert Y == x30261492
end var;
−− test vectors for the whole algorithm − cf. Table 5 of [ISO 8731−2:1992]
var J, K, X, Y, V, W, S, T, Z, M1, M2 : Block in
−− first column of Table 5
J := x00FF00FF;
K := x00000000;
M1 := x55555555;
M2 := xAAAAAAAA;
assert PAT (J, K) == xFF;
Prelude (J, K, ?X, ?Y, ?V, ?W, ?S, ?T);
assert X == x4A645A01;
assert Y == x50DEC930;
assert V == x5CCA3239;
assert W == xFECCAA6E;
assert S == x51EDE9C7;
assert T == x24B66FB5;
−− 1st MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M1);
assert X == x48B204D6;
assert Y == x5834A585;
−− 2nd MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M2);
assert X == x4F998E01;
assert Y == xBE9F0917;
−− Coda: MainLoop iteration with S
MainLoop (!?X, !?Y, !?V, W, S);
assert X == x344925FC;
assert Y == xDB9102B0;
−− Coda: MainLoop iteration with T
MainLoop (!?X, !?Y, !?V, W, T);
use V;
assert X == x277B4B25;
assert Y == xD636250D;
Z := XOR (X,Y);
assert Z == xF14D6E28
end var;
var J, K, X, Y, V, W, S, T, Z, M1, M2 : Block in
−− second column of Table 5
J := x00FF00FF;
K := x00000000;
M1 := xAAAAAAAA;
M2 := x55555555;
assert PAT (J, K) == xFF;
Prelude (J, K, ?X, ?Y, ?V, ?W, ?S, ?T);
assert X == x4A645A01;
assert Y == x50DEC930;
assert V == x5CCA3239;
assert W == xFECCAA6E;
assert S == x51EDE9C7;
assert T == x24B66FB5;
−− 1st MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M1);
assert X == x6AEBACF8;
assert Y == x9DB15CF6;
−− 2nd MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M2);
assert X == x270EEDAF;
assert Y == xB8142629;
−− Coda: MainLoop iteration with S
MainLoop (!?X, !?Y, !?V, W, S);
assert X == x29907CD8;
assert Y == xBA92DB12;
−− Coda: MainLoop iteration with T
MainLoop (!?X, !?Y, !?V, W, T);
use V;
assert X == x28EAD8B3;
assert Y == x81D10CA3;
Z := XOR (X,Y);
assert Z == xA93BD410
end var;
var J, K, X, Y, V, W, S, T, Z, M1, M2 : Block in
−− third column of Table 5
J := x55555555;
K := x5A35D667;
M1 := x00000000;
M2 := xFFFFFFFF;
assert PAT (J, K) == x00;
Prelude (J, K, ?X, ?Y, ?V, ?W, ?S, ?T);
assert X == x34ACF886;
assert Y == x7397C9AE;
assert V == x7201F4DC;
assert W == x2829040B;
assert S == x9E2E7B36;
assert T == x13647149;
−− 1st MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M1);
assert X == x2FD76FFB;
assert Y == x550D91CE;
−− 2nd MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M2);
assert X == xA70FC148;
assert Y == x1D10D8D3;
−− Coda: MainLoop iteration with S
MainLoop (!?X, !?Y, !?V, W, S);
assert X == xB1CC1CC5;
assert Y == x29C1485F;
−− Coda: MainLoop iteration with T
MainLoop (!?X, !?Y, !?V, W, T);
use V;
assert X == x288FC786;
assert Y == x9115A558;
Z := XOR (X,Y);
assert Z == xB99A62DE
end var;
var J, K, X, Y, V, W, S, T, Z, M1, M2 : Block in
−− fourth column of Table 5
J := x55555555;
K := x5A35D667;
M1 := xFFFFFFFF;
M2 := x00000000;
assert PAT (J, K) == x00;
Prelude (J, K, ?X, ?Y, ?V, ?W, ?S, ?T);
assert X == x34ACF886;
assert Y == x7397C9AE;
assert V == x7201F4DC;
assert W == x2829040B;
assert S == x9E2E7B36;
assert T == x13647149;
−− 1st MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M1);
assert X == x8DC8BBDE;
assert Y == xFE4E5BDD;
−− 2nd MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, M2);
assert X == xCBC865BA;
assert Y == x0297AF6F;
−− Coda: MainLoop iteration with S
MainLoop (!?X, !?Y, !?V, W, S);
assert X == x3CF3A7D2;
assert Y == x160EE9B5;
−− Coda: MainLoop iteration with T
MainLoop (!?X, !?Y, !?V, W, T);
use V;
assert X == xD0482465;
assert Y == x7050EC5E;
Z := XOR (X,Y);
assert Z == xA018C83B
end var;
var J, K, X, Y, V, W, S, T : Block in
−− test vectors of Annex E.3.3 of [ISO 8730:1990]
J := xE6A12F07;
K := x9D15C437;
Prelude (J, K, ?X, ?Y, ?V, ?W, ?S, ?T);
assert X == x21D869BA;
assert Y == x7792F9D4;
assert V == xC4EB1AEB;
assert W == xF6A09667;
assert S == x6D67E884;
assert T == xA511987A
end var;
−− test vectors for the whole algorithm
var B, J, K, X, Y, V, W, S, T : Block, M : Message in
J := x80018001;
K := x80018000;
−− test mentioned in Table 6 of [ISO 8731−2:1992]
−− iterations on a message containing 20 null blocks
Prelude (J, K, ?X, ?Y, ?V, ?W, ?S, ?T);
B := x00000000;
−− 1st MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x303FF4AA;
assert Y == x1277A6D4;
−− 2nd MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x55DD063F;
assert Y == x4C49AAE0;
−− 3rd MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x51AF3C1D;
assert Y == x5BC02502;
−− 4th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xA44AAAC0;
assert Y == x63C70DBA;
−− 5th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x4D53901A;
assert Y == x2E80AC30;
−− 6th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x5F38EEF1;
assert Y == x2A6091AE;
−− 7th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xF0239DD5;
assert Y == x3DD81AC6;
−− 8th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xEB35B97F;
assert Y == x9372CDC6;
−− 9th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x4DA124A1;
assert Y == xC6B1317E;
−− 10th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x7F839576;
assert Y == x74B39176;
−− 11th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x11A9D254;
assert Y == xD78634BC;
−− 12th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xD8804CA5;
assert Y == xFDC1A8BA;
−− 13th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x3F6F7248;
assert Y == x11AC46B8;
−− 14th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xACBC13DD;
assert Y == x33D5A466;
−− 15th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x4CE933E1;
assert Y == xC21A1846;
−− 16th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xC1ED90DD;
assert Y == xCD959B46;
−− 17th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x3CD54DEB;
assert Y == x613F8E2A;
−− 18th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xBBA57835;
assert Y == x07C72EAA;
−− 19th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == xD7843FDC;
assert Y == x6AD6E8A4;
−− 20th MainLoop iteration
MainLoop (!?X, !?Y, !?V, W, B);
assert X == x5EBA06C2;
assert Y == x91896CFA;
−− Coda: MainLoop iteration with S
MainLoop (!?X, !?Y, !?V, W, S);
assert X == x1D9C9655;
assert Y == x98D1CC75;
−− Coda: MainLoop iteration with T
MainLoop (!?X, !?Y, !?V, W, T);
use V;
assert X == x7BC180AB;
assert Y == xA0B87B77;
M := MakeMessage (20, x00000000, x00000000);
assert MAC (J, K, M) == xDB79FBDC;
−− supplementary tests added by H. Garavel and L. Marsso
M := MakeMessage (16, x00000000, x07050301);
assert MAC (J, K, M) == x8CE37709;
M := MakeMessage (256, x00000000, x07050301);
assert MAC (J, K, M) == x717153D5;
M := MakeMessage (4100, x00000000, x07050301);
assert MAC (J, K, M) == x7783C51D
end var;
return 0
end function
end module
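To close this annex, here is a self-contained C test harness (our own sketch, independent of the CADP-generated code, reusing the PAT rendering of Annex B) that re-implements PAT, BYT, MUL1, MUL2, and MUL2A directly on uint32_t blocks and replays the corresponding assertions of the CHECK function above. Mathematically, MUL1 multiplies modulo 2^32 − 1 and MUL2 modulo 2^32 − 2, while MUL2A omits one carry propagation of MUL2.

#include <assert.h>
#include <stdint.h>

static unsigned need_adjust (uint8_t o) { return o == 0x00 || o == 0xFF; }

/* PAT: one bit per octet of the pair (X, Y), most significant octet first */
static uint8_t pat (uint32_t x, uint32_t y) {
   uint8_t p = 0;
   int i;
   for (i = 3; i >= 0; i--) p = (uint8_t) ((p << 1) | need_adjust ((uint8_t) (x >> (8 * i))));
   for (i = 3; i >= 0; i--) p = (uint8_t) ((p << 1) | need_adjust ((uint8_t) (y >> (8 * i))));
   return p;
}

/* BYT: XOR each degenerate octet j of the pair with the prefix B1..Bj of P */
static void byt (uint32_t x, uint32_t y, uint32_t *u, uint32_t *l) {
   uint8_t p = pat (x, y);
   uint32_t out[2] = { 0, 0 };
   int j;
   for (j = 0; j < 8; j++) {
      uint8_t o = (uint8_t) ((j < 4 ? x : y) >> (8 * (3 - j % 4)));
      if (need_adjust (o))
         o ^= (uint8_t) (p >> (7 - j));                /* prefix B1..B(j+1) of P */
      out[j / 4] |= (uint32_t) o << (8 * (3 - j % 4));
   }
   *u = out[0];
   *l = out[1];
}

static uint32_t mul1 (uint32_t x, uint32_t y) {
   uint64_t p = (uint64_t) x * y;
   uint64_t s = (p >> 32) + (uint32_t) p;              /* S = U + L, carry C in bit 32 */
   return (uint32_t) s + (uint32_t) (s >> 32);         /* ADD (S, C) */
}

static uint32_t mul2 (uint32_t x, uint32_t y) {
   uint64_t p = (uint64_t) x * y;
   uint64_t d = (p >> 32) * 2;                         /* D = U + U, carry E in bit 32 */
   uint32_t f = (uint32_t) d + 2 * (uint32_t) (d >> 32);   /* F = ADD (D, ADD (E, E)) */
   uint64_t s = (uint64_t) f + (uint32_t) p;           /* S = F + L, carry C in bit 32 */
   return (uint32_t) s + 2 * (uint32_t) (s >> 32);     /* ADD (S, ADD (C, C)) */
}

static uint32_t mul2a (uint32_t x, uint32_t y) {
   uint64_t p = (uint64_t) x * y;
   uint32_t d = (uint32_t) (p >> 32) * 2;              /* D = U + U, carry dropped */
   uint64_t s = (uint64_t) d + (uint32_t) p;           /* S = D + L, carry C in bit 32 */
   return (uint32_t) s + 2 * (uint32_t) (s >> 32);     /* ADD (S, ADD (C, C)) */
}

int main (void) {
   uint32_t u, l;
   /* Table 1 of [ISO 8731-2:1992] */
   assert (mul1 (0x0000000F, 0x0000000E) == 0x000000D2);
   assert (mul1 (0xFFFFFFF0, 0x0000000E) == 0xFFFFFF2D);
   assert (mul1 (0xFFFFFFF0, 0xFFFFFFF1) == 0x000000D2);
   assert (mul2 (0x0000000F, 0x0000000E) == 0x000000D2);
   assert (mul2 (0xFFFFFFF0, 0x0000000E) == 0xFFFFFF3A);
   assert (mul2 (0xFFFFFFF0, 0xFFFFFFF1) == 0x000000B6);
   assert (mul2a (0x0000000F, 0x0000000E) == 0x000000D2);
   assert (mul2a (0xFFFFFFF0, 0x0000000E) == 0xFFFFFF3A);
   assert (mul2a (0x7FFFFFF0, 0xFFFFFFF1) == 0x800000C2);
   assert (mul2a (0xFFFFFFF0, 0x7FFFFFF1) == 0x000000C4);
   /* Table 2 of [ISO 8731-2:1992] */
   assert (pat (0x00000000, 0x00000000) == 0xFF);
   byt (0x00000000, 0x00000000, &u, &l);
   assert (u == 0x0103070F && l == 0x1F3F7FFF);
   byt (0xFFFF00FF, 0xFFFFFFFF, &u, &l);
   assert (u == 0xFEFC07F0 && l == 0xE0C08000);
   byt (0xAB00FFCD, 0xFFEF0001, &u, &l);
   assert (u == 0xAB01FCCD && l == 0xF2EF3501);
   return 0;
}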
Report LIDS-P-3174, May 2015 (Revised Sept. 2015)
To appear in IEEE Transactions on Neural Networks, 2015
Value and Policy Iteration in Optimal Control and
Adaptive Dynamic Programming
arXiv:1507.01026v2 [] 1 Oct 2015
Dimitri P. Bertsekas
Abstract—In this paper, we consider discrete-time infinite
horizon problems of optimal control to a terminal set of states.
These are the problems that are often taken as the starting
point for adaptive dynamic programming. Under very general
assumptions, we establish the uniqueness of solution of Bellman’s
equation, and we provide convergence results for value and policy
iteration.
I. INTRODUCTION
In this paper we consider a deterministic discrete-time optimal control problem involving the system

    xk+1 = f (xk , uk ),    k = 0, 1, . . . ,    (1)

where xk and uk are the state and control at stage k, lying in sets X and U , respectively, and f is a
function mapping X × U to X. The control uk must be chosen from a constraint set U (xk ) ⊂ U that may
depend on the current state xk . The cost for the kth stage, denoted g(xk , uk ), is assumed nonnegative
and may possibly take the value ∞:

    0 ≤ g(xk , uk ) ≤ ∞,    xk ∈ X, uk ∈ U (xk ),    (2)

[values g(xk , uk ) = ∞ may be used to model constraints on xk , for example]. We are interested in feedback policies
of the form π = {µ0 , µ1 , . . .}, where each µk is a function
mapping every x ∈ X into the control µk (x) ∈ U (x). The
set of all policies is denoted by Π. Policies of the form π =
{µ, µ, . . .} are called stationary, and for convenience, when
confusion cannot arise, will be denoted by µ. No restrictions
are placed on X and U : for example, they may be finite sets as
in classical shortest path problems involving a graph, or they
may be continuous spaces as in classical problems of control
to the origin or some other terminal set.
Given an initial state x0 , a policy π = {µ0 , µ1 , . . .} when
applied to the system (1), generates a unique sequence of state
control pairs xk , µk (xk ) , k = 0, 1, . . . , with cost
Jπ (x0 ) = lim
k→∞
k
X
t=0
g xt , µt (xt ) ,
x0 ∈ X,
(3)
[the limit exists thanks to the nonnegativity assumption (2)].
We view Jπ as a function over X that takes values in [0, ∞].
We refer to it as the cost function of π. For a stationary
Dimitri Bertsekas is with the Dept. of Electr. Engineering and Comp.
Science, M.I.T., Cambridge, Mass., 02139. dimitrib@mit.edu
Many thanks are due to Huizhen (Janey) Yu for collaboration and many
helpful discussions in the course of related works.
x ∈ X,
∀ x ∈ X.
In the context of dynamic programming (DP for short),
one hopes to prove that the optimal cost function J ∗ satisfies
Bellman’s equation:
∀ x ∈ X, (4)
g(x, u) + J ∗ f (x, u) ,
J ∗ (x) = inf
u∈U(x)
and that an optimal stationary policy may be obtained through
the minimization in the right side of this equation. Note that
Bellman’s equation generically has multiple solutions, since
adding a positive constant to any solution produces another
solution. A classical result, stated in Prop. 4(a) of Section II,
is that the optimal cost function J ∗ is the “smallest” solution
of Bellman’s equation. In this paper we will focus on deriving
conditions under which J ∗ is the unique solution within a
certain restricted class of functions.
In this paper, we will also consider finding J ∗ with the
classical algorithms of value iteration (VI for short) and policy
iteration (PI for short). The VI algorithm starts from some
nonnegative function J0 : X ↦ [0, ∞], and generates a sequence of functions {Jk} according to

    Jk+1(x) = inf_{u∈U(x)} { g(x, u) + Jk(f(x, u)) },   x ∈ X.   (5)
We will derive conditions under which Jk converges to J ∗
pointwise.
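To make the iteration (5) concrete, here is a minimal Python sketch (not from the paper; the three-state deterministic shortest path instance is a hypothetical illustration), with state 0 playing the role of the cost-free and absorbing stopping set Xs:

# Hypothetical deterministic shortest path instance: states 0, 1, 2,
# with state 0 cost-free and absorbing, i.e., Xs = {0} [cf. Eq. (8) below].
# f[x][u] is the successor state, g[x][u] the nonnegative stage cost.
f = {0: {"stay": 0}, 1: {"to0": 0, "to2": 2}, 2: {"to0": 0}}
g = {0: {"stay": 0.0}, 1: {"to0": 3.0, "to2": 1.0}, 2: {"to0": 1.0}}

def value_iteration(num_iters=50):
    J = {x: 0.0 for x in f}  # start from J0 = 0, cf. Prop. 4(d)
    for _ in range(num_iters):
        # Jk+1(x) = min_u { g(x, u) + Jk(f(x, u)) }, cf. Eq. (5)
        J = {x: min(g[x][u] + J[f[x][u]] for u in f[x]) for x in f}
    return J

print(value_iteration())  # {0: 0.0, 1: 2.0, 2: 1.0} = J*

Here all cycles other than the self-loop at the stopping state have positive length, so the iterates converge to J*.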
The PI algorithm starts from a stationary policy µ0, and generates a sequence of stationary policies {µk} via a sequence of policy evaluations to obtain Jµk from the equation

    Jµk(x) = g(x, µk(x)) + Jµk(f(x, µk(x))),   x ∈ X,   (6)

interleaved with policy improvements to obtain µk+1 from Jµk according to

    µk+1(x) ∈ arg min_{u∈U(x)} { g(x, u) + Jµk(f(x, u)) },   x ∈ X.   (7)
We implicitly assume here that Jµk satisfies Eq. (6), which is true under the cost nonnegativity assumption (2) (cf. Prop. 4 in the next section). Also, for the PI algorithm to be well-defined, the minimum in Eq. (7) should be attained for each x ∈ X, which is true under some conditions that guarantee compactness of the level sets

    {u ∈ U(x) | g(x, u) + Jµk(f(x, u)) ≤ λ},   λ ∈ ℜ.
We will derive conditions under which Jµk converges to J ∗
pointwise.
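A matching sketch of the PI iteration (6)-(7) for the same kind of finite instance (again hypothetical, reusing f and g from the VI sketch above; exact policy evaluation is approximated by iterating the fixed-point equation (6), which converges from below since g ≥ 0):

def evaluate(mu, f, g, num_iters=100):
    # Approximate policy evaluation: iterate
    # J(x) <- g(x, mu(x)) + J(f(x, mu(x))), cf. Eq. (6).
    J = {x: 0.0 for x in f}
    for _ in range(num_iters):
        J = {x: g[x][mu[x]] + J[f[x][mu[x]]] for x in f}
    return J

def policy_iteration(f, g, num_rounds=20):
    mu = {x: next(iter(f[x])) for x in f}  # arbitrary initial policy mu0
    for _ in range(num_rounds):
        J = evaluate(mu, f, g)
        # Policy improvement, cf. Eq. (7): pick a minimizing control at each state.
        mu = {x: min(f[x], key=lambda u: g[x][u] + J[f[x][u]]) for x in f}
    return mu, evaluate(mu, f, g)

print(policy_iteration(f, g))  # recovers an optimal policy and J* = {0: 0.0, 1: 2.0, 2: 1.0}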
In this paper, we will address the preceding questions, for
the case where there is a nonempty stopping set Xs ⊂ X,
which consists of cost-free and absorbing states in the sense
that
    g(x, u) = 0,   x = f(x, u),   ∀ x ∈ Xs, u ∈ U(x).   (8)
Clearly, J ∗ (x) = 0 for all x ∈ Xs , so the set Xs may be
viewed as a desirable set of termination states that we are
trying to reach or approach with minimum total cost. We will
assume in addition that J∗(x) > 0 for x ∉ Xs, so that

    Xs = {x ∈ X | J∗(x) = 0}.   (9)
In the applications of primary interest, g is usually taken to
be strictly positive outside of Xs to encourage asymptotic
convergence of the generated state sequence to Xs , so this
assumption is natural and often easily verifiable. Besides Xs ,
another interesting subset of X is
    Xf = {x ∈ X | J∗(x) < ∞}.
Ordinarily, in practical applications, the states in Xf are
those from which one can reach the stopping set Xs , at least
asymptotically.
For an initial state x, we say that a policy π terminates
starting from x if the state sequence {xk } generated starting
from x and using π reaches Xs in finite time, i.e., satisfies
xk̄ ∈ Xs for some index k̄. A key assumption in this paper is
that the optimal cost J ∗ (x) (if it is finite) can be approximated
arbitrarily closely by using policies that terminate from x. In
particular, in all the results and discussions of the paper we
make the following assumption (except for Prop. 5, which
provides conditions under which the assumption holds).
Assumption 1. The cost nonnegativity condition (2) and
stopping set conditions (8)-(9) hold. Moreover, for every pair
(x, ǫ) with x ∈ Xf and ǫ > 0, there exists a policy π that
terminates starting from x and satisfies Jπ (x) ≤ J ∗ (x) + ǫ.
Specific and easily verifiable conditions that imply this
assumption will be given in Section IV. A prominent case
is when X and U are finite, so the problem becomes a deterministic shortest path problem with nonnegative arc lengths. If
all cycles of the state transition graph have positive length, all
policies π that do not terminate from a state x ∈ Xf must
satisfy Jπ (x) = ∞, implying that there exists an optimal
policy that terminates from all x ∈ Xf . Thus, in this case
Assumption 1 is naturally satisfied.
When X is the n-dimensional Euclidean space ℜn , a primary case of interest for this paper, it may easily happen that
the optimal policies are not terminating from some x ∈ Xf ,
but instead the optimal state trajectories may approach Xs
asymptotically. This is true for example in the classical linear-quadratic optimal control problem, where X = ℜn, Xs = {0},
U = ℜm , g is positive semidefinite quadratic, and f represents
a linear system of the form xk+1 = Axk + Buk , where A and
B are given matrices. However, we will show in Section IV
that Assumption 1 is satisfied under some natural and easily
verifiable conditions.
Regarding notation, we denote by ℜ and ℜn the real line
and n-dimensional Euclidean space, respectively. We denote
by E+(X) the set of all functions J : X ↦ [0, ∞], and by J the set of functions

    J = {J ∈ E+(X) | J(x) = 0, ∀ x ∈ Xs}.   (10)
Since Xs consists of cost-free and absorbing states [cf. Eq.
(8)], the set J contains the cost function Jπ of all policies π,
as well as J ∗ . In our terminology, all equations, inequalities,
and convergence limits involving functions are meant to be
pointwise. Our main results are given in the following three
propositions.
Proposition 1 (Uniqueness of Solution of Bellman’s Equation). Let Assumption 1 hold. The optimal cost function J ∗ is
the unique solution of Bellman’s equation (4) within the set
of functions J.
There are well-known examples where g ≥ 0 but Assumption 1 does not hold, and there are additional solutions of
Bellman’s equation within J. The following is a two-state
shortest path example, which is discussed in more detail in
[12], Section 3.1.2, and [14], Example 1.1.
Example 1 (Counterexample for Uniqueness of Solution of
Bellman’s Equation). Let X = {0, 1}, where 0 is the unique
cost-free and absorbing state, Xs = {0}, and assume that at
state 1 we can stay at 1 at no cost, or move to 0 at cost 1.
Here J ∗ (0) = J ∗ (1) = 0, so Eq. (9) is violated. It can be
seen that
J = {J | J(0) = 0, J(1) ≥ 0},

and that Bellman’s equation is

    J(0) = J(0),   J(1) = min{J(1), 1 + J(0)}.

It can be seen that Bellman’s equation has infinitely many solutions within J, the set {J | J(0) = 0, 0 ≤ J(1) ≤ 1}.
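A quick numerical check of this solution set (a hypothetical verification snippet, not part of the original example):

# Bellman's equation at state 1 of Example 1: J(1) = min{J(1), 1 + J(0)}, with J(0) = 0.
for J1 in [0.0, 0.25, 0.5, 1.0, 1.5]:
    solves = (J1 == min(J1, 1.0 + 0.0))
    print(f"J(1) = {J1}: fixed point -> {solves}")
# True for every J(1) <= 1, False for J(1) = 1.5.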
Proposition 2 (Convergence of VI). Let Assumption 1 hold.
(a) The VI sequence {Jk } generated by Eq. (5) converges
pointwise to J ∗ starting from any function J0 ∈ J with
J0 ≥ J ∗ .
(b) Assume further that U is a metric space, and the sets Uk(x, λ) given by

    Uk(x, λ) = {u ∈ U(x) | g(x, u) + Jk(f(x, u)) ≤ λ},

are compact for all x ∈ X, λ ∈ ℜ, and k, where {Jk} is the VI sequence generated by Eq. (5) starting from J0 ≡ 0. Then the VI sequence {Jk} generated by Eq. (5) converges pointwise to J∗ starting from any function J0 ∈ J.
The compactness assumption of Prop. 2(b) is satisfied if
U (x) is finite for all x ∈ X. Other easily verifiable assumptions implying this compactness assumption will be given
later. Note that when there are solutions to Bellman’s equation
within J, in addition to J ∗ , VI will not converge to J ∗ starting
from any of these solutions. However, it is also possible that
Bellman’s equation has J ∗ as its unique solution within J, and
yet VI does not converge to J ∗ starting from the zero function
because the compactness assumption of Prop. 2(b) is violated.
There are several examples of this type in the literature, and
the following example, an adaptation of Example 4.3.3 of [12],
is a deterministic problem for which Assumption 1 is satisfied.
Example 2 (Counterexample for Convergence of VI). Let
X = [0, ∞) ∪ {s}, with s being a cost-free and absorbing
state, and let U = (0, ∞) ∪ {ū}, where ū is a special stopping
control, which moves the system from states x ≥ 0 to state s
at unit cost. The system has the form

    x_{k+1} = { xk + uk   if xk ≥ 0 and uk ≠ ū,
                s         if xk ≥ 0 and uk = ū,
                s         if xk = s and uk ∈ U.

The cost per stage has the form

    g(xk, uk) = { xk   if xk ≥ 0 and uk ≠ ū,
                  1    if xk ≥ 0 and uk = ū,
                  0    if xk = s and uk ∈ U.

Let also Xs = {s}. Then it can be verified that

    J∗(x) = { 1   if x ≥ 0,
              0   if x = s,

and that an optimal policy is to use the stopping control ū at every state (since using any other control at states x ≥ 0 leads to unbounded accumulation of positive cost). Thus it can be seen that Assumption 1 is satisfied. On the other hand, the VI algorithm is

    Jk+1(x) = min{ 1 + Jk(s), inf_{u>0} [ x + Jk(x + u) ] }

for x ≥ 0, and Jk+1(s) = Jk(s), and it can be verified by induction that starting from J0 ≡ 0, the sequence {Jk} is given for all k by

    Jk(x) = { min{1, kx}   if x ≥ 0,
              0            if x = s.
Thus Jk (0) = 0 for all k, while J ∗ (0) = 1, so the VI algorithm
fails to converge for the state x = 0. The difficulty here is that
the compactness assumption of Prop. 2(b) is violated.
Proposition 3 (Convergence of PI). Let Assumption 1 hold.
A sequence {Jµk } generated by the PI algorithm (6), (7),
satisfies Jµk (x) ↓ J ∗ (x) for all x ∈ X.
It is implicitly assumed in the preceding proposition that the
PI algorithm is well-defined in the sense that the minimization
in the policy improvement operation (7) can be carried out for
every x ∈ X. Easily verifiable conditions that guarantee this
also guarantee the compactness condition of Prop. 2(b), and
will be noted following Prop. 4 in the next section. Moreover,
in Section IV we will prove a similar convergence result for
a variant of the PI algorithm where the policy evaluation is
carried out approximately through a finite number of VIs.
Example 3 (Counterexample for Convergence of PI). For a
simple example where the PI sequence {Jµk} does not converge to J∗ if Assumption 1 is violated, consider the two-state shortest path Example 1. Let µ be the suboptimal policy that moves from state 1 to state 0. Then Jµ(0) = 0, Jµ(1) = 1, and it can be seen that µ satisfies the policy improvement equation

    µ(1) ∈ arg min{ 1 + Jµ(0), Jµ(1) }.

Thus PI may stop with the suboptimal policy µ.
The results of the preceding three propositions are new at
the level of generality given here. For example there has been
no proposal of a valid PI algorithm in the classical literature on
nonnegative cost infinite horizon Markovian decision problems
(exceptions are special cases such as linear-quadratic problems
[23]). The ideas of the present paper stem from a more general
analysis regarding the convergence of VI, which was presented
recently in the author’s research monograph on abstract DP [12], and various extensions given in the recent papers [13],
[14]. Two more papers of the author, coauthored with H. Yu,
deal with issues that relate in part to the intricacies of the
convergence of VI and PI in undiscounted infinite horizon DP
[35], [5].
The paper is organized as follows. In Section II we provide
background and references, which place in context our results
and methods of analysis in relation to the literature. In Section
III we give the proofs of Props. 1-3. In Section IV we discuss
special cases and easily verifiable conditions that imply our
assumptions, and we provide extensions of our analysis.
II. BACKGROUND
The issues discussed in this paper have received attention since
the 60’s, originally in the work of Blackwell [15], who considered the case g ≤ 0, and the work by Strauch (Blackwell’s PhD
student) [30], who considered the case g ≥ 0. For textbook
accounts we refer to [2], [25], [11], and for a more abstract
development, we refer to the monograph [12]. These works
showed that the cases where g ≤ 0 (which corresponds to
maximization of nonnegative rewards) and g ≥ 0 (which is
most relevant to the control problems of this paper) are quite
different in structure. In particular, while VI converges to J∗ starting from J0 ≡ 0 when g ≤ 0, this is not so when g ≥ 0; a certain compactness condition is needed to guarantee this [see Example 2, and part (d) of the following proposition]. Moreover, when g ≥ 0, Bellman’s equation may have solutions Ĵ ≠ J∗ with Ĵ ≥ J∗ (see Example 1), and VI will not converge to J∗ starting from such Ĵ. In addition it is known that in general, PI need not converge to J∗ and may instead stop with a suboptimal policy (see Example 3).
The following proposition gives the standard results when
g ≥ 0 (see [2], Props. 5.2, 5.4, and 5.10; [11], Props. 4.1.1, 4.1.3, 4.1.5, 4.1.9; or [12], Props. 4.3.3, 4.3.9, and 4.3.14). These results hold for stochastic infinite horizon DP
problems with nonnegative cost per stage, and do not take
into account the favorable structure of deterministic problems
or the presence of the stopping set Xs .
Proposition 4. Let the nonnegativity condition (2) hold.

(a) J∗ satisfies Bellman’s equation (4), and if Ĵ ∈ E+(X) is another solution, i.e., Ĵ satisfies

    Ĵ(x) = inf_{u∈U(x)} { g(x, u) + Ĵ(f(x, u)) },   ∀ x ∈ X,   (11)

then J∗ ≤ Ĵ.

(b) For all stationary policies µ we have

    Jµ(x) = g(x, µ(x)) + Jµ(f(x, µ(x))),   ∀ x ∈ X.   (12)

(c) A stationary policy µ∗ is optimal if and only if

    µ∗(x) ∈ arg min_{u∈U(x)} { g(x, u) + J∗(f(x, u)) },   ∀ x ∈ X.   (13)

(d) If U is a metric space and the sets

    Uk(x, λ) = {u ∈ U(x) | g(x, u) + Jk(f(x, u)) ≤ λ}   (14)

are compact for all x ∈ X, λ ∈ ℜ, and k, where {Jk} is the sequence generated by VI [cf. Eq. (5)] starting from J0 ≡ 0, then there exists at least one optimal stationary policy, and we have Jk → J∗.
Compactness assumptions such as the one of part (d) above,
were originally given in [9], [10], and in [29]. They have been
used in several other works, such as [3], [11], Prop. 4.1.9. In
particular, the condition of part (d) holds when U (x) is a finite
set for all x ∈ X. The condition of part (d) also holds when
X = ℜn, and for each x ∈ X, the set

    {u ∈ U(x) | g(x, u) ≤ λ}

is a compact subset of ℜm, for all λ ∈ ℜ, and g and f are
continuous in u. The proof consists of showing by induction
that the VI iterates Jk have compact level sets and hence are
lower semicontinuous.
Let us also note a recent result of H. Yu and the author [35],
where it was shown that J ∗ is the unique solution of Bellman’s
equation within the class of all functions J ∈ E + (X) that
satisfy

    0 ≤ J ≤ cJ∗   for some c > 0,   (15)

(we refer to [35] for discussion and references to antecedents of this result). Moreover it was shown that VI converges to J∗ starting from any function satisfying the condition

    J∗ ≤ J ≤ cJ∗   for some c > 0,
and under the compactness conditions of Prop. 4(d), starting
from any J that satisfies Eq. (15). The same paper and a related
paper [5] discuss extensively PI algorithms for stochastic
nonnegative cost problems.
For deterministic problems, there has been substantial research in the adaptive dynamic programming literature, regarding the validity of Bellman’s equation and the uniqueness of
its solution, as well as the attendant questions of convergence
of VI and PI. In particular, infinite horizon deterministic
optimal control for both discrete-time and continuous-time
systems has been considered since the early days of DP in the
works of Bellman. For continuous-time problems the questions
discussed in the present paper involve substantial technical
difficulties, since the analog of the (discrete-time) Bellman
equation (4) is the steady-state form of the (continuous-time) Hamilton-Jacobi-Bellman equation, a nonlinear partial
differential equation the solution and analysis of which is
in general very complicated. A formidable difficulty is the
potential lack of differentiability of the optimal cost function,
even for simple problems such as time-optimal control of
second order linear systems to the origin.
The analog of VI for continuous-time systems essentially
involves the time integration of the Hamilton-Jacobi-Bellman
equation, and its analysis must deal with difficult issues of stability and convergence to a steady-state solution. Nonetheless
there have been proposals of continuous-time PI algorithms,
in the early papers [26], [23], [28], [34], and the thesis [6], as
well as more recently in several works; see e.g., the book [32],
the survey [18], and the references quoted there. These works
also address the possibility of value function approximation,
similar to other approximation-oriented methodologies such
as neurodynamic programming [4] and reinforcement learning
[31], which consider primarily discrete-time systems. For
example, among the restrictions of the PI method, is that it
must be started with a stabilizing controller; see for example
the paper [23], which considered linear-quadratic continuous-time problems, and showed convergence to the optimal policy
of the PI algorithm, assuming that an initial stabilizing linear
controller is used. By contrast, no such restriction is needed in
the PI methodology of the present paper; questions of stability
are addressed only indirectly through the finiteness of the
values J ∗ (x) and Assumption 1.
For discrete-time systems there has been much research,
both for VI and PI algorithms. For a selective list of recent
references, which themselves contain extensive lists of other
references, see the book [32], the papers [19], [16], [17], [22],
[33], the survey papers in the edited volumes [27] and [21],
and the special issue [20]. Some of these works relate to
continuous-time problems as well, and in their treatment of
algorithmic convergence, typically assume that X and U are
Euclidean spaces, as well as continuity and other conditions
on g, special structure of the system, etc. It is beyond our
scope to provide a detailed survey of the state-of-the-art of
the VI and PI methodology in the context of adaptive DP.
However, it should be clear that the works in this field involve
more restrictive assumptions than our corresponding results
of Props. 1-3. Of course, these works also address questions
that we do not, such as issues of stability of the obtained
controllers, the use of approximations, etc. Thus the results
of the present work may be viewed as new in that they
rely on very general assumptions, yet do not address some
important practical issues. The line of analysis of the present
paper, which is based on general results of Markovian decision
problem theory and abstract forms of dynamic programming,
is also different from the lines of analysis of works in adaptive
DP, which make heavy use of the deterministic character of
the problem and control theoretic methods such as Lyapunov
stability.
Still there is a connection between our line of analysis and Lyapunov stability. In particular, if π∗ is an optimal controller, i.e., Jπ∗ = J∗, then for every x0 ∈ Xf, the state sequence {xk} generated using π∗ and starting from x0 remains within Xf and satisfies J∗(xk) ↓ 0. This can be seen by writing

    J∗(x0) = Σ_{t=0}^{k−1} g(xt, µ∗t(xt)) + J∗(xk),   k = 1, 2, . . . ,

and using the facts g ≥ 0 and J∗(x0) < ∞. Thus an optimal controller, restricted to the subset Xf, may be viewed as a Lyapunov-stable controller where the Lyapunov function is J∗.

On the other hand, existence of a “stable” controller does not necessarily imply that J∗ is real-valued. In particular, it may not be true that if the sequence {xk} generated by an optimal controller starting from some x0 converges to Xs, then we have J∗(x0) < ∞. The reason is that the cost per stage g may not decrease fast enough as we approach Xs. As an example, let

    X = {0} ∪ {1/m | m is a positive integer},

with Xs = {0}, and assume that there is a unique controller, which moves from 1/m to 1/(m + 1) with incurred cost 1/m. Then we have J∗(x) = ∞ for all x ≠ 0, despite the fact that the controller is “stable” in the sense that it generates a sequence {xk} converging to 0 starting from every x0 ≠ 0.

III. PROOFS OF THE MAIN RESULTS

Let us denote for all x ∈ X,

    ΠT,x = {π ∈ Π | π terminates from x},

and note the following key implication of Assumption 1:

    J∗(x) = inf_{π∈ΠT,x} Jπ(x),   ∀ x ∈ Xf.   (16)

In the subsequent arguments, the significance of policies that terminate starting from some initial state x0 is that the corresponding generated sequences {xk} satisfy J(xk) = 0 for all J ∈ J and k sufficiently large.

Proof of Prop. 1: Let Ĵ ∈ J be a solution of the Bellman equation (11), so that

    Ĵ(x) ≤ g(x, u) + Ĵ(f(x, u)),   ∀ x ∈ X, u ∈ U(x),   (17)

while by Prop. 4(a), J∗ ≤ Ĵ. For any x0 ∈ Xf and policy π = {µ0, µ1, . . .} ∈ ΠT,x0, we have, by using repeatedly Eq. (17),

    J∗(x0) ≤ Ĵ(x0) ≤ Ĵ(xk) + Σ_{t=0}^{k−1} g(xt, µt(xt)),   k = 1, 2, . . . ,

where {xk} is the state sequence generated starting from x0 and using π. Also, since π ∈ ΠT,x0 and hence xk ∈ Xs and Ĵ(xk) = 0 for all sufficiently large k, we have

    lim sup_{k→∞} { Ĵ(xk) + Σ_{t=0}^{k−1} g(xt, µt(xt)) } = lim_{k→∞} Σ_{t=0}^{k−1} g(xt, µt(xt)) = Jπ(x0).

By combining the last two relations, we obtain

    J∗(x0) ≤ Ĵ(x0) ≤ Jπ(x0),   ∀ x0 ∈ Xf, π ∈ ΠT,x0.

Taking the infimum over π ∈ ΠT,x0 and using Eq. (16), it follows that J∗(x0) = Ĵ(x0) for all x0 ∈ Xf. Also for x0 ∉ Xf, we have J∗(x0) = Ĵ(x0) = ∞ [since J∗ ≤ Ĵ by Prop. 4(a)], so we obtain J∗ = Ĵ.

Proof of Prop. 2: (a) Suppose that J0 ∈ J and J0 ≥ J∗. Starting with J0, let us apply the VI operation to both sides of the inequality J0 ≥ J∗. Since J∗ is a solution of Bellman’s equation and VI has a monotonicity property that maintains the direction of functional inequalities, we see that J1 ≥ J∗. Continuing similarly, we obtain Jk ≥ J∗ for all k. Moreover, we clearly have Jk(x) = 0 for all x ∈ Xs, so Jk ∈ J for all k. We now argue that since Jk is produced by k steps of VI starting from J0, it is the optimal cost function of the k-stage version of the problem with terminal cost function J0. Therefore, we have for every x0 ∈ X and policy π = {µ0, µ1, . . .},

    J∗(x0) ≤ Jk(x0) ≤ J0(xk) + Σ_{t=0}^{k−1} g(xt, µt(xt)),   k = 1, 2, . . . ,

where {xt} is the state sequence generated starting from x0 and using π. If x0 ∈ Xf and π ∈ ΠT,x0, we have xk ∈ Xs and J0(xk) = 0 for all sufficiently large k, so that

    lim sup_{k→∞} { J0(xk) + Σ_{t=0}^{k−1} g(xt, µt(xt)) } = lim_{k→∞} Σ_{t=0}^{k−1} g(xt, µt(xt)) = Jπ(x0).

By combining the last two relations, we obtain

    J∗(x0) ≤ lim inf_{k→∞} Jk(x0) ≤ lim sup_{k→∞} Jk(x0) ≤ Jπ(x0),

for all x0 ∈ Xf and π ∈ ΠT,x0. Taking the infimum over π ∈ ΠT,x0 and using Eq. (16), it follows that lim_{k→∞} Jk(x0) = J∗(x0) for all x0 ∈ Xf. Since for x0 ∉ Xf we have J∗(x0) = Jk(x0) = ∞, we obtain Jk → J∗.

(b) Let {Jk} be the VI sequence generated starting from some function J ∈ J. By the monotonicity of the VI operation, {Jk} lies between the sequence of VI iterates starting from the zero function [which converges to J∗ from below by Prop. 4(d)], and the sequence of VI iterates starting from J0 = max{J, J∗} [which converges to J∗ from above by part (a)].
Proof of Prop. 3: If µ is a stationary policy and µ̄ satisfies the policy improvement equation

    µ̄(x) ∈ arg min_{u∈U(x)} { g(x, u) + Jµ(f(x, u)) },   x ∈ X,

[cf. Eq. (7)], we have for all x ∈ X,

    Jµ(x) = g(x, µ(x)) + Jµ(f(x, µ(x)))
          ≥ min_{u∈U(x)} { g(x, u) + Jµ(f(x, u)) }
          = g(x, µ̄(x)) + Jµ(f(x, µ̄(x))),   (18)

where the first equality follows from Prop. 4(b) and the second equality follows from the definition of µ̄. Let us fix x and let {xk} be the sequence generated starting from x and using µ̄. By repeatedly applying Eq. (18), we see that the sequence {J̃k(x)} defined by

    J̃0(x) = Jµ(x),   J̃1(x) = Jµ(x1) + g(x, µ̄(x)),

and more generally,

    J̃k(x) = Jµ(xk) + Σ_{t=0}^{k−1} g(xt, µ̄(xt)),   k = 1, 2, . . . ,

is monotonically nonincreasing. Thus, using also Eq. (18), we have

    Jµ(x) ≥ min_{u∈U(x)} { g(x, u) + Jµ(f(x, u)) } = J̃1(x) ≥ J̃k(x),

for all x ∈ X and k ≥ 1. This implies that

    Jµ(x) ≥ min_{u∈U(x)} { g(x, u) + Jµ(f(x, u)) }
          ≥ lim_{k→∞} J̃k(x)
          = lim_{k→∞} { Jµ(xk) + Σ_{t=0}^{k−1} g(xt, µ̄(xt)) }
          ≥ lim_{k→∞} Σ_{t=0}^{k−1} g(xt, µ̄(xt))
          = Jµ̄(x),

where the last inequality follows since Jµ ≥ 0. In conclusion, we have

    Jµ(x) ≥ inf_{u∈U(x)} { g(x, u) + Jµ(f(x, u)) } ≥ Jµ̄(x),   x ∈ X.

Using µk and µk+1 in place of µ and µ̄ in the preceding relation, we obtain for all x ∈ X,

    Jµk(x) ≥ inf_{u∈U(x)} { g(x, u) + Jµk(f(x, u)) } ≥ Jµk+1(x).   (19)

Thus the sequence {Jµk} generated by PI converges monotonically to some function J∞ ∈ E+(X), i.e., Jµk ↓ J∞. Moreover, by taking the limit as k → ∞ in Eq. (19), we have the two relations

    J∞(x) ≥ inf_{u∈U(x)} { g(x, u) + J∞(f(x, u)) },   x ∈ X,

and

    g(x, u) + Jµk(f(x, u)) ≥ J∞(x),   x ∈ X, u ∈ U(x).

We now take the limit in the second relation as k → ∞, then the infimum over u ∈ U(x), and then combine with the first relation, to obtain

    J∞(x) = inf_{u∈U(x)} { g(x, u) + J∞(f(x, u)) },   x ∈ X.

Thus J∞ is a solution of Bellman’s equation, satisfying J∞ ∈ J (since Jµk ∈ J and Jµk ↓ J∞), so by the uniqueness result of Prop. 1, we have J∞ = J∗.
IV. DISCUSSION, SPECIAL CASES, AND EXTENSIONS
In this section we elaborate on our main results and we derive
easily verifiable conditions under which our assumptions hold.
A. Conditions that Imply Assumption 1
Consider Assumption 1. As noted in Section I, it holds when
X and U are finite, a terminating policy exists from every
x, and all cycles of the state transition graph have positive
length. For the case where X is infinite, let us assume that
X is a normed space with norm denoted ‖·‖, and say that π asymptotically terminates from x if the sequence {xk} generated starting from x and using π converges to Xs in the sense that

    lim_{k→∞} dist(xk, Xs) = 0,

where dist(x, Xs) denotes the minimum distance from x to Xs,

    dist(x, Xs) = inf_{y∈Xs} ‖x − y‖,   x ∈ X.
The following proposition provides readily verifiable conditions that guarantee Assumption 1.
Proposition 5. Let the cost nonnegativity condition (2) and
stopping set conditions (8)-(9) hold, and assume further the
following:
(1) For every x ∈ Xf and ǫ > 0, there exists a policy π that
asymptotically terminates from x and satisfies
Jπ (x) ≤ J ∗ (x) + ǫ.
(2) For every ǫ > 0, there exists a δǫ > 0 such that for each
x ∈ Xf with
dist(x, Xs ) ≤ δǫ ,
there is a policy π that terminates from x and satisfies
Jπ (x) ≤ ǫ.
Then Assumption 1 holds.
Proof: Fix x ∈ Xf and ǫ > 0. Let π be a policy
that asymptotically terminates from x, and satisfies Jπ (x) ≤
J ∗ (x) + ǫ, as per condition (1). Starting from x, this policy
will generate a sequence {xk } such that for some index k̄ we
have
dist(xk̄ , Xs ) ≤ δǫ ,
7
so by condition (2), there exists a policy π̄ that terminates
from xk̄ and is such that Jπ̄ (xk̄ ) ≤ ǫ. Consider the policy π ′
that follows π up to index k̄ and follows π̄ afterwards. This
policy terminates from x and satisfies
Jπ′ (x) = Jπ,k̄ (x) + Jπ̄ (xk̄ ) ≤ Jπ (x) + Jπ̄ (xk̄ ) ≤ J ∗ (x) + 2ǫ,
where Jπ,k̄ (x) is the cost incurred by π starting from x up to
reaching xk̄ .
Condition (1) of the preceding proposition requires that
for states x ∈ Xf , the optimal cost J ∗ (x) can be achieved
arbitrarily closely with policies that asymptotically terminate
from x. Problems for which condition (1) holds are those
involving a cost per stage that is strictly positive outside of
Xs . More precisely, condition (1) holds if for each δ > 0 there
exists ǫ > 0 such that
    inf_{u∈U(x)} g(x, u) ≥ ǫ,   ∀ x ∈ X such that dist(x, Xs) ≥ δ.   (20)
Then for any x and policy π that does not asymptotically
terminate from x, we will have Jπ (x) = ∞, so that if
x ∈ Xf , all policies π with Jπ (x) < ∞ must be asymptotically terminating from x. In applications, condition (1)
is natural and consistent with the aim of steering the state
towards the terminal set Xs with finite cost. Condition (2)
is a “controllability” condition implying that the state can be
steered into Xs with arbitrarily small cost from a starting state
that is sufficiently close to Xs .
Example 4 (Linear System Case). Consider a linear system

    x_{k+1} = Axk + Buk,

where A and B are given matrices, with the terminal set being the origin, i.e., Xs = {0}. We assume the following:

(a) X = ℜ^n, U = ℜ^m, and there is an open sphere R centered at the origin such that U(x) contains R for all x ∈ X.
(b) The system is controllable, i.e., one may drive the system from any state to the origin within at most n steps using suitable controls, or equivalently the matrix [B AB · · · A^{n−1}B] has rank n.
(c) g satisfies

    0 ≤ g(x, u) ≤ β(‖x‖^p + ‖u‖^p),   ∀ (x, u) ∈ V,

where V is some open sphere centered at the origin, β, p are some positive scalars, and ‖·‖ is the standard Euclidean norm.

Then condition (2) of Prop. 5 is satisfied, while x = 0 is cost-free and absorbing [cf. Eq. (8)]. Still, however, in the absence of additional assumptions, there may be multiple solutions to Bellman’s equation within J.

As an example, consider the scalar system x_{k+1} = axk + uk with X = U(x) = ℜ, and the quadratic cost g(x, u) = u². Then Bellman’s equation has the form

    J(x) = min_{u∈ℜ} { u² + J(ax + u) },   x ∈ ℜ,

and it is seen that the optimal cost function, J∗(x) ≡ 0, is a solution. Let us assume that a > 1 so the system is unstable (the instability of the system is important for the purpose of this example). Then it can be verified that the quadratic function J(x) = (a² − 1)x², which belongs to J, also solves Bellman’s equation. This is a case where the algebraic Riccati equation associated with the problem has two nonnegative solutions because there is no cost on the state, and a standard observability condition for uniqueness of solution of the Riccati equation is violated.

If on the other hand, in addition to (a)-(c), we assume that for some positive scalars γ, p, we have inf_{u∈U(x)} g(x, u) ≥ γ‖x‖^p for all x ∈ ℜ^n, then J∗(x) > 0 for all x ≠ 0 [cf. Eq. (9)], while condition (1) of Prop. 5 is satisfied as well [cf. Eq. (20)]. Then by Prop. 5, Assumption 1 holds, and Bellman’s equation has a unique solution within J.

There are straightforward extensions of the conditions of the preceding example to a nonlinear system. Note that even for a controllable system, it is possible that there exist states from which the terminal set cannot be reached, because U(x) may imply constraints on the magnitude of the control vector. Still the preceding analysis allows for this case.

B. An Optimistic Form of PI

Let us consider a variant of PI where policies are evaluated inexactly, with a finite number of VIs. In particular, this algorithm starts with some J0 ∈ E+(X), and generates a sequence of cost function and policy pairs {Jk, µk} as follows: Given Jk, we generate µk according to

    µk(x) ∈ arg min_{u∈U(x)} { g(x, u) + Jk(f(x, u)) },   x ∈ X,   (21)

and then we obtain Jk+1 with mk ≥ 1 VIs using µk:

    Jk+1(x0) = Jk(x_{mk}) + Σ_{t=0}^{mk−1} g(xt, µk(xt)),   x0 ∈ X,   (22)

where {xt} is the sequence generated using µk and starting from x0, and mk are arbitrary positive integers. Here J0 is a function in J that is required to satisfy

    J0(x) ≥ inf_{u∈U(x)} { g(x, u) + J0(f(x, u)) },   ∀ x ∈ X.   (23)

For example J0 may be equal to the cost function of some stationary policy, or be the function that takes the value 0 for x ∈ Xs and ∞ at x ∉ Xs. Note that when mk ≡ 1 the method is equivalent to VI, while the case mk = ∞ corresponds to the standard PI considered earlier. In practice, the most effective value of mk may be found experimentally, with moderate values mk > 1 usually working best. We refer to the textbooks [25] and [11] for discussions of this type of inexact PI algorithm (in [25] it is called “modified” PI, while in [11] it is called “optimistic” PI).
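A minimal Python sketch of this optimistic PI iteration (hypothetical, reusing the finite shortest path instance f, g from the earlier VI sketch; the initial J0 below is the cost function of the policy that moves directly to the stopping state, so it satisfies Eq. (23)):

def optimistic_pi(f, g, m=3, num_rounds=20):
    J = {0: 0.0, 1: 3.0, 2: 1.0}  # J0: cost of moving straight to state 0
    for _ in range(num_rounds):
        # Policy improvement, cf. Eq. (21)
        mu = {x: min(f[x], key=lambda u: g[x][u] + J[f[x][u]]) for x in f}
        # mk = m truncated value iterations with mu, cf. Eq. (22)
        for _ in range(m):
            J = {x: g[x][mu[x]] + J[f[x][mu[x]]] for x in f}
    return J

print(optimistic_pi(f, g))  # converges to J* = {0: 0.0, 1: 2.0, 2: 1.0}, cf. Prop. 6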
Proposition 6 (Convergence of Optimistic PI). Let Assumption 1 hold. For the PI algorithm (21)-(22), where J0 belongs
to J and satisfies the condition (23), we have Jk ↓ J ∗ .
Proof: We have for all x ∈ X,

    J0(x) ≥ inf_{u∈U(x)} { g(x, u) + J0(f(x, u)) }
          = g(x, µ0(x)) + J0(f(x, µ0(x)))
          ≥ J1(x)
          ≥ g(x, µ0(x)) + J1(f(x, µ0(x)))
          ≥ inf_{u∈U(x)} { g(x, u) + J1(f(x, u)) }
          = g(x, µ1(x)) + J1(f(x, µ1(x)))
          ≥ J2(x),

where the first inequality is the condition (23), the second and third inequalities follow because of the monotonicity of the m0 value iterations (22) for µ0, and the fourth inequality follows from the policy improvement equation (21). Continuing similarly, we have

    Jk(x) ≥ inf_{u∈U(x)} { g(x, u) + Jk(f(x, u)) } ≥ Jk+1(x),

for all x ∈ X and k. Moreover, since J0 ∈ J, we have Jk ∈ J for all k. Thus Jk ↓ J∞ for some J∞ ∈ J, and similar to the proof of Prop. 3, it follows that J∞ is a solution of Bellman’s equation. Hence, by the uniqueness result of Prop. 1, we have J∞ = J∗.
C. Minimax Control to a Terminal Set of States

Our analysis can be readily extended to minimax problems with a terminal set of states. Here the system is

    x_{k+1} = f(xk, uk, wk),   k = 0, 1, . . . ,

where wk is the control of an antagonistic opponent that aims to maximize the cost function. We assume that wk is chosen from a given set W to maximize the sum of costs per stage, which are assumed nonnegative:

    0 ≤ g(x, u, w) ≤ ∞,   x ∈ X, u ∈ U(x), w ∈ W.

We wish to choose a policy π = {µ0, µ1, . . .} to minimize the cost function

    Jπ(x0) = sup_{wk∈W, k=0,1,...} lim_{k→∞} Σ_{t=0}^{k} g(xt, µt(xt), wt),

where {(xt, µt(xt))} is a state-control sequence corresponding to π and the sequence {w0, w1, . . .}. We assume that there is a termination set Xs, the states of which are cost-free and absorbing, i.e.,

    g(x, u, w) = 0,   x = f(x, u, w),

for all x ∈ Xs, u ∈ U(x), w ∈ W, and that all states outside Xs have strictly positive optimal cost, so that

    Xs = {x ∈ X | J∗(x) = 0}.

The finite-state version of this problem has been discussed in [13], under the name robust shortest path planning, for the case where g can take both positive and negative values. A problem that is closely related is reachability of a target set in minimum time, which is obtained for

    g(x, u, w) = { 0 if x ∈ Xs,
                   1 if x ∉ Xs,

assuming also that the control process stops once the state enters the set Xs. Here w is a disturbance described by set membership (w ∈ W), and the objective is to reach the set Xs in the minimum guaranteed number of steps. The set Xf is the set of states for which Xs is guaranteed to be reached in a finite number of steps. Another related problem is reachability of a target tube, where for a given set X̂,

    g(x, u, w) = { 0 if x ∈ X̂,
                   1 if x ∉ X̂,

and the objective is to find the initial states starting from which we can guarantee to keep all future states within X̂. These two reachability problems were first formulated and analyzed as part of the author’s Ph.D. thesis research [7], and the subsequent paper [8]. In fact the reachability algorithms given in these works are essentially special cases of the VI algorithm of the present paper, starting with appropriate initial functions J0.

To extend our results to the general form of the minimax problem described above, we need to adapt the definition of termination. In particular, given a state x, in the minimax context we say that a policy π terminates from x if there exists an index k̄ [which depends on (π, x)] such that the sequence {xk}, which is generated starting from x and using π, satisfies xk̄ ∈ Xs for all sequences {w0, . . . , wk̄−1} with wt ∈ W for all t = 0, . . . , k̄ − 1. Then Assumption 1 is modified to reflect this new definition of termination, and our results can be readily extended, with Props. 1, 2, 3, and 6, and their proofs, holding essentially as stated. The main adjustment needed is to replace expressions of the forms

    g(x, u) + J(f(x, u))   and   J(xk) + Σ_{t=0}^{k−1} g(xt, ut)

in these proofs with

    sup_{w∈W} { g(x, u, w) + J(f(x, u, w)) }   and   sup_{wt∈W, t=0,...,k−1} { J(xk) + Σ_{t=0}^{k−1} g(xt, ut, wt) },

respectively; see also [14] for a more abstract view of such lines of argument.
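For finite state, control, and disturbance sets, the resulting minimax VI update takes the following form (a hypothetical Python sketch in the same style as the earlier ones, not code from the paper):

def minimax_vi_step(J, states, U, W, f, g):
    # J'(x) = min_{u in U(x)} max_{w in W} [ g(x, u, w) + J(f(x, u, w)) ],
    # i.e., the VI update (5) with the worst case over w replacing g + J(f).
    return {
        x: min(
            max(g(x, u, w) + J[f(x, u, w)] for w in W)
            for u in U(x)
        )
        for x in states
    }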
V. CONCLUDING REMARKS
In this paper we have considered problems of deterministic
optimal control to a terminal set of states subject to very
general assumptions. Under reasonably practical conditions,
we have established the uniqueness of solution of Bellman’s
equation, and the convergence of value and policy iteration algorithms, even when there are states with infinite optimal cost.
Our analysis bypasses the need for assumptions involving the
existence of globally stabilizing controllers, which guarantee
that the optimal cost function J ∗ is real-valued. This generality
makes our results a convenient starting point for analysis of
problems involving additional assumptions, and perhaps cost
function approximations.
While we have restricted attention to undiscounted problems, the line of analysis of the present paper applies also to
discounted problems with one-stage cost function g that may
be unbounded from above. Similar but more favorable results
can be obtained, thanks to the presence of the discount factor;
see the author’s paper [14], which contains related analysis
for stochastic and minimax, discounted and undiscounted
problems, with nonnegative cost per stage.
The results for these problems, and the results of the present
paper, have a common ancestry. They fundamentally draw
their validity from notions of regularity, which were developed
in the author’s abstract DP monograph [12] and were extended
recently in [14]. Let us describe the regularity idea briefly,
and its connection to the analysis of this paper. Given a set of
functions S ⊂ E+(X), we say that a collection C of policy-state pairs (π, x0), with π ∈ Π and x0 ∈ X, is S-regular if for all (π, x0) ∈ C and J ∈ S, we have

    Jπ(x0) = lim_{k→∞} { J(xk) + Σ_{t=0}^{k−1} g(xt, µt(xt)) }.
In words, for all (π, x0 ) ∈ C, Jπ (x0 ) can be obtained in
the limit by VI starting from any J ∈ S. The favorable
properties with respect to VI of an S-regular collection C can
be translated into interesting properties relating to solutions of
Bellman’s equation and convergence of VI. In particular, the
optimal cost function over the set of policies {π | (π, x) ∈ C},
    JC∗(x) = inf_{ {π | (π,x)∈C} } Jπ(x),   x ∈ X,

under appropriate problem-dependent assumptions, is the unique solution of Bellman’s equation within the set {J ∈ S | J ≥ JC∗}, and can be obtained by VI starting from any J within that set (see [14]).
Within the deterministic optimal control context of this
paper, it works well to choose C to be the set of all (π, x)
such that x ∈ Xf and π is terminating starting from x, and
to choose S to be J, as defined by Eq. (10). Then, in view of
Assumption 1, we have JC∗ = J ∗ , and the favorable properties
of JC∗ are shared by J ∗ . For other types of problems different
choices of C may be appropriate, and corresponding results
relating to the uniqueness of solutions of Bellman’s equation
and the validity of value and policy iteration may be obtained;
see [14].
Dimitri P. Bertsekas studied engineering at the National Technical University of Athens, Greece, obtained his MS in electrical engineering at the George Washington University, Wash. DC in 1969, and his Ph.D. in system science in 1971 at the Massachusetts Institute of Technology (M.I.T.).

Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Dept., Stanford University (1971-1974) and the Electrical Engineering Dept. of the University of Illinois, Urbana (1974-1979). Since 1979 he has been teaching at the
Electrical Engineering and Computer Science Department of M.I.T., where
he is currently McAfee Professor of Engineering. He consults regularly with
private industry and has held editorial positions in several journals. His
research has spanned several fields, including optimization, control, large-scale and distributed computation, and data communication networks, and
is closely tied to his teaching and book authoring activities. He has written
numerous research papers, and sixteen books, several of which are used as
textbooks in M.I.T. classes.
Professor Bertsekas was awarded the INFORMS 1997 Prize for Research
Excellence in the Interface Between Operations Research and Computer
Science for his book “Neuro-Dynamic Programming” (co-authored with John
Tsitsiklis), the 2001 ACC John R. Ragazzini Education Award, the 2009
INFORMS Expository Writing Award, the 2014 ACC Richard E. Bellman
Control Heritage Award, the 2014 Khachiyan Prize for Life-Time Accomplishments in Optimization, and the SIAM/MOS 2015 George B. Dantzig
Prize. In 2001 he was elected to the United States National Academy of
Engineering.
Dr. Bertsekas’ recent books are “Dynamic Programming and Optimal
Control: 4th Edition” (2012), “Abstract Dynamic Programming” (2013), and
“Convex Optimization Algorithms” (2015), all published by Athena Scientific.
VI. REFERENCES
[1] Bertsekas, D. P., and Rhodes, I. B., 1971. “On the Minimax
Reachability of Target Sets and Target Tubes,” Automatica,
Vol. 7, pp. 233-241.
[2] Bertsekas, D. P., and Shreve, S. E., 1978.
Stochastic Optimal Control: The Discrete Time Case,
Academic Press, N. Y.; may be downloaded from
http://web.mit.edu/dimitrib/www/home.html
[3] Bertsekas, D. P., and Tsitsiklis, J. N., 1991. “An Analysis
of Stochastic Shortest Path Problems,” Math. of Operations
Research, Vol. 16, pp. 580-595.
[4] Bertsekas, D. P., and Tsitsiklis, J. N., 1996. Neuro-Dynamic Programming, Athena Scientific, Belmont, MA.
[5] Bertsekas, D. P., and Yu, H., 2015. “Stochastic Shortest
Path Problems Under Weak Conditions,” Lab. for Information
and Decision Systems Report LIDS-2909, revision of March
2015, to appear in Math. of Operations Research.
[6] Beard, R. W., 1995. Improving the Closed-Loop Performance of Nonlinear Systems, Doctoral dissertation, Rensselaer
Polytechnic Institute.
[7] Bertsekas, D. P., 1971. “Control of Uncertain Systems
With a Set-Membership Description of the Uncertainty,” Ph.D.
Thesis, Dept. of EECS, MIT; may be downloaded from
http://web.mit.edu/dimitrib/www/publ.html.
[8] Bertsekas, D. P., 1972. “Infinite Time Reachability of
State Space Regions by Using Feedback Control,” IEEE Trans.
Automatic Control, Vol. AC-17, pp. 604-613.
[9] Bertsekas, D. P., 1975. “Monotone Mappings in Dynamic
Programming,” Proc. 1975 IEEE Conference on Decision and
Control, Houston, TX, pp. 20-25.
[10] Bertsekas, D. P., 1977. “Monotone Mappings with Application in Dynamic Programming,” SIAM J. on Control and
Optimization, Vol. 15, pp. 438-464.
[11] Bertsekas, D. P., 2012. Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming,
Athena Scientific, Belmont, MA.
[12] Bertsekas, D. P., 2013. Abstract Dynamic Programming,
Athena Scientific, Belmont, MA.
[13] Bertsekas, D. P., 2014. “Robust Shortest Path Planning
and Semicontractive Dynamic Programming,” Lab. for Information and Decision Systems Report LIDS-P-2915, Feb. 2014
(revised Jan. 2015).
[14] Bertsekas, D. P., 2015. “Regular Policies in Abstract
Dynamic Programming,” Lab. for Information and Decision
Systems Report LIDS-3173, April 2015.
[15] Blackwell, D., 1965. “Positive Dynamic Programming,”
Proc. Fifth Berkeley Symposium Math. Statistics and Probability, pp. 415-418.
[16] Heydari, A., 2014. “Revisiting Approximate Dynamic
Programming and its Convergence,” IEEE Transactions on
Cybernetics, Vol. 44, pp. 2733-2743.
[17] Heydari, A., 2014. “Stabilizing Value Iteration With and
Without Approximation Errors,” available at arXiv:1412.5675.
[18] Jiang, Y., and Jiang, Z. P., 2013. “Robust Adaptive
Dynamic Programming for Linear and Nonlinear Systems: An
Overview,” Eur. J. Control, Vol. 19, pp. 417-425.
[19] Jiang, Y., and Jiang, Z. P., 2014. “Robust Adaptive Dynamic Programming and Feedback Stabilization of Nonlinear
Systems,” IEEE Trans. on Neural Networks and Learning
Systems, Vol. 25, pp. 882-893.
[20] Lewis, F. L., Liu, D., and Lendaris, G. G., 2008. Special
Issue on Adaptive Dynamic Programming and Reinforcement
Learning in Feedback Control, IEEE Trans. on Systems, Man,
and Cybernetics, Part B, Vol. 38, No. 4.
[21] Lewis, F. L., and Liu, D., (Eds), 2013. Reinforcement
Learning and Approximate Dynamic Programming for Feedback Control, Wiley, Hoboken, N. J.
[22] Liu, D., and Wei, Q., 2013. “Finite-Approximation-Error-Based Optimal Control Approach for Discrete-Time Nonlinear Systems,” IEEE Transactions on Cybernetics, Vol. 43, pp. 779-789.
[23] Kleinman, D. L., 1968. “On an Iterative Technique
for Riccati Equation Computations,” IEEE Trans. Automatic
Control, Vol. AC-13, pp. 114-115.
[24] Pallu de la Barriere, R., 1967. Optimal Control Theory,
Saunders, Phila; reprinted by Dover, N. Y., 1980.
[25] Puterman, M. L., 1994. Markov Decision Processes:
Discrete Stochastic Dynamic Programming, J. Wiley, N. Y.
[26] Rekasius, Z. V., 1964. “Suboptimal Design of Intentionally Nonlinear Controllers,” IEEE Trans. on Automatic
Control, Vol. 9, pp. 380-386.
[27] Si, J., Barto, A., Powell, W., and Wunsch, D., (Eds.)
2004. Learning and Approximate Dynamic Programming,
IEEE Press, N. Y.
[28] Saridis, G. N., and Lee, C.-S. G., 1979. “An Approximation Theory of Optimal Control for Trainable Manipulators,”
IEEE Trans. Syst., Man, Cybern., Vol. 9, pp. 152-159.
[29] Schal, M., 1975. “Conditions for Optimality in Dynamic
Programming and for the Limit of n-Stage Optimal Policies to
be Optimal,” Z. Wahrscheinlichkeitstheorie und Verw. Gebiete,
Vol. 32, pp. 179-196.
[30] Strauch, R., 1966. “Negative Dynamic Programming,”
Ann. Math. Statist., Vol. 37, pp. 871-890.
[31] Sutton, R. S., and Barto, A. G., 1998. Reinforcement
Learning, MIT Press, Cambridge, MA.
[32] Vrabie, D., Vamvoudakis, K. G., and Lewis, F. L., 2013.
Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles, The Institution of Engineering
and Technology, London.
[33] Wei, Q., Wang, F. Y., Liu, D., and Yang, X., 2014. “Finite-Approximation-Error-Based Discrete-Time Iterative Adaptive
Dynamic Programming,” IEEE Transactions on Cybernetics,
Vol. 44, pp. 2820-2833.
[34] Werbos, P. J., 1992. “Approximate Dynamic Programming
for Real-Time Control and Neural Modeling,” in Handbook
of Intelligent Control (D. A. White and D. A. Sofge, eds.),
Multiscience Press.
[35] Yu, H., and Bertsekas, D. P., 2013. “A Mixed Value and
Policy Iteration Method for Stochastic Control with Universally Measurable Policies,” Lab. for Information and Decision
Systems Report LIDS-P-2905, MIT; to appear in Math. of
Operations Research.
arXiv:1611.04196v3 [] 21 Nov 2017
Self-Calibration and Bilinear Inverse Problems
via Linear Least Squares∗
Shuyang Ling†, Thomas Strohmer‡
November 22, 2017
Abstract
Whenever we use devices to take measurements, calibration is indispensable. While the
purpose of calibration is to reduce bias and uncertainty in the measurements, it can be quite
difficult, expensive, and sometimes even impossible to implement. We study a challenging
problem called self-calibration, i.e., the task of designing an algorithm for devices so that the
algorithm is able to perform calibration automatically. More precisely, we consider the setup
y = A(d)x + ε where only partial information about the sensing matrix A(d) is known and where
A(d) linearly depends on d. The goal is to estimate the calibration parameter d (resolve the
uncertainty in the sensing process) and the signal/object of interests x simultaneously. For three
different models of practical relevance, we show how such a bilinear inverse problem, including
blind deconvolution as an important example, can be solved via a simple linear least squares
approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus
potentially allowing for real-time deployment. We also present a variation of the least squares
approach, which leads to a spectral method, where the solution to the bilinear inverse problem
can be found by computing the singular vector associated with the smallest singular value of a
certain matrix derived from the bilinear system. Explicit theoretical guarantees and stability
theory are derived for both techniques, and the sampling complexity is nearly optimal
(up to a poly-log factor). Applications in imaging sciences and signal processing are discussed
and numerical simulations are presented to demonstrate the effectiveness and efficiency of our
approach.
1 Introduction
Calibration is ubiquitous in all fields of science and engineering. It is an essential step to guarantee
that the devices measure accurately what scientists and engineers want. If sensor devices are not
properly calibrated, their measurements are likely of little use to the application. While calibration
is mostly done by specialists, it often can be expensive, time-consuming and sometimes even impossible to do in practice. Hence, one may wonder whether it is possible to enable machines to calibrate
themselves automatically with a smart algorithm and give the desired measurements. This leads
to the challenging field of self-calibration (or blind calibration). It has a long history in imaging
sciences, such as camera self-calibration [33, 22], blind image deconvolution [12], self-calibration in
∗ This research was supported by the NSF via Award Nr. dtra-dms 1322393 and Award Nr. DMS 1620455.
† Courant Institute of Mathematical Sciences and the Center for Data Science, New York University, NY 10003 (Email: sling@cims.nyu.edu)
‡ Department of Mathematics, University of California Davis, CA 95616 (Email: strohmer@math.ucdavis.edu).
medical imaging [35], and the well-known phase retrieval problem (phase calibration) [16]. It also
plays an important role in signal processing [18] and wireless communications [38, 32].
Self-calibration is not only a challenging problem for engineers, but also for mathematicians. It
means that one needs to estimate the calibration parameter of the devices to adjust the measurements as well as recover the signal of interest. More precisely, many self-calibration problems are
expressed in the following mathematical form,
    y = A(d)x + ε,   (1.1)
where y is the observation, A(d) is a partially unknown sensing matrix, which depends on an unknown parameter d and x is the desired signal. An uncalibrated sensor/device directly corresponds
to “imperfect sensing”, i.e., uncertainty exists within the sensing procedure and we do not know
everything about A(d) due to the lack of calibration. The purpose of self-calibration is to resolve
the uncertainty i.e., to estimate d in A(d) and to recover the signal x at the same time.
In its general form, the model (1.1) is too hard to admit meaningful solutions without further assumptions,
since there are many variants of the general model under different settings. In (1.1), A(d) may
depend on d in a nonlinear way, e.g., d can be the unknown orientation of a protein molecule
and x is the desired object [44]; in phase retrieval, d is the unknown phase information of the
Fourier transform of the object [16]; in direction-of-arrival estimation d represents unknown offset,
gain, and phase of the sensors [47]. Hence, it is impossible to resolve every issue in this field,
but we want to understand several scenarios of self-calibration which have great potential in real
world applications. Among all the cases of interest, we assume that A(d) linearly depends on the
unknown d and will explore three different types of self-calibration models that are of considerable
practical relevance. However, even for linear dependence, the problem is already quite challenging,
since in fact we are dealing with bilinear (nonlinear) inverse problems. All those three models have
wide applications in imaging sciences, signal processing, wireless communications, etc., which will
be addressed later. Common to these applications is the desire or need for fast algorithms, which
ideally should be accompanied by theoretical performance guarantees. We will show that in certain
cases, these bilinear problems can be solved by linear least squares exactly and efficiently if no
noise exists, which is guaranteed by rigorous mathematical proofs. Moreover, we prove that the
solution is also robust to noise with tools from random matrix theory. Furthermore, we show that
a variation of our approach leads to a spectral method, where the solution to the bilinear problem
can be found by computing the singular vector associated with the smallest singular value of a
certain matrix derived from the bilinear system.
1.1 State of the art
By assuming that A(d) linearly depends on d, (1.1) becomes a bilinear inverse problem, i.e., we
want to estimate d and x from y, where y is the output of a bilinear map from (d, x). Bilinear
inverse problems, due to their importance, have attracted more and more attention over the last few
years. On the other hand, they are also notoriously difficult to solve in general. Bilinear inverse
problems are closely related to low-rank matrix recovery; see [15] for a comprehensive review. There
exists extensive literature on this topic and it is not possible to do justice to all these contributions.
Instead we will only highlight some of the works which have inspired us.
Blind deconvolution might be one of the most important examples of bilinear inverse problems [12], i.e., recovering f and g from y = f ∗ g, where “∗” stands for convolution. If both
f and g are inside known low-dimensional subspaces, the blind deconvolution can be rewritten
2
as F(y) = diag(Bd)Ax, where F(f ) = Bd, F(g) = Ax and “F” denotes the Fourier transform. In the inspiring work [4], Ahmed, Romberg and Recht apply the “lifting” techniques [13]
and convert the problem into estimation of the rank-one matrix dx∗ . It is shown that solving a
convex relaxation enables recovery of dx∗ under certain choices of B and A. Following a similar
spirit, [30] uses “lifting” combined with a convex approach to solve the scenarios with sparse x
and [29] studies the so called “blind deconvolution and blind demixing” problem. The other line of
blind deconvolution follows a nonconvex optimization approach [3, 26, 25]. In [3], Ahmed, Romberg
and Krahmer, using tools from generic chaining, obtain local convergence of a sparse power factorization algorithm to solve this blind deconvolution problem when h and x are sparse and B and A
are Gaussian random matrices. Under the same setting as [3], Lee et al. [25] propose a projected
gradient descent algorithm based on matrix factorizations and provide a convergence analysis to
recover sparse signals from subsampled convolution. However, this projection step can be hard to
implement. As an alternative, the expensive projection step is replaced by a heuristic approximate
projection, but then the global convergence is not fully guaranteed. Both [3, 25] achieve nearly
optimal sampling complexity. [26] proves global convergence of a gradient descent type algorithm
when B is a deterministic Fourier type matrix and A is Gaussian. Results about identifiability
issue of bilinear inverse problems can be found in [27, 23, 28].
Another example of self-calibration focuses on the setup yl = DAl x, where D = diag(d). The difference from the previous model consists in replacing the subspace assumption by multiple measurements. There are two main applications of this model. One deals with blind deconvolution in an imaging system which uses randomly coded masks [5, 37]; the measurements are obtained by (subsampled) convolution of an unknown blurring function D with several random binary modulations of one image. Both [5] and [2] developed convex relaxation approaches (nuclear norm minimization) to achieve exact recovery of the signals and the blurring function. The other application is concerned with the calibration of the unknown gains and phases D and the recovery of the signal x, see e.g. [10, 9]. Cambareri and Jacques propose a gradient descent type algorithm in [10, 11] and show convergence of the iterates after constructing a proper initial guess. An empirical study for sparse x via an alternating hard thresholding algorithm is given in [9]. Recently, [1, 2] studied blind deconvolution with varying inputs: the authors consider yl = f ∗ gl, where each gl belongs to a different known subspace, i.e., gl = Cl xl. They employ a convex approach similar to [4] to achieve exact recovery with a number of measurements close to the information theoretic limit.
An even more difficult, and from a practical viewpoint highly relevant, scenario focuses on self-calibration from multiple snapshots [47]. Here, one wishes to recover the unknown gains/phases D = diag(d) and a signal matrix X = [x1, · · · , xp] from Y = DAX. For this model, the sensing matrix A is fixed throughout the sensing process and one measures the output under different snapshots {xl}_{l=1}^p. One wants to understand under which conditions D and {xl}_{l=1}^p can be identified jointly. If A is a Fourier type matrix, this model has applications in both image restoration from multiple filters [21] and network calibration [6, 31]. We especially benefitted from the work of Gribonval and coauthors [20, 8], as well as of Balzano and Nowak [6, 7]. The papers [6, 7] study the noiseless version of the problem by solving a linear system, and [31] takes a total least squares approach in order to obtain empirically robust recovery in the presence of noise. If each xl is sparse, this model becomes more difficult, and Gribonval et al. [8, 20] give a thorough numerical study. Very recently, [43] gave a theoretical result under certain conditions; there, the calibration problem is viewed as a special case of the dictionary learning problem, where the underlying dictionary DA possesses some additional structure. The idea of transforming a blind deconvolution problem into a linear problem can also be found in [32], where the authors analyze a certain non-coherent wireless communication scenario.
1.2 Our contributions
In our work, we consider three different models of self-calibration, namely yl = DAl x, yl = DAl xl and yl = DAxl. Detailed descriptions of these models are given in the next section. We do not impose any sparsity constraints on x or xl. We want to find xl (or x) and D when yl (or y) and Al (or A) are given. Roughly, the models correspond to those in [5, 1, 8], respectively. Although all three models belong to the class of bilinear inverse problems, we will prove that simply solving a linear least squares problem yields the solutions to all of them, exactly in the noiseless case and robustly in the presence of noise, for invertible D and for several useful choices of A and Al. Moreover, the sampling complexity is nearly optimal (up to poly-log factors) with respect to the information theoretic limit (the number of degrees of freedom of the unknowns).
As mentioned before, our approach is largely inspired by [8] and [6, 7]; there the authors convert a bilinear inverse problem into a linear problem via a proper transformation. We follow a similar approach in our paper. The paper [8] provides an extensive empirical study, but no theoretical analysis. Balzano and Nowak [6, 7] provide numerical simulations as well as theoretical conditions on the number of measurements required to solve the noiseless case.
Our paper goes an important step further: on the one hand, we consider more general self-calibration settings; on the other hand, we provide a rigorous theoretical analysis of recoverability and, perhaps most importantly, a stability theory in the presence of measurement errors. Owing to the simplicity of our approach and the structural properties of the underlying matrices, our framework yields self-calibration algorithms that are numerically extremely efficient, thus potentially allowing for deployment in applications where real-time self-calibration is needed.
1.3 Notation and Outline
We introduce notation which will be used throughout the paper. Matrices are denoted in boldface or calligraphic font, such as Z and 𝒵; vectors are denoted by boldface lowercase letters, e.g. z. The individual entries of a matrix or a vector are denoted in normal font, such as Zij or zi. For any matrix Z, ‖Z‖ denotes its operator norm, i.e., its largest singular value, and ‖Z‖F denotes its Frobenius norm, i.e., ‖Z‖F = (Σij |Zij|²)^{1/2}. For any vector z, ‖z‖ denotes its Euclidean norm. For both matrices and vectors, Z^T and z^T stand for the transpose of Z and z, while Z* and z* denote their complex conjugate transpose. For any real number z, we let z₊ = (z + |z|)/2. We equip the matrix space C^{K×N} with the inner product ⟨U, V⟩ := Tr(U*V); a special case is the inner product of two vectors, i.e., ⟨u, v⟩ = Tr(u*v) = u*v. We define the correlation between two vectors u and v as Corr(u, v) = u*v/(‖u‖‖v‖). For a given vector v, diag(v) represents the diagonal matrix whose diagonal entries are given by v.
C is an absolute constant and Cγ is a constant which depends linearly on γ, but on no other parameters. In and 1n always denote the n × n identity matrix and the column vector of all 1's in R^n, respectively; {ei}_{i=1}^m and {ẽl}_{l=1}^p stand for the standard orthonormal bases of R^m and R^p, respectively. "∗" is the circular convolution and "⊗" is the Kronecker product.
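For readers who wish to experiment with the models below, the following minimal NumPy snippet (our own illustration, not part of the original formulation) fixes the conventions just introduced:

```python
import numpy as np

def corr(u, v):
    """Correlation Corr(u, v) = u^* v / (||u|| ||v||) between two complex vectors."""
    return np.vdot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

z = np.array([1.0, -2.0, 0.5])
z_plus = 0.5 * (z + np.abs(z))          # z_+ = (z + |z|)/2, applied entrywise here

A = np.random.randn(4, 3) + 1j * np.random.randn(4, 3)
op_norm = np.linalg.norm(A, 2)          # ||A||: the largest singular value
frob_norm = np.linalg.norm(A, 'fro')    # ||A||_F = sqrt(sum_{ij} |A_ij|^2)
```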
The paper is organized as follows. A more detailed discussion of the models under consideration and of the proposed method is given in Section 2. Section 3 presents the main results of our paper, and numerical simulations are given in Section 4. Section 5 contains the proofs for each scenario. We collect some useful auxiliary results in the Appendix.
2 Problem setup: Three self-calibration models
This section is devoted to describing three different models for self-calibration in detail. We will also
explain how those bilinear inverse problems are reformulated and solved via linear least squares.
2.1 Three special models of self-calibration
Self-calibration via repeated measurements. Suppose we seek information about an unknown signal x0 via several randomized linear sensing designs. Throughout this procedure, the calibration parameter D remains the same for each sensing procedure. How can we recover the signal x0 and D simultaneously? Let us make this concrete by introducing the following model:

    yl = DAl x0 + εl,  1 ≤ l ≤ p,     (2.1)

where D = diag(d) ∈ C^{m×m} is a diagonal matrix and each Al ∈ C^{m×n} is a measurement matrix. Here yl and Al are given, while D and x0 are unknown. For simplicity, we refer to the setup (2.1) as "self-calibration from repeated measurements". This model has various applications in self-calibration for imaging systems [10, 11] and networks [6], as well as in blind deconvolution from random masks [5, 37].
Blind deconvolution via diverse inputs. Suppose that one sends several different signals through the same unknown channel, and each signal is encoded differently; namely, we consider

    yl = f ∗ Cl xl + εl,  1 ≤ l ≤ p.

How can one estimate the channel and each signal jointly? In the frequency domain, this "blind deconvolution via diverse inputs" problem [1, 2] can be written as (with a slight abuse of notation)

    yl = DAl xl + εl,  1 ≤ l ≤ p,     (2.2)

where D = diag(d) ∈ C^{m×m} and Al ∈ C^{m×n} are the Fourier transforms of f and Cl, respectively. We aim to recover {xl}_{l=1}^p and D from {yl, Al}_{l=1}^p.
Self-calibration from multiple snapshots. Suppose we take measurements of several signals {xl}_{l=1}^p with the same design matrix DA (i.e., each sensor corresponds to one row of A and has an unknown complex-valued calibration term di). When and how can we recover D and {xl}_{l=1}^p simultaneously? More precisely, we consider the following model of self-calibration from multiple snapshots:

    yl = DAxl + εl,  1 ≤ l ≤ p.     (2.3)

Here D = diag(d) is an unknown diagonal matrix, A ∈ C^{m×n} is a sensing matrix, {xl}_{l=1}^p are n × 1 unknown signals and {yl}_{l=1}^p are their corresponding observations. This multiple snapshot model has been used in image restoration from multiple filters [21] and in self-calibration for sensors [47, 6, 31, 20, 8].
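To make the three setups concrete, here is a small synthetic-data sketch (our own illustration; the complex Gaussian sensing matrices and random invertible gains are assumptions on our part, matching case (a) of Section 3.1):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 64, 16, 8

def cgauss(*shape):
    """Complex Gaussian array: i.i.d. (1/sqrt(2))N(0,1) + (i/sqrt(2))N(0,1) entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

d = 0.5 + rng.random(m)                       # unknown gains, bounded away from zero
D = np.diag(d)

# Model (2.1): repeated measurements, one signal x0, p sensing matrices
x0 = cgauss(n)
A_list = [cgauss(m, n) for _ in range(p)]
y_repeat = [D @ A @ x0 for A in A_list]

# Model (2.2): diverse inputs, p signals, p sensing matrices
x_list = [cgauss(n) for _ in range(p)]
y_diverse = [D @ A @ x for A, x in zip(A_list, x_list)]

# Model (2.3): multiple snapshots, p signals, one fixed sensing matrix
A = cgauss(m, n)
y_snap = [D @ A @ x for x in x_list]
```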
2.2 Linear least squares approach
Throughout our discussion, we assume that D is invertible, and we let S := diag(s) = D^{-1}. Here, D = diag(d) stands for the calibration factors of the sensor(s) [6, 8], and hence it is reasonable to assume invertibility of D. Indeed, if a sensor's gain were equal to zero, then it would not contribute any measurements to the observation y, in which case the associated entries of y would be zero. But then we could simply discard those entries and consider the correspondingly reduced system of equations, for which the associated D is invertible.
One simple approach is to minimize a nonlinear least squares objective function. Let us take (2.1) as an example (the other models (2.2) and (2.3) admit quite similar formulations):

    min_{D,x} Σ_{l=1}^p ‖DAl x − yl‖².     (2.4)

The obvious difficulty lies in the biconvexity of (2.4): if either D or x is fixed, minimizing over the other variable is a convex program, but the joint problem is not. In general, there is no guarantee that a gradient descent algorithm or alternating minimization will find the global minimum. However, for the three models described above, there is a shortcut towards the exact and robust recovery of the solution via linear least squares, provided D is invertible.
We continue with (2.1) when εl = 0, i.e.,

    diag(yl)s = Al x,  1 ≤ l ≤ p,     (2.5)

where Syl = diag(yl)s with si = di^{-1} and S defined as diag(s). The original measurement equation thus turns into a linear system in the unknowns s and x. The same idea of linearization can also be found in [20, 8, 43, 6]. In this way, the ground truth z0 := (s0, x0) actually lies inside the null space of this linear system.
Two issues arise immediately. On the one hand, we need to make sure that (s0, x0) spans the whole null space of this linear system. This is equivalent to the identifiability of bilinear problems of the form (2.4), because if the pair (α^{-1}d0, αx0) for some α ≠ 0 is the (up to the scalar α) unique solution to (2.1), then (αs0, αx0) spans the null space of (2.5); see also [27, 23, 28]. On the other hand, we also need to avoid the trivial solution (s0, x0) = (0, 0), since it has no physical meaning. To resolve the latter issue, we add the extra linear constraint (see also [8, 6])

    ⟨w, [s; x]⟩ = c,     (2.6)

where the scalar c can be any nonzero number (we note that w should of course not be orthogonal to the solution).
Therefore, we hope that in the noiseless case it suffices to solve the following linear system to recover (s0, x0) up to a scalar:

    Aw z = b,  where Aw := [diag(y1), −A1; · · · ; diag(yp), −Ap; w*] ∈ C^{(mp+1)×(m+n)},  z := [s; x] ∈ C^{(m+n)×1},  b := [0; c] ∈ C^{(mp+1)×1},     (2.7)

and where, here and in the following, semicolons separate the block rows of a matrix or vector.
In the presence of additive noise, we replace the linear system above by the linear least squares problem

    min_{s,x} Σ_{l=1}^p ‖diag(yl)s − Al x‖² + |w*z − c|²,

or equivalently,

    min_z ‖Aw z − b‖²,     (2.8)

where z = [s; x], b = [0; c], and Aw is the matrix on the left-hand side of (2.7). Following the same idea, (2.2) and (2.3) can also be reformulated as linear systems and solved via linear least squares. The matrix Aw and the vector z take a slightly different form in those cases; see (3.1) and (3.3), respectively.
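As a minimal illustration of (2.7)-(2.8) for model (2.1) (our own sketch; the function and variable names are ours), one can assemble Aw explicitly and call a dense least squares solver. This is adequate for small problems; Section 4 uses an implicit conjugate gradient solver instead.

```python
import numpy as np

def solve_self_calibration(y_list, A_list, w, c=1.0):
    """Solve min_z ||A_w z - b||^2 for model (2.1): diag(y_l) s = A_l x,  w^* z = c."""
    m, n = A_list[0].shape
    rows = [np.hstack([np.diag(y), -A]) for y, A in zip(y_list, A_list)]
    rows.append(w.conj().reshape(1, -1))       # last row enforces w^* z = c
    Aw = np.vstack(rows)
    b = np.zeros(len(y_list) * m + 1, dtype=complex)
    b[-1] = c
    z, *_ = np.linalg.lstsq(Aw, b, rcond=None)
    s, x = z[:m], z[m:]
    return 1.0 / s, x                          # d = 1/s entrywise, up to a scalar

# with the data from the previous sketch:
# d_hat, x_hat = solve_self_calibration(y_repeat, A_list, np.ones(m + n))
```

As discussed above, the recovered pair is only determined up to a common scalar α, so any comparison with the ground truth should first align the scalars.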
Remark 2.1. Note that solving (2.8) may not be the statistically optimal choice for recovering the unknowns, since the noisy perturbation actually enters into Aw instead of b. More precisely, the noisy perturbation δA to the left-hand side of the corresponding linear system for (2.1), (2.2) and (2.3) is always of the form

    δA := [diag(ε1), 0; · · · ; diag(εp), 0; 0, 0].     (2.9)

The size of δA depends on the model. Hence total least squares [31] could be a better alternative, but it is more difficult to analyze and significantly more costly to compute. Since computational efficiency is essential for many practical applications, a straightforward implementation of total least squares is of limited use. Instead one should keep in mind that the actual perturbation enters only into diag(yl), while the other matrix blocks remain unperturbed. Constructing a total least squares solution that obeys these constraints, doing so in a numerically efficient manner, and providing theoretical error bounds for it is a rather challenging task, which we plan to address in future work.
Remark 2.2. Numerical simulations indicate that the performance under noisy measurements depends on the choice of w, and especially on how much w and z0 are correlated. One extreme case is ⟨w, z0⟩ = 0, in which case we cannot avoid the solution z = 0. It might seem better to replace (2.6) by a constraint like ‖z‖ = 1. However, this leads to a nonlinear problem which may not be solvable efficiently and does not come with rigorous recovery guarantees. Therefore, we present an alternative approach in the next subsection.
2.3 Spectral method
In this subsection, we discuss a method for solving the self-calibration problem whose performance does not depend on the choice of w, as it avoids the need for w in the first place. Let S be the matrix Aw excluding the last row (the one which contains w). We decompose S into S = S0 + δS, where S0 is the noiseless part of S and δS is the noisy part. (This is a slight abuse of notation, since the δA defined in (2.9) has an additional row of zeros; however, it will be clear from the context which perturbation we refer to, and more importantly, in our estimates we mainly care about ‖δA‖, which coincides for both choices.)
We start with the noise-free scenario: if δS = 0, there holds S = S0, and the right singular vector of S corresponding to the smallest singular value is parallel to z0. Therefore we can recover z0 (up to a scalar) by solving the optimization problem

    min_{‖z‖=1} ‖Sz‖.     (2.10)

Its solution is exactly the right singular vector corresponding to the smallest singular value of S.
If noise is present, the performance will depend on how large the second smallest singular value of S0 is and on the amount of noise, as measured by ‖δS‖. We will discuss the corresponding theory and algorithms in Section 3.4, and the proofs in Section 5.4.
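For small problems, (2.10) is a one-liner on top of a dense SVD. The following sketch is our own illustration; S here is Aw without its last row:

```python
import numpy as np

def spectral_solve(S):
    """Return the right singular vector of S for its smallest singular value,
    i.e., the minimizer of ||S z|| over unit-norm z, cf. (2.10)."""
    _, _, Vh = np.linalg.svd(S)
    return Vh[-1].conj()   # rows of Vh are conjugated right singular vectors
```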
3 Theoretical results
We present our theoretical findings for the three models (2.1), (2.2) and (2.3), for different choices of Al or A. In one of our choices the Al are Gaussian random matrices. The rationale for this choice is that, while a Gaussian random matrix is rarely useful or feasible in applications, it provides a benchmark for theoretical guarantees and numerical performance. Our other choices for the sensing matrices are structured random matrices, such as the product of a deterministic partial (or a randomly subsampled) Fourier or Hadamard matrix with a diagonal binary random matrix. (A randomly subsampled Fourier matrix is one where we randomly choose a certain number of rows or columns of the Discrete Fourier Transform matrix, and analogously for a randomly subsampled Hadamard matrix. At this point, we are not able to prove competitive results for fully deterministic sensing matrices.) These structured matrices bring us closer to what we encounter in real world applications; indeed, structured random matrices of this type have been deployed for instance in imaging and wireless communications, see e.g. [19, 41].
By solving simple variations (for the different models) of (2.8), we can guarantee that the ground truth is recovered exactly up to a scalar if no noise exists, and robustly if noise is present. The number of measurements required for exact and robust recovery is nearly optimal, i.e., close to the information theoretic limit up to a poly-log factor. However, the error bound for robust recovery is not optimal. It is worth mentioning that once the signals and the calibration parameter D are identifiable, we are able to recover both exactly in the absence of noise by simply solving a linear system; identifiability alone, however, cannot guarantee robustness.
Throughout this section, we let dmax := max_{1≤i≤m} |di,0| and dmin := min_{1≤i≤m} |di,0|, where {di,0}_{i=1}^m are the entries of the ground truth d0. We also define Aw,0 as the noiseless part of Aw for each individual model.
3.1 Self-calibration via repeated measurements
For model (2.1) we will focus on three cases:

(a) Al is an m × n complex Gaussian random matrix, i.e., each entry of Al is distributed as (1/√2)N(0,1) + (i/√2)N(0,1).

(b) Al is an m × n "tall" random DFT/Hadamard matrix with m ≥ n, i.e., Al := HMl, where H consists of the first n columns of an m × m DFT/Hadamard matrix and each Ml := diag(ml) is a diagonal matrix with entries taking the values ±1 with equal probability. In particular, there holds

    Al*Al = Ml*H*HMl = mIn.

(c) Al is an m × n "fat" random partial DFT matrix with m < n, i.e., Al := HMl, where H consists of m rows of an n × n DFT/Hadamard matrix and each Ml := diag(ml) is a diagonal ±1 matrix defined as in case (b); here

    Al Al* = HH* = nIm.
Our main findings are summarized as follows:
Theorem 3.1. Consider the self-calibration model given in (2.1), where Aw is as in (2.7) and Aw,0 is the noiseless part of Aw. Then, for the solution ẑ of (2.8) and α = c/(w*z0), there holds

    ‖ẑ − αz0‖/‖αz0‖ ≤ κ(Aw,0) η (1 + 2/(1 − κ(Aw,0)η))

if κ(Aw,0)η < 1, where η = 2‖δA‖/√(mp). The condition number of Aw,0 satisfies

    κ(Aw,0) ≤ √( 6(mp + ‖w‖²)/min{mp, ‖w‖²|Corr(w, z0)|²} · max{dmax²‖x‖², m}/min{dmin²‖x‖², m} ),

where Corr(w, z0) = w*z0/(‖w‖‖z0‖), and for ‖w‖ = √(mp),

    κ(Aw,0) ≤ (2√3/|Corr(w, z0)|) √( max{dmax²‖x‖², m}/min{dmin²‖x‖², m} ),

(a) with probability 1 − (m + n)^{−γ} if Al is Gaussian and p ≥ c0 γ max{1, n/m} log²(m + n);

(b) with probability 1 − (m + n)^{−γ} − 2(mp)^{−γ+1} if each Al is a "tall" (m × n, m ≥ n) random Hadamard/DFT matrix and p ≥ c0 γ² log(m + n) log(mp);

(c) with probability 1 − (m + n)^{−γ} − 2(mp)^{−γ+1} if each Al is a "fat" (m × n, m ≤ n) random Hadamard/DFT matrix and mp ≥ c0 γ² n log(m + n) log(mp).
Remark 3.2. Our result is nearly optimal in terms of the required number of measurements, since the number of constraints mp only needs to be slightly greater than m + n, the number of unknowns. Theorem 3.1 can be regarded as a generalization of the results of [10, 11], in which D is assumed to be positive and Al Gaussian. In our result, we only need D to be an invertible complex diagonal matrix, and Al can be a Gaussian or a random Fourier type matrix. The approaches are quite different: [10] essentially uses nonconvex optimization, first constructing a good initial guess and then applying gradient descent to recover D and x. Our result also provides a provably fast alternative to the SDP-based approach to "blind deconvolution via random masks" proposed in [5, 37].
3.2 Blind deconvolution via diverse inputs
We now analyze model (2.2). Following steps similar to those that led from (2.1) to (2.7), it is easy to see that the linear system associated with (2.2) is given by

    Aw z = b,  where Aw := [diag(y1), −A1, 0, · · · , 0; diag(y2), 0, −A2, · · · , 0; · · · ; diag(yp), 0, 0, · · · , −Ap; w*] ∈ C^{(mp+1)×(np+m)},  z := [s; x1; · · · ; xp] ∈ C^{(np+m)×1},  b := [0; c] ∈ C^{(mp+1)×1}.     (3.1)
We consider two scenarios:

(a) Al is an m × n complex Gaussian random matrix, i.e., each entry of Al is distributed as (1/√2)N(0,1) + (i/√2)N(0,1).

(b) Al is of the form

    Al = Hl Ml,     (3.2)

where Hl ∈ C^{m×n} is a random partial Hadamard/Fourier matrix, i.e., the columns of Hl are uniformly sampled without replacement from those of an m × m DFT/Hadamard matrix, and Ml := diag(ml) = diag(ml,1, · · · , ml,n) is a diagonal matrix with {ml,i}_{i=1}^n being i.i.d. symmetric Bernoulli (±1) random variables.
Theorem 3.3. Consider the self-calibration model given in (2.2), where Aw is as in (3.1). Let xmin := min_{1≤l≤p} ‖xl‖ and xmax := max_{1≤l≤p} ‖xl‖. Then, for the solution ẑ of (2.8) and α = c/(w*z0), there holds

    ‖ẑ − αz0‖/‖αz0‖ ≤ κ(Aw,0) η (1 + 2/(1 − κ(Aw,0)η))

if κ(Aw,0)η < 1, where η = 2‖δA‖/√m. The condition number of Aw,0 obeys

    κ(Aw,0) ≤ √( 6xmax²(m + ‖w‖²)/(xmin² min{m, ‖w‖²|Corr(w, z0)|²}) · max{p dmax², m/xmin²}/min{p dmin², m/xmax²} ),

and for ‖w‖ = √m, there holds

    κ(Aw,0) ≤ (2√3 xmax)/(xmin |Corr(w, z0)|) √( max{p dmax², m/xmin²}/min{p dmin², m/xmax²} ),

(a) with probability at least 1 − (np + m)^{−γ} if Al is an m × n (m > n) complex Gaussian random matrix and

    C0 (1/p + n/m)(γ + 1) log²(np + m) ≤ 1/4;

(b) with probability at least 1 − (np + m)^{−γ} if Al obeys (3.2) and

    C0 (1/p + (n − 1)/(m − 1)) γ³ log⁴(np + m) ≤ 1/4,  m ≥ 2.
Remark 3.4. Note that if δA = 0, i.e., in the noiseless case, we have ẑ = αz0 provided mp ≥ (np + m) poly(log(np + m)). Here mp is the number of constraints and np + m is the number of degrees of freedom, so our result is nearly optimal with respect to the information theoretic limit. Compared with the similar setup in [1], we have a more efficient algorithm, since [1] uses nuclear norm minimization to achieve exact recovery. However, the assumptions are slightly different: we assume that D is invertible, and hence our result depends on D, while [1] imposes "incoherence" on d by requiring ‖F(d)‖∞/‖d‖ to be relatively small, where F denotes the Fourier transform.
3.3 Self-calibration from multiple snapshots
We again follow the by now familiar procedure to derive the linear system associated with (2.3), which turns out to be

    Aw z = b,  where Aw := [diag(y1), −A, 0, · · · , 0; diag(y2), 0, −A, · · · , 0; · · · ; diag(yp), 0, 0, · · · , −A; w*] ∈ C^{(mp+1)×(m+np)},  z := [s; x1; · · · ; xp] ∈ C^{(m+np)×1},  b := [0; c] ∈ C^{(mp+1)×1}.     (3.3)
For this scenario we only consider the case when A is a complex Gaussian random matrix.
Theorem 3.5. Consider the self-calibration model given in (2.3), where Aw is as in (3.3) and Aw,0 corresponds to the noiseless part of Aw. Let xmin := min_{1≤l≤p} ‖xl‖ and xmax := max_{1≤l≤p} ‖xl‖. Then, for the solution ẑ of (2.8) and α = c/(w*z0), there holds

    ‖ẑ − αz0‖/‖αz0‖ ≤ κ(Aw,0) η (1 + 2/(1 − κ(Aw,0)η))

if κ(Aw,0)η < 1, where η = 2‖δA‖/√m. Here the upper bound of κ(Aw,0) obeys

    κ(Aw,0) ≤ √( 6xmax²(m + ‖w‖²)/(xmin² min{m, ‖w‖²|Corr(w, z0)|²}) · max{p dmax², m/xmin²}/min{p dmin², m/xmax²} ),

and for ‖w‖ = √m, there holds

    κ(Aw,0) ≤ (2√3 xmax)/(xmin |Corr(w, z0)|) √( max{p dmax², m/xmin²}/min{p dmin², m/xmax²} ),

with probability at least 1 − 2m(np + m)^{−γ} if A is a complex Gaussian random matrix and

    C0 (max{‖G‖/p, ‖G‖F²/p²} + n/m) log²(np + m) ≤ 1/(16(γ + 1)),     (3.4)

where G is the Gram matrix of {xl/‖xl‖}_{l=1}^p. In particular, if G = Ip and ‖G‖F = √p, (3.4) becomes

    C0 (1/p + n/m) log²(np + m) ≤ 1/(16(γ + 1)).
Remark 3.6. When ‖δA‖ = 0, Theorem 3.5 says that the solution to (2.3) is uniquely determined up to a scalar if mp = O(max{np + m‖G‖, √(mnp² + m²‖G‖F²)}), which involves the norm of the Gram matrix G. This makes sense if we consider two extreme cases: if the {vl}_{l=1}^p are all identical, then ‖G‖ = p and ‖G‖F = p, while if the {vl}_{l=1}^p are mutually orthogonal, then G = Ip.

Remark 3.7. Balzano and Nowak [6] show exact recovery for this model when A is a deterministic DFT (discrete Fourier) matrix and the {xl}_{l=1}^p are generic signals drawn from a probability distribution, but their results do not include a stability theory in the presence of noise.
Remark 3.8. For Theorem 3.3 and Theorem 3.5 it does not come as a surprise that the error bound depends on the norms of {xl}_{l=1}^p and D, as well as on how much z0 and w are correlated. We cannot expect a good condition number for Aw if ‖xl‖ varies greatly over 1 ≤ l ≤ p. Concerning the correlation between z0 and w, one extreme case is ⟨z0, w⟩ = 0, which does not rule out the possibility of z = 0. Hence, the quantity ⟨z0, w⟩ affects the condition number.
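In simulations, the correlation appearing in Remarks 2.2 and 3.8 can be monitored directly. The following diagnostic is our own sketch (it needs the ground truth, so it is not available in a real calibration scenario); the specific w tested matches a choice used in Section 4.3:

```python
import numpy as np

def corr_with_truth(w, d0, x_list):
    """|Corr(w, z0)| for z0 = (s0, x1, ..., xp), with s0 = 1/d0 entrywise."""
    z0 = np.concatenate([1.0 / d0] + list(x_list))
    return abs(np.vdot(w, z0)) / (np.linalg.norm(w) * np.linalg.norm(z0))

rng = np.random.default_rng(1)
m, n, p = 256, 64, 4
d0 = np.exp(2j * np.pi * rng.random(m))      # Steinhaus gains (complex unit circle)
xs = [(rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
      for _ in range(p)]

w = np.zeros(m + n * p, dtype=complex)
w[0] = np.sqrt(m)                            # the w = [sqrt(m) e1; 0] choice of Section 4.3
print(corr_with_truth(w, d0, xs))            # typically small when d0 is complex
```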
3.4 Theoretical results for the spectral method
Let S be the matrix Aw excluding the last row, and write S = S0 + δS, where S0 is the noiseless part of S and δS is the noise term. The performance under noise depends on the second smallest singular value of S0 and on the noise strength ‖δS‖.

Theorem 3.9. Denote by ẑ the solution to (2.10), i.e., the right singular vector of S corresponding to its smallest singular value, with ‖ẑ‖ = 1. Then there holds

    min_{α0∈C} ‖α0 ẑ − z0‖/‖z0‖ = ‖(I − ẑẑ*)z0‖/‖z0‖ ≤ ‖δS‖/[σ2(S0) − ‖δS‖]₊,

where σ2(S0) is the second smallest singular value of S0 and z0 satisfies S0 z0 = 0. Moreover, σ2(S0) admits the following lower bounds:

(a) σ2(S0) ≥ √(p/2) min{√m, dmin‖x‖} for model (2.1) under the assumptions of Theorem 3.1;

(b) σ2(S0) ≥ (1/√2) xmin min{√p dmin, √m/xmax} for model (2.2) under the assumptions of Theorem 3.3;

(c) σ2(S0) ≥ (1/√2) xmin min{√p dmin, √m/xmax} for model (2.3) under the assumptions of Theorem 3.5.
Remark 3.10. Note that finding the z which minimizes (2.10) is equivalent to finding the eigenvector corresponding to the smallest eigenvalue of S*S. If we have a good approximate upper bound λ on ‖S*S‖, then it suffices to find the leading eigenvector of λI − S*S, which can be done efficiently by power iteration.
How should one choose λ in practice? We do not want to choose λ too large, since this would imply slow convergence of the power iteration. For each case, it is easy to obtain a good upper bound on ‖S‖² from the measurements and sensing matrices:

(a) for (2.1), λ = ‖Σ_{l=1}^p diag(yl) diag(yl)*‖ + ‖Σ_{l=1}^p Al*Al‖;

(b) for (2.2), λ = ‖Σ_{l=1}^p diag(yl) diag(yl)*‖ + max_{1≤l≤p} ‖Al‖²;

(c) for (2.3), λ = ‖Σ_{l=1}^p diag(yl) diag(yl)*‖ + ‖A‖².

These choices of λ are used in our numerical simulations.
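A minimal sketch of the shifted power iteration suggested in Remark 3.10 (our own implementation; the stopping rule is ours, and λ is assumed to satisfy λ ≥ ‖S‖² as above):

```python
import numpy as np

def smallest_right_singvec(S, lam, iters=500, tol=1e-10):
    """Power iteration on lam*I - S^*S to approximate the right singular vector
    of S associated with its smallest singular value (valid when lam >= ||S||^2)."""
    z = np.random.default_rng(0).standard_normal(S.shape[1]).astype(complex)
    z /= np.linalg.norm(z)
    for _ in range(iters):
        z_new = lam * z - S.conj().T @ (S @ z)   # apply (lam*I - S^*S) without forming it
        z_new /= np.linalg.norm(z_new)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

Since λI − S*S is positive semi-definite for λ ≥ ‖S‖², the iterates converge to its leading eigenvector, i.e., to the minimizer of (2.10); the closer λ is to ‖S‖², the larger the spectral gap and the faster the convergence.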
4 Numerical simulations
This section is devoted to numerical simulations. Four experiments on both synthetic and real data will be presented to demonstrate the effectiveness, efficiency and robustness of the proposed approach.
For all three models (2.1), (2.2) and (2.3), the corresponding linear systems have simple block structures which allow for a fast implementation via the conjugate gradient method for non-Hermitian matrices [34]. In our simulations, we do not need to set up Aw explicitly to carry out the matrix-vector multiplications arising in the conjugate gradient method. Moreover, preconditioning via rescaling all columns to similar norms yields an even faster convergence rate. Therefore, we are able to deal with medium- and large-scale problems from image processing in a computationally efficient manner.
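A sketch (ours) of such an implicit solver for model (2.1), using only matrix-vector products with the blocks of Aw. We use SciPy's LSQR as a stand-in for the iterative least squares solver; recent SciPy versions accept complex operators here, and if in doubt one can instead solve the equivalent stacked real system:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def calibrate_implicit(y_list, A_list, w, c=1.0):
    """LSQR on A_w z = b for model (2.1), without ever forming A_w explicitly."""
    p, (m, n) = len(A_list), A_list[0].shape

    def matvec(z):                       # A_w z: block rows [diag(y_l), -A_l], then w^*
        s, x = z[:m], z[m:]
        out = [y * s - A @ x for y, A in zip(y_list, A_list)]
        out.append([np.vdot(w, z)])
        return np.concatenate(out)

    def rmatvec(r):                      # A_w^* r, assembled block by block
        s = sum(y.conj() * r[l*m:(l+1)*m] for l, y in enumerate(y_list))
        x = -sum(A.conj().T @ r[l*m:(l+1)*m] for l, A in enumerate(A_list))
        return np.concatenate([s, x]) + r[-1] * w

    Aw = LinearOperator((m*p + 1, m + n), matvec=matvec, rmatvec=rmatvec, dtype=complex)
    b = np.zeros(m*p + 1, dtype=complex)
    b[-1] = c
    return lsqr(Aw, b)[0]
```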
In our simulations, the iteration stops either when the number of iterations reaches 2000 or when the residual of the corresponding normal equation falls below 10^{-8}. Throughout our discussion, the SNR (signal-to-noise ratio) on the dB scale is defined as

    SNR := 10 log10( Σ_{l=1}^p ‖yl‖² / Σ_{l=1}^p ‖εl‖² ).
We measure the performance by RelError (in dB) := 20 log10(RelError), where

    RelError = max{ min_{α1∈C} ‖α1 x̂ − x0‖/‖x0‖,  min_{α2∈C} ‖α2 d̂ − d0‖/‖d0‖ }.

Here d0 and x0 are the ground truth. Although RelError does not exactly match the error quantity in our theoretical analysis, the two are essentially equivalent if all |di,0| are bounded away from 0, because then ŝ ≈ s0 if and only if d̂ ≈ d0.
For the imaging examples, we only measure the relative error with respect to the recovered image x̂, i.e., min_{α1∈C} ‖α1 x̂ − x0‖/‖x0‖.
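The minimization over the scalar has a closed form: min_{α} ‖αx̂ − x0‖ is attained at α = ⟨x̂, x0⟩/‖x̂‖². A small helper of our own:

```python
import numpy as np

def rel_error_db(x_hat, x0):
    """20*log10 of min_alpha ||alpha*x_hat - x0|| / ||x0||, alpha ranging over C."""
    alpha = np.vdot(x_hat, x0) / np.vdot(x_hat, x_hat)   # optimal complex alignment
    return 20 * np.log10(np.linalg.norm(alpha * x_hat - x0) / np.linalg.norm(x0))
```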
4.1 Self-calibration from repeated measurements
Suppose we have a target image x and try to estimate it through multiple measurements. However, the sensing process is not perfect, due to the missing calibration of the sensors. In order to estimate both the unknown gains and phases and the target signal, a randomized sensing procedure employing several random binary masks is used.
We assume that yl = DHMl x0 + εl, where H is a "tall" low-frequency DFT matrix, Ml is a diagonal ±1-random matrix, and x0 is an image of size 512 × 512. We set m = 1024², n = 512², p = 8 and D = diag(d) ∈ C^{1024²×1024²} with d ∈ C^{1024²}; the oversampling ratio is pm/(m + n) = 6.4. We compare two cases: (i) d is a sequence uniformly distributed over [0.5, 1.5], with w = 1_{m+n}; (ii) d is a Steinhaus sequence (uniformly distributed over the complex unit circle), with w = [0_m; 1_n]. We pick those choices of w because we know that the image we try to reconstruct has only non-negative values; by choosing w to be non-negative, there are fewer cancellations in the expression w*z0, which in turn leads to a smaller condition number and better robustness. The corresponding results of our simulations are shown in Figures 1 and 2, respectively. In both cases, we only measure the relative error of the recovered image.
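A compact, reduced-resolution sketch (ours) of this measurement process; implementing the tall low-frequency DFT via zero-padding followed by a full 2-D FFT is our assumed realization of H:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, p = 64, 128, 8                      # reduced resolution: n = N^2, m = M^2
x0 = rng.random((N, N))                   # stand-in for the 512x512 test image
d = 0.5 + rng.random((M, M))              # unknown gains, uniform over [0.5, 1.5]

def sense(x, mask):
    """y = D H M x: mask the image, zero-pad to MxM, take the 2-D DFT, apply the gains."""
    padded = np.zeros((M, M), dtype=complex)
    padded[:N, :N] = mask * x
    return d * np.fft.fft2(padded)

masks = [rng.choice([-1.0, 1.0], size=(N, N)) for _ in range(p)]
ys = [sense(x0, Ml) for Ml in masks]
```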
Figure 1: Here m = 1024², n = 512², p = 8, SNR = 5dB, and D = diag(d), where di is uniformly distributed over [0.5, 1.5]. Left: original image; middle: uncalibrated image, RelError = −13.85dB; right: calibrated image, RelError = −20.23dB.

Figure 2: Here m = 1024², n = 512², p = 8, SNR = 5dB, and D = diag(d), where d is a Steinhaus sequence. Left: original image; middle: uncalibrated image; right: calibrated image, RelError = −10.02dB.
In Figure 1, we can see that both the uncalibrated and the calibrated image are quite good. Here the uncalibrated image is obtained by applying the inverse Fourier transform and the inverse of the mask to each yl and then averaging over the p samples. We briefly explain why the uncalibrated image still looks good. Note that

    x̂uncali = (1/p) Σ_{l=1}^p Ml^{-1} H†(DHMl x0) = x0 + (1/p) Σ_{l=1}^p Ml^{-1} H†(D − I)HMl x0,

where H† = (1/m)H* is the pseudo-inverse of H. Here D − I is a diagonal matrix with random entries in [−1/2, 1/2]. As a result, each Ml^{-1} H†(D − I)HMl x0 is a sum of m rank-one terms with random coefficients in [−1/2, 1/2], and it is relatively small due to many cancellations. Moreover, [14] showed that most 2-D signals can be reconstructed, up to a scale factor, from the phase of their Fourier transform alone, which applies to the case of positive d.
However, when the unknown calibration parameters are complex (i.e., we know little about the phase information), Figure 2 shows that the uncalibrated recovered image is totally meaningless, whereas our approach still gives a quite satisfactory result, even at the relatively low SNR of 5dB.
4.2 Blind deconvolution in random mask imaging
The second experiment concerns blind deconvolution in random mask imaging [5, 37]. Suppose we observe the convolution of two components,

    yl = h ∗ (Ml x0) + εl,  1 ≤ l ≤ p,

where both the filter h and the signal of interest x0 are unknown, and each Ml is a random ±1-mask. The blind deconvolution problem is to recover (h, x0). Moreover, we assume here that the filter is a low-pass filter, i.e., F(h) is compactly supported in an interval around the origin, where F is the Fourier transform. After taking the Fourier transform on both sides, the model ends up being of the form (2.1) with Al = HMl, where H is a "fat" partial DFT matrix and d is the nonzero part of F(h). In our experiment, we let x0 be a 128 × 128 image and d = F(h) a 2-D Gaussian filter of size 45 × 45, as shown in Figure 3.
In these experiments, we choose w = 1_{m+n}, since both d and x0 are nonnegative. Figure 4 shows the recovered image from p = 32 sets of noiseless measurements, and the performance is quite satisfactory. Here the oversampling ratio is pm/(m + n) ≈ 3.52. We can see from Figure 5 that the blurring effect has been removed while the noise still persists; this is partially because we did not impose any denoising procedure after the deconvolution. A natural way to improve the reconstruction further would be to combine the blind deconvolution method with a total variation minimization denoising step.
4.3 Blind deconvolution via diverse inputs
We choose the Al to be random Hadamard matrices with m = 256 and n = 64, and D = diag(d0) with d0 a positive or a Steinhaus sequence, as before. Each xl is sampled from the standard complex Gaussian distribution. We choose w = [1_m; 0_{np×1}] if d0 is uniformly distributed over [0.5, 1.5], and w = [√m e1; 0_{np×1}] for Steinhaus d0. For each SNR level, 10 simulations are performed. The test is also carried out for different choices of p; the oversampling ratio pm/(pn + m) is 2, 2.67 and 3 for p = 4, 8, 12, respectively. From Figure 6, we can see that the error scales linearly with the SNR in dB. The performance for Steinhaus d0 is not as good as that for positive d0 when SNR ≤ 10 and the least squares method is used. That is because w*z0 is quite small when d0 is complex and w = [√m e1; 0_{np×1}]. Note that the error between ẑ and z0 is bounded by κ(Aw,0)η(1 + 2/(1 − κ(Aw,0)η)); hence the error bound does not depend linearly on ‖δA‖ if κ(Aw,0)η is close to 1. This may explain the nonlinear behavior of the relative error in the low SNR regime.
We also apply the spectral method to this model, with complex gains d0 and Al chosen as either a random Hadamard matrix or a Gaussian matrix. Compared to the linear least squares approach, the spectral method is much more robust to noise, as suggested by Figure 7, especially in the low SNR regime.
Figure 3: Left: original image; right: Gaussian filter in the Fourier domain. The support of the filter is 45 × 45, and hence m = 45² = 2025, n = 128², p = 32.

Figure 4: Left: blurred image without noise; right: recovered image, RelError = −45.47dB.

Figure 5: Left: blurred image with SNR = 5dB; right: recovered image, RelError = −5.84dB.
4.4 Multiple snapshots
We compare the performance of the linear least squares approach and of the spectral method when d0 is a Steinhaus sequence and A is a Gaussian random matrix. Each xl is sampled from the standard complex Gaussian distribution, and hence the underlying Gram matrix G is quite close to Ip (this closeness could easily be made precise, but we refrain from doing so here). The choice of w and the oversampling ratios are the same as in Section 4.3. From Figure 8, we see that the performance for the Steinhaus case is not satisfactory under the linear least squares approach, especially in the lower SNR regime (SNR ≤ 10). The reason is the low correlation between w and z0 when d0 is Steinhaus and w = [√m e1; 0_{np×1}]. The difficulty of choosing w is avoided by the spectral method: as we can see in Figure 8, the relative error given by the spectral method is approximately 7dB smaller than that given by the linear least squares approach when d0 is a complex vector and the SNR is below 20dB.
[Figure 6 shows two panels, "m = 256, n = 64, d: Positive, LS method, Hadamard" and "m = 256, n = 64, d: Steinhaus, LS method, Hadamard", each plotting the average relative error of 10 samples (dB) against SNR (dB) for p = 4, 8, 12.]

Figure 6: Performance of the linear least squares approach: RelError (in dB) vs. SNR for yl = DAl xl + εl, 1 ≤ l ≤ p, where m = 256, n = 64, D = diag(d0), and each Al is a random Hadamard matrix; d0 is a positive sequence (left panel) or a Steinhaus sequence (right panel).
[Figure 7 shows two panels, "m = 256, n = 64, d: Steinhaus, spectral method, Gaussian" and "m = 256, n = 64, d: Steinhaus, spectral method, Hadamard", each plotting the average relative error of 10 samples (dB) against SNR (dB) for p = 4, 8, 12.]

Figure 7: Performance of the spectral method: RelError (in dB) vs. SNR for yl = DAl xl + εl, 1 ≤ l ≤ p, where m = 256, n = 64, D = diag(d0). Here Al is either a random Hadamard matrix or a Gaussian matrix; d0 is a Steinhaus sequence.
5 Proofs
For each subsection, we first give the result for noiseless measurements; we then prove the stability theory by means of the result below. The proof for the spectral method can be found in Section 5.4.
Proposition 5.1 ([46]). Suppose that Au0 = b is a consistent and overdetermined system. Denote by û the least squares solution of min_u ‖(A + δA)u − b‖² with ‖δA‖ ≤ η‖A‖. If κ(A)η < 1, there holds

    ‖û − u0‖/‖u0‖ ≤ κ(A)η (1 + 2/(1 − κ(A)η)).

To apply the proposition above, it suffices to bound κ(A) and η.
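A quick numerical sanity check of Proposition 5.1 (our own, on a random example where κ(A)η is small):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
u0 = rng.standard_normal(10)
b = A @ u0                                     # consistent overdetermined system
dA = 1e-3 * rng.standard_normal((50, 10))      # small perturbation of A
u_hat, *_ = np.linalg.lstsq(A + dA, b, rcond=None)
eta = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
kappa = np.linalg.cond(A)
lhs = np.linalg.norm(u_hat - u0) / np.linalg.norm(u0)
rhs = kappa * eta * (1 + 2 / (1 - kappa * eta))
assert lhs <= rhs                              # the bound from Proposition 5.1
```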
[Figure 8 shows two panels, "m = 256, n = 64, d: Steinhaus, LS method" and "m = 256, n = 64, d: Steinhaus, spectral method", each plotting the average relative error of 10 samples (dB) against SNR (dB) for p = 4, 8, 12.]

Figure 8: Comparison between the linear least squares approach and the spectral method: RelError (in dB) vs. SNR for yl = DAxl + εl, 1 ≤ l ≤ p, where m = 256, n = 64, D = diag(d0), and A is a Gaussian random matrix. The gain d0 is a random vector with entries uniformly distributed over the unit circle.
5.1 Self-calibration from repeated measurements
Let us start with (2.7) when εl = 0, and denote Λl := diag(Al v) with v := x/‖x‖ ∈ C^n. Then

    A0 := [diag(y1), −A1; · · · ; diag(yp), −Ap] = [Λ1, −(1/√m)A1; · · · ; Λp, −(1/√m)Ap] · [D‖x‖, 0; 0, √m In].

We rewrite Aw*Aw as

    Aw*Aw = A0*A0 + ww* = Σ_{l=1}^p P Zl Zl* P* + ww*,

where

    Zl := [Λl; −(1/√m)Al*] ∈ C^{(m+n)×m},  P := [D*‖x‖, 0; 0, √m In] ∈ C^{(m+n)×(m+n)}.     (5.1)

By definition,

    Zl Zl* = [Λl Λl*, −(1/√m)Λl Al; −(1/√m)Al*Λl*, (1/m)Al*Al] ∈ C^{(m+n)×(m+n)}.

Our goal is to find the smallest and largest singular values of Aw; for this it suffices to understand the spectrum of Σ_{l=1}^p Zl Zl*. Obviously, its smallest eigenvalue is zero, with corresponding eigenvector

    u1 := (1/√2) [(1/√m)1m; v].

Let al,i be the i-th column of Al*; we have E(al,i al,i*) = In under all three settings of Section 3.1. Hence

    C := E(Zl Zl*) = [Im, −(1/√m)1m v*; −(1/√m)v 1m*, In].

It is easy to see that rank(C) = m + n − 1 and that the null space of C is spanned by u1; C has the eigenvalue 1 with multiplicity m + n − 2 and the eigenvalue 2 with multiplicity 1.
More importantly, the following proposition holds; combined with Proposition 5.1, it enables us to prove Theorem 3.1.

Proposition 5.2. There holds

    ‖Σ_{l=1}^p Zl Zl* − pC‖ ≤ p/2

(a) with probability 1 − (m + n)^{−γ} if Al is Gaussian and p ≥ c0 γ max{1, n/m} log²(m + n);

(b) with probability 1 − (m + n)^{−γ} − 2(mp)^{−γ+1} if each Al is a "tall" (m × n, m ≥ n) random Hadamard/DFT matrix and p ≥ c0 γ² log(m + n) log(mp);

(c) with probability 1 − (m + n)^{−γ} − 2(mp)^{−γ+1} if each Al is a "fat" (m × n, m ≤ n) random Hadamard/DFT matrix and mp ≥ c0 γ² n log(m + n) log(mp).

Remark 5.3. Proposition 5.2 also settles the identifiability of model (2.1) in the absence of noise. More precisely, the invertibility of P is guaranteed by that of D. By Weyl's theorem for singular value perturbation [36], m + n − 1 eigenvalues of Σ_{l=1}^p Zl Zl* are greater than p/2. Hence the rank of A0 equals rank(Σ_{l=1}^p Zl Zl*) = m + n − 1 already when p is close to the information theoretic limit under the conditions given above, i.e., p ≥ O(n/m). In other words, the null space of A0 is completely spanned by z0 := (s0, x0).
5.1.1 Proof of Theorem 3.1

Proof: Proposition 5.2 gives the result when εl = 0; the noisy counterpart is obtained by applying perturbation theory for linear least squares. Let Aw := Aw,0 + δA, where Aw,0 is the noiseless part and δA is defined in (2.9). Note that αz0 with α = c/(w*z0) is a solution to the overdetermined system Aw,0 z = [0; c] by the definition of Aw,0. Proposition 5.1 implies that it suffices to estimate the condition number κ(Aw,0) and an η such that ‖δA‖ ≤ η‖Aw,0‖. Note that

    Aw,0*Aw,0 = P (Σ_{l=1}^p Zl Zl*) P* + ww*     (5.2)
             = P (Σ_{l=1}^p Zl Zl* + w̃w̃*) P* =: P C̃ P*,     (5.3)

where w̃ := P^{-1}w. From Proposition 5.2 and Theorem 1 in [36], we know that

    λ2(Σ_{l=1}^p Zl Zl*) ≥ p/2,   λ_{n+m}(Σ_{l=1}^p Zl Zl*) ≤ 5p/2,

where λ1 ≤ · · · ≤ λ_{n+m} and λ1(Σ_{l=1}^p Zl Zl*) = 0. From (5.2) we obtain

    ‖Aw,0*Aw,0‖ ≤ ‖P‖² ‖Σ_{l=1}^p Zl Zl*‖ + ‖w‖² ≤ 3p max{dmax²‖x‖², m} + ‖w‖² ≤ 3(p + ‖w‖²/m) λmax²(P).     (5.4)

On the other hand,

    ‖Aw,0*Aw,0‖ ≥ ‖Σ_{l=1}^p Al*Al‖ ≥ mp/2     (5.5)

follows from Proposition 5.2. In other words, we have found lower and upper bounds for ‖Aw,0‖, or equivalently for λmax(Aw,0*Aw,0). Now we proceed to the estimation of λmin(Aw,0*Aw,0). Let u := Σ_{j=1}^{m+n} αj uj be a unit vector, where u1 = (1/√2)[(1/√m)1m; v], Σ_{j=1}^{m+n} |αj|² = 1, and {uj}_{j=1}^{m+n} are the eigenvectors of Σ_{l=1}^p Zl Zl*. Then the smallest eigenvalue of C̃ defined in (5.3) admits the lower bound

    u*C̃u = u*(Σ_{l=1}^p Zl Zl*)u + |w̃*u|²
         ≥ Σ_{j=2}^{m+n} λj|αj|² + |α1|² |u1*w̃|²
         ≥ (p/2) Σ_{j=2}^{m+n} |αj|² + |α1|² |w*z0|²/(2m‖x‖²)
         ≥ (1/2) min{p, |w*z0|²/(m‖x‖²)},     (5.6)

where we used u1*P^{-1} = z0*/(√(2m)‖x‖), so that |u1*w̃|² = |w*z0|²/(2m‖x‖²). Since ‖z0‖ ≥ ‖x‖, this implies λmin(C̃) ≥ (1/2) min{p, ‖w‖²|Corr(w, z0)|²/m}. Combined with Aw,0*Aw,0 = P C̃ P*,

    λmin(Aw,0*Aw,0) ≥ λmin(C̃) λmin²(P) ≥ (1/(2m)) min{mp, ‖w‖²|Corr(w, z0)|²} λmin²(P).

Therefore, with (5.4), the condition number of Aw,0*Aw,0 is bounded by

    κ(Aw,0*Aw,0) ≤ 6(mp + ‖w‖²)/min{mp, ‖w‖²|Corr(w, z0)|²} · κ²(P).

From (5.5), we may set η = 2‖δA‖/√(mp) ≥ ‖δA‖/‖Aw,0‖. Applying Proposition 5.1 gives the claimed estimation error bound ‖ẑ − αz0‖/‖αz0‖ ≤ κ(Aw,0)η (1 + 2/(1 − κ(Aw,0)η)) with α = c/(w*z0) and

    κ(Aw,0) ≤ √( 6(mp + ‖w‖²)/min{mp, ‖w‖²|Corr(w, z0)|²} ) κ(P).

If ‖w‖ = √(mp), the upper bound of κ(Aw,0) simplifies to

    κ(Aw,0) ≤ √(12/|Corr(w, z0)|²) κ(P) ≤ (2√3/|Corr(w, z0)|) √( max{dmax²‖x‖², m}/min{dmin²‖x‖², m} ).
5.1.2 Proof of Proposition 5.2(a)

Proof: From now on, we assume that al,i ∈ C^n, the i-th column of Al*, obeys a complex Gaussian distribution, (1/√2)N(0, In) + (i/√2)N(0, In). Let zl,i be the i-th column of Zl; in explicit form,

    zl,i = [⟨al,i, v⟩ ei; −(1/√m) al,i],
    zl,i zl,i* = [|⟨al,i, v⟩|² ei ei*, −(1/√m) ei v* al,i al,i*; −(1/√m) al,i al,i* v ei*, (1/m) al,i al,i*].

Denoting Zl,i := zl,i zl,i* − E(zl,i zl,i*), we obtain

    Zl,i = [(|⟨al,i, v⟩|² − 1) ei ei*, −(1/√m) ei v*(al,i al,i* − In); −(1/√m)(al,i al,i* − In) v ei*, (1/m)(al,i al,i* − In)].

Obviously the Zl,i are independent. In order to apply the matrix Bernstein inequality of Theorem 5.13 to estimate ‖Σ_{l,i} Zl,i‖, we need to bound max_{l,i} ‖Zl,i‖_{ψ1} and ‖Σ_{l=1}^p Σ_{i=1}^m E(Zl,i Zl,i*)‖. Due to the positive semi-definiteness of zl,i zl,i*, we have ‖Zl,i‖ ≤ max{‖zl,i‖², ‖E(zl,i zl,i*)‖} ≤ max{|⟨al,i, v⟩|² + (1/m)‖al,i‖², 2}, and hence

    ‖Zl,i‖_{ψ1} ≤ ‖|⟨al,i, v⟩|²‖_{ψ1} + (1/m)‖‖al,i‖²‖_{ψ1} ≤ C(1 + n/m),

which follows from Lemma 5.14 and the fact that ‖·‖_{ψ1} is a norm. This implies R := max_{l,i} ‖Zl,i‖_{ψ1} ≤ C(1 + n/m).
Now we consider σ0² = ‖Σ_{l=1}^p Σ_{i=1}^m E(Zl,i Zl,i*)‖ by computing (Zl,i Zl,i*)_{1,1} and (Zl,i Zl,i*)_{2,2}, i.e., the (1,1)-th and (2,2)-th blocks of Zl,i Zl,i*:

    (Zl,i Zl,i*)_{1,1} = [(|⟨al,i, v⟩|² − 1)² + (1/m) v*(al,i al,i* − In)² v] ei ei*,
    (Zl,i Zl,i*)_{2,2} = (1/m)(al,i al,i* − In) v v* (al,i al,i* − In) + (1/m²)(al,i al,i* − In)².

Following (5.36), (5.37), (5.38) and Lemma 5.10, there holds

    σ0² = ‖Σ_{l,i} E(Zl,i Zl,i*)‖ ≤ 2 ‖Σ_{l,i} [E(Zl,i Zl,i*)_{1,1}, 0; 0, E(Zl,i Zl,i*)_{2,2}]‖ = 2 ‖[p(1 + n/m) Im, 0; 0, p(1 + n/m) In]‖ = 2p(1 + n/m).

By the matrix Bernstein inequality (see Theorem 5.13) we obtain

    ‖Σ_{l=1}^p Σ_{i=1}^m Zl,i‖ ≤ C0 max{ √(p(1 + n/m)) √(t + log(m + n)), (1 + n/m)(t + log(m + n)) log(m + n) } ≤ p/2

with probability 1 − e^{−t}. In particular, choosing t = γ log(m + n), i.e., p ≥ c0 γ max{1, n/m} log²(m + n), the inequality above holds with probability 1 − (m + n)^{−γ}.
5.1.3 Proof of Proposition 5.2(b)

Proof: Each Zl is independent by its definition in (5.1). Here Al := HMl, where H is an m × n partial DFT/Hadamard matrix with m ≥ n and H*H = mIn, and Ml = diag(ml) is a random diagonal binary ±1 matrix. Let Zl := Zl Zl* − C ∈ C^{(m+n)×(m+n)}; in explicit form,

    Zl = [Λl Λl* − Im, −(1/√m)(Λl Al − 1m v*); −(1/√m)(Al* Λl* − v 1m*), 0],

where the (2,2) block vanishes because Al*Al = mIn by assumption. First we look at the upper bound of ‖Zl‖. It suffices to bound ‖Zl Zl*‖, since ‖Zl‖ = ‖Zl Zl* − C‖ ≤ max{‖Zl Zl*‖, ‖C‖} and C is positive semi-definite. Due to Lemma 5.10, we have ‖Zl Zl*‖ ≤ 2 max{‖Λl Λl*‖, 1}, and hence we only need to bound ‖Λl‖. There holds

    max_{1≤l≤p} ‖Λl‖ = max_{1≤l≤p} ‖Al v‖∞ = max_{1≤l≤p, 1≤i≤m} |⟨al,i, v⟩|.

For any pair (l, i), ⟨al,i, v⟩ can be rewritten as

    |⟨al,i, v⟩| = |⟨diag(ml) hi, v⟩| = |⟨ml, diag(h̄i) v⟩|,

where hi is the i-th column of H* and ‖diag(h̄i)v‖ = ‖v‖ = 1. Then there holds

    P( max_{1≤l≤p} ‖Λl‖ ≥ √(2γ log(mp)) ) ≤ Σ_{l=1}^p Σ_{i=1}^m P( |⟨al,i, v⟩| ≥ √(2γ log(mp)) ) ≤ mp · P( |⟨al,i, v⟩| ≥ √(2γ log(mp)) ) ≤ 2mp · e^{−γ log(mp)} ≤ 2(mp)^{−γ+1},     (5.7)

where the third inequality follows from Lemma 5.11. Applying Lemma 5.10 to Zl,

    R := max_{1≤l≤p} ‖Zl‖ ≤ 2 max_{1≤l≤p} max{‖Λl Λl*‖, 1} ≤ 4γ log(mp)

with probability at least 1 − 2(mp)^{−γ+1}. Denote the event {max_{1≤l≤p} ‖Zl‖ ≤ 4γ log(mp)} by E1. Now we turn to σ0² = ‖Σ_{l=1}^p E(Zl Zl*)‖. The (1,1)-th and (2,2)-th blocks of Zl Zl* are given by

    (Zl Zl*)_{1,1} = (Λl Λl* − Im)² + (1/m)(Λl Al − 1m v*)(Λl Al − 1m v*)*,
    (Zl Zl*)_{2,2} = (1/m)(Al* Λl* − v 1m*)(Λl Al − 1m v*).

By using (5.41), (5.42) and Al Al* = HH* ⪯ mIm, we have

    E((Λl Λl* − Im)²) = E((Λl Λl*)²) − Im ⪯ 2Im,     (5.8)
    E((Λl Al − 1m v*)(Λl Al − 1m v*)*) = E(Λl Al Al* Λl*) − 1m 1m* ⪯ mIm,     (5.9)
    E((Al* Λl* − v 1m*)(Λl Al − 1m v*)) = E(Al* Λl* Λl Al) − m v v* = Σ_{i=1}^m E(|⟨al,i, v⟩|² al,i al,i*) − m v v* ⪯ 3mIn.     (5.10)

Combining (5.8), (5.9), (5.10) and Lemma 5.10,

    σ0² ≤ 2 ‖Σ_{l=1}^p [E(Zl Zl*)_{1,1}, 0; 0, E(Zl Zl*)_{2,2}]‖ ≤ 2p ‖[2Im + Im, 0; 0, 3In]‖ ≤ 6p.

By applying (5.32) with t = γ log(m + n) and R ≤ 4γ log(mp) on the event E1, we have

    ‖Σ_{l=1}^p (Zl Zl* − C)‖ ≤ C0 max{ √p √((γ + 1) log(m + n)), γ(γ + 1) log(mp) log(m + n) } ≤ p/2

with probability 1 − (m + n)^{−γ} − 2(mp)^{−γ+1} if p ≥ c0 γ² log(m + n) log(mp).
5.1.4 Proof of Proposition 5.2(c)

Proof: Each Zl is independent due to (5.1). Let Zl := Zl Zl* − C ∈ C^{(m+n)×(m+n)}; in explicit form,

    Zl = [Λl Λl* − Im, −(1/√m)(Λl Al − 1m v*); −(1/√m)(Al* Λl* − v 1m*), (1/m) Al* Al − In].

Here Al = HMl, where H is a "fat" m × n (m ≤ n) partial DFT/Hadamard matrix satisfying HH* = nIm, and Ml is a diagonal ±1-random matrix. There holds Al*Al = Ml* H*H Ml ∈ C^{n×n} with E(Al*Al) = mIn, and for each l, ‖Al*Al‖ = ‖H*H‖ = ‖HH*‖ = n. Hence

    ‖Zl‖ ≤ max{‖Zl Zl*‖, ‖C‖} ≤ 2 max{(1/m)‖Al*Al‖, ‖Λl‖², 1} ≤ 2 max{n/m, 2γ log(mp)}

with probability at least 1 − 2(mp)^{−γ+1}, which follows from (5.7) and Lemma 5.10. Now we give an upper bound for σ0² := ‖Σ_{l=1}^p E(Zl Zl*)‖. The (1,1)-th and (2,2)-th blocks of Zl Zl* are given by

    (Zl Zl*)_{1,1} = (Λl Λl* − Im)² + (1/m)(Λl Al − 1m v*)(Λl Al − 1m v*)*,
    (Zl Zl*)_{2,2} = (1/m)(Al* Λl* − v 1m*)(Λl Al − 1m v*) + ((1/m) Al* Al − In)².

By using (5.41), (5.42) and Al Al* = HH* = nIm, we have

    E((Λl Λl* − Im)²) = E((Λl Λl*)²) − Im ⪯ 2Im,     (5.11)
    E((Λl Al − 1m v*)(Λl Al − 1m v*)*) = E(Λl Al Al* Λl*) − 1m 1m* ⪯ nIm,     (5.12)
    E((Al* Λl* − v 1m*)(Λl Al − 1m v*)) = E(Al* Λl* Λl Al) − m v v* = Σ_{i=1}^m E(|⟨al,i, v⟩|² al,i al,i*) − m v v* ⪯ 3mIn.     (5.13)

For E(((1/m) Al*Al − In)²), we have

    ((1/m) Al*Al − In)² = (1/m²) Al*Al Al*Al − (2/m) Al*Al + In = ((n − 2m)/m²) Al*Al + In,

where we used Al Al* = nIm. Since E(Al*Al) = mIn, there holds

    E(((1/m) Al*Al − In)²) = ((n − m)/m) In.     (5.14)

Combining (5.11), (5.12), (5.13), (5.14) and Lemma 5.10,

    σ0² ≤ 2p ‖[(2 + n/m) Im, 0; 0, (2 + n/m) In]‖ ≤ 6np/m.

By applying (5.32) with t = γ log(m + n), we have

    ‖Σ_{l=1}^p (Zl Zl* − C)‖ ≤ C0 max{ √((np/m)(γ + 1) log(m + n)), (γ + 1)(γ log(mp) + n/m) log(m + n) } ≤ p/2

with probability 1 − (m + n)^{−γ} − 2(mp)^{−γ+1} if mp ≥ c0 γ² n log(m + n) log(mp).
5.2 Blind deconvolution via diverse inputs
We start with (3.1), setting εl = 0. The matrix Aw (excluding the last row) can then be factorized as A0 = QZP with

    Q := diag(‖x1‖Im, · · · , ‖xp‖Im) ∈ C^{mp×mp},
    Z := [(1/√p) diag(A1 v1), −(1/√m)A1, 0, · · · , 0; (1/√p) diag(A2 v2), 0, −(1/√m)A2, · · · , 0; · · · ; (1/√p) diag(Ap vp), 0, · · · , 0, −(1/√m)Ap] ∈ C^{mp×(np+m)},
    P := diag(√p D, (√m/‖x1‖) In, · · · , (√m/‖xp‖) In) ∈ C^{(np+m)×(np+m)},     (5.15)

where vl = xl/‖xl‖ is the normalized xl, 1 ≤ l ≤ p. We will show that the matrix Z has rank np + m − 1, which guarantees that the solution is unique (up to a scalar). Denote v := [v1; · · · ; vp] ∈ C^{np×1}, so that Σ_{l=1}^p ẽl ⊗ vl = v and ‖v‖ = √p, where {ẽl}_{l=1}^p is the standard orthonormal basis of R^p.
5.2.1 Proof of Theorem 3.3
The proof of Theorem 3.3 relies on the following proposition, whose proof we defer to Sections 5.2.2 and 5.2.3.

Proposition 5.4. There holds

    ‖Z*Z − C‖ ≤ 1/2,  C := E(Z*Z) = [Im, −(1/√(mp)) 1m v*; −(1/√(mp)) v 1m*, Inp],     (5.16)

(a) with probability at least 1 − (np + m)^{−γ} with γ ≥ 1 if Al is an m × n (m > n) complex Gaussian random matrix and

    C0 (1/p + n/m)(γ + 1) log²(np + m) ≤ 1/4;

(b) with probability at least 1 − (np + m)^{−γ} − 2(mp)^{−γ+1} with γ ≥ 1 if Al obeys (3.2) and

    C0 (1/p + (n − 1)/(m − 1)) γ³ log⁴(np + m) ≤ 1/4.

Remark 5.5. Note that C has one eigenvalue equal to 0 and all other eigenvalues at least 1; hence rank(C) = np + m − 1. Similarly to Remark 5.3, Proposition 5.4 shows that the solution (s0, {xl}_{l=1}^p) of (3.1) is uniquely identifiable, with high probability, once mp ≥ (np + m) · poly(log(np + m)) and ‖Z*Z − C‖ ≤ 1/2.
Proof: From (3.1), we write Aw = Aw,0 + δA, where Aw,0 is the noiseless part of Aw. By definition of Aw,0, αz0 with α = c/(w*z0) is a solution to the overdetermined system Aw,0 z = [0; c]. Now, (5.15) gives

    Aw,0*Aw,0 = A0*A0 + ww* = P*Z*Q*QZP + ww*.

Define xmax := max_{1≤l≤p} ‖xl‖ and xmin := min_{1≤l≤p} ‖xl‖. From Proposition 5.4 and Theorem 1 in [36], we know that the eigenvalues {λj}_{1≤j≤np+m} of Z*Z fulfill λ1 = 0 and λj ≥ 1/2 for j ≥ 2, since ‖Z*Z − C‖ ≤ 1/2, and that the eigenvalues of C are 0, 1 and 2 with multiplicities 1, np + m − 2 and 1, respectively.
The key is to obtain a bound for κ(Aw,0). From (5.15),

    λmax(Aw,0*Aw,0) ≤ ‖P‖²‖Q‖²‖Z*Z‖ + ‖w‖² ≤ 3xmax² λmax(P*P) + ‖w‖² ≤ 3xmax² (1 + ‖w‖²/m) λmax(P*P),     (5.17)

where we used xmax² λmax(P*P) ≥ xmax² · (m/xmin²) ≥ m. On the other hand, (5.16) gives

    λmax(Aw,0*Aw,0) ≥ max_{1≤l≤p} ‖Al*Al‖ ≥ m/2,

since ‖(1/m)Al*Al − In‖ ≤ 1/2. For λmin(Aw,0*Aw,0),

    Aw,0*Aw,0 ⪰ λmin(Q*Q) P*Z*ZP + ww* ⪰ xmin² P*Z*ZP + ww* =: P* C̃ P,

where C̃ := xmin² Z*Z + w̃w̃* with w̃ := (P*)^{-1}w. Denote u1 := (1/√2)[(1/√m)1m; (1/√p)v], so that Zu1 = 0. By the same procedure as in (5.6),

    u*C̃u ≥ xmin² Σ_{j=2}^{np+m} λj|αj|² + |α1|² |u1*(P^{-1})*w|²,

where u := Σ_{j=1}^{np+m} αj uj with Σ_j |αj|² = 1 and λj ≥ 1/2 for j ≥ 2 by Proposition 5.4. Since P^{-1}u1 = z0/√(2mp), we have |u1*(P^{-1})*w|² = |w*z0|²/(2mp), so the smallest eigenvalue of C̃ satisfies

    λmin(C̃) ≥ (xmin²/2) min{1, |w*z0|²/(mp xmin²)} ≥ (xmin²/(2m)) min{m, ‖w‖²|Corr(w, z0)|²},

where |w*z0|²/(p xmin²) ≥ ‖w‖²|Corr(w, z0)|² follows from ‖z0‖² ≥ p xmin². Therefore,

    λmin(Aw,0*Aw,0) ≥ λmin(C̃) λmin(P*P) ≥ (xmin²/(2m)) min{m, ‖w‖²|Corr(w, z0)|²} λmin(P*P).     (5.18)

Combining (5.17) and (5.18) leads to

    κ(Aw,0*Aw,0) ≤ 6xmax²(m + ‖w‖²)/(xmin² min{m, ‖w‖²|Corr(w, z0)|²}) · κ(P*P).

Applying Proposition 5.1 with η = 2‖δA‖/√m ≥ ‖δA‖/‖Aw,0‖, we obtain

    ‖ẑ − αz0‖/‖αz0‖ ≤ κ(Aw,0)η (1 + 2/(1 − κ(Aw,0)η)),  α = c/(w*z0),

if κ(Aw,0)η < 1, where

    κ(Aw,0) ≤ √( 6xmax²(m + ‖w‖²)/(xmin² min{m, ‖w‖²|Corr(w, z0)|²}) · max{p dmax², m/xmin²}/min{p dmin², m/xmax²} ).

In particular, if ‖w‖ = √m, then κ(Aw,0) has the simpler upper bound

    κ(Aw,0) ≤ (2√3 xmax)/(|Corr(w, z0)| xmin) √( max{p dmax², m/xmin²}/min{p dmin², m/xmax²} ),

which finishes the proof of Theorem 3.3.
5.2.2 Proof of Proposition 5.4(a)

In this section we prove Proposition 5.4(a) for al,i ∼ (1/√2)N(0, In) + (i/√2)N(0, In), where al,i ∈ C^n is the i-th column of Al*. Before moving to the proof, we compute a few quantities which will be used later. Define zl,i as the ((l − 1)m + i)-th column of Z*:

    zl,i := [(1/√p)⟨al,i, vl⟩ ei; −(1/√m) ẽl ⊗ al,i] ∈ C^{(np+m)×1},  1 ≤ l ≤ p, 1 ≤ i ≤ m,

where {ei}_{i=1}^m ⊂ R^m and {ẽl}_{l=1}^p ⊂ R^p are the standard orthonormal bases and "⊗" denotes the Kronecker product. By definition, Z*Z = Σ_{l=1}^p Σ_{i=1}^m zl,i zl,i*, and all the zl,i are independent from one another. We have

    zl,i zl,i* = [(1/p)|⟨al,i, vl⟩|² ei ei*, −(1/√(mp)) ⟨vl, al,i⟩ ei (ẽl* ⊗ al,i*); −(1/√(mp)) ⟨al,i, vl⟩ (ẽl ⊗ al,i) ei*, (1/m) ẽl ẽl* ⊗ al,i al,i*],

with expectation

    E(zl,i zl,i*) = [(1/p) ei ei*, −(1/√(mp)) ei (ẽl* ⊗ vl*); −(1/√(mp)) (ẽl ⊗ vl) ei*, (1/m) ẽl ẽl* ⊗ In].

It is easy to verify that C = Σ_{l=1}^p Σ_{i=1}^m E(zl,i zl,i*).

Proof: The key here is to apply the matrix Bernstein inequality of Theorem 5.13. Let Zl,i := zl,i zl,i* − E(zl,i zl,i*); then

    ‖Zl,i‖ ≤ ‖zl,i‖² + ‖E(zl,i zl,i*)‖ ≤ (1/p)|⟨al,i, vl⟩|² + (1/m)‖al,i‖² + 2 max{1/p, 1/m},

since ‖E(zl,i zl,i*)‖ ≤ 2 max{1/p, 1/m} by Lemma 5.10. Therefore the exponential norm of ‖Zl,i‖ is bounded by

    ‖Zl,i‖_{ψ1} ≤ 2( (1/p)‖|⟨al,i, vl⟩|²‖_{ψ1} + (1/m)‖‖al,i‖²‖_{ψ1} + 2 max{1/p, 1/m} ) ≤ C(1/p + n/m),

which follows from Lemma 5.14; as a result, R := max_{l,i} ‖Zl,i‖_{ψ1} ≤ C(1/p + n/m).
Now we estimate the variance σ0² := ‖Σ_{l=1}^p Σ_{i=1}^m E(Zl,i Zl,i*)‖. We express Zl,i as

    Zl,i = [(1/p)(|⟨al,i, vl⟩|² − 1) ei ei*, −(1/√(mp)) ei (ẽl* ⊗ (vl*(al,i al,i* − In))); −(1/√(mp)) (ẽl ⊗ ((al,i al,i* − In)vl)) ei*, (1/m) ẽl ẽl* ⊗ (al,i al,i* − In)].

The (1,1)-th and (2,2)-th blocks of Zl,i Zl,i* are

    (Zl,i Zl,i*)_{1,1} = [(1/p²)(|⟨al,i, vl⟩|² − 1)² + (1/(mp)) vl*(al,i al,i* − In)² vl] ei ei*,
    (Zl,i Zl,i*)_{2,2} = ẽl ẽl* ⊗ [(1/(mp))(al,i al,i* − In) vl vl* (al,i al,i* − In) + (1/m²)(al,i al,i* − In)²].

Following (5.36), (5.37) and (5.38), we have

    E(Zl,i Zl,i*)_{1,1} = (1/p² + n/(mp)) ei ei*,   E(Zl,i Zl,i*)_{2,2} = ẽl ẽl* ⊗ ((1/(mp)) In + (n/m²) In).

Due to Lemma 5.10, there holds

    σ0² ≤ 2 ‖Σ_{l,i} [E(Zl,i Zl,i*)_{1,1}, 0; 0, E(Zl,i Zl,i*)_{2,2}]‖ = 2 ‖[(1/p + n/m) Im, 0; 0, (1/p + n/m) Inp]‖ = 2(1/p + n/m).

Note also σ0² ≥ ‖Σ_{l,i} E(Zl,i Zl,i*)_{1,1}‖ = 1/p + n/m, since each E(Zl,i Zl,i*) is positive semi-definite. By applying (5.34),

    ‖Z*Z − C‖ = ‖Σ_{l=1}^p Σ_{i=1}^m Zl,i‖ ≤ C0 max{ √((1/p + n/m)(t + log(np + m))), (1/p + n/m)(t + log(np + m)) log(np + m) } ≤ 1/2.

With t = γ log(np + m), there holds ‖Z*Z − C‖ ≤ 1/2 with probability at least 1 − (np + m)^{−γ} if C0 (1/p + n/m)(γ + 1) log²(np + m) ≤ 1/4.

5.2.3 Proof of Proposition 5.4(b)
We prove Proposition 5.4(b) under assumption (3.2). Denote by al,i and hl,i the i-th columns of Al* and Hl*, respectively, so that al,i = Ml hl,i. Denote Λl = diag(Al vl) and let Zl be the l-th block of Z* in (5.15), i.e.,

    Zl = [(1/√p) Λl; −(1/√m) ẽl ⊗ Al*] ∈ C^{(np+m)×m}.

With Al*Al = mIn, we have

    Zl Zl* = [(1/p) Λl Λl*, −(1/√(mp)) ẽl* ⊗ (Λl Al); −(1/√(mp)) ẽl ⊗ (Λl Al)*, (ẽl ẽl*) ⊗ In] ∈ C^{(np+m)×(np+m)},

where the i-th row of Λl Al has expectation E(Λl Al)i = E(vl* al,i al,i*) = vl*, so that E(Λl Al) = 1m vl* ∈ C^{m×n}. Consequently,

    E(Zl Zl*) = [(1/p) Im, −(1/√(mp)) ẽl* ⊗ (1m vl*); −(1/√(mp)) ẽl ⊗ (vl 1m*), ẽl ẽl* ⊗ In].

Proof: Each block Zl is independent, and we again apply the matrix Bernstein inequality. Let Zl := Zl Zl* − E(Zl Zl*); by definition,

    ‖Zl‖ = ‖[(1/p)(Λl Λl* − Im), −(1/√(mp))(Λl Al − 1m vl*); −(1/√(mp))(Λl Al − 1m vl*)*, 0]‖ ≤ (1/p)‖Λl Λl* − Im‖ + (1/√(mp))‖Λl Al − 1m vl*‖.

Note that

    ‖Λl Al − 1m vl*‖ ≤ ‖Λl‖‖Al‖ + ‖1m vl*‖ ≤ √m ‖Λl‖ + √m.

Since (5.7) implies P(max_{1≤l≤p} ‖Λl‖ ≥ √(2γ log(mp))) ≤ 2(mp)^{−γ+1}, we have

    R := max_{1≤l≤p} ‖Zl‖ ≤ (‖Λl‖² + 1)/p + (‖Λl‖ + 1)/√p ≤ (2γ log(mp) + 1)/p + (√(2γ log(mp)) + 1)/√p

with probability at least 1 − 2(mp)^{−γ+1}. We proceed with the estimation of σ0² := ‖Σ_{l=1}^p E(Zl Zl*)‖ by looking at the (1,1)-th and (2,2)-th blocks of Zl Zl*:

    (Zl Zl*)_{1,1} = (1/p²)(Λl Λl* − Im)² + (1/(mp))(Λl Al − 1m vl*)(Λl Al − 1m vl*)*,
    (Zl Zl*)_{2,2} = (1/(mp)) (ẽl ẽl*) ⊗ ((Λl Al − 1m vl*)*(Λl Al − 1m vl*)).

Note that E(Λl Λl* − Im)² = E((Λl Λl*)²) − Im. The i-th diagonal entry of (Λl Λl*)² is |⟨al,i, vl⟩|⁴ = |⟨ml, diag(h̄l,i)vl⟩|⁴, where al,i = Ml hl,i = diag(hl,i)ml, and (5.42) implies E(|⟨al,i, vl⟩|⁴) ≤ 3, since diag(h̄l,i)vl is still a unit vector (note that diag(h̄l,i) is unitary because hl,i is a column of Hl*). Therefore,

    E(Λl Λl* − Im)² = E((Λl Λl*)²) − Im ⪯ 3Im − Im ⪯ 2Im.     (5.19)

By Lemma 5.18, we have

    E((Λl Al − 1m vl*)(Λl Al − 1m vl*)*) = E(Λl Al Al* Λl*) − 1m 1m* = ((n − 1)/(m − 1))(mIm − 1m 1m*) ⪯ (m(n − 1)/(m − 1)) Im.     (5.20)

With al,i = Ml hl,i and the independence of {hl,i}_{i=1}^m and Ml, we have

    E((Λl Al − 1m vl*)*(Λl Al − 1m vl*)) = E(Al* Λl* Λl Al) − m vl vl*
        = E( Σ_{i=1}^m |⟨al,i, vl⟩|² al,i al,i* ) − m vl vl*
        = E( Σ_{i=1}^m diag(hl,i) |⟨ml, diag(h̄l,i)vl⟩|² ml ml* diag(h̄l,i) ) − m vl vl*
        ⪯ 3mIn − m vl vl* ⪯ 3mIn,     (5.21)

where E(|⟨ml, diag(h̄l,i)vl⟩|² ml ml*) ⪯ 3In follows from (5.41) and diag(hl,i) diag(h̄l,i) = In. By using (5.19), (5.20), (5.21) and Lemma 5.10, σ0² is bounded above by

    σ0² ≤ 2 ‖Σ_{l=1}^p [E(Zl Zl*)_{1,1}, 0; 0, E(Zl Zl*)_{2,2}]‖ ≤ 2 ‖[(2/p + (n − 1)/(m − 1)) Im, 0; 0, (3/p) Inp]‖ ≤ 6(1/p + (n − 1)/(m − 1)).

Conditioned on the event {max_{1≤l≤p} ‖Λl‖ ≤ √(2γ log(mp))}, applying (5.32) with t = γ log(np + m) gives

    ‖Σ_{l=1}^p Zl‖ ≤ C0 max{ √((1/p + (n − 1)/(m − 1))(γ + 1) log(np + m)), √(2γ log(mp)/p)(γ + 1) log(mp) log(np + m) } ≤ 1/2

with probability at least 1 − (np + m)^{−γ} − 2(mp)^{−γ+1}, and it suffices to require the condition C0 (1/p + (n − 1)/(m − 1)) γ³ log⁴(np + m) ≤ 1/4.
5.3
Blind Calibration from multiple snapshots
Recall that Aw in (3.3) and the only difference from (3.1) is that here all Al are
εl = 0, Aw (excluding the last row) can be factorized into
√
pD √ 0
diag(Av
A
1)
√
− √m · · ·
0
mIn
kx1 kIm · · ·
0
0
p
kx1 k
.
.
.
.
.
.
.
..
..
..
..
..
..
..
A0 :=
..
...
.
diag(Av
)
p
A
0
· · · kxp kIm
√
0
· · · − √m
p
|
{z
}|
0
0
{z
}
Q
{z
|
mp×(np+m)
Z∈C
equal to A. If
···
···
..
.
···
0
0
..
.
√
mIn
kxp k
P
}
(5.22)
xi
n is the normalized x .
where vi = kx
∈
C
i
ik
Before we proceed to the main result in this section we need to introduce some notation. Let
ai be the i-th column of A∗ , which is a complex Gaussian random matrix; define Zi ∈ C(np+m)×p
to be a matrix whose columns consist of the i-th column of each block of Z ∗ , i.e.,
#
"
√1 hai , v1 iei · · ·
√1 hai , vp iei
p
p
Zi =
− √1m ee1 ⊗ ai · · · − √1m eep ⊗ ai
(np+m)×p
m
where “⊗” denotes Kronecker product and both {ei }m
el }pl=1 ∈ Rp are the standard
i=1 ∈ R and {e
orthonormal basis in Rm and Rp , respectively. In this way, the Zi are independently from one
another. By definition,
" 1 Pp
#
2 e e∗ − √1 e v ∗ (I ⊗ a a∗ )
|
ha
,
v
i
|
i
i
i
p
i
l
i
i
l=1
p
mp
Zi Zi∗ =
1
1
∗
(Ip ⊗ ai a∗i )ve∗i
− √mp
m Ip ⊗ (ai ai )
where
by
Pp
el
l=1 e
⊗ vl = v ∈ Cnp×1
v1
√
and v = ... with kvk = p. The expectation of Zi Zi∗ is given
vp
#
∗
√1 ei v ∗
e
e
−
i
i
mp
,
E(Zi Zi∗ ) =
1
1
ve∗i
− √mp
m Inp
"
C :=
m
X
i=1
30
"
#
√1 1m v ∗
I
−
m
mp
E(Zi Zi∗ ) =
.
1
v1∗m
Inp
− √mp
Our analysis depends on the mutual coherence of {vl }pl=1 . One cannot expect to recover all
{vl }pl=1 and D if all {vl }pl=1 are parallel to each other. Let G be the Gram matrix of {vl }pl=1 with
p ≤ n, i.e., G ∈ Cp×p and Gk,l = hvk , vl i , 1 ≤ k ≤ l ≤ p and in particular, Gl,l = 1, 1 ≤ l ≤ p. Its
eigenvalues are denoted by {λl }pl=1 with λp ≥ · · · ≥ λ1 ≥ 0. Basic linear algebra tells that
p
X
U GU ∗ = Λ,
λl = p,
(5.23)
l=1
v1∗
where U ∈ Cp×p is unitary and Λ = diag(λ1 , · · · , λp ). Let V = ... ∈ Cp×n , then there holds
v∗
√
√p
∗
∗
∗
Λ = U GU = U V (U V ) since G = V V . Here 1 ≤ kGk ≤ p and p ≤ kGkF ≤ p. In
√
particular, if hvk , vl i = δkl , then G = Ip ; if | hvk , vl i | = 1 for all 1 ≤ k, l ≤ p, then kGk = p and
kGkF = p.
We are now ready to state and prove the main result in this subsection.
Proposition 5.6. There holds
kZ ∗ Z − Ck =
m
X
i=1
Zi Zi∗ − C ≤
1
2
with probability at least 1 − 2m(np + m)−γ if
kGk kGk2F
n
1
C0 max
,
+
log2 (np + m) ≤
2
p
p
m
16(γ + 1)
(5.24)
and each al is i.i.d. complex Gaussian, i.e., ai ∼ √12 N (0, In ) + √i2 N (0, In ). In particular, if
√
G = Ip and kGkF = p, (5.24) becomes
n
1
1
+
.
(5.25)
C0
log2 (np + m) ≤
p m
16(γ + 1)
Remark 5.7. The proof of Theorem 3.5 follows exactly from that of Theorem 3.3 when Proposition 5.6 holds. Hence we just give a proof of Proposition 5.6.
Proof: [Proof of Proposition 5.6] Let Zi Zi∗ − E(Zi Zi∗ ) =: Zi,1 + Zi,2 , where Zi,1 and Zi,2 are
defined as
1 Pp
(| hai , vl i |2 − 1)ei e∗i 0
l=1
p
Zi,1 :=
,
0
0
"
#
1
ei v ∗ (Ip ⊗ (ai a∗i − In ))
0
− √mp
Zi,2 :=
.
1
1
∗
− √mp
(Ip ⊗ (ai a∗i − In ))ve∗i
m Ip ⊗ (ai ai − In )
Estimation of k
p
X
l=1
Pm
i=1 Zi,1 k
Following from (5.23), we have
p
2
2
2
1/2
| hai , vl i | = kV ai k = kU V ai k = kΛ
31
−1/2
Λ
1X
2
λl ξi,l
U V ai k =
2
2
l=1
where Λ−1/2 U V is a p×n matrix with orthonormal
rows and hence each ξi,l is Rayleigh distributed.
√
(We say ξ is Rayleigh distributed if ξ = X 2 + Y 2 where both X and Y are standard real Gaussian
variables.)
Due to the simple form of Zi,1 , it is easy to see from Bernstein’s inequality for scalar random
variables (See Proposition 5.16 in [42]) that
m
X
i=1
p
Zi,1
1X
≤ max
| hai , vl i |2 − 1 ≤ C0 max
1≤i≤m p
l=1
with probability 1 − me−t . Here
Therefore,
p
X
Var
1
p
2
λl (ξi,l
l=1
2
max (λl |ξi,l
1≤l≤p
Pp
2
l=1 | hai , vl i |
!
− 1)
− 1|)ψ1
≤
2
Var(ξi,1
1
p
− 1)
tkGk
,
p
Pp
p
X
l=1
2
l=1 λl (ξi,l
√
tkGkF
p
− 1) where
(5.26)
Pp
l=1 λl
= p.
λ2l = c0 kGk2F ,
1≤l≤p
Pm
∗
i=1 E(Zi,2 Zi,2 )k
in order to bound k
Denote zi := (Ip ⊗ (ai a∗i − I))v and there holds,
Estimation of kZi,2 kψ1
≤ c1 max λl = c1 kGk.
Now we only need to find out kZi,2 kψ1 and σ02 = k
kZi,2 k ≤
−1 =
Pm
i=1 Zi,2 k.
1
1
1
1
max{kai k2 , 1} + √ kzi e∗i k =
max{kai k2 , 1} + √ kzi k.
m
mp
m
mp
For kai k2 , its ψ1 -norm is bounded by C0 n due to Lemma 5.14 and we only need to know kzi kψ1 :
v
u p
uX
√
∗
∗
| hai , vl i |2 kai k + p.
kzi k = k(Ip ⊗ (ai ai − In ))vk ≤ kIp ⊗ (ai ai )vk + kvk = t
l=1
Let u =
Pp
2
l=1 | hai , vl i |
=
√
( uv)ψ1
Pp
2
l=1 λl ξi,l
and v = kai k2 . The ψ1 -norm of kIp ⊗ (ai a∗i )vk satifies
√
= sup q −1 (E( uv)q )1/q ≤ sup q −1 (E uq )1/2q (E v q )1/2q
q≥1
≤
r
q≥1
√
sup q −1 (E uq )1/q sup q −1 (E v q )1/q ≤ C1 np
q≥1
q≥1
where the second inequality follows from the Cauchy-Schwarz inequality, kukψ1 ≤ C1 p, and kvkψ1 ≤
√
C1 n. Therefore, kzi kψ1 ≤ C2 np and there holds
kZi,2 kψ1 ≤ C0
r
√
np
n
n
n
+√
+
.
≤ C0
m
mp
m
m
32
(5.27)
Pm
∗
Estimation of σ02 Note that σ02 :=
i=1 E(Zi,2 Zi,2 ) . Let Ai,1,1 and Ai,2,2 be the (1, 1)-th and
∗ respectively, i.e.,
(2, 2)-th block of Zi,2 Zi,2
Ai,1,1 =
Ai,2,2 =
=
1
k(Ip ⊗ (ai a∗i − In ))vk2 ei e∗i ,
mp
1
1
(Ip ⊗ (ai a∗i − In ))vv ∗ (Ip ⊗ (ai a∗i − In )) + 2 Ip ⊗ (ai a∗i − In )2
mp
m
1
1
[(ai a∗i − In )vk vl∗ (ai a∗i − In )]1≤k≤l≤p + 2 Ip ⊗ (ai a∗i − In )2 .
mp
m
(5.28)
∗ is a positive semi-definite matrix, Lemma 5.10 implies
Since Zi,2 Zi,2
σ02
m
X
Ai,1,1
0
E
.
≤2
0
Ai,2,2
(5.29)
i=1
So we only need to compute E(Ai,1,1 ) and E(Ai,2,2 ).
E k(Ip ⊗
(ai a∗i
2
− In ))vk =
p
X
l=1
E k(ai a∗i
where E(ai a∗i − In )2 = nIn . Now we have E Ai,1,1 =
2
− In )vl k = n
n
∗
m ei ei .
p
X
l=1
kvl k2 = np
For E(Ai,2,2 ), note that
E((ai a∗i − In )vk vl∗ (ai a∗i − In )) = Tr(vk vl∗ )In = hvl , vk i In = Gk,l In
which follows from (5.35). By (5.28), (5.29) and Lemma 5.10, there holds,
n
m n
X
ei e∗i
0
0
n
kGk
2
m
m
≤2
+
.
≤2
σ0 :≤ 2
n
n
1
Inp
0
0 1p G ⊗ In + m
p
m
mp G ⊗ In + m2 Inp
i=1
P
n
∗ ) is positive
each E(Zi,2 Zi,2
One lower bound of σ02 is σ02 ≥ k m
i=1 E(Ai,1,1 )k = m because
√
√
√
≤ log 2C0 √ n
≤ log(2C0 m) where R :=
semi-definite. Therefore, we have log σmR
0
n/m
pn
n
max1≤i≤m kZi,2 kψ1 ≤ CP
0( m +
m ) and m ≥ n.
Applying (5.34) to m
Z
i=1 i,2 with (5.29) and (5.27), we have
m
n n r n
X
+
(t + log(np + m)) log(np + m),
Zi,2
≤ C0 max
m
m
i=1
s
o
kGk
n
+
(log(np + m) + t)
(5.30)
p
m
with
1 − e−t . By combining (5.30) with (5.26) and letting t = γ log(np + m), we have
Pm probability
∗
k i=1 Zi Zi − Ck ≤ 21 with probability 1 − 2m(np + m)−γ if
p
γ log(np + m)kGkF
kGk
1
n
1
≤ , C0 (γ + 1)
+
log2 (np + m) ≤
C0
p
4
p
m
16
n
o
kGk2F
n
1
or equivalently, C0 max kGk
+m
.
log2 (np + m) ≤ 16(γ+1)
p , p2
33
5.4
Proof of the spectral method
We provide the proof of the spectral method proposed in Section 2.3. The proof follows two steps: i)
we provide an error bound for the recovery under noise by using singular value/vector perturbation.
The error bound involves the second smallest singular value of S0 and the noise strength kδSk; ii)
we give a lower bound for σ2 (S0 ), which is achieved with the help of Proposition 5.2, 5.4 and 5.6
respectively for three different models.
The first result is a variant of perturbation theory for the singular vector corresponding to the
smallest singular value. A more general version can be found in [45, 36].
Lemma 5.8. Suppose S = S0 + δS where S0 is the noiseless part of S and δS is the noisy part.
Then
kδSk
kα0 ẑ − z0 k
(I − ẑ ẑ ∗ )z0
≤
=
.
min
kz
k
kz
k
[σ
(S
α0 ∈C
0
0
2 0 ) − kδSk]+
where σ2 (S0 ) is the second smallest singular value of S0 , z0 satisfies S0 z0 = 0, and ẑ is the right
singular vector with respect to the smallest singular value of S, i.e., the solution to (2.10).
s
Proof: By definition, there holds S0 z0 = 0 where z0 = 0 is the ground truth and also the
x0
right singular vector corresponding to the smallest singular value. Without loss of generality, we
assume kz0 k = kẑk = 1. For S, we denote its singular value decomposition as
S := S − σ1 (S)ûẑ ∗ + σ1 (S)ûẑ ∗
{z
} | {z }
|
B1
B0
where σ1 (S), û and ẑ are the smallest singular value/vectors of S. By definition, the vector ẑ is
also the solution to (2.10).
First note that I − ẑ ẑ ∗ = B1∗ (B1∗ )† where (B1∗ )† is the pseudo-inverse of B1 . Therefore, we
have
(I − ẑ ẑ ∗ )z0
kz0 k
= kz0 z0∗ B1∗ (B1∗ )† k = kz0 z0∗ (S0∗ + (δS)∗ − (B0 )∗ )(B1∗ )† k
= kz0 z0∗ (δS)∗ (B1∗ )† k ≤ kδSkk(B1∗ )† k ≤
kδAk
kδSk
≤
σ2 (S)
[σ2 (S0 ) − kδSk]+
where S0 z0 = 0 and (B0 )∗ (B1∗ )† = 0. And the last inequality follows from |σ2 (S0 ) − σ2 (S)| ≤ kδSk
and σ2 (S) ≥ [σ2 (S0 ) − kδSk]+ .
The second smallest singular value σ2 (S0 ) is estimated by using the proposition 5.2, 5.4 and 5.6
and the following fact:
Lemma 5.9. Suppose P is an invertible matrix, and A is a positive semi-definite matrix with the
dimension of its null space equal to 1. Then the second smallest singular value of P AP ∗ is nonzero
and satisfies
σ2 (P AP ∗ ) ≥ σ2 (A)σ12 (P )
where σ1 (·) and σ2 (·) denotes the smallest and second smallest singular values respectively.
34
Proof: The proof is very straightforward. Note that k(P AP ∗ )† k =
deficient. Also from the property of pseudo inverse, there holds
k(P AP ∗ )† k = k(P −1 )∗ A† P −1 k ≤ kA† kkP −1 k2 =
1
σ2 (P AP ∗ )
since A is rank-1
1
1
.
2
σ2 (A) σ1 (P )
Hence, σ2 (P AP ∗ ) ≥ σ2 (A)σ12 (P ).
Proof: [Proof of Theorem 3.9] Combined with Lemma 5.8, it suffices to estimate the lower
bound of the second smallest singular P
value of S0 for the proposed three models. We start with
∗
(a). From (5.1), we know that S0 S0 = pl=1 P Zl Zl∗ P ∗ where
"
#
∗
Λl
D kxk
0
(m+n)×m
√
∈ C(m+n)×(m+n) .
Zl :=
∈
C
,
P
:=
− √1m A∗l
mIn
0
P
From Proposition 5.2, we know that the second smallest eigenvalue of pl=1 Zl Zl∗ is at least
it is also rank-1 deficient. Applying Lemma 5.9 gives
!
p
X
√
p
Zl Zl∗ σ1 (P P ∗ ) ≥ (min{ m, dmin kxk})2
σ2 (S0∗ S0 ) ≥ σ2
2
p
2
and
l=1
and hence σ2 (S0 ) ≥
q
p
2
√
min{ m, dmin kxk}.
Since (b) and (c) are exactly the same, it suffices to show (b). From (5.15), we know that S0
can be factorized into A0 := QZP and Proposition 5.4 implies
"
#
√1 1m v ∗
I
−
m
1
mp
.
kZ ∗ Z − Ck ≤ , C := E(Z ∗ Z) =
1
− √mp
v1∗m
Inp
2
Therefore, S0∗ S0 = QZ ∗ P P ∗ ZQ∗ and σ2 (Z ∗ Z) ≥ 12 . Applying Lemma 5.9 leads to
σ2 (S0∗ S0 ) ≥ σ2 (Z ∗ P P ∗ Z)σ12 (Q) ≥ σ2 (Z ∗ Z)σ12 (P )σ12 (Q)
√ 2
√
m
1 2
pdmin ,
≥ xmin min
2
xmax
where xmin = min{kxl k} and xmax = max{kxl k}.
Appendix
S11 S12
, there
Lemma 5.10. For any Hermitian positive semi-definite matrix S with S :=
∗
S12
S22
holds,
S11 0
S11 S12
2
.
∗
S22
S12
0 S22
In other words, kSk ≤ 2 max{kS11 k, kS22 k}.
35
Lemma 5.11. Corollary 7.21 in [17]. Let a ∈ CM and ε = (ε1 , · · · , εM ) be a Rademacher sequence,
then for u > 0,
M
X
P
εj aj ≥ kaku ≤ 2 exp(−u2 /2).
j=1
For Gaussian and random Hadamard cases, the concentration inequalities are slightly different.
The following theorem is mostly due to Theorem 6.1 in [39] as well as due to [42].
Theorem 5.12. Consider a finite sequence of Zl of independent centered random matrices
PL with
dimension M1 × M2 . We assume that kZl k ≤ R and introduce the random matrix S := l=1 Zl .
Compute the variance parameter
σ02
L
L
n X
o
X
∗
= max k
E(Zl Zl )k, k
E(Zl∗ Zl )k ,
l=1
then for all t ≥ 0
kSk ≤ C0 max{σ0
p
(5.31)
l=1
t + log(M1 + M2 ), R(t + log(M1 + M2 ))}
(5.32)
with probability at least 1 − e−t where C0 is an absolute constant.
The concentration inequality is slightly different from Theorem 5.12 if kZl k is a sub-exponential
random variable. Here we are using the form in [24]. Before presenting the inequality, we introduce
the sub-exponential norm k · kψ1 of a matrix, defined as
kZkψ1 := inf {u : E[exp(kZk/u)] ≤ 2}.
u≥0
(5.33)
One can find more details about this norm and norm on Orlicz spaces in [42] and [40].
Theorem 5.13. For a finite sequence of independent M1 × M2 random matrices Zl with R :=
max1≤l≤L kZl kψ1 and σ02 as defined in (5.31), we have the tail bound on the operator norm of S,
!
√
p
LR
(t + log(M1 + M2 ))}
(5.34)
kSk ≤ C0 max{σ0 t + log(M1 + M2 ), R log
σ0
with probability at least 1 − e−t where C0 is an absolute constant.
The estimation of the ψ1 -norm of a sub-exponential random variable easily follows from the
following lemma.
Lemma 5.14 (Lemma 2.2.1 in [40]). Let z be a random variable which obeys P{|z| > u} ≤ ae−bu ,
then kzkψ1 ≤ (1 + a)/b.
Remark 5.15. A direct implication of Lemma 5.14 gives the ψ1 -norm of kak2 where a ∼ N (0, 12 In )+
iN (0, 21 In ) is a complex Gaussian random vector, i.e., (kak2 )ψ1 ≤ C0 n for the absolute constant
C0 .
36
Lemma 5.16. For a ∼ N (0, 12 In ) + iN (0, 21 In ), there holds
E(kak2 aa∗ ) =
∗
∗
E((a Xa)aa ) =
(n + 1)In ,
X + Tr(X)In ,
(5.35)
for any fixed x ∈ Cn and X ∈ Cn×n . In particular, we have
E | ha, xi |2 aa∗
(5.36)
4
= kxk2 In + xx∗ ,
= 2kxk ,
(5.37)
2
= nIn .
(5.38)
E | ha, xi |
∗
E(aa − In )
4
Lemma 5.17. Suppose that a ∈ Rn is a Rademacher sequence and for any fixed x ∈ Cn and
X ∈ Cn×n , there holds
E(kak2 aa∗ )
E((a∗ Xa)aa∗ )
= nIn ,
(5.39)
= Tr(X)In + X + X T − 2
n
X
Xkk Ekk
(5.40)
k=1
where Ekk is an n × n matrix with only one nonzero entry equal to 1 and at position (k, k). In
particular, setting X = xx∗ gives
E | ha, xi |2 aa∗
E | ha, xi |4
= kxk2 In + xx∗ + xx∗ − 2 diag(x) diag(x̄) 3kxk2 In ,
= 2kxk4 +
n
X
2
x2i
k=1
−2
n
X
k=1
|xk |4 ≤ 3kxk4 .
(5.41)
(5.42)
Proof: Since a is a Rademacher sequence, i.e, each ai takes ±1 independently with equal probability, this implies a2i =P1 and
kak2 = n. Therefore, E(kak2 aa∗ ) = n E(aa∗ ) = nIn . The (k, l)-th
P
entry of (a∗ Xa)aa∗ is ni=1 nj=1 Xij ai aj ak al .
1. If k = l,
n
n
n X
X
X
2
E(Xii |ai |2 |ak |2 ) = Tr(X)
Xij ai aj |ak |
=
E
i=1
i=1 j=1
where E(ai aj |ak |2 ) = 0 if i 6= j.
2. If k 6= l,
n X
n
X
Xij ai aj ak al = Xkl E(|ak |2 |al |2 ) + Xlk E(|ak |2 |al |2 ) = Xkl + Xlk .
E
i=1 j=1
Hence, we have E((a∗ Xa)aa∗ ) = Tr(X)In + X + X T − 2
Lemma 5.18. There holds
E(ΛA(ΛA)∗ ) =
Pn
k=1 Xkk Ekk .
(n − 1)(mIm − 1m 1∗m )
+ 1m 1∗m
m−1
37
where A = HM , Λ = diag(Av) and v = (v1 , · · · , vn )T ∈ Cn is a deterministic unit vector.
H ∈ Cm×n is a random partial Fourier/Hadamard matrix with H ∗ H = mIn and m ≥ n, i.e., the
columns of H are uniformly sampled without replacement from an m × m DFT/Hadamard matrix;
M is a diagonal matrix with entries random sampled from ±1 with equal probability; moreover, we
assume M and H are independent from each other. In particular, if m = n = 1, E(ΛA(ΛA)∗ ) = 1.
Proof: We only prove the case when A is a random Fourier matrix since the Hadamard case is
essentially the same modulo very minor differences. By definition,
ΛAA∗ Λ∗ = ΛHH ∗ Λ∗ = diag(HM v)HH ∗ diag(HM v).
Let hi be the i-th column of H ∗ and the (i, j)-th entry of diag(HM v)HH ∗ diag(HM v) is
hhi , M vi hhj , M vi hhi , hj i where hu, vi = u∗ v. The randomness of ΛAA∗ Λ∗ comes from both H
and M and we first take the expectation with respect to M .
E(hhi , M vi hhj , M vi |H)
= h∗i E(M vv ∗ M |H)hj = h∗i diag(v) diag(v̄)hj
= (H diag(v) diag(v̄)H ∗ )i,j
where E(M vv ∗ M ) = diag(v) diag(v̄) follows from each entry in M being a Bernoulli random
variable. Hence, E((ΛAA∗ Λ∗ )i,j |H) = (H diag(v) diag(v̄)H ∗ )i,j · (HH ∗ )i,j .
Let uk be the k-th columnPof H and 1 ≤ k ≤ n and “⊙” denotes the Hadamard (pointwise)
product. So we have HH ∗ = nk=1 uk u∗k . There holds,
!
n
X
|vk |2 ūk ū∗k ⊙ HH ∗
E(ΛAA∗ Λ∗ |H) = (H diag(v) diag(v̄)H ∗ ) ⊙ HH ∗ =
k=1
=
n
X
k=1
=
|vk |2 diag(ūk )HH ∗ diag(uk ) =
X
1≤k6=l≤n
2
|vk |
n
n X
X
k=1 l=1
diag(ūk )ul u∗l diag(uk ) +
|vk |2 diag(ūk )ul u∗l diag(uk )
1m 1∗m
(5.43)
where the third equation follows from linearity of the Hadamard product and from
ūk ū∗k ⊙ HH ∗ = diag(ūk )HH ∗ diag(uk ).
The last one uses the fact that diag(ūk )uk = 1m if uk is a vector from the DFT matrix or
Hadamard matrix. By the property of conditional expectation, we know that E(ΛAA∗ Λ∗ ) =
E(E(ΛAA∗ Λ∗ )|H) and due to the linearity of expectation, it suffices to find out for k 6= l,
E(diag(ūk )ul u∗l diag(uk )) where uk and ul , by definition, are the k-th and l-th columns of H
which are sampled uniformly without replacement from an m × m DFT matrix F . Note that
(uk , ul ) is actually an ordered pair of random vectors sampled without replacement from columns
of F . Hence there are in total m(m − 1) different choices of diag(ūk )ul and
P(uk = φi , ul = φj ) =
1
,
m(m − 1)
38
∀1 ≤ i 6= j ≤ m, ∀1 ≤ k 6= l ≤ n
where φi is defined as the i-th column of an m × m DFT matrix F . Now we have, for any k 6= l,
E(diag(ūk )ul u∗l diag(uk )) =
=
=
X
1
diag(φ̄i )φj φ∗j diag(φi )
m(m − 1)
i6=j
X
1
diag(φ̄i )φj φ∗j diag(φi ) − m1m 1∗m
m(m − 1)
1≤i,j≤m
!
m
X
1
∗
diag(φ̄i ) diag(φi ) − 1m 1m
m−1
i=1
=
mIm − 1m 1∗m
.
m−1
(5.44)
P
∗
∗ ∗
where diag(φ̄i )φi = 1m and m
i=1 φi φi = mIm . Now we return to E(ΛAA Λ ). By substituting (5.44) into (5.43), we get the desired formula:
X
E(ΛAA∗ Λ∗ ) = E(E(ΛAA∗ Λ∗ |H)) = E
|vk |2 diag(uk )ul u∗l diag(uk ) + 1m 1∗m
1≤k6=l≤n
=
where
P
2
1≤k6=l≤n |vk |
1m 1∗m
mIm −
m−1
=n−
Pn
X
1≤k6=l≤n
2
k=1 |vk |
|vk |2 + 1m 1∗m =
= n − 1 follows from
(n − 1)(mIm − 1m 1∗m )
+ 1m 1∗m ,
m−1
Pm
2
k=1 |vk |
= 1.
References
[1] A. Ahmed, A. Cosse, and L. Demanet, A convex approach to blind deconvolution with
diverse inputs, in 2015 IEEE 6th International Workshop on Computational Advances in MultiSensor Adaptive Processing (CAMSAP), IEEE, 2015, pp. 5–8.
[2] A. Ahmed and L. Demanet, Leveraging diversity and sparsity in blind deconvolution, arXiv
preprint arXiv:1610.06098, (2016).
[3] A. Ahmed, F. Krahmer, and J. Romberg, Empirical chaos processes and blind deconvolution, arXiv preprint arXiv:1608.08370, (2016).
[4] A. Ahmed, B. Recht, and J. Romberg, Blind deconvolution using convex programming,
IEEE Transactions on Information Theory, 60 (2014), pp. 1711–1732.
[5] S. Bahmani and J. Romberg, Lifting for blind deconvolution in random mask imaging:
Identifiability and convex relaxation, SIAM Journal on Imaging Sciences, 8 (2015), pp. 2203–
2238.
[6] L. Balzano and R. Nowak, Blind calibration of sensor networks, in Proceedings of the 6th
International Conference on Information processing in sensor networks, ACM, 2007, pp. 79–88.
[7] L. Balzano and R. Nowak, Blind calibration of networks of sensors: Theory and algorithms,
in Networked Sensing Information and Control, Springer, 2008, pp. 9–37.
39
[8] C. Bilen, G. Puy, R. Gribonval, and L. Daudet, Convex optimization approaches for
blind sensor calibration using sparsity, IEEE Transactions on Signal Processing, 62 (2014),
pp. 4847–4856.
[9] V. Cambareri and L. Jacques, A greedy blind calibration method for compressed sensing
with unknown sensor gains, arXiv preprint arXiv:1610.02851, (2016).
[10] V. Cambareri and L. Jacques, A non-convex blind calibration method for randomised
sensing strategies, in Compressed Sensing Theory and its Applications to Radar, Sonar and
Remote Sensing (CoSeRa), 2016 4th International Workshop on, IEEE, 2016, pp. 16–20.
[11] V. Cambareri and L. Jacques, Through the haze: A non-convex approach to blind calibration for linear random sensing models, arXiv preprint arXiv:1610.09028, (2016).
[12] P. Campisi and K. Egiazarian, Blind Image Deconvolution: Theory and Applications, CRC
press, 2007.
[13] E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski, Phase retrieval via
matrix completion, SIAM Review, 57 (2015), pp. 225–251.
[14] S. Curtis, J. Lim, and A. Oppenheim, Signal reconstruction from one bit of fourier transform phase, in 1984 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 9, IEEE, 1984, pp. 487–490.
[15] M. A. Davenport and J. Romberg, An overview of low-rank matrix recovery from incomplete observations, IEEE Journal of Selected Topics in Signal Processing, 10 (2016), pp. 608–
622.
[16] J. R. Fienup, Reconstruction of an object from the modulus of its fourier transform, Optics
Letters, 3 (1978), pp. 27–29.
[17] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer,
2013.
[18] B. Friedlander and T. Strohmer, Bilinear compressed sensing for array self-calibration,
in 2014 48th Asilomar Conference on Signals, Systems and Computers, Asilomar, 2014.
[19] L. Gan, T. T. Do, and T. D. Tran, Fast compressive imaging using scrambled block
hadamard ensemble, in 2008 16th European Signal Processing Conference, IEEE, 2008, pp. 1–
5.
[20] R. Gribonval, G. Chardon, and L. Daudet, Blind calibration for compressed sensing by
convex optimization, in 2012 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), IEEE, 2012, pp. 2713–2716.
[21] G. Harikumar and Y. Bresler, Perfect blind restoration of images blurred by multiple
filters: Theory and efficient algorithms, IEEE Transactions on Image Processing, 8 (1999),
pp. 202–219.
40
[22] A. Ito, A. C. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, Blurburst: Removing blur due to camera shake using multiple images, ACM Trans. Graph., Submitted, 3 (2014).
[23] M. Kech and F. Krahmer, Optimal injectivity conditions for bilinear inverse problems with
applications to identifiability of deconvolution problems, SIAM Journal on Applied Algebra and
Geometry, 1 (2017), pp. 20–37.
[24] V. Koltchinskii et al., Von Neumann entropy penalization and low-rank matrix estimation,
The Annals of Statistics, 39 (2011), pp. 2936–2973.
[25] K. Lee, Y. Li, M. Junge, and Y. Bresler, Blind recovery of sparse signals from subsampled
convolution, IEEE Transactions on Information Theory, 63 (2017), pp. 802–821.
[26] X. Li, S. Ling, T. Strohmer, and K. Wei, Rapid, robust, and reliable blind deconvolution
via nonconvex optimization, arXiv preprint arXiv:1606.04933, (2016).
[27] Y. Li, K. Lee, and Y. Bresler, Identifiability in blind deconvolution with subspace or
sparsity constraints, IEEE Transactions on Information Theory, 62 (2016), pp. 4266–4275.
[28] Y. Li, K. Lee, and Y. Bresler, Optimal sample complexity for blind gain and phase calibration, IEEE Transactions on Signal Processing, 64 (2016), pp. 5549–5556.
[29] S. Ling and T. Strohmer, Blind deconvolution meets blind demixing: Algorithms and performance bounds, arXiv preprint arXiv:1512.07730, (2015).
[30] S. Ling and T. Strohmer, Self-calibration and biconvex compressive sensing, Inverse Problems, 31 (2015), p. 115002.
[31] J. Lipor and L. Balzano, Robust blind calibration via total least squares, in 2014 IEEE
International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2014,
pp. 4244–4248.
[32] V. I. Morgenshtern, E. Riegler, W. Yang, G. Durisi, S. Lin, B. Sturmfels, and H. Bölcskei, Capacity pre-log of noncoherent SIMO channels via Hironaka’s Theorem, IEEE Transactions on Information Theory, 59 (2013), pp. 4213–4229,
http://www.nari.ee.ethz.ch/commth/pubs/p/mrydlsb12.
[33] M. Pollefeys, R. Koch, and L. Van Gool, Self-calibration and metric reconstruction inspite of varying and unknown intrinsic camera parameters, International Journal of Computer
Vision, 32 (1999), pp. 7–25.
[34] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2003.
[35] P. J. Shin, P. E. Larson, M. A. Ohliger, M. Elad, J. M. Pauly, D. B. Vigneron,
and M. Lustig, Calibrationless parallel imaging reconstruction based on structured low-rank
matrix completion, Magnetic Resonance in Medicine, 72 (2014), pp. 959–970.
[36] G. W. Stewart, Perturbation theory for the singular value decomposition, Technical Report
CS-TR 2539, University of Maryland, (September 1990).
41
[37] G. Tang and B. Recht, Convex blind deconvolution with random masks, in Computational
Optical Sensing and Imaging, Optical Society of America, 2014, pp. CW4C–1.
[38] L. Tong, G. Xu, B. Hassibi, and T. Kailath, Blind identification and equalization based
on second-order statistics: A frequency domain approach, IEEE Transactions on Information
Theory, 41 (1995), pp. 329–334.
[39] J. A. Tropp, User-friendly tail bounds for sums of random matrices, Foundations of Computational Mathematics, 12 (2012), pp. 389–434.
[40] A. Van der Vaart and J. Wellner, Weak Convergence and Empirical Processes: with
Applications to Statistics, Springer Series in Statistics, Springer-Verlag, New York, 1996.
[41] S. Verdú and S. Shamai, Spectral efficiency of cdma with random spreading, IEEE Transactions on Information theory, 45 (1999), pp. 622–640.
[42] R. Vershynin, Introduction to the non-asymptotic analysis of random matrices, in Compressed Sensing: Theory and Applications, Y. C. Eldar and G. Kutyniok, eds., Cambridge
University Press, 2012, ch. 5.
[43] L. Wang and Y. Chi, Blind deconvolution from multiple sparse inputs, IEEE Signal Processing Letters, 23 (2016), pp. 1384–1388.
[44] L. Wang, A. Singer, and Z. Wen, Orientation determination of cryo-em images using
least unsquared deviations, SIAM journal on imaging sciences, 6 (2013), pp. 2450–2483.
[45] P.-Å. Wedin, Perturbation bounds in connection with singular value decomposition, BIT Numerical Mathematics, 12 (1972), pp. 99–111.
[46] M. Wei, The perturbation of consistent least squares problems, Linear Algebra and its Applications, 112 (1989), pp. 231–245.
[47] A. J. Weiss and B. Friedlander, Eigenstructure methods for direction finding with sensor
gain and phase uncertainties, Circuits, Systems, and Signal Processing, 9 (1990), pp. 271–300.
42
| 7 |
ATPboost: Learning Premise Selection
in Binary Setting with ATP Feedback
Bartosz Piotrowski1,2 and Josef Urban1?
1
arXiv:1802.03375v1 [] 9 Feb 2018
2
Czech Institute of Informatics, Robotics and Cybernetics, Prague, Czech Republic
Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland
Abstract. ATPboost is a system for solving sets of large-theory problems by interleaving ATP runs with state-of-the-art machine learning
of premise selection from the proofs. Unlike many previous approaches
that use multi-label setting, the learning is implemented as binary classification that estimates the pairwise-relevance of (theorem, premise)
pairs. ATPboost uses for this the XGBoost gradient boosting algorithm,
which is fast and has state-of-the-art performance on many tasks. Learning in the binary setting however requires negative examples, which is
nontrivial due to many alternative proofs. We discuss and implement several solutions in the context of the ATP/ML feedback loop, and show that
ATPboost with such methods significantly outperforms the k-nearest
neighbors multilabel classifier.
Keywords: automated theorem proving · machine learning · formalized
mathematics
1
Introduction: Machine Learning for Premise Selection
Assume that c is a conjecture which is a logical consequence of a large set of
premises P . The chance of finding a proof of c by an automated theorem prover
(ATP) often depends on choosing a small subset of P relevant for proving c.
This is known as the premise selection task [1]. This task is crucial to make
ATPs usable for proof automation over large formal corpora created with systems such as Mizar, Isabelle, HOL, and Coq [4]. Good methods for premise
selection typically also transfer to related tasks, such as internal proof guidance
of ATPs [8,10,13,17] and tactical guidance of ITPs [7].
The most efficient premise selection methods use data-driven/machine-learning
approaches. Such methods work as follows. Let T be a set of theorems with their
proofs. Let C be a set of conjectures without proofs, each associated with a
set of available premises that can be used to prove them. We want to learn a
(statistical) model from T , which for each conjecture c ∈ C will rank its available premises according to their relevance for producing an ATP proof of c. Two
different machine learning settings can be used for this task:
?
Supported by the AI4REASON ERC Consolidator grant number 649043, and by
the Czech project AI&Reasoning CZ.02.1.01/0.0/0.0/15_003/0000466 and the European Regional Development Fund.
2
1. multilabel classification: we treat premises used in the proofs as opaque labels
and we create a model capable of labeling conjectures based on their features,
2. binary classification: here the aim of the learning model is to recognize
pairwise-relevance of the (conjecture, premise) pairs, i.e. to decide what is
the chance of a premise being relevant for proving the conjecture based on
the features of both the conjecture and the premise.
Most of the machine learning methods for premise selection have so far used
the first setting [3,9,11]. This includes fast and robust machine learning algorithms such as naive Bayes and K nearest neighbors (k-NN) capable of multilabel classification with many examples and labels. This is needed for large
formal libraries with many facts and proofs. There are however several reasons
why the second approach may be better:
1. Generality: in binary classification it is easier to estimate the relevance of
(conjecture, premise) pairs where the premise was so far unseen (i.e., not in
the training data).
2. State-of-the-art ML algorithms are often capable of learning subtle aspects of
complicated problems based on the features. The multilabel approach trades
the rich feature representation of the premise for its opaque label.
3. Many state-of-the-art ML algorithms are binary classifiers or they struggle
when performing multilabel classification for a large number of labels.
Recently, substantial work [2] has been done in the binary setting. In particular,
applying deep learning to premise selection has improved state of the art in
the field. There are however modern and efficient learning algorithms such as
XGBoost [5] that are much less computationally-intensive then deep learning
methods. Also, obtaining negative examples for training the binary classifiers is
a very interesting problem in the context of many alternative ATP proofs and a
feedback loop between the ATP and the learning system.
1.1
Premise Selection in Binary Setting with Multiple Proofs
The existence of multiple ATP proofs makes premise selection different from
conventional machine learning applications. This is evident especially in the binary classification setting. The ML algorithms for recognizing pairwise relevance
of (conjecture, premise) pairs require good data consisting of two (typically balanced) classes of positive and negative examples. But there is no conventional
way how to construct such data in our domain. For every true conjecture there
are infinitely many mathematical proofs. The ATP proofs are often based on
many different sets of premises. The notions of useful or superfluous premise are
only approximations of their counterparts defined for sets of premises.
As an example, consider the following frequent situation: a conjecture c can
be ATP-proved with two sets of axioms: {p1 , p2 } and {p3 , p4 , p5 }. Learning only
from one of the sets as positives and presenting the other as negative (conjecture,
premise) pairs may considerably distort the learned notion of a useful premise.
This differs from the multilabel setting, where negative data are typically not
used by the fast ML algorithms such as naive Bayes and k-NN. They just aggregate different positive examples into the final ranking.
3
Therefore, to further improve the premise selection algorithms it seems useful
to consider learning from multiple proofs and to develop methods producing
good negative data. The most suitable way how to do that is to allow multiple
interactions of the machine learner with the ATP system. In the following section
we present the ATPboost system, which implements several such algorithms.
2
ATPboost: Setting, Algorithms and Components
ATPboost3 is a system for solving sets of large-theory problems by interleaving
ATP runs with learning of premise selection from the proofs using the stateof-the-art XGBoost algorithm. The system implements several algorithms and
consists of several components described in the following sections. Its setting
is a large theory T , extracted from a large ITP library where facts appear in a
chronological order. In more detail, we assume the following inputs and notation:
1. T – names of theorems (and problems) in a large theory T .
2. P – names of all facts (premises) in T . We require P ⊇ T .
3. StatementsP of all p ∈ P in the TPTP format [15] .
4. FeaturesP – characterizing each p ∈ P . Here we use the same features as
in [11] and write fp for the (sparse) vector of features of p.
5. OrderP (<P ) – total order on P ; p may be used to prove t iff p <P t. We
write At for {p : p <P t}, i.e. the set of premises allowed for t.
6. ProofsT 0 for a subset T 0 ⊆ T . Each t ∈ T 0 may have many proofs – denoted
by Pt . Pt denotes the premises needed for at least one proof in Pt .
2.1
Algorithms
We first give a high-level overview and pseudocode of the algorithms implemented in ATPboost. Section 2.2 then describes the used components in detail.
Algorithm 1 is the simplest setting. Problems are split into the train/test
sets, XGBoost learns from the training proofs, and its predictions are ATPevaluated on the test set. This is used mainly for parameter optimization.
Algorithm 2 evaluates the trained XGBoost also on the training part, possibly
finding new proofs that are used to update the training data for the next
iteration. The test problems and proofs are never used for training. Negative mining may be used to find the worst misclassified premises and to
correspondingly update the training data in the next iteration.
Algorithm 3 begins with no training set, starting with ATP runs on random
rankings. XGBoost is trained on the ATP proofs from the previous iteration,
producing new ranking for all problems for the next iteration. This is a
MaLARea-style [16] feedback loop between the ATP and the learner.
2.2
Components
Below we describe the main components of the ATPboost algorithms and the
main ideas behind them. As discussed in Section 1, they take into account the
3
The Python package is at https://github.com/BartoszPiotrowski/ATPboost.
4
binary learning setting, and in particular implement the need to teach the system
about multiple proofs by proper choice of examples, continuous interaction with
the ATP and intelligent processing of its feedback. The components are available
as procedures in our Python package.
Algorithm 1 Simple training/test split.
Require: Set of theorems T , set of premises P ⊇ T , ProofsT , FeaturesP , StatementsP , OrderP ,
paramsset , paramsmodel .
1: Ttrain , Ttest ← RandomlySplit(T )
2: D ← CreateTrainingSet(ProofsTtrain , FeaturesP , OrderP , paramsset )
3: M ← TrainModel(D, paramsmodel )
4: R ← CreateRankings(Ttest , M, FeaturesP , OrderP )
5: P ← ATPevaluation(R, StatementsP )
Algorithm 2 Incremental feedback-loop with training/test split.
Require: Set of theorems T , set of premises P ⊇ T , FeaturesP , StatementsP , ProofsT , OrderP ,
paramsset , paramsmodel , paramsnegmin (optionally).
1: Ttrain , Ttest ← RandomlySplit(T )
2: D ← CreateTrainingSet(ProofsTtrain , FeaturesP , OrderP , paramsset )
3: repeat
4:
M ← TrainModel(D, paramsmodel )
5:
Rtrain ← CreateRankings(Ttrain , M, FeaturesP , OrderP )
6:
Rtest ← CreateRankings(Ttest , M, FeaturesP , OrderP )
7:
Ptrain ← ATPevaluation(Rtrain , StatementsP )
8:
Ptest ← ATPevaluation(Rtest , StatementsP )
9:
Update(Proofstrain , Ptrain )
10:
Update(Proofstest , Ptest )
11:
if paramsnegmin then
12:
D ← NegativeMining(R, Proofstrain , FeaturesP , OrderP , paramsnegmin )
13:
else
14:
D ← CreateTrainingSet(Proofstrain , FeaturesP , OrderP , paramsset )
15: until Number of Proofstest increased after Update.
Algorithm 3 Incremental feedback-loop starting with no proofs.
Require: Set of theorems T , set of premises P ⊇ T , FeaturesP , StatementsP , OrderP , paramsset ,
paramsmodel , paramsnegmin (optionally).
1: ProofsT ← ∅
2: R ← CreateRandomRankings(T )
3: P ← ATPevaluation(R, StatementsP )
4: Update(ProofsT , P)
5: D ← CreateTrainingSet(ProofsT , FeaturesP , OrderP , paramsset )
6: repeat
7:
M ← TrainModel(D, paramsmodel )
8:
R ← CreateRankings(T, M, FeaturesP , OrderP )
9:
P ← ATPevaluation(R, StatementsP )
10:
Update(ProofsT , P)
11:
if paramsnegmin then
12:
D ← NegativeMining(R, ProofsT , FeaturesP , OrderP , paramsnegmin )
13:
else
14:
D ← CreateTrainingSet(ProofsT , FeaturesP , OrderP , paramsset )
15: until Number of ProofsT increased after Update.
CreateTrainingSet(ProofsT , FeaturesP , OrderP , params). This
procedure constructs a TrainingSet for a binary learning algorithm. This is
a sparse matrix of positive/negative examples and a corresponding vector of
5
binary labels. The examples (matrix rows) are created from ProofsT and FeaturesP , respecting OrderP . Each example is a concatenation of ft and fp ,
i.e., the features of a theorem t and a premise p. Positive examples express that
p is relevant for proving t, whereas the negatives mean the opposite.
The default method (simple) creates positives from all pairs (t, p) where
p ∈ Pt . Another method (short) creates positives only from the short proofs of
t. These are the proofs of t with at most m + 1 premises, where m is the minimal
number of premises used in a proof from Pt . Negative examples for theorem t
are chosen randomly from pairs (t, p) where p ∈ At \ Pt . The number of such
randomly chosen pairs is ratio · Npos , where Npos is the number of positives
and ratio∈ N is a parameter that needs to be optimized experimentally. Since
|At \ Pt | is usually much larger than |Pt |, it seems reasonable to have a large
ratio. This however increases class imbalance and the probability of presenting
to the learning algorithm a false negative. This is a pair (t, p) where p ∈
/ Pt , but
there is an ATP proof of t using p that is not yet in our dataset.
TrainModel(TrainingSet, params). This procedure trains a binary
learning classifier on the TrainingSet, creating a Model. We use XGBoost [5]
– a state-of-the-art tree-based gradient boosting algorithm performing very well
in machine learning competitions. It is also much faster to train compared to
deep learning methods, performs well with unbalanced training sets, and is optimized for working with sparse data. XGBoost has several important parameters,
such as numberOfTrees, maxDepth (of trees) and eta (learning rate). These
parameters have significant influence on the performance and require tuning.
CreateRankings(C, Model, FeaturesP , OrderP ). This procedure
uses the trained Model to construct RankingsC of premises from P for conjectures c ∈ C ⊆ T . Each conjecture c is paired with each premise p <P c and
concatenations of fc and fp are passed to the Model. The Model outputs a
real number in [0, 1], which is interpreted as the relevance of p for proving c. The
relevances are then used to sort the premises into RankingsC .
ATPevaluation(Rankings, Statements). Any ATP can be used for
evaluation. By default we use E [14] 4 . As usual, we construct the ATP problems
for several top slices (lengths 1, 2, . . . , 512) of the Rankings. To remove redundant premises we pseudo-minimize the proofs: only the premises needed in the
proofs are used as axioms and the ATP is rerun until a fixpoint is reached.
Update(OldProofs, NewProofs). The Update makes a union of the
new and old proofs, followed by a subsumption reduction. I.e., if premises of two
proofs of t are in a superset relation, the proof with the larger set is removed.
NegativeMining(ProofsT , RankingsT , FeaturesP , OrderP ,
params). This is used as a more advanced alternative to CreateTrainingSet. It examines the last RankingsT for the most misclassified positives.
I.e., for each t ∈ T we create a set MP t of those p that were previously ranked
high for t, but no ATP proof of t was using p. We define three variants:
4
The default time limit is 10 seconds and the memory limit is 2GB. The exact default
command is: ./eprover –auto-schedule –free-numbers -s -R –cpu-limit=10
–memory-limit=2000 –print-statistics -p –tstp-format problem_file
6
1. negmin_all: Let mt be the maximum rank of a t-useful premise (p ∈ Pt )
in RankingsT [t]. Then MP 1t = {p : rankt (p) < mt ∧ p ∈
/ Pt }.
2. negmin_rand: We randomly choose into MP 2t only a half of MP 1t .
3. negmin_1: MP 3t = {p : rankt (p) < |Pt | ∧ p ∈
/ Pt }.
The set MP it is then added as negatives to the examples produced by the CreateTrainingSet procedure. The idea of such negative mining is that the learner
takes into account the mistakes it made in the previous iteration.
3
Evaluation
We evaluate5 the algorithms on a set of 1342 MPTP2078 [1] large (chainy)
problems that are provable in 60s using their small (bushy) versions.
Parameter tuning: First we run Algorithm 1 to optimize the parameters.
The dataset was randomly split into a train set of 1000 problems and test set
of 342. For the train set, we use the proofs obtained by the 60s run on the
bushy versions. We tune the ratio parameter of CreateTrainingSet, and
the numberOfTrees, maxDepth and eta parameters of TrainModel. Due
to resource constraints we a priori assume good defaults: ratio = 16, numberOfTrees = 2000, maxDepth = 10, eta = 0.2. Then we observe how
changing each parameter separately influences the results. Table 1 shows the
ATP results for the ratio parameter, and Figure 1 for the model parameters.
It is clear that a high number of negatives is important. Using ratio = 16
1
2
4
8 16 32 64
ratio
Proved (%) 74.0 78.4 79.0 78.7 80.1 79.8 80.1
Table 1: Influence of the ratio of randomly generated negatives to positives.
Fig. 1: ATP performance of different parameters of the XGBoost model.
proves 6% more test problems than the balanced setting (ratio = 1). It is also
5
All the scripts we used for the evaluation are available at https://github.com/
BartoszPiotrowski/ATPboost/tree/master/experiments
7
clear that a higher number of trees – at least 500 – improves the results. However, too many trees (over 8000) slightly decrease the performance, likely due to
overfitting. The eta parameter gives best results with values between 0.04 and
0.64, and the maxDepth of trees should be around 10.
We evaluate Algorithm 1 also on a much bigger ATP-provable part of MML
with 29271 theorems in train part and 3253 in test. With parameters ratio = 20,
numberOfTrees = 4000, maxDepth = 10 and eta = 0.2 we proved 58.78%
theorems (1912). This is a 15.7% improvement over k-NN, which proved 50.81%
(1653) theorems. For a comparison, the improvement over k-NN obtained (with
much higher ATP time limits) with deep learning in [2] was 4.3%.
Incremental feedback loop with train/test split: This experiment evaluates Algorithm 2, testing different methods of negative mining. The train/test
split and the values of the parameters ratio, numberOfTrees, maxDepth,
eta are taken from the previous experiment. We test six methods in parallel.
Two XGB methods (simple and short) are the variants of the CreateTrainingSet procedure, three XGB methods (negmin_all, negmin_rand and
negmin_1) are the variants of NegativeMining, and the last one is a k-NN
learner similar to the one from [11], used here for comparison.
The experiment starts with the same proofs for training theorems as in the
previous one, and we performed 30 rounds of the feedback loop. Figure 2 shows
the results. All the new methods largely outperform k-NN, and XGB_short
Fig. 2: Number of proved theorems in subsequent iterations of Algorithm 2.
is much better than XGB_simple. I.e., positives from too many proofs seem
harmful, as in [12] where this was observed with k-NN. The differences between
the XGB variants short, negmin_1, negmin_all, and negmin_rand do
not seem significant and all perform well. At the end of the loop (30th round)
315-319 theorems of the 342 (ca 93%) are proved.
Incremental feedback-loop with no initial proofs: This is the final experiment which corresponds to the Algorithm 3 – there is no train/test split
and no initial proofs. The first ATP evaluation is done on random rankings,
proving 335 simple theorems out of the 1342. Than the feedback loop starts
running with the same options as in the previous experiment. Fig. 3 shows the
numbers of theorems that were proved in the subsequent rounds, as well as the
8
growth of the total number of different proofs. This is important, because all
these proofs are taken into account by the machine learning. Again, k-NN is the
weakest and XGB_simple is worse than the rest of the methods, which are
statistically indistinguishable. In the last round XGB_negmin_rand proves
1150 (86%) theorems. This is 26.8% more than k-NN (907) and 7.7% more than
XGB_simple (1068).
Fig. 3: Number of proved theorems (left) and number of all found proofs (right)
in subsequent rounds of the experiment corresponding to Algorithm 3.
References
1. J. Alama, T. Heskes, D. Kühlwein, E. Tsivtsivadze, and J. Urban. Premise selection
for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning,
52(2):191–213, 2014.
2. A. A. Alemi, F. Chollet, G. Irving, C. Szegedy, and J. Urban, editors. DeepMath
- Deep Sequence Models for Premise Selection, 2016.
3. J. C. Blanchette, D. Greenaway, C. Kaliszyk, D. Kühlwein, and J. Urban. A
learning-based fact selector for Isabelle/HOL. J. Autom. Reasoning, 57(3):219–
244, 2016.
4. J. C. Blanchette, C. Kaliszyk, L. C. Paulson, and J. Urban. Hammering towards
QED. J. Formalized Reasoning, 9(1):101–148, 2016.
5. T. Chen and C. Guestrin. Xgboost: A scalable tree boosting system. 2016.
6. T. Eiter and D. Sands, editors. LPAR-21, 21st International Conference on Logic
for Programming, Artificial Intelligence and Reasoning, Maun, Botswana, May 712, 2017, volume 46 of EPiC Series in Computing. EasyChair, 2017.
7. T. Gauthier, C. Kaliszyk, and J. Urban. TacticToe: Learning to reason with HOL4
tactics. In Eiter and Sands [6], pages 125–143.
8. J. Jakubuv and J. Urban. ENIGMA: efficient learning-based inference guiding
machine. In H. Geuvers, M. England, O. Hasan, F. Rabe, and O. Teschke, editors,
Intelligent Computer Mathematics - 10th International Conference, CICM 2017,
Edinburgh, UK, July 17-21, 2017, Proceedings, volume 10383 of Lecture Notes in
Computer Science, pages 292–302. Springer, 2017.
9. C. Kaliszyk and J. Urban. Learning-assisted automated reasoning with Flyspeck.
J. Autom. Reasoning, 53(2):173–213, 2014.
9
10. C. Kaliszyk and J. Urban. FEMaLeCoP: Fairly efficient machine learning connection prover. In M. Davis, A. Fehnker, A. McIver, and A. Voronkov, editors, Logic
for Programming, Artificial Intelligence, and Reasoning - 20th International Conference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015, Proceedings, volume
9450 of Lecture Notes in Computer Science, pages 88–96. Springer, 2015.
11. C. Kaliszyk and J. Urban. Mizar 40 for mizar 40. Journal of Automated Reasoning,
55(3):245–256, Oct 2015.
12. D. Kuehlwein and J. Urban. Learning from multiple proofs: First experiments.
In P. Fontaine, R. A. Schmidt, and S. Schulz, editors, PAAR-2012, volume 21 of
EPiC Series, pages 82–94. EasyChair, 2013.
13. S. M. Loos, G. Irving, C. Szegedy, and C. Kaliszyk. Deep network guided proof
search. In Eiter and Sands [6], pages 85–105.
14. S. Schulz. E - A Brainiac Theorem Prover. AI Commun., 15(2-3):111–126, 2002.
15. G. Sutcliffe. The TPTP problem library and associated infrastructure. J. Autom.
Reasoning, 43(4):337–362, 2009.
16. J. Urban, G. Sutcliffe, P. Pudlák, and J. Vyskočil. MaLARea SG1 - Machine
Learner for Automated Reasoning with Semantic Guidance. In IJCAR, pages
441–456, 2008.
17. J. Urban, J. Vyskočil, and P. Štěpánek. MaLeCoP: Machine learning connection
prover. In K. Brünnler and G. Metcalfe, editors, TABLEAUX, volume 6793 of
LNCS, pages 263–277. Springer, 2011.
| 2 |
Deciding equivalence with sums and the empty type
Gabriel Scherer
Northeastern University, USA
gabriel.scherer@gmail.com
arXiv:1610.01213v3 [] 8 Nov 2016
Abstract
The logical technique of focusing can be applied to the λ-calculus;
in a simple type system with atomic types and negative type formers (functions, products, the unit type), its normal forms coincide
with βη-normal forms. Introducing a saturation phase gives a notion of quasi-normal forms in presence of positive types (sum types
and the empty type). This rich structure let us prove the decidability of βη-equivalence in presence of the empty type, the fact that it
coincides with contextual equivalence, and a finite model property.
1.
Introduction
1.1
Notion of equivalences
For a given type system, there may be several notions of program equivalence of interest. One may define a notion of syntactic
equivalence by a system of equations between terms, such as βequivalence and η-equivalence, or their union βη-equivalence. A
more extensional notion of equivalence is contextual or observational equivalence, which checks that the two terms behave in the
same way under all contexts. Finally, a semantic notion of equivalence is given by interpreting terms in a certain mathematical space,
typically morphisms in categories with a certain structure, and considering two terms equivalent if they are equal under all interpretations. One can generally prove that considering all interpretations
in the category of sets suffices to distinguish two observably distinct terms, and certain type systems have the stronger finite model
property that considering all interpretations in finite sets suffices to
distinguish observably distinct terms.
Contextual equivalence has a clear, compact definition, it is the
notion of equivalence that corresponds to programmers’ intuition,
but it is difficult to prove and reason about. Syntactic equivalence
can be used as part of syntactic typing judgments, and may be easier
to prove decidable. Semantic models provide a more abstract point
of view on the identity of programs, and may enable the powerful
method of normalization-by-evaluation.
For the untyped λ-calculus, Böhm (1968) proved that βηequivalence and observational equivalence coincide. Untyped βreduction is not normalizing, so equivalence is undecidable.
For the simply-typed λ-calculus with atomic types, functions
and pairs – what we call the negative fragment – we also know that
these notions of equivalence coincide. Furthermore, typed equivalence is decidable: we can define and compute β-short η-long
normal forms, and two terms are βη-equivalent exactly when they
have the same normal form. Friedman (1975) proved that two terms
equal in all set-theoretic models are equivalent, and Statman (1982)
sharpened the result by proving that finite sets suffice to distinguish
inequivalent terms – the finite model property.
This pleasant setting is quickly eroded by moving to richer
programming languages and type systems. Adding notions of
side-effects, for example, makes the general βη-equivalence unsound. Even in the realm of pure, total programming languages,
adding parametric polymorphism (System F and beyond) makes
βη-equivalence strictly weaker than contextual equivalence.
1.2
Sums and empty type
An interesting middle-ground is the full simply-typed λ-calculus,
with not only atomic types, functions, pairs and the unit type 1 but
also sum types and the empty type 0. There, results on program
equivalence have been surprisingly long to come, because of deep
difficulties caused by mixing functions (negative connectives) and
sums (positive connectives), and the strong behavior of the empty
type: in an inconsistent context, all terms are equal.
The first decidability result for the system with non-empty sums
was Ghani (1995), using advanced rewriting techniques later simplified by Lindley (2007). It was followed by normalization-byevaluation results (Altenkirch, Dybjer, Hofmann, and Scott 2001;
Balat, Di Cosmo, and Fiore 2004) that also established decidability
of equivalence in the non-empty case using the categorical structure introduced in Fiore and Simpson (1999). Decidability in the
presence of the empty type is still open – we propose a proof.
Finally, the only known completeness result for set-theoretic
models is Dougherty and Subrahmanyam (2000); it only holds for
non-empty sums, and relies in an essential way on infinite sets. The
finite model property is only conjectured – we propose a proof,
including in presence of the empty type.
1.3
Focusing
Focusing (Andreoli 1992) is a general technique that uses the notion of invertibility of inference rules to define a focused subset of
any logic that is complete but removes some redundant proofs.
A recent branch of work on maximal multi-focusing (Chaudhuri,
Miller, and Saurin 2008a; Chaudhuri, Hetzl, and Miller 2012) has
demonstrated that focused proofs can be further restricted to become even more canonical: in each application to a specific logic,
the resulting representations are equivalent to existing representations capturing the identity of proofs – proof nets for linear logic,
expansion proofs for classical logic.
Scherer and Rémy (2015) applied focusing to the λ-calculus
with non-empty sums. Completeness of focusing there means that
any term has a βη-equivalent focused term. Their counterpart of
maximal multi-focusing is saturation: saturated terms of the focused λ-calculus introduce and deconstruct neutral terms of sum
type as early as possible – when they are new, they were not derivable in the previous saturation phase. Canonicity of this representation gives yet another decision procedure for βη-equivalence of
λ-terms with non-empty sums.
The present work extends saturated focusing to the full simplytyped lambda calculus, with units and in particular the empty type.
The suitably extended notion of saturated form retains its canonicity properties in the full system. This means that βη-equivalence
with empty types is decidable – by converting terms to their saturated focused form. From two distinct saturated forms, one can
furthermore build a well-typed context that distinguishes them.
This proves that contextual equivalence implies equality of normal forms, and thus βη-equivalence: any term is βη-equivalent
to its normal form, so two terms with the same normal form are
βη-equivalent. Our distinguishing context needs only instantiate
atomic types with finite sets, giving a finite model property.
Extending saturated focusing to the empty type requires adding
a requirement that saturation phases be complete for provability:
not only must they introduce all new neutrals of positive type for
use in the rest of the term, but they must at least introduce one
such neutral for each type deducible in the current context. As
a consequence, we can prove a Saturation Consistency theorem
which is key to our canonicity: if two saturated terms are distinct,
then their context must be consistent.
(Sections 1 to 5, in particular the presentations of focusing and saturated normal forms, correspond to content that was presented in the thesis manuscript Scherer (2016). In the present article, definitions and proofs in these sections are given with a minimal amount of detail for economy of space; they are presented in full in the manuscript. In contrast, Section 6, which presents the key canonicity result enabling the contributions of this article, is entirely new, and presented in more detail.)

1.4 Contributions

We establish the following results in the simply-typed λ-calculus with atoms, functions, pairs, the unit type, sums and the empty type ΛC(X, →, ×, 1, +, 0):

• Saturated terms provide a notion of quasi-normal form; equivalent quasi-normal forms are not necessarily α-equivalent, but are related by a local, easily decidable relation of invertible commutation conversions (≈icc).

• βη-equivalence is decidable.

• βη-equivalence and contextual equivalence coincide, along with set-theoretic equivalence in all models where atomic types are interpreted by closed types.

• The finite model property holds – as closed types in this system have finitely many inhabitants.

Our notion of βη-equivalence is the strong equivalence on sums. It corresponds to equality of morphisms in the free bi-cartesian closed category – see Scherer (2016), Section 3.2.2.

1.5 Plan

Section 2 (Equivalences in the full λ-calculus) introduces the full λ-calculus we shall consider in this work, along with the various notions of equivalence (βη, contextual, set-theoretic) we will discuss. We prove some elementary results: βη-equivalence implies contextual equivalence in all closed models, which coincides with set-theoretic equivalence in all closed models.

Section 3 (Focusing) presents focusing, before detailing the focused λ-calculus extended with the unit type and empty type. We formulate the computational counterpart of the completeness theorem: any λ-term has a βη-equivalent focused term.

Section 4 (Saturated focused λ-calculus) presents the saturated subset of the focused λ-calculus as defined by a different system of inference rules. Compared to the simpler system of Scherer and Rémy (2015), this saturated system is parametrized over a selection function that selects the neutrals to split over during saturation. The saturated system is, again, computationally complete with respect to focused terms.

Section 5 (Saturation consistency) establishes the main meta-theoretic property of saturated terms in presence of the empty type, namely that saturating an inconsistent context will always find a proof of 0. In other words, if two saturated terms differ after a saturation phase instead of both ending on an absurdity elimination, we know that they differ in consistent contexts. This result is key to extending the distinguishability result of Dougherty and Subrahmanyam (2000) to a system with empty types.

Finally, Section 6 (Canonicity) establishes the central result of this work: if two saturated λ-terms t ≉icc u are syntactically distinct (modulo invertible commuting conversions), then there exists a closed type model M in which we have a distinguishing context C[] such that C[M(t)], C[M(u)] are closed booleans (1 + 1), one of them equal to true and the other to false. By contraposition, this proves that two terms contextually equivalent in all models have the same saturated normal forms – giving decidability – and in particular are βη-equivalent.

For lack of space, our statements do not come with their proofs, but proof outlines are given in Appendix B (Proof outlines).

2. Equivalences in the full λ-calculus

2.1 Typing rules and βη-equivalence
Figure 1. Full simply-typed lambda-calculus ΛC(X, →, ×, 1, +, 0)

A, B, C ::= X, Y, Z | A → B | A1 × A2 | 1 | A1 + A2 | 0

t, u, r ::= x, y, z | λx. t | t u | (t1, t2) | πi t | ()
          | σi t | match t with (σi xi → ui)i | absurd(t)

Γ ` t : A → B    Γ ` u : A
——————————————————————————
Γ ` t u : B

Γ, x : A ` t : B
—————————————————
Γ ` λx. t : A → B

Γ ` t : A1 × A2
———————————————
Γ ` πi t : Ai

Γ ` t : A    Γ ` u : B
——————————————————————
Γ ` (t, u) : A × B

Γ ` t : Ai
——————————————————
Γ ` σi t : A1 + A2

——————————
Γ ` () : 1

Γ ` t : A1 + A2    (Γ, xi : Ai ` ui : C)i
—————————————————————————————————————————
Γ ` match t with (σi xi → ui)i : C

————————————————
Γ, x : A ` x : A

Γ ` t : 0
—————————————————
Γ ` absurd(t) : A
Figure 2. βη-equivalence for ΛC(X, →, ×, 1, +, 0)

(λx. t) u ▷β t[u/x]
πi (t1, t2) ▷β ti
match σj t with (σi xi → ui)i ▷β uj[t/xj]

(t : A → B) ▷η λ(x : A). t x
(t : A1 × A2) ▷η (π1 t, π2 t)
t[u : A1 + A2 / x] ▷η match u with (σi yi → t[σi yi / x])i

Γ ` t : 1
———————————————
Γ ` t ▷η () : 1

Γ ` u : 0    Γ, x : 0 ` t : A
—————————————————————————————
Γ ` t[u/x] ▷η absurd(u) : A

Derived rules:

————————————————
Γ ` t1 ≈η t2 : 1

Γ ` u : 0
————————————————
Γ ` t1 ≈η t2 : A
Figure 1 gives the grammar and typing rules for the full simply-typed λ-calculus ΛC(X, →, ×, 1, +, 0). By symmetry with the pair projections πi t, we use σi t for sum injection. We use (. . .)i∈I for a family of objects indexed by i ∈ I. The common indexing family, when dealing with syntactic binary operations, is {1, 2}, and we will usually leave it implicit. Finally, match t with (σi xi → ui)i is a compact syntax for our full sum-elimination syntax, (match t with | σ1 x1 → u1 | σ2 x2 → u2).
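To make the syntax concrete, here is a minimal sketch of the grammar of Figure 1 as OCaml datatypes. The representation and all names are our own choices for illustration, not part of the paper's formal development.

```ocaml
(* A sketch of the types and terms of ΛC(X, →, ×, 1, +, 0). *)
type ty =
  | Atom of string              (* X, Y, Z *)
  | Arrow of ty * ty            (* A → B *)
  | Prod of ty * ty             (* A1 × A2 *)
  | Unit                        (* 1 *)
  | Sum of ty * ty              (* A1 + A2 *)
  | Empty                       (* 0 *)

type term =
  | Var of string
  | Lam of string * term                              (* λx. t *)
  | App of term * term
  | Pair of term * term
  | Proj of int * term                                (* πi t, i ∈ {1, 2} *)
  | UnitI                                             (* () *)
  | Inj of int * term                                 (* σi t *)
  | Match of term * (string * term) * (string * term) (* two branches *)
  | Absurd of term
```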
Definition 1. A closed type does not contain atomic types.¹

¹A reviewer remarks that the terminology "atomic type" is awkward; if "atom" means "indecomposable", then the unit types 1, 0 could arguably be considered atomic. In retrospect, we agree that "type variable", as naturally used in polymorphic type systems, would be a better terminology, and we plan to use it in the future.
Definition 2. We write Γ ` A when t exists such that Γ ` t : A.
We define βη-equivalence as the smallest congruent relation (≈βη) closed under the union of the β-reduction relation (▷β) and the η-expansion relation (▷η) of Figure 2. We have not spelled out the full typing assumptions, but only type-preserving rewrites are considered. The derived rules are not primitive; they are derivable from η-expansion at the unit and empty types.
If t1, t2 are of type 1, then they both rewrite to () and are thus equal. The derivation for equality under absurdity is less direct: if the current typing context Γ is inconsistent, that is, there is a derivation (u : 0), then any term of any type (t : A) can be seen as the result of the substitution t[u/x] for a variable x that does not appear in t, and is thus equal to absurd(u); in particular, any two terms of the same type are equal.
Theorem 1 (Strong normalization). β-reduction is strongly normalizing in the full simply-typed λ-calculus ΛC(X, →, ×, 1, +, 0).
Theorem 2 (Confluence). β-reduction is confluent for the full
simply-typed λ-calculus ΛC(X, →, ×, 1, +, 0): each term has a
unique β-short normal form.
Lemma 1 (Inversion). A closed β-normal form (in an empty context) starts with an introduction form.
2.2 Contextual equivalence
A common definition of contextual equivalence, for System F for example, is that two terms Γ ` t, u : A are contextually equivalent if there exists no separating context ∅ ` C[Γ ` [] : A] : 1 + 1 such that C[t] ≈β σ1 () and C[u] ≈β σ2 (). For a trivial example, if Γ def= (x : 1 + 1, y : 1 + 1), then C[] def= (λx. λy. []) (σ1 ()) (σ2 ()) separates x and y.

This definition is too weak in presence of atomic types. Consider the context Γ def= (x : X, y : X) and the terms Γ ` x, y : X. We want a definition of contextual equivalence that declares these terms inequivalent, but there is no distinguishing context in the sense above, as we have no information on X, and thus no way to provide distinct values for x and y. The variables x and y could be distinct, depending on the unknown type represented by the abstract type X.
Definition 3. A model M is a mapping from atomic types to closed
types.
If x is some syntactic object containing types, we write M(x)
for the result of replacing each atomic type in x by its image
in the model M. If Γ ` t : A holds, then it is also the case that
M(Γ) ` t : M(A) holds; we may write the term M(t) as well, to
emphasize that we look at its typing in the model.
Definition 4. Given Γ ` t, u : A and a model M, we say that t and u are contextually equivalent in M, written t ≈ctx(M) u, if

∀C, ∅ ` C[M(Γ) ` [] : M(A)] : 1 + 1 ⇒ C[M(t)] ≈β C[M(u)]
We say that t and u are contextually equivalent, written t ≈ctx u,
if they are contextually equivalent in all models.
2.3 Semantic equivalence
Definition 5 (Semantics of types). For a closed type A we define the set of semantic values of A, written JA K, by induction on A as follows:

JA → B K def= total functions from JA K to JB K
JA × B K def= {(v, w) | v ∈ JA K, w ∈ JB K}
J1 K     def= {⋆}
JA + B K def= {(1, v) | v ∈ JA K} ⊎ {(2, w) | w ∈ JB K}
J0 K     def= ∅

For an arbitrary type A we write JA KM for JM(A) K.

We remark that JA KM is always a finite set, whose elements can be enumerated. It is thus decidable whether two elements of JA KM are equal as mathematical objects; at function types, one compares two functions pointwise on their finite domain.
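To illustrate why this enumeration is effective, here is a sketch in OCaml, reusing the `ty` datatype from the earlier sketch; the graph representation of functions and all names are our own assumptions, not the paper's.

```ocaml
(* Enumerating the finite set of semantic values of a closed type.
   Functions are represented by their finite graphs, built over the
   canonical enumeration order of the domain, so structural equality on
   values coincides with pointwise equality of functions. *)
type ty = Atom of string | Arrow of ty * ty | Prod of ty * ty
        | Unit | Sum of ty * ty | Empty   (* as in the earlier sketch *)

type value =
  | VFun of (value * value) list   (* a finite function, as its graph *)
  | VPair of value * value
  | VUnit                          (* the element of J1K *)
  | VInj of int * value            (* (1, v) or (2, w) *)

let rec enum : ty -> value list = function
  | Atom _ -> invalid_arg "enum: not a closed type"
  | Unit -> [ VUnit ]
  | Empty -> []
  | Prod (a, b) ->
    List.concat_map
      (fun v -> List.map (fun w -> VPair (v, w)) (enum b))
      (enum a)
  | Sum (a, b) ->
    List.map (fun v -> VInj (1, v)) (enum a)
    @ List.map (fun w -> VInj (2, w)) (enum b)
  | Arrow (a, b) ->
    (* all graphs: one output among [enum b] for each input of [enum a] *)
    let rec graphs = function
      | [] -> [ [] ]
      | v :: vs ->
        List.concat_map
          (fun rest -> List.map (fun w -> (v, w) :: rest) (enum b))
          (graphs vs)
    in
    List.map (fun g -> VFun g) (graphs (enum a))
```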
Definition 6 (Semantics of environments). For a closed typing environment Γ, we define the set JΓ K of semantic valuations, functions from the domain of Γ to semantic values, such that:

JΓ K def= {G | ∀x : A ∈ Γ, G(x) ∈ JA K}

We write JΓ KM for JM(Γ) K.

Definition 7 (Semantics of typing judgments). We write JΓ ` A K for the set of functions from semantic valuations of Γ to semantic values in A: JΓ ` A K def= JΓ K → JA K. We write JΓ ` A KM for JM(Γ) ` M(A) K.
Definition 8 (Semantics of terms). For a term Γ ` t : A in a judgment with closed types, we write Jt K for the set-theoretic semantics of t, as an object of JΓ ` A K. The (natural) definition is given in full in Appendix A (Semantics of terms), but for example

Jλx. t K(G) def= (v 7→ Jt K(G, x 7→ v))

We write Jt KM for JM(t) K.

Definition 9 (Semantic equivalence). For any terms Γ ` t, u : A and model M, we say that t and u are semantically equivalent in M, written t ≈sem(M) u, if their semantics are (pointwise) equal:

t ≈sem(M) u def= (∀G ∈ JΓ KM, Jt KM (G) = Ju KM (G) ∈ JA KM)
We say that t and u are semantically equivalent, written t ≈sem u,
if they are semantically equivalent in all models M.
2.4 Easy relations between equivalences
Theorem 3 (βη is semantically sound). If t ≈βη u then t ≈sem u.
Theorem 4 (Semantic equivalence implies contextual equivalence). If t ≈sem u then t ≈ctx u.
On models using closed types (Long version) Our definition of a model M allows instantiating atomic types with any closed type; this instantiation happens in the world of syntax, before we interpret types as sets. It is more common, when giving set-theoretic semantics, to instantiate atoms only in the semantics, defining models as mappings from atomic types to arbitrary sets.

We are fortunate that our grammar of closed types is expressive enough to describe any finite set; if we did not have units (1 and 0), for example, we could not do this. Our more syntactic notion of model can be shared by our definitions of contextual and semantic equivalence, which is very pleasing.

Note that, as a consequence, the finite model property is in a sense built into our definitions of semantic and contextual equivalence: we may only distinguish atoms by instantiating them with finite sets, not arbitrary sets. One may wonder, then, whether a notion of semantic equivalence that allows instantiating atoms with infinite sets would behave differently – did we really prove the finite model property, or just enforce it by stifling our notion of equivalence?
The coincidence of the finite and infinite interpretations is a consequence of the later results of this work on the coincidence of semantic and βη-equivalence. If we added the ability to instantiate atoms by infinite sets, we would distinguish more: we could not prove more terms equal, only more terms different. But any pair of terms equal in the finite-set semantics is βη-equivalent, and Theorem 3 (βη is semantically sound) seamlessly extends to infinite sets – those terms must still be equal in an infinite-set semantics.
2.5 Fun-less types and reification

Theorem 6 (Contextual equivalence implies semantic equivalence). If t ≈ctx u then t ≈sem u.

To prove this theorem, we build a reification procedure reifyM (v) that goes from JA KM to closed terms in M(A) and satisfies the following properties:

∀v, JreifyM (v) KM = v
∀(∅ ` t : A), reifyM (Jt KM) ≈βη t

This is difficult in the general case, as it corresponds to a normalization-by-evaluation procedure – all the more when one furthermore proves that reifyM (v) is always a normal term. The reification of finite sums and products is straightforward, but function types are delicate; intuitively, to reify a function one builds a decision tree on its input, which requires an enumeration procedure for the input type (Altenkirch and Uustalu 2004), which may itself be a function, etc.
In the present case, however, the fact that we work with closed types (no atoms) enables a useful hack: we can eliminate function types by rewriting them into isomorphic types expressed in ΛC(×, 1, +, 0) only. This was inspired by an idea of Danko Ilik (see for example Ilik (2015)), which removes sum types rather than function types. In presence of atomic or infinite types, neither sum nor function types can be fully removed. In absence of atoms, function types can be fully removed, but sum types cannot – there is no type isomorphic to 1 + 1 in ΛC(→, ×, 1).
Figure 3 (Fun-less data types) defines the fun-less type bAc for each closed type A. Its definition is structurally recursive, and uses an auxiliary definition TA → BU that takes a function type whose left-hand side A is fun-less, and is structurally recursive on this left-hand side. We also define a pair of transformations b cA from A to bAc and d eA from bAc to A, on both terms and semantic values. On semantic values they are inverses; on terms they are inverses modulo βη-equivalence. The (natural) definitions are given in full in Appendix A (Fun-less types and reification). They use auxiliary definitions T UA→B and V WA→B, and we have for example bvcA→B def= Tw 7→ bv(dweA)cB UA→B and TtU(A1×A2)→B def= Tλx1. Tλx2. t (x1, x2)UA2→B UA1→(A2→B).

Finally, we also have that the isomorphisms on semantic values and on ground terms commute.
Figure 3. Fun-less data types

bA → Bc   def= TbAc → bBcU
bA1 × A2c def= bA1c × bA2c
b1c       def= 1
bA1 + A2c def= bA1c + bA2c
b0c       def= 0

T(A1 × A2) → CU def= TA1 → TA2 → CUU
T1 → BU         def= B
T(A1 + A2) → BU def= TA1 → BU × TA2 → BU
T0 → BU         def= 1

∀v, dbvcA eA = v
∀t, dbtcA eA ≈βη t
∀t, bJt KcA = JbtcA K ∧ dJt KeA = JdteA K
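The type-level part of Figure 3 is a small structural recursion; here is a direct transcription in OCaml, under our own names (`funless a` for bAc, `arrow a b` for TA → BU), reusing the `ty` datatype from the earlier sketch.

```ocaml
type ty = Atom of string | Arrow of ty * ty | Prod of ty * ty
        | Unit | Sum of ty * ty | Empty   (* as in the earlier sketch *)

let rec funless : ty -> ty = function
  | Atom _ -> invalid_arg "funless: not a closed type"
  | Arrow (a, b) -> arrow (funless a) (funless b)   (* bA→Bc = TbAc → bBcU *)
  | Prod (a, b) -> Prod (funless a, funless b)
  | Sum (a, b) -> Sum (funless a, funless b)
  | Unit -> Unit
  | Empty -> Empty

(* arrow a b = Ta → bU, assuming a and b are already fun-less *)
and arrow (a : ty) (b : ty) : ty =
  match a with
  | Prod (a1, a2) -> arrow a1 (arrow a2 b)        (* T(A1×A2)→CU = TA1→TA2→CUU *)
  | Unit -> b                                     (* T1→BU = B *)
  | Sum (a1, a2) -> Prod (arrow a1 b, arrow a2 b) (* T(A1+A2)→BU = TA1→BU × TA2→BU *)
  | Empty -> Unit                                 (* T0→BU = 1 *)
  | Arrow _ | Atom _ -> assert false              (* a is fun-less and closed *)
```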
Theorem 5 (Reification). For each semantic value v in JA K we can define a closed term reify(v) such that Jreify(v) K = v.

This establishes that semantic inhabitation and provability coincide at closed types.
Corollary 1 (Inhabited or inconsistent). For Γ, A closed, if Γ ` A
then either ∅ ` A or Γ ` 0.
Lemma 2. For any closed term of closed type ∅ ` t : A we have
reify(Jt K) ≈βη t.
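On fun-less closed types, reification is a plain structural traversal from semantic values to closed terms – the delicate function-type case is exactly what the fun-less translation lets us avoid. Here is a sketch under our own encodings (all names are ours):

```ocaml
(* Reification restricted to ΛC(×, 1, +, 0). *)
type value = VUnit | VPair of value * value | VInj of int * value
type term = TUnit | TPair of term * term | TInj of int * term

let rec reify : value -> term = function
  | VUnit -> TUnit                            (* the unique value of 1 *)
  | VPair (v, w) -> TPair (reify v, reify w)
  | VInj (i, v) -> TInj (i, reify v)          (* σi (reify v) *)
(* no case is needed for 0: it has no semantic values *)
```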
3. Focusing
Consider the usual description of β-normal forms in the purely negative fragment of the simply-typed λ-calculus, ΛC(X, →, ×, 1):

values   t ::= λx. t | (t1, t2) | () | n
neutrals n ::= n t | πi n | x
Values, the story goes, are a sequence of constructors applied to a neutral, which is a sequence of destructors applied to a variable. It is even possible and easy to capture the set of β-short η-long normal forms by adding a typing restriction to this grammar, asking for the first neutral term n found in a value to be of atomic type (n : X).
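Here is the same two-level grammar as mutually recursive OCaml datatypes (our own naming); the η-long restriction just mentioned would be a typing constraint: the `Neutral` injection is only allowed at atomic type.

```ocaml
type nvalue =
  | Lam of string * nvalue
  | Pair of nvalue * nvalue
  | Unit
  | Neutral of neutral        (* restricted to atomic type for η-long forms *)
and neutral =
  | App of neutral * nvalue   (* n t *)
  | Proj of int * neutral     (* πi n *)
  | Var of string
```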
Unfortunately, adding sum types to this picture shatters it irreparably. If σi is a constructor, it should go in values; and match … with … is a destructor, so it should be a neutral term former. Adding σi t to the grammar of values seems innocuous, but adding match to neutrals raises a question: should we ask the branches to be neutrals, match n with (σi xi → ni)i, or values, match n with (σi xi → ti)i? Neither choice works very well.
Asking branches to be neutrals means that the term

x : X + Y ` (match x with | σ1 y → σ2 y | σ2 y → σ1 y) : Y + X

is not a valid normal form, and in fact has no valid normal form! We cannot force all constructors to occur outside branches, as in this example we fundamentally need to choose a different constructor in each branch – committing to either σ1 or σ2 before matching on x would leave us stuck, unable to complete our attempt with a well-formed term.

On the other hand, letting branches be any value introduces normal forms that really should not be normal forms, such as π1 (match x with (σi yi → (n, yi))i), clearly equivalent, for any value of x, to the neutral n.
The solution to this problem comes from logic. Logicians remark that some inference rules are invertible and some are non-invertible. A rule is invertible when, used during goal-directed proof search, it preserves provability: if the conclusion was provable (maybe using another rule), then applying this rule results in premises that are also provable. For example, consider the implication and disjunction introduction rules:

Γ, A ` B
—————————
Γ ` A → B

Γ ` Ai
———————————
Γ ` A1 + A2

Implication introduction is invertible – this can be proved by inverting the rule, deriving Γ, A ` B from the open premise Γ ` A → B. Disjunction introduction is not: if one decides to prove, say, A1, one may get stuck where choosing to prove A2 would have worked. Or maybe one needs to delay this choice until some hypothesis of the context is explored – which is the heart of our X + Y ` Y + X example.
Andreoli's focusing (Andreoli 1992) is a technique to restrict a logic, making its proof terms more canonical, by imposing additional restrictions based on the invertibility of rules. One easy restriction is that invertible rules, when they can be applied to a judgment, should be applied as early as possible. The more interesting restriction is that once one starts applying non-invertible rules, focusing forces us to apply them as long as possible – as long as the formulas introduced in the premises remain at a type where a non-invertible rule exists. For a complete reference on focusing in intuitionistic logic, see Liang and Miller (2007).
In programming terms, the fact that the right implication rule is
invertible corresponds to an inversion principle on values: without
loss of generality, one can consider that any value of type A → B
is of the form λx. t. Any value of type A1 × A2 is of the form
(t1 , t2 ). This is strictly true for closed values in the empty context,
but it is true modulo equivalence even in non-empty contexts, as is
witnessed by the η-expansion principles. If a value (t : A → B) is
not a λ-abstraction, we can consider the equivalent term λx. t x.
But it is not the case that any value of type A + B is of the form
σi t, as our example X + Y ` Y + X demonstrated. Inspired by
focusing we look back at our grammar of βη-normal forms: it is
not about constructors and destructors, it is about term-formers that
correspond to invertible rules and those that do not. To gracefully
insert sums into this picture, the non-invertible σi should go
into the neutrals, and case-splitting should be a value. Scherer and Rémy (2015) introduce focusing in more detail, and present a grammar of focused normal forms which is, lightly rephrased, as follows:
values            t ::= λx. t | (t1, t2) | () | f
                      | match x with (σi yi → ti)i
choice terms      f ::= (n : X) | let x : A + B = n in t | p
negative neutrals n ::= n p | πi n | x
positive neutrals p ::= σi p | (t : N)
The type N on the last line denotes a negative type, defined as
a type whose head connective has an invertible right-introduction
rule: A → B or A × B or 1. This means that if the argument of an
injection σi is itself of sum type, it must be of the form σj as
well; this enforces the focusing restriction that non-invertible rules
are applied as long as the type allows.
It is interesting to compare this grammar to bidirectional type
systems – when used to understand canonical forms rather than
for type inference. Focusing generalizes the idea that some parts
of the term structure (constructors) are canonically determined by
type information, while some parts (neutrals) are not. It generalizes
bidirectional typing by taking the typing environment into account
as well as the goal type (variables of sum type are split during
the inversion phase), and refines the application syntax n t into the
sharper n p where both sub-terms have a neutral spine.
Our work builds on this focused representation, easily extended with an empty type 0. Our presentation of the type system is different, farther away from the standard λ-calculi and closer to recent presentations of focused systems, in that it uses a polarized syntax for types with explicit shifts – this clarifies the structure of focused systems. Instead of distinguishing positive and negative types based on their head connectives, we define two disjoint syntactic categories P and N, with explicit embeddings hN i+, hP i− to go from one to the other. In particular, atoms are split in two groups, the positive atoms of the form X+ and the negative atoms X− – there is a global mapping from atoms X to polarities, and a given atom is either always positive or always negative. Sometimes we need to consider either types of a given polarity or atoms of any polarity; we use P a for positive types or negative atoms, and N a for negative types or positive atoms.
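The two disjoint categories with explicit shifts translate directly into two mutually recursive datatypes; here is a sketch in OCaml (our own naming: `ShiftNeg` is hP i−, `ShiftPos` is hN i+).

```ocaml
type neg =
  | NAtom of string        (* X− *)
  | NArrow of pos * neg    (* P → N *)
  | NProd of neg * neg     (* N1 × N2 *)
  | NUnit                  (* 1 *)
  | ShiftNeg of pos        (* hP i− *)
and pos =
  | PAtom of string        (* X+ *)
  | PSum of pos * pos      (* P1 + P2 *)
  | PEmpty                 (* 0 *)
  | ShiftPos of neg        (* hN i+ *)
```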
We present this focused λ-calculus in Figure 4 (Cut-free focused λ-terms). The focusing discipline is enforced by the inference rules, which alternate between four different judgments:

• The invertible judgment Γna ; Σp `inv t : N | Qa corresponds to invertible phases in focused proof search. Γna is a typing environment mapping variables to negative types N or positive atoms X+. Σp contains only positive types; it is the part of the context that must be decomposed by invertible rules before the end of the phase. The two positions N | Qa in the goal are either formulas or empty (∅), and exactly one of them is non-empty in any valid judgment. If the goal is a negative formula N, it has yet to be introduced by invertible rules during this phase; once it becomes atomic or positive it moves to the other position Qa.

• The negative focus judgment Γna ` n ⇓ N corresponds to a non-invertible phase focused on a (negative) formula in the context.

• The positive focus judgment Γna ` p ⇑ P corresponds to a non-invertible phase focused on a (positive) formula in the goal.

• The choice-of-focusing judgment Γna `foc f : P a corresponds to the moment of the proof search (reading from the conclusion to the premises) where the invertible phase is finished, but no choice of focus has been made yet. Focusing on the goal on the right uses a positive neutral to prove a positive type – FOCLC-CONCL-POS. Focusing on the left uses a negative neutral. If the neutral has a positive type, it is let-bound to a variable and the proof continues with an invertible phase – FOCLC-LET-POS. If it has a negative atomic type, then it must be equal to the goal type and the proof is done – FOCLC-CONCL-NEG.
The notation h i+a takes a negative-or-atomic type N a and returns a positive type. It is used in the rule FOCLC-INV-FOC that concludes an invertible phase and starts the choice-of-focusing phase. It may only be applied when the positive context Σp is of the form hΓ′na i+a for some Γ′na, that is, when it only contains negative or atomic formulas – it has been fully decomposed.
Notice that the sum-elimination rule in the invertible judgment eliminates a variable x, and not an arbitrary term, and re-introduces variables with the same name x, shadowing the previous hypothesis of sum type: there is no need to refer to it anymore as we have learned its value. This cute trick is not fundamental for a focused calculus, but it corresponds to the intuition of the corresponding sequent-calculus rule, and lets us actually remove positive types from the context, so as to have a negative-or-atomic context at the end of the phase.
For any judgment, for example Γna ` p ⇑ P , we use the version
without a term position, for example Γna ⇑ P , as the proposition
that there exists a well-typed term: ∃p, Γna ` p ⇑ P . This is also
an invitation to think of the derivation as a logic proof rather than a
typed program.
Our focused terms are cut-free in the sense that they contain no β-redexes, even modulo commuting conversions. The rule
FOCLC - LET- POS does look like a cut, proving Γna `foc Qa from
Γna ` n ⇓ hP i− and Γna ; x : P `inv t : ∅ | Qa , but notice
that substituting the negative neutral n inside the invertible proof
t would not create a β-redex: we know that x is matched over
during the invertible phase, but n cannot start with a constructor so match n with . . . cannot reduce. If you are interested in
focused systems that do have cuts and interesting dynamic semantics, then the abstract machine calculi of Curien, Fiore, and
Munch-Maccagnoni (2016) are a better starting point.
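The four syntactic categories of Figure 4 can likewise be sketched as mutually recursive datatypes. This is our own rendering, with type annotations elided:

```ocaml
type inv_term =                                (* t, u, r *)
  | Lam of string * inv_term                   (* λx. t *)
  | Unit                                       (* () *)
  | Pair of inv_term * inv_term
  | Absurd of string                           (* absurd(x) *)
  | Match of string * inv_term * inv_term     (* match x with (σi x → ti)i *)
  | Foc of foc_term                            (* (f : Pa) *)
and foc_term =                                 (* f, g *)
  | LetPos of string * neg_neutral * inv_term  (* let (x : P) = n in t *)
  | ConclNeg of neg_neutral                    (* (n : X−) *)
  | ConclPos of pos_neutral                    (* p *)
and neg_neutral =                              (* n, m *)
  | NVar of string                             (* (x : N) *)
  | NApp of neg_neutral * pos_neutral          (* n p *)
  | NProj of int * neg_neutral                 (* πi n *)
and pos_neutral =                              (* p, q *)
  | PInj of int * pos_neutral                  (* σi p *)
  | PVar of string                             (* (x : X+) *)
  | PVal of inv_term                           (* (t : hN i+) *)
```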
3.1 (Non-)canonicity
When we look at the purely negative fragment of our calculus ΛC(X−, →, ×, 1), we can prove that the focused λ-terms correspond exactly to the usual notion of β-short η-long normal forms. For example, consider the valid terms for the judgment x : Y+ → Z−1 × Z−2 ; ∅ `inv ? : Y+ → Z−1 × Z−2 | ∅. Neither x nor λy. x y, which would be well-typed for the corresponding un-focused judgment, is valid according to our inference rules. There is exactly one valid derivation in our system, for the term λy. (π1 (x y), π2 (x y)), which is the η-long normal form of x at this type.

A consequence of this result is that the focused λ-calculus is canonical for the purely negative fragment (or, in fact, the purely
positive fragment): if we have Γna ; ∅ `inv t, u : N | X− with t ≠α u, then t ≉βη u and t ≉ctx u – these notions are known to be equivalent in the negative fragment.

Focusing is not canonical anymore in mixed-polarity settings. The first source of non-canonicity is that there may be a free choice of ordering of invertible rules in a phase; consider the judgment Γna ; x : P1 + P2 `inv ? : Q → N | ∅ for example: one may either do a case-split on P1 + P2 or introduce a λ-abstraction for the function type Q → N; both match x with (σi x → λy. ?i)i and λy. match x with (σi x → ?i)i are valid term prefixes. This is solved by declaring that we do not care about the ordering of invertible rules within a single phase.

Definition 10. We define the equivalence relation (≈icc) as allowing well-typed permutations of two adjacent invertible rules. For example we have absurd(x) ≈icc λy. absurd(x).

From now on, any notion of normal form being discussed should be understood as a quasi-normal form: a normal form modulo invertible commuting conversions (≈icc). This is a reasonable approximation of the idea of normal form, as it is easily decidable. Indeed, while in general commuting conversions may relate very different terms, they can be easily decided inside a single invertible phase, for example by imposing a fixed ordering on invertible rules. By definition of invertibility, any ordering preserves completeness.

The other, more fundamental source of non-canonicity is that two non-invertible phases may be independent from each other, and thus be ordered in several possible ways, giving distinct but equivalent terms. For example, let x1 = n1 in let x2 = n2 in (x1, x2) and let x2 = n2 in let x1 = n1 in (x1, x2) are equivalent at type X− × Y− if x1 ∉ n2 and x2 ∉ n1. This source of redundancy is non-local – the permutable let-bindings may be miles apart inside the term. It requires a global approach to recover canonicity, which we discuss in Section 4 (Saturated focused λ-calculus).
Figure 4. Cut-free focused λ-terms

negative types  N, M ::= X−, Y−, Z− | P → N | N1 × N2 | 1 | hPi−
positive types  P, Q ::= X+, Y+, Z+ | P1 + P2 | 0 | hNi+

positive-or-atomic  Pa, Qa ::= P, Q | X−, Y−
negative-or-atomic  Na, Ma ::= N, M | X+, Y+

invertible terms   t, u, r ::= λx. t | () | (t1, t2) | (f : Pa)
                             | absurd(x) | match x with (σi x → ti)i
focusing terms     f, g ::= let (x : P) = n in t | (n : X−) | p
negative neutrals  n, m ::= (x : N) | n p | πi n
positive neutrals  p, q ::= σi p | (x : X+) | (t : hNi+)

Γna ; Σp, x : P `inv t : N | ∅
———————————————————————————————
Γna ; Σp `inv λx. t : P → N | ∅

———————————————————————
Γna ; Σp `inv () : 1 | ∅

(Γna ; Σp `inv ti : Ni | ∅)i
—————————————————————————————————————
Γna ; Σp `inv (t1, t2) : N1 × N2 | ∅

(Γna ; Σp, x : Qi `inv ti : N | Pa)i
———————————————————————————————————————————————————————————————
Γna ; Σp, x : Q1 + Q2 `inv match x with (σi x → ti)i : N | Pa

———————————————————————————————————————
Γna ; Σp, x : 0 `inv absurd(x) : N | Pa

FOCLC-INV-FOC
Γna, Γ′na `foc f : (Pa | Qa)
—————————————————————————————————————
Γna ; hΓ′na i+a `inv f : hPa i−a | Qa

FOCLC-CONCL-POS
Γna ` p ⇑ P
———————————————
Γna `foc p : P

FOCLC-CONCL-NEG
Γna ` n ⇓ X−
————————————————
Γna `foc n : X−

FOCLC-LET-POS
Γna ` n ⇓ hPi−    Γna ; x : P `inv t : ∅ | Qa
——————————————————————————————————————————————
Γna `foc let x = n in t : Qa

———————————————————
Γna, x : N ` x ⇓ N

Γna ` n ⇓ N1 × N2
——————————————————
Γna ` πi n ⇓ Ni

Γna ` n ⇓ P → N    Γna ` p ⇑ P
———————————————————————————————
Γna ` n p ⇓ N

——————————————————————
Γna, x : X+ ` x ⇑ X+

Γna ` p ⇑ Pi
——————————————————————
Γna ` σi p ⇑ P1 + P2

Γna ; ∅ `inv t : N | ∅
———————————————————————
Γna ` t ⇑ hNi+

shift-or-atom notations:
hNi+a def= hNi+        hX+i+a def= X+
hPi−a def= hPi−        hX−i−a def= X−

3.2 Computational completeness

We can define a depolarization operation b c± that takes polarized types and erases polarity information. The definition is given in full in Appendix A (Depolarization), but for example we have bX−c± def= X, bP → Nc± def= bPc± → bNc±, and bhNi+c± def= bNc±.

This erasure operation can be extended to a defocusing operation b cfoc on focused terms, which preserves typing modulo depolarization. For example, if Γna ; Σp `inv t : N | Qa holds, then bΓna c±, bΣp c± ` btcfoc : (bNc± | bQac±) holds in the un-focused system – with (A | ∅) def= A and conversely. This operation is defined on terms as a direct mapping on all λ-term formers, except the let-definition form, which does not exist in the unfocused calculus and is substituted away: blet x = n in tcfoc def= btcfoc [bncfoc /x].

Going from a focused system to a system with fewer restrictions is easy. The more interesting statement is the converse: for any un-focused term t there exists an equivalent focused term t′.

Theorem 7 (Completeness of focusing).

bΓnac±, bΣpc± ` t : (bNc± | bQac±)
  =⇒
∃t′, bt′cfoc ≈βη t ∧ Γna ; Σp `inv t′ : N | Qa

Proof. Completeness of focusing is a non-trivial result, but it is independent of the contributions of the current work, and in particular extends gracefully to the presence of an empty type. See for example the proofs in Liang and Miller (2007); Ahmad, Licata, and Harper (2010); Simmons (2011).
3.3 Choices of polarization
We mentioned that a given un-polarized atom X must either appear always positively as X+ or always negatively as X− in our judgments. Violating this restriction would break completeness: for example X+ ` X− is not provable – they are considered distinct atoms. But the global choice of polarization of each atom is completely free: completeness holds whatever choice is made. Those choices influence the operational behavior of proof search: Chaudhuri, Pfenning, and Price (2008b) show that using the negative polarization for all atoms corresponds to backward proof search, whereas using the positive polarization corresponds to forward proof search.
Similarly, there is some leeway in the insertion of the polarity shifts h i+ and h i−; for example, 0 + X+ and 0 + hhX+i−i+ depolarize to the same formula, but admit fairly different terms – the double-shifting allows an invertible phase to start right after (σ2 ). When transforming a non-polarized type into a polarized type, two strategies for inserting shifts are notable. One is to insert as few shifts as possible; the terms inhabiting the minimally-shifted judgment are in one-to-one correspondence with terms of the un-focused system that "respect the focusing restriction". The other is to insert double-shifts under each connective; the terms inhabiting these double-shifted judgments are in one-to-one correspondence with un-focused sequent terms – Zeilberger (2013) relates this to double-negation translations from classical to intuitionistic logic.
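As an illustration of the "as few shifts as possible" strategy, here is a sketch in OCaml, under the `ty`, `neg`, `pos` datatypes from the earlier sketches; `atom_polarity` stands for the global atom-to-polarity mapping assumed by the text, and the whole function is our own illustration, not the paper's definition.

```ocaml
type ty = Atom of string | Arrow of ty * ty | Prod of ty * ty
        | Unit | Sum of ty * ty | Empty
type neg = NAtom of string | NArrow of pos * neg | NProd of neg * neg
         | NUnit | ShiftNeg of pos
and pos = PAtom of string | PSum of pos * pos | PEmpty | ShiftPos of neg

type polarity = Positive | Negative
let atom_polarity (_x : string) : polarity = Negative  (* any global choice *)

(* Insert a shift only when the head connective has the wrong polarity. *)
let rec as_neg : ty -> neg = function
  | Arrow (a, b) -> NArrow (as_pos a, as_neg b)
  | Prod (a, b) -> NProd (as_neg a, as_neg b)
  | Unit -> NUnit
  | Atom x when atom_polarity x = Negative -> NAtom x
  | t -> ShiftNeg (as_pos t)   (* sums, 0, positive atoms need a shift *)
and as_pos : ty -> pos = function
  | Sum (a, b) -> PSum (as_pos a, as_pos b)
  | Empty -> PEmpty
  | Atom x when atom_polarity x = Positive -> PAtom x
  | t -> ShiftPos (as_neg t)   (* arrows, products, 1, negative atoms *)
```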
4. Saturated focused λ-calculus
In Section 3.1 ((Non-)canonicity) we explained that the essential source of non-canonicity in focused term systems is that distinct non-invertible phases may be independent from each other: reordering them gives syntactically distinct terms that are observably equivalent in a pure calculus. Being such a reordering of another term is a highly global property, which cannot be decided locally like invertible commuting conversions.

Logicians introduced maximal multi-focusing (Chaudhuri, Miller, and Saurin 2008a) to quotient over those reorderings, and Scherer and Rémy (2015) expressed this in a programming setting as saturation. The idea of maximal multi-focusing is to force each non-invertible phase to happen as early as possible in a term, in parallel, removing the potential for reordering them. However, in general there is no goal-directed proof search (or term enumeration) procedure that generates only maximally multi-focused derivations, as one cannot guess in advance which non-invertible phases will be useful in the rest of the term – in order to introduce them as early as possible. Saturation is a technique specific to intuitionistic logic: when a non-invertible phase starts, instead of trying to guess which non-invertible phases would be useful later, one saturates the context by performing all the possible left-focused phases, let-binding all the neutrals that might be used in the rest of the term. One can think of a neutral of positive type (n : P) as an observation of the current environment: we are saturating by performing all possible observations before making a choice – a right focusing phase. Note that this strategy would be invalid in an effectful language, or in a resource-aware logic where introducing unused sub-derivations can consume necessary resources and get you stuck.
In general there may be infinitely many distinct observations that can be made in a saturation phase – consider the context (z : X+, s : X+ → hX+i−) – and a type system that would enforce complete saturation would then have to admit infinite terms. Instead, Scherer and Rémy (2015) relax the definition of saturation by allowing saturation phases to introduce only a finite subset of the deducible neutrals (n : P). They prove that canonicity holds (in a system without the empty type) in the following sense: if two saturated terms made the same saturation choices, then they are equivalent if and only if they are syntactically the same – modulo (≈icc). In their work, the notion of equivalence is βη-equivalence of the defocused forms.
4.1 The saturated type system

Figure 5. Cut-free saturated focused type system

Γna ; Σp, x : P `sinv t : N | ∅
————————————————————————————————
Γna ; Σp `sinv λx. t : P → N | ∅

————————————————————————
Γna ; Σp `sinv () : 1 | ∅

(Γna ; Σp `sinv ti : Ni | ∅)i
——————————————————————————————————————
Γna ; Σp `sinv (t1, t2) : N1 × N2 | ∅

—————————————————————————————————————————
Γna ; Σp, x : 0 `sinv absurd(x) : N | Qa

(Γna ; Σp, x : Pi `sinv ti : N | Qa)i
————————————————————————————————————————————————————————————————
Γna ; Σp, x : P1 + P2 `sinv match x with (σi x → ti)i : N | Qa

SINV-SAT
Γna ; Γ′na `sat f : (Pa | Qa)
——————————————————————————————————————
Γna ; hΓ′na i+a `sinv f : hPa i−a | Qa

SAT
(n̄, P̄) def= SelectΓna,Γ′na ({ (n, P) | Γna, Γ′na `s n ⇓ hPi− ∧ ∃x ∈ Γ′na, x ∈ n })
Γna, Γ′na ; x̄ : P̄ `sinv t : ∅ | Qa
—————————————————————————————————————
Γna ; Γ′na `sat let x̄ = n̄ in t : Qa

SAT-UP
Γna `s p ⇑ P
——————————————————
Γna ; ∅ `sat p : P

SAT-DOWN
Γna `s n ⇓ X−
————————————————————
Γna ; ∅ `sat n : X−

————————————————————
Γna, x : N `s x ⇓ N

Γna `s n ⇓ N1 × N2
———————————————————
Γna `s πi n ⇓ Ni

Γna `s n ⇓ P → N    Γna `s p ⇑ P
—————————————————————————————————
Γna `s n p ⇓ N

————————————————————————
Γna, x : X+ `s x ⇑ X+

Γna `s p ⇑ Pi
——————————————————————
Γna `s σi p ⇑ P1 + P2

Γna ; ∅ `sinv t : N | ∅
————————————————————————
Γna `s t ⇑ hNi+

The saturated type system is given in Figure 5 (Cut-free saturated focused type system). The neutral judgments are identical to those of the focused type system of Figure 4 (Cut-free focused λ-terms), and most of the invertible rules are also identical. The only change is that the rule SINV-SAT, moving from the invertible phase to the focused phase, keeps the two contexts Γna ; Γ′na separate instead of merging them into a single context position as FOCLC-INV-FOC does.

This second position Γ′na represents the fragment of the context that is new during the following saturation phase. The saturation rule SAT requires that any introduced neutral n use at least one variable of this new context (∃x ∈ Γ′na, x ∈ n). This guarantees that a single neutral term cannot be introduced twice by distinct saturation phases: the second time it will not be new anymore.

This new context is also used to know when saturation stops: if an instance of the SAT rule does not introduce any new neutral, then on the next saturation phase the new context Γ′na will be the empty context ∅, allowing saturation to proceed to prove the goal with one of the two other choice-of-focusing rules.

This aspect of the saturation judgment is reused, unchanged, from Scherer and Rémy (2015). On the other hand, the formulation of the saturation rule SAT is different. We pass the (potentially infinite) set S of newly introducible neutrals to a selection function SelectΓna(S), which returns a finite subset of neutrals to introduce in a given context. SelectΓna(S) may not return just any subset; we give the requirement for a selection function to be valid in Section 4.3 (Selection function).

The notation let x̄ = n̄ in t denotes the simultaneous binding of a (finite) set of neutral terms – our notion of syntactic α-equivalence is understood to compare these sets for (decidable) equality.
4.2 Strong positive neutrals

To understand and formalize saturation, it is interesting to compare and contrast the various notions of deduction (seeing our type systems as logics) at play: how to prove A in a context Γ?

• The more general notion of deduction is the unfocused notion of proof Γ ` A – proof terms have no restriction. In the focused system, it would correspond to looking for a proof of an invertible judgment ∅; Γ `inv A | ∅.

• The neutral judgments Γ ` n ⇓ N and Γ ` p ⇑ P correspond to a less expressive notion of "simple deduction step"; such steps are iterated by saturation. For example, hX + Yi− ⇑ Y + X does not hold: it requires more complex reasoning than a chain of eliminations from the context variables. Focusing decomposes a non-focused reasoning into a sequence of such simple deduction steps, separated by invertible phases of blind proof search.

One notion that is missing is the notion of what is "already known by the context". In the usual non-focused logic, to know whether a positive formula P has been introduced before, we simply check whether P is in the context. But the focusing discipline decomposes positive formulas and removes them from the context.

One could use the judgment Γna ⇑ P instead – X+, Y+ ⇑ X+ + Y+ holds as intended. But Γna ⇑ P is too strong for the purpose of just retrieving information from the context, as it calls the general invertible judgment at the end of the focused phase: Γna ⇑ hNi+ holds whenever N is provable from Γna, not only when N is a hypothesis in Γna.

To capture this idea of what can be "retrieved" from the context without any reasoning, we introduce in Figure 6 (Strong positive judgment Γna ` p ⇊ P) the strong positive judgment Γna ` p ⇊ P.

Figure 6. Strong positive judgment Γna ` p ⇊ P

——————————————————————
Γna, x : X+ ` x ⇊ X+

Γna ` p ⇊ Pi
——————————————————————
Γna ` σi p ⇊ P1 + P2

————————————————————————
Γna, x : N ` x ⇊ hNi+

Strong positive neutrals correspond to the positive patterns of Zeilberger (2009). Those patterns describe the spine of a non-invertible phase, but they can also characterize invertible phases: an invertible phase, presented in a higher-order style, provides a derivation of the goal for any possible positive pattern passed by the environment. The two following results witness this relation.

Lemma 3 (Strong decomposition of invertible phases). Consider an invertible derivation Γna ; Σp `sinv t : N | Qa: it starts with invertible rules, until we reach a (possibly empty) "frontier" of saturated subterms f to which the rule SINV-SAT is applied. Let (Γna ; Γ′nak `sat fk : Q′ak)k∈K be the family of such subterms. Then the Γ′nak are exactly the contexts such that ∀k, ∀P ∈ Σp, Γ′nak ⇊ P.

Lemma 4 (Strong positive cut). If both Γna ` p ⇊ P and Γna ; P `sinv t : ∅ | Qa hold, then there exists a subterm f of t such that Γna `foc f : Qa holds.

This result would not be provable for the more expressive judgment Γna ` p ⇑ P: there is no obvious way to substitute a general u : N through t that would respect the focusing structure – and return a strict subterm. Γna ⇊ P =⇒ Γna ⇑ P is true but non-trivial; it relies on the completeness result – identity expansion.

4.3 Selection function

Contrarily to the simpler setting with sums but no empty type, not all ways of selecting neutrals for saturation preserve canonicity in presence of the empty type. Consider for example the focused terms

let x = f () in match x with (σi x → σ1 ())i
let x = f () in match x with (σi x → σ2 ())i

at the typing f : h1i+ → h1 + 1i−, g : h1i+ → h0i− ` 1 + 1. The set of potential observations is {f (), g ()}, and both terms made the same choice of observing only f (). The first term always returns σ1 () and the second σ2 (), so they are syntactically distinct even modulo (≈icc). Yet they are βη-equivalent, as the context is inconsistent. Note that if let y = g () in had been introduced during saturation, the immediately following invertible phase would necessarily have been absurd(y), and the two terms would thus be syntactically equal.

To make saturation canonical again, we need a provability completeness requirement: if there is a possible proof of 0, we want saturation to find it. One could cheat, knowing that provability of any formula is decidable in propositional logic, and test explicitly for the absence of a proof of 0; but saturation is already doing proof search², and we can extend it gracefully to have this property.

² Saturation synthesizes new sub-terms and can thus decide equivalence with 0, unlike previous methods that would only reorder the sub-terms of the compared terms.

We define our requirement on the selection function in Figure 7 (Specification of saturation selection functions). We require that, for any type P that is part of the deducible observations S (by a neutral (n : hPi−)), either P is already retrievable from the context Γna (no need to introduce it then) or it is the type of a neutral n′ selected by the function. We do not require the same neutral n to be selected: there may be infinitely many different neutrals deducible at P, but having just one of them in the returned set suffices. This definition is not natural; it will be validated by the results of Section 5 (Saturation consistency).

Note that the types P that can be simply deduced from the context, Γna ⇓ P, are subformulas of Γna. We know by the subformula property that they are also subformulas of the root judgment of the global derivation. In particular, there is only a finite number of such deducible types P – this would not hold in a second-order type system. Valid selection functions exist thanks to this finiteness.

Figure 7. Specification of saturation selection functions

SELECT-SPECIF
∀Γna, S, P,   (n : hPi−) ∈ S  =⇒  Γna ⇊ P ∨ ∃(n′ : hPi−), n′ ∈ SelectΓna(S)
——————————————————————————————————————————————————————————————————————————
Select( ) is a valid selection function

Note that two valid selection functions can be merged into a valid selection function, by taking the union of their outputs.

4.4 Completeness of saturation

Completeness of saturation is relative to a specific choice of selection function. Indeed, consider the context

Γna def= (x : X+, y : h1i+ → hX+i−)

In this context, the only deducible positive formula is X+, and it is already retrievable from Γna. This means that a selection function satisfying SelectΓna(S) = ∅ would be a valid selection function. However, saturating with such a selection function is not computationally complete: the saturated term x : X+ has a valid derivation, but let z = y () in z does not – nor does any equivalent term.

We can order selection functions by pointwise subset ordering: a function f is above g if it selects at least all of g's neutrals for each context. A set of selection functions is upward-closed if, for any selection function in the set, any function above it is also in the set.

Theorem 8 (Completeness of saturation). For any focused term Γna ; Σp `inv t : N | Pa, there exists an upward-closed set of selection functions for which Γna ; Σp `sinv t′ : N | Pa holds, for a (computable) saturated term t′ such that btcfoc ≈βη bt′cfoc.

Note that, given two focused terms t1, t2, we can merge the selection functions used in the theorem above to get a single selection function for which there exist saturated terms for both t1 and t2.
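One way to build a valid selection function, sketched below in OCaml under simplifying assumptions of our own: the candidate set S is given as a finite list of (neutral, type) pairs (finiteness is justified by the subformula argument above), and `retrievable` decides the strong positive judgment Γna ⇊ P for the ambient context. Keeping one witness per non-retrievable type is enough to satisfy SELECT-SPECIF, which asks for some neutral at each deducible type, not for all of them.

```ocaml
module type SAT_INPUT = sig
  type neutral
  type ptype
  val retrievable : ptype -> bool   (* Γna ⇊ P in the ambient context *)
end

module Select (I : SAT_INPUT) = struct
  (* Keep the first candidate neutral seen at each type that is neither
     retrievable from the context nor already selected. *)
  let select (candidates : (I.neutral * I.ptype) list)
      : (I.neutral * I.ptype) list =
    let seen = Hashtbl.create 16 in
    List.filter
      (fun (_n, p) ->
        if I.retrievable p || Hashtbl.mem seen p then false
        else (Hashtbl.add seen p (); true))
      candidates
end
```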
5. Saturation consistency

In this section, we prove the main result of this extension of saturation to the empty type: if a context is inconsistent, then the saturation phase will eventually introduce a variable of the empty type 0 in the context. This is key to obtaining a canonicity result – if saturation sometimes missed proofs of 0, it could continue with distinct neutral terms and result in distinct but equivalent saturated terms.

The informal view of the different ways to deduce a positive formula presented in Section 4.2 (Strong positive neutrals) (general proof, simple deduction, retrieval from the context) gives a specification of what saturation is doing. From a high-level or big-step point of view, saturation is trying all possible new simple deductions iteratively, until all the positives deducible from the context have been added to it. The following characterization is more fine-grained, as it describes the state of an intermediary saturation judgment Γna ; Γ′na `sat f : Pa.

The characterization is as follows: any formula that can be "simply deduced" from the old context Γna becomes "retrievable" in the larger context Γna, Γ′na. This gives a precise meaning to the intuition that Γna is "old". What we mean when saying that Γ′na is "new" can be deduced negatively: it is the part of the context that is still fresh, whose deductions are not stored in the knowledge base yet.

Theorem 9 (Saturation). If a saturated proof starts from a judgment of the form ∅; Γna0 `sat f : Qa or ∅; Σp0 `sinv t : N | Qa, then for any sub-derivation of the form Γna ; Γ′na `sat f : Qa we have the following property:

∀P,  Γna ⇓ hPi−  =⇒  Γna, Γ′na ⇊ P

Definition 11. Γna is saturated if Γna ⇓ hPi− implies Γna ⇊ P.

Corollary 2 (Saturation). If a saturated proof starts from a judgment of the form ∅; Γna0 `sat f : Qa or ∅; Σp0 `sinv t : N | Qa, then for any sub-derivation of the form Γna ; ∅ `sat f : Qa the environment Γna is saturated.

Lemma 5 (Saturated consistency). If Γna is saturated, then Γna ⊬ 0.

Theorem 10 (Inconsistent canonicity). If Γna ` 0, then for any f, f′ such that ∅; Γna `sat f, f′ : Pa we have f ≈icc f′.
6. Canonicity

In this section we establish the main result of this article. If two saturated terms Γna ; Σp `sinv t, t′ : N | Qa are not syntactically equivalent (t ≉icc t′), then there exists a model M in which a context distinguishes t from t′: they are not contextually equivalent.

(We build distinguishing contexts in the un-focused λ-calculus, so technically we are distinguishing the defocused forms btcfoc, bt′cfoc; the proof crucially relies on the saturated structure of its inputs, but the code we generate for computation and separation is more easily expressed unfocused.)
6.1 Sketch of the proof

Intuition It is helpful to first get some intuition of what a pair of syntactically distinct normal forms looks like, and what the corresponding distinguishing context will look like. Suppose we have Γna ; Σp `sinv t ≉icc t′ : N | Qa. We can explore t and t′ simultaneously, until we find the source of their inequality.
The source of inequality cannot be in an invertible phase, given that the term formers in an invertible phase are completely determined by the typing (modulo invertible commuting conversions); for example, if N is N1 × N2, we know that t is of the form (t1, t2) and t′ of the form (t′1, t′2), with ti ≉icc t′i for some i – so we can continue exploring ti ≉icc t′i. The same goes if the terms start with a sum elimination (modulo (≈icc) one can assume that they eliminate the same variable), match x with (σi x → ti)i ≉icc match x with (σi x → t′i)i: the subterms ti, t′i in at least one of the two branches differ.
Similarly, the source of inequality cannot be in the saturation phase, where both terms saturate on neutrals that are completely determined by the typing context – and the saturation selection function: they are both of the form let x̄ = n̄ in … for the same set of neutrals on each side. The end of this saturation phase is also type-directed, so both terms stop saturating (they get an empty Γ′na context) at the same time. The difference must then be in the neutrals used in the SAT-UP or SAT-DOWN rules, n ≉icc n′ or p ≉icc p′. Note that we cannot get a positive neutral on one side and a negative neutral on the other, as the usage of those rules is directed by whether the goal type is a negative atom X− or a positive type P.
Now, two neutrals n ≉icc n′ or p ≉icc p′ may differ because their spines differ, or because their sub-terms that are outside the non-invertible phase differ. In the latter case, finding the source of inequality is a matter of traversing the common structure towards the sub-terms that differ. The former case is more interesting – this pair of neutrals with distinct spines is what we call the source of inequality.

In the positive neutral case, we end up on either σi p and σj p′ with i ≠ j, or distinct variables x ≠ y of atomic type X+. In the negative neutral case, one may encounter distinct neutrals with distinct structure, for example n p ≠ x ≠ πi m at the same type N; negative neutrals should be read "upside down", as in System L (Curien, Fiore, and Munch-Maccagnoni 2016): either their head variables differ, or the same variable is applied to a different sequence of elimination rules.
In the easy case where the source of inequality is a sum constructor (σi …) ≉icc (σj …), obtaining a distinguishing context looks relatively easy: we need a context C[] that corresponds to the term traversal we performed to reach this source of inequality. For example, if we had (t1, t2) ≉icc (t′1, t′2) because t2 ≉icc t′2, the context fragment corresponding to this reasoning step would be π2 []. This is trickier in the sum-elimination case: if we have match x with (σi x → ti)i ≉icc match x with (σi x → t′i)i then we need our context to instantiate the variable x with the right value σi p so that the branch we want is taken – the one with ti ≉icc t′i. This is easy if x is a formal variable introduced by a λ-abstraction: at the point where our context needs to distinguish the two λ-abstractions λx. t ≉icc λx. t′, we need to use an application context of the form [] (σi p). But x may have been introduced by a left-focusing step let x = n in t ≉icc let x = n in t′; then we need to instantiate the variables of the observation n just so that we get the desired result σi p. When the source of inequality is on negative neutrals with heads x : N, y : M or on positive variables x ≠ y : X+, we need the distinguishing context to pass values in the same way to get to this source of inequality, and also to instantiate the variables x, y to get an inequality. If those variables are at an atomic type, we must pick a model that replaces this atomic type by a closed type, to instantiate them by distinguishable values at this closed type.
Positive simplification In order to simplify the following arguments, we will suppose that the context and types of the two saturated proofs to distinguish use no negative atoms, only positive atoms. In other words, the results only hold for the focused system in the fragment ΛC(X+, →, ×, 1, +, 0).

This is a perfectly reasonable simplification in view of our goal, which is to distinguish inequivalent non-focused terms that have distinct saturated normal forms: we know that any choice of polarization for the non-focused atoms preserves the existence of normal forms, so we can make them all positive – see Section 3.3 (Choices of polarization).

In particular, the source of inequality (distinct neutrals whose spines differ) is always a pair of positive neutrals p ≉icc p′, which contain a syntactic difference (σi … ≉icc σj … with i ≠ j, or x ≠ y : X+) before the end of the non-invertible phase. Negative neutrals are only used during saturation.
Neutral model If S is a finite set, let us write Fin(S) for the type 1 + … + 1 that is in bijection with S (same number of elements), witnessed by embeddings inFin( ) : S → Fin(S) and outFin( ) : Fin(S) → S such that

outFin(inFin(x)) = x ∈ S    ∧    ∅ ` inFin(outFin(t)) ≈βη t : Fin(S)

For any two distinct elements x ≠ y ∈ S there exists a distinguishing context for inFin(x) ≉ctx inFin(y).

Definition 12 (Neutral model). Given two syntactically distinct saturated terms Γna ; Σp `inv t0 ≉icc t′0 : N | Pa (with positive atoms only), with a source of inequality of the form

Γ′na ` p ≉icc p′ ⇑ P

we define the neutral model Nt0,t′0 (or just N) by

Nt0,t′0(Y+) def= Fin({x | (x : Y+) ∈ Γ′na})

We say that N(X) contains a code for each atomic variable bound at the source of inequality.
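The Fin(S) coding is easy to make concrete; here is a sketch in OCaml, where a finite set of size n is coded as nested sums 1 + (1 + …) and the code of an element is computed from its position. The representation and names are ours, and we assume n ≥ 1.

```ocaml
type ty = Unit | Sum of ty * ty
type value = VUnit | VInj of int * value

(* Fin(S) for a set S of size n *)
let rec fin (n : int) : ty =
  if n <= 1 then Unit else Sum (Unit, fin (n - 1))

(* inFin: the code of the i-th element (0-based) of a set of size n,
   i.e. σ2 (... σ2 (σ1 ())) with i leading σ2 *)
let rec in_fin (n : int) (i : int) : value =
  if n <= 1 then VUnit
  else if i = 0 then VInj (1, VUnit)
  else VInj (2, in_fin (n - 1) (i - 1))
```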
Distinguishing outline The general idea of our distinguishing context construction is to instantiate variables just so that each variable of atomic type (x : X+) evaluates to its code inFin(x). Thus, when we end up on distinct neutrals p ≉icc p′, we know that our context will send them to distinguishable values.
There are two moments where building a distinguishing context requires synthesizing a closed value at a type: to instantiate the open variables in the context of the two terms, and when distinguishing invertible terms at a function type P → N, which we know to be of the shape λ(x : P). ( : N).

Synthesizing a value for a variable of atomic type (x : X+) is obvious: we just pick inFin(x) – this guarantees that, under this context, x will reduce to inFin(x) as expected. For a variable of sum type P + Q, we have to choose the value so as to make sure that the correct branch of the two terms (the one containing the source of inequality) will be explored, as previously explained. For a variable x of negative type N, we have to make sure that any observation of x, that is, any neutral term n whose head variable is x (in the syntax of Curien, Fiore, and Munch-Maccagnoni (2016) they are the hx | Si), will reduce to the value we need: if we have let y : P = n in …, the neutral n should reduce to inFin(y). In other words, much in the spirit of higher-order focusing (Zeilberger 2009), we specify the instantiation of (x : N) by a mapping over all the observations of x that are made in t0, t′0.
Example Consider for example:

n : (1 + X+) → hX+i− `
    let z = n (σ1 ()) in let o = n (σ2 z) in z
  ≉icc
    let z = n (σ1 ()) in let o = n (σ2 z) in o
  : X+
The shared context in this example is

let z = n (σ1 ()) in let o = n (σ2 z) in []

and the source of inequality is z ≉icc o. The atomic variables in the context at this point are z and o, so we have N(X+) = Fin({z, o}). We have to provide a value for the negative variable n; its "observations", the arguments it is called with, are σ1 () and σ2 z, so we define it on these inputs following the general scheme:

n̂ def= { σ1 () 7→ inFin(z) ; σ2 (inFin(z)) 7→ inFin(o) }

The value of n̂ on the last element of N(1 + X+), namely σ2 (inFin(o)), is not specified; the return type is inhabited, so a value can be chosen, and the specific choice does not matter.

It is easy to check that the context C[] def= (λn. []) n̂ is such that plugging in both terms will result in distinct closed values of N(X+), namely inFin(z) and inFin(o). From there, building a distinguishing context returning a boolean is trivial.
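For concreteness, here is the instantiation n̂ of this example written directly on semantic values, under our own encoding (Fin {z, o} coded as 1 + 1): the two observed arguments are mapped to the codes of z and o, and the remaining, unobserved argument σ2 (inFin(o)) is sent to an arbitrary inhabitant.

```ocaml
type value = VUnit | VInj of int * value

let code_z = VInj (1, VUnit)   (* inFin(z) in Fin {z, o} = 1 + 1 *)
let code_o = VInj (2, VUnit)   (* inFin(o) *)

(* n̂ : N(1 + X+) → N(X+), that is (1 + (1 + 1)) → (1 + 1) *)
let n_hat : value -> value = function
  | VInj (1, VUnit) -> code_z               (* σ1 ()          ↦ inFin(z) *)
  | VInj (2, v) when v = code_z -> code_o   (* σ2 (inFin(z))  ↦ inFin(o) *)
  | _ -> code_z                             (* unspecified: any value works *)
```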
Technical challenge We outlined the general argument and demonstrated it on an example. Unfortunately, we found that scaling it to a rigorous, general proof is very challenging.

When we define instantiation choices for negative types as mappings from observations to their results, we implicitly rely on the fact that the observations are distinct from each other. This is obvious when the domain of these observations is made of first-order datatypes (no functions), but delicate when some of those observations have function types – consider observations against (x : hP → Ni+ → hX+i−).

The natural idea is to inductively invoke the distinguishability result: if x (λy. t) ≉icc x (λy. t′), then t ≉icc t′ are distinguishable, and x̂ can distinguish those two arguments by passing the right instantiation for y. However, making this intuition precise gets us into a quagmire of self-references: to define the instantiation of the variable x, we may need to instantiate any such variable y whose scope it dominates; but the argument for distinguishability is really about the values that λy. t, λy. t′ have reduced to by the time they are passed to x̂; and the natural way to denote those values makes a reference to the instances passed for all the variables in their scope, x included...

We are convinced that there is a general inductive argument to be found that would play in a beautiful way with the general polarized type structure. We have not found it yet, and shall for now use (a focused version of) the function-removal hack from Section 2.5 (Fun-less types and reification).
6.2 Saturated inequivalence

We argued that if we have t ≉icc t′, then those terms must be of the form D[n], D[n′] or D[p], D[p′] where the neutrals n ≉icc n′ : X− or p ≉icc p′ : P have distinct spines, that is, they differ even when ignoring their invertible sub-terms. With the simplifying assumption that all atoms are positively polarized, only the case p ≉icc p′ may arise.
This structure is crucial in the proof of canonicity, so we introduce in Figure 8 (Saturated inequivalence judgment) a precise inductive definition of this decomposition, as a saturated inequivalence judgment

Γ | Ξ `inv D[Γ′ | Ξ′ `ne p ≠ p′ : P] : N | Pa

which represents constructive evidence of the fact that D[p] ≉icc D[p′], where p, p′ are the source of inequality, as witnessed by the new judgment Γ′ `ne p ≠ p′ ⇑ P.
The new structure Ξ is a constraint environment: a list of equalities of the form p = x : P or x = n : P that correspond to knowledge accumulated during the traversal of D. Under a binding let x = n in t we remember x = n, and when doing a case-split on a variable x : P1 + P2 we remember which branch leads to the source of inequality by σi x′ = x. Together, Γ′ | Ξ′ form a constrained environment as used in Fiore and Simpson (1999). Note that when decomposing the variable x : P1 + P2 we mention it in the constraint environment, so we also keep it in Γ′ for scoping purposes: we use general contexts with types of any polarity, not just negative-or-atomic contexts Γna.

Two side-conditions in this judgment capture the essential properties of saturated terms required to obtain canonicity. In the saturation case, the condition ∀n ∈ n̄, n ∉ Ξ enforces that the let-bindings in the constraint environment Ξ all bind distinct neutrals. At the source of inequality, the condition Γ ⊬ 0 enforces that the context is consistent.
Definition 13. A constrained environment Γ | Ξ is valid if

  n ∈ Ξ ⟹ Γ ⊢ n ⇓ ⟨P⟩−
  p ∈ Ξ ⟹ Γ ⊢ p ⇑ P
  (p = x) ∈ Ξ, (p′ = y) ∈ Ξ, x =α y ⟹ p =α p′
  (x = n) ∈ Ξ, (y = n′) ∈ Ξ, n =α n′ ⟹ x =α y

Neutral model. If S is a finite set, let us write Fin(S) for the type 1 + . . . + 1 that is in bijection with S (same number of elements), witnessed by embeddings inFin( ) : S → Fin(S) and outFin( ) : Fin(S) → S such that outFin(inFin(s)) = s and inFin(outFin(v)) = v.

Figure 8. Saturated inequivalence judgment

  Ξ ::= ∅ | Ξ, p = x | Ξ, x = n        (constraint environment)

  Judgment forms:
  Γ | Ξ ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] : N | Qa
  Γ | Ξ ⊢foc D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] : Qa

  Case-split rule (other cases omitted for space):

  Γ, x : P1 + P2, xi : Pi | Ξ, σi xi = x ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] : N | Qa
  ───────────────────────────────────────────────────────────────────────────
  Γ, x : P1 + P2 | Ξ ⊢inv (match x with σi xi → D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] | σj xj≠i → t) : N | Qa

  Saturation rule:

  ∀n ∈ n̄, n ∉ Ξ      Γ, x̄ : P̄ | Ξ, x̄ = n̄ ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] : ∅ | Qa
  ───────────────────────────────────────────────────────────────────────────
  Γ | Ξ ⊢foc let (x̄ : P̄) = n̄ in D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] : Qa

  Source of inequality (other cases omitted for space):

  Γ ⊢ne p ≠ p′ ⇑ P      Γ ⊬ 0
  ─────────────────────────────
  Γ | Ξ ⊢foc [Γ | Ξ ⊢ne p ≠ p′ : P] : P

  Spine inequality Γ ⊢ne p ≠ p′ ⇑ P:

  (i ≠ j) ∨ (Γ ⊢ne p ≠ p′ ⇑ P)i=j
  ─────────────────────────────
  Γ ⊢ne σi p ≠ σj p′ ⇑ P1 + P2

  x ≠ y
  ─────────────────────────────
  Γ, x : X+, y : X+ ⊢ne x ≠ y ⇑ X+

  (no case for Γ ⊢ne t ≠ t′ ⇑ ⟨N⟩+)
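The Fin(S) encoding from the neutral-model paragraph above can be rendered in a few lines of OCaml; here a finite set S is modeled by a list of its elements, and the two embeddings are index lookup in both directions (names and representation are ours, purely illustrative).

  (* inFin: an element of S to its branch index in 1 + ... + 1 *)
  let in_fin (s : 'a list) (x : 'a) : int =
    let rec index i = function
      | [] -> invalid_arg "in_fin: not an element of S"
      | y :: ys -> if y = x then i else index (i + 1) ys
    in
    index 0 s

  (* outFin: a branch index back to the corresponding element of S *)
  let out_fin (s : 'a list) (i : int) : 'a = List.nth s i

  let () =
    let s = ["a"; "b"; "c"] in
    assert (List.for_all (fun x -> out_fin s (in_fin s x) = x) s)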
Lemma 6 (Saturated inequivalence). If ∅; Σp ⊢inv t ≉icc t′ : N | Qa with positive atoms only, then there exist D[], p, p′ and a valid Ξ such that

  Σp | ∅ ⊢inv D[Γ′na | Ξ ⊢ne p ≠ p′ : P] : N | Qa
  t ≈icc D[p]        t′ ≈icc D[p′]

Let us write (t ≉icc t′) ⇝ D[p ≠ p′] when this relation holds.

6.3 Focused fun removal
In this section, we describe how to transform any pair of distinct saturated terms t0 ≉icc t′0 in ΛC(X+, →, ×, 1, +, 0) into a pair of distinct saturated terms in the restricted type system ΛC(×, 1, +, 0). We know that we can apply the neutral model N_{t0,t′0} to get a pair of terms without atoms – in ΛC(→, ×, 1, +, 0)
– and then the bijections of Section 2.5 (Fun-less types and
reification) give us terms in ΛC(×, 1, +, 0). But the impact of these
transformations on the focused and saturated term structure must
be studied carefully.
Reduction to closed types. When applying the neutral model N_{t0,t′0}, we turn atomic types X+ into strictly positive types N(X+) of the form 1 + . . . + 1. Consider a binding site of the form (λ(x : X+). t), which becomes λ(x : 1 + . . . + 1). N(t) after transformation. To recover a valid focused term, we need to insert a big sum elimination to remove the positive variable x from the context:

  λx. match x with (σi x → t[σi ()/x])i∈N(X+)

(the family notation here denotes a cascade of sum-eliminations, with as many cases in total as elements in N(X+)). We also perform such an elimination on each variable in context at the root.

The substitution of σi () for (x : X+) in t may not create β-redexes, as a variable x of atomic type may not occur in eliminated-sum position. By inspection of the typing rules of our focused system, such a variable can only appear in a judgment of the form Γna ⊢ x ⇑ X+; the focused structure is preserved by replacing it by the derivation Γna ⊢ σi () ⇑ 1 + . . . + 1. For a focused term t, let us simply write N(t) for this transformed focused term, splitting over each variable of type N(X+).

The saturated structure, however, is not preserved by this change. The problem is that replacing a new variable x by a closed term σi () may break the condition that only "new" neutrals are introduced during a saturation phase: any neutral that would only use x as the new variable introduced by the last invertible phase is not new anymore.

However, we can show that the saturated inequivalence structure is preserved by this transformation. There is a shared context to a source of inequality in the transformed terms, where the let-bindings might not be new at introduction time, but they are not redundant, as requested by the inequivalence structure. This is the property of saturated proofs (besides saturation consistency) that our distinguishability result relies on.

Theorem 11 (Inequivalence in the model). Suppose we have saturated forms t0, t′0 with positive atoms only. Let us define t1 def= N_{t0,t′0}(t0) and t′1 def= N_{t0,t′0}(t′0). If (t0 ≉icc t′0) ⇝ D0[p0 ≠ p′0], then there exist D1, p1, p′1 such that (t1 ≉icc t′1) ⇝ D1[p1 ≠ p′1].

Function elimination. To turn t1 ≉icc t′1 into terms at function-less types, we use the transformations presented in Figure 9 (Fun-less focused forms). They correspond to focused versions of the term transformations b cA and T UA of Figure 3 (Fun-less data types), in two interdependent ways: their definitions assume that the transformed terms respect the focused structure, which gives us rich information on term shapes, and their results remain in focused form. The specification of these transformations is as follows:

  Γna; Σp ⊢inv t : N | Pa  ⟹  bΓnac; bΣpc ⊢inv btc(N|Pa) : bNc | bPac
  Γna; Σp ⊢inv t : N | Pa  ⟹  TΓnaU; TΣpU ⊢inv TtU(N|Pa) : TNU | TPaU
On neutrals, the transformation is specified by:

  Γna ⊢ n ⇓ N   ⟹   TΓnaU ⊢ TnUN ⇓ TNU
  Γna ⊢ m ⇓ N′  ⟹   ∃m′, TΓnaU ⊢ m′ ⇓ N′   (. . .)
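Before discussing the term-level details, it may help to see the function-removal idea at the type level. The following OCaml sketch is a simplified, unpolarized approximation in the spirit of Figure 3 (Fun-less data types) – the focused, polarized version of Figure 9 is more refined, and the constructor names here are ours:

  type ty = Zero | One | Sum of ty * ty | Prod of ty * ty | Arrow of ty * ty

  (* eliminate every arrow by recursion on its domain,
     so the result is a sum-of-products type *)
  let rec funless : ty -> ty = function
    | Zero -> Zero
    | One -> One
    | Sum (a, b) -> Sum (funless a, funless b)
    | Prod (a, b) -> Prod (funless a, funless b)
    | Arrow (a, b) ->
      (match a with
       | Zero -> One                       (* 0 -> B  ~  1 *)
       | One -> funless b                  (* 1 -> B  ~  B *)
       | Sum (a1, a2) ->                   (* (A1+A2) -> B  ~  (A1->B) x (A2->B) *)
         Prod (funless (Arrow (a1, b)), funless (Arrow (a2, b)))
       | Prod (a1, a2) ->                  (* (A1xA2) -> B  ~  A1 -> (A2 -> B) *)
         funless (Arrow (a1, Arrow (a2, b)))
       | Arrow _ ->                        (* remove arrows in the domain first *)
         funless (Arrow (funless a, b)))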
The transformation of negative neutrals is more elegantly expressed in the sequent-calculus syntax of Curien, Fiore, and Munch-Maccagnoni (2016): we transform a command ⟨n |N S⟩ : (Γna ⊢ M), cutting on a type N, into a command T⟨n |N S⟩UN : (TΓnaU ⊢ M), cutting on TNU. We will write m[n] when the neutral m has head n – it is of the form ⟨n | S⟩.

The definition strongly relies on the inversion of term structure of focused terms modulo (≈icc). For example, when we define Tλx. tU(P1+P2)→N, we know that x : P1 + P2, and can thus assume modulo (≈icc) that t is of the form match x with (σi x → ti)i. Similarly in the neutral case, we know that any neutral Γna ⊢ n ⇓ (P1 + P2) → N can only appear in a term applied to an argument Γna ⊢ p ⇑ (P1 + P2) – non-invertible phases are as long as possible, so they cannot stop on a negative function type – and we know that such a p must be of the form σi p′.

The way the focused structure conspires to make this transformation possible is, in fact, rather miraculous – is it more than just a hack? In the λx. t case of ⟨N1 × N2⟩+ → M, we know that a derivation of the form Γna ⊢ x ⇓ N1 × N2 can only appear inside a larger derivation Γna ⊢ πi x ⇓ Ni, which we can replace by TΓnaU ⊢ xi ⇓ Ni. In the neutral case, a (n : ⟨N1 × N2⟩+ → M) can only appear applied to an argument Γna ⊢ p ⇑ ⟨N1 × N2⟩+, but as this is a shifted negative type we know that p is in fact an invertible form Γna; ∅ ⊢inv t : N1 × N2 | ∅, and this must be a pair
Figure 9. Fun-less focused forms

btcA applies T UA on all subterms.

  Tλx. match x with (σ1 x → t1 | σ2 x → t2)U(P1+P2)→N def= (Tλx. t1UP1→N , Tλx. t2UP2→N)
  TnU(P1+P2)→N (σi p) def= Tπi nUPi→N p

  Tλx. absurd(x)U0→N def= ()
  TnU0→N p : impossible

  Tλx. tU⟨1⟩+→N def= t
  TnU⟨1⟩+→N () def= n

  Tλx. tU⟨N1×N2⟩+→M def= Tλx1. Tλx2. t[xi/πi x]i UN2→M UN1→(N2→M)
  TnU⟨N1×N2⟩+→M (t1, t2) def= TTnUN1→(N2→M) t1UN2→M t2

  Tλx. tU⟨⟨P⟩−⟩+→N def= Tλx. tUP→N
  TnU⟨⟨P⟩−⟩+→N (Γna; ∅ ⊢inv p : ⟨P⟩− | ∅) def= TnUP→N p

  TxUN def= (x : TNU)
(t1, t2), exactly what we needed to define the transformation. The same phenomenon occurs on the argument Γ′na ⊢ p ⇑ ⟨⟨P⟩−⟩+ of the double-shifted case.

Lemma 7 (Function elimination preserves inequivalence).

  (∃D, p, p′, (t ≉icc t′) ⇝ D[p ≠ p′])  ⟹  (∃D, p, p′, (btc ≉icc bt′c) ⇝ D[p ≠ p′])

(The converse implication also holds, but we don't need it.)
Lemma 8.

  JbbtcAcfocK = bJbtcfocKcbAc± = JbbtcfoccbbAc±cK ∈ JbAc±K

Corollary 3 (Fun-elimination preserves semantic equivalence).

  JtK = Jt′K  ⟺  JbtcAK = Jbt′cAK

6.4 Canonicity
Definition 14. Let a closed substitution ρ be a mapping from variables to closed terms of closed types (types without atoms). We write ρ : Γ when Γ is a closed context and, for any (x : A) ∈ Γ, we have ∅ ⊢ x[ρ] : A. If Ξ is a constraint environment, we say that Ξ[ρ] holds when, for any equation t = u in Ξ, we have t[ρ] ≈sem u[ρ]. We write ρ : (Γ | Ξ) if ρ : Γ and Ξ[ρ] hold.
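The following OCaml fragment is an illustrative rendering of this definition, with structural equality of closed terms standing in for ≈sem; the term representation and all names are stub assumptions of ours.

  type term = Unit | Pair of term * term | Inj of int * term | V of string

  (* apply a closed substitution ρ; ρ is assumed total on the variables of Ξ *)
  let rec subst (rho : (string * term) list) (t : term) : term =
    match t with
    | V x -> List.assoc x rho
    | Unit -> Unit
    | Pair (a, b) -> Pair (subst rho a, subst rho b)
    | Inj (i, u) -> Inj (i, subst rho u)

  (* Ξ[ρ] holds when every equation t = u in Ξ is satisfied after substitution *)
  let xi_holds rho (xi : (term * term) list) : bool =
    List.for_all (fun (t, u) -> subst rho t = subst rho u) xi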
Theorem 12 (Canonicity). Assume Γ is consistent, Ξ is valid, and

  Γ0 | ∅ ⊢inv D[Γ | Ξ ⊢ne p ≠ p′ : P] : N | Pa

is a judgment of closed function-less types. Then there exists a closed substitution ρ : (Γ | Ξ) such that p[ρ] ≉sem p′[ρ].
Corollary 4 (Canonicity). In the sub-system ΛC(×, 1, +, 0):

  Γ | ∅ ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P] : N | Pa  ⟹  D[p] ≉ctx D[p′]

6.5 Results

Theorem 13 (Saturated terms are canonical). In the system with only positive atoms, if ∅; Σp ⊢sinv t ≉icc t′ : N | Pa then t ≉ctx t′.

Corollary 5 (Contextual equivalence implies equality of saturated forms). If Γ ⊢ t, t′ : A are (non-focused) terms of the full simply-typed lambda-calculus ΛC(X, →, ×, 1, +, 0) with t ≈ctx t′, then for any Σp, N with no positive atoms and bΣpc± = Γ, bNc± = A, and any saturated terms ∅; Σp ⊢sinv u, u′ : N | ∅ such that t ≈βη bucfoc and t′ ≈βη bu′cfoc, we have u ≈icc u′.

Corollary 6. Contextual and βη-equivalence coincide.

Corollary 7. Equivalence in the full simply-typed λ-calculus with sums and the empty type is decidable.

Corollary 8. The full simply-typed λ-calculus with sums and the empty type has the finite model property.
7. Conclusion

7.1 Other Related Work
Background material. For the reader looking for more historical perspective, Dougherty and Subrahmanyam (2000) give an interesting, detailed presentation of the state of the art in separation theorems in the absence of sum types, and of why their approaches are difficult to extend to sums. Simpson (1995) gives an enjoyable, accessible exposition of what "completeness" exactly means in the setting
of categorical semantics; in particular, while we can prove that two
inequivalent terms can be distinguished by a choice of finite sets,
there is no fixed choice of finite sets that could separate all pairs of
terms. It also discusses the notion of “typical ambiguity”, and the
idea that βη-equality is shown to be the maximal consistent relation. At the time of writing, Simpson’s statement of the 15th TLCA
open problem is also one of the clearest expositions of the relation
between these questions.
Focusing. The main field of application of focusing to programming is the area of logic programming, where operational semantics correspond to search strategies, which can often be described as specific choices of polarization in a focused logic (Chaudhuri, Pfenning, and Price 2008b). Focusing has been used to study programming language principles in Zeilberger (2009); when considering non-normal forms (logics with a strong cut rule), it lets us reason finely about evaluation order.
This suggests the general notion of polarization, an approach to programming calculi where compositions (cuts) are non-associative (Curien, Fiore, and Munch-Maccagnoni 2016), modeling effects and resources. In the present work we consider only focused normal forms (except for stating the completeness of focusing), which capture only the pure, strongly normalizing fragment, and thus correspond to a depolarized system.
Guillaume Munch-Maccagnoni's work on polarized abstract machine calculi touches on many ideas close to the present work, in particular Munch-Maccagnoni and Scherer (2015). It is conducted in a syntax that is inspired by the sequent calculus rather than the λ-calculus; this choice gives a beautiful dynamic semantics to polarized systems. In contrast, the focused λ-calculus as presented here is a system of normal forms, and defining an internal reduction for it would be awkward. While there is no direct correspondence between sequent proofs and natural deduction proofs in general, their normal forms are in one-to-one correspondence, so in our context the choice of abstract-machine or λ-term-inspired syntax matters less.
Atom-less systems. The structure of normal forms has been studied in Altenkirch and Uustalu (2004) in the special case of the type system ΛC(→, +, 2), with boolean types instead of general sums, and no atoms. While conceptually easier to follow than the Grothendieck logical relations of the more general normalization-by-evaluation work on sums, these normal forms remain challenging in the presence of higher-order functions.
In unpublished work, Ahmad, Licata, and Harper (2010) work with the type system ΛC(→, ×, 1, +, 0): no atoms, but everything else. They use focusing as a guiding principle to generalize the normal forms of ΛC(→, +, 2) to everything else – to our knowledge this is the first work to use focusing to approach sum types. The result obtained, namely decidability of observational equivalence in this system, is not striking on its own: in the absence of atoms, all types are finitely inhabited, so two functions can be compared by testing them on their entire input domain. But it is conducted in a rigorous, inspiring way that shows the promise of the focused structure. Our own proof of distinguishability of distinct normal forms is not as elegant as this development, as it uses the inelegant shortcut of function type elimination. We are convinced that there exists a beautiful proof that weaves the distinguishability structure through higher-order abstractions in their style, but have yet to find it.
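The finite-inhabitation observation above can be made concrete in a few lines of OCaml: every closed type of the atom-less system has an enumerable set of inhabitants, with functions represented extensionally by their finite graphs. This is our own illustrative sketch, not code from either cited work.

  type ty = One | Zero | Sum of ty * ty | Prod of ty * ty | Arrow of ty * ty
  type value =
    | VUnit | VInj of int * value | VPair of value * value
    | VFun of (value * value) list          (* finite graph of a function *)

  let rec inhabitants : ty -> value list = function
    | Zero -> []
    | One -> [VUnit]
    | Sum (a, b) ->
      List.map (fun v -> VInj (1, v)) (inhabitants a)
      @ List.map (fun v -> VInj (2, v)) (inhabitants b)
    | Prod (a, b) ->
      List.concat_map
        (fun v -> List.map (fun w -> VPair (v, w)) (inhabitants b))
        (inhabitants a)
    | Arrow (a, b) ->
      (* all graphs: one output choice for each input *)
      let dom = inhabitants a in
      List.fold_right
        (fun x fs ->
           List.concat_map
             (fun f -> List.map (fun y -> (x, y) :: f) (inhabitants b))
             fs)
        dom [[]]
      |> List.map (fun g -> VFun g)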
7.2 Future Work

Direct distinguishability proof. The use of the positive-atoms simplification and the detour through types without functions are a bit disappointing. There should be a proof in which all steps are as general as possible: completeness of focusing in the explicitly polarized system, completeness of saturated forms, and then canonicity of saturated forms at all types.
Categorical semantics. We wonder what the relation is between the saturated structure and the existing work on categorical semantics of the λ-calculus with finite sums.
Direct comparison algorithm. We prove decidability by reducing equivalence of arbitrary λ-terms to equivalence of saturated forms. This algorithm can be implemented, but we would rather use an algorithm that does not need to compute full saturated normal forms before returning a result – in particular on inequivalent inputs.

It is reasonably easy to formulate an equivalence algorithm on the focused forms directly that performs the saturation "on the fly": at the beginning of each saturation phase, look for all the neutrals that can be defined in the current context, recursively compute their equivalence classes, and replace each class by a distinct free variable – then continue on the neutral structure. Proving this algorithm correct, however, turns out to be challenging.
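The recursion structure of that on-the-fly loop can be sketched as follows; everything here is schematic (the operations in the signature stand in for the real focused machinery, and the class grouping is stubbed), so it fixes only the shape of the algorithm, not its content.

  module type FOCUSED = sig
    type ctx
    type term
    type neutral
    val definable_neutrals : ctx -> neutral list  (* neutrals typeable in ctx *)
    val fresh_var : ctx -> ctx                    (* extend ctx with a free variable *)
    val continue_ : ctx -> term -> term -> bool   (* compare the remaining structure *)
  end

  (* group elements into classes under an equivalence predicate *)
  let partition_classes (eq : 'a -> 'a -> bool) (xs : 'a list) : 'a list list =
    List.fold_left
      (fun classes x ->
         match List.partition (fun c -> eq (List.hd c) x) classes with
         | [c], rest -> (x :: c) :: rest
         | _ -> [x] :: classes)
      [] xs

  module OnTheFly (F : FOCUSED) = struct
    (* one saturation phase: class the definable neutrals (comparing them
       recursively), bind one fresh variable per class, then continue *)
    let saturation_phase (neutral_equiv : F.neutral -> F.neutral -> bool)
        (ctx : F.ctx) (t : F.term) (t' : F.term) : bool =
      let classes = partition_classes neutral_equiv (F.definable_neutrals ctx) in
      let ctx = List.fold_left (fun c _ -> F.fresh_var c) ctx classes in
      F.continue_ ctx t t'
  end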
Finally, it would be interesting to have an algorithm expressed directly on the non-focused terms, one that would perform a focusing transformation on the fly, as much as each step of equivalence checking requires. The difficulty then is that neutral subterms are not as long as possible (they may be interrupted by commuting conversions), so looking for neutral subterms definable in the current context may miss some of them – future reasoning stages may un-stick sum-eliminations that in turn unblock neutrals that should have been bound earlier. For the type system with non-empty sums, control operators have been used (Balat, Di Cosmo, and Fiore 2004) to solve that issue; can the technique be extended?
Acknowledgments

Amal Ahmed made this work possible by giving the time, motivation and advice that let it flourish. Andrew Pitts gave warm encouragement to look at the specific question of the empty type, and Alex Simpson enthusiastically sat through an intense whiteboard session at Schloss Dagstuhl and provided the final motivation to get it done. Max New, Didier Rémy and the anonymous referees provided very useful feedback on the article.
This research was supported in part by the National Science
Foundation (grant CCF-1422133).
References

Arbob Ahmad, Daniel R. Licata, and Robert Harper. Deciding coproduct equality with focusing. Online draft, 2010.
Thorsten Altenkirch and Tarmo Uustalu. Normalization by evaluation for λ→2. In FLOPS, 2004.
Thorsten Altenkirch, Peter Dybjer, Martin Hofmann, and Philip J. Scott. Normalization by evaluation for typed lambda calculus with coproducts. In LICS, 2001.
Jean-Marc Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2(3), 1992.
Vincent Balat, Roberto Di Cosmo, and Marcelo P. Fiore. Extensional normalisation and type-directed partial evaluation for typed lambda calculus with sums. In POPL, 2004.
Corrado Böhm. Alcune proprietà delle forme normali nel λ-K-calcolo. IAC Pubbl. 696, 1968.
Kaustuv Chaudhuri, Dale Miller, and Alexis Saurin. Canonical sequent proofs via multi-focusing. In IFIP TCS, 2008a.
Kaustuv Chaudhuri, Frank Pfenning, and Greg Price. A logical characterization of forward and backward chaining in the inverse method. J. Autom. Reasoning, 40(2-3), 2008b.
Kaustuv Chaudhuri, Stefan Hetzl, and Dale Miller. A systematic approach to canonicity in the classical sequent calculus. In CSL, 2012.
Pierre-Louis Curien, Marcelo Fiore, and Guillaume Munch-Maccagnoni. A theory of effects and resources: adjunction models and polarised calculi. In POPL, 2016. doi: 10.1145/2837614.2837652.
Daniel J. Dougherty and Ramesh Subrahmanyam. Equality between functionals in the presence of coproducts. Information and Computation, 157(1), 2000.
Marcelo Fiore and Alex Simpson. Lambda definability with sums via Grothendieck logical relations. In TLCA, 1999.
Harvey Friedman. Equality between functionals. In Logic Colloquium, 1975.
Neil Ghani. Beta-eta equality for coproducts. In TLCA, 1995.
Danko Ilik. The exp-log normal form of types and canonical terms for lambda calculus with sums. CoRR, arxiv:1502.04634, 2015. URL http://arxiv.org/abs/1502.04634.
Chuck Liang and Dale Miller. Focusing and polarization in intuitionistic logic. CoRR, arxiv:0708.2252, 2007. URL http://arxiv.org/abs/0708.2252.
Sam Lindley. Extensional rewriting with sums. In TLCA, 2007.
Guillaume Munch-Maccagnoni and Gabriel Scherer. Polarised intermediate representation of lambda calculus with sums. In LICS, 2015.
Gabriel Scherer. Which types have a unique inhabitant? Focusing on pure program equivalence. PhD thesis, Université Paris-Diderot, 2016.
Gabriel Scherer and Didier Rémy. Which simple types have a unique inhabitant? In ICFP, 2015.
Robert J. Simmons. Structural focalization. CoRR, arxiv:1109.6273, 2011. URL http://arxiv.org/abs/1109.6273.
Alex Simpson. Categorical completeness results for the simply-typed lambda-calculus. In TLCA, 1995.
Richard Statman. Completeness, equivalence and lambda-definability. Journal of Symbolic Logic, 47(1), 1982.
Noam Zeilberger. The Logical Basis of Evaluation Order and Pattern-Matching. PhD thesis, Carnegie Mellon University, 2009.
Noam Zeilberger. Polarity in proof theory and programming, 2013. URL http://noamz.org/talks/logpolpro.pdf. Lecture notes for the Summer School on Linear Logic and Geometry of Interaction in Torino, Italy.
A. Full definitions

Definition (Semantics of terms). We define the following naive semantics for term formers:

  varx : JΓ, x : A ⊢ AKM
  varx(G) def= G(x)

  pair : JΓ ⊢ A1KM × JΓ ⊢ A2KM → JΓ ⊢ A1 × A2KM
  pair(f1, f2)(G) def= (f1(G), f2(G))

  proji : JΓ ⊢ A1 × A2KM → JΓ ⊢ AiKM
  proji(f)(G) def= vi where f(G) = (v1, v2)

  lam : JΓ, x : A ⊢ BKM → JΓ ⊢ A → BKM
  lam(f)(G) def= (v ∈ JAKM) ↦ f(G, x ↦ v)

  app : JΓ ⊢ A → BKM × JΓ ⊢ AKM → JΓ ⊢ BKM
  app(f, g)(G) def= f(G)(g(G))

  unit : JΓ ⊢ 1KM
  unit(G) def= ?

  inji : JΓ ⊢ AiKM → JΓ ⊢ A1 + A2KM
  inji(f)(G) def= (i, f(G))

  match : JΓ ⊢ A1 + A2KM × JΓ, x : A1 ⊢ BKM × JΓ, x : A2 ⊢ BKM → JΓ ⊢ BKM
  match(f, g1, g2)(G) def= gi(G, x ↦ v) where f(G) = (i, v)

  absurd : JΓ ⊢ 0KM → JΓ ⊢ AKM
  absurd def= ∅

By composing together the semantics of the term formers in the obvious way, we obtain semantics for terms t and one-hole contexts C:

  JΓ ⊢ t : AKM ∈ JΓ ⊢ AKM
  JΓ ⊢ C[Γ′ ⊢ □ : A′] : AKM ∈ JΓ, Γ′ ⊢ A′KM → JΓ ⊢ AKM

For example we have J(t1, t2)KM = pair(Jt1KM, Jt2KM) and J(t, □)KM(f) = pair(JtKM, f). In particular, J□KM is the identity function.

Definition (Depolarization).

  bX+c± def= X        bY−c± def= Y
  bP + Qc± def= bPc± + bQc±        b0c± def= 0
  bN × Mc± def= bNc± × bMc±        b1c± def= 1
  bP → Nc± def= bPc± → bNc±
  b⟨N⟩+c± def= bNc±        b⟨P⟩−c± def= bPc±

Definition (Fun-less types and reification). (The definitions of bAc and TAU were given in Figure 3 (Fun-less data types).) On terms, the coercions T U and V W are:

  TtUA1×A2→B def= Tλx1. Tλx2. t (x1, x2)UA2→B UA1→TA2→BU
  TtU1→B def= t ()
  TtUA1+A2→B def= (Tλx. t (σ1 x)UA1→B , Tλx. t (σ2 x)UA2→B)
  TtU0→B def= ()

  VtWA1×A2→B def= λx. VVtWA1→TA2→BU (π1 x)WA2→B (π2 x)
  VtW1→B def= λx. t
  VtWA1+A2→B def= λx. match x with (σ1 x1 → Vπ1 tWA1→B x1 | σ2 x2 → Vπ2 tWA2→B x2)
  VtW0→B def= λx. absurd(x)

and on semantic values:

  TvUA1×A2→B def= Tw1 ↦ Tw2 ↦ v((w1, w2))UA2→B UA1→TA2→BU
  TvU1→B def= v(?)
  TvUA1+A2→B def= (Tw ↦ v((1, w))UA1→B , Tw ↦ v((2, w))UA2→B)
  TvU0→B def= ?

  VvWA1×A2→B def= (w1, w2) ↦ VVvWA1→TA2→BU (w1)WA2→B (w2)
  VvW1→B def= ? ↦ v
  V(v1, v2)WA1+A2→B def= (i, w) ↦ VviWAi→B (w)
  VvW0→B def= ∅

The term-level and value-level conversions b cA and d eA are:

  b cA : A → bAc
  b(t1, t2)c def= (bt1c, bt2c)        bσi tc def= σi btc        b()c def= ()
  btcA→B def= Tλx. bt dxeA cB UA→B

  b c : JAK → JbAcK
  b(v1, v2)c def= (bv1c, bv2c)        b(i, v)c def= (i, bvc)        b?c def= ?
  bvcA→B def= Tw ↦ bv(dweA)cB UA→B

  d eA : bAc → A
  d(t1, t2)e def= (dt1e, dt2e)        dσi te def= σi dte        d()e def= ()
  dtebAc→bBc def= Vλx. dt bxcA eB WA→B

  d e : JbAcK → JAK
  d(v1, v2)e def= (dv1e, dv2e)        d(i, v)e def= (i, dve)        d?e def= ?
  dveA→B def= Vw ↦ dv(bwcA)eB WA→B
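The naive semantics above is easy to transcribe into OCaml; the following is a plain untyped rendering of ours (not the paper's indexed statement), with an association-list environment playing the role of G.

  type value =
    | VUnit
    | VPair of value * value
    | VInj of int * value
    | VFun of (value -> value)

  type env = (string * value) list        (* the environment G *)

  let varx x (g : env) = List.assoc x g
  let pair f1 f2 g = VPair (f1 g, f2 g)
  let proj i f g =
    match f g with
    | VPair (v1, v2) -> if i = 1 then v1 else v2
    | _ -> assert false
  let lam x f g = VFun (fun v -> f ((x, v) :: g))
  let app f h g = match f g with VFun k -> k (h g) | _ -> assert false
  let unit (_ : env) = VUnit
  let inj i f g = VInj (i, f g)
  let match_ f g1 g2 x g =
    match f g with
    | VInj (i, v) -> (if i = 1 then g1 else g2) ((x, v) :: g)
    | _ -> assert false
  (* absurd has no defining clause: the semantics of Γ ⊢ 0 is empty *)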
B. Proof outlines
Theorem 3 (βη is semantically sound). This is proved by direct induction on the evidence that t ≈βη u; semantic equivalence is a congruence, and β-reduction and η-expansion are identities in the semantic domain.

Theorem 4 (Semantic equivalence implies contextual equivalence). For any model M and context ∅ ⊢ C[] : 1 + 1, the closed term C[M(t)] must have a normal form σi () by Lemma 1 (Inversion), and C[M(u)] a normal form σj (). The semantic interpretations of C[t] (technically, JCK(JtKM) for the natural semantics of contexts with a hole) and of C[u] are equal by our assumption t ≈sem u and the fact that semantic equality is a congruence. They are also respectively equal to (i, ?) and (j, ?) by Theorem 3 (βη is semantically sound), so we must have i = j: C[] is not separating.

Theorem 5 (Reification). We define reify(v) def= dreify′(bvcA)eA using an (obvious) auxiliary function reify′(v) on values at fun-less types, on which the inversion property is immediate; for reify( ) it is then proved using the fact that J K and b cA, d eA commute.

Lemma 2. By Theorem 3 (βη is semantically sound), we can without loss of generality consider β-normal forms only. For the fun-less reify′( ), direct induction proves reify′(JtK) = t on normal forms t – note that the fact that our types have no functions is key to preserve the inductive invariant that our terms are closed, which is necessary to use the inversion lemma to reason on the shape of t. The result for reify( ) holds by commutation of J K and b cA, d eA.
Lemma 4 (Strong positive cut). By simultaneous induction on p
and t. In the variable case, it’s just a variable/variable substitution.
Theorem 8 (Completeness of saturation). The proof of Scherer and Rémy (2015) can be reused almost as-is. It defines a rewriting relation from focused terms to saturated terms that proceeds by moving let-bindings around: at any saturation point in some context Γna; Γ′na, it looks for all the negative neutral terms in t that would be well-typed in Γna, Γ′na, and saturates over exactly those neutral terms.

However, just selecting the neutral sub-terms of t does not make a valid selection function. We also have to add enough neutrals at each saturation step to respect the validity criterion; by the subformula property, a finite number of neutrals always suffices, so there exists a valid selection function containing the neutral sub-terms of t.

Finally, any selection function above this one also satisfies completeness. Introducing more neutrals either does not change the term shape (after the next invertible phase) or exposes a neutral proof of inconsistency (n : 0); in the latter case, more terms become βη-equivalent, which helps us prove our goal btcfoc ≈βη bt′cfoc.
Theorem 9 (Saturation). By induction on the derivation. In the base case, the old context is empty, so the result is trivial – there is no P such that ∅ ⇓ ⟨P⟩−. The induction case considers the application of the SAT rule to deduce Γna; Γ′na ⊢sat let x̄ = n̄ in t : Qa. We have to prove that any sub-derivation of t of the form Γna, Γ′na; Γ″na_k ⊢sat f_k : Qa_k satisfies the expected property for any P.

If Γna ⇓ P, this is immediate by induction hypothesis. If we have Γna, Γ′na ⇓ P but not Γna ⇓ P, then P is only provable by a "new" neutral n using variables from Γ′na, so it is passed to the selection function. By validity of the selection function (Figure 7 (Specification of saturation selection functions)) we know that some neutral n′ : P must then be selected, so P occurs in the positive context of the invertible term t. By Lemma 3 (Strong decomposition of invertible phases) we thus have Γ″na_k ⊢ P as required.
Lemma 5 (Saturated consistency). By (logical) completeness of focusing it suffices to prove that there is no focused proof of Γna; ∅ ⊢inv t : ∅ | 0.

By inversion on the possible derivations, we see that the invertible phase must be empty, and FOCLC-LET-POS is the only possible choice of focusing: t is of the form let x = n in u with

  Γna ⊢ n ⇓ ⟨Q⟩−        Γna; Q ⊢inv u : ∅ | 0
  ──────────────────────────────────────────
  Γna ⊢foc let x = n in u : 0

Γna is saturated, so Γna ⇓ ⟨Q⟩− implies Γna ⊢ Q. But then, by Lemma 4 (Strong positive cut), we know that u has a subterm Γna ⊢foc f : 0.

We have proved that any term in the judgment Γna ⊢foc 0 has a strictly smaller subterm itself in the judgment Γna ⊢foc 0. There is no such proof term.
Theorem 6 (Contextual equivalence implies semantic equivalence). We prove the contrapositive: if JtKM ≠ Jt′KM in some model M, we build a discriminating context C[] in M. Our induction on a type A has a slightly stronger hypothesis. We assume M(Γ) ⊢ t, t′ : M(A) and JtK ≠ Jt′K: the semantically distinct terms are well-typed at the closed typing M(Γ) ⊢ □ : M(A), but need not be well-typed at Γ ⊢ □ : A.

In the function case A → B we have an input w ∈ JAKM where the two semantics differ; we build the discriminating context C[(□ (reifyA(w)))], where C distinguishes at B by induction hypothesis.

In the sum case, JtKM is of the form (i, v) and Jt′KM of the form (j, v′), and we have either i ≠ j or v ≠ v′. In the second case, suppose for example i = j = 1; we use a context (match □ with σ1 x1 → C[x1] | σ2 x2 → σk ()), using the fact that the return type 1 + 1 is inhabited to build a dummy value for the unused branch (k ∈ {1, 2} is arbitrary).
Theorem 10 (Inconsistent canonicity). The invertible and saturation phases are purely type-directed, so f and f′ necessarily use the same term-formers during those phases. They could only differ on a neutral term, but they cannot contain a derivation of the form Γna, Γ′na ⊢s n ⇓ X− or Γna, Γ′na ⊢s p ⇑ P as, by Corollary 2 (Saturation), Γna, Γ′na would be saturated, which is incompatible with our assumption Γna ⊢ 0 and Lemma 5 (Saturated consistency).
Lemma 6 (Saturated inequivalence). By direct induction on t ≉icc t′. In particular, the two side-conditions we mentioned are direct consequences of the saturated term structure.
The fact that the x = n bindings are never redundant is a direct
consequence of the “new”-ness condition on saturated neutrals: n
cannot be added in Ξ in a future phase as it would not be new
anymore.
The fact that the context is consistent at the source of inequality is a consequence of Corollary 2 and Lemma 5 (Saturated
consistency).
Theorem 11 (Inequivalence in the model). We prove this by simultaneous induction on the inequivalence evidence for t0 ≉icc t′0, on Γna | Σp ⊢inv D0[Γ′na | Ξ ⊢ne p0 ≠ p′0 : P] : N | Qa, and on the sub-terms of the two terms t1, t′1, building inequivalence evidence for t1 ≉icc t′1. There are three interesting cases: when the current term-former introduces a variable (x : X+) in context (or at the root for the variables of Γna, Σp), the saturation step (where non-redundancy is checked), and when the source of inequality Γ′na | Ξ ⊢ne p ≠ p′ : P is finally encountered.

When we enter a binding x : X+, the corresponding subterms of t1, t′1 start with a large pattern-matching on the possible values of x : N(X+). We decide to take the branch corresponding to the case inFin(x), maintaining the inductive invariant that the sub-terms of t1, t′1 are equal to the sub-terms of t0, t′0 under the family of substitutions ([inFin(x)/x])x, for all variables x of atomic type introduced so far.

In a saturation step, we must check that the newly added bindings are not already present in the constrained environment Γna | Ξ. The neutrals introduced in t1 ≉icc t′1 are of the form n[inFin(x)/x]x, and this substitution preserves α-inequivalence, as it replaces distinct variables by distinct values.

When we reach the source of inequality Γ′na | Ξ ⊢ne p ≠ p′ : P, the derivation of spine inequivalence Γ′na ⊢ne p ≠ p′ ⇑ P either ends on a Γ′na ⊢ne σi p ≠ σj p′ ⇑ P with i ≠ j, in which case the corresponding neutral subterms of t1, t′1 are also of this form, or on Γ′na ⊢ne x ≠ y ⇑ X+. In this case the subterms of t1, t′1 are inFin(x), inFin(y), by our inductive invariant: they are also distinct positive neutrals, and form a source of inequality.

At the source of inequality, we also have to prove that N_{t0,t′0}(Γ′) ⊬ 0 from the hypothesis Γ′ ⊬ 0. Note that this could be wrong if we applied some other model M; for example, the context (x : X+, y : Y+ → 0) is consistent, but applying the model {X+ ↦ 1, Y+ ↦ 1} results in an inconsistent context.

Fortunately, the definition of the neutral model (Definition 12 (Neutral model)) is such that the positive atoms that become inhabited are exactly those that have bound variables in the context at the source of inequality Γ′na ⊢ne p ≠ p′ ⇑ P. And, by saturated consistency (Corollary 2 (Saturation)), we know that those positive atoms are exactly those that are provable in Γ′na.

We can thus prove N(Γ′na) ⊬ 0 by contradiction as follows. Suppose we have a proof of N(Γ′na) ⊢ 0. Any subproof of the form N(Γ′na) ⊢ N(X+), for one of the X+ inhabited in N, can be replaced by a subproof of the form N(Γ′na ⊢ X+), as we know that all such X+ are derivable in Γ′na. The transformed proof does not rely on the definition of the X+, so it is also a valid proof of Γ′na ⊢ 0, which is impossible.
Corollary 3 (Fun-elimination preserves semantic equivalence). This is immediate from Lemma 8 and the fact that the function-elimination transformation is a bijection on semantic values – Figure 3 (Fun-less data types).
Theorem 12 (Canonicity). We remark that p[ρ] ≉sem p′[ρ] is a trivial property if P is a closed type (no atoms X+): by inspecting the Γ ⊢ne p ≠ p′ ⇑ P judgment (or the strong positive neutral structure), we know that p and p′ differ on a constructor, and this property is preserved by any substitution.

From Γ ⊬ 0 we know that, for any A ∈ Γ, A is inhabited by some closed value – Corollary 1 (Inhabited or inconsistent). We can always pick an arbitrary closed value of type A.
We build ρ by peeling off the variables of the environment Γ, from the most recently introduced to the first introduced. At any point during our induction, we have processed a fragment ∆′ of the context, and ∆ remains unprocessed, with Γ = (∆, ∆′), and we have a substitution ρ that binds the variables of ∆′ to closed terms, such that Ξ[ρ] contains no inconsistent equation. Initially we have ∆ def= Γ, ∆′ def= ∅, ρ def= ∅.

When Γ is of the form Γ′, x : A1 + A2, we look for equations of the form σi xi = x in Ξ; by validity of Ξ (Definition 13), there is at most one. If there is none, we choose an arbitrary value for x in ρ; if there is one, we choose xi[ρ]; this is a closed term, as xi must be defined after x in the context.
When Γ is of the form Γ′, x : N, we look for the family of equations (yi = ni[x])i∈I – remember that we write n[x] when the neutral n has head x. Let us write Ξ[x] for this set of equations about x.

We define the value for x in ρ as Val(x : N | (yi = ni)i∈I), where Val(n : N | Ξ), defined below, is such that Ξ[Val(n : N | Ξ)/n] always holds, if all equations in Ξ are of the form y = m[n].

  Val(n : 1 | Ξ) def= ()
  Val(n : N1 × N2 | Ξ) def= (Val(π1 n : N1 | {(x = m[π1 n]) ∈ Ξ}), Val(π2 n : N2 | {(x = m[π2 n]) ∈ Ξ}))
  Val(n : ⟨P⟩− | {y = n}) def= y
  Val(n : ⟨P⟩− | ∅) def= arbitrary

From the validity of Ξ, we know that Ξ[x] contains at most one equation of the form (y : P) = n for each possible neutral ⊢ n ⇓ ⟨P⟩−, which justifies the only two cases in the definition of Val(n : ⟨P⟩− | Ξ) above.
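The Val construction transcribes directly into OCaml; the following sketch of ours tracks the neutral only through the equations that mention it, and the arbitrary-value case is a stub (justified by Corollary 1).

  type nty = NUnit | NProd of nty * nty | NShift     (* 1, N1 x N2, <P>- *)
  type term = TUnit | TPair of term * term | TVar of string
  (* equations about n: Left for pi1 n, Right for pi2 n, Here for n itself *)
  type eqn = Here of string | Left of eqn | Right of eqn

  let arbitrary (_ : nty) : term = TUnit   (* stub: some closed value exists *)

  let rec value_for (n : nty) (xi : eqn list) : term =
    match n with
    | NUnit -> TUnit
    | NProd (n1, n2) ->
      let left = List.filter_map (function Left e -> Some e | _ -> None) xi in
      let right = List.filter_map (function Right e -> Some e | _ -> None) xi in
      TPair (value_for n1 left, value_for n2 right)
    | NShift ->
      (match xi with
       | [Here y] -> TVar y                (* the unique equation y = n *)
       | [] -> arbitrary n
       | _ -> invalid_arg "validity of Xi: at most one equation per neutral")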
Corollary 4 (Canonicity). We use a stronger induction hypothesis on the inequivalence judgment: if

  Γ | Ξ ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P] : N | Pa

then there is a substitution ρ : (Γ | Ξ) and a context C[] such that

  ∅ ⊢ C[t[ρ]] ≉β C[t′[ρ]] : 1 + 1

where t def= D[p] and t′ def= D[p′].

In the base case D = □, we get a substitution ρ : (Γ′ | Ξ′) from Theorem 12 (Canonicity); the theorem also gives us p[ρ] ≉sem p′[ρ], so there is a distinguishing context C as required, as contextual and semantic equivalence coincide.

In the sum-elimination case

  Γ, x : P1 + P2, xi : Pi | Ξ, σi xi = x ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] : N | Qa
  ───────────────────────────────────────────────────────────────────────────
  Γ, x : P1 + P2 | Ξ ⊢inv (match x with σi xi → D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P′] | σj xj≠i → t) : N | Qa

the substitution ρ verifies, by inductive hypothesis, that σi xi[ρ] = x[ρ], so in particular the sum-elimination terms, once ρ is applied, will reduce to (D[p])[ρ] and (D[p′])[ρ] respectively, as desired: this means that the distinguishing context for the premise is also a distinguishing context for the sum-elimination terms.

In the (left) pair case

  Γ | Ξ ⊢inv D[Γ′ | Ξ′ ⊢ne p ≠ p′ : P] : N1 | ∅
  ─────────────────────────────────────────────
  Γ | Ξ ⊢inv (D, u)[Γ′ | Ξ′ ⊢ne p ≠ p′ : P] : N1 × N2 | ∅

we can obviously keep the same ρ, and build the distinguishing context C[π1 □], where C is the distinguishing context for the inductive premise.

From this stronger inductive hypothesis (ρ, C), we can build a distinguishing context for D[p] ≉ctx D[p′] as

  C[(λ(x1 . . . xn ∈ Γ). □) (x1[ρ]) . . . (xn[ρ])]
Theorem 13 (Saturated terms are canonical). Let t0 def= t and t′0 def= t′.

By Lemma 6 (Saturated inequivalence) we have Σp | ∅ ⊢inv D0[Γ′na | Ξ ⊢ne p0 ≠ p′0 : P] : N | Qa with t0 ≈icc D0[p0] and t′0 ≈icc D0[p′0], that is, (t0 ≉icc t′0) ⇝ D0[p0 ≠ p′0].

We consider the neutral model N_{t0,t′0} defined in Definition 12 (Neutral model). Let t1 def= N(t0) and t′1 def= N(t′0).

By Theorem 11 (Inequivalence in the model) we have (t1 ≉icc t′1) ⇝ D1[p1 ≠ p′1], that is, an inequivalence of terms at the atom-less judgment ∅; N(Σp) ⊢inv t1 ≉icc t′1 : N(N) | N(Qa).

Using the focused function-elimination transformation of Figure 9 (Fun-less focused forms), let us define t2 def= bt1c and t′2 def= bt′1c. By Lemma 7 (Function elimination preserves inequivalence) we get (t2 ≉icc t′2) ⇝ D2[p2 ≠ p′2], that is, a saturated inequivalence at closed types without functions.

From Corollary 4 (Canonicity), we finally get D2[p2] ≉ctx D2[p′2], that is, t2 ≉ctx t′2, a contextual inequivalence at closed types without functions. As contextual and semantic equivalence coincide at closed types, this is equivalent to Jt2K ≠ Jt′2K, that is, Jbt1cK ≠ Jbt′1cK.

By Corollary 3 (Fun-elimination preserves semantic equivalence), we thus have Jt1K ≠ Jt′1K at closed types, that is, Jt0KN ≠ Jt′0KN, that is, t0 ≉sem(N) t′0, that is, t0 ≉sem t′0, or equivalently t0 ≉ctx t′0.
Corollary 5 (Contextual equivalence implies equality of saturated forms). By contrapositive, we show that u ≉icc u′ implies t ≉ctx t′. If we had u ≉icc u′, we would have bucfoc ≉ctx bu′cfoc by Theorem 13 (Saturated terms are canonical). As βη-equivalence implies contextual equivalence, we have t ≈ctx bucfoc and t′ ≈ctx bu′cfoc, so t ≉ctx t′ by transitivity.
Corollary 6. We know from Section 2 (Equivalences in the full λ-calculus) that βη-equivalence implies contextual equivalence. Conversely, we suppose t ≈ctx t′ and prove t ≈βη t′.

By Theorem 7 (Completeness of focusing), there exist focused forms u, u′ that erase to t, t′ (respectively) modulo βη; we can even pick them in the subset with positive atoms only.

By Theorem 8 (Completeness of saturation), there exist saturated forms r, r′ that have the same erasures as u, u′ (respectively) modulo βη, that is, they erase to t, t′ modulo βη.

By Corollary 5 (Contextual equivalence implies equality of saturated forms), we have r ≈icc r′. As the invertible commuting conversions are sound with respect to βη-equality, we have brcfoc ≈βη br′cfoc, that is, t ≈βη t′.

Corollary 7. To test whether two terms are equivalent, we invoke the focusing completeness result, then the saturated completeness result, and check whether the normal forms coincide modulo (≈icc). These two transformations are computable and (≈icc) is decidable, so equivalence is decidable.

Corollary 8. Our notion of semantic equivalence coincides with βη- and contextual equivalence, and only distinguishes terms using sets isomorphic to closed types, which are all finitely inhabited.
Technical Report about Tiramisu: a Three-Layered
Abstraction for Hiding Hardware Complexity from
DSL Compilers
arXiv:1803.00419v2 [] 7 Mar 2018
Riyadh Baghdadi
MIT
baghdadi@mit.edu
Jessica Ray
MIT
jray@csail.mit.edu
Malek Ben Romdhane
MIT
malek@mit.edu
Emanuele Del Sozzo
Politecnico di Milano
emanuele.delsozzo@polimi.it
Patricia Suriana
Google
psuriana@google.com
Shoaib Kamil
Adobe
kamil@adobe.com
Saman Amarasinghe
MIT
saman@csail.mit.edu
Abstract
High-performance DSL developers work hard to take advantage of modern hardware. The DSL compilers have to
build their own complex middle-ends before they can target a common back-end such as LLVM, which only handles
single instruction streams with SIMD instructions. We introduce Tiramisu, a common middle-end that can generate
efficient code for modern processors and accelerators such as
multicores, GPUs, FPGAs and distributed clusters. Tiramisu
introduces a novel three-level IR that separates the algorithm, how that algorithm is executed, and where intermediate data are stored. This separation simplifies optimization
and makes targeting multiple hardware architectures from
the same algorithm easier. As a result, DSL compilers can
be made considerably less complex with no loss of performance while immediately targeting multiple hardware or
hardware combinations such as distributed nodes with both
CPUs and GPUs. We evaluated Tiramisu by creating a new
middle-end for the Halide and Julia compilers. We show that
Tiramisu extends Halide and Julia with many new capabilities including the ability to: express new algorithms (such
as recurrent filters and non-rectangular iteration spaces),
perform new complex loop nest transformations (such as
wavefront parallelization, loop shifting and loop fusion) and
generate efficient code for more architectures (such as combinations of distributed clusters, multicores, GPUs and FPGAs).
Finally, we demonstrate that Tiramisu can generate very
efficient code that matches the highly optimized Intel MKL
gemm (generalized matrix multiplication) implementation.
We show that Tiramisu can generate code that outperforms
Intel MKL DNN convolutions and show speedups reaching
4× over the original Halide and 16× over the original Julia
due to optimizations enabled by Tiramisu.
Keywords: code optimization framework, intermediate representation, domain specific language, polyhedral compilation, iteration space transformation

1 Introduction
With a diverse set of high-performance hardware platforms
available, the development of a credible high-performance
Domain Specific Language (DSL) compiler requires cross
platform code generation [1, 2, 7, 14, 45, 49, 50]. Many of
these DSL compilers target intermediate representations (IRs)
such as the LLVM IR [36], a low-level representation that
models an abstract, RISC-like CPU with infinite registers.
However, before generating the low-level LLVM IR, the DSL
compilers must create their own middle-end passes that
transform their own architecture-independent high-level
IR to take advantage of target hardware platforms such as
multicores, distributed clusters, GPUs, and FPGAs, each with
very different requirements.
These platform specific transformations cannot be done in
the LLVM IR, as it encodes memory accesses with a concrete
memory layout, which reduces optimization opportunities.
This also adds unnecessary dependences such as anti- and
output-dependences, forcing the compiler to modify data
layout in order to perform certain optimizations. For example,
compilers privatize [26, 38, 56] and expand [22, 43, 44] arrays
and scalars in order to parallelize loop nests; subsequently,
the arrays are contracted [19, 37, 48] to minimize the memory
footprint.
This paper presents Tiramisu, a compiler middle-end that
hides the complexity of the large variety of execution platforms and reduces the burden on DSL creators by providing
a multi-layer representation suitable for transforming from
the high-level, domain-specific representation of the DSL to
GPUs, multicore CPUs, distributed machines, and FPGAs.
The novel multi-layer design exposes three different representations that make it easier to perform each program
transformation at the right level of abstraction.
The top layer representation (abstract computation layer)
describes the pure algorithm using producer-consumer relationships without memory locations. The second layer
(computation placement layer) specifies the order of the computations, along with which processor computes each value;
this layer is suitable for performing a vast number of optimizations without dealing with concrete memory layouts.
Finally, the third layer (concrete computation layer) specifies
where to store intermediate data before they are consumed.
Tiramisu then lowers the third layer to a target architecture
while performing back-end-specific code optimizations.
Tiramisu does not automatically parallelize or perform
other transformation decisions; the goal of the middle-end
framework is to provide mechanisms and not policy, which is
the responsibility of the DSL compiler or the application programmer as in Halide [49]. Instead, Tiramisu concentrates
on providing scheduling primitives that can be used to transform and map the programs to the underlying hardware. This
frees the DSL designers from needless bookkeeping, analysis,
and redundant implementation of code transformations and
provides multiple architecture-specific code generation.
Tiramisu uses a unified framework based on polyhedral
sets to represent the three layers. This makes it simple for
Tiramisu to reason about and implement iteration space and
data layout transformations, since these are represented as
transformations on polyhedral sets. It also simplifies deciding the legality of transformations (based on dependence analysis). The use of a unified general framework relieves compiler developers from developing custom algorithms and analyses that only work in certain situations and cannot be composed with other optimization passes.
This paper makes the following contributions:

• We show that a unified middle-end for DSL compilers can generate code for multiple high-performance architectures such as multicore CPUs, GPUs, FPGAs, distributed machines, or any combination of these, using a set of simple scheduling commands to guide the program transformation and code generation.

• We present the first compiler framework that matches the performance of the highly optimized Intel MKL gemm (comparing a single gemm kernel from MKL to a single gemm kernel generated by Tiramisu). To the best of our knowledge, this is the first compiler framework that can demonstrate such performance. Former frameworks could outperform Intel MKL, but by fusing multiple kernels, since a library such as Intel MKL cannot fuse kernels; Tiramisu can also do that to get even better performance.

• We introduce a novel three-layer intermediate representation for the middle-end that separates the algorithm from the code transformations and the data layout transformations, simplifying the composition of architecture-specific lowering transformations.

• We implemented this middle-end, Tiramisu, and demonstrate its power and viability by using it as the middle-end for the Halide [49, 50] and Julia [7] compilers.

• We demonstrate the expressiveness of Tiramisu by extending Halide with many new capabilities such as tiling recurrent filters, performing precise bound inference for non-rectangular iteration spaces, and performing advanced loop nest transformations such as skewing, shifting, and loop fusion.

• We evaluate Tiramisu and show that we match Intel MKL gemm, and show up to 4× performance gains in Halide and 16× in Julia.

2 Middle-End Challenges

Building a compiler middle-end requires overcoming many challenges. The middle-end must support a large enough class of computations to be usable by multiple DSLs. The middle-end must also produce code for most of the popular high-performance computing systems, including multicore CPUs, clusters of compute nodes, and accelerators such as GPUs and FPGAs in many combinations. Finally, the middle-end has to be able to perform heroic program and data transformations and compose them together to generate highly efficient code. These transformations must be orchestrated and tailored to the application as well as the underlying execution hardware.

2.1 The Data Dependence Challenge

Most IRs use memory as a means of communication between program statements. This creates memory-based dependences in the program. It also means that the data layout is chosen before optimizing the code. Optimizing a program for different hardware architectures usually requires modifying the data layout (since different data layouts are suitable for different hardware architectures) and requires eliminating memory-based dependences, since they restrict optimizations [42].

Consider the simple image processing pipeline in Figure 1-(a) (left), which has not been optimized yet. In order to optimize it for a multicore machine, we can parallelize the outermost loop (over the i iterator). To do so, we must first expand the two-dimensional arrays b1 and b2 into three-dimensional arrays. In addition to the parallelization, data locality can be improved by fusing the brightening and clamp stages. After fusion, we can replace the b1 array accesses with accesses to a scalar. After applying these optimizations, we get the code shown in Figure 1-(b) (left). Assuming now that the target architecture is multi-GPU instead of CPU, changing from an Array-of-Structures (AoS) layout to a Structure-of-Arrays
(SoA) memory layout may lead to better performance. In order to maximize parallelism on GPU, we apply loop fission to
the loop over b2 and the loop over out. The result of applying
these optimizations is shown in Figure 1-(c) (left).
Applying the previous data-layout transformations and
the elimination of memory-based dependencies is, in general, challenging [19, 22, 26, 37, 38, 43, 44, 48, 56]. However,
if the program dependencies are captured using producerconsumer relationships and the data-layout is not specified
early on, all program transformations can be performed without the complexity of memory layout transformations. Most
modern DSL IRs capture the dependencies with producerconsumer relationships. Thus, the middle-end compiler can
implement program transformations using the producerconsumer relationships first and then introduce memory
layouts as a further lowering step. This requires carefully
designing the middle-end IR in multiple layers. We show
that a three-layer IR design can overcome these difficulties.
2.2 The Program Representation Challenge

Lowering code to complex hardware architectures requires many transformations such as loop interchange, blocking, skewing, fission, fusion, and distribution. These transformations change the loop structure in complex ways by introducing new loops, complex loop bounds, and non-trivial access functions [59]. Analyzing code generated by one of these transformations is challenging, which complicates composition with other transformations. This problem can be solved if the loops are kept within a single unified representation through all the transformations. However, many representations are inadequate or too conservative to support complex transformations. They are also inadequate for performing tasks such as dependence analysis (which is necessary for deciding the correctness of optimizations) and the computation of communication sets in the general case. For example, the interval-based representation used in Halide [49] is unable to support accurate bounds computation for non-rectangular iteration spaces. It also cannot represent programs that have cyclic dependence graphs, and does not naturally support complex transformations such as loop skewing (wavefront parallelism). We show that a polyhedral-based representation is more flexible, more powerful, and capable of supporting all transformations necessary in the middle-end.

2.3 The Optimization Orchestration Challenge

Producing high-performance code for a computer system from an architecture-independent algorithm requires a series of program transformations. These transformations are non-trivial and highly dependent on the program structure, input data, and the target architecture. Parallelizing compilers have worked to fully automate this process using cost models, heuristics [28], and machine learning [54]. However, obtaining performance comparable to hand-coded implementations using such fully automated approaches is still a challenging problem. A more promising approach is to expose optimizations to the user by providing a set of optimization directives. Languages such as Halide [13, 49] have shown that this approach is a viable one. However, in order to expand this to a common middle-end, the scheduling language must encompass a much broader set of transformations and seamlessly compose these transformations. Figures 1-(b) (Schedule) and 1-(c) (Schedule) show how a simple collection of scheduling commands can map the architecture-independent program into different architectural configurations.

2.4 The MPI+OpenMP+CUDA Challenge

Most high-performance computer systems are complex and heterogeneous. A typical supercomputer has multiple interconnected nodes equipped with multicore CPU processors with vector units connected using NUMA shared memory. They may also have multiple GPU accelerators per node [39]. Recently, data centers have been adding FPGA accelerators to this mix [10]. Getting the best performance requires the program to take full advantage of all these different components. In the supercomputing community this is usually referred to as the MPI+OpenMP+CUDA challenge [60]. Writing code that targets such heterogeneous systems is non-trivial, as each of these requires drastically different styles of code and optimizations, all using different libraries and languages. Getting all of these components to communicate and synchronize is non-trivial as well. The state-of-the-art practice is to manually write the program in separate language extensions. However, any change to the program partitioning between these heterogeneous units will normally require a complete rewrite of the program. With Tiramisu, it is possible to generate code for these complex computer systems in a simple, flexible and malleable way. Figure 1-(c) (left) shows an example where the algorithm is mapped to a GPU cluster using simple scheduling commands and without the need to change the algorithm or to write code in different languages and libraries.

3 Tiramisu Overview

Tiramisu is a middle-end compiler framework for DSLs. Using this framework, DSLs can transform their architecture-independent IRs into architecture-specific, low-level IRs while taking advantage of modern architectural features such as multicore parallelism, non-uniform memory (NUMA) hierarchies, clusters, and accelerators like GPUs and FPGAs. Tiramisu is designed for DSLs that logically operate over dense data using loop nests and sequences of statements, which is the case of DSLs for image processing, dense linear algebra, and stencil computations, among others.

3.1 The Three-Layer Intermediate Representation

Tiramisu uses polyhedral sets to represent each one of the three IR layers and uses polyhedral set and relation operations to represent transformations on the iteration domain and data layout.
Figure 1. Three versions of the motivating example (left) and their equivalent Layer I, II and III representations (right).

Constraints: Cn : 0 ≤ i < N, Cm : 0 ≤ j < M, Cm′ : 1 ≤ j < M − 1, Ck : 0 ≤ c < 3, Cq : 0 ≤ q < NUM_NODES

Different code optimizations:

(a) Original unoptimized code:

  for (i in 0..N)
    for (j in 0..M)
      for (c in 0..3)
        b1[j][c] = 1.5*img[i][j][c] // brightening
    for (j in 0..M)
      for (c in 0..3)
        b2[j][c] = clamp(b1[j][c], 0, 255)
    for (j in 1..M-1)
      for (c in 0..3)
        out[i][j][c] = (b2[j-1][c] + b2[j][c] + b2[j+1][c])/3

(b) Code optimized for CPU:

  parallel for (i in 0..N)
    for (j in 0..M)
      for (c in 0..3)
        float t = 1.5*img[i][j][c]
        b2[i][j][c] = clamp(t, 0, 255)
    for (j in 1..M-1)
      for (c in 0..3)
        out[i][j][c] = (b2[i][j-1][c] + b2[i][j][c] + b2[i][j+1][c])/3

(c) Code optimized for multi-GPU:

  distributed for (q in 0..NUM_NODES)
    gpu for (i in 0..N/NUM_NODES)
      gpu for (j in 0..M)
        for (c in 0..3)
          float t = 1.5*img[i][j][c]
          b2[c][i][j] = clamp(t, 0, 255)
  distributed for (q in 0..NUM_NODES)
    gpu for (i in 0..N/NUM_NODES)
      gpu for (j in 1..M-1)
        for (c in 0..3)
          out[i][j][c] = (b2[c][i][j-1] + b2[c][i][j] + b2[c][i][j+1])/3

Tiramisu representation (Layer I, Layer II and Layer III); the constraints Cn, Cm and Ck are defined above.

Layer I (shared by all versions):

  {b1(i, j, c) : Cn ∧ Cm ∧ Ck} : 1.5 ∗ img(i, j, c)
  {b2(i, j, c) : Cn ∧ Cm ∧ Ck} : clamp(b1(i, j, c), 0, 255)
  {out(i, j, c) : Cn ∧ Cm′ ∧ Ck} : (b2(i, j−1, c) + b2(i, j, c) + b2(i, j+1, c))/3

Schedule for (b):

  b2.after(b1, c)
  b1.parallel(i); b2.parallel(i); out.parallel(i)
  b1.store_in(t); b2.store_in(bufb2[i, j, c]); out.store_in(bufout[i, j, c])

Layer II for (b), generated from Layer I using the schedule:

  {b1(i(cpu), 0, j, c, 0) : Cn ∧ Cm ∧ Ck} : 1.5 ∗ img(i, j, c)
  {b2(i(cpu), 0, j, c, 1) : Cn ∧ Cm ∧ Ck} : clamp(b1(i, 0, j, c, 0), 0, 255)
  {out(i(cpu), 1, j, c, 0) : Cn ∧ Cm′ ∧ Ck} : (b2(i, 0, j−1, c, 1) + b2(i, 0, j, c, 1) + b2(i, 0, j+1, c, 1))/3

Layer III for (b) = the Layer II representation plus the following data mapping:

  {b1(i(cpu), 0, j, c, 0) → t : Cn ∧ Cm ∧ Ck}
  {b2(i(cpu), 0, j, c, 1) → bufb2[i, j, c] : Cn ∧ Cm ∧ Ck}
  {out(i(cpu), 1, j, c, 0) → bufout[i, j, c] : Cn ∧ Cm′ ∧ Ck}

Schedule for (c):

  b2.after(b1, c); out.after(b2, root)
  b1.split(i, N/NUM_NODES, q, i); b2.split(i, N/NUM_NODES, q, i); out.split(i, N/NUM_NODES, q, i)
  b1.store_in(t); b2.store_in(bufb2[c, i, j]); out.store_in(bufout[i, j, c])
  b1.gpu(i, j); b2.gpu(i, j); out.gpu(i, j)
  b1.distribute(q); b2.distribute(q); out.distribute(q)

Layer II for (c), generated from Layer I using the schedule:

  {b1(0, q(node), i(gpu), j(gpu), c, 0) : Cq ∧ Cn ∧ Cm ∧ Ck} : 1.5 ∗ img(i, j, c)
  {b2(0, q(node), i(gpu), j(gpu), c, 1) : Cq ∧ Cn ∧ Cm ∧ Ck} : clamp(b1(0, q, i, j, c, 0), 0, 255)
  {out(1, q(node), i(gpu), j(gpu), c, 0) : Cq ∧ Cn ∧ Cm′ ∧ Ck} : (b2(0, q, i, j−1, c, 1) + b2(0, q, i, j, c, 1) + b2(0, q, i, j+1, c, 1))/3

Layer III for (c): same as Layer III in (b), except that the mappings of b1 and b2 are replaced with:

  {b1(0, q(node), i(gpu), j(gpu), c, 0) → t : Cq ∧ Cn ∧ Cm ∧ Ck}
  {b2(0, q(node), i(gpu), j(gpu), c, 1) → bufb2[c, i, j] : Cq ∧ Cn ∧ Cm ∧ Ck}
Polyhedral sets and relations are described using
affine (linear) constraints over loop iterators and program
parameters (invariants) and are implemented in Tiramisu
using ISL [58]. We use a combination of classical extensions
to the polyhedral model in order to support non-affine iteration spaces; these extensions are sufficient and practical for
large classes of programs [4, 5].
A typical workflow for using Tiramisu is illustrated in
Figure 2. DSL compilers parse input programs and perform
domain specific optimizations before translating the DSL
program into Layer I of the Tiramisu IR. The first layer of
the IR is then transformed to lower layers (Layer II and Layer
III), and finally LLVM or other low-level IR is generated.
The three layers of the Tiramisu IR are:

Layer I (Abstract Computation Layer), which specifies the algorithm without specifying the schedule (when and where the computations occur) or how data should be stored in memory (data layout). As there is no notion of data location, values are communicated via explicit producer-consumer relationships.

Layer II (Computation Placement Layer), which specifies the order of execution of computations and the processor on which they execute. This layer does not specify how intermediate values are stored in memory; this simplifies optimization passes since these passes do not need to perform complicated data-layout transformations. The transformation of Layer I into Layer II is done automatically using scheduling and data layout commands. Examples of the scheduling commands supported in Tiramisu are presented in Table 1.

Layer III (Concrete Computation Layer), which makes the data layout concrete, specifying where intermediate values are stored.

The separation into levels does not force data-layout mapping to occur after scheduling; in Tiramisu, the user can still specify data layout before scheduling (to constrain scheduling, for example). The separation ensures that the scheduling phase can safely assume no data-layout transformations are required, greatly simplifying scheduling transformations. If a user requests a transformation that cannot be performed with the specified data layout, Tiramisu will prevent the illegal transformation from occurring, ensuring correctness.

In the following section, we provide more details about the three layers of Tiramisu.
Table 1. Examples of Tiramisu Scheduling Commands

Commands to transform Layer I into Layer II (we assume that C and P are computations):

    C.interchange(i, j): Interchange the dimensions i and j of C (loop interchange)
    C.shift(i, s): Loop shifting (shift the dimension i by s iterations)
    C.split(i, s, i0, i1): Split the dimension i by s; (i0, i1) are the new dimensions
    C.tile(i,j,t1,t2,i0,j0,i1,j1): Tile the dimensions (i, j) of the computation C by t1 x t2; the names of the new dimensions are (i0, j0, i1, j1)
    P.compute_at(C, j): Compute the computation P in the loop nest of C at loop level j (this might introduce redundant computations)
    C.vectorize(i, v): Vectorize the dimension i by a vector size v
    C.unroll(i, v): Unroll the dimension i by a factor v
    C.parallelize(i): Mark the dimension i as a space dimension (cpu)
    C.distribute(i): Mark the dimension i as a space dimension (node)
    C.after(B, i): Indicate that C should be ordered after B at the loop level i (they have the same order in all the loop levels above i)
    C.inline(): Inline C in all of its consumers
    C.set_schedule(): Set the schedule of C, i.e., a map that transforms Layer I to Layer II
    C.gpu(i0, i1, i2): Mark the dimensions i0, i1 and i2 to be executed on the GPU
    C.fpga(): Generate HLS code for the computation C
    C.pipeline(i): Mark the dimension i to be pipelined (FPGA)

Commands to add data mapping to Layer III:

    Buffer b(...): Declare a buffer b (size, type, ...)
    C.store_in(buff[i0,...]): Store the result of the computation C(i0, ...) in buff[i0, ...]
    C.auto_allocate_map(): Allocate a buffer for C and map C to it
    C.set_access(): Map C to a buffer access (set the access relation for the computation C)
    C.storage_fold(i, d): Contract the dimension i of the buffer associated to C to make its size d
    create_transfer(...): Create a pair of send & receive communication statements
    C.partition(b, type): Mark the buffer b to be partitioned in a complete, cyclic or block way (FPGA)

Figure 2. Tiramisu overview. DSL compilers apply domain-specific optimizations and emit Layer 1 (Abstract Computation Layer); automatic or user-specified schedules produce Layer 2 (Computation Placement Layer); automatic or user-specified data mapping produces Layer 3 (Concrete Computation Layer); code generation then lowers an abstract syntax tree to the backends (vectorized parallel X86, GPU (Nvidia), FPGA (Xilinx), distributed X86, and distributed GPU), providing portable performance across a range of platforms.
4 The Tiramisu IR

The input to Tiramisu is the Layer I computations and a set of scheduling and data layout commands. Layer II is generated by applying the schedule to Layer I. Commands for buffer allocation, data layout mapping, and communication (between CPU nodes for example) are then added to the Layer II representation; the result constitutes Layer III. An annotated abstract syntax tree (AST) is then generated from Layer III. This AST is traversed to generate the target code. In this section, we describe in detail the three representations used in Tiramisu. We also describe scheduling via high-level scheduling commands as well as low-level scheduling maps. We begin by showing an example.

4.1 An Example in the Three-Layer IR

We first provide an overview of the concepts of polyhedral sets and maps. More details and a formal definition of these concepts are provided in the Appendix.

An integer set is a set of integer tuples described using affine constraints. An example of a set of integer tuples is {(1, 1); (2, 1); (3, 1); (1, 2); (2, 2); (3, 2)}. Instead of listing all tuples in a set, we describe the set using affine constraints over loop iterators and symbolic constants: {S(i, j) : 1 <= i <= 3 ∧ 1 <= j <= 2}, where i and j are the dimensions of the set. A map is a relation between two integer sets. For example, {S1(i, j) -> S2(i + 2, j + 2) : 1 <= i <= 3 ∧ 1 <= j <= 2} is a map between tuples in the set S1 and tuples in the set S2 (e.g., the tuple S1(i, j) maps to the tuple S2(i + 2, j + 2)). We use the Integer Set Library (ISL) [58] notation for sets and maps.

Figure 1 shows the code for each optimized implementation discussed in the previous section. The original, unoptimized code is shown in Figure 1-(a), with the right side showing the Layer I representation. This Layer I representation is the same for all the code variants, as this layer specifies the computation in a high-level form separate from scheduling.

Each line in Layer I of Figure 1-(a) (right side in the figure) corresponds to a statement in the algorithm (left side of the figure): for example, the first line of Layer I represents line 5 in Figure 1-(a). The first part of that line (in which the constraints Cn, Cm and Ck have been expanded inline), which is

    {b1(i,j,c): 0<=i<N and 0<=j<M and 0<=c<3}

specifies the iteration domain of the statement, while the second part, 1.5 * img(i, j, c), is the computed expression. The iteration domain is the set of tuples b1(i, j, c) such that 0 <= i < N ∧ 0 <= j < M ∧ 0 <= c < 3. Computations in Layer I are not ordered. The declaration order does not affect their order of execution, which is specified in Layer II.
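To make the set and map notation concrete, the following small sketch (ours, written in plain C++ rather than through ISL's API) enumerates the set {S1(i,j) : 1 <= i <= 3 and 1 <= j <= 2} and applies the map above to each of its tuples:

    #include <cstdio>

    // Enumerate {S1(i,j) : 1<=i<=3 and 1<=j<=2} and apply the map
    // {S1(i,j) -> S2(i+2,j+2)} to every tuple in the set.
    int main() {
      for (int i = 1; i <= 3; i++)
        for (int j = 1; j <= 2; j++)
          std::printf("S1(%d,%d) -> S2(%d,%d)\n", i, j, i + 2, j + 2);
      return 0;
    }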
Figure 1-(b) shows the first optimized version of the code. The schedule on the right side is the set of scheduling and data layout commands that produce this version of the code. The scheduling commands are presented in Table 1. Layer II is generated automatically by applying these commands to Layer I. Tiramisu provides a large set of high-level scheduling and data layout transformation commands. The Layer II representation is also shown in Figure 1-(b). Computations in Layer II are ordered based on their lexicographical order (for example, the computation S0(0, 0, 0) is lexicographically before the computation S0(0, 0, 1), and the computations S0(0, i, 0) are lexicographically before the computations S0(1, i, 0)).

The set

    {b1(i(cpu),0,j,c,0): 0<=i<N and 0<=j<M and 0<=c<3}

in the example is an ordered set of computations. The tag (cpu) for the i dimension indicates that this dimension is a space dimension and that each i-th iteration is mapped to the i-th CPU. In Layer II, the computation order is controlled by a total ordering of these tuples.

Layer III in Figure 1-(b) adds data layout mapping to Layer II, concretizing where each computation is stored (memory buffers and scalars are also declared in this layer). In the example, the data mapping

    {b2(i(cpu),0,j,c,1) -> buf_b2[i,j,c]: 0<=i<N and 0<=j<M and 0<=c<3}

indicates that the result of the computation b2(i(cpu), 0, j, c, 1) is stored in the array element buf_b2[i, j, c]. Data mapping in Tiramisu is an affine relation that maps a computation from Layer II to a buffer element; scalars are single-element buffers. Tiramisu allows the expression of any data-layout mapping that can be expressed as an affine relation (examples are provided in Section 12.3). For brevity, the declaration of buffers, their types, and their allocation (including when and where they are allocated) are all omitted from the example, but such information must be specified for correct code generation.
4.2 Layer I: Abstract Computation Layer

The first layer defines abstract computations, which are not yet scheduled or mapped to memory. Each computation represents an expression that should be computed. As an example, the following code

    for (i in 0..4)
      for (j in 0..4)
        if (i < j && i != 2)
          A[i][j] = cos(i);

can be represented in Layer I as

    {A(i,j): 0<=i<4 and 0<=j<4 and i<j and i!=2}: cos(i)

though it is important to remember that this representation, unlike the pseudocode above, does not necessarily store results to memory locations. A(i, j) is the computation, while the constraints over i and j define the iteration domain. The second part, cos(i), is the computed expression.

Computations in Layer I are in Static Single Assignment (SSA) form [18]; each computation is defined only once (we use the ϕ operator to deal with branches as in classical SSA).

Reductions and Updates. Reductions and updates do not fit naturally in the memory-independent model used within Tiramisu, and thus we treat them as a special case. To implement algorithms that perform a reduction or update a variable (a histogram for example), we declare a new computation for each update. These computations will all be mapped to the same buffer in Layer III. For example, a dense matrix multiplication, which has a reduction, is represented in Layer I as follows:

    {c0(i,j): 0<=i<N and 0<=j<N}: 0
    {c1(i,j,k): 0<=i<N and 0<=j<N and 1<=k<N}:
        ϕ(c0(i,j), c1(i,j,k-1)) + A(i,k) * B(k,j)

Since c1(i, j, k) needs to read the results of the computations c0(i, j) and c1(i, j, k - 1), we use the ϕ node to merge them into one expression ϕ(c0(i, j), c1(i, j, k - 1)) (although the use of ϕ nodes in this case can be avoided, in the general case ϕ nodes are necessary to support cases such as the definition of computations within data-dependent conditions).

Support for Non-Static-Affine Iteration Spaces. Tiramisu can represent non-static-affine code. In particular, Tiramisu can represent non-static-affine array accesses, while loops, non-static-affine loop bounds, and non-static-affine conditionals. Tiramisu treats any non-static-affine conditional in a way similar to [4]: the conditional is represented as a single macro-statement together with its body (i.e., as a statement encapsulating both the control and the body). while loops and loops with non-static-affine bounds are handled in a way similar to [6].
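To connect this back to imperative code: once c0 and all the c1 updates are mapped to the same buffer in Layer III, the computations above realize the familiar accumulating loop nest. A minimal sketch (our illustration; the buffer names and row-major layout are assumptions):

    // The c0/c1 computations above, with both mapped to one buffer C
    // (N x N, row-major). The k loop starts at 1, mirroring the
    // iteration domain 1 <= k < N of c1.
    void matmul(const float *A, const float *B, float *C, int N) {
      for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
          C[i * N + j] = 0.0f;                        // c0(i,j) : 0
          for (int k = 1; k < N; k++)                 // c1(i,j,k)
            C[i * N + j] += A[i * N + k] * B[k * N + j];
        }
    }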
4.3 Layer II: Computation Placement Layer

The computation placement layer describes when and where each computation is computed. Unlike computations in the first layer, computations in this layer are ordered (specifying when) and are assigned to a particular processor (specifying where). This order is dictated by space dimensions and time dimensions. Space dimensions specify on which processor computations should be executed; such dimensions are not relevant for determining the order of execution. On the other hand, time dimensions specify the order of execution relative to other computations. The order of execution of computations is determined by the lexicographical ordering of the dimensions. Space dimensions are distinguished from time dimensions using tags, which consist of a processor type followed by zero or more properties. Currently, Tiramisu supports the following space tags:

    cpu: the dimension runs on a CPU in a shared memory system
    node: the dimension maps to nodes in a distributed system
    gpu_thread_X: the dimension runs on a GPU thread (dimension X,
        where X = 0 for the outermost and 2 for the innermost);
        similar tags are used for blocks

Tagging a dimension with a processor type indicates that the dimension should be distributed over processors of that type in a system; for example, tagging a dimension with cpu will execute each iteration in that dimension on a separate CPU.

In addition to the processor type, tags can optionally include one of the following dimension properties:

    vec(s): vectorize the dimension (s is the vector length)
    unroll: unroll the dimension
    pipeline: pipeline the dimension (FPGA only)

Computations mapped to the same processor are ordered by projecting the computation set onto the time dimensions and comparing their lexicographical order, without considering the name of the computation, since all computations in this layer are in the same time-space domain.
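As an illustration of this projection-and-compare step (a sketch of ours, not Tiramisu's internal implementation), comparing two computations' time-dimension vectors lexicographically looks like the following:

    #include <vector>

    // Returns true if the computation with time dimensions a executes
    // before the one with time dimensions b; space dimensions are
    // assumed to have been projected out already.
    bool executes_before(const std::vector<long> &a,
                         const std::vector<long> &b) {
      for (size_t d = 0; d < a.size() && d < b.size(); d++) {
        if (a[d] < b[d]) return true;   // a comes first at this level
        if (a[d] > b[d]) return false;  // b comes first at this level
      }
      return a.size() < b.size();
    }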
4.4 Layer III: Concrete Computation Layer
The concrete computation layer specifies memory locations
for storing computed values. It consists of the Layer II representation along with allocation/deallocation statements, and
a set of access relations, which map computations from Layer
II to array elements read or written by those computations.
Scalars are treated as single-element arrays. For each buffer,
an allocation statement is created, specifying the type of
the buffer (or scalar) and its size, and is scheduled by being
mapped to the time-space domain. Similarly, a deallocation
statement is also added.
Possible data mappings in Tiramisu include mapping
computations to structures-of-arrays, arrays-of-structures,
and contraction of multidimensional arrays into arrays with
fewer dimensions or into scalars. It is also possible to specify
more complicated accesses such as the storage of computations c (i, j) into the array elements c (i%2, j%2) or into c (j, i).
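For instance, the c(i%2, j%2) mapping mentioned above contracts storage to a 2x2 buffer, so only the most recently produced rows and columns of results stay live. As a sketch (the function and buffer names are ours, for illustration only), such an access relation reduces to a simple index computation:

    // Sketch of the modulo data mapping {c(i,j) -> buf[i%2, j%2]}:
    // the result of computation c(i,j) is stored in a contracted
    // 2x2 buffer rather than a full N x N array.
    inline float &access_c(float buf[2][2], int i, int j) {
      return buf[i % 2][j % 2];
    }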
4.5 Generating Layers II and III from Layer I

Transforming the first layer into the second layer is usually done using an affine relation called a scheduling map. This maps each computation in the first layer into a particular position in time-space. Composing many transformations can be done simply by composing different scheduling maps.

4.5.1 Scheduling Maps

Affine transformations including loop tiling, skewing, loop fusion, distribution, splitting, reordering, and many others can be expressed as an affine map that maps computations from Layer I into the time-space domain in Layer II. A scheduling map takes as input the iteration domain from Layer I and transforms it into a new set that represents the computation in the time-space domain. For example, suppose we want to tile the following computation (which is in Layer I) into 16 x 16 tiles and parallelize the outermost loop:

    {C(i,j): 0<=i<N and 0<=j<N}: A(i,j) + B(i,j)

To do so, we provide the following scheduling map to Tiramisu:

    {C(i,j)->C(i1(cpu),j1,i2,j2): i1=floor(i/16) and i2=i%16
    and j1=floor(j/16) and j2=j%16 and 0<=i<N and 0<=j<N}

which will produce the following set in Layer II:

    {C(i1(cpu),j1,i2,j2): i1=floor(i/16) and i2=i%16
    and j1=floor(j/16) and j2=j%16 and 0<=i<N and 0<=j<N}:
    A(i1*16+i2, j1*16+j2) + B(i1*16+i2, j1*16+j2)
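The loop nest generated from that Layer II set would look like the following sketch (ours; for brevity it assumes N is a multiple of 16, so partial tiles are omitted):

    // Loop nest corresponding to the tiled, parallelized Layer II set
    // above; i1 carries the cpu space tag and becomes a parallel loop.
    void tiled_add(const float *A, const float *B, float *C, int N) {
      #pragma omp parallel for
      for (int i1 = 0; i1 < N / 16; i1++)
        for (int j1 = 0; j1 < N / 16; j1++)
          for (int i2 = 0; i2 < 16; i2++)
            for (int j2 = 0; j2 < 16; j2++) {
              int i = i1 * 16 + i2, j = j1 * 16 + j2;
              C[i * N + j] = A[i * N + j] + B[i * N + j];
            }
    }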
4.5.2 High-Level Scheduling Commands

Tiramisu provides a set of predefined scheduling maps for common affine loop nest transformations. Table 1, presented previously, shows examples of Tiramisu high-level scheduling commands. These commands are similar to those in Halide [49] and CHiLL [13]. The high-level scheduling commands in Tiramisu provide an easy-to-use interface for advanced loop nest transformations in a composable way, while still enabling advanced users to provide their own low-level scheduling maps to modify the space-time mapping for scheduling not covered by typical compiler transformations.

4.5.3 Checking the Validity of Schedules

In order to check the validity of transformations, we first compute the dependences of the input program, and then we check the validity of transformations using violated dependence analysis [57].

5 Generating Tiramisu from DSLs

To demonstrate the utility of Tiramisu, we integrated it into two DSL compilers: Halide [49] and Julia [7]. A DSL compiler that uses Tiramisu must generate three pieces of information:
• Layer I, which describes the algorithm;
• A scheduling map or a set of scheduling commands;
• Commands declaring the buffers and mapping the computations to buffer elements.
Tiramisu takes the three inputs and generates Layer II and Layer III automatically, and then generates an Abstract Syntax Tree (AST) from Layer III. The AST is traversed to generate target code (LLVM IR, Nvidia PTX, ...).

5.1 Halide to Tiramisu
Halide [49] is an industrial-quality DSL for image processing. We generate Tiramisu IR from Halide by mapping a
Halide Func, which is equivalent to a statement in a loop nest,
directly to a Tiramisu computation (Layer I). Reductions,
which update the same function, are mapped to Tiramisu
computations as described in Section 4.2. Halide scheduling
directives, such as tiling, splitting, reordering, parallelization, vectorization, etc., are directly mapped to the equivalent
high level set of scheduling commands defined in Tiramisu.
Finally, we map computations to buffer elements using the
default Halide mappings, while allowing Halide scheduling
commands that control data mappings to perform equivalent
transformations for the Layer III representation. The rest of
the code generation to low-level executable code takes place
within Tiramisu.
5.2 Julia to Tiramisu
Julia is a high-level, dynamically-typed programming language designed for numerical computing. However, in contrast to Halide, Julia is more general: it supports while loops and recurrent computations and is memory-based (i.e., it uses variables, unlike Halide, which mostly defines pure functions). We extend Julia with a set of scheduling directives and function annotations. Functions annotated with the @acc macro are optimized with Tiramisu.

We generate Tiramisu from the low-level Julia IR (which is in SSA form) by translating each statement in the Julia IR into a computation in Tiramisu. This Julia low-level IR does not have high-level control flow (it only has gotos); thus we change the compilation flow of Julia to annotate the low-level IR with information about the original control flow of the program. We use the annotations to recover the control flow and generate the iteration domain of each computation. Although Julia has another high-level IR that does have control flow information, we cannot use that IR because it lacks the necessary data type annotations.

We transform the memory-based Julia IR into the producer-consumer Tiramisu IR using classical array expansion techniques [22, 43, 44]. The goal here is to extract the data-flow representation of the code. The user is then free to change the data layout of computations using high-level data-layout directives.
6 Generating Code for Different Platforms

Generating code from Layer III (an ordered set of computations) amounts to generating nested loops (an AST) that visit each computation in the set, once and only once, while following the lexicographical ordering between integer tuples. Array accesses are generated from the maps that describe the data mapping. The Tiramisu code generator (which uses the ISL [58] library for code generation) takes Layer III as input and generates an AST as output. The AST is then traversed to generate lower level code targeting different hardware architectures.

6.1 Multicore CPU

When generating code that targets multicore shared memory systems, loop levels that are tagged as space cpu dimensions are translated into parallel loops in the generated code, using OpenMP-style parallelism. Loops that are tagged with the vec space dimensions are translated into vectorized loops. Currently we only support the vectorization of loops that do not contain any control flow.
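As an illustration (a hypothetical sketch of ours, not Tiramisu's actual emitted code), a computation whose i dimension carries the cpu tag could be lowered to OpenMP-annotated C++ along these lines:

    #include <algorithm>

    // Hypothetical lowering of the brightening/clamp stage of
    // Figure 1-(b): the i dimension is tagged cpu, so it becomes
    // an OpenMP parallel loop.
    void brighten_clamp(const float *img, float *b2, int N, int M) {
      #pragma omp parallel for
      for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
          for (int c = 0; c < 3; c++) {
            float t = 1.5f * img[(i * M + j) * 3 + c];
            b2[(i * M + j) * 3 + c] = std::min(std::max(t, 0.0f), 255.0f);
          }
    }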
6.2 GPU

For GPU code generation, data copy commands are provided in Layer III of Tiramisu. These commands are translated into the equivalent data copy calls in the lowered code. Computation dimensions tagged with GPU thread or GPU block tags are translated into the appropriate GPU thread and block IDs in the lowered code.

6.3 Distributed Memory Systems

Tiramisu utilizes MPI to generate code for distributed memory systems. Figure 3 shows Tiramisu pseudocode for a 3x3 distributed box blur. Lines 2 and 4 define the blur computation. This code remains the same regardless of whether we use a shared memory or distributed memory back-end.

    1  // Layer I
    2  {bx(y,x): 0<=y<rows and 0<=x<cols} :
    3      (in(y,x) + in(y,x+1) + in(y,x+2))/3;
    4  {by(y,x): 0<=y<rows and 0<=x<cols} :
    5      (bx(y,x) + bx(y+1,x) + bx(y+2,x))/3;
    6  // Layer II
    7  bx.split(y,chunk_sz,y1,y2); by.split(y,chunk_sz,y1,y2);
    8  // Layer III
    9  comm_prop blk({BLOCK}), ablk({ASYNC,BLOCK});
    10 send_recv xfer = create_transfer("{(q,y,x): 1<=q<N-1 and
       0<=y<2 and 0<=x<cols}","{(q,y,x): 0<=q<N-2 and 0<=y<2 and
       0<=x<cols}",q-1,q,ablk,blk,bx(y,x));
    11 bx.distribute(y1); by.distribute(y1);
    12 xfer.s->distribute(q); xfer.r->distribute(q);

Figure 3. Tiramisu pseudocode for a 3x3 distributed blur

For this example, we want to distribute the computation such that each MPI rank (process) operates on contiguous rows of the input data. Each rank gets chunk_sz rows. On line 7, the outer loop is split by chunk_sz. The resulting inner loop ranges over the rows in the chunk, and the outer loop ranges over the number of MPI ranks we want to use.

Lines 9 and 10 deal with communication. We assume that our image data is already distributed, thus only boundary rows need to be communicated among adjacent ranks. Line 9 defines two communication types, which will be used to select the appropriate MPI function. blk represents a blocking call, and ablk represents an asynchronous, blocking call. We use two-sided communication in Tiramisu, meaning communication is done with pairs of send and receive operations. The actual transfer is defined by create_transfer on line 10, which takes as input the send and receive iteration domains, the source and destination ranks, the communication types for send and receive, and the access into the producer for the send.

Line 11 tags dimension y1 of bx and by as distributed, and line 12 tags dimension q of the send and receive as distributed. During code generation, we postprocess the generated code and convert each distributed loop into a conditional based on the rank of the executing process. For example:

    for(q in 1..N-1) {...} // distribute on q

becomes:

    q = get_rank(); if (q >= 1 and q < N-1) {...}

All of the other scheduling commands in Tiramisu can be composed with transfers and distributed loops, as long as the composition is semantically correct. This means we can do everything from basic transformations (e.g. tiling a transfer) to more advanced transformations (e.g. specializing a distributed computation based on rank).

GPU scheduling can also be composed with distribution, allowing programs to execute in either a multi-GPU or heterogeneous CPU-GPU environment. Only a few extra scheduling commands need to be added to distributed Tiramisu code to enable the use of GPU. Figure 4 shows the four additional scheduling commands needed to convert the distributed box blur code in Figure 3 to distributed GPU code. Lines 1 and 3 copy data from the host (CPU) to the device (GPU) and from the device to the host, respectively. Line 2 tags the computations to run on the GPU. The resulting code can be used to distribute the box blur computation across multiple GPUs that reside on different nodes. As with CPU distribution, we use MPI to control the inter-node communication.

    1  input.copy_to_device();
    2  bx.gpu(y2,x); by.gpu(y2,x);
    3  output.copy_to_host();

Figure 4. Additional Tiramisu commands needed to generate a 3x3 distributed GPU box blur
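To give a flavor of the two-sided communication that create_transfer describes, the following is a minimal MPI sketch of the boundary-row exchange for the blur (our illustration, not Tiramisu's generated code; it assumes each rank owns chunk_sz contiguous rows with two halo rows stored after them):

    #include <mpi.h>
    #include <vector>

    // Each rank q >= 1 sends its top two rows to rank q-1 asynchronously;
    // each rank q <= nranks-2 receives two rows from rank q+1 (blocking),
    // storing them after its local chunk, as in Figure 3.
    void exchange_boundary(std::vector<float> &bx, int cols, int chunk_sz) {
      int q, nranks;
      MPI_Comm_rank(MPI_COMM_WORLD, &q);
      MPI_Comm_size(MPI_COMM_WORLD, &nranks);
      MPI_Request req;
      if (q >= 1)
        MPI_Isend(bx.data(), 2 * cols, MPI_FLOAT, q - 1, 0,
                  MPI_COMM_WORLD, &req);
      if (q <= nranks - 2)
        MPI_Recv(bx.data() + chunk_sz * cols, 2 * cols, MPI_FLOAT, q + 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      if (q >= 1)
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }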
6.4 FPGA

Tiramisu relies on FROST [20] to generate code for FPGAs. FROST is a common back-end for the hardware acceleration of DSLs on FPGA. It exposes an IR that DSLs can target, as well as a high-level scheduling language to express FPGA-specific optimizations. We integrated FROST within Tiramisu, enabling us to target FPGAs. We use Tiramisu to perform the loop nest transformations that are necessary to prepare the code for lowering to FPGA, while FROST focuses on the actual lowering to the target High-Level Synthesis (HLS) toolchain. For example, in order to vectorize a loop, Tiramisu first splits the loop so that the innermost loop has a fixed number of iterations and then tags that loop for later vectorization by FROST. FROST then performs the actual vectorization of both the loop and the input/output buffers. The output of FROST is a C++ implementation of the input code suitable for HLS tools, like Xilinx Vivado HLS [33]. Finally, FROST leverages Xilinx SDAccel to synthesize the resulting FPGA design and produce the bitstream file for execution on actual hardware.

7 Evaluation

We performed the evaluation on a cluster of dual-socket machines with two 24-core Intel Xeon E5-2680 v3 CPUs running at 2.50GHz and Ubuntu 14.04.5 LTS, with an Infiniband interconnect.

7.1 Halide to Tiramisu

To evaluate the integration of Tiramisu with Halide, we used the following benchmarks: cvtColor, which converts RGB to grayscale; convolution, a simple 2D convolution; gaussian, which performs a gaussian blur; warpAffine, which does affine warping on an image; heat2D, a simulation of the 2D heat equation; nb pipeline, a synthetic pipeline that computes two outputs from the same image, a negative and a brightened image; rgbyuv420, an image conversion from RGB to YUV420; heat3D, the 3D heat equation with timestepping; and ticket #2373, a code snippet from a bug filed against Halide where the bounds inference is over-approximated, leading the generated code to fail in execution.

Figure 5 compares the execution time of code generated by Halide and Halide-Tiramisu. In five of the benchmarks (namely convolution, cvtColor, gaussian, heat2D, and warpAffine), the performance of the code generated by Halide-Tiramisu matches the performance of Halide. We use the same schedule for both implementations.

Figure 5. Execution Time for Tiramisu and Halide (s)

Two of the other benchmarks, heat3D and ticket #2373, cannot be implemented in Halide. The following is an example of a recurrent filter extracted from [12], a compiler designed to support recurrent filters:

    heat3d(x,y,z,0) = a*in(x, y, z) +
        b*(in(x+1, y, z) + in(x-1, y, z) +
           in(x, y+1, z) + in(x, y-1, z) +
           in(x, y, z+1) + in(x, y, z-1));
    heat3d(x,y,z,t) = a*heat3d(x, y, z, t-1) +
        b*(heat3d(x+1, y, z, t-1) + heat3d(x-1, y, z, t-1) +
           heat3d(x, y+1, z, t-1) + heat3d(x, y-1, z, t-1) +
           heat3d(x, y, z+1, t-1) + heat3d(x, y, z-1, t-1));

This code cannot be implemented in Halide because it contains a cyclic dependence graph due to the loop over timesteps, while the Halide compiler assumes that the dependence graph is a DAG (directed acyclic graph). This limitation exists mainly because it is difficult to prove the legality of optimizations in an interval-based representation in the case of a cyclic dependence graph. This is not the case for Tiramisu, which relies on a precise dependence analysis [23] and on checking the legality of transformations using the polyhedral model [55] to decide whether a transformation can be performed.

In ticket #2373, which exhibits a triangular iteration domain, Halide's bounds inference over-approximates the computed bounds, which leads the generated code to fail in execution. This over-approximation in Halide is due to the use of intervals to represent iteration domains, which prevents Halide from performing precise bounds inference on non-rectangular iteration spaces. Tiramisu can handle this case naturally since it relies on a polyhedral-based model where sets can include any affine constraint in addition to the min and max bounds. These examples show that the model exposed by Tiramisu naturally supports more complicated code patterns than an advanced, mature DSL compiler.

For nb pipeline and rgbyuv420, the code generated from Halide-Tiramisu achieves a 4x speedup over the code generated from Halide. This is primarily due to fusion. In both cases, Tiramisu can fuse multiple loops into one loop, which enhances data locality; this kind of loop fusion is currently unsupported in Halide. This is another case that demonstrates that the expressiveness of the polyhedral-based representation in Tiramisu allows the framework to naturally perform certain iteration domain transformations that are difficult in other models.

7.2 Julia to Tiramisu

We used the following benchmarks to evaluate the integration of Tiramisu within Julia: bicg, a biconjugate gradient method; doitgen, a multiresolution analysis kernel; mttkrp, the matricized tensor times Khatri-Rao product; covariance, which performs a covariance computation; and gesummv, which is summed matrix-vector multiplications. For a fair comparison, the Julia code was tagged with the inbounds macro to remove any boundary checks on buffer accesses.

Figure 6 shows the execution time of code generated by Julia-Tiramisu compared to Julia without Tiramisu. The speedups of Julia-Tiramisu in covariance, doitgen, mttkrp and bicg are mainly due to the improved data locality obtained after tiling using Tiramisu, which is not possible in Julia.

Figure 6. Execution Time for Tiramisu and Julia (s)

7.3 Evaluating Backends

7.3.1 FPGA Backend

We evaluate the FPGA backend in Tiramisu using 6 image processing kernels: convolution, cvtColor, gaussian, scale, sobel, and threshold. We chose these kernels because they are already implemented in the Vivado HLS Video Library [32], which implements several OpenCV functions for FPGA. We compare the execution time of code generated from Tiramisu with the codes extracted from the Vivado HLS Video Library. These codes are synthesized using the Xilinx SDAccel 2016.4 toolchain at 200MHz and run on an ADM-PCIE-7V3 board by Alpha Data (powered by a Xilinx Virtex 7 FPGA). For all the kernels, we use a 512 × 384 RGB image, except for the threshold kernel, which takes as input a single-channel image.

The HLS Video Library kernels expect the input image to be arranged with channels as the innermost dimension of the image. The accelerator on the FPGA receives a stream of pixels from the off-chip memory, and processes each channel of the pixel completely in parallel.

While the HLS Video Library parallelizes only the channel dimension, the flexibility of the Tiramisu scheduling commands allowed us to explore other alternatives, including parallelization over the width dimension of the image, leading to better performance (at the expense of more FPGA resources). Indeed, while the Video Library performs, at most, three computations in parallel (on the channels), the code generated from Tiramisu can perform, at most, sixty-four computations in parallel, in the case of a 512-bit vectorization of the input/output buffers for an 8-bit image.

Figure 7 shows the execution time of Tiramisu/FROST and the Vivado HLS Video Library. Tiramisu with FROST outperformed the Video Library implementations by at least 3x. For each kernel, we used Tiramisu to arrange the input image and split the innermost loop to prepare for vectorization (we applied vectorization to both input/output buffers). We also applied loop pipelining and array partitioning (for convolution, gaussian and sobel).

Figure 7. Execution Time for Tiramisu/FROST and Vivado HLS Video Library (ms)
7.3.2 Distributed and GPU Backend

For the Tiramisu distributed backend, we used 6 kernels for evaluation: blurxy, sobel, convolution, gaussian, pipeline, and cvtColor (we chose these kernels because they are already implemented in the distributed Halide compiler [21]). We assume that the data is already distributed across the nodes by rows. Of these benchmarks, pipeline and cvtColor do not require any communication; the other four require communication due to overlapping boundary regions in the distributed data. For the distributed CPU-only tests, we use the MVAPICH2 2.0 [31] implementation of MPI.

Figure 9 compares the execution time of distributed Tiramisu and distributed Halide on 16 nodes for each of the kernels. Tiramisu is faster than distributed Halide in each case. For the kernels involving communication, code generated by distributed Halide has two problems compared to Tiramisu: (1) it overestimates the amount of data it needs to send; (2) it unnecessarily packs together contiguous data into a separate buffer before sending.

Figure 9. Execution time of distributed Tiramisu and distributed Halide across 16 nodes (s)

Figure 10 shows the execution time of the kernels with distributed Tiramisu when running on 2, 4, 8, and 16 nodes. As expected, execution time decreases for these kernels as the number of nodes increases.

Figure 10. Execution time of distributed Tiramisu for 2, 4, 8, and 16 nodes (s)

7.3.3 Putting it All Together

As a final experiment, we ran a modified version of the cvtColor kernel in a distributed GPU configuration and compared it with a distributed CPU configuration. For this experiment, we ran on a small cluster of 4 nodes, each consisting of a single Nvidia K40 GPU and a 12-core Intel Xeon E5-2695 v2 CPU clocked at 2.40GHz. We used OpenMPI 1.6.5 [46] as our MPI implementation.

Figure 11 shows the results of this experiment. The back row shows the results for running the cvtColor kernel on one node, using either 1 core, 10 cores, or 1 GPU. As expected, 10 cores is better than 1 core, and the GPU outperforms the CPU. The front row shows the same configurations, except distributed across 4 nodes. So, from left to right, the columns of the front row represent a total of 4 cores, then 40 cores, and then 4 GPUs. As with the single-node performance, 40 cores is better than 4 cores, and 4 GPUs are better than the CPUs.

Figure 11. Results for either CPU or GPU running on a single node (back row), and distributed across 4 nodes (front row)

7.4 Generating BLAS sgemm using Tiramisu

To evaluate the performance of Tiramisu on an extreme case, we used Tiramisu to generate the BLAS generalized matrix multiplication (sgemm), which computes C = αAB + βC. The sgemm implementation in the Intel MKL library is known to be one of the most highly hand-optimized implementations for Intel CPU architectures. We used a large set of optimizations including three-dimensional L2 and L3 blocking, fusion of the computation of T = αAB and C = T + βC into a single loop, vectorization, unrolling, array packing (as described in [25]), register blocking, and separation of full and partial tiles (which is crucial to enable vectorization and unrolling and to reduce control overhead). We also tuned the tile size and unrolling factor for the machine on which we ran our experiments. The resulting kernel matches the Intel MKL implementation, as shown in Figure 8. The Tiramisu implementations of saxpy, convolution, and two fused convolutions all outperform or match the Intel MKL implementation (lower is better).

Figure 8. Tiramisu compared to Intel MKL (normalized time; benchmarks: sgemm, saxpy, conv, and conv-conv)
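For reference, the following is a minimal sketch of the L2-blocking flavor of these optimizations (our simplified illustration; the tuned kernel described above additionally packs arrays, vectorizes, unrolls, blocks registers, and separates full from partial tiles):

    // Blocked sgemm C = alpha*A*B + beta*C (row-major, N x N; for
    // brevity N is assumed to be a multiple of the block size BS).
    void sgemm_blocked(float alpha, float beta, const float *A,
                       const float *B, float *C, int N) {
      const int BS = 64; // block size, tuned per machine in practice
      for (int i = 0; i < N * N; i++) C[i] *= beta;
      for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
          for (int jj = 0; jj < N; jj += BS)
            for (int i = ii; i < ii + BS; i++)
              for (int k = kk; k < kk + BS; k++) {
                float a = alpha * A[i * N + k];
                for (int j = jj; j < jj + BS; j++)
                  C[i * N + j] += a * B[k * N + j];
              }
    }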
7.5 Evaluation Summary
Overall, the experiments demonstrated the use of Tiramisu
as an IR and optimization framework for two DSLs and multiple backends. We show that Tiramisu is expressive: it allows
both Halide and Julia to perform new optimizations and allows Halide to express new algorithms. The experiments
also show that Tiramisu is suitable for targeting multiple
hardware architectures, such as multicore, GPUs, distributed
systems, and FPGA. And thanks to its flexible scheduling
commands, it can generate highly optimized code for a variety of architectures and algorithms.
8 Related Work
8.1 High Performance DSL Compilers
High performance DSL compilers such as Halide [50], Diderot
[14], Simit [35], PolyMage [45], OoLALA [41], and others build
custom compilation pipelines for specific domains such as
image processing or linear algebra. These compilers have
shown that it is possible to obtain high performance by applying domain-specific optimizations in the course of compilation. However, such compilers map DSL code directly to
the target hardware, sometimes using a low-level compiler
framework like LLVM. Our goal in this work is to build a
more generic framework and intermediate representation
that can be used by domain-specific language compilers in
place of ad-hoc re-implementations of compute and data
transformations.
8.2 DSL IRs and Optimization Frameworks

Delite [11] is a generic framework for building DSL compilers using Lightweight Modular Staging (LMS) [51], a technique for embedding code generators and compilers in Scala. Delite exposes several parallel computation patterns that DSLs can use to express computation; however, it has no facilities for advanced loop nest transformations. We therefore believe that generic DSL frameworks like Delite can benefit from using Tiramisu.

PENCIL [3, 4] is another generic DSL IR and automatic optimization framework which uses a polyhedral representation internally. It is a subset of C99 with additional constructs to help parallelizing compilers perform more accurate static analyses, and as a result generate more efficient code. The Pluto [9] automatic scheduling algorithm used within PENCIL can be integrated seamlessly in Tiramisu on top of the first layer. The main difference between PENCIL and Tiramisu is that Tiramisu separates computation, schedule, and data layout. In contrast, the PENCIL IR is a subset of C99 with arrays, scalar variables, etc., and thus successful parallelization and optimization sometimes requires data-layout transformations such as expansion and privatization, which are not necessary in Tiramisu.

CHiLL [13, 27] is a polyhedral-based compiler framework for Fortran that allows users to express a composition of high-level transformations such as tiling and unrolling, which the system performs, freeing users from having to implement them. URUK [24] is a similar framework that also uses a polyhedral representation. Other systems such as POET [61] parametrize loop transformations with a language-agnostic, purely-syntactic transformation system. These frameworks require working with concrete data layouts, in contrast to Tiramisu, which does not have a concrete data layout in its first layer.

Darkroom [29] is a language and compiler for image processing pipelines. Darkroom compiles the input programs into optimized line-buffered pipelines (it relies on an ILP solver to optimally schedule the pipelines), and then synthesizes them for ASIC, FPGA, or CPU. Similarly, [47] presents an extension to Halide to hardware-accelerate applications on FPGA; the authors implemented additional scheduling commands to define and control the code generated for FPGA. These works are designed for image processing applications only, while Tiramisu with FROST can also support other types of computations (e.g. linear algebra).

8.3 Data-layout Transformation

Techniques such as scalar and array expansion remove false dependencies, enabling loop nest transformation and parallelization [22, 34]. Expansion increases dimensionality to create private copies of data for each loop iteration. In Tiramisu, computations are single assignment, and thus are fully expanded, obviating the need for privatization.

A family of array contraction techniques attempts to reduce the memory footprint without constraining loop nest transformations [19, 37, 48]: the compiler performs a maximal expansion before applying loop transformations, and then attempts to contract the expanded arrays. Tiramisu simplifies this process, since maximal expansion is not needed. This is similar to Halide [49] where computations are mapped
by default to fully expanded arrays and then a compiler pass
performs storage folding to contract arrays.
Several alternative approaches try to constrain expansion.
Maximal static expansion (MSE) restricts the elimination of
dependencies to the situations where the data flow can be
captured accurately at compilation time [16]. It is important when generalizing array dependence analyses and loop
transformations to dynamic control flow, and it can be combined with array contraction [15]. A priori constraints on
memory footprint can also be enforced, up to linear volume
approximations [53], and more generally, trade-offs between
parallelism and storage allocation can be explored. These
techniques can also be applied in Tiramisu to constrain the
schedule.
Data layout transformations for specific dependence patterns using the polyhedral model have been used to eliminate SIMD intra-vector operations [30] and for enhancing
cache locality in non-uniform cache access (NUCA) architectures [40]. These kinds of transformations can be easily
implemented in Tiramisu.
8.4 Functional IRs and Data Layout Transformations
The NOVA functional language [17] was designed to be used
as an IR for DSL compilers. It is a polymorphic, statically-typed functional language with a suite of higher-order functions such as map, reduce and scan that are used to express
parallelism. Although the NOVA IR does not represent memory explicitly, it does not provide any framework for advanced loop nest transformations. For example, only map
functions that iterate over the same ranges can be fused.
Iteration space transformations such as skewing are not addressed. Transformations such as fusion are done at the function level. Tiramisu provides an IR that allows advanced
iteration space transformations while still separating the
algorithm from the schedule and the data layout.
Most functional languages do not expose notions of memory layout to programmers. Instead, programmers rely on
profiling tools to characterize data movement [52] or design algorithms around models of memory traffic for such
programming languages [8]. In contrast, Tiramisu enables
writing the algorithm in a functional manner while separately dealing with data layout and computation scheduling.
9 Conclusion

In this paper we introduce Tiramisu, a middle-end compiler for domain specific languages that separates the algorithm, the schedule, and the data layout in a three-layer intermediate representation. Tiramisu supports backend code generation for multicore CPUs, GPUs, FPGAs, and distributed systems, as well as machines that contain any combination of these architectures.

Tiramisu is designed so that most DSLs can use high-level scheduling and data mapping constructs to control the lowering from the algorithm to the backend, cross-platform code. In addition, the underlying representations are accessible to advanced users who wish to implement new optimizations and transformations.

We evaluate Tiramisu by creating a new middle-end for the Halide and Julia compilers, targeting a variety of backends. We demonstrate transformations made possible by Tiramisu, increasing performance by up to 4x in Halide and 16x in Julia, and show that Tiramisu can generate very fast code matching one of the most hand-optimized kernels (Intel MKL gemm).
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. CoRR abs/1603.04467 (2016). http://arxiv.org/abs/1603.04467
[2] Martin Sandve Alnæs, Anders Logg, Kristian B. Ølgaard, Marie E. Rognes, and Garth N. Wells. 2014. Unified form language: A domain-specific language for weak formulations of partial differential equations. ACM Trans. Math. Softw. 40, 2 (2014), 9.
[3] Riyadh Baghdadi, Ulysse Beaugnon, Albert Cohen, Tobias Grosser, Michael Kruse, Chandan Reddy, Sven Verdoolaege, Adam Betts, Alastair F. Donaldson, Jeroen Ketema, Javed Absar, Sven van Haastregt, Alexey Kravets, Anton Lokhmotov, Robert David, and Elnar Hajiyev. 2015. PENCIL: A Platform-Neutral Compute Intermediate Language for Accelerator Programming. In Proceedings of the 2015 International Conference on Parallel Architecture and Compilation (PACT '15). IEEE Computer Society, Washington, DC, USA, 138–149. https://doi.org/10.1109/PACT.2015.17
[4] Riyadh Baghdadi, Albert Cohen, Tobias Grosser, Sven Verdoolaege, Anton Lokhmotov, Javed Absar, Sven van Haastregt, Alexey Kravets, and Alastair F. Donaldson. 2015. PENCIL Language Specification. Research Rep. RR-8706. INRIA. 37 pages. https://hal.inria.fr/hal-01154812
[5] M.-W. Benabderrahmane, L.-N. Pouchet, Albert Cohen, and Cedric Bastoul. 2010. The Polyhedral Model Is More Widely Applicable Than You Think. In Proceedings of the International Conference on Compiler Construction (ETAPS CC'10) (LNCS). Springer-Verlag, Paphos, Cyprus.
[6] Mohamed-Walid Benabderrahmane, Louis-Noël Pouchet, Albert Cohen, and Cédric Bastoul. 2010. The Polyhedral Model is More Widely Applicable Than You Think. In Proceedings of the 19th Joint European Conference on Theory and Practice of Software, International Conference on Compiler Construction (CC'10/ETAPS'10). Springer-Verlag.
[7] Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B. Shah. 2017. Julia: A fresh approach to numerical computing. SIAM Rev. 59, 1 (2017), 65–98.
[8] Guy E. Blelloch and Robert Harper. 2013. Cache and I/O Efficent Functional Algorithms. In Proceedings of the 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '13). ACM, New York, NY, USA, 39–50. https://doi.org/10.1145/2429069.2429077
[9] Uday Bondhugula, Albert Hartono, J. Ramanujam, and P. Sadayappan. 2008. A practical automatic polyhedral parallelizer and locality optimizer. In PLDI. 101–113.
[10] Adrian M. Caulfield, Eric S. Chung, Andrew Putnam, Hari Angepat, Daniel Firestone, Jeremy Fowers, Michael Haselman, Stephen Heil, Matt Humphrey, Puneet Kaur, et al. 2017. Configurable Clouds. IEEE Micro 37, 3 (2017), 52–61.
[11] Hassan Chafi, Arvind K. Sujeeth, Kevin J. Brown, HyoukJoong Lee, Anand R. Atreya, and Kunle Olukotun. 2011. A domain-specific approach to heterogeneous parallelism. In PPoPP. 35–46.
[12] Gaurav Chaurasia, Jonathan Ragan-Kelley, Sylvain Paris, George Drettakis, and Fredo Durand. 2015. Compiling high performance recursive filters. In Proceedings of the 7th Conference on High-Performance Graphics. ACM, 85–94.
[13] Chun Chen, Jacqueline Chame, and Mary Hall. 2008. CHiLL: A framework for composing high-level loop transformations. Technical Report 08-897. U. of Southern California.
[14] Charisee Chiw, Gordon Kindlmann, John Reppy, Lamont Samuels, and Nick Seltzer. 2012. Diderot: A Parallel DSL for Image Analysis and Visualization. In PLDI.
[15] Albert Cohen. 1999. Parallelization via constrained storage mapping optimization. In Intl. Symp. on High Performance Computing, Kazuki Joe, Akira Fukuda, Shinji Tomita, and Constantine Polychronopoulos (Eds.). LNCS, Vol. 1615. Springer-Verlag, 83–94.
[16] Albert Cohen and Vincent Lefebvre. 1998. Optimization of Storage Mappings for Parallel Programs. In Europar '99, number 1685 in LNCS (1998), 375–382.
[17] Alexander Collins, Dominik Grewe, Vinod Grover, Sean Lee, and Adriana Susnea. 2014. NOVA: A Functional Language for Data Parallelism. In Proceedings of ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming (ARRAY'14). ACM, New York, NY, USA, Article 8, 6 pages. https://doi.org/10.1145/2627373.2627375
[18] Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman, and F. Kenneth Zadeck. 1991. Efficiently Computing Static Single Assignment Form and the Control Dependence Graph. ACM Trans. Program. Lang. Syst. 13, 4 (Oct. 1991), 451–490. https://doi.org/10.1145/115372.115320
[19] Alain Darte and Guillaume Huard. 2005. New Complexity Results on Array Contraction and Related Problems. J. VLSI Signal Process. Syst. 40, 1 (May 2005), 35–55. https://doi.org/10.1007/s11265-005-4937-3
[20] Emanuele Del Sozzo, Riyadh Baghdadi, Saman Amarasinghe, and Marco Domenico Santambrogio. 2017. A Common Backend for Hardware Acceleration on FPGA. In 35th IEEE International Conference on Computer Design (ICCD'17).
[21] Tyler Denniston, Shoaib Kamil, and Saman Amarasinghe. 2016. Distributed Halide. In Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. ACM, 5.
[22] P. Feautrier. 1988. Array expansion. In Proceedings of the 2nd international conference on Supercomputing. ACM, St. Malo, France, 429–441. https://doi.org/10.1145/55364.55406
[23] Paul Feautrier. 1991. Dataflow analysis of array and scalar references. International Journal of Parallel Programming 20, 1 (Feb. 1991), 23–53. https://doi.org/10.1007/BF01407931
[24] Sylvain Girbal, Nicolas Vasilache, Cédric Bastoul, Albert Cohen, David Parello, Marc Sigler, and Olivier Temam. 2006. Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies. International Journal of Parallel Programming 34, 3 (2006), 261–317.
[25] Kazushige Goto and Robert A. van de Geijn. 2008. Anatomy of High-performance Matrix Multiplication. ACM Trans. Math. Softw. 34, 3, Article 12 (May 2008), 25 pages. https://doi.org/10.1145/1356052.1356053
[26] M. Gupta. 1997. On privatization of variables for data-parallel execution. In Parallel Processing Symposium, 1997. Proceedings., 11th International. IEEE, 533–541.
[27] Mary Hall, Jacqueline Chame, Chun Chen, Jaewook Shin, Gabe Rudy, and Malik Murtaza Khan. 2010. Loop Transformation Recipes for Code Generation and Auto-Tuning. Springer Berlin Heidelberg, Berlin, Heidelberg, 50–64.
[28] Mary W. Hall, Saman P. Amarasinghe, Brian R. Murphy, Shih-Wei Liao, and Monica S. Lam. 1995. Detecting coarse-grain parallelism using an interprocedural parallelizing compiler. In Supercomputing, 1995. Proceedings of the IEEE/ACM SC95 Conference. IEEE, 49–49.
[29] James Hegarty, John Brunhaver, Zachary DeVito, Jonathan Ragan-Kelley, Noy Cohen, Steven Bell, Artem Vasilyev, Mark Horowitz, and Pat Hanrahan. 2014. Darkroom: Compiling High-level Image Processing Code into Hardware Pipelines. ACM Trans. Graph. 33, 4, Article 144 (July 2014), 11 pages. https://doi.org/10.1145/2601097.2601174
[30] Tom Henretty, Kevin Stock, Louis-Noël Pouchet, Franz Franchetti, J. Ramanujam, and P. Sadayappan. 2011. Data Layout Transformation for Stencil Computations on Short SIMD Architectures. In ETAPS International Conference on Compiler Construction (CC'11). Springer-Verlag, Saarbrücken, Germany, 225–245.
[31] Wei Huang, Gopalakrishnan Santhanaraman, H-W Jin, Qi Gao, and Dhabaleswar K. Panda. 2006. Design of high performance MVAPICH2: MPI2 over InfiniBand. In Cluster Computing and the Grid, 2006. CCGRID 06. Sixth IEEE International Symposium on, Vol. 1. IEEE, 43–48.
[32] Xilinx Inc. 2015. HLS Video Library. http://www.wiki.xilinx.com/HLS+Video+Library. (April 2015).
[33] Xilinx Inc. 2017. Vivado HLx Editions. https://www.xilinx.com/products/design-tools/vivado.html. (October 2017).
[34] Ken Kennedy and John R. Allen. 2002. Optimizing compilers for modern architectures: a dependence-based approach. Morgan Kaufmann Publishers Inc. http://portal.acm.org/citation.cfm?id=502981
[35] Fredrik Kjolstad, Shoaib Kamil, Jonathan Ragan-Kelley, David I. W. Levin, Shinjiro Sueda, Desai Chen, Etienne Vouga, Danny M. Kaufman, Gurtej Kanwar, Wojciech Matusik, and Saman Amarasinghe. 2016. Simit: A Language for Physical Simulation. ACM Trans. Graph. 35, 2, Article 20 (May 2016), 21 pages. https://doi.org/10.1145/2866569
[36] Chris Lattner and Vikram Adve. 2004. LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation. In Proceedings of the International Symposium on Code Generation and Optimization: Feedback-directed and Runtime Optimization (CGO '04). IEEE Computer Society, Washington, DC, USA, 75–. http://dl.acm.org/citation.cfm?id=977395.977673
[37] Vincent Lefebvre and Paul Feautrier. 1998. Automatic storage management for parallel programs. Parallel Comput. 24 (1998), 649–671. https://doi.org/10.1016/S0167-8191(98)00029-5
[38] Zhiyuan Li. 1992. Array privatization for parallel execution of loops. In Proceedings of the 6th international conference on Supercomputing. ACM, Washington, D.C., United States, 313–322. https://doi.org/10.1145/143369.143426
[39] Xiangke Liao, Liquan Xiao, Canqun Yang, and Yutong Lu. 2014. MilkyWay-2 supercomputer: system and application. Frontiers of Computer Science 8, 3 (2014), 345–356.
[40] Qingda Lu, Christophe Alias, Uday Bondhugula, Thomas Henretty, Sriram Krishnamoorthy, J. Ramanujam, Atanas Rountev, P. Sadayappan, Yongjian Chen, Haibo Lin, and Tin-Fook Ngai. 2009. Data Layout Transformation for Enhancing Data Locality on NUCA Chip Multiprocessors. In International Conference on Parallel Architectures and Compilation Techniques. 348–357.
[41] Mikel Luján, T. L. Freeman, and John R. Gurd. 2000. OoLALA: An Object Oriented Analysis and Design of Numerical Linear Algebra. In OOPSLA. 229–252.
[42] D. Maydan, S. Amarasinghe, and M. Lam. 1992. Data dependence and data-flow analysis of arrays. In International Workshop on Languages and Compilers for Parallel Computing. Springer, 434–448.
[43] Dror E. Maydan, Saman P. Amarasinghe, and Monica S. Lam. 1993. Array-data flow analysis and its use in array privatization. In Proceedings of the 20th ACM SIGPLAN-SIGACT symposium on Principles of programming languages (POPL '93). Charleston, South Carolina, United States, 2–15. https://doi.org/10.1145/158511.158515
[44] Samuel Midkiff. 2012. Automatic Parallelization: An Overview of Fundamental Compiler Techniques. Morgan & Claypool Publishers.
[45] Ravi Teja Mullapudi, Vinay Vasista, and Uday Bondhugula. 2015. PolyMage: Automatic Optimization for Image Processing Pipelines. SIGARCH Comput. Archit. News 43, 1 (March 2015), 429–443. https://doi.org/10.1145/2786763.2694364
[46] Open MPI. [n. d.]. Version 1.6.5, Open MPI Software. ([n. d.]).
[47] Jing Pu, Steven Bell, Xuan Yang, Jeff Setter, Stephen Richardson, Jonathan Ragan-Kelley, and Mark Horowitz. 2017. Programming Heterogeneous Systems from an Image Processing DSL. ACM Trans. Archit. Code Optim. 14, 3, Article 26 (Aug. 2017), 25 pages. https://doi.org/10.1145/3107953
[48] F. Quilleré and S. Rajopadhye. 2000. Optimizing Memory Usage in
the Polyhedral Model. ACM Trans. on Programming Languages and
Systems 22, 5 (Sept. 2000), 773–815.
[49] Jonathan Ragan-Kelley, Andrew Adams, Sylvain Paris, Marc Levoy,
Saman Amarasinghe, and Frédo Durand. 2012. Decoupling Algorithms
from Schedules for Easy Optimization of Image Processing Pipelines.
ACM Trans. Graph. 31, 4, Article 32 (July 2012), 12 pages.
[50] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain
Paris, Frédo Durand, and Saman P. Amarasinghe. 2013. Halide: a
language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. In PLDI. 519–530.
[51] Tiark Rompf and Martin Odersky. 2010. Lightweight Modular Staging:
A Pragmatic Approach to Runtime Code Generation and Compiled
DSLs. In Proceedings of the Ninth International Conference on Generative
Programming and Component Engineering (GPCE ’10). ACM, New York,
NY, USA, 127–136. https://doi.org/10.1145/1868294.1868314
[52] Daniel Spoonhower, Guy E. Blelloch, Robert Harper, and Phillip B.
Gibbons. 2008. Space Profiling for Parallel Functional Programs. SIGPLAN Not. 43, 9 (Sept. 2008), 253–264. https://doi.org/10.1145/1411203.
1411240
[53] William Thies, Frédéric Vivien, Jeffrey Sheldon, and Saman Amarasinghe. 2001. A unified framework for schedule and storage optimization. In Proc. of the 2001 PLDI Conf.
[54] Georgios Tournavitis, Zheng Wang, Björn Franke, and Michael FP
O’Boyle. 2009. Towards a holistic approach to auto-parallelization:
integrating profile-driven parallelism detection and machine-learning
based mapping. ACM Sigplan Notices 44, 6 (2009), 177–187.
[55] Konrad Trifunovic, Albert Cohen, Razya Ladelski, and Feng Li. 2011.
Elimination of Memory-Based Dependences for Loop-Nest Optimization and Parallelization. In 3rd GCC Research Opportunities Workshop.
Chamonix, France.
[56] Peng Tu and David Padua. 1994. Automatic array privatization. In
Languages and Compilers for Parallel Computing, Utpal Banerjee, David
Gelernter, Alex Nicolau, and David Padua (Eds.). Lecture Notes in
Computer Science, Vol. 768. Springer Berlin / Heidelberg, 500–521.
[57] Nicolas Vasilache, Cedric Bastoul, Albert Cohen, and Sylvain Girbal.
2006. Violated dependence analysis. In Proceedings of the 20th annual
international conference on Supercomputing. ACM, Cairns, Queensland,
Australia, 335–344. https://doi.org/10.1145/1183401.1183448
[58] Sven Verdoolaege. 2010. isl: An Integer Set Library for the Polyhedral
Model. In ICMS, Vol. 6327. 299–302.
[59] Michael E Wolf and Monica S Lam. 1991. A loop transformation theory
and an algorithm to maximize parallelism. IEEE transactions on parallel
and distributed systems 2, 4 (1991), 452–471.
[60] Chao-Tung Yang, Chih-Lin Huang, and Cheng-Fang Lin. 2011. Hybrid
CUDA, OpenMP, and MPI parallel programming on multicore GPU
clusters. Computer Physics Communications 182, 1 (2011), 266–269.
[61] Qing Yi, Keith Seymour, Haihang You, Richard Vuduc, and Dan Quinlan. 2007. POET: Parameterized Optimizations for Empirical Tuning.
In Proc. Wkshp. Performance Optimization of High-level Languages and
Libraries (POHLL), at IEEE Int’l. Par. Distrib. Processing Symp. (IPDPS).
Long Beach, CA, USA, 1–8. https://doi.org/10.1109/IPDPS.2007.370637
15
,,
10 Notation and Definitions
10.1 Presburger formula
We use an EBNF (Extended Backus-Naur Form) grammar to
define Presburger formulas.
<formula>  ← <formula> ∧ <formula>
           | <formula> ∨ <formula>
           | ¬<formula>
           | ∃<var>.<formula>
           | ∀<var>.<formula>
           | <atom>
<atom>     ← <term> <relop> <term>
<term>     ← <numeral>
           | <term> + <term>
           | −<term>
           | <numeral> ∗ <term>
           | <var>
<relop>    ← < | ≤ | = | > | ≥
<var>      ← x | y | z | ...
<numeral>  ← 0 | 1 | 2 | ...
Note that <numeral> ∗ <term> is not a general multiplication operator; it is a shortcut for <term> + · · · + <term>. Presburger arithmetic is used mainly because it is a decidable arithmetic. That is, there exists an algorithm which decides whether an arbitrary Presburger formula is true (valid) or not, which is important for many polyhedral operations.

10.2 Quasi-Affine Constraints
A quasi-affine constraint is a constraint over integer values and integer variables involving only the operators +, -, ×, /, mod, &&, ||, <, <=, >, >=, ==, !=, and the ternary ?: operator, where the second argument of / and mod must be a (positive) integer literal, and where at least one of the arguments of × must be a constant expression. An example of a quasi-affine constraint for a statement in a loop nest is 10 × i + j + n > 0, where i and j are loop iterators and n is a symbolic constant (i.e., a variable that has an unknown but fixed value for the duration of an execution). An example of a non-quasi-affine constraint is i × i > 0, because we require at least one of the arguments of × to be a constant.

11 Integer Sets
An integer set is a set of integer tuples from Z^d that can be specified using affine constraints. d is the dimensionality of the set (the number of integers in each tuple) and a d-tuple is represented as (a_1, a_2, . . . , a_d). An example of a set of integer tuples is:
{(1, 1); (2, 1); (3, 1); (1, 2); (2, 2); (3, 2)}
Instead of listing all the integer tuples of the set, we describe the set using affine constraints:
{S(i, j) : 1 ≤ i ≤ 3 ∧ 1 ≤ j ≤ 2}
where i and j are the dimensions of the set. The tuples of a set can optionally have a common name, such as S in this example. Figure 12 shows a graphical representation of the set S.
In general, an integer set has the form
S = {N(s⃗) | f(s⃗, p⃗)}
with s⃗ representing the integer tuples of the integer set (s⃗ ∈ Z^d), N a common name for all the tuples s⃗, usually used as the name of computations, d the dimensionality of the set, p⃗ ∈ Z^e a vector of e parameters, and f(s⃗, p⃗) a Presburger formula that evaluates to true if and only if s⃗ is an element of S for the given parameters p⃗.
[Figure 12: graphical representation of a set. Figure 13: graphical representation of a map.]

11.1 Relations (maps)
A map is a relation between two integer sets. For example,
M = {S1(i, j) → S1(i + 2, j + 2) : 1 ≤ i ≤ 3 ∧ 1 ≤ j ≤ 2}
represents a relation between two sets. The first set is called the domain or the source and the second is called the range or the sink. Figure 13 shows a graphical representation of the map M.
In general, a map has the form
M = {A(s⃗) → B(o⃗), (s⃗, o⃗) ∈ Z^{d1} × Z^{d2} | f(s⃗, o⃗, p⃗)}
where A(s⃗) represents the domain or the source and B(o⃗) represents the range or the sink, d_1 and d_2 are the dimensionalities of s⃗ and o⃗, p⃗ ∈ Z^e is a vector of e parameters, and f(s⃗, o⃗, p⃗) is a Presburger formula that evaluates to true if and only if there is a relation from s⃗ to o⃗ in M for the given parameters p⃗.
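To make the set and map notation above concrete, the following plain-Python sketch (our illustration; a polyhedral compiler would delegate such operations to a library like isl) enumerates the example set S and applies the example map M to it:

    from itertools import product

    # The set S = {S(i, j) : 1 <= i <= 3 and 1 <= j <= 2}, enumerated explicitly.
    S = {(i, j) for i, j in product(range(1, 4), range(1, 3))}

    # The map M = {S1(i, j) -> S1(i + 2, j + 2) : 1 <= i <= 3 and 1 <= j <= 2},
    # applied pointwise to every tuple satisfying the domain constraints.
    M = {(i, j): (i + 2, j + 2) for (i, j) in S}

    print(sorted(S))   # [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2)]
    print(M[(1, 1)])   # (3, 3): the image of S1(1, 1) under M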
12 Three-Layer IR
12.1 Layer I: Abstract Computation Layer
The first layer is a union of computation sets such that each computation set describes one statement in the program. Each computation set is defined as follows:
{N1(s⃗) | f(s⃗, p⃗)} : g(N2(s⃗), N3(s⃗), . . . , N4(s⃗))
where N1(s⃗) is a computation that has the name N1, g(N2(s⃗), N3(s⃗), . . . , N4(s⃗)) is the expression that the computation computes, and f(s⃗, p⃗) is a Presburger formula that evaluates to true if and only if s⃗ is an element of the set for the given parameters p⃗.
12.2 Layer II: Computation Placement Layer
The second layer is identical to the first layer except that computations in this layer are ordered based on their lexicographical order.

12.3 Layer III: Concrete Computation Layer
The third layer is a union of computation sets and a set of access relations. The computation sets are identical to the Layer II computation sets except that new allocation/deallocation statements are added. The set of access relations is described as follows:
{N1(s⃗) → B(o⃗), (s⃗, o⃗) ∈ Z^{d1} × Z^{d2} | f(s⃗, o⃗, p⃗)}
where N1(s⃗) is a computation mapped to the buffer element B[o⃗] and f(s⃗, o⃗, p⃗) is a Presburger formula that evaluates to true if and only if there is a relation from s⃗ to o⃗ for the given parameters p⃗.

12.4 Time-Processor Vectors
The time-space vector in Layer II is a vector that indicates the logical time of execution of computations and the processor on which they should be executed. Each one of those vectors has a name associated to it (the name of the computation). S1(0, 0, 0), S2(0, 0, 1), S1(i, j, 0) and S2(i (cpu), j, 1) are examples of time-space vectors representing computations in Layer II. In general, the time-space vector has two types of dimensions: time dimensions and space dimensions. The time dimensions provide the logical order of execution of the computations, while the space dimensions indicate on which processor the computations should be executed. In the previous example, the first three vectors have time dimensions only, while the last vector has one space dimension. We use a tag to indicate that a given dimension is a space dimension; this tag indicates mainly the type of processor to which the computations are mapped.
Assuming that we have two time-space vectors and we want to know which of the two executes first, all we need to do is compare the two vectors lexicographically³. In the example, S1(0, 0, 0) precedes S2(0, 0, 1) lexicographically, so S1(0, 0, 0) is scheduled to be executed before S2(0, 0, 1). The ability to add dimensions and reorder them freely enables the expression of multiple possible mappings from the original iteration space of the computations to complex execution scenarios. Figure 14 provides examples of different optimizations for a simple algorithm and shows the time-space vectors used to express those optimizations. Each of the dimensions of the vector can be an indexed variable, distributing the computation over that dimension, or a constant, providing a lexical ordering between statements. The algorithms will be using a custom intermediate representation within each DSL; however, we use a classical imperative language representation to describe them in this paper. A value can be annotated by a processor type, indicating where that computation will be placed and that the corresponding dimension will be run in parallel.
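Since schedule order is exactly lexicographic order on these vectors, the comparison can be sketched in a few lines of Python (our illustration, mirroring the footnote's definition):

    def precedes(u, v):
        # True if time-space vector u lexicographically precedes vector v.
        # Python compares tuples lexicographically out of the box.
        return tuple(u) < tuple(v)

    # S1(0, 0, 0) precedes S2(0, 0, 1), so S1 is executed first.
    assert precedes((0, 0, 0), (0, 0, 1))
    # With the "(b) Sequential" vectors, iteration (i, j) of S1 precedes that of S2.
    assert precedes((1, 2, 0), (1, 2, 1))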
for (i in 0..N)
    for (j in 0..M)
        S1
        S2
(a) Original computation expressed as an imperative program

(b) Sequential:         S1: (i, j, 0)                 S2: (i, j, 1)
(c) Transposed:         S1: (j, i, 0)                 S2: (j, i, 1)
(d) Inner loop fission: S1: (i, 0, j)                 S2: (i, 1, j)
(e) Outer loop fission: S1: (0, i, j)                 S2: (1, i, j)
(f) Loop split:         S1: (i/N, i%N, j, 0)          S2: (i/N, i%N, j, 1)
(g) ... & permuted:     S1: (i/N, j, i%N, 0)          S2: (i/N, j, i%N, 1)
(h) Outer parallel:     S1: (i%P (cpu), j, i/P, 0)    S2: (i%P (cpu), j, i/P, 1)
(i) Inner vectorized:   S1: (i, j/4, 0, j%4 (vec))    S2: (i, j/4, 1, j%4 (vec))

Figure 14. For a simple loop nest with two statements, examples of different time-processor vectors leading to many possible execution arrangements.

³ A time-space vector (i_1, . . . , i_k, . . . , i_n) lexicographically precedes another time-space vector (i′_1, . . . , i′_k, . . . , i′_n) if and only if ∃k ∈ N such that i_1 = i′_1 ∧ i_2 = i′_2 ∧ · · · ∧ i_k < i′_k.
Adaptive global thresholding on the sphere
Claudio Durastanti^{a,1}
arXiv:1601.02844v2 [] 26 Jul 2016
^a Fakultät für Mathematik, Ruhr-Universität Bochum
Abstract
This work is concerned with the study of the adaptivity properties of nonparametric regression estimators over the d-dimensional sphere within the global
thresholding framework. The estimators are constructed by means of a form
of spherical wavelets, the so-called needlets, which enjoy strong concentration
properties in both harmonic and real domains. The author establishes the convergence rates of the Lp -risks of these estimators, focussing on their minimax
properties and proving their optimality over a scale of nonparametric regularity
function spaces, namely, the Besov spaces.
Keywords: Global thresholding, needlets, spherical data, nonparametric
regression, U-statistics, Besov spaces, adaptivity.
2010 MSC: 62G08, 62G20, 65T60
1. Introduction
The purpose of this paper is to establish adaptivity for the L^p-risk of regression function estimators in the nonparametric setting over the d-dimensional sphere S^d. The optimality of the L^p-risk is established by means of global thresholding techniques and spherical wavelets known as needlets.
Let (X1 , Y1 ), . . . , (Xn , Yn ) be independent pairs of random variables such
that, for each i ∈ {1, . . . , n}, Xi ∈ Sd and Yi ∈ R. The random variables
X1 , . . . , Xn are assumed to be mutually independent and uniformly distributed
locations on the sphere. It is further assumed that, for each i ∈ {1, . . . , n},
Yi = f (Xi ) + εi ,
(1)
where f : S^d → R is an unknown bounded function, i.e., there exists M > 0 such that
sup_{x∈S^d} |f(x)| ≤ M < ∞.   (2)
Moreover, the random variables ε_1, . . . , ε_n in Eq. (1) are assumed to be mutually
independent and identically distributed with zero mean. Roughly speaking,
Email address: claudio.durastanti@gmail.com (Claudio Durastanti)
1 The author is supported by Deutsche Forschungsgemeinschaft (DFG) - GRK 2131, "High-dimensional Phenomena in Probability - Fluctuations and Discontinuity".
Preprint submitted to Journal of Multivariate Analysis, February 19, 2018
they can be viewed as the observational errors and in what follows, they will be
assumed to be sub-Gaussian.
In this paper, we study the properties of nonlinear global hard thresholding
estimators, in order to establish the optimal rates of convergence of Lp -risks for
functions belonging to the so-called Besov spaces.
1.1. An overview of the literature
In recent years, the issue of minimax estimation in nonparametric settings
has received considerable attention in the statistical inference literature. The
seminal contribution in this area is due to Donoho et al. [7]. In this paper, the
authors provide nonlinear wavelet estimators for density functions on R, lying
over a wide nonparametric regularity function class, which attain optimal rates
of convergence up to a logarithmic factor. Following this work, the interaction
between wavelet systems and nonparametric function estimation has led to a
considerable amount of developments, mainly in the standard Euclidean framework; see, e.g., [3, 5, 24, 26, 27, 28, 30] and the textbooks [22, 44] for further
details and discussions.
More recently, thresholding methods have been applied to broader settings.
In particular, nonparametric estimation results have been achieved on Sd by using a second generation wavelet system, namely, the spherical needlets. Needlets
were introduced by Narcowich et al. [39, 40], while their stochastic properties
dealing with various applications to spherical random fields were examined in
[2, 6, 34, 35, 36]. Needlet-like constructions were also established over more
general manifolds by Geller and Mayeli [18, 19, 20, 21], Kerkyacharian et al.
[25] and Pesenson [41] among others, and over spin fiber bundles by Geller and
Marinucci [16, 17].
In the nonparametric setting, needlets have found various applications on
directional statistics. Baldi et al. [1] established minimax rates of convergence
for the Lp -risk of nonlinear needlet density estimators within the hard local
thresholding paradigm, while analogous results concerning regression function
estimation were established by Monnier [38]. The block thresholding framework
was investigated in Durastanti [9]. Furthermore, the adaptivity of nonparametric regression estimators of spin function was studied in Durastanti et al. [10].
In this case, the regression function takes as its values algebraical curves lying
on the tangent plane for each point on S2 and the wavelets used are the so-called
spin (pure and mixed) needlets; see Geller and Marinucci [16, 17].
The asymptotic properties of other estimators for spherical data, not concerning the needlet framework, were investigated by Kim and Koo [31, 32, 33],
while needlet-like nearly-tight frames were used in Durastanti [8] to establish
the asymptotic properties of density function estimators on the circle. Finally,
in Gautier and Le Pennec [15], the adaptive estimation by needlet thresholding
was introduced in the nonparametric random coefficients binary choice model.
Regarding the applications of these methods in practical scenarios, see, e.g.,
[13, 14, 23], where they were fruitfully applied to some astrophysical problems,
concerning, for instance, high-energy cosmic rays and Gamma rays.
1.2. Main results
Consider the regression model given in Eq. (1) and let {ψj,k : j ≥ 0, k =
1, . . . , Kj } be the set of d-dimensional spherical needlets. Roughly speaking,
j and Kj denote the resolution level j and the cardinality of needlets at the
resolution level j, respectively. The regression function f can be rewritten in
terms of its needlet expansion. Namely, for all x ∈ Sd , one has
f(x) = Σ_{j≥0} Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}(x),
where {βj,k : j ≥ 0, k = 1, . . . , Kj } is the set of needlet coefficients.
For each j ≥ 0 and k ∈ {1, . . . , Kj }, a natural unbiased estimator for βj,k is
given by the corresponding empirical needlet coefficient, viz.
β̂_{j,k} = (1/n) Σ_{i=1}^{n} Y_i ψ_{j,k}(X_i);   (3)
see, e.g., Baldi et al. [1] and Härdle et al. [22]. Therefore, the global thresholding
needlet estimator of f is given, for each x ∈ Sd , by
f̂_n(x) = Σ_{j=0}^{J_n} τ_j Σ_{k=1}^{K_j} β̂_{j,k} ψ_{j,k}(x),   (4)
where τ_j is a nonlinear threshold function comparing a given j-dependent statistic Θ̂_j(p), built on a subsample of p < n observations, to a threshold based on the observational sample size. If Θ̂_j(p) is above the threshold, the whole j-level is kept; otherwise it is discarded.
Loosely speaking, this procedure allows one to delete the coefficients corresponding to a resolution level j whose contribution to the reconstruction of the
regression function f is not clearly distinguishable from the noise. Following
Kerkyacharian et al. [30], we consider the so-called hard thresholding framework, defined as
τ_j = τ_j(p) = 1{Θ̂_j(p) ≥ B^{dj} n^{−p/2}},
where p ∈ N is even. Further details regarding the statistic Θ̂_j(p) will be discussed in Section 3.4, where the choice of the threshold B^{dj} n^{−p/2} will also be motivated.
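For illustration only (this is our sketch, not the author's code), Eqs. (3) and (4) translate into the following numpy skeleton, where psi(j, k, x), tau(j) and K(j) are assumed to be supplied by the user:

    import numpy as np

    def empirical_coeff(j, k, X, Y, psi):
        # Eq. (3): beta_hat_{j,k} = (1/n) * sum_i Y_i * psi_{j,k}(X_i)
        return np.mean(Y * psi(j, k, X))

    def global_estimator(x, X, Y, psi, tau, J_n, K):
        # Eq. (4): whole resolution levels are kept or discarded via tau(j) in {0, 1}
        total = 0.0
        for j in range(J_n + 1):
            if tau(j):
                total += sum(empirical_coeff(j, k, X, Y, psi) * psi(j, k, x)
                             for k in range(1, K(j) + 1))
        return total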
For the rest of this section, we consider Θ̂_j(p) as an unbiased statistic of |β_{j,1}|^p + · · · + |β_{j,K_j}|^p. The so-called truncation bandwidth J_n, on the other hand, is the highest frequency on which the empirical coefficients β̂_{j,1}, . . . , β̂_{j,K_j} are computed. The optimal choice of the truncation level is J_n = log_B(n^{1/d}); for details, see Section 3. This allows the error due to the approximation of f, which is an infinite sum with respect to j, to be controlled by a finite sum, such as the estimator f̂_n.
Our objective is to estimate the global error measure for the regression estimator f̂_n. For this reason, we study the worst possible performance of the L^p-risk over a so-called nonparametric regularity class {F_α : α ∈ A} of function spaces, i.e.,
R_n(f̂_n; F_α) = sup_{f∈F_α} E‖f̂_n − f‖^p_{L^p(S^d)}.
Recall that an estimator f̂_n is said to be adaptive for the L^p-risk and for the scale of classes {F_α : α ∈ A} if, for every α ∈ A, there exists a constant c_α > 0 such that
E‖f̂_n − f‖^p_{L^p(S^d)} ≤ c_α R_n(f̂_n; F_α);
see, e.g., [1, 22, 30].
For r > 0 and for p ∈ [1, r], we will establish that the regression estimator f̂_n is adaptive for the class of Besov spaces B^s_{p,q}, where 1 ≤ q ≤ ∞ and d/p ≤ s < r + 1. Finally, let R ∈ (0, ∞) be the radius of the Besov ball on which f is defined. The proper choice of r will be motivated in Section 2.1. Our main result is described by the following theorem.
Theorem 1.1. Given r ∈ (1, ∞), let p ∈ [1, r]. Also, let f̂_n be given by Eq. (4), with J_n = log_B n^{1/d}. Then, for 1 ≤ q ≤ ∞, d/p ≤ s < r + 1 and 0 < R < ∞, there exists C > 0 such that
sup_{f∈B^s_{r,q}(R)} E‖f̂_n − f‖^p_{L^p(S^d)} ≤ C n^{−sp/(2s+d)}.
The behavior of the L∞ -risk function will be studied separately in Section 3
and the analogous result is described in Theorem 3.2. Moreover, the details
concerning the choice of r will be presented in Remark 3.1 and other properties
of Lp -risk functions, such as optimality, will be discussed in Remark 3.3.
1.3. Comparison with other results
The bound given in Eq. (12) is consistent with the results of Kerkyacharian
et al. [30], where global thresholding techniques were introduced on R. As far
as nonparametric inference over spherical datasets is concerned, our results can
be viewed as an alternative proposal to the existing nonparametric regression
methods (see, e.g., [1, 9, 10, 38]), related to the local and block thresholding
procedures.
Recall that in the local thresholding paradigm, each empirical estimator β̂_{j,k} is compared to a threshold τ_{j,k} and is, therefore, kept or discarded according to whether its absolute value is above or below τ_{j,k}, i.e., the threshold function is given by 1{|β̂_{j,k}| ≥ τ_{j,k}}. Typically, the threshold is chosen such that τ_{j,k} = κ(ln n/n), where κ depends explicitly on two parameters, namely, the radius R of the Besov ball on which the function f is defined and its supremum M; see, e.g., Baldi et al. [1]. An alternative and partially data-driven choice for κ is proposed by Monnier [38], i.e., here
κ = (κ_0/n) Σ_{i=1}^{n} ψ²_{j,k}(X_i).
Even if this stochastic approach is proved to outperform the deterministic one, the threshold still depends on both R and M, which control κ_0. Also, according to the results established on R (see Härdle et al. [22]), local techniques entail nearly optimal rates for the L^p-risks over a wide variety of regularity function spaces. In this case, the regression function f belongs to B^s_{p,q}(R), where s ≥ d/r, p ∈ [1, ∞], q ∈ [1, ∞] and 0 < R < ∞ (cf. [1, 10, 22]). However, these adaptive rates of convergence are achieved at the expense of having an extra logarithmic term and of requiring explicit knowledge of the radius of the Besov balls on which f is defined, in order to establish an optimal threshold.
As far as block thresholding is concerned, for any fixed resolution level this procedure collects the coefficients β̂_{j,1}, . . . , β̂_{j,K_j} into ℓ = ℓ(n) blocks, denoted B_{j,1}, . . . , B_{j,ℓ}, of dimension depending on the sample size. Each block is then compared to a threshold and is retained or discarded. This method has an exact convergence rate (i.e., without the logarithmic extra term), although it requires explicit knowledge of the Besov radius R. Furthermore, the estimator is adaptive only over a narrower subset of the scale of Besov spaces, the so-called regular zone; see Härdle et al. [22]. The construction of blocks on S^d can also be a difficult procedure, as it requires a precise knowledge of the pixelization of the sphere, namely, the structure of the subregions into which the sphere is partitioned, in order to build spherical wavelets.
On the other hand, the global techniques presented in this paper do not
require any knowledge regarding the radius of Besov ball and have exact optimal
convergence rates even over the narrowest scale of regularity function spaces.
1.4. Plan of the paper
This paper is organized as follows. Section 2 presents some preliminary
results, such as the construction of spherical needlet frames on the sphere, Besov
spaces and their properties. In Section 3, we describe the statistical methods
we apply within the global thresholding paradigm. This section also includes an
introduction to the properties of sub-Gaussian random variables and of the U-statistic Θ̂_j(p), which are key for establishing the thresholding procedure.
Section 4 provides some numerical evidence. Finally, the proofs of all of our
results are collected in Section 5.
2. Preliminaries
This section presents details concerning the construction of needlet frames,
the definition of spherical Besov spaces and their properties. In what is to follow
the main bibliographical references are [1, 2, 7, 21, 22, 24, 37, 39, 40].
2.1. Harmonic analysis on Sd and spherical needlets
Consider the simplified notation L²(S^d) = L²(S^d, dx), where dx is the uniform Lebesgue measure over S^d. Also, let H_ℓ be the restriction to S^d of
the harmonic homogeneous polynomials of degree ℓ; see, e.g., Stein and Weiss [43]. Thus, the following decomposition holds:
L²(S^d) = ⊕_{ℓ=0}^{∞} H_ℓ.
An orthonormal basis for H_ℓ is provided by the set of spherical harmonics {Y_{ℓ,m} : m = 1, . . . , g_{ℓ,d}}, of dimension g_{ℓ,d} given by
g_{ℓ,d} = ((ℓ + η_d)/η_d) · binom(ℓ + 2η_d − 1, ℓ),   η_d = (d − 1)/2.
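As a quick check of this formula, for d = 2 one has η_2 = 1/2, so that g_{ℓ,2} = ((ℓ + 1/2)/(1/2)) · binom(ℓ, ℓ) = 2ℓ + 1, recovering the familiar dimension of the spaces of spherical harmonics on S².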
For any function f ∈ L²(S^d), we define the Fourier coefficients as
a_{ℓ,m} := ∫_{S^d} Y_{ℓ,m}(x) f(x) dx,
such that the kernel operator denoting the orthogonal projection over H_ℓ is given, for all x ∈ S^d, by
P_{ℓ,d} f(x) = Σ_{m=1}^{g_{ℓ,d}} a_{ℓ,m} Y_{ℓ,m}(x).
Also, let the measure of the surface of S^d be given by
ω_d = 2π^{(d+1)/2} / Γ((d + 1)/2).
The kernel associated to the projector P_{ℓ,d} links spherical harmonics to the Gegenbauer polynomial of parameter η_d and order ℓ, labelled by C_ℓ^{(η_d)}. Indeed, the following summation formula holds:
P_{ℓ,d}(x_1, x_2) = Σ_{m=1}^{g_{ℓ,d}} Y_{ℓ,m}(x_1) Y_{ℓ,m}(x_2) = ((ℓ + η_d)/(η_d ω_d)) C_ℓ^{(η_d)}(⟨x_1, x_2⟩),
where ⟨·, ·⟩ is the standard scalar product on R^{d+1}; see, e.g., Marinucci and Peccati [37].
Following Narcowich et al. [40], K_ℓ = ⊕_{i=0}^{ℓ} H_i is the linear space of homogeneous polynomials on S^d of degree smaller than or equal to ℓ; see also [1, 37, 39]. Thus, there exist a set of positive cubature points Q_ℓ ⊂ S^d and a set of cubature weights {λ_ξ}, indexed by ξ ∈ Q_ℓ, such that, for any f ∈ K_ℓ,
∫_{S^d} f(x) dx = Σ_{ξ∈Q_ℓ} λ_ξ f(ξ).
In what follows, the notation a ≈ b denotes that there exist c_1, c_2 > 0 such that c_1 b ≤ a ≤ c_2 b. For a fixed resolution level j and a scale parameter B, let K_j = card(Q_{[2B^{j+1}]}). Therefore, {ξ_{j,k} : k = 1, . . . , K_j} is the set of cubature points associated to the resolution level j, while {λ_{j,k} : k = 1, . . . , K_j} contains the corresponding cubature weights. These are typically chosen such that
K_j ≈ B^{dj}  and  λ_{j,k} ≈ B^{−dj} for all k ∈ {1, . . . , K_j}.
Define the real-valued weight (or window) function b on (0, ∞) so that:
(i) b has compact support in [B^{−1}, B];
(ii) the partition of unity property holds, namely, Σ_{j≥0} b²(ℓ/B^j) = 1 for ℓ ≥ B;
(iii) b ∈ C^ρ((0, ∞)) for some ρ ≥ 1.
Remark 2.1. Note that ρ can be either a positive integer or equal to ∞. In
the first case, the function b(·) can be built by means of a standard B-spline approach, using linear combinations of the so-called Bernstein polynomials, while
in the other case, it is constructed by means of integration of scaled exponential functions (see also Section 4). Further details can be found in the textbook
Marinucci and Peccati [37].
For any j ≥ 0 and k ∈ {1, . . . , K_j}, spherical needlets are defined as
ψ_{j,k}(x) = √λ_{j,k} Σ_{ℓ≥0} b(ℓ/B^j) P_{ℓ,d}(x, ξ_{j,k}).
Spherical needlets feature some important properties descending from the structure of the window function b. Using the compactness of the frequency domain, it follows that ψ_{j,k} is different from zero only on a finite set of frequencies ℓ, so that we can rewrite the spherical needlets as
ψ_{j,k}(x) = √λ_{j,k} Σ_{ℓ∈Λ_j} b(ℓ/B^j) P_{ℓ,d}(x, ξ_{j,k}),
where Λ_j = {u : u ∈ [B^{j−1}, B^{j+1}]} and [u], u ∈ R, denotes the integer part of u. From the partition of unity property, the spherical needlets form a tight frame over S^d with unitary tightness constant. For f ∈ L²(S^d),
‖f‖²_{L²(S^d)} = Σ_{j≥0} Σ_{k=1}^{K_j} |β_{j,k}|²,
where
β_{j,k} = ∫_{S^d} f(x) ψ_{j,k}(x) dx   (5)
are the so-called needlet coefficients. Therefore, we can define the following reconstruction formula (holding in the L²-sense): for all x ∈ S^d,
f(x) = Σ_{j≥0} Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}(x).
From the differentiability of b, we obtain the following quasi-exponential localization property: for x ∈ S^d and any η ∈ N such that η ≤ ρ, there exists c_η > 0 such that
|ψ_{j,k}(x)| ≤ c_η B^{jd/2} / {1 + B^{jd/2} d(x, ξ_{j,k})}^η,   (6)
where d(·, ·) denotes the geodesic distance over S^d.
Roughly speaking, |ψ_{j,k}(x)| ≈ B^{jd/2} if x belongs to the pixel of area B^{−dj} surrounding the cubature point ξ_{j,k}; otherwise, it is almost negligible. The localization result yields a similar boundedness property for the L^p-norm, which is crucial for our purposes. In particular, for any p ∈ [1, ∞) there exist two constants c_p, C_p > 0 such that
c_p B^{jd(1/2 − 1/p)} ≤ ‖ψ_{j,k}‖_{L^p(S^d)} ≤ C_p B^{jd(1/2 − 1/p)},   (7)
and there exist two constants c_∞, C_∞ > 0 such that
c_∞ B^{jd/2} ≤ ‖ψ_{j,k}‖_{L^∞(S^d)} ≤ C_∞ B^{jd/2}.
According to Lemma 2 in Baldi et al. [1], the following two inequalities hold. For every 0 < p ≤ ∞,
‖Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}‖_{L^p(S^d)} ≤ c B^{jd(1/2 − 1/p)} ‖β_{j,·}‖_{ℓ^p},   (8)
and for every 1 ≤ p ≤ ∞,
‖β_{j,·}‖_{ℓ^p} B^{jd(1/2 − 1/p)} ≤ c ‖f‖_{L^p(S^d)},
where ℓ^p denotes the space of p-summable sequences. The generalization to the case p = ∞ is trivial.
The following lemma presents a result based on the localization property.
Lemma 2.1. For x ∈ S^d, let ψ_{j,k}(x) be the spherical needlets defined above. Then, for q ≥ 2, k_{i_1} ≠ k_{i_2} for i_1 ≠ i_2 ∈ {1, . . . , q}, and for any η ≥ 2, there exists C_η > 0 such that
∫_{S^d} ∏_{i=1}^{q} |ψ_{j,k_i}(x)| dx ≤ C_η B^{dj(q−1)} / (1 + B^{dj} Δ)^{η(q−1)},
where
Δ = min_{i_1,i_2∈{1,...,q}, i_1≠i_2} d(ξ_{j,k_{i_1}}, ξ_{j,k_{i_2}}).
Remark 2.2. As discussed in Geller and Pesenson [21] and Kerkyacharian et al. [25], needlet-like wavelets can be built over more general spaces, namely, over compact manifolds. In particular, let {M, g} be a smooth compact homogeneous manifold of dimension d, with no boundaries. For the sake of simplicity, we assume that there exists a Laplace–Beltrami operator on M with respect to the action g, labelled by Δ_M. The set {γ_q : q ≥ 0} contains the eigenvalues of Δ_M associated to the eigenfunctions {u_q : q ≥ 0}, which are orthonormal with respect to the Lebesgue measure over M and form an orthonormal basis of L²(M); see [20, 21]. Every function f ∈ L²(M) can be described in terms of its harmonic coefficients, given by a_q = ⟨f, u_q⟩_{L²(M)}, so that, for all x ∈ M,
f(x) = Σ_{q≥1} a_q u_q(x).
Therefore, it is possible to define a wavelet system over {M, g} describing a tight frame over M along the same lines as in Narcowich et al. [40] for S^d; see also [21, 25, 41] and the references therein, such as Geller and Mayeli [19, 20]. Here we just provide the definition of the needlet (scaling) function on M, given by
ψ_{j,k}(x) = √λ_{j,k} Σ_{q=B^{j−1}}^{B^{j+1}} b(√(−γ_q)/B^j) u_q(x) ū_q(ξ_{j,k}),
where in this case the set {ξ_{j,k}, λ_{j,k}} characterizes a suitable partition of M, given by an ε-lattice on M, with ε = √λ_{j,k}. Further details and technicalities concerning ε-lattices can be found in Pesenson [41]. Analogously to the spherical case, for f ∈ L²(M) and arbitrary j ≥ 0 and k ∈ {1, . . . , K_j}, the needlet coefficient corresponding to ψ_{j,k} is given by
β_{j,k} = ⟨f, ψ_{j,k}⟩_{L²(M)} = √λ_{j,k} Σ_{q=B^{j−1}}^{B^{j+1}} b(√(−γ_q)/B^j) a_q u_q(ξ_{j,k}).
These wavelets preserve all the properties featured by needlets on the sphere. As shown in the following sections, the main results presented here do not depend strictly on the underlying manifold (namely, the sphere); rather, they can easily be extended to more general frameworks such as compact manifolds, where the concentration properties of the wavelets and the smooth approximation properties of Besov spaces still hold.
2.2. Besov space on the sphere
Here we will recall the definition of spherical Besov spaces and their main
approximation properties for wavelet coefficients. We refer to [1, 10, 22, 39] for
more details and further technicalities.
Suppose that one has a scale of functional classes G_t, depending on the q-dimensional set of parameters t ∈ T ⊆ R^q. The approximation error G_t(f; p) arising from the replacement of f by an element g ∈ G_t is given by
G_t(f; p) = inf_{g∈G_t} ‖f − g‖_{L^p(S^d)}.
Therefore, the Besov space B^s_{p,q} is the space of functions such that f ∈ L^p(S^d) and
Σ_{t≥0} (1/t) {t^s G_t(f; p)}^q < ∞,
which is equivalent to
Σ_{j≥0} {B^{js} G_{B^j}(f; p)}^q < ∞.
The function f belongs to the Besov space B^s_{p,q} if and only if
(Σ_{k=1}^{K_j} {|β_{j,k}| ‖ψ_{j,k}‖_{L^p(S^d)}}^p)^{1/p} = B^{−js} w_j,   (9)
where w_j ∈ ℓ^q, the standard space of q-power summable infinite sequences.
Loosely speaking, the parameters s ≥ 0, 1 ≤ p ≤ ∞ and 1 ≤ q ≤ ∞ of the Besov space B^s_{p,q} can be viewed as follows: given B > 1, the parameter p denotes the p-norm of the wavelet coefficients taken at a fixed resolution j, the parameter q describes the weighted q-norm taken across the scale j, and the parameter s controls the smoothness of the rate of decay across the scale j. In view of Eq. (7), the Besov norm is defined as
‖f‖_{B^s_{p,q}} = ‖f‖_{L^p(S^d)} + [Σ_{j≥0} B^{jq{s+d(1/2−1/p)}} (Σ_{k=1}^{K_j} |β_{j,k}|^p)^{q/p}]^{1/q}
= ‖f‖_{L^p(S^d)} + ‖B^{j{s+d(1/2−1/p)}} ‖β_{j,·}‖_{ℓ^p}‖_{ℓ^q},
for q ≥ 1. The extension to the case q = ∞ is trivial.
We conclude this section by introducing the Besov embedding, discussed in [1, 29, 30] among others. For p < r, one has
B^s_{r,q} ⊂ B^s_{p,q}  and  B^s_{p,q} ⊂ B^{s−d(1/p−1/r)}_{r,q},
or, equivalently,
Σ_{k=1}^{K_j} |β_{j,k}|^p ≤ (Σ_{k=1}^{K_j} |β_{j,k}|^r)^{p/r} K_j^{1−p/r};   (10)
Σ_{k=1}^{K_j} |β_{j,k}|^r ≤ (Σ_{k=1}^{K_j} |β_{j,k}|^p)^{r/p}.   (11)
Proofs and further details can be found, for instance, in [1, 10].
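(For instance, Eq. (10) is Hölder's inequality applied to the vector (|β_{j,1}|^p, . . . , |β_{j,K_j}|^p) with conjugate exponents r/p and r/(r − p), while Eq. (11) follows from the monotonicity of the ℓ^p norms.)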
3. Global thresholding with spherical needlets
This section provides a detailed description of the global thresholding technique applied to the nonparametric regression problem on the d-dimensional
sphere. We refer to [12, 22, 30] for an extensive description of global thresholding methods and to [1, 10] for further details on nonparametric estimation in
the spherical framework.
3.1. The regression model
Recall the regression model given by Eq. (1), i.e., for all i ∈ {1, . . . , n},
Yi = f (Xi ) + εi .
While {X1 , . . . , Xn } denotes the set of uniformly sampled random directions
over Sd , {Y1 , . . . , Yn } is the set of the independent observations which are related
to {X1 , . . . , Xn } through the regression function f and affected by {ε1 , . . . , εn },
which is the set of the observational errors. The independent and identically
distributed random variables ε1 , . . . , εn are such that, for all i ∈ {1, . . . , n},
E(ε_i) = 0,  E(ε_i²) = σ_ε² < ∞,
and they are assumed to be sub-Gaussian. Further details are given in Section 3.2. Assume that f ∈ B^s_{p,q}, d/p ≤ s < r + 1, 1 ≤ p ≤ r and 1 ≤ q ≤ ∞, where r is fixed, and that there exists R > 0 such that ‖f‖_{B^s_{p,q}} ≤ R. As mentioned in Section 1.2 and Section 2, the regression function can be expanded in terms of needlet coefficients as
f(x) = Σ_{j≥0} Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}(x),
where the β_{j,k} are given in Eq. (5).
Remark 3.1. As discussed in Section 1.3, we do not require explicit knowledge
of the Besov radius R. Although it can be difficult to determine r explicitly,
we suggest the following criterion. Consider Remark 2.1; if ρ < ∞, we choose
r = ρ (see again [30]). If ρ = ∞, we choose r = B d(Jn +1) empirically, using the
so-called vanishing moment condition on Sd , properly adapted for the needlet
framework; see, e.g., Schröder and Sweldens [42].
3.2. The observational noise
Following Durastanti et al. [10], we assume that ε_1, . . . , ε_n follow a sub-Gaussian distribution; see also Buldygin and Kozachenko [4]. A random variable ε is said to be sub-Gaussian of parameter a if there exists a ≥ 0 such that, for all λ ∈ R,
E(e^{λε}) ≤ e^{a²λ²/2}.
Sub-Gaussian random variables are characterized by the sub-Gaussian standard, given by
ζ(ε) := inf{a ≥ 0 : E(e^{λε}) ≤ e^{a²λ²/2}, λ ∈ R},
which is finite. As proved in [4],
ζ(ε) = sup_{λ≠0} {2 ln E(e^{λε}) / λ²}^{1/2};  E(e^{λε}) ≤ exp(λ² ζ²(ε) / 2).
Following Lemma 1.4 in [4], for p > 0,
E(ε) = 0;  E(ε²) ≤ ζ²(ε);  E(|ε|^p) ≤ 2 (p/e)^{p/2} ζ^p(ε).
Therefore, sub-Gaussian random variables are characterized by the same moment inequalities and concentration properties featured by zero-mean Gaussian or bounded random variables.
Remark 3.2. In order to establish the probabilistic bounds described in Sections 3.3 and 3.4, it would be sufficient for ε_1, . . . , ε_n to be zero-mean independent random variables with finite absolute pth moment. However, we include the notion of sub-Gaussianity in order to be consistent with the existing literature; see [10]. Furthermore, sub-Gaussianity covers a wide class of random variables, including Gaussian and bounded random variables and, in general, all random variables whose moment generating function is bounded by the moment generating function of a centered Gaussian random variable of variance a². Hence the term "sub-Gaussian."
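Two standard examples may help fix ideas: if ε ∼ N(0, σ²), then E(e^{λε}) = e^{σ²λ²/2} exactly, so ζ(ε) = σ; and if ε is bounded, |ε| ≤ a almost surely with E(ε) = 0, then Hoeffding's lemma gives E(e^{λε}) ≤ e^{a²λ²/2}, so ζ(ε) ≤ a.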
3.3. The estimation procedure
We note again that the method established here can be viewed as an extension of global thresholding techniques to the needlet regression function estimation. In this sense, our results are strongly related to those presented in
[1, 10, 30], as discussed in Section 1.3.
For any j ≥ 0 and k ∈ {1, . . . , Kj }, the empirical needlet estimator is given
by
β̂_{j,k} = (1/n) Σ_{i=1}^{n} Y_i ψ_{j,k}(X_i),
and it is unbiased, i.e.,
E(β̂_{j,k}) = (1/n) Σ_{i=1}^{n} [E{f(X_i) ψ_{j,k}(X_i)} + E(ε_i) E{ψ_{j,k}(X_i)}] = β_{j,k}.
The empirical needlet coefficients are moreover characterized by the following
stochastic property.
Proposition 3.1. Let βj,k and βbj,k be as in Eq. (5) and Eq. (3), respectively.
Thus, for p ≥ 1, there exists c̃p such that
E(|β̂j,k − βj,k |p ) ≤ c̃p n−p/2 .
Therefore, we define the global thresholding needlet regression estimator at
every x ∈ Sd by
f̂_n(x) = Σ_{j=0}^{J_n} τ_j Σ_{k} β̂_{j,k} ψ_{j,k}(x);
see Eq. (4). Recall now the main results, stated in Section 1.
Theorem 1.1. Given r ∈ (1, ∞), let p ∈ [1, r]. Also, let f̂_n be given by Eq. (4), with J_n = log_B n^{1/d}. Then, for 1 ≤ q ≤ ∞, d/p ≤ s < r + 1 and 0 < R < ∞, there exists C > 0 such that
sup_{f∈B^s_{r,q}(R)} E‖f̂_n − f‖^p_{L^p(S^d)} ≤ C n^{−sp/(2s+d)}.   (12)
In the nonparametric thresholding setting, the L^p-risk is generally bounded as follows:
E‖f̂_n − f‖^p_{L^p(S^d)} ≤ 2^{p−1} [E‖Σ_{j=0}^{J_n} Σ_k (β̂_{j,k} − β_{j,k}) ψ_{j,k}‖^p_{L^p(S^d)} + ‖Σ_{j>J_n} Σ_k β_{j,k} ψ_{j,k}‖^p_{L^p(S^d)}] = S + B,
where S is the stochastic error, due to the randomness of the observations, and B is the (deterministic) bias error. The so-called truncation level J_n is chosen so that B^{dJ_n} = n.
In this case the bias term does not affect the rate of convergence for s ∈ (d/p, r + 1). As far as S is concerned, its asymptotic behavior is established by means of the so-called optimal bandwidth selection, i.e., a frequency J_s such that B^{dJ_s} = n^{1/(2s+d)}; see [12, 30]. Note that, trivially, J_s < J_n. The meaning and the relevance of the optimal bandwidth selection is given in Section 5, in the proof of Theorem 1.1. However, in the next section it will also be crucial for the construction of the threshold function.
Consider now the case p = ∞. First, we have to modify the threshold function given in Eq. (13) slightly, in view of the explicit dependence on p. Hence, in the selection procedure we use the statistic Θ̂_j^∞ = Θ̂_j(1) = |β̂_{j,1}| + · · · + |β̂_{j,K_j}|, which will be compared to B^{dj} n^{−1/2}. Further details on the threshold will also be given in the next section. Under this assumption, we obtain the following result.
Theorem 3.2. Let f̂_n be given by Eq. (4). Given r ∈ (1, ∞), for any d < s < r + 1, there exists C > 0 such that
sup_{f∈B^s_{r,q}} E‖f̂_n − f‖_{L^∞(S^d)} ≤ C n^{−(s−d)/(2s+d)}.
Remark 3.3. As far as optimality is concerned, Eq. (12) in Theorem 1.1 achieves the same optimal minimax rates provided on R by Kerkyacharian et al. [30], where the established optimal rate of convergence was n^{−sp/(2s+1)}. Moreover, this rate is consistent with the results provided over the so-called regular zone by [1, 9, 10, 38] for local and block thresholding estimates by needlets on the sphere, where the rates are nearly optimal due to the presence of a logarithmic term.
Regarding the L^∞-risk function, according to Theorem 3.2, the rate established is not optimal; see, e.g., Baldi et al. [1]. In the global thresholding paradigm, a straightforward generalization of the thresholding statistic Θ̂_j(p) given by Eq. (13) is not available (see Remark 3.4). Therefore, the upper bound for the case p = ∞ is established in a different framework, which can reasonably be assumed to cause the lack of optimality.
3.4. The threshold
The construction of the threshold function τj is strictly related to Efromovich
[12] and Kerkyacharian et al. [30], where analogous results were established in
the real case. Let
Θ_j(p) = Σ_{k=1}^{K_j} |β_{j,k}|^p.
Using Eq. (9), it follows immediately that, if f ∈ B^s_{p,q},
Θ_j(p) ≤ C B^{−jp{s+d(1/2−1/p)}}.
Consider now the optimal bandwidth selection J_s. If j ≤ J_s,
B^{−jp{s+d(1/2−1/p)}} ≥ B^{dj} / n^{p/2}.
Thus, even if j ≤ J_s does not imply Θ_j(p) > B^{dj}/n^{p/2}, according to [12, 30], one has that Θ_j(p) ≥ B^{dj}/n^{p/2} implies j ≤ J_s.
Clearly, the case Θ_j(p) ≤ B^{dj}/n^{p/2}, j ≤ J_s, provides no guarantee of a better performance compared to the linear estimate, whose error is of order B^{dj}/n^{p/2}; see Härdle et al. [22]. Thus, the natural choice is to construct a threshold function that keeps the level j if and only if
Θ_j(p) ≥ B^{dj} / n^{p/2}.
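In pseudocode form (our illustration), the selection rule for a level j therefore reads:

    def keep_level(theta_hat_j, j, n, p, B, d):
        # Keep resolution level j iff the estimated energy Theta_hat_j(p)
        # exceeds the threshold B^(d*j) / n^(p/2).
        return theta_hat_j >= B**(d * j) / n**(p / 2)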
As pointed out in [12, 30], the natural estimator |β̂j,1 |p + · · · + |β̂j,Kj |p for
Θj (p) yields an extremely large bias term and, therefore, undersmoothing effects. Hence, following the procedure as suggested in [12, 30] (see also [22]) but
properly adapted to the needlet framework (see Lemma 2.1), we propose an
alternative method as described below.
Let p ∈ N be even, let Σ_p denote the set of p-dimensional vectors chosen in {1, . . . , n}^p, and let υ = (i_1, . . . , i_p) be the generic element of Σ_p. Define the U-statistic Θ̂_j(p) by
Θ̂_j(p) = binom(n, p)^{−1} Σ_{k=1}^{K_j} Σ_{υ∈Σ_p} Ψ^{⊗p}_{j,k}(X_υ, ε_υ),   (13)
where
Ψ^{⊗p}_{j,k}(X_υ, ε_υ) = ∏_{h=1}^{p} {f(X_{i_h}) + ε_{i_h}} ψ_{j,k}(X_{i_h}).
Given that the sets of variables {X_1, . . . , X_n} and {ε_1, . . . , ε_n} are independent, it can easily be seen that
E{Θ̂_j(p)} = Σ_{k=1}^{K_j} |β_{j,k}|^p = Θ_j(p).   (14)
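As an illustration (ours), for p = 2 the U-statistic can be computed without looping over all index pairs, reading Σ_p as the pairs of distinct indices and using the identity Σ_{i1≠i2} z_{i1} z_{i2} = (Σ_i z_i)² − Σ_i z_i²:

    import numpy as np

    def theta_hat_2(j, X, Y, psi, K):
        # U-statistic of Eq. (13) for p = 2, averaged over distinct index pairs.
        n = len(X)
        total = 0.0
        for k in range(1, K(j) + 1):
            z = Y * psi(j, k, X)                 # z_i = Y_i * psi_{j,k}(X_i)
            total += (z.sum()**2 - (z**2).sum()) / (n * (n - 1))
        return total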
Remark 3.4. As mentioned in the previous section, in the case p = ∞ we lack a straightforward extension of Θ̂_j(p). Hence, we choose
Θ̂_j^∞ = Σ_{k=1}^{K_j} |β̂_{j,k}|,
so that the threshold function is given by
τ_j = 1{Θ̂_j^∞ ≥ B^{dj} / n^{1/2}}.
Our purpose is to establish two probabilistic bounds related to the mth moment and to the mth centered moment of Θ̂_j(p), respectively. We have that
E[{Θ̂_j(p)}^m] = binom(n, p)^{−m} Σ_{υ_1,...,υ_m∈Σ_p} Σ_{k_1,...,k_m} E[∏_{ℓ=1}^{m} Ψ^{⊗p}_{j,k_ℓ}(X_{υ_ℓ}, ε_{υ_ℓ})].
For any fixed configuration υ_1, . . . , υ_m, let the sequence c_1, . . . , c_m count, for each ℓ, the factors E([{f(X) + ε} ψ_{j,k}(X)]^ℓ) of size ℓ appearing in the expansion of E[{Θ̂_j(p)}^m]. Observe that
Σ_{ℓ=1}^{m} ℓ c_ℓ = mp.
Following Kerkyacharian et al. [30], the next results hold.
Proposition 3.3. Let Θ̂_j(p) be given by Eq. (13). Also, let p be an even integer. Then, for m ∈ N, there exists C̃_1 such that
E[{Θ̂_j(p)}^m] ≤ C̃_1 Σ_{(c_1,...,c_m)∈Γ_{m,p}} (B^{jd(mp/2 − Σ_{ℓ=1}^{m} c_ℓ)} / n^{mp/2}) B^{−j{c_1(s−d/p)−d(1−γ)}},
where Γ_{m,p} = {c_1, . . . , c_m : Σ_{ℓ=1}^{m} ℓ c_ℓ = mp}.
Proposition 3.4. Let Θ̂_j(p) and Θ_j(p) be given by (13) and (14), respectively. Also, let p, m be even integers. Then, there exists C̃_2 such that
E{|Θ̂_j(p) − Θ_j(p)|^m} ≤ C̃_2 Σ_{h=1}^{p} (B^{jd} / n^{p/2})^{mh/p} {Θ_j(p)}^{(1−h/p)m}.
Remark 3.5. According to [30], this procedure can easily be extended to the case where p is not an even natural number, by means of an interpolation method. Indeed, by fixing p_1, p_2 ∈ N, both even, we can rewrite p = δp_1 + (1 − δ)p_2, and set
Θ̂_j(p) = {Θ̂_j(p_1)}^δ {Θ̂_j(p_2)}^{1−δ}.
The following lemma is crucial for the application of our interpolation method. As in [30], we consider for the sake of simplicity just the case p_2 − p_1 = 2, so that p′ ≤ p ≤ p″, with p″ = p′ + 2.
Lemma 3.5. For any even m ∈ N,
E[{(1/n) Θ̂_j(p′) − Θ_j(p″)}^m] ≤ C̃′_2 Σ_{h=1}^{p′} {Θ_j(p″ − h)}^m n^{−mh/2}.
We conclude this section with a result regarding the behavior of Θ̂_j^∞.
Proposition 3.6. Let Θ̂_j^∞ and Θ_j^∞ be given by (13) and (14) for p = 1, respectively. Then, there exists C̃_∞ such that
E(|Θ̂_j^∞ − Θ_j^∞|²) ≤ C̃_∞ B^{dj} / n.
4. Simulations
In this section, we present the results of some numerical experiments performed over the unit circle S1 . In particular, we are mainly concerned with
the empirical evaluation of L2 -risks obtained by global thresholding techniques,
which are then compared to the L2 -loss functions for linear wavelet estimators.
As in any practical situation, the simulations are computed over finite samples and, therefore, they should be regarded as indicative rather than conclusive. Furthermore,
they can be viewed as a preliminary study to practical applications on real
data concerning estimation of the power spectrum of the Cosmic Microwave
Background radiation; see, e.g., Faÿ et al. [14].
The needlets over the circle used here are based on a weight function b which, analogously to Baldi et al. [1], is a properly rescaled primitive of the function
x ↦ exp(−1/(1 − x²));
see also Marinucci and Peccati [37].
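As a rough numerical illustration (ours; the normalization details are our assumptions), such a rescaled primitive can be computed as follows; the window b is then assembled from it as in [37]:

    import numpy as np
    from scipy.integrate import quad

    def bump(x):
        # Smooth bump supported on (-1, 1)
        return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

    total = quad(bump, -1, 1)[0]

    def primitive(t):
        # Rescaled primitive: smooth, increases from 0 at t = -1 to 1 at t = 1.
        t = min(max(t, -1.0), 1.0)
        return quad(bump, -1, t)[0] / total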
Following Theorem 1.1, we fix B = 2 and n = 2^6, 2^7, 2^8, with J_n = 6, 7, 8, respectively. The U-statistic Θ̂_j(2), corresponding to the L²-risk considered here, results in considerable computational effort, because it is built over 2,016, 8,128 or 32,640 possible combinations of needlet empirical coefficients for J_n = 6, 7, 8, respectively.
By choosing a test function F and fixing the set of locations X_1, . . . , X_n, we obtain the corresponding Y_1, . . . , Y_n by adding to F(X_i) a Gaussian noise with three different amplitudes, i.e., the noise standard deviation σ_ε is chosen to be equal to 0.25M, 0.5M or 0.75M, where M is the L^∞-norm of the test function. Therefore, the following three numerical experiments are performed.
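For instance, the data for Example 4.2 at noise level 0.5M can be generated as follows (our sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2**8                              # sample size, so J_n = 8 with B = 2
    F = lambda x: np.cos(4 * x)           # test function of Example 4.2
    M = 1.0                               # its L-infinity norm
    sigma = 0.5 * M                       # noise standard deviation

    X = rng.uniform(0.0, 2 * np.pi, n)    # uniform locations on the circle
    Y = F(X) + rng.normal(0.0, sigma, n)  # noisy observations, as in Eq. (1)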
Table 1: Example 4.2 - Values of the L²-risk

            Global                      Linear
Jn \ σε   0.25M   0.50M   0.75M  |  0.25M    0.50M    0.75M
6          7.82   65.29  108.46  |  80.23   411.38   889.15
7          1.90    9.38   67.09  |  26.75   141.41   451.61
8          1.77   18.88   54.03  |  36.69    96.82   434.80
Example 4.1. According to Baldi et al. [1], we use the uniform test function defined, for all x ∈ S¹, by
F_1(x) = 1/(4π).
In this case, for every j, k, we get β_{j,k} = 0. The performance of our model can be roughly evaluated by simply controlling how many resolution levels pass the selection procedure. For all the choices of n and σ_ε, we get τ_j = 0 for all j ∈ {1, . . . , J_n}. On the other hand, we consider a finite number of resolution levels and therefore of frequencies. Thus, it is possible that higher resolution levels, involving higher frequencies, could be selected by the thresholding procedure.
Example 4.2. In this example, we choose the function defined, for all x ∈ S¹, by
F_2(x) = cos(4x).
In this case, the test function corresponds to the real part of one of the Fourier modes over the circle (with eigenvalue 4). This choice allows us to establish whether the thresholding procedure is able to select only the suitable corresponding resolution levels as the amplitude of the noise increases. As expected, for every n and for every σ_ε, we get τ_j = 1 for j = 2 (containing the frequency k = 4) and 0 otherwise. Table 1 presents the values of the L²-risks for different values of J_n and σ_ε, while Figure 1 illustrates the graphical results for the case J_n = 8 and σ_ε = 0.5M.
Example 4.3. A more general function is chosen here, which is defined, for all x ∈ S¹, by
F_3(x) = {e^{−(x−3π/2)²} + 2 e^{−(x−2)²}} sin(−2x),
depending on a larger set of Fourier modes. In this case, Table 2 gives the values of the L²-risks corresponding to different J_n and σ_ε, while Figure 2 presents the graphical results for the case J_n = 8 and σ_ε = 0.5M. Table 3 contains, for every pair (J_n, σ_ε), the resolution levels selected by the procedure.
[Figure 1: Example 4.2, J_n = 8, σ_ε = 0.5M. Panels: test function; test function + noise; linear needlet estimator; global thresholding needlet estimator.]
Table 2: Example 4.3 - Values of the L²-risk

            Global                      Linear
Jn \ σε   0.25M   0.50M   0.75M  |  0.25M     0.50M     0.75M
6         62.01  208.94  625.02  |  227.32   1372.64   2296.28
7         58.87  150.73  277.98  |  150.94    644.31   1294.59
8         51.14   40.65      NA  |  321.05   1271.31        NA

Table 3: Example 4.3 - Values of the threshold function τ_j (selected resolution levels j)

Jn \ σε   0.25M   0.50M   0.75M
6         1       1,2     1,2,3
7         1       1,4     1,2
8         1       1,2     1,3,5
[Figure 2: Example 4.3, J_n = 8, σ_ε = 0.5M. Panels: test function; test function + noise; linear needlet estimator; global thresholding needlet estimator.]
5. Proofs
In this section, we provide proofs for the main and auxiliary results.
5.1. Proof of the main results
The proofs of Theorem 1.1 and Theorem 3.2 follow along the same lines as
the proof of Theorem 8 in Baldi et al. [1].
Proof of Theorem 1.1. Following, for instance, [1, 7, 9, 10, 22, 30] and as mentioned in Section 3.3, the L^p-risk E(‖f̂_n − f‖^p_{L^p(S^d)}) can be decomposed as the sum of a stochastic and a bias term. More specifically,
E(‖f̂_n − f‖^p_{L^p(S^d)}) ≤ 2^{p−1} [E‖Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} (τ_j β̂_{j,k} − β_{j,k}) ψ_{j,k}‖^p_{L^p(S^d)} + ‖Σ_{j>J_n} Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}‖^p_{L^p(S^d)}].
Using the definition of Besov spaces, we obtain for the bias term the following inequality:
‖Σ_{j>J_n} Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}‖^p_{L^p(S^d)} ≤ (Σ_{j>J_n} ‖Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}‖_{L^p(S^d)})^p ≤ C B^{−spJ_n} ≤ C n^{−sp/(2s+d)}.
Following Baldi et al. [1], the stochastic term can be split into four terms, i.e.,
E‖Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} (τ_j β̂_{j,k} − β_{j,k}) ψ_{j,k}‖^p_{L^p(S^d)} ≤ 4^{p−1} (Aa + Au + Ua + Uu),
where
Aa = E‖Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} (τ_j β̂_{j,k} − β_{j,k}) ψ_{j,k} 1{Θ̂_j(p) ≥ B^{dj}/n^{p/2}} 1{Θ_j(p) ≥ B^{dj}/(2n^{p/2})}‖^p_{L^p(S^d)},
Au = E‖Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} (τ_j β̂_{j,k} − β_{j,k}) ψ_{j,k} 1{Θ̂_j(p) ≥ B^{dj}/n^{p/2}} 1{Θ_j(p) ≤ B^{dj}/(2n^{p/2})}‖^p_{L^p(S^d)},
Ua = E‖Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} (τ_j β̂_{j,k} − β_{j,k}) ψ_{j,k} 1{Θ̂_j(p) ≤ B^{dj}/n^{p/2}} 1{Θ_j(p) ≥ 2B^{dj}/n^{p/2}}‖^p_{L^p(S^d)},
Uu = E‖Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} (τ_j β̂_{j,k} − β_{j,k}) ψ_{j,k} 1{Θ̂_j(p) ≤ B^{dj}/n^{p/2}} 1{Θ_j(p) ≤ 2B^{dj}/n^{p/2}}‖^p_{L^p(S^d)}.
Following Durastanti et al. [10], the labels A and U denote the regions where Θ̂_j(p) is larger and smaller than the threshold B^{dj} n^{−p/2}, respectively, whereas a and u refer to the regions where the deterministic Θ_j(p) is above and under a new threshold, given by 2^{−1} B^{dj} n^{−p/2} for a and 2B^{dj} n^{−p/2} for u. The decay of Aa and Uu depends on the properties of Besov spaces, while the bounds on Au and Ua depend on the probabilistic inequalities concerning β̂_{j,k} and Θ̂_j(p) given in Propositions 3.1, 3.3 and 3.4.
Let p ∈ N be even. Then, using the definition of the optimal bandwidth selection, we have
Aa ≤ C′_1 (J_n + 1)^{p−1} Σ_{j=0}^{J_n} Σ_{k=1}^{K_j} ‖ψ_{j,k}‖^p_{L^p(S^d)} E[|β̂_{j,k} − β_{j,k}|^p 1{Θ_j(p) ≥ B^{dj}/(2n^{p/2})}]
   ≤ C″_1 [Σ_{j=0}^{J_s} B^{jdp/2} n^{−p/2} + n^{−p/2} Σ_{j=J_s}^{J_n} Σ_{k=1}^{K_j} B^{dj(p/2−1)} Θ_j(p)/(B^{dj} n^{−p/2})]
   ≤ C‴_1 [B^{J_s dp/2} n^{−p/2} + Σ_{j=J_s}^{J_n} Σ_{k=1}^{K_j} |β_{j,k}|^p ‖ψ_{j,k}‖^p_{L^p(S^d)}]
   ≤ C_1 [B^{J_s dp/2} n^{−p/2} + B^{−J_s sp}] = C_1 n^{−sp/(2s+d)},
given that
B^{J_s dp/2} n^{−p/2} = n^{dp/(2(2s+d)) − p/2} = n^{−sp/(2s+d)}.   (15)
Similarly, for the region Au we obtain
Au ≤ C′_2 (Au_1 + Au_2),
where
Au_1 = Σ_{j=0}^{J_s} Σ_{k=1}^{K_j} ‖ψ_{j,k}‖^p_{L^p(S^d)} E(|β̂_{j,k} − β_{j,k}|^p),
Au_2 = Σ_{j=J_s}^{J_n} Σ_{k=1}^{K_j} ‖ψ_{j,k}‖^p_{L^p(S^d)} E[|β̂_{j,k} − β_{j,k}|^p 1{Θ̂_j(p) ≥ B^{dj}/n^{p/2}}].
Using Eqs. (7) and (15), it is easy to see that
Au_1 ≤ C_2 n^{−sp/(2s+d)}.
Regarding Au_2, using the Hölder inequality with 1/α′ + 1/α = 1, the generalized Markov inequality with even m ∈ N and Proposition 3.3, we obtain
Au_2 ≤ C Σ_{j=J_s}^{J_n} Σ_{k=1}^{K_j} ‖ψ_{j,k}‖^p_{L^p(S^d)} E[|β̂_{j,k} − β_{j,k}|^p 1{Θ̂_j(p) ≥ B^{dj}/n^{p/2}}]
    ≤ C Σ_{j=J_s}^{J_n} Σ_{k=1}^{K_j} ‖ψ_{j,k}‖^p_{L^p(S^d)} {E(|β̂_{j,k} − β_{j,k}|^{pα′})}^{1/α′} {Pr(Θ̂_j(p) ≥ B^{dj}/n^{p/2})}^{1/α}
    ≤ C Σ_{j=J_s}^{J_n} B^{jdp/2} n^{−p/2} [E{Θ̂_j(p)^m} / (B^{dj}/n^{p/2})^m]^{1/α}
    ≤ C Σ_{j=J_s}^{J_n} B^{jd(p/2−1)} n^{−p/2} [Σ_{(c_1,...,c_m)∈Γ_{m,p}} (B^{jd(mp/2 − Σ_ℓ c_ℓ)}/n^{mp/2}) B^{−jc_1(s−d/p)} B^{dj{(1−m)−γ}} (B^{dj}/n^{p/2})^{−m}]^{1/α}
    ≤ Au_{2,1} + Au_{2,2},
where Au_{2,1} and Au_{2,2} are defined by splitting Γ_{m,p} into the two subsets
Γ^{(+)}_{m,p} := {c_1, . . . , c_m : mp/2 − Σ_{i=1}^{m} c_i ≥ 0};  Γ^{(−)}_{m,p} := {c_1, . . . , c_m : mp/2 − Σ_{i=1}^{m} c_i ≤ 0}.
Note that 1 − γ ∈ [0, 1]. Hence we choose m, α so that m > 1 + αp/2. It can easily be verified that
Au_{2,1} ≤ C′ Σ_{j=J_s}^{J_n} B^{jdp/2} n^{−p/2} [Σ_{(c_1,...,c_m)∈Γ^{(+)}_{m,p}} B^{dj{(1−m)−γ}}]^{1/α}
       ≤ C″ B^{J_s dp/2} n^{−p/2} Σ_{(c_1,...,c_m)∈Γ^{(+)}_{m,p}} {B^{dJ_s(1−m)}}^{1/α}
       ≤ C‴ n^{−sp/(2s+d)}.
On the other hand,
Au_{2,2} ≤ C′ (B^{dJ_s}/n^{p/2}) Σ_{(c_1,...,c_m)∈Γ^{(−)}_{m,p}} B^{−J_s c_1(s−d/p)/α} B^{J_s d(mp/2 − Σ_ℓ c_ℓ)/α} B^{dJ_s(1−m−γ)/α}
        ≤ C‴ n^{−sp/(2s+d)},
and therefore,
Au ≤ C_2 n^{−sp/(2s+d)}.
Consider now Ua. We have that
Ua ≤ C′_3 [Σ_{j=0}^{J_s} ‖Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}‖^p_{L^p(S^d)} E(1{Θ̂_j(p) ≤ B^{dj}/n^{p/2}} 1{Θ_j(p) ≥ 2B^{dj}/n^{p/2}}) + Σ_{j=J_s}^{J_n} ‖Σ_{k=1}^{K_j} β_{j,k} ψ_{j,k}‖^p_{L^p(S^d)}]
   = Ua_1 + Ua_2.
It is easy to see that
Ua_2 ≤ C″_3 B^{−J_s ps} = C″_3 n^{−sp/(2s+d)}.
On the other hand, using the generalized Markov inequality, Proposition 3.4 with m = p and Eq. (15), we have that
Ua_1 = Σ_{j=0}^{J_s} B^{dj(p/2−1)} E[Θ_j(p) 1{Θ̂_j(p) ≤ B^{dj}/n^{p/2}} 1{Θ_j(p) ≥ 2B^{dj}/n^{p/2}}]
    ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} Θ_j(p) Pr{|Θ̂_j(p) − Θ_j(p)| ≥ Θ_j(p)/2} 1{Θ_j(p) ≥ 2B^{dj}/n^{p/2}}
    ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} Θ_j(p)^{1−m} E{|Θ̂_j(p) − Θ_j(p)|^m} 1{Θ_j(p) ≥ 2B^{dj}/n^{p/2}}
    ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} Θ_j(p) Σ_{ℓ=1}^{p} Θ_j(p)^{−1} (B^{dj}/n^{p/2})^{mℓ/p}
    ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} (B^{dj}/n^{p/2}) ≤ C n^{−sp/(2s+d)},
and therefore,
Ua ≤ C_3 n^{−sp/(2s+d)}.
Finally, in view of Eq. (7) and Eq. (15), we have that
Uu ≤ C′_4 [Σ_{j=0}^{J_s} B^{jd(p/2−1)} Θ_j(p) 1{Θ_j(p) ≤ 2B^{dj}/n^{p/2}} + Σ_{j=J_s}^{J_n} Σ_{k=1}^{K_j} |β_{j,k}|^p ‖ψ_{j,k}‖^p_{L^p(S^d)}]
   ≤ C″_4 [Σ_{j=0}^{J_s} B^{jdp/2} n^{−p/2} + Σ_{j=J_s}^{J_n} B^{−jsp}]
   ≤ C_4 n^{−sp/(2s+d)}.
We now need to extend these results to any p ∈ (1, r) using the interpolation method described in Remark 3.5. The two terms that have to be studied separately are Au and Ua, in particular Au_2 and Ua_1, since they involve the probabilistic inequalities described in Propositions 3.3 and 3.4, which hold only for even p ∈ N. According to [30], the generalization in the case of Au_2 is obtained by bounding
E[{Θ̂_j(p)}^m] ≤ C E[{Θ̂_j(p_1)}^m]^δ E[{Θ̂_j(p_2)}^m]^{1−δ},
and then applying, to each of the two factors, the same chain of inequalities used above for even exponents. The result then follows from Eqs. (10) and (11), which give the embeddings
B^s_{p,q} ⊂ B^s_{p_1,q};  B^s_{p,q} ⊂ B^{s−d(1/p−1/p_2)}_{p_2,q}.
Straightforward calculations lead to the claimed result.
On the other hand, in order to study Ua_1, we apply Lemma 3.5 to obtain
Ua_1 ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} E[Θ_j(p) 1{Θ̂_j(p′) ≤ B^{dj}/n^{p′/2}} 1{Θ_j(p) ≥ 2B^{dj}/n^{p/2}}]
    ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} [Pr{|Θ̂_j(p′) − Θ_j(p′)| ≥ Θ_j(p″)/2} + Pr{|(1/n)Θ̂_j(p′) − Θ_j(p″)| ≥ Θ_j(p″)/2}],
because {Θ_j(p) ≥ 2B^{dj}/n^{p/2}} ⊂ {Θ_j(p″) ≥ 2B^{dj}/n^{p″/2}}. Finally, by applying the Markov inequality with even m > p, m ∈ N, we have that
Ua_1 ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} Σ_{h=1}^{p′} {Θ_j(p″ − h)}^m n^{−mh/2} {Θ_j(p″)}^{−m} 1{Θ_j(p″) ≥ 2B^{dj}/n^{p″/2}}
    ≤ C Σ_{j=0}^{J_s} B^{dj(p/2−1)} Σ_{h=1}^{p′} n^{−mh/2} (B^{dj}/n^{p″/2})^{(p−mh)/p″}
    ≤ C n^{−sp/(2s+d)}.
Proof of Theorem 3.2. Similarly to the previous proof, note that
Kj
Jn X
X
E kfˆn − f kL∞ (Sd ) ≤C E
τj βbj,k − βj,k ψj,k
j=0 k=1
∞
d
L (S )
Kj
X X
.
(βj,k ) ψj,k
+
j>Jn k=1
∞
d
L
s
If f ∈ B∞,∞
, |βj,k | ≤ M B −j (
(8) with p = ∞, we get
s+ d
2
Kj
X X
j>Jn k=1
) for any k = 1, . . . , K , then by Eq.s (7) and
j
≤
βj,k ψj,k
L∞ (Sd )
(S )
Kj
X X
j>Jn
≤C
βj,k ψj,k
k=1
X
L∞ (Sd )
sup
j>Jn k=1,...,Kj
≤C
X
|βj,k | kψj,k kL∞ (Sd )
s
s
B −js = O n− d = O n− 2s+d .
j>Jn
As far as the other term is concerned, following the same procedure as described
in the proof of Theorem 1.1, see also [1, 10], we obtain
Kj
Jn X
X
E
τj βbj,k − βj,k ψj,k
≤ C (Aa + Au + U a + U u) ,
j=0 k=1
L∞ (Sd )
25
where
Kj
B dj
B dj
X
∞
∞
b
b
τj βj,k − βj,k ψj,k 1 Θj ≥ 1 1 Θj ≥
Aa =
E
1
n2
2n 2
j=1
k=1
Kj
J
n
X X
B dj
B dj
∞
b∞
≥
τj βbj,k − βj,k ψj,k 1 Θ
Au =
E
1
Θ
<
1
1
j
j
n2
2n 2
j=1
k=1
Kj
Jn
dj
dj
X
X
B
2B
b∞
βj,k ψj,k 1 Θ
≤ 1 1 Θ∞
Ua=
E
1
j
j ≥
2
2
n
n
j=1
k=1
∞
d
L (S )
Kj
J
n
dj
X X
B dj
b ∞ < B 1 1 Θ∞
βj,k ψj,k 1 Θ
Uu=
E
1
j <
j
2
2
n
2n
j=1
k=1
Jn
X
L∞ (Sd )
L∞ (Sd )
L∞ (Sd )
1
dj
2
Now Θ∞
j ≥ B /n implies j ≤ Js (see Section 3.4) and in view of Eq.s (7) and
(8) with p = ∞, we get
!
Js
X
d
j
2
B E
Aa ≤C
sup |β̂j,k − βj,k |
k=1,...,Kj
j=1
≤CB
d
2 Js
1
(Js + 1) n− 2
s
=O n− 2s+d
Consider now Au. Using $J_s$, we split this term into $Au = Au_1 + Au_2$, as in the proof of Theorem 1.1. Trivially, we get $Au_1 = O\big(n^{-\frac{s}{2s+d}}\big)$.
On the other hand, using Eq. (8) and Proposition 3.6, we get
$$Au_2 \le \sum_{j=0}^{J_n} B^{\frac{d}{2}j}\left(\mathbb{E}\left[\sup_{k=1,\ldots,K_j}\big|\hat\beta_{j,k}-\beta_{j,k}\big|^2\right]\right)^{\frac12}\left(\Pr\Big(\big|\hat\Theta_j^\infty-\Theta_j^\infty\big| \ge \frac{B^{dj}}{2n^{\frac12}}\Big)\right)^{\frac12} \le \sum_{j=0}^{J_n} B^{\frac{d}{2}j}\,(j+1)\, n^{-\frac12}\, B^{-\frac{d}{2}j} = O\big(J_n\, n^{-\frac12}\big) = o\big(n^{-\frac{s}{2s+d}}\big).$$
As far as $Ua$ is concerned, again $\Theta_j^\infty \ge B^{dj}/n^{1/2}$ implies $j \le J_s$ (see Section 3.4), so that
$$Ua \le \sum_{j=1}^{J_s}\left\|\sum_{k=1}^{K_j}\beta_{j,k}\psi_{j,k}\right\|_{L^\infty(\mathbb{S}^d)}\Pr\Big(\big|\hat\Theta_j^\infty-\Theta_j^\infty\big| \ge \frac{B^{dj}}{n^{\frac12}}\Big) \le \sum_{j=1}^{J_s} B^{\frac{d}{2}j}\, M\,\Pr\Big(\big|\hat\Theta_j^\infty-\Theta_j^\infty\big| \ge \frac{B^{dj}}{n^{\frac12}}\Big) \le \sum_{j=1}^{J_s} B^{\frac{d}{2}j}\, M \left(\frac{B^{dj}}{n^{\frac12}}\right)^{-2} \mathbb{E}\Big[\big|\hat\Theta_j^\infty-\Theta_j^\infty\big|^2\Big] \le J_s\, n^{-\frac12},$$
where we used Eq. (8) and Proposition 3.6. Finally, we have that
$$Uu \le \sum_{j=1}^{J_n}\mathbb{E}\left\|\sum_{k=1}^{K_j}\beta_{j,k}\psi_{j,k}\,\mathbf{1}\Big\{\hat\Theta_j^\infty < \frac{B^{dj}}{n^{\frac12}}\Big\}\mathbf{1}\Big\{\Theta_j^\infty < \frac{B^{dj}}{2n^{\frac12}}\Big\}\right\|_{L^\infty(\mathbb{S}^d)} \le Uu_1 + Uu_2,$$
where
$$Uu_1 \le \sum_{j=1}^{J_s}\left\|\sum_{k=1}^{K_j}\beta_{j,k}\psi_{j,k}\right\|_{L^\infty(\mathbb{S}^d)}\mathbf{1}\Big\{\Theta_j^\infty < \frac{B^{dj}}{2n^{\frac12}}\Big\} \le \sum_{j=1}^{J_s} B^{\frac{3d}{2}j}\, n^{-\frac12} = O\big(B^{\frac{3d}{2}J_s}\, n^{-\frac12}\big)$$
and
$$Uu_2 = \sum_{j>J_s} B^{\frac{d}{2}j}\,\sup_{k=1,\ldots,K_j}|\beta_{j,k}|.$$
Note that
$$B^{\frac{3d}{2}J_s}\, n^{-\frac12} = n^{\frac{d-s}{2s+d}},$$
as claimed.
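For the reader's convenience, the exponent identity above can be verified directly. The short computation below assumes the truncation level satisfies $B^{J_s} = n^{\frac{1}{2s+d}}$, which is the choice consistent with the rates used throughout (an assumption inferred from Section 3.4, not restated here):

```latex
B^{\frac{3d}{2}J_s}\, n^{-\frac{1}{2}}
  = n^{\frac{3d}{2(2s+d)}}\, n^{-\frac{1}{2}}
  = n^{\frac{3d-(2s+d)}{2(2s+d)}}
  = n^{\frac{2(d-s)}{2(2s+d)}}
  = n^{\frac{d-s}{2s+d}}.
```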
5.2. Proofs of the auxiliary results
The proof of Lemma 2.1 can be viewed as a generalization of the proof of
Lemma 5.1 in Durastanti et al. [11].
Proof of Lemma 2.1. Using the needlets localization property given in Eq. (6),
we have that
$$\int_{\mathbb{S}^d}\prod_{i=1}^{q}\big|\psi_{j,k_i}(x)\big|\,dx \le C_\eta\, B^{\frac{jdq}{2}}\int_{\mathbb{S}^d}\prod_{i=1}^{q}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_i})\big\}^{\eta}}.$$
Let $S_1 = \big\{x \in \mathbb{S}^d : d(x,\xi_{j,k_1}) \ge \Delta/2\big\}$, so that $\mathbb{S}^d \subseteq S_1 \cup \bar S_1$. Therefore,
$$\int_{\mathbb{S}^d}\prod_{i=1}^{q}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_i})\big\}^{\eta}} \le \int_{S_1}\prod_{i=1}^{q}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_i})\big\}^{\eta}} + \int_{\bar S_1}\prod_{i=1}^{q}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_i})\big\}^{\eta}}.$$
From the definition of S1 and following Lemma 5.1 in Durastanti, Marinucci
and Peccati [11], it follows that
$$\int_{S_1}\prod_{i=1}^{q}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_i})\big\}^{\eta}} \le \frac{2^{\eta(q-1)}}{(1+B^{jd}\Delta)^{\eta(q-1)}}\int_{\mathbb{S}^d}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_1})\big\}^{\eta}}$$
$$= \frac{2^{\eta(q-1)}}{(1+B^{jd}\Delta)^{\eta(q-1)}}\,(2\pi)^{d-1}\int_0^{\pi}\frac{\sin\vartheta}{(1+B^{jd}\vartheta)^{\eta}}\,d\vartheta \le \frac{2^{\eta(q-1)}}{(1+B^{jd}\Delta)^{\eta(q-1)}}\,(2\pi)^{d-1}\, B^{-dj}\int_0^{\infty}\frac{y}{(1+y)^{\eta}}\,dy \le C_\eta'\,\frac{2^{\eta(q-1)}\, B^{-dj}}{(1+B^{jd}\Delta)^{\eta(q-1)}}.$$
On the other hand,
$$\int_{\bar S_1}\prod_{i=1}^{q}\frac{dx}{\big\{1+B^{jd}\,d(x,\xi_{j,k_i})\big\}^{\eta}} \le \frac{2^{\eta}}{(1+B^{jd}\Delta)^{\eta}}\int_{\bar S_1}\prod_{i=1}^{q-1}\frac{dx}{\big\{1+B^{jd}\,d\big(x,\xi_{j,k_{i+1}}\big)\big\}^{\eta}}.$$
Let $S_2 = \big\{x \in \bar S_1 : d(x,\xi_{j,k_2}) \ge \Delta/2\big\}$. Then
$$\bar S_1 \subseteq S_2 \cup \bar S_2 \qquad \text{and} \qquad \mathbb{S}^d \subseteq S_2 \cup \bar S_2 \cup S_1.$$
As far as $S_2$ is concerned, we apply the same chain of inequalities as those used for $S_1$. The integral over $\bar S_2$ can be bounded by the factor $2^{\eta}\big(1+B^{jd}\Delta\big)^{-\eta}$ multiplied by the integral of the product of $q-2$ localization bounds of the needlets. By re-iterating the procedure, we obtain a set of nested $S_g = \big\{x \in \bar S_{g-1} : d(x,\xi_{j,k_g}) \ge \Delta/2\big\}$, $g = 1,\ldots,q$, so that $\mathbb{S}^d \subseteq \bar S_q \cup \bigcup_{g=1}^{q} S_g$, which yields the claimed result.
The proof of Proposition 3.1 is a simple modification of Proposition 6 in Durastanti et al. [10] concerning complex random spin needlet coefficients. Many
technical details are omitted for the sake of brevity.
Proof of Proposition 3.1. For p ≤ 2 we apply the classical convexity inequality, which states that for a set of independent centered random variables $\{Z_i\}$ with finite p-th absolute moment,
$$\mathbb{E}\left[\left|\sum_{i=1}^{n} Z_i\right|^p\right] \le \left(\mathbb{E}\left[\left|\sum_{i=1}^{n} Z_i\right|^2\right]\right)^{p/2}.$$
For p > 2, we apply the Rosenthal inequality (see for instance Härdle et al.
[22]), that is, there exists a constant cp > 0 such that
$$\mathbb{E}\left[\left|\sum_{i=1}^{n} Z_i\right|^p\right] \le c_p\left\{\sum_{i=1}^{n}\mathbb{E}\big(|Z_i|^p\big) + \left(\sum_{i=1}^{n}\mathbb{E}\big(Z_i^2\big)\right)^{p/2}\right\}.$$
On the other hand, since $B^{dj} \le n$, we have that
$$\mathbb{E}\big\{|(f(X)+\varepsilon)\,\psi_{j,k}(X)-\beta_{j,k}|^p\big\} \le 2^{p-1}\Big[\mathbb{E}\big\{|f(X)\,\psi_{j,k}(X)-\beta_{j,k}|^p\big\} + \mathbb{E}\big\{|\varepsilon\,\psi_{j,k}(X)|^p\big\}\Big] \le c_p'\,\big\{M^p + \mathbb{E}(|\varepsilon|^p)\big\}\,\|\psi_{j,k}\|^p_{L^p(\mathbb{S}^d)} \le c_p''\, B^{jd(p/2-1)} \le c_p'''\, n^{p/2-1}.$$
Hence,
$$\mathbb{E}\Big[\big|\hat\beta_{j,k}-\beta_{j,k}\big|^p\Big] \le \tilde c_p\left(\frac{n^{p/2-1}}{n^{p-1}} + n^{-p/2}\right) = \tilde c_p\, n^{-p/2}.$$
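The $n^{-p/2}$ moment rate just derived is easy to check numerically. The Python sketch below uses a toy bounded regression function and a generic bounded factor standing in for $f(X)\psi_{j,k}(X)$; both are illustrative assumptions, not the actual spherical needlet system.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4                                   # even moment order
f = lambda x: np.cos(2 * np.pi * x)     # toy bounded f
psi = lambda x: np.sin(2 * np.pi * x)   # toy bounded "needlet" stand-in

def moment(n, reps=2000):
    # beta_hat = empirical mean of (f(X)+eps)*psi(X); here beta = E[f psi] = 0
    x = rng.uniform(size=(reps, n))
    eps = rng.normal(size=(reps, n))
    beta_hat = np.mean((f(x) + eps) * psi(x), axis=1)
    return np.mean(np.abs(beta_hat) ** p)

for n in (100, 400, 1600):
    # moment(n) * n^{p/2} should stay bounded, matching c~_p n^{-p/2}
    print(n, moment(n) * n ** (p / 2))
```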
The proof of Proposition 3.3 can be considered as the counterpart in the needlet
framework of the proof of Lemma 2 in Kerkyacharian et al. [30].
Remark 5.1. Any element $c_\ell$ can be decomposed as the sum of integers $c_{i_1,\ldots,i_\ell;\ell}$, where the $\ell$-dimensional vector $\{i_1,\ldots,i_\ell\} \subset \{1,\ldots,m\}$ specifies the spherical needlets involved in each configuration (of size $\ell$) given by
$$\mathbb{E}\Big[\{f(X)+\varepsilon\}^{\ell}\,\psi_{j,k_{i_1}}(X)\cdots\psi_{j,k_{i_\ell}}(X)\Big].$$
The notation $[i_1,\ldots,i_\ell]$ denotes the set of all the possible combinations of $\{i_1,\ldots,i_\ell\}$ such that
$$\sum_{[i_1,\ldots,i_\ell]} c_{i_1,\ldots,i_\ell;\ell} = c_\ell.$$
Proof of Proposition 3.3. Note that
$$\mathbb{E}\Big[\big\{\hat\Theta_j(p)\big\}^m\Big] = \binom{n}{p}^{-m}\sum_{\upsilon_1,\ldots,\upsilon_m\in\Sigma_p}\ \sum_{k_1,\ldots,k_m}\mathbb{E}\left[\prod_{\ell=1}^{m}\Psi^{\otimes p}_{j,k_\ell}(X_{\upsilon_\ell},\varepsilon_{\upsilon_\ell})\right] \qquad (16)$$
where
$$\sum_{k_1,\ldots,k_m}\mathbb{E}\left[\prod_{\ell=1}^{m}\Psi^{\otimes p}_{j,k_\ell}(X_{\upsilon_\ell},\varepsilon_{\upsilon_\ell})\right] = \sum_{k_1,\ldots,k_m}\prod_{\ell=1}^{m}\prod_{[i_1,\ldots,i_\ell]}\mathbb{E}\left[\{f(X)+\varepsilon\}^{\ell}\prod_{h=1}^{\ell}\psi_{j,k_{i_h}}(X)\right]^{c_{i_1,\ldots,i_\ell;\ell}}$$
$$= \sum_{k_1,\ldots,k_m}\prod_{h=1}^{m}\mathbb{E}\big\{f(X)\,\psi_{j,k_h}(X)\big\}^{c_{h;1}}\prod_{\ell=2}^{m}\prod_{[i_1,\ldots,i_\ell]}\mathbb{E}\left[\{f(X)+\varepsilon\}^{\ell}\prod_{h=1}^{\ell}\psi_{j,k_{i_h}}(X)\right]^{c_{i_1,\ldots,i_\ell;\ell}}.$$
Using Eq. (2) and the independence of the noise ε, for any ` ≥ 2 we have that
"
Y
`
E {f (X) + ε}
#ci1 ,...,i` ;`
`
Y
(
≤ CM,p,` E
ψj,kih (X)
h=1
[i1 ,...,i` ]
`
Y
)ci1 ,...,i` ,`
ψj,kh (X)
,
h=1
where $C_{M,p,\ell} = 2^{p-1}\big(M^{\ell} + \mathbb{E}|\varepsilon^{\ell}|\big)$. In view of Lemma 2.1, we obtain
$$\prod_{\ell=2}^{m}\prod_{[i_1,\ldots,i_\ell]}\mathbb{E}\left[\prod_{h=1}^{\ell}\psi_{j,k_h}(X)\right]^{c_{i_1,\ldots,i_\ell;\ell}}\mathbf{1}\{k=k_{i_1}=\ldots=k_{i_\ell}\} \le C'\prod_{\ell=2}^{m}\|\psi_{j,k}\|_{L^\ell(\mathbb{S}^d)}^{\ell\sum_{[i_1,\ldots,i_\ell]}c_{i_1,\ldots,i_\ell;\ell}}\,\Delta(k,\ell)$$
$$\le C''\, B^{jd\left(\sum_{\ell=2}^{m}\frac{\ell c_\ell}{2}-\sum_{\ell=2}^{m}c_\ell\right)}\,\Delta(k,m) = C''\, B^{jd\left(\sum_{\ell=1}^{m}\frac{\ell c_\ell}{2}-\sum_{\ell=1}^{m}c_\ell\right)}\, B^{jd\frac{c_1}{2}}\,\Delta(k,m).$$
Note that $\prod_{[i_1,\ldots,i_\ell]}\mathbf{1}\{k=k_{i_1}=\ldots=k_{i_\ell}\}$ implies that at least $\ell$ of the $k_h$ indexes are equal. Thus, using Eq. (10), we obtain
$$\sum_{k_1,\ldots,k_m}\prod_{h=1}^{m}\mathbb{E}\big\{f(X)\,\psi_{j,k_h}(X)\big\}^{c_{h,1}} = \sum_{k}\beta_{j,k}^{\sum_{h=1}^{m}c_{h,1}} \le C\, B^{-j\sum_{h=1}^{m}c_{h,1}\left\{s+d\left(\frac12-\frac1p\right)\right\}}\, B^{jd\left(1-\frac{\min\left(p,\sum_{h=1}^{m}c_{h;1}\right)}{p}\right)} = C\, B^{-jc_1\left\{s+d\left(\frac12-\frac1p\right)\right\}}\, B^{jd(1-\gamma)},$$
where γ = min (p, c1 ) /p. Hence,
"
Y
E {f (X) + ε}
`
`
Y
#ci1 ,...,i` ;`
≤C 000 B jd(
ψj,kih (X)
mp Pm
`=1
2 −
c` )
h=1
[i1 ,...,i` ]
B −jc1 (s− p ) B jd(1−γ)
d
Finally, for any fixed configuration $c_1,\ldots,c_m$, the number of possible combinations is bounded by
$$C^{\star}\binom{n}{c_m}\binom{n-c_m}{c_{m-1}}\cdots\binom{n-\sum_{\ell=2}^{m}c_\ell}{c_1},$$
where $C^{\star}$ denotes the possible choices of $\{c_{i_1,\ldots,i_\ell;\ell}\}$ for any $\ell$ and does not depend on $n$. In view of the Stirling approximation, $\binom{n}{p}\approx n^p$, the number of possible combinations is bounded by $Cn^{\sum_{\ell=1}^{m}c_\ell}$. Using the aforementioned results, Eq. (16) is bounded by
$$\mathbb{E}\Big[\big\{\hat\Theta_j(p)\big\}^m\Big] \le \tilde C_1\sum_{(c_1,\ldots,c_m)\in\Gamma_{m,p}}\frac{B^{jd\left(\frac{mp}{2}-\sum_{\ell=1}^{m}c_\ell\right)}}{n^{\frac{mp}{2}-\sum_{\ell=1}^{m}c_\ell}}\; B^{-j\left\{c_1\left(s-\frac{d}{p}\right)-d(1-\gamma)\right\}},$$
as claimed.
The proof of Proposition 3.4 can be viewed as the counterpart of the proof
of Lemma 3 in Kerkyacharian et al. [30] in the needlet framework.
Proof of Proposition 3.4. Following Kerkyacharian et al. [30], note that
$$\prod_{i=1}^{p} x_i - \beta^{p} = \sum_{h=1}^{p}\beta^{p-h}\sum_{1\le t_1<\ldots<t_h\le p}\ \prod_{i=1}^{h}\big(x_{t_i}-\beta\big).$$
Let
$$\tilde\Psi_{j,k}(X,\varepsilon) := \{f(X)+\varepsilon\}\,\psi_{j,k}(X) - \beta_{j,k},$$
so that
$$\mathbb{E}\big[\tilde\Psi_{j,k}(X,\varepsilon)\big] = 0. \qquad (17)$$
We therefore obtain
$$\hat\Theta_j(p) - \Theta_j(p) = \binom{n}{p}^{-1}\sum_{k=1}^{K_j}\sum_{\upsilon\in\Sigma_p}\sum_{h=1}^{p}\beta_{j,k}^{p-h}\sum_{\iota\subset\upsilon,\,\iota\in\Sigma_h}\tilde\Psi^{\otimes h}_{j,k}(X_\iota,\varepsilon_\iota),$$
and, reversing the order of integration, we have that
$$\hat\Theta_j(p) - \Theta_j(p) = \sum_{k=1}^{K_j}\sum_{h=1}^{p}\frac{\binom{n-h}{p-h}}{\binom{n}{p}}\,\beta_{j,k}^{p-h}\sum_{\iota\in\Sigma_h}\tilde\Psi^{\otimes h}_{j,k}(X_\iota,\varepsilon_\iota).$$
Hence, we can rewrite
$$\mathbb{E}\Big[\big|\hat\Theta_j(p)-\Theta_j(p)\big|^m\Big] \le p^{m-1}\sum_{h=1}^{p}\left\{\frac{\binom{n-h}{p-h}}{\binom{n}{p}}\right\}^{m}\sum_{k_1,\ldots,k_m}\prod_{\ell=1}^{m}\beta_{j,k_\ell}^{p-h}\ \mathbb{E}\left[\sum_{\iota_1,\ldots,\iota_m\in\Sigma_h}\prod_{\ell=1}^{m}\tilde\Psi^{\otimes h}_{j,k_\ell}(X_{\iota_\ell},\varepsilon_{\iota_\ell})\right].$$
Similar to the proof of Proposition 3.3, we fix a configuration of indexes $\iota_1,\ldots,\iota_m \in \Sigma_h$, corresponding to the set of coefficients $\{c_1, c_2, \ldots, c_h\}$. Because, in this case, the considered U-statistic is degenerate, we discard all the combinations with $c_1 \neq 0$, in view of (17). On the other hand, following Lemma 2.1, we have that
$$\sum_{\iota_1,\ldots,\iota_m\in\Sigma_h}\mathbb{E}\left[\tilde\Psi^{\otimes h}_{j,k_1}(X_1,\varepsilon_1)\cdots\tilde\Psi^{\otimes h}_{j,k_h}(X_h,\varepsilon_h)\right] \le C\, n^{\sum_{\ell=1}^{h}c_\ell}\, B^{dj\left\{\sum_{\ell=2}^{h}\left(\frac{\ell}{2}-1\right)c_\ell\right\}} = C\, n^{\sum_{\ell=1}^{h}c_\ell}\, B^{dj\left(\frac{mh}{2}-\sum_{\ell=2}^{m}c_\ell\right)}. \qquad (18)$$
Furthermore, $mh = \sum_{\ell=2}^{m}\ell c_\ell > 2\sum_{\ell=2}^{m}c_\ell$ implies that the exponent in the last term of (18) is positive, so that
$$\sum_{\iota_1,\ldots,\iota_m\in\Sigma_h}\mathbb{E}\left[\tilde\Psi^{\otimes h}_{j,k_1}(X_1,\varepsilon_1)\cdots\tilde\Psi^{\otimes h}_{j,k_h}(X_h,\varepsilon_h)\right] \le C\, n^{\frac{mh}{2}},$$
because $B^{dj} \le n$. Finally, using the Stirling approximation and Eq. (10) we
have that
$$\mathbb{E}\Big[\big|\hat\Theta_j(p)-\Theta_j(p)\big|^m\Big] \le C'\sum_{h=1}^{p}\frac{n^{m(p-h)}}{n^{mp}}\left(\sum_{k}|\beta_{j,k}|^{p-h}\right)^{m} n^{\frac{mh}{2}} \le C''\sum_{h=1}^{p} n^{-\frac{mh}{2}}\, B^{jd\frac{mh}{p}}\left(\sum_{k}|\beta_{j,k}|^{p}\right)^{\left(1-\frac{h}{p}\right)m} \le \tilde C_2\sum_{h=1}^{p}\left(\frac{B^{jd}}{n^{p/2}}\right)^{\frac{mh}{p}}\{\Theta_j(p)\}^{\left(1-\frac{h}{p}\right)m},$$
as claimed.
The proof of Lemma 3.5 is the counterpart of the proof of Lemma 6 in
Kerkyacharian et al. [30] in the needlet framework.
Proof of Lemma 3.5. Following Kerkyacharian et al. [30] and the results obtained in the previous proof, we have that
$$\hat\Theta_j(p') - \Theta_j(p') = \binom{n}{p'}^{-1}\sum_{k=1}^{K_j}\sum_{\upsilon\in\Sigma_{p'}}\sum_{h=1}^{p''}\beta_{j,k}^{p'-h}\Bigg[\sum_{\iota\subset\upsilon,\,\iota\in\Sigma_h}\tilde\Psi^{\otimes h}_{j,k}(X_\iota,\varepsilon_\iota) + \frac{1}{n^{\frac12}}\sum_{\iota\subset\upsilon,\,\iota\in\Sigma_{h-1}}\tilde\Psi^{\otimes(h-1)}_{j,k}(X_\iota,\varepsilon_\iota) + \frac{1}{n}\sum_{\iota\subset\upsilon,\,\iota\in\Sigma_{h-2}}\tilde\Psi^{\otimes(h-2)}_{j,k}(X_\iota,\varepsilon_\iota)\Bigg],$$
with the convention
$$\sum_{\iota\subset\upsilon,\,\iota\in\Sigma_0}\tilde\Psi^{\otimes(0)}_{j,k}(X_\iota,\varepsilon_\iota) = 2; \qquad \sum_{\iota\subset\upsilon,\,\iota\in\Sigma_h}\tilde\Psi^{\otimes(h)}_{j,k}(X_\iota,\varepsilon_\iota) = 0 \ \text{ for } h < 0.$$
Reversing the order of integration and applying an analogous procedure to the
one used in the proof of Proposition 3.4, we achieve the claimed result.
Proposition 3.6 is proved by using the general properties of the needlets.
Proof of Proposition 3.6. It is easy to see that
$$\mathbb{E}\Big[\big(\hat\Theta_j^\infty\big)^2\Big] = \frac{1}{n^2}\sum_{k=1}^{K_j}\sum_{i=1}^{n}\mathbb{E}\Big[\{\psi_{j,k}(X_i)\,Y_i\}^2\Big] + \left(\frac{1}{n}\sum_{k=1}^{K_j}\sum_{i=1}^{n}\mathbb{E}\{\psi_{j,k}(X_i)\,Y_i\}\right)^2 \le \frac{B^{dj}}{n}\big(M^2+\sigma_\varepsilon^2\big)\,\|\psi_{j,k}\|^2_{L^2(\mathbb{S}^d)} + \Theta_j^\infty,$$
as claimed.
Acknowledgement. The author wishes to thank M. Konstantinou, A. Ortiz, A. Renzi and N. Turchi for precious discussions and hints. Furthermore, the author wishes to acknowledge the Associate Editor, the referees and the Editor-in-Chief for the insightful remarks and suggestions which led to a substantial improvement of this work.
References
[1] Baldi, P., Kerkyacharian, G., Marinucci, D., Picard, D., 2009a. Adaptive
density estimation for directional data using needlets. Ann. Statist. 37,
3362–3395.
[2] Baldi, P., Kerkyacharian, G., Marinucci, D., Picard, D., 2009b. Asymptotics for spherical needlets. Ann. Statist. 37, 1150–1171.
[3] Brown, L.D., Cai, T.T., Zhou, H.H., 2010. Nonparametric regression in
exponential families. Ann. Statist. 38, 2005–2046.
[4] Buldygin, V.V., Kozachenko, Y.V., 2000. Metric characterization of random variables and random processes. volume 188 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI.
Translated from the 1998 Russian original by V. Zaiats.
[5] Cai, T.T., Low, M.G., Zhao, L.H., 2007. Trade-offs between global and
local risks in nonparametric function estimation. Bernoulli 13, 1–19.
[6] Cammarota, V., Marinucci, D., 2015. On the limiting behaviour of needlets
polyspectra. Ann. Inst. Henri Poincaré Probab. Stat. 51, 1159–1189.
[7] Donoho, D.L., Johnstone, I.M., Kerkyacharian, G., Picard, D., 1996. Density estimation by wavelet thresholding. Ann. Statist. 24, 508–539.
[8] Durastanti, C., 2015a. Adaptive density estimation on the circle by nearly-tight frames. Submitted, arXiv:1504.00595.
[9] Durastanti, C., 2015b. Block thresholding on the sphere. Sankhya A 77,
153–185.
[10] Durastanti, C., Geller, D., Marinucci, D., 2012. Adaptive nonparametric
regression on spin fiber bundles. J. Multivariate Anal. 104, 16–38.
[11] Durastanti, C., Marinucci, D., Peccati, G., 2014. Normal approximations
for wavelet coefficients on spherical Poisson fields. J. Math. Anal. Appl.
409, 212–227.
[12] Efroı̆movich, S.Y., 1985. Nonparametric estimation of a density of unknown
smoothness. Teor. Veroyatnost. i Primenen. 30, 524–534.
[13] Faÿ, G., Delabrouille, J., Kerkyacharian, G., Picard, D., 2013. Testing the
isotropy of high energy cosmic rays using spherical needlets. Ann. Appl.
Stat. 7, 1040–1073.
[14] Faÿ, G., Guilloux, F., Betoule, M., Cardoso, J.F., Delabrouille, J., Le Jeune, J., 2008. CMB power spectrum estimation using wavelets. Phys. Rev. D 78, 083013.
[15] Gautier, R., Le Pennec, E., 2013. Adaptive estimation in the nonparametric random coefficients binary choice model by needlet thresholding.
Submitted, arXiv:1106.3503.
[16] Geller, D., Marinucci, D., 2010. Spin wavelets on the sphere. J. Fourier
Anal. Appl. 16, 840–884.
[17] Geller, D., Marinucci, D., 2011. Mixed needlets. J. Math. Anal. Appl. 375,
610–630.
[18] Geller, D., Mayeli, A., 2009a. Besov spaces and frames on compact manifolds. Indiana Univ. Math. J. 58, 2003–2042.
[19] Geller, D., Mayeli, A., 2009b. Continuous wavelets on compact manifolds.
Math. Z. 262, 895–927.
[20] Geller, D., Mayeli, A., 2009c. Nearly tight frames and space-frequency
analysis on compact manifolds. Math. Z. 263, 235–264.
[21] Geller, D., Pesenson, I.Z., 2011. Band-limited localized Parseval frames
and Besov spaces on compact homogeneous manifolds. J. Geom. Anal. 21,
334–371.
[22] Härdle, W., Kerkyacharian, G., Picard, D., Tsybakov, A., 1998. Wavelets,
approximation, and statistical applications. volume 129 of Lecture Notes in
Statistics. Springer-Verlag, New York.
[23] Iuppa, R., Di Sciascio, G., Marinucci, D., Santonico, R., 2012. Cosmic-ray
anisotropies observed by the argo-ybj experiment. Nuclear Instruments
and Methods in Physics Research A , 160–164.
[24] Juditsky, A.B., Lepski, O.V., Tsybakov, A.B., 2009. Nonparametric estimation of composite functions. Ann. Statist. 37, 1360–1404.
[25] Kerkyacharian, G., Nickl, R., Picard, D., 2012. Concentration inequalities
and confidence bands for needlet density estimators on compact homogeneous manifolds. Probab. Theory Related Fields 153, 363–404.
[26] Kerkyacharian, G., Picard, D., 1992. Density estimation in Besov spaces.
Statist. Probab. Lett. 13, 15–24.
[27] Kerkyacharian, G., Picard, D., 1993. Density estimation by kernel and
wavelets methods: optimality of Besov spaces. Statist. Probab. Lett. 18,
327–336.
[28] Kerkyacharian, G., Picard, D., 2000. Thresholding algorithms, maxisets
and well-concentrated bases. Test 9, 283–344. With comments, and a
rejoinder by the authors.
[29] Kerkyacharian, G., Picard, D., 2004. Regression in random design and
warped wavelets. Bernoulli 10, 1053–1105.
[30] Kerkyacharian, G., Picard, D., Tribouley, K., 1996. Lp adaptive density
estimation. Bernoulli 2, 229–247.
[31] Kim, P.T., Koo, J.Y., 2002. Optimal spherical deconvolution. J. Multivariate Anal. 80, 21–42.
[32] Kim, P.T., Koo, J.Y., Luo, Z.M., 2009. Weyl eigenvalue asymptotics and
sharp adaptation on vector bundles. J. Multivariate Anal. 100, 1962–1978.
[33] Koo, J.Y., Kim, P.T., 2008. Sharp adaptation for spherical inverse problems
with applications to medical imaging. J. Multivariate Anal. 99, 165–190.
[34] Lan, X., Marinucci, D., 2008. The needlets bispectrum. Electron. J. Stat.
2, 332–367.
[35] Lan, X., Marinucci, D., 2009. On the dependence structure of wavelet
coefficients for spherical random fields. Stochastic Process. Appl. 119, 3749–
3766.
[36] Marinucci, D., 2006. High-resolution asymptotics for the angular bispectrum of spherical random fields. Ann. Statist. 34, 1–41.
[37] Marinucci, D., Peccati, G., 2011. Random fields on the sphere. Volume 389
of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge. Representation, limit theorems and cosmological
applications.
[38] Monnier, J.B., 2011. Nonparametric regression on the hyper-sphere with
uniform design: needlets-based regression on the hyper-sphere. TEST 20,
412–446.
[39] Narcowich, F., Petrushev, P., Ward, J., 2006a. Decomposition of Besov
and Triebel-Lizorkin spaces on the sphere. J. Funct. Anal. 238, 530–564.
[40] Narcowich, F.J., Petrushev, P., Ward, J.D., 2006b. Localized tight frames
on spheres. SIAM J. Math. Anal. 38, 574–594 (electronic).
[41] Pesenson, I.Z., 2013. Multiresolution analysis on compact Riemannian
manifolds, in: Multiscale analysis and nonlinear dynamics. Wiley-VCH,
Weinheim. Rev. Nonlinear Dyn. Complex., pp. 65–82.
[42] Schröder, P., Sweldens, W., 1995. Spherical wavelets: efficiently representing functions on the sphere, in: Proc. SIGGRAPH ’95, 22nd annual
conference on Computer graphics and interactive techniques, pp. 161–172.
[43] Stein, E.M., Weiss, G., 1971. Introduction to Fourier analysis on Euclidean
spaces. Princeton University Press, Princeton, N.J. Princeton Mathematical Series, No. 32.
[44] Tsybakov, A.B., 2009. Introduction to nonparametric estimation. Springer
Series in Statistics, Springer, New York. Revised and extended from the
2004 French original, Translated by Vladimir Zaiats.
| 10 |
DEEP NEURAL NETWORKS FOR SINGLE CHANNEL SOURCE SEPARATION
Emad M. Grais, Mehmet Umut Sen, Hakan Erdogan
arXiv:1311.2746v1 [] 12 Nov 2013
Faculty of Engineering and Natural Sciences,
Sabanci University, Orhanli Tuzla, 34956, Istanbul.
{grais, umutsen, haerdogan}@sabanciuniv.edu
ABSTRACT
In this paper, a novel approach for single channel source separation
(SCSS) using a deep neural network (DNN) architecture is introduced. Unlike previous studies in which DNN and other classifiers
were used for classifying time-frequency bins to obtain hard masks
for each source, we use the DNN to classify estimated source spectra
to check for their validity during separation. In the training stage, the
training data for the source signals are used to train a DNN. In the
separation stage, the trained DNN is utilized to aid in estimation of
each source in the mixed signal. Single channel source separation
problem is formulated as an energy minimization problem where
each source spectra estimate is encouraged to fit the trained DNN
model and the mixed signal spectrum is encouraged to be written
as a weighted sum of the estimated source spectra. The proposed
approach works regardless of the energy scale differences between
the source signals in the training and separation stages. Nonnegative
matrix factorization (NMF) is used to initialize the DNN estimate
for each source. The experimental results show that using DNN initialized by NMF for source separation improves the quality of the
separated signal compared with using NMF for source separation.
Index Terms— Single channel source separation, deep neural
network, nonnegative matrix factorization.
1. INTRODUCTION
Single channel audio source separation is an important and challenging problem and has received considerable interest in the research
community in recent years. Since there is limited information in the
mixed signal, usually one needs to use training data for each source
to model each source and to improve the quality of separation. In
this work, we introduce a new method for improved source separation using nonlinear models of sources trained using a deep neural
network.
1.1. Related work
Many approaches have been introduced so far to solve the single channel source separation problem. Most of those approaches
strongly depend on training data for the source signals. The training
data can be modeled using probabilistic models such as Gaussian
mixture model (GMM) [1, 2, 3], hidden Markov model (HMM) or
factorial HMM [4, 5, 6]. These models are learned from the training
data and usually used in source separation under the assumption that
the sources appear in the mixed signal with the same energy level
as they appear in the training data. Fixing this limitation requires
complicated computations as in [7, 8, 9, 10, 11, 12]. Another approach to model the training data is to train nonnegative dictionaries
for the source signals [13, 14, 15]. This approach is more flexible
with no limitation related to the energy differences between the
source signals in training and separation stages. The main problem
in this approach is that any nonnegative linear combination of the
trained dictionary vectors is a valid estimate for a source signal
which may decrease the quality of separation. Modeling the training
data with both nonnegative dictionary and cluster models like GMM
and HMM was introduced in [16, 17, 18, 19] to fix the limitation
related to the energy scaling between the source signals and training
more powerful models that can fit the data properly. Another type of
approach which is called classification-based speech separation aims
to find hard masks where each time-frequency bin is classified as
belonging to either of the sources. For example in [20], various classifiers based on GMM, support vector machines, conditional random
fields, and deep neural networks were used for classification.
1.2. Contributions
In this paper, we model the training data for the source signals using a single joint deep neural network (DNN). The DNN is used as
a spectral domain classifier which can classify its input spectra into
each possible source type. Unlike classification-based speech separation where the classifiers are used to segment time-frequency bins
into sources, we can obtain soft masks using our approach. Single
channel source separation problem is formulated as an energy minimization problem where each source spectral estimate is encouraged
to fit the trained DNN model and the mixed signal spectrum is encouraged to be written as a weighted sum of the estimated source
spectra. Basically, we can think of the DNN as checking whether
the estimated source signals are lying in their corresponding nonlinear manifolds which are represented by the trained joint DNN.
Using a DNN for modeling the sources and handling the energy differences in training and testing is considered to be the main novelty
of this paper. Deep neural network (DNN) is a well known model
for representing the detailed structures in complex real-world data
[21, 22]. Another novelty of this paper is using nonnegative matrix
factorization [23] to find initial estimates for the sources rather than
using random initialization.
1.3. Organization of the paper
This paper is organized as follows: In Section 2 a mathematical formulation for the SCSS problem is given. Section 3 briefly describes
the NMF method for source separation. In Section 4, we introduce
our new method. We present our experimental results in Section 5.
We conclude the paper in Section 6.
2. PROBLEM FORMULATION
In single channel source separation problems, the aim is to find estimates of source signals that are mixed on a single channel y(t). For
simplicity, in this paper we assume the number of sources is two.
This problem is usually solved in the short time Fourier transform
(STFT) domain. Let Y (t, f ) be the STFT of y(t), where t represents
the frame index and f is the frequency-index. Due to the linearity of
the STFT, we have:
$$Y(t,f) = S_1(t,f) + S_2(t,f), \qquad (1)$$
where $S_1(t,f)$ and $S_2(t,f)$ are the unknown STFTs of the first and second sources in the mixed signal. In this framework [13, 24], the phase angles of the STFT are usually ignored. Hence, we can approximate the magnitude spectrum of the measured signal as the sum of the source signals' magnitude spectra as follows:
$$|Y(t,f)| \approx |S_1(t,f)| + |S_2(t,f)|. \qquad (2)$$
We can write the magnitude spectrogram in matrix form as follows:
$$\mathbf{Y} \approx \mathbf{S}_1 + \mathbf{S}_2, \qquad (3)$$
where $\mathbf{S}_1$, $\mathbf{S}_2$ are the unknown magnitude spectrograms of the source signals and need to be estimated using the observed mixed signal and the training data.
3. NMF FOR SUPERVISED SOURCE SEPARATION
In this section, we briefly describe the use of nonnegative matrix
factorization (NMF) for supervised single channel source separation.
We will relate our model to the NMF idea and we will use the source
estimates obtained from using NMF as initialization for our method,
so it is appropriate to introduce the use of NMF for source separation
first.
To find a suitable initialization for the source signals, we use
nonnegative matrix factorization (NMF) as in [23]. NMF [25] factorizes any nonnegative matrix V into a basis matrix (dictionary) B
and a gain matrix G as
V ≈ BG.
(4)
The matrix B contains the basis vectors that are optimized to allow the data in V to be approximated as a linear combination of its
constituent columns. The solution for B and G can be found by
minimizing the following Itakura-Saito (IS) divergence cost function [26]:
$$\min_{\mathbf{B},\mathbf{G}} D_{IS}\big(\mathbf{V}\,\|\,\mathbf{B}\mathbf{G}\big), \qquad (5)$$
where
$$D_{IS}\big(\mathbf{V}\,\|\,\mathbf{B}\mathbf{G}\big) = \sum_{a,b}\left(\frac{\mathbf{V}_{a,b}}{(\mathbf{B}\mathbf{G})_{a,b}} - \log\frac{\mathbf{V}_{a,b}}{(\mathbf{B}\mathbf{G})_{a,b}} - 1\right).$$
This divergence cost function is a good measurement for the perceptual difference between different audio signals [26]. The IS-NMF
solution for equation (5) can be computed by alternating multiplicative updates of G and B as follows:
$$\mathbf{G} \leftarrow \mathbf{G} \otimes \frac{\mathbf{B}^T\frac{\mathbf{V}}{(\mathbf{B}\mathbf{G})^2}}{\mathbf{B}^T\frac{\mathbf{1}}{\mathbf{B}\mathbf{G}}}, \qquad (6)$$
$$\mathbf{B} \leftarrow \mathbf{B} \otimes \frac{\frac{\mathbf{V}}{(\mathbf{B}\mathbf{G})^2}\mathbf{G}^T}{\frac{\mathbf{1}}{\mathbf{B}\mathbf{G}}\mathbf{G}^T}, \qquad (7)$$
where $\mathbf{1}$ is a matrix of ones with the same size as $\mathbf{V}$, the operation $\otimes$ is an element-wise multiplication, and all divisions and $(.)^2$ are element-wise operations. The matrices $\mathbf{B}$ and $\mathbf{G}$ are usually initialized by positive random numbers and then updated iteratively using equations (6) and (7).
In the initialization stage, NMF is used to decompose the frames
for each source i into a multiplication of a nonnegative dictionary
B i and a gain matrix Gi as follows:
$$\mathbf{S}_i^{train} \approx \mathbf{B}_i\mathbf{G}_i^{train}, \quad \forall i \in \{1,2\}, \qquad (8)$$
where $\mathbf{S}_i^{train}$ is the nonnegative matrix that contains the spectrogram frames of the training data of source i. After observing the
mixed signal, we calculate its spectrogram Y psd . NMF is used to
decompose the mixed signal’s spectrogram matrix Y psd with the
trained dictionaries as follows:
$$\mathbf{Y}_{psd} \approx [\mathbf{B}_1\ \mathbf{B}_2]\,\mathbf{G} \quad \text{or} \quad \mathbf{Y}_{psd} \approx [\mathbf{B}_1\ \mathbf{B}_2]\begin{bmatrix}\mathbf{G}_1\\ \mathbf{G}_2\end{bmatrix}. \qquad (9)$$
The only unknown here is the gains matrix G since the dictionaries
are fixed. The update rule in equation (6) is used to find G. After
finding the value of G, the initial estimate for each source magnitude
spectrogram is computed as follows:
$$\hat{\mathbf{S}}_{init1} = \frac{\mathbf{B}_1\mathbf{G}_1}{\mathbf{B}_1\mathbf{G}_1 + \mathbf{B}_2\mathbf{G}_2} \otimes \mathbf{Y}, \quad \hat{\mathbf{S}}_{init2} = \frac{\mathbf{B}_2\mathbf{G}_2}{\mathbf{B}_1\mathbf{G}_1 + \mathbf{B}_2\mathbf{G}_2} \otimes \mathbf{Y}, \qquad (10)$$
where ⊗ is an element-wise multiplication and the divisions are done
element-wise.
The magnitude spectrograms of the initial estimates of the
source signals are used to initialize the sources in the separation
stage of the DNN approach.
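The training and initialization steps above are straightforward to prototype. The following Python sketch (NumPy only) implements the multiplicative IS-NMF updates (6)-(7) and the initial source estimates (10); the iteration counts and the small flooring constant are illustrative assumptions rather than values prescribed by the paper.

```python
import numpy as np

EPS = 1e-12  # flooring to keep element-wise divisions well defined (assumption)

def is_nmf(V, n_basis, n_iter=200, seed=0):
    """Itakura-Saito NMF: V ~= B @ G via the multiplicative updates (6)-(7)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    B = rng.random((F, n_basis)) + EPS
    G = rng.random((n_basis, T)) + EPS
    ones = np.ones_like(V)
    for _ in range(n_iter):
        R = B @ G + EPS
        G *= (B.T @ (V / R**2)) / (B.T @ (ones / R) + EPS)   # Eq. (6)
        R = B @ G + EPS
        B *= ((V / R**2) @ G.T) / ((ones / R) @ G.T + EPS)   # Eq. (7)
    return B, G

def nmf_initial_estimates(Y, B1, B2, n_iter=200, seed=1):
    """Decompose the mixture with fixed stacked dictionaries, then apply Eq. (10)."""
    rng = np.random.default_rng(seed)
    B = np.hstack([B1, B2])
    G = rng.random((B.shape[1], Y.shape[1])) + EPS
    ones = np.ones_like(Y)
    for _ in range(n_iter):                  # gain-only updates, Eq. (6)
        R = B @ G + EPS
        G *= (B.T @ (Y / R**2)) / (B.T @ (ones / R) + EPS)
    G1, G2 = G[:B1.shape[1]], G[B1.shape[1]:]
    S1, S2 = B1 @ G1, B2 @ G2
    mask1 = S1 / (S1 + S2 + EPS)
    return mask1 * Y, (1.0 - mask1) * Y      # Eq. (10)
```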
4. THE METHOD
In NMF, the basic idea is to model each source with a dictionary, so
that source signals appear in the nonnegative span of this dictionary.
In the separation stage, the mixed signal is expressed as a nonnegative linear combination of the source dictionaries and separation is
performed by taking the parts corresponding to each source in the
decomposition.
The basic problem in NMF is that each source is modeled to lie
in a nonnegative cone defined by all the nonnegative linear combinations of its dictionary entries. This assumption may be a limiting
assumption usually since the variability within each source indicates
that nonlinear models may be more appropriate. This limitation led
us to consider nonlinear models for each source. It is not trivial to
use nonlinear models or classifiers in source separation. Since deep
neural networks were recently used with increased success in speech
recognition and other object recognition tasks, they can be considered as superior models of highly variable real-world signals.
We first train a DNN to model each source in the training
stage. We then use an energy minimization objective to estimate the
sources and their gains during the separation stage. Each stage is
explained below.
4.1. Training the DNN
We train a DNN that can classify sources present in the mixed signal. The input to the network is a frame of normalized magnitude
spectrum, x ∈ Rd . The neural network architecture is illustrated in
Figure 1. There are two outputs in the DNN, each corresponding to
a source. The label of each training instance is a binary indicator
function, namely if the instance is from source one, the first output
label f1 (x) = 1 and the second output label f2 (x) = 0. Let nk
be the number of hidden nodes in layer k for k = 0, . . . , K where
K is the number of layers. Note that n0 = d and nK = 2. Let
W k ∈ Rnk ×nk−1 be the weights between layers k − 1 and k, then
the values of a hidden layer hk ∈ Rnk are obtained as follows:
hk = g(W k hk−1 ),
(11)
where $g(x) = 1/(1+\exp(-x))$ is the elementwise sigmoid function. We
skip the bias terms to avoid clutter in our notation. The input to the
network is h0 = x ∈ Rd and the output is f (x) = hK ∈ R2 .
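A minimal forward pass matching Eq. (11) can be written as follows; the absence of biases mirrors the description above, while the weight initialization is left to the caller (the paper uses RBM pretraining).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(weights, x):
    """Compute h_k = g(W_k h_{k-1}) for all layers; returns all activations.

    weights: list [W_1, ..., W_K], W_k of shape (n_k, n_{k-1}); x: input in R^d.
    """
    h = [x]
    for W in weights:
        h.append(sigmoid(W @ h[-1]))   # Eq. (11), biases omitted as in the text
    return h                           # h[-1] = f(x) = (f1(x), f2(x))
```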
Fig. 1. Illustration of the DNN architecture.
Training a deep network necessitates a good initialization of the
parameters. It is shown that layer-by-layer pretraining using unsupervised methods for initialization of the parameters results in superior performance as compared to using random initial values. We
used Restricted Boltzmann Machines (RBM) for initialization. After initialization, supervised backpropagation algorithm is applied
to fine-tune the parameters. The learning criteria we use is leastsquares minimization. We are able to get the partial derivatives with
respect to the inputs, and this derivative is also used in the source
separation part. Let f (.) : Rd → R2 be the DNN, then f1 (x)
and f2 (x) are the scores that are proportional to the probabilities of
source one and source two respectively for a given frame of normalized magnitude spectrum x. We use these functions to measure how
much the separated spectra carry the characteristics of each source
as we elaborate more in the next section.
4.2. Source separation using DNN and energy minimization
In the separation stage, our algorithm works independently in each
frame of the mixed audio signal. For each frame of the mixed signal
spectrum, we calculate the normalized magnitude spectrum y. We
would like to express y = ux1 + vx2 where u and v are the gains
and x1 and x2 are normalized magnitude spectra of source one and
two respectively.
We formulate the problem of finding the unknown parameters
θ = (x1 , x2 , u, v) as an energy minimization problem. We have a
few different criteria that the source estimates need to satisfy. First,
they must fit well to the DNN trained in the training stage. Second,
their linear combination must sum to the mixed spectrum y and third
the source estimates must be nonnegative since they correspond to
the magnitude spectra of each source.
The energy functions E1 and E2 below are least squares cost
functions that quantify the fitness of a source estimate x to each corresponding source model in the DNN:
$$E_1(\mathbf{x}) = (1 - f_1(\mathbf{x}))^2 + (f_2(\mathbf{x}))^2, \qquad (12)$$
$$E_2(\mathbf{x}) = (f_1(\mathbf{x}))^2 + (1 - f_2(\mathbf{x}))^2. \qquad (13)$$
Basically, we expect to have $E_1(\mathbf{x}) \approx 0$ when $\mathbf{x}$ comes from source one and vice versa. We also define the following energy function which quantifies the energy of error caused by the least squares difference between the mixed spectrum $\mathbf{y}$ and its estimate found by linear combination of the two source estimates $\mathbf{x}_1$ and $\mathbf{x}_2$:
$$E_{err}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{y},u,v) = \|u\mathbf{x}_1 + v\mathbf{x}_2 - \mathbf{y}\|^2. \qquad (14)$$
Finally, we define an energy function that measures the negative energy of a variable, $R(\theta) = (\min(\theta,0))^2$.
In order to estimate the unknowns in the model, we solve the following energy minimization problem:
$$(\hat{\mathbf{x}}_1,\hat{\mathbf{x}}_2,\hat{u},\hat{v}) = \operatorname*{argmin}_{\{\mathbf{x}_1,\mathbf{x}_2,u,v\}} E(\mathbf{x}_1,\mathbf{x}_2,\mathbf{y},u,v), \qquad (15)$$
where
$$E(\mathbf{x}_1,\mathbf{x}_2,\mathbf{y},u,v) = E_1(\mathbf{x}_1) + E_2(\mathbf{x}_2) + \lambda E_{err}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{y},u,v) + \beta\sum_i R(\theta_i) \qquad (16)$$
is the joint energy function which we seek to minimize. λ and β are
regularization parameters which are chosen experimentally. Here
θ = (x1 , x2 , u, v) = [θ1 , θ2 , . . . , θn ] is a vector containing all the
unknowns which must all be nonnegative. Note that, the nonnegativity can be given as an optimization constraint as well, however
we obtained faster solution of the optimization problem if we used
the negative energy function penalty instead. If some of the parameters are found to be negative after the solution of the optimization
problem (which rarely happens), we set them to zero. We used the
LBFGS algorithm for solving the unconstrained optimization problem.
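As a concrete illustration, the joint energy (16) can be handed to a generic L-BFGS routine as a function of the stacked parameter vector θ. This sketch uses SciPy's minimize in place of the minFunc solver used by the authors, lets the solver approximate the gradient numerically for brevity (the paper uses the analytic gradient derived next), and assumes a user-supplied callable f_dnn for the trained network:

```python
import numpy as np
from scipy.optimize import minimize

def make_energy(f_dnn, y, lam=5.0, beta=3.0):
    """Return E(theta) of Eq. (16); f_dnn maps a spectrum to (f1, f2)."""
    d = y.size
    def energy(theta):
        x1, x2 = theta[:d], theta[d:2*d]
        u, v = theta[2*d], theta[2*d + 1]
        f1a, f2a = f_dnn(x1)
        f1b, f2b = f_dnn(x2)
        e1 = (1 - f1a)**2 + f2a**2                    # Eq. (12)
        e2 = f1b**2 + (1 - f2b)**2                    # Eq. (13)
        err = np.sum((u*x1 + v*x2 - y)**2)            # Eq. (14)
        neg = np.sum(np.minimum(theta, 0.0)**2)       # sum_i R(theta_i)
        return e1 + e2 + lam*err + beta*neg           # Eq. (16)
    return energy

# theta0 stacks the NMF initial estimates and gains (see below):
# res = minimize(make_energy(f_dnn, y), theta0, method="L-BFGS-B")
```

The values lam=5 and beta=3 follow the experimental settings reported in Section 5.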
We need to calculate the gradient of the DNN outputs with respect to the input x to be able to solve the optimization problem. The gradient of $f_i(\mathbf{x})$ with respect to the input $\mathbf{x}$ is given as $\frac{\partial f_i(\mathbf{x})}{\partial \mathbf{x}} = \mathbf{q}_{1,i}$ for $i = 1, 2$, where
$$\mathbf{q}_{k,i} = \mathbf{W}_k^T\big(\mathbf{q}_{k+1,i} \otimes \mathbf{h}_k \otimes (1-\mathbf{h}_k)\big), \qquad (17)$$
and $\mathbf{q}_{K,i} = f_i(\mathbf{x})(1-f_i(\mathbf{x}))\,\mathbf{w}_{K,i}^T$, where $\mathbf{w}_{K,i} \in \mathbb{R}^{n_{K-1}}$ contains the weights between the $i$-th node of the output layer and the nodes at the previous layer, in other words the $i$-th row of $\mathbf{W}_K$.
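The recursion (17) amounts to a standard backward pass through the sigmoid layers; a sketch consistent with the forward-pass code above (biases omitted) is:

```python
import numpy as np

def input_gradient(weights, h, i):
    """Gradient of f_i(x) w.r.t. the input x via Eq. (17).

    weights: [W_1, ..., W_K]; h: activations from forward(); i: 0 or 1.
    """
    K = len(weights)
    fK = h[-1]
    w_out = weights[-1][i]                  # i-th row of W_K
    q = fK[i] * (1.0 - fK[i]) * w_out       # q_{K,i}
    for k in range(K - 1, 0, -1):           # layers K-1, ..., 1
        q = weights[k - 1].T @ (q * h[k] * (1.0 - h[k]))   # Eq. (17)
    return q                                # q_{1,i} = df_i/dx
```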
The flowchart of the energy minimization setup is shown in Figure 2. For illustration, we show the single DNN in two separate
blocks in the flowchart. The fitness energies are measured using the
DNN and the error energy is found from the summation requirement.
Note that, since there are many parameters to be estimated and
the problem is clearly non-convex, the initialization of the parameters is very important. We initialize the estimates x̂1 and x̂2 from
the NMF result after normalizing by their `2 -norms. û is initialized
by the `2 -norm of the initial NMF source estimate ŝ1 divided by the
`2 -norm of the mixed signal y. v̂ is initialized in a similar manner.
After we obtain (x̂1 , x̂2 , û, v̂) as the result of the energy minimization problem, we use them as spectral estimates in a Wiener
filter to reconstruct improved estimates of each source spectra, e.g.
for source one we obtain the final estimate as follows:
Fig. 2. Flowchart of the energy minimization setup. For illustration,
we show the single DNN in two separate blocks in the flowchart.
$$\hat{\mathbf{s}}_1 = \frac{(\hat{u}\hat{\mathbf{x}}_1)^2}{(\hat{u}\hat{\mathbf{x}}_1)^2 + (\hat{v}\hat{\mathbf{x}}_2)^2} \otimes \mathbf{y}. \qquad (18)$$
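In code, the Wiener-style reconstruction (18) is a single vectorized expression, and the complementary mask yields the second source. This is a sketch under the paper's notation; the small flooring constant is an implementation choice:

```python
import numpy as np

def wiener_reconstruct(x1_hat, x2_hat, u_hat, v_hat, y, eps=1e-12):
    """Final source estimates via the squared-spectra mask of Eq. (18)."""
    p1 = (u_hat * x1_hat) ** 2
    p2 = (v_hat * x2_hat) ** 2
    mask = p1 / (p1 + p2 + eps)
    return mask * y, (1.0 - mask) * y
```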
5. EXPERIMENTS AND DISCUSSION
We applied the proposed algorithm to separate speech and music signals from their mixture. We simulated our algorithm on a collection
of speech and piano data at 16kHz sampling rate. For speech data,
we used the training and testing male speech data from the TIMIT
database. For music data, we downloaded piano music data from the Piano Society web site [27]. We used 39 pieces with approximately 185 minutes total duration from different composers but from a single artist for training, and left out one piece for testing. The magnitude spectrograms for the speech and music data were calculated using the STFT: a Hamming window of 480 points with 60% overlap was used, and the FFT was taken at 512 points; only the first 257 FFT points were used, since the remaining 255 points are the conjugates of the first ones.
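For reference, these analysis settings translate directly into frame parameters; the hop length below follows from the stated 60% overlap (40% of 480 = 192 samples), and the use of SciPy is an implementation choice, not part of the paper:

```python
import numpy as np
from scipy.signal import stft

FS = 16000          # sampling rate (Hz)
WIN_LEN = 480       # Hamming window length in samples
HOP = WIN_LEN - int(0.6 * WIN_LEN)   # 60% overlap -> hop of 192 samples
NFFT = 512          # FFT size; bins 0..256 (257 values) are kept

def magnitude_spectrogram(x):
    _, _, Z = stft(x, fs=FS, window="hamming", nperseg=WIN_LEN,
                   noverlap=WIN_LEN - HOP, nfft=NFFT)
    return np.abs(Z)   # shape: (257, n_frames)
```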
The mixed data was formed by adding random portions of the
test music file to 20 speech files from the test data of the TIMIT
database at different speech to music ratio. The audio power levels
of each file were found using the “speech voltmeter” program from
the G.191 ITU-T STL software suite [28].
For the initialization of the source signals using nonnegative matrix factorization, we used a dictionary size of 128 for each source.
For training the NMF dictionaries, we used 50 minutes of data for
music and 30 minutes of the training data for speech. For training the
DNN, we used a total 50 minute subset of music and speech training
data for computational reasons.
For the DNN, the number of nodes in each hidden layer were
100-50-200 with three hidden layers. Sigmoid nonlinearity was used
at each node including the output nodes. DNN was initialized with
RBM training using contrastive divergence. We used 150 epochs for
training each layer’s RBM. We used 500 epochs for backpropagation
training. The first five epochs were used to optimize the output layer
keeping the lower layer weights untouched.
In the energy minimization problem, the values for the regularization parameters were λ = 5 and β = 3. We used Mark Schmidt’s
minFunc matlab LBFGS solver for solving the optimization problem
[29].
Performance measurements of the separation algorithm were
done using the signal to distortion ratio (SDR) and the signal to
interference ratio (SIR) [30]. The average SDR and SIR over the
20 test utterances are reported. The source to distortion ratio (SDR)
is defined as the ratio of the target energy to all errors in the reconstructed signal. The target signal is defined as the projection
of the predicted signal onto the original speech signal. Signal to
interference ratio (SIR) is defined as the ratio of the target energy to
the interference error due to the music signal only. The higher SDR
and SIR we measure the better performance we achieve. We also
use the output SNR as additional performance criteria.
The results are presented in Tables 1 and 2. We experimented
with multi-frame DNN where the inputs to the DNN were taken
from L neighbor spectral frames for both training and testing instead of using a single spectral frame similar to [15]. We can see that
using the DNN and the energy minimization idea, we can improve
the source separation performance for all input speech-to-music ratio
(SMR) values from -5 to 5 dB. In all cases, DNN is better than regular NMF and the improvement in SDR and SNR is usually around
1-1.5 dB. However, the improvement in SIR can be as high as 3
dB which indicates the fact that the introduced method can decrease
remaining music portions in the reconstructed speech signal. We
performed experiments with L = 3 neighboring frames which improved the results as compared to using a single frame input to the
DNN. For L = 3, we used 500 nodes in the third layer of the DNN
instead of 200. We conjecture that better results can be obtained if
higher number of neighboring frames are used.
Table 1. SDR, SIR and SNR in dB for the estimated speech signal.
SMR (dB) |  NMF              |  DNN, L = 1       |  DNN, L = 3
         |  SDR   SIR   SNR  |  SDR   SIR   SNR  |  SDR   SIR   SNR
   -5    |  1.79  5.01  3.15 |  2.81  7.03  3.96 |  3.09  7.40  4.28
    0    |  4.51  8.41  5.52 |  5.46  9.92  6.24 |  5.73 10.16  6.52
    5    |  7.99 12.36  8.62 |  8.74 13.39  9.24 |  8.96 13.33  9.45
Table 2. SDR, SIR and SNR in dB for the estimated music signal.
SMR (dB) |  NMF              |  DNN, L = 1       |  DNN, L = 3
         |  SDR   SIR   SNR  |  SDR   SIR   SNR  |  SDR   SIR   SNR
   -5    |  5.52 15.75  6.30 |  6.31 18.48  7.11 |  6.67 18.30  7.43
    0    |  3.51 12.65  4.88 |  4.23 16.03  5.60 |  4.45 15.90  5.88
    5    |  0.93  9.03  3.35 |  1.79 12.94  3.96 |  1.97 13.09  4.17
6. CONCLUSION
In this work, we introduced a novel approach for single channel
source separation (SCSS) using deep neural networks (DNN). The
DNN was used in this paper as a helper to model each source signal.
The training data for the source signals were used to train a DNN.
The trained DNN was used in an energy minimization framework to
separate the mixed signals while also estimating the scale for each
source in the mixed signal. Many adjustments for the model parameters can be done to improve the proposed SCSS using the introduced
approach. Different types of DNN such as deep autoencoders and
deep recurrent neural networks which can handle the temporal structure of the source signals can be tested on the SCSS problem. We
believe this idea is a novel idea and many improvements will be possible in the near future to improve its performance.
7. REFERENCES
[1] T. Kristjansson, J. Hershey, and H. Attias, “Single microphone
source separation using high resolution signal reconstruction,”
in IEEE International Conference Acoustics, Speech and Signal Processing (ICASSP), 2004.
[15] E. M. Grais and H. Erdogan, “Single channel speech music
separation using nonnegative matrix factorization with sliding
window and spectral masks,” in Annual Conference of the International Speech Communication Association (InterSpeech),
2011.
[2] A. M. Reddy and B. Raj, “Soft Mask Methods for singlechannel speaker separation,” IEEE Transactions on Audio,
Speech, and Language Processing, vol. 15, Aug. 2007.
[16] E. M. Grais and H. Erdogan, “Regularized nonnegative matrix factorization using gaussian mixture priors for supervised
single channel source separation,” Computer Speech and Language, vol. 27, no. 3, pp. 746–762, May 2013.
[3] A. M. Reddy and B. Raj, “A Minimum Mean squared error estimator for single channel speaker separation,” in International Conference on Spoken Language Processing (InterSpeech), 2004.
[17] E. M. Grais and H. Erdogan, “Gaussian mixture gain priors for
regularized nonnegative matrix factorization in single-channel
source separation,” in Annual Conference of the International
Speech Communication Association (InterSpeech), 2012.
[4] T. Virtanen, “Speech recognition using factorial hidden
markov models for separation in the feature space,” in International Conference on Spoken Language Processing (InterSpeech), 2006.
[18] E. M. Grais and H. Erdogan, “Hidden Markov Models as priors for regularized nonnegative matrix factorization in singlechannel source separation,” in Annual Conference of the International Speech Communication Association (InterSpeech),
2012.
[5] S. T. Roweis, “One Microphone source separation,” Neural
Information Processing Systems, 13, pp. 793–799, 2000.
[6] A. N. Deoras and A. H. Johnson, “A factorial hmm approach to
simultaneous recognition of isolated digits spoken by multiple
talkers on one audio channel,” in IEEE International Conference Acoustics, Speech and Signal Processing (ICASSP), 2004.
[7] M. H. Radfar, W. Wong, W. Y. Chan, and R. M. Dansereau,
“Gain estimation in model-based single channel speech separation,” in In proc. IEEE International Workshop on Machine
Learning for Signal Processing (MLSP, Grenoble, France),
Sept. 2009.
[8] M. H. Radfar, R. M. Dansereau, and W. Y. Chan, “Monaural speech separation based on gain adapted minimum mean
square error estimation,” Journal of Signal Processing Systems,Springer, vol. 61, no. 1, pp. 21–37, 2008.
[9] M. H. Radfar and R. M. Dansereau, “Long-term gain estimation in model-based single channel speech separation,” in in
Proc. of IEEE Workshop on Applications of Signal Processing
to Audio and Acoustics (WASPAA, New Paltz, NY), 2007.
[19] E. M. Grais and H. Erdogan,
“Spectro-temporal postenhancement using MMSE estimation in NMF based singlechannel source separation,” in Annual Conference of the International Speech Communication Association (InterSpeech),
2013.
[20] Y. Wang and DL. Wang, “Cocktail party processing via structured prediction,” in Advances in Neural Information Processing Systems (NIPS), 2012.
[21] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh, “A
fast learning algorithm for deep belief nets,” Neural Comput.,
vol. 18, no. 7, pp. 1527–1554, July 2006.
[22] Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury,
“Deep neural networks for acoustic modeling in speech recognition,” Signal Processing Magazine, 2012.
[23] E. M. Grais and H. Erdogan, “Single channel speech music
separation using nonnegative matrix factorization and spectral
masks,” in International Conference on Digital Signal Processing, 2011.
[10] A. Ozerov, C. Fvotte, and M. Charbit, “Factorial scaled hidden
markov model for polyphonic audio representation and source
separation,” in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Mohonk, NY, 2009.
[24] T. Virtanen, “Monaural sound source separation by nonnegative matrix factorization with temporal continuity and
sparseness criteria,” IEEE Transactions on Audio, Speech, and
Language Processing, vol. 15, pp. 1066–1074, Mar. 2007.
[11] M. H. Radfar, W. Wong, R. M. Dansereau, and W. Y. Chan,
“Scaled factorial Hidden Markov Models: a new technique for
compensating gain differences in model-based single channel
speech separation,” in IEEE International Conference Acoustics, Speech and Signal Processing (ICASSP), 2010.
[25] D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” Advances in Neural Information Processing
Systems, vol. 13, pp. 556–562, 2001.
[12] J. R. Hershey, S. J. Rennie, P. A. Olsen, and T. T. Kristjansson, “Super-human multi-talker speech recognition: A graphical modeling approach,” Computer Speech and Language, vol.
24, no. 1, pp. 45–66, Jan. 2010.
[13] M. N. Schmidt and R. K. Olsson, “Single-channel speech separation using sparse non-negative matrix factorization,” in International Conference on Spoken Language Processing (InterSpeech), 2006.
[14] E. M. Grais and H. Erdogan,
“Spectro-temporal postsmoothing in NMF based single-channel source separation,”
in European Signal Processing Conference (EUSIPCO), 2012.
[26] C. Fevotte, N. Bertin, and J. L. Durrieu, “Nonnegative matrix
factorization with the itakura-saito divergence. With application to music analysis,” Neural Computation, vol. 21, no. 3,
pp. 793–830, 2009.
[27] URL, “http://pianosociety.com,” 2009.
[28] URL, “http://www.itu.int/rec/T-REC-G.191/en,” 2009.
[29] URL, "http://www.di.ens.fr/~mschmidt/software/minfunc.html," 2012.
[30] E. Vincent, R. Gribonval, and C. Fevotte, “Performance measurement in blind audio source separation,” IEEE Transactions
on Audio, Speech, and Language Processing, vol. 14, no. 4, pp.
1462–69, July 2006.
| 9 |
On the Feasibility of Interference Alignment in
Compounded MIMO Broadcast Channels with
Antenna Correlation and Mixed User Classes
arXiv:1711.07175v1 [] 20 Nov 2017
Galymzhan Nauryzbayev, Member, IEEE, Emad Alsusa, Senior Member, IEEE,
and Mohamed Abdallah, Senior Member, IEEE
Abstract
This paper presents a generalized closed-form beamforming technique that can achieve the maximum degrees of
freedom in compounded multiple-input multiple-output (MIMO) broadcast channels with mixed classes of multipleantenna users. The contribution is firstly described within the context of a three-cell network and later extended
to the general multi-cell scenario where we also show how to determine the conditions required to align the
interference in a subspace that is orthogonal to the one reserved for the desired signals. This is then followed by
an analysis of the impact of antenna correlation for different channel state information acquisition models. The
proposed scheme is examined under both conventional and Large-scale MIMO systems. It will be shown that the
proposed technique enables networks with any combination of user classes to achieve superior performance even
under significant antenna correlation, particularly in the case of the Large-scale MIMO systems.
Index Terms
Antenna Correlation, Broadcast Channel (BC), Channel State Information (CSI), Degree of Freedom (DoF),
Interference Alignment (IA), Multiple-Input Multiple-Output (MIMO).
I. I NTRODUCTION
I
NTERFERENCE alignment (IA) is a potential capacity-achieving technique in interference-limited
networks, that was initially proposed in [1] to establish additional degrees of freedom (DoF) in a
single-input single-output (SISO) model, where it was shown that the optimal DoF is K/2 in the K-user time-varying interference channel (IC). The concept of IA is concerned with aligning all interfering signals into a specific subspace at the receiver side in order to exclusively reserve a linearly independent subspace for interference-free data transmission. In a relatively short time, the IA concept has attracted a significant amount of research attention, and a number of algorithms have been proposed for multiple-input multiple-output (MIMO) systems [2]-[21], [25]-[31], to name a few.
The feasibility conditions of IA for MIMO IC were analysed in [2] and [3]. In [4], the authors considered the case of the K-user M × N MIMO network. It was shown that the achievable total number of DoF equals K min(M, N) if K ≤ R, and (RK/(R+1)) min(M, N) if K > R, where M and N are the numbers of transmit and receive antennas, respectively, and R = max(M, N)/min(M, N).
The authors in [5] investigated the X-channel network scenario
and obtained remarkably high DoF compared to the findings in [6]. The derived outcomes of two DoF
per user, [5], [7], prompted the work in [8], where the authors presented an IA scheme for the two-user
X-channel with three antennas at each node. In [9], the authors achieved a total DoF of MN/(M + N − 1) for the
case of M × N time-varying X-channels. In [10], the authors proposed an IA scheme to estimate the
achievable DoF in cellular networks which is based on aligning the interference into a multi-dimensional
subspace at multiple non-intended base stations (BS). On the other hand, in [11], the authors provided a
precise expression of the spatial multiplexing gain for two mutually interfering MIMO broadcast channels
(BCs) using a linear transceiver. Moreover, for the same scenario, the authors in [12] proposed a novel
IA technique for jointly designing the transmit and receive beamforming (BF) vectors with a closed-form
non-iterative expression. It was shown both analytically and numerically that the proposed scheme achieves
the optimal DoF. Furthermore, the authors in [13] showed how the extension of the grouping method,
[12], can be outperformed in terms of capacity and BS complexity. On the other hand, the authors in
[14] considered a network scenario comprising all the aforementioned cases and proposed a closed-form
IA scheme that can mitigate such mixture of interference. However, due to the limited available physical
space, resulting in small separations between the antenna elements, each channel is affected by antenna
correlation which hinders further enhancements of the transmission rate.
A. Main Contributions
In contrast to [14] and [15], this paper considers the compounded MIMO BC network scenario when
the users have different number of antennas and require different numbers of data streams. The derived
algorithm can be regarded as a generalization that can be also used in the special cases of [14] and [15],
where it was assumed that all users demand an equal number of data streams. Thus, this paper focuses
on the feasibility of IA, achievable sum rate and DoF region under various scenarios of channel state
information (CSI) acquisition and spatial correlation. The efficient channel model is used to capture all
possible cases of CSI mismatch. Furthermore, the paper investigates the impact of spatial correlation
between the antenna elements which arises due to the physical limitations of the available space. With
this in mind, the model was manipulated to involve both the impact of spatial correlation and CSI
mismatch in the compounded MIMO BC when the users have various antenna configurations and require to
decode different numbers of data streams. Therefore, a comprehensive algorithm is proposed to define the
minimum required antenna configuration to achieve a given DoF. The proposed technique is demonstrated
within the contexts of both conventional and Large-scale MIMO (LS-MIMO) systems under a wide range
of channel conditions and spatial correlations. Finally, the complexity of this technique is studied and
compared to a well-known benchmark technique.
The results demonstrate the effectiveness of the proposed technique particularly under highly correlated
channels. It is shown that this technique is not only associated with relatively lower computational
requirement, but also can achieve the maximum DoF even when the multiplexed users belong to different
data classes.
II. S YSTEM M ODEL
The system model considered here presents a cellular network comprising of L cells serving multiple
users randomly distributed as shown in Fig. 1. In terms of the network coverage, the whole combined
area can be determined as non-overlapped and overlapped areas, [22], [23]. The overlapped area denotes
a space where BSs cover a common region. Hence, the number of BSs creating the overlapped area varies
from 2 to L. According to this number of BSs, we can further define totally and partially overlapped areas.
Any user located in the totally overlapped area experiences the worst scenario when high interference
degrades the network performance. Thus, a main focus of the work is the users located in the totally
overlapped area.
Under a dense network scenario, [24], it is feasible to assume that each BS aims to communicate with
a large number of users K. We can also assume that K ≤ K users reside in the totally overlapped area,
and that K is identical for all L cells.
Since a concept of the point-to-point MIMO model can be applied for designing the cellular network,
we consider the IC and X-channel models that are differentiable from each other by the message set ups,
[1], [25], e.g., IC is the scenario when each BS i serves only the corresponding user i while the X-channel
scenario is characterised by a message set when each BS i has different messages to all users. Therefore,
a cellular MIMO network can be determined as a compounded IC-X channel scenario when each cell
consists of cell-edge users experiencing a multi-source transmission from L BSs such that 1 < L < L,
Fig. 1. A three-cell MIMO network with randomly distributed users.
and L is thought to be the same for all users. Hence, the user of interest can classify the observed network
as a set of (L − L) IC and L X-channels, and then the corresponding interferences can be defined as ICI
and XCI, [14], [15].
Since we focus on multi-antenna users in the totally overlapped area, it is reasonable to assume that
the user of interest may wish to receive data from several BSs. In turn, those BSs might send different
numbers of data streams to the interested users. In this work, we consider the network scenario when BSs
serve mixed user classes that can be determined as the case when users with different numbers of antennas
require a different number of data streams to be decoded. Hence, we define the network configuration
as (Mi , Ni , K, L, di ), where Mi and Ni indicate the number of transmit and receive antennas per node in
cell i, respectively, K is the number of users per cell in the totally overlapped area, and di denotes the
number of data streams transmitted from BS i. For the sake of brevity, we assume that each BS serves
only one user in the totally overlapped area (K = 1), and the received signal at the user of interest can
be written as, [14],
$$\tilde{\mathbf{y}}_j = \underbrace{\mathbf{U}_j^H\sum_{i=j-\mathcal{L}+1}^{j}\mathbf{H}_{j,i}\mathbf{V}_i\mathbf{s}_i}_{\text{desired + XCI signals}} + \underbrace{\mathbf{U}_j^H\sum_{l=1,\,l\neq i}^{L}\mathbf{H}_{j,l}\mathbf{V}_l\mathbf{s}_l}_{\text{ICI}} + \tilde{\mathbf{n}}_j, \quad \forall j \in L, \qquad (1)$$
where $\mathbf{U}_j \in \mathbb{C}^{N_j\times d^{[j]}}$ denotes the receive suppression matrix of user $j$, and $\tilde{\mathbf{n}}_j = \mathbf{U}_j^H\mathbf{n}_j \in \mathbb{C}^{d^{[j]}\times 1}$ is the effective zero-mean additive white Gaussian noise (AWGN) vector, with $\mathbb{E}\big[\tilde{\mathbf{n}}_j\tilde{\mathbf{n}}_j^H\big] = \sigma_{\tilde n}^2\mathbf{I}$. $d^{[j]}$ is the number of data streams required to be decoded at user $j$. $\mathbf{H}_{j,i} \in \mathbb{C}^{N_j\times M_i}$ is the channel matrix between BS $i$ and user $j$. As in [14], the transmit BF design proceeds as
$$\mathbf{V}_i = \mathbf{V}_i^{[ICI]}\times\mathbf{V}_i^{[XCI]}, \quad \forall i \in L, \qquad (2)$$
where $\mathbf{V}_i^{[ICI]} \in \mathbb{C}^{M_i\times Q_i}$ and $\mathbf{V}_i^{[XCI]} \in \mathbb{C}^{Q_i\times d_i}$ are the BF matrices responsible for mitigating the ICI and XCI terms in (1), respectively. $\mathbf{V}_i \in \mathbb{C}^{M_i\times d_i}$ is the transmit BF matrix at BS $i$, with $\operatorname{tr}\big(\mathbf{V}_i\mathbf{V}_i^H\big) = 1$. We assume that $\mathbf{s}_i \in \mathbb{C}^{d\times 1}$ is the data vector comprising the symbols drawn from independent and identically distributed (i.i.d.) Gaussian input signaling and chosen from a desired constellation, with $\mathbb{E}\big[\mathbf{s}_i\mathbf{s}_i^H\big] = \mathbf{I}$.
Then, all these conditions sufficiently meet the average power constraint at the BS.
With this in mind and due to the nature of broadcast transmissions, we can assume that the users within
one cell desire to receive the same data, thus the transmitted signal can be expressed as
$$\mathbf{s}_i = \left[\mathbf{c}^{[i,i]T}\ \ \mathbf{c}^{[i,i+1]T}\ \cdots\ \mathbf{c}^{[i,l]T}\ \cdots\ \mathbf{c}^{[i,i+\mathcal{L}-1]T}\right]^T, \quad \forall i \in L, \qquad (3)$$
where c[i,l] ∈ Cd[i,l] ×1 and d[i,l] indicate the message vector and its corresponding number of data streams
transmitted from BS i to the user in cell l, respectively. More specifically, (3) implies that BS i dedicates the
first part of the data to its corresponding user i, while the second part of the data is transmitted to the user
belonging to the neighbouring cell (i+1), and so on. The superscript numeration l ∈ {i, i+1, . . . , i+L−1}
changes circularly such that1
$$l = \begin{cases} L, & \text{if } \operatorname{mod}_L(l) = 0,\\ \operatorname{mod}_L(l), & \text{otherwise}. \end{cases} \qquad (4)$$
For instance, for i = 4 and L = 4 (L = 3), we have modL ({i, i + 1, i + 2}) = mod4 ({4, 5, 6}) →
{4, 1, 2}.
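This circular re-indexing is easy to mirror in code; a small helper (the function name is ours) reproduces the example above:

```python
def circ(l: int, L: int) -> int:
    """Circular superscript numeration of Eq. (4): maps l to 1..L."""
    m = l % L
    return L if m == 0 else m

# Example from the text: i = 4, L = 4 -> indices {4, 5, 6} -> {4, 1, 2}
print([circ(l, 4) for l in (4, 5, 6)])   # [4, 1, 2]
```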
To decode the required signal successfully in (1), the desired signal should be aligned into a subspace
at the receiver side such that both ICI and XCI interfering signals are aligned into the subspace that is
orthogonal to the desired signal. Therefore, the following conditions must be satisfied at user j
$$\operatorname{rank}\left(\sum_{i=j-\mathcal{L}+1}^{j}\mathbf{U}_j^H\mathbf{H}_{j,i}\mathbf{V}_i\right) = d^{[j]}, \quad \forall j \in L, \qquad (5)$$
$$\sum_{l=1,\,l\neq i}^{L}\mathbf{U}_j^H\mathbf{H}_{j,l}\mathbf{V}_l = \mathbf{0}, \quad \forall j \in L, \qquad (6)$$
1 It is worth noting that all notations with ∗(·) should satisfy Eq. (4).
where d[j] is the number of resolvable interference-free data streams at user j. Condition (5) ensures that
the user of interest can extract the desired signal coming from the channel links simultaneously carrying
both the desired and XCI signals, while (6) implies that the ICI signals are totally mitigated at the receiver
side.
A. Imperfect CSI
Since the CSI acquisition in practice is represented by imperfect estimates of the channel parameters, the system performance is likely to be degraded. Under the same assumptions as in [26], [27], [28], we assume that the precoders and combiners are designed with the knowledge of CSI mismatch. Similar to
[29], [30], we exploit the following model to construct the CSI mismatch as
Ĥ = G + E,
(7)
where the actual channel realization $\mathbf{G}$ is thought to be independent of the channel measurement error matrix $\mathbf{E}$. Defining the nominal SNR as $\rho = P/\sigma_n^2$, we further regard $\mathbf{E}$ as a Gaussian matrix where each
entry is generated using i.i.d. random variables with zero mean and variance τ such that
$$\operatorname{vec}(\mathbf{E}) \sim \mathcal{CN}(0,\tau\mathbf{I}) \quad \text{with} \quad \tau \triangleq \beta\rho^{-\alpha}, \ \beta > 0, \ \alpha \ge 0,$$
(8)
where vec (·) indicates the vector operator and CN (µ, Σ) denotes the multivariate complex normal
distribution with mean vector µ and covariance matrix Σ.
In this model, the error variance can be used to capture a variety of CSI acquisition scenarios, i.e., dependent on the SNR (α ≠ 0) or independent of it (α = 0). In particular, the perfect CSI scenario is obtained by setting τ = 0 (α → ∞). Two distinct cases of CSI acquisition, known as reciprocal and CSI feedback channels, are described by setting α = 0 and α = 1, respectively.
To facilitate further analysis, it is more appropriate to derive the statistical properties of G conditioned on Ĥ. Since Ĥ = G + E, with G and E being statistically independent Gaussian variables, G and Ĥ are jointly Gaussian. Therefore, after conditioning on Ĥ [31], [32], the real channel realization can be expressed as
G = (1/(1+τ)) Ĥ + Ê, (9)
where vec(Ê) ∼ CN(0, (τ/(1+τ)) I) is statistically independent of Ĥ.
B. Kronecker Product based Channel Modeling with Antenna Correlation
The downlink channel Hj,i in (1) is modeled as a correlated flat fading channel. Since the system model
in [14] presumes that both transmitter (Tx) and receiver (Rx) nodes are deployed with multiple antennas,
we assume that the fading is correlated at both sides.
The general model of the correlated channel is given as [33], [34], [35]
vec(H) = R^(1/2) vec(G), (10)
where G is the i.i.d. MIMO channel with either perfect CSI or CSI mismatch as in (9), and R is the
covariance matrix defined as
R ≜ E{ vec(G) vec(G)^H }. (11)
A popular MIMO channel simulation model is the Kronecker model [35], [36], [37]. Its main limitation is that it does not take into account the coupling between the direction of departure (DoD) at the transmitter and the direction of arrival (DoA) at the receiver, which is typical for MIMO channels. Despite this limitation, the Kronecker model is widely used in information-theoretic capacity analysis and simulation studies [38], [39], [40], [41]. Although the model is expected to become inaccurate with increasing array size and angular resolution, it still finds use in large MIMO system studies because of its simplicity. Therefore, the Kronecker model is sufficient to analyse the effect of spatial correlation on the network performance.
Since this model is based on the assumption of separability of the transmit and receive correlation
coefficients, the transmitter does not affect the spatial properties of the received signal. In such a case the
correlation matrix is given as
R = R_r ⊗ R_t, (12)
where the notation ⊗ stands for the Kronecker product operator, Rr ∈ CN ×N and Rt ∈ CM ×M are the
receive and transmit correlation matrices defined as
R_r = (1/M) E[ G G^H ], (13)
R_t = (1/N) E[ G^H G ]. (14)
Then, the channel realization with respect to imperfect CSI (9) can be modeled as [42], [43]
H = R_r^(1/2) G R_t^(1/2) = R_r^(1/2) ( (1/(1+τ)) Ĥ + Ê ) R_t^(1/2)
  = (1/(1+τ)) R_r^(1/2) Ĥ R_t^(1/2) + R_r^(1/2) Ê R_t^(1/2) = H̃ + Ẽ, (15)
where Ẽ = R_r^(1/2) Ê R_t^(1/2) ∼ CN(0, (τ/(τ+1)) R_r ⊗ R_t) is the estimation error, which is uncorrelated with H̃ = (1/(1+τ)) R_r^(1/2) Ĥ R_t^(1/2).
With this in mind, we refer to [44], where, for the case of both-end correlation, the authors showed that
the Kronecker-based exponential and uniform models are empirically reasonable to apply at the transmitter
and receiver sides, respectively. Moreover, these simple single-parameter models allow one to investigate the effects of both-end correlation on the achievable sum rate and DoF in an explicit way.
1) Exponential correlation coefficient model: The simplest and most common model that accounts for potential antenna correlation is the exponential model [35], [37], which can accordingly be utilized at the transmitter side as follows:
R_t[m,n] = r^|m−n|, if m ≥ n;  (r^†)^|m−n|, if m < n, (16)
where the subscript notation [m, n] indicates the matrix element located in the m-th row and n-th column, (·)^† denotes the complex conjugate, and r = a e^(jθ) is the complex correlation coefficient with 0 ≤ a < 1. For simplicity, we assume that r = a throughout the paper, unless stated otherwise.
2) Uniform correlation coefficient model: The uniform coefficient model [45] is the worst-case scenario, where the correlation coefficients are defined as
R_r[m,n] = r^|m−n|, if m = n;  r, if m ≠ n, (17)
where we assume that the coefficients of all neighboring subchannels are equal to those of the distant channels.
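Both single-parameter models are straightforward to construct; a sketch of (16) and (17) for real r, with helper names of our own:

import numpy as np

def exponential_corr(M, r):
    """Transmit-side exponential model, Eq. (16). For real r (r = a),
    the two branches of (16) coincide and R[m, n] = r^|m-n|."""
    idx = np.arange(M)
    return r ** np.abs(idx[:, None] - idx[None, :])

def uniform_corr(N, r):
    """Receive-side uniform model, Eq. (17): ones on the diagonal and a
    common coefficient r everywhere else (worst-case scenario)."""
    return (1 - r) * np.eye(N) + r * np.ones((N, N))

Rt = exponential_corr(10, 0.9)  # high transmit correlation (cf. Section IV)
Rr = uniform_corr(4, 0.3)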
III. THE IA FEASIBILITY CONDITIONS AND DOF REGION ANALYSIS
As stated in [14], [15], the compounded MIMO BC scenario represents the network model where the
users located in the totally overlapped area experience a multi-source transmission by L BSs. In particular,
the authors in [14] focused on the scenario when each Tx-Rx pair has an identical antenna configuration
with an equal number of data streams transmitted to the users of interest. In contrast to [14], in this
work, we consider the network scenario when the BSs serve mixed user classes. Therefore, we exploit
a three-cell network scenario to analyze how the proposed IA scheme performs under this transmission
strategy. For the sake of simplicity, we assume that each cell consists of one BS serving a single user
residing in the totally overlapped area. Thus, each BS aims to send data to two users of interest (L̄ = 2).
Accordingly, the transmitted signal in (3) can be rewritten as
s_i = [ c^[i,i]T  c^[i,i+1]T ]^T, ∀i ∈ L, (18)
where c^[i,i] ∈ ℂ^(d^[i,i]×1) is the data vector transmitted from BS i to the corresponding user belonging to the same cell, while c^[i,i+1] ∈ ℂ^(d^[i,i+1]×1) indicates the data transmitted to the desired user in the neighboring cell.
In the following sections, we show how to define the antenna configuration for each Tx-Rx pair to
achieve a given DoF in the three-cell MIMO network (see Fig. 2) and determine the performance metrics
accounting for the impact of spatial correlation.
A. The Feasibility Conditions of Interference Alignment
Since we decouple the transmit BF matrix into two parts, we start with the design of the part responsible for ICI mitigation. For the sake of brevity, we refer to [14], where the given scheme is also applicable to the case with a single user per cell. After applying that scheme, we obtain the V_i^[ICI] matrix with dimension M_i × (M_i − N_{i−1}). Hence, the matrix dimension leads to the following condition
(M_i − N_{i−1})^+ ≥ d_i, ∀i ∈ L, (19)
to be satisfied for a successful transmission of all the desired data to the interested users, where (·)^+ represents a function that returns a non-negative value and d_i is the number of data streams transmitted from BS i,
Fig. 2. A system model with three-cell compounded MIMO BCs for K = 1 user per cell with various numbers of antennas at each Tx-Rx pair.
defined with respect to (18) as
d_i = Σ_{j=i}^{i+1} d^[i,j], ∀i ∈ L. (20)
For simplicity of the following derivations, we introduce a new variable indicating the number of columns in V_i^[ICI], as well as the number of non-occupied antennas at the transmitter side, and define
Q_i = (M_i − N_{i−1})^+, ∀i ∈ L. (21)
Finally, we restate the condition of the successive ICI cancellation given in (19) as
rank( V_i^[ICI] ) ≥ Q_i, ∀i ∈ L. (22)
Since we have different classes of receivers and BSs that aim to transmit different numbers of data streams to the users of interest, we need to define the number of data streams to be decoded at user j as
d^[j] = Σ_{∀i} d^[i,j], ∀j ∈ L. (23)
In general, the number of transmitted data streams from BS i is not equal to the number of data streams desired to be received by the corresponding user i, d_i ≠ d^[i]; nevertheless, the total number of streams at the transmitter side always matches that at the receiver side:
Σ_{i=1}^{L=3} d_i = Σ_{j=1}^{L=3} d^[j]. (24)
With this in mind, the received signal at the user of interest can be written as
ỹ_j = U_j^H Σ_{i=j−1}^{j} H_{j,i} V_i^[ICI] V_i^[XCI] s_i  (desired + XCI signals)
    + U_j^H Σ_{l=1, l≠i}^{L=3} H_{j,l} V_l^[ICI] V_l^[XCI] s_l  (ICI = 0, cancelled)  + ñ_j
  = Σ_{i=j−1}^{j} H̄_{j,i} V_i^[XCI] s_i + ñ_j
  = Σ_{i=j−1}^{j} H̄_{j,i} [ V^[i,i]XCI  V^[i,i+1]XCI ] [ c^[i,i] ; c^[i,i+1] ] + ñ_j, ∀j ∈ L, (25)
where H̄_{j,i} = U_j^H H_{j,i} V_i^[ICI] indicates the effective channel matrix, and V^[i,i]XCI and V^[i,i+1]XCI are the Q_i × d^[i,i] and Q_i × d^[i,i+1] blocks responsible for the XCI cancellation of the undesired data c^[m,n]. Then, the user of interest no longer experiences ICI, but only the interference arriving along the desired directions, presented by the c^[j−1,j−1] and c^[j,j+1] data vectors with d^[j−1,j−1] and d^[j,j+1] data streams, respectively. Hence, we
define the minimum and maximum numbers of the interfering data streams as
k_i = min( d^[j−1,j−1], d^[j,j+1] ), (26)
r_i = max( d^[j−1,j−1], d^[j,j+1] ), (27)
and the corresponding difference as
w_i = r_i − k_i. (28)
To be more specific, we consider the received signal at user 1, which can be presented with respect to (18) as
ỹ_1 = H̄_{1,1} [ V^[1,1]XCI  V^[1,2]XCI ] [ c^[1,1] ; c^[1,2] ] + H̄_{1,3} [ V^[3,3]XCI  V^[3,1]XCI ] [ c^[3,3] ; c^[3,1] ] + ñ_1
    = H̄_{1,1} V^[1,1]XCI c^[1,1] + H̄_{1,3} V^[3,1]XCI c^[3,1] + ñ_1  (desired signal)
    + H̄_{1,1} V^[1,2]XCI c^[1,2] + H̄_{1,3} V^[3,3]XCI c^[3,3].  (interference) (29)
Since we assume that user 1 is deployed with N_1 antennas, this number must be sufficient to allocate the desired signal into a subspace separate from the interference, and can then be defined as
N_1 ≥ d^[1] + d_I^[1], (30)
where d_I^[1] is the dimension of the subspace spanned by the interfering c^[1,2] and c^[3,3] vectors present at user 1; the interference itself can be expressed as
I^[1] = Σ_{i=1}^{d^[1,2]} H̄_{1,1} v_i^[1,2] c_i^[1,2] + Σ_{i=1}^{d^[3,3]} H̄_{1,3} v_i^[3,3] c_i^[3,3]. (31)
Therefore, the number of receive antennas must be sufficient to decode the received signal and consequently needs to satisfy the following requirement
N1 ≥ d[1] + d[1,2] + d[3,3] ,
which is not always possible to provide at the receive side due to the physical space limitation.
We define the k_1 and r_1 variables given in (26)–(27) as
k_1 = min( d^[1,2], d^[3,3] ),  r_1 = max( d^[1,2], d^[3,3] ).
Accordingly, the corresponding difference between d^[1,2] and d^[3,3] is given as
w_1 = r_1 − k_1,
where w_1 indicates the number of interfering data streams that cannot be aligned with the other interfering signal; therefore, these w_1 vectors need to be mitigated at the receiver.
To reduce the subspace spanned by I^[1], we make sure that these interfering signals span a one-dimensional space, and determine the k_1 pairs of precoding vectors such that
H̄_{1,1} v_i^[1,2] = −H̄_{1,3} v_i^[3,3], ∀i ∈ {1, 2, . . . , k_1}, (32)
where v_i^[m,n] is the i-th column of the V^[m,n]XCI matrix from (29), and the XCI notation is omitted for brevity. We randomly pick the v_{1,...,k_1}^[1,2] and v_{1,...,k_1}^[3,3] vectors to guarantee that they are linearly independent with probability one. This can be presented as shown in (33), where we assume that r_1 = d^[1,2]:
I^[1] = Σ_{i=1}^{d^[1,2]} H̄_{1,1} v_i^[1,2] c_i^[1,2] + Σ_{i=1}^{d^[3,3]} H̄_{1,3} v_i^[3,3] c_i^[3,3]
      = Σ_{i=1}^{k_1} ( H̄_{1,1} v_i^[1,2] c_i^[1,2] + H̄_{1,3} v_i^[3,3] c_i^[3,3] )  [space dimension = 1]
      + Σ_{l=k_1+1}^{k_1+w_1} H̄_{1,1} v_l^[1,2] c_l^[1,2].  [space dimension = 0] (33)
Since user 1 obtains the c^[1,2] and c^[3,3] data vectors with d^[1,2] and d^[3,3] data streams, respectively, for d^[1,2] ≠ d^[3,3] we then need to define which effective channel matrix, H̄_{1,x}, needs to be cancelled, where x can be derived from
{x, y} = {1, 2}, if r_1 = d^[1,2];  {x, y} = {3, 3}, if r_1 = d^[3,3].
Therefore, we have the w_1 interfering BF vectors, V^[x,y]_{k_1+1:k_1+w_1} = { v^[x,y]_{k_1+1}, . . . , v^[x,y]_{k_1+w_1} }, which are obtained by finding the null space of the corresponding effective channel as follows:
V^[x,y]_{k_1+1:k_1+w_1} = null( H̄_{1,x} ), (34)
where V^[x,y]_{k_1+1:k_1+w_1} denotes the part of the BF matrix ranging from the (k_1+1)-th up to the (k_1+w_1)-th column. This leads to the following condition to be satisfied:
w_1 ≤ (Q_1 − N_1)^+ or w_1 ≤ (Q_3 − N_1)^+. (35)
As a result, the interference observed by user 1 can be mitigated as shown in (37), where we assume
d[1,2] > d[3,3] and the boxed term is redundant, unless d[1,2] < d[3,3] . According to (33), the interference
at the user of interest spans a one-dimensional space, thus we redefine the number of receive antennas
given in (30) as
N_1 = d^[1] + 1. (36)
The interference mitigation can be compactly written as
[ H̄_{1,1}  H̄_{1,3} ] [ v_1^[1,2] · · · v_{k_1}^[1,2]  v_{k_1+1}^[1,2] · · · v_{k_1+w_1}^[1,2] ;  v_1^[3,3] · · · v_{k_1}^[3,3]  0 · · · 0 ] = [ 0 · · · 0 ], (37)
where [H̄_{1,1} H̄_{1,3}] has size N_1 × (Q_1 + Q_3), the stacked precoding matrix has size (Q_1 + Q_3) × (k_1 + w_1), and the zero matrix on the right-hand side has size N_1 × max(d^[1,2], d^[3,3]).
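As an illustration of (32), (34) and (37), the aligned precoder pairs can be read off the null space of the stacked effective channel. A sketch with example dimensions of our own choosing:

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
N1, Q1, Q3 = 4, 6, 6          # illustrative dimensions (ours)
H11 = rng.standard_normal((N1, Q1)) + 1j * rng.standard_normal((N1, Q1))
H13 = rng.standard_normal((N1, Q3)) + 1j * rng.standard_normal((N1, Q3))

# Aligned pairs, Eq. (32): any vector in null([H11 H13]) stacks a pair
# (v^[1,2], v^[3,3]) with H11 v^[1,2] = -H13 v^[3,3], as in (37).
Z = null_space(np.hstack([H11, H13]))    # (Q1+Q3) x (Q1+Q3-N1) basis
k1 = 2                                    # number of aligned pairs
v12, v33 = Z[:Q1, :k1], Z[Q1:, :k1]
assert np.allclose(H11 @ v12, -H13 @ v33)

# Excess vectors, Eq. (34): place the remaining w1 columns of the larger
# message in the null space of the corresponding effective channel.
w1 = 1
V_excess = null_space(H11)[:, :w1]        # exists when Q1 > N1, cf. (35)
assert np.allclose(H11 @ V_excess, 0)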
Finally, if all the conditions above are satisfied, user 1 experiences interference-free data transmission.
Similar to cell 1, we define the conditions that ensure the proposed scheme can be maintained for the rest of the cells.
Below we briefly explain how to derive the receive suppression matrices; the given example will be generalized for user i. With the help of the transmit beamforming matrices, we ensure that the interfering signals coming from BSs j (∀j ∈ L, j ≠ i) are aligned along one shared subspace (for brevity, we denote this effective interference by H̃_i). Having this, we define the space spanned by the interference as
T_i = span{ H̃_i^H U_i }, (38)
where span{·} denotes the subspace spanned by the column vectors of a matrix. Next, we rewrite this equation as
[ I_{M_i}  −H̃_i^H ] [ T_i ; U_i ] = F_i W_i = 0. (39)
Since the size of the matrix F_i is M_i × (M_i + P_i), where P_i = max(d_j, d_k) and d_j and d_k are the numbers of data streams transmitted from BSs j and k with i ∉ {j, k}, the null space always exists because M_i + P_i > M_i, ∀P_i > 0. For more details, the reader can refer to [13].
B. The DoF Region Analysis
With respect to the number of cells L, we define an upper bound on the DoF region. Thus, for the
compounded MIMO BC network scenario with L cells, we have ((L + 2)L + 1) constraints that determine
the upper limit of the DoF region.
While the set D_out describes an outer limit for all achievable d^[i,j] on the compounded L-cell MIMO BC channel, maximization of the sum of d^[i,j] over D_out is a linear programming (LP) problem, in which it is required to explicitly evaluate all extreme points of the feasible space, to calculate the objective values and, finally, to eliminate all redundant limits.
D_out ≜ maximize Σ_{i=1}^{L} Σ_{j=i}^{∗mod_L(i+(L−L̄))} d^[i,j] over d^[i,j] ∈ ℝ_+^(LL̄),
subject to
d_i = Σ_{j=i}^{∗mod_L(i+(L−L̄))} d^[i,j] ≤ Q_i, (40.1)
Σ_{j=i}^{∗mod_L(i+(L−1))} d^[j,i] + 1 = d^[i] + 1 ≤ N_i, (40.2)
D_i^[k] = d^[j] ∪ Q_i ≤ max(N_j, Q_i), j ∈ {i, . . . , ∗mod_L(i + (L − L̄))} (k samples), (40.3)
∀i ∈ L, k ∈ {1, . . . , (L − L̄)},
where
𝐐_t = w_j + N_j, ∀j ∈ L, (40.4)
t = m ← d^[m,n] = r_j in (27), (40.5)
and t and ∪ indicate the first subscript index and the union operation, respectively. The case w_j = 0 implies that at this particular user the numbers of interfering streams are identical. 𝐐 indicates the variable that captures the uncertainty of which Q should be chosen according to (35).
Although at a glance it might seem that the DoF region above is not too different from the one presented in [15], it is worth mentioning that [15] focused on the network case with an identical antenna configuration, leading to identical numbers of data streams causing interference at the receiver side; that is, the authors in [15] considered users of a single user class. Moreover, the algorithm in [15] cannot be applied to the compounded MIMO BC with mixed user classes. Thus, considering different classes of users entails more conditions related to the XCI mitigation, shown by the second term in (33), which spans a zero-dimensional space.
The set Dout provides the conditions that define the outer limit for all the attainable d[j,i] under the
compounded MIMO BC by maximizing a weighted sum of d[j,i] which is a linear programming problem.
At the same time, this set of conditions is suitable either to analyse the achievable DoF region under a
given network antenna configuration or to calculate the minimum number of antennas at each node in
order to obtain the required DoF. Subsequently, we provide the algorithm that allows us to compute the
minimum required number of antennas at each Tx-Rx pair with maximum DoF.
Algorithm 1 Defining the Antenna Configuration
Require:
1: inputs d^[i,j], ∀i, j ∈ L
2: Q_i and N_j from (40.1)–(40.2)
Ensure:
3: for j = 1 : L do
4:   if (40.3) is not valid then
5:     update Q_i ← D_i^[k]
6:   end if
7:   return Q_i.
8: end for
9: for j = 1 : L do
10:  calculate w_j according to (28).
11:  find the value of t with respect to (40.5).
12:  if t = j then
13:    𝐐_j := 𝐐_t
14:    if 𝐐_j > Q_j then
15:      update Q_j ← 𝐐_j
16:    end if
17:  else
18:    𝐐_{j−1} := 𝐐_t
19:    if 𝐐_{j−1} > Q_{j−1} then
20:      update Q_{j−1} ← 𝐐_{j−1}
21:    end if
22:  end if
23:  return Q_{j−1} or Q_j (only if updated).
24:  This derived value will be used in the next iteration.
25: end for
26: utilize Q_i and N_j, ∀i, j ∈ L, to calculate the numbers of transmit antennas as in (21).
The following provides the total number of DoF by explicitly solving the LP problem for the three-cell
compounded MIMO BC case.
Proposition 1:
η ≜ max_{D_out} Σ_{i=1}^{L=3} Σ_{j=i}^{∗mod_L(i+1)} d^[i,j]
 = min { Q_1 + Q_2 + Q_3,  N_1 + N_2 + N_3 − 3,
   max(N_1, Q_1) + max(N_3, Q_2),  max(N_1, Q_3) + max(N_2, Q_2),  max(N_2, Q_1) + max(N_3, Q_3),
   [max(N_1, Q_1) + max(N_1, Q_3) + max(N_3, Q_2) + max(N_2, Q_2)]/2,
   [max(N_1, Q_3) + max(N_2, Q_1) + max(N_3, Q_3) + max(N_2, Q_2)]/2,
   [max(N_1, Q_1) + max(N_2, Q_1) + max(N_3, Q_2) + max(N_3, Q_3)]/2,
   [max(N_1, Q_1) + max(N_1, Q_3) + Q_2 + N_2 + N_3]/2 − 1,
   [max(N_1, Q_1) + max(N_2, Q_1) + Q_2 + Q_3 + N_3 − 1]/2,
   [max(N_1, Q_1) + max(N_3, Q_2) + Q_1 + Q_2 + Q_3]/2,
   [max(N_1, Q_1) + max(N_3, Q_2) + N_1 + N_2 + N_3 − 1]/2 − 1,
   [max(N_1, Q_3) + max(N_2, Q_2) + Q_1 + Q_2 + Q_3]/2,
   [max(N_1, Q_3) + max(N_2, Q_2) + N_1 + N_2 + N_3 − 1]/2 − 1,
   [max(N_1, Q_3) + max(N_3, Q_3) + Q_1 + Q_2 + N_2 − 1]/2,
   [max(N_2, Q_1) + max(N_2, Q_2) + Q_3 + N_1 + N_3]/2 − 1,
   [max(N_2, Q_1) + max(N_3, Q_3) + Q_1 + Q_2 + Q_3]/2,
   [max(N_2, Q_1) + max(N_3, Q_3) + N_1 + N_2 + N_3 − 1]/2 − 1,
   [max(N_2, Q_2) + max(N_3, Q_2) + Q_1 + Q_3 + N_1 − 1]/2,
   [max(N_3, Q_2) + max(N_3, Q_3) + Q_1 + N_1 + N_2]/2 − 1,
   [max(N_1, Q_1) + max(N_1, Q_3) + max(N_2, Q_1)]/3 + [max(N_2, Q_2) + max(N_3, Q_2) + max(N_3, Q_3)]/3 }. (41)
Outline of the Proof: Proposition 1 can be verified by solving, via linear programming, the dual of the problem
max d^[1,1] + d^[1,2] + d^[2,2] + d^[2,3] + d^[3,3] + d^[3,1].
Since all the extreme points of the feasible space can be directly evaluated, we compute the objective value at these points and eliminate the limits that can be regarded as redundant. Using the fundamental theorem of LP [46], [47], we find the solution. For the sake of brevity, the derivation details are omitted.
It is worth noting that all the terms given in Proposition 1 are essential because any of them is valid
for a certain antenna configuration.
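To make the LP step concrete, a small sketch using scipy; the variable names and numeric values are ours, and constraints (40.3)–(40.5) are omitted for brevity:

import numpy as np
from scipy.optimize import linprog

# stream variables: [d11, d12, d22, d23, d33, d31] in d^[i,j] notation
Q = np.array([6, 6, 6])   # Q_i from Eq. (21); example values
N = np.array([3, 4, 5])   # receive antennas, as in Section IV

A_ub, b_ub = [], []
for i in range(3):        # (40.1): per-BS stream budget
    row = np.zeros(6); row[2 * i] = row[2 * i + 1] = 1
    A_ub.append(row); b_ub.append(Q[i])
for j in range(3):        # (40.2): per-user budget, d^[j] + 1 <= N_j
    row = np.zeros(6)
    row[2 * j] = 1                   # d[j,j]
    row[2 * ((j - 1) % 3) + 1] = 1   # d[j-1,j]
    A_ub.append(row); b_ub.append(N[j] - 1)

res = linprog(c=-np.ones(6), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, None)] * 6, method="highs")
print(-res.fun, res.x)    # LP relaxation of the total DoF eta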
We define the achievable DoF for our multi-cell network as the pre-log factor of the sum rate [4], [48]. This is one of the key metrics for assessing the performance of a multiple-antenna system in the high-SNR region, defined as
η = lim_{SNR→∞} I_P(SNR) / log_2(SNR) = Σ_{j=1}^{L} d^[j], (42)
where I_P(SNR) denotes the sum rate that can be achieved at a given SNR, defined as I_P(SNR) = Σ_{j=1}^{L} I_j, and I_j and d^[j] are the data rate and the number of successfully decoded data streams at user j, respectively.
Therefore, the received signal at user j, with respect to the considered channel model in (15), can be rewritten as
ỹ_j = U_j^H Σ_{i=1}^{L=3} H_{j,i} V_i s_i + ñ_j
    = U_j^H Σ_{i=1, i≠j+1}^{L=3} H_{j,i} V_i s_i  (desired signal + XCI)  + U_j^H H_{j,j+1} V_{j+1} s_{j+1}  (ICI)  + ñ_j
    = U_j^H Σ_{i=1, i≠j+1}^{L=3} ( H̃_{j,i} + Ẽ_{j,i} ) V_i s_i + U_j^H ( H̃_{j,j+1} + Ẽ_{j,j+1} ) V_{j+1} s_{j+1} + ñ_j
    = U_j^H Σ_{i=j−1}^{j} H̃_{j,i} V^[i,j] c^[i,j]  (desired signal)
    + U_j^H Σ_{i=1}^{L=3} Ẽ_{j,i} V_i s_i  (CSI mismatch)
    + U_j^H Σ_{i=j−1}^{j} H̃_{j,i} V^[i,l{l≠j}] c^[i,l{l≠j}]  (XCI)
    + U_j^H H̃_{j,j+1} V_{j+1} s_{j+1}  (ICI = 0, cancelled)  + ñ_j, ∀j ∈ L. (43)
With this in mind, we determine the data rate achievable at the user of interest as shown in (44), where
Jj in (45) indicates the interference terms in (43).
IV. SIMULATION RESULTS
In this section, we present our results using Monte Carlo simulations with 10^5 trials to investigate the impact of antenna correlation and CSI mismatch on the network performance. The simulations assume Gaussian modulation and frequency-flat fading designed according to (9). The users are distributed as described in the system model. To make a fair comparison, the total transmit power at each BS is constrained to unity irrespective of the number of transmit antennas. To create different classes of users with various numbers of antennas, we assume that BS i transmits i data streams to the corresponding user i, and only one data stream to the collateral user of interest. Therefore, we have three receivers deployed with three, four and five antennas, respectively.
According to [51], we examine three correlation regimes, namely low, medium and high correlation. The low correlation mode indicates the scenario with no correlation, assuming sufficient spacing (≥ λ/2) between the antennas. According to (15), the medium and high correlation regimes at the transmitter side are modeled by r = 0.3 and r = 0.9, respectively, while the same modes at the receiver side are represented by r = 0.9.
In Fig. 3, we evaluate the cumulative distribution function (CDF) of the achievable data rate for every
user in the assumed system model. The data rates are calculated using the proposed scheme under different
antenna correlation modes. For the sake of clarity, we consider the average data rate achievable by the
network. The observation point is 30 dB. We deploy transmitters with ten, thirty and fifty antennas to
estimate the potential benefit attainable from the LS-MIMO scenario. As we can see, for the case of low
correlation, the probability of attaining a higher data rate increases as the number of antennas at the BS
goes up. Regarding the medium correlation case, we observe severe degradation in the achievable data
rate, and the various numbers of transmit antennas do not seem to make much difference; however, a different antenna deployment still matters, as shown in the inset figure (see Fig. 3, where lines from top to bottom refer to the cases when the transmitter is equipped with 10, 30 and 50 antennas, respectively).
Fig. 3. CDF curves with low (black), medium (red) and high (blue) correlations for different numbers of transmit antennas at 30 dB for the case of perfect CSI (when α → ∞).
I_Σ = Σ_{j=1}^{L=3} I_j = Σ_{j=1}^{L=3} log_2 det( I + ( Σ_{i=j−1}^{j} U_j^H H̃_{j,i} V^[i,j] V^[i,j]H H̃_{j,i}^H U_j ) ( J_j + σ_ñ² I )^(−1) ), (44)
where J_j = Σ_{i=j−1}^{j} U_j^H H̃_{j,i} V^[i,l{l≠j}] V^[i,l{l≠j}]H H̃_{j,i}^H U_j + Σ_{k=1}^{L=3} U_j^H Ẽ_{j,k} V_k V_k^H Ẽ_{j,k}^H U_j. (45)
Fig. 4. CDF curves with low (black), medium (red) and high (blue) correlations for different CSI mismatch scenarios with M = 50 antennas per BS at 30 dB.
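A schematic routine for evaluating (44)–(45); this is our own sketch, and all inputs are assumed to come from the IA design described above:

import numpy as np

def sum_rate(U, Ht, Vd, Vx, Et, V, sigma2):
    """Evaluate Eq. (44) for L = 3 users. For user j: Ht[(j,i)] are the
    effective channels H~_{j,i}, Vd[i]/Vx[i] the desired/XCI precoder parts,
    Et[(j,k)] the error terms E~_{j,k}, V[k] the full precoders."""
    L, total = 3, 0.0
    for j in range(L):
        links = [(j - 1) % L, j]
        S = sum(U[j].conj().T @ Ht[(j, i)] @ Vd[i] @ Vd[i].conj().T
                @ Ht[(j, i)].conj().T @ U[j] for i in links)
        J = sum(U[j].conj().T @ Ht[(j, i)] @ Vx[i] @ Vx[i].conj().T
                @ Ht[(j, i)].conj().T @ U[j] for i in links)
        J += sum(U[j].conj().T @ Et[(j, k)] @ V[k] @ V[k].conj().T
                 @ Et[(j, k)].conj().T @ U[j] for k in range(L))
        dU = U[j].shape[1]
        total += np.log2(np.linalg.det(
            np.eye(dU) + S @ np.linalg.inv(J + sigma2 * np.eye(dU))).real)
    return total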
Finally, we consider the high correlation mode, presented by the blue lines, which produces a significant loss in the achievable data rate. For M = 10, M = 30 and M = 50 antennas deployed at the transmitter side, average data rates of 8.8, 11.4 and 12.75 bits/s/Hz are achievable with a probability of 90%. It is worth noting that the deployment of more antennas allows us to overcome the high correlation, and accordingly, the system can be treated as though it were experiencing medium correlation. With this in mind, in the next simulation, we consider only the network case where all the BSs are equipped with fifty antennas.
Next, we investigate the combined effect of CSI mismatch and antenna correlation on the achievable data rates. The observation point is 30 dB. As shown in Fig. 4, the case of (α = 1.5, β = 15) performs worse than the other two CSI acquisition scenarios. The CSI mismatch cases given by (α = 0.75, β = 10) and (α = 0, β = 0.05) act in a similar way under the medium and high correlation regimes; however, for low correlation, the network with (α = 0, β = 0.05) slightly outperforms the one modeled by (α = 0.75, β = 10). This result leads to the realization that the SNR-independent and SNR-dependent CSI acquisition scenarios, (α = 0, β = 0.05) and (α = 0.75, β = 10), do not differ from each other in the cases of medium and high antenna correlation.
It is worth mentioning that the results provided in Figs. 4 and 5 also give a representative insight into how many DoF can be achieved under the considered network scenario. Eq. (42) can be modified to calculate the average number of DoF as
DoF = I(SNR) / log_2(SNR), (46)
where the averaged sum rate values are scaled by the SNR value (the observation point is chosen to be 30 dB).
Fig. 5. The achievable data rate as a function of the correlation coefficient and number of transmit antennas.
Finally, we provide a 3D plot in Fig. 5 presenting the data rate achievable at 30 dB as a function of the
number of transmit antennas and correlation coefficient. The correlation at the transmitter side is assumed
to be high and modeled by using the exponential model as in (16) (r = 0.9). Accordingly, the antenna
correlation at the receiver side is given as in (17) and simulated within the range [0, 1]. As we can see, the
achievable data rate increases as the number of transmit antennas goes up, while the data rate decreases
as the correlation coefficient increases.
V. CONCLUSION
In this work, we considered the compounded MIMO BC network scenario and proposed a generalized transmit BF design that accounts for different user classes. We analysed the feasibility conditions of IA and presented the DoF region analysis, which were then utilized in the design of an algorithm to define the minimum antenna configuration required to achieve a given number of data streams in the network. Moreover, we investigated the impact of spatial antenna correlation under various CSI acquisition scenarios. Finally, the proposed scheme was examined in traditional and LS-MIMO systems under different channel scenarios. The performance obtained for the latter case indicates that deploying more antennas makes it possible to overcome the impact of high correlation by careful manipulation of the antenna array.
R EFERENCES
[1] V. R. Cadambe and S. A. Jafar, “Interference alignment and the degrees of freedom of the K-user interference channel,” IEEE Trans.
Inf. Theory, vol. 54, no. 8, pp. 3425–3441, Aug. 2008.
[2] C. Yetis, T. Gou, S. A. Jafar, and A. Kayran, “On feasibility of interference alignment in MIMO interference networks,” IEEE Trans.
Signal Process., vol. 58, no. 9, pp. 4771–4782, Sept. 2010.
[3] G. Bresler, D. Cartwright, and D. Tse, “Feasibility of interference alignment for the MIMO interference channel,” IEEE Trans. Inf.
Theory, vol. 60, no. 9, pp. 5573–5586, Sept. 2014.
[4] T. Gou and S. A. Jafar, “Degrees of freedom of the K-user M × N MIMO interference channel,” IEEE Trans. Inf. Theory, vol. 56, no.
12, pp. 6040–6057, Dec. 2010.
[5] M. Maddah-Ali, A. Motahari, and A. Khandani, “Signaling over MIMO multibase systems – combination of multi-access and broadcast
schemes,” in IEEE Int. Symp. Inf. Theory (IEEE ISIT), pp. 2104–2108, July 2006.
[6] S. Jafar and M. Fakhereddin, “Degrees of freedom for the MIMO interference channel,” IEEE Trans. Inf. Theory, vol. 53, no. 7, pp.
2637–2642, July 2007.
[7] M. Maddah-Ali, A. Motahari, and A. Khandani, “Communication over X channel: Signalling and multiplexing gain,” in Technical
Report. UW–ECE–2006–12, University of Waterloo, July 2006.
[8] S. A. Jafar and S. Shamai, “Degrees of freedom region for the MIMO X channel,” IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 151–170,
Jan. 2008.
[9] V. R. Cadambe and S. A. Jafar, “Interference alignment and the degrees of freedom of wireless X networks,” IEEE Trans. Inf. Theory, vol. 55, no. 9, pp. 3893–3908, Sept. 2009.
[10] C. Suh and D. Tse, “Interference alignment for cellular networks,” in An. Allerton Conf. Commun., Control, Comput., pp. 1037–1044,
Sept. 2008.
[11] J. Kim, S. H. Park, H. Sung, and I. Lee, “Sum rate analysis of two-cell MIMO broadcast channels: spatial multiplexing gain,” in IEEE
Int. Conf. Commun. (IEEE ICC), pp. 1–5, May 2010.
[12] W. Shin, N. Lee, J. B. Lim, C. Shin, and K. Jang, “On the design of interference alignment scheme for two-cell MIMO interfering
broadcast channels,” IEEE Trans. Wireless Commun., vol. 10, no. 2, pp. 437–442, Feb. 2011.
[13] J. Tang and S. Lambotharan, “Interference alignment techniques for MIMO multi-cell interfering broadcast channels,” IEEE Trans.
Commun., vol. 61, no. 1, pp. 164–175, Jan. 2013.
[14] G. Nauryzbayev and E. Alsusa, “Interference Alignment Cancellation in Compounded MIMO Broadcast Channels with General Message
Sets,” IEEE Trans. Commun., vol. 63, no. 10, pp. 3702–3712, Oct. 2015.
[15] G. Nauryzbayev and E. Alsusa, “Enhanced Multiplexing Gain using Interference Alignment Cancellation in Multi-cell MIMO
Networks,” IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 357–369, Jan. 2016.
[16] S. Arzykulov, G. Nauryzbayev, T. Tsiftsis, and M. Abdallah, “Error Performance of Wireless Powered Cognitive Relay Networks with
Interference Alignment,” in Proc. IEEE Int. Symp. Personal, Indoor, and Mobile Radio Commun. (IEEE PIMRC), pp. 1-6, Montreal,
Canada, Oct. 2017.
[17] S. Arzykulov, G. Nauryzbayev, T. Tsiftsis, and M. Abdallah, “On the Capacity of Wireless Powered Cognitive Relay Network with
Interference Alignment,” in Proc. IEEE Global Commun. Conf. (IEEE GLOBECOM), pp. 1-6, Singapore, Dec. 2017.
[18] G. Nauryzbayev and E. Alsusa, “Identifying the Maximum DoF Region in the Three-cell Compounded MIMO Network,” in IEEE
Wireless Commun. Netw. Conf. (IEEE WCNC), pp. 1-5, Doha, Qatar, Apr. 2016.
[19] G. Nauryzbayev, E. Alsusa, and J. Tang, “An Alignment Based Interference Cancellation Scheme for Multi-cell MIMO Networks,” in
IEEE Veh. Technol. Conf. (IEEE VTC-Spring), pp. 1-5, Glasgow, UK, 2015.
[20] G. Nauryzbayev, S. Arzykulov, E. Alsusa, and T. Tsiftsis, “An Alignment-based Interference Cancellation Scheme for Network-MIMO Systems,” in Int. Conf. Signal Process. Commun. Syst. (ICSPCS), pp. 1-5, Australia, Dec. 2016.
[21] G. Nauryzbayev, S. Arzykulov, E. Alsusa, and T. Tsiftsis, “A Closed-form Solution to Implement Interference Alignment and
Cancellation Scheme for the MIMO three-user X-channel Model,” in Int. Conf. Signal Process. Commun. Syst. (ICSPCS), pp. 1-6,
Australia, Dec. 2016.
[22] J.J. Alcaraz and M. van der Schaar, “Coalitional Games With Intervention: Application to Spectrum Leasing in Cognitive Radio,” IEEE
Trans. Wireless Commun., vol. 13, no. 11, pp. 6166–6179, Nov. 2014.
[23] D. Zhao, “Throughput Fairness in Infrastructure-Based IEEE 802.11 Mesh Networks,” IEEE Trans. Veh. Technol., vol. 56, no. 5, pp.
3210–3219, Sept. 2007.
[24] K. Zheng, L. Zhao, W. Xiang, and L. Hanzo, “A survey of Large-Scale MIMO systems,” IEEE Commun. Surveys Tuts, vol. 17, no. 3,
pp. 1738–1760, Third Quarter 2015.
[25] V. R. Cadambe, S. A. Jafar, and S. Shamai, “Interference alignment on the deterministic channel and application to fully connected
AWGN interference networks,” IEEE Trans. Inf. Theory, vol. 55, no. 1, pp. 269–274, Jan. 2009.
[26] B. Nosrat-Makouei, J. G. Andrews, and R. W. Heath, “MIMO interference alignment over correlated channels with imperfect CSI,” IEEE
Trans. Signal Process., vol. 59, no. 6, pp. 2783–2794, Jun. 2011.
[27] O. El Ayach and R. W. Heath, “Interference alignment with analog channel state feedback,” IEEE Trans. Wireless Commun., vol. 11,
no. 2, pp. 626–636, Feb. 2012.
[28] R. Tresch and M. Guillaud, “Cellular interference alignment with imperfect channel knowledge,” in IEEE Int. Conf. Commun. (IEEE
ICC), pp. 1–5, 2009.
[29] S. Razavi and T. Ratnarajah, “Performance analysis of interference alignment under CSI mismatch,” IEEE Trans. Veh. Technol., vol.
63, no. 9, pp. 4740–4748, Nov. 2014.
[30] S. Razavi and T. Ratnarajah, “Adaptive LS- and MMSE-based beamformer design for multiuser MIMO interference channels,” IEEE
Trans. Veh. Technol., vol. 65, no. 1, pp. 132–144, Jan. 2016.
[31] P. Aquilina and T. Ratnarajah, “Performance analysis of IA techniques in the MIMO IBC with imperfect CSI,” IEEE Trans. Commun.,
vol. 63, no. 4, pp. 1259–1270, Apr. 2015.
[32] S. Kay, “Fundamentals of statistical signal processing: Estimation theory,” Englewood Cliffs, NJ: Prentice-Hall, 1993.
[33] J. Kotecha and A. Sayeed, “Transmit signal design for optimal estimation of correlated MIMO channels,” IEEE Trans. Signal Process.,
vol. 52, no. 2, pp. 546–557, Feb. 2004.
[34] V. Raghavan, J. Kotecha and A. Sayeed, “Why does the Kronecker model result in misleading capacity estimates?,” IEEE Trans. Inf.
Theory, vol. 56, no. 10, pp. 4843–4864, Oct. 2010.
[35] J. Choi and D. Love, “Bounds on eigenvalues of a spatial correlation matrix,” IEEE Commun. Lett., vol. 18, no. 8, pp. 1391–1394,
Aug. 2014.
[36] C. Oestges, “Validity of the Kronecker model for MIMO correlated channels,” in IEEE Veh. Technol. Conf. (IEEE VTC), pp. 2818–2822,
May 2006.
[37] S. Loyka, “Channel capacity of MIMO architecture using the exponential correlation matrix,” IEEE Commun. Lett., vol. 5, no. 9, pp.
369–371, Sept. 2001.
[38] S. Wyne, A. Molisch, P. Almers, G. Eriksson, J. Karedal, and F. Tufvesson, “Outdoor-to-indoor office MIMO measurements and
analysis at 5.2 GHz,” IEEE Trans. Veh. Technol., vol. 57, no. 3, pp. 1374–1386, May 2008.
[39] H. Tong and S. Zekavat, “On the suitable environments of the Kronecker product form in MIMO channel modeling,” in IEEE Wireless
Commun. Netw. Conf. (IEEE WCNC), pp. 780–784, Mar.–Apr. 2008.
[40] R. Ertel, P. Cardieri, K. Sowerby, T. Rappaport, and J. Reed, “Overview of spatial channel models for antenna array communication
systems,” IEEE Person. Commun., vol. 5, no. 1, pp. 10–22, Feb. 1998.
[41] D. Shiu, G. Foschini, M. Gans, and J. Kahn, “Fading correlation and its effect on the capacity of multi-element antenna systems,” IEEE
Trans. Commun., vol. 48, no. 3, pp. 502–513, Mar. 2000.
[42] D. Gesbert, H. Bolcskei, D. Gore and A. Paulraj, “Outdoor MIMO wireless channels: Models and performance prediction,” IEEE
Trans. Commun., vol. 50, no. 12, pp. 1926–1934, Dec. 2002.
[43] C. Masouros, M. Sellathurai and T. Ratnarajah, “Large-scale MIMO transmitters in fixed physical spaces: The effect of transmit
correlation and mutual coupling,” IEEE Trans. Commun., vol. 61, no. 7, pp. 2794–2804, July 2013.
[44] D. Chizhik, J. Ling, P. Wolniansky, R. Valenzuela, N. Costa and K. Huber, “Multiple-input multiple-output measurements and modeling in Manhattan,” IEEE J. Sel. Areas Commun., vol. 21, no. 3, pp. 321–331, Apr. 2003.
[45] S. Salous, “Radio propagation measurement and channel modelling,” ch. 3.12.2, pp. 122–123, Hoboken, NJ: Wiley, 2013.
[46] C. Wang, K. Ren and J. Wang, “Secure optimization computation outsourcing in cloud computing: A case study of linear programming,” IEEE Trans. Comput., vol. 65, no. 1, pp. 216–229, Jan. 2016.
[47] D. Luenberger and Y. Ye, “Linear and nonlinear programming,” 3rd ed., Springer, 2008.
[48] G. Caire and S. Shamai, “On the achievable throughput of a multi-antenna Gaussian broadcast channel,” IEEE Trans. Inf. Theory, vol.
49, no. 7, pp. 1691–1706, July 2003.
[49] M. Holmes, A. Gray and C. Isbell, “Fast SVD for large scale matrices,” College of Computing, Georgia Institute of Technology, pp.
1–2, Atlanta, GA, 2007.
[50] W. Wai, C. Tsui and R. Cheng, “A low complexity architecture of the V-BLAST system,” in IEEE Wireless Commun. Netw. Conf.
(IEEE WCNC), pp. 310–314, Sept. 2000.
[51] “User equipment (UE) radio transmission and reception.” 3rd Generation Partnership Project; Technical specification group radio access
network; Evolved Universal Terrestrial Radio Access (E-UTRA). TS 36.101, V13.0.0, 2015.
On Stochastic Orders and Fast Fading Multiuser Channels with Statistical CSIT
Pin-Hsun Lin, Member, IEEE, Eduard A. Jorswieck, Senior Member, IEEE, Rafael F. Schaefer, Senior Member, IEEE, Martin Mittelbach, and Carsten R. Janda, Student Member, IEEE
Abstract
In this paper, we investigate the ergodic capacity of fast fading Gaussian multiuser channels when only the
statistics of the channel state are known at the transmitter. In general, the characterization of capacity regions
of multiuser channels with only statistical channel state information at the transmitter (CSIT) is open. Instead of
directly matching achievable rate regions and the corresponding outer bounds, in this work we resort to classifying
the random channels through their probability distributions. To be more precise, in order to attain capacity results,
we first derive sufficient conditions to attain some information theoretic channel orders such as degraded and
strong/very strong interference by applying the usual stochastic order and exploiting the same marginal property
such that the capacity regions of the memoryless Gaussian multiuser channels can be characterized. These include
Gaussian interference channels, Gaussian broadcast channels, and Gaussian wiretap channels/secret key generation.
We also extend the framework to channels with a specific memory structure, namely, finite-state channels,
wherein the Markov fading channel is discussed as a special case. Several practical examples such as Rayleigh
fading and Nakagami-m fading, etc., illustrate the application of the derived results.
Index Terms
Stochastic orders, Same marginal property, Imperfect CSIT, Coupling, Maximal coupling, Copula, Multi-user
channels, Capacity regions, Channel with memory
Parts of the work were presented in ITW 2016, Cambridge, UK and SCC 2017, Hamburg, Germany. Pin-Hsun Lin, Eduard A. Jorswieck,
Martin Mittelbach and Carsten R. Janda are with the Communications Laboratory, Department of Electrical Engineering and Information
Technology, Technische Universität Dresden, Germany. Rafael F. Schaefer is with the Information Theory and Applications Chair, Technische
Universität Berlin, Germany Emails: {pin-hsun.lin, eduard.jorswieck, martin.mittelbach, carsten.janda}@tu-dresden.de, rafael.schaefer@tuberlin.de. Part of this work is funded by FastCloud 03ZZ0517A and FastSecure 03ZZ0522A.
I. INTRODUCTION
For Gaussian multiuser (GMU) channels with perfect channel state information at the transmitter (CSIT),
due to the capability of ordering the channels of different users, capacity regions/secrecy capacity are
known for the degraded broadcast channel (BC) [1] [2] and wiretap channel (WTC) [3], [4], [5], and also
the sum capacity of low-interference regime [6] and the capacity region for some cases of interference
channel (IC) such as strong IC [7], [8], [9] and very strong IC [8]. When fading effects of wireless
channels are taken into account, if there is perfect CSIT, some of the above capacity results still hold with
an additional operation of taking the average with respect to the fading channels. For example, in [10], the
ergodic secrecy capacity of Gaussian WTC is derived; in [11], the ergodic capacity regions are derived for
ergodic very strong and uniformly strong (each realization of the fading process is a strong interference
channel) Gaussian IC. Because of limited feedback bandwidth and the delay caused by channel estimation,
the transmitter may not be able to track channel realizations instantaneously if they vary rapidly. Thus,
for fast fading channels, it is more practical to consider the case with only partial CSIT of the legitimate
channel. However, when there is only statistical CSIT, there are only few known capacity results, such as
the layered BC [12], the binary fading interference channel [13], the one-sided layered IC [14], Gaussian
WTC [15], [16], and layered WTC [16], etc. The main difficulty is that, without instantaneous CSIT, we may not be able to directly compare the channels in a simple manner.
In particular, channel orders including degraded, less noisy, and more capable [17], [18] in Gaussian BC
and Gaussian WTC or the strong and very strong in IC depend on the knowledge of CSIT. Note that
these channel orders can help to simplify the functional optimization with respect to the channel input
distribution and/or channel prefixing.
To consider a GMU-channel in which the transmitters only know the distributions of the channels
but not the realizations, some immediate questions are: How to compare channel qualities only by their
distributions? How to derive the capacity region by exploiting such comparison of channel qualities? From
an information-theoretic point of view, to deal with this problem, a better achievable scheme and tighter
outer bound would have to be developed. In contrast, in this work we resort to identifying whether the random channels are stochastically orderable, in order to find the capacity region. In particular, a GMU channel with orderable random channels means that there exists an equivalent GMU channel in which we can reorder channel realizations among different transmitter-receiver pairs in the desired manner¹. Taking the Gaussian
BC (GBC) as an example, an orderable two-user GBC means that under the same noise distributions at
the two receivers, in the equivalent GBC, one channel strength is always stronger or weaker than the
other for all realizations within a codeword length. We attain this goal mainly by the following elements:
stochastic orders [19], coupling [20], and the same marginal property [21]. The stochastic orders have
been widely used in the last several decades in diverse areas of probability and statistics such as reliability
theory, queueing theory, and operations research, etc., see [19] and references therein. Different stochastic
orders such as the usual stochastic order, the convex order, and the increasing convex order can help us
to identify the location, dispersion, or both location and dispersion of random variables, respectively [19].
Choosing a proper stochastic order to compare the unknown channels allows us to form an equivalent
channel in which realizations of channel gains are ordered in the desired manner. Then, we are able to
derive the capacity regions of the equivalent MU channel, which is simpler than directly considering the
original channel.
We also investigate the applicability of the developed framework to MU channels with memory, e.g.,
the finite-state BC (FSBC) [22]. In the FSBC model, the channels from the transmitter to the receivers
are governed by a state sequence that depends on the channel input, outputs, and previous states, which
are described by the transition function. Some examples for FSC include multiple access channels [23],
degraded BC [22], etc. More detailed discussion on FSC without and with feedback can be found in [22],
[24], respectively, and references therein.
The main contributions of this paper, which are all novel in the literature to the best of our knowledge,
are as follows:
¹The desired manner is related to the intrinsic property of the channel to be analyzed, which will be shown later.
• We use maximal coupling to illustrate the insight of the proposed framework, and we use coupling to derive sufficient conditions to attain the capacity results. In addition to the coupling scheme, we also identify that an alternative explicit construction for deriving the sufficient conditions is in fact a copula.
• By coupling, we classify memoryless fast fading GMU channels such that we can obtain their capacity results under statistical CSIT. To attain this goal, we integrate the concepts of the usual stochastic order and the same marginal property, i.e., an intrinsic property of MU channels without receiver cooperation, such that we can align the realizations of the fading channel gains between different users in an identical trichotomy order over time to obtain an equivalent channel.
• We then connect the trichotomy order of channel strengths in the equivalent channels with different information theoretic orders to characterize the capacity regions of the Gaussian IC (GIC), GBC, and GWTC, and also the secret key capacity of secret key generation with a Gaussian-Maurer satellite model [25].
• We further extend the framework to time-varying channels with memory. In particular, we consider the finite-state channel (FSC) model introduced by [26]. We use the finite-state BC (FSBC) [22] as an example. The Markov fading channel, which is commonly used to model the memory effect in wireless channels, is also considered in this paper.
• Several examples with practical channel distributions illustrate the usage scenarios of the developed framework.
Notation: Upper case normal/bold letters denote random variables/random vectors (or matrices), which
will be defined when they are first mentioned; lower case bold letters denote vectors. The expectation is
denoted by E[.]. We denote the probability mass function (PMF) and probability density function (PDF) of
a random variable X by PX and fX , respectively. The cumulative distribution function (CDF) is denoted by
FX (x), where F̄X (x) = 1 − FX (x) is the complementary CDF (CCDF) of X. X ∼ F denotes that the random
variable X follows the distribution with CDF F. The mutual information between two random variables X
and Y is denoted by I(X;Y ) while the conditional mutual information given Z is denoted by I(X;Y |Z). The
differential and conditional differential entropies are denoted by h(·) and h(·|·), respectively. A Markov
chain relation between X, Y , and Z is described by X −Y − Z. Unif(a, b) denotes the uniform distribution
between a and b, and N_0 ≜ {0} ∪ N. The Bernoulli distribution with probability p is denoted by Bern(p). The indicator function is denoted by 1_{·}, e.g., 1_{(1)} = 1 means the expression in (1) is valid. The support of a random variable X is denoted by either supp(X) or supp(f_X). The null set is denoted by ∅. The logarithms used in the paper are all with respect to base 2. We define C(x) ≜ (1/2) log(1 + x). We denote equality in distribution by =_d. The convolution of functions f and g is denoted by f ∗ g. Calligraphic capital letters denote sets. The convex hull of a set A is denoted by co(A).
The remainder of the paper is organized as follows. In Section II, we introduce our motivation and
important preliminaries. In Section III, we develop our framework from maximal coupling, coupling, and
copulas. In Section IV we apply the developed framework to fast fading Gaussian interference channels,
broadcast channels, wiretap channels, and source model secret key generation with statistical CSIT. In
Section V we discuss the finite state broadcast channel as an example of applying the developed framework
for channels with memory. Finally, Section VI concludes the paper.
II. MOTIVATION AND PRELIMINARIES
In this section, we first explain our motivation and then review several key properties and definitions
including the same marginal property, degradedness, and the usual stochastic orders, which are crucial
for deriving the main results of this work. We assume that each node in the considered MU channel is
equipped with a single antenna.
A. Motivation
With perfect CSIT, the transmitter can easily compare the strengths of instantaneous channels among
time and/or users and find the optimal strategy for the transmission [18], including the designs of the
codebook, channel input distribution, resource allocation, etc., simply according to the trichotomy law. In
contrast, when the transmitter has imperfect CSIT, e.g., only the statistics of the channels, the comparison is non-trivial due to the random strength of the fading channels. In the following, we use two simple examples, i.e., fast fading channels with additive white Gaussian noise (AWGN), to illustrate the difficulty of comparing channels in such a scenario. For simplicity, we consider a two-receiver case where we assume, without loss of generality, that the noise variances at different receivers are identical. Denote the squares of the magnitudes of the two real random channels by H1 and H2 with PDF's f1 and f2, respectively. In Fig. 1 (a), the supports of f1 and f2 are non-overlapping. Therefore, even when there is only statistical CSIT, we still know that the second channel is always stronger than the first channel. However, due to the lack of CSI knowledge, the capacity results degrade. In contrast, in Fig. 1 (b),
the supports of the two channels are overlapping. Intuitively, the transmitter is not able to distinguish
the stronger channel just based on the channel statistics. This is because, due to the overlapping part
of the PDF’s, the trichotomy order of the channel realizations H1 = h1 and H2 = h2 may alter over
time. As an example, from Fig. 1 (b) we may have two vectors of absolute square of channel gains
as h 1 = [3.1, 2.3, 0.7, 8.5, · · · ] and h 2 = [1.6, 4.9, 2.8, 3.1, · · · ], where each entry is independently and
identically generated by the corresponding distributions at a specific sample time. It can be easily seen
that there is no fixed trichotomy order between h1 and h2 within the whole codeword length. Therefore,
we may not be able to claim which channel is better in this scenario. Based on the above observation,
we would like to ask the following question: can the transmitter compare the strengths of channels to
different receivers, only based on the knowledge of the distributions of the fading channels in general,
just like the comparison of f1 and f2 in Fig. 1(a)? In addition, can we extend the scenarios of channel comparison from the easy case of Fig. 1(a) to that of Fig. 1(b)? In the following, we partly answer this question.
B. Preliminaries
In the following, we review important background knowledge that will be necessary in the following.
1) The Same Marginal Property: The same marginal property is crucial in the sense that it provides us
the degree of freedom to construct an equivalent channel in which the realizations of all random channel
Fig. 1. Two examples of relations between two fading channels.
tuples are aligned in the desired manner. For channels with multiple transmitters and multiple receivers,
e.g., the interference channel, we can extend from the following building block:
Theorem 1 (The Same Marginal Property for Two Transmitters [27, Theorem 16.6]). The capacity
region of a discrete memoryless multiuser channel including two transmitters and two non-cooperative
receivers with input and output alphabets X1 × X2 and Y1 × Y2 , respectively, depends only on the conditional marginal distributions PY1 |X1 ,X2 and PY2 |X1 ,X2 and not on the joint conditional distribution PY1 ,Y2 |X1 ,X2 ,
where (X1 , X2 ) ∈ X1 × X2 and (Y1 , Y2 ) ∈ Y1 × Y2 are the transmit and receive signal pairs, respectively.
For channels with a single transmitter and multiple receivers, e.g., BC or WTC, we can specialize from
Theorem 1 as follows by removing X1 or X2 .
Corollary 1 (The Same Marginal Property for One Transmitter [27, Theorem 13.9]). The capacity
region of a discrete memoryless multiuser channel including one transmitter and two non-cooperative
receivers with input and output alphabets X and Y1 × Y2 , respectively, depends only on the conditional
marginal distributions PY1 |X and PY2 |X and not on the joint conditional distribution PY1 ,Y2 |X , where X ∈ X
and (Y1 , Y2 ) ∈ Y1 × Y2 are the transmit signal and receive signal pair, respectively.
2) Information-Theoretic Orders for Memoryless Channels and Stochastic Orders: Based on the same
marginal property, we introduce the following definitions describing the relation of reception qualities
among different receivers.
Definition 1. A discrete memoryless channel with two non-cooperative receivers and one transmitter is
physically degraded if the transition probability satisfies PY1Y2 |X (y1 , y2 |x) = PY1 |X (y1 |x)PY2 |Y1 (y2 |y1 ), for all
x ∈ X , y1 ∈ Y1 , y2 ∈ Y2 , i.e., X, Y1 , and Y2 form a Markov chain X −Y1 −Y2 . The channel is stochastically
degraded if its conditional marginal distribution is the same as that of a physically degraded channel,
i.e.,
∃ P̃_{Y2|Y1}(y2|y1) : P_{Y2|X}(y2|x) = Σ_{y1} P_{Y1|X}(y1|x) P̃_{Y2|Y1}(y2|y1), ∀ x, y1, y2. (1)
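For finite alphabets, condition (1) can be checked constructively: the existence of P̃ is a linear feasibility problem. A sketch in our own formulation:

import numpy as np
from scipy.optimize import linprog

def is_stochastically_degraded(P1, P2):
    """Check (1): does a row-stochastic Pt (|Y1| x |Y2|) with P2 = P1 @ Pt
    exist? P1: |X| x |Y1| matrix of P_{Y1|X}; P2: |X| x |Y2| of P_{Y2|X}."""
    nx, n1 = P1.shape
    n2 = P2.shape[1]
    nvar = n1 * n2                   # unknowns: Pt flattened row-major
    A_eq, b_eq = [], []
    for x in range(nx):              # equality P1 @ Pt = P2, entrywise
        for y2 in range(n2):
            row = np.zeros(nvar)
            for y1 in range(n1):
                row[y1 * n2 + y2] = P1[x, y1]
            A_eq.append(row); b_eq.append(P2[x, y2])
    for y1 in range(n1):             # rows of Pt must sum to one
        row = np.zeros(nvar); row[y1 * n2:(y1 + 1) * n2] = 1
        A_eq.append(row); b_eq.append(1.0)
    res = linprog(c=np.zeros(nvar), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * nvar, method="highs")
    return res.status == 0

# Binary symmetric channels: BSC(0.1) degrades to BSC(0.2).
bsc = lambda p: np.array([[1 - p, p], [p, 1 - p]])
print(is_stochastically_degraded(bsc(0.1), bsc(0.2)))  # True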
By applying the quantization scheme used in [18, Proof of Theorem 3.3 and Remark 3.8], we can
extend the discussion from discrete alphabets to continuous ones. In the following we consider fast fading
GMU channels. Denote the square of the fading channels from the transmitter to the first and second
receivers considered in Definition 1 by the random variables H1 and H2 , respectively. We then define two
sets of pairs of the random channels as ℋ_1 = {(H1, H2)} and ℋ_1' = {(H1, H2) : 1_{(1)} = 1}, i.e., (H1, H2) in ℋ_1' must satisfy (1). In the following, we call a stochastically degraded channel simply a degraded
channel due to the same marginal property. Note that discussions on the relation between degradedness
and other information-theoretic channel orders can be found in [17], [27], [28].
Definition 2. Denote the square of channels from the i-th transmitter to the first and second receivers by
H1i and H2i , respectively, i = 1, 2. A discrete memoryless-IC is said to have strong interference if
I(X1; Y1|X2, H11, H12) ≤ I(X1; Y2|X2, H21, H22), (2)
I(X2; Y2|X1, H21, H22) ≤ I(X2; Y1|X1, H11, H12), (3)
for all f_{X1} · f_{X2}. Define the sets ℋ_2 = {(H11, H12, H21, H22)} and ℋ_2' = {(H11, H12, H21, H22) : 1_{(2)} = 1 and 1_{(3)} = 1}.
Definition 3. A discrete memoryless-IC is said to have very strong interference if
I(X1; Y1|X2, H11, H12) ≤ I(X1; Y2|H21, H22), (4)
I(X2; Y2|X1, H21, H22) ≤ I(X2; Y1|H11, H12), (5)
for all f_{X1} · f_{X2}. Define the set ℋ_3' = {(H11, H12, H21, H22) : 1_{(4)} = 1 and 1_{(5)} = 1}.
In the following, we introduce the definition of the usual stochastic order, which is the underlying tool
to derive the main results in this paper.
Definition 4. [19, (1.A.3)] For random variables X and Y , X is smaller than Y in the usual stochastic
order, namely, X ≤st Y , if and only if F̄X (x) ≤ F̄Y (x) for all x ∈ (−∞, ∞).
Note that the definition is applicable to both discrete or continuous random variables.
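Definition 4 can be tested empirically from samples; a small sketch with a helper of our own, including a slack term to absorb sampling noise:

import numpy as np

def usual_st_order_leq(samples_x, samples_y, grid_size=200):
    """Empirical check of X <=_st Y (Definition 4): the CCDF of X must not
    exceed that of Y on a common grid. Sample-based, so only indicative."""
    grid = np.linspace(min(samples_x.min(), samples_y.min()),
                       max(samples_x.max(), samples_y.max()), grid_size)
    ccdf_x = (samples_x[None, :] > grid[:, None]).mean(axis=1)
    ccdf_y = (samples_y[None, :] > grid[:, None]).mean(axis=1)
    return np.all(ccdf_x <= ccdf_y + 1e-2)

rng = np.random.default_rng(3)
X = rng.exponential(1.0, 100_000)  # e.g. Rayleigh-fading power, mean 1
Y = rng.exponential(2.0, 100_000)  # stronger channel, mean 2
print(usual_st_order_leq(X, Y))    # expected: True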
III. MAIN RESULTS
In this section, we develop a general framework to classify fading channels such that we are able to
characterize the corresponding capacity results under statistical CSIT.
A. Problem Formulation and the Proposed Framework
According to the motivation described in Section II-A, we aim to formulate a problem in a tractable
way. We first define a set A ⊆ {ℋ_1, ℋ_2}, which is a subset of all tuples of random channels of the aforementioned MU channels. We also define a set B, which includes all the above MU channel orders from Definitions 1, 2, and 3 as
B ≜ {ℋ_1', ℋ_2', ℋ_3'}. (6)
Intuitively, we want to find a subset of all fading channel tuples, namely A, which should possess the following properties:
• it encompasses a constructive way to find the transformation f : A ↦ B;
• it allows the existence of a corresponding set B in which the channel qualities follow a certain order;
• by this channel order, capacity results are attainable.
The considered problem in this work is formulated as follows, also illustrated in Fig. 2.
P1: Find a set of tuples (f, A), namely S, such that
S = { (f, A) : f : A ↦ B, (7)
      a ∈ A and b ∈ B have the same marginal distributions }. (8)
Note that (7) is due to the fact that, in the desired equivalent channel, the tractability of the derivation of the capacity region can be guaranteed, while (8) ensures that, by Theorem 1, the capacity result is not changed after the transformation f.
Fig. 2. The proposed scheme identifies the ergodic capacity regions under statistical CSIT.
Remark 1. The optimal classification would be to find the three elements of a tuple (f, A, B) simultaneously, instead of fixing B and then finding (f, A). However, as mentioned previously, matching new capacity inner and outer bounds for the open scenarios is out of the scope of this work.
Note that under different assumptions on different MU channel models, the ways to identify A and B may differ, not to mention the corresponding capacity results. Therefore, we introduce those results case by case.
In the following we summarize three feasible schemes for P1 to compare channel strengths for the case in Fig. 1 (b), when the transmitter has only statistical CSIT. The first two schemes are related to coupling and the third one is related to copulas [29]. At the end we will show that the schemes based on coupling and copulas are equivalent. In brief, the trick in all schemes is, under fixed marginal distributions, to find a special structure of the dependence among the fading channels of different users that fits our purpose, e.g., to attain the capacity results. Note that the three schemes can easily be extended to an arbitrary number of receivers. We first give the following definition.
Definition 5. [30, Definition 2.1] The pair (X̃, Ỹ ) is a coupling of the random variables (X,Y ) if X̃ =d X
and Ỹ =d Y .
As will be clarified below, the coupling plays the role of the function f in P1 and in the proposed framework.
1) A Construction from Maximal Coupling: For random variables whose PDF's partially overlap as in Fig. 1 (b), a straightforward idea to align H1 and H2 for each realization is as follows: we try to form an equivalent channel pair (H̃1, H̃2) with PDF's f̃1 and f̃2, respectively, which is a coupling of (H1, H2), and to reorder H̃1 and H̃2 in the sense that H̃1 = H̃2 = h̃1 = h̃2 if h̃1, h̃2 ∈ S ≜ supp{f̃1} ∩ supp{f̃2}. Otherwise, H̃1 and H̃2 follow the PDF's f̃1 = f1 and f̃2 = f2, respectively. If this alignment is possible, we can transform the remaining parts of the PDF's into two non-overlapping ones. Then we know that one of the fading channels is always equivalently no weaker than the other, even if only the statistics of the channels are known. One may ask whether such an alignment exists. Our answer is yes; it relies on the concept of maximal coupling [30], which reorders part of the channel realizations and is defined as follows.
Definition 6. [30, Section 2.2] For random variables (X, Y), the coupling (X̃, Ỹ) is called a maximal coupling if P(X̃ = Ỹ) attains its maximal value among all couplings of (X, Y).
To proceed, we introduce a result on maximal coupling from [30].
Proposition 1. [30, Proposition 2.5] Suppose X and Y are random variables with respective piecewise continuous density functions fX and fY. The maximal coupling (X̃, Ỹ) for (X, Y) results in
P(X̃ = Ỹ) = ∫_{−∞}^{∞} min(fX(x), fY(x)) dx.  (9)
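As a quick numerical illustration of (9) (our addition; the two Gaussian PDFs below are assumed purely for this sketch, while the proposition itself holds for generic piecewise continuous densities):

import numpy as np

h = np.linspace(-10.0, 10.0, 20_001)
f_x = np.exp(-0.5 * h ** 2) / np.sqrt(2.0 * np.pi)          # f_X: N(0, 1)
f_y = np.exp(-0.5 * (h - 2.0) ** 2) / np.sqrt(2.0 * np.pi)  # f_Y: N(2, 1)
p = np.trapz(np.minimum(f_x, f_y), h)  # overlap integral in (9)
print(p)  # P(X~ = Y~) of the maximal coupling; here 2*Phi(-1) ~ 0.3173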
Based on Proposition 1, we derive the following result, which reorders the realizations of a class of
(H1 , H2 ) in the same trichotomy order.
Theorem 2. For a single-transmitter two-receiver AWGN channel, assume the PDF's of the two channels, namely f1 and f2, are continuous or have finitely many discontinuities. Then the selection
A = {(H1, H2) : fi − fInt ≢ 0, i ∈ {1, 2}, sup(supp(f2)) < sup(supp(f1))},  (10)
B = {(H̃1, H̃2) ∈ f(A)},  (11)
f : A → B, f(a) = {(H̃1, H̃2) : H̃k follows the PDF f̃k(h) ≜ [fk(h) − min(f1(h), f2(h))]/(1 − p), k = 1, 2, with probability 1 − p; H̃1 = H̃2 follow the PDF f̃0(h) ≜ min(f1(h), f2(h))/p, with probability p},  (12)
where fInt(h) ≜ min(f1(h), f2(h)), h ∈ supp(H1) ∪ supp(H2), p ≜ ∫_{−∞}^{∞} min(f1(h), f2(h)) dh, and a ∈ A, is feasible to P1.
Proof: The explicit construction of f in (12) is from the proof of maximal coupling [30, Proposition 2.5], while (10) is a sufficient condition guaranteeing that the realizations of the two equivalent channels follow the desired order after maximal coupling. To be self-contained, we first restate the important steps of the proof of [30, Proposition 2.5]. Define H = {h : f1(h) < f2(h)}. Any coupling (H̃1, H̃2) of (H1, H2) must satisfy
P(H̃1 = H̃2) = P(H̃1 = H̃2, H̃1 ∈ H) + P(H̃1 = H̃2, H̃2 ∈ H^c)
≤ P(H̃1 ∈ H) + P(H̃2 ∈ H^c)
=(a) P(H1 ∈ H) + P(H2 ∈ H^c)
= ∫_H f1(h) dh + ∫_{H^c} f2(h) dh
=(b) ∫_{−∞}^{∞} min(f1(h), f2(h)) dh
=(c) p,  (13)
where (a) is by Definition 5, (b) is by the definition of H, and (c) is by the definition of p.
On the other hand, we can use (12) as an explicit construction of the maximal coupling. Define three independent random variables G0, G1, and G2, following the PDF's f̃0, f̃1, and f̃2, respectively, as shown in (12). We can show that (12) is a coupling as follows:
P(H̃k ≤ h0) =(a) p·P(H̃k = G0 ≤ h0) + (1 − p)·P(H̃k = Gk ≤ h0)
= p ∫_{−∞}^{h0} [min(f1(h), f2(h))/p] dh + (1 − p) ∫_{−∞}^{h0} [(fk(h) − min(f1(h), f2(h)))/(1 − p)] dh
= ∫_{−∞}^{h0} fk(h) dh = P(Hk ≤ h0), for k = 1, 2,  (14)
where (a) is by construction, i.e., by assigning G0 and Gk to H̃k with probability p and 1 − p, respectively. Hence, it is clear that (14) fulfills the definition of coupling in Definition 5. On the other hand, it is clear that P(H̃1 = H̃2) ≥ P(H̃1 = H̃2 = G0) = p. Therefore, from (13) and (14), we know that (12) achieves the maximal coupling.
In the following, we prove that (10) and (12) form a feasible solution for P1. By (9) it is clear that P(H̃1 = H̃2) is the area of the intersection of f1 and f2. Then from (12), we know that f̃1 and f̃2 do not overlap with probability 1 − p. In addition, from (10) we know that h1 < h2 if h1 ∈ supp{f̃1} and h2 ∈ supp{f̃2}. That is, H̃1 < H̃2 with probability 1 − p. Furthermore, from (12) (or from (9)), we know that H̃1 = H̃2 with probability p. In other words, by the maximal coupling (12), we can construct an equivalent channel in which there are only two possible relations between the fading channel realizations h̃1 and h̃2: 1) h̃1 = h̃2; 2) h̃1 < h̃2, which completes the proof.
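A minimal sketch of the construction in (12), with two Gaussian PDFs discretized on a grid (the densities, grid, and sample size are our illustrative assumptions): it draws the common part G0 with probability p and the residual parts otherwise, and reproduces P(H̃1 = H̃2) ≈ p.

import numpy as np

rng = np.random.default_rng(1)
h = np.linspace(-8.0, 10.0, 4001)
dh = h[1] - h[0]
f1 = np.exp(-0.5 * h ** 2) / np.sqrt(2.0 * np.pi)          # f1: N(0, 1)
f2 = np.exp(-0.5 * (h - 3.0) ** 2) / np.sqrt(2.0 * np.pi)  # f2: N(3, 1)
f_min = np.minimum(f1, f2)
p = f_min.sum() * dh  # p of (12), computed as in (9)

def draw(pmf, size):
    # Draw grid points from a discretized (unnormalized) density.
    return rng.choice(h, size=size, p=pmf / pmf.sum())

n = 100_000
equal = rng.random(n) < p            # coupled branch, probability p
g0 = draw(f_min, n)                  # common part, PDF f0 of (12)
g1 = draw(f1 - f_min, n)             # residual of f1
g2 = draw(f2 - f_min, n)             # residual of f2
h1_t = np.where(equal, g0, g1)       # H1~
h2_t = np.where(equal, g0, g2)       # H2~
print(p, (h1_t == h2_t).mean())      # empirical P(H1~ = H2~) matches p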
Remark 2. Note that by applying the maximal coupling, we can transform two PDF's like the ones in Fig. 1(b) into equivalent ones in which one of the PDF's generates channel realizations that are always no worse than those of the other. In the following we use two examples to illustrate the feasibility of the selection of A and f in Theorem 2. Assume supp(f1) ⊆ supp(f2) and sup(supp(f1)) ≤ sup(supp(f2)); then there may exist R21 ⊂ supp(f̃2) and R22 ⊂ supp(f̃2) such that h̃21 < h̃1 < h̃22, where h̃21 ∈ R21, h̃22 ∈ R22, and h̃1 ∈ supp(f̃1), for the case h̃1 ≠ h̃2. Then h̃1 can be larger or smaller than h̃2. An example is shown in Fig. 3(a). On the other hand, if supp(f1) ⊆ supp(f2) and the maximum value of the alphabet of H1 is the same as that of H2, it is possible that h̃1 > h̃2 for all h̃1 ∈ supp{f̃1} and h̃2 ∈ supp{f̃2}, for the case h̃1 ≠ h̃2. An example is shown in Fig. 3(b). This example shows that (10) is sufficient but not necessary. Note that the latter example fulfills the definition of the usual stochastic order; further discussion follows with the next method.
Fig. 3. Examples of different PDF's f1, f2 and the corresponding f̃1 and f̃2 constructed by Theorem 2, where the bold curves denote f2 and f̃2.
Remark 3. Note that from [30, Proposition 2.7] we know that the probability of H̃1 ≠ H̃2 for the coupling (H̃1, H̃2) of (H1, H2) can be described by the total variation distance between H1 and H2 as
P(H̃1 ≠ H̃2) = (1 − p) ∫_{−∞}^{∞} ½(f̃1(h) + f̃2(h)) dh = 1 − ∫_{−∞}^{∞} min(f1(h), f2(h)) dh = dTV(H1, H2),  (15)
where dTV(X, Y) = sup_A |P(X ∈ A) − P(Y ∈ A)|. In this way we can observe the relationship between the closeness of two random variables in distribution and how closely the two random variables can be coupled.
In fact, the condition described in Theorem 2 corresponds to H1 ≤st H2. The reason why H1 ≤st H2 also results in H̃1 ≤ H̃2 with probability 1 will be explained by the next method, namely, coupling.
2) A Construction from Coupling: In this method we resort to constructing an explicit coupling such that all channel-pair realizations follow the same trichotomy order.
Theorem 3. For a single-transmitter two-receiver AWGN channel, the selection
A = {(H1, H2) : H1 ≤st H2},  (16)
B = {(H̃1, H̃2) ∈ f(A)},  (17)
f : A → B, f(a) = {(H̃1, H̃2) : H̃1 = F_{H1}^{−1}(U), H̃2 = F_{H2}^{−1}(U)},  (18)
where a ∈ A and U ∼ Unif(0, 1), is feasible to P1.
Proof: From the coupling theorem [20] we know that H1 ≤st H2 if and only if there exist random variables H̃1 =d H1 and H̃2 =d H2 such that H̃1 ≤ H̃2 with probability 1. Therefore, by construction we know that the distributions of H̃1 and H̃2 fulfill the same-marginal condition (8) in P1. In addition, the trichotomy order H̃1 ≤ H̃2 fulfills (H̃1, H̃2) ∈ B, i.e., the first channel is degraded from the second one by (1) under this trichotomy order for the AWGN channel. The proof of the coupling theorem [30, Ch. 2] provides a constructive way to find f, which we restate to be self-contained. If H1 ≤st H2, and if the generalized inverses F_{H1}^{−1} and F_{H2}^{−1} exist, where the generalized inverse of F is defined by F^{−1}(u) = inf{x ∈ R : F(x) ≥ u}, u ∈ [0, 1], then the equivalent channels H̃1 and H̃2 can be constructed by H̃1 = F_{H1}^{−1}(U) and H̃2 = F_{H2}^{−1}(U), respectively, where U ∼ Unif(0, 1). This is because
P(H̃1 ≤ h) = P(F_{H1}^{−1}(U) ≤ h) = P(U ≤ F_{H1}(h)) = F_{H1}(h),  (19)
i.e., H̃1 ∼ F_{H1}. Similarly, H̃2 ∼ F_{H2}. Since H1 ≤st H2, from Definition 4 we know that F_{H2}(h) ≤ F_{H1}(h) for all h. Then it is clear that F_{H1}^{−1}(u) ≤ F_{H2}^{−1}(u), ∀u, such that P(H̃1 ≤ H̃2) = 1. Therefore, we attain (18), which completes the proof.
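The construction in (18) is straightforward to simulate. The following sketch (our addition; the marginals H1 ~ Exp(1) and H2 ~ Exp(1/2), for which H1 ≤st H2 and F^{-1}(u) = −scale·log(1 − u), are illustrative assumptions) draws both equivalent channels from one common uniform variable:

import numpy as np

rng = np.random.default_rng(2)
u = rng.random(100_000)            # common U ~ Unif(0, 1)
h1_t = -1.0 * np.log1p(-u)         # H1~ = F_{H1}^{-1}(U)
h2_t = -2.0 * np.log1p(-u)         # H2~ = F_{H2}^{-1}(U)
print(bool(np.all(h1_t <= h2_t)))  # True: the realizations are aligned pathwise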
Remark 4. Originally, even though we know H1 ≤st H2, the order of the channel realizations H1 = h1 and H2 = h2 may vary from realization to realization. Then we are not able to claim which channel is stronger for the duration of a codeword. However, from Theorem 3 we know that we can virtually align all the channel realizations within a codeword length such that each channel gain realization of H2 is no worse than that of H1, if H1 ≤st H2. Then we can claim that there exists an equivalently degraded channel.
Note that the usual stochastic order aligns the random fading gains in the equivalent channel, which makes it simple to adopt the MU channel orders, e.g., in Sec. II-B2. However, the alignment performed in the equivalent channel may be stricter than necessary for feasible scenarios. This is because the considered channel orders among MU channels, e.g., degradedness, are in fact described by statistics only, not by the realizations of the individual channels.
3) A Construction from Copula: In this method, we first explicitly construct a joint distribution between the fading channels such that all channel-pair realizations again follow the same trichotomy order. We then identify that this construction is indeed a copula [29]. The concepts of coupling and copula are similar in the sense that, in both cases, the marginal distributions are given and we want to form a joint distribution to achieve certain goals. In the former case, we can introduce any relation between the random channels such that, for example, we can order the channel pairs for all realizations in the desired trichotomy order; however, we do not construct an explicit joint distribution. In the latter case, we try to find a constructive way to attain an explicit joint distribution from the marginal ones and then verify that this joint distribution fulfills our requirements. Note that the existence of the copula is guaranteed by Sklar's theorem [29].
Theorem 4. For a single-transmitter two-receiver AWGN channel, the selection of
A = {(H1, H2) : H1 ≤st H2},  (20)
B = {(H̃1, H̃2) ∈ f(A)},  (21)
f : A → B, f(a) = {(H̃1, H̃2) : H̃1 and H̃2 follow the joint CCDF F̄_{H̃1,H̃2}(h̃1, h̃2) = min{F̄_{H1}(h̃1), F̄_{H2}(h̃2)}},  (22)
where F̄_{X,Y}(x, y) ≜ P(X ≥ x, Y ≥ y) and a ∈ A, is feasible to P1.
Proof: By the definition of F̄_{X,Y}(x, y) it is clear that the marginal distributions are unchanged, i.e., F̄_{H̃1}(h̃1) = F̄_{H̃1,H̃2}(h̃1, 0) = F̄_{H1}(h̃1) and F̄_{H̃2}(h̃2) = F̄_{H̃1,H̃2}(0, h̃2) = F̄_{H2}(h̃2). Note that we consider the square of the channel gains, so we substitute 0 into F̄_{H̃1,H̃2}(h̃1, h̃2) to marginalize it. With the selection A = {(H1, H2) : H1 ≤st H2}, we prove in the following that h̃1 ≤ h̃2 for all (H̃1, H̃2) ∈ B = f(A).
Assume that h̃1 > h̃2 + ε, ε > 0. Then
P(h̃1 ≤ H̃1, h̃2 ≤ H̃2 ≤ h̃2 + ε) = P(h̃1 ≤ H̃1, h̃2 ≤ H̃2) − P(h̃1 ≤ H̃1, h̃2 + ε ≤ H̃2)
=(a) F̄_{H̃1,H̃2}(h̃1, h̃2) − F̄_{H̃1,H̃2}(h̃1, h̃2 + ε)
=(b) F̄_{H̃1}(h̃1) − F̄_{H̃1}(h̃1)
= 0,  (23)
where (a) follows from the definition of the joint CCDF and (b) follows from (22) together with the given property H1 ≤st H2, which implies F̄_{H̃2}(h̃2) ≥ F̄_{H̃2}(h̃2 + ε) ≥ F̄_{H̃1}(h̃2 + ε) ≥ F̄_{H̃1}(h̃1). To ensure that H̃1 ≤ H̃2 for all random samples, we let ε → 0. Thus, as long as H1 ≤st H2, we can form an equivalent joint distribution as in (22) that has the same marginal distributions as the original channel, and hence the capacity is unchanged.
We can re-express Theorem 4 in terms of the joint CDF instead of the joint CCDF (or joint survival function [29]) as follows.
Corollary 2. For a single-transmitter two-receiver AWGN channel, the selection of
A = {(H1, H2) : H1 ≤st H2},  (24)
f(a) = {(H̃1, H̃2) : H̃1 and H̃2 follow the joint CDF F_{H̃1,H̃2}(h̃1, h̃2) = min{F_{H1}(h̃1), F_{H2}(h̃2)}},  (25)
B = {(H̃1, H̃2) ∈ f(A)},  (26)
where a ∈ A, solves P1.
Proof: Here we construct the function f by showing that (25) is identical to the Fréchet–Hoeffding-like upper bound for survival copulas. From the Fréchet–Hoeffding bounds [29, Sec. 2.5] we know that a joint CDF F_{XY}(x, y) can be upper and lower bounded by the marginals FX(x) and FY(y) as follows:
max{FX(x) + FY(y) − 1, 0} ≤ F_{XY}(x, y) ≤ min{FX(x), FY(y)}.  (27)
On the other hand, from the definitions of the joint CCDF F̄_{XY}(x, y) and the joint CDF F_{XY}(x, y), we can easily see
F̄_{XY}(x, y) = 1 − FX(x) − FY(y) + F_{XY}(x, y).  (28)
After substituting the upper bound in (27) into (28), we get
F̄_{XY}(x, y) ≤ 1 − FX(x) − FY(y) + min{FX(x), FY(y)} = 1 + min{−FX(x), −FY(y)} =(a) min{F̄X(x), F̄Y(y)},
where (a) is by the definition of the CCDF, which completes the proof.
In fact, we can prove that (18) and (25) are equivalent by showing that
P(F_{H1}^{−1}(U) ≤ h̃1, F_{H2}^{−1}(U) ≤ h̃2) = P(U ≤ F_{H1}(h̃1), U ≤ F_{H2}(h̃2))
= P(U ≤ min{F_{H1}(h̃1), F_{H2}(h̃2)})
=(a) min{F_{H1}(h̃1), F_{H2}(h̃2)},  (29)
where (a) is due to the assumption that U ∼ Unif(0, 1).
Note that coupling and copula express the joint distribution differently: in the coupling case, the joint distribution is implicitly induced by the common random variable U together with the marginal distributions in (18), while in the copula case the joint distribution is explicitly constructed as in (22).
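As a numerical cross-check (our addition; the exponential marginals are illustrative assumptions), the quantile coupling of (18) indeed induces the comonotone joint CDF of (25):

import numpy as np

rng = np.random.default_rng(3)
u = rng.random(500_000)
h1_t = -1.0 * np.log1p(-u)   # H1 ~ Exp(1)
h2_t = -2.0 * np.log1p(-u)   # H2 ~ Exp(1/2)
a, b = 1.0, 1.5
joint = np.mean((h1_t <= a) & (h2_t <= b))              # empirical joint CDF
bound = min(1.0 - np.exp(-a), 1.0 - np.exp(-b / 2.0))   # min{F_{H1}(a), F_{H2}(b)}
print(joint, bound)  # the two values agree up to sampling noise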
Remark 5. We can directly identify that (22) is a two-dimensional copula [29, Definition 2.2.2] by definition, without the aid of the Fréchet–Hoeffding-like upper bound and the derivation in the proof of Corollary 2. An equivalent definition of a two-dimensional copula [29, (2.2.2a), (2.2.2b), (2.2.3)] is a function C0 : [0, 1]² → [0, 1] with the following properties:
1) For every u, v in [0, 1],
C0(u, 0) = 0 = C0(0, v),  (30)
C0(u, 1) = u and C0(1, v) = v;  (31)
2) For every u1, u2, v1, v2 in [0, 1] such that u1 ≤ u2 and v1 ≤ v2,
C0(u2, v2) − C0(u2, v1) − C0(u1, v2) + C0(u1, v1) ≥ 0.  (32)
The identification that F̄_{H̃1,H̃2}(h̃1, h̃2) = min{F̄_{H1}(h̃1), F̄_{H2}(h̃2)} in (22) is a copula is derived in Appendix I.
IV. APPLICATION TO FAST FADING MEMORYLESS MU CHANNELS
In the following, we use the coupling scheme introduced in Theorem 3 for memoryless MU channels due to its intuitive characteristics. We first consider binary symmetric memoryless MU channels as examples of discrete memoryless MU channels and then extend to memoryless GMU channels.
A. Binary and Binary Fading MU Channels with Statistical CSIT
In the following we take the binary symmetric channel with and without fading as examples.
1) Binary Symmetric MU Channels with Statistical CSIT: For binary symmetric channels (BSC), we can model the random bit flip by a switch S ∼ Bern(p), i.e., S = 1 if the bit is flipped, which happens with probability p. Consider a binary symmetric MU channel with one transmitter and two receivers, where S1 ∼ Bern(p1) and S2 ∼ Bern(p2) for the first and second BSC's, respectively. Statistical CSIT means that the transmitter knows only the probabilities p1 and p2, with p1 ≥ p2, while perfect CSIT means that the transmitter knows the instantaneous positions of S1 and S2. For statistical CSIT, we can use Theorem 3 to compare the two sub-BSC's, i.e., the second channel is stronger than the first one if S1 ≥st S2. Note that the direction of the inequality in the usual stochastic order is reversed compared to the three schemes, since here a higher probability of S = 1 means the channel is worse. Note also that the definition of the usual stochastic order in Definition 4 is valid for discrete random variables as well.
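A direct grid check of Definition 4 for the Bernoulli switches (our addition; the probabilities are illustrative) is immediate:

import numpy as np

p1, p2 = 0.3, 0.1  # p1 >= p2
x = np.linspace(-1.0, 2.0, 301)
ccdf = lambda p: np.where(x < 0, 1.0, np.where(x < 1, p, 0.0))  # CCDF of Bern(p)
print(bool(np.all(ccdf(p1) >= ccdf(p2))))  # True: S1 >=_st S2, the second BSC is stronger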
2) Binary Erasure MU Channels with Statistical CSIT: Similar to the BSC case, we can use a switch S to change the input bit to an erasure. Then, for two binary erasure channels with S1 ∼ Bern(p1) and S2 ∼ Bern(p2), p1 ≥ p2, we can claim that the second channel is stronger than the first one if S1 ≥st S2.
3) Binary Fading MU Channels with Statistical CSIT: For BSC's, similar to the AWGN case, we can consider binary fading [13] as follows, where the received signal at the k-th receiver is
Yk = Hk X ⊕ Nk,  (33)
where Hk ∼ Bern(pk) and Nk ∼ Bern(pN) denote the fading channel and the equivalent additive noise, respectively. Then, for a two-receiver case, from Theorem 3 we can easily see that if p1 ≥ p2, i.e., H1 ≥st H2, the first channel is stronger than the second one.
B. Fading Gaussian Interference Channels with Statistical CSIT
From this section on, we consider Gaussian MU channels. When there is only statistical CSIT and full CSI at the receiver, the ergodic capacity region of the GIC is unknown in general. In this section, we identify sufficient conditions based on the results in Section III-A to attain the capacity regions of Gaussian interference channels with strong and very strong interference. Examples illustrate the results.
1) Preliminaries and Results: We assume that each receiver perfectly knows the two channels to it. Therefore, the received signals of a two-user fast fading Gaussian interference channel can be stated as
Y1 = √H11 e^{jΦ11} X1 + √H12 e^{jΦ12} X2 + N1 ≜ H̃11 X1 + H̃12 X2 + N1,  (34)
Y2 = √H21 e^{jΦ21} X1 + √H22 e^{jΦ22} X2 + N2 ≜ H̃21 X1 + H̃22 X2 + N2,  (35)
where Hkj and Φkj ∈ [0, 2π] are real-valued non-negative independent random variables denoting the absolute square and the phase of the fading channel between the j-th transmitter and the k-th receiver, respectively, where k, j ∈ {1, 2}. The CCDF of Hkj is denoted by F̄_{Hkj}. The channel inputs at transmitters 1 and 2 are denoted by X1 and X2, respectively, with input power constraints E[|X1|²] ≤ P1 and E[|X2|²] ≤ P2. The noises N1 and N2 at the corresponding receivers are independent circularly symmetric AWGN with zero mean and unit variance. We assume that the transmitters know only the statistics but not the instantaneous realizations of {Hkj} and {Φkj}. Hence, the channel input signals {Xj} are not functions of the channel realizations and are therefore independent of {Hkj} and {Φkj}. In addition, without loss of generality, we assume {Hkj}, {Φkj}, and {Nk} are mutually independent.
In the following derivation we do not exploit the commonly used standard form of the GIC. This is because the normalization in the standard form results in a ratio of random variables, whose distribution may not be easy to derive, which would hinder identifying the channel orders easily. We first extend the sufficient condition for a strong interference channel to the case with only statistical CSIT. In the sequel we present our first main result.
Theorem 5. If we consider the set A as
A = {(H11, H12, H21, H22) : H21 ≥st H11 and H12 ≥st H22},  (36)
then the following ergodic capacity region of a strong GIC with statistical CSIT can be achieved:
C(P1, P2) = C1(P1) ∩ C2(P2),  (37)
where
Cj(Pj) ≜ {(R1, R2) : R1 ≤ E[C(Hj1 P1)], R2 ≤ E[C(Hj2 P2)], R1 + R2 ≤ E[C(Hj1 P1 + Hj2 P2)]}.  (38)
Please refer to Appendix II for the proof.
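The expectations defining (38) are easy to estimate by Monte Carlo. The following sketch is our addition and assumes Rayleigh fading (exponentially distributed squared gains), C(x) = log2(1 + x), and gain variances chosen so that (36) holds; all of these are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(4)
P1 = P2 = 10.0
var = {"11": 1.0, "12": 2.0, "21": 2.0, "22": 1.0}  # satisfies (36)
n = 200_000
for j in ("1", "2"):
    h1 = rng.exponential(var[j + "1"], n)
    h2 = rng.exponential(var[j + "2"], n)
    r1 = np.mean(np.log2(1.0 + h1 * P1))             # E[C(H_j1 P1)]
    r2 = np.mean(np.log2(1.0 + h2 * P2))             # E[C(H_j2 P2)]
    rs = np.mean(np.log2(1.0 + h1 * P1 + h2 * P2))   # E[C(H_j1 P1 + H_j2 P2)]
    print(f"C{j}: R1 <= {r1:.2f}, R2 <= {r2:.2f}, R1 + R2 <= {rs:.2f}")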
Remark 6. A recent work [13] proves the capacity region of a two-user binary fading interference channel without CSIT under both weak and strong interference, where the received signals are modeled by
Yk = Gkk Xk ⊕ Gk̄k Xk̄, k = 1, 2,  (39)
where k̄ = 3 − k, Gkk, Gk̄k ∈ {0, 1}, and all algebraic operations are in F2. Our result in Theorem 5 and the discussion in Section III are also valid for the binary fading IC (39). More specifically, in [13] the strong interference channel is defined by pd ≤ pc ≤ 1, where H11 ∼ Bern(pd), H22 ∼ Bern(pd), H12 ∼ Bern(pc), and H21 ∼ Bern(pc). It is easy to see that the complementary cumulative distribution function of H21 is no less than that of H11, i.e., H21 ≥st H11. Similarly, we can deduce H12 ≥st H22. This fact manifests that (36) is a more general expression of the strong interference condition, i.e., it can be used for either discrete- or real-valued random channels. We can also extend (39) to cases with noise. Cases in [13] that achieve capacity regions but are not covered in our current paper can be considered as future work. Note that (36) can also be specialized to the full CSIT case. For strong binary fading ICs, the capacity region achieved by interference decoding at each receiver [13] is the same as in our results in Theorem 5.
Remark 7. With the technique of stochastic orders, we can consider a fading IC with a stricter condition, i.e., the uniformly strong IC [11], which is introduced in Appendix II, such that we can more easily derive the capacity region of the IC. In contrast, directly dealing with the hybrid IC2 may be too difficult3.
The ergodic capacity region result for a very strong interference channel is as follows.
Theorem 6. If we consider the set A as
A = {(H11, H12, H21, H22) : H21/(1 + P2 H22) ≥st H11 and H12/(1 + P1 H11) ≥st H22},  (40)
then the following gives the ergodic capacity region of a very strong GIC with statistical CSIT:
C(P1, P2) = {(R1, R2) : R1 ≤ E[C(H11 P1)], R2 ≤ E[C(H22 P2)]}.  (41)
Please refer to Appendix III for the proof.
In the considered scenarios, i.e., strong/very strong interference, we can decouple the effect of the other user from the single-user capacity constraint, so that we are able to prove the optimality of the Gaussian input. Further discussion is presented in Remark 9 in Sec. IV-D.
2) Examples: In this subsection we provide examples of scenarios in which the sufficient conditions in Theorem 5 and Theorem 6 are feasible.
Example of (36): Assume that the squares of the four channel gains follow exponential distributions, i.e., Hjk ∼ Exp(1/σjk²), j, k = 1, 2. From the CCDF of the exponential distribution, it is easy to see that if σ21² ≥ σ11² and σ12² ≥ σ22², then the two constraints in (36) are fulfilled.
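A direct numerical check of this exponential example (our addition; the variance values are illustrative) compares the CCDFs on a grid:

import numpy as np

sigma2 = {"11": 0.5, "12": 2.0, "21": 1.5, "22": 1.0}  # satisfies the stated condition
h = np.linspace(0.0, 20.0, 2001)
ok_1 = np.all(np.exp(-h / sigma2["21"]) >= np.exp(-h / sigma2["11"]))  # H21 >=_st H11
ok_2 = np.all(np.exp(-h / sigma2["12"]) >= np.exp(-h / sigma2["22"]))  # H12 >=_st H22
print(bool(ok_1 and ok_2))  # True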
When there is perfect CSIT, the constraints in (36) can still be utilized by taking the density functions of the channels to be Dirac delta functions.
2 It is defined in [11] as: at least one strong and one weak fading state happen when averaged over all fading states.
3 We may treat the problem considered in [13], which is only for the binary alphabet, as evidence of this difficulty.
Example of (40): In this case, to proceed, we need the distributions of the two ratios of random variables in (40). We first rearrange H21/(1 + P2 H22) into a ratio of quadratic forms as
H21/(1 + P2 H22) = H^H B1 H / (1 + P2 H^H B2 H) ≜ Z,  (42)
where we assume H ≜ [Hw,21 Hw,22]^T, Hw,21 ∼ N(0, 1), Hw,22 ∼ N(0, 1), Hw,21 is independent of Hw,22, B1 ≜ diag{[σ21², 0]}, and B2 ≜ diag{[0, σ22²]}. Hence, H21 ≜ (σ21 Hw,21)² and H22 ≜ (σ22 Hw,22)². From [31, (16)] we know that the CDF of the RHS of (42) can be calculated as
F_Z(t) =(a) u(t) − Σ_{i=1}^{2} [λi² / Π_{l≠i}(λi − λl)] (1/|λi|) e^{−t/λi} u(t/λi)
      =(b) u(t) − [σ21² / (σ21² + t P2 σ22²)] e^{−t/σ21²},  (43)
where in (a), u(t) is the unit step function and {λi} are the eigenvalues of B1 − t P2 B2, i.e., (λ1, λ2) = (σ21², −t P2 σ22²); in (b) we substitute the eigenvalues into the RHS of (a). We then evaluate the first constraint in (40) by checking the difference of the CCDFs of Z and H11, i.e., 1 − F_Z(t) − e^{−t/σ11²}, numerically.
In the first comparison, we fix the variances of the cross channels as c ≜ σ12² = σ21² = 1 and the transmit powers P ≜ P1 = P2 = 1, and sweep the variances of the dedicated channels over a ≜ σ11² = σ22² = 0.1, 0.3, 0.5, 0.7. Since the conditions in (40) are symmetric and the considered settings are symmetric, once the first condition in (40) is valid, the second one is automatically valid as well. The results are shown in Fig. 4, from which we observe that the difference of the CCDFs decreases as a increases. This is because the support of the CCDF of H11 grows with increasing a; in the considered case a = 0.7, the distribution of H11 concentrates on larger channel values than that of H21/(1 + P2 H22), which is reflected in the CCDFs: H11 has a larger CCDF than H21/(1 + P2 H22) at small h, violating the condition in (40). In contrast, the values a = 0.1, 0.3, 0.5 result in (H11, H12, H21, H22) satisfying (40). We also investigate the effect of the transmit power constraints on the validity of the sufficient condition. We consider the case a = 0.1, c = 1 with P = 1, 10, 50, 100 (in linear scale). From Fig. 5 we observe that the differences of the CCDFs decrease with increasing P, which is intuitive from (40), since the distribution of H21/(1 + P2 H22) then concentrates on a smaller range. We can also see that, when P = 100, the condition in (40) is no longer valid. In contrast, the choices P = 1, 10, 50 satisfy (40).
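The above comparison is easy to reproduce. The following sketch (our addition) evaluates 1 − F_Z(t) via the closed form in (43)(b) and checks the first constraint in (40) for the setting a = 0.1, c = 1, P = 1:

import numpy as np

a, c, P2 = 0.1, 1.0, 1.0  # sigma_11^2 = sigma_22^2 = a, sigma_21^2 = c
t = np.linspace(0.0, 7.0, 7001)
ccdf_z = c / (c + t * P2 * a) * np.exp(-t / c)  # 1 - F_Z(t) from (43)(b)
ccdf_h11 = np.exp(-t / a)                       # CCDF of H11 ~ Exp(1/a)
print(bool(np.all(ccdf_z >= ccdf_h11)))         # True: (40) holds for this setting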
Fig. 4. Identification of (40) under different variances of channel gains of the dedicated channels with c = 1 and P = 1.
Fig. 5. Identification of (40) under different transmit power constraints with a = 0.1 and c = 1.
C. Fading Gaussian Broadcast Channels with Statistical CSIT
1) Preliminaries and Results: When a 2-receiver BC is degraded, we know that the capacity region [18] is the union, over all (V, X) satisfying the Markov chain V − X − Y1 − Y2, of rate pairs (R1, R2) such that
R1 ≤ I(X; Y1 | V),
R2 ≤ I(V; Y2),
for some p(v, x), where the cardinality of the random variable V satisfies |V| ≤ min{|X|, |Y1|, |Y2|} + 1.
Note that for non-degraded BC’s, only the inner and outer bounds are known, e.g., Marton’s inner bound
[32] and Nair-El Gamal’s outer bound [33]. Therefore, it shall be easier to characterize the capacity region
of a broadcast channel if we can identify that it is degraded.
The capacity region of a GBC is known for both fixed and fading channels when there is perfect
CSIT as well as perfect CSI at the receiver (CSIR). For multiple antennas GBC with perfect CSIT and
CSIR, the channel enhancement technique is used to form an equivalent degraded broadcast channel [34].
Immense endeavors have been made to find the optimal input covariance matrix of the Gaussian input,
e.g., [35]–[37]. However, the problem is open when there is imperfect CSIT in general and only limited
cases are known [12]. The fading GBC with only perfect CSIR but imperfect CSIT in general lacks the
degraded structure for arbitrary fading distributions, which makes it a challenging problem.
We assume that there is full CSIR, so that each receiver can compensate the phase rotation of its own channel without changing the capacity, yielding real channels. Therefore, the signal at receiver k of the considered L-receiver fast fading GBC can be stated as
Yk = √Hk X + Nk, k = 1, ..., L,  (44)
where Hk is a real-valued non-negative independent random variable denoting the square of receiver k's fading channel, with CCDF F̄Hk. The channel input is denoted by X, with input power constraint E[X²] ≤ PT. The noises {Nk} at the corresponding receivers are independent AWGN with zero mean and unit variance. We assume that the transmitter knows only the statistics but not the instantaneous realizations of {Hk}. In the following we first consider the two-receiver case and then extend to the L-receiver case.
The following result can be easily derived from Theorem 5.
Corollary 3. For an L-receiver fast fading Gaussian broadcast channel, if H1 ≥st H2 ≥st · · · ≥st HL , then
it is degraded.
Remark 8. An important note is that degradedness does not guarantee the optimality of the Gaussian input for fading GBC's with statistical CSIT. In [38], it is shown that with statistical CSIT, a local perturbation of the Gaussian channel input can be better than the Gaussian one. The optimality of the Gaussian input has been proved only for very few cases, e.g., [39].
Now we can further generalize Corollary 3 to the case in which the fading channels are formed by clusters of scatterers [40]. The received signal at receiver j from M clusters of scatterers can be expressed as
Yj = Σ_{k=1}^{M} H̃j,k X + Nj,
where the H̃j,k are mutually independent for all j, k. In particular, we consider the case in which we are only provided the information of each cluster but not of the superimposed result √Hk in (44). Therefore, the phases of each channel cluster must be taken into account, i.e., we consider the k-th clusters of the first and second users as H̃1k = H̃1k,Re + i·H̃1k,Im = √H1k e^{−iφ1k} and H̃2k = H̃2k,Re + i·H̃2k,Im = √H2k e^{−iφ2k}.
Proposition 2. Let M be the number of clusters of scatterers for both users 1 and 2, and denote the channels of users 1's and 2's k-th clusters by H̃1k and H̃2k, respectively, where k = 1, ..., M. The broadcast channel is degraded if user 1's scatterers are stronger than those of user 2 in the sense that H̃π1k,Re H̃π1j,Re ≥st H̃π2k,Re H̃π2j,Re and H̃π1k,Im H̃π1j,Im ≥st H̃π2k,Im H̃π2j,Im, ∀k, j ∈ {1, ..., M}, for some permutations π1 and π2 of users 1's and 2's clusters, respectively.
Proof: We attain the result by the fact that the usual stochastic order is closed under convolution [19, Theorem 1.A.3], which states that for two sets of independent random variables {Ak} and {Bk}, if Ak ≥st Bk, ∀k ∈ {1, ..., M}, then for any increasing function φ : R^M → R, one has φ(A1, ..., AM) ≥st φ(B1, ..., BM). In particular, here φ is the summation and we have Σ_{k=1}^{M} Ak ≥st Σ_{k=1}^{M} Bk. In our problem, from Corollary 1 we know that
|Σ_{k=1}^{M} H̃1k|² ≥st |Σ_{k=1}^{M} H̃2k|²  (45)
is sufficient to attain a degraded GBC. After expanding the left and right hand sides of (45) and applying [19, Theorem 1.A.3], we get Proposition 2.
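The closure of the usual stochastic order under sums of independent variables is easy to illustrate empirically; in the following sketch (our addition), the per-cluster gains are assumed exponential purely for illustration:

import numpy as np

rng = np.random.default_rng(6)
n, M = 200_000, 3
A = rng.exponential(2.0, (n, M)).sum(axis=1)  # A_k ~ Exp(1/2), A_k >=_st B_k
B = rng.exponential(1.0, (n, M)).sum(axis=1)  # B_k ~ Exp(1)
grid = np.linspace(0.0, 25.0, 100)
print(bool(np.all([(A > s).mean() >= (B > s).mean() for s in grid])))  # True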
To encompass a broader class of fading channels, we can relax the numbers of clusters of channels 1 and 2 in Proposition 2 to two non-negative integer-valued random variables N and M, respectively, and the result remains valid. In particular, if |Σ_{k=1}^{N} H̃1k|² ≥st |Σ_{k=1}^{M} H̃2k|² and if N ≥st M, then receiver 2 is a degraded version of receiver 1, which can be proved with the aid of [19, Theorem 1.A.4].
2) Example: In the following, we show an example with practical fading distributions based on Corollary 3. Assume the magnitudes of the three channels are independent Nakagami-m random variables with shape parameters m1, m2, m3 and spread parameters w1, w2, w3, respectively. From Corollary 3 we know that the fading GBC is degraded if
γ(m1, m1 x/w1)/Γ(m1) ≤ γ(m2, m2 x/w2)/Γ(m2) ≤ γ(m3, m3 x/w3)/Γ(m3), ∀x,
where γ(s, x) = ∫₀ˣ t^{s−1} e^{−t} dt is the incomplete gamma function and Γ(s) = ∫₀^∞ t^{s−1} e^{−t} dt is the ordinary gamma function [41, p. 255]. An example satisfying the above inequality is (m1, w1) = (1, 3), (m2, w2) = (1, 2), and (m3, w3) = (0.5, 1).
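The following sketch (our addition, relying on SciPy's regularized lower incomplete gamma) verifies the CDF ordering for the quoted Nakagami-m example numerically:

import numpy as np
from scipy.special import gammainc  # gammainc(s, x) = gamma(s, x) / Gamma(s)

# CDF of the squared gain Hk ~ Gamma(mk, wk/mk) at x is gammainc(mk, mk * x / wk).
params = [(1.0, 3.0), (1.0, 2.0), (0.5, 1.0)]  # (mk, wk) for k = 1, 2, 3
x = np.linspace(1e-6, 50.0, 50_000)
cdfs = [gammainc(m, m * x / w) for (m, w) in params]
print(bool(np.all(cdfs[0] <= cdfs[1]) and np.all(cdfs[1] <= cdfs[2])))  # True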
D. Physical Layer Security with Fading Gaussian Channels and Statistical CSIT
In this section we introduce two important notions in physical layer security, the wiretap channel (WTC) for secure data transmission and secret key generation (SKG), to show how to apply the developed framework to them. For SKG, we focus on the source-model SKG to compare the random sources stochastically.
1) Wiretap channels: For the WTC, compared to [16], in this paper we provide a more intuitive and complete derivation. In addition, we discuss the secret key capacity of SKG with the source model. In particular, we aim at characterizing the ergodic secret key capacity of a fast fading Gaussian Maurer's satellite model, which is a very practical model for SKG through wireless channels.
We consider a Gaussian WTC:
Y = √H X + NY,   Z = √G X + NZ,
where H and G are the channel gains from Alice to Bob and to Eve, respectively, while NY and NZ are the AWGNs at Bob and Eve, respectively. Without loss of generality, we assume NY and NZ follow the same distribution, i.e., both have zero mean and unit variance. Assume that Alice knows only the statistics of both channels H and G, while Bob perfectly knows the realization of his own channel H, and Eve perfectly knows both H and her own channel G.
Theorem 7. If we consider the set A as
A = {(H, G) : H ≥st G},  (46)
then the ergodic secrecy capacity with statistical CSIT of both H and G is
CS(PT) = E[C(H PT) − C(G PT)].  (47)
Please refer to Appendix IV for the proof.
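The ergodic secrecy capacity (47) can be estimated by Monte Carlo. The sketch below (our addition) assumes exponential squared gains with H ≥st G and C(x) = log2(1 + x); both are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(5)
PT, n = 10.0, 500_000
u = rng.random(n)                # quantile coupling as in Theorem 3
h = -2.0 * np.log1p(-u)          # H ~ Exp(1/2)
g = -1.0 * np.log1p(-u)          # G ~ Exp(1), so g <= h pathwise
cs = np.mean(np.log2(1.0 + h * PT) - np.log2(1.0 + g * PT))
print(cs)  # estimated ergodic secrecy capacity in bits per channel use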
Remark 9. Recall that for a degraded GBC with only statistical CSIT, we cannot claim that the capacity region is achievable by a Gaussian input. In contrast, for a degraded Gaussian WTC (GWTC) with only statistical CSIT, we can prove that a Gaussian input is optimal. The difference can be explained as follows. For a GWTC, there is only one message set, transmitted to the single dedicated receiver, and the Markov chain U − X − Y − Z reduces to X − Y − Z [42] when the channel is degraded, i.e., no prefixing is needed. The simplification of the optimal-input problem from two random variables to a single one makes it easier to prove that the Gaussian input is optimal. On the other hand, for a two-user degraded GBC there are two message sets, and one user's signal is interference to the other, i.e., the chain U − X − Y − Z does not degenerate as it does for the WTC. In particular, the boundary points of a BC with statistical CSIT are characterized by
max_{(U,X): U−X−(Y1,Y2), E[X²]≤PT} I(X; Y1 | U, H1) + µ I(U; Y2 | H2)
= max_{(U,X): U−X−(Y1,Y2), E[X²]≤PT} h(Y1 | U, H1) − h(N1) + µ h(Y2 | H2) − µ h(Y2 | U, H2),  (48)
where µ > 0. Note that a Gaussian input is optimal for each term on the RHS of (48), but the negative signs prevent showing exact optimality. More precisely, h(Y2 | U, H2) comprises the interference from the first user, so this term cannot be reduced to h(N2), unlike its counterpart h(Y1 | U, X, H1) = h(N1) [43].
2) Source-model SKG: To invoke the developed scheme, we first prove a sufficient condition to achieve the secret key capacity CSK = I(X;Y) − I(X;Z): the requirement that the common randomness form a Markov chain X − Y − Z, i.e., physical degradedness, can be relaxed to stochastic degradedness. The motivation is as follows. To achieve the secret key capacity, the original analysis [44, Corollary 1] requires the common randomness to form a Markov chain. However, this requirement is quite stringent for classifying random sources that can attain the secret key capacity. From the extension of physically degraded broadcast/wiretap channels to stochastically degraded ones, we know that the latter notion is equivalent to the former in capacity but covers broader classes of channels. Therefore, in the following we first define the stochastically degraded source. We then show that the stochastic degradedness condition can be fulfilled by, e.g., the fast fading Gaussian Maurer's (satellite) model. More specifically, we assume that there is a central random source S observed through fast fading AWGN channels as X, Y, and Z at Alice, Bob, and Eve, respectively. We apply the usual stochastic order to derive a sufficient condition on the fading channels such that CSK is achieved. The sufficient condition provides a simple way to verify stochastic degradedness and thereby to identify the effective SK capacity. We first give a definition on the relation between the common randomness, followed by our result.
Definition 7. A source of common randomness (X , Y , Z , PX Ỹ Z̃ ) is called stochastically degraded if the
conditional marginal distributions PỸ |X and PZ̃|X are identical to those of another source of common
randomness (X , Y , Z , PXY Z ) following X − Y − Z. A source of common randomness (X , Y , Z , PXY Z ) is
called physically degraded if X, Y, and Z form a Markov chain X −Y − Z.
Theorem 8. If a source of common randomness (X , Y , Z , PX Ỹ Z̃ ) is stochastically degraded such that
PỸ |X = PY |X and PZ̃|X = PZ|X , where X − Y − Z, then the SK capacity of the source (X, Ỹ , Z̃) is CSK =
I(X;Y ) − I(X; Z).
Please refer to Appendix V for the proof. The proof is sketched as follows and summarized in Fig. 6. The main idea is to show that a stochastically degraded source (X, Ỹ, Z̃) implies that the conceptual WTC (CWTC) [45] is also stochastically degraded, which then has the same CSK as the CWTC from a physically degraded source (X, Y, Z). There are three main steps. The first is to construct the CWTC of the source (X, Y, Z) and prove that if X − Y − Z, then U − Y′ − Z′, i.e., the corresponding CWTC is also physically degraded, where U is the conceptual code symbol transmitted in the CWTC, which is uniformly distributed on X and independent of (X, Y, Z); the equivalently received signals at Bob and Eve in the CWTC are Y′ ≜ (Y, U ⊕ X) and Z′ ≜ (Z, U ⊕ X), respectively, while U ⊕ X is the signal transmitted through the public channel. The second step is to construct a stochastically degraded source (X, Ỹ, Z̃) from (X, Y, Z) and then construct the corresponding CWTC of (X, Ỹ, Z̃) as (U, Y″, Z″), where Y″ ≜ (Ỹ, U ⊕ X) and Z″ ≜ (Z̃, U ⊕ X). The third step is to show that the CWTC described by (U, Y″, Z″) is a stochastically degraded version of the CWTC described by (U, Y′, Z′), i.e., to prove that these two CWTC's have the same marginals. Then, from the same marginal property, we know the two CWTC's have the same secrecy capacity. As a result, the two common random sources (X, Y, Z) and (X, Ỹ, Z̃) have the same CSK, which completes the proof.
Fig. 6. Key steps in the proof of Theorem 8. First, show that the CWTC is also physically degraded if the source is physically degraded. Then show that the CWTC constructed from the stochastically degraded source corresponding to the physically degraded source is a stochastically degraded CWTC with the same marginals as the physically degraded CWTC. Finally, show that the secrecy capacity of the second CWTC is indeed the secret key capacity of the stochastically degraded source. The key and secrecy capacities of the physically degraded source and the corresponding CWTC are denoted by CSK and CS, respectively, while the key (rate) and secrecy capacities of the stochastic source and the corresponding CWTC are denoted by C′SK (R′SK) and C′S, respectively.
3) Example: In the following we give an example to show a usage scenario of Theorem 8. Consider the following fast fading Gaussian Maurer's (satellite) model4 [25]: X = AX S + NX, Y = AY S + NY, and Z = AZ S + NZ, where NX, NY, and NZ are independent AWGNs at Alice, Bob, and Eve, respectively, all with zero mean and unit variance; AX, AY, and AZ, following the CDFs FX, FY, and FZ, respectively, are the i.i.d. fast fading channel gains from the source S to Alice, Bob, and Eve. Thus we can extend Theorem 3 to a sufficient condition for the source to be stochastically degraded, and the example shown in Sec. IV-C can be treated as a practical one for the fast fading Gaussian Maurer's model.
V. CHANNELS WITH MEMORY
In this section, we discuss channels with memory that possess a specific structure, i.e., finite-state channels [26]. In particular, we investigate the relationship between stochastic orders for random processes and the ergodic capacity, with the BC as a representative example. Due to the memory, the information theoretic order of degradedness has to be revisited. Therefore, we first introduce some important properties extended to multiuser channels with memory, such as the same marginal property and degradedness. Then we discuss the usual stochastic order for random processes.
A. Preliminaries
Definition 8 (Finite state broadcast channels (FSBC) [22]). The discrete finite-state broadcast channel is defined by the triplet {X × S, p(y, z, s | x, s0), Y × Z × S}, where X is the input symbol, Y and Z are the output symbols, and S0 and S are the channel states at the end of the previous and the current symbol transmissions, respectively. S, X, Y, and Z are finite sets. The PMF of the FSBC satisfies
p(yi, zi, si | y^{i−1}, z^{i−1}, s^{i−1}, x^i, s0) = p(yi, zi, si | xi, si−1),  (49)
where s0 is the initial channel state.
For FSBC, a single letter expression of the condition for degradedness is in general not possible.
Therefore we introduce the following definitions of physical and stochastic degradedness for FSBC.
Definition 9 (Degradedness for FSBC [22]). A finite state BC is called physically degraded if for every s0 ∈ S and symbol time i its PMF satisfies
p(yi | x^i, y^{i−1}, z^{i−1}, s0) = p(yi | x^i, y^{i−1}, s0),  (50)
p(zi | x^i, y^i, z^{i−1}, s0) = p(zi | y^i, z^{i−1}, s0).  (51)

4 Since the SKG capacity can be achieved by one-time discussion, i.e., without feedback/iteration, we can resort to the quantization scheme shown in [46, Appendix 1] to extend the result in Theorem 8 to continuous alphabets.
The FSBC is called stochastically degraded if there exists a PMF p̃(z|y) such that for every block length n and initial state s0 ∈ S,
p(z^n | x^n, s0) = Σ_{y^n} p(y^n, z^n | x^n, s0) = Σ_{y^n} p(y^n | x^n, s0) Π_{i=1}^{n} p̃(zi | yi).  (52)
Note that Definition 9 specializes easily to degradedness in the memoryless case: p(y, z|x) = p(y|x)p(z|y), i.e., X, Y, and Z form a Markov chain X − Y − Z, for a physically degraded BC, and p(z|x) = Σ_y p(y|x) p̃(z|y) for a stochastically degraded BC.
Remark 10. Two important facts about the channels with memory considered in [22] are: 1) The channel output at the weaker receiver does not contain more information than that of the stronger receiver. 2) When the degraded outputs are given causally, the stronger/better output up to the current time results in the case where the degraded current output is independent of the channel input up to the current time. In addition, the definition of stochastic degradedness in (52), i.e., the P̃_{Z|Y,S0} considered in [22], is a strictly equivalent channel from Y to Z which is memoryless, in contrast to the expression P̃_{Z^n|Y^n,S0}. This is because the authors aim to show the explicit causal operation of the degraded channel. More importantly, with this definition we can identify the degradedness more easily by adopting stochastic ordering to align the random channels, as will be shown in the next section.
In this work, we consider the indecomposable FSBC (IFSBC), where indecomposability is defined as follows [47]: for every ε > 0 there exists an N0(ε) ∈ N such that for all n > N0(ε) we have |p(sn | x^n, s0) − p(sn | x^n, s0′)| ≤ ε, for all sn, x^n, and any two initial states s0 and s0′. This means that the effect of the initial channel state s0 on the state transition probabilities diminishes over time.
To apply the stochastic orders, similar to the aforementioned memoryless channels, we need the same marginal property. In the following, to take the memory effect into account, we need to consider a multi-letter version of the same marginal property.
Corollary 4. The capacity region of an FSBC depends only on the conditional marginal distributions p_{Y^n|X^n,S0} and p_{Z^n|X^n,S0} and not on the joint conditional distribution p_{Y^n,Z^n|X^n,S0}.
This corollary can be easily proved by removing the memoryless property in [27, Theorem 13.9].
B. Sufficient Conditions for an IFSBC to be Degraded
In this subsection, we present our results for the IFSBC. In particular, we investigate the relationship between stochastic orders for random processes and the ergodic capacities of the IFSBC. In brief, we identify whether the fading processes satisfy the usual stochastic order. Again, by invoking the coupling scheme together with the same marginal property, we can form an equivalent channel with the same capacity region in which all fading states are ordered in the same trichotomy order for all channel realizations, which yields the degradedness. The detailed proof is given in the following.
When considering the memory property, the aforementioned stochastic order is not sufficient. Instead, we need to resort to the stochastic order for random processes. We introduce the following sufficient condition for a degraded IFSBC via the usual stochastic order for random processes. To describe the memory effect on the current random variable, we define [X(m) | X(m−1) = x(m−1), ..., X(0) = x(0)] as the random variable X at time m, given the realizations of X prior to m.
Theorem 9. Let {HZ(m)} and {HY(m)} be two discrete-time random processes. If
A = {({HZ(m)}, {HY(m)}) : HZ(0) ≤st HY(0),
[HZ(m) | HZ(m−1) = hZ(m−1), ..., HZ(0) = hZ(0)] ≤st [HY(m) | HY(m−1) = hY(m−1), ..., HY(0) = hY(0)],
whenever hZ(j) ≤ hY(j), j = 0, 1, ..., m−1, m ∈ N},  (53)
then the IFSBC is degraded and the following capacity region of a degraded IFSBC can be achieved:
C = lim_{n→∞} co ∪_{qn ∈ Qn} {(R1, R2) : R1 ≥ 0, R2 ≥ 0, R1 ≤ (1/n) I(X^n; Y^n | U^n, s0′)_{qn}, R2 ≤ (1/n) I(U^n; Z^n | s0″)_{qn}},
where Qn is the set of all joint distributions p(u^n, x^n), and s0′ and s0″ are arbitrary initial states.
The proof is sketched as follows. By the definition of the usual stochastic order, it is clear that (53), namely the strong stochastic order [19, Theorem 6.B.31], results in {HZ(m)} ≤st {HY(m)}; it is therefore a sufficient condition for the usual stochastic order. Note that this comparison is element-wise. Then, from Theorem 3 we know that there exist {HZ′(m)} =st {HZ(m)} and {HY′(m)} =st {HY(m)} such that Pr(HZ′(m) ≤ HY′(m)) = 1, ∀m ∈ N0. On the other hand, note that the relation between Yi and Zi of the equivalent physically degraded channel in (52) is memoryless, i.e., described by Π_{i=1}^{n} p̃(zi|yi). This means that considering zi and yi element-wise is sufficient, which is compatible with the usual stochastic order of random processes. Therefore, as long as the conditions in (53) are fulfilled, there exists Π_{i=1}^{n} p̃(zi|yi) as in (52); it is then clear that {HZ(m)} ≤st {HY(m)} implies (52). Combining this with the capacity region results in [22] completes the proof.
Remark 11. For WTC's with finite state and statistical CSIT, we can also apply the previous discussion to identify degradedness. From [48] it is clear that identifying degradedness can simplify the functional optimization, i.e., there is no need to optimize the channel prefixing p(x^n|u^n) if the WTC is degraded. Based on the multi-letter expression of the secrecy capacity with memory in [49], we know that under statistical CSIT, if the conditions in Theorem 9 are valid, then the following secrecy capacity of a degraded WTC with finite state can be achieved:
CS = lim_{n→∞} max_{p(x^n): Pr((1/n)cn(X^n) ≤ P) = 1} (1/n){I(X^n; Y^n | S0 = s0′) − I(X^n; Z^n | S0 = s0″)},  (54)
where {cn}_{n≥1} is a sequence of cost functions, cn : X^n → R+, such that any transmit sequence X^n ∈ X^n satisfies (1/n)cn(X^n) ≤ P.
In the following, we specialize Theorem 9 to time-homogeneous5 finite-state Markov channels. There are two main reasons to consider Markov fading channels for the memory case. First, it is a useful model for mobile wireless channels when the underlying channel state process is time-invariant and indecomposable [50], [51], [52]. More detailed discussions can be found in [53]. Second, the structure of a Markov chain simplifies the sufficient condition of the usual stochastic order for random processes, which increases tractability and provides insight into the analysis of fading channels with memory. The latter point can be observed in the derivation shown in the following.
Remark 12. According to [53], a first-order Markov chain can model slow fading relatively accurately. For intermediate fading rates, higher orders, from second- to seventh-order Markov chains, should be used. For cases in which the channel gains change with each code symbol, no memory is used. Independent of the fading speed, our framework constructs an equivalent degraded channel for each code symbol.

5 The transition probability matrix does not change over time.
Consider a k-th order N-alphabet time-homogeneous Markov process with a transition probability matrix P, which is fixed over time. The indices of each row and column of P represent a vector of the current states s = (sj, sj+1, ..., sj+k−1) and the next states s′ = (sj+1, sj+2, ..., sj+k), respectively, where j ∈ N. Denote the i-th rows of the transition matrices P and Q by p_i and q_i, respectively. We call s and s′ the super states. The transition probability from the super state s to s′ is expressed as P_{si+1|s} = P_{si+1, s}/P_s, i = j, ..., j + k − 1. To simplify the expressions, we define an N-ary expansion g : L ↦ s mapping an integer L ∈ {1, ..., N^k}, which indexes the rows of P, to the vector s. We now derive the result for Markov fading channels in the following.
Corollary 5. Consider two Markov fading channels, both of k-th order, namely {HZ(m)} and {HY(m)}, with transition matrices P and Q, respectively. A two-receiver finite state Markov fading BC is degraded if
A = {({HY(m)}, {HZ(m)}) : HZ(0) ≤st HY(0),  (55)
[HZ(m) | HZ(m−1) = hZ(m−1), ..., HZ(0) = hZ(0)] ≤st [HY(m) | HY(m−1) = hY(m−1), ..., HY(0) = hY(0)], whenever hZ(j) ≤ hY(j), j = 0, 1, ..., m−1, if 1 ≤ m ≤ k, and  (56)
F̄_{p_l}(n) ≤ F̄_{q_j}(n), ∀n, ∀(l, j) ∈ {(l, j) : g(l) ≤ g(j)}, if m > k},  (57)
where F̄_{p_i}(n) and F̄_{q_i}(n) are the CCDFs of the i-th rows of the two receivers' transition matrices, respectively, n is the index of the channel states, and ≤ compares two vectors element-wise.
Please refer to Appendix VI for the proof. Note that conditions (55) and (56) should be verified from the initial conditions, which must be given in addition and cannot be derived from the transition matrices P and Q.
Corollary 5 can be further specialized to first-order Markov fading channels as follows.
Corollary 6. Consider two first-order Markov fading channels {HZ(m)} and {HY(m)} with fading states arranged in increasing order of the state values. A two-receiver finite state Markov fading broadcast channel is degraded if
HZ(0) ≤st HY(0), and  (58)
F̄_{p_l}(n) ≤ F̄_{q_j}(n), ∀n, ∀l, j s.t. l ≤ j.  (59)
Proof: Following the same steps as in the proof of Corollary 5, the condition hZ(m) ≤ hY(m) can simply be replaced by (59) due to the ordered super states.
C. Examples
In this subsection, we illustrate two examples to show an application of our results to fading channels
with memory.
Example 1 of channels with memory (a 1st-order Markov chain with 3 states): Consider two three-state first-order Markov chains. Given any HZ(0) ≤st HY(0), the following transition probability matrices of HZ and HY, respectively, satisfy Corollary 6 and form a degraded channel:

P = [ 1/2  1/4  1/4 ]      Q = [ 1/4  3/8  3/8 ]
    [ 3/4  1/8  1/8 ],         [ 1/8  2/8  5/8 ].   (60)
    [ 5/8  1/4  1/8 ]          [ 1/2  1/8  3/8 ]
To identify the degradedness, we resort to Corollary 6 and verify the condition via the corresponding CCDF matrices:

F̄P = [ 1/2  1/4  0 ]      F̄Q = [ 3/4  3/8  0 ]
     [ 1/4  1/8  0 ],          [ 7/8  5/8  0 ].   (61)
     [ 3/8  1/8  0 ]           [ 1/2  3/8  0 ]
After comparing the pairs of CCDFs (F̄p1, F̄q1), (F̄p1, F̄q2), (F̄p1, F̄q3), (F̄p2, F̄q2), (F̄p2, F̄q3), (F̄p3, F̄q3), it is clear that (59) is valid. Therefore, it is a degraded IFSBC.
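This verification can be automated. The following sketch (our addition) rebuilds the CCDF matrices in (61) from P and Q and tests all required row pairs:

import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [3/4, 1/8, 1/8],
              [5/8, 1/4, 1/8]])
Q = np.array([[1/4, 3/8, 3/8],
              [1/8, 2/8, 5/8],
              [1/2, 1/8, 3/8]])

def row_ccdf(M):
    # F̄(n) = P(next state > n) per row: tail sums, shifted by one state.
    tail = np.cumsum(M[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([tail[:, 1:], np.zeros((M.shape[0], 1))])

Fp, Fq = row_ccdf(P), row_ccdf(Q)
print(all(np.all(Fp[l] <= Fq[j]) for l in range(3) for j in range(l, 3)))  # True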
Example 2 of channels with memory (a 2nd-order Markov chain with 2 states): Consider two binary-valued second-order Markov chains. The general transition matrix can be expressed as [54]

         00           01           10           11
00   P000/P00     P001/P00         0            0
01       0            0        P010/P01     P011/P01
10   P100/P10     P101/P10         0            0        (62)
11       0            0        P110/P11     P111/P11

where the row and column indices show the current and next super states, respectively.
Consider the following transition matrices:

P = [ 1/2  1/2   0    0  ]      Q = [ 1/3  2/3   0    0  ]
    [  0    0   1/3  2/3 ],         [  0    0   1/4  3/4 ].   (63)
    [ 1/4  3/4   0    0  ]          [ 1/5  4/5   0    0  ]
    [  0    0   1/5  4/5 ]          [  0    0   1/6  5/6 ]

The corresponding CCDF matrices can be easily derived as

F̄P = [ 1/2   0    0   0 ]      F̄Q = [ 2/3   0    0   0 ]
     [  1    1   2/3  0 ],          [  1    1   3/4  0 ].   (64)
     [ 3/4   0    0   0 ]           [ 4/5   0    0   0 ]
     [  1    1   4/5  0 ]           [  1    1   5/6  0 ]
By comparing the pairs of CCDFs (F̄p1, F̄q1), (F̄p1, F̄q2), (F̄p1, F̄q3), (F̄p1, F̄q4), (F̄p2, F̄q2), (F̄p2, F̄q4), (F̄p3, F̄q3), (F̄p3, F̄q4), and (F̄p4, F̄q4) — the super states g(2) = (0, 1) and g(3) = (1, 0) are not element-wise comparable, so the pair (F̄p2, F̄q3) is not required by (57) — we can verify (57). In addition, given any initial conditions such that (55) and
[HZ(2) | HZ(1) = hZ(1), HZ(0) = hZ(0)] ≤st [HY(2) | HY(1) = hY(1), HY(0) = hY(0)],  (65)
[HZ(1) | HZ(0) = hZ(0)] ≤st [HY(1) | HY(0) = hY(0)],  (66)
are all fulfilled, we can identify that it is a degraded IFSBC.
VI. CONCLUSION
In this paper, we investigated the ergodic capacity of several fast fading Gaussian memoryless multiuser channels when only the statistics of the channel state information are known at the transmitter. To achieve this goal, we resorted to classifying the random channels through their probability distributions, by which we are able to attain the capacity results. In particular, we derived sufficient conditions to attain information theoretic channel orders such as degradedness and strong/very strong interference by combining the usual stochastic order with the same marginal property, so that the capacity regions can be characterized simply; the covered models include Gaussian interference channels, Gaussian broadcast channels, and Gaussian wiretap channels/secret key generation. The extension of the framework to channels with finite-state memory was also considered, with the Markov fading channel discussed as a special case. Practical examples illustrated the successful application of the derived results.
APPENDIX I. PROOF OF REMARK 5
We verify (30), (31), and (32), respectively, as follows. It can be easily seen that
min{F̄H1(h̃1), 0} = min{0, F̄H2(h̃2)} = 0,  (67)
so (30) is fulfilled. As previously mentioned, this property corresponds to the same marginal property. We can also easily see that
min{F̄H1(h̃1), 1} = F̄H1(h̃1) and min{1, F̄H2(h̃2)} = F̄H2(h̃2),  (68)
so (31) is fulfilled. To check (32), i.e., the 2-increasing property, we first define uj ≜ F̄H1(h̃1,j) and vj ≜ F̄H2(h̃2,j), j = 1, 2, where the subscript j indicates different realizations of H1 and H2. The condition {u1 ≤ u2 and v1 ≤ v2} comprises the following cases: 1) v1 ≤ v2 ≤ u1 ≤ u2; 2) v1 ≤ u1 ≤ v2 ≤ u2; 3) v1 ≤ u1 ≤ u2 ≤ v2; 4) u1 ≤ v1 ≤ v2 ≤ u2; 5) u1 ≤ v1 ≤ u2 ≤ v2; 6) u1 ≤ u2 ≤ v1 ≤ v2. We can further combine these cases into the following four classes:
Class 1 (Cases 1 and 2): u2 ≥ v2, u1 ≥ v1: the LHS of (32) can be expressed as
v2 − v1 − min(u1, v2) + v1 = v2 − min(u1, v2) ≥ 0.  (69)
Class 2 (Case 4): u2 ≥ v2, u1 ≤ v1: the LHS of (32) can be expressed as
v2 − v1 − u1 + u1 = v2 − v1 ≥ 0.  (70)
Class 3 (Case 3): u2 ≤ v2, u1 ≥ v1: the LHS of (32) can be expressed as
u2 − v1 − u1 + v1 = u2 − u1 ≥ 0.  (71)
Class 4 (Cases 5 and 6): u2 ≤ v2, u1 ≤ v1: the LHS of (32) can be expressed as
u2 − min(u2, v1) − u1 + u1 = u2 − min(u2, v1) ≥ 0.  (72)
Therefore, from (69), (70), (71), and (72), the 2-increasing property is fulfilled, which completes the proof.
A PPENDIX II. P ROOF OF T HEOREM 5
We first derive the ergodic capacity region of a strong GIC with statistical CSIT, which is not reported
in literature. Then we derive the sufficient condition to achieve this ergodic capacity region. To commence
the first part, we focus on the uniformly strong IC (US IC) [11], in which every fading state results in
strong interference, i.e., all realizations of channel gains over time satisfy h21 ≥ h11 and h12 ≥ h22 . Hinging
upon the US IC, we can smoothly connect the stochastic orders with the capacity region. To prove the
ergodic capacity region, we mirror the proof steps in [11, Theorem 3] with proper modifications to fit our
assumptions. For the achievable scheme, it is clear that allowing each receiver to decode both messages
from the two transmitters provides an inner bound, i.e., (37), of the capacity region.
Now we prove that (38) is an outer bound of the capacity region of the considered model, where a
genie bestows the information of the interference to only one of the receiver, e.g., the second receiver,
which is equivalent to setting h21 = 0. By this genie aided channel, we aim to prove6
R1 + R2 ≤ E[C(H11 P1 + H12 P2)].  (73)
⁶The single-user capacity outer bounds on R1 and R2 can be easily derived. Therefore, here we only focus on the sum capacity outer bound.
From Fano’s inequality we know that the sum rate must satisfy
n(R1 + R2 − ε)
(a)
≤ I(X1^n; Y1^n | H̃^n) + I(X2^n; Y2^n | X1^n, H̃^n)
(b)
= E[I(X1^n; h̃11^n X1^n + h̃12^n X2^n + N1^n | H̃^n = h̃^n) + I(X2^n; h̃22^n X2^n + N2^n | H̃^n = h̃^n)]
= E[h(h̃11^n X1^n + h̃12^n X2^n + N1^n | H̃^n = h̃^n) − h(h̃12^n X2^n + N1^n | H̃^n = h̃^n) + h(h̃22^n X2^n + N2^n | H̃^n = h̃^n) − h(N2^n)]
(c)
≤ E[Σ_{k=1}^n (h(h̃11,k X1,k + h̃12,k X2,k + N1,k | H̃ = h̃) − h(N2,k)) + h(h̃22^n X2^n + N2^n | H̃^n = h̃^n) − h(h̃12^n X2^n + N1^n | H̃^n = h̃^n)],
(74)
where in (a) the conditioning in the second term is due to the genie, and we define H̃^n ≜ [H̃11^n, H̃12^n]; in (b) the expectation is over H̃^n (to simplify the notation, we omit the subscript of H̃^n in the expectation); (c) follows by applying the chain rule of entropy, the fact that conditioning reduces entropy, and the i.i.d. property to the first and fourth terms on the RHS of the second equality, respectively. Since the last term on the RHS of (74) has a sign change and is not as simple as the term h(N2^n), we treat h(H̃22^n X2^n + N2^n) − h(H̃12^n X2^n + N1^n) together for the ease of single-letterization. To proceed, we exploit the property |h̃12| ≥ |h̃22|⁷ from the definition of the US IC as
E[h(H̃22^n X2^n + N2^n) − h(H̃12^n X2^n + N1^n)]
(a)
= E[h(X2^n + Ñ2^n) − h(X2^n + Ñ1^n) + 2 log(|H̃22^n|/|H̃12^n|)]
(b)
= E[h(X2^n + Ñ1^n + N^n) − h(X2^n + Ñ1^n) + 2 log(|H̃22^n|/|H̃12^n|)]
= E[h(N^n + X2^n + Ñ1^n) − h(N^n + X2^n + Ñ1^n | N^n) + 2 log(|H̃22^n|/|H̃12^n|)]
(c)
= E[I(N^n; N^n + X2^n + Ñ1^n) + 2 log(|H̃22^n|/|H̃12^n|)]
(d)
≤ E[I(N^n; N^n + Ñ1^n) + 2 log(|H̃22^n|/|H̃12^n|)]
(e)
≤ Σ_{k=1}^n E[h(N2,k) − h(N1,k)],
(75)
where in (a), Ñ2,k ∼ CN(0, |H̃22,k|^{−2}) and Ñ1,k ∼ CN(0, |H̃12,k|^{−2}); in (b) we define Nk ∼ CN(0, |H̃22,k|^{−2} − |H̃12,k|^{−2}), while N^n = [N1, N2, · · · , Nn]^T and N^n is independent of Ñ^n; in (c), we treat N^n as the transmitted signal and X2^n + Ñ1^n as an equivalent noise at the receiver; in (d) we apply the data processing inequality with the Markov chain N^n − Ỹ2^n − Ỹ1^n, where Ỹ1^n ≜ X2^n + Ñ1^n + N^n and Ỹ2^n ≜ Ñ1^n + N^n; in (e) we use the fact that N1^n and N2^n are each i.i.d., and also the assumption |h̃12| ≥ |h̃22|. Note that even though (48) and (74) both involve differences of differential entropies, and both are used to solve for the optimal input distributions, in (48), due to the asymmetrically conditioned U and X in the differential entropies, (48) cannot be reformed as (75).
After substituting (75) into (74), we obtain
R1 + R2 ≤ (1/n) Σ_{k=1}^n I(X1k, X2k; Y1k | H̃) ≜ I(X1, X2; Y1 | H̃, Q),  (76)
where Q is the time sharing random variable. To proceed, we apply the result in [55], wherein the
capacity region of a MAC is derived for cases in which only the receiver can perfectly track the channel
information but the transmitter has no such information. For such channels, the optimal input distribution
is proved to be Gaussian. Note that since the transmitter has no instantaneous CSIT, it cannot do power
allocation across time. As a result, full powers P1 and P2 are always used. Then we obtain (73). Likewise,
when the genie provides the interference only to the first receiver, we can get
R1 + R2 ≤ E [C (H21 P1 + H22 P2 )] .
(77)
Finally, after comparing the outer bounds (73) and (77) to (37) and (38), we can observe that decoding both messages at each receiver achieves the capacity region outer bound under the assumptions on the knowledge of channel gains at each node.
In the ensuing part, we derive the sufficient condition to achieve the capacity region of the GIC with strong interference. In particular, we derive the condition to achieve the US IC. From Definition 2 we can derive
I(X1; Y1 | X2, H11 = h11, H12 = h12) = log(1 + h11 P1),  (78)
I(X1; Y2 | X2, H21 = h21, H22 = h22) = log(1 + h21 P1).  (79)
⁷Note that without this property, we may not be able to rearrange the outer bound of the sum rate as (76). This seemingly very strict condition can be relaxed as long as the channel distributions follow proper stochastic orders, which will be explained in the latter part of this proof.
If we impose the following constraint
h21 ≥ h11 and h12 ≥ h22 ,
(80)
for all realizations, it is clear that (2) is fulfilled. Note that this is precisely the uniformly strong IC [11], by which
we derive (73) and (77). To connect (80) to our considered model, we apply Theorem 3. Recall that we
assume that all random channels, signals, and noises are jointly independent. From (34) and (35), we can
find that
fY1 |X1 ,X2 = fH̃11 ∗ fH̃12 ∗ fN1 ,
(81)
fY2 |X1 ,X2 = fH̃21 ∗ fH̃22 ∗ fN2 ,
(82)
where the convolution ∗ is due to the mutual independence assumption, while for a given x_j, H̃kj ≜ x_j Hkj. From (81) and (82) we know that as long as we can find equivalent channels (H̃′11, H̃′12) and (H̃′21, H̃′22) such that f_{H̃′11} ∗ f_{H̃′12} = f_{H̃11} ∗ f_{H̃12} and f_{H̃′22} ∗ f_{H̃′21} = f_{H̃22} ∗ f_{H̃21}, then the capacity regions of the two interference channels are identical. To achieve this goal, we resort to finding equivalent {H′kj} which are jointly independent such that⁸
f_{H′kj} = f_{Hkj}.  (83)
A constructive way to generate the equivalent {H′kj} is by (18), i.e.,
H′kj = F^{−1}_{Hkj}(U), k, j ∈ {1, 2}.  (84)
Then we know that H′kj has the same CDF as that of Hkj, namely, F_{Hkj}, which satisfies the condition (83).
From Theorem 2, we also know that P(H′21 ≥ H′11) = 1 and P(H′12 ≥ H′22) = 1 if H21 ≥st H11 and H12 ≥st H22, respectively, which satisfies (80) a.s. and therefore completes the proof.
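The construction in (84) is easy to simulate. The following sketch (ours, not from the paper; it assumes numpy and uses hypothetical exponential channel gains purely for illustration) draws a single uniform U and maps it through two inverse CDFs, confirming that the marginals are preserved while the coupled realizations are ordered pointwise as in (80):

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical example: exponential channel gains with E[H21] = 2 >= E[H11] = 1,
# which implies H21 >=_st H11. Inverse CDF of Exp(mean): -mean * log(1 - u).
def inv_cdf_exp(u, mean):
    return -mean * np.log1p(-u)

U = rng.uniform(size=1_000_000)      # one common uniform variable
H11p = inv_cdf_exp(U, mean=1.0)      # H'_11, same law as H11, cf. (83)
H21p = inv_cdf_exp(U, mean=2.0)      # H'_21, same law as H21, cf. (83)

# Marginals are preserved while the common-U coupling aligns realizations, cf. (80).
assert np.all(H21p >= H11p)
print("P(H'_21 >= H'_11) = 1 under the common-U coupling")
```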
⁸Finding individually equivalent channels as in (83), rather than finding compound distributions such that f_{H̃′11} ∗ f_{H̃′12} = f_{H̃11} ∗ f_{H̃12}, may make the sufficient condition in the theorem statement stricter. But this way is more tractable and therefore we adopt it.
APPENDIX III. PROOF OF THEOREM 6
Similar to Appendix II, we first derive the ergodic capacity region of the GIC with very strong interference with statistical CSIT, and then we derive the sufficient condition to achieve the derived capacity region. For the first part, we could deduce the proof following the same steps as in Appendix II with proper modifications. But here we proceed in an easier way, since for the capacity region of a discrete memoryless IC with very strong interference there is no sum rate constraint. We can solve for the optimal input distributions of the GIC with statistical CSIT by considering
arg max_{p_X1, p_X2: E[|X1|²]≤P1, E[|X2|²]≤P2} I(X1; Y1 | X2, H11, H12) + μ I(X2; Y2 | X1, H22, H21),  (85)
where μ ∈ R+. Note that (85) can be further expressed as
arg max_{p_X1, p_X2: E[|X1|²]≤P1, E[|X2|²]≤P2} I(X1; H11 X1 + Z1 | H11) + μ I(X2; H22 X2 + Z2 | H22).  (86)
It is clear that for each μ, (86) can be maximized by Gaussian inputs, i.e., X1 ∼ CN(0, P1) and X2 ∼ CN(0, P2). Then the capacity region can be delineated by (41).
In the second part, from (4) we can derive
I(X1; Y2 | H21 = h21, H22 = h22) = log(1 + h21 P1 / (1 + h22 P2)).  (87)
After comparing (87) and (78), we can find the condition
h11 ≤ h21 / (1 + P2 h22).  (88)
Similarly, from (5) we can derive
h22 ≤ h12 / (1 + P1 h11).  (89)
By treating H21/(1 + P2 H22) and H12/(1 + P1 H11) as new random variables, respectively, and following the same steps as in the proof of Theorem 5, we can extend (88) and (89) to the stochastic case as
H21/(1 + P2 H22) ≥st H11,  (90)
H12/(1 + P1 H11) ≥st H22.  (91)
Now we need to guarantee that the marginal distributions are intact after applying the construction by Theorem 3. This can be simply checked by the fact that, because the equivalent channels H′21/(1 + P2 H′22) have the same distribution as that of H21/(1 + P2 H22) after applying Theorem 3, we can exploit the transform (84) to construct H′21 and H′22 whose distributions are the same as those of H21 and H22, respectively. Therefore, the marginal distributions in (81) and (82) are not changed, which completes the proof.
APPENDIX IV. PROOF OF THEOREM 7
The achievability of (47) can be easily derived by substituting U = X ∼ N(0, PT ) into the secrecy
capacity in [42, (8)]. In the following, we focus on the derivation of the outer bound of the secrecy
capacity. In particular, we adopt the Sato-type outer bound combined with the coupling scheme to show
that Gaussian input is optimal for the outer bound and also show the outer bound matches the lower
bound. In the following, we first verify the validity of using the coupling scheme.
To construct an equivalent wiretap channel, we require the two channels to have the same: 1) error
probability P(W 6= Ŵ ) and 2) equivocation rate limn→∞ n1 h(W |Z n ), where W and Ŵ are the transmitted
and detected messages at Alice and Bob, respectively. The first requirement is valid because the coupling
scheme does not change the channel distribution. Checking the second requirement is more involved. The
reason is that we have asymmetric knowledge of the CSI at Bob and Eve. In general, to design for the
worst case we assume that Eve has more knowledge of the CSI than Bob. As mentioned previously, a
common assumption is that Bob knows perfectly the realization of his own channel H but Eve knows
perfectly both H and her own channel G. The equivocation rate then becomes
lim_{n→∞} (1/n) h(W | Z^n, H^n, G^n),  (92)
the calculation of which depends on p(z^n, h^n, g^n | x^n). If we directly use the coupling scheme, we will end up with the equivocation rate h(W | H̃^n, G̃^n, Z̃^n), whose calculation relies on p(z̃^n, h̃^n, g̃^n | x^n), where Z̃ is the channel output at Eve of the equivalent WTC when the coupling scheme is used. Note that
p(z^n, h^n, g^n | x^n) may not be identical to p(z̃^n, h̃^n, g̃^n | x^n), because the same marginal property does not guarantee the same joint distribution. More specifically, the correlation between H^n and G^n can be arbitrary. In contrast, the correlation between H̃^n and G̃^n cannot be arbitrary, i.e., it is fixed by the marginal CDFs F_H and F_G and also the property of U. To avoid this inconsistency, we can consider a new wiretap channel where Eve only knows g^n, with equivocation rate h(W | G^n, Z^n) calculated by p(z^n, g^n | x^n). Note that the
secrecy capacity of the new GWTC is no less than the original one, since Eve here knows less CSI than
in the original setting. After applying the coupling scheme, we have the equivocation rate as h(W |G̃n , Z̃ n ),
whose calculation relies on p(z̃n , g̃n |xn ). A sufficient condition to equal these two equivocation rates is
that p(z, g|x) = p(z̃, g̃|x), which can be verified as follows:
p(z, g|x) = p(z, x|g) p(g) / p(x)
(a)
= p(z, x|g) p(g̃) / p(x)
(b)
= p(z̃, x|g̃) p(g̃) / p(x)
= p(z̃, g̃|x),  (93)
where (a) is due to the coupling, i.e., p(g) = p(g̃); (b) comes from the fact that g̃ and g have the same alphabet and distribution, while z is replaced by z̃ due to the equivalent channel z̃ = √g̃ · x + n_z. Then we know that the new WTC in which Eve only knows g^n is equivalent before and after using the coupling scheme.
To verify the optimality of Gaussian input of the outer bound, we invoke the Sato-type outer bound by
letting Bob know what Eve knows, i.e., Bob receives Ỹ and Z̃. Then we can express the outer bound as
max_{P_U, P_{X|U}} I(U; Ỹ, Z̃, H̃, G̃) − I(U; Z̃, G̃)
(a)
= max_{P_X} I(X; Ỹ, Z̃, H̃, G̃) − I(X; Z̃, G̃)
(b)
= max_{P_X} I(X; Ỹ, Z̃ | H̃, G̃) − I(X; Z̃ | G̃)
(c)
= max_{P_X} I(X; Ỹ, Z̃ | H̃, G̃) − I(X; Z̃ | H̃, G̃)
= max_{P_X} I(X; Ỹ | Z̃, H̃, G̃),  (94)
where (a) is because the new WTC is a degraded one; (b) is due to the assumption of statistical CSIT; (c) comes from the fact that knowing G̃ implies that H̃ is also known. In particular, since Bob knows both F_H and F_G in the new WTC, once he knows g̃ = F_G^{−1}(u), he can simply recover u and then derive h̃ = F_H^{−1}(u).
In addition, because
arg max_{P_X} I(X; Ỹ | Z̃, H̃, G̃) = arg max_{P_X} h(Ỹ | Z̃, H̃, G̃),  (95)
we can extend [56, Lemma 1] to show that
h(Ỹ |Z̃, H̃, G̃) = h(Ỹ − αZ̃|Z̃, H̃, G̃)
≤ h(Ỹ − αZ̃|H̃, G̃)
= EH̃, G̃ [h(Ỹ − αZ̃|H̃ = h̃, G̃ = g̃)],
where α = √h̃ √g̃/(1 + g̃) is the linear MMSE estimator coefficient of Ỹ from Z̃, and the inequality is due to the fact that conditioning only reduces the differential entropy. The equality is achieved by Gaussian X for the outer bound.
We match the outer and lower bounds as follows. Due to the coupling scheme, we have the Markov
chain X − Ỹ − Z̃ for each channel use. Hence we can further express the RHS of (b) in (94) as
max_{P_X} I(X; Ỹ, Z̃ | H̃, G̃) − I(X; Z̃ | G̃) = max_{P_X} I(X; Ỹ | H̃, G̃) − I(X; Z̃ | G̃).  (96)
After substituting the Gaussian input into (96), followed by full power usage (since power allocation cannot be done under statistical CSIT), we get (47), which completes the proof.
APPENDIX V. PROOF OF THEOREM 8
To prove that a stochastically degraded random source (X, Ỹ, Z̃) implies that the corresponding conceptual WTC (CWTC) [45] is also stochastically degraded relative to the CWTC constructed from the corresponding physically degraded source (X, Y, Z), we start by constructing the CWTC of the random source (X, Y, Z), where the equivalently received signals at Bob and Eve are Y′ ≜ (Y, U ⊕ X) and Z′ ≜ (Z, U ⊕ X), respectively. If X − Y − Z, then U − Y′ − Z′, i.e., the CWTC is also a physically degraded one, which can be shown as follows:
I(U; Z′ | Y′)
(a)
= H(Z, U ⊕ X | Y, U ⊕ X) − H(Z, U ⊕ X | Y, U ⊕ X, U)
(b)
= H(Z | Y) − H(Z | Y, U ⊕ X, U)
(c)
= H(Z | Y) − H(Z | Y, X, U)
(d)
= H(Z | Y) − H(Z | Y, X)
(e)
= 0,  (97)
where (a) is by definition of Y′ and Z′; (b) is by the crypto lemma [57] and the fact that U is selected to be independent of Y and Z; (c) is from the fact that given U, we can recover X from U ⊕ X; (d) is by the same reason as (b); (e) is due to X − Y − Z and the definition of conditional mutual information.
Now we consider a stochastically degraded source of common randomness (X, Ỹ, Z̃) fulfilling P_{Ỹ|X} = P_{Y|X} and P_{Z̃|X} = P_{Z|X}. Similar to the first step, we construct a CWTC from (X, Ỹ, Z̃), namely, {(U, Y″, Z″), P_{U,Y″,Z″} = P_U P_{Y″,Z″|U}}, where Y″ ≜ (Ỹ, U ⊕ X) and Z″ ≜ (Z̃, U ⊕ X) are the equivalent channel outputs at Bob and Eve, respectively. To prove that the two CWTCs P_{Y′,Z′|U} and P_{Y″,Z″|U} are equivalent, inherited from the WTC, we can invoke the same marginal property in Corollary 1 to prove P_{Y″|U} = P_{Y′|U} and P_{Z″|U} = P_{Z′|U}, which is shown in the following. By definition of Y″ and Y′, we know that P_{Y″|U=u} = P_{Ỹ, u⊕X} ≜ P_{Ỹ, Xu} and P_{Y′|U=u} = P_{Y, u⊕X} ≜ P_{Y, Xu}, respectively, where we define Xu ≜ u ⊕ X. Note that due to the closedness of ⊕ on X, the distribution of Xu is just a left circular shift of that of X by u. Instead of directly proving P_{Ỹ, Xu} = P_{Y, Xu}, we can equivalently prove P_{Ỹ|Xu} = P_{Y|Xu}, ∀u, as follows:
PỸ |Xu =x = PỸ |X=x = PY |X=x = PY |Xu =x ,
(98)
where the 1st and 3rd equalities follow from the definition of Xu, and the 2nd equality is due to the assumption of (X, Ỹ, Z̃) forming a stochastically degraded source from the physically degraded one (X, Y, Z). Therefore, P_{Y″|U} = P_{Y′|U}. Similarly, we can derive P_{Z″|U} = P_{Z′|U}. Hence, we prove that the CWTC of a stochastically degraded source is also a degraded CWTC and the two CWTCs have the same secrecy capacity.
Note that due to X − Y − Z, we can derive the key capacity C_SK from the corresponding CWTC, and not just an achievable key rate. For the source (X, Ỹ, Z̃), up to this point we may only claim that the achievable secret key rate, but not necessarily the secret key capacity C′_SK, is the same as C_SK. That is because the CWTC is in general only an achievable scheme to derive the secret key rate. However, due to the fact that stochastic degradedness is more general than physical degradedness, i.e., less stringent in characterizing the relation between X, Ỹ, and Z̃, the former cannot result in a larger secret key capacity than the latter. Therefore, we attain C′_SK = C_SK = I(X; Y) − I(X; Z), which completes the proof.
APPENDIX VI. PROOF OF COROLLARY 5
To simplify the expression, we first define
H̃_Z(h_Z(m)) =st [H_Z(m) | H_Z(m−1) = h_Z(m−1), · · · , H_Z(m−k) = h_Z(m−k)]
as the channel state of the m-th sample, which is a random variable given the k consecutive past channel states h_Z(m) = [h_Z(m−1), · · · , h_Z(m−k)]. We define H̃_Y(h_Y(m)) similarly. Due to the property of the k-th order Markov chain, we can express (53) as
H̃_Z(h_Z(m)) ≤st H̃_Y(h_Y(m)), ∀m ∈ N,  (99)
with h_Z(k) ≤ h_Y(k) for all k < m. In addition, from Theorem 9, we know that from H_Z(0) ≤st H_Y(0) and (99), whenever h_Z(k) ≤ h_Y(k) are valid, it follows that {H_Z(m)} ≤st {H_Y(m)}, m ∈ N. Note that the inequality operates element-wise.
Based on the given transition matrices P and Q, we can further simplify the constraint (99) for the case m > k. It is clear that the j-th entries of p_i and q_i are the transition probabilities from the i-th super state to the j-th super state of the Markov processes {H_Z(m)} and {H_Y(m)}, respectively. Given p_i and q_i, the i-th rows of P and Q, respectively, i = 1, 2, · · · , N^k, we can form the corresponding CCDF matrices as F̄_P = [F̄_{p_1}^T, F̄_{p_2}^T, · · · , F̄_{p_{N^k}}^T]^T and F̄_Q = [F̄_{q_1}^T, F̄_{q_2}^T, · · · , F̄_{q_{N^k}}^T]^T, where F̄_{p_j} and F̄_{q_j} are the CCDF vectors derived from p_j and q_j, respectively. From Definition 4, we can equivalently express (99) for m > k by F̄_{p_l}(n) ≤ F̄_{q_j}(n), ∀n, with the constraint h_Z(k) ≤ h_Y(k), where we need to properly choose (l, j) according to (l, j) = {l, j | g(l) ≤ g(j)}, where g is the N-ary expansion of L, defined as g: L ↦ s ∈ {1, · · · , N}^k, L ∈ {1, · · · , N^k}. Then we attain (56) and (57). Combining with (55), we obtain the sufficient conditions to
attain {H_Z(m)} ≤st {H_Y(m)}, or equivalently, there exist two random processes {H′_Z(m)} and {H′_Y(m)} such that {H′_Z(m)} =st {H_Z(m)}, {H′_Y(m)} =st {H_Y(m)}, and Pr(H′_Z(m) ≤ H′_Y(m), m ∈ N0) = 1, which implies the degradedness and completes the proof.
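The rowwise CCDF dominance check described above is mechanical; the following Python sketch (ours, with hypothetical 2-state transition matrices, i.e., N = 2 and k = 1, so that super states coincide with states and g is the identity) illustrates it:

```python
import numpy as np

P = np.array([[0.7, 0.3],   # rows of P: transition law of the degraded process {H_Z(m)}
              [0.4, 0.6]])
Q = np.array([[0.5, 0.5],   # rows of Q: transition law of the stronger process {H_Y(m)}
              [0.2, 0.8]])

def ccdf(row):
    # F̄(n) = P(state > n): complementary CDF of one transition row.
    return 1.0 - np.cumsum(row)

def dominates(P, Q):
    # Check F̄_{p_l} <= F̄_{q_j} entrywise for all admissible pairs;
    # with k = 1, the condition g(l) <= g(j) reduces to l <= j.
    for l in range(P.shape[0]):
        for j in range(l, Q.shape[0]):
            if not np.all(ccdf(P[l]) <= ccdf(Q[j]) + 1e-12):
                return False
    return True

print("sufficient condition holds:", dominates(P, Q))
```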
REFERENCES
[1] P. P. Bergmans, “Random coding theorem for broadcast channels with degraded components,” IEEE Trans. Inf. Theory, vol. 19, no. 2, pp. 197–207, 1973.
[2] R. G. Gallager, “Capacity and coding for degraded broadcast channels,” Probl. Inf. Transm, vol. 10, no. 3, pp. 3–14, 1974.
[3] A. Khisti and G. W. Wornell, “Secure transmission with multiple antennas-II: The MIMOME wiretap channel,” IEEE Trans. Inf. Theory,
vol. 56, no. 11, pp. 5515–5532, Nov 2010.
[4] F. Oggier and B. Hassibi, “The secrecy capacity of the MIMO wiretap channel,” IEEE Trans. Inf. Theory, vol. 57, no. 8, Aug. 2011.
[5] T. Liu and S. Shamai (Shitz), “A note on the secrecy capacity of the multiple-antenna wiretap channel,” IEEE Trans. Inf. Theory,
vol. 55, no. 6, pp. 2547–2553, Jun. 2009.
[6] V. S. Annapureddy and V. V. Veeravalli, “Gaussian interference networks: Sum capacity in the low-interference regime and new outer
bounds on the capacity region,” IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3032–3050, July 2009.
[7] H. Sato, “The capacity of the Gaussian interference channel under strong interference,” IEEE Trans. Inf. Theory, vol. 27, no. 6, pp.
786–788, Nov. 1981.
[8] A. B. Carleial, “A case where interference does not reduce capacity,” IEEE Trans. Inf. Theory, vol. 21, no. 5, pp. 569–570, Sep. 1975.
[9] T. S. Han and K. Kobayashi, “A new achievable rate region for the interference channel,” IEEE Trans. Inf. Theory, vol. 27, no. 1, pp.
49–60, Jan. 1981.
[10] Y. Liang, V. Poor, and S. Shamai (Shitz), “Secure communication over fading channels,” IEEE Trans. Inf. Theory, vol. 54, no. 6, pp.
2470–2492, Jun. 2008.
[11] L. Sankar, X. Shang, E. Erkip, and H. V. Poor, “Ergodic fading interference channels: sum capacity and separability,” IEEE Trans. Inf.
Theory, vol. 57, no. 5, pp. 2605–2626, May 2011.
[12] D. N. C. Tse and R. D. Yates, “Fading broadcast channels with state information at the receivers,” IEEE Trans. Inf. Theory, vol. 58,
no. 6, pp. 3453–3471, June 2012.
[13] A. Vahid, M. A. Maddah-Ali, A. S. Avestimehr, and Y. Zhu, “Binary fading interference channel with no CSIT,” IEEE Trans. Inf.
Theory, vol. 63, no. 6, pp. 3565 – 3578, June 2017.
[14] Y. Zhu and D. Guo, “Ergodic fading Z-interference channels without state information at transmitters,” IEEE Trans. Inf. Theory, vol. 57,
no. 5, pp. 2627–2647, May 2011.
[15] S.-C. Lin and P.-H. Lin, “On ergodic secrecy capacity of multiple input wiretap channel with statistical CSIT,” IEEE Trans. Inf.
Forensics Security, vol. 8, no. 2, pp. 414–419, Feb. 2013.
[16] P.-H. Lin and E. Jorswieck, “On the fading Gaussian wiretap channel with statistical channel state information at transmitter,” IEEE
Trans. Inf. Forensics Security, vol. 11, no. 1, pp. 46–58, Jan. 2016.
[17] J. Körner and K. Marton, “Comparison of two noisy channels,” in Topics in Information Theory (1975), Imre Csiszár and Peter Elias, Eds. North-Holland, Colloquia Math. Soc. János Bolyai, 1977, pp. 411–423.
[18] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2012.
[19] M. Shaked and J. G. Shanthikumar, Stochastic Orders. Springer Science, New York, 2007.
[20] H. Thorisson, Coupling, Stationarity, and Regeneration. Springer-Verlag, New York, 2000.
[21] T. M. Cover and J. A. Thomas, Elements of Information Theory, 1st ed. New York: Wiley-Interscience, 1991.
[22] R. Dabora and A. J. Goldsmith, “The capacity region of the degraded finite-state broadcast channel,” IEEE Trans. Inf. Theory, vol. 56,
no. 4, pp. 1828–1851, April 2010.
[23] H. Permuter, T. Weissman, and J. Chen, “Capacity region of the finite-state multiple-access channel with and without feedback,” IEEE
Trans. Inf. Theory, vol. 55, no. 6, pp. 2455–2477, Jun. 2009.
[24] R. Dabora and A. J. Goldsmith, “On the capacity of indecomposable finite-state channels with feedback,” IEEE Trans. Inf. Theory,
vol. 59, no. 1, pp. 193–203, Jan. 2013.
[25] M. Naito, S. Watanabe, R. Matsumoto, and T. Uyematsu, “Secret key agreement by soft-decision of signals in Gaussian Maurer’s
model,” IEICE Trans. Fundamentals, vol. E92-A, no. 2, pp. 525–534, Feb. 2009.
[26] B. McMillan, “The basic theorems of information theory,” Ann. Math. Statist., vol. 24, no. 2, pp. 196–219, 1953.
[27] S. M. Moser, Advanced Topics in Information Theory - Lecture Notes. http://moser-isi.ethz.ch/docs/atit script v210.pdf, 2017.
[28] A. Makur and Y. Polyanskiy, “Comparison of channels: criteria for domination by a symmetric channel,” https://arxiv.org/abs/1609.06877.
[29] R. B. Nelsen, An Introduction to Copulas, 2nd ed. Springer-Verlag, 2006.
[30] S. M. Ross and E. A. Peköz, A Second Course in Probability. ProbabilityBookstore.com, Boston, MA, 2007.
[31] T. Y. Al-Naffouri and B. Hassibi, “On the distribution of indefinite quadratic forms in Gaussian random variables,” in Proc. IEEE
International Symposium on Information Theory (ISIT), June-July 2009, pp. 1744–1748.
[32] K. Marton, “A coding theorem for the discrete memoryless broadcast channel,” IEEE Trans. Inf. Theory, vol. 25, no. 3, pp. 306–311,
May 1979.
[33] C. Nair and A. E. Gamal, “An outer bound to the capacity region of the broadcast channel,” IEEE Trans. Inf. Theory, vol. 53, no. 1,
pp. 350–355, Jan 2007.
[34] H. Weingarten, Y. Steinberg, and S. Shamai (Shitz), “The capacity region of the Gaussian multiple-input multiple-output broadcast
channel,” IEEE Trans. Inf. Theory, vol. 52, no. 9, pp. 3936–3964, Sept. 2006.
[35] A. D. Dabbagh and D. J. Love, “Precoding for multiple antenna gaussian broadcast channels with successive zero-forcing,” IEEE Trans.
Signal Process., vol. 55, no. 7, pp. 3837–3850, July 2007.
[36] T. Oechtering, E. Jorswieck, R. F. Wyrembelski, and H. Boche, “On the optimal transmit strategy for the MIMO bidirectional broadcast
channel,” IEEE Trans. Commun., vol. 57, no. 12, pp. 3817–3826, Dec. 2009.
[37] C. Hellings, M. Joham, and W. Utschick, “Gradient-based power minimization in MIMO broadcast channels with linear precoding,”
IEEE Trans. Signal Process., vol. 60, no. 2, pp. 877 – 890, Feb 2012.
[38] E. Abbe and L. Zheng, “A coordinate system for Gaussian networks,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 721–733, Feb. 2012.
[39] D. Tuninetti and S. Shamai, “On two-user fading Gaussian broadcast channels with perfect channel state information at the receivers,”
in Proc. IEEE International Symposium on Information Theory (ISIT), Yokohama, Japan, June-July 2003, p. 345.
[40] A. S. Y. Poon, D. N. C. Tse, and R. W. Brodersen, “Impact of scattering on the capacity, diversity, and propagation range of multiple-antenna channels,” IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 1087–1100, Mar. 2006.
[41] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover, 1972.
[42] I. Csiszár and J. Körner, “Broadcast channels with confidential messages,” IEEE Trans. Inf. Theory, vol. 24, no. 3, pp. 339–348, 1978.
[43] E. A. Abbe and L. Zheng, “Coding along Hermite polynomials for Gaussian noise channels,” in Proc. IEEE International Symposium
on Information Theory (ISIT), Seoul, Korea, June-July 2009, pp. 1644–1648.
[44] R. Ahlswede and I. Csiszár, “Common randomness in information theory and cryptography - Part I: Secret sharing,” IEEE Trans. Inf. Theory, vol. 39, no. 4, pp. 1121–1132, July 1993.
[45] U. M. Maurer, “Secret key agreement by public discussion from common information,” IEEE Trans. Inf. Theory, vol. 39, no. 3, pp.
733–742, May 1993.
[46] R. A. Chou and M. R. Bloch, “Separation of reliability and secrecy in rate-limited secret-key generation,” IEEE Trans. Inf. Theory,
vol. 60, no. 8, pp. 4941–4957, Aug. 2014.
[47] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[48] Y. Sankarasubramaniam, A. Thangaraj, and K. Viswanathan, “Finite-state wiretap channels: secrecy under memory constraints,” in
Proc. IEEE Information Theory Workshop (ITW), Taormina, Italy, Oct. 2009, pp. 115–119.
[49] M. Bloch and N. Laneman, “Strong secrecy from channel resolvability,” IEEE Trans. Inf. Theory, vol. 59, no. 12, pp. 8077–8098, Dec.
2013.
[50] A. Das and P. Narayan, “Capacities of time-varying multiple-access channels with side information,” IEEE Trans. Inf. Theory, vol. 48,
no. 1, pp. 4–25, Jan. 2002.
[51] H. Wang and N. Moayeri, “Finite-state Markov channel-a useful model for radio communication channels,” IEEE Trans. Veh. Technol.,
vol. 44, pp. 163–171, Feb. 1995.
[52] A. Goldsmith, Design and Performance of High-Speed Communication Systems. Ph.D. dissertation, Univ. California, Berkeley, 1994.
[53] P. Sadeghi, R. A. Kennedy, P. B. Rapajic, and R. Shams, “Finite-state Markov modeling of fading channels,” IEEE Signal Process.
Mag., pp. 57–80, Sept. 2008.
[54] C. Pimentel, T. H. Falk, and L. Lisbôa, “Finite-state Markov modeling of correlated Rician-fading channels,” IEEE Trans. Veh. Technol.,
vol. 53, no. 5, pp. 1491–1500, Sept. 2004.
[55] S. Shamai (Shitz) and A. D. Wyner, “Information theoretic considerations for symmetric, cellular, multiple-access fading channels: Part
I,” IEEE Trans. Inf. Theory, vol. 43, pp. 1877–1894, Nov. 1997.
[56] A. Khisti and G. W. Wornell, “Secure transmission with multiple antennas-I: The MISOME wiretap channel,” IEEE Trans. Inf. Theory,
vol. 56, no. 7, pp. 3088–3104, July 2010.
[57] G. D. Forney, Jr., “On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels:
Shannon meets Wiener,” https://arxiv.org/pdf/cs/0409053.pdf, 2004.
arXiv:1703.10293v1 [] 30 Mar 2017
Preserving Distances in Very Faulty Graphs
Greg Bodwin (Stanford, gbodwin@stanford.edu)
Fabrizio Grandoni (IDSIA, USI-SUPSI, fabrizio@idsia.ch)
Merav Parter (CSAIL, MIT, parter@mit.edu)
Virginia Vassilevska Williams (Stanford, virgi@cs.stanford.edu)
Abstract
Preservers and additive spanners are sparse (hence cheap to store) subgraphs that preserve
the distances between given pairs of nodes exactly or with some small additive error, respectively. Since real-world networks are prone to failures, it makes sense to study fault-tolerant
versions of the above structures. This turns out to be a surprisingly difficult task. For every
small but arbitrary set of edge or vertex failures, the preservers and spanners need to contain
replacement paths around the faulted set. Unfortunately, the complexity of the interaction between replacement paths blows up significantly, even from 1 to 2 faults, and the structure of
optimal preservers and spanners is poorly understood. In particular, no nontrivial bounds for
preservers and additive spanners are known when the number of faults is bigger than 2.
Even the answer to the following innocent question is completely unknown: what is the
worst-case size of a preserver for a single pair of nodes in the presence of f edge faults? There
are no super-linear lower bounds, nor subquadratic upper bounds for f > 2. In this paper we
make substantial progress on this and other fundamental questions:
• We present the first truly sub-quadratic size single-pair preservers in unweighted (possibly
directed) graphs for any fixed number f of faults. Our result indeed generalizes to the
single-source case, and can be used to build new fault-tolerant additive spanners (for all
pairs).
• The size of the above single-pair preservers is O(n2−g(f ) ) for some positive function g,
and grows to O(n2 ) for increasing f . We show that this is necessary even in undirected
unweighted graphs, and even if you allow for a small additive error: If you aim at size
O(n2−ε ) for ε > 0, then the additive error has to be Ω(εf ). This surprisingly matches
known upper bounds in the literature.
• For weighted graphs, we provide matching upper and lower bounds for the single pair
case. Namely, the size of the preserver is Θ(n2 ) for f ≥ 2 in both directed and undirected
graphs, while for f = 1 the size is Θ(n) in undirected graphs. For directed graphs, we
have a superlinear upper bound and a matching lower bound.
Most of our lower bounds extend to the distance oracle setting, where rather than a subgraph
we ask for any compact data structure.
1 Introduction
Distance preservers and additive spanners are (sparse) subgraphs that preserve, either exactly or
with some small additive error, the distances between given critical pairs P of nodes. This has
been a subject of intense research in the last two decades [CE06, BW16, ADD+ 93, ACIM99, Che13,
BTMP05, AB16, Pet09].
However, real-world networks are prone to failures. For this reason, more recently (e.g. [CLPR09,
BCPS15,CP10,PP13,Par14,BGLP14,PP14,BGG+ 15,DK11,LNS02,CZ04,Luk99]) researchers have
devoted their attention to fault-tolerant versions of the above structures, where distances are (approximately) preserved also in the presence of a few edge (or vertex) faults. For the sake of simplicity
we focus here on edge faults, but many results generalize to the case of vertex faults where F ⊆ V .
Definition 1.1. Given an n-node graph G = (V, E) and P ⊆ V × V , a subgraph H ⊆ G is an
f -fault tolerant (f -FT) β-additive P -pairwise spanner if
distH\F (s, t) ≤ distG\F (s, t) + β,
∀(s, t) ∈ P, ∀F ⊆ E, |F | ≤ f.
If β = 0, then H is an f -FT P -pairwise preserver.
Finding sparse FT spanners/preservers turned out to be an incredibly challenging task. Despite
intensive research, many simple questions have remained open, the most striking of which arguably
is the following:
Question 1. What is the worst-case size of a preserver for a single pair (s, t) and f ≥ 1 faults?
Prior work [Par15, PP13] considered the single-source P = {s} × V unweighted case, providing
super-linear lower bounds for any f and tight upper bounds for f = 1, 2. However, first, there is
nothing known for f > 2, and second, the lower bounds for the {s} × V case do not apply to the
single pair case where much sparser preservers might exist. Prior to this work, it was conceivable
that in this case O(n) edges suffice for arbitrary fixed f .
Our first result is a complete answer to Question 1 for weighted graphs. In more detail, we
prove:
• An (s, t) preserver in a weighted graph for f = 1 has size Θ(n) in the undirected setting
(Theorem 4.1, with extensions in Theorem 4.3) or Θ(DP(n)) in the directed setting (Theorem
??).
• An (s, t) preserver in a weighted graph for f ≥ 2 has size Θ(n2 ) even in the undirected case
(Theorem 4.4).
The function DP(n) above denotes a tight bound for the sparsity of a pairwise distance preserver
in directed weighted graphs with n nodes and O(n) pairs. Coppersmith and Elkin [CE06] show
that Ω(n4/3 ) ≤ DP(n) ≤ O(n3/2 ). It is a major open question to close this gap, and we show
that the no-fault n-pair distance preserver question is equivalent to the 1-fault single pair preserver
question, thereby fully answering the latter question, up to resolving the major open problem for
n-pair preservers.
For unweighted graphs, we achieve several non-trivial lower bounds concerning the worst-case
size of (s, t) preservers and spanners:
• In the unweighted directed or undirected case this size is Θ(n) for f = 1. This shows an
interesting gap w.r.t. to the weighted case mentioned before.
• The size is super-linear for any f ≥ 2 even in unweighted undirected graphs and even if we
allow a small enough polynomial additive error nδ .
Note that the latter lower bound (unlike in the weighted case) leaves room for improvements.
In particular, consider the following question:
Question 2. In unweighted graphs, is the worst-case size of an f -FT (s, t) preserver subquadratic
for every constant f ≥ 2?
Prior work showed that the answer is YES for f = 1, 2 [Par15, PP14], but nothing is known for
f ≥ 3. We show that the answer is YES:
• In unweighted directed or undirected graphs, for any f ≥ 1 there is an (s, t) preserver of size
O(n2−g(f ) ) for some positive decreasing function g(·). See Theorem 2.1.
The above result has many strengths. First, it extends to the single-source case (i.e., P =
{s} × V ). Second, the same result holds for any fixed number f of vertex faults. Prior work was
only able to address the simple case f = 1 [Par14]. Third, such a preserver can be computed very
efficiently in O(f mn) time, and its analysis is relatively simple (e.g., compared to the slightly better
size bound in [Par15] that was achieved by a cumbersome case analysis). Finally, via fairly standard
techniques, the preserver result also implies improved f -FT 2-additive (all pairs!) spanners for all
f ≥ 1 (see Theorem 2.8).
In the above result the size of the preserver grows quickly to O(n2 ) for increasing f . This raises
the following new question:
Question 3. Does there exist a universal constant ε > 0 such that all unweighted graphs have an
f -FT (s, t) preserver of size Of (n2−ε )? What if we allow a small additive error?
The only result with strongly sub-quadratic size in the above sense is an O(f · n4/3 ) size spanner
with additive error Θ(f ) [BCPS15,BGG+ 15]. Can we remove or reduce the dependence of the error
on f ? We show that the answer is NO:
• In undirected unweighted graphs, any single-pair spanner of strongly subquadratic size Of (n2−ε )
for ε > 0 needs to have additive error Ω(εf ). (See Theorem 3.1 and related results in Theorems 3.4-3.5).
Hence the linear dependence in f in the additive error in [BCPS15, BGG+ 15] is indeed necessary.
We found this very surprising. The table in Appendix A summarizes our main results for FTpreservers.
So far we have focused on sparse distance preserving subgraphs. However, suppose that the
distance estimates can be stored in a different way in memory. Data structures that store the
distance information of a graph in the presence of faults are called distance sensitivity oracles.
Distance sensitivity oracles are also intensely studied [DTCR08,BK09,WY13, GW12, DP09, DP17].
Our main goal here is to keep the size of the data structure as small as possible. Other typical
goals are to minimize preprocessing and query time - we will not address these.
Question 4. How much space do we need to preserve (exactly or with a small additive error) the
distances between a given pair of nodes in the presence of f faults?
Clearly all our preserver/spanner upper bounds extend to the oracle case, however the lower
bounds might not: in principle a distance oracle can use much less space than a preserver/spanner
with the same accuracy. Our main contribution here is the following incompressibility result:
• The worst-case size of a single-pair exact distance sensitivity oracle in directed or undirected
weighted graphs is Θ(n2 ) for f ≥ 2 (note that the optimal size for f = 1 is Θ(n) by simple
folklore arguments, so our result completes these settings). See Theorem 4.4.
• If we allow for a polynomial additive error nδ , for small δ, even in the setting of undirected
unweighted graphs, then the size of the oracle has to be super-linear already for f ≥ 3
(Theorem 3.6).
The technical part of the paper has precise theorem statements for all results. The interested
reader will find even more results and corollaries there as well. We omitted these from this introduction for the sake of clarity.
1.1 Related Work
Fault-tolerant spanners were introduced in the geometric setting [LNS02] (see also [Luk99, CZ04]).
FT-spanners with multiplicative stretch are relatively well understood: the error/sparsity for f -FT
and f -VFT multiplicative spanners is (up to a small polynomial factor in f ) the same as in the
nonfaulty case. For f edge faults, Chechik et al. [CLPR09] showed how to construct f-FT (2k−1)-multiplicative spanners with size Õ(f n^{1+1/k}) for any f, k ≥ 1. They also construct an f-VFT spanner with the same stretch and larger size. This was later improved by Dinitz and Krauthgamer [DK11], who showed the construction of f-VFT spanners with 2k−1 error and Õ(f^{2−1/k} n^{1+1/k}) edges.
FT additive spanners were first considered by Braunschvig, Chechik and Peleg in [BCPS15] (see
also [BGG+ 15] for slightly improved results). They showed that FT Θ(f )-additive spanners can
be constructed by combining FT multiplicative spanners with (non-faulty) additive spanners. This
construction, however, supports only edge faults. Parter and Peleg showed in [PP14] a lower bound
of Ω(n1+εβ ) edges for single-source FT β-additive spanners. They also provided a construction of
single-source FT-spanner with additive stretch 4 and O(n4/3 ) edges that is resilient to one edge
fault. The first constructions of FT-additive spanners resilient against one vertex fault were given
in [Par14] and later on in [BGG+ 15]. Prior to our work, no construction of FT-additive spanners
was known for f ≥ 2 vertex faults.
As mentioned earlier, the computation of preservers and spanners in the non-faulty case (i.e.
when f = 0) has been the subject of intense research in the last few decades. The current-best
preservers can be found in [CE06, BW16, Bod17b]. Spanners are also well understood, both for
multiplicative stretch [ADD+ 93,Erd63] and for additive stretch [ACIM99,Che13,BTMP05,Woo10,
AB16, BW16, Che13, Pet09, ABP17]. There are also a few results on “mixed” spanners with both
multiplicative and additive stretch [EP04, TZ06, BTMP05].
Distance sensitivity oracles are data structures that can answer queries about the distances in
a given graph in the presence of faults. The first nontrivial construction was given by Demetrescu
et al. [DTCR08] and later improved by Bernstein and Karger [BK09] who showed how to construct
Õ(n2 )-space, constant query time oracles for a single edge fault for an m-edge n-node graph in
Õ(mn) time. The first work that considered the case of two faults (hence making the first jump
from one to two) is due to Duan and Pettie in [DP09]. Their distance oracle has nearly optimal
e 2 ) and query time of O(1).
e
size of O(n
The case of bounded edge weights, and possibly multiple
faults, is addressed in [WY13, GW12] exploiting fast matrix multiplication techniques. The size of
their oracle is super-quadratic.
The notion of FT-preservers is also closely related to the problem of constructing replacement
paths. For a pair of vertices s and t and an edge e, the replacement path Ps,t,e is the s-t shortest path that avoids e.¹ The efficient computation of replacement paths is addressed, among others,
in [MMG89, RZ12, WY13, VW11]. A single-source version of the problem is studied in [GW12].
Single-source FT structures that preserve strong connectivity have been studied in [BCR16].
¹Replacement paths were originally defined for the single edge fault case, but later on extended to the case of multiple faults as well.
1.2 Preliminaries and Notation
Assume throughout that all shortest-path ties are broken in a consistent manner. For every s, t ∈ V and a subgraph G′ ⊆ G, let π_{G′}(s, t) be the (unique) s-t shortest path in G′ (i.e., it is unique under the tie-breaking). If there is no path between s and t in G′, we define π_{G′}(s, t) = ∅. When G′ = G, we simply write π(s, t). For any path P containing nodes u, v, let P[u ⇝ v] be the subpath of P between u and v. For s, t ∈ V and F ⊆ E, we let P_{s,t,F} = π_{G\F}(s, t) be the s-t shortest path in G \ F. We call such paths replacement paths. When F = {e}, we simply write P_{s,t,e}. By m we denote the number of edges in the graph currently being considered.
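As a concrete illustration of this notation (ours, not part of the paper; networkx is an assumed dependency), a replacement path P_{s,t,F} can be computed by a BFS in G \ F:

```python
import networkx as nx

def replacement_path(G, s, t, F):
    """P_{s,t,F} = pi_{G\\F}(s, t): shortest s-t path after removing the fault
    set F (a set of edges); returns None if t becomes unreachable, matching
    the convention pi_{G\\F}(s, t) = empty set."""
    H = G.copy()
    H.remove_edges_from(F)
    try:
        return nx.shortest_path(H, s, t)  # BFS shortest path (unweighted)
    except nx.NetworkXNoPath:
        return None

# Usage on a 5-cycle: faulting one edge of pi(0, 2) forces the replacement
# path around the other side of the cycle.
G = nx.cycle_graph(5)
print(replacement_path(G, 0, 2, F=set()))     # e.g. [0, 1, 2]
print(replacement_path(G, 0, 2, F={(1, 2)}))  # [0, 4, 3, 2]
```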
The structure of the paper is as follows. In Sec. 2, we describe an efficient construction for
FT-preservers and additive spanners with a subquadratic number of edges. Then, in Sec. 3, we
provide several lower bound constructions for a single s-t pair, both for the exact and for the
additive stretch case. Finally, in Sec. 4 we consider the setting of weighted graphs. Most of the
results of that setting are deferred to Appendix ??. Missing proofs in other sections can be found
in the appendix as well.
2 Efficient Construction of FT-Preservers and Spanners
In this section we show:
Theorem 2.1. For every directed or undirected unweighted graph G = (V, E), integer f ≥ 1 and S ⊆ V, one can construct in time O(f n m) an f-FT S-sourcewise (i.e., P = S × V) preserver of size Õ(f · |S|^{1/2^f} · n^{2−1/2^f}).
We remark that Theorem 2.1 holds under both edge and vertex faults. We next focus on
the directed case, the undirected one being analogous and simpler. We begin by recapping the
currently-known approaches for handling many faults, and we explain why these approaches fail to
achieve interesting space/construction time bounds for large f .
The limits of previous approaches: A known approach for handling many faults is by random
sampling of subgraphs, as introduced by Weimann and Yuster [WY13] in the setting of distance
sensitivity oracles, and later on applied by Dinitz and Krauthgamer [DK11] in the setting of fault
tolerant spanners. The high level idea is to generate multiple subgraphs G1 , . . . , Gr by removing
each edge/vertex independently with sufficiently large probability p; intuitively, each Gi simultaneously captures many possible fault sets of size f . One can show that, for a sufficiently small
parameter L and for any given (short) replacement path Ps,t,F of length at most L (avoiding faults
F), w.h.p. in at least one Gi the path Ps,t,F is still present while all edges/vertices in F are deleted. Thus, if we compute a (non-faulty) preserver Hi ⊆ Gi for each i, then the graph H = ∪_i Hi will
contain every short replacement path. For the remaining (long) replacement paths, Weimann and
Yuster use a random decomposition into short subpaths. Unfortunately, any combination of the
parameters p, r, L leads to a quadratic (or larger) space usage.
Another way to handle multiple faults is by extending the approach in [PP13,PP14,Par14] that
works for f ∈ {1, 2}. A useful trick used in those papers (inspired by prior work in [RZ12, VW11])
is as follows: suppose f = 1, and fix a target node t. Consider the shortest path π(s, t). It is
sufficient to take the last edge of each replacement path Ps,t,e and charge it to the node t; the rest
of the path is then charged to other nodes by an inductive argument. Hence, one only needs to
bound the number of new-ending paths – those that end in an edge that is not already in π(s, t).
In the case f = 1, these new-ending paths have a nice structure: they diverge from π(s, t) at
some vertex b (divergence point) above the failing edge/vertex and collide again with π(s, t) only
at the terminal t; the subpath connecting b and t on the replacement path is called its detour. One
can divide the s-t replacement paths into two groups: short (resp., long) paths are those whose detour has length at most (resp., at least) √n. It is then straightforward enough to show that each category of path contributes only Õ(n^{1/2}) edges entering t, and so (collecting these last edges over all nodes in the graph) the output subgraph has Õ(n^{3/2}) edges in total. Generalizing this to the case of multiple faults is non-trivial already for the case of f = 2. The main obstacle here stems
from a lack of structural understanding of replacement paths for multiple faults: in particular, any
given divergence point b ∈ π(s, t) can now be associated with many new-ending paths and not only
one! In the only known positive solution for f = 2 [Par15], the approach works only for edge faults
and is based on an extensive case analysis whose extension to larger f is beyond reasonable reach.
Thus, in the absence of new structural understanding, further progress seems very difficult.
A second source of difficulties is related to the running time of the construction. A priori, it
seems that constructing a preserver H should require computing all replacement paths Ps,t,F , which
leads to a construction time that scales exponentially in f . In particular, by deciding to omit an
edge e from the preserver H, we must somehow check that this edge does not appear on any of the
replacement paths Ps,t,F (possibly, without computing these replacement paths explicitly).
Our basic approach: The basic idea behind our algorithm is as follows. Similar to [PP13, PP14,
Par14], we focus on each target node t, and define a set Et of edges incident to t to be added to
our preserver. Intuitively, these are the last edges of new-ending paths as described before. The
construction of Et , however, deviates substantially from prior work. Let us focus on the simpler
case of edge deletions. The set Et is constructed recursively, according to parameter f . Initially we
consider the shortest path tree T from the source set S to t, and add to Et the edges of T incident
to t (at most |S| many). Consider any new-ending replacement path P for t. By the previous
discussion, this path has to leave T at some node b and it meets T again only at t: let D be the
subpath of P between b and t (the detour of P ). Note that D is edge-disjoint from T , i.e. it is
contained in the graph G0 = G \ E(T ). Therefore, it would be sufficient to compute recursively the
set Et0 of final edges of new-ending replacement paths for t in the graph G0 with source set S 0 given
by the possible divergence points b and w.r.t. f − 1 faults (recall that one fault must be in E(T ),
hence we avoid that anyway in G0 ). This set Et0 can then be added to Et .
The problem with this approach is that S 0 can contain Ω(n) many divergence points (hence Et
Ω(n) many edges), leading to a trivial Ω(n2 ) size preserver. In order to circumvent this problem,
we classify the divergence points b in two categories. Consider first the nodes b at distance at
most L from t along T , for some parameter L. There are only O(|S|L) many such nodes S short ,
which is sublinear for |S| and L small enough. Therefore we can safely add S short to S 0 . For the
remaining divergence points b, we observe that the corresponding detour D must have length at
least L: therefore by sampling Õ(n/L) nodes S long we hit all such detours w.h.p. Suppose that
σ ∈ S long hits detour D. Then the portion of D from σ to t also contains the final edge of D to be
added to Et . In other terms, it is sufficient to add S long (which has sublinear size for polynomially
large L) to S 0 to cover all the detours of nodes b of the second type. Altogether, in the recursive
call we need to handle one less fault w.r.t. a larger (but sublinear) set of sources S 0 . Our approach
has several benefits:
• It leads to a subquadratic size for any f (for a proper choice of the parameters);
• It leads to a very fast algorithm. In fact, for each target t we only need to compute a BFS
tree in f different graphs, leading to an O(f nm) running time;
• Our analysis is very simple, much simpler than in [Par15] for the case f = 2;
• It can be easily extended to the case of vertex faults.
5
Algorithm 1 Construction of Et in our f -FT S-Sourcewise Preserver Algorithm.
procedure ComputeSourcewiseFT(t, S, f, G)
Input: A graph G with a source set S and terminal t, number of faults f.
Output: Edges Et incident to t in an f-FT S-sourcewise preserver H.
1: Set G0 = G, S0 = S, Et = ∅.
2: for i ∈ {0, . . . , f} do
3:   Compute the partial BFS tree Ti = ∪_{s∈Si} πGi(s, t).
4:   Et = Et ∪ {LastE(πTi(s, t)) | s ∈ Si}.
5:   Set distance threshold di = √((n/|Si|) · f log n).
6:   Let S_i^short = {v ∈ V(Ti) | distTi(v, t) ≤ di}.
7:   Sample a collection S_i^long ⊆ V(Gi) of Θ((n/di) · f log n) vertices.
8:   Set Si+1 = S_i^short ∪ S_i^long and Gi+1 = Gi \ E(Ti).
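The following compact Python sketch (ours, not the authors' code; networkx is an assumed dependency, and the undirected edge-fault variant is shown) mirrors the structure of Algorithm 1:

```python
import math
import random
import networkx as nx

def compute_sourcewise_ft(G, t, S, f):
    """Sketch of Algorithm 1 (edge-fault, undirected variant): returns the
    set E_t of preserver edges incident to t, each as a frozenset {u, v}."""
    n = G.number_of_nodes()
    Gi, Si, Et = G.copy(), set(S), set()
    for i in range(f + 1):
        # Partial BFS tree T_i = union of pi_{G_i}(s, t) over s in S_i;
        # networkx's BFS gives consistent tie-breaking across calls.
        Ti = nx.Graph()
        for s in Si:
            try:
                path = nx.shortest_path(Gi, s, t)
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                continue  # pi_{G_i}(s, t) is the empty set
            if len(path) >= 2:
                nx.add_path(Ti, path)
                Et.add(frozenset(path[-2:]))  # LastE(pi_{T_i}(s, t))
        if t not in Ti:
            break
        # d_i = sqrt((n / |S_i|) * f * log n), rounded for use as a level cutoff.
        di = max(1, round(math.sqrt(n / len(Si) * f * math.log(n + 1))))
        dist = nx.single_source_shortest_path_length(Ti, t)
        S_short = {v for v, d in dist.items() if d <= di}
        # S_i^long: Theta((n / d_i) * f * log n) sampled vertices, capped at n.
        k = min(n, (n * f * max(1, round(math.log(n + 1)))) // di + 1)
        S_long = set(random.sample(list(G.nodes), k))
        Si = S_short | S_long
        Gi.remove_edges_from(list(Ti.edges))  # G_{i+1} = G_i \ E(T_i)
    return Et

# The preserver is H = union of E_t over all targets t in V.
```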
Algorithm for Edge Faults: Let us start with the edge faults case. The algorithm constructs a set Et of edges incident to each target node t ∈ V. The final preserver is simply the union H = ∪_{t∈V} Et of these edges. We next describe the construction of each Et (see also Alg. 1). The computation proceeds in rounds i = 0, . . . , f. At the beginning of round i we are given a subgraph Gi (with G0 = G) and a set of sources Si (with S0 = S). We compute a partial BFS tree Ti = ∪_{s∈Si} πGi(s, t)² from Si to t, and add to Et (which is initially empty) the edges {LastE(πTi(s, t)) | s ∈ Si} of this tree incident to t. Here, for a path π where one endpoint is the considered target node t, we denote by LastE(π) the edge of π incident to t. The source set Si+1 is given by S_i^short ∪ S_i^long. Here S_i^short = {v ∈ V(Ti) | distTi(v, t) ≤ di} is the set of nodes at distance at most di = √((n/|Si|) · f log n) from t, while S_i^long is a random sample of Θ((n/di) · f log n) vertices. The graph Gi+1 is obtained from Gi by removing the edges E(Ti)³.
Adaptation for Vertex Faults: The only change in the algorithm is in the definition of the graph Gi inside the procedure to compute Et. We cannot allow ourselves to remove all the vertices of the tree Ti from Gi, and hence a more subtle definition is required. To define Gi+1, we first remove from Gi: (1) all edges of S_i^short × S_i^short, (2) the edges of E(Ti), and (3) the vertices of V(Ti) \ S_i^short. In addition, we delete all remaining edges incident to S_i^short which are directed towards any one of these vertices, so that all remaining edges incident to S_i^short are directed away from them (i.e., the incoming degree of the S_i^short vertices in Gi+1 is zero).
Analysis: We now analyze our algorithm. Since for each vertex t, we compute f (partial) BFS
trees, we get trivially:
Lemma 2.2 (Running Time). The subgraph H is computed within O(f n m) time.
We proceed with bounding the size of H.
Lemma 2.3 (Size Analysis). |Et| = Õ(|S|^{1/2^f} · (f n)^{1−1/2^f}) for every t ∈ V, hence |E(H)| = Õ(f |S|^{1/2^f} n^{2−1/2^f}).
Proof. Since the number of edges collected at the end of each round i is bounded by the number of sources |Si|, it is sufficient to bound |Si| for all i. Observe that, for every i ∈ {0, . . . , f − 1},
|Si+1| ≤ |S_i^long| + |S_i^short| ≤ di · |Si| + Θ((n/di) · f log n) = Θ(di · |Si|).
By resolving this recurrence starting with |S0| = |S|, one obtains |Si| = O(|S|^{1/2^i} (f n log n)^{1−1/2^i}). The claim follows by summing over i ∈ {0, . . . , f}.
²If πGi(s, t) does not exist, recall that we define it as an empty set of edges.
³Note that for f = 1, the algorithm has some similarity to the replacement path computation of [RZ12]. Yet, there was no prior extension of this idea for f ≥ 2.
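As a quick numerical sanity check of this recurrence (ours, with the Θ-constants set to 1 and hypothetical parameters), the unrolled closed form can be compared against direct iteration:

```python
import math

# Hypothetical parameters for illustration.
n, f, S0 = 10**6, 3, 10**3
B = n * f * math.log(n)  # the recurring n * f * log n factor

s = float(S0)
for i in range(f + 1):
    closed_form = S0 ** (1 / 2**i) * B ** (1 - 1 / 2**i)
    assert math.isclose(s, closed_form, rel_tol=1e-9)
    s = math.sqrt(s * B)  # |S_{i+1}| = d_i * |S_i| = sqrt(|S_i| * n f log n)
print("recurrence matches |S_i| = |S|^(1/2^i) * (n f log n)^(1 - 1/2^i)")
```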
We next show that the algorithm is correct. We focus on the vertex fault case, the edge fault
case being similar and simpler. Let us define, for t ∈ V and i ∈ {0, . . . , f },
Pt,i = {πGi \F (s, t) | s ∈ Si , F ⊆ V (Gi ), |F | ≤ f − i}.
Lemma 2.4. For every t ∈ V and i ∈ {0, . . . , f }, it holds that: LastE(π) ∈ Et for every
π ∈ Pt,i .
Proof. We prove the claim by decreasing induction on i ∈ {f, . . . , 0}. For the base of the induction,
consider the case of i = f . In this case, Pt,f = {πGf (s, t) | s ∈ Sf }. Since we add precisely
the last edges of these paths to the set Et, the claim holds. Assume that the lemma holds for rounds f, f − 1, . . . , i + 1 and consider round i. For every πGi\F(s, t) ∈ Pt,i, let P′s,t,F = πGi\F(s, t).⁴ Consider the partial BFS tree Ti = ∪_{s∈Si} πGi(s, t) rooted at t. Note that all (interesting) replacement paths P′s,t,F in Gi have at least one failing vertex v ∈ F ∩ V(Ti), as otherwise P′s,t,F = πGi(s, t).
0
0
replacement paths Ps,t,F
in Gi have at least one failing vertex v ∈ F ∩ V (Ti ) as otherwise Ps,t,F
=
πGi (s, t).
We next partition the replacement paths π ∈ Pt,i into two types depending on their last edge
LastE(π). The first class contains all paths whose last edge is in Ti . The second class of replacement
paths contains the remaining paths, which end with an edge that is not in Ti . We call this second
class of paths new-ending replacement paths. Observe that the first class is taken care of, since we
add all edges incident to t in Ti . Hence it remains to prove the lemma for the set of new-ending
paths.
For every new-ending path P′s,t,F, let bs,t,F be the last vertex on P′s,t,F that is in V(Ti) \ {t}. We call the vertex bs,t,F the last divergence point of the new-ending replacement path. Note that the detour Ds,t,F = P′s,t,F[bs,t,F ⇝ t] is vertex disjoint with the tree Ti except for the vertices bs,t,F and t. From now on, since we only wish to collect last edges, we may restrict our attention to this detour subpath. That is, since LastE(Ds,t,F) = LastE(P′s,t,F), it is sufficient to show that LastE(Ds,t,F) ∈ Et.
Our approach is based on dividing the set of new-ending paths in Pt,i into two classes based on the position of their last divergence point bs,t,F (see Fig. 1). The first class Pshort consists of new-ending paths in Pt,i whose last divergence point is at distance at most di = √((n/|Si|) · f log n) from t on Ti. In other words, this class contains all new-ending paths whose last divergence point is in the set S_i^short. We now claim the following.
Claim 2.5. For every P′s,t,F ∈ Pshort, the detour Ds,t,F is in Pt,i+1.
0
Proof. Since Ds,t,F is a subpath of the replacement path P′s,t,F, Ds,t,F is the shortest path between bs,t,F and t in Gi \ F. Recall that Ds,t,F is vertex disjoint with V(Ti) \ {bs,t,F, t}. Since bs,t,F is the last divergence point of P′s,t,F with Ti, the detour Ds,t,F starts from a vertex bs,t,F ∈ S_i^short and does not pass through any other vertex in V(Ti) \ {t}. Since we only changed in Gi+1 the direction of edges incident to S_i^short vertices, but the outgoing edge connecting bs,t,F to its neighbor x on Ds,t,F[bs,t,F ⇝ t] remains (i.e., this vertex x is not in V(Ti) \ {t}), this implies that the detour Ds,t,F exists in Gi+1. In particular, note that the vertex bs,t,F cannot be a neighbor of t in Ti. If (bs,t,F, t) were an edge in Ti, then we could replace the portion of the detour path between bs,t,F and t by this edge, getting a contradiction to the fact that P′s,t,F is a new-ending path⁵.
⁴We denote these replacement paths as P′s,t,F as they are computed in Gi and not in G.
⁵For the edge fault case, the argument is much simpler: by removing E(Ti) from Gi, we avoid at least one of the failing edges in Gi+1.
Figure 1: Shown is a partial tree Ti whose leaf set is in Si; all edges are directed towards t in the directed case. (In the figure, we let sij = si for simplicity of notation.) The replacement paths Psij,t,F are divided into two types depending on their last divergence point bsij,t,F. Note that this point is not necessarily on π(sij, t) and may appear on other π(siℓ, t) paths. The vertices appearing in the first di levels of Ti are S_i^short. The path Ps1,t,F is in Pshort and the path Psσ,t,F′ is in Plong. The vertex wsσ,t,F′ is in the set S_i^long and it hits the long detour of Psσ,t,F′. Note that since both Ps1,t,F and Psσ,t,F′ are new-ending, one of the vertices in their failing sets F, F′ appears on πTi(bs1,t,F, t), πTi(bsσ,t,F′, t), respectively.
Next, observe that at least one of the failing vertices in $F$ occurs on the subpath $\pi_{G_i}[b_{s,t,F}, t]$; let this vertex be $v \in F$. Since $v \in S_i^{short}$, all the edges are directed away from $v$ in $G_{i+1}$ and hence the paths going out from the source $b_{s,t,F}$ in $G_{i+1}$ cannot pass through $v$. Letting $F' = F \setminus V(T_i)$, it holds that (1) $|F'| \le f - i - 1$ and (2) since the shortest path ties are decided in a consistent manner and by definition of $G_{i+1}$, it holds that $D_{s,t,F} = \pi_{G_{i+1} \setminus F'}(b_{s,t,F}, t)$. As $b_{s,t,F} \in S_i^{short}$, it holds that $D_{s,t,F} \in \mathcal{P}_{t,i+1}$.
Hence, by the inductive hypothesis for $i + 1$, $LastE(P'_{s,t,F})$ is in $E_t$ for every $P'_{s,t,F} \in \mathcal{P}_{short}$.
We now turn to consider the second class of paths $\mathcal{P}_{long}$, which contains all remaining new-ending paths, i.e., those paths whose last divergence point is at distance at least $d_i$ from $t$ on $T_i$. Note that the detour $D_{s,t,F} = P'_{s,t,F}[b_{s,t,F} \rightsquigarrow t]$ of these paths is long – i.e., its length is at least $d_i$. For convenience, we will consider the internal part $D'_{s,t,F} = D_{s,t,F} \setminus \{b_{s,t,F}, t\}$ of these detours, so that the first and last vertices of these detours are not on $T_i$.
We exploit the lengths of these detours $D'_{s,t,F}$ and claim that w.h.p. the set $S_i^{long}$ is a hitting set for these detours. This indeed holds by a simple union bound over all possible $O(n^{f+2})$ detours. For every $P'_{s,t,F} \in \mathcal{P}_{long}$, let $w_{s,t,F} \in V(D'_{s,t,F}) \cap S_i^{long}$. (By the hitting set property, w.h.p., $w_{s,t,F}$ is well defined for each long detour.) Let $W_{s,t,F} = P'_{s,t,F}[w_{s,t,F}, t]$ be the suffix of the path $P'_{s,t,F}$ starting at a vertex from the hitting set $w_{s,t,F} \in S_i^{long}$. Since $LastE(P'_{s,t,F}) = LastE(W_{s,t,F})$, it is sufficient to show that $LastE(W_{s,t,F})$ is in $E_t$.
Claim 2.6. For every $P'_{s,t,F} \in \mathcal{P}_{long}$, it holds that $W_{s,t,F} \in \mathcal{P}_{t,i+1}$.
Proof. Clearly, $W_{s,t,F}$ is the shortest path between $w_{s,t,F}$ and $t$ in $G_i \setminus F$. Since $W_{s,t,F} \subseteq D'_{s,t,F}$ is vertex disjoint with $V(T_i)$, it holds that $W_{s,t,F} = \pi_{G_{i+1} \setminus F'}(w_{s,t,F}, t)$ for $F' = F \setminus V(T_i)$. Note that since at least one fault occurred on $T_i$, we have that $|F'| \le f - i - 1$. As $w_{s,t,F} \in S_i^{long}$, it holds that $W_{s,t,F} \in \mathcal{P}_{t,i+1}$.

Hence, by the inductive hypothesis for $i + 1$, $LastE(P'_{s,t,F})$ is in $E_t$ as required for every $P'_{s,t,F} \in \mathcal{P}_{long}$. This completes the proof of the lemma.
Lemma 2.7 (Correctness). $H$ is an $f$-FT $S$-sourcewise preserver.
Proof. By using Lemma 2.4 with $i = 0$, we get that for every $t \in V$, $s \in S$ and $F \subseteq V$, $|F| \le f$, $LastE(P_{s,t,F}) \in E_t$ (and hence also $LastE(P_{s,t,F}) \in H$). It remains to show that taking the last edge of each replacement path $P_{s,t,F}$ is sufficient. The base case is for paths of length 1, where we have clearly kept the entire path in our preserver. Then, assuming the hypothesis holds for paths up to length $k - 1$, consider a path $P_{s,t,F}$ of length $k$. Let $LastE(P_{s,t,F}) = (u, t)$. Then, since we break ties in a consistent manner, $P_{s,t,F} = P_{s,u,F} \circ LastE(P_{s,t,F})$. By the inductive hypothesis $P_{s,u,F}$ is in $H$, and since we included the last edge, $P_{s,t,F}$ is also in $H$. The claim follows.
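For intuition, the following Python sketch shows how a replacement path is recovered backwards from last edges alone; the map last_edge is a hypothetical data structure (not part of the paper) storing $LastE(P_{s,\cdot,F})$ for the relevant triples, and consistent tie-breaking is assumed exactly as in the proof.

def reconstruct_path(s, t, F, last_edge):
    # last_edge: hypothetical map (s, target, F) -> (u, target) giving
    # LastE(P_{s,target,F}). Consistent tie-breaking guarantees that
    # P_{s,t,F} = P_{s,u,F} followed by the edge (u, t), which is exactly
    # the induction used in the proof above.
    path = []
    cur = t
    while cur != s:
        u, _ = last_edge[(s, cur, frozenset(F))]
        path.append((u, cur))
        cur = u
    path.reverse()
    return path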
Theorem 2.1 now immediately follows from Lemmas 2.2, 2.3, and 2.7. Combining our $f$-FT sourcewise preserver from Theorem 2.1 with standard techniques (see, e.g., [Par14]), we show:
Theorem 2.8. For every undirected unweighted graph $G = (V, E)$ and integer $f \ge 1$, there exists a randomized $\tilde{O}(f \cdot nm)$-time construction of a +2-additive $f$-FT spanner of $G$ of size $\tilde{O}(f \cdot n^{2-1/(2^f+1)})$ that succeeds w.h.p.⁶
Proof. The spanner construction works as follows. Let $L$ be an integer parameter to be fixed later. A vertex $u$ is low-degree if it has degree less than $L$; otherwise it is high-degree. Let $S$ be a random sample of $\Theta((n/L) \cdot f \log n)$ vertices. Our spanner $H$ consists of the $f$-VFT $S$-sourcewise preserver from Theorem 2.1 plus all the edges incident to low-degree vertices. We now analyze the construction. The size of $H$ is bounded by:
$$\tilde{O}\left(f \cdot |S|^{1/2^f} \cdot n^{2-1/2^f}\right) + O(nL) = \tilde{O}\left(f^{1+1/2^f} L^{-1/2^f} \cdot n^2 + nL\right).$$
The claim on the size follows by choosing $L = \left\lceil n^{2^f/(2^f+1)} \right\rceil$.
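For intuition, here is the balancing calculation behind this choice of $L$ (polylogarithmic factors and the dependence on $f$ suppressed):
$$L^{-1/2^f} n^2 = nL \;\Longleftrightarrow\; n = L^{(2^f+1)/2^f} \;\Longleftrightarrow\; L = n^{2^f/(2^f+1)}, \qquad\text{whence } nL = n^{2-1/(2^f+1)},$$
matching the size bound stated in Theorem 2.8.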
Next, we turn to show correctness. First note that w.h.p. every high-degree vertex has at least $f + 1$ neighbors in $S$. Consider any pair of vertices $u, t$ and a set of failing vertices $F$, and let $P_{u,t,F}$ be the $u$–$t$ shortest path in $G \setminus F$. Let $x$ be the last vertex (closest to $t$) incident to a missing edge $e \in P_{u,t,F} \setminus E(H)$; hence $x$ is a high-degree vertex. We observe that, w.h.p., $x$ is adjacent to at least $f + 1$ vertices in $S$. Since at most $f$ vertices fail, one of the neighbors of $x$ in $S$, say $s'$, survives. Let $\pi_{H \setminus F}(u, s')$ be the $u$–$s'$ shortest path in $H \setminus F$, and consider the following $u$–$t$ path $P' = \pi_{H \setminus F}(u, s') \cdot (s', x) \cdot P_{u,t,F}[x \rightsquigarrow t]$. By the definition of $x$, $P' \subseteq H$. In addition, since $H$ contains an $f$-FT $S$-sourcewise preserver and $s' \in S$, it holds that
$$dist_{H \setminus F}(u, t) \le |P'| = dist_{G \setminus F}(u, s') + 1 + |P_{u,t,F}[x \rightsquigarrow t]| \le dist_{G \setminus F}(u, x) + 2 + |P_{u,t,F}[x \rightsquigarrow t]| = |P_{u,t,F}| + 2 = dist_{G \setminus F}(u, t) + 2.$$
The theorem follows.
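To summarize the construction, the following Python sketch puts the two ingredients together. The sourcewise preserver of Theorem 2.1 is treated as a black-box subroutine preserver (an assumption for illustration, not an implementation of that algorithm), and the constant c in the sampling probability is likewise illustrative.

import math
import random

def ft_additive2_spanner(G, f, preserver, c=2):
    # G: a networkx-style undirected unweighted graph (passed in by the caller).
    # preserver(G, S, f): assumed black box returning the edge set of an
    # f-VFT S-sourcewise preserver (Theorem 2.1).
    n = G.number_of_nodes()
    L = max(1, math.ceil(n ** (2**f / (2**f + 1))))
    p = min(1.0, c * f * math.log(n + 2) / L)     # so |S| = Theta((n/L) f log n)
    S = {v for v in G.nodes() if random.random() < p}
    H = set(preserver(G, S, f))
    for u, v in G.edges():                        # keep edges at low-degree vertices
        if G.degree(u) < L or G.degree(v) < L:
            H.add((u, v))
    return H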
3 Lower Bounds for FT Preservers and Additive Spanners
In this section, we provide the first non-trivial lower bounds for preservers and additive spanners
for a single pair s-t. We start by proving the following theorem.
Theorem 3.1. For any two integers $q, h > 0$ and a sufficiently large $n$, there exists an unweighted undirected $n$-node graph $G = (V, E)$ and a pair $s, t \in V$ such that any $2hq$-FT $(2q-1)$-additive spanner for $G$ for the single pair $(s, t)$ has size $\Omega\left(\left(\frac{n}{hq}\right)^{2-2/(h+1)}\right)$.
⁶The term w.h.p. (with high probability) here indicates a probability exceeding $1 - 1/n^c$, for an arbitrary constant $c \ge 2$. Since randomization is only used to select hitting sets, the algorithm can be derandomized; details will be given in the journal version.
The main building block in our lower bound is the construction of an (undirected unweighted) tree $T^h$, where $h$ is a positive integer parameter related to the desired number of faults $f$. Tree $T^h$ is taken from [Par15] with mild technical adaptations. Let $d$ be a size parameter which is used to obtain the desired number $n$ of nodes. It is convenient to interpret this tree as rooted at a specific node (though edges in this construction are undirected). We let $rt(T^h)$ and $L(T^h)$ be the root and leaf set of $T^h$, respectively. We also let $\ell(h)$ and $n(h)$ be the height and number of nodes of $T^h$, respectively.

Tree $T^h$ is constructed recursively as follows (see also Fig. 3a). The base case is given by $T^0$, which consists of a single isolated root node $rt(T^0)$. Note that $\ell(0) = 0$ and $n(0) = 1$. In order to construct $T^h$, we first create $d$ copies $T_0^{h-1}, \ldots, T_{d-1}^{h-1}$ of $T^{h-1}$. Then we add a path $v_0, \ldots, v_{d-1}$ of length $d - 1$ (consisting of new nodes), and choose $rt(T^h) = v_0$. Finally, we connect $v_j$ to $rt(T_j^{h-1})$ with a path (whose internal nodes are new) of length $(d - j) \cdot (\ell(h-1) + 3)$. The next lemma states the crucial properties of $T^h$.
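The recursion is straightforward to implement; the following Python sketch (using networkx, with arbitrary integer node identifiers that are not part of the original construction) builds $T^h$ and returns its leaves in left-to-right order.

import itertools
import networkx as nx

_ids = itertools.count()

def build_tree(h, d):
    # Returns (T, root, leaves) for the tree T^h with size parameter d.
    T = nx.Graph()
    if h == 0:
        r = next(_ids)
        T.add_node(r)
        return T, r, [r]
    spine = [next(_ids) for _ in range(d)]        # path v_0, ..., v_{d-1}
    nx.add_path(T, spine)
    ell_prev = 3 * ((d + 1) ** (h - 1) - 1)       # height of T^{h-1} (Lemma B.1)
    leaves = []
    for j in range(d):
        sub, sub_root, sub_leaves = build_tree(h - 1, d)
        T.update(sub)
        # connect v_j to rt(T_j^{h-1}) by a path of length (d - j)(ell_prev + 3)
        prev = spine[j]
        for _ in range((d - j) * (ell_prev + 3) - 1):
            nxt = next(_ids)
            T.add_edge(prev, nxt)
            prev = nxt
        T.add_edge(prev, sub_root)
        leaves.extend(sub_leaves)
    return T, spine[0], leaves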
Lemma 3.2. The tree $T^h$ satisfies the following properties:
1. $n(h) \le \frac{3}{2}(h+1)(d+1)^{h+1}$;
2. $|L(T^h)| = d^h$;
3. for every $\ell \in L(T^h)$, there exists $F_\ell \subseteq E(T^h)$, $|F_\ell| = h$, such that $dist_{T^h \setminus F_\ell}(rt(T^h), \ell) + 2 \le dist_{T^h \setminus F_\ell}(rt(T^h), \ell')$ for every $\ell' \in L(T^h) \setminus \{\ell\}$.
We next construct a graph $S^h$ as follows. We create two copies $T_s$ and $T_t$ of $T^h$. We add to $S^h$ the complete bipartite graph with sides $L(T_s)$ and $L(T_t)$, which we will call the bipartite core $B$ of $S^h$. Observe that $|L(T_s)| = |L(T_t)| = d^h$, and hence $B$ contains $d^{2h}$ edges. We will call $s = sr(S^h) = rt(T_s)$ the source of $S^h$, and $t = tg(S^h) = rt(T_t)$ its target. See Fig. 3b for an illustration.
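Continuing the sketch above, $S^h$ is then simply two such trees joined by a complete bipartite core on their leaf sets (a minimal illustration, reusing the hypothetical build_tree helper):

def build_S(h, d):
    # S^h = two copies of T^h plus the complete bipartite core B.
    Ts, s, leaves_s = build_tree(h, d)
    Tt, t, leaves_t = build_tree(h, d)
    S = nx.union(Ts, Tt)      # node sets are disjoint by construction
    S.add_edges_from((ls, lt) for ls in leaves_s for lt in leaves_t)
    return S, s, t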
Lemma 3.3. Every $2h$-FT $(s,t)$ preserver (and 1-additive $(s,t)$ spanner) $H$ for $S^h$ must contain each edge $e = (\ell_s, \ell_t) \in B$.

Proof. Assume that $e = (\ell_s, \ell_t) \notin H$ and consider the case where $F_{\ell_s}$ fails in $T_s$ and $F_{\ell_t}$ fails in $T_t$. Let $G' := S^h \setminus (F_{\ell_s} \cup F_{\ell_t})$, and let $d_s$ (resp., $d_t$) be the distance from $s$ to $\ell_s$ (resp., from $\ell_t$ to $t$) in $G'$. By Lemma 3.2.3 the shortest $s$-$t$ path in $G'$ passes through $e$ and has length $d_s + 1 + d_t$. By the same lemma, any path in $G'$, hence in $H' := H \setminus (F_{\ell_s} \cup F_{\ell_t})$, that does not pass through $\ell_s$ (resp., $\ell_t$) must have length at least $(d_s + 2) + 1 + d_t$ (resp., $d_s + 1 + (d_t + 2)$). On the other hand, any path in $H'$ that passes through $\ell_s$ and $\ell_t$ must use at least 3 edges of $B$, hence having length at least $d_s + 3 + d_t$.
Our lower bound graph $S_q^h$ (see also Fig. 3d) is obtained by taking $q$ copies $S_1, \ldots, S_q$ of the graph $S^h$ with $d = \left(\frac{n}{3q(h+1)}\right)^{\frac{1}{h+1}} - 1$, and chaining them with edges $(tg(S_i), sr(S_{i+1}))$, for $i = 1, \ldots, q-1$. We let $s = sr(S_1)$ and $t = tg(S_q)$.
Proof of Theorem 3.1. Consider $S_q^h$. By Lemma 3.2.1-2 this graph contains at most $n$ nodes, and the bipartite core of each $S_i$ contains $d^{2h} = \Omega\left(\left(\frac{n}{qh}\right)^{2-2/(h+1)}\right)$ edges.

Finally, we show that any $(2q-1)$-additive $(s,t)$ spanner needs to contain all the edges of at least one such bipartite core. Let us assume this does not happen, and let $e_i$ be a missing edge in the bipartite core of $S_i$ for each $i$. Observe that each $s$-$t$ shortest path has to cross $sr(S_i)$ and $tg(S_i)$ for all $i$. Therefore, it is sufficient to choose $2h$ faulty edges corresponding to each $e_i$ as in Lemma 3.3. This introduces an additive stretch of 2 in the distance between $s$ and $t$ for each $e_i$, leading to a total additive stretch of at least $2q$.
The same construction can also be extended to the setting of $(2h)$-FT $S \times T$ preservers. To do that, we make parallel copies of the $S^h$ graph. Details are given in Appendix B.4.
Improving over the Bipartite Core: The proof above only gives the trivial lower bound of $\Omega(n)$ for the case of two faults (using $h = q = 1$). We can strengthen the proof in this special case to show instead that $\Omega(n^{1+\varepsilon})$ edges are needed, and indeed this even holds in the presence of a polynomial additive stretch:

Theorem 3.4. A 2-FT distance preserver of a single $(s,t)$ pair in an undirected unweighted graph needs $\Omega(n^{11/10-o(1)})$ edges.
Theorem 3.5. There are absolute constants $\varepsilon, \delta > 0$ such that any $+n^\delta$-additive 2-FT preserver for a single $(s,t)$ pair in an undirected unweighted graph needs $\Omega(n^{1+\varepsilon})$ edges.

Finally, by tolerating one additional fault, we can obtain a strong incompressibility result:

Theorem 3.6. There are absolute constants $\varepsilon, \delta > 0$ such that any $+n^\delta$-additive 3-FT distance sensitivity oracle for a single $(s,t)$ pair in an undirected unweighted graph uses $\Omega(n^{1+\varepsilon})$ bits of space.
The proofs of Theorems 3.4, 3.5 and 3.6 are all given in Appendix B. The central technique in their proofs, however, is the same. The key observation is that the structure of $T_s, T_t$ allows us to use our faults to select leaves $\ell_s, \ell_t$ and enforce that a shortest $\ell_s$–$\ell_t$ path is kept in the graph. When we use a bipartite core between the leaves of $T_s$ and $T_t$, this “shortest path” is simply an edge, so the quality of our lower bound is equal to the product of the numbers of leaves in $T_s$ and $T_t$. However, sometimes a better graph can be used instead. In the case $h = 1$, we can use a nontrivial lower bound graph against (non-faulty) subset distance preservers (from [Bod17a]), which improves the cost per leaf pair from 1 edge to roughly $n^{1/10}$ edges, yielding Theorem 3.4. Alternatively, we can use a nontrivial lower bound graph against $+n^\delta$ spanners (from [AB16]), which implies Theorem 3.5. The proof of Theorem 3.6 is similar in spirit, but requires an additional trick in which unbalanced trees are used: we take $T_s$ as a copy of $T^1$ and $T_t$ as a copy of $T^2$, and this improved number of leaf pairs is enough to push the incompressibility argument through.
4 FT Pairwise Preservers for Weighted Graphs

We now turn to consider weighted graphs, for which the space requirements for FT $(s,t)$ preservers are considerably larger.
Theorem 4.1. For any undirected weighted graph G and pair of nodes (s, t), there is a 1-FT (s, t)
preserver with O(n) edges.
To prove Thm. 4.1, we first need:
Lemma 4.2. In an undirected weighted graph $G$, for any replacement path $P_{s,t,e}$ protecting against a single edge fault, there is an edge $(x, y) \in P_{s,t,e}$ such that there is no shortest path from $s$ to $x$ in $G$ that includes $e$, and there is no shortest path from $t$ to $y$ in $G$ that includes $e$.

Proof. Let $x$ be the furthest node from $s$ in $P_{s,t,e}$ such that there is no shortest path from $s$ to $x$ in $G$ that includes $e$. Note that if $x = t$ then there is no shortest path from $s$ to $t$ that uses $e$ and so the claim holds trivially. We can therefore assume $x \ne t$, and let $y$ be the node immediately following $x$ in $P_{s,t,e}$. It must then be the case that there is a shortest path from $s$ to $y$ that includes $e$.
Let $e = (u, v)$, with $dist(s, u) < dist(s, v)$. The shortest path from $s$ to $y$ that uses $e$ must then intersect $u$ before $v$, so we have $dist(u, y) > dist(v, y)$. Thus, any shortest path in $G$ beginning at $y$ that uses $(u, v)$ must intersect $v$ before $u$. However, we have $dist(u, t) > dist(v, t)$; therefore, any shortest path ending at $t$ that uses $(u, v)$ must intersect $u$ before $v$. It follows that any shortest path beginning at $y$ and ending at $t$ does not use $(u, v)$.
We can now prove:

Proof of Theorem 4.1. To construct the preserver, simply add shortest path trees rooted at $s$ and $t$ to the preserver. If the edge fault $e$ does not lie on the included shortest path from $s$ to $t$, then the structure is trivially a preserver. Thus, we may assume that $e$ is in $\pi(s, t)$. We now claim that, for some valid replacement path $P_{s,t,e}$ protecting against the fault $e$, all but one (or all) of the edges of $P_{s,t,e}$ are in the preserver. To see this, we invoke Lemma 4.2: there is an edge $(x, y)$ in $P_{s,t,e}$ such that no shortest path from $s$ to $x$ and no shortest path from $t$ to $y$ in $G$ uses $e$. Therefore, our shortest path trees rooted at $s$ and $t$ include a shortest path from $s$ to $x$ and from $t$ to $y$, and these paths are unaffected by the failure of $e$. Therefore, $P_{s,t,e}$ has all edges in the preserver, except possibly for $(x, y)$. There are at most $n$ edges on $\pi(s, t)$, so there are at most $n$ edge faults for which we need to include a replacement path in our preserver. We can thus complete the preserver by adding the single missing edge for each replacement path, paying at most $n$ extra edges.
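The following Python sketch (using networkx; the function name is ours) implements this construction. Lemma 4.2 guarantees that, for a suitable choice of replacement paths, at most one edge per fault is added beyond the two shortest path trees; the sketch below does not optimize that choice, so it is correct as a preserver but may add a few more edges than the $O(n)$ analysis requires.

import networkx as nx

def one_ft_st_preserver(G, s, t):
    # Shortest path trees rooted at s and t.
    H = set()
    for root in (s, t):
        pred, _ = nx.dijkstra_predecessor_and_distance(G, root)
        H.update(frozenset((v, ps[0])) for v, ps in pred.items() if ps)
    # One replacement path per edge fault on a fixed shortest s-t path pi(s, t).
    pi = nx.dijkstra_path(G, s, t)
    for u, v in zip(pi, pi[1:]):
        data = G[u][v].copy()
        G.remove_edge(u, v)
        try:
            P = nx.dijkstra_path(G, s, t)
            H.update(frozenset(edge) for edge in zip(P, P[1:]))
        except nx.NetworkXNoPath:
            pass
        finally:
            G.add_edge(u, v, **data)
    return H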
With a trivial union bound, we get that any set $P$ of node pairs can be preserved using $O(\min(n|P|, n^2))$ edges. It is natural to wonder if one can improve this union bound by doing something slightly smarter in the construction.
Theorem 4.3. For any integer $1 \le p \le n^2$, there exists an undirected weighted graph $G$ and a set $P$ of $p$ node pairs such that every 1-FT $P$-pairwise preserver of $G$ contains $\Omega(\min(np, n^2))$ edges.

Proof. We construct our lower bound instance by adapting the construction in Lemma 3.3. First, add a path of $n + 1$ nodes using edges of weight 1. Call the nodes on the path $p_1, \ldots, p_{n+1}$, and let $s = p_1$. Next, create $n$ new nodes $\{v_i\}$, and add an edge of weight 1 from $p_{n+1}$ to each $v_i$. Then, for each $i \in [1, p]$, add a new node $x_i$ to the graph, and connect $x_i$ to $p_i$ with an edge of weight $2(n - i) + 1$. Finally, for all $i \in [1, p]$, $j \in [1, n]$, add an edge of weight 1 between $x_i$ and $v_j$. Define the pair set $P$ to be $\{s\} \times \{v_i \mid i \in [1, p]\}$. Note that the graph has $\Theta(n)$ nodes and $\Omega(n|P|)$ edges, because there are exactly $n|P|$ edges between the nodes $\{x_i\}$ and $\{v_j\}$. We will complete the proof by arguing that all edges in $\{x_i\} \times \{v_j\}$ must be kept in the preserver. Specifically, we claim that for any $i, j$, the edge $(x_i, v_j)$ is needed to preserve the distance of the pair $(s, v_j) \in P$ when the edge $(p_i, p_{i+1})$ faults. To see this, note that any path from $s$ to $v_j$ must pass through some node $x_i$, and we have $dist(s, x_i) = (i - 1) + 2(n - i) + 1 = 2n - i$ for any $i$. Since $(p_i, p_{i+1})$ has faulted, the path from $s$ to $v_j$ must intersect $x_{i'}$ for some $i' \le i$ before it intersects $x_{i''}$ for any $i'' > i$. Therefore, the shortest $s$–$v_j$ path passes through $x_i$, and thus uses $(x_i, v_j)$.
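For concreteness, a sketch of this instance in Python follows; the tuple node names are illustrative and not part of the original construction.

import networkx as nx

def pairwise_lower_bound_graph(n, p):
    G = nx.Graph()
    path = [("p", i) for i in range(1, n + 2)]    # s = p_1, ..., p_{n+1}
    nx.add_path(G, path, weight=1)
    vs = [("v", j) for j in range(1, n + 1)]
    G.add_edges_from(((path[-1], v) for v in vs), weight=1)
    for i in range(1, p + 1):
        G.add_edge(("x", i), ("p", i), weight=2 * (n - i) + 1)
        G.add_edges_from(((("x", i), v) for v in vs), weight=1)
    pairs = [(path[0], ("v", j)) for j in range(1, p + 1)]
    return G, pairs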
We show that the situation dramatically changes for f = 2.
Theorem 4.4. There exists an undirected weighted graph $G$ and a single node pair $(s, t)$ in this graph such that every 2-FT $(s,t)$ preserver of $G$ requires $\Omega(n^2)$ edges. The same lower bound holds on the number of bits of space used by any exact distance sensitivity oracle in the same setting.
Proof. For the first claim, we construct our lower bound instance as follows. Build node-disjoint paths $P_s = (s = s_0, s_1, \ldots, s_{n-1})$ and $P_t = (t = t_0, t_1, \ldots, t_{n-1})$ of $n$ nodes each. All of the edges in these paths have weight zero (or a sufficiently small $\epsilon > 0$ will do). Next, we add a complete bipartite graph $X \times Y$ with edges of weight 1, where $X = \{x_0, \ldots, x_{n-1}\}$ and $Y = \{y_0, \ldots, y_{n-1}\}$ are new node sets of size $n$ each. Finally, for each $i \in \{0, \ldots, n-1\}$, we add edges $(s_i, x_i)$ and $(t_i, y_i)$ of weight $n - i$. See Fig. 2a for an illustration of this construction.

We now claim that every 2-FT $s$-$t$ preserver must include all edges of the bipartite graph $X \times Y$. In more detail, the edge $(x_i, y_j)$ is needed when the edges $e_{s_i} = (s_i, s_{i+1})$ and $e_{t_j} = (t_j, t_{j+1})$ fail. Indeed, there is a path of length $n - i + n - j + 1$ passing through $(x_i, y_j)$ in $G \setminus \{e_{s_i}, e_{t_j}\}$, and any other $s$-$t$ path has length at least $n - i + n - j + 2$. The first claim follows.

For the second claim, consider the same graph as before, but with possibly some missing edges in $X \times Y$. Consider any distance sensitivity oracle for this family of instances. By querying the $s$-$t$ distance for faults $(e_{s_i}, e_{t_j})$, one obtains $n - i + n - j + 1$ iff the edge $(x_i, y_j)$ is present in the input graph. This way it is possible to reconstruct the edges $E' \subseteq X \times Y$ of the input instance. Since there are $2^{n^2}$ possible input instances, the size of the oracle has to be $\Omega(n^2)$ bits.
We next consider the case of directed graphs. We split the proof of the corresponding bound into the next two lemmas. Let $DP(n)$ denote the worst-case sparsity of a (non-FT) preserver of $n$ node pairs in a directed weighted graph. That is, for any directed weighted $n$-node graph $G$ and set $P$ of $|P| = n$ node pairs, there exists a distance preserver of $G, P$ on at most $DP(n)$ edges, yet there exists a particular $G, P$ for which every distance preserver requires $DP(n)$ edges.
Lemma 4.5. Given any $s$-$t$ pair in a directed weighted graph, there is a 1-FT $s$-$t$ preserver whose sparsity is $O(DP(n))$.

Proof. Add a shortest path $\pi(s, t)$ to the preserver, and note that we only need replacement paths in our preserver for edge faults $e$ on the path $\pi(s, t)$. There are at most $n - 1$ such edges; thus, the preserver is the union of at most $n - 1$ replacement paths. For each replacement path $P_{s,t,e}$, note that the path is disjoint from $\pi(s, t)$ only on one contiguous subpath. Let $a, b$ be the endpoints of this subpath. Then $P_{s,t,e}[a \rightsquigarrow b]$ is a shortest path in the graph $G \setminus \pi(s, t)$, and all other edges in $P_{s,t,e}$ belong to $\pi(s, t)$. Therefore, if we include in the preserver all edges in a shortest path from $a$ to $b$ in $G \setminus \pi(s, t)$, then we have included a valid replacement path protecting against the edge fault $e$. By applying this logic to each of the $n - 1$ possible edge faults on $\pi(s, t)$, we can protect against all possible edge faults by building any preserver of $n - 1$ node pairs in the graph $G \setminus \pi(s, t)$.
Lemma 4.6. There is a directed weighted graph $G$ and a node pair $s$-$t$ such that any 1-FT $s$-$t$ preserver requires $\Omega(DP(n))$ edges.

Proof. Let $K$ be a directed graph on $O(n)$ nodes with nonnegative edge weights, and let $P = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ be a set of node pairs of size $n$ such that the sparsest preserver of $K, P$ has $\Omega(DP(n))$ edges.

Add to $K$ a directed path $Y := (s \to a_1 \to b_1 \to a_2 \to b_2 \to \cdots \to a_n \to b_n \to t)$ on $2n + 2$ new nodes. All edges in $Y$ have weight 0 (or a sufficiently small $\epsilon > 0$ will do). All other edge weights in the graph will be nonnegative, so $Y$ is the unique shortest path from $s$ to $t$.
Figure 2: Lower bounds for weighted graphs. (a) Lower bound example for a single pair and two faults. When the $i$th edge on the $s$-path fails, the new shortest path to $t$ must go through $x_i$. Similarly, when the $j$th edge on the $t$-path fails, the shortest path to $s$ must go through $y_j$. Hence, the shortest path in $G \setminus \{e_i, e_j\}$ uses the edge $(x_i, y_j)$. (b) Lower bound construction for a single pair in the directed weighted case. Here, $K$ is an arbitrary lower bound graph for (non-FT) distance preservers of $n$ pairs in the directed weighted setting, which can be modularly substituted into our construction.
Let $M$ be the largest edge weight in $K$, and let $W = M \cdot n$. Note that $W$ is larger than the weight of any shortest path in $K$. Now, for each $i \in [1, n]$, add an edge from $a_i$ to $x_i$ of weight $(n - i)W$ and add an edge from $y_i$ to $b_i$ of weight $iW$. See Fig. 2b for an illustration. This completes the construction of $G$. There are $O(n)$ nodes in $G$, and so it suffices to show that all edges in a preserver of $K, P$ must remain in a 1-FT $s$-$t$ preserver for $G$.

Let $(a_i \to b_i)$ be an edge on the path $Y$. If this edge faults, then the new shortest path from $s$ to $t$ has the following general structure: the path travels from $s$ to $a_j$ for some $j \le i$, then it travels to $x_j$, then it travels a shortest path from $x_j$ to $y_k$ (for some $k \ge i$) in $K$, then it travels from $y_k$ to $b_k$, and finally it travels from $b_k$ to $t$. The length of this path is then
$$(n - j)W + dist_K(x_j, y_k) + kW = nW + (k - j)W + dist_K(x_j, y_k).$$
Suppose that $k - j \ge 1$. Then the weight of the detour is at least $(n + 1)W$ (as all distances in $K$ are nonnegative). On the other hand, if $k = j$ (and hence $k = j = i$), the weight of the detour is
$$nW + dist_K(x_i, y_i) < (n + 1)W,$$
because we have $W > dist_K(x_i, y_i)$. Thus, any valid replacement path $P_{s,t,(a_i,b_i)}$ travels from $s$ to $a_i$, then from $a_i$ to $x_i$, then along some shortest path in $K$ from $x_i$ to $y_i$, then from $y_i$ to $b_i$, and finally from $b_i$ to $t$.

Hence, any 1-FT $s$-$t$ preserver includes a shortest path from $x_i$ to $y_i$ in $K$ for all $i \in [1, n]$. Therefore, the number of edges in this preserver is at least the optimal number of edges in a preserver of $K, P$; i.e., $DP(n)$ edges. Since $G$ has $O(n)$ nodes, the lemma follows.
5 Open Problems
There are many open ends to be closed. Perhaps the main open problem is to resolve the current gap for $f$-FT single-source preservers. Since the lower bound of $\Omega(n^{2-1/(f+1)})$ edges given in [Par15] has been shown to be tight for $f \in [1, 2]$, it is reasonable to believe that this is the right bound for $f \ge 3$. Another interesting open question involves lower bounds for FT additive spanners. Our lower bounds are superlinear only for $f \ge 2$. The following basic question is still open, though: is there a lower bound of $\Omega(n^{3/2+\epsilon})$ edges for some $\epsilon \in (0, 1]$ for 2-additive spanners with one fault? Whereas our lower bound machinery can be adapted to provide nontrivial bounds for different types of $f$-FT $P$-preservers (e.g., $P = \{s, t\}$, $P = S \times T$, etc.), our upper bound technique for general $f \ge 2$ is still limited to the sourcewise setting. Specifically, it is not clear how to construct an $f$-FT $S \times S$ preserver other than taking the (perhaps wasteful) $f$-FT $S$-sourcewise preserver. As suggested by our lower bounds, these questions are interesting already for a single pair.
References

[AB16] A. Abboud and G. Bodwin. The 4/3 additive spanner exponent is tight. In STOC, pages 351–361, 2016.
[ABP17] A. Abboud, G. Bodwin, and S. Pettie. A hierarchy of lower bounds for sublinear additive spanners. In SODA, pages 568–576, 2017.
[ACIM99] D. Aingworth, C. Chekuri, P. Indyk, and R. Motwani. Fast estimation of diameter and shortest paths (without matrix multiplication). SIAM J. Comput., 28(4):1167–1181, 1999.
[ADD+93] I. Althöfer, G. Das, D. Dobkin, D. Joseph, and J. Soares. On sparse spanners of weighted graphs. Discrete & Computational Geometry, 9(1):81–100, 1993.
[BCPS15] G. Braunschvig, S. Chechik, D. Peleg, and A. Sealfon. Fault tolerant additive and (µ, α)-spanners. Theor. Comput. Sci., 580:94–100, 2015.
[BCR16] S. Baswana, K. Choudhary, and L. Roditty. Fault tolerant subgraph for single source reachability: generic and optimal. In STOC, pages 509–518, 2016.
[BGG+15] D. Bilò, F. Grandoni, L. Gualà, S. Leucci, and G. Proietti. Improved purely additive fault-tolerant spanners. In ESA, pages 167–178, 2015.
[BGLP14] D. Bilò, L. Gualà, S. Leucci, and G. Proietti. Fault-tolerant approximate shortest-path trees. In ESA, pages 137–148, 2014.
[BK09] A. Bernstein and D. R. Karger. A nearly optimal oracle for avoiding failed vertices and edges. In STOC, pages 101–110, 2009.
[Bod17a] G. Bodwin. Linear size distance preservers. In SODA, pages 600–615, 2017.
[Bod17b] G. Bodwin. Linear size distance preservers. In SODA, pages 600–615, 2017.
[BTMP05] S. Baswana, T. Kavitha, K. Mehlhorn, and S. Pettie. New constructions of (α, β)-spanners and purely additive spanners. In SODA, pages 672–681, 2005.
[BW16] G. Bodwin and V. Vassilevska Williams. Better distance preservers and additive spanners. In SODA, pages 855–872, 2016.
[CE06] D. Coppersmith and M. Elkin. Sparse sourcewise and pairwise distance preservers. SIAM Journal on Discrete Mathematics, 20(2):463–501, 2006.
[Che13] S. Chechik. New additive spanners. In SODA, pages 498–512, 2013.
[CLPR09] S. Chechik, M. Langberg, D. Peleg, and L. Roditty. Fault-tolerant spanners for general graphs. In STOC, pages 435–444, 2009.
[CP10] S. Chechik and D. Peleg. Rigid and competitive fault tolerance for logical information structures in networks. In 2010 IEEE 26th Convention of Electrical and Electronics Engineers in Israel (IEEEI), pages 000024–000025. IEEE, 2010.
[CZ04] A. Czumaj and H. Zhao. Fault-tolerant geometric spanners. Discrete & Computational Geometry, 32(2):207–230, 2004.
[DK11] M. Dinitz and R. Krauthgamer. Fault-tolerant spanners: better and simpler. In PODC, pages 169–178, 2011.
[DP09] R. Duan and S. Pettie. Dual-failure distance and connectivity oracles. In SODA, pages 506–515, 2009.
[DP17] R. Duan and S. Pettie. Connectivity oracles for graphs subject to vertex failures. In SODA, pages 490–509, 2017.
[DTCR08] C. Demetrescu, M. Thorup, R. A. Chowdhury, and V. Ramachandran. Oracles for distances avoiding a failed node or link. SIAM Journal on Computing, 37(5):1299–1318, 2008.
[EP04] M. Elkin and D. Peleg. (1+ε, β)-spanner constructions for general graphs. SIAM J. Comput., 33(3):608–631, 2004.
[Erd63] P. Erdős. Extremal problems in graph theory. In Theory of Graphs and its Applications (Proc. Sympos. Smolenice, 1963), pages 29–36, 1963.
[GW12] F. Grandoni and V. Vassilevska Williams. Improved distance sensitivity oracles via fast single-source replacement paths. In FOCS, pages 748–757, 2012.
[LNS02] C. Levcopoulos, G. Narasimhan, and M. Smid. Improved algorithms for constructing fault-tolerant spanners. Algorithmica, 32(1):144–156, 2002.
[Luk99] T. Lukovszki. New results on fault tolerant geometric spanners. In Algorithms and Data Structures, pages 193–204. Springer, 1999.
[MMG89] K. Malik, A. K. Mittal, and S. K. Gupta. The k most vital arcs in the shortest path problem. Operations Research Letters, 8(4):223–227, 1989.
[Par14] M. Parter. Vertex fault tolerant additive spanners. In Distributed Computing, pages 167–181. Springer, 2014.
[Par15] M. Parter. Dual failure resilient BFS structure. In PODC, pages 481–490, 2015.
[Pet09] S. Pettie. Low distortion spanners. ACM Transactions on Algorithms, 6(1), 2009.
[PP13] M. Parter and D. Peleg. Sparse fault-tolerant BFS trees. In ESA, pages 779–790, 2013.
[PP14] M. Parter and D. Peleg. Fault tolerant approximate BFS structures. In SODA, pages 1073–1092, 2014.
[RZ12] L. Roditty and U. Zwick. Replacement paths and k simple shortest paths in unweighted directed graphs. ACM Transactions on Algorithms, 8(4):33, 2012.
[TZ06] M. Thorup and U. Zwick. Spanners and emulators with sublinear distance errors. In SODA, pages 802–809, 2006.
[VW11] V. Vassilevska Williams. Faster replacement paths. In SODA, pages 1337–1346, 2011.
[Woo10] D. P. Woodruff. Additive spanners in nearly quadratic time. In ICALP, pages 463–474, 2010.
[WY13] O. Weimann and R. Yuster. Replacement paths and distance sensitivity oracles via fast matrix multiplication. ACM Transactions on Algorithms, 9(2):14, 2013.
Figure 3: Illustration of the lower bound constructions. (a) Structure of the tree $T^h$. (b) The graph $S^h$ used to provide a lower bound for an $s$-$t$ pair and 1-additive stretch. (c) Lower bound for $S \times T$ preservers (and 1-additive spanners) for the case $f = 2$. (d) The graph $S_q^h$ used to provide a lower bound for additive stretch $\beta = 2q - 1$ already for a single pair $s$-$t$.
A Tables

f-VFT (or EFT) Preservers                         | Upper Bound                                        | Lower Bound
$S$-sourcewise (unweighted)                       | $\tilde{O}(f|S|^{1/2^f} \cdot n^{2-1/2^f})$ (New)  | $\Omega(|S|^{1/(f+1)} \cdot n^{2-1/(f+1)})$ [Par15]
pairs $P$, $f = 1$ (weighted)                     | $O(\min\{n^2, n|P|\})$ (New)                       | $\Omega(\min\{n^2, n|P|\})$ (New)
single pair $s$-$t$, $f = 1$ (weighted+directed)  | $O(DP(n)) = O(n^{3/2})$ (New)                      | $\Omega(DP(n)) = \Omega(n^{4/3})$ (New)
B Omitted Details of Section 3

B.1 Properties of $T^h$

We next prove Lemma 3.2.
Lemma B.1. $\ell(h) = 3((d+1)^h - 1)$ and $n(h) \le \frac{3}{2}(h+1)(d+1)^{h+1}$.

Proof. Let us prove the first claim by induction on $h$. The claim is trivially true for $h = 0$. Next suppose it holds up to $h - 1 \ge 0$, and let us prove it for $h$. All the subtrees $T_j^{h-1}$ used in the construction of $T^h$ have the same height $\ell(h-1)$, which is $3((d+1)^{h-1} - 1)$ by the inductive hypothesis. The distance between $rt(T^h)$ and $rt(T_j^{h-1})$ is $j + (d-j)(\ell(h-1) + 3)$, which is a decreasing function of $j$. In particular, the maximum such distance is the one to $rt(T_0^{h-1})$, which is $d(\ell(h-1) + 3)$. We conclude that
$$\ell(h) = \ell(h-1) + d(\ell(h-1) + 3) = (d+1) \cdot 3((d+1)^{h-1} - 1) + 3d = 3(d+1)^h - 3.$$
The second claim is trivially true for $h = 0$. The number of nodes in $T^h$ is given by $d$ times the number of nodes in $T^{h-1}$, plus the sum of the lengths of the paths connecting each $v_j$ to $rt(T_j^{h-1})$, i.e.,
$$n(h) = d \cdot n(h-1) + \sum_{j=0}^{d-1}(d-j)(\ell(h-1)+3) \overset{\text{first claim}}{=} d \cdot n(h-1) + 3(d+1)^{h-1} \cdot \frac{d(d+1)}{2} \overset{\text{induct. hypoth.}}{\le} \frac{3}{2}\, d \cdot h(d+1)^h + \frac{3}{2}\, d(d+1)^h = \frac{3}{2}(h+1)d(d+1)^h \le \frac{3}{2}(h+1)(d+1)^{h+1}.$$
We next need a more technical lemma which will be useful to analyze the stretch. An easy inductive proof shows that $|L(T^h)| = d^h$. It is convenient to sort these leaves from left to right using the following inductive process. The base case is that $T^0$ has a unique leaf (the root), which obviously has a unique ordering. For the inductive step, given the sorting of the leaves of $T^{h-1}$, the sorting for $T^h$ is achieved by placing all the leaves in the subtree $T_j^{h-1}$ to the left of the leaves of the subtree $T_{j+1}^{h-1}$, $j = 0, \ldots, d-2$ (the leaves of each $T_j^{h-1}$ are then sorted recursively). Given this sorting, we will name the leaves of $T^h$ (from left to right) $\ell_0^h, \ldots, \ell_{d^h-1}^h$.

For each leaf $\ell_j^h$, we next recursively define a subset of at most $h$ (faulty) edges $F_j^h$. Intuitively, these are edges that we can remove to make $\ell_j^h$ the closest leaf to the root. We let $F_j^0 = \emptyset$. Suppose that $\ell_j^h$ is the $r$-th leaf from left to right of $T_t^{h-1}$ (in zero-based notation). Then $F_j^h$ is given by the edges of type $F_r^{h-1}$ in $T_t^{h-1}$, plus the edge $(v_t, v_{t+1})$ if $t < d - 1$. Note that obviously $|F_j^h| \le h$.
Lemma B.2. One has that $\ell_j^h$ is the leaf at minimum finite distance $d^0 := dist_{T^h - F_j^h}(rt(T^h), \ell_j^h)$ from $rt(T^h)$ in $T^h - F_j^h$, and any other leaf in $L(T^h) - \{\ell_j^h\}$ is at distance at least $d^0 + 2$ from $rt(T^h)$.
Proof. Once again the proof is by induction. Let $r_h := rt(T^h)$. The claim is trivially true for $h = 0$ since there is a unique leaf $\ell_0^0 = r_0$. Next assume the claim is true up to $h - 1 \ge 0$, and consider $T^h$. Consider any leaf $\ell_j^h$, and with the same notation as before assume that it is the $r$-th leaf from left to right of $T_t^{h-1}$. Observe that by removing edge $(v_t, v_{t+1})$ we disconnect from $r_h$ all nodes in subtrees $T_{t'}^{h-1}$ with $t' > t$. In particular the distances from $r_h$ to the leaves of those subtrees become unbounded. Next consider a leaf $\ell'$ in a tree $T_{t'}^{h-1}$ with $t' < t$. By construction we have that
$$dist_{G-F_j^h}(r_h, \ell') \ge dist_G(r_h, \ell') \ge dist_G(r_h, rt(T_{t'}^{h-1})) = t' + (d-t')(\ell(h-1)+3) \ge (t-1) + (d-t+1)(\ell(h-1)+3) = t + (d-t)(\ell(h-1)+3) + \ell(h-1) + 2.$$
On the other hand, any leaf in $L(T_t^{h-1})$ which is still connected to $rt(T_t^{h-1})$ has distance at most $t + (d-t)(\ell(h-1)+3) + \ell(h-1)$ from $r_h$. Recall that we are removing the edges of type $F_r^{h-1}$ from $T_t^{h-1}$. Note also that $\ell_j^h$ corresponds to the leaf $\ell_r^{h-1}$ in $T_t^{h-1}$. Hence, by the inductive hypothesis, $\ell_j^h$ is the leaf of $T_t^{h-1}$ at minimum finite distance $d^0$ from $rt(T_t^{h-1})$, and any other such leaf is at distance at least $d^0 + 2$ from $rt(T_t^{h-1})$. The claim follows.
Proof of Lemma 3.2. Claim 1 is given by Lemma B.1, Claim 2 by a trivial induction, and Claim 3
by Lemma B.2.
B.2 Improvement with Preserver Lower Bounds

We next prove Theorem 3.4. We need the following technical lemma.
Lemma B.3 ([Bod17a]). For all $n$, there is an undirected unweighted bipartite graph $G = (V, E)$ on $n$ nodes and $\Omega(n^{11/10-o(1)})$ edges, as well as disjoint node subsets $S, T \subseteq V$ with $|S| = |T| = \Theta(n^{1/2})$ such that the following properties hold:
• For each edge $e \in E$, there is a pair of nodes $s \in S$, $t \in T$ with $dist(s, t) = L$ (for some parameter $L$) such that every shortest $(s, t)$ path includes $e$.
• For all $s \in S$, $t \in T$, we have $dist(s, t) \ge L$.
The construction for Theorem 3.4 proceeds as follows. By Lemma B.1, the number of leaves in $T^1$ is $\ell = \Theta(d)$ and the number of nodes in $T^1$ is $n = \Theta(d^2)$, so we have $\ell = \Theta(n^{1/2})$. As before, let $T_s, T_t$ be copies of $T^1$ rooted at $s, t$ respectively. Now, let $H$ be a graph drawn from Lemma B.3, with node subsets $S, T$, where the number of nodes $n_H$ is chosen such that $|S| = |T| = |\ell(s)| = |\ell(t)|$. We add a copy of $H$ to the graph $T_s \cup T_t$, where $\ell(s)$ is used as the node set $S$, $\ell(t)$ is used as the node set $T$, and $O(n)$ new nodes are introduced to serve as the remaining nodes in $H$. Note that the new graph $G = T_s \cup T_t \cup H$ now has $N = \Theta(n)$ nodes, so it (still) has $N^{11/10-o(1)}$ edges in its internal copy of $H$.
Lemma B.4. In $G$, we have:
• For each edge $e$ in the internal copy of $H$, there exist nodes $u \in \ell(s) = S$, $v \in \ell(t) = T$ with $dist(u, v) = L$ such that every shortest $(u, v)$ path (in $G$) includes $e$.
• For all $u \in \ell(s) = S$, $v \in \ell(t) = T$, we have $dist(u, v) \ge L$.
Proof. First, we observe that if any shortest $(u, v)$ path $\pi(u, v)$ in $G$ contains a node $x$ not in the internal copy of $H$, then we have $dist(u, v) \ge L + 1$. To see this, note that by construction any such path must contain a subpath $\pi(u', v') \subseteq H$ between nodes $u' \in S$, $v' \in T$. By Lemma B.3 this subpath has length at least $L$. Since the path $\pi(u, v)$ contains $x \notin \pi(u', v')$, we then have $|\pi(u, v)| \ge L + 1$.

The second point in this lemma is now immediate: if $\pi(u, v)$ is contained in $H$ then we have $dist(u, v) \ge L$ from Lemma B.3; if $\pi(u, v)$ is not contained in $H$ then we have $dist(u, v) \ge L + 1$ from the above argument. For the first point, note that by Lemma B.3, there is a pair $u \in S$, $v \in T$ such that $dist_H(u, v) = L$ and every shortest $(u, v)$ path in $H$ includes $e$. Since we then have $dist_G(u, v) \le L$, by the above it follows that $\pi(u, v)$ is contained in $H$, and the lemma follows.
We can now show:
Proof of Theorem 3.4. Let $e$ be any edge in the internal copy of $H$ in $G$. By Lemma B.4, there is a pair of nodes $u \in \ell(s) = S$, $v \in \ell(t) = T$ such that every $(u, v)$ shortest path in $G$ includes $e$. Also, by Lemma B.2, there are faults $f_1 \in T_s$, $f_2 \in T_t$ such that $u$ is the leaf of $T_s$ at minimum distance $d^0_s$ from $s$ in $T_s \setminus \{f_1\}$, and $v$ is the leaf of $T_t$ at minimum distance $d^0_t$ from $t$ in $T_t \setminus \{f_2\}$.

Thus, under fault set $\{f_1, f_2\}$, we have
$$dist_{G \setminus \{f_1, f_2\}}(s, t) \le d^0_s + L + d^0_t,$$
since one possible $(s, t)$ path is obtained by walking a shortest path from $s$ to $u$, then from $u$ to $v$, then from $v$ to $t$. Moreover, any $(s, t)$ path $Q$ in $G \setminus \{f_1, f_2\}$ that does not include $u$ (or $v$) must include some other leaf $u' \ne u$ in $\ell(s)$, so it has length at least
$$|Q| \ge dist_{G \setminus \{f_1, f_2\}}(s, u') + dist_{G \setminus \{f_1, f_2\}}(u', v') + dist_{G \setminus \{f_1, f_2\}}(v', t)$$
(for some leaf $v' \in \ell(t)$, possibly equal to $v$). By Lemmas B.2 and B.4, this implies
$$|Q| \ge (d^0_s + 2) + L + d^0_t > dist_{G \setminus \{f_1, f_2\}}(s, t).$$
Therefore $Q$ is a non-shortest path, and so every shortest $(s, t)$ path in $G \setminus \{f_1, f_2\}$ includes nodes $u$ and $v$, and so (by Lemma B.4) it includes the edge $e$. We then cannot remove the edge $e$ without destroying all shortest $(s, t)$ paths in $G \setminus \{f_1, f_2\}$, so all $N^{11/10-o(1)}$ edges in the internal copy of $H$ must be kept in any 2-FT distance preserver of $(s, t)$.
Remark B.5. It is natural to expect that a similar improvement to the lower bound may be possible for $h > 1$; that is, we could imagine again augmenting the bipartite core with a subset preserver lower bound. While it is conceivable that this technique may eventually be possible, it currently does not work: for $h > 1$ we have $|\ell(s)| = \Omega(n(s)^{2/3})$, and it is currently open to exhibit a lower bound $G = (V, E), S$ for subset preservers in which $|S| = \Omega(n^{2/3})$ and $|E| = \omega(|S|^2)$. In other words, for $h > 1$, the current best known subset preserver lower bound that could be used is just a complete bipartite graph, and so we can do no better than the construction using bipartite cores.
B.3 Improvement with Spanner Lower Bounds

We next prove Theorem 3.5. Our new “inner graph” that replaces the bipartite core is drawn from the following lemma⁷:
Lemma B.6 ([AB16]). There are absolute constants $\varepsilon, \delta > 0$, a family of $n$-node graphs $G = (V, E)$, node subsets $S, T \subseteq V$ of size $|S| = |T| = \Theta(n^{1/2-\delta})$, and a set $P \subseteq S \times T$ such that any subgraph $H \subseteq G$ on $o(n^{1+\varepsilon})$ edges has $dist_H(s, t) > dist_G(s, t) + n^\delta$ for some $(s, t) \in P$. Moreover, we have $dist_G(s, t) = L$ for all $(s, t) \in P$, and $dist_G(s, t) \ge L$ for all $(s, t) \in S \times T$.
We now describe our construction. First, as before, we take trees $T_s, T_t$ which are copies of $T^1$ rooted at $s, t$ respectively. We now label the leaves of $T_s$ (and $T_t$) as good leaves or bad leaves using the following iterative process. Arbitrarily select a leaf $\ell$ and label it a good leaf. Next, for all leaves $\ell'$ satisfying $dist_{T_s}(s, \ell') \in [dist_{T_s}(s, \ell) - n^\delta, dist_{T_s}(s, \ell) + n^\delta]$, we label $\ell'$ a bad leaf. We then arbitrarily select another good leaf from among the unlabelled leaves, and repeat until all leaves have a label. Note that we have $\Theta(n^{1/2})$ leaves of $T_s$; by construction $dist_{T_s}(s, \ell) \ne dist_{T_s}(s, \ell')$ for any two leaves $\ell \ne \ell'$, and so the total number of good leaves is $\Theta(n^{1/2-\delta})$.
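A sketch of this labeling in Python follows; it processes leaves in increasing distance from $s$, which realizes one admissible outcome of the arbitrary selection, and the parameter window plays the role of $n^\delta$.

def good_leaves(dist_to_leaf, window):
    # dist_to_leaf: map leaf -> dist_{T_s}(s, leaf); all values are distinct here.
    good = []
    for leaf in sorted(dist_to_leaf, key=dist_to_leaf.get):
        if not good or dist_to_leaf[leaf] - dist_to_leaf[good[-1]] > window:
            good.append(leaf)   # every skipped leaf lies within the window: bad
    return good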
⁷The result proved in [AB16] is more general than this one; the parameters have been instantiated in this statement to suit our purposes.
Lemma B.7 (Compare to Lemma B.2). Any good leaf $\ell_j^h$ is the leaf at minimum finite distance $d^0 := dist_{T^h - F_j^h}(rt(T^h), \ell_j^h)$ from $rt(T^h)$ in $T^h - F_j^h$, and any other good leaf in $L(T^h) - \{\ell_j^h\}$ is at distance at least $d^0 + n^\delta$ from $rt(T^h)$.

Proof. Immediate from Lemma B.2 and the selection of good leaves.
We insert a graph $H$ drawn from Lemma B.6 into the graph $T_s \cup T_t$, using the good leaves of $T_s$ as the set $S$ and the good leaves of $T_t$ as the set $T$ (as before, all other nodes in $H$ are newly added to the graph in this step). Note that the final graph $T_s \cup H \cup T_t$ still has $N = \Theta(n)$ nodes. This completes the construction.

We now argue correctness, i.e., we show that one cannot sparsify the final graph to $o(N^{1+\varepsilon})$ edges without introducing $+n^\delta$ error in the $s$-$t$ distance for some well-chosen set of two faults. The proof is essentially identical to the one used above, but we repeat it for completeness.
Proof of Theorem 3.5. Let $G' = (V, E')$ be a subgraph of the final graph $G = (V, E)$ with $|E'| = o(N^{1+\varepsilon})$ edges. By Lemma B.6, there is a pair of good leaves $(\ell_s, \ell_t) \in P$ such that
$$dist_{G'[H]}(\ell_s, \ell_t) > L + n^\delta$$
(where $G'[H]$ denotes the copy of $H$ in $G'$). By Lemma B.7, there are faults $f_1, f_2$ such that $\ell_s$ is the leaf at minimum distance $d^0_s$ from $s$, $\ell_t$ is the leaf at minimum distance $d^0_t$ from $t$, $(\ell_s, \ell_t) \in P$, and the distance from $s$ (resp. $t$) to any other good leaf $\ell'_s \ne \ell_s$ (resp. $\ell'_t \ne \ell_t$) is at least $d^0_s + n^\delta$ (resp. $d^0_t + n^\delta$). Thus, we have
$$dist_{G \setminus \{f_1, f_2\}}(s, t) \le d^0_s + L + d^0_t.$$
We now lower bound this distance in $G'$. As before, there are two cases: either the shortest $(s, t)$ path in $G'$ traverses a shortest $(\ell_s, \ell_t)$ path in $G'[H]$, or it does not. If so, then we have
$$dist_{G' \setminus \{f_1, f_2\}}(s, t) \ge d^0_s + (L + n^\delta) + d^0_t \ge dist_{G \setminus \{f_1, f_2\}}(s, t) + n^\delta,$$
and so $G'$ is not a 2-FT $+(n^\delta - 1)$ $(s,t)$ preserver of $G$, and the theorem follows. Otherwise, if the shortest $(s, t)$ path in $G'$ does not traverse a shortest $(\ell_s, \ell_t)$ path in $G'[H]$, then by construction it passes through (w.l.o.g.) some good leaf $\ell'_s \ne \ell_s$ in $T_s$ and some good leaf $\ell'_t$ in $T_t$ (where $\ell'_t$ is possibly equal to $\ell_t$). We then have
$$dist_{G' \setminus \{f_1, f_2\}}(s, t) \ge dist_{G' \setminus \{f_1, f_2\}}(s, \ell'_s) + dist_{G \setminus \{f_1, f_2\}}(\ell'_s, \ell'_t) + dist_{G' \setminus \{f_1, f_2\}}(\ell'_t, t) \ge (d^0_s + n^\delta) + L + d^0_t \ge dist_{G \setminus \{f_1, f_2\}}(s, t) + n^\delta,$$
and the theorem follows.
Finally, by tolerating one additional fault, we can obtain a strong incompressibility result, hence proving Theorem 3.6. The proof is nearly identical to the above, but there are two key differences. First, we use unbalanced trees: $T_s$ is a copy of $T^1$ while $T_t$ is a copy of $T^2$. Hence, there are three total faults in the sets $F^s_{\ell_s} \cup F^t_{\ell_t}$ used to “select” the appropriate leaves $\ell_s, \ell_t$. We define good leaves exactly as before. We use a slightly different lemma for our inner graph (which is proved using the same construction):
Lemma B.8 ([AB16]). There is an absolute constant $\delta > 0$, a family of $n$-node graphs $G = (V, E)$, node subsets $S, T \subseteq V$ of size $|S| = \Theta(n^{1/2-\delta})$, $|T| = \Theta(n^{2/3-\delta})$, and a set $P \subseteq S \times T$ of size $|P| = |S||T|n^{-o(1)}$ with the following property: for each pair $(s, t) \in P$, we may assign a set of edges in $G$ to the pair such that (1) no edge is assigned to two or more pairs, and (2) if all edges assigned to a pair $(s, t)$ are removed from $G$, then $dist(s, t)$ increases by $+n^\delta$. Moreover, $dist(s, t) = L$ for all $(s, t) \in P$, and $dist(s, t) \ge L$ for all $(s, t) \in S \times T$.
Proof of Theorem 3.6. Let $G, P$ be a graph and pair set drawn from Lemma B.8. Define a family of $2^{|P|}$ subgraphs by independently keeping or removing all edges assigned to each pair in $P$. We will argue that any $(n^\delta/2)$-additive distance sensitivity oracle must use a different representation for each such subgraph, and thus, $|P| = \Omega(n^{1+\varepsilon})$ bits of space are required in the worst case.

Suppose towards a contradiction that a distance sensitivity oracle uses the same space representation for two such subgraphs $G_1, G_2$, and let $(\ell_s, \ell_t) \in P$ be a pair whose assigned edges are kept in $G_1$ but removed in $G_2$. By an identical argument to the one used in Theorem 3.5, we have
$$dist_{G_1 \setminus \{f_1, f_2, f_3\}}(s, t) \le d^0_s + L + d^0_t$$
for the fault set $\{f_1, f_2, f_3\} = F^s_{\ell_s} \cup F^t_{\ell_t}$ (where $d^0_s, d^0_t$ are defined exactly as before). Meanwhile, also by the same argument used in Theorem 3.5, we have
$$dist_{G_2 \setminus \{f_1, f_2, f_3\}}(s, t) \ge d^0_s + L + d^0_t + n^\delta \ge dist_{G_1 \setminus \{f_1, f_2, f_3\}}(s, t) + n^\delta.$$
Since $G_1, G_2$ are stored identically by the distance sensitivity oracle, it must answer the query $\{f_1, f_2, f_3\}$ identically for both graphs. However, since the right answer differs by $+n^\delta$ from $G_1$ to $G_2$, it follows that the oracle will have at least $n^\delta/2 - 1$ error on one of the two instances.
B.4 Lower Bound for S × T Preservers

Theorem B.9. For every positive integer $f$, there exists a graph $G = (V, E)$ and subsets $S, T \subseteq V$, such that every $(2f)$-FT 1-additive $S \times T$ spanner (hence $S \times T$ preserver) of $G$ has size $\Omega(|S|^{1/(f+1)} \cdot |T|^{1/(f+1)} \cdot (n/f)^{2-2/(f+1)})$.
Proof. The graph $G$ is constructed as follows (see also Figure 3c). For each $s_i \in S$, we construct a copy $T_{s_i}$ of $T^f$ rooted at $s_i$ with size parameter
$$d_S = \left(\frac{n}{3(f+1)|S|}\right)^{\frac{1}{f+1}} - 1.$$
Similarly, for each $t_j \in T$, we construct a copy $T_{t_j}$ of $T^f$ rooted at $t_j$ with size parameter
$$d_T = \left(\frac{n}{3(f+1)|T|}\right)^{\frac{1}{f+1}} - 1.$$
Finally, we add a complete bipartite graph between the leaves of each $T_{s_i}$ and the leaves of each $T_{t_j}$. We call the edges of the last type the bipartite core of $G$.
Note that by Lemma B.1 the total number of nodes is $n$. Furthermore, the bipartite core has size
$$|S||T| \, d_S^f d_T^f = \Omega\left(|S||T|\left(\frac{n^2}{9(f+1)^2|S||T|}\right)^{\frac{f}{f+1}}\right) = \Omega\left(|S|^{\frac{1}{f+1}} |T|^{\frac{1}{f+1}} \left(\frac{n}{f}\right)^{2-\frac{2}{f+1}}\right).$$
The rest of the proof follows along the same lines as in Lemma 3.3: given any edge $e = (\ell_{s_i}, \ell_{t_j})$ between a leaf $\ell_{s_i}$ of $T_{s_i}$ and a leaf $\ell_{t_j}$ of $T_{t_j}$, removing $e$ would cause an increase of the stretch between $s_i$ and $t_j$ by at least an additive 2 for a proper choice of $f$ faults in each of $T_{s_i}$ and $T_{t_j}$.
arXiv:1503.02876v8 [] 19 Mar 2018
ON FINITE TYPE EPIMORPHISMS OF RINGS
ABOLFAZL TARIZADEH
Abstract. In this note, finite type epimorphisms of rings are
characterized.
1. Introduction
In this paper we investigate an important class of epimorphisms of rings, namely finite type epimorphisms. Needless to say, finite type ring maps have very close connections with geometric structures and essentially originate from algebraic geometry. Meanwhile, finite type epimorphisms of commutative rings have very interesting properties and they are important from various aspects. For example, “finite type monomorphisms of schemes” can be considered as the geometric interpretation of finite type epimorphisms of rings.

In the realm of epimorphisms of commutative rings there are some highly non-trivial results in the literature. For instance, every finite type epimorphism of rings which is also injective and flat is of finite presentation; see [2, Theorem 1.1]. A special case of this result was announced in [7, Corollary 3.4.7]. Another important result is due to Lazard, see Theorem 2.2. In the present article, partially motivated by the above results, we have obtained two new and non-trivial results. In fact, Theorems 3.2 and 3.3 are the main results of this note. These results can be considered as analogues of Theorem 2.2 in the finite type case, whose hypotheses have been weakened as much as possible.

In this article, all of the rings are commutative.
2. Preliminaries

Here we recall some material which is needed in the next section.

2010 Mathematics Subject Classification: 13B10, 13B30, 13A99, 13C11.
Key words and phrases: finite type epimorphism; module of differentials; universally injective.
By an epimorphism $\varphi : R \to S$ we mean an epimorphism in the category of (commutative) rings. Surjective ring maps are special cases of epimorphisms. As a specific example, the canonical ring map $\mathbb{Z} \to \mathbb{Q}$ is an epimorphism of rings while it is not surjective.
Remark 2.1. It is easy to see that a ring map $\varphi : R \to S$ is an epimorphism of rings if and only if $s \otimes 1 = 1 \otimes s$ in $S \otimes_R S$ for all $s \in S$. It is also equivalent to the condition that $S \otimes_R \operatorname{Coker} \varphi = 0$. It follows that every faithfully flat epimorphism of rings is an isomorphism. In particular, every non-zero epimorphism of rings whose source is a field is an isomorphism. We refer to [8], especially [4], [5] and [6], for a comprehensive discussion of epimorphisms of commutative rings. Also see [1].
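For instance, the tensor criterion of the first sentence is verified directly for the epimorphism $\mathbb{Z} \to \mathbb{Q}$: for $a/b \in \mathbb{Q}$, in $\mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Q}$ one has
$$\frac{a}{b} \otimes 1 = \frac{a}{b} \otimes \frac{b}{b} = \frac{a}{b}\, b \otimes \frac{1}{b} = a \otimes \frac{1}{b} = 1 \otimes \frac{a}{b},$$
where integers may be moved across the tensor sign since the tensor product is taken over $\mathbb{Z}$.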
Theorem 2.2. A ring map ϕ : R → S is an epimorphism of rings if
and only if the following conditions hold.
(i) The induced map ϕ∗ : Spec(S) → Spec(R) is injective.
(ii) For each prime ideal q of S the induced map κ(p) → κ(q) is an
isomorphism where p = ϕ∗ (q).
(iii) The kernel of the canonical ring map S ⊗R S → S is a finitely
generated ideal.
(iv) The module of Kähler differentials ΩS/R is zero.
Proof. See [4, Proposition 1.5].
Remark 2.3. It is well-known that if $\varphi : R \to A$ is a ring map and $p$ a prime ideal of $R$ then $p \in \operatorname{Im} \varphi^*$ if and only if $A \otimes_R \kappa(p) \ne 0$, where $\kappa(p)$ is the residue field of $R$ at $p$.
Definition 2.4. The map $\varphi^* : \operatorname{Spec}(S) \to \operatorname{Spec}(R)$ induced by a ring map $\varphi : R \to S$ is said to be universally injective if for any ring map $R \to R'$ the induced map $\psi^* : \operatorname{Spec}(R' \otimes_R S) \to \operatorname{Spec}(R')$ is injective, where $\psi : R' \to R' \otimes_R S$ is the base change map.
3. Main results

Let $\varphi : R \to S$ be a ring map. Consider the canonical ring map $\pi : S \otimes_R S \to S$ which maps each pure tensor $s \otimes s'$ of $S \otimes_R S$ to $ss'$. The kernel of $\pi$ is generated by the elements of the form $s \otimes 1 - 1 \otimes s$: indeed, if $\sum_i s_i s'_i = 0$ then we may write $\sum_i s_i \otimes s'_i = \sum_i (1 \otimes s'_i)(s_i \otimes 1 - 1 \otimes s_i)$.
To prove the main results of this note we need the following lemma.

Lemma 3.1. If a ring map $\varphi : R \to S$ is of finite type then $\operatorname{Ker} \pi$ is a finitely generated ideal.

Proof. By the hypothesis there are elements $s_1, \ldots, s_n \in S$ such that $S = R[s_1, \ldots, s_n]$. Consider the ideal $J = (s_i \otimes 1 - 1 \otimes s_i : 1 \le i \le n)$ of $S \otimes_R S$. Clearly $J \subseteq \operatorname{Ker} \pi$. To prove the reverse inclusion it suffices to show that
$$s_1^{d_1} \cdots s_n^{d_n} \otimes 1 - 1 \otimes s_1^{d_1} \cdots s_n^{d_n} \in J.$$
We use an induction argument over $n$. If $n = 1$ then we have
$$s^d \otimes 1 - 1 \otimes s^d = (s \otimes 1)^d - (1 \otimes s)^d = \big((s \otimes 1)^{d-1} + (s \otimes 1)^{d-2}(1 \otimes s) + \cdots + (1 \otimes s)^{d-1}\big)(s \otimes 1 - 1 \otimes s),$$
which belongs to $J$. Let $n > 1$. Then we may write
$$s_1^{d_1} \cdots s_n^{d_n} \otimes 1 - 1 \otimes s_1^{d_1} \cdots s_n^{d_n} = (s_n^{d_n} \otimes 1)\big(s_1^{d_1} \cdots s_{n-1}^{d_{n-1}} \otimes 1 - 1 \otimes s_1^{d_1} \cdots s_{n-1}^{d_{n-1}}\big) + \big(1 \otimes s_1^{d_1} \cdots s_{n-1}^{d_{n-1}}\big)\big(s_n^{d_n} \otimes 1 - 1 \otimes s_n^{d_n}\big),$$
which, by the induction hypothesis and the case $n = 1$, belongs to $J$.
Theorem 3.2. A finite type ring map $\varphi : R \to S$ is an epimorphism of rings if and only if the induced map $\varphi^*$ is injective and for each prime ideal $q$ of $S$ the base change map $\kappa(p) \to S_q \otimes_R \kappa(p)$ is an isomorphism, where $p = \varphi^*(q)$.

Proof. “⇒”: By Theorem 2.2, $\varphi^*$ is injective. The composition of $\varphi$ with the canonical ring map $S \to S_q$ gives us the epimorphism $R \to S_q$. Thus the map $\kappa(p) \to S_q \otimes_R \kappa(p)$ is an epimorphism, since every epimorphism of rings is stable under base change. By Remark 2.3, $S_q \otimes_R \kappa(p) \ne 0$ and so by Remark 2.1 the base change map is an isomorphism.

To prove the converse implication we shall use Theorem 2.2. We have $S_q \otimes_R \kappa(p) \simeq S_q/pS_q$. Thus $S_q/pS_q$ is a field and so $pS_q = qS_q$. Hence the induced map $\kappa(p) \to \kappa(q)$ is an isomorphism. Moreover, by [3, Tag 00RV], we have
$$\Omega_{S_q/R} \otimes_R \kappa(p) \simeq \Omega_{(S_q \otimes_R \kappa(p))/\kappa(p)} = 0.$$
It follows that $(qS_q)\Omega_{S_q/R} = \Omega_{S_q/R}$. By [3, Tag 00RZ], $\Omega_{S/R}$ is a finitely generated $S$-module. Therefore $(\Omega_{S/R})_q \simeq \Omega_{S_q/R}$ is a finitely generated $S_q$-module. Then Nakayama's lemma implies that $\Omega_{S_q/R} = 0$. It follows that $\Omega_{S/R} = 0$ because $(\Omega_{S/R})_q \simeq \Omega_{S_q/R} = 0$ for every prime $q$. Now using Lemma 3.1 and Theorem 2.2 we conclude that $\varphi$ is an epimorphism.
Theorem 3.3. Let $\varphi : R \to S$ be a finite type ring map. Then the following conditions are equivalent.
(i) $\varphi$ is an epimorphism of rings.
(ii) $\varphi^*$ is universally injective and $\Omega_{S/R} = 0$.
(iii) $\varphi$ is formally unramified, and for any field $K$ and any ring maps $f, g : S \to K$, if $f \circ \varphi = g \circ \varphi$ then $f = g$.
(iv) $\varphi^*$ is injective, $\Omega_{S/R} = 0$, and for each prime ideal $q$ of $S$ the field extension $\kappa(p) \subseteq \kappa(q)$ is purely inseparable, where $p = \varphi^*(q)$.
Proof. (i) ⇒ (ii): Epimorphisms are stable under base change. Then apply Theorem 2.2.

(ii) ⇒ (iii): The ring map $\varphi$ is formally unramified if and only if $\Omega_{S/R} = 0$, see [3, Tag 00UO]. There exist ring maps $f', g' : K \otimes_R S \to K$ which map each pure tensor $a \otimes s$ of $K \otimes_R S$ to $af(s)$ and $ag(s)$, respectively. Let $\psi : K \to K \otimes_R S$ be the base change of $f \circ \varphi : R \to K$. Then, by the hypotheses, $\psi^*$ is injective. Moreover $K \otimes_R S$ is a non-trivial ring. Therefore the prime spectrum of $K \otimes_R S$ is a single-point set. This implies that $(f')^{-1}(0) = (g')^{-1}(0)$. But for each $s \in S$, $f'(f(s) \otimes 1 - 1 \otimes s) = 0$. Thus $g'(f(s) \otimes 1 - 1 \otimes s) = 0$. Therefore $f(s) = g(s)$ for all $s \in S$.

(iii) ⇒ (i): Let $p$ be a prime ideal of $S \otimes_R S$. The hypotheses imply that $\eta \circ i = \eta \circ j$, where $\eta : S \otimes_R S \to \kappa(p)$ and $i, j : S \to S \otimes_R S$ are the canonical ring maps. Therefore for each $s \in S$, $s \otimes 1 - 1 \otimes s$ is a nilpotent element. Let $J$ be the kernel of the canonical ring map $S \otimes_R S \to S$. By Lemma 3.1, $J$ is a finitely generated ideal. It follows that $J$ is a nilpotent ideal. But $\Omega_{S/R} \simeq J/J^2$. Thus $J = J^2$ and so $J = 0$. Therefore $s \otimes 1 = 1 \otimes s$ for all $s \in S$. Hence $\varphi$ is an epimorphism.

(i) ⇒ (iv): See Theorem 2.2.

(iv) ⇒ (iii): We have $f^{-1}(0) = g^{-1}(0)$ since $f \circ \varphi = g \circ \varphi$ and $\varphi^*$ is injective. Let $q = f^{-1}(0)$. For each $s \in S$, by the hypotheses, there is a natural number $n \ge 0$ such that $(s/1 + qS_q)^{p^n}$ is in the image of the induced map $\kappa(p) \to \kappa(q)$, where $p = \varphi^*(q)$ and $p$ is the characteristic of $\kappa(p)$. Therefore there are elements $r \in R$ and $t \in R \setminus p$ such that $\varphi(t)s^{p^n} - \varphi(r) \in q$. Thus $f(s)^{p^n} = (f \circ \varphi)(r) \cdot (f \circ \varphi)(t)^{-1} = g(s)^{p^n}$. This implies that $(f(s) - g(s))^{p^n} = 0$ since the characteristic of $K$ is equal to $p$. Therefore $f(s) = g(s)$ for all $s \in S$.
References

[1] Call, F. W. Epimorphic flat maps. Proceedings of the Edinburgh Mathematical Society, 29 (1986), 57–59.
[2] Cox, Jr., S. and Rush, D. Finiteness in flat modules and algebras. Journal of Algebra, 32(1) (1974), 44–50.
[3] de Jong, A. J. et al. Stacks Project, see http://stacks.math.columbia.edu.
[4] Lazard, D. Épimorphismes plats. Séminaire Samuel. Algèbre commutative, tome 2 (1967–1968).
[5] Olivier, J.-P. Anneaux absolument plats universels et épimorphismes à buts réduits. Séminaire Samuel. Algèbre commutative, tome 2 (1967–1968).
[6] Roby, N. Diverses caractérisations des épimorphismes. Séminaire Samuel. Algèbre commutative, tome 2 (1967–1968).
[7] Raynaud, M. and Gruson, L. Critères de platitude et de projectivité. Inventiones Mathematicae, 13 (1971), 1–89.
[8] Séminaire Samuel. Les épimorphismes d'anneaux, 2, 1967–1968, Paris, Secrétariat mathématique, 1968.
Department of Mathematics, Faculty of Basic Sciences, University
of Maragheh, P. O. Box 55136-553, Maragheh, Iran.
E-mail address: ebulfez1978@gmail.com
A Change-Sensitive Algorithm for Maintaining
Maximal Bicliques in a Dynamic Bipartite Graph
arXiv:1707.08272v1 [] 26 Jul 2017
Apurba Das, Srikanta Tirthapura, Senior Member, IEEE,
Abstract—We consider the maintenance of maximal bicliques in a dynamic bipartite graph that changes over time due to the addition or deletion of edges. When the set of edges in a graph changes, we are interested in knowing the change in the set of maximal bicliques (the “change”), rather than in knowing the set of maximal bicliques that remain unaffected. The challenge for an efficient algorithm is to enumerate the change without explicitly enumerating the set of all maximal bicliques. In this work, we present (1) near-tight bounds on the magnitude of change in the set of maximal bicliques of a graph due to a change in the edge set, and (2) a “change-sensitive” algorithm for enumerating the change in the set of maximal bicliques, whose time complexity is proportional to the magnitude of the change that actually occurred in the set of maximal bicliques in the graph. To our knowledge, these are the first algorithms for enumerating maximal bicliques in a dynamic graph with such provable performance guarantees. Our algorithms are easy to implement, and experimental results show that their performance exceeds that of current baseline implementations by orders of magnitude.
1 INTRODUCTION
Graphs are ubiquitous in representing linked data in many domains, such as social network analysis, computational biology, and web search. Often, these networks are dynamic, where new connections are being added and old connections are being removed. The area of dynamic graph mining focuses on efficient methods for finding and maintaining significant patterns in a dynamic graph. In this work we focus on the maintenance of dense subgraphs within a dynamic graph.

Our work is motivated by many applications that require the maintenance of dense substructures from a dynamic graph. Angel et al. [6] propose an algorithm for identifying breaking news stories in real time through dense subgraph mining from an evolving graph, defined on the co-occurrence of entities within messages in an online social network. [17] present methods for detecting communities among users in a microblogging platform through identifying dense structures in an evolving network representing connections among users. A sample of other applications of dense subgraph mining in networks includes identification of communities in a social network [15], [23], identification of web communities [14], [28], [18], phylogenetic tree construction [11], [29], [34], communities in bipartite networks [19], genome analysis [26], and closed itemset mining [32], [20].
We consider the fundamental problem of maintaining
maximal bicliques in a bipartite graph that is changing due
to the addition or deletion of edges. Let G = (L, R, E)
be a simple undirected bipartite graph with its vertex set
partitioned into L, R, and edge set E ⊆ L × R. A biclique in
G is a bipartition B = (X, Y ), X ⊆ L, Y ⊆ R such that each
vertex in X is connected to each vertex in Y . A biclique B
is called a maximal biclique if there is no other biclique B′
such that B is a proper subgraph of B′. Let BC(G) denote
• Das and Tirthapura are with the Department of Electrical and Computer
Engineering, Iowa State University, Ames, IA 50011.
E-mail: {adas,snt}@iastate.edu.
The authors were partially supported through the NSF grant 1527541.
the set of all maximal bicliques in G.
Suppose that, starting from bipartite graph G1 =
(L, R, E), the state of the graph changes to G2 = (L, R, E ∪
H) due to the addition of a set of new edges H . Let
Υnew (G1 , G2 ) = BC(G2 ) \ BC(G1 ) denote the set of new
maximal bicliques that arise in G2 that were not present
in G1 and Υdel (G1 , G2 ) = BC(G1 ) \ BC(G2 ) denote the
set of maximal bicliques in G1 that are no longer maximal
bicliques in G2 (henceforth called subsumed bicliques).
See Fig. 1 for an example. Let Υ(G1 , G2 ) = Υnew (G1 , G2 ) ∪
Υdel (G1 , G2 ) denote the symmetric difference of BC(G1 )
and BC(G2 ). We ask the following questions:
(1) How large can the size of Υ(G1, G2) be? In particular,
can a small change in the set of edges cause a large change
in the set of maximal bicliques in the graph?
(2) How can we compute Υ(G1 , G2 ) efficiently? Can
we quickly compute Υ(G1 , G2 ) when |Υ(G1 , G2 )| is small?
In short, can we design change-sensitive algorithms for enumerating elements of Υ(G1 , G2 ), whose time complexity is
proportional to the size of change, |Υ(G1 , G2 )|?
1.1 Contributions
Magnitude of Change: Let g(n) denote the maximum
number of maximal bicliques possible in an n vertex
bipartite graph. A result due to Prisner [27] shows that
g(n) ≤ 2^{n/2}, where equality occurs when n is even. We
show that the change in the number of maximal bicliques
when a single edge is added to the graph can be as large as
3g(n − 2) ≈ 1.5 × 2^{n/2}, which is exponential in the number
of vertices in the graph. This shows that the addition of
even a single edge to the graph can lead to a large change
in the set of maximal bicliques in the graph. We further
show that this bound is tight for the case of the addition
of a single edge – the largest possible change in the set of
maximal bicliques upon adding a single edge is 3g(n − 2).
For the case when more edges can be added to the graph,
Fig. 1: Change in maximal bicliques when the graph changes
from G1 to G2 due to the addition of edge set H =
{{a, y}, {c, x}}. Note that each maximal biclique of G1 is
subsumed by a larger maximal biclique in G2 , and there is
one new maximal biclique in G2 .
it is easy to see that the maximum possible change is no
larger than 2g(n).
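For even n, the constant 1.5 above can be checked in one line from Prisner's bound g(n) = 2^{n/2}:

    3 g(n − 2) = 3 · 2^{(n−2)/2} = (3/2) · 2^{n/2} = 1.5 g(n),

which is the identity used again in Theorem 7 (Section 4).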
Enumeration Algorithm: From our analysis, it is clear that
the magnitude of change in the set of maximal bicliques
in the graph can be as large as exponential in n in the
worst case. On the flip side, the magnitude of change can
be as small as 1 – for example, consider the case when a
newly arriving edge connects two isolated vertices in the
graph. Thus, there is a wide range of values the magnitude
of change can take. When the magnitude of change is
very large, an algorithm that enumerates the change must
inevitably pay a large cost, if only to enumerate the change.
On the other hand, when the magnitude of change is small,
it will ideally pay a smaller cost. This motivates our search
for a change-sensitive algorithm whose computational cost
for enumerating the change is proportional to the magnitude of the change in the set of maximal bicliques.
We present a change-sensitive algorithm, DynamicBC,
for enumerating the new maximal bicliques and subsumed
maximal bicliques, when a set of new edges H is added
to the bipartite graph G. The algorithm DynamicBC has
two parts, NewBC, for enumerating new maximal bicliques,
and SubBC, for enumerating subsumed maximal bicliques.
When a batch of new edges H of size ρ is added to the
graph, the time complexity of NewBC for enumerating Υnew,
the set of new maximal bicliques, is O(∆^2 ρ|Υnew|), where
∆ is the maximum degree of the graph after the update. The
time complexity of SubBC for enumerating Υdel, the set
of subsumed bicliques, is O(2^ρ |Υnew|). To the best of our
knowledge, these are the first provably change-sensitive
algorithms for maintaining maximal bicliques in a dynamic
graph.
Experimental Evaluation: We present an empirical evaluation of our algorithms on real bipartite graphs with millions of nodes. Our results show that
our algorithms are orders of magnitude faster than current
approaches. For example, on the actor-movie-1 graph
with 640K vertices and 1.4M edges, our algorithm took
about 30 milliseconds for computing the change due to the
addition of a batch of 100 edges, while the baseline
algorithm took more than 30 minutes.
1.2 Related Work
Maximal Biclique enumeration (MBE) on a static graph:
There has been substantial prior work on enumerating maximal bicliques from a static graph. Alexe et al. [5] propose
an algorithm for MBE from a static graph based on the
consensus method, whose time complexity is proportional
to the size of the output (number of maximal bicliques
in the graph), termed an output-sensitive algorithm. Liu et
al. [22] propose an algorithm for MBE based on depth-first search (DFS). Damaschke [7] proposes an algorithm for
bipartite graphs with a skewed degree distribution. Gély et
al. [13] propose an algorithm for MBE through a reduction
to maximal clique enumeration (MCE). However, in their
work, the number of edges in the graph used for enumeration increases significantly compared to the original graph.
Makino & Uno [24] propose an algorithm for MBE based
on matrix multiplication, which provides the current best
time complexity for dense graphs. Eppstein [12] proposes
a linear time algorithm for MBE when the input graph has
bounded arboricity. Other works on sequential algorithms for
MBE on a static graph include [9], [10]. [25], [33], [31] present
parallel algorithms for MBE and MCE for the MapReduce
framework. [20] show a correspondence between closed
itemsets in a transactional database and maximal cliques in
an appropriately defined graph.
Dense Structures from Dynamic Graphs: There have
been some prior works related to maintenance of dense
structures similar to maximal bicliques in dynamic graphs.
Kumar et al. [18] define (i, j)-core which is a biclique with i
vertices in one partition and j vertices in another partition.
In their work, the authors propose a dynamic algorithm
for extracting non-overlapping maximal set of (i, j)-cores
for interesting communities. [30], [21], [16] present methods
for maintaining k -cores and k -trusses in a dynamic graph,
and [8] present algorithms for maintaining maximal cliques
in a dynamic graph.
Roadmap: The remaining sections are organized as follows. We present definitions and preliminaries in Section 2.
Then we describe our algorithms in Section 3, results on the
size of change in the set of maximal bicliques in Section 4,
and experimental results in Section 5.
2 PRELIMINARIES
Let V (G) denote the set of vertices of G and E(G) the set
of edges in G. Let n and m denote the number of vertices
and number of edges in G respectively. Let ΓG (u) denote
the set of vertices adjacent to vertex u in G. If the graph G
is clear from the context, we use Γ(u) to mean ΓG (u). For
an edge e = (u, v) ∈ E(G), let G − e denote the graph after
deleting e ∈ E(G) from G and G + e denote the graph after
adding e ∉ E(G) to G. For a set of edges H, let G + H
(G − H ) denote the graph obtained after adding (deleting)
H to (from) E(G). Similarly, for a vertex v ∉ V(G), let G + v
denote the graph after adding v to G and for a vertex v ∈
V (G), let G − v denote the graph after deleting v and all its
adjacent edges from E(G). Let ∆(G) denote the maximum
Algorithm 1: DynamicBC(G, H, BC(G))
Input: G - input bipartite graph, H - edges being added to G, BC(G)
Output: Υ - the union of the set of new maximal bicliques and the set of subsumed bicliques
1 Υnew ← NewBC(G, H)
2 Υdel ← SubBC(G, H, BC(G), Υnew)
3 Υ ← Υnew ∪ Υdel
Fig. 2: Cocktail-party graph on 6 vertices CP (3)
degree of a vertex in G and δ(G) the minimum degree of a vertex in G.
For graph G and a set of edges H, we use Υnew to mean Υnew(G, G + H), and Υdel to mean Υdel(G, G + H).
Definition 1 (Change-Sensitive Algorithm). An algorithm for
a dynamic graph stream is called change-sensitive if its time
complexity of enumerating the change in a graph property is
proportional to the magnitude of change.
Results for a static graph. In [27], Prisner presented the
following result on the number of maximal bicliques in a
bipartite graph with n vertices.
Theorem 1 (Theorem 2.1 [27]). Every bipartite graph with n vertices contains at most 2^{n/2} ≈ 1.41^n maximal bicliques, and the only extremal (maximal) bipartite graphs are the graphs CP(k).
Here, CP(k) denotes the cocktail-party graph, a bipartite graph with k vertices in each partition, where V(CP(k)) = {a1, a2, ..., ak, b1, b2, ..., bk} and E(CP(k)) = {(ai, bp) : i ≠ p} [27]. See Figure 2 for an example.
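As a concrete illustration of this extremal family, the following short Python sketch (ours, purely illustrative; the function name is not from the paper) constructs the edge set of CP(k):

from itertools import product

def cocktail_party(k):
    """Build CP(k): left vertices a_1..a_k, right vertices b_1..b_k,
    with an edge (a_i, b_p) whenever i != p."""
    left = [("a", i) for i in range(1, k + 1)]
    right = [("b", p) for p in range(1, k + 1)]
    edges = {(a, b) for a, b in product(left, right) if a[1] != b[1]}
    return left, right, edges

# CP(3): 6 vertices and 6 edges; by Theorem 1 it attains 2^{6/2} = 8 maximal bicliques.
L, R, E = cocktail_party(3)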
As a subroutine, we use an algorithm for enumerating
maximal bicliques from a static undirected graph, whose
runtime is proportional to the number of maximal bicliques.
There are a few algorithms of this kind [5], [22], [35]. We use
the following result due to Liu et al. [22], as it provides the best
possible time and space complexity.
Theorem 2 (Liu et al., [22]). For a graph G with n vertices,
m edges, maximum degree ∆, and number of maximal bicliques
µ, there is an algorithm MineLMBC for enumerating maximal bicliques in G with time complexity O(n∆µ) and space complexity
O(m + ∆^2).
MineLMBC is a depth-first-search (DFS) based algorithm
for enumerating maximal bicliques of a static graph G =
(V, E). It takes as input the graph G and the size threshold
s. The algorithm enumerates all maximal bicliques of G with
size of each partition at least s. Clearly, by setting s = 1, the
algorithm enumerates all maximal bicliques of G.
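MineLMBC itself is specified in [22]. For self-contained experimentation, a naive stand-in with the same input/output behavior (but none of MineLMBC's complexity guarantees) can be written as follows; this sketch is ours and is only suitable for small graphs:

from itertools import combinations

def maximal_bicliques(left, right, edges):
    """Naive enumeration: for every nonempty subset X of `left`, take the
    common neighborhood Y, then close X to the full common neighborhood of Y
    (Galois closure), which yields a maximal biclique. Exponential in |left|.
    `right` is kept only for interface symmetry; edges are (left, right) pairs."""
    adj = {u: {v for (a, v) in edges if a == u} for u in left}
    found = set()
    for r in range(1, len(left) + 1):
        for X in combinations(left, r):
            Y = set.intersection(*(adj[u] for u in X))
            if not Y:
                continue
            X_closed = frozenset(u for u in left if Y <= adj[u])
            found.add((X_closed, frozenset(Y)))
    return found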
3 CHANGE-SENSITIVE ALGORITHM FOR MAXIMAL BICLIQUES
In this section, we present a change-sensitive algorithm DynamicBC for enumerating the change in the set of maximal bicliques. The algorithm has two parts: (1) Algorithm NewBC for enumerating new maximal bicliques, and (2) Algorithm SubBC for enumerating subsumed bicliques.
Fig. 3: The original graph G has 4 maximal bicliques. When
new edges in H (in dotted line) are added to G, all maximal
bicliques in G remain maximal in G + H and only one
maximal biclique is newly formed (< {a3 , a4 }, {b3 , b4 } >).
We first present Algorithm NewBC for enumerating new maximal bicliques in Section 3.1, and Algorithm SubBC for enumerating subsumed bicliques in Section 3.2. The main result on the time complexity of DynamicBC is summarized in the following theorem.
Theorem 3. DynamicBC is a change-sensitive algorithm for enumerating the change in the set of maximal bicliques, with time complexity O(∆^2 ρ|Υnew| + 2^ρ |Υnew|), where ∆ is the maximum degree of a vertex in G + H and ρ is the size of H.
3.1 Enumerating New Maximal Bicliques
Let G′ denote the graph G + H. A baseline algorithm for enumerating new maximal bicliques in G′ is to (1) enumerate all maximal bicliques in G, (2) enumerate all maximal bicliques in G′, both using an output-sensitive algorithm such as [22], and then (3) compute BC(G′) \ BC(G). However, this is not change-sensitive, since we need to compute all maximal bicliques of G′ each time, even though it is possible that most of the maximal bicliques in G′ are not new. For example, see Fig. 3. We next present an approach that overcomes this difficulty.
For each new edge e ∈ H, let BC′(e) denote the set of maximal bicliques in G′ containing edge e.
Lemma 1. Υnew = ∪e∈H BC′(e).
Proof. Each biclique in Υnew must contain at least one edge from H. To see this, consider a biclique b ∈ Υnew. If b did not contain an edge from H, then b would also be a maximal biclique in G, and hence could not belong to Υnew. Hence, b ∈ BC′(e) for some edge e ∈ H, and b ∈ ∪e∈H BC′(e). This shows that Υnew ⊆ ∪e∈H BC′(e).
Next consider a biclique b ∈ ∪e∈H BC′(e). It must be the case that b ∈ BC′(h) for some h ∈ H. Thus b is a maximal biclique in G + H that contains edge h ∈ H, so b cannot be a biclique in G. Thus b ∈ Υnew. This shows that ∪e∈H BC′(e) ⊆ Υnew.
Fig. 4: Construction of G′e from G′ = G + H when a set of new edges H = {e, h} is added to G. A = ΓG′(v) = {u, x} and B = ΓG′(u) = {v, y}.
Next, for each edge e = (u, v) ∈ H, we present an efficient way to enumerate all bicliques in BC′(e) by enumerating the maximal bicliques of a specific subgraph G′e of G′, constructed as follows. Let A = ΓG′(v) and B = ΓG′(u). Then G′e = (A, B, E′) is the subgraph of G′ induced by the vertices in A and B. See Fig. 4 for an example of the construction of G′e.
Lemma 2. For each e ∈ H, BC′(e) = BC(G′e).
Proof. First we show that BC′(e) ⊆ BC(G′e). Consider a biclique b = (X, Y) in BC′(e), where e = (u, v). Then b contains both u and v; suppose u ∈ X and v ∈ Y. By construction, G′e contains all the vertices adjacent to u and all the vertices adjacent to v, and in b, all the vertices in X are connected to all the vertices in Y. Hence, b is a biclique in G′e. Also, b is a maximal biclique in G′, and G′e is an induced subgraph of G′ that contains all the vertices of b. Hence, b is a maximal biclique in G′e.
Next we show that BC(G′e) ⊆ BC′(e). Consider a biclique b′ = (X′, Y′) in BC(G′e). Clearly, b′ contains e, since it contains both u and v, and b′ is a maximal biclique in G′e. Hence, b′ is also a biclique in G′ that contains e. Now we prove that b′ is also maximal in G′. Suppose not: then there is a vertex w ∈ V(G′) such that b′ can be extended with w. By the construction of G′e, w ∈ V(G′e), since w must be adjacent to either u or v. But then b′ is not maximal in G′e, a contradiction. Hence, b′ is also maximal in G′. Therefore, b′ ∈ BC′(e).
Based on the above observation, we present our change-sensitive algorithm NewBC (Algorithm 2). We use an output-sensitive algorithm for a static graph, MineLMBC, to enumerate the maximal bicliques of G′e. Note that G′e is typically much smaller than G′, since it is localized to edge e, and hence enumerating all maximal bicliques of G′e should be relatively inexpensive.
Theorem 4. NewBC enumerates the set of all new maximal bicliques arising from the addition of H in time O(∆^2 ρ|Υnew|), where ∆ is the maximum degree of a vertex in G′ and ρ is the size of H. The space complexity is O(|E(G′)| + ∆^2).
Proof. First we consider the correctness of the algorithm. From Lemma 1 and Lemma 2, we know that Υnew is enumerated by enumerating BC(G′e) for every e ∈ H. Our algorithm
Algorithm 2: NewBC(G, H)
Input: G - input bipartite graph, H - edges being added to G
Output: bicliques in Υnew, each biclique output once
1 Consider the edges of H in an arbitrary order e1, e2, ..., eρ
2 G′ ← G + H
3 for i = 1 ... ρ do
4     e ← ei = (u, v)
5     G′e ← the subgraph of G′ induced by ΓG′(u) ∪ ΓG′(v)
6     Generate the bicliques of G′e using MineLMBC; for each biclique b thus generated, output b only if b does not contain an edge ej for j < i
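A direct transcription of Algorithm 2 into Python might look like the sketch below (ours; it reuses the naive maximal_bicliques() stand-in above in place of MineLMBC, and represents a graph simply as a set of (left, right) edge pairs):

def new_bicliques(edges, H):
    """Sketch of NewBC (Algorithm 2): localize to each new edge e = (u, v),
    enumerate the maximal bicliques of G'_e, and report each new biclique once."""
    Gp = set(edges) | set(H)                    # G' = G + H
    H = list(H)
    reported = []
    for i, (u, v) in enumerate(H):
        A = {a for (a, b) in Gp if b == v}      # Gamma_{G'}(v): left side of G'_e
        B = {b for (a, b) in Gp if a == u}      # Gamma_{G'}(u): right side of G'_e
        sub = {(a, b) for (a, b) in Gp if a in A and b in B}
        for X, Y in maximal_bicliques(sorted(A), sorted(B), sub):
            # output only if b contains no earlier new edge e_j, j < i
            if any(a in X and b in Y for (a, b) in H[:i]):
                continue
            if (X, Y) not in reported:
                reported.append((X, Y))
    return reported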
does exactly this, using the MineLMBC algorithm to enumerate BC(G′e).
For the runtime, consider that the algorithm iterates over each edge e in H. In each iteration, it constructs a graph G′e and runs MineLMBC(G′e). Note that the number of vertices in G′e is no more than 2∆, since V(G′e) is the union of the neighborhoods of the two endpoints of e in G′. The set of maximal bicliques generated in each iteration is a subset of Υnew, so the number of maximal bicliques generated in each iteration is no more than |Υnew|. From Theorem 2, the runtime of each iteration is O(∆^2 |Υnew|). Since there are ρ edges in H, the result on the runtime follows. For the space complexity, we note that the algorithm does not store the set of new bicliques in memory at any point. The space required to construct G′e is linear in the size of G′. From Theorem 2, the total space requirement is O(|E(G′)| + ∆^2).
3.2 Enumerating Subsumed Maximal Bicliques
We now present a change-sensitive algorithm for enumerating BC(G) \ BC(G′), where G′ = G + H. Suppose a new maximal biclique b of G′ subsumes a maximal biclique b′ of G. Note that b′ is also a maximal biclique in b − H. So, one idea is to enumerate all maximal bicliques in b − H and then check which among them are maximal in G. However, checking maximality of a biclique is a costly operation, since we need to consider the neighborhood of every vertex in the biclique. Another idea is to store the bicliques of the graph explicitly and see which among the generated bicliques are contained in the set of maximal bicliques of G. This is not desirable either, since a large amount of memory is required to store the set of all maximal bicliques of G.
A more efficient approach is to store signatures of the maximal bicliques instead of the bicliques themselves. Then, we enumerate all maximal bicliques in b − H, and for each biclique generated, we compare its signature with the stored signatures. An algorithm following this idea is presented in Algorithm 3. In this algorithm we reduce the main-memory cost by storing signatures. We use a standard hash function (such as the 64-bit MurmurHash¹) for computing signatures of maximal bicliques. To compute the signature, we first represent a biclique in canonical form (the vertices of the first partition in lexicographic order, followed by the vertices of the other partition in lexicographic order). We then convert the string into bytes and apply the hash function to the computed bytes; the hash function returns the signature. By storing signatures instead of maximal bicliques, we can check whether a maximal biclique from b − H is contained in the set of maximal bicliques of G by comparing hash values, paying much less in memory.
1. https://sites.google.com/site/murmurhash/
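As an illustration of the signature scheme, the canonicalize-then-hash step could be written as follows (a sketch under our own choices: we substitute Python's built-in hashlib for MurmurHash and truncate to 8 bytes to mimic a 64-bit signature; the paper's exact byte encoding may differ):

import hashlib

def signature(X, Y):
    """64-bit signature of a biclique (X, Y): canonical form = sorted left
    partition, a separator, then sorted right partition, hashed to 8 bytes."""
    canon = ",".join(map(str, sorted(X))) + "|" + ",".join(map(str, sorted(Y)))
    return hashlib.sha1(canon.encode("utf-8")).digest()[:8]

# Membership tests against BC(G) then reduce to set lookups on signatures:
# stored = {signature(X, Y) for (X, Y) in BC_G}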
Now we prove that Algorithm 3 indeed enumerates all maximal bicliques of b − H.
Lemma 3. In Algorithm 3, for each b ∈ Υnew, the set S after Line 12 contains all maximal bicliques in b − H.
Proof. First observe that removing H from b is equivalent to removing those edges of H that are present in b. Hence, computing the maximal bicliques in b − H reduces to computing the maximal bicliques in b − H1, where H1 is the set of all edges of H that are present in b.
We use induction on the number of edges k in H1. Consider the base case, k = 1: H1 contains a single edge e1 = {u, v}, and b − H1 clearly has two maximal bicliques, b \ {u} and b \ {v}. For the inductive step, suppose that when H1 is of size k, all maximal bicliques in b − H1 are enumerated. Consider H1′ = {e1, e2, ..., ek, ek+1} with k + 1 edges. Each maximal biclique b′ in b − H1 either remains maximal within b − H1′ (if at least one endpoint of ek+1 is not in b′) or generates two maximal bicliques in b − H1′ (if both endpoints of ek+1 are in b′). Thus, for each b ∈ Υnew, S after Line 12 contains all maximal bicliques within b − H.
Now we show that the algorithm described above is a change-sensitive algorithm for enumerating all elements of Υdel when the number of edges ρ in H is constant.
Theorem 5. Algorithm 3 enumerates all bicliques in Υdel = BC(G) − BC(G + H) in time O(2^ρ |Υnew|), where ρ is the number of edges in H. The space complexity of the algorithm is O(|E(G′)| + |V(G′)| + ∆^2 + |BC(G)|).
Proof. We first show that every biclique b′ enumerated by the algorithm is indeed a biclique in Υdel. Note that b′ is a maximal biclique in G, since this condition is checked explicitly. Further, b′ is not a maximal biclique in G + H, since it is a proper subgraph of b, a maximal biclique in G + H. Next, we show that all bicliques in Υdel are enumerated. Consider any subsumed biclique b′ ∈ Υdel. It must be contained within b \ H, where b is a maximal biclique within Υnew. Moreover, b′ will be a maximal biclique within b \ H, and will be enumerated by the algorithm according to Lemma 3.
For the time complexity, we show by induction on ρ that for any b ∈ Υnew, the maximum number of maximal bicliques in b − H is 2^ρ. Suppose ρ = 1, so that H contains a single edge, say e1 = (u, v). Then b − H has two maximal bicliques, b \ {u} and b \ {v}, proving the base case. Suppose that for any set H of size k, b − H has no more than 2^k maximal bicliques. Consider a set H′′ = {e1, e2, ..., ek+1} with k + 1 edges.
Algorithm 3: SubBC(G, H, BC, Υnew)
Input: G - input bipartite graph; H - edge set being added to G; BC - set of maximal bicliques in G; Υnew - set of new maximal bicliques in G + H
Output: all bicliques in Υdel = BC(G) \ BC(G + H)
1  Υdel ← ∅
2  for b ∈ Υnew do
3      S ← {b}
4      for e = (u, v) ∈ E(b) ∩ H do
5          S′ ← ∅
6          for b′ ∈ S do
7              if e ∈ E(b′) then
8                  b1 ← b′ \ {u}; b2 ← b′ \ {v}
9                  S′ ← S′ ∪ b1; S′ ← S′ ∪ b2
10             else
11                 S′ ← S′ ∪ b′
           /* S′ contains all the maximal bicliques in b − {e1, e2, ..., ek}, where {e1, e2, ..., ek} ⊆ E(b) ∩ H are the edges considered so far. */
12         S ← S′
13     for b′ ∈ S do
14         if b′ ∈ BC then
15             Υdel ← Υdel ∪ b′
Let H′ = {e1, e2, ..., ek}. The subgraph b − H′′ is obtained from b − H′ by deleting the single edge ek+1. By induction, b − H′ has no more than 2^k maximal bicliques. Each maximal biclique b′ in b − H′ either remains a maximal biclique within b − H′′ (if at least one endpoint of ek+1 is not in b′), or leads to two maximal bicliques in b − H′′ (if the endpoints of ek+1 lie in the two bipartitions of b′). Hence, the number of maximal bicliques in b − H′′ is no more than 2^{k+1}, completing the inductive step.
Following this, for each biclique b ∈ Υnew, we need to check maximality for no more than 2^ρ bicliques in G. This check can be performed by testing whether each generated biclique is contained in the set BC(G), which can be done in constant time per biclique.
For the space bound, we first note that in Algorithm 3, enumerating the maximal bicliques within b − H consumes space O(|E(G′)| + ∆^2), and checking for maximality can be done in space linear in the size of G. Storing the maximal bicliques of G takes O(|BC(G)|) space. Hence, for these operations, the overall space cost for each b ∈ Υnew is O(|E(G′)| + |V(G′)| + ∆^2 + |BC(G)|). The only remaining space cost is the size of Υnew, which can be large. Note, however, that the algorithm iterates through Υnew in a single pass. If the elements of Υnew are provided as a stream from the output of an algorithm such as NewBC, then they need not be stored in a container, and the memory cost of receiving Υnew reduces to the cost of storing a single maximal biclique of Υnew at a time.
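The splitting loop at the heart of Algorithm 3 is short in code. The sketch below (ours) reuses signature() from above and assumes the stored set BC_G_sigs = {signature(X, Y) for (X, Y) in BC(G)}:

def subsumed_bicliques(H, BC_G_sigs, new_bcs):
    """Sketch of SubBC (Algorithm 3): split each new maximal biclique b on the
    new edges it contains; survivors whose signature appears in BC(G) were
    subsumed. new_bcs is an iterable of (X, Y) pairs from NewBC."""
    subsumed = []
    for X, Y in new_bcs:
        S = {(frozenset(X), frozenset(Y))}
        for (u, v) in H:
            S_next = set()
            for (Xp, Yp) in S:
                if u in Xp and v in Yp:          # edge (u, v) lies in E(b')
                    S_next.add((Xp - {u}, Yp))   # split on the two endpoints
                    S_next.add((Xp, Yp - {v}))
                else:
                    S_next.add((Xp, Yp))
            S = S_next
        for (Xp, Yp) in S:
            if Xp and Yp and signature(Xp, Yp) in BC_G_sigs:
                subsumed.append((Xp, Yp))
    return subsumed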
6
Algorithm 4: Decremental(G, H)
Input: G - input bipartite graph, H - edges being deleted from G
Output: Υnew(G, G − H) ∪ Υdel(G, G − H)
1 Υnew ← ∅; Υdel ← ∅; G′′ ← G − H
2 Υdel ← NewBC(G′′, H)
3 Υnew ← SubBC(G′′, H, BC(G′′), Υdel)
4 return Υnew ∪ Υdel
3.3 Decremental and Fully Dynamic Cases
We now consider the maintenance of maximal bicliques
in the decremental case, when edges are deleted from the
graph. This case can be handled using a reduction to the
incremental case. We show that the maintenance of maximal
bicliques due to deletion of a set of edges H from a bipartite graph G is equivalent to the maintenance of maximal
bicliques due to addition of H to the bipartite graph G − H .
Lemma 4. Υnew (G, G − H) = Υdel (G − H, G) and
Υdel (G, G − H) = Υnew (G − H, G)
Proof. Note that Υnew (G, G − H) is the set of all bicliques
that are maximal in G − H , but not in G. By definition,
this is equal to Υdel (G − H, G). Similarly we can show that
Υdel (G, G − H) = Υnew (G − H, G).
Based on the above lemma, an algorithm for the decremental case is presented in Algorithm 4. For the fully
dynamic case, where we need to consider both the addition
and deletion of edges, we first compute the changes due to
addition of edges, followed by changes due to deletion of
edges.
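Under Lemma 4, the decremental case reduces to the incremental one with the roles of Υnew and Υdel swapped, as in the sketch below (ours; BC_of stands for any static enumerator over an edge set, e.g. the naive stand-in above):

def decremental(edges, H, BC_of):
    """Sketch of Algorithm 4: deleting H from G is handled by 'adding' H to
    G'' = G - H and swapping the roles of new and subsumed bicliques."""
    Gpp = set(edges) - set(H)                        # G'' = G - H
    ups_del = new_bicliques(Gpp, H)                  # Υdel(G, G−H) = Υnew(G−H, G)
    sigs = {signature(X, Y) for (X, Y) in BC_of(Gpp)}
    ups_new = subsumed_bicliques(H, sigs, ups_del)   # Υnew(G, G−H) = Υdel(G−H, G)
    return ups_new, ups_del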
4 MAGNITUDE OF CHANGE IN BICLIQUES
We consider the maximum change in the set of maximal
bicliques when a set of edges is added to the bipartite graph.
Let λ(n) denote the maximum size of Υ(G, G + H) taken
over all n vertex bipartite graphs G and edge sets H . We
derive the following upper bound on the maximum size of
Υ(G, G + H):
Lemma 5. λ(n) ≤ 2g(n).
Proof. Note that, for any bipartite graph G with n vertices
and for any new edge set H it must be true that |BC(G)| ≤
g(n) and |BC(G + H)| ≤ g(n). Since |Υnew (G, G + H)| ≤
|BC(G + H)| and |Υdel (G, G + H)| ≤ |BC(G)|, it follows
that |Υ(G, G + H)| ≤ |BC(G + H)| + |BC(G)| ≤ 2g(n).
Next, we analyze the upper bound on |Υ(G, G + e)| when an edge e ∉ E(G) is added to G.
Fig. 5: Construction showing the changes in the set of maximal bicliques when a new edge is added. G, on the left, has n = 6 vertices. G′′ consists of the vertices in L′ and R′ and the edges among them, forming a cocktail-party graph. G′, on the right, is obtained by adding edge e = (u, v) to G.
Lemma 6. For any even integer n > 2, there exists a bipartite graph G on n vertices and an edge e = (u, v) ∉ E(G) such that |Υ(G, G + e)| = 3g(n − 2).
Proof. We use proof by construction. Consider a bipartite graph G = (L, R, E) on vertex set L ∪ R with n vertices, such that |L| = |R| = n/2. Let u ∈ L and v ∈ R be two vertices, and let L′ = L \ {u} and R′ = R \ {v}. Let G′′ denote the induced subgraph of G on the vertex sets L′ and R′. In our construction, G′′ is CP(n/2 − 1). In G, in addition to the edges of G′′, we add an edge from each vertex in R′ to u and an edge from each vertex in L′ to v. We add edge e = (u, v) to G to get the graph G′ = G + e (see Fig. 5 for the construction). We claim that the size of Υ(G, G′) is 3g(n − 2).
First, we note that the total number of maximal bicliques in G is 2g(n − 2). Each maximal biclique in G contains either vertex u or vertex v, but not both. The number of maximal bicliques that contain vertex u is g(n − 2), since each maximal biclique in G′′ leads to a maximal biclique in G by adding u. Similarly, the number of maximal bicliques in G that contain v is g(n − 2), leading to a total of 2g(n − 2) maximal bicliques in G.
Next, we note that the total number of maximal bicliques in G′ is g(n − 2). To see this, note that each maximal biclique in G′ contains both vertices u and v. Further, for each maximal biclique in G′′, we get a corresponding maximal biclique in G′ by adding vertices u and v. Hence the number of maximal bicliques in G′ equals the number of maximal bicliques in G′′, which is g(n − 2).
No maximal biclique in BC(G) contains both u and v, while every maximal biclique in G′ contains both u and v. Hence, BC(G) and BC(G′) are disjoint sets, and |Υ(G, G′)| = |BC(G)| + |BC(G′)| = 3g(n − 2).
Theorem 6. For an integer n ≥ 2, a bipartite graph G = (L, R, E) with n vertices, and any edge e = (u, v) ∉ E(G) with u ∈ L, v ∈ R, the maximum size of Υ(G, G + e) is 3g(n − 2), and for each even n there exists a bipartite graph that achieves this bound.
We prove this theorem in the following two lemmas. In Lemma 6 we prove that the size of Υ(G, G + e) can be as large as 3g(n − 2), and in Lemma 9 we prove that the size of Υ(G, G + e) is at most 3g(n − 2). We first prove a few results that we will use in proving Lemma 9.
Lemma 7. If e = (u, v) ∉ E(G) is inserted into G, where u ∈ L and v ∈ R, then every new maximal biclique in G + e must contain e.
Proof. Proof by contradiction. Assume that there is a new maximal biclique b = (b1, b2) in BC(G + e) − BC(G) that
does not contain e. Then b must be present in G but is not
maximal in G, and there must be another vertex w ∈ L
(or R) that can be added to b while remaining a biclique.
Clearly, w can be added to biclique b in G + e also, so that b
is not maximal in G + e, contradicting our assumption.
Lemma 8. If e = (u, v) is added to G, each biclique b ∈
BC(G) − BC(G + e) contains either u or v .
Proof. Proof by contradiction. Suppose there is a maximal biclique b = (b1, b2) in BC(G) − BC(G + e) that contains neither u nor v. Then b must be a maximal biclique in G. Since b is not a maximal biclique in G + e, b is contained in another maximal biclique b′ = (b′1, b′2) in G + e. From Lemma 7, b′ must contain edge e = (u, v), and hence both vertices u and v. Since b′ is a biclique, every vertex in b′2 is connected to u in G′. Hence, every vertex in b2 is connected to u even in G. Therefore, b ∪ {u} is a biclique in G, and b is not maximal in G, contradicting our assumption.
Observation 1. For a bipartite graph G = (L, R, E) and a vertex u ∈ V(G), the number of maximal bicliques that contain u is at most g(n − 1).
Proof. Suppose u ∈ L. Each maximal biclique b in G that contains u corresponds to a unique maximal biclique in G − {u}, obtained by deleting u from b. As the maximum number of maximal bicliques in G − {u} is g(n − 1), the number of maximal bicliques in G that contain u can be no more than g(n − 1).
Observation 2. The number of maximal bicliques containing a specific edge (u, v) is at most g(n − 2).
Proof. Consider an edge (u, v) ∈ E(G). Let V′ = (ΓG(u) ∪ ΓG(v)) − {u, v}, and let G′ be the subgraph of G induced by V′. Each maximal biclique b in G that contains edge (u, v) corresponds to a unique maximal biclique in G′, obtained by deleting vertices u and v from b. Also, each maximal biclique b′ in G′ corresponds to a unique maximal biclique in G that contains (u, v), obtained by adding vertices u and v to b′. Thus, there is a bijection between the maximal bicliques in G′ and the set of maximal bicliques in G that contain edge (u, v). The number of maximal bicliques in G′ can be at most g(n − 2), since G′ has no more than n − 2 vertices, completing the proof.
Lemma 9. For a bipartite graph G = (L, R, E) on n vertices and an edge e = (u, v) ∉ E(G), the size of Υ(G, G + e) can be no larger than 3g(n − 2).
Proof. Proof by contradiction. Suppose there exists a bipartite graph G = (L, R, E) and an edge e ∉ E(G) such that |Υ(G, G + e)| > 3g(n − 2). Then either |BC(G + e) − BC(G)| > g(n − 2) or |BC(G) − BC(G + e)| > 2g(n − 2).
Case 1: |BC(G + e) − BC(G)| > g(n − 2). This means that the total number of new maximal bicliques formed due to the addition of edge e is larger than g(n − 2). From Lemma 7, each new maximal biclique formed due to the addition of e must contain e. From Observation 2, the total number of maximal bicliques in an n-vertex bipartite graph containing a specific edge can be at most g(n − 2). Thus, the number of new maximal bicliques after adding edge e is at most g(n − 2), contradicting our assumption.
Case 2: |BC(G) − BC(G + e)| > 2g(n − 2). Using Lemma 8, each maximal biclique b ∈ BC(G) − BC(G + e) must contain either u or v, but not both. Suppose that b contains u but not v. Then b must be a maximal biclique in G − v. Using Observation 1, the number of maximal bicliques in G − v that contain the specific vertex u is no more than g(n − 2). In a similar way, the number of possible maximal bicliques that contain v is at most g(n − 2). Therefore, the total number of maximal bicliques in BC(G) − BC(G + e) is at most 2g(n − 2), contradicting our assumption.
Combining Lemma 5, Theorem 6 and using the fact that
3g(n − 2) = 1.5g(n) for even n, we obtain the following
when n is even:
Theorem 7. 1.5g(n) ≤ λ(n) ≤ 2g(n)
5 EXPERIMENTAL EVALUATION
In this section, we present results of an experimental evaluation of our algorithms.
5.1 Data
We consider the following real-world bipartite graphs in
our experiments. A summary of the datasets is presented in
Table 1. In the actor-movie [1] graph, vertices consist of
actors in one bipartition and movies in another bipartition.
There is an edge between an actor and a movie if the
actor played in that movie. In the dblp-author [2] graph,
vertices consist of authors in one partition and the publications in another partition. Edges connect authors to their
publications. In the epinions-rating [3] graph, vertices
consist of users in one partition and products in another
partition. There is an edge between a user and a product if
the user rated the product. Also, the edges have timestamps
of their creation. In the flickr-membership [4] graph,
vertices consists of users and groups. There is an edge
between a user and a group if that user is a member of
that group.
We converted the above graphs into dynamic graphs
by creating edge streams as follows: For actor-movie,
dblp-author, and flickr-membership we created initial graphs by retaining each edge in the original graph with
probability 0.1 and deleting the rest. Then the deleted edges
are added back as an edge stream, until the original graph is
reached. We named the initial graphs as actor-movie-1,
dblp-author-1, and flickr-membership-1. For the
epinions-rating graph, we created the initial graph by retaining the first 10% of the edges of the original graph according to their timestamps, and used the rest of the edges to create the edge stream in timestamp order. We named the initial graph epinions-rating-init. In Table 1,
the number of edges of the initial graph is in the column
Edges(initial) and the number of edges when we end
the experiment is in column Edges(final).
5.2 Experimental Setup and Implementation Details
We implemented our algorithms in Java on a 64-bit Intel(R) Xeon(R) CPU clocked at 3.10 GHz, with 8 GB DDR3 RAM and 6 GB of heap space. Unless otherwise specified, we considered batches of size 100.
TABLE 1: Summary of Graphs Used
Dataset               | Nodes   | Edges (initial) | Edges (final) | Edges (original graph) | Avg. deg. (original graph)
actor-movie-1         | 639286  | 146917          | 1470404       | 1470404                | 6
dblp-author-1         | 6851776 | 864372          | 8649016       | 8649016                | 3
epinions-rating-init  | 996744  | 1366832         | 1631832       | 13668320               | 31
flickr-membership-1   | 895589  | 855179          | 1355179       | 8545307                | 35
TABLE 2: Comparison with Baseline: computation time for adding a single batch of size 100
Initial-graph         | DynamicBC | BaselineBC
actor-movie-1         | 30 ms.    | > 30 min.
dblp-author-1         | 20 ms.    | > 20 hours
epinions-rating-init  | 3.5 sec.  | > 10 hours
flickr-membership-1   | 0.5 sec.  | 1 hour
Fig. 6: Computation time (in sec.) for enumerating the change in maximal bicliques, per batch of edges. Panels: (a) actor-movie-1, (b) dblp-author-1, (c) epinions-rating-init, (d) flickr-membership-1.
Metrics: We evaluate our algorithms using the following
metrics: (1) computation time for new maximal bicliques
and subsumed bicliques when a set of edges are added,
(2) change-sensitiveness, that is, the total computation time
as a function of the size of change. We measure the size
of change as the sum of the total number of edges in the
new maximal bicliques and the subsumed bicliques, and
(3) space cost, that is the memory used by the algorithm
for storing the graph, and other data structures used by
the algorithm, and (4) cumulative computation time for different batch sizes, that is, the total computation time from the initial graph to the final graph for a given batch size.
5.3 Discussion of Results
Fig. 7: Computation time (in sec.) broken down into time for new and subsumed bicliques (NewBC vs. SubBC). Panels: (a) actor-movie-1, (b) dblp-author-1, (c) epinions-rating-init, (d) flickr-membership-1.
Comparison with Baseline. We compared the performance of our algorithm, DynamicBC, with a baseline algorithm for maintaining maximal bicliques that we have implemented, which we call BaselineBC. The baseline algorithm
computes Υ(G, G + H) by (1) Enumerating BC(G),
(2) Enumerating BC(G + H), and (3) computing the
difference of the two. We use MineLMBC [22] for enumerating
bicliques from a static graph. Table 2 shows a comparison
of the runtimes of DynamicBC and BaselineBC. From the
table, it is clear that DynamicBC is faster than BaselineBC
by two to three orders of magnitude. For instance, for
adding a single batch of size 100 to actor-movie-1,
BaselineBC takes more than 30 min., whereas DynamicBC
takes around 30 ms.
Computation Time per Batch of Edges: Let an
“iteration” denote the addition of a single batch of edges.
Fig. 6 shows the computation time per iteration versus
iteration number. From the plots, we observe that the
computation time increases as the iteration number increases.
This trend is consistent with predictions. Note that as
the computation progresses, the number of edges in the graph
increases, and the computation time is
proportional to the size of the graph as well as the size of the change
(Theorem 3). In Fig. 6(c) we see that computation time
decreases suddenly and then again increases. This may
seem anomalous, but is explained by noting that in these
cases, the magnitude of change decreases in those iterations,
and then increases thereafter.
In Fig. 7, we show the breakdown of the computation
time of DynamicBC into the time taken for enumerating new
bicliques (NewBC) and for enumerating subsumed bicliques
(SubBC). Observe that the computation time increases for
both new maximal bicliques and subsumed bicliques as
more batches are added. This is because the graph becomes
denser when more batches are added and the time taken
to compute the change increases, consistent with Theorem 3.
Change-Sensitiveness: Fig. 8 shows the computation
time as a function of the size of change. We observe that
the computation time of DynamicBC is roughly proportional
to the size of change. The computation time of both
NewBC and SubBC increases as the number of new maximal
bicliques and subsumed bicliques increases. Clearly, this
observation supports our theoretical analysis. In some plots
(Fig. 8(c),8(d)) we see a rapid increase in the computation
time with the size of change. This is because, when the
graph grows, memory consumption increases considerably
and this affects the computation time of the algorithm.
Fig. 8: Computation time (in sec.) for total change vs. size of total change. Panels: (a) actor-movie-1, (b) dblp-author-1, (c) epinions-rating-init, (d) flickr-membership-1.
Space Cost: Fig. 9 shows the space cost of DynamicBC for different graphs. As SubBC needs to maintain the maximal
bicliques in memory for computing subsumed bicliques,
we report the space consumption in two cases: (1) when we
store the maximal bicliques in memory, (2) when we store
the signatures of bicliques in memory instead of storing
the bicliques. Signatures consume less memory than the
actual bicliques as the signatures have fixed size (64 bits
in our case using the murmur hash function) for different
sizes of bicliques. Therefore, memory consumption by the
algorithm that uses signatures should be smaller than the
algorithm that does not use signatures. The trend is also
clear in the plots. The difference in memory consumption is
not prominent during the initial iterations because the sizes of
maximal bicliques are much smaller during initial iterations
and therefore memory consumption is mainly due to the
graph that we maintain in memory. We do not show the space cost without hashing for the third input graph because, without hashing, the algorithm runs out of memory on that graph.
Computation Time for Different Batch Sizes: Table 3 shows the cumulative computation time for different graphs when we use different batch sizes. We observe that the total computation time increases with the batch size. The reason for this trend is that the computation time for subsumed bicliques increases with increasing batch size, while the computation time for the new maximal bicliques remains almost the same across different batch sizes. Note that the time complexity of SubBC has (in the worst case) an exponential dependence on the batch size. Therefore, the computation time for subsumed bicliques tends to increase with the batch size. However, with a very small batch size (such as 1 or 10), the change in the maximal bicliques is very small, and the overhead can be large.
Maintaining Large Maximal Bicliques: We also consider maintaining large maximal bicliques with a predefined size threshold s, where each bipartition of the biclique is required to have size at least s. For large subsumed bicliques, we provide s to SubBC in addition to its other inputs. Table 4 shows the cumulative computation time as the threshold s varies from 1 to 6; s = 1 means that we maintain all maximal bicliques. As expected, the cumulative computation time decreases significantly in most cases as the size threshold s increases.
6 CONCLUSION
In this work, we presented a change-sensitive algorithm for enumerating changes in the set of maximal bicliques in a dynamic graph.
Fig. 9: Space cost (in MB), with and without hashing. Panels: (a) actor-movie-1, (b) dblp-author-1, (c) epinions-rating-init (with hash only), (d) flickr-membership-1.
TABLE 3: Total computation time (from the initial graph to the final graph) for different batch sizes
Initial-graph         | batch-size-1          | batch-size-10         | batch-size-100
actor-movie-1         | 3.8 min. (3.3 + 0.5)  | 3.8 min. (2.8 + 1)    | 3.9 min. (2.9 + 1)
dblp-author-1         | 11.3 min. (9 + 2.3)   | 14.1 min. (8.8 + 5.3) | 15.7 min. (8.3 + 7.4)
epinions-rating-init  | 3.3 hours (3.1 + 0.2) | 3.7 hours (3.1 + 0.6) | 7 hours (3.2 + 3.8)
flickr-membership-1   | 2.1 hours (1.9 + 0.2) | 2.4 hours (1.9 + 0.5) | 3 hours (2.1 + 0.9)
TABLE 4: Total computation time (from the initial graph to the final graph) by varying the threshold size s
Initial-graph         | s=1      | s=2       | s=3       | s=4      | s=5       | s=6
actor-movie-1         | 203 sec. | 124 sec.  | 105 sec.  | 100 sec. | 103 sec.  | 98 sec.
dblp-author-1         | 947 sec. | 531 sec.  | 445 sec.  | 403 sec. | 399 sec.  | 400 sec.
epinions-rating-init  | 7 hours  | 6.5 hours | 6.3 hours | 6 hours  | 5.5 hours | 5 hours
flickr-membership-1   | 3 hours  | 2.5 hours | 2.3 hours | 2.1 hours| 1.9 hours | 1.6 hours
The performance of this algorithm is proportional to the magnitude of change in the set of maximal
bicliques – when the change is small, the algorithm runs
faster, and when the change is large, it takes a proportionally
longer time. We present near-tight bounds on the maximum
possible change in the set of maximal bicliques, due to a
change in the set of edges in the graph. Our experimental
evaluation shows that the algorithm is efficient in practice,
and scales to graphs with millions of edges. This work leads to natural open questions: (1) Can we design more efficient algorithms for enumerating the change, especially for enumerating subsumed bicliques? (2) Can we parallelize the algorithm for enumerating the change in maximal bicliques?
REFERENCES
[1] Actor movies network dataset – KONECT. http://konect.uni-koblenz.de/networks/actor-movie, Oct. 2016.
[2] DBLP network dataset – KONECT. http://konect.uni-koblenz.de/networks/dblp-author, Oct. 2016.
[3] Epinions product ratings network dataset – KONECT. http://konect.uni-koblenz.de/networks/epinions-rating, Oct. 2016.
[4] Flickr network dataset – KONECT. http://konect.uni-koblenz.de/networks/flickr-groupmemberships, Oct. 2016.
[5] G. Alexe, S. Alexe, Y. Crama, S. Foldes, P. L. Hammer, and B. Simeone. Consensus algorithms for the generation of all maximal bicliques. Discrete Applied Mathematics, 145(1):11–21, 2004.
[6] A. Angel, N. Koudas, N. Sarkas, D. Srivastava, M. Svendsen, and S. Tirthapura. Dense subgraph maintenance under streaming edge weight updates for real-time story identification. The VLDB Journal, pages 1–25, 2013.
[7] P. Damaschke. Enumerating maximal bicliques in bipartite graphs with favorable degree sequences. Information Processing Letters, 114(6):317–321, 2014.
[8] A. Das, M. Svendsen, and S. Tirthapura. Change-sensitive algorithms for maintaining maximal cliques in a dynamic graph. arXiv preprint arXiv:1601.06311, 2016.
[9] V. M. Dias, C. M. de Figueiredo, and J. L. Szwarcfiter. Generating bicliques of a graph in lexicographic order. Theoretical Computer Science, 337(1):240–248, 2005.
[10] V. M. Dias, C. M. de Figueiredo, and J. L. Szwarcfiter. On the generation of bicliques of a graph. Discrete Applied Mathematics, 155(14):1826–1832, 2007.
[11] A. C. Driskell, C. An, J. G. Burleigh, M. M. McMahon, B. C. O'Meara, and M. J. Sanderson. Prospects for building the tree of life from large sequence databases. Science, 306(5699):1172–1174, 2004.
[12] D. Eppstein. Arboricity and bipartite subgraph listing algorithms. Information Processing Letters, 51(4):207–211, 1994.
[13] A. Gély, L. Nourine, and B. Sadi. Enumeration aspects of maximal cliques and bicliques. Discrete Applied Mathematics, 157(7):1447–1459, 2009.
[14] D. Gibson, R. Kumar, and A. Tomkins. Discovering large dense subgraphs in massive graphs. In VLDB, pages 721–732, 2005.
[15] R. A. Hanneman and M. Riddle. Introduction to social network methods. http://faculty.ucr.edu/~hanneman/nettext/. Textbook on the web.
[16] X. Huang, H. Cheng, L. Qin, W. Tian, and J. X. Yu. Querying k-truss community in large and dynamic graphs. In SIGMOD, pages 1311–1322, 2014.
[17] A. Java, X. Song, T. Finin, and B. L. Tseng. Why we twitter: An analysis of a microblogging community. In WebKDD/SNA-KDD, pages 118–138, 2007.
[18] R. Kumar, P. Raghavan, S. Rajagopalan, and A. Tomkins. Trawling the Web for emerging cyber-communities. Computer Networks, 31(11):1481–1493, 1999.
[19] S. Lehmann, M. Schwartz, and L. K. Hansen. Biclique communities. Physical Review E, 78(1):016108, 2008.
[20] J. Li, H. Li, D. Soh, and L. Wong. A correspondence between maximal complete bipartite subgraphs and closed patterns. In European Conference on Principles of Data Mining and Knowledge Discovery, pages 146–156. Springer, 2005.
[21] R. Li, J. X. Yu, and R. Mao. Efficient core maintenance in large dynamic graphs. TKDE, 26(10):2453–2465, 2014.
[22] G. Liu, K. Sim, and J. Li. Efficient mining of large maximal bicliques. In Data Warehousing and Knowledge Discovery, pages 437–448. Springer, 2006.
[23] D. Lo, D. Surian, K. Zhang, and E.-P. Lim. Mining direct antagonistic communities in explicit trust networks. In CIKM, pages 1013–1018, 2011.
[24] K. Makino and T. Uno. New algorithms for enumerating all maximal cliques. In SWAT, pages 260–272, 2004.
[25] A. P. Mukherjee and S. Tirthapura. Enumerating maximal bicliques from a large graph using MapReduce. In IEEE BigData Congress, pages 707–716, 2014.
[26] N. Nagarajan and C. Kingsford. Uncovering genomic reassortments among influenza strains by enumerating maximal bicliques. In Bioinformatics and Biomedicine (BIBM '08), pages 223–230. IEEE, 2008.
[27] E. Prisner. Bicliques in graphs I: Bounds on their number. Combinatorica, 20(1):109–117, 2000.
[28] J. E. Rome and R. M. Haralick. Towards a formal concept analysis approach to exploring communities on the world wide web. In Formal Concept Analysis, volume 3403 of LNCS, pages 33–48, 2005.
[29] M. J. Sanderson, A. C. Driskell, R. H. Ree, O. Eulenstein, and
S. Langley. Obtaining maximal concatenated phylogenetic data
sets from large sequence databases. Mol. Biol. Evol., 20(7):1036–
1042, 2003.
[30] A. E. Sariyüce, B. Gedik, G. Jacques-Silva, K. Wu, and Ü. V.
Çatalyürek. Streaming algorithms for k-core decomposition.
PVLDB, 6(6):433–444, 2013.
[31] M. Svendsen, A. P. Mukherjee, and S. Tirthapura. Mining maximal
cliques from a large graph using mapreduce: Tackling highly
uneven subproblem sizes. J. Parallel Distrib. Comput., 79-80:104–
114, 2015.
[32] P. Valtchev, R. Missaoui, and R. Godin. A framework for incremental generation of closed itemsets. Discrete Applied Mathematics,
156(6):924–949, 2008.
[33] Y. Xu, J. Cheng, A. W.-C. Fu, and Y. Bu. Distributed maximal clique
computation. In IEEE BigData Congress, pages 160–167, 2014.
[34] C. Yan, J. G. Burleigh, and O. Eulenstein. Identifying optimal
incomplete phylogenetic data sets from sequence databases. Mol.
Phylogenet. Evol., 35(3):528–535, 2005.
[35] Y. Zhang, C. A. Phillips, G. L. Rogers, E. J. Baker, E. J. Chesler,
and M. A. Langston. On finding bicliques in bipartite graphs: a
novel algorithm and its application to the integration of diverse
biological data types. BMC bioinformatics, 15(1):1, 2014.
Apurba Das is a 4th year Ph.D. student in the
department of Computer Engineering at Iowa
State University. He received his Master's in
Computer Science from Indian Statistical Institute, Kolkata in 2011 and worked for 2 years
after that as a software developer at Ixia. His research interests are in the area of graph mining,
dynamic and streaming graph algorithms, and
large scale data analysis.
Dr. Srikanta Tirthapura received his Ph.D. in
Computer Science from Brown University in
2002, and his B.Tech. in Computer Science and
Engineering from IIT Madras in 1996. He is the
Kingland Professor of Data Analytics in the department of Electrical and Computer Engineering at Iowa State University. He has worked at
Oracle Corporation and is a recipient of the IBM
Faculty Award, and the Warren Boast Award
for excellence in Undergraduate Teaching. His
research interests include algorithms for large-scale data analysis, stream computing, and cybersecurity.
On Polynomial Time Absolute Approximation-bounded Solution of TSP
and NP Complete Problems
Wenhong Tian^a
arXiv:1605.06183v2 [] 18 Dec 2016
^a School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC)
Abstract
The question of whether all problems in the class NP are also in the class P is generally considered one of the most important open questions in mathematics and theoretical computer science, as it has far-reaching consequences for other problems in mathematics, computer science, biology, philosophy and cryptography. There is intensive research on proving 'NP not equal to P' and on proving 'NP equals P'. However, none of the 'proved' results is commonly accepted by the research community to date. In this paper, motivated by the approximability of the traveling salesman problem (TSP) in polynomial time, we provide a polynomial time absolute approximation-bounded solution of TSP in Euclidean space. It may shed light on solving other NP complete problems in a similar way.
Keywords: NP problems, P Problems, P versus NP, TSP, Polynomial Time Absolute Approximation
Bounded Solutions
1. Introduction
The P versus NP problem is one of the seven Millennium Prize Problems in mathematics that were stated by
the Clay Mathematics Institute [1] in 2000. As of Dec 2016, six of the problems remain unsolved. The
official statement of the P versus NP problem was given by Stephen Cook [2]. In computational complexity
theory, Karp’s 21 NP-complete problems are a set of computational problems which are NP-complete. In his
1972 paper [9], Richard Karp used Stephen Cook’s 1971 theorem that the Boolean satisfiability problem is
NP-complete (also called the Cook-Levin theorem) to show that there is a polynomial time many-one reduction from the Boolean satisfiability problem (BSP) to each of 21 combinatorial problems, thereby showing
that they are all NP-complete. This was one of the first demonstrations that many natural computational
problems occurring throughout computer science are computationally intractable, and it drove interest in
the study of NP-completeness and the P versus NP problem.
Simply speaking, P is the class of problems that can be solved exactly in polynomial time, while NP (non-deterministic polynomial) is the class of decision problems whose 'yes' answers can be verified in polynomial time. Intuitively, NP is the set of all decision problems for which the instances where the
answer is “yes” have efficiently verifiable proofs of the fact that the answer is indeed “yes”. More precisely,
these proofs have to be verifiable in polynomial time by a deterministic Turing machine. In an equivalent
formal definition, NP is the set of decision problems whose "yes"-instances can be accepted in
polynomial time by a non-deterministic Turing machine [18]. NP problems have far-reaching consequences for
other problems in mathematics, biology, philosophy and cryptography.
The complexity class P is contained in NP, and NP contains many important problems. The hardest
of these are the NP-complete problems, whose polynomial-time solutions would suffice to solve any other NP problem in
polynomial time. The most important open question in complexity theory is the P versus NP problem, which
asks whether polynomial time algorithms actually exist for NP-complete problems and all NP problems. The
important thing is that Karp showed that if any of them has an efficient polynomial time algorithm, then they
all do. Many of these problems arise from real-world optimization problems including Sub Set Sum Problem (SSP), Traveling Salesman Problem (TSP), Bin Packing Problem (BPP), Hamiltonian Cycle Problem
(HCP), and Chromatic Number Problem (CNP). Researchers later extended Karp's techniques to show that hundreds, if not thousands, of natural problems are NP-complete.
A 2002 poll showed that it was widely believed that NP != P [4]. In 2012, ten years later, the same poll was repeated [5]: of the researchers who answered, 126 (83%) believed the answer to be no, 12 (9%) believed the answer to be yes, 5 (3%) believed the question may be independent of the currently accepted axioms and therefore impossible to prove or disprove, and 8 (5%) said they either did not know, did not care, or did not want the answer to be yes nor the problem to be resolved. On the Web site [18], Prof. Gerhard Woeginger maintains an unofficial archive of about 116 claimed resolutions of the NP vs P problem from 1986 to April 2016; among them, 49 (42%) claim the answer to be no, 62 (53%) claim the answer to be yes, and the other 5 (5%) claim it undecidable, unprovable, or unknown. About nine of the papers in the list 'established' NP=P by designing algorithms for variants of the TSP, though none of them is commonly accepted yet by the research community.
As for the approximation of the TSP, Christofides [Christofides, 1976] provided an absolute 1.5-approximation
algorithm. Arora [Arora, 1998] proposed an asymptotic (1 + 1/c)-approximation scheme, but its computational complexity is O(n(log n)^O(c)), where (log n)^O(c) can be huge since c can be a few tens or larger;
therefore it is not a practical algorithm but an asymptotically bounded approximation (this was confirmed by Prof.
Arora through email). Tian et al. [Tian et al., 2016] introduced the TGB algorithm with an absolute approximation
ratio of (1 + (1/2)((α+1)/(α+2))^(K−1)), where K is the number of iterations in TGB and α is the shape parameter
of the Generalized Beta distribution (introduced in Section 3), which can be obtained once a TSP instance is given. In this
paper, we focus on absolute approximation rather than asymptotic approximation. For convenience, we simply
write “approximation” for absolute approximation.
In [Papadimitriou and Vempala, 2006], Papadimitriou and Vempala proved that, unless NP = P, there can be
no polynomial-time C-approximation algorithm for the metric TSP with C less than 220/219, i.e., less than 0.45% above the optimum.
In this paper, we aim to propose an absolutely bounded approximation algorithm for the TSP in Euclidean space.
The remaining sections are organized as follows. The TSP is discussed in Section 2. The approximation-bounded
algorithm for the TSP is proposed in Section 3. Our main results are provided in Section 4. Finally, we conclude
in Section 5.
2. TSP Formulation in Euclidean Space
The TSP is one of the most researched problems in combinatorial optimization because of its importance in
both academia and real-world applications. For surveys of the TSP and its applications, the reader is
referred to [Cook, 2012] and the references therein. We consider the n-node TSP defined in Euclidean space.
This can be represented on a complete graph G = (V, E), where V is the set of nodes and E is the set of
edges. The cost of an edge (u, v) is the Euclidean distance c_uv between u and v. Let the edge cost matrix
be C = [c_ij]; it satisfies the triangle inequality.
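As a concrete illustration (a hypothetical example with randomly generated points, not data from this paper), the edge cost matrix C = [c_ij] of a Euclidean instance can be built as follows; a matrix of pairwise Euclidean distances is automatically symmetric and satisfies the triangle inequality.

    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.random((6, 2))                     # n = 6 nodes in the plane
    diff = points[:, None, :] - points[None, :, :]  # pairwise coordinate differences
    C = np.sqrt((diff ** 2).sum(axis=-1))           # c_ij = Euclidean distance between i and j
    assert np.allclose(C, C.T)                      # symmetric edge cost matrix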
Definition 1. The Symmetric TSP (STSP) is a TSP in Euclidean distance (called ESTSP) whose edge cost
matrix C is symmetric.
Definition 2. The Asymmetric TSP (ATSP) is a TSP in Euclidean distance (called EATSP) whose edge cost
matrix C is asymmetric.
Definition 3. A △STSP is an STSP whose edge costs are non-negative and satisfy the triangle inequality,
i.e., for any three distinct nodes (not necessarily neighboring) (i, j, k), c_ij + c_jk ≥ c_ik. The △STSP is also
called the metric TSP.
Definition 4. TSP tour. Given a graph G in 2-dimensional Euclidean space and its distance matrix
C, where c_ij denotes the distance between nodes i and j (in both the symmetric and asymmetric cases), a tour T over the
|V| nodes has length

    L = Σ_{k=0}^{|V|−1} c_{T(k),T(k+1)}.   (1)
In 1977, Papadimitriou [Papadimitriou, 1977] first proved that the Euclidean TSP (ETSP) is NP-complete by a reduction of the Exact Cover Problem to the ETSP.
3. On the Approximability of Metric TSP
On the approximability of the metric TSP, there is a well-known theorem, as follows.
Papadimitriou-Vempala Theorem. In [Papadimitriou and Vempala, 2006], Papadimitriou and Vempala
(let us call it the Papadimitriou-Vempala Theorem) proved that, unless NP = P, there can be no polynomial-time
C-approximation algorithm for the metric TSP with C less than 220/219, i.e., less than 0.45% above the optimum.
Before continuing, the following two definitions are introduced:
Definition 5. maxTSP. The maximum tour length B is obtained using LKH, where each edge cost c_ij is
replaced by a very large value M minus the original edge cost, i.e., M − c_ij. M can be set to the maximum
edge cost plus 1.
Definition 6. k-opt. k-opt is a local search with k-exchange neighborhoods and the most widely used heuristic
method for the TSP. It is a tour improvement algorithm, in which at each step k links of the current
tour are replaced by k links in such a way that a shorter tour is achieved (see [Helsgaun, 2009] for a detailed
introduction).
In the following, we propose an algorithm called ITGBC which can obtain an approximation ratio of less than 220/219
for the metric TSP.
We first introduce the Generalized Beta (GB) distribution [Tian et al., 2016]. The probability density
function (pdf) of the GB is defined as

    f(x, α, β, A, B) = (x − A)^(α−1) (B − x)^(β−1) / Beta(α, β),   (2)

where Beta(α, β) is the beta function

    Beta(α, β) = ∫₀¹ t^(α−1) (1 − t)^(β−1) dt,   (3)

A and B are the lower bound and upper bound, respectively, and α > 0, β > 0. For the TSP, A and B represent the
minimum and the maximum tour length (maxTSP), respectively.
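For reference, the GB density can be evaluated numerically as follows (a sketch of ours; note that we include the normalizing factor (B − A)^(α+β−1), which makes the density integrate to one over [A, B] and is left implicit in Eq. (2)).

    import numpy as np
    from scipy.special import beta as beta_fn

    def gb_pdf(x, alpha, bta, A, B):
        # Generalized Beta density on [A, B]; z rescales x to [0, 1].
        z = (np.asarray(x, dtype=float) - A) / (B - A)
        return z**(alpha - 1) * (1 - z)**(bta - 1) / ((B - A) * beta_fn(alpha, bta))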
Some of the following results were introduced in [Tian et al., 2016]; for completeness, we restate them here and
first introduce the Iterative Truncated Generalized Beta distribution Based on Christofides Algorithm
(ITGBC). The ITGBC algorithm performs seven steps:
• (1). Finding the minimum spanning tree MST of the input graph G representing the metric TSP;
• (2). Taking G restricted to the vertices of odd degree in the MST as the subgraph G*; this graph has an even number of nodes and is complete;
• (3). Finding a minimum-weight matching M* on G*;
• (4). Uniting the edges of M* with those of the MST to create a graph H in which all vertices have even degree;
• (5). Creating an Eulerian tour on H and reducing it to a feasible solution using the triangle inequality; a shortcut is a contraction of two edges (i, j) and (j, k) to a single edge (i, k);
• (6). Applying Christofides’ algorithm [Christofides, 1976] to an ESTSP forms a truncated GB (TGB) for the probability density function of optimal tour lengths, with expectation (average) value at most 1.5·OPT − ε, where ε is a very small value; applying k-opt to the result of Christofides’ algorithm forms another TGB for the probability density function of optimal tour lengths;
• (7). Iteratively applying this approach, taking the expectation value of the (K−1)-th iteration as the upper bound b̂_K = (µ_t^(K−1) − A)/(B − A) of the K-th iteration, we have the expectation value (denoted µ_t^K) after K iterations (K ≥ 2), which is proved in [Tian et al., 2016]:

    µ_t^K = A + (B − A) B₂(0, b̂_K, α+1, β) / B₂(0, b̂_K, α, β) = A + (B − A) g(b̂_K) ≤ (1 + (1/2)((α+1)/(α+2))^(K−1)) A,   (4)

where

    B₂(0, t, α, β) = ∫₀ᵗ x^(α−1) (1 − x)^(β−1) dx.   (5)
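Steps (1)-(5) above are exactly Christofides' construction, which the following compact sketch reproduces with the networkx graph library (our illustration under these assumptions, not the authors' implementation; the truncation and k-opt iterations of steps (6)-(7) are omitted).

    import itertools
    import networkx as nx

    def christofides_tour(C):
        n = len(C)
        G = nx.Graph()
        G.add_weighted_edges_from((i, j, C[i][j])
                                  for i, j in itertools.combinations(range(n), 2))
        mst = nx.minimum_spanning_tree(G)                     # step (1)
        odd = [v for v in mst if mst.degree(v) % 2 == 1]      # step (2)
        neg = nx.Graph()
        neg.add_weighted_edges_from((u, v, -C[u][v])
                                    for u, v in itertools.combinations(odd, 2))
        M = nx.max_weight_matching(neg, maxcardinality=True)  # step (3): min-weight matching
        H = nx.MultiGraph(mst)                                # step (4): MST plus matching edges
        H.add_edges_from(M)
        tour, seen = [], set()                                # step (5): shortcut the Euler tour
        for u, _ in nx.eulerian_circuit(H):
            if u not in seen:
                seen.add(u)
                tour.append(u)
        return tour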
Theorem 1. The ITGBC algorithm is a (1 + (1/2)((α+1)/(α+2))^(K−1))-approximation algorithm, where K is the number of iterations
in the ITGBC algorithm and α is the shape parameter of the TGB, which can be determined once the ETSP instance is given.
The present authors proved Theorem 1 in [Tian et al., 2016]; for completeness, we provide the proof in the
following.
Figure 1: The process in the ITGBC algorithm. (Axis marks in the original figure: iterations 1, 2, 3, …, K; tour-length marks A, 1.5A−ε, 1.5A, B.)
Proof. Apply k-opt to the result obtained by Christofides’ algorithm, as shown in Fig. 1. The TGB in this
case is truncated from above. Denote the truncation by Christofides’ algorithm as the first truncation
(K = 1). The probability density function of the second TGB is given by

    f_t²(x, α, β, A, B, a₂, b₂) = (x − A)^(α−1) (B − x)^(β−1) / ∫_{a₂}^{b₂} (x − A)^(α−1) (B − x)^(β−1) dx.   (6)

In this case, a₂ = A and b₂ = 1.5A, because the distribution is based on the result after applying Christofides’ algorithm, which ensures that the upper bound is at most 1.5A; see Fig. 1. Setting x̂ = (x − A)/(B − A), â₂ = (A − A)/(B − A) = 0, and b̂₂ = (1.5A − A)/(B − A) = 0.5A/(B − A), we have

    C₀ = ∫_{a₂}^{b₂} (x − A)^(α−1) (B − x)^(β−1) dx
       = ∫₀^{b̂₂} ((B − A)x̂)^(α−1) ((B − A)(1 − x̂))^(β−1) (B − A) dx̂
       = (B − A)^(α+β−1) B₂(0, b̂₂, α, β),   (7)

where

    B₂(0, t, α, β) = ∫₀ᵗ x^(α−1) (1 − x)^(β−1) dx.   (8)
By the definition of the expectation or mean value (denoted µ_t²) for f_t²(x, α, β, A, B, a₂, b₂), we have

    µ_t² − A = ∫_{a₂}^{b₂} (x − A) f_t²(x, α, β, A, B, a₂, b₂) dx
             = (1/C₀) ∫_{a₂}^{b₂} (x − A)^α (B − x)^(β−1) dx
             = (B − A)^(α+β) B₂(0, b̂₂, α+1, β) / C₀
             = (B − A) B₂(0, b̂₂, α+1, β) / B₂(0, b̂₂, α, β),

so that

    µ_t² = A + (B − A) B₂(0, b̂₂, α+1, β) / B₂(0, b̂₂, α, β).   (9)
Taking the expectation value of the (K−1)-th iteration as the upper bound b̂_K = (µ_t^(K−1) − A)/(B − A) of the K-th
iteration, we apply this approach iteratively and obtain the expectation value after K iterations (K ≥ 2), denoted µ_t^K:

    µ_t^K = A + (B − A) B₂(0, b̂_K, α+1, β) / B₂(0, b̂_K, α, β) = A + (B − A) g(b̂_K).

Notice that the expectation value of the (K−1)-th iteration is taken as the upper bound b̂_K = (µ_t^(K−1) − A)/(B − A) (here A
is OPT and B is the maxTSP) of the K-th iteration, as shown in Fig. 1, where we set

    g(b̂_K) = B₂(0, b̂_K, α+1, β) / B₂(0, b̂_K, α, β).   (10)
The exact expression of g(b̂_K) can be stated via a hypergeometric series, with

    B₂(0, b̂_K, α, β) = (b̂_K^α / α) F(α, 1 − β, α + 1, b̂_K)   (11)

and

    F(a, b, c, x) = 1 + (ab/c) x + (a(a+1)b(b+1) / (c(c+1)·2!)) x² + (a(a+1)(a+2)b(b+1)(b+2) / (c(c+1)(c+2)·3!)) x³ + …   (12)
In all cases we have α > 1, β > 1, and b̂_K ∈ (0, 1), as shown in [Tian et al., 2016]; therefore F(a, b, c, x) is a
monotonically decreasing function. We have

    µ_t² = A + (B − A) g(b̂₂) ≤ A + 0.5A (α+1)/(α+2).   (13)

Continuing this for g(b̂₃), µ_t³, g(b̂₄), µ_t⁴, and so forth, we have

    b̂_K ≤ (0.5A/(B − A)) ((α+1)/(α+2))^(K−1)   (14)
and

    g(b̂_K) = B₂(0, b̂_K, α+1, β) / B₂(0, b̂_K, α, β) ≤ ((α+1)/(α+2)) b̂_K ≤ 0.5A ((α+1)/(α+2))^(K−1) / (B − A).   (15)

Therefore

    µ_t^K = A + (B − A) B₂(0, b̂_K, α+1, β) / B₂(0, b̂_K, α, β) = A + (B − A) g(b̂_K) ≤ (1 + (1/2)((α+1)/(α+2))^(K−1)) A.   (16)
This completes the proof.
Theorem 2. The computational complexity of the ITGBC algorithm is O(max(n³, K(k³ + k√n))).

Proof: In [Helsgaun, 2009], a method with computational complexity O(k³ + k√n) is introduced for
k-opt. ITGBC first applies Christofides’ algorithm, which has computational complexity O(n³)
[Christofides, 1976], and then applies k-opt with K iterations in LKH, which has computational complexity
O(K(k³ + k√n)); the overall computational complexity of LKH is estimated to be O(n^2.2) [Helsgaun, 2009]. Altogether, the computational complexity of ITGBC is O(max(n³, K(k³ + k√n))).
3.1. Our Main Results
Theorem 3. The metric TSP is an NP-complete problem [Papadimitriou, 1977] which can be solved by
ITGBC in polynomial time with C-approximation for C less than 220/219.
Proof. From Theorem 1 it can be seen that the approximation ratio (1 + (1/2)((α+1)/(α+2))^(K−1)) can be made less than 220/219.
Indeed, when an instance is given, the shape parameter α can be estimated easily, as shown in [Tian et
al., 2016]. Fixing the target approximation ratio yields the required number of iterations in ITGBC: through a simple
numerical computation, we know that when K > 1 + log 0.009 / log(1 − 1/(α+2)), the approximation ratio will be less than
220/219.
According to the Papadimitriou-Vempala Theorem, this happens only when NP = P. This may imply NP = P.
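The iteration count prescribed by this computation is easy to evaluate (a sketch of the arithmetic only, under our reading that the target ratio 220/219 requires (1/2)((α+1)/(α+2))^(K−1) < 1/219, i.e. ((α+1)/(α+2))^(K−1) < 2/219 ≈ 0.009):

    import math

    def iterations_needed(alpha):
        # Smallest K with 1 + 0.5 * ((alpha+1)/(alpha+2))**(K-1) < 220/219,
        # i.e. K > 1 + log(2/219) / log(1 - 1/(alpha + 2)).
        r = (alpha + 1) / (alpha + 2)
        return math.ceil(1 + math.log(2 / 219) / math.log(r))

    print(iterations_needed(2.0))  # 18 iterations when alpha = 2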
4. Numerical Results
For implementation, the ITGBC algorithm is based on the Christofides algorithm and the LKH source codes, so it
takes advantage of both and provides approximation-bounded results.
4.1. Polynomial Time Approximation-Bounded Solutions to ESTSP
The results in Table 1 are obtained from William J. Cook’s book [Cook, 2012], Chapter 1, where all
problems are solved to optimality by different tools except for the 100,000-city problem and the 1,904,711-city
problem, for which optima are not yet known.
In Table 1, Nagata’s tour for the 100,000-city Mona Lisa instance is known to be at most 0.0003% longer than an
optimal solution; the tour by LKH [Helsgaun, 2009] for the 1,904,711-city instance, of length 7,515,790,345 meters, is
known to be no more than 0.0476% longer than an optimal tour.

Table 1: TSP Records Variation by Year [Cook, 2012]

# Nodes | Year (Solved) | Description        | Authors
48      | 1954          | USA cities         | Dantzig et al. (by hand)
64      | 1971          | random nodes       | Michael Held, Richard Karp
80      | 1975          | random nodes       | Panagiotis Miliotis
120     | 1977          | Germany cities     | Martin Grotschel, Manfred Padberg
318     | 1987          | cities             | Manfred Padberg, Harlan Crowder
532     | 1987          | USA cities         | Martin Grotschel, Manfred Padberg
666     | 1987          | World cities       | Martin Grotschel, Manfred Padberg
1002    | 1987          | cities             | Martin Grotschel, Manfred Padberg
2392    | 1987          | cities             | Martin Grotschel, Manfred Padberg
3038    | 1992          | cities             | Concorde
13509   | 1998          | USA cities         | Concorde
15112   | 2001          | cities             | Concorde
24978   | 2004          | Sweden cities      | Concorde
85900   | 2006          | cities             | Concorde
100000  | 2009*         | Japan              | Yuchi Nagata
1904711 | 2010*         | World TSP Challenge | LKH [Helsgaun, 2009]
Definition 7. Concorde algorithm [Concorde, 2003]: Concorde is a computer code for the STSP
and some related network optimization problems. The code is written in the ANSI C programming language.
Concorde’s TSP solver has been used to obtain the optimal solutions to the full set of 110 TSPLIB instances,
the largest having 85,900 cities. Executable versions of Concorde with QSopt for Linux and Solaris are available [Concorde, 2003]. Hans Mittelmann has created a NEOS Server (http://neos-server.org) for Concorde,
allowing users to solve TSP instances online.
Definition 8. LKH algorithm [Helsgaun, 2009]: LKH is an effective implementation of the Lin-Kernighan heuristic [Lin and Kernighan, 1973] for solving the traveling salesman problem. Computational
experiments have shown that LKH is highly effective. LKH has produced optimal solutions for all solved
problems its authors have been able to obtain, including an 85,900-city instance (at the time of writing, the largest
nontrivial instance solved to optimality). Furthermore, the algorithm has improved the best known solutions for a series of large-scale instances with unknown optima, among these a 1,904,711-city instance (called the
World TSP).
Both Concorde [Concorde, 2003] and LKH [Helsgaun, 2009] solve all 110 TSPLIB instances [Reinelt, 1991] to
optimality.
Table 2: The 5 TSPLIB Instances with the Longest Running Times Solved Exactly by LKH [Helsgaun, 2009]

Name    | # Nodes | Running Time (Seconds)
fl1577  | 1577    | 10975
fnl4461 | 4461    | 10973
u1817   | 1817    | 2529
pcb3038 | 3038    | 3237
pla7397 | 7397    | 130220

Table 2 shows the LKH results for the 5 STSP TSPLIB instances with the longest running times among those
solved by LKH in 1998; the running times are measured in seconds on a 300 MHz G3 Power Macintosh.
5. Conclusions and Future Work
In this paper, we provide the ITGBC algorithm, which runs in polynomial time with absolutely bounded approximation
solutions for the TSP. One can see from Table 1 that the scale (the number of nodes) of solved TSP instances has increased
over the years; one reason the TSP becomes harder is that this scale keeps growing.
For TSPLIB instances with fewer than 5000 nodes, ITGBC based on LKH can solve them to optimality
in a few hours or less. This means that ITGBC can provide exact or approximation-bounded
solutions to practical TSPs.
What about other NP problems? Can they also be solved in a similar way? According to Karp’s result
[Karp, 1972], if any of these problems has an efficient algorithm, then they all do. Hopefully our proposed
approach can shed light on other NP problems.
Acknowledgments
This research is partially supported by the China National Science Foundation (CNSF) under projects
61672136 and 61650110513, and by the Xi Bu Zhi Guang Project (R51A150Z10). A version of this manuscript is posted on
arXiv at https://arxiv.org/abs/1605.06183.
References

[1] Clay Mathematics Institute, Millennium Prize Problems, http://www.claymath.org/millennium-problems/millennium-prize-problems.
[2] Stephen Cook (1971). The Complexity of Theorem Proving Procedures. Proceedings of the Third Annual ACM Symposium on Theory of Computing, pp. 151-158, March 1971.
[3] William Cook, In Pursuit of the Traveling Salesman, Princeton University Press, 2012.
[4] Concorde, http://www.math.uwaterloo.ca/tsp/concorde.html
[5] William I. Gasarch (June 2002). The P=?NP poll. SIGACT News 33(2): 34-47. doi:10.1145/1052796.1052804. Retrieved 2008-12-29.
[6] William I. Gasarch, The Second P=?NP poll, SIGACT News 74, 2012.
[7] K. Helsgaun, General k-opt submoves for the Lin-Kernighan TSP heuristic. Mathematical Programming Computation, 2009. doi:10.1007/s12532-009-0004-6.
[8] Richard M. Karp (1972), Reducibility Among Combinatorial Problems. In R. E. Miller and J. W. Thatcher (editors), Complexity of Computer Computations. New York: Plenum, pp. 85-103.
[9] S. Lin, B. W. Kernighan, An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 21, 498-516 (1973).
[10] LKH codes, http://www.akira.ruc.dk/~keld/research/LKH/, last accessed Jan. 15, 2015.
[11] Christos H. Papadimitriou, The Euclidean travelling salesman problem is NP-complete. Theoretical Computer Science, Volume 4, Issue 3, June 1977, pp. 237-244.
[12] Christos H. Papadimitriou and Santosh Vempala, On the approximability of the traveling salesman problem. Combinatorica 26(1) (2006) 101-120.
[13] G. Reinelt, TSPLIB—a traveling salesman problem library. ORSA Journal on Computing, 3(4):376-384, 1991.
[14] S. Reiter and D. B. Rice, Discrete optimizing solution procedures for linear and nonlinear integer programming problems. Management Science, 12(11):829-850, 1966.
[15] David Soler, A transformation for the mixed general routing problem with turn penalties. Journal of the Operational Research Society 59, 540-547, April 2008.
[16] Wenhong Tian, Chaojie Huang, Xinyang Wang, A Near Optimal Approach for Symmetric Traveling Salesman Problem in Euclidean Space. In Proceedings of ICORES 2017, Portugal; also available at https://arxiv.org/pdf/1502.00447.pdf
[17] Wikipedia, http://en.wikipedia.org/wiki/NP-complete.
[18] http://www.win.tue.nl/~gwoegi/P-versus-NP.htm
[19] Manindra Agrawal, Neeraj Kayal, Nitin Saxena, PRIMES is in P. IIT Kanpur, preprint of August 8, 2002, http://www.cse.iitk.ac.in/news/primality.html.
[20] N. Christofides, Worst-case analysis of a new heuristic for the travelling salesman problem. Report 388, Graduate School of Industrial Administration, CMU, 1976.
[21] Sanjeev Arora, Polynomial Time Approximation Schemes for Euclidean Traveling Salesman and other Geometric Problems. J. ACM 45(5):753-782, 1998.
Sign-Constrained Regularized Loss Minimization
Tsuyoshi Kato†,∗ , Misato Kobayashi† , Daisuke Sano⋄
† Division of Electronics and Informatics, Faculty of Science and Technology, Gunma University, Tenjin-cho 1-5-1, Kiryu, Gunma 376-8515, Japan.
⋄ Department of Civil and Environmental Engineering, Graduate School of Engineering, Tohoku University, Aoba 6–6–06, Aramaki, Aoba-ku, Sendai, Miyagi 980–8579, Japan.
arXiv:1710.04380v1 [cs.LG] 12 Oct 2017
Abstract

In practical analysis, domain knowledge about the analysis target has often been accumulated; typically, however, such knowledge is discarded in the statistical analysis stage, and the statistical tool is applied as a black box. In this paper, we introduce sign constraints, a handy and simple representation for non-experts, into generic learning problems. We have developed two new optimization algorithms for sign-constrained regularized loss minimization, called the sign-constrained Pegasos (SC-Pega) and the sign-constrained SDCA (SC-SDCA), by simply inserting a sign correction step into the original Pegasos and SDCA, respectively. We present theoretical analyses guaranteeing that insertion of the sign correction step does not degrade the convergence rate of either algorithm. Two applications where sign-constrained learning is effective are presented: one is the exploitation of prior information about the correlation between explanatory variables and a target variable; the other is the introduction of sign constraints into the SVM-Pairwise method. Experimental results demonstrate significant improvement of generalization performance by introducing sign constraints in both applications.

1 Introduction

The problem of regularized loss minimization (e.g. Hastie et al. (2009)) is often described as

    min P(w)  wrt w ∈ R^d,  where  P(w) := (λ/2)‖w‖² + (1/n) Φ(X^⊤ w),   (1)

with X := [x₁, …, x_n] ∈ R^{d×n}, aiming to obtain a linear predictor ⟨w, x⟩ for an unknown input x ∈ R^d. Therein, Φ : R^n → R is a loss function which is the sum of convex losses for the n examples: Φ(z) := Σ_{i=1}^n φ_i(z_i) for z := [z₁, …, z_n]^⊤ ∈ R^n. This problem covers a large class of machine learning algorithms, including the support vector machine, logistic regression, support vector regression, and ridge regression.

In this study, we pose sign constraints (Lawson and Hanson, 1995) on the entries of the model parameter w ∈ R^d in the unconstrained minimization problem (1). We divide the index set of the d entries into three exclusive subsets, I+, I0, and I−, as {1, …, d} = I+ ∪ I0 ∪ I−, and impose on the entries in I+ and I−:

    w_h ≥ 0 for h ∈ I+,   w_{h′} ≤ 0 for h′ ∈ I−.   (2)

Sign constraints can introduce prior knowledge directly into learning machines. For example, consider a binary classification task. In case the h-th explanatory variable x_h is positively correlated to the binary class label y ∈ {±1}, a positive weight coefficient w_h is expected to achieve better generalization performance than a negative coefficient, because without sign constraints the entry w_h in the optimal solution might be negative due to the small sample problem. On the other hand, in case x_h is negatively correlated to the class label, a negative weight coefficient w_h would yield better prediction. If sign constraints are explicitly imposed, inadequate signs of coefficients can be avoided.

The strategy of sign constraints for generic learning problems has rarely been discussed so far, although there are extensive reports on non-negative least squares regression, supported by many successful applications including sound source localization (Lin et al., 2004), tomographic imaging (Ma, 2013), spectral analysis (Zhang et al., 2007), hyperspectral image super-resolution (Dong et al., 2016), microbial community pattern detection (Cai et al., 2017), face recognition (Ji et al., 2009; He et al., 2013), and non-negative image restoration (Henrot et al., 2013; Landi and Piccolomini, 2012; Wang and Ma, 2007; Shashua and Hazan, 2005). In most of them, non-negative least squares regression is used as an important ingredient of larger methods such as non-negative matrix factorization (Lee and Seung, 2001; Wang et al., 2017; Kimura et al., 2016; Févotte and Idier, 2011; Ding et al., 2006).

Several efficient algorithms for non-negative least squares regression have been developed. The active set method by Lawson and Hanson (1995) has been widely used for many years, and several works (Kim et al., 2010, 2007; Bierlaire et al., 1991; Portugal et al., 1994; Moré and Toraldo, 1991; Lin and Moré, 1999; Morigi et al., 2007) have accelerated optimization by combining the active set method with the projected gradient approach. Interior point methods (Bellavia et al., 2006; Heinkenschloss et al., 1999; Kanzow and Klug, 2006) have been proposed as alternative algorithms for non-negative least squares regression. However, none of them can be applied to generic regularized loss minimization problems.

In this paper, we present two algorithms for the sign-constrained regularized loss minimization problem with
generic loss functions. A surge of algorithms for unconstrained regularized empirical loss minimization has been developed, such as SAG (Roux et al., 2012; Schmidt et al., 2016), SVRG (Johnson and Zhang, 2013), Prox-SVRG (Xiao and Zhang, 2014), SAGA (Defazio et al., 2014a), Kaczmarz (Needell et al., 2015), EMGD (Zhang et al., 2013), and Finito (Defazio et al., 2014b). This study focuses on two popular algorithms, Pegasos (Shalev-Shwartz et al., 2011) and SDCA (Shalev-Shwartz and Zhang, 2013). A prominent characteristic of these two algorithms is that no step size needs to be chosen. Some of the other optimization algorithms guarantee convergence to the optimum under the assumption of a small step size, although that step size is often too small to be usable. Meanwhile, the theory of Pegasos has been developed with a step size ηt = 1/(λt), which is large enough to be adopted in practice, and SDCA needs no step size at all. The two new algorithms developed in this study for the sign-constrained problems are simple modifications of Pegasos and SDCA.
The contributions of this study are summarized as follows.
• Sign constraints are introduced to generic regularized loss minimization problems.
• Two optimization algorithms for sign-constrained regularized loss minimization, called SC-Pega and SC-SDCA, were developed by simply inserting the sign correction step, introduced in Section 3, into the original Pegasos and SDCA.
• Our theoretical analysis ensures that both SC-Pega and SC-SDCA do not degrade the convergence rates of the original algorithms.
• Two attractive applications, where sign-constrained learning is effective, are presented. One is the exploitation of prior information about the correlation between explanatory variables and a target variable. The other is the introduction of sign constraints into the SVM-Pairwise method (Liao and Noble, 2003).
• Experimental results demonstrate significant improvement of generalization performance by introducing sign constraints in both applications.

2 Problem Setting

The feasible region can be expressed simply as

    S := {w ∈ R^d | c ⊙ w ≥ 0_d},   (3)

where c = [c₁, …, c_d]^⊤ ∈ {0, ±1}^d, each entry given by

    c_h := +1 for h ∈ I+,  c_h := 0 for h ∈ I0,  c_h := −1 for h ∈ I−.   (4)

Using S, the optimization problem discussed in this paper can be expressed as

    min P(w)  wrt  w ∈ S.   (5)

Assumption 2.1. Throughout this paper, the following assumptions are used:
(a) Φ(·) is a convex function.
(b) (1/n) Φ(0) ≤ r_loss.
(c) ∀s ∈ R^n, Φ(s) ≥ 0.
(d) ∀i, ‖x_i‖ ≤ R.

Most widely used loss functions satisfy the above assumptions; several examples of such loss functions are described in Table 1. If the hinge loss is chosen, the learning machine is the well-known support vector machine; if the square error loss is chosen, the learning machine is ridge regression. We denote the optimal solution of the constrained problem by w⋆ := argmin_{w∈S} P(w). We consider two types of loss functions: L-Lipschitz continuous functions and (1/γ)-smooth functions. A function φ_i : R → R is said to be L-Lipschitz continuous if

    ∀s, ∀δ ∈ R,  |φ_i(s + δ) − φ_i(s)| ≤ L|δ|.   (6)

Such functions are often said, for short, to be L-Lipschitz in this paper. A function φ_i : R → R is (1/γ)-smooth if its derivative is (1/γ)-Lipschitz. For an index subset A ⊆ {1, …, n} and a vector v ∈ R^n, let v_A be the subvector of v containing the entries corresponding to A, and let X_A be the sub-matrix of X containing the columns corresponding to A. Let Φ(· ; A) : R^{|A|} → R be defined as

    Φ(s_A ; A) := Σ_{i∈A} φ_i(s_i).   (7)

3 Sign-Constrained Pegasos

In the original Pegasos algorithm (Shalev-Shwartz et al., 2011), φ_i is assumed to be the classical hinge loss function (see Table 1 for the definition). Each iterate consists of three steps: the mini-batch selection step, the gradient step, and the projection-onto-ball step. The mini-batch selection step chooses a subset A_t ⊆ {1, …, n} of the n examples at random; the cardinality of the subset is predefined as |A_t| = k. The gradient step computes the gradient of

    P_t(w) := (λ/2)‖w‖² + (1/k) Φ(X_{A_t}^⊤ w ; A_t),   (8)

which approximates the objective function P(w). The current solution w_t is moved in the direction opposite to the gradient:

    w_{t+1/2} := w_t − (1/(λt)) ∇P_t(w_t) = ((t−1)/t) w_t − (1/(kλt)) X_{A_t} ∇Φ(X_{A_t}^⊤ w_t ; A_t).   (9)

At the projection-onto-ball step, the norm of the solution is shortened to 1/√λ if the norm is over 1/√λ:

    w_{t+1} := min(1, 1/(√λ ‖w_{t+1/2}‖)) w_{t+1/2}.   (10)
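As a small illustration of (3) and (4) (a sketch of ours, not code from the paper), the constraint vector c and a membership test for S can be written as:

    import numpy as np

    def make_c(d, I_plus, I_minus):
        # Entries: +1 on I_plus, -1 on I_minus, 0 elsewhere (the set I_0), as in Eq. (4).
        c = np.zeros(d)
        c[list(I_plus)] = 1.0
        c[list(I_minus)] = -1.0
        return c

    def in_feasible_region(w, c):
        # w is in S iff c ⊙ w >= 0 holds entrywise, Eq. (3).
        return bool(np.all(c * w >= 0))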
Table 1: Loss functions and their properties. Suppose 0 ≤ γ ≤ 1. Let y := [y₁, …, y_n]^⊤.

Name                 | Definition                                                                                                   | Label       | Type         | r_loss
Classical hinge loss | φ_i(s) := max(0, 1 − y_i s)                                                                                  | y_i ∈ {±1} | 1-Lipschitz  | 1
Smoothed hinge loss  | φ_i(s) := 1 − y_i s − 0.5γ if y_i s ∈ (−∞, 1 − γ]; (1 − y_i s)²/(2γ) if y_i s ∈ (1 − γ, 1); 0 if y_i s ∈ [1, +∞) | y_i ∈ {±1} | (1/γ)-smooth | 1 − γ/2
Logistic loss        | φ_i(s) := log(1 + exp(−y_i s))                                                                               | y_i ∈ {±1} | 0.25-smooth  | log(2)
Square error loss    | φ_i(s) := 0.5(s − y_i)²                                                                                      | y_i ∈ R    | 1-smooth     | ‖y‖²/(2n)
Absolute error loss  | φ_i(s) := |s − y_i|                                                                                          | y_i ∈ R    | 1-Lipschitz  | ‖y‖₁/n
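The smoothed hinge loss of Table 1, used later in the convergence experiments, can be evaluated as follows (a vectorized sketch of ours):

    import numpy as np

    def smoothed_hinge(s, y, gamma=0.01):
        # Piecewise definition from Table 1; m = y_i * s is the margin.
        m = y * s
        return np.where(m <= 1 - gamma, 1.0 - m - 0.5 * gamma,
                        np.where(m < 1.0, (1.0 - m) ** 2 / (2.0 * gamma), 0.0))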
The projection-onto-ball step plays an important role in obtaining a smaller upper bound on the norm of the gradient of the regularization term in the objective, which eventually reduces the number of iterates needed to attain an ǫ-approximate solution (i.e. P(w̃) − P(w⋆) ≤ ǫ).

In the algorithm developed in this study, we simply insert between those two steps a new step that corrects the sign of each entry in the current solution w:

    w_h ← max(0, w_h) for h ∈ I+,  w_h ← min(0, w_h) for h ∈ I−,  w_h ← w_h for h ∈ I0,   (11)

which can be rewritten equivalently as w ← w + c ⊙ (−c ⊙ w)₊, where the operator (·)₊ is defined as ∀x ∈ R^d, (x)₊ := max(0, x).

The algorithm can be summarized as Algorithm 1. Here, the loss function is not limited to the classical hinge loss. In the projection-onto-ball step, the solution is projected onto the √(r_loss/λ)-ball instead of the (1/√λ)-ball, to handle more general settings. Recall that r_loss = 1 if φ_i is the hinge loss employed in the original Pegasos. It can be shown that the objective gap is bounded as follows.

Algorithm 1 Generic Sign-Constrained Pegasos
Require: Data matrix X ∈ R^{d×n}, loss function Φ : R^n → R, regularization parameter λ ∈ R, sign constraint parameter c ∈ {±1, 0}^d, and mini-batch size k.
1: begin
2: w₁ := 0_d; {Initialization}
3: for t := 1, …, T do
4:   Choose A_t ⊆ {1, …, n} uniformly at random such that |A_t| = k.
5:   w_{t+1/3} := ((t−1)/t) w_t − (1/(kλt)) X_{A_t} ∇Φ(X_{A_t}^⊤ w_t ; A_t);
6:   w_{t+2/3} := w_{t+1/3} + c ⊙ (−c ⊙ w_{t+1/3})₊;
7:   w_{t+1} := min(1, √(r_loss/λ) ‖w_{t+2/3}‖^{−1}) w_{t+2/3};
8: end for
9: return w̃ := Σ_{t=1}^T w_t / T;
10: end.

Theorem 1. Consider Algorithm 1. If the φ_i are L-Lipschitz continuous, it holds that

    E[P(w̃)] − P(w⋆) ≤ (√(r_loss λ) + LR)² (1 + log(T)) / (λT).   (12)

See Subsection A.1 for the proof of Theorem 1. This bound is exactly the same as for the original Pegasos, yet Algorithm 1 contains the sign correction step.

4 Sign-Constrained SDCA

The original SDCA is a framework for the unconstrained problem (1). In SDCA, a dual problem is solved instead of the primal problem; namely, the dual objective is maximized in an iterative fashion with respect to the dual variables α := [α₁, …, α_n]^⊤ ∈ R^n. The problem dual to the unconstrained problem (1) is given by

    max D(α)  wrt  α ∈ R^n,   (13)

where

    D(α) := −(λ/2) ‖(1/(λn)) Xα‖² − (1/n) Φ*(−α).   (14)

To find the maximizer of D(α), a single example i is chosen randomly at each iterate t, and the single dual variable α_i is optimized with the other (n − 1) variables α₁, …, α_{i−1}, α_{i+1}, …, α_n frozen. If we denote by α^{(t−1)} ∈ R^n the value of the dual vector at the previous iterate (t − 1), the dual vector is updated as α^{(t)} := α^{(t−1)} + Δα e_i, where Δα ∈ R is determined so that Δα ∈ argmax_{Δα∈R} D_t(Δα ; w^{(t−1)}), with w^{(t−1)} = (1/(λn)) X α^{(t−1)} and

    D_t(Δα ; w) := −(λ/2) ‖w + (Δα/(λn)) x_i‖² − (1/n) φ_i*(−α_i^{(t−1)} − Δα).   (15)

In case of the hinge loss, the maximizer of D_t(· ; w^{(t−1)}) can be found within O(d) computation. The primal variable w^{(t)} can also be maintained within O(d) computation by w^{(t)} := w^{(t−1)} + (Δα/(λn)) x_i.
3
Algorithm 2 Generic Sign-Constrained SDCA.
Require: Data matrix X ∈ Rd×n , loss function Φ : Rn →
R, regularization parameter λ ∈ R, and sign constraint
parameter c ∈ {±1, 0}d.
1: begin
2: α(0) := 0n ; w̄ (0) := 0d ; w (0) := 0d ; {Initialization}
3: for t := 1, . . . , T do
4:
∆α ∈ argmax Dt (∆α ; w(t−1) );
See Subsections A.6 for proof of Theorem 2. Theorem 2
suggests that the convergence rate of Algorithm 2 is not deteriorated compared to the original SDCA in both cases of
L-Lipschitz and smooth losses, despite insertion of the sign
correction step.
5
Multiclass Classification
∆α∈R
In this section, we extend our algorithms to the multi-class
classification setting of m classes. Here, the model parameter
6:
is a W ∈ Rd×m instead of a vector w ∈ Rd . The loss function
7:
for each example xi ∈ Rd is of an m-dimensional vector. Here,
8:
the prediction is supposed to be done by taking the class
9:
with the maximal score among s1 := hw1 , xi , . . . , and sm :=
hwm , xi. Here, without loss of generality, the set of the class
Now let us move on the sign-constrained problem. In addi- labels are given by Y := {1, . . . , m}. Several loss functions
m
m
tion to Algorithm 1 that is derived from Pegasos, we present φi : R → R are used for multiclass classification as follows.
another algorithm based on SDCA for solving the minimizer
• Soft-max loss:
of P (w) subject to the sign constraint c ⊙ w ≥ 0d . Like Algo
rithm 1 that has been designed by inserting the sign correction
X
φm
exp (sy − syi )
(20)
step into the original Pegasos iterate, the new algorithm has
i (s) := log
y∈Y
been developed by simply adding the sign correction step in
each SDCA iterate. The resultant algorithm is described in
Therein, yi is the true class label of i-th example.
Algorithm 2.
For some loss functions, maximization at step 5 in Algo• Max-hinge loss;
rithm 2 cannot be given in a closed form. Alternatively, step 4
φm
can be replaced to
i (s) := max (sy − syi + δy,yi ) .
(21)
y∈Y
5:
w̄(t) := w̄(t−1) + ∆α
λn xi ;
w(t) := w̄(t) + c ⊙ (−c ⊙ w̄(t) )+ ;
end for
PT
1
(t−1)
;
return w̃ := T −T
t=T0 +1 w
0
end.
4 : ∆α := sq,
where
s := Clip[0,s−1 ]
lb
1 z (t) αi + φ∗i (−αi ) + φi (z (t) )
+
2
γq 2
• Top-k hinge loss (Lapin et al., 2015):
slb .
φm
i (s)
k
1X
:=
(I − 1e⊤
yi )s + 1 − eyi [j] .
k j=1
(22)
(16)
Therein, we have defined slb := λnγ/(λnγ + R2 ), z (t) :=
(t−1)
, and Clip[a,b] (x) :=
w(t−1) , xi , q (t) := −∇φi (z (t) ) − αi
Therein, x[j] denotes the j-th largest value in a vecmax(a, min(b, x)). See Subsection A.4 for derivation of (16).
tor x ∈ Rm .
We have found the following theorem that states the red×m
is defined as
quired number of iterates guaranteeing the expected primal The objective function for learning W ∈ R
objective gap below a threshold ǫ under the sign constraints.
n
λ
1X m
P m (W ) := kW k2F +
φ (W ⊤ xi ).
(23)
2
n i=1 i
Theorem 2. Consider Algorithm 2. In case that φi are
L-Lipschitz continuous (i.e. (6)), it holds that E[P (w̃)] −
P (w⋆ ) ≤ ǫ if T and T0 are specified so that
2λnrloss
4G
(17)
+ max 0, n log
T0 ≥
λǫ
G
and
G
T ≥ T0 + max n,
λǫ
The learning problem discussed is minimization of P m (W )
with respect to W subject to sign constraints
∀(h, j) ∈ E+ , Wh,j ≥ 0,
∀(h′ , j ′ ) ∈ E− , Wh′ ,j ′ ≤ 0,
(24)
with two exclusive set E+ and E− such that
E+ ∪ E− ⊆ {(h, j) ∈ N2 | h ∈ [1, d], j ∈ [1, m]}.
(18)
Introducing C ∈ {0, ±1}d×m
+1
Ch,j := −1
0
where G := 4R2 L2 . If φi are hinge loss functions, then
G := R2 L2 . In case that φi are (1/γ)-smooth, E[P (w̃)] −
P (w⋆ ) ≤ ǫ is established if
R2
rloss
R2
log
n+
.
T > T0 ≥ n +
λγ
λγ (T − T0 )ǫ
(19)
as
for (h, j) ∈ E+ ,
for (h, j) ∈ E− ,
o.w.
the feasible region can be expressed as
S m := W ∈ Rd×m C ⊙ W ≥ Od×m .
4
(25)
(26)
(27)
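Before turning to the multiclass algorithms, the following sketch makes the insertion point of the sign correction within SDCA concrete: it runs one epoch of Algorithm 2 for the classical hinge loss, plugging in the standard closed-form hinge update for step 4 (our illustration under these assumptions, not the authors' code):

    import numpy as np

    def sc_sdca_epoch(X, y, alpha, lam, c, rng):
        d, n = X.shape
        w_bar = X @ alpha / (lam * n)                    # uncorrected primal w-bar
        w = w_bar + c * np.maximum(0.0, -c * w_bar)      # sign-corrected primal w
        for i in rng.permutation(n):
            xi = X[:, i]
            g = xi @ xi
            # Closed-form coordinate maximization for the hinge loss (step 4).
            da = y[i] * max(0.0, min(1.0, (1.0 - y[i] * (w @ xi)) * lam * n / g
                                          + alpha[i] * y[i])) - alpha[i]
            alpha[i] += da                               # dual update
            w_bar += (da / (lam * n)) * xi               # step 5
            w = w_bar + c * np.maximum(0.0, -c * w_bar)  # step 6: sign correction
        return alpha, w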
Algorithm 3 Sign-Constrained Pegasos for Multiclass Classification.
Require: Data matrix X ∈ R^{d×n}, loss function Φ^m : R^{m×n} → R, regularization parameter λ ∈ R, sign constraint parameter C ∈ {0, ±1}^{d×m}, and mini-batch size k.
1: begin
2: W₁ := O; {Initialization}
3: for t := 1, …, T do
4:   Choose A_t ⊆ {1, …, n} uniformly at random such that |A_t| = k.
5:   Z_t := W_t^⊤ X_{A_t};
6:   W_{t+1/3} := ((t−1)/t) W_t − (1/(λt)) X_{A_t} (∇Φ^m(Z_t ; A_t))^⊤;
7:   W_{t+2/3} := W_{t+1/3} + C ⊙ max(O, −C ⊙ W_{t+1/3});
8:   W_{t+1} := min(1, √(r_loss)/(√λ ‖W_{t+2/3}‖_F)) W_{t+2/3};
9: end for
10: return W̃ := Σ_{t=1}^T W_t / T;
11: end.

Algorithm 4 Sign-Constrained SDCA for Multiclass Classification.
Require: Data matrix X ∈ R^{d×n}, loss function Φ : R^{m×n} → R, regularization parameter λ ∈ R, and sign constraint parameter C ∈ {±1, 0}^{d×m}.
1: begin
2: A^{(0)} := O; W̄^{(0)} := O; W^{(0)} := O; {Initialization}
3: for t := 1, …, T do
4:   Δα ∈ argmax_{Δα∈R^m} D_t(Δα ; W^{(t−1)});
5:   W̄^{(t)} := W̄^{(t−1)} + (1/(λn)) x_i Δα^⊤;
6:   W^{(t)} := W̄^{(t)} + C ⊙ max(O, −C ⊙ W̄^{(t)});
7: end for
8: return W̃ := (1/(T − T₀)) Σ_{t=T₀+1}^{T} W^{(t−1)};
9: end.

The goal here is to develop algorithms that find

    W⋆ := argmin_{W ∈ S^m} P^m(W).   (28)

Define Φ^m(· ; A) : R^{m×k} → R as

    Φ^m(S_A ; A) := Σ_{i∈A} φ_i^m(s_i),   (29)

where S_A is the horizontal concatenation of the columns of S := [s₁, …, s_n] ∈ R^{m×n} selected by a mini-batch A. We here use the following assumptions: Φ^m(·) is a convex function; Φ^m(O) ≤ n r_loss; ∀S ∈ R^{m×n}, Φ^m(S) ≥ 0; and ∀i, ‖x_i‖ ≤ R.

By extending Algorithm 1, an algorithm for the minimization of P^m(W) subject to the sign constraints can be developed, as described in Algorithm 3.

The SDCA-based learning algorithm can also be developed for the multiclass classification task. In this algorithm, the dual variables are represented as a matrix A := [α₁, …, α_n] ∈ R^{m×n}. At each iterate t, one of the n columns, α_i, is chosen at random, instead of choosing a single dual variable, and the matrix is updated as A^{(t)} := A^{(t−1)} + Δα e_i^⊤, where we have used the iterate number (t) as the superscript of A. To determine the value of Δα, the following auxiliary function is introduced:

    D_t(Δα ; W) := −(‖x_i‖²/(2λn²)) ‖Δα‖² − ⟨W^⊤ x_i, Δα⟩ − φ_i*(−α_i^{(t−1)} − Δα).   (30)

For both algorithms (Algorithms 3 and 4), we can bound the required number of iterations similarly to the bounds presented in Theorems 1 and 2.

6 Experiments

In this section, experimental results are reported in order to illustrate the effects of the sign constraints on classification and to demonstrate the convergence behavior.

6.1 Prediction Performance

The pattern recognition performance of sign-constrained learning was examined on two tasks: Escherichia coli (E. coli) prediction and protein function prediction.

E. coli Prediction. The first task is to predict E. coli counts in river water. The E. coli count has been used as an indicator of fecal contamination in the water environment in many parts of the world (Scott et al., 2002). In this experiment, the data points with E. coli counts over 500 most probable number (MPN)/100 mL are assigned to the positive class, and the others are negative. The hydrological and water quality monitoring data are used to predict whether the E. coli count is positive or negative.

For ensuring microbial safety in water usage, it is meaningful to predict E. coli counts on a real-time basis. The concentration of E. coli in water, which is measured by culture-dependent methods (Kobayashi et al., 2013), has been used to monitor fecal contamination in the water environment, and has proved effective in preventing waterborne infectious diseases in varied styles of water usage. On the other hand, real-time monitoring of E. coli counts has not yet been achieved: it takes at least ten hours to obtain E. coli counts by culture-dependent methods, and at least several hours are also needed to measure the concentration of E. coli by culture-independent methods (Ishii et al., 2014b,a), such as the polymerase chain reaction. Since some of the hydrological and water quality data can be measured with real-time sensors, real-time prediction of E. coli counts will be realized if the hydrological and water quality data are available for E. coli count prediction.

Many training examples are required to obtain a better generalization performance. A serious issue, however, is that measuring the concentration of E. coli is time-consuming and the cost of reagents is expensive. We here demonstrate that this issue can be relaxed by exploiting the domain knowledge hoarded in the field of water engineering.

The hydrological and water quality data contain nine explanatory variables: WT, pH, EC, SS, DO, BOD, TN, TP, and flow rate. The explanatory variable pH is divided into two variables, pH+ ← max(0, pH − 7) and pH− ← max(0, 7 − pH).
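The pH split used here is a simple rectification; in code (a sketch of ours with made-up values):

    import numpy as np

    pH = np.array([6.2, 7.0, 7.8])
    pH_plus = np.maximum(0.0, pH - 7.0)   # pH+ <- max(0, pH - 7): alkaline side
    pH_minus = np.maximum(0.0, 7.0 - pH)  # pH- <- max(0, 7 - pH): acidic side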
Figure 1: Improvements of generalization performances on E. coli prediction. (Two histograms: (a) PRBEP and (b) ROC score; horizontal axis: improvement, from −0.4 to 0.4; vertical axis: normalized frequency.)
It is well known, in the field of water engineering, that the E. coli count increases as WT, EC, SS, BOD, TN, and TP become larger, and as pH+, pH−, DO, and the flow rate become smaller. From this fact, we restrict the signs of the entries in the predictor parameter w as follows.
• The coefficients w_h of the six explanatory variables WT, EC, SS, BOD, TN, and TP must be non-negative.
• The coefficients w_h of the four explanatory variables pH+, pH−, DO, and flow rate must be non-positive.

We actually measured the concentrations of E. coli 177 times from December 5th, 2011 to April 17th, 2013, obtaining 177 data points comprising 88 positives and 89 negatives. We chose ten examples out of the 177 data points at random for training, and the other 167 examples were used for testing. The prediction performance is evaluated by the precision-recall break-even point (PRBEP) (Joachims, 2005) and the ROC score. We compared the classical SVM with the sign-constrained SVM (SC-SVM) to examine the effects of the sign constraints. We repeated this procedure 10,000 times and obtained 10,000 PRBEP values and 10,000 ROC scores.

SC-SVM achieved a significant improvement over the classical SVM: SC-SVM achieved a PRBEP and an ROC score of 0.808 and 0.863 on average over the 10,000 trials, whereas those of the classical SVM were 0.757 and 0.810, respectively. The difference from the classical SVM on each trial is plotted in the histograms of Figure 1. Positive improvements in the ROC score were obtained in 8,932 out of the 10,000 trials, whereas the ROC score decreased in only 796 trials; for PRBEP, positive improvements were obtained in 7,349 trials, whereas deteriorations were observed in only 1,069 trials.

Protein Function Prediction. In the field of molecular biology, understanding the functions of proteins is positioned as a key step in the elucidation of cellular mechanisms. Sequence similarities have been a major means of predicting the function of an unannotated protein. At the beginning of this century, prediction accuracy was improved by combining sequence similarities with discriminative learning. The method, named SVM-Pairwise (Liao and Noble, 2003), uses a feature vector that contains pairwise similarities to annotated protein sequences. Several other works (Liu et al., 2014; Ogul and Mumcuoglu, 2006; Lanckriet et al., 2004b,a) have also provided empirical evidence for the fact that the SVM-Pairwise approach is a powerful framework. Basically, if n proteins are in a training dataset, the feature vector has n entries, x₁, …, x_n. If we suppose that the first n₊ proteins in the training set are in the positive class and the rest are negative, then the first n₊ similarities x₁, …, x_{n₊} are sequence similarities to positive examples, and x_{n₊+1}, …, x_n are similarities to negative examples. The n-dimensional vectors are fed to the SVM to get the weight coefficients w := [w₁, …, w_n]^⊤. Then, the prediction score of the target protein is expressed as

    Σ_{i=1}^{n₊} w_i x_i + Σ_{i′=n₊+1}^{n} w_{i′} x_{i′}.   (31)

The input protein sequence is predicted to have some particular cellular function if the score is over a threshold. It should be preferable that the first n₊ weight coefficients w₁, …, w_{n₊} be non-negative and that the remaining (n − n₊) weight coefficients w_{n₊+1}, …, w_n be non-positive. The SVM-Pairwise approach does not ensure these requirements; meanwhile, our approach is capable of explicitly imposing the constraints

    w₁ ≥ 0, …, w_{n₊} ≥ 0, and w_{n₊+1} ≤ 0, …, w_n ≤ 0.   (32)

This approach was applied to predict protein functions in Saccharomyces cerevisiae (S. cerevisiae). The annotations of the protein functions are provided in the MIPS Comprehensive Yeast Genome Database (CYGD). The dataset contains 3,583 proteins. The Smith-Waterman similarities available from https://noble.gs.washington.edu/proj/sdp-svm/ were used as the sequence similarities among the proteins. The number of categories was 12. Some proteins have multiple cellular functions; indeed, 1,343 proteins in the dataset have more than one function. For this reason, we pose 12 independent binary classification tasks instead of a single multiclass classification task. The 3,583 proteins were randomly split in half to get two datasets: one was used for training, and the other for testing. For the 12 classification tasks, we repeated this procedure 100 times and obtained 100 ROC scores.

Table 2 reports the ROC scores averaged over the 100 trials and the standard deviations for the 12 binary classification tasks. The sign constraints significantly surpassed the classical training for all 12 tasks.

Table 2: ROC Scores for protein function prediction.

Category | SC-SVM        | SVM
1        | 0.751 (0.011) | 0.730 (0.010)
2        | 0.740 (0.016) | 0.680 (0.015)
3        | 0.753 (0.011) | 0.721 (0.011)
4        | 0.762 (0.010) | 0.734 (0.010)
5        | 0.769 (0.012) | 0.691 (0.013)
6        | 0.690 (0.014) | 0.614 (0.014)
7        | 0.713 (0.024) | 0.618 (0.022)
8        | 0.725 (0.019) | 0.667 (0.019)
9        | 0.655 (0.024) | 0.578 (0.023)
10       | 0.743 (0.016) | 0.710 (0.014)
11       | 0.535 (0.019) | 0.492 (0.018)
12       | 0.912 (0.011) | 0.901 (0.011)
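In code, the constraint pattern (32) is just a sign vector over the n similarity features (a sketch of ours):

    import numpy as np

    def pairwise_sign_vector(n_pos, n):
        # +1 for weights on similarities to the n_pos positive training proteins,
        # -1 for the remaining n - n_pos, matching Eq. (32).
        c = -np.ones(n)
        c[:n_pos] = 1.0
        return c

    print(pairwise_sign_vector(3, 8))  # [ 1.  1.  1. -1. -1. -1. -1. -1.]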
Figure 2: Comparison of different optimization methods. (Three panels: (a) Covtype, (b) W8a, (c) Phishing; each plots the primal objective gap against epochs for SC-Pega with k = 10, SC-Pega with k = 100, and SC-SDCA.)
Surprisingly, we observed that the ROC score of SC-SVM was larger than that of the classical SVM in every trial.

6.2 Convergence

We carried out an empirical evaluation of the proposed optimization methods, the sign-constrained Pegasos (SC-Pega) and the sign-constrained SDCA (SC-SDCA), in order to illustrate the convergence of our algorithms to the optimum. For SC-Pega, we set the mini-batch size to k = 10 and k = 100. In these experiments, we used the smoothed hinge loss with γ = 0.01 and λ = 1/n. We used three datasets: Covtype (n = 581,012 and d = 54), W8a (n = 49,749 and d = 300), and Phishing (n = 11,055 and d = 68). The three datasets are for binary classification and are available from the LIBSVM web site (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). Figure 2 depicts the primal objective gap against epochs, where the primal objective gap is defined as P(w) − P(w⋆). As expected from the theoretical results, SC-SDCA converged to the optimum faster than SC-Pega, except on the dataset Phishing. No significant difference between the different mini-batch sizes was observed.

7 Conclusions

In this paper, we presented two new algorithms for minimizing a regularized empirical loss subject to sign constraints. The two algorithms are based on Pegasos and SDCA, both of which have solid theoretical support for convergence. The sign-constrained versions, named SC-Pega and SC-SDCA, respectively, enjoy the same convergence rates as the corresponding original algorithms. The algorithms were demonstrated in two applications: one posing sign constraints according to domain knowledge, and the other improving the SVM-Pairwise method by sign constraints.

Acknowledgements

TK was supported by JSPS KAKENHI Grant Numbers 26249075 and 40401236.

References

Bellavia, S., Macconi, M., and Morini, B. (2006). An interior point Newton-like method for non-negative least-squares problems with degenerate solution. Numerical Linear Algebra with Applications, 13(10), 825–846. doi:10.1002/nla.502.
Bierlaire, M., Toint, P., and Tuyttens, D. (1991). On iterative algorithms for linear least squares problems with bound constraints. Linear Algebra and its Applications, 143, 111–143. doi:10.1016/0024-3795(91)90009-l.
Cai, Y., Gu, H., and Kenney, T. (2017). Learning microbial community structures with supervised and unsupervised non-negative matrix factorization. Microbiome, 5(1), 110.
Defazio, A., Bach, F., and Lacoste-Julien, S. (2014a). SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, pages 1646–1654. Curran Associates, Inc.
Defazio, A. J., Caetano, T. S., and Domke, J. (2014b). Finito: A faster, permutable incremental gradient method for big data problems. arXiv:1407.2710.
Ding, C., Li, T., Peng, W., and Park, H. (2006). Orthogonal nonnegative matrix tri-factorizations for clustering. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 06). ACM Press. doi:10.1145/1150402.1150420.
Dong, W., Fu, F., Shi, G., Cao, X., Wu, J., Li, G., and Li, G. (2016). Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Trans Image Process, 25(5), 2337–52.
Févotte, C. and Idier, J. (2011). Algorithms for nonnegative matrix factorization with the β-divergence. Neural Computation, 23(9), 2421–2456. doi:10.1162/neco_a_00168.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning – Data Mining, Inference, and Prediction. Springer, 2nd edition.
Hazan, E., Agarwal, A., and Kale, S. (2007). Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3), 169–192. doi:10.1007/s10994-007-5016-8.
He, R., Zheng, W. S., Hu, B. G., and Kong, X. W. (2013). Two-stage nonnegative sparse representation for large-scale face recognition. IEEE Trans Neural Netw Learn Syst, 24(1), 35–46.
Heinkenschloss, M., Ulbrich, M., and Ulbrich, S. (1999). Superlinear and quadratic convergence of affine-scaling interior-point Newton methods for problems with simple bounds without strict complementarity assumption. Mathematical Programming, 86(3), 615–635. doi:10.1007/s101070050107.
Henrot, S., Moussaoui, S., Soussen, C., and Brie, D. (2013). Edge-preserving nonnegative hyperspectral image restoration. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE. doi:10.1109/icassp.2013.6637926.
Ishii, S., Kitamura, G., Segawa, T., Kobayashi, A., Miura, T., Sano, D., and Okabe, S. (2014a). Microfluidic quantitative PCR for simultaneous quantification of multiple viruses in environmental water samples. Appl Environ Microbiol, 80(24), 7505–11.
Ishii, S., Nakamura, T., Ozawa, S., Kobayashi, A., Sano, D., and Okabe, S. (2014b). Water quality monitoring and risk assessment by simultaneous multipathogen quantification. Environ Sci Technol, 48(9), 4744–9.
Ji, Y., Lin, T., and Zha, H. (2009). Mahalanobis distance based non-negative sparse representation for face recognition. In 2009 International Conference on Machine Learning and Applications. IEEE. doi:10.1109/icmla.2009.50.
Joachims, T. (2005). A support vector method for multivariate performance measures. In Proceedings of the 22nd International Conference on Machine Learning (ICML-05), pages 377–384.
Johnson, R. and Zhang, T. (2013). Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, pages 315–323.
Kanzow, C. and Klug, A. (2006). On affine-scaling interior-point Newton methods for nonlinear minimization with bound constraints. Computational Optimization and Applications, 35(2), 177–197. doi:10.1007/s10589-006-6514-5.
Kim, D., Sra, S., and Dhillon, I. S. (2007). Fast Newton-type methods for the least squares nonnegative matrix approximation problem. In Proceedings of the 2007 SIAM International Conference on Data Mining, pages 343–354. doi:10.1137/1.9781611972771.31.
Kim, D., Sra, S., and Dhillon, I. S. (2010). Tackling box-constrained optimization via a new projected quasi-Newton approach. SIAM Journal on Scientific Computing, 32(6), 3548–3563. doi:10.1137/08073812x.
Kimura, K., Kudo, M., and Tanaka, Y. (2016). A column-wise update algorithm for nonnegative matrix factorization in Bregman divergence with an orthogonal constraint. Machine Learning, 103(2), 285–306. doi:10.1007/s10994-016-5553-0.
Kobayashi, A., Sano, D., Hatori, J., Ishii, S., and Okabe, S. (2013). Chicken- and duck-associated Bacteroides–Prevotella genetic markers for detecting fecal contamination in environmental water. Appl Microbiol Biotechnol, 97(16), 7427–37.
Lanckriet, G. R., Deng, M., Cristianini, N., Jordan, M. I., and Noble, W. S. (2004a). Kernel-based data fusion and its application to protein function prediction in yeast. Pac Symp Biocomput, 300–11.
Lanckriet, G. R., De Bie, T., Cristianini, N., Jordan, M. I., and Noble, W. S. (2004b). A statistical framework for genomic data fusion. Bioinformatics, 20(16), 2626–35.
Landi, G. and Piccolomini, E. L. (2012). NPTool: a Matlab software for nonnegative image restoration with Newton projection methods. Numerical Algorithms, 62(3), 487–504. doi:10.1007/s11075-012-9602-x.
Lapin, M., Hein, M., and Schiele, B. (2015). Top-k multiclass SVM. In Advances in Neural Information Processing Systems 28, pages 325–333. Curran Associates, Inc.
Lawson, C. L. and Hanson, R. J. (1995). Solving Least Squares Problems. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611971217.
Lee, D. D. and Seung, H. S. (2001). Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems, pages 556–562.
Liao, L. and Noble, W. S. (2003). Combining pairwise sequence similarity and support vector machines for detecting remote protein evolutionary and structural relationships. J Comput Biol, 10(6), 857–68.
Lin, C.-J. and Moré, J. J. (1999). Newton’s method for large bound-constrained optimization problems. SIAM Journal on Optimization, 9(4), 1100–1127. doi:10.1137/s1052623498345075.
Lin, Y., Lee, D., and Saul, L. (2004). Nonnegative deconvolution for time of arrival estimation. In 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE. doi:10.1109/icassp.2004.1326273.
Liu, B., Zhang, D., Xu, R., Xu, J., Wang, X., Chen, Q., Dong, Q., and Chou, K. C. (2014). Combining evolutionary information extracted from frequency profiles with sequence-based kernels for protein remote homology detection. Bioinformatics, 30(4), 472–479.
Ma, J. (2013). Algorithms for non-negatively constrained maximum penalized likelihood reconstruction in tomographic imaging. Algorithms, 6(1), 136–160. doi:10.3390/a6010136.
Moré, J. J. and Toraldo, G. (1991). On the solution of large quadratic programming problems with bound constraints. SIAM Journal on Optimization, 1(1), 93–113. doi:10.1137/0801008.
Morigi, S., Reichel, L., Sgallari, F., and Zama, F. (2007). An iterative method for linear discrete ill-posed problems with box constraints. Journal of Computational and Applied Mathematics, 198(2), 505–520. doi:10.1016/j.cam.2005.06.053.
Needell, D., Srebro, N., and Ward, R. (2015). Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. Mathematical Programming, 155(1-2), 549–573. doi:10.1007/s10107-015-0864-7.
Ogul, H. and Mumcuoglu, E. U. (2006). SVM-based detection of distant protein structural relationships using pairwise probabilistic suffix trees. Comput Biol Chem, 30(4), 292–299.
Portugal, L. F., Júdice, J. J., and Vicente, L. N. (1994). A comparison of block pivoting and interior-point algorithms for linear least squares problems with nonnegative variables. Mathematics of Computation, 63(208), 625–625. doi:10.1090/s0025-5718-1994-1250776-4.
Rockafellar, R. T. (1970). Convex Analysis. Princeton University Press, Princeton, NJ.
Roux, N. L., Schmidt, M., and Bach, F. R. (2012). A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25, pages 2663–2671. Curran Associates, Inc.
Schmidt, M., Roux, N. L., and Bach, F. (2016). Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 162(1-2), 83–112. doi:10.1007/s10107-016-1030-6.
Scott, T. M., Rose, J. B., Jenkins, T. M., Farrah, S. R., and Lukasik, J. (2002). Microbial source tracking: current methodology and future directions. Appl Environ Microbiol, 68(12), 5796–803.
Shalev-Shwartz, S. and Zhang, T. (2013). Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res., 14(1), 567–599.
Shalev-Shwartz, S. and Zhang, T. (2016). Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1), 105–145.
Shalev-Shwartz, S., Singer, Y., Srebro, N., and Cotter, A. (2011). Pegasos: primal estimated sub-gradient solver for SVM. Math. Program., 127(1), 3–30.
Shashua, A. and Hazan, T. (2005). Non-negative tensor factorization with applications to statistics and computer vision. In Proceedings of the 22nd International Conference on Machine Learning (ICML ’05). ACM Press. doi:10.1145/1102351.1102451.
Wang, J., Tian, F., Yu, H., Liu, C. H., Zhan, K., and Wang, X. (2017). Diverse non-negative matrix factorization for multiview data representation. IEEE Trans Cybern.
Wang, Y. and Ma, S. (2007). Projected Barzilai–Borwein method for large-scale nonnegative image restoration. Inverse Problems in Science and Engineering, 15(6), 559–583. doi:10.1080/17415970600881897.
Xiao, L. and Zhang, T. (2014). A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4), 2057–2075. doi:10.1137/140961791.
Zhang, L., Mahdavi, M., and Jin, R. (2013). Linear convergence with condition number independent access of full gradients. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS’13), pages 980–988. Curran Associates Inc.
Zhang, Q., Wang, H., Plemmons, R., and Pauca, V. P. (2007). Spectral unmixing using nonnegative tensor factorization. In Proceedings of the 45th Annual Southeast Regional Conference (ACM-SE 45). ACM Press. doi:10.1145/1233341.1233449.
A.2
Zhang, L., Mahdavi, M., and Jin, R. (2013). Linear convergence with condition number independent access of full gradients. In Proceedings of the
26th International Conference on Neural Information Processing Systems, NIPS’13, pages 980–988, USA. Curran Associates Inc.
A Proofs and Derivations

A.1 Proof of Theorem 1

Shalev-Shwartz et al. (2011) have used the following lemma, given below, to obtain the bound.

Lemma A.1 (Hazan et al. (2007)). Let f_1, …, f_T be a sequence of λ-strongly convex functions. Let C be a closed convex set and define Π_C(w) := argmin_{w′∈C} ‖w′ − w‖. Let w_1, …, w_{T+1} be a sequence of vectors such that w_1 ∈ C and for t ≥ 1, w_{t+1} := Π_C(w_t − ∇_t/(λt)), where ∇_t ∈ ∂f_t(w_t). Assume that ∀t ∈ N, ‖∇_t‖ ≤ G. Then, for ∀u ∈ C, it holds that

(1/T) Σ_{t=1}^{T} f_t(w_t) ≤ (1/T) Σ_{t=1}^{T} f_t(u) + (1 + log(T))G² / (2λT).   (33)

We, too, have used Lemma A.1 to obtain Theorem 1 for our sign-constrained learning problem (5). To this end, we find the following lemma.

Lemma A.2. Let B be the √(r_loss/λ)-ball defined as

B := { w ∈ R^d : ‖w‖ ≤ √(r_loss/λ) },   (34)

and let S be the set defined in (3). Then, the intersection of the two sets is closed and convex. It holds that

w_{t+1} = Π_{B∩S}( w_t − ∇P_t(w_t)/(λt) )   (35)

for ∀t ∈ N. Furthermore, the optimal solution w⋆ := argmin_{w∈S} P(w) is in the intersection of the two sets. Namely,

w⋆ ∈ B ∩ S.   (36)

See Subsection A.2 for the proof of Lemma A.2. The above lemma suggests that the setting of f_t := P_t, C := B ∩ S and u := w⋆ fulfills the assumptions of Lemma A.1. An upper bound of the norm of the gradient of f_t is given by

‖∇f_t(w_t)‖ = ‖∇P_t(w_t)‖ ≤ √(r_loss λ) + LR.   (37)

See Subsection A.3 for the derivation of (37). By setting G = √(r_loss λ) + LR, Theorem 1 is established.

A.2 Proof of Lemma A.2

Lemma A.2 states the following three claims.

• B ∩ S is a closed and convex set.
• (35) holds.
• (36) holds.

B ∩ S is closed and convex because both B and S are closed and convex. We shall show (35) and then (36).

Proof of (35). To prove (35), it suffices to show that the projection of a point z ∈ R^d onto the set B ∩ S is given by

Π_{B∩S}(z) = min( 1, √(r_loss/λ) / ‖Π_S(z)‖ ) Π_S(z).   (38)

The projection problem can be expressed as

min (1/2)‖x − z‖²  wrt x ∈ R^d,
subject to ‖x‖² ≤ r_loss/λ,  c ⊙ x ≥ 0_d.   (39)

With non-negative dual variables β ∈ R^d_+ and η ∈ R_+, the Lagrangian function is given by

L_{B∩S}(x, β, η) := (1/2)‖x − z‖² − ⟨β, c ⊙ x⟩ + (η/2)( ‖x‖² − r_loss/λ ).   (40)

Let (x⋆, β⋆, η⋆) be the saddle point of min_x max_{β,η} L_{B∩S}(x, β, η). Then, x⋆ = Π_{B∩S}(z). At the saddle point, it holds that ∇_x L_{B∩S} = 0, yielding

x = (z + c ⊙ β) / (η + 1).   (41)

The dual objective is written as

D_{B∩S}(β, η) = min_x L_{B∩S}(x, β, η)
= L_{B∩S}( (z + c ⊙ β)/(η + 1), β, η )
= −‖z + c ⊙ β‖² / (2(η + 1)) − (r_loss/(2λ)) η + (1/2)‖z‖².   (42)

This implies that D_{B∩S}(·, ·) is maximized when β = (−c ⊙ z)_+. Note that this does not depend on the value of η. Substituting this into (41), we have

x = (z + c ⊙ (−c ⊙ z)_+) / (η + 1) = Π_S(z) / (η + 1),   (43)

where the last equality follows from the fact that Π_S(z) = z + c ⊙ (−c ⊙ z)_+, which can be shown as follows. The Lagrangian function for the problem of projection of z onto S is given by L_S(x, β) = L_{B∩S}(x, β, 0), and, with a similar derivation, the dual objective is D_S(β) = D_{B∩S}(β, 0), which is maximized at β = (−c ⊙ z)_+, yielding Π_S(z) = z + c ⊙ (−c ⊙ z)_+.

Next, we find the optimal η. The dual objective is

D_{B∩S}((−c ⊙ z)_+, η) = −(1/2)‖Π_S(z)‖²(η + 1)^{−1} − (r_loss/(2λ)) η + (1/2)‖z‖²   (44)

with the derivative

∇_η D_{B∩S}((−c ⊙ z)_+, η) = (1/2)‖Π_S(z)‖²(η + 1)^{−2} − r_loss/(2λ).   (45)

Setting the derivative to zero and noting that η is a non-negative variable, we get

η⋆ = max( 0, √(λ/r_loss) ‖Π_S(z)‖ − 1 ).   (46)

Substituting this into (43), we obtain

Π_{B∩S}(z) = x⋆ = Π_S(z) / (1 + max(0, √(λ/r_loss) ‖Π_S(z)‖ − 1))
= min( 1, √r_loss / (√λ ‖Π_S(z)‖) ) Π_S(z).   (47)

Thus, (38) is established.
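As a concrete companion to (38) and (43), the following is a minimal numpy sketch of the two projection operators. The function names project_S and project_B_S are our own; r_loss, lam, and the sign vector c are assumed to be given as in (3)–(5).

import numpy as np

def project_S(z, c):
    # Pi_S(z) = z + c * (-c * z)_+ : clips each sign-constrained
    # coordinate of z to its feasible half-line.
    return z + c * np.maximum(-c * z, 0.0)

def project_B_S(z, c, r_loss, lam):
    # Eq. (38): project onto S, then shrink into the ball B of
    # radius sqrt(r_loss/lam) whenever the result lies outside it.
    p = project_S(z, c)
    norm = np.linalg.norm(p)
    if norm == 0.0:
        return p
    return min(1.0, np.sqrt(r_loss / lam) / norm) * p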
Proof of (36). We use the following problem dual to (5):

max −(λ/2)‖Π_S(Xα/(λn))‖² − (1/n)Φ*(−α)  wrt α ∈ R^n.   (48)

Let α⋆ be the solution optimal to the dual problem (48). The primal optimal solution can be recovered by

w⋆ = Π_S( Xα⋆/(λn) )   (49)

with no duality gap. The loss term in the objective of the dual problem is bounded from above as

−(1/n)Φ*(−α) = −(1/n) max_{s∈R^n} ( ⟨s, −α⟩ − Φ(s) )
= (1/n) min_{s∈R^n} ( ⟨s, α⟩ + Φ(s) )
≤ (1/n)( ⟨0, α⟩ + Φ(0) ) = (1/n)Φ(0) ≤ r_loss.   (50)

The square norm of the primal optimal solution is bounded as

‖w⋆‖² = (1/2)‖w⋆‖² + (1/2)‖w⋆‖²
≤ (1/2)‖w⋆‖² + (1/λ)( (λ/2)‖w⋆‖² + (1/n)Φ(X^⊤w⋆) )
= (1/2)‖w⋆‖² + (1/λ)P(w⋆)
= (1/2)‖w⋆‖² + (1/λ)( −(λ/2)‖Π_S(Xα⋆/(λn))‖² − (1/n)Φ*(−α⋆) )
= (1/2)‖w⋆‖² − (1/λ)( (λ/2)‖w⋆‖² + (1/n)Φ*(−α⋆) )
= −(1/(λn))Φ*(−α⋆) ≤ r_loss/λ,   (51)

where the first inequality, the third and fourth equalities, and the last inequality follow from Assumption 2.1(c), no duality gap, (49), and (50), respectively. Therefore, w⋆ ∈ B. Furthermore, w⋆ is feasible, so w⋆ ∈ S. Hence, (36) is established.

A.3 Derivation of (37)

The norm of the gradient of the loss term in P_t(·) can be bounded as

‖(∂/∂w)Φ(X_{A_t}^⊤ w; A_t)‖ = ‖Σ_{i∈A_t} x_i ∇φ_i(⟨x_i, w⟩)‖
≤ Σ_{i∈A_t} ‖x_i‖ ‖∇φ_i(⟨x_i, w⟩)‖ ≤ kLR.   (52)

This inequality leads to a bound of the norm of the gradient of f_t; (37) is derived as

‖∇f_t(w_t)‖ = ‖∇P_t(w_t)‖ ≤ λ‖w_t‖ + (1/k)‖(∂/∂w)Φ(X_{A_t}^⊤ w_t; A_t)‖
≤ λ√(r_loss/λ) + kLR/k ≤ √(r_loss λ) + LR.   (53)
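Lemma A.2 turns Theorem 1's analysis into an implementable Pegasos-style loop: take a subgradient step on P_t and project onto B ∩ S as in (35). The sketch below illustrates one iteration, reusing project_B_S from the sketch above; purely for illustration, we take the φ_i to be hinge losses on a labeled mini-batch, which is our assumption rather than the paper's general setting.

import numpy as np

def sgd_step(w, X_batch, y_batch, c, lam, r_loss, t):
    # Subgradient of P_t(w) = (lam/2)||w||^2 + (1/k) sum_i phi_i(<x_i, w>),
    # with phi_i the hinge loss max(0, 1 - y_i <x_i, w>) in this sketch.
    k = X_batch.shape[0]
    margins = y_batch * (X_batch @ w)
    active = margins < 1.0                       # nonzero hinge subgradients
    grad = lam * w - (X_batch[active].T @ y_batch[active]) / k
    w_next = w - grad / (lam * t)                # learning rate 1/(lam * t)
    return project_B_S(w_next, c, r_loss, lam)   # Eq. (35)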
A.4 Derivation of (16)

Exploiting proof techniques used in Prox-SDCA (Shalev-Shwartz and Zhang, 2016), we here limit the form of ∆α to sq e_i, where q := u_i − α_i^{(t−1)}, u_i ∈ −∂φ_i(z^{(t)}), and z^{(t)} := ⟨x_i, w^{(t−1)}⟩.

Denote by D_0(α) the objective function of the dual problem (48). Suppose that the i-th example is chosen at the t-th iterate. The new value of the regularization term in D_0(α) is given by

−(λ/2)‖Π_S( X(α^{(t−1)} + sq e_i)/(λn) )‖²
= −(λ/2)‖Π_S( w̄^{(t−1)} + (sq/(λn)) x_i )‖²
≥ −(λ/2)‖w^{(t−1)} + (sq/(λn)) x_i‖²
≥ −(λ/2)‖w^{(t−1)}‖² − (s/n) z^{(t)} q − (1/(2λ))(s/n)² R² q²,   (54)

where the first inequality follows from the following inequality

∀v, ∀∆ ∈ R^d,  ‖Π_S(v) + ∆‖ ≥ ‖Π_S(v + ∆)‖,   (55)

and the second inequality is derived from the fact that w^{(t−1)} = Π_S(w̄^{(t−1)}) and the assumption ‖x_i‖ ≤ R. We shall prove (55) in Subsection A.5.

The improvement of the dual objective is expressed as

D_0(α^{(t−1)} + sq e_i) − D_0(α^{(t−1)})
= −(λ/2)‖Π_S( X(α^{(t−1)} + sq e_i)/(λn) )‖² + (λ/2)‖w^{(t−1)}‖²
  − (1/n)φ*_i(−α_i − sq) + (1/n)φ*_i(−α_i)
≥ −(1/(2λ))(s/n)² R² q² + ((1 − s)sγ/(2n)) q²
  + (s/n)( z^{(t)}α_i + φ*_i(−α_i) − φ*_i(−u_i) − z^{(t)}u_i )
= (γq²/(2n))( s − s_lb^{−1} s² ) + (s/n)( z^{(t)}α_i + φ*_i(−α_i) + φ_i(z^{(t)}) ),   (56)

where we have used s_lb^{−1} = 1 + R²/(λnγ) and the Fenchel–Young equality φ*_i(−u_i) = −z^{(t)}u_i − φ_i(z^{(t)}), which holds since u_i ∈ −∂φ_i(z^{(t)}). Thus, the value of s maximizing the lower bound can be given by

s = Clip_{[0, s_lb^{−1}]}( 1/2 + ( z^{(t)}α_i + φ*_i(−α_i) + φ_i(z^{(t)}) ) / (γq²) ) s_lb.   (57)

Thus, (16) is derived.
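Read literally, (57) is a closed-form line search. The following sketch computes it; here h abbreviates z^{(t)}α_i + φ*_i(−α_i) + φ_i(z^{(t)}), and s_lb = (1 + R²/(λnγ))^{−1} as in the derivation above. These names are our shorthand, not notation from the main text.

import numpy as np

def step_size(h, q, gamma, R, lam, n):
    # Eq. (57): maximizer of the quadratic lower bound (56) over s in [0, 1].
    s_lb = 1.0 / (1.0 + R**2 / (lam * n * gamma))
    inner = 0.5 + h / (gamma * q**2)
    return float(np.clip(inner, 0.0, 1.0 / s_lb)) * s_lb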
A.5 Derivation of (55)

For h = 1, …, d, letting

b_h(∆; v) := 0.5(∆ + (v)_+)²   for h ∈ I_+,
             0.5(∆ + v)²       for h ∈ I_0,
             0.5(∆ − (−v)_+)²  for h ∈ I_−,   (58)

and

a_h(v) := 0.5((v)_+)²   for h ∈ I_+,
          0.5v²         for h ∈ I_0,
          0.5((−v)_+)²  for h ∈ I_−,   (59)

the squares of both sides in (55) can be rewritten as

(1/2)(LHS of (55))² = Σ_{h=1}^{d} b_h(∆_h; v_h)   (60)

and

(1/2)(RHS of (55))² = Σ_{h=1}^{d} a_h(v_h + ∆_h).   (61)

To show the inequality (55), it suffices to show that

∀h = 1, …, d, ∀v, ∀∆ ∈ R,  b_h(∆; v) ≥ a_h(v + ∆).   (62)

Clearly, b_h(∆; v) = a_h(v + ∆) for h ∈ I_0. Assume h ∈ I_+ for a while. Then

b_h(∆; v) = 0.5(∆ + v)²  for v ≥ 0,
            0.5∆²        for v < 0.   (63)

The following three cases must be considered:

• In case of v ≥ 0,
  b_h(∆; v) = 0.5(∆ + v)² ≥ a_h(v + ∆).   (64)

• In case of v < 0 and ∆ < −v,
  b_h(∆; v) = 0.5∆² ≥ 0 = a_h(v + ∆).   (65)

• In case of v < 0 and ∆ ≥ −v,
  b_h(∆; v) − a_h(v + ∆) = 0.5∆² − 0.5(∆ + v)²
  = −v∆ − 0.5v² = −0.5((∆ + v) + ∆)v
  ≥ −0.5v∆ ≥ 0.5v² ≥ 0.   (66)

Therefore, we get b_h(∆; v) ≥ a_h(v + ∆) for h ∈ I_+. Finally, we assume h ∈ I_−. Then

b_h(∆; v) = 0.5(∆ + v)²  for v ≤ 0,
            0.5∆²        for v > 0.   (67)

We need to analyze the following three cases:

• In case of v ≤ 0,
  b_h(∆; v) = 0.5(∆ + v)² ≥ a_h(v + ∆).   (68)

• In case of v > 0 and ∆ > −v,
  b_h(∆; v) = 0.5∆² ≥ 0 = a_h(v + ∆).   (69)

• In case of v > 0 and ∆ ≤ −v,
  b_h(∆; v) − a_h(v + ∆) = 0.5∆² − 0.5(∆ + v)²
  = −v∆ − 0.5v² = 0.5(−(∆ + v) − ∆)v
  ≥ 0.5(−∆)v ≥ 0.5v² ≥ 0.   (70)

The above leads to b_h(∆; v) ≥ a_h(v + ∆) for h ∈ I_−. Summing (62) over h gives (60) ≥ (61), and hence (55).

A.6 Proof of Theorem 2

A key observation that leads to the discovery of Theorem 2 is the following lemma:

Lemma A.3. Let g : R^d → R ∪ {+∞} be defined as g(w) := (1/2)‖w‖² + δ_S(w), where δ_S(·) is the indicator function of the feasible region S given in (3). Namely, δ_S(w) = +∞ if w ∉ S; otherwise δ_S(w) = 0. Then, with the d-dimensional vector c defined in (4), the gradient of its convex conjugate (Rockafellar, 1970) is expressed as ∇g*(w̄) = w̄ + c ⊙ (−c ⊙ w̄)_+.

See Subsection A.7 for the proof of Lemma A.3.

The function g defined in Lemma A.3 is 1-strongly convex. Then, if we view g as a regularization function in replacement of the squared L2-norm regularizer, the sign-constrained optimization problem (5) can be rewritten as

min λg(w) + (1/n)Φ(X^⊤w)  wrt w ∈ R^d.   (71)

This is a class of optimization problems targeted by a variant of SDCA named Prox-SDCA (Shalev-Shwartz and Zhang, 2016), which maintains the convergence rate of the vanilla SDCA yet allows the regularization function to be any 1-strongly convex function. The difference from the vanilla SDCA is that the primal variable is recovered from the gradient of the convex conjugate of g(·) at the end of each iterate. It can be seen that Algorithm 2 is generated by applying Prox-SDCA to our problem setting with g defined in Lemma A.3. From this observation, Theorem 2 is established.

A.7 Proof of Lemma A.3

The convex conjugate of g is

g*(w̄) = max_{w∈R^d} ( ⟨w̄, w⟩ − g(w) )
= max_{w∈R^d} ( ⟨w̄, w⟩ − (1/2)‖w‖² − δ_S(w) )
= max_{w∈S} ( ⟨w̄, w⟩ − (1/2)‖w‖² )
= (1/2)‖w̄‖² − (1/2) min_{w∈S} ‖w − w̄‖².   (72)

We use Danskin's theorem to get the derivative as

∇g*(w̄) = Π_S(w̄).   (73)

Since Π_S(w̄) = w̄ + c ⊙ (−c ⊙ w̄)_+, as shown in Subsection A.2, Lemma A.3 is established.
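Lemma A.3 makes the Prox-SDCA primal recovery step a single vectorized expression. The sketch below is our own illustration of that step; the function name and the argument name w_bar are not from the paper.

import numpy as np

def recover_primal(w_bar, c):
    # Lemma A.3: the primal iterate is grad g*(w_bar) = Pi_S(w_bar)
    # = w_bar + c * (-c * w_bar)_+ , the same operator as project_S above.
    return w_bar + c * np.maximum(-c * w_bar, 0.0)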
Dance Dance Convolution
Chris Donahue 1 Zachary C. Lipton 2 Julian McAuley 2
arXiv:1703.06891v3 [cs.LG] 21 Jun 2017
Abstract
Dance Dance Revolution (DDR) is a popular
rhythm-based video game. Players perform steps
on a dance platform in synchronization with music as directed by on-screen step charts. While
many step charts are available in standardized
packs, players may grow tired of existing charts,
or wish to dance to a song for which no chart exists. We introduce the task of learning to choreograph. Given a raw audio track, the goal is to
produce a new step chart. This task decomposes
naturally into two subtasks: deciding when to
place steps and deciding which steps to select.
For the step placement task, we combine recurrent and convolutional neural networks to ingest
spectrograms of low-level audio features to predict steps, conditioned on chart difficulty. For
step selection, we present a conditional LSTM
generative model that substantially outperforms
n-gram and fixed-window approaches.
1. Introduction
Dance Dance Revolution (DDR) is a rhythm-based video
game with millions of players worldwide (Hoysniemi,
2006). Players perform steps atop a dance platform, following prompts from an on-screen step chart to step on the
platform’s buttons at specific, musically salient points in
time. A player’s score depends upon both hitting the correct buttons and hitting them at the correct time. Step charts
vary in difficulty with harder charts containing more steps
and more complex sequences. The dance pad contains up,
down, left, and right arrows, each of which can be in one
of four states: on, off, hold, or release. Because the four arrows can be activated or released independently, there are
256 possible step combinations at any instant.
1 UCSD Department of Music, San Diego, CA. 2 UCSD Department of Computer Science, San Diego, CA. Correspondence to: Chris Donahue <cdonahue@ucsd.edu>.
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
Figure 1. Proposed learning to choreograph pipeline for four seconds of the song Knife Party feat. Mistajam - Sleaze. The pipeline
ingests audio features (Bottom) and produces a playable DDR
choreography (Top) corresponding to the audio.
Step charts exhibit rich structure and complex semantics to
ensure that step sequences are both challenging and enjoyable. Charts tend to mirror musical structure: particular sequences of steps correspond to different motifs (Figure 2),
and entire passages may reappear as sections of the song
are repeated. Moreover, chart authors strive to avoid patterns that would compel a player to face away from the
screen.
The DDR community uses simulators, such as the open-source StepMania, that allow fans to create and play their
own charts. A number of prolific authors produce and disseminate packs of charts, bundling metadata with relevant
recordings. Typically, for each song, packs contain one
chart for each of five difficulty levels.
Despite the game’s popularity, players have some reasonable complaints: For one, packs are limited to songs with
favorable licenses, meaning players may be unable to dance
to their favorite songs. Even when charts are available,
players may tire of repeatedly performing the same charts.
Although players can produce their own charts, the process
is painstaking and requires significant expertise.
Figure 2. A four-beat measure of a typical chart and its rhythm
depicted in musical notation. Red: quarter notes, Blue: eighth
notes, Yellow: sixteenth notes, (A): jump step, (B): freeze step
In this paper, we seek to automate the process of step chart
generation so that players can dance to a wider variety of
charts on any song of their choosing. We introduce the task
of learning to choreograph, in which we learn to generate
step charts from raw audio. Although this task has previously been approached via ad-hoc methods, we are the
first to cast it as a learning task in which we seek to mimic
the semantics of human-generated charts. We break the
problem into two subtasks: First, step placement consists
of identifying a set of timestamps in the song at which to
place steps. This process can be conditioned on a player-specified difficulty level. Second, step selection consists of
choosing which steps to place at each timestamp. Running
these two steps in sequence yields a playable step chart.
This process is depicted in Figure 1.
Progress on learning to choreograph may also lead to advances in music information retrieval (MIR). Our step
placement task, for example, closely resembles onset detection, a well-studied MIR problem. The goal of onset
detection is to identify the times of all musically salient
events, such as melody notes or drum strikes. While not
every onset in our data corresponds to a DDR step, every
DDR step corresponds to an onset. In addition to marking steps, DDR packs specify a metronome click track for
each song. For songs with changing tempos, the exact location of each change and the new tempo are annotated.
This click data could help to spur algorithmic innovation
for beat tracking and tempo detection.
Unfortunately, MIR research is stymied by the difficulty
of accessing large, well-annotated datasets. Songs are often subject to copyright issues, and thus must be gathered by each researcher independently. Collating audio
with separately-distributed metadata is nontrivial and error-prone owing to the multiple available versions of many
songs. Researchers often must manually align their version of a song to the metadata. In contrast, our dataset is
publicly available, standardized, and contains meticulously annotated labels as well as the relevant recordings.
We believe that DDR charts represent an abundant and under-recognized source of annotated data for MIR research. StepMania Online, a popular repository of DDR data, distributes over 350Gb of packs with annotations for more than 100k songs. In addition to introducing a novel task and methodology, we contribute two large public datasets, which we consider to be of notably high quality and consistency.1 Each dataset is a collection of recordings and step charts. One contains charts by a single author and the other by multiple authors.
For both prediction stages of learning to choreograph, we
demonstrate the superior performance of neural networks
over strong alternatives. Our best model for step placement
jointly learns a convolutional neural network (CNN) representation and a recurrent neural network (RNN), which
integrates information across consecutive time slices. This
method outperforms CNNs alone, multilayer perceptrons
(MLPs), and linear models.
Our best-performing system for step selection consists of
a conditional LSTM generative model. As auxiliary information, the model takes beat phase, a number representing
the fraction of a beat at which a step occurs. Additionally,
the best models receive the time difference (measured in
beats) since the last and until the next step. This model selects steps that are more consistent with expert authors than
the best n-gram and fixed-window models, as measured by
perplexity and per-token accuracy.
1.1. Contributions
In short, our paper offers the following contributions:
• We define learning to choreograph, a new task with
real-world usefulness and strong connections to fundamental problems in MIR.
• We introduce two large, curated datasets for benchmarking DDR choreography algorithms. They represent an under-recognized source of music annotations.
• We introduce an effective pipeline for learning to
choreograph with deep neural networks.2
2. Data
Basic statistics of our two datasets are shown in Table 1.
The first dataset contains 90 songs choreographed by a single prolific author who works under the name Fraxtil. This
dataset contains five charts per song corresponding to increasing difficulty levels. We find that while these charts
overlap significantly, the lower difficulty charts are not
strict subsets of the higher difficulty charts (Figure 3). The
1 https://github.com/chrisdonahue/ddc
2 Demonstration showing human choreography alongside output of our method: https://youtu.be/yUc3O237p9M
Figure 3. Five seconds of choreography by difficulty level for the
song KOAN Sound - The Edge from the Fraxtil training set.
second dataset is a larger, multi-author collection called In
The Groove (ITG); this dataset contains 133 songs with one
chart per difficulty, except for 13 songs that lack charts for
the highest difficulty. Both datasets contain electronic music with constant tempo and a strong beat, characteristic of
music favored by the DDR community.
Table 1. Dataset statistics

                Fraxtil          ITG
Num authors     1                8
Num packs       3                2
Num songs       90 (3.1 hrs)     133 (3.9 hrs)
Num charts      450 (15.3 hrs)   652 (19.0 hrs)
Steps/sec       3.135            2.584
Vocab size      81               88

Figure 4. Number of steps per rhythmic subdivision by difficulty in the Fraxtil dataset.

The charts in both datasets echo high-level rhythmic structure in the music. An increase in difficulty corresponds to increasing propensity for steps to appear at finer rhythmic subdivisions. Beginner charts tend to contain only quarter notes and eighth notes. Higher-difficulty charts reflect more complex rhythmic details in the music, featuring higher densities of eighth and sixteenth note steps (8th, 16th) as well as triplet patterns (12th, 24th) (Figure 4).
Note that while the total number of songs is relatively
small, when considering all charts across all songs the
datasets contain around 35 hours of annotations and
350,000 steps. The two datasets have similar vocabulary
sizes (81 and 88 distinct step combinations, respectively).
Around 84% of the steps in both datasets consist of a single, instantaneous arrow.
Step charts contain several invariances, for example interchanging all instances of left and right results in an equally
plausible sequence of steps. To augment the amount of data
available for training, we generate four instances of each
chart, by mirroring left/right, up/down (or both). Doing so
considerably improves performance in practice.
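This mirroring augmentation is simple to implement. Below is a minimal sketch; it assumes steps are encoded as 4-character strings in the pad's left/down/up/right order, which is our convention for illustration, not one stated in the paper.

def mirror(step, lr=False, ud=False):
    # step: 4-char string, one char per arrow in (left, down, up, right) order.
    l, d, u, r = step
    if lr:
        l, r = r, l     # swap left and right arrows
    if ud:
        d, u = u, d     # swap down and up arrows
    return l + d + u + r

def augment(chart):
    # chart: list of (beat, time, step) tuples -> the four mirrored variants.
    flips = [(False, False), (True, False), (False, True), (True, True)]
    return [[(b, t, mirror(s, lr, ud)) for b, t, s in chart]
            for lr, ud in flips]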
In addition to encoded audio, packs consist of metadata including a song’s title, artist, a list of time-stamped tempo
changes, and a time offset to align the recording to the tempos. They also contain information such as the chart difficulties and the name of the choreographer. Finally, the
metadata contains a full list of steps, marking the measure
and beat of each. To make this data easier to work with,
we convert it to a canonical form consisting of (beat, time,
step) tuples.
3. Problem Definition
A step can occur in up to 192 different locations (subdivisions) within each measure. However, measures contain roughly 6 steps on average. This level of sparsity
makes it difficult to uncover patterns across long sequences
of (mostly empty) frames via a single end-to-end sequential model. So, to make automatic DDR choreography
tractable, we decompose it into two subtasks: step placement and step selection.
In step placement, our goal is to decide at what precise
times to place steps. A step placement algorithm ingests
raw audio features and outputs timestamps corresponding
to steps. In addition to the audio signal, we provide step
placement algorithms with a one-hot representation of the
intended difficulty rating for the chart.
Step selection involves taking a discretized list of step times
computed during step placement and mapping each of these
to a DDR step. Our approach to this problem involves modeling the probability distribution P (mn |m1 , . . . , mn−1 )
where mn is the nth step in the sequence. Some steps require that the player hit two or more arrows at once, a jump;
or hold on one arrow for some duration, a freeze (Figure 2).
4. Methods
We now describe our specific solutions to the step placement and selection problems. Our basic pipeline works
as follows: (1) extract an audio feature representation;
(2) feed this representation into a step placement algorithm,
which estimates probabilities that a ground truth step lies
within that frame; (3) use a peak-picking process on this
sequence of probabilities to identify the precise timestamps
at which to place steps; and finally (4) given a sequence of
timestamps, use a step selection algorithm to choose which
steps to place at each time.
4.1. Audio Representation
Music files arrive as lossy encodings at 44.1kHz. We decode the audio files into stereo PCM audio and average
the two channels to produce a monophonic representation.
We then compute a multiple-timescale short-time Fourier
transform (STFT) using window lengths of 23ms, 46ms,
and 93ms and a stride of 10ms. Shorter window sizes
preserve low-level features such as pitch and timbre while
larger window sizes provide more context for high-level
features such as melody and rhythm (Hamel et al., 2012).
Using the ESSENTIA library (Bogdanov et al., 2013), we
reduce the dimensionality of the STFT magnitude spectra
to 80 frequency bands by applying a Mel-scale (Stevens
et al., 1937) filterbank. We scale the filter outputs logarithmically to better represent human perception of loudness.
Finally, we prepend and append seven frames of past and
future context respectively to each frame.
For fixed-width methods, the final audio representation is a
15 × 80 × 3 tensor. These correspond to the temporal width
of 15 representing 150ms of audio context, 80 frequency
bands, and 3 different window lengths. To better condition
the data for learning, we normalize each frequency band to
zero mean and unit variance. Our approach to acoustic feature representation closely follows the work of Schlüter &
Böck (2014), who develop similar representations to perform onset detection with CNNs.
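The feature pipeline described above can be sketched as follows. We use librosa purely for brevity — the paper uses the ESSENTIA library — and we omit the prepending/appending of the seven context frames, so treat the exact calls as illustrative.

import numpy as np
import librosa

def audio_features(path, sr=44100, n_mels=80):
    y, _ = librosa.load(path, sr=sr, mono=True)    # average stereo to mono
    hop = int(0.010 * sr)                          # 10ms stride
    specs = []
    for win_ms in (23, 46, 93):                    # multiple timescales
        n_fft = int(win_ms / 1000 * sr)
        S = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
        specs.append(np.log(S + 1e-16))            # log-scaled loudness
    X = np.stack(specs, axis=-1)                   # (n_mels, frames, 3)
    # normalize each frequency band to zero mean and unit variance
    mean = X.mean(axis=1, keepdims=True)
    std = X.std(axis=1, keepdims=True) + 1e-8
    return (X - mean) / std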
4.2. Step Placement
We consider several models to address the step placement
task. Each model’s output consists of a single sigmoid unit
which estimates the probability that a step is placed. For
all models, we augment the audio features with a one-hot
representation of difficulty.
Following state-of-the-art work on onset detection
(Schlüter & Böck, 2014), we adopt a convolutional neural
network (CNN) architecture. This model consists of two
convolutional layers followed by two fully connected
layers. Our first convolutional layer has 10 filter kernels
that are 7-wide in time and 3-wide in frequency. The
second layer has 20 filter kernels that are 3-wide in time
and 3-wide in frequency. We apply 1D max-pooling after
each convolutional layer, only in the frequency dimension,
with a width and stride of 3. Both convolutional layers
Figure 5. C-LSTM model used for step placement
use rectified linear units (ReLU) (Glorot et al., 2011).
Following the convolutional layers, we add two fully
connected layers with rectifier activation functions and 256
and 128 nodes, respectively.
To improve upon the CNN, we propose a C-LSTM model,
combining a convolutional encoding with an RNN that integrates information across longer windows of time. To
encode the raw audio at each time step, we first apply two
convolutional layers (of the same shape as the CNN) across
the full unrolling length. The output of the second convolutional layer is a 3D tensor, which we flatten along the
channel and frequency axes (preserving the temporal dimension). The flattened features at each time step then become the inputs to a two-layer RNN.
The C-LSTM contains long short-term memory (LSTM)
units (Hochreiter & Schmidhuber, 1997) with forget gates
(Gers & Schmidhuber, 2000). The LSTM consists of 2 layers with 200 nodes each. Following the LSTM layers, we
apply two fully connected ReLU layers of dimension 256
and 128. This architecture is depicted in Figure 5. We
train this model using 100 unrollings for backpropagation
through time.
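A compact PyTorch sketch of this C-LSTM follows, under our reading of the stated shapes (3 window-length channels and 80 mel bands per frame); the difficulty vocabulary size and other unstated details are assumptions.

import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, n_diff=5, lstm_size=200):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 10, kernel_size=(7, 3)),   # 7-wide in time, 3-wide in freq
            nn.ReLU(),
            nn.MaxPool2d((1, 3), stride=(1, 3)),    # pool frequency only
            nn.Conv2d(10, 20, kernel_size=(3, 3)),
            nn.ReLU(),
            nn.MaxPool2d((1, 3), stride=(1, 3)),
        )
        # 20 channels x 8 frequency bins remain after the conv/pool stages
        self.lstm = nn.LSTM(20 * 8 + n_diff, lstm_size,
                            num_layers=2, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(lstm_size, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),        # per-frame step probability
        )

    def forward(self, x, diff):
        # x: (batch, 3, time, 80); diff: (batch, n_diff) one-hot difficulty
        z = self.conv(x)                             # (batch, 20, time', 8)
        z = z.permute(0, 2, 1, 3).flatten(2)         # (batch, time', 160)
        d = diff.unsqueeze(1).expand(-1, z.size(1), -1)
        out, _ = self.lstm(torch.cat([z, d], dim=2))
        return self.head(out).squeeze(-1)            # (batch, time')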
A chart’s intended difficulty influences decisions both
about how many steps to place and where to place them.
For low-difficulty charts, the average number of steps per
second is less than one. In contrast, the highest-difficulty
charts exceed seven steps per second. We trained all models both with and without conditioning on difficulty, and
found the inclusion of this feature to be informative. We
Figure 6. One second of peak picking. Green: Ground truth region, (A): true positive, (B): false positive, (C): false negative, (D): two peaks smoothed to one by Hamming window, (E): misaligned peak accepted as true positive by ±20ms tolerance

Figure 7. LSTM model used for step selection
concatenate difficulty features to the flattened output of the
CNN before feeding the vector to the fully connected (or
LSTM) layers (Figure 5).3 We initialize weight matrices
following the scheme of Glorot & Bengio (2010).
Training Methodology We minimize binary crossentropy with mini-batch stochastic gradient descent. For
all models we train with batches of size 256, scaling down
gradients when their l2 norm exceeds 5. We apply 50%
dropout following each LSTM and fully connected layer.
For LSTM layers, we apply dropout in the input to output
but not temporal directions, following best practices from
(Zaremba et al., 2014; Lipton et al., 2016; Dai & Le, 2015).
Although the problem exhibits pronounced class imbalance
(97% negatives), we achieved better results training on imbalanced data than with re-balancing schemes. We exclude
all examples before the first step in the chart or after the
last step as charts typically do not span the entire duration
of the song.
For recurrent neural networks, the target at each frame is
the ground truth value corresponding to that frame. We
calculate updates using backpropagation through time with
100 steps of unrolling, equal to one second of audio or
two beats on a typical track (120 BPM). We train all networks with early-stopping determined by the area under
the precision-recall curve on validation data. All models
satisfy this criteria within 12 hours of training on a single
machine with an NVIDIA Tesla K40m GPU.
3 For LogReg and MLP, we add difficulty to the input layer.

4.3. Peak Picking

Following standard practice for onset detection, we convert sequences of step probabilities into a discrete set of chosen placements via a peak-picking process. First we
run our step placement algorithm over an entire song to assign the probabilities of a step occurring within each 10ms
frame.4 We then convolve this sequence of predicted probabilities with a Hamming window, smoothing the predictions and suppressing double-peaks from occurring within
a short distance. Finally, we apply a constant threshold to
choose which peaks are high enough (Figure 6). Because
the number of peaks varies according to chart difficulty, we
choose a different threshold per difficulty level. We consider predicted placements to be true positives if they lie
within a ±20ms window of a ground truth.
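A short sketch of this peak-picking stage is given below; the smoothing window length and the use of scipy.signal.find_peaks are our choices, since the paper only specifies Hamming smoothing and a per-difficulty threshold.

import numpy as np
from scipy.signal import find_peaks

def pick_steps(probs, threshold, win=5, frame_s=0.010):
    # Smooth with a normalized Hamming window to suppress double peaks.
    ham = np.hamming(win)
    smoothed = np.convolve(probs, ham / ham.sum(), mode='same')
    # Keep local maxima that clear the per-difficulty threshold.
    peaks, _ = find_peaks(smoothed, height=threshold)
    return peaks * frame_s    # timestamps in seconds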
4.4. Step Selection
We treat the step selection task as a sequence generation
problem. Our approach follows related work in language
modeling where RNNs are well-known to produce coherent
text that captures long-range relationships (Mikolov et al.,
2010; Sutskever et al., 2011; Sundermeyer et al., 2012).
Our LSTM model passes over the ground truth step placements and predicts the next token given the previous sequence of tokens. The output is a softmax distribution over
the 256 possible steps. As input, we use a more compact
bag-of-arrows representation containing 16 features (4 per
arrow) to depict the previous step. For each arrow, the 4
corresponding features represent the states on, off, hold,
and release. We found the bag-of-arrows to give equivalent
4 In DDR, scores depend on the accuracy of a player's step timing. The highest scores require that a step is performed within 22.5ms of its appointed time; this suggests that a reasonable algorithm should place steps with an even finer level of granularity.
Table 2. Results for step placement experiments

Model    Dataset  PPL    AUC    F-score_c  F-score_m
LogReg   Fraxtil  1.205  0.601  0.609      0.667
MLP      Fraxtil  1.097  0.659  0.665      0.726
CNN      Fraxtil  1.082  0.671  0.678      0.750
C-LSTM   Fraxtil  1.070  0.682  0.681      0.756
LogReg   ITG      1.123  0.599  0.634      0.652
MLP      ITG      1.090  0.637  0.671      0.704
CNN      ITG      1.083  0.677  0.689      0.719
C-LSTM   ITG      1.072  0.680  0.697      0.721
performance to the one-hot representation while requiring
fewer parameters. We add an additional feature that functions as a start token to denote the first step of a chart. For
this task, we use an LSTM with 2 layers of 128 cells each.
Finally, we provide additional musical context to the step
selection models by conditioning on rhythmic features
(Figure 7). To inform models of the non-uniform spacing
of the step placements, we consider the following three features: (1) ∆-time adds two features representing the time
since the previous step and the time until the next step;
(2) ∆-beat adds two features representing the number of
beats since the previous and until the next step; (3) beat
phase adds four features representing which 16th note subdivision of the beat the current step most closely aligns to.
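The input encoding just described can be sketched as follows; the arrow order, state order, and 16th-note phase binning are our assumptions about the exact layout.

import numpy as np

def encode_step(step):
    # step: 4-char string with one state index per arrow,
    # states 0..3 = off, on, hold, release (assumed ordering).
    feats = np.zeros(16)
    for arrow, s in enumerate(step):
        feats[4 * arrow + int(s)] = 1.0   # 4 state features per arrow
    return feats

def rhythm_feats(beat_prev, beat_now, beat_next):
    d_beat = [beat_now - beat_prev, beat_next - beat_now]   # ∆-beat features
    phase = np.zeros(4)          # which 16th-note subdivision of the beat
    sub = int(round((beat_now % 1.0) * 4)) % 4
    phase[sub] = 1.0
    return np.concatenate([d_beat, phase])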
Training Methodology For all neural network models,
we learn parameters by minimizing cross-entropy. We train
with mini-batches of size 64, and scale gradients using the
same scheme as for step placement. We use 50% dropout
during training for both the MLP and RNN models in the
same fashion as for step placement. We use 64 steps of unrolling, representing an average of 100 seconds for the easiest charts and 9 seconds for the hardest. We apply earlystopping determined by average per-step cross entropy on
validation data. All models satisfy this criteria within 6
hours of training on a single machine with an NVIDIA
Tesla K40m GPU.
5. Experiments
For both the Fraxtil and ITG datasets we apply 80%, 10%,
10% splits for training, validation, and test data, respectively. Because of correlation between charts for the same
song of varying difficulty, we ensure that all charts for a
particular song are grouped together in the same split.
5.1. Step Placement
We evaluate the performance of our step placement methods against baselines via the methodology outlined below.
Table 3. Results for step selection experiments

Model                          Dataset  PPL    Accuracy
KN5                            Fraxtil  3.681  0.528
MLP5                           Fraxtil  3.744  0.543
MLP5 + ∆-time                  Fraxtil  3.495  0.553
MLP5 + ∆-beat + beat phase     Fraxtil  3.428  0.557
LSTM5                          Fraxtil  3.583  0.558
LSTM5 + ∆-time                 Fraxtil  3.188  0.584
LSTM5 + ∆-beat + beat phase    Fraxtil  3.185  0.581
LSTM64                         Fraxtil  3.352  0.578
LSTM64 + ∆-time                Fraxtil  3.107  0.606
LSTM64 + ∆-beat + beat phase   Fraxtil  3.011  0.613
KN5                            ITG      5.847  0.356
MLP5                           ITG      5.312  0.376
MLP5 + ∆-time                  ITG      4.792  0.402
MLP5 + ∆-beat + beat phase     ITG      4.786  0.401
LSTM5                          ITG      5.040  0.407
LSTM5 + ∆-time                 ITG      4.412  0.439
LSTM5 + ∆-beat + beat phase    ITG      4.447  0.441
LSTM64                         ITG      4.780  0.426
LSTM64 + ∆-time                ITG      4.284  0.454
LSTM64 + ∆-beat + beat phase   ITG      4.342  0.444
Baselines To establish reasonable baselines for step
placement, we first report the results of a logistic regressor (LogReg) trained on flattened audio features. We also
report the performance of an MLP. Our MLP architecture
contains two fully-connected layers of size 256 and 128,
with rectifier nonlinearity applied to each layer. We apply
dropout with probability 50% after each fully-connected
layer during training. We model our CNN baseline on the
method of Schlüter & Böck (2014), a state-of-the-art algorithm for onset detection.
Metrics We report each model’s perplexity (PPL) averaged across each frame in each chart in the test data. Using
the sparse step placements, we calculate the average per-chart area under the precision-recall curve (AUC). We average the best per-chart F-scores and report this value as F-score_c. We calculate the micro F-score across all charts and report this value as F-score_m.
In Table 2, we list the results of our experiments for step
placement. For ITG, models were conditioned on not just
difficulty but also a one-hot representation of chart author.
For both datasets, the C-LSTM model performs the best
by all evaluation metrics. Our models achieve significantly
higher F-scores for harder difficulty step charts. On the
Fraxtil dataset, the C-LSTM achieves an F-score_c of 0.844 for the hardest difficulty charts but only 0.389 for the lowest difficulty. The difficult charts contribute more to F-score_m calculations because they have more ground truth
positives. We discuss these results further in Section 6.
5.2. Step Selection
Baselines For step selection, we compare the performance of the conditional LSTM to an n-gram model. Note
that perplexity can be unbounded when a test set token is
assigned probability 0 by the generative model. To protect
the n-gram models against unbounded loss on previously
unseen n-grams, we use modified Kneser-Ney smoothing
(Chen & Goodman, 1998), following best practices in language modeling (Mikolov et al., 2010; Sutskever et al.,
2011). Specifically, we train a smoothed 5-gram model
with backoff (KN5) as implemented in Stolcke (2002).
Following the work of Bengio et al. (2003) we also compare against a fixed-window 5-gram MLP which takes 4
bag-of-arrows-encoded steps as input and predicts the next
step. The MLP contains two fully-connected layers with
256 and 128 nodes and 50% dropout after each layer during training. As with the LSTM, we train the MLP both
with and without access to side features. In addition to
the LSTM with 64 steps of unrolling, we train an LSTM
with 5 steps of unrolling. These baselines show that the
LSTM learns complex, long-range dependencies. They
also demonstrate the discriminative information conferred
by the ∆-time, ∆-beat, and beat phase features.
Metrics We report the average per-step perplexity, averaging scores calculated separately on each chart. We
also report a per-token accuracy. We calculate accuracy
by comparing the ground-truth step to the argmax over a
model’s predictive distribution given the previous sequence
of ground-truth tokens. For a given chart, the per token
accuracy is averaged across time steps. We produce final
numbers by averaging scores across charts.
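Concretely, the per-chart step selection metrics can be computed as in this sketch, where probs holds the model's per-step predictive distributions given the ground-truth history (names are ours):

import numpy as np

def chart_metrics(probs, targets):
    # probs: (T, V) predictive distributions; targets: (T,) token ids.
    nll = -np.log(probs[np.arange(len(targets)), targets])
    perplexity = np.exp(nll.mean())      # average per-step perplexity
    accuracy = (probs.argmax(axis=1) == targets).mean()
    return perplexity, accuracy

# Final reported numbers average these per-chart scores across all charts.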
In Table 3 we present results for the step selection task.
For the Fraxtil dataset, the best performing model was the
LSTM conditioned on both ∆-beat and beat phase, while
for ITG it was the LSTM conditioned on ∆-time. While
conditioning on rhythm features was generally beneficial,
the benefits of various features were not strictly additive.
Representing ∆-beat and ∆-time as real numbers outperformed bucketed representations.
Additionally, we explored the possibility of incorporating
more comprehensive representations of the audio into the
step selection model. We considered a variety of representations, such as conditioning on CNN features learned from
the step placement task. We also experimented with jointly
learning a CNN audio encoder. In all cases, these approaches led to rapid overfitting and never approached the
performance of the conditional LSTM generative model;
perhaps a much larger dataset could support these approaches. Finally, we tried conditioning the step selection
models on both difficulty and chart author but found these
models to overfit quickly as well.
Figure 8. Top: A real step chart from the Fraxtil dataset on the
song Anamanaguchi - Mess. Middle: One-step lookahead predictions for the LSTM model, given Fraxtil’s choreography as input.
The model predicts the next step with high accuracy (errors in
red). Bottom: Choreography generated by conditional LSTM
model.
6. Discussion
Our experiments establish the feasibility of using machine
learning to automatically generate high-quality DDR charts
from raw audio. Our performance evaluations on both
subtasks demonstrate the advantage of deep neural networks over classical approaches. For step placement, the
best performing model is an LSTM with CNN encoder,
an approach which has been used for speech recognition
(Amodei et al., 2015), but, to our knowledge, never for
music-related tasks. We noticed that by all metrics, our
models perform better on higher-difficulty charts. Likely,
this owes to the comparative class imbalance of the lower
difficulty charts.
The superior performance of LSTMs over fixed-window
approaches on step selection suggests both that DDR charts
exhibit long range dependencies and that recurrent neural
networks can exploit this complex structure. In addition to
reporting quantitative results, we visualize the step selection model’s next-step predictions. Here, we give the entire
ground truth sequence as input but show the predicted next
step at each time. We also visualize a generated choreography, where each sampled output from the LSTM is fed in
as the subsequent input (Figure 8). We note the high accuracy of the model’s predictions and qualitative similarity of
the generated sequence to Fraxtil’s choreography.
For step selection, we notice that modeling the Fraxtil
dataset choreography appears to be easy compared to the
multi-author ITG dataset. We believe this owes to the distinctiveness of author styles. Because we have so many
step charts for Fraxtil, the network is able to closely mimic
his patterns. While the ITG dataset contains multiple charts
per author, none are so prolific as Fraxtil.
We released a public demo5 using our most promising models as measured by our quantitative evaluation. Players upload an audio file, select a difficulty rating and receive a
step chart for use in the StepMania DDR simulator. Our
demo produces a step chart for a 3 minute song in about
5 seconds using an NVIDIA Tesla K40c GPU. At time of
writing, 220 players have produced 1370 step charts with
the demo. We also solicited feedback, on a scale of 1-5, for
player “satisfaction” with the demo results. The 22 respondents reported an average satisfaction of 3.87.
A promising direction for future work is to make the selection algorithm audio-aware. We know qualitatively that
elements in the ground truth choreography tend to coincide
with specific musical events: jumps are used to emphasize
accents in a rhythm; freezes are used to transition from regions of high rhythmic intensity to more ambient sections.
DDR choreography might also benefit from an end-to-end
approach, in which a model simultaneously places steps
and selects them. The primary obstacle here is data sparsity at any sufficiently high feature rate. At 100Hz, about
97% of labels are null. So in 100 time-steps of unrolling,
an RNN might only encounter 3 ground truth steps.
We demonstrate that step selection methods are improved
by incorporating ∆-beat and beat phase features, however
our current pipeline does not produce this information. In
lieu of manual tempo input, we are restricted to using
∆-time features when executing our pipeline on unseen
recordings. If we trained a model to detect beat phase, we
would be able to use these features for step selection.
7. Related Work
Several academic papers address DDR. These include anthropological studies (Hoysniemi, 2006; Behrenshausen,
2007) and two papers that describe approaches to automated choreography. The first, called Dancing Monkeys,
uses rule-based methods for both step placement and step
selection (O’Keeffe, 2003). The second employs genetic
algorithms for step selection, optimizing an ad-hoc fitness
function (Nogaj, 2005). Neither establishes reproducible
evaluation methodology or learns the semantics of steps
from data.
Our step placement task closely resembles the classic problem of musical onset detection (Bello et al., 2005; Dixon,
2006). Several onset detection papers investigate modern deep learning methodology. Eyben et al. (2010) employ bidirectional LSTMs (BLSTMs) for onset detection;
Marchi et al. (2014) improve upon this work, developing
a rich multi-resolution feature representation; Schlüter &
Böck (2014) demonstrate a CNN-based approach (against
5
http://deepx.ucsd.edu/ddc
which we compare) that performs competitively with the
prior BLSTM work. Neural networks are widely used on
a range of other MIR tasks, including musical chord detection (Humphrey & Bello, 2012; Boulanger-Lewandowski
et al., 2013a) and boundary detection (Ullrich et al., 2014),
another transient audio phenomenon.
Our step selection problem resembles the classic natural
language processing task of statistical language modeling. Classical methods, which we consider, include n-gram
distributions (Chen & Goodman, 1998; Rosenfeld, 2000).
Bengio et al. (2003) demonstrate an approach to language
modeling using neural networks with fixed-length context.
More recently, RNNs have demonstrated superior performance to fixed-window approaches (Mikolov et al., 2010;
Sundermeyer et al., 2012; Sutskever et al., 2011). LSTMs
are also capable of modeling language at the character level
(Karpathy et al., 2015; Kim et al., 2016). While a thorough
explanation of modern RNNs exceeds the scope of this paper, we point to two comprehensive reviews of the literature (Lipton et al., 2015; Greff et al., 2016). Several papers
investigate neural networks for single-note melody generation (Bharucha & Todd, 1989; Eck, 2002; Chu et al., 2016;
Hadjeres & Pachet, 2016) and polyphonic melody generation (Boulanger-Lewandowski et al., 2012).
Learning to choreograph requires predicting both the timing and the type of events in relation to a piece of music. In
that respect, our task is similar to audio sequence transduction tasks, such as musical transcription and speech
recognition. RNNs currently yield state-of-the-art performance for musical transcription (Böck & Schedl, 2012;
Boulanger-Lewandowski et al., 2013b; Sigtia et al., 2016).
RNNs are widely used for speech recognition (Graves &
Jaitly, 2014; Graves et al., 2006; 2013; Sainath et al., 2015),
and the state-of-the-art method (Amodei et al., 2015) combines convolutional and recurrent networks. While our
work is methodologically similar, it differs from the above
in that we consider an entirely different application.
8. Conclusions
By combining insights from musical onset detection and
statistical language modeling, we have designed and evaluated a number of deep learning methods for learning to
choreograph. We have introduced standardized datasets
and reproducible evaluation methodology in the hope of
encouraging wider investigation into this and related problems. We emphasize that the sheer volume of available step
charts presents a rare opportunity for MIR: access to large
amounts of high-quality annotated data. This data could
help to spur innovation for several MIR tasks, including
onset detection, beat tracking, and tempo detection.
Acknowledgements
The authors would like to thank Jeff Donahue, Shlomo
Dubnov, Jennifer Hsu, Mohsen Malmir, Miller Puckette,
Adith Swaminathan and Sharad Vikram for their helpful feedback on this work. This work used the Extreme Science and Engineering Discovery Environment
(XSEDE) (Towns et al., 2014), which is supported by
National Science Foundation grant number ACI-1548562.
GPUs used for this research were graciously donated by the
NVIDIA Corporation.
References
Amodei, Dario, Anubhai, Rishita, Battenberg, Eric, Case,
Carl, Casper, Jared, Catanzaro, Bryan, Chen, Jingdong,
Chrzanowski, Mike, Coates, Adam, Diamos, Greg, et al.
Deep speech 2: End-to-end speech recognition in english
and mandarin. In ICML, 2015.
Behrenshausen, Bryan G. Toward a (kin) aesthetic of video
gaming the case of dance dance revolution. Games and
Culture, 2007.
Bello, Juan Pablo, Daudet, Laurent, Abdallah, Samer,
Duxbury, Chris, Davies, Mike, and Sandler, Mark B. A
tutorial on onset detection in music signals. IEEE Transactions on speech and audio processing, 2005.
Bengio, Yoshua, Ducharme, Réjean, Vincent, Pascal, and
Jauvin, Christian. A neural probabilistic language
model. JMLR, 2003.
Bharucha, Jamshed J and Todd, Peter M. Modeling the
perception of tonal structure with neural nets. Computer
Music Journal, 1989.
Böck, Sebastian and Schedl, Markus. Polyphonic piano
note transcription with recurrent neural networks. In
ICASSP, 2012.
Bogdanov, Dmitry, Wack, Nicolas, Gómez, Emilia, Gulati,
Sankalp, Herrera, Perfecto, Mayor, Oscar, Roma, Gerard, Salamon, Justin, Zapata, José R, and Serra, Xavier.
Essentia: An audio analysis library for music information retrieval. In ISMIR, 2013.
Chen, Stanley F and Goodman, Joshua. An empirical study
of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, 1998.
Chu, Hang, Urtasun, Raquel, and Fidler, Sanja. Song from
pi: A musically plausible network for pop music generation. arXiv:1611.03477, 2016.
Dai, Andrew M and Le, Quoc V. Semi-supervised sequence
learning. In NIPS, 2015.
Dixon, Simon. Onset detection revisited. In Proceedings
of the 9th International Conference on Digital Audio Effects, 2006.
Eck, Douglas. A first look at music composition using lstm
recurrent neural networks. Technical Report IDSIA-0702, 2002.
Eyben, Florian, Böck, Sebastian, Schuller, Björn W, and
Graves, Alex. Universal onset detection with bidirectional long short-term memory neural networks. In ISMIR, 2010.
Gers, Felix and Schmidhuber, Jürgen. Recurrent nets that
time and count. In International Joint Conference on
Neural Networks (IJCNN), 2000.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In
AISTATS, 2010.
Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua.
Deep sparse rectifier neural networks. In AISTATS, 2011.
Graves, Alex and Jaitly, Navdeep. Towards end-to-end
speech recognition with recurrent neural networks. In
ICML, 2014.
Graves, Alex, Fernández, Santiago, Gomez, Faustino, and
Schmidhuber, Jürgen. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML, 2006.
Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey. Speech recognition with deep recurrent neural
networks. In ICASSP, 2013.
Boulanger-Lewandowski, Nicolas, Bengio, Yoshua, and
Vincent, Pascal. Modeling temporal dependencies in
high-dimensional sequences: Application to polyphonic
music generation and transcription. In ICML, 2012.
Greff, Klaus, Srivastava, Rupesh K, Koutnı́k, Jan, Steunebrink, Bas R, and Schmidhuber, Jürgen. Lstm: A search
space odyssey. IEEE transactions on neural networks
and learning systems, 2016.
Boulanger-Lewandowski, Nicolas, Bengio, Yoshua, and
Vincent, Pascal. Audio chord recognition with recurrent
neural networks. In ISMIR, 2013a.
Hadjeres, Gaëtan and Pachet, François.
Deepbach:
a steerable model for bach chorales generation.
arXiv:1612.01010, 2016.
Boulanger-Lewandowski, Nicolas, Bengio, Yoshua, and
Vincent, Pascal. High-dimensional sequence transduction. In ICASSP, 2013b.
Hamel, Philippe, Bengio, Yoshua, and Eck, Douglas.
Building musically-relevant audio features through multiple timescale representations. In ISMIR, 2012.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long shortterm memory. Neural computation, 1997.
Hoysniemi, Johanna. International survey on the dance
dance revolution game. Computers in Entertainment
(CIE), 2006.
Humphrey, Eric J and Bello, Juan Pablo. Rethinking automatic chord recognition with convolutional neural networks. In ICMLA, 2012.
Karpathy, Andrej, Johnson, Justin, and Fei-Fei, Li.
Visualizing and understanding recurrent networks.
arXiv:1506.02078, 2015.
Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush,
Alexander M. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on
Artificial Intelligence, 2016.
Lipton, Zachary C, Berkowitz, John, and Elkan, Charles. A
critical review of recurrent neural networks for sequence
learning. arXiv:1506.00019, 2015.
Lipton, Zachary C, Kale, David C, Elkan, Charles, and
Wetzell, Randall. Learning to diagnose with LSTM recurrent neural networks. In ICLR, 2016.
Marchi, Erik, Ferroni, Giacomo, Eyben, Florian, Gabrielli,
Leonardo, Squartini, Stefano, and Schuller, Bjorn.
Multi-resolution linear prediction based features for audio onset detection with bidirectional lstm neural networks. IEEE, 2014.
Mikolov, Tomas, Karafiát, Martin, Burget, Lukas, Cernockỳ, Jan, and Khudanpur, Sanjeev. Recurrent neural
network based language model. In Interspeech, 2010.
Nogaj, Adam. A genetic algorithm for determining optimal
step patterns in dance dance revolution. Technical report,
State University of New York at Fredonia, 2005.
O’Keeffe, Karl. Dancing monkeys (automated creation of
step files for dance dance revolution). Technical report,
Imperial College London, 2003.
Rosenfeld, Ronald. Two decades of statistical language
modeling: Where do we go from here? Proceedings
of the IEEE, 2000.
Sainath, Tara N, Weiss, Ron J, Senior, Andrew, Wilson,
Kevin W, and Vinyals, Oriol. Learning the speech frontend with raw waveform cldnns. In Interspeech, 2015.
Schlüter, Jan and Böck, Sebastian. Improved musical onset detection with convolutional neural networks. In
ICASSP, 2014.
Sigtia, Siddharth, Benetos, Emmanouil, and Dixon, Simon.
An end-to-end neural network for polyphonic piano music transcription. IEEE/ACM Transactions on Audio,
Speech, and Language Processing, 2016.
Stevens, Stanley Smith, Volkmann, John, and Newman,
Edwin B. A scale for the measurement of the psychological magnitude pitch. The Journal of the Acoustical
Society of America, 1937.
Stolcke, Andreas. Srilm-an extensible language modeling
toolkit. In Interspeech, 2002.
Sundermeyer, Martin, Schlüter, Ralf, and Ney, Hermann.
Lstm neural networks for language modeling. In Interspeech, 2012.
Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E.
Generating text with recurrent neural networks. In
ICML, 2011.
Towns, John, Cockerill, Timothy, Dahan, Maytal, Foster, Ian, Gaither, Kelly, Grimshaw, Andrew, Hazlewood, Victor, Lathrop, Scott, Lifka, Dave, Peterson,
Gregory D, et al. Xsede: accelerating scientific discovery. Computing in Science & Engineering, 2014.
Ullrich, Karen, Schlüter, Jan, and Grill, Thomas. Boundary
detection in music structure analysis using convolutional
neural networks. In ISMIR, 2014.
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals,
Oriol.
Recurrent neural network regularization.
arXiv:1409.2329, 2014.
Sim-to-Real Transfer of Robotic Control with Dynamics Randomization
arXiv:1710.06537v3 [cs.RO] 3 Mar 2018
Xue Bin Peng1,2 , Marcin Andrychowicz1 , Wojciech Zaremba1 , and Pieter Abbeel1,2
Abstract— Simulations are attractive environments for training agents as they provide an abundant source of data and
alleviate certain safety concerns during the training process.
But the behaviours developed by agents in simulation are often
specific to the characteristics of the simulator. Due to modeling
error, strategies that are successful in simulation may not
transfer to their real world counterparts. In this paper, we
demonstrate a simple method to bridge this “reality gap”. By
randomizing the dynamics of the simulator during training, we
are able to develop policies that are capable of adapting to
very different dynamics, including ones that differ significantly
from the dynamics on which the policies were trained. This
adaptivity enables the policies to generalize to the dynamics of
the real world without any training on the physical system. Our
approach is demonstrated on an object pushing task using a
robotic arm. Despite being trained exclusively in simulation, our
policies are able to maintain a similar level of performance when
deployed on a real robot, reliably moving an object to a desired
location from random initial configurations. We explore the
impact of various design decisions and show that the resulting
policies are robust to significant calibration error.
I. INTRODUCTION
Deep reinforcement learning (DeepRL) has been shown
to be an effective framework for solving a rich repertoire of complex control problems. In simulated domains,
agents have been developed to perform a diverse array of
challenging tasks [1], [2], [3]. Unfortunately, many of the
capabilities demonstrated by simulated agents have often
not been realized by their physical counterparts. Many of
the modern DeepRL algorithms, which have spurred recent
breakthroughs, pose high sample complexities, therefore
often precluding their direct application to physical systems.
In addition to sample complexity, deploying RL algorithms
in the real world also raises a number of safety concerns
both for the agent and its surroundings. Since exploration
is a key component of the learning process, an agent can at
times perform actions that endanger itself or its environment.
Training agents in simulation is a promising approach that
circumvents some of these obstacles. However, transferring
policies from simulation to the real world entails challenges
in bridging the ”reality gap”, the mismatch between the
simulated and real world environments. Narrowing this gap
has been a subject of intense interest in robotics, as it offers
the potential of applying powerful algorithms that have so
far been relegated to simulated domains.
While significant efforts have been devoted to building
higher fidelity simulators, we show that dynamics randomization using low fidelity simulations can also be an effective
1 OpenAI
2 UC Berkeley, Department of Electrical Engineering and Computer
Science
Fig. 1. A recurrent neural network policy trained for a pushing task in
simulation is deployed directly on a Fetch Robotics arm. The red marker
indicates the target location for the puck.
approach to develop policies that can be transferred directly
to the real world. The effectiveness of our approach is
demonstrated on an object pushing task, where a policy
trained exclusively in simulation is able to successfully
perform the task with a real robot without additional training
on the physical system.
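As a purely illustrative sketch — parameter names and ranges here are our own, not values from the paper — dynamics randomization amounts to resampling the simulator's physical parameters at the start of each training episode:

import numpy as np

def sample_dynamics(rng):
    # The paper randomizes properties such as mass, friction,
    # damping, and latency; these ranges are placeholders.
    return {
        'link_mass_scale':  rng.uniform(0.5, 1.5),
        'friction':         rng.uniform(0.5, 1.2),
        'joint_damping':    rng.uniform(0.2, 2.0),
        'action_latency_s': rng.uniform(0.0, 0.04),
    }

rng = np.random.default_rng(0)
episode_params = sample_dynamics(rng)   # apply to the simulator each episode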
II. RELATED WORK
Recent years have seen the application of deep reinforcement learning to a growing repertoire of control problems.
The framework has enabled simulated agents to develop
highly dynamic motor skills [4], [5], [6], [7]. But due to
the high sample complexity of RL algorithms and other
physical limitations, many of the capabilities demonstrated
in simulation have yet to be replicated in the physical world.
Guided Policy Search (GPS) [8] represents one of the few
algorithms capable of training policies directly on a real
robot. By leveraging trajectory optimization with learned linear dynamics models, the method is able to develop complex
manipulation skills with relatively few interactions with the
environment. The method has also been extended to learning
vision-based manipulation policies [9]. Researchers have also
explored parallelizing training across multiple robots [10].
Nonetheless, successful examples of training policies directly
on physical robots have so far been demonstrated only on
relatively restrictive domains.
A. Domain Adaptation
The problem of transferring control policies from simulation to the real world can be viewed as an instance
of domain adaptation, where a model trained in a source
domain is transferred to a new target domain. One of the
key assumptions behind these methods is that the different
domains share common characteristics such that representations and behaviours learned in one will prove useful for the
other. Learning invariant features has emerged as a promising
approach of taking advantage of these commonalities [11],
[12]. Tzeng et al. [11] and Gupta et al. [13] explored using
pairwise constraints to encourage networks to learn similar
embeddings for samples from different domains that are
labeled as being similar. Daftry et al. [14] applied a similar
approach to transfer policies for controlling aerial vehicles
to different environments and vehicle models. In the context
of RL, adversarial losses have been used to transfer policies
between different simulated domains, by encouraging agents
to adopt similar behaviours across the various environments
[15]. Alternatively, progressive networks have also been used
to transfer policies for a robotic arm from simulation to the
real world [16]. By reusing features learned in simulation,
their method was able to significantly reduce the amount
of data needed from the physical system. Christiano et al. [17] transferred policies from simulation to a real robot by training an inverse-dynamics model from real-world data.
While promising, these methods nonetheless still require data
from the target domain during training.
B. Domain Randomization
Domain randomization is a complementary class of techniques for adaptation that is particularly well suited for simulation. With domain randomization, discrepancies between
the source and target domains are modeled as variability
in the source domain. Randomization in the visual domain
has been used to directly transfer vision-based policies from
simulation to the real world without requiring real images
during training [18], [19]. Sadeghi and Levine [18] trained
vision-based controllers for a quadrotor using only synthetically rendered scenes, and Tobin et al. [19] demonstrated
transferring image-based object detectors. Unlike previous
methods, which sought to bridge the reality gap with high
fidelity rendering [20], their systems used only low fidelity
rendering and modeled differences in visual appearance by
randomizing scene properties such as lighting, textures, and
camera placement. In addition to randomizing the visual
features of a simulation, randomized dynamics have also
been used to develop controllers that are robust to uncertainty
in the dynamics of the system. Mordatch et al. [21] used a
trajectory optimizer to plan across an ensemble of dynamics
models, to produce robust trajectories that are then executed
on a real robot. Their method allowed a Darwin robot to
perform a variety of locomotion skills. But due to the cost
of the trajectory optimization step, the planning is performed
offline. Other methods have also been proposed to develop
robust policies through adversarial training schemes [22],
[23]. Yu et al. [24] trained a system identification module
to explicitly predict parameters of interest, such as mass and
friction. The predicted parameters are then provided as input
to a policy to compute the appropriate controls. While the
results are encouraging, these methods have so far only been
demonstrated on transfer between different simulators.
The work most reminiscent of our proposed method is that of Antonova et al. [25], where randomized dynamics
was used to transfer manipulation policies from simulation
to the real world. By randomizing physical parameters such
as friction and latency, they were able to train policies in
simulation for pivoting objects held by a gripper, and later
transfer the policies directly to a Baxter robot without requiring additional fine-tuning on the physical system. However, their policies were modeled using memoryless feedforward networks, and while the policies developed robust strategies, the lack of internal state limits the feedforward policies' ability to adapt to mismatch between the simulated and real
environment. We show that memory-based policies are able
to cope with greater variability during training and also better
generalize to the dynamics of the real world. Unlike previous
methods which often require meticulous calibration of the
simulation to closely conform to the physical system, our
policies are able to adapt to significant calibration error.
C. Non-prehensile Manipulation
Pushing, a form of non-prehensile manipulation, is an
effective strategy for positioning and orienting objects that
are too large or heavy to be grasped [26]. Though pushing has
attracted much interest from the robotics community [27],
[28], [29], it remains a challenging skill for robots to adopt.
Part of the difficulty stems from accurately modeling the
complex contact dynamics between surfaces. Characteristics
such as friction can vary significantly across the surface of an
object, and the resulting motions can be highly sensitive to
the initial configuration of the contact surfaces [26]. Models
have been proposed to facilitate planning algorithms [27],
[30], [28], but they tend to rely on simplifying assumptions
that are often violated in practice. More recently, deep learning methods have been applied to train predictive models for
pushing [31]. While data-driven methods overcome some of
the modeling challenges faced by previous frameworks, they
require a large corpus of real world data during training.
Such a dataset can be costly to collect, and may become
prohibitive for more complex tasks. Clavera et al. demonstrated transferring pushing policies trained in simulation to
a real PR2 [32]. Their approach took advantage of shaped
reward functions and careful calibration to ensure that the
behaviour of the simulation conforms to that of the physical
system. In contrast, we will show that adaptive policies can
be trained exclusively in simulation and using only sparse
rewards. The resulting policies are able to accommodate large
calibration errors when deployed on a real robot and also
generalize to variability in the dynamics of the physical
system.
III. BACKGROUND
In this section we will provide a review of the RL
framework and notation used in the following sections. We
consider a standard RL problem where an agent interacts
with an environment according to a policy in order to
maximize a reward. The state of the environment at timestep
t is denoted by st ∈ S. For simplicity, we assume that
the state is fully observable. A policy π(a|s) defines a
distribution over the action space A given a particular state s,
where each query to the policy samples an action a from the
conditional distribution. The reward function r : S × A →
R provides a scalar signal that reflects the desirability of
performing an action at a given state. For convenience, we
denote rt = r(st, at). The goal of the agent is to maximize the multi-step return $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$, where γ ∈ [0, 1] is a discount factor and T is the horizon of each episode.
The objective during learning is to find an optimal policy $\pi^*$ that maximizes the expected return of the agent J(π),
$$\pi^* = \arg\max_{\pi} J(\pi).$$
If each episode starts in a fixed initial state, the expected return can be rewritten as the expected return starting at the first step,
$$J(\pi) = E[R_0 \mid \pi] = E_{\tau \sim p(\tau|\pi)}\left[\sum_{t=0}^{T-1} r(s_t, a_t)\right],$$
where p(τ|π) represents the likelihood of a trajectory τ = (s0, a0, s1, ..., aT−1, sT) under the policy π,
$$p(\tau \mid \pi) = p(s_0) \prod_{t=0}^{T-1} p(s_{t+1} \mid s_t, a_t)\, \pi(a_t \mid s_t),$$
with the state transition model p(st+1 |st , at ) being determined by the dynamics of the environment. The dynamics
is therefore of crucial importance, as it determines the
consequences of the agent’s actions, as well as the behaviours
that can be realized.
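As a small, self-contained illustration (ours, not part of the original text), the return $R_t$ defined above can be computed for a recorded reward sequence with a single backward pass:

```python
def discounted_returns(rewards, gamma):
    """Compute R_t = sum_{t'=t}^{T} gamma^(t'-t) * r_{t'} for every t."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```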
A. Policy Gradient Methods

For a parametric policy πθ with parameters θ, the objective is to find the optimal parameters θ* that maximize the expected return, θ* = arg maxθ J(πθ). Policy gradient methods [33] are a popular class of algorithms for learning parametric policies, where an estimate of the gradient of the objective ∇θ J(πθ) is used to perform gradient ascent to maximize the expected return. While the previous definition of a policy is suitable for tasks where the goal is common across all episodes, it can be generalized to tasks where an agent is presented with a different goal every episode by constructing a universal policy [34]. A universal policy is a simple extension where the goal g ∈ G is provided as an additional input to the policy π(a|s, g). The reward is then also dispensed according to the goal, r(st, at, g). In our framework, a random goal will be sampled at the start of each episode and held fixed over the course of the episode. For the pushing task, the goal specifies the target location for an object.

B. Hindsight Experience Replay

During training, RL algorithms often benefit from carefully shaped reward functions that help guide the agent towards fulfilling the overall objective of a task. But designing a reward function can be challenging for more complex tasks, and may bias the policy towards adopting less optimal behaviours. An alternative is to use a binary reward r(s, g) that only indicates whether a goal is satisfied in a given state,
$$r(s, g) = \begin{cases} 0, & \text{if } g \text{ is satisfied in } s \\ -1, & \text{otherwise.} \end{cases}$$

Learning from a sparse binary reward is known to be challenging for most modern RL algorithms. We will therefore leverage a recent innovation, Hindsight Experience Replay (HER) [35], to train policies using sparse rewards. Consider an episode with trajectory τ = (s0, a0, ..., aT−1, sT), where the goal g was not satisfied over the course of the trajectory. Since the goal was not satisfied, the reward will be −1 at every timestep, providing the agent with little information on how to adjust its actions to procure more reward. But suppose that we are provided with a mapping m : S → G that maps a state to the corresponding goal satisfied in that state. For example, m(sT) = g′ represents the goal that is satisfied in the final state of the trajectory. Once a new goal has been determined, rewards can be recomputed for the original trajectory under the new goal g′. While the trajectory was unsuccessful under the original goal, it becomes a successful trajectory under the new goal. Therefore, the rewards computed with respect to g′ will not be −1 at every timestep. By replaying past experiences with HER, the agent can be trained with more successful examples than are available in the original recorded trajectories. So far, we have only considered replaying goals from the final state of a trajectory, but HER is also amenable to other replay strategies, and we refer interested readers to the original paper [35] for more details.
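To make the replay mechanics concrete, here is a minimal Python sketch of the final-state replay strategy described above. The helper names (`achieved_goal` for the mapping m, `reward_fn` for r(s, g)) are our own illustrative choices, not code from the paper:

```python
def her_relabel(trajectory, achieved_goal, reward_fn):
    """Relabel an episode with the goal satisfied in its final state.

    trajectory: list of (state, action, next_state) transitions.
    achieved_goal: the mapping m: S -> G from the text.
    reward_fn: binary reward r(s, g) returning 0 or -1.
    """
    new_goal = achieved_goal(trajectory[-1][2])      # g' = m(s_T)
    return [(s, a, s2, new_goal, reward_fn(s2, new_goal))
            for (s, a, s2) in trajectory]            # recompute r under g'
```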
IV. METHOD
Our objective is to train policies that can perform a task
under the dynamics of the real world p∗ (st+1 |st , at ). Since
sampling from the real world dynamics can be prohibitive,
we instead train a policy using an approximate dynamics
model p̂(st+1 |st , at ) ≈ p∗ (st+1 |st , at ) that is easier to
sample from. For all of our experiments, p̂ assumes the form
of a physics simulation. Due to modeling and other forms
of calibration error, behaviours that successfully accomplish
a task in simulation may not be successful once deployed
in the real world. Furthermore, it has been observed that
DeepRL policies are prone to exploiting idiosyncrasies of the
simulator to realize behaviours that are infeasible in the real
world [2], [7]. Therefore, instead of training a policy under
one particular dynamics model, we train a policy that can
perform a task under a variety of different dynamics models.
First we introduce a set of dynamics parameters µ that parameterizes the dynamics of the simulation p̂(st+1|st, at, µ). The objective is then modified to maximize the expected return across a distribution of dynamics models ρµ,
$$E_{\mu \sim \rho_\mu}\left[ E_{\tau \sim p(\tau|\pi,\mu)}\left[ \sum_{t=0}^{T-1} r(s_t, a_t) \right] \right].$$
By training policies to adapt to variability in the dynamics of the environment, the resulting policy might then better generalize to the dynamics of the real world.
A. Tasks

Our experiments are conducted on a puck pushing task using a 7-DOF Fetch Robotics arm. Images of the real robot and simulated model are available in Figure 2. The goal g for each episode specifies a random target position on the table that the puck should be moved to. The reward is binary, with rt = 0 if the puck is within a given distance of the target, and rt = −1 otherwise. At the start of each episode, the arm is initialized to a default pose and the initial location of the puck is randomly placed within a fixed bound on the table.

Fig. 2. Our experiments are conducted on a 7-DOF Fetch Robotics arm. Left: Real robot. Right: Simulated MuJoCo model.

B. State and Action

The state is represented using the joint positions and velocities of the arm, the position of the gripper, as well as the puck's position, orientation, and linear and angular velocities. The combined features result in a 52D state space. Actions from the policy specify target joint angles for a position controller. Target angles are specified as relative offsets from the current joint rotations. This yields a 7D action space.

C. Dynamics Randomization

During training, rollouts are organized into episodes of a fixed length. At the start of each episode, a random set of dynamics parameters µ is sampled according to ρµ and held fixed for the duration of the episode. The parameters which we randomize include:
• Mass of each link in the robot's body
• Damping of each joint
• Mass, friction, and damping of the puck
• Height of the table
• Gains for the position controller
• Timestep between actions
• Observation noise
This results in a total of 95 randomized parameters; a sampling sketch is given below, just before Algorithm 1. The timestep between actions specifies the amount of time an action is applied before the policy is queried again to sample a new action. This serves as a simple model of the latency exhibited by the physical controller. The observation noise models uncertainty in the sensors and is implemented as independent Gaussian noise applied to each state feature. While parameters such as mass and damping are constant over the course of an episode, the action timestep and the observation noise vary randomly at each timestep.

D. Adaptive Policy

Manipulation tasks, such as pushing, have a strong dependency on the physical properties of the system (e.g. mass, friction, and characteristics of the actuators). In order to determine the appropriate actions, a policy requires some means of inferring the underlying dynamics of its environment. While the dynamics parameters are readily available in simulation, the same does not hold once a policy has been deployed in the real world. In the absence of direct knowledge of the parameters, the dynamics can be inferred from a history of past states and actions. System identification using a history of past trajectories has been previously explored by Yu et al. [24]. Their system incorporates an online system identification module φ(st, ht) = µ̂, which utilizes a history of past states and actions ht = [at−1, st−1, at−2, st−2, ...] to predict the dynamics parameters µ. The predicted parameters are then used as inputs to a universal policy that samples an action according to the current state and inferred dynamics π(at|st, µ̂). However, this decomposition requires identifying the dynamics parameters of interest to be predicted at runtime, which may be difficult for more complex systems. Constructing such a set of parameters necessarily requires some structural assumptions about the dynamics of a system, which may not hold in the real world. Alternatively, SysID can be implicitly embedded into a policy by using a recurrent model π(at|st, zt, g), where the internal memory zt = z(ht) acts as a summary of past states and actions, thereby providing a mechanism the policy can use to infer the dynamics of the system. This model can then be trained end-to-end, and the representation of the internal memory can be learned without requiring manual identification of a set of dynamics parameters to be inferred at runtime.

E. Recurrent Deterministic Policy Gradient

Since HER augments the original training data recorded from rollouts of the policy with additional data generated from replayed goals, it requires off-policy learning. Deep Deterministic Policy Gradient (DDPG) [2] is a popular off-policy algorithm for continuous control. Its extension to recurrent policies, Recurrent Deterministic Policy Gradient (RDPG) [36], provides a method to train recurrent policies with off-policy data. To apply RDPG, we denote a deterministic policy as π(st, zt, g) = at. In addition to the policy, we will also model a recurrent universal value function Q(st, at, yt, g, µ), where yt = y(ht) is the value function's internal memory. Since the value function is used only during training and the dynamics parameters µ of the simulator are known, µ is provided as an additional input to the value function but not to the policy. We will refer to a value function with knowledge of the dynamics parameters as an omniscient critic. This follows the approach of [37], [38], where additional information is provided to the value function during training in order to reduce the variance of the policy gradients and allow the value function to provide more meaningful feedback for improving the policy.

Algorithm 1 summarizes the training procedure, where M represents a replay buffer [2], and θ and ϕ are the parameters for the policy and value function, respectively. We also incorporate target networks [2], but they are excluded for brevity.
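As a concrete illustration of the episode-level sampling in Section IV-C, the following Python sketch draws one set of dynamics parameters. The dictionary keys and the split into log-uniform and uniform draws are our own reading of Table I, not the authors' code:

```python
import numpy as np

# Illustrative ranges following Table I; "scale" entries multiply defaults.
LOG_UNIFORM = {"link_mass_scale": (0.25, 4.0),
               "joint_damping_scale": (0.2, 20.0),
               "puck_friction": (0.1, 5.0),
               "controller_gain_scale": (0.5, 2.0)}
UNIFORM = {"puck_mass": (0.1, 0.4),
           "table_height": (0.73, 0.77),
           "timestep_rate": (125.0, 1000.0)}

def sample_dynamics(rng):
    """Draw one set of dynamics parameters mu ~ rho_mu for an episode."""
    mu = {k: np.exp(rng.uniform(np.log(lo), np.log(hi)))   # log-uniform
          for k, (lo, hi) in LOG_UNIFORM.items()}
    mu.update({k: rng.uniform(lo, hi) for k, (lo, hi) in UNIFORM.items()})
    return mu

# Per-episode usage: mu = sample_dynamics(np.random.default_rng()); the
# action timestep then varies per step as dt = dt0 + Exp(mu["timestep_rate"]).
```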
Algorithm 1 Dynamics Randomization with HER and RDPG
 1: θ ← random weights
 2: ϕ ← random weights
 3: while not done do
 4:   g ∼ ρg  {sample goal}
 5:   µ ∼ ρµ  {sample dynamics}
 6:   Generate rollout τ = (s0, a0, ..., sT) with dynamics µ
 7:   for each (st, at) in τ do
 8:     rt ← r(st, g)
 9:   end for
10:   Store (τ, {rt}, g, µ) in M
11:   Sample episode (τ, {rt}, g, µ) from M
12:   with probability k:
13:     g ← replay new goal with HER
14:     rt ← r(st, g) for each t
15:   end with
16:   for each t do
17:     Compute memories zt = z(ht) and yt = y(ht)
18:     ât+1 ← πθ(st+1, zt+1, g)
19:     ât ← πθ(st, zt, g)
20:     qt ← rt + γ Qϕ(st+1, ât+1, yt+1, g, µ)
21:     Δqt ← qt − Qϕ(st, at, yt, g, µ)
22:   end for
23:   ∇ϕ ← (1/T) Σt Δqt ∂Qϕ(st, at, yt, g, µ)/∂ϕ
24:   ∇θ ← (1/T) Σt [∂Qϕ(st, ât, yt, g, µ)/∂at] · [∂ât/∂θ]
25:   Update value function and policy with ∇θ and ∇ϕ
26: end while
F. Network Architecture

Schematic illustrations of the policy and value networks are available in Figure 4. The inputs to the network consist of the current state st and previous action at−1, and the internal memory is updated incrementally at every step. Each network consists of a feedforward branch and a recurrent branch, with the latter being tasked with inferring the dynamics from past observations. The internal memory is modeled using a layer of LSTM units and is provided only with the information required to infer the dynamics (e.g. st and at−1). The recurrent branch consists of an embedding layer of 128 fully-connected units followed by 128 LSTM units. The goal g does not hold any information regarding the dynamics of the system, and is therefore processed only by the feedforward branch. Furthermore, since the current state st is of particular importance for determining the appropriate action for the current timestep, a copy is also provided as input to the feedforward branch. This presents subsequent layers with more direct access to the current state, without requiring the information to filter through the LSTM. The features computed by both branches are then concatenated and processed by 2 additional fully-connected layers of 128 units each. The value network Q(st, at, at−1, g, µ) follows a similar architecture, with the query action at and parameters µ being processed by the feedforward branch. ReLU activations are used after each hidden layer (apart from the LSTM). The output layer of Q consists of linear units, while π consists of tanh output units scaled to span the bounds of each action parameter.

Fig. 3. LSTM policy deployed on the Fetch arm. Bottom: The contact dynamics of the puck was modified by attaching a packet of chips to the bottom.
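A minimal PyTorch-style sketch of the two-branch policy network described above (dimensions follow the text; the module itself is our reconstruction, not the authors' code, and the goal dimension is an assumption). The recurrent branch sees only (st, at−1), while the goal and a copy of the state bypass the LSTM:

```python
import torch
import torch.nn as nn

class TwoBranchPolicy(nn.Module):
    def __init__(self, state_dim=52, action_dim=7, goal_dim=2, hidden=128):
        super().__init__()
        # Recurrent branch: embedding + LSTM over (s_t, a_{t-1}).
        self.embed = nn.Linear(state_dim + action_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Feedforward branch sees the goal and a copy of the state.
        self.ff = nn.Linear(state_dim + goal_dim, hidden)
        # Concatenated features pass through two more 128-unit layers.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh())  # tanh outputs

    def forward(self, state, prev_action, goal, memory=None):
        x = torch.relu(self.embed(torch.cat([state, prev_action], dim=-1)))
        z, memory = self.lstm(x, memory)        # internal memory z_t
        f = torch.relu(self.ff(torch.cat([state, goal], dim=-1)))
        return self.head(torch.cat([z, f], dim=-1)), memory
```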
V. EXPERIMENTS
Results are best seen in the supplemental video
https://youtu.be/XUW0cnvqbwM. Snapshots of policies deployed on the real robot are available in Figure 3. All
simulations are performed using the MuJoCo physics engine
[39] with a simulation timestep of 0.002s. 20 simulation
timesteps are performed for every control timestep. Each episode consists of 100 control timesteps, corresponding to approximately 4 seconds per episode, but the duration may vary as a result of the random timesteps between actions. Table I details the range of values for each dynamics parameter. At the start of each episode, a new set of parameters µ is sampled by drawing values for each parameter from their respective range. Parameters such as mass, damping, friction, and controller gains are logarithmically sampled, while other parameters are uniformly sampled. The timestep Δt between actions varies every step according to Δt ∼ Δt0 + Exp(λ), where Δt0 = 0.04s is the default control timestep, and Exp(λ) is an exponential distribution with rate parameter λ. While Δt varies every timestep, λ is fixed within each episode.

Parameter            Range
Link Mass            [0.25, 4] × default mass of each link
Joint Damping        [0.2, 20] × default damping of each joint
Puck Mass            [0.1, 0.4] kg
Puck Friction        [0.1, 5]
Puck Damping         [0.01, 0.2] N·s/m
Table Height         [0.73, 0.77] m
Controller Gains     [0.5, 2] × default gains
Action Timestep λ    [125, 1000] s⁻¹

TABLE I
DYNAMICS PARAMETERS AND THEIR RESPECTIVE RANGES.

Fig. 4. Schematic illustrations of the policy network (top), and value network (bottom). Features that are relevant for inferring the dynamics of the environment are processed by the recurrent branch, while the other inputs are processed by the feedforward branch.

In addition to randomizing the physical properties
of the simulated environment, we also simulate sensor noise
by applying Gaussian noise to the observed state features at
every step. The noise has a mean of zero and a standard
deviation of 5% of the running standard deviation of each
feature. Gaussian action exploration noise is added at every
step with a standard deviation of 0.01rad.
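A sketch of how such state-dependent sensor noise can be implemented (the running-statistics estimator is our assumption; the paper only specifies the zero mean and 5% scaling):

```python
import numpy as np

class ObservationNoise:
    """Zero-mean Gaussian noise with std = 5% of each feature's running std."""
    def __init__(self, dim, frac=0.05):
        self.n = 0
        self.sum = np.zeros(dim)
        self.sumsq = np.zeros(dim)
        self.frac = frac

    def __call__(self, obs, rng):
        # Update running per-feature moments, then perturb the observation.
        self.n += 1
        self.sum += obs
        self.sumsq += obs ** 2
        var = np.maximum(self.sumsq / self.n - (self.sum / self.n) ** 2, 0.0)
        return obs + rng.normal(0.0, self.frac * np.sqrt(var))
```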
The real puck has a mass of approximately 0.2kg and
a radius of 0.065m. The goal is considered satisfied if the
puck is within 0.07m of the target. The location of the
puck is tracked using the PhaseSpace mocap system. When
evaluating performance on the physical system, each episode consists of 200 timesteps.

Fig. 5. Joint trajectories recorded from the simulated and real robot when executing the same target trajectories. The joints correspond to the shoulder, elbow, and wrist of the Fetch arm.

Little calibration was performed to
ensure that the behaviour of the simulation closely conforms
to that of the real robot. While more extensive calibration
will likely improve performance, we show that our policy
is nonetheless able to adapt to the physical system despite
poor calibration. To illustrate the discrepancies between the
dynamics of the real world and simulation we executed the
same target trajectory on the real and simulated robot, and
recorded the resulting joint trajectories. Figure 5 illustrates
the recorded trajectories. Given the same target trajectory,
the pose trajectories of the simulated and real robot differ
significantly, with varying degrees of mismatch across joints.
During training, parameter updates are performed using
the ADAM optimizer [40] with a stepsize of 5 × 10−4 for
both the policy and value function. Updates are performed
using batches of 128 episodes with 100 steps per episode.
New goals are sampled using HER with a probability of k =
0.8. Each policy is trained for approximately 8000 update
iterations using about 100 million samples, which requires
approximately 8 hours to simulate on a 100 core cluster.
A. Comparison of Architectures
To evaluate the impact of different architectural choices,
we compared policies modeled using different architectures
and tested their performance in simulation and on the real
robot. The first is an LSTM policy following the architecture
illustrated in Figure 4. Next we consider a memoryless
feedforward network (FF) that receives only the current state
st and goal g as input. As a baseline, we also trained a memoryless feedforward network without randomization (FF no Rand) and then evaluated its performance under randomized dynamics. To provide the feedforward network with more information for inferring the dynamics, we augmented the inputs with a history of the 8 previously observed states and actions (FF + Hist). The success rate is determined as the fraction of
episodes where the goal is fulfilled at the end of the episode.
In simulation, performance of each policy is evaluated over
100 episodes, with randomized dynamics parameters for
each episode. Learning curves comparing the performance of different model architectures in simulation are available in Figure 6. Four policies initialized with different random seeds are trained for each architecture. The LSTM learns faster while also converging to a higher success rate than the feedforward models.

Fig. 6. Learning curves of different network architectures. Four policies are trained for each architecture with different random seeds. Performance is evaluated over 100 episodes in simulation with random dynamics.

Model        Success (Sim)   Success (Real)   Trials (Real)
LSTM         0.91 ± 0.03     0.89 ± 0.06      28
FF no Rand   0.51 ± 0.05     0.0 ± 0.0        10
FF           0.83 ± 0.04     0.67 ± 0.14      12
FF + Hist    0.87 ± 0.03     0.70 ± 0.10      20

TABLE II
PERFORMANCE OF THE POLICIES WHEN DEPLOYED ON THE SIMULATED AND REAL ROBOT. PERFORMANCE IN SIMULATION IS EVALUATED OVER 100 TRIALS WITH RANDOMIZED DYNAMICS PARAMETERS.

Fig. 7. Performance of different models when deployed on the simulated and real robot for the pushing task. Policies are trained using only data from simulation.

The feedforward network trained
without randomization is unable to cope with unfamiliar
dynamics during evaluation. While training a memoryless
policy with randomization improves robustness to random
dynamics, it is still unable to perform the task consistently.
Next, we evaluate the performance of the different models
when deployed on the real Fetch arm. Figure 7 compares
the performance of the final policies when deployed in
simulation and the real world. Table II summarizes the
performance of the models. The target and initial location of the puck are randomly placed within a 0.3m × 0.3m bound. While the performance of the LSTM and FF + Hist
policies are comparable in simulation, the LSTM is able to
better generalize to the dynamics of the physical system. The
feedforward network trained without randomization is unable
to perform the task under the real world dynamics.
B. Ablation
To evaluate the effects of randomizing the various dynamics parameters, we trained policies with subsets of the
parameters held fixed. A complete list of the dynamics
parameters are available in Table I. The configurations we
consider include training with a fixed timestep between
actions, training without observation noise, or with fixed
mass for each link. Table III summarizes the performance
of the resulting policies when deployed on the real robot.
Disabling randomization of the action timestep, observation
noise, link mass, and friction impairs the policies’ ability to
adapt to the physical environment. Policies trained without
randomizing the action timestep and observation noise show
particularly noticeable drops in performance. This suggests
that coping with the latency of the controller and sensor noise
are important factors in adapting to the physical system.
C. Robustness
To evaluate the robustness of the LSTM policy to different
dynamics when deployed on the real robot, we experimented
with changing the contact dynamics of the physical system
by attaching a packet of chips to the bottom of the puck.
The texture of the bag reduces the friction between the puck
and the table, while the contents of the bag further alter the
contact dynamics. Nonetheless, the LSTM policy achieves
a success rate of 0.91 ± 0.04, which is comparable to the
success rate of 0.89 ± 0.06 without the attachment.

Model                   Success        Trials
all                     0.89 ± 0.06    28
fixed action timestep   0.29 ± 0.11    17
no observation noise    0.25 ± 0.12    12
fixed link mass         0.64 ± 0.10    22
fixed puck friction     0.48 ± 0.10    27

TABLE III
PERFORMANCE OF LSTM POLICIES ON THE REAL ROBOT, WHERE THE POLICIES ARE TRAINED WITH SUBSETS OF PARAMETERS HELD FIXED.

The policy also develops clever strategies to make fine adjustments to
position the puck over the target. One such strategy involves
pressing on one side of the puck in order to partially upend
it before sliding it to the target. Other strategies include manipulating the puck from the top or sides depending on the required adjustments, and correcting for cases where the puck
overshoots the target. These behaviours emerged naturally
from the learning process using only a sparse binary reward.
VI. C ONCLUSIONS
We demonstrated the use of dynamics randomization
to train recurrent policies that are capable of adapting
to unfamiliar dynamics at runtime. Training policies with
randomized dynamics in simulation enables the resulting
policies to be deployed directly on a physical robot despite
poor calibrations. By training exclusively in simulation, we
are able to leverage simulators to generate a large volume
of training data, thereby enabling us to use powerful RL
techniques that are not yet feasible to apply directly on a
physical system. Our experiments with a real-world pushing task showed performance comparable to simulation and the ability to adapt to changes in contact dynamics. We also evaluated the importance of design decisions pertaining to the choice of architecture and of which parameters to randomize during training. We intend to extend this work to a richer repertoire of tasks and to incorporate more modalities such as
vision. We hope this approach will open more opportunities
for developing skillful agents in simulation that are then able
to be deployed in the physical world.
VII. ACKNOWLEDGEMENT
We would like to thank Ankur Handa, Vikash Kumar, Bob
McGrew, Matthias Plappert, Alex Ray, Jonas Schneider, and
Peter Welinder for their support and feedback on this project.
REFERENCES

[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, 02 2015. [Online]. Available: http://dx.doi.org/10.1038/nature14236
[2] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," CoRR, vol. abs/1509.02971, 2015. [Online]. Available: http://arxiv.org/abs/1509.02971
[3] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, "Benchmarking deep reinforcement learning for continuous control," CoRR, vol. abs/1604.06778, 2016. [Online]. Available: http://arxiv.org/abs/1604.06778
[4] X. B. Peng, G. Berseth, and M. van de Panne, "Terrain-adaptive locomotion skills using deep reinforcement learning," ACM Transactions on Graphics (Proc. SIGGRAPH 2016), vol. 35, no. 4, 2016.
[5] X. B. Peng, G. Berseth, K. Yin, and M. van de Panne, "Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning," ACM Transactions on Graphics (Proc. SIGGRAPH 2017), vol. 36, no. 4, 2017.
[6] L. Liu and J. Hodgins, "Learning to schedule control fragments for physics-based characters using deep q-learning," ACM Trans. Graph., vol. 36, no. 3, pp. 29:1–29:14, Jun. 2017. [Online]. Available: http://doi.acm.org/10.1145/3083723
[7] N. Heess, D. TB, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, S. M. A. Eslami, M. A. Riedmiller, and D. Silver, "Emergence of locomotion behaviours in rich environments," CoRR, vol. abs/1707.02286, 2017. [Online]. Available: http://arxiv.org/abs/1707.02286
[8] S. Levine, N. Wagener, and P. Abbeel, "Learning contact-rich manipulation skills with guided policy search," CoRR, vol. abs/1501.05611, 2015. [Online]. Available: http://arxiv.org/abs/1501.05611
[9] S. Levine, C. Finn, T. Darrell, and P. Abbeel, "End-to-end training of deep visuomotor policies," CoRR, vol. abs/1504.00702, 2015. [Online]. Available: http://arxiv.org/abs/1504.00702
[10] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection," CoRR, vol. abs/1603.02199, 2016. [Online]. Available: http://arxiv.org/abs/1603.02199
[11] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. Darrell, "Adapting deep visuomotor representations with weak pairwise constraints," CoRR, vol. abs/1511.07111, 2015. [Online]. Available: http://arxiv.org/abs/1511.07111
[12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, "Domain-adversarial training of neural networks," J. Mach. Learn. Res., vol. 17, no. 1, pp. 2096–2030, Jan. 2016. [Online]. Available: http://dl.acm.org/citation.cfm?id=2946645.2946704
[13] A. Gupta, C. Devin, Y. Liu, P. Abbeel, and S. Levine, "Learning invariant feature spaces to transfer skills with reinforcement learning," CoRR, vol. abs/1703.02949, 2017. [Online]. Available: http://arxiv.org/abs/1703.02949
[14] S. Daftry, J. A. Bagnell, and M. Hebert, "Learning transferable policies for monocular reactive MAV control," CoRR, vol. abs/1608.00627, 2016. [Online]. Available: http://arxiv.org/abs/1608.00627
[15] M. Wulfmeier, I. Posner, and P. Abbeel, "Mutual alignment transfer learning," CoRR, vol. abs/1707.07907, 2017. [Online]. Available: http://arxiv.org/abs/1707.07907
[16] A. A. Rusu, M. Vecerik, T. Rothörl, N. Heess, R. Pascanu, and R. Hadsell, "Sim-to-real robot learning from pixels with progressive nets," CoRR, vol. abs/1610.04286, 2016. [Online]. Available: http://arxiv.org/abs/1610.04286
[17] P. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell, J. Tobin, P. Abbeel, and W. Zaremba, "Transfer from simulation to real world through learning deep inverse dynamics model," CoRR, vol. abs/1610.03518, 2016. [Online]. Available: http://arxiv.org/abs/1610.03518
[18] F. Sadeghi and S. Levine, "Cad2rl: Real single-image flight without a single real image," CoRR, vol. abs/1611.04201, 2016. [Online]. Available: http://arxiv.org/abs/1611.04201
[19] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, "Domain randomization for transferring deep neural networks from simulation to the real world," CoRR, vol. abs/1703.06907, 2017. [Online]. Available: http://arxiv.org/abs/1703.06907
[20] S. James and E. Johns, "3d simulation for robot arm control with deep q-learning," CoRR, vol. abs/1609.03759, 2016. [Online]. Available: http://arxiv.org/abs/1609.03759
[21] I. Mordatch, K. Lowrey, and E. Todorov, "Ensemble-cio: Full-body dynamic motion planning that transfers to physical humanoids," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28 - October 2, 2015, 2015, pp. 5307–5314. [Online]. Available: https://doi.org/10.1109/IROS.2015.7354126
[22] A. Rajeswaran, S. Ghotra, S. Levine, and B. Ravindran, "Epopt: Learning robust neural network policies using model ensembles," CoRR, vol. abs/1610.01283, 2016. [Online]. Available: http://arxiv.org/abs/1610.01283
[23] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta, "Robust adversarial reinforcement learning," CoRR, vol. abs/1703.02702, 2017. [Online]. Available: http://arxiv.org/abs/1703.02702
[24] W. Yu, C. K. Liu, and G. Turk, "Preparing for the unknown: Learning a universal policy with online system identification," CoRR, vol. abs/1702.02453, 2017. [Online]. Available: http://arxiv.org/abs/1702.02453
[25] R. Antonova, S. Cruciani, C. Smith, and D. Kragic, "Reinforcement learning for pivoting task," CoRR, vol. abs/1703.00472, 2017. [Online]. Available: http://arxiv.org/abs/1703.00472
[26] K. Yu, M. Bauzá, N. Fazeli, and A. Rodriguez, "More than a million ways to be pushed: A high-fidelity experimental data set of planar pushing," CoRR, vol. abs/1604.04038, 2016. [Online]. Available: http://arxiv.org/abs/1604.04038
[27] K. M. Lynch and M. T. Mason, "Stable pushing: Mechanics, controllability, and planning," The International Journal of Robotics Research, vol. 15, no. 6, pp. 533–556, 1996.
[28] M. Dogar and S. Srinivasa, "A framework for push-grasping in clutter," in Robotics: Science and Systems VII. Pittsburgh, PA: MIT Press, July 2011.
[29] N. Fazeli, R. Kolbert, R. Tedrake, and A. Rodriguez, "Parameter and contact force estimation of planar rigid-bodies undergoing frictional contact," The International Journal of Robotics Research, vol. 0, no. 0, p. 0278364917698749, 2016.
[30] S. Akella and M. T. Mason, "Posing polygonal objects in the plane by pushing," The International Journal of Robotics Research, vol. 17, no. 1, pp. 70–88, 1998.
[31] C. Finn, I. J. Goodfellow, and S. Levine, "Unsupervised learning for physical interaction through video prediction," CoRR, vol. abs/1605.07157, 2016. [Online]. Available: http://arxiv.org/abs/1605.07157
[32] I. Clavera, D. Held, and P. Abbeel, "Policy transfer via modularity," in IROS. IEEE, 2017.
[33] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour, "Policy gradient methods for reinforcement learning with function approximation," in Advances in Neural Information Processing Systems 12. MIT Press, 2000, pp. 1057–1063.
[34] T. Schaul, D. Horgan, K. Gregor, and D. Silver, "Universal value function approximators," in Proceedings of the 32nd International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, F. Bach and D. Blei, Eds., vol. 37. Lille, France: PMLR, 07–09 Jul 2015, pp. 1312–1320. [Online]. Available: http://proceedings.mlr.press/v37/schaul15.html
[35] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba, "Hindsight experience replay," in Advances in Neural Information Processing Systems, 2017.
[36] N. Heess, J. J. Hunt, T. P. Lillicrap, and D. Silver, "Memory-based control with recurrent neural networks," CoRR, vol. abs/1512.04455, 2015. [Online]. Available: http://arxiv.org/abs/1512.04455
[37] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson, "Learning to communicate with deep multi-agent reinforcement learning," CoRR, vol. abs/1605.06676, 2016. [Online]. Available: http://arxiv.org/abs/1605.06676
[38] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," CoRR, vol. abs/1706.02275, 2017. [Online]. Available: http://arxiv.org/abs/1706.02275
[39] E. Todorov, T. Erez, and Y. Tassa, "Mujoco: A physics engine for model-based control," in IROS. IEEE, 2012, pp. 5026–5033.
[40] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," CoRR, vol. abs/1412.6980, 2014. [Online]. Available: http://arxiv.org/abs/1412.6980
ML and Near-ML Decoding of LDPC Codes Over
the BEC: Bounds and Decoding Algorithms
Irina E. Bocharova, Senior Member, IEEE, Boris D. Kudryashov, Senior Member, IEEE,
Vitaly Skachek, Member, IEEE, Eirik Rosnes, Senior Member, IEEE,
and Øyvind Ytrehus, Senior Member, IEEE
arXiv:1709.01455v1 [] 5 Sep 2017
Abstract
The performance of maximum-likelihood (ML) decoding on the binary erasure channel for finite-length low-density parity-check (LDPC) codes from two random ensembles is studied. The theoretical average spectrum of the Gallager ensemble is
computed by using a recurrent procedure and compared to the empirically found average spectrum for the same ensemble as well
as to the empirical average spectrum of the Richardson-Urbanke ensemble and spectra of selected codes from both ensembles.
Distance properties of the random codes from the Gallager ensemble are discussed. A tightened union-type upper bound on the
ML decoding error probability based on the precise coefficients of the average spectrum is presented. A new upper bound on the
ML decoding performance of LDPC codes from the Gallager ensemble based on computing the rank of submatrices of the code
parity-check matrix is derived. A new low-complexity near-ML decoding algorithm for quasi-cyclic LDPC codes is proposed and
simulated. Its performance is compared to the upper bounds on the ML decoding performance.
I. I NTRODUCTION
The binary erasure channel (BEC) is one of the simplest for analysis and consequently a well-studied communication channel
model. In spite of its simplicity, during the last decades the BEC has started to play a more important role due to the emergence
of new applications. For example, in communication networks virtually all errors occurring at the physical level can be detected
using a rather small redundancy. Data packets with detected errors can be viewed as symbol erasures.
As already mentioned, an important reason for the popularity of the BEC is that analysis of decoding performance is simpler
than for other channel models. On the other hand, it is expected that ideas and findings for the BEC might be useful for
constructing codes and developing decoding algorithms for other important communication channels, such as, for example, the
binary symmetric channel (BSC) or the additive white Gaussian noise (AWGN) channel.
A remarkable property of the BEC is that maximum-likelihood (ML) decoding of any linear code over this channel is reduced
to solving a system of linear equations. This means that ML decoding of an [n, k] low-density parity-check (LDPC) code (where
n is the code length and k is the number of information symbols) with ν erasures can be performed by Gaussian elimination
with time complexity at most O(ν³). Exploiting the sparsity of the parity-check matrix of the LDPC codes can lower the complexity to approximately O(ν²) (see overview and analysis in [3] and references therein). Practically feasible algorithms
with a thorough complexity analysis can be found in [4]–[6]. This makes ML-decoded LDPC codes strong competitors for
scenarios with strong restrictions on the decoding delay. It is worth noting that ML decoding allows for achieving low error
probability at rates strictly above the so-called belief propagation (BP) decoding threshold [7]. However, ML decoding of long
LDPC codes of lengths of a few thousand bits over the BEC is still considered impractical.
Low-complexity suboptimal decoding techniques for LDPC codes over the BEC are based on the following two approaches.
The first approach consists of adding redundant rows to the original code parity-check matrix (see, for example, [8]–[10]). The
second approach applies post-processing in case of BP decoding failure [11]–[14]. In [11], [14], a post-processing step based on
the concept of guessing bits is proposed. Applying bit guessing based algorithms to the decoding of LDPC codes improves the
performance of BP decoding, but does not provide near-ML performance. A new low-complexity near-ML decoding algorithm
is presented in this paper.
A commonly used approach to the analysis of decoding algorithms is to study the performance of the algorithm applied to
random linear codes over a given channel model. Two of the most studied code ensembles are the classical Gallager ensemble
[15] and the more general ensemble presented in [7]. The Gallager ensemble is historically the first thoroughly studied ensemble
of regular LDPC codes. The ensemble in [7] can be described by random Tanner graphs with given parity-check and symbol
I. Bocharova and B. Kudryashov are with Dept. of Information Systems, St.-Petersburg University of Information Technologies, Mechanics and Optics,
St.-Petersburg, 197101, Russia (e-mail: {iebocharova, bdkudryashov}@corp.ifmo.ru).
I. Bocharova and V. Skachek are with Institute of Computer Science, University of Tartu (e-mail: {irinaboc, vitaly}@ut.ee).
E. Rosnes is with Simula@UiB, and Ø. Ytrehus is with Simula@UiB and with Dept. of Informatics, University of Bergen (e-mail: {eirik, oyvind}@ii.uib.no).
This paper was presented in part at the 9th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, September
2016 [1], and at the IEEE International Symposium on Information Theory, Aachen, Germany, June 2017 [2].
This work is supported in part by the Norwegian-Estonian Research Cooperation Programme under the grant EMP133, and by the Estonian Research
Council under the grant PUT405.
Copyright © 2017 IEEE
node degree distributions. We will refer to this ensemble and the LDPC codes contained in it as the Richardson-Urbanke (RU)
ensemble and RU LDPC codes, respectively. Several ensembles of regular LDPC codes are described and analyzed in [16].
It is shown in [15, Appendix B] that the asymptotic weight enumerator of random (J, K)-regular LDPC codes approaches
the asymptotic weight enumerator of random linear codes when J and K grow. In [16], it is confirmed that other ensembles of
regular LDPC codes display a similar behavior. Thus, regular ensembles are good enough to achieve near-optimal performance.
On the other hand, it is well known that both asymptotically [7] and in a finite-length regime irregular LDPC codes outperform
their regular counterparts and more often are recommended for real-life applications [17], [18]. Finite-length analysis of RU
LDPC codes under BP and (to a lesser degree) under ML decoding was performed in [19]. In this paper, we extend the analysis
of the ML decoding case for regular codes. In particular, new error probability bounds for regular codes are presented. For
general linear codes, detailed overviews of lower and upper bounds for the AWGN channel and the BSC were presented by
Sason and Shamai in their tutorial paper [20], and for the AWGN channel, the BSC, and the BEC by Polyanskiy et al. in [21]
and by Di et al. in [19].
For computing upper bounds on the error probability for LDPC codes there exist two approaches. One approach is based on
a union-type bound and requires knowledge of the code weight enumerator or its estimate. The second approach used in [19]
implies estimating the rank of submatrices of the LDPC code parity-check matrix. Notice that for infinitely long codes, the
bit error rate performance of BP decoding can be analyzed through density evolution (see e.g. [7], [22]). However, the density
evolution technique is not suitable for analysis of finite-length codes, since dependencies caused by cycles in the Tanner graph
associated with the code are not taken into account.
In this paper, first we consider a tightened union-type bound based on precise average spectra of random finite-length LDPC
codes. The difference between our approach and other techniques is the way of calculating the bound. Instead of manipulating
with hardly computable coefficients of series expansions of generating functions, we compute the spectra by using efficient
recurrent procedures. This allows for obtaining precise average weight enumerators with complexity growing linearly with the
code length. New bounds, based on computing the rank of submatrices, are derived for the RU and the Gallager ensemble of
regular LDPC codes.
As mentioned above, in this paper, we propose a decoding algorithm which provides near-ML decoding of long quasi-cyclic
(QC) LDPC block codes. The decoding complexity is polynomial in the window length, but it is linear in the code length.
It is well known that a QC LDPC block code can be represented as a tail-biting (TB) parent convolutional LDPC code.
Thereby, decoding techniques developed for TB codes are applicable to QC LDPC codes. The proposed algorithm resembles a
wrap-around suboptimal decoding of TB convolutional codes [23], [24]. Decoding of a TB code requires identification of the
correct starting state, and thus ML decoding must apply the Viterbi algorithm once for each possible starting state. In contrast,
wrap-around decoding applies the Viterbi algorithm once to the wrapped-around trellis diagram with all starting state metrics
initialized to zero. This decoding approach yields near-ML performance at a typical complexity of a few times the complexity
of the Viterbi algorithm.
The new algorithm is based on a combination of BP decoding of the QC LDPC code followed by so-called “quasi-cyclic
sliding-window” ML decoding. The latter technique is applied “quasi-cyclically” to a relatively short sliding window, where
the decoder performs ML decoding of a zero-tail terminated (ZT) LDPC convolutional code. Notice that unlike sliding-window
near-ML (SWML) decoding of convolutional codes considered in [25], the suggested algorithm working on the parent LDPC
convolutional code has significantly lower computational complexity due to the sparsity of the code parity-check matrix [26].
On the other hand, it preserves almost all advantages of the convolutional structure in the sense of erasure correcting capability.
The remainder of the paper is organized as follows. Preliminaries are given in Section II. A recurrent algorithm for computing
the average spectrum for the Gallager ensemble of binary LDPC codes is presented in Section III. Empirical average spectra
for the Gallager and RU ensembles are computed and compared to the theoretical average spectra as well as to the spectra of
selected codes in the same section. Distance properties of the Gallager ensemble are discussed in Appendix A. In Section IV,
two upper bounds on the error probability of ML decoding over the BEC are considered. The corresponding proofs are given
in Appendices B and C for the RU and the Gallager ensemble, respectively. A new algorithm for near-ML decoding for
long QC LDPC codes based on the interpretation of these codes as TB convolutional codes and using wrap-around slidingwindow decoding is proposed in Section V. Simulation results confirm the efficiency of the algorithm and are presented
in Section VI, while asymptotic density evolution thresholds which stem from the derived rank bounds are computed in
Section VII. Conclusions are presented in Section VIII.
II. P RELIMINARIES
For a binary linear [n, k] code C of rate R = k/n, let r = n−k be the code redundancy. Denote by {An,w }, w = 0, 1, . . . , n,
the code weight enumerator, where An,w is the number of codewords of weight w. By some abuse of notation, we use An,w
for both the weight enumerator of a specific code and for random weight enumerators in code ensembles.
We denote by H an r × n parity-check matrix which determines C. Random matrices H of size r × n do not necessarily have full rank ρ = r, which means that the "true" dimension of the code is equal to k′ = n − ρ ≥ k. Following a commonly accepted assumption, we ignore the difference between k′ and k when deriving bounds on the error probability, but we take it into account when considering code examples.
Two ensembles of random regular LDPC codes are studied below. The first ensemble is the Gallager ensemble [15] of
(J, K)-regular LDPC codes. Codes of this ensemble are determined by random parity-check matrices H which consist of
strips Hi of width M = r/J rows each, i = 1, 2, . . . , J. All strips are random column permutations of the strip where the
j-th row contains K ones in positions (j − 1)K + 1, (j − 1)K + 2, . . . , jK, for j = 1, 2, . . . , n/K.
The second ensemble is a special case of the ensemble described in [7, Definition 3.15]. Notice that the Gallager ensemble
and the RU ensemble are denoted by B and H, respectively, in [16].
For a ∈ {1, 2, . . . }, denote by a^m the sequence (a, a, . . . , a) of m identical symbols a. In order to construct an r × n parity-check matrix H of an LDPC code from the RU ensemble one does the following.
• Construct the sequence a = (1^J, 2^J, . . . , n^J); and
• apply a random permutation b = π(a) to obtain a sequence b = (b1, . . . , bN), where N = Kr = Jn. The elements b1, . . . , bK show the locations of the nonzero elements of the first row of H, elements bK+1, . . . , b2K show the locations of the nonzero elements of the second row of H, etc.
A code from the RU ensemble is (J, K)-regular if for a given permutation π all elements of subsequences (biK−K+1 , . . . , biK )
are different for all i = 1, . . . , r, otherwise it is irregular. The regular RU codes belong to the ensemble A in [16] which is
defined by equiprobable parity-check matrices with row weight K and column weight J. It is shown in [16] that the three
ensembles A, B, and H have the same asymptotic average weight enumerators.
It is known (see [16, Theorem 3]) that for large n the total number of (J, K)-regular [n, n − r] codes (ensemble A in [16]) is equal to
$$\frac{(Jn)!}{(K!)^r (J!)^n} \exp\left(-\frac{(K-1)(J-1)}{2}\right) \left(1 + o(n^{-1+\delta})\right),$$
where δ > 0 and o(x) → 0 when x → 0. At the same time, the number of different codes from the RU ensemble constructed as described above is
$$\frac{(Jn)!}{(K!)^r (J!)^n}.$$
Thus, the portion of (J, K)-regular LDPC codes in the RU ensemble is
$$\exp\left(-\frac{(K-1)(J-1)}{2}\right) \left(1 + o(n^{-1+\delta})\right),$$
that is, most of the “(J, K)-regular” RU codes are indeed irregular. In the following, a code from the RU ensemble with
parameters J and K will sometimes be referred to as a (J, K)-RU code or simply as a (J, K)-regular code even if it is not
strictly (J, K)-regular. Also, with some abuse of language a (J, K)-regular code from the Gallager ensemble will be referred
to as a (J, K)-Gallager code.
As a performance measure we use word (block, frame) error rate (FER) Pe , which for the BEC is defined as the probability
that the decoder cannot recover the information of a received word uniquely.
Consider ML decoding over the BEC, where ε > 0 denotes the channel symbol erasure probability. Assume that a codeword
x = (x1 , . . . , xn ) is transmitted and that ν erasures occurred. Then, we denote by I the set of indices of the erased positions,
that is, I = {i1 , . . . , iν } ⊆ {1, 2, . . . , n}, |I| = ν, and by xI = (xi1 , . . . , xiν ) a vector of unknowns corresponding to the
erased positions. Let I c = {1, 2, . . . , n} \ I and xI c be the set of indices of unerased positions and the vector of unerased
values of x, respectively. Also, let HI be the submatrix of H consisting of the columns indexed by I. From xH T = 0, where
(·)T denotes the transpose of its argument, it follows that
$$x_I H_I^T = x_{I^c} H_{I^c}^T \triangleq s, \qquad (1)$$
where s is the syndrome vector, or, equivalently,
$$y_I H_I^T = 0, \qquad (2)$$
where y = x + x′ is a codeword. If the solution of (2) is not unique, that is,
$$\operatorname{rank}(H_I) < |I|,$$
then the corresponding set of erasures cannot be (uniquely) corrected. Otherwise, the set of erasures I is correctable. Thus, the ML decoding error probability (for the BEC) is the probability of such a set I, that is,
$$P_e = \Pr\left(\operatorname{rank}(H_I) < |I|\right). \qquad (3)$$
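For illustration (our own sketch, not from the paper), correctability of a given erasure pattern can be checked exactly as in (3) by computing the rank of H_I over GF(2) with Gaussian elimination:

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    rows, cols = A.shape
    for col in range(cols):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        # Eliminate the 1s in this column in all other rows.
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def erasures_correctable(H, erased):
    """ML decoding over the BEC succeeds iff rank(H_I) == |I|, cf. (3)."""
    H_I = H[:, sorted(erased)]
    return gf2_rank(H_I) == len(erased)
```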
III. AVERAGE SPECTRA FOR ENSEMBLES OF REGULAR LDPC CODES
A. Weight enumerator generating functions
In this section, we study average weight enumerators for different ensembles of LDPC codes. The weight distribution of
any linear code can be represented in terms of its weight generating function
$$G_n(s) = \sum_{w=0}^{n} A_{n,w} s^w,$$
where An,w is the random variable representing the number of binary codewords of weight w and length n, and s is a formal variable. Our goal is to find E{An,w}, where E{·} denotes the expected value over the code ensemble. In general, computing the coefficients An,w is a rather difficult task. If a generating function can be represented as a power of another generating function (or expressed via such a power), then for numerical computations we can use the following simple recursion.
Lemma 1: Let $f(s) = \sum_{l \ge 0} f_l s^l$ be a generating function. Then the coefficients in the series expansion of the generating function $F_L(s) = [f(s)]^L = \sum_{l \ge 0} F_{l,L} s^l$ satisfy the following recurrent equation
$$F_{l,L} = \begin{cases} f_l, & L = 1 \\ \sum_{i=0}^{l} f_i F_{l-i,L-1}, & L > 1. \end{cases} \qquad (4)$$
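A short Python sketch of Lemma 1 (the helper name is ours): it obtains the coefficients of [f(s)]^L by applying the convolution in (4) L − 1 times, using exact integer arithmetic:

```python
def power_coeffs(f, L, nmax):
    """Coefficients of [f(s)]^L up to degree nmax via the recursion (4).

    f: list of coefficients f_0, f_1, ... of the base generating function.
    """
    F = f[:nmax + 1] + [0] * max(0, nmax + 1 - len(f))  # F_{l,1} = f_l
    for _ in range(L - 1):
        # Convolve once more with f: new F_l = sum_i f_i * F_{l-i}.
        F = [sum(f[i] * F[l - i] for i in range(min(l, len(f) - 1) + 1))
             for l in range(nmax + 1)]
    return F
```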
B. General linear codes
For completeness, we present the average spectrum for the ensemble of random linear codes determined by equiprobable
r × n parity-check matrices, where r = n − k, and k and n are the code dimension and length, respectively. The weight
generating function of all binary sequences of length n is $G_n(s) = (1+s)^n$. Then, the average spectrum coefficients are
$$E\{A_{n,w}\} = \binom{n}{w} 2^{-r}, \quad w > 0, \qquad (5)$$
where $2^{-r}$ is the probability that a binary sequence x of length n and weight w > 0 satisfies $xH^T = 0$.
If a random linear code contains only codewords of even weight, then its generating function has the form
$$G_n(s) = \sum_{w \text{ even}} \binom{n}{w} s^w = \frac{(1+s)^n + (1-s)^n}{2},$$
and the average spectrum coefficients are
$$E\{A_{n,w}\} = \begin{cases} \binom{n}{w} 2^{-r+1}, & w > 0 \text{ and even} \\ 0, & w \text{ odd.} \end{cases} \qquad (6)$$
C. The Gallager binary (J, K)-regular random LDPC codes
The generating function of the number of sequences satisfying the nonzero part of one parity check is given by
$$g(s) = \sum_{i\ \mathrm{even}} \binom{K}{i} s^{i} = \frac{1}{2}\left[(1+s)^K + (1-s)^K\right]. \qquad (7)$$
The generating function for the strip is
$$G_{J,K}(s) = g(s)^{M} = \sum_{w=0}^{n} N_{n,w}\, s^{w}, \qquad (8)$$
where N_{n,w} denotes the total number of binary sequences of weight w satisfying x H_1^T = 0. Due to Lemma 1 we can compute N_{n,w} precisely. The probability that x H_1^T = 0 holds for a random binary x of weight w is equal to
$$p_1(w) = \frac{N_{n,w}}{\binom{n}{w}}.$$
Since the submatrices H_j, j = 1, . . . , J, are obtained by independent random column permutations of H_1, the expected number of codewords among all $\binom{n}{w}$ sequences of weight w is
$$E\{A_{n,w}\} = \binom{n}{w}\, p_1(w)^{J} = \binom{n}{w}^{1-J} N_{n,w}^{J}, \qquad (9)$$
where N_{n,w} is computed recursively using Lemma 1.
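As an illustration of (7)–(9), the following sketch (our code, not the authors') computes the ensemble-average spectrum of a (J,K)-regular Gallager code; the call for (3,6) and n = 48 corresponds to the theoretical curve in Fig. 1.

```python
# Ensemble-average spectrum of a (J,K)-regular Gallager code via (7)-(9).
from math import comb

def gallager_average_spectrum(J, K, n):
    M = n // K                                                   # rows per strip
    g = [comb(K, i) if i % 2 == 0 else 0 for i in range(K + 1)]  # eq. (7)
    N = [1] + [0] * n                                            # g(s)^0
    for _ in range(M):                                           # N <- N * g, M times
        N = [sum(g[i] * N[w - i] for i in range(min(w, K) + 1))
             for w in range(n + 1)]
    # eq. (9): E{A_{n,w}} = C(n,w)^{1-J} * N_{n,w}^J
    return [comb(n, w) ** (1 - J) * N[w] ** J for w in range(n + 1)]

spectrum = gallager_average_spectrum(3, 6, 48)
```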
[Figure 1 plot: log2(weight enumerator) vs. weight; curves: (48,24) d = 12 code, Gallager bound, random even-distance code, average (3,6) Gallager code, random (3,6) Gallager codes.]
Fig. 1. The theoretical and empirical spectra of (3, 6)-regular Gallager codes of length n = 48. The Gallager bound and the random coding bound are
defined by (9) and (6), respectively.
D. The empirical and theoretical average spectra for the Gallager and the Richardson-Urbanke ensemble
In this section, we compare the theoretical average spectra computed according to (6) and (9) with the empirically obtained
average spectra for the Gallager and the RU ensemble. Furthermore, we compare the average spectra with the spectra of both
randomly generated and constructed LDPC and linear block codes.
We remark that there is a weakness in the proof of Theorem 2.4 in [15] by Gallager, which is similar to the one in the
derivations (7)–(9) above. Formula (2.17) in [15] (and (9) in this paper) states that the average number of weight-w binary sequences which satisfy the parity checks of all J strips simultaneously is obtained by computing the J-th power of $N_{n,w}/\binom{n}{w}$, that is, of the probability that a weight-w sequence satisfies the parity checks of the first strip. This formula relies on the assumption
that parity checks of strips are statistically independent. Strictly speaking, this statement is not true because they are always
linearly dependent (the sum of the parity checks of any two strips is equal to the all-zero sequence). The detailed discussion
and examples can be found in Appendix A.
In our derivations of the bounds on the error probability in Section IV we rely on the same assumption. Hence, it is important
to compare the empirical and the theoretical average spectra. Moreover, as it is shown in Appendix A, in the Gallager ensemble
there is no code whose spectrum coincides with the average spectrum. Thus, estimating the deviation of the spectrum of a
particular code from the average spectrum is an interesting issue.
One more question that we try to answer in this section is how close are the average spectra for the Gallager and RU
ensembles. It is known [16] that the Gallager and RU ensembles have the same asymptotic average spectrum. However, their
relation for finite lengths is unknown.
In Figs. 1–4, the distance spectra of 100 randomly selected rate R = 1/2 codes of length n = 48 (“Random codes” in the
legends) and their empirical average spectrum (“Average” in the legends) are compared with the theoretical average spectra.
We take into account that all codewords of a (J, K)-regular LDPC code from the Gallager ensemble have even weight. If
K is even, then the all-one codeword belongs to any code from the ensemble. It is easy to see that in this case the weight
enumerators An,w are symmetric functions of w, that is, An,w = An,n−w , and hence we show only half of the spectra in these
figures.
In Figs. 1 and 2, we present the average spectra for the Gallager ensembles and the average spectrum for the ensemble
of random linear codes with only even-weight codewords, computed by using formulas from Section III-B, spectra of 100
random codes from the Gallager ensemble, and the empirical average spectrum computed over 100 random codes from the
same ensemble. The spectrum of a quasi-perfect [48, 24] linear code with minimum distance d = 12 is presented in the same
figures as well. In Figs. 3 and 4, the average spectrum for the Gallager ensemble and the average spectrum for the ensemble
of random linear codes are compared with the spectra of 100 random codes from the RU ensemble and the empirical average
spectrum computed over 100 random codes from the RU ensemble.
Observations regarding the Gallager codes:
• For the Gallager (3, 6) and (4, 8)-regular LDPC codes their empirical average spectra perfectly match with the theoretical
average spectra computed for the corresponding ensembles.
[Figure 2 plot: log2(weight enumerator) vs. weight; curves: (48,24) d = 12 code, Gallager bound, random even-distance code, average (4,8) Gallager code, random (4,8) Gallager codes.]
Fig. 2. The theoretical and empirical spectra of (4, 8)-regular Gallager codes of length n = 48. The Gallager bound and the random coding bound are
defined by (9) and (6), respectively.
[Figure 3 plot: log2(weight enumerator) vs. weight; curves: (48,24) d = 12 code, Gallager bound, random linear code, average (3,6) RU code, random (3,6) RU codes.]
Fig. 3. The theoretical spectrum of (3, 6)-regular Gallager codes and the empirical spectra of (3, 6)-RU LDPC codes of length n = 48. The Gallager bound
and the random coding bound are defined by (9) and (5), respectively.
• For all codes from the Gallager ensemble, the number of high-weight codewords is perfectly predicted by the theoretical average spectrum.
• The number of low-weight codewords has large variation.
Remarks about the RU codes:
• Most of the RU LDPC codes are irregular and have codewords of both even and odd weight.
• Typically, parity-check matrices of random codes from the RU ensemble have full rank, and these codes have lower rate
than LDPC codes from the Gallager ensemble. For this reason, the empirical average spectrum of the RU ensemble lies
below the theoretical average spectrum computed for the Gallager ensemble.
• The average distance properties of the RU codes are better than those of the Gallager codes.
• The variation of the number of low-weight codewords is even larger than that for the Gallager codes.
• Since for all considered ensembles low-weight codewords have a much larger probability than for general linear codes, we expect to observe a higher error floor.
[Figure 4 plot: log2(weight enumerator) vs. weight; curves: (48,24) d = 12 code, Gallager bound, random linear code, average (4,8) RU code, random (4,8) RU codes.]
Fig. 4. The theoretical spectrum of (4, 8)-regular Gallager codes and the empirical spectra of (4, 8)-RU LDPC codes of length n = 48. The Gallager bound
and the random coding bound are defined by (9) and (5), respectively.
IV. ERROR PROBABILITY BOUNDS ON THE BEC
A. Lower bounds
In this section, we consider bounds on the ML decoding error probability. We start with a simple lower bound which is true
for any linear code.
Theorem 1:
$$P_e \ge P_e(n,k,\varepsilon) \triangleq \sum_{i=r+1}^{n} \binom{n}{i} \varepsilon^{i} (1-\varepsilon)^{n-i}. \qquad (10)$$
Remark: [21, Theorem 38] gives a lower bound on the error probability of ML decoding. It differs from (10) by a multiplier
which is close to 1. This difference appears because of different definitions of the frame error event in this paper and in [21,
Theorem 38].
Proof. It readily follows from the definition of the decoding error probability and from the condition in (3) that if the number
of erasures ν > r ≥ rank (H), then the decoding error probability is equal to one.
The bound (10) ignores erasure combinations of weight less than or equal to r. Such combinations lead to an error if they
cover all nonzero elements of a codeword.
Theorem 2: Let the code minimum distance be d_min ≤ d_0. Then,
$$P_e \ge P_e(n,k,\varepsilon) + \sum_{w=d_0}^{r} \binom{n-d_0}{w-d_0} \varepsilon^{w} (1-\varepsilon)^{n-w}. \qquad (11)$$
Proof. There is at least one nonzero codeword c0 of weight at most d0 in the code. Each erasure combination of weight
w ≥ d0 which covers the nonzero positions of c0 leads to additional decoder failures taken into account as the sum in the
right-hand side (RHS) of (11).
We remark that upper bounds on the minimum distance of linear codes with given n ≤ 256 and k can be found in [27].
The lower bounds (10) and (11) are plotted in Figs. 5, 6, 7, and 13 for some code examples, and discussed in Section VI.
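For concreteness, a minimal sketch of the two lower bounds (10) and (11) is given below; the function names are ours.

```python
from math import comb

def sphere_packing_lb(n, k, eps):                        # eq. (10)
    r = n - k
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(r + 1, n + 1))

def tightened_lb(n, k, d0, eps):                         # eq. (11)
    r = n - k
    return sphere_packing_lb(n, k, eps) + sum(
        comb(n - d0, w - d0) * eps**w * (1 - eps)**(n - w)
        for w in range(d0, r + 1))
```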
B. Upper bounds for general linear codes
Next, we consider the ensemble-average ML decoding block error probability E{Pe } over the BEC with erasure probability
ε > 0. This average decoding error probability can be interpreted as an upper bound on the achievable error probability for
codes from the ensemble. In other words, there exists at least one code in the ensemble whose ML decoding error probability
is upperbounded by E{Pe }. For the ease of notation, in the sequel we use Pe for the ensemble-average error probability. For
the ensemble of random binary [n, n − r] linear codes
$$P_e = \sum_{\nu=r+1}^{n} \binom{n}{\nu} \varepsilon^{\nu} (1-\varepsilon)^{n-\nu} + \sum_{\nu=1}^{r} \binom{n}{\nu} \varepsilon^{\nu} (1-\varepsilon)^{n-\nu} P_{e|\nu}, \qquad (12)$$
where Pe|ν denotes the conditional ensemble-average error probability given that ν erasures occurred.
By using the approach based on estimating the rank of submatrices of random matrices [28], the following expression for
P_{e|ν} was obtained in [19], [29], [30]:
$$P_{e|\nu} = \Pr\left(\operatorname{rank}(H_I) < \nu\right) = 1 - \prod_{j=0}^{\nu-1} \left(1 - 2^{j-r}\right) \le 2^{\nu-r}, \qquad (13)$$
where HI is an r × ν submatrix of a random r × n parity-check matrix H.
The bound obtained by combining (12) and (13) is used as a benchmark to compare the FER performance of ML decoding
of LDPC codes to the FER performance of general linear codes in Figs. 5–16.
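A sketch of this benchmark computation, combining (12) with the exact product form of (13), might look as follows (illustrative code, not from the paper):

```python
from math import comb

def random_linear_fer(n, k, eps):
    r = n - k
    pe = sum(comb(n, v) * eps**v * (1 - eps)**(n - v)
             for v in range(r + 1, n + 1))               # tail term of (12)
    for v in range(1, r + 1):
        prod = 1.0
        for j in range(v):
            prod *= 1 - 2.0**(j - r)                     # product in (13)
        pe += comb(n, v) * eps**v * (1 - eps)**(n - v) * (1 - prod)
    return pe
```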
An alternative upper bound for a specific linear code with known weight enumerator has the form [29]
$$P_e \le \sum_{i=d}^{n} \min\left\{\binom{n}{i},\, \sum_{w=d}^{i} A_{n,w} \binom{n-w}{i-w}\right\} \varepsilon^{i} (1-\varepsilon)^{n-i}. \qquad (14)$$
In particular, this bound can be applied to random ensembles of codes with known average spectra (see Section III). We refer
to this bound as the S-bound. It is presented for several ensembles in Figs. 5–16 and discussed in Section VI.
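A direct implementation sketch of the S-bound (14) is shown below; A[w] is a (possibly ensemble-average) spectrum such as those of Section III, and d is the minimum distance assumed for the code.

```python
from math import comb

def s_bound(n, A, d, eps):
    pe = 0.0
    for i in range(d, n + 1):
        covered = sum(A[w] * comb(n - w, i - w) for w in range(d, i + 1))
        pe += min(comb(n, i), covered) * eps**i * (1 - eps)**(n - i)
    return pe
```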
C. Random coding upper bounds for (J, K)-regular LDPC codes
In this subsection, we derive an upper bound on the ensemble-average error probability of ML decoding for the RU and
Gallager ensembles of (J, K)-regular LDPC codes. Similarly to the approach in [19], we estimate the rank of the submatrix
HI .
Theorem 3: The (J,K)-RU ensemble-average ML decoding error probability for [n, n − r] codes, n = MK, r = MJ, and M ≫ 1, is upperbounded by
$$P_e \le \sum_{\nu=r+1}^{n} \binom{n}{\nu} \varepsilon^{\nu} (1-\varepsilon)^{n-\nu} + \sum_{\nu=1}^{r} 2^{\nu-r}\left(1 + \frac{\binom{n-\nu}{K}}{\binom{n}{K}}\right)^{r} \binom{n}{\nu} \varepsilon^{\nu} (1-\varepsilon)^{n-\nu}. \qquad (15)$$
Proof. See Appendix B.
The same technique leads to the following bound for the Gallager ensemble of random LDPC codes.
Theorem 4: The (J,K)-Gallager ensemble-average ML decoding error probability for [n, n − r] codes, n = MK, r = MJ, and M ≫ 1, is upperbounded by
$$P_e \le \sum_{\nu=1}^{r} \binom{n}{\nu} \varepsilon^{\nu}(1-\varepsilon)^{n-\nu} \sum_{\mu=0}^{J(n-\nu)/K} \min\left\{1, 2^{\mu+\nu-r}\right\} \min\left\{1, \binom{\mu+J-1}{J-1}\binom{M}{\mu/J}^{J}\left(\frac{n-\nu}{n}\right)^{\mu K}\right\} + \sum_{\nu=r+1}^{n} \binom{n}{\nu} \varepsilon^{\nu}(1-\varepsilon)^{n-\nu}. \qquad (16)$$
Proof. See Appendix C.
We refer to the bounds (15) and (16) as R-bounds, since they are based on estimating the rank of submatrices of H.
Computations show that while for rates close to the capacity these bounds are rather tight, for small ε (or for rates significantly lower than the capacity) they are weak. The reason for this looseness is related to the Gallager ensemble properties discussed in detail in Appendix A.
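For reference, the following sketch evaluates the R-bound (15) for the RU ensemble; the clamping of the per-ν term at 1 follows the min{1, B_ν} form used later in Section VII, and the names are ours. The Gallager bound (16) is obtained analogously by adding the inner sum over µ.

```python
from math import comb

def r_bound_ru(J, K, n, eps):
    r = n * J // K                                       # n = MK, r = MJ
    pe = sum(comb(n, v) * eps**v * (1 - eps)**(n - v)
             for v in range(r + 1, n + 1))
    for v in range(1, r + 1):
        ratio = comb(n - v, K) / comb(n, K)              # zero if n - v < K
        B = 2.0**(v - r) * (1 + ratio)**r
        pe += min(1.0, B) * comb(n, v) * eps**v * (1 - eps)**(n - v)
    return pe
```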
V. SLIDING-WINDOW NEAR-ML DECODING FOR QC LDPC CODES
A binary QC LDPC block code can be considered as a TB parent convolutional code determined by a polynomial parity-check
matrix whose entries are monomials or zeros.
A rate R = b/c parent LDPC convolutional code can be determined by its polynomial parity-check matrix
$$H(D) = \begin{pmatrix} h_{11}(D) & h_{12}(D) & \cdots & h_{1c}(D) \\ h_{21}(D) & h_{22}(D) & \cdots & h_{2c}(D) \\ \vdots & \vdots & \ddots & \vdots \\ h_{(c-b)1}(D) & h_{(c-b)2}(D) & \cdots & h_{(c-b)c}(D) \end{pmatrix}, \qquad (17)$$
where D is a formal variable, h_{ij}(D) is either zero or a monomial entry, that is, h_{ij}(D) ∈ {0, D^{w_{ij}}} with w_{ij} being a nonnegative integer, and µ = max_{i,j}{w_{ij}} is the syndrome memory.
The polynomial matrix (17) determines an [M_0 c, M_0 b] QC LDPC block code using a set of polynomials modulo D^{M_0} − 1. If M_0 → ∞, we obtain an LDPC convolutional code, which is considered as a parent convolutional code with respect to the QC LDPC block code for any finite M_0. By tailbiting the parent convolutional code to length M_0 > µ, we obtain the binary parity-check matrix
$$H_{\mathrm{TB}} = \begin{pmatrix} H_0 & H_1 & \cdots & H_{\mu-1} & H_{\mu} & 0 & \cdots & 0 \\ 0 & H_0 & H_1 & \cdots & H_{\mu-1} & H_{\mu} & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & & \ddots & \ddots & \vdots \\ H_{\mu} & 0 & \cdots & 0 & H_0 & H_1 & \cdots & H_{\mu-1} \\ \vdots & \ddots & \ddots & & & \ddots & \ddots & \vdots \\ H_1 & \cdots & H_{\mu} & 0 & \cdots & 0 & \cdots & H_0 \end{pmatrix}$$
of an equivalent (in the sense of column permutation) TB code (all matrices H_i, including H_{TB}, should be transposed to get the exact TB code [31]), where H_i, i = 0, 1, . . . , µ, are the binary (c − b) × c matrices in the series expansion H(D) = H_0 + H_1 D + · · · + H_{\mu} D^{\mu}.
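A small sketch of this unwrapping is given below. It assumes the convention that block-row t places H_w at block-column (t + w) mod M_0, consistent with the band structure displayed above; the transpose caveat of [31] is not reproduced, and the names are ours.

```python
import numpy as np

def tb_matrix(Wdeg, M0):
    """Wdeg: (c-b) x c array of monomial degrees w_ij, with -1 marking a zero entry."""
    cb, c = Wdeg.shape
    H = np.zeros((M0 * cb, M0 * c), dtype=np.uint8)
    for i in range(cb):
        for j in range(c):
            if Wdeg[i, j] < 0:
                continue                                  # zero entry of H(D)
            for t in range(M0):                           # wrap-around shift
                H[t * cb + i, ((t + Wdeg[i, j]) % M0) * c + j] = 1
    return H
```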
If every column and row of H(D) contains J and K nonzero entries, respectively, we call C a (J, K)-regular QC LDPC
code and irregular otherwise.
Notice that by zero-tail termination [31] of (17) at length W > µ, we can obtain a parity-check matrix of a [W c, (W − µ)b]
ZT QC LDPC code.
Consider a BEC with erasure probability ε > 0. Let H be an M0 (c − b) × M0 c parity-check matrix of a binary [n =
M0 c, k = M0 b, dmin ] QC LDPC block code, where dmin is the minimum Hamming distance of the code. An ML decoder
corrects any pattern of ν erasures if ν ≤ dmin − 1. If dmin ≤ ν ≤ n − k, then a unique correct decision can be obtained for
some erasure patterns. The number of such correctable patterns depends on the code structure.
As explained in Section II, ML decoding over the BEC is reduced to solving (1). Its complexity for sparse parity-check matrices is of order ν^2, ν = |I|, that is, still computationally intractable for LDPC codes of large lengths.
In order to reduce decoding complexity, we apply a sliding-window decoding algorithm which is modified for QC LDPC
block codes. This decoder is determined by a binary parity-check matrix
$$H_W = \begin{pmatrix} H_0 & \cdots & H_{\mu} & 0 & 0 & \cdots & 0 \\ 0 & H_0 & \cdots & H_{\mu} & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & H_0 & \cdots & H_{\mu} & 0 \\ 0 & \cdots & 0 & 0 & H_0 & \cdots & H_{\mu} \end{pmatrix} \qquad (18)$$
of size (W − µ)(c − b) × W c, where W ≥ 2µ + 1 denotes the size of the decoding window in blocks. The matrix (18)
determines a ZT LDPC parent convolutional code. We start decoding with BP decoding applied to the original QC LDPC
block code of length n = M0 c, and then apply ML decoding to the ZT LDPC parent convolutional code determined by the
parity-check matrix (18). This implies solving the system of linear equations
$$x_{I,W}\, H_{I,W}^{T} = s_W,$$
where xI,W = (xI,W,i , xI,W,i+1 mod n , . . . , xI,W,i+W c−1 mod n ), i = 0, s, 2s, . . . mod n is a subvector of xI corresponding
to the chosen window, s denotes the size of the window shift, and sW and HI,W are the corresponding subvector of s
and submatrix of HI , respectively. The final decision is made after αn/s steps, where α denotes the number of passes of
sliding-window decoding. The formal description of the decoding procedure is given below as Algorithms 1 and 2.
Notice that the choice of s affects both the performance and the complexity. By increasing s we can speed up the decoding
procedure at the cost of some performance loss. In the sequel, we use s = c bits, which corresponds to the lowest possible FER.
Algorithm 1 BP-BEC
while there exist parity checks with only one erased symbol do
Assign to the erased symbol the modulo-2 sum of all nonerased symbols participating in the same parity check.
end while
Algorithm 2 Wrap-around algorithm for near-ML decoding of QC LDPC codes over the BEC
Input: BEC output y.
Perform BP decoding for y.
wstart ← 0;
wend ← W − 1;
corrected ← 1;
while corrected > 0 do
corrected ← 0;
Apply ML decoding to the window (ywstart , . . . , ywend );
wstart ← wstart + s mod n;
wend ← wend + s mod n;
if wstart = 0 then
corrected ← number of corrected erasures after a full round;
end if
end while
return y
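A compact sketch of Algorithm 1 (the standard peeling decoder for the BEC) is given below; the representation of the received word and erasure flags is our own choice.

```python
import numpy as np

def bp_bec(H, y, erased):
    """H: binary parity-check matrix; y: 0/1 values; erased: boolean mask."""
    H = np.asarray(H, dtype=np.uint8)
    y, erased = y.copy(), erased.copy()
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = np.flatnonzero(row & erased)
            if len(unknown) == 1:                     # check with one erasure
                known = np.flatnonzero(row & ~erased)
                y[unknown[0]] = y[known].sum() % 2    # modulo-2 sum of the rest
                erased[unknown[0]] = False
                progress = True
    return y, erased                                  # remaining True flags: unresolved
```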
VI. NUMERICAL RESULTS AND SIMULATIONS
A. Short codes
In Figs. 5 and 6, we compare upper and lower bounds on the error probability of ML decoding for short codes on the
BEC. First, notice that the lower bounds in Theorems 1 and 2 (sphere packing and the tightened sphere packing bound) almost
coincide with each other near the channel capacity. However, the new bound is significantly tighter than the known one in the
low erasure probability region. For further comparisons we use the new bound whenever information on the code minimum
distance is available. The upper bounds in Theorems 3 and 4 are also presented in Figs. 5 and 6. These two bounds are
indistinguishable at high symbol erasure probabilities but the difference between them is visible in the low ε region where all
bounds are rather weak. Notice that for high ε random coding bounds for LDPC codes are close to those of random linear
codes. For (J, K)-regular LDPC codes with J = 4 the random bound is almost as good as the random bound for general
linear codes of the same length and dimension in a wide range of ε values. In Figs. 5 and 6, as well as in subsequent figures,
S-bounds (14) are computed based on the spectra of the corresponding code ensembles.
In Fig. 7, we compare upper bounds on the ML decoding FER performance for the RU ensemble of (J, K)-regular LDPC
[Figure 5 plot: FER vs. symbol erasure probability; curves: S-bound for (3,6) LDPC code, R-bounds for (3,6) RU and Gallager codes, random linear codes, tightened sphere packing, sphere packing.]
Fig. 5. Bounds for binary (3, 6)-regular LDPC codes of length n = 96 and rate R = 1/2. The S-bound is defined by (14), and R-bounds for RU and
Gallager codes are defined by (15) and (16), respectively. The random coding bound is computed by (12)–(13), while the sphere packing bound and the
tightened sphere packing bound are computed according to (10) and (11), respectively.
[Figure 6 plot: FER vs. symbol erasure probability; curves: S-bound for (4,8) LDPC code, R-bounds for (4,8) RU and Gallager codes, random linear codes, tightened sphere packing, sphere packing.]
Fig. 6. Bounds for binary (4, 8)-regular LDPC codes of length n = 96 and rate R = 1/2. The S-bound is defined by (14), and R-bounds for RU and
Gallager codes are defined by (15) and (16), respectively. The random coding bound is computed by (12)–(13), while the sphere packing bound and the
tightened sphere packing bound are computed according to (10) and (11), respectively.
[Figure 7 plot: FER vs. symbol erasure probability; panels: R = 1/3, n = 108, codes 1–(4,6), 2–(6,9), 3–(8,12); R = 1/2, n = 96, codes 1–(3,6), 2–(4,8), 3–(6,12); R = 2/3, n = 108, codes 1–(3,9), 2–(4,12), 3–(6,18); curves: lower bound, average FER for linear codes, R- and S-bounds for the 1st–3rd LDPC codes.]
Fig. 7. Error probability bounds for binary (J, K)-regular LDPC codes of length n ≈ 100. The S- and R-bounds are defined by (14) and (15), respectively.
The average ML decoding FER performance for random linear codes is computed by (12)–(13). The lower bound is (11).
codes with different pairs (J, K) to the upper bounds for general linear codes of different code rates. Interestingly, the
convergence rate of the bounds for LDPC codes to the bounds for linear codes depends on the code rate. For rate R = 1/3,
even for rather sparse codes with column weight J = 4, their FER performance under ML decoding is very close to the FER
performance of general linear codes.
Next, we present simulation results for two ensembles of LDPC codes. The first ensemble is the Gallager ensemble.
In Figs. 8 and 9, we present the FER performance of ML decoding simulated for 10 randomly selected (3,6)- and (4,8)-regular Gallager codes of length n = 96. The FER performance for these codes has large variation, and most of the FER performance curves are located between the R-bound and the S-bound.
In Figs. 10 and 11, we present the FER performance of ML decoding simulated for 10 randomly selected (3,6)- and (4,8)-RU codes of length n = 96. The FER performance variation in both cases is smaller than for the Gallager codes, and the performance of the (4,8)-RU codes is concentrated around the R-bound.
There are two reasons for the better behavior of the RU codes. First, the code rate for the RU codes is typically equal to R = 1/2, whereas for the Gallager codes the rate is R ≥ 52/96 = 0.5417. A second reason for the difference in the FER performance is
the approximation discussed in Appendix A. In our derivations we used the Gallager approximation for estimating the number
of non-full-rank submatrices of size r × ν, where ν denotes the number of erasures. For decreasing values of ν, the number of
[Figure 8 plot: FER vs. symbol erasure probability; curves: S-bound, R-bound, average FER for linear codes, random (3,6) Gallager LDPC codes.]
Fig. 8. Error probability bounds and simulation results for (3, 6)-regular Gallager codes of length n = 96. S- and R-bounds are defined by (14) and (16),
respectively. The average ML decoding FER performance for random linear codes is computed by (12)–(13).
[Figure 9 plot: FER vs. symbol erasure probability; curves: S-bound, R-bound, average FER for linear codes, random (4,8) Gallager LDPC codes.]
Fig. 9. Error probability bounds and simulation results for (4, 8)-regular Gallager codes of length n = 96. S- and R-bounds are defined by (14) and (16),
respectively. The average ML decoding FER performance for random linear codes is computed by (12)–(13).
[Figure 10 plot: FER vs. symbol erasure probability; curves: S-bound, R-bound, average FER for linear codes, random (3,6) RU LDPC codes.]
Fig. 10. Error probability bounds and simulation results for (3,6)-RU codes of length n = 96. S- and R-bounds are defined by (14) and (15), respectively. The average ML decoding FER performance for random linear codes is computed by (12)–(13).
[Figure 11 plot: FER vs. symbol erasure probability; curves: S-bound, R-bound, average FER for linear codes, random (4,8) RU LDPC codes.]
Fig. 11. Error probability bounds and simulation results for (4, 8)-RU codes of length n = 96. S- and R-bounds are defined by (14) and (15), respectively.
The average ML decoding FER performance for random linear codes is computed by (12)–(13).
[Figure 12 plot: FER vs. symbol erasure probability; curves: linear codes, R-bounds for (3,6) and (4,8) RU and Gallager codes, simulations for (3,6) and (4,8) Gallager and RU codes.]
Fig. 12. Simulation results for (3, 6)-regular and (4, 8)-regular codes of length n = 96 from the Gallager and RU ensembles. The S-bound is defined by
(14), and R-bounds for RU and Gallager codes are defined by (15) and (16), respectively. The average ML decoding FER performance for random linear
codes is computed by (12)–(13).
independent parity checks in “independent” strips decreases, and, therefore, the bound becomes loose. Since the parity-check
matrix for the RU codes is not split into strips, the inter-row dependencies for this submatrix are weaker than for the Gallager
codes.
In Fig. 12, we compare the best code among 10 randomly selected (3, 6)-regular LDPC codes and the best code among 10
randomly selected (4, 8)-regular LDPC codes of the two ensembles. As predicted by bounds, the ML decoding performance
of the (4, 8)-regular codes is much better than that of the (3, 6)-regular codes in both ensembles. Due to the rate bias and
imperfectness of the Gallager approximation, codes from the Gallager ensemble are weaker than the RU codes with the same
parameters. The performance of the RU codes perfectly matches the R-bound. Moreover, the best (4, 8)-RU code shows even
better FER performance than the average FER performance of general linear codes in the high erasure probability region.
B. Codes of moderate length
The FER performance for relatively long codes of length n = 1008 is shown in Fig. 13. Notice that the difference between the lower and upper bounds for general linear codes, which was noticeable for short codes, becomes very small for n = 1008.
Since the lower bound (10) is simply the probability of more than r erasures, the fact that the upper and the lower bounds
almost coincide leads us to the conclusion that even for rather low channel erasure probabilities ε, achieving ML decoding
performance requires correcting almost all combinations of erasures of weight close to the code redundancy r. Notice that
according to the S-bounds in Fig. 13, error floors are expected in the low erasure probability region. The error-floor level
strongly depends on J and K, and rapidly decreases with increasing J.
[Figure 13 plot: FER vs. symbol erasure probability; panels: R = 1/3 — (4,6), (6,9), (8,12); R = 1/2 — (3,6), (4,8), (6,12); R = 2/3 — (3,9), (4,12), (6,18); curves: lower bound, average FER for linear codes, R- and S-bounds for the 1st–3rd LDPC codes.]
Fig. 13. Error probability bounds for binary (J, K)-regular LDPC codes of length n = 1008. S- and R-bounds are defined by (14) and Theorem 3,
respectively. The average ML decoding FER performance for random linear codes is computed by (12)–(13). The lower bound is (11).
[Figure 14 plot: FER vs. symbol erasure probability; curves: S-bound, R-bounds for RU and Gallager codes, BP decoding, irregular code, random (3,6) Gallager and RU codes.]
Fig. 14. Error probability bounds and simulation results for (3, 6)-regular LDPC codes of length n = 1008. The S-bound is defined by (14), and R-bounds
for the RU and Gallager codes are defined by (15) and (16), respectively.
In Figs. 14 and 15, we show simulation results for 5 randomly selected (3, 6)-regular and (4, 8)-regular codes of length n =
1008, respectively, from the Gallager and RU ensembles. In the same plots the rank and spectral bounds for the corresponding
ensembles are shown. We observe that for rates close to the capacity, the rank and spectral bounds demonstrate approximately
the same behavior as the average simulated FER. For low channel erasure probabilities the spectral bound predicts error floors.
As expected, the spectral bound is weak for rates far below the channel capacity. Since all 10 codes (5 for the Gallager
ensemble and 5 for the RU ensemble) show identical FER performance, we present only one of the BP decoding FER curves
in each plot. Finally, we show the FER performance of ML decoding for 2 non-random codes. The first code is an irregular
QC LDPC code with a base matrix of size 12 × 24 optimized for the BEC using the approach in [32]. The second code is
the (4, 8)-regular so-called double-Hamming code from [33]. Simulations show that the irregular code has better ML decoding
FER performance than any randomly generated code and almost everywhere outperforms the double-Hamming code. The
double-Hamming code is better than the randomly generated RU and Gallager codes, but it mimics their error-floor behavior.
C. Sliding-window near-ML decoding for QC LDPC codes
Simulation results for the irregular rate-12/24 LDPC code and for the double-Hamming regular LDPC code are presented
in Fig. 16. Parameters of the codes and the SWML decoder are summarized in Table I.
[Figure 15 plot: FER vs. symbol erasure probability; curves: S-bound, R-bounds for RU and Gallager codes, BP decoding, double-Hamming code, random (4,8) Gallager and RU codes.]
Fig. 15. Error probability bounds and simulation results for (4, 8)-regular LDPC codes of length n = 1008. The S-bound is defined by (14), and R-bounds
for the RU and Gallager codes are defined by (15) and (16), respectively.
[Figure 16 plot: FER vs. symbol erasure probability; curves: irregular and double-Hamming codes under BP and ML decoding, S- and R-bounds for (3,6)- and (4,8)-codes, random linear codes.]
Fig. 16. Error probability bounds and the simulated FER performance of SWML decoding for QC LDPC codes of length n = 4800. S- and R-bounds are
defined by (14) and (15), respectively. The average FER performance of ML decoding for random linear codes is computed by (12)–(13).
The SWML decoding FER performance of the double-Hamming code is very close to the theoretical bounds, despite very
low decoding complexity. In contrast, the FER performance of BP decoding for this code is extremely poor. For the irregular
code, BP decoding performs much better than for the double-Hamming code, but its SWML decoding FER performance is
worse than that for the double-Hamming code. In general, both codes show very good error-correcting performance.
VII. THRESHOLDS
In order to obtain an asymptotic upper bound on the error probability for (J, K)-regular LDPC codes, we use the following
inequality:
$$\frac{\binom{n-\nu}{K}}{\binom{n}{K}} \le \left(\frac{n-\nu}{n}\right)^{K}. \qquad (19)$$
                                   TABLE I
       EXAMPLE PARAMETERS OF SWML DECODERS FOR CODES OF LENGTH n = 4800.

  Code and decoder parameters      Irregular      Double-Hamming
  Base matrix size                  12 × 24           8 × 16
  Lifting degree                       24                32
  Decoding window size                 51                68
  Window shift                         24                16
  Overall TB length                   200               300
  Maximum number of passes             15                15
For the RU ensemble, it follows from (15) that
$$P_e \le \sum_{\nu=1}^{n} \min\{1, B_{\nu}\} \binom{n}{\nu} \varepsilon^{\nu} (1-\varepsilon)^{n-\nu} \le n \cdot \exp\left\{-\min_{\nu} \max\left\{T_1,\, T_1 - \log B_{\nu}\right\}\right\},$$
where
$$B_{\nu} = 2^{\nu-r}\left(1 + \frac{\binom{n-\nu}{K}}{\binom{n}{K}}\right)^{r}, \qquad T_1 = -\log\binom{n}{\nu} - \nu \log \varepsilon - (n-\nu)\log(1-\varepsilon).$$
Denote by α = ν/n the normalized number of erasures. The asymptotic error probability exponent can be written as
$$E(\varepsilon) = \lim_{n\to\infty} -\frac{\log P_e}{n} \ge \min_{\alpha\in[0,1]} \max\left\{F_1(\alpha,\varepsilon),\, F_2(\alpha,\varepsilon)\right\}, \qquad (20)$$
where
$$F_1(\alpha,\varepsilon) = -h(\alpha) - \alpha\log\varepsilon - (1-\alpha)\log(1-\varepsilon), \qquad (21)$$
$$F_2(\alpha,\varepsilon) = F_1(\alpha,\varepsilon) - F_3(\alpha), \qquad (22)$$
$$F_3(\alpha) = \left(\alpha - \frac{J}{K}\right)\log 2 + \frac{J}{K}\log\left(1 + (1-\alpha)^K\right), \qquad (23)$$
$$h(\alpha) = -\alpha\log\alpha - (1-\alpha)\log(1-\alpha). \qquad (24)$$
In (20)–(24) all logarithms are to the base e. The asymptotic decoding threshold is defined as the maximum ε providing E(ε) > 0, or as the minimum ε providing E(ε) = 0. It is easy to see that F_1(α,ε) is always positive except at the point α = ε, where F_1(α,ε) = 0; F_2(α,ε) > 0 for α < ε, and F_2(α,ε) = 0 at α = ε if F_3(α) = F_3(ε) = 0. In other words, a lower bound on the ML decoding threshold can be found as the unique solution of the equation
$$\varepsilon = \frac{J}{K}\left[1 - \frac{\log\left(1 + (1-\varepsilon)^K\right)}{\log 2}\right]. \qquad (25)$$
Notice that increasing K leads to the simple expression
$$\varepsilon \xrightarrow[K\to\infty]{} \frac{J}{K} = 1 - R$$
for the threshold, which corresponds to the capacity of the BEC. Numerical values for the lower bound from (25) on the ML decoding threshold for different code rates and different column weights are shown in Table II.
Surprisingly, the new lower bounds on the threshold marked by an asterisk in Table II are essentially identical to the upper bounds on the ML decoding threshold in [34, Eq. (37), Table 1], although the analytical expressions for the bounds are different. This confirms the tightness of the bound in Theorem 3 and the bounds in [34] for rates near the channel capacity.
                                  TABLE II
   ASYMPTOTIC LOWER BOUNDS ON THE ML DECODING THRESHOLD FOR BINARY (J,K)-REGULAR
                            LDPC CODES ON THE BEC.

   R \ J        3           4           5           6           8           9
   1/4      0.746930*       —           —       0.749989        —       0.750000
   1/3          —       0.665737*       —       0.666633    0.666665        —
   1/2      0.491422*   0.497987    0.499507    0.499878    0.499992    0.499998
   2/3      0.323598    0.330648    0.332560    0.333106    0.333314    0.333327
   3/4      0.241029    0.247364    0.249191    0.249747    0.249975    0.249992
VIII. CONCLUSION
Both a finite-length and an asymptotic analysis of ML decoding performance for LDPC codes on the BEC have been
presented. The obtained bounds are very useful since unlike other channel models, for the BEC, ML decoding can be
implemented for rather long codes. Moreover, an efficient sliding-window decoding algorithm which provides near-ML decoding
of very long codes is developed. Comparison of the presented bounds with empirical estimates of the average error probability
over sets of randomly constructed codes has shown that the new bounds are rather tight at rates close to the channel capacity
even for short codes. For code length n > 1000, the bounds are rather tight for a wide range of parameters. The new bounds
lead to a simple analytical lower bound on the ML decoding threshold on the BEC for regular LDPC codes.
APPENDIX A
There is a weakness in the proof of Theorem 2.4 in [15] by Gallager, analogous to the one in the derivations (7)–(9) above.
Formula (2.17) in [15] and (9) in this paper state that the average number of weight-w binary sequences which simultaneously
satisfy all parity checks in J strips is
$$E\{A_{n,w}\} = \binom{n}{w}\left[\frac{N_{n,w}}{\binom{n}{w}}\right]^{J}, \qquad (26)$$
where Nn,w is the number of weight-w sequences satisfying the parity checks of the first strip H1 . This formula relies on the
assumption that parity checks of strips are independent. It is known that this assumption is incorrect because the strips of the
parity-check matrix are always linearly dependent (the sum of the parity checks of any two strips is the all-zero sequence)
and, as a consequence, the actual rate of the corresponding (J, K)-regular codes is higher than 1 − J/K. The fact that we
intensively use the strip-independence hypothesis in our derivations motivated us to study more deeply the influence of the strip-independence assumption, both on the conclusions in [15] and on the derivations done in this paper.
In order to verify how this assumption influences the precision of estimates, consider the following simple example.
Example 1: Consider a (3, 3)-regular code with M = 2. The first strip is
$$H_1 = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 \end{pmatrix}.$$
The other two strips are obtained by random permutations of the columns of this strip. In total there exist (6!)^2 LDPC codes, but most of the codes are equivalent. Taking into account that the first row in each strip determines the second row, we obtain that the choice of each code is determined by the choice of the third and fifth rows of the parity-check matrix. Thus, there are at most $\binom{6}{3}^2 = 400$ equiprobable classes of equivalent codes. We compute the average spectra over codes with a
certain code dimension and the average spectrum over all codes. The obtained results are presented in Table III.
                                  TABLE III
             SPECTRA OF (3,3)-REGULAR LDPC CODES OF LENGTH 6.

  Dimension   Number of codes   Average spectrum (weights 0, 1, ..., 6)
      2             288          1, 0, 1/2,   0, 5/2,   0, 0
      3             108          1, 0, 2,     0, 5,     0, 0
      4               4          1, 0, 6,     0, 9,     0, 0
  Average parameters over the ensemble:
     2.29             —          1, 0, 24/25, 0, 81/25, 0, 0
Notice that the lower bound on the code rate R ≥ 1 − J/K = 0, but since there exist at least two rows that are linearly
dependent on other rows, a tightened lower bound on the code rate is R ≥ 1 − 4/6 = 1/3. Let us compare these empirical
estimates with the Gallager bound. The generating function for the first strip is
$$g(s) = 1 + N_{6,2}\, s^2 + N_{6,4}\, s^4 = 1 + 6s^2 + 9s^4.$$
According to (26),
$$E\{A_{6,2}\} = \binom{6}{2}\left(\frac{6}{\binom{6}{2}}\right)^{3} = \frac{24}{25}, \qquad E\{A_{6,4}\} = \binom{6}{4}\left(\frac{9}{\binom{6}{4}}\right)^{3} = \frac{81}{25},$$
which matches the empirical average over all codes presented in Table III.
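This small example is easy to verify exhaustively. The following sketch (our code) enumerates all 400 classes — rows 3 and 5 range over the 20 weight-3 rows, with rows 4 and 6 being their complements — and averages the spectra:

```python
from itertools import combinations, product

def spectrum(rows, n=6):
    """Weight spectrum of the GF(2) null space of the given parity checks."""
    counts = [0] * (n + 1)
    for bits in product((0, 1), repeat=n):
        if all(sum(b & r for b, r in zip(bits, row)) % 2 == 0 for row in rows):
            counts[sum(bits)] += 1
    return counts

base = [(1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1)]
triples = [tuple(1 if i in c else 0 for i in range(6))
           for c in combinations(range(6), 3)]     # 20 candidate weight-3 rows
avg = [0.0] * 7
for r3 in triples:                                 # first row of strip 2
    for r5 in triples:                             # first row of strip 3
        r4 = tuple(1 - x for x in r3)              # complementary second rows
        r6 = tuple(1 - x for x in r5)
        spec = spectrum(base + [r3, r4, r5, r6])
        avg = [a + s / 400 for a, s in zip(avg, spec)]
print(avg)  # -> [1.0, 0.0, 0.96, 0.0, 3.24, 0.0, 0.0], i.e. (1, 0, 24/25, 0, 81/25, 0, 0)
```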
These computations lead us to the following conclusions:
• In the ensemble of (J, K)-regular LDPC codes there are codes of different dimensions. The average spectra depend on
the dimension of the code and differ from the average spectrum over all codes of the ensemble.
• The average over all codes may coincide with the Gallager estimate, but does not correspond to any particular linear code. Moreover, the estimated number of codewords (the sum of all spectrum components) is not necessarily equal to a power of 2.
Notice that if M is large enough, then the influence of strip dependence on the precision of the obtained spectrum estimate is negligible. However, if ν ≪ M, that is, in the low ε region, the assumption of strip independence should be used with caution.
APPENDIX B
Proof of Theorem 3
Assume that the number of erasures is ν > 0. The error probability of ML decoding over the BEC is estimated as the
probability that ν columns of the random parity-check matrix H from the RU ensemble corresponding to the erased positions
are linearly dependent, ν ≤ r.
Let HI be the submatrix consisting of the columns determined by the set I of the erased positions, |I| = ν. We can write
$$\Pr\left(\operatorname{rank}(H_I) < \nu \mid \nu\right) \le \sum_{x_I \neq 0} \Pr\left(x_I H_I^T = 0 \mid \nu\right). \qquad (27)$$
Consider a random vector s = x_I H_I^T. Denote by s_i^j the subvector (s_i, . . . , s_j) of the vector s. The probability of s being the all-zero vector is
$$p(s = 0 \mid \nu) = p(s_1 = 0 \mid \nu) \prod_{i=2}^{r} p\left(s_i = 0 \mid s_1^{i-1} = 0, \nu\right). \qquad (28)$$
Next, we prove that p(s_1 = 0 | ν) ≥ p(s_i = 0 | s_1^{i−1} = 0, ν), i = 2, 3, . . . , r. We denote by ν_i the number of erasures in the nonzero positions of the i-th parity check. For the choice of a random vector x and a random parity-check matrix from the RU ensemble, the probability of a zero syndrome component s_i is
$$p(s_i = 0 \mid \nu_i) = \begin{cases} 1, & \nu_i = 0 \\ \frac{1}{2}, & \nu_i > 0 \end{cases}. \qquad (29)$$
First, we observe that for all i
$$p(s_i = 0 \mid \nu) = p(s_i = 0 \mid \nu_i = 0, \nu)\, p(\nu_i = 0 \mid \nu) + p(s_i = 0 \mid \nu_i > 0, \nu)\, p(\nu_i > 0 \mid \nu) = 1 \cdot p(\nu_i = 0 \mid \nu) + \frac{1}{2}\left(1 - p(\nu_i = 0 \mid \nu)\right) = \frac{1 + p(\nu_i = 0 \mid \nu)}{2}. \qquad (30)$$
For all i ≠ j, let K′ denote the number of positions in which the corresponding elements either in row j or in row i of H are nonzero. Since K′ ≥ K, the following inequality holds:
$$p(\nu_j = 0 \mid \nu_i = 0, \nu) = \frac{\binom{n-K'}{\nu}}{\binom{n}{\nu}} \le \frac{\binom{n-K}{\nu}}{\binom{n}{\nu}} = p(\nu_j = 0 \mid \nu). \qquad (31)$$
For the second parity check, by using arguments similar to those in (30), we obtain
$$p(s_2 = 0 \mid s_1 = 0, \nu) = \frac{1}{2}\left(1 + p(\nu_2 = 0 \mid s_1 = 0, \nu)\right). \qquad (32)$$
The conditional probability in the RHS can be estimated as
$$p(\nu_2 = 0 \mid s_1 = 0, \nu) = \sum_{\nu_1=0}^{\min\{K,\nu\}} p(\nu_2 = 0 \mid s_1 = 0, \nu_1, \nu)\, p(\nu_1 \mid s_1 = 0, \nu) \stackrel{(a)}{=} \sum_{\nu_1=0}^{K} p(\nu_2 = 0 \mid \nu_1, \nu)\, \frac{p(s_1 = 0 \mid \nu_1, \nu)\, p(\nu_1 \mid \nu)}{p(s_1 = 0 \mid \nu)}, \qquad (33)$$
where equality (a) follows from the fact that p(ν_2 = 0 | s_1 = 0, ν_1, ν) = p(ν_2 = 0 | ν_1, ν). By substituting (29) into (33), we get
$$p(\nu_2 = 0 \mid s_1 = 0, \nu) = \frac{p(\nu_2 = 0 \mid \nu_1 = 0, \nu)\, p(\nu_1 = 0 \mid \nu)}{p(s_1 = 0 \mid \nu)} + \frac{\sum_{\nu_1=1}^{\min\{K,\nu\}} p(\nu_2 = 0 \mid \nu_1, \nu)\, p(\nu_1 \mid \nu)}{2\, p(s_1 = 0 \mid \nu)} = \frac{p(\nu_2 = 0 \mid \nu_1 = 0, \nu)\, p(\nu_1 = 0 \mid \nu) + \sum_{\nu_1=0}^{\min\{K,\nu\}} p(\nu_2 = 0 \mid \nu_1, \nu)\, p(\nu_1 \mid \nu)}{2\, p(s_1 = 0 \mid \nu)}.$$
The second term in the numerator is equal to p(ν_2 = 0 | ν), and p(ν_i = 0 | ν) does not depend on i. Thus, we obtain
$$p(\nu_2 = 0 \mid s_1 = 0, \nu) = p(\nu_2 = 0 \mid \nu)\, \frac{p(\nu_2 = 0 \mid \nu_1 = 0, \nu) + 1}{2\, p(s_1 = 0 \mid \nu)} \stackrel{(a)}{\le} p(\nu_2 = 0 \mid \nu)\, \frac{p(\nu_2 = 0 \mid \nu) + 1}{2\, p(s_1 = 0 \mid \nu)} \stackrel{(b)}{=} p(\nu_2 = 0 \mid \nu), \qquad (34)$$
where inequality (a) follows from (31) and equality (b) follows from (30). From (30), (32), and (34) we conclude that
$$p(s_2 = 0 \mid s_1 = 0, \nu) \le p(s_2 = 0 \mid \nu).$$
Consecutively applying these derivations for i = 3, 4, . . . , r, we can prove that
$$p\left(s_i = 0 \mid s_1^{i-1} = 0, \nu\right) \le p(s_i = 0 \mid \nu),$$
and then from (28) it follows that
$$p(s = 0 \mid \nu) \le p(s_1 = 0 \mid \nu)^{r}.$$
The probability that the i-th row in H_I contains only zeros is
$$p(\nu_i = 0 \mid \nu) = \frac{\binom{n-\nu}{K}}{\binom{n}{K}},$$
and the probability that an erased sequence of length ν is a codeword (all r components of the syndrome vector are equal to zero) satisfies
$$p(s = 0 \mid \nu) \le 2^{-r}\left(1 + \frac{\binom{n-\nu}{K}}{\binom{n}{K}}\right)^{r}. \qquad (35)$$
By substituting (35) into (27), we obtain
$$P_{e|\nu} = \Pr\left(\operatorname{rank}(H_I) < \nu \mid \nu\right) \le 2^{\nu-r}\left(1 + \frac{\binom{n-\nu}{K}}{\binom{n}{K}}\right)^{r},$$
and the statement of Theorem 3 follows from (12).
APPENDIX C
Proof of Theorem 4
Assume that the number of erasures is ν > 0. Let HI be the submatrix consisting of the columns numbered by the set I
of the erased positions, |I| = ν. In Section IV it is shown that the problem of estimating the FER of ML decoding can be
reduced to the problem of estimating the rank of the submatrix HI . Let Hj,I denote the j-th strip of HI , j = 1, 2, . . . , J.
Denote by µi the number of all-zero rows in Hi,I , and define µ = (µ1 , . . . , µJ ).
Assume that the vector x is chosen uniformly at random from the set of binary vectors of length n, and let xI be the
subvector of x consisting of the elements numbered by the set I. Then,
$$\Pr\left(\operatorname{rank}(H_I) < \nu \mid \nu\right) \le \sum_{\boldsymbol{\mu}} \Pr\left(\exists\, x_I : x_I H_{j,I}^T = 0 \text{ for all } j \text{ and } x_I \neq 0 \mid \nu, \boldsymbol{\mu}\right) p(\boldsymbol{\mu} \mid \nu). \qquad (36)$$
For the Gallager ensemble, the conditional probability of the vector µ given that the number of erasures is ν is
$$p(\boldsymbol{\mu} \mid \nu) = \prod_{i=1}^{J} p(\mu_i \mid \nu) = \prod_{i=1}^{J} \binom{M}{\mu_i} \frac{\binom{n-\mu_i K}{\nu}}{\binom{n}{\nu}} = \prod_{i=1}^{J} \binom{M}{\mu_i} \frac{\binom{n-\nu}{\mu_i K}}{\binom{n}{\mu_i K}},$$
where we take into account that the strips are obtained by independent random permutations. By using the inequality in (19),
we can bound this distribution from above as
" J # " J
#
Y M
Y n − ν µi K
p(µ|ν) ≤
(37)
µi
n
i=1
i=1
J
J
PJi=1 µi K
µK
n−ν
M
n−ν
M
=
,
(38)
≤
µ/J
µ/J
n
n
PJ
where µ = i=1 µi , and the second inequality follows from the fact that the maximum of the first product in (37) is achieved
when µ1 = µ2 = · · · = µJ = µ/J.
According to (29), each of the M − µ_i nonzero rows of the i-th strip produces a zero syndrome component with probability 1/2. For a given µ, where $\sum_{i=1}^{J}\mu_i = \mu$, 0 ≤ µ ≤ r, the probability of having a zero syndrome vector can be upperbounded using a union-bound argument as
$$\Pr\left(\exists\, x_I : x_I H_{j,I}^T = 0 \text{ for all } j \text{ and } x_I \neq 0 \mid \nu, \boldsymbol{\mu}\right) \le \min\left\{1, \sum_{x_I \neq 0} \Pr\left(x_I H_{j,I}^T = 0 \text{ for all } j \mid \nu, \boldsymbol{\mu}\right)\right\}$$
$$\le \min\left\{1, (2^{\nu} - 1) \prod_{j=1}^{J} 2^{-M+\mu_j}\right\} \le \min\left\{1, 2^{\nu - MJ + \sum_{j=1}^{J}\mu_j}\right\} = \min\left\{1, 2^{\nu-r+\mu}\right\}. \qquad (39)$$
From (36) it follows that
$$\Pr\left(\operatorname{rank}(H_I) < \nu \mid \nu\right) \le \sum_{\mu=0}^{r} \min\left\{1, 2^{\nu+\mu-r}\right\} \sum_{\boldsymbol{\mu}:\ \sum_{j=1}^{J}\mu_j = \mu} p(\boldsymbol{\mu} \mid \nu). \qquad (40)$$
The total number of different µ with a given sum µ is equal to $\binom{\mu+J-1}{J-1}$. From (38) we obtain
$$\sum_{\boldsymbol{\mu}:\ \sum_{j=1}^{J}\mu_j = \mu} p(\boldsymbol{\mu} \mid \nu) \le \binom{\mu+J-1}{J-1} \binom{M}{\mu/J}^{J} \left(\frac{n-\nu}{n}\right)^{\mu K}. \qquad (41)$$
Next, we use (12), where for the conditional probability P_{e|ν} = Pr(rank(H_I) < ν | ν) we apply the estimates (40) and (41). Each of the ν erasures belongs to J rows, and the Jν nonzero elements are located in at least Jν/K rows. Thus, the number of zero rows never exceeds r − Jν/K = J(n − ν)/K, which explains the upper summation limit in the second sum of (16). This proves (16) of Theorem 4.
REFERENCES
[1] I. E. Bocharova, B. D. Kudryashov, E. Rosnes, V. Skachek, and Ø. Ytrehus, “Wrap-around sliding-window near-ML decoding of binary LDPC codes
over the BEC,” in Proc. 9th Int. Symp. Turbo Codes Iterative Inf. Processing (ISTC), 2016, pp. 16–20.
[2] I. E. Bocharova, B. D. Kudryashov, and V. Skachek, “Performance of ML decoding for ensembles of binary and nonbinary regular LDPC codes of finite
lengths,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 794–798.
[3] D. Burshtein and G. Miller, “An efficient maximum-likelihood decoding of LDPC codes over the binary erasure channel,” IEEE Trans. Inf. Theory,
vol. 50, no. 11, pp. 2837–2844, 2004.
[4] E. Paolini, G. Liva, B. Matuz, and M. Chiani, “Maximum likelihood erasure decoding of LDPC codes: Pivoting algorithms and code design,” IEEE
Trans. Comm., vol. 60, no. 11, pp. 3209–3220, 2012.
[5] M. Cunche, V. Savin, and V. Roca, “Analysis of quasi-cyclic LDPC codes under ML decoding over the erasure channel,” in Proc. Int. Symp. Inf. Theory
Appl. (ISITA), 2010, pp. 861–866.
[6] S. Kim, S. Lee, and S.-Y. Chung, “An efficient algorithm for ML decoding of raptor codes over the binary erasure channel,” IEEE Commun. Lett.,
vol. 12, no. 8, 2008.
[7] T. Richardson and R. Urbanke, Modern coding theory. Cambridge University Press, 2008.
[8] S. Sankaranarayanan and B. Vasic, “Iterative decoding of linear block codes: A parity-check orthogonalization approach,” IEEE Trans. Inf. Theory,
vol. 51, no. 9, pp. 3347–3353, 2005.
[9] N. Kobayashi, T. Matsushima, and S. Hirasawa, “Transformation of a parity-check matrix for a message-passing algorithm over the BEC,” IEICE Trans.
Fundamentals, vol. 89, no. 5, pp. 1299–1306, 2006.
[10] I. E. Bocharova, B. D. Kudryashov, V. Skachek, and Y. Yakimenka, “Distance properties of short LDPC codes and their impact on the BP, ML and
near-ML decoding performance,” in Proc. 5th Int. Castle Meeting Coding Theory Appl., 2017, pp. x–y.
[11] H. Pishro-Nik and F. Fekri, “On decoding of low-density parity-check codes over the binary erasure channel,” IEEE Trans. Inf. Theory, vol. 50, no. 3,
pp. 439–454, 2004.
[12] G. Hosoya, T. Matsushima, and S. Hirasawa, “A decoding method of low-density parity-check codes over the binary erasure channel,” in Proc. Int.
Symp. Inf. Theory Appl. (ISITA), 2004, pp. 263–266.
[13] P. M. Olmos, J. J. Murillo-Fuentes, and F. Pérez-Cruz, “Tree-structure expectation propagation for decoding LDPC codes over binary erasure channels,”
in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2010, pp. 799–803.
[14] B. N. Vellambi and F. Fekri, “Results on the improved decoding algorithm for low-density parity-check codes over the binary erasure channel,” IEEE
Trans. Inf. Theory, vol. 53, no. 4, pp. 1510–1520, 2007.
[15] R. G. Gallager, Low-density parity-check codes. M.I.T. Press: Cambridge, MA, 1963.
[16] S. Litsyn and V. Shevelev, “On ensembles of low-density parity-check codes: Asymptotic distance distributions,” IEEE Trans. Inf. Theory, vol. 48, no. 4,
pp. 887–908, 2002.
[17] Air Interface for Fixed and Mobile Broadband Wireless Access Systems, IEEE P802.16e/D12 Draft, Oct. 2005.
[18] Digital Video Broadcasting (DVB), European Telecommunications Standards Institute ETSI EN 302 307, Rev. 1.2.1, Aug. 2009.
[19] C. Di, D. Proietti, I. E. Telatar, T. J. Richardson, and R. L. Urbanke, “Finite-length analysis of low-density parity-check codes on the binary erasure
channel,” IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1570–1579, 2002.
[20] I. Sason and S. Shamai, Performance analysis of linear codes under maximum-likelihood decoding: A tutorial. Now Publishers Inc, 2006.
[21] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359,
2010.
[22] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, “Efficient erasure correcting codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2,
pp. 569–584, 2001.
[23] B. D. Kudryashov, “Decoding of block codes obtained from convolutional codes,” Problemy Peredachi Informatsii, vol. 26, no. 2, pp. 18–26, 1990.
[24] R. Y. Shao, S. Lin, and M. P. Fossorier, “Two decoding algorithms for tailbiting codes,” IEEE Trans. Comm., vol. 51, no. 10, pp. 1658–1665, 2003.
[25] V. Tomás, J. Rosenthal, and R. Smarandache, “Decoding of convolutional codes over the erasure channel,” IEEE Trans. Inf. Theory, vol. 58, no. 1, pp.
90–108, 2012.
[26] I. E. Bocharova, B. D. Kudryashov, V. Skachek, and Y. Yakimenka, “Low complexity algorithm approaching the ML decoding of binary LDPC codes,”
in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2016, pp. 2704–2708.
[27] M. Grassl, “Bounds on the minimum distance of linear codes and quantum codes,” Online available at http://www.codetables.de, 2007, accessed on
2017-01-06.
[28] G. Landsberg, “Ueber eine anzahlbestimmung und eine damit zusammenhängende reihe.” Journal für die reine und angewandte Mathematik, vol. 111,
pp. 87–88, 1893.
[29] E. R. Berlekamp, “The technology of error-correcting codes,” Proceedings of the IEEE, vol. 68, no. 5, pp. 564–593, 1980.
[30] S. J. MacMullan and O. M. Collins, “A comparison of known codes, random codes, and the best codes,” IEEE Trans. Inf. Theory, vol. 44, no. 7, pp.
3009–3022, 1998.
[31] R. Johannesson and K. S. Zigangirov, Fundamentals of convolutional coding, 2nd ed. John Wiley & Sons, 2015.
[32] I. Bocharova, B. Kudryashov, and R. Johannesson, “Searching for binary and nonbinary block and convolutional LDPC codes,” IEEE Trans. Inf. Theory,
vol. 62, no. 1, pp. 163–183, 2016.
[33] I. E. Bocharova, F. Hug, R. Johannesson, and B. D. Kudryashov, “Double-Hamming based QC LDPC codes with large minimum distance,” in Proc.
IEEE Int. Symp. Inf. Theory (ISIT), 2011, pp. 923–927.
[34] I. Sason and R. Urbanke, “Parity-check density versus performance of binary linear block codes over memoryless symmetric channels,” IEEE Trans.
Inf. Theory, vol. 49, no. 7, pp. 1611–1635, 2003.
| 7 |
arXiv:1712.01542v1 [math.RA] 5 Dec 2017
CAPABLE LIE ALGEBRAS WITH THE DERIVED
SUBALGEBRA OF DIMENSION TWO OVER AN ARBITRARY
FIELD
PEYMAN NIROOMAND, FARANGIS JOHARI, AND MOHSEN PARVIZI
Abstract. In this paper, we classify all capable nilpotent Lie algebras with
the derived subalgebra of dimension 2 over an arbitrary field. Moreover, the
explicit structure of such Lie algebras of class 3 is given.
1. Introduction and Motivation
The concept of capability was introduced by P. Hall in [9]. Recall that a group G
is called capable if there exists some group E such that G ≅ E/Z(E), where Z(E) denotes the center of E. There are some fundamental known results concerning the capability of p-groups. For instance, in [3, Corollary 4.16], it is shown that the only capable extra-special p-groups (the p-groups with Z(G) = G′ and |G′| = p) are those of order p^3 and exponent p. In the case that G′ = Z(G) and Z(G) is an elementary abelian p-group of rank 2, Heineken in [11] proved that the capable ones have order at most p^7.
By some results due to Lazard, one may associate a p-group to a Lie algebra, and therefore some results on Lie algebras and on p-groups exhibit similar structure. But not everything carries over: there are differences between groups and Lie algebras, so that most of the time the proofs are different. Similar to the concept of capability for groups, a Lie algebra is called capable provided that L ≅ H/Z(H) for a Lie algebra H. Beyl et al. in [3] introduced the epicenter Z*(G)
of a group G that plays an important role in the capability of G. The analogous
notion of the epicenter, Z ∗ (L) for a Lie algebra L was defined in [19]. It is shown
that L is capable if and only if Z ∗ (L) = 0.
Another notion related to capability is the exterior square of a Lie algebra, L ∧ L, which was introduced in [6]. Our approach is based on the concept of the exterior center Z∧(L), the set of all elements l of L for which l ∧ l′ = 0_{L∧L} for all l′ ∈ L. Niroomand et al. in [17] showed Z∧(L) = Z*(L) for any finite dimensional
Lie algebra L. In [17], the last two authors obtained the structure of a capable
nilpotent Lie algebra L when dim L2 ≤ 1. It developed the result of [3, Corollary
4.16] for groups to the class of Lie algebras.
Recall that from [18], a Lie algebra H is called generalized Heisenberg of rank n if
H^2 = Z(H) and dim H^2 = n. If n = 1, then H is the more familiar Heisenberg Lie algebra. Such algebras have odd dimension and the presentation
H(m) ≅ ⟨a1, b1, . . . , am, bm, z | [al, bl] = z, 1 ≤ l ≤ m⟩.
Date: December 6, 2017.
Key words and phrases. Capability, Schur multiplier, generalized Heisenberg Lie algebras, stem
Lie algebras.
Mathematics Subject Classification 2010. Primary 17B30; Secondary 17B05, 17B99.
Recently, Niroomand et al. in [18] proved that capable generalized Heisenberg Lie algebras of rank 2 have dimension at most 7 over a field of characteristic not equal to 2. They extended the result of Heineken [11] for groups to the setting of Lie algebras. They also characterized the structure of all capable nilpotent Lie algebras of class two when dim L^2 = 2. In view of the recent results in [18], in this paper we classify the structure of all capable nilpotent Lie algebras of class two with the derived subalgebra of dimension 2 over an arbitrary field. Furthermore, we determine the structure of all nilpotent Lie algebras of class 3 with the derived subalgebra of dimension 2 and then specify which of them are capable.
2. Preliminaries
All Lie algebras in this paper are finite dimensional. The Schur multiplier of a
Lie algebra L, denoted M(L), is defined as M(L) ≅ (R ∩ F^2)/[R, F], where L ≅ F/R and F is a free Lie algebra. It can be shown that the Lie algebra M(L) is abelian and independent of the choice of the free Lie algebra F (see [1, 2, 4, 10, 13, 14, 15, 16, 17] for more information on this topic).
Throughout the paper, ⊗ denotes the usual tensor product of algebras. For a Lie algebra L, we denote the factor Lie algebra L/L^2 by L^(ab), and an abelian Lie algebra of dimension n by A(n).
The following proposition plays a key role in detecting the capability of Lie algebras.
Proposition 2.1. Let L be a finite dimensional Lie algebra with a central ideal I.
Then
(i) dim(M(L)) ≥ dim(M(L/I)) − dim(L2 ∩ I),
(ii) dim(M(L)) = dim(M(L/I)) − dim(L2 ∩ I) if and only if I ⊆ Z ∗ (L).
Proof. The result follows from [19, Proposition 4.1(iii) and Theorem 4.4 ].
The following lemma from [19] is a useful tool in the subsequent investigations.
Lemma 2.2. Let I be an ideal of a Lie algebra L such that L/I is capable. Then Z*(L) ⊆ I.
The next corollary shows that the epicenter of a Lie algebra is contained in its
derived subalgebra.
Corollary 2.3. Let L be a finite dimensional non-abelian nilpotent Lie algebra.
Then Z ∗ (L) ⊆ L2 .
Proof. Since dim L/L2 ≥ 2, L/L2 is capable, using [17, Theorem 3.3]. Therefore
the result follows by Lemma 2.2.
We need the notion of a central product of Lie algebras, which is defined as follows.
Definition 2.4. The Lie algebra L is a central product of A and B, if L = A + B,
where A and B are ideals of L such that [A, B] = 0 and A ∩ B ⊆ Z(L). We denote
the central product of two Lie algebras A and B by A ∔ B.
The Heisenberg Lie algebra can be presented in terms of central products.
Lemma 2.5. [12, Lemma 3.3] Let L be a Heisenberg Lie algebra of dimension 2m + 1. Then L is the central product of ideals B_j, 1 ≤ j ≤ m, where each B_j is a Heisenberg Lie algebra of dimension 3.
It is not an easy matter to determine the capability of a central product in
general. The next result gives the answer in a particular case.
Proposition 2.6. [18, Proposition 2.2] Let L be a Lie algebra such that L = A ∔ B with A^2 ∩ B^2 ≠ 0. Then A^2 ∩ B^2 ⊆ Z∧(L), and so L is non-capable.
Following [18, Propositions 2.4 and 2.6], for determining the capable nilpotent
Lie algebras of class 2 with the derived subalgebra of dimension 2, it is enough to
consider a generalized Heisenberg Lie algebra H when
5 ≤ dim H ≤ 7.
Throughout the paper, in multiplication tables with respect to a fixed basis, trivial products of the form [x, y] = 0 are omitted, for x, y in the Lie algebra. Nilpotent Lie algebras of dimension at most 5 can be described uniformly over all fields [5, 7, 8]; for dimensions 6 and 7, the structures are known over an algebraically closed field [5, 7, 8]. Using the notation and terminology of [5, 7, 8], we list the generalized Heisenberg Lie algebras of rank two of dimension at most 6 over a field of characteristic 2, and of dimension at most 7 over a field of characteristic different from 2. Recall that in a field F of characteristic 2, ω denotes a fixed element from F \ {x^2 + x | x ∈ F}.
Theorem 2.7. Let H be a generalized Heisenberg Lie algebra of rank 2. Then
(i) over a field F of characteristic 2, the isomorphism types of generalized Heisenberg Lie algebras of dimension at most 6 are the following: L5,8 = ⟨x1, . . . , x5 | [x1, x2] = x4, [x1, x3] = x5⟩ and L^{(2)}_{6,7}(η) = ⟨x1, . . . , x6 | [x1, x2] = x5, [x1, x3] = x6, [x2, x4] = ηx6, [x3, x4] = x5 + x6⟩, where η ∈ {0, ω};
(ii) over a field F of characteristic different from 2, the isomorphism types of generalized Heisenberg Lie algebras of dimension at most 7 are
L5,8 = ⟨x1, . . . , x5 | [x1, x2] = x4, [x1, x3] = x5⟩,
L6,22(ǫ) = ⟨x1, . . . , x6 | [x1, x2] = x5 = [x3, x4], [x1, x3] = x6, [x2, x4] = ǫx6⟩, where ǫ ∈ F*/(∼) and char F ≠ 2,
L1 = 27A = ⟨x1, . . . , x7 | [x1, x2] = x6 = [x3, x4], [x1, x5] = x7 = [x2, x3]⟩,
L2 = 27B = ⟨x1, . . . , x7 | [x1, x2] = x6, [x1, x4] = x7 = [x3, x5]⟩.
3. capable generalized Heisenberg Lie algebras of rank two
Here, we are going to determine the structures of capable generalized Heisenberg Lie algebras of rank two. By [18, Proposition 2.6], generalized Heisenberg Lie algebras with the derived subalgebra of dimension 2 are capable if their dimension lies between 5 and 7. According to Theorem 2.7, we have the presentation of all capable generalized Heisenberg Lie algebras of rank two over a field F with char F ≠ 2, but when char F = 2 their structure is unknown. Therefore, we first find their structure over an arbitrary field and then determine which of them are capable.
Theorem 3.1. Let L be a 7-dimensional generalized Heisenberg Lie algebra of rank two over a field F with char F = 2. Then L ≅ ⟨x1, . . . , x7 | [x1, x2] = x6 = [x3, x4], [x1, x5] = x7 = [x2, x3]⟩ or L ≅ ⟨x1, . . . , x7 | [x1, x2] = x6, [x1, x4] = x7 = [x3, x5]⟩.
Proof. Let L^2 = ⟨z1⟩ ⊕ ⟨z2⟩. By [17, Theorem 3.6], we have L/⟨z2⟩ ≅ H(2) ⊕ A(1) or L/⟨z2⟩ ≅ H(1) ⊕ A(3). First suppose that L/⟨z2⟩ ≅ H(2) ⊕ A(1). There exist two ideals I1/⟨z2⟩ and I2/⟨z2⟩ of L/⟨z2⟩ such that
I1/⟨z2⟩ ≅ H(2) and I2/⟨z2⟩ ≅ A(1).
Clearly, L = I1 + I2, I1 ∩ I2 = ⟨z2⟩, [I1, I2] ⊆ ⟨z2⟩ and [L, I2] ⊆ ⟨z2⟩. Thus I1/⟨z2⟩ = ⟨x1 + ⟨z2⟩, y1 + ⟨z2⟩, x2 + ⟨z2⟩, y2 + ⟨z2⟩, z1 + ⟨z2⟩ | [x1, y1] + ⟨z2⟩ = [x2, y2] + ⟨z2⟩ = z1 + ⟨z2⟩⟩ and I2 = ⟨q⟩ ⊕ ⟨z2⟩ ≅ A(2) for some q ∈ L. Hence the set {x1, y1, x2, y2, z1, z2, q} is a basis of L and
[x1, y1] = z1 + α1 z2, [x2, y2] = z1 + α2 z2,
[x1, y2] = α3 z2, [x1, x2] = α4 z2,
[x2, y1] = α5 z2, [y1, y2] = α6 z2,
[x1, q] = α7 z2, [y1, q] = α8 z2,
[x2, q] = α9 z2, [y2, q] = α10 z2.
By a change of variables, we may assume that α1 = α2 = 0. Since q ∉ L² = Z(L) = ⟨z1⟩ ⊕ ⟨z2⟩, q is not central. Thus [L, I2] = [⟨q⟩, I2] = ⟨z2⟩. Without loss of generality, assume that α7 ≠ 0. By [17, Theorem 3.6], we have L/⟨z1⟩ ≅ H(2) ⊕ A(1) or L/⟨z1⟩ ≅ H(1) ⊕ A(3). First suppose that L/⟨z1⟩ ≅ H(2) ⊕ A(1). There exist two ideals I3/⟨z1⟩ and I4/⟨z1⟩ of L/⟨z1⟩ such that
I3/⟨z1⟩ ≅ H(2) and I4/⟨z1⟩ ≅ A(1).
Clearly, L = I3 + I4 and [I3, I4] ⊆ ⟨z1⟩.
We claim that q, x1 ∈ I3 and [x2, q] = [y1, q] = [y2, q] = [x1, x2] = [x1, y2] = 0. Let a + ⟨z1⟩ ∈ I3/⟨z1⟩ and I4/⟨z1⟩ = ⟨b + ⟨z1⟩⟩ be such that q + ⟨z1⟩ = (a + ⟨z1⟩) + (αb + ⟨z1⟩). If a + ⟨z1⟩ = 0, then q + ⟨z1⟩ ∈ I4/⟨z1⟩, whence [⟨q⟩, L] ⊆ ⟨z1⟩; since also [L, I2] = [⟨q⟩, I2] = ⟨z2⟩ gives [⟨q⟩, L] ⊆ ⟨z2⟩, we get [⟨q⟩, L] ⊆ ⟨z1⟩ ∩ ⟨z2⟩ = 0. So q ∈ Z(L) = L² = ⟨z1⟩ ⊕ ⟨z2⟩, a contradiction. Thus q − a − αb ∈ ⟨z1⟩ and a + ⟨z1⟩ ≠ 0. We have
a = η1 x1 + η2 x2 + η3 y1 + η4 y2 + η5 z1 + η6 z2 + η7 q,
αb = η1′ x1 + η2′ x2 + η3′ y1 + η4′ y2 + η5′ z1 + η6′ z2 + η7′ q,
and so
q = a + αb + γz1 = η1 x1 + η2 x2 + η3 y1 + η4 y2 + η5 z1 + η6 z2 + η7 q + η1′ x1 + η2′ x2 + η3′ y1 + η4′ y2 + η5′ z1 + η6′ z2 + η7′ q + γz1.
Since the set {x1, y1, x2, y2, z1, z2, q} is linearly independent and L is a Lie algebra over a field of characteristic two, η1 = η1′, η2 = η2′, η3 = η3′, η4 = η4′, η5 = −η5′ − γ, η6 = η6′ and η7 = 1 − η7′. Thus
q + ⟨z1⟩ = (a + ⟨z1⟩) + (αb + ⟨z1⟩) = (η1 + η1′)x1 + (η2 + η2′)x2 + (η3 + η3′)y1 + (η4 + η4′)y2 + (η6 + η6′)z2 + (η7 + η7′)q + ⟨z1⟩.
We conclude that q + ⟨z1⟩ = (η7 + η7′)q + ⟨z1⟩, so η7 ≠ 0 or η7′ ≠ 0. Since q + ⟨z1⟩ ∉ I4/⟨z1⟩, we have η7 ≠ 0 and η7′ = 0. Thus q + ⟨z1⟩ ∈ I3/⟨z1⟩, and hence q ∈ I3. Now we prove that x1 ∈ I3. By way of contradiction, assume that x1 ∉ I3. We may write x1 + ⟨z1⟩ = (a1 + ⟨z1⟩) + (α′b + ⟨z1⟩), where a1 + ⟨z1⟩ ∈ I3/⟨z1⟩ and I4/⟨z1⟩ = ⟨b + ⟨z1⟩⟩. Now, if a1 + ⟨z1⟩ = 0, then x1 + ⟨z1⟩ = α′b + ⟨z1⟩ ∈ I4/⟨z1⟩, and so [q, x1] + ⟨z1⟩ = 0. Since [q, x1] = z2, we have [q, x1] ∈ ⟨z1⟩ ∩ ⟨z2⟩ = 0, and hence [q, x1] = 0, a contradiction. Thus a1 + ⟨z1⟩ ≠ 0 and x1 − a1 − α′b ∈ ⟨z1⟩. Taking into account the
basis of L, we can write a1 = β1 x1 + β2 x2 + β3 y1 + β4 y2 + β5 z1 + β6 z2 + β7 q and α′b = β1′ x1 + β2′ x2 + β3′ y1 + β4′ y2 + β5′ z1 + β6′ z2 + β7′ q. Therefore
x1 = a1 + α′b + γz1 = (β1 + β1′)x1 + (β2 + β2′)x2 + (β3 + β3′)y1 + (β4 + β4′)y2 + (β5 + β5′)z1 + (β6 + β6′)z2 + (β7 + β7′)q.
Now, since the set {x1, y1, x2, y2, z1, z2, q} is linearly independent and the characteristic of the field is 2, we have β1 = 1 − β1′, β2 = β2′, β3 = β3′, β4 = β4′, β5 = β5′ + γ, β6 = β6′ and β7 = β7′. Now we have
x1 + ⟨z1⟩ = (a1 + ⟨z1⟩) + (α′b + ⟨z1⟩) = (β1 + β1′)x1 + (β2 + β2′)x2 + (β3 + β3′)y1 + (β4 + β4′)y2 + (β6 + β6′)z2 + (β7 + β7′)q + ⟨z1⟩ = (β1 + β1′)x1 + ⟨z1⟩.
Therefore β1 ≠ 0 or β1′ ≠ 0. But x1 + ⟨z1⟩ ∉ I4/⟨z1⟩, and hence β1 ≠ 0 and β1′ = 0, which implies x1 + ⟨z1⟩ ∈ I3/⟨z1⟩. Thus x1 ∈ I3.
Now we want to show that [x1, x2] = [x2, q] = [y2, q] = [y1, q] = [x1, y2] = 0. By way of contradiction, let α9 ≠ 0. Using the same procedure as in the previous step, we can show that x2 ∈ I3. Because I3/⟨z1⟩ ≅ H(2), we have [x2, q] + ⟨z1⟩ = 0, and so [x2, q] ∈ ⟨z1⟩ ∩ ⟨z2⟩ = 0, which is a contradiction. Similarly, one can show that each of the mentioned brackets vanishes.
Next we prove that exactly one of the brackets [x2, y1] and [y1, y2] vanishes. Note that the case [x2, y1] = [y1, y2] = 0 leads to L/⟨z1⟩ ≅ H(1) ⊕ A(3), which is a contradiction, since L/⟨z1⟩ ≅ H(2) ⊕ A(1). Now, without loss of generality, assume that [y1, y2] ≠ 0. Using the same process, we may prove that [x2, y1] = 0, y1, y2 ∈ I3 and x2 ∈ I4. Thus L = ⟨x1, y1, x2, y2, z1, z2, q | [y1, y2] = z2 = [x1, q], [x1, y1] = z1 = [x2, y2]⟩. Similarly, if either L/⟨z2⟩ ≅ H(2) ⊕ A(1) and L/⟨z1⟩ ≅ H(1) ⊕ A(3), or L/⟨z2⟩ ≅ H(1) ⊕ A(3) and L/⟨z1⟩ ≅ H(1) ⊕ A(3), then L ≅ ⟨x1, . . . , x7 | [x1, x2] = x6, [x1, x4] = x7 = [x3, x5]⟩. The result follows.
Corollary 3.2. Let L be a 7-dimensional generalized Heisenberg Lie algebra over any field F. Then L ≅ ⟨x1, . . . , x7 | [x1, x2] = x6 = [x3, x4], [x1, x5] = x7 = [x2, x3]⟩ ≅ L1 or L ≅ ⟨x1, . . . , x7 | [x1, x2] = x6, [x1, x4] = x7 = [x3, x5]⟩ ≅ L2.
Proof. The result follows from Theorems 2.7 (ii) and 3.1.
The following result gives the Schur multipliers of the Lie algebras L6,22(ǫ) and L6,7^(2)(η). It helps to determine the capability of these Lie algebras in the next proposition.
Proposition 3.3. The Schur multipliers of the Lie algebras L6,22(ǫ) and L6,7^(2)(η) are abelian Lie algebras of dimension 8.
Proof. Using the method of Hardy and Stitzinger in [10], one obtains that in both cases the dimension of the Schur multiplier is 8.
In the characterization of the capable generalized Heisenberg Lie algebras of rank two of dimension 6 in [18, Theorem 2.12], the Lie algebra L6,22(ǫ) is missing. Here, we improve this result as below.
Proposition 3.4. L6,22(ǫ) and L6,7^(2)(η) are capable.
Proof. Let L ≅ L6,22(ǫ). By Theorem 2.7, we have L6,22(ǫ)² = Z(L6,22(ǫ)) = ⟨x5⟩ ⊕ ⟨x6⟩. By Proposition 2.1 (ii) and [19, Corollary 4.6], it is enough to show that dim M(L6,22(ǫ)/⟨xi⟩) − 1 < dim M(L6,22(ǫ)) for i = 5, 6. Clearly, L6,22(ǫ)/⟨xi⟩ ≅ H(2) or L6,22(ǫ)/⟨xi⟩ ≅ H(1) ⊕ A(2) for i = 5, 6, by [17, Theorem 3.6]. Thus, for i = 5, 6, we have dim M(L6,22(ǫ)/⟨xi⟩) = 5 or 8, by [17, Lemma 2.6 and Theorem 2.7]. Since dim M(L6,22(ǫ)) = 8 by Proposition 3.3, we conclude that
dim M(L6,22(ǫ)/⟨xi⟩) − 1 < dim M(L6,22(ǫ)) for i = 5, 6.
Therefore L6,22(ǫ) is capable. In a similar way, one can see that L6,7^(2)(η) is also capable. The proof is complete.
The next result is useful.
Lemma 3.5. [18, Lemma 2.11] L5,8 and L1 are capable while L2 is not.
We are now ready to summarize our results and determine which of the generalized Heisenberg Lie algebras of rank 2 are capable.
Theorem 3.6. Let H be an n-dimensional generalized Heisenberg Lie algebra of rank 2. Then H is capable if and only if H is isomorphic to one of the Lie algebras L5,8, L6,22(ǫ), L6,7^(2)(η) or L1.
Proof. Let H be capable. Then [18, Proposition 2.6] implies 5 ≤ dim H ≤ 7. Using Corollary 3.2 and Theorem 2.7, H is isomorphic to one of the Lie algebras L5,8, L6,22(ǫ), L6,7^(2)(η), L1 or L2. By Proposition 3.4 and Lemma 3.5, all of these are capable except L2, which is non-capable. The converse holds by Proposition 3.4 and Lemma 3.5.
4. Stem nilpotent Lie algebras of class 3 with the derived subalgebra of dimension 2
We know that every nilpotent Lie algebra with the derived subalgebra of dimension 2 is of class two or three. In this section, we obtain the structure of the stem Lie algebras of class 3 with the derived subalgebra of dimension 2. Then we determine which of them are capable. Moreover, we show that all such Lie algebras of dimension greater than 6 are unicentral.
Recall that an n-dimensional nilpotent Lie algebra L is said to be nilpotent of maximal class if the class of L is n − 1. If L is of maximal class, then dim(L/L²) = 2, Z_i(L) = L^{n−i} and dim(L^j/L^{j+1}) = 1 for all 0 ≤ i ≤ n − 1 and 2 ≤ j ≤ n − 1 (see [4] for more information).
From [8], the only Lie algebra of maximal class of dimension 4 is, up to isomorphism,
L4,3 = ⟨x1, . . . , x4 | [x1, x2] = x3, [x1, x3] = x4⟩.
We say that a Lie algebra L is a semidirect sum of an ideal I and a subalgebra K if L = I + K and I ∩ K = 0. The semidirect sum of an ideal I and a subalgebra K is denoted by K ⋉ I.
Let cl(L) denote the nilpotency class of a Lie algebra L. The following two lemmas characterize the structure of all stem Lie algebras L of dimensions 5 and 6 with cl(L) = 3 and dim L² = 2.
Lemma 4.1. Let L be a nilpotent stem Lie algebra of dimension 5 such that dim L² = 2 and cl(L) = 3. Then
L ≅ L5,5 = ⟨x1, . . . , x5 | [x1, x2] = x3, [x1, x3] = x5, [x2, x4] = x5⟩.
Moreover, L5,5 = I ⋊ ⟨x4⟩, where
I = ⟨x1, x2, x3, x5 | [x1, x2] = x3, [x1, x3] = x5⟩ ≅ L4,3, and [I, ⟨x4⟩] = ⟨x5⟩.
Proof. By the classification of 5-dimensional nilpotent Lie algebras in [8], we get L ≅ L5,5. It is easy to check that L5,5 = I ⋊ ⟨x4⟩ with I = ⟨x1, x2, x3, x5 | [x1, x2] = x3, [x1, x3] = x5⟩ ≅ L4,3 and [I, ⟨x4⟩] = ⟨x5⟩.
Lemma 4.2. Let L be a nilpotent stem Lie algebra of dimension 6 such that dim L² = 2 and cl(L) = 3. Then L ≅ L6,10 = ⟨x1, . . . , x6 | [x1, x2] = x3, [x1, x3] = x6, [x4, x5] = x6⟩. Moreover, L6,10 = I ∔ ⟨x4, x5, x6 | [x4, x5] = x6⟩ = I ∔ K, where I = ⟨x1, x2, x3, x6 | [x1, x2] = x3, [x1, x3] = x6⟩ ≅ L4,3 and K = ⟨x4, x5, x6 | [x4, x5] = x6⟩ ≅ H(1).
Proof. By the classification of 6-dimensional nilpotent Lie algebras in [8], we get L ≅ L6,10. Clearly Z(L) = ⟨x6⟩ and L6,10 = I + K, where I = ⟨x1, x2, x3, x6 | [x1, x2] = x3, [x1, x3] = x6⟩ ≅ L4,3 and K = ⟨x4, x5, x6 | [x4, x5] = x6⟩ ≅ H(1). Since I ∩ K = ⟨x6⟩ = Z(I) = Z(L) and [I, K] = 0, we see that L6,10 = I ∔ K.
The following proposition is a useful instrument in what follows.
Proposition 4.3. Let L be an n-dimensional nilpotent stem Lie algebra of class 3 (n ≥ 5) with dim L² = 2 such that L = I + K, where I and K are two subalgebras of L, I ≅ L4,3 is the maximal class Lie algebra of dimension 4, and [I, K] ⊆ Z(I) = Z(L). Then
(i) If K is a non-trivial abelian Lie algebra such that K ∩ I = 0, then [I, K] = Z(L) and K ≅ A(1). Moreover, L = I ⋊ K ≅ L5,5.
(ii) Assume dim K² = 1 and I ∩ K = K² = Z(L).
(a) If K² = Z(K), then K ≅ H(m) and L = I ∔ K, where n = 2m + 4. Moreover, for m = 1 we have L = I ∔ K ≅ L6,10, where K ≅ H(1); for m ≥ 2 we have L = I ∔ K = I1 ∔ I2, where I1 ≅ L6,10 and I2 ≅ H(m − 1).
(b) If K² ≠ Z(K), then L = (I ⋊ A) ∔ K1, where K = K1 ⊕ A with K1 ≅ H(m) and A ≅ A(1), [I, A] = Z(L) = Z(I) = K1², and n = 2m + 5. Moreover, I ⋊ A ≅ L5,5.
Proof.
(i) We have [I, K] ⊆ Z(I) = Z(L), so I is an ideal of L. Since K ∩ I = 0 and I ≅ L4,3, dim K = dim L − dim I = n − 4, and so L = I ⋊ K, where K ≅ A(n − 4). We claim that [I, K] = Z(I). By way of contradiction, assume that [I, K] = 0. Since K is abelian, we have K ⊆ Z(I) = Z(L) ⊆ I. Then I ∩ K ≠ 0, a contradiction. Thus [I, K] = Z(I) = Z(L). We know that I is a Lie algebra of maximal class of dimension 4, so Z(I) = Z(L) = I³. We also have dim L² = dim I² = 2, and therefore L² = I². We claim that K ≅ A(1).
First assume that n = 5. Lemma 4.1 implies L ≅ I ⋊ K ≅ L5,5, and since I ≅ L4,3, we have I = ⟨x1, . . . , x4 | [x1, x2] = x3, [x1, x3] = x4⟩ and Z(I) = ⟨x4⟩ = I³. Now let n ≥ 6 and K = ⟨z1⟩ ⊕ · · · ⊕ ⟨z_{n−4}⟩. In this case we show that K ∩ I ≠ 0, which is a contradiction, so this case does not occur. By the Jacobi identity, for all 1 ≤ i ≤ n − 4, we have
[zi, x3] = [zi, [x1, x2]] = [[zi, x1], x2] + [[x2, zi], x1] = 0,
since [zi, x1] and [x2, zi] are central. Thus [zi, x3] = 0 for all 1 ≤ i ≤ n − 4. Now write [zi, x1] = αi x4. Putting zi′ = zi + αi x3, we have [zi′, x1] = [zi + αi x3, x1] = [zi, x1] + αi [x3, x1] = αi x4 − αi x4 = 0, and we also obtain [zi′, x3] = 0. Thus [zi′, x3] = [zi′, x1] = 0 for all i, 1 ≤ i ≤ n − 4. Now let [zi′, x2] = αx4 and [zj′, x2] = βx4, in which α ≠ 0 and β ≠ 0, for fixed i ≠ j. Put di = βzi′ − αzj′. We have [di, x2] = [βzi′ − αzj′, x2] = βαx4 − βαx4 = 0, and so [di, x1] = [di, x2] = [di, x3] = 0. Therefore [di, I] = 0, and hence di ∈ Z(L) = Z(I) = ⟨x4⟩. Since
di = βzi′ − αzj′ = β(zi + αi x3) − α(zj + αj x3) = βzi − αzj + (βαi − ααj)x3 ∈ Z(I),
we get 0 ≠ βzi − αzj ∈ K ∩ I = 0, which is a contradiction. Thus n = 5, K ≅ A(1), L = I ⋊ ⟨z1⟩ and [x2, z1] = x4, as required. Considering the classification of nilpotent Lie algebras of dimension 5 with dim L² = 2 given in [8] and Lemma 4.1, we must have L ≅ L5,5.
(ii) Since I ∩ K = K² = Z(L) = Z(I) ≅ A(1), we get dim(K) = dim(L) − dim(I) + dim(I ∩ K) = n − 4 + 1 = n − 3. We know that dim K² = 1, so [17, Theorem 3.6] implies K ≅ K1 ⊕ A, in which K1 ≅ H(m) and A ≅ A(n − 2m − 4). Note that Z(K) = Z(K1) ⊕ A = K² ⊕ A, so the hypothesis K² = Z(K) of case (a) holds exactly when A = 0, and then K ≅ H(m). Since 2m + 1 = dim(K) = n − 3, we have n = 2m + 4. We are going to show that [I, K] = 0; in fact, we show that L = I ∔ K with I ≅ L4,3 and K ≅ H(m). First let m = 1. We have dim L = 6 and K = ⟨x, y, x4 | [x, y] = x4⟩, since K ≅ H(1). By the classification of nilpotent Lie algebras of dimension 6 with dim L² = 2 given in [5] and Lemma 4.2, we must have L ≅ L6,10. Now let m ≥ 2 and H(m) = ⟨a1, b1, . . . , am, bm, z | [al, bl] = z, 1 ≤ l ≤ m⟩. Lemma 2.5 implies that H(m) = T1 ∔ · · · ∔ Tm, in which Ti ≅ H(1) for all 1 ≤ i ≤ m. With the same procedure as in case (i), after a change of variables we see that [Ti, I] = 0 for all i, 1 ≤ i ≤ m. So [I, K] = 0 and hence L = I ∔ K. Since m ≥ 2, we have L = (I ∔ T1) ∔ (T2 ∔ · · · ∔ Tm) with I ∔ T1 ≅ L6,10 and T2 ∔ · · · ∔ Tm ≅ H(m − 1), as required. This completes case (a).
Now let A ≠ 0, so n ≠ 2m + 4. Thus L = I + (K1 ⊕ A) with [I, K] ⊆ Z(L) = Z(I). We are going to show that A ≅ A(1), [I, K1] = 0 and [I, A] = Z(I) = Z(L). As in case (a), we see that [I, K1] = 0. We claim that [I, A] ≠ 0. By way of contradiction, let [A, K1] = [I, A] = 0; then A ⊆ Z(L) = Z(I). Since A ∩ I = 0, we get A = 0, which is a contradiction. So [I, A] = Z(L). We claim that dim A = 1. Let dim A ≥ 2. Similarly to the proof of part (i), we have [a1, x1] = [a2, x1] = [a1, x3] = [a2, x3] = 0, where a1, a2 ∈ A are linearly independent. Now let [a1, x2] = αx4 and [a2, x2] = βx4 with α ≠ 0 and β ≠ 0. Putting a1′ = βa1 − αa2, we have
[a1′, x2] = [βa1 − αa2, x2] = βαx4 − βαx4 = 0,
and hence [a1′, x1] = [a1′, x2] = [a1′, x3] = 0. Therefore [a1′, I] = 0, and hence a1′ ∈ Z(L) = Z(I) = ⟨x4⟩ = K1². So a1′ ∈ K1, and since K1 ∩ A = 0, we have a contradiction. Hence A ≅ A(1), and so n = 2m + 5. Thus L = (I ⋊ A) ∔ K1 with [I, A] = Z(L) = Z(I). By part (i), we have I ⋊ A ≅ L5,5. This completes case (b). The result follows.
We need the following lemma for the next investigation.
Lemma 4.4. [20, Lemma 1] Let L be a nilpotent Lie algebra and H a subalgebra of L such that L² = H² + L³. Then L^i = H^i for all i ≥ 2. Moreover, H is an ideal of L.
In the following, we determine the central factor of all stem Lie algebras T such that cl(T) = 3 and dim T² = 2.
Lemma 4.5. Let T be an n-dimensional stem Lie algebra such that cl(T) = 3 and dim T² = 2. Then Z(T) = T³ ≅ A(1) and T/Z(T) ≅ H(1) ⊕ A(n − 4).
Proof. Since T is stem, we have A(1) ≅ T³ ⊆ Z(T) ⫋ T². Thus Z(T) = T³ ≅ A(1). It follows that (T/Z(T))² ≅ A(1). Since T/Z(T) is capable, [17, Theorem 3.6] implies that T/Z(T) ≅ H(1) ⊕ A(n − 4). This completes the proof.
In the following theorem, we determine the structure of all stem Lie algebras of
class 3 with the derived subalgebra of dimension 2.
Theorem 4.6. Let T be an n-dimensional stem Lie algebra such that cl(T) = 3 and dim T² = 2. Then one of the following holds:
(a) T ≅ L4,3.
(b) T ≅ I ⋊ K ≅ L5,5, where K ≅ A(1), I ≅ L4,3 and Z2(T) = Z2(I) ⋊ K.
(c) T ≅ I ∔ I1, where I1 ≅ H(m), Z2(T) = Z2(I) + I1, I ≅ L4,3 and n = 2m + 4. Moreover, if m ≥ 2, then T ≅ L6,10 ∔ I2, where I2 ≅ H(m − 1), and if m = 1, then T ≅ L6,10.
(d) T ≅ (I ⋊ K) ∔ I1 ≅ L5,5 ∔ I1, where I1 ≅ H(m), K ≅ A(1), I ≅ L4,3, Z2(T) = (Z2(I) ⋊ K) ∔ I1 and n = 2m + 5.
Moreover, in cases (b), (c) and (d), Z(T) = Z(I) = I1² = [I, K].
Proof. Since cl(T) = 3, we have dim T ≥ 4. If dim T = 4, then T must be a Lie algebra of maximal class, and hence T ≅ L4,3. Assume that dim T ≥ 5. We have T/Z(T) ≅ H(1) ⊕ A(n − 4) and Z(T) = T³ ≅ A(1), by Lemma 4.5. There exist ideals I1/Z(T) and I2/Z(T) of T/Z(T) such that
I1/Z(T) ≅ H(1) and I2/Z(T) ≅ A(n − 4).
Since T²/Z(T) = (I1² + Z(T))/Z(T), we have T² = I1² + Z(T) and Z(T) = T³. Using Lemma 4.4, we have T² = I1², and so cl(T) = cl(I1) = 3. Hence I1 is a Lie algebra of maximal class, and since dim I1 = 4, we have I1 ≅ L4,3. Now Z(T) = Z(I1), because Z(T) ∩ I1 ⊆ Z(I1) and dim Z(T) = 1. Since Z(T) ⊆ I1 ∩ I2 ⊆ Z(T), we have I1 ∩ I2 = Z(T) = Z(I1). Now we determine the structure of I2. We have I2/Z(T) ≅ A(n − 4), so I2² ⊆ Z(T) ≅ A(1); hence cl(I2) ≤ 2 and [I1, I2] ⊆ I1 ∩ I2 = Z(T) = Z(I1) ≅ A(1). We have dim T/Z(T) ≥ 4 and so dim I2 ≥ 2.
Let cl(I2) = 1. Then [I1, I2] = I1 ∩ I2 = Z(T); otherwise [I1, I2] = 0, and since I2 is abelian, I2 ⊆ Z(T) ≅ A(1), a contradiction, since dim I2 ≥ 2. Hence I2 = Z(T) ⊕ A, where A ≅ A(n − 4) and [I1, I2] = Z(T). Now Z(T) ⊆ I1, A ∩ I1 = 0 and I1 ∩ I2 = Z(T), so T = I1 + I2 = I1 + Z(T) + A = I1 ⋊ A. Using the proof of Proposition 4.3 (i), we have T ≅ I1 ⋊ K ≅ L5,5, in which K ≅ A(1) and [K, I1] = Z(T). This is case (b).
Now let cl(I2) = 2. Since I2² = I1 ∩ I2 = Z(T) = Z(I1) ≅ A(1), by [17, Theorem 3.6] we have I2 ≅ H(m) ⊕ A(n − 2m − 4). First assume that A(n − 2m − 4) = 0. Then n = 2m + 4 and I2 ≅ H(m). Using Proposition 4.3 (ii)(a), we can similarly prove that [I1, I2] = 0 and T = I1 ∔ I2, where I2 ≅ H(m). This is case (c). Now let A(n − 2m − 4) ≠ 0. Then n ≠ 2m + 4, and hence T = I1 + (K ⊕ A), where K ≅ H(m), A ≅ A(n − 2m − 4) and [I1, K ⊕ A] ⊆ Z(T) = Z(I1). Similarly to case (c), we have [I1, K] = 0. Now we claim that [I1, A] = Z(T) ≅ A(1). Let [I1, A] = 0. Since [K, A] = 0, we have A ⊆ Z(T) = Z(I1) = Z(K) = K² ≅ A(1), a contradiction, since A ∩ K = 0. Therefore [I1, A] = Z(T), and hence T ≅ (I1 ⋊ A) ∔ K, where A ≅ A(n − 2m − 4) and [I1, A] = Z(T) = Z(I1). Similarly to case (b), one obtains A ≅ A(1), so n − 2m − 4 = 1, that is, n = 2m + 5, and [I1, A] = Z(T). So T = (I1 ⋊ A) ∔ K, in which A ≅ A(1) and [I1, A] = Z(T). This is case (d).
Now we have
Z2(T)/Z(T) = Z(T/Z(T)) = Z(I1/Z(T)) ⊕ I2/Z(T) and Z(T) = Z(I1);
also Z(I1/Z(T)) = I1²/Z(T), so Z2(T)/Z(T) = I1²/Z(T) ⊕ I2/Z(T). Since I1 is of maximal class of dimension 4, we have Z2(T) = Z2(I1) + I2 = I1² + I2. The result follows.
In the following theorem, we classify all non-capable stem Lie algebras of class
3 with the derived subalgebra of dimension 2.
Theorem 4.7. Let T be an n-dimensional stem Lie algebra such that cl(T) = 3, dim T² = 2 and n ≥ 6. Then T ≅ (I ⋊ K) ∔ H or T ≅ I ∔ H, where H ≅ H(m), K ≅ A(1), I ≅ L4,3 and [K, I] = Z(T) = Z(I) = H². Moreover, T is non-capable.
Proof. By Theorem 4.6 (c) and (d), we obtain T ≅ (I ⋊ A) ∔ H or T ≅ I ∔ H, where H ≅ H(m), A ≅ A(1), I ≅ L4,3 and [A, I] = Z(T) = Z(I) = H². By Proposition 2.6, T is non-capable. The result follows.
The capable stem Lie algebras of class 3 with the derived subalgebra of dimension 2 are characterized as follows.
Lemma 4.8. L4,3 and L5,5 are capable.
Proof. From [8], let L5,7 = ⟨x1, . . . , x5 | [x1, x2] = x3, [x1, x3] = x4, [x1, x4] = x5⟩ and L6,13 = ⟨x1, . . . , x6 | [x1, x2] = x3, [x1, x3] = x5, [x2, x4] = x5, [x1, x5] = x6, [x3, x4] = x6⟩. We have Z(L5,7) = ⟨x5⟩ and Z(L6,13) = ⟨x6⟩, so L5,7/⟨x5⟩ ≅ L4,3 and L6,13/⟨x6⟩ ≅ L5,5. Thus L4,3 and L5,5 are capable.
We are in a position to characterize the capability of an n-dimensional stem Lie algebra T such that cl(T) = 3 and dim T² = 2.
Theorem 4.9. Let T be an n-dimensional stem Lie algebra such that cl(T) = 3 and dim T² = 2. Then T is capable if and only if T ≅ L4,3 or T ≅ L5,5.
Proof. Let T be capable. By Theorems 4.6, 4.7 and Lemma 4.8, T is isomorphic to
L4,3 or L5,5 . The converse holds by Lemma 4.8.
The next theorem gives a necessary and sufficient condition for detecting the
capability of stem Lie algebras of class 3 with the derived subalgebra of dimension
2.
Theorem 4.10. Let T be an n-dimensional stem Lie algebra such that cl(T) = 3 and dim T² = 2. Then T is capable if and only if 3 ≤ dim(T/Z(T)) ≤ 4.
Proof. The result follows from Lemma 4.5 and Theorem 4.9.
Recall that a Lie algebra L is called unicentral if Z∗(L) = Z(L).
Corollary 4.11. Let T be an n-dimensional stem Lie algebra such that cl(T) = 3 and dim T² = 2. Then T is non-capable if and only if n ≥ 6. Moreover, in that case T is unicentral.
Proof. The result follows from Theorems 4.7 and 4.9.
LIE ALGEBRAS WITH THE DERIVED SUBALGEBRA OF DIMENSION TWO
11
5. Nilpotent Lie algebras with the derived subalgebra of dimension two
In this section, we determine all capable nilpotent Lie algebras with the derived subalgebra of dimension 2. First we show that every finite dimensional nilpotent Lie algebra of class 3 with derived subalgebra of dimension 2 can be written as a direct sum of a non-abelian stem Lie algebra of class 3 and an abelian Lie algebra.
The following result shows that the capability of the direct sum of a non-abelian Lie algebra and an abelian Lie algebra depends only on the capability of its non-abelian factor.
Theorem 5.1. Let L be a finite dimensional nilpotent Lie algebra of class 3 with dim L² = 2. Then L = T ⊕ A, where A is an abelian Lie algebra, Z(T) = L² ∩ Z(L) = L³ = T³ and Z∗(L) = Z∗(T).
Proof. By [12, Proposition 3.1], L = T ⊕ A, where A is an abelian Lie algebra, Z(T) = L² ∩ Z(L) and Z∗(L) = Z∗(T). Since T is stem, Lemma 4.5 implies Z(T) = T³, as required.
In the following theorem, all capable nilpotent Lie algebras of class 2 with the
derived subalgebra of dimension 2 are classified.
Theorem 5.2. Let L be an n-dimensional nilpotent Lie algebra of nilpotency class 2 with dim L² = 2. Then L is capable if and only if L ≅ L5,8 ⊕ A(n − 5), L ≅ L6,22(ǫ) ⊕ A(n − 6), L ≅ L6,7^(2)(η) ⊕ A(n − 6) or L ≅ L1 ⊕ A(n − 7).
Proof. This is immediately obtained from [18, Propositions 2.4 and 2.6] and Theorem 3.6.
We are ready to determine all capable Lie algebras of class 3 whose derived subalgebra is of dimension 2.
Theorem 5.3. Let L be an n-dimensional Lie algebra such that cl(L) = 3 and dim L² = 2. Then L is capable if and only if L ≅ L4,3 ⊕ A(n − 4) or L ≅ L5,5 ⊕ A(n − 5).
Proof. Theorem 5.1 implies L ≅ T ⊕ A, where A is an abelian Lie algebra, Z(T) = T² ∩ Z(L) = L³ = T³ ≅ A(1) and Z∗(L) = Z∗(T). Now the result follows from Theorem 4.9.
The following result is obtained from Theorems 4.10 and 5.3.
Corollary 5.4. Let L be a finite dimensional Lie algebra of class 3 with dim L² = 2. Then L is capable if and only if 3 ≤ dim(L/Z(L)) ≤ 4.
We summarize all results to classify all capable nilpotent Lie algebras with the
derived subalgebra of dimension at most two.
Theorem 5.5. Let L be an n-dimensional nilpotent Lie algebra with dim L² ≤ 2. Then L is capable if and only if L is isomorphic to one of the following Lie algebras:
(i) if dim L² = 0, then L ≅ A(n) and n > 1;
(ii) if dim L² = 1, then L ≅ H(1) ⊕ A(n − 3);
(iii) if dim L² = 2 and cl(L) = 2, then L ≅ L5,8 ⊕ A(n − 5), L ≅ L6,7^(2)(η) ⊕ A(n − 6), L ≅ L6,22(ǫ) ⊕ A(n − 6), or L ≅ L1 ⊕ A(n − 7);
(iv) if dim L² = 2 and cl(L) = 3, then L ≅ L4,3 ⊕ A(n − 4) or L ≅ L5,5 ⊕ A(n − 5).
Proof. The result follows from [17, Theorems 3.3 and 3.6] and Theorems 5.2 and 5.3.
References
[1] P. Batten, K. Moneyhun, E. Stitzinger, On characterizing nilpotent Lie algebras by their multipliers, Comm. Algebra 24 (1996) 4319-4330.
[2] P. Batten, E. Stitzinger, On covers of Lie algebras, Comm. Algebra 24 (1996) 4301-4317.
[3] F. R. Beyl, U. Felgner, P. Schmid, On groups occurring as center factor groups, J. Algebra 61 (1979) 161-177.
[4] L. Bosko, On Schur multipliers of Lie algebras and groups of maximal class, Internat. J. Algebra Comput. 20 (2010) 807-821.
[5] S. Cicalò, W. A. de Graaf, C. Schneider, Six-dimensional nilpotent Lie algebras, Linear Algebra Appl. 436 (2012), no. 1, 163-189.
[6] G. Ellis, A non-abelian tensor product of Lie algebras, Glasg. Math. J. 33 (1991) 101-120.
[7] M. P. Gong, Classification of nilpotent Lie algebras of dimension 7 (over algebraically closed fields and R), Ph.D. thesis, University of Waterloo, Ontario, Canada, 1998.
[8] W. A. de Graaf, Classification of 6-dimensional nilpotent Lie algebras over fields of characteristic not 2, J. Algebra 309 (2007) 640-653.
[9] P. Hall, The classification of prime-power groups, J. Reine Angew. Math. 182 (1940) 130-141.
[10] P. Hardy, E. Stitzinger, On characterizing nilpotent Lie algebras by their multipliers, t(L) = 3, 4, 5, 6, Comm. Algebra 26 (1998), no. 11, 3527-3539.
[11] H. Heineken, Nilpotent groups of class 2 that can appear as central quotient groups, Rend. Sem. Mat. Univ. Padova 84 (1990) 241-248.
[12] F. Johari, M. Parvizi, P. Niroomand, Capability and Schur multiplier of a pair of Lie algebras, J. Geom. Phys. 114 (2017) 184-196.
[13] K. Moneyhun, Isoclinisms in Lie algebras, Algebras Groups Geom. 11 (1994) 9-22.
[14] P. Niroomand, F. G. Russo, A note on the Schur multiplier of a nilpotent Lie algebra, Comm. Algebra 39 (2011) 1293-1297.
[15] P. Niroomand, F. G. Russo, A restriction on the Schur multiplier of nilpotent Lie algebras, Electron. J. Linear Algebra 22 (2011) 1-9.
[16] P. Niroomand, On the dimension of the Schur multiplier of nilpotent Lie algebras, Cent. Eur. J. Math. 9 (2011) 57-64.
[17] P. Niroomand, M. Parvizi, F. G. Russo, Some criteria for detecting capable Lie algebras, J. Algebra 384 (2013) 36-44.
[18] P. Niroomand, F. Johari, M. Parvizi, On the capability and Schur multiplier of nilpotent Lie algebra of class two, Proc. Amer. Math. Soc. 144 (2016) 4157-4168.
[19] A. R. Salemkar, V. Alamian, H. Mohammadzadeh, Some properties of the Schur multiplier and covers of Lie algebras, Comm. Algebra 36 (2008) 697-707.
[20] L. M. Zack, Nilpotent Lie algebras with a small second derived quotient, Comm. Algebra 36 (2008) 4607-4619.
School of Mathematics and Computer Science, Damghan University, Damghan, Iran
E-mail address: p_niroomand@yahoo.com, niroomand@du.ac.ir
Department of Pure Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
E-mail address: farangis.johari@mail.um.ac.ir, farangisjohary@yahoo.com
Department of Pure Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
E-mail address: parvizi@math.um.ac.ir
Polymorphic Types in ACL2
Benjamin Selfridge (University of Texas at Austin, Austin, TX; benself@cs.utexas.edu)
Eric Smith (Kestrel Institute, Palo Alto, CA; eric.smith@kestrel.edu)
This paper describes a tool suite for the ACL2 programming language which incorporates certain
ideas from the Hindley-Milner paradigm of functional programming (as exemplified in popular languages like ML and Haskell), including a “typed” style of programming with the ability to define
polymorphic types. These ideas are introduced via macros into the language of ACL2, taking advantage of ACL2’s guard-checking mechanism to perform type checking on both function definitions
and theorems. Finally, we discuss how these macros were used to implement features of Specware
[1], a software specification and implementation system.
1 Introduction
Specware is “a next-generation environment supporting the design, development and automated synthesis
of scalable, correct-by-construction software.” [1] The language of Specware is a high-level programming and specification language used to create program specifications, and to refine them into executable
code.1 One of the selling points of a high-powered system like Specware is its robust type system, which
accommodates sum type definitions, pattern-matching, and polymorphic types.
Figure 1 contains a snippet of Specware code that will be used as a running example throughout
this work. It includes two type definitions, two function definitions (these are called ops in Specware),
and several theorems. The first type definition listed, SeqInt, represents a list of integers. It consists
of two type cases: SeqNil, representing an empty list, and SeqCons, consisting of an integer/SeqInt
pair. In Specware we call these sum types “coproduct” types, and we will stick to this terminology in the
remainder of this work.
The next type definition, Seq a, is a generalization of the SeqInt type to be polymorphic in the
type of its contents. We can instantiate the type variable a to be any of Specware’s built-in base types,
or any other type we wish to define. We can also define functions on this new type, like SeqAppend
and SeqRev, and we can state theorems about such functions, like SeqAppend_Associative and
SeqRev_of_SeqRev.
Polymorphic typing greatly enhances the expressiveness of Specware, and it is one of many sophisticated language features that ACL2, a first-order untyped programming language and theorem prover,
does not support natively. The goal of this work is to present a partial implementation of some of these
features in ACL2, which includes a mechanism for defining “typed” ACL2 functions and theorems in
a style that mimics Specware syntax. This work is some of the fruits of a larger effort to use ACL2 as
a back-end prover for Specware. Since Specware emits proof obligations but does not directly verify
them, all the proofs must be translated into an external proof environment and verified independently.
The ACL2 constructs presented in this work were essential in the translation process, and they also have
1 Throughout this paper, we use the term Specware to refer to both the full Specware system, and to Metaslang, the programming language used by Specware.
F. Verbeek and J. Schmaltz (Eds.): ACL2 Workshop 2014 (ACL2’14).
EPTCS 152, 2014, pp. 49–59, doi:10.4204/EPTCS.152.4
type SeqInt =
| SeqNil
| SeqCons Int * SeqInt
type Seq a =
| SeqNil
| SeqCons a * (Seq a)
op [a] SeqAppend (x:Seq a, y:Seq a) : Seq a =
case x of
| SeqNil -> y
| SeqCons (hd,tl) -> SeqCons (hd, SeqAppend (tl, y))
op [b] SeqRev (x:Seq b) : Seq b =
case x of
| SeqNil -> SeqNil
| SeqCons (hd,tl) -> SeqAppend (SeqRev tl, SeqCons (hd,SeqNil))
theorem SeqAppend_Associative is [a]
fa(x:Seq a,y:Seq a,z:Seq a)
SeqAppend(SeqAppend(x,y),z) = SeqAppend(x,SeqAppend(y,z))
theorem SeqAppend_of_SeqNil_1 is [a]
fa (x:Seq a) SeqAppend(SeqNil,x) = x
theorem SeqAppend_of_SeqNil_2 is [a]
fa (x:Seq a) SeqAppend(x,SeqNil) = x
theorem SeqRev_of_SeqAppend is [a]
fa (x:Seq a,y:Seq a) SeqRev (SeqAppend (x,y)) = SeqAppend (SeqRev y, SeqRev x)
theorem SeqRev_of_SeqRev is [a]
fa (x:Seq a) (SeqRev (SeqRev x)) = x
Figure 1: Our example Specware program.
the potential to be quite useful on their own, independent of Specware. Although our work does not address type inference, this could be achieved by maintaining an ACL2 table mapping functions to types;
we discuss this more in the conclusion.
The organization of the remainder of this paper is as follows. Sections 2 and 3 describe how the program in Figure 1 is translated into ACL2 (via the use of some new macros), addressing the coproduct type definitions and the function definitions and theorems, respectively. Section 4 concludes the paper by summarizing what has been accomplished and what remains to be done (type inference).
2 ACL2 “types” and polymorphic type definitions
Before we can begin thinking about defining polymorphic types in ACL2, we must first understand how
to implement types in the first place. ACL2 is an untyped language; in the ACL2 logic, every function
must accept all types of arguments. However, we can use predicates (functions of arity 1 that return either
T or NIL) to define a type. An example of such a function in ACL2 is integerp, which returns T if its
argument is an integer, and NIL otherwise. In this work, we use Int to designate this type, and Int-p
to designate the ACL2 predicate that recognizes Ints.2 Throughout this work, we maintain a distinction
between the name of a type, Type, and its recognizer function, Type-p. This enables better automation
of macros that operate on these types; if the user refers to a type Type, the system will automatically
append -p to its name.
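For instance, such recognizers can be introduced as ordinary ACL2 predicates. The following is a minimal sketch consistent with footnote 2 (the actual book may define them slightly differently):

(defun int-p (x)
  (declare (xargs :guard t)) ;; a total recognizer: accepts any ACL2 object
  (integerp x))

(defun bool-p (x)
  (declare (xargs :guard t))
  (booleanp x))

With this convention, a reference to the type Int in the macros below expands to a call of Int-p.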
In order to implement coproduct types in ACL2, we started with the pre-existing ACL2 book defsum.
The defsum macro3 uses a combination of guard checking and theorems about the output type of functions (along with many other useful theorems) in order to implement an ad-hoc sum type. It also includes
a pattern-matching mechanism, pm, which is a more sophisticated version of ACL2’s case macro. We
introduced a new macro, defcoproduct, which is defined in terms of defsum but also accommodates
polymorphism, and a slightly modified version of pm, which we named case-of in order to more closely
reflect the syntax of Specware. We can define the “concrete” (non-polymorphic) type SeqInt as follows:
(defcoproduct SeqInt
(SeqNil)
(SeqCons Int SeqInt))
This macro-expands to a simple call to defsum:
(DEFSUM SEQINT
(SEQNIL)
(SEQCONS (INT-P ARG-1) (SEQINT-P ARG-2)))
As we can see, this type is simple enough to have been defined using defsum alone. To define a polymorphic Seq data structure, however, we need to use defcoproduct:
(defcoproduct Seq
  :type-vars (a)             ;; type variables
  (SeqNil)                   ;; two type cases - SeqNil and SeqCons
  (SeqCons a (:inst Seq a)))
This code, although its syntax is obviously ACL2, still resembles the original Specware definition of Seq.
The defcoproduct macro defines a new type, Seq, by introducing two type constructors, SeqCons and
SeqNil. The type is defined using a single type variable a, which is a placeholder for the type of Seq’s
contents. The :inst tag is necessary to inform the macro that we are using a particular instance of the
Seq type.
Because the logic of ACL2 does not have true polymorphic types (indeed, it does not have a type
system at all), the ACL2 definition of Seq is not a true type definition, like it is in Specware. Instead, it
serves as a template for creating instantiations of Seq with a specific type replacing the type variable a.
The above definition actually introduces a macro, Seq-instantiate, which we can use to instantiate
Seq on integers as follows:
(Seq-instantiate int)
This macro-expands (as before) to a call to defsum:
2 We define this function as a synonym for integerp, and likewise, we define Bool-p as a synonym for booleanp.
3 This macro was introduced and first used in Swords and Cook [2]. We added some slight modifications to the original macro in order to accommodate a bookkeeping mechanism that would enable automatic functional instantiation for polymorphic theorems.
(DEFSUM SEQ-INT
(SEQNIL-INT)
(SEQCONS-INT (INT-P ARG-1)
(SEQ-INT-P ARG-2)))
We can see this looks nearly identical to the definition of the concrete coproduct type SeqInt above.
(We have deliberately omitted some bookkeeping information from this defsum call - this information
will be necessary when we wish to instantiate theorems about the Seq type on specific instances, but the
details of this are not important.)
We can also define polymorphic types with more than one type variable, or in terms of previously
defined polymorphic types:
(defcoproduct EitherSeq
:type-vars (a b)
(LeftSeq (:inst Seq a))
(RightSeq (:inst Seq b)))
This defines a new polymorphic type, EitherSeq, parameterized by variables a and b. We can now
instantiate it with concrete types:
(EitherSeq-instantiate int bool)
The above call expands to
(PROGN (SEQ-INSTANTIATE INT)
(SEQ-INSTANTIATE BOOL)
(DEFSUM EITHERSEQ-INT-BOOL
(LEFTSEQ-INT-BOOL (SEQ-INT-P ARG-1))
(RIGHTSEQ-INT-BOOL (SEQ-BOOL-P ARG-1))))
Notice that before EitherSeq is instantiated on integers and booleans, the Seq type must be instantiated
on these two types. This is automatically checked by the defcoproduct macro, and these instantiations
are collected and included before defining EitherSeq-Int-Bool. It is notable that none of the macros
presented in this paper query the ACL2 world. If some types have already been defined, the preliminary
type instantiations will be dismissed by ACL2 as redundant definitions, and the macro will still succeed.
The polymorphism supported by defcoproduct is somewhat limited; the macro does not support
mutually recursive datatypes, and all instantiation must happen in one step (i.e., we cannot instantiate the
a variable of EitherSeq and leave b as a type variable). However, these are not fundamental limitations,
and could be implemented with more work.
3 Polymorphic functions and theorems
Consider the Specware function SeqAppend defined in Figure 1. This is the “append” function on
lists, defined for the polymorphic type Seq. We can translate this definition into ACL2 using our new
defun-typed macro:
(defun-typed SeqAppend
  :type-vars (a)                        ;; type variables
  ((x (:inst Seq a)) (y (:inst Seq a))) ;; typed argument list
  (:inst Seq a)                         ;; output type
  (case-of x                            ;; function body
    (((:inst SeqNil a))                 ;; case 1: SeqNil
     y)
    (((:inst SeqCons a) hd tl)          ;; case 2: SeqCons
     ((:inst SeqCons a) hd ((:inst SeqAppend a) tl y)))))
The defun-typed macro is a version of defun that requires type annotations for all its input values,
as well as a type annotation for its own return value. We supply a list of type variables (which can be
omitted if there are none), a list of the arguments with their associated types, an output type, and the
body of the function.
One obvious weakness of this definition is its verbosity. Every time we pattern match on a polymorphic type, we must include the :inst keyword in each pattern to indicate which instantiation for a
given constructor we are using. This could, of course, be solved by implementing type inference; for our
immediate task of translating Specware code to ACL2, this was not necessary, but it could be done with
some more work.
In order to use this function, we must first instantiate it:
(SeqAppend-instantiate int)
This macro-expands to
(PROGN (SEQ-INSTANTIATE INT)
(SEQNIL-INSTANTIATE INT)
(SEQCONS-INSTANTIATE INT)
(DEFUN SEQAPPEND-INT (X Y)
(DECLARE (XARGS :GUARD (AND (SEQ-INT-P X) (SEQ-INT-P Y))
:VERIFY-GUARDS NIL))
(IF (MBT (AND (SEQ-INT-P X) (SEQ-INT-P Y)))
(CASE-OF X ((SEQNIL-INT) Y)
((SEQCONS-INT HD TL)
(SEQCONS-INT HD (SEQAPPEND-INT TL Y))))
NIL))
(DEFTHM SEQAPPEND-INT-TYPE
(IMPLIES (AND (SEQ-INT-P X) (SEQ-INT-P Y))
(SEQ-INT-P (SEQAPPEND-INT X Y)))
:RULE-CLASSES (:TYPE-PRESCRIPTION :REWRITE))
(VERIFY-GUARDS SEQAPPEND-INT)
As before, we first instantiate the polymorphic Seq type for ints before defining our SeqAppend-Int
function. The SEQ-INSTANTIATE, SEQNIL-INSTANTIATE, and SEQCONS-INSTANTIATE are all redundant; they have the exact same definition. The defun-typed macro scans its argument list, output type,
and body for the :inst keyword, and calls the associated instantiation macro for each occurrence. This
was a brute-force way to guarantee that all the necessary functions and types will be defined before the
current function definition is submitted. Notice how a combination of ACL2 guards and theorems are
used to ensure that SeqAppend-Int satisfies all the typing requirements; we require that the guards of
the function calls made in the definition of SeqAppend-Int are never violated given our assumptions
about the types of x and y, and we also require that, assuming both input variables are Seq-Ints, the
output of this function is also a SeqInt. Notice how we check the latter first; for recursive definitions, it
is often the case that we need to know the output type of the function before we can verify the guards.
Of course, we can also define polymorphic functions in terms of other, previously defined ones.
(defun-typed SeqRev
:type-vars (a)
((x (:inst Seq a)))
(:inst Seq a)
(case-of x
(((:inst SeqNil a)) ((:inst SeqNil a)))
(((:inst SeqCons a) hd tl)
((:inst SeqAppend a)
((:inst SeqRev a) tl)
((:inst SeqCons a) hd ((:inst SeqNil a)))))))
If we instantiate SeqRev with the concrete type bool via
(SeqRev-instantiate bool)
this will macro-expand to
(PROGN
(SEQ-INSTANTIATE BOOL)
(SEQAPPEND-INSTANTIATE BOOL)
(SEQCONS-INSTANTIATE BOOL)
(SEQNIL-INSTANTIATE BOOL)
(DEFUN SEQREV-BOOL (X)
(DECLARE (XARGS :GUARD (SEQ-BOOL-P X)
:VERIFY-GUARDS NIL))
(IF (MBT (SEQ-BOOL-P X))
(CASE-OF X ((SEQNIL-BOOL) (SEQNIL-BOOL))
((SEQCONS-BOOL HD TL)
(SEQAPPEND-BOOL (SEQREV-BOOL TL)
(SEQCONS-BOOL HD (SEQNIL-BOOL)))))
NIL))
(DEFTHM SEQREV-BOOL-TYPE
(IMPLIES (SEQ-BOOL-P X)
(SEQ-BOOL-P (SEQREV-BOOL X)))
:RULE-CLASSES (:TYPE-PRESCRIPTION :REWRITE))
(VERIFY-GUARDS SEQREV-BOOL))
Notice that both the Seq type and the SeqAppend function are instantiated for bool before defining
SeqRev-Bool. Of course, it would have sufficed to only invoke (SEQAPPEND-INSTANTIATE BOOL),
but our defun-typed macro is not smart enough to figure that out; everywhere it sees an :inst keyword,
it calls the associated instantiation macro.
We can also state (and prove) theorems about functions involving polymorphic types using our new
macro defthm-typed. For instance, we can translate the SeqAppend_Associative theorem from the
introduction into ACL2 like so:
(defthm-typed SeqAppend_Associative
  :type-vars (a)        ;; type variables
  ((x (:inst Seq a))    ;; type annotations for free variables
   (y (:inst Seq a))
   (z (:inst Seq a)))
  (equal                ;; theorem body
   ((:inst SeqAppend a) ((:inst SeqAppend a) x y) z)
   ((:inst SeqAppend a) x ((:inst SeqAppend a) y z))))
This macro-expands to
(PROGN
(ENCAPSULATE (((A-P *) => *))
(LOCAL (DEFUN A-P (X) (DECLARE (IGNORE X)) T))
(DEFTHM A-TYPE (BOOLEANP (A-P X))
:RULE-CLASSES :TYPE-PRESCRIPTION))
(SEQ-INSTANTIATE A)
(SEQAPPEND-INSTANTIATE A)
(DEFUND-TYPED SEQAPPEND_ASSOCIATIVE-A-BODY
((X SEQ-A) (Y SEQ-A) (Z SEQ-A))
BOOL
(EQUAL (SEQAPPEND-A (SEQAPPEND-A X Y) Z)
(SEQAPPEND-A X (SEQAPPEND-A Y Z))))
(DEFTHM SEQAPPEND_ASSOCIATIVE-A
(IMPLIES (AND (SEQ-A-P X)
(SEQ-A-P Y)
(SEQ-A-P Z))
(EQUAL (SEQAPPEND-A (SEQAPPEND-A X Y) Z)
(SEQAPPEND-A X (SEQAPPEND-A Y Z)))))
(DEFMACRO SEQAPPEND_ASSOCIATIVE-INSTANTIATE (A)
;; ... macro definition omitted
)
The defthm-typed macro does several things. First, it defines an encapsulated predicate, A-P, which
will be used to represent the type variable a. Then, after instantiating all the needed types and functions, we type check the body of the theorem by defining it as a function with output type bool (if the
theorem doesn’t even type check, then we don’t need to bother to try and prove it). Then, it proves
a version of SeqAppend_Associative where the Seq type has been instantiated on an encapsulated
predicate A-P. In theory, this proves the theorem in general for any type instantiation. Finally, a new
macro, SEQAPPEND_ASSOCIATIVE-INSTANTIATE, is introduced, which allows us to prove this theorem for a specific instantiation of the Seq type. This macro uses functional instantiation (along with
a substantial amount of bookkeeping) to prove the theorem automatically from the original theorem,
SEQAPPEND_ASSOCIATIVE-A. If we instantiate this theorem for integers via
(SeqAppend_Associative-instantiate int)
we get
(PROGN
(SEQ-INSTANTIATE INT)
(SEQAPPEND-INSTANTIATE INT)
(DEFTHM-TYPED
SEQAPPEND_ASSOCIATIVE-INT
((X SEQ-INT) (Y SEQ-INT) (Z SEQ-INT))
(EQUAL (SEQAPPEND-INT (SEQAPPEND-INT X Y) Z)
(SEQAPPEND-INT X (SEQAPPEND-INT Y Z)))
:HINTS
(("Goal" :DO-NOT-INDUCT T
:IN-THEORY (ENABLE SEQ-INT-FUNCTIONS)
:USE ((:FUNCTIONAL-INSTANCE
SEQAPPEND_ASSOCIATIVE-A (A-P INT-P)
(SEQ-A-P SEQ-INT-P)
(SEQCONS-A-P SEQCONS-INT-P)
(SEQNIL-A-P SEQNIL-INT-P)
(SEQCONS-A SEQCONS-INT)
(SEQNIL-A SEQNIL-INT)
(SEQCONS-A-ARG-2 SEQCONS-INT-ARG-2)
(SEQCONS-A-ARG-1 SEQCONS-INT-ARG-1)
(SEQAPPEND-A SEQAPPEND-INT)))))))
Notice how the theorem is proved using functional instantiation on the more general theorem,
SeqAppend_Associative-A.
We can use the macros described above to implement the entire Specware program of Figure 1 using
our four new macros, defcoproduct, defun-typed, defthm-typed, and case-of. The full listing
for the ACL2 version of this program is given in Figure 2.
4 Conclusions
These macros were introduced in an ACL2 book in order to facilitate the translation process from
Specware to ACL2. Instead of having our translator produce the raw ACL2 code, we hide the messy
details of implementing these high-level language features with ACL2 macros, which has the advantage
of both making the translation process easier and making the automatically generated ACL2 code much
more readable. The gen-acl2 tool was added to Specware in order to automatically translate Specware
programs into ACL2 code that uses the macros defined here (the ACL2 code in Figure 2 was generated
by this tool).
A byproduct of this effort was the ACL2 macros themselves, which are quite useful in their own right,
and suggest the possibility of implementing more of these high-level features in ACL2. Limitations of
the polymorphism presented here include the inability to define mutually recursive polymorphic types,
as well as the lack of partial type instantiation. These macros could be extended in a straightforward way
to include these features.
Type inference could be implemented by maintaining an ACL2 table mapping function names to their
types (essentially, the “typing” theorems exported by the defun-typed macro). When the user defines a
new function, the Hindley-Milner algorithm can be used to deduce the necessary input and output types
of the function (assuming all the function calls in the body already exist in the table), and we can export
a theorem capturing this which could be proved in a theory that only includes the typing theorems of
previously defined functions.
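As a rough illustration of this idea (a sketch only: the table name, key format, and value format below are hypothetical and are not part of the book described in this paper), such a mapping could be recorded with an ACL2 table event whenever a defun-typed event succeeds:

(table typed-functions
       'seqappend-int
       '(((seq-int seq-int) . seq-int)))

;; A type-inference front end could consult this table to deduce that,
;; e.g., (seqappend-int x y) has type seq-int whenever x and y do, and
;; could then generate the corresponding typing theorem automatically.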
We also believe that the techniques used in this paper could be used to introduce “higher-order”
functions; in particular, we can modify our technique of extracting away a type variable by introducing
an encapsulated predicate, to extracting away a function by introducing an encapsulated function. We
have not thoroughly investigated the viability of this idea, but it seems to us like a fruitful avenue for
further research.
(in-package "ACL2")
(include-book "~/Desktop/specware-files/code/specware-book")
(set-ignore-ok t)
(set-bogus-defun-hints-ok t)
(defcoproduct SeqInt
(SeqNil)
(SeqCons Int SeqInt))
(defcoproduct Seq
:type-vars (a)
(SeqCons a (:inst Seq a))
(SeqNil))
(defun-typed SeqAppend
  :type-vars (a)
  ((x (:inst Seq a)) (y (:inst Seq a)))
  (:inst Seq a)
  (case-of x
    (((:inst SeqNil a)) y)
    (((:inst SeqCons a) hd tl)
     ((:inst SeqCons a) hd ((:inst SeqAppend a) tl y)))))
(defthm-typed SeqAppend_Associative
:type-vars (a)
((x (:inst Seq a))
(y (:inst Seq a))
(z (:inst Seq a)))
(equal ((:inst SeqAppend a) ((:inst SeqAppend a) x y) z)
((:inst SeqAppend a) x ((:inst SeqAppend a) y z))))
(defthm-typed SeqAppend_of_SeqNil_1
:type-vars (a)
((x (:inst Seq a)))
(equal ((:inst SeqAppend a) ((:inst SeqNil a)) x) x))
(defthm-typed SeqAppend_of_SeqNil_2
:type-vars (a)
((x (:inst Seq a)))
(equal ((:inst SeqAppend a) x ((:inst SeqNil a))) x))
(defun-typed SeqRev
:type-vars (a)
((x (:inst Seq a)))
(:inst Seq a)
(case-of x
(((:inst SeqNil a)) ((:inst SeqNil a)))
(((:inst SeqCons a) hd tl)
((:inst SeqAppend a)
((:inst SeqRev a) tl)
((:inst SeqCons a) hd ((:inst SeqNil a)))))))
(defthm-typed SeqRev_of_SeqAppend
:type-vars (a)
((x (:inst Seq a))
(y (:inst Seq a)))
(equal ((:inst SeqRev a) ((:inst SeqAppend a) x y))
((:inst SeqAppend a) ((:inst SeqRev a) y)
((:inst SeqRev a) x))))
(defthm-typed SeqRev_of_SeqRev
:type-vars (a)
((x (:inst Seq a)))
(equal ((:inst SeqRev a) ((:inst SeqRev a) x)) x)
:hints (("Goal" :in-theory (enable SeqAppend-a SeqRev-a))))
Figure 2: Our example Specware program, defined in ACL2.
References
[1] James McDonald & John Anton (2001): SPECWARE - Producing Software Correct by Construction.
[2] Sol Swords & William R. Cook (2006): Soundness of the Simply Typed Lambda Calculus in ACL2. In:
Proceedings of the Sixth International Workshop on the ACL2 Theorem Prover and Its Applications, ACL2
’06, ACM, New York, NY, USA, pp. 35–39, doi:10.1145/1217975.1217982.
arXiv:1703.02867v2 [] 7 Apr 2017
Constrained clustering via diagrams:
A unified theory and its applications to electoral
district design
Andreas Brieden∗, Peter Gritzmann†, Fabian Klemm‡
April 10, 2017
Abstract
The paper develops a general framework for constrained clustering
which is based on the close connection of geometric clustering and diagrams. Various new structural and algorithmic results are proved (and
known results generalized and unified) which show that the approach is
computationally efficient and flexible enough to pursue various conflicting
demands.
The strength of the model is also demonstrated practically on real-world instances of the electoral district design problem, where municipalities of a state have to be grouped into districts of nearly equal population
while obeying certain politically motivated requirements.
1 Introduction
Constrained Clustering. General clustering has long been known as a fundamental part of combinatorial optimization and data analytics. For many
applications (like electoral district design) it is, however, essential to observe
additional constraints, particularly on the cluster sizes. Accordingly, the focus
of the present paper is on constrained clustering where a given weighted point
set X in some space 𝒳 has to be partitioned into a given number k of clusters
of (approximately) predefined weight.
As has been observed for several applications, good clusterings are closely
related to various generalizations of Voronoi diagrams; see e.g. [15], [30], [20]
for recent work that is most closely related to the present paper. Besides electoral district design, such applications include grain reconstruction in materials science ([1]), farmland consolidation ([10], [15], [11]), facility and service districting ([55], [59], [45], [46], [40], [29], [3]), and robot network design ([21], [19]).
∗ andreas.brieden@unibw.de, Universität der Bundeswehr, 85579 Neubiberg, Germany
† gritzmann@tum.de, Department of Mathematics, Technical University of Munich, 80290 München, Germany
‡ klemm@ma.tum.de, Department of Mathematics, Technical University of Munich, 80290 München, Germany
We will present a general unified theory which is based on the relation of
constrained geometric clustering and diagrams. In Sections 2 and 3, we analyze
the model and prove various favorable properties.
Using several types of diagrams in different spaces, we obtain partitions that
are optimized with respect to different criteria: In Euclidean space, we obtain
clusters that are particularly well consolidated. Using locally ellipsoidal norms,
we can to a certain extent preserve originally existing structures. In a discrete
metric space derived from a graph that encodes an intrinsic neighboring relation,
we obtain assignments that are guaranteed to be connected. In the theoretical
part the various different issues will be visualized with the help of a running
example.
Electoral District Design. Our prime example will be that of electoral district design, which has been approached from various directions over the last half
century (see [51], [38], and [56] for surveys, [32] for the example of Germany,
and [36], [37] for general accounts on partitions). Municipalities of a state have
to be grouped to form electoral districts. The districts are required to be of
nearly equal population and of “reasonable” shape. Hence a crucial feature of the electoral district design problem is that there are several partly conflicting optimization criteria, such as the degree of population balance, consolidation, or
a desire for some continuity in the development of districts over time. Therefore
we will show how our unified approach allows the decision maker to compare
several models with different optimization foci.
Section 4 will show the effect of our method for the federal elections in
Germany. The German law ([25]) requires that any deviation of district sizes
of more than 15% from the federal average is to be avoided. As a preview,
Figure 1 contrasts the occurring deviations from the 2013 election with the
deviations resulting from one of our approaches. The federal average absolute
deviation drops significantly from 9.5% for the 2013 election to a value ranging
from 2.1% to 2.7% depending on the approach. For most states, these deviations
are close to optimal, since the average district sizes of the states, i.e., the ratios of their numbers of eligible voters and districts, already differ from the federal average by about as much. See Section 4 for detailed results and the Appendix
for further statistics. Furthermore, an online supplement depicts the results of
all approaches for the full data set, see http://www-m9.ma.tum.de/material/
districting/.
Constrained Clustering via Generalized Voronoi Diagrams
In accordance with [22], the generalized Voronoi diagram for given functions fi : 𝒳 → R, i = 1, . . . , k, is obtained by assigning each point x ∈ 𝒳 to a subset Ci of 𝒳 for which fi(x) is minimal. We are interested in clusterings of X that are induced by such diagrams (cf. Section 2.2).
(a) Deviations of the 2013 election districts in Germany. (b) Deviations resulting from our methodology. (c) Color key: the classes correspond to absolute deviations in the ranges 0%-5%, 5%-15%, 15%-25%, and 25%-100% from the federal average district size.
Figure 1: Absolute deviations from the average population size per district.
Of course, in order to obtain suitable diagrams, the choice of the functions
fi is crucial. For parameters (D, h, S, M) we define the k-tuple of functions
F(D, h, S, M) := (f1 , . . . , fk ) via
fi (x) := h(di (si , x)) + µi .
Here, D := (d1 , . . . , dk ) is a k-tuple of metrics (or more general distance measures) on X , h : R≥0 → R≥0 is monotonically increasing, S := (s1 , . . . , sk ) ∈ X k
is a k-tuple of points in X , and M := (µ1 , . . . , µk ) ∈ Rk is a vector of reals. If
the metrics di are not all identical, we call the resulting diagram anisotropic.
(a) Diagram in Euclidean space. (b) Anisotropic diagram with local ellipsoidal norms. (c) Diagram in discrete space.
Figure 2: Exemplary clusterings and related diagrams.
We consider an exemplary selection of types of generalized Voronoi diagrams
(see also [4], [22], [48], [50]). For each of the considered types, Figure 2 depicts
an exemplary diagram together with its induced clustering.
In Euclidean space, the choice
fi(x) := ‖x − si‖₂² + µi
yields power diagrams; see [5], [8]. For the particular case of centroidal diagrams, in which the sites coincide with the resulting centers of gravity of the clusters, the inherent variance is minimized. This can be achieved by optimization over S (cf. [15], [14], [9], [28]).
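To make the induced cells explicit (a standard observation about power diagrams, included here for orientation; P_i denotes the region assigned to cluster i):

P_i = \{\, x : \|x - s_i\|_2^2 + \mu_i \le \|x - s_j\|_2^2 + \mu_j \ \text{for all } j \,\}.

Since \|x - s_i\|_2^2 - \|x - s_j\|_2^2 = 2(s_j - s_i)^\top x + \|s_i\|_2^2 - \|s_j\|_2^2 is affine in x, all bisectors are hyperplanes, and the cells P_i are polyhedra.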
The setting
fi(x) := ‖x − si‖₂ + µi
yields additively weighted Voronoi diagrams.
Allowing for each cluster an individual ellipsoidal norm yields anisotropic
Voronoi and power diagrams, respectively. Appropriate choices of norms facilitate the integration of further information such as the shape of pre-existing
clusters in our application.
We also consider the discrete case in which the underlying space coincides with the given point set X. Here, we are given a connected graph G := (X, E, δ) with a function δ : E → R>0 assigning a positive distance to each edge. With dG(x, y) defined as the length of a shortest x-y-path in G w.r.t. δ, this induces a metric on X. The choice of
fi(x) := dG(si, x) + µi
then leads to shortest-path diagrams. Such diagrams guarantee the connectivity of all clusters in the underlying graph. This allows one to represent intrinsic relations of data points that cannot easily be captured otherwise.
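The connectivity guarantee follows from the triangle inequality (a one-line argument, included for completeness; ties between cells would additionally require consistent tie-breaking): if x lies in the cell of site s_i and y lies on a shortest s_i-x-path, then for every j

d_G(s_j, y) + \mu_j \ \ge\ d_G(s_j, x) - d_G(y, x) + \mu_j \ \ge\ d_G(s_i, x) + \mu_i - d_G(y, x) \ =\ d_G(s_i, y) + \mu_i,

so y lies in the cell of s_i as well, and each cluster is connected in G.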
As we will see, the parameters D and h mainly determine the characteristics
of the resulting diagrams. The points si then serve as reference points – called
sites – for the clusters.
It is shown that for any choice of D, h and S there exists a choice of the
additive parameter tuple M, such that the induced clusters are of prescribed
weight as well as optimally consolidated (cf. Corollary 2). Thus, we distinguish
between the structural parameters D, h and S and the feasibility parameter M.
Our approach does not automatically yield integral assignments in general but
may require subsequent rounding. However, the number of fractionally assigned
points and thus the deviation of cluster weights can be reasonably controlled
(see Theorem 5).
Typically, D and h are determined by the specific application, as it dictates
the requirements on the clusters. One can still optimize over the remaining
structural parameter S with respect to different criteria, e.g., optimal total
variances or margins. For any choice of structural parameters, the feasibility
parameter M is then readily provided by the dual variables of a simple linear
program.
As we will point out in more detail, our framework extends various previously
pursued approaches. We feel that a unified and self-contained exposition serves
the reader better than a treatment that relies heavily on pointers to the scattered
literature. Hence, we occasionally include new, concise proofs of known results
whenever this adds to the readability of the paper. Of course, we always try to
give the appropriate references.
Organization of the Paper. Section 2 presents the general definitions and
methodology of our approach. Section 3 provides a short study of typical
generalized Voronoi diagrams and shows their relevance for constrained clustering. Section 4 then presents our results for the electoral district design problem
for the example of Germany in full detail, while Section 5 concludes with some
final remarks.
2 Definitions and Methodology
We begin by describing our approach to constrained geometric clustering in a
general context. Due to the specific application, we focus on the discrete case
of partitioning a given finite weighted set; some results for the continuous case
will, however, also be mentioned.
First, Section 2.1 defines the terminology for constrained clusterings. We
construct clusterings that are induced by a suitable dissection of the underlying
space. For this purpose, Section 2.2 formally defines generalized types of Voronoi
diagrams and shows how they relate to clusterings. Section 2.3 then yields
the main theoretical results that establish a correspondence of clusterings with
prescribed capacities and generalized Voronoi diagrams.
2.1 Constrained Clustering
Let k, m ∈ N and let X be an arbitrary space. We consider a set

X := {x1, …, xm} ⊂ X

with corresponding weights

Ω := (ω1, …, ωm) ∈ R^m_{>0}.

Furthermore, let

K := (κ1, …, κk) ∈ R^k_{>0}

such that ∑_{i=1}^k κi = ∑_{j=1}^m ωj.
The vector K contains the desired cluster “sizes”. Hence, we want to find a
partition of X such that each cluster Ci has total weight equal to the prescribed
capacity κi.
For k = 2, integer weights, and κ1 = κ2, the associated decision problem
coincides with the well-known Partition problem and is therefore already NP-hard.
We also consider a relaxed version of the problem by allowing fractional
assignments

C := (ξi,j)_{i=1,…,k; j=1,…,m} ∈ [0, 1]^{k×m}

such that ∑_{i=1}^k ξi,j = 1 for each j. C is called a (fractional) clustering of X,
and ξi,j is the fraction of unit j assigned to cluster i. We further set Ci :=
(ξi,1, …, ξi,m), call it cluster i, and let

supp(Ci) := {xj ∈ X : ξi,j > 0}

denote its support, i.e., the set of those elements of X that are assigned to i
with some positive fraction. If C ∈ {0, 1}^{k×m}, we call the clustering integer.
The weight of a cluster is given by

ω(Ci) := ∑_{j=1}^m ξi,j ωj.
A clustering C is strongly balanced if

ω(Ci) = κi

for each i. If lower and upper bounds κi⁻, κi⁺ ∈ R≥0 for the cluster weights are
given and

κi⁻ ≤ ∑_{j=1}^m ωj ξi,j ≤ κi⁺

holds for every i, C is called weakly balanced. A case of special interest for our
application is that of κi⁻ = (1 − ε)κi and κi⁺ = (1 + ε)κi for all i for some given
ε > 0. Then, i.e., if

(1 − ε)κi ≤ ω(Ci) ≤ (1 + ε)κi

for each i, we call C ε-balanced or, whenever the choice of ε is clear, simply
balanced.
By BC and BCε we denote the sets of all strongly balanced and ε-balanced
fractional clusterings, respectively. Note that the condition ∑_{i=1}^k κi = ∑_{j=1}^m ωj
guarantees that BC ≠ ∅. Similarly, let BCI and BCIε denote the sets of all
strongly balanced and ε-balanced integer clusterings, respectively. Of course,
BCI ⊂ BC ⊂ BCε and BCI ⊂ BCIε ⊂ BCε.
2.2 Clusterings induced by Generalized Voronoi Diagrams
Let a k-tuple F := (f1, …, fk) of functions fi : X → R for i = 1, …, k be given.
For each cluster, the corresponding fi is supposed to act as a distance measure.
While a classical Voronoi diagram in Euclidean space assigns each point to a
closest reference point, this concept can be generalized by assigning each point
to the region for which the corresponding value of fi is minimal. Formally, we set

Pi := {x ∈ X : fi(x) ≤ fl(x) ∀l ∈ {1, …, k}}

and call Pi the i-th (generalized) Voronoi region (or cell). Then P := (P1, …, Pk)
is the generalized Voronoi diagram (w.r.t. F).
Note that, in general, P does not constitute a partition of X. In the application
we will, of course, focus on choices of the functions fi that guarantee that the
cells Pi do not have interior points in common; see Lemma 6.
A generalized Voronoi diagram P is said to be feasible for a clustering C if

supp(Ci) ⊂ Pi

for all i. Typically, we do not want a Voronoi region to contain elements “by
chance”, i.e., points that are not (at least fractionally) assigned to the corresponding cluster. Hence, we say P supports C if

supp(Ci) = Pi ∩ X

for all i.
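For a finite set X, the cells Pi and the support condition can be evaluated directly. The following Python sketch is an illustration only, assuming the fi are given as callables and a clustering as a k × m array; the tie tolerance is an implementation choice, not part of the formal definition.

import numpy as np

def voronoi_regions(F, points, tol=1e-9):
    """Index sets P_i = {j : f_i(x_j) <= f_l(x_j) for all l}, ties included."""
    vals = np.array([[f(x) for x in points] for f in F])  # k x m matrix of f_i(x_j)
    mins = vals.min(axis=0)
    return [set(np.flatnonzero(vals[i] <= mins + tol)) for i in range(len(F))]

def supports(P, xi, tol=1e-9):
    """P supports C iff supp(C_i) = P_i on X for every i."""
    return all(set(np.flatnonzero(xi[i] > tol)) == P[i] for i in range(len(P)))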
2.3 Correspondence of Constrained Clusterings and Generalized Voronoi Diagrams
As described in the introduction, we are interested in finding a clustering C ∈ BC
that is supported by a generalized Voronoi diagram P w. r. t. functions fi (x) :=
h(di (si , x)) + µi . A natural question is for which choices of (D, h, S, M) such a
clustering exists.
By definition, a diagram P is feasible for C ∈ BC if and only if

ξi,j · ( h(di(si, xj)) + µi − min_{l=1,…,k} ( h(dl(sl, xj)) + µl ) ) = 0    (1)

holds for every i = 1, …, k and j = 1, …, m.
We will now recover (1) as a complementary slackness condition in linear
programming. For this purpose, first note that in general, i. e., for any C ∈ BC
and (D, h, S, M), we have
∑_{i=1}^k ∑_{j=1}^m ξi,j · ωj · ( h(di(si, xj)) + µi − min_{l=1,…,k} (h(dl(sl, xj)) + µl) ) ≥ 0,    (2)
as all weights ωj are positive and each factor in the sum above is non-negative.
Using ∑_{j=1}^m ωj ξi,j = κi for each i and ∑_{i=1}^k ξi,j = 1 for each j, Inequality (2)
is equivalent to

∑_{i=1}^k ∑_{j=1}^m ξi,j · ωj · h(di(si, xj)) ≥ ∑_{j=1}^m ωj · min_{l=1,…,k} (h(dl(sl, xj)) + µl) − ∑_{i=1}^k κi µi.    (3)
Note that (1) holds for every i and j if and only if (2), and hence (3), holds
with equality.
Now, observe that the left-hand side of (3) does not depend on M while the
right-hand side does not depend on C. For any choice of (D, h, S) equality in
(3) can therefore only hold if C minimizes the left-hand side while M maximizes
the right-hand side. Thus, in particular, C ∈ BC must be a minimizer of the
linear program
min_{C ∈ R^{k×m}}  ∑_{i=1}^k ∑_{j=1}^m ξi,j · ωj · h(di(si, xj))

s.t.  ∑_{i=1}^k ξi,j = 1           (j = 1, …, m)                          (P)
      ∑_{j=1}^m ξi,j ωj = κi       (i = 1, …, k)
      ξi,j ≥ 0                     (i = 1, …, k; j = 1, …, m).
By introducing auxiliary variables E := (η1, …, ηm), maximization of the
right-hand side of (3) can be formulated as a linear program as well:

max_{M ∈ R^k, E ∈ R^m}  ∑_{j=1}^m ωj ηj − ∑_{i=1}^k κi µi

s.t.  ηj ≤ h(di(si, xj)) + µi      (i = 1, …, k; j = 1, …, m).            (D)
Now, observe that (D) is the dual program of (P). Thus, P is feasible for C if
and only if C and (M, E) are primal-dual optimizers. In particular, as

ηj = min_{i=1,…,k} (h(di(si, xj)) + µi)

holds for every optimal solution of (D), Equation (1) states exactly the complementary slackness conditions. Furthermore, if the complementarity is strict,
i.e., exactly one factor is strictly positive in every equation of type (1), this is
equivalent to P supporting C.
We summarize these observations in the following theorem.
Theorem 1. Let C ∈ BC, let D be a k-tuple of metrics on X, S ∈ X^k, h : R≥0 →
R, M ∈ R^k, and let P be the generalized Voronoi diagram w.r.t. F(D, h, S, M).
Further, set ηj := min_{i=1,…,k} (h(di(si, xj)) + µi) for j = 1, …, m, and E :=
(η1, …, ηm).
Then (M, E) is feasible for (D) and the following equivalences hold:

P is feasible for C  ⇔  C and (M, E) satisfy the complementary slackness conditions for (P) and (D);
P supports C         ⇔  C and (M, E) satisfy the strict complementary slackness conditions for (P) and (D).
Theorem 1 establishes a one-to-one correspondence between (fractional) strongly
balanced clusterings that are supported by a generalized Voronoi diagram and
those faces of the polytope BC that are optimal w.r.t. a corresponding objective function.
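In computational terms, the theorem says that a feasible or supporting diagram can be found by solving (P) with any LP solver and reading the feasibility parameter M off the dual variables of the capacity constraints. The following Python sketch illustrates this with SciPy's HiGHS interface; the function names and array layout are our own, and the sign of the reported duals may have to be adjusted to the solver's convention.

import numpy as np
from scipy.optimize import linprog

def solve_P(cost, omega, kappa):
    """cost[i, j] = h(d_i(s_i, x_j)); omega: m unit weights; kappa: k capacities."""
    k, m = cost.shape
    c = (cost * omega).ravel()                 # objective coefficient of xi_ij is omega_j * cost_ij
    A_eq = np.zeros((m + k, k * m))
    b_eq = np.concatenate([np.ones(m), kappa])
    for j in range(m):                         # sum_i xi_ij = 1 for each unit j
        A_eq[j, j::m] = 1.0
    for i in range(k):                         # sum_j xi_ij * omega_j = kappa_i
        A_eq[m + i, i * m:(i + 1) * m] = omega
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    xi = res.x.reshape(k, m)                   # optimal fractional clustering C
    mu = res.eqlin.marginals[m:]               # duals of the capacity rows, playing the role of M
    return xi, mu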
Observe that the derivation of Theorem 1 did not use any further assumptions on the functions fi besides the additive component M. Thus, we obtain
the following corollary.
Corollary 2. Let f̂i : X → R, i = 1, …, k. Then C* ∈ BC is an optimizer of min_{C∈BC} ∑_{i=1}^k ∑_{j=1}^m ξi,j · ωj · f̂i(xj) if and only if there exists M :=
(µ1, …, µk) ∈ R^k such that the generalized Voronoi diagram w.r.t. fi := f̂i + µi
is feasible for C*.
For functions f̂i(x) := h(di(si, x)), this has already been shown via linear
programming duality in [15] for a discrete set X, h = (·)², and the Euclidean
metric. In a continuous setting, i.e., for X = R^n and balancing constraints
defined w.r.t. a probability distribution on R^n, this has been proven in [5] and
extended to more general function tuples F in [30]. Here, the particular case h =
id is contained in [3]. In [21], this result was deduced for the Euclidean case and
an arbitrary function h by carefully considering the optimality conditions of an
alternative optimization problem (establishing optimality, but not the linear
programming duality). Furthermore, in [19] and [20] the general continuous
case was proved by discretization, also involving linear programming duality.
Using the second part of Theorem 1, we can now characterize supporting
diagrams.
Corollary 3. Let f̂i : X → R, i = 1, …, k. Then C* ∈ BC lies in the relative
interior of the optimal face of min_{C∈BC} ∑_{i=1}^k ∑_{j=1}^m ξi,j · ωj · f̂i(xj) if and only if
there exists M := (µ1, …, µk) ∈ R^k such that the generalized Voronoi diagram
w.r.t. fi := f̂i + µi supports C*.
Thus, for non-unique optima the supporting property of the diagrams may
still be established, but it comes at the price of more fractionally assigned elements (cf. Section 3.3).
Another fact, rooted in a basic result about extremal points of transportation polytopes, has been noted in their respective contexts by several authors (e.g., [55], [44], [59], [34], [31], [39], [15]): an optimal basic solution of
(P) yields partitions with a limited number of fractionally assigned points.
Our proof of the following Lemma 4 relies on the bipartite assignment graph
H(C) associated with a given clustering C ∈ BC. It is defined by

H(C) := ({1, …, k} ∪ X, E)   with   E := {{i, xj} : ξi,j > 0}.

By [41, Thm. 4], H(C) is acyclic if and only if C is extremal, i.e., a vertex of the
feasible region of (P).
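Both H(C) and this extremality criterion are easy to evaluate for a given clustering; the following sketch assumes the networkx package and a k × m array representation of C.

import networkx as nx

def assignment_graph(xi, tol=1e-9):
    """Bipartite graph H(C): cluster nodes 0..k-1, unit nodes 'x0'..'x{m-1}'."""
    k, m = xi.shape
    H = nx.Graph()
    H.add_nodes_from(range(k))
    H.add_nodes_from(f"x{j}" for j in range(m))
    H.add_edges_from((i, f"x{j}") for i in range(k) for j in range(m) if xi[i, j] > tol)
    return H

# C is a vertex of the feasible region of (P) iff H(C) contains no cycle:
# is_extremal = nx.is_forest(assignment_graph(xi))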
Lemma 4. Let f̂i : X → R, i = 1, …, k. Then there exist M := (µ1, …, µk) ∈
R^k and a clustering C ∈ BC with at most k − 1 fractionally assigned points and
at most 2(k − 1) fractional components ξi,j such that the generalized Voronoi
diagram w.r.t. fi := f̂i + µi is feasible for C.
Proof. Let C be an extremal solution of (P) and H(C) = ({1, …, k} ∪ X, E)
the corresponding assignment graph. As H(C) is acyclic, it follows that |E| ≤ k + m − 1. Further, from the definition of BC it
follows that deg(xj) ≥ 1 for every j = 1, …, m. Moreover, for any fractionally
assigned element xj ∈ X it follows that deg(xj) ≥ 2. As H(C) is bipartite, we
also have |E| = ∑_{j=1}^m deg(xj). In conclusion, this yields

k + m − 1 ≥ |E| = ∑_{j=1}^m deg(xj) ≥ m + (1/2) |{{i, xj} : 0 < ξi,j < 1}| ≥ m + |{xj ∈ X : xj is fractionally assigned}|,

which implies the assertion.
Note that for the special case of ωj = 1 for all j, every extremal solution of
the transportation problem (P) is integral, i.e., BC = BCI (cf. [41, Corollary
1]). In general, however, this is not the case.
In any case, by solving the linear program (P) and applying suitable rounding, we obtain an integer clustering that satisfies an a-priori (and for many
applications very reasonable) error bound. More precisely, [54] showed that
minimizing the maximum error after rounding can be done in polynomial time
using a dynamic programming approach, while minimizing the sum of absolute
or squared errors is NP-hard. In [34] it was furthermore shown that minimizing
the number of fractional assignments while obeying a pre-defined error tolerance
is NP-hard.
The following theorem provides an upper bound for ε that guarantees the
existence of an ε-balanced integer clustering.
Theorem 5. Let f̂i : X → R, i = 1, …, k, and ε ≥ max_{1≤j≤m, 1≤i≤k} ωj/κi.
Then there exists C ∈ BCIε together with M := (µ1, …, µk) ∈ R^k such that the
generalized Voronoi diagram w.r.t. fi := f̂i + µi is feasible for C.
For rational input, i.e., Ω ∈ Q^m_{>0}, K ∈ Q^k_{>0}, f̂i(xj) ∈ Q for i = 1, …, k and
j = 1, …, m, C and M can be determined in polynomial time.
Proof. By Corollary 2, a solution C̃ ∈ BC of (P) with a corresponding dual
solution (µ1, …, µk) ∈ R^k yields a generalized Voronoi diagram w.r.t. fi :=
f̂i + µi that is feasible for C̃. We may furthermore choose C̃ to be extremal.
We can also assume that the assignment graph H(C̃) is a tree (otherwise we
consider its connected components). We may further root this tree at the node
1 (which corresponds to cluster C1) and w.l.o.g. assume that the other clusters
are labelled according to their distance from the root, i.e., for 1 ≤ i < l ≤ k the
length of a shortest 1-i-path is at most the length of a shortest 1-l-path in H(C̃).
For each i = 1, …, k we denote the index sets of units that are non-integrally
or integrally assigned to cluster Ci by

F(i) := {j : 0 < ξ̃i,j < 1}   and   I(i) := {j : ξ̃i,j = 1},

respectively. Now, we construct an integer clustering C* ∈ {0, 1}^{k×m} in which
all integral assignments are preserved, i.e., ξ*i,j := ξ̃i,j for ξ̃i,j ∈ {0, 1}. The
remaining components will be determined successively for each cluster.
Let us begin with C*1. We round up fractional assignments ξ̃1,j, j ∈ F(1),
successively as long as this is allowed by the upper bound κ1. More precisely,
let

j* := max{ j ∈ F(1) : ∑_{r∈F(1), r≤j} ωr ≤ κ1 − ∑_{r∈I(1)} ωr }

if the set on the right-hand side is non-empty, and j* := 0 otherwise. With

ξ*1,j := 1 if j ≤ j*, and ξ*1,j := 0 otherwise,

for every j ∈ F(1), it then follows that |∑_{j=1}^m ξ*1,j ωj − κ1| < max_{j=1,…,m} ωj.
Now let i0 ≥ 2 and assume that we have determined C*i appropriately for
every i < i0. Let xj0 ∈ X be the predecessor of the node i0 in the rooted tree
H(C̃). By assumption, the assignment ξ*i,j0 of unit xj0 to cluster Ci has already
been determined for every i < i0. We then set

ξ*i0,j0 := 1 if ξ*i,j0 = 0 for all i < i0, and ξ*i0,j0 := 0 otherwise.    (4)
In analogy to the first step, we define

j** := max{ j ∈ F(i0) \ {j0} : ∑_{r∈F(i0), r≤j} ωr ≤ κi0 − ∑_{r∈I(i0)} ωr − ξ*i0,j0 ωj0 }

if the set on the right-hand side is non-empty, and j** := 0 otherwise, and set

ξ*i0,j := 1 if j ≤ j**, and ξ*i0,j := 0 otherwise,

for every j ∈ F(i0) \ {j0}.
Every point that is fractionally assigned in C̃ is assigned either to its predecessor in H(C̃) or to exactly one successor by (4). Thus, it holds that C* ∈ BCIε.
As supp(C*i) ⊂ supp(C̃i), the already obtained generalized Voronoi diagram
remains feasible for C*.
Hence, under the additional rationality assumptions, we can solve the linear
program (P) and perform the successive rounding in polynomial time.
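The rounding in the proof exploits the tree structure of H(C̃). As a rough illustration only, the following Python sketch performs a simplified greedy rounding that keeps all integral assignments and gives each fractional unit to a fractional cluster with the largest remaining capacity; it ignores the tree ordering and therefore does not attain the exact guarantee of Theorem 5.

import numpy as np

def greedy_round(xi, omega, kappa):
    """Simplified stand-in for the successive rounding of Theorem 5."""
    k, m = xi.shape
    rounded = (xi > 1 - 1e-9).astype(float)   # keep integral assignments
    remaining = kappa - rounded @ omega       # capacity left per cluster
    for j in range(m):
        if rounded[:, j].sum() == 0:          # unit j is fractionally assigned
            frac = np.flatnonzero(xi[:, j] > 1e-9)
            i = frac[np.argmax(remaining[frac])]
            rounded[i, j] = 1.0
            remaining[i] -= omega[j]
    return rounded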
Note that the bound of Theorem 5 is asymptotically worst-case optimal
(consider, e.g., m = 1, ω1 = k, κi = 1 for all i, and let k → ∞).
As in [15], we may also consider weakly balanced clusterings where lower and
upper bounds κi⁻, κi⁺ ∈ R≥0 for the cluster weights are provided. Of course, a
minimizer C* over the polytope of such weakly balanced clusterings is a minimizer of (P) when setting κi := ω(C*i) for every i. Hence, optimal weakly
balanced clusterings still yield supporting generalized Voronoi diagrams. Unfortunately, the converse is not true in general; see [15] for an example.
Naturally, we prefer generalized Voronoi diagrams that support a clustering
over diagrams that are only feasible, as they exclude the possibility of points
lying in a diagram's region only “by coincidence”. At the same time, we prefer
clusterings with only few fractional assignments, as provided by Theorem 5. As
a consequence of Corollary 3, both preferences coincide whenever the optimum
of (P) is unique. In the case X = R^n, this can be guaranteed by an appropriate
choice of the structural parameters D, h, S. We first show that under mild
assumptions the generalized Voronoi cells behave properly.
Lemma 6. Let X = R^n, let D := (d1, …, dk) be a family of metrics induced by
strictly convex norms, h : R → R injective, S := (s1, …, sk) ∈ (R^n)^k such that
si ≠ sl for i ≠ l, and M ∈ R^k.
Then for the generalized Voronoi diagram P w.r.t. F(D, h, S, M) we have
int(Pi) ∩ int(Pl) = ∅ whenever i ≠ l.
Proof. W.l.o.g. consider the cells P1 and P2. Let B1, B2 ⊂ R^n be the unit balls
of the norms that induce d1 and d2, respectively. Furthermore, denote by ‖·‖Bi,
i = 1, 2, the corresponding norms, i.e., di(x, 0) = ‖x‖Bi = min{ρ ≥ 0 : x ∈ ρBi}
for every x ∈ R^n, i = 1, 2.
Suppose that there exist x0 ∈ R^n and δ > 0 such that x0 + δB ⊂ P1 ∩ P2,
where B denotes the Euclidean unit ball. This means we have

h(‖x − s1‖B1) − h(‖x − s2‖B2) = µ2 − µ1    (5)

for all x ∈ x0 + δB. W.l.o.g. let s1, s2 and x0 be affinely independent.
Next, let a ∈ R^n \ {0} such that

H≤(a, aᵀx0) := {x ∈ R^n : aᵀx ≤ aᵀx0}

is a halfspace that supports s1 + ‖x0 − s1‖B1 B1 in x0.
If H≤(a, aᵀx0) does not support s2 + ‖x0 − s2‖B2 B2 in x0, it follows that there
exists z ∈ x0 + δB with ‖z − s1‖B1 = ‖x0 − s1‖B1 and ‖z − s2‖B2 ≠ ‖x0 − s2‖B2.
As h is injective, this implies that (5) does not hold for z, a contradiction.
Hence, H≤(a, aᵀx0) must support s2 + ‖x0 − s2‖B2 B2 in x0.
Now there exist λ > 1 and ν ∈ R such that x1 := s1 + λ(x0 − s1), x2 :=
s2 + ν(x0 − s2) ∈ int(x0 + δB) and ‖x1 − s1‖B1 = ‖x2 − s1‖B1. Furthermore,
due to the affine independence of s1, s2 and x0 we have x1 ≠ x2.
As H≤(a, aᵀx0) supports s1 + ‖x0 − s1‖B1 B1 in x0, it follows that H≤(a, aᵀx1)
supports s1 + ‖x1 − s1‖B1 B1 in s1 + (‖x1 − s1‖B1 / ‖x0 − s1‖B1)(x0 − s1) = x1.
Analogously, H≤(a, aᵀx2) supports s2 + ‖x2 − s2‖B2 B2 in x2. By the same
argumentation as before we see that H≤(a, aᵀx2) must also support
s1 + ‖x2 − s1‖B1 B1 = s1 + ‖x1 − s1‖B1 B1 in x2 (as otherwise we find a point
contradicting (5)).
Hence, s1 + ‖x1 − s1‖B1 B1 is supported in x1 and x2 by the halfspaces
H≤(a, aᵀx1) and H≤(a, aᵀx2), respectively. This contradicts the strict convexity of B1.
In the situation of Lemma 6 we see (as a generalization of [15, Lemma 4.1])
that a minimal perturbation of the sites suffices for (P) to have a unique optimum. For the proof we need the notion of cyclic exchanges. Consider a sequence

(i1, xj1, i2, xj2, …, ir, xjr)

of pairwise distinct cluster indices i1, …, ir ∈ {1, …, k} and pairwise distinct
points xj1, …, xjr. We define the cyclic exchange Z := (ζi,j)_{i=1,…,k; j=1,…,m} ∈ R^{k×m} by

ζ_{il, jl} := −1/ω_{jl},   ζ_{i(l−1), jl} := 1/ω_{jl}

for l = 1, …, r, reading indices modulo r, and ζi,j := 0 in the remaining components. Observe that for any C1, C2 ∈ BC, the difference C1 − C2 can be
decomposed into a sum of finitely many scaled cyclic exchanges.
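A cyclic exchange is straightforward to materialize as a k × m matrix. The following sketch follows the definition above; the index-pair encoding of the sequence is chosen merely for illustration.

import numpy as np

def cyclic_exchange(seq, omega, k, m):
    """Z for a sequence (i_1, x_{j_1}, ..., i_r, x_{j_r}) given as pairs
    seq = [(i_1, j_1), ..., (i_r, j_r)]; indices are read modulo r."""
    Z = np.zeros((k, m))
    r = len(seq)
    for l, (i, j) in enumerate(seq):
        Z[i, j] -= 1.0 / omega[j]                    # zeta_{i_l, j_l} := -1/omega_{j_l}
        Z[seq[(l - 1) % r][0], j] += 1.0 / omega[j]  # zeta_{i_{l-1}, j_l} := +1/omega_{j_l}
    return Z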
Theorem 7. Let X = R^n, let D := (d1, …, dk) be a k-tuple of metrics induced by
strictly convex norms, M ∈ R^k, and h : R → R injective.
Then for all S := (s1, …, sk) ∈ (R^n)^k and ε > 0 there exists an S̃ :=
(s̃1, …, s̃k) ∈ (R^n)^k with ∑_{i=1}^k ‖si − s̃i‖ < ε such that for (D, h, S̃) the linear
program (P) has a unique optimizer.
Proof. Suppose that the solution of (P) for F(D, h, S, M) is not unique.
Then there exist C ∈ BC, a cyclic exchange Z and α > 0 such that C ± αZ ∈
F, where F denotes the optimal face of (P). W.l.o.g. let Z correspond to the
sequence (1, x1, 2, x2, …, r, xr). Since the values of the objective function of (P)
are the same, it follows that

∑_{l=1}^{r−1} ( h(dl(sl, xl)) − h(dl(sl, xl+1)) ) + h(dr(sr, xr)) − h(dr(sr, x1)) = 0.

Set

c := ∑_{l=2}^{r−1} ( h(dl(sl, xl)) − h(dl(sl, xl+1)) ) + h(dr(sr, xr)) − h(dr(sr, x1)).

In particular, c does not depend on s1. It follows that the set of sites s̃1 for which
Z is orthogonal to the objective function vector of (P) is described by the
equation

h(d1(x1, s̃1)) − h(d1(x2, s̃1)) = −c.

With x1 and x2 interpreted as sites, this is the intersection of their corresponding
cells of the generalized Voronoi diagram w.r.t. F((d1, d1), h, (x1, x2), (c, 0)). By
Lemma 6, this set has an empty interior. Together with the fact that there can
only be a finite number of cyclic exchanges, the claim follows.
Finally, similarly to [20], we may derive the following continuous version of
Corollary 2 by considering a sequence of refining discretizations.
Corollary 8. Let X = R^n, let Ω be a finite continuous measure with Ω(X) =
∑_{i=1}^k κi, and let measurable functions f̂i : X → R for i = 1, …, k be given. Further, assume that for 1 ≤ i < l ≤ k and every c ∈ R it holds that Ω({x ∈ R^n :
f̂i(x) − f̂l(x) = c}) = 0.
Then any partition of X into measurable sets Ci with Ω(Ci) = κi is optimal
w.r.t. ∑_{i=1}^k ∫_{Ci} f̂i(x) Ω(dx) if and only if there exist µ1, …, µk ∈ R such that
with Pi := {x ∈ X : f̂i(x) + µi ≤ f̂l(x) + µl ∀l} it follows that Pi = Ci up to
sets of Ω-measure 0 for every i.
3 Classes of Generalized Voronoi Diagrams
Our general approach can be summarized as follows: we first choose D and h
depending on the application. How this choice is made depends on which
properties of the cells are desired; see Sections 3.1 to 3.3 for examples and also
Table 4. Then we make an appropriate, possibly optimal choice of sites S. In
Euclidean space, for instance, we can optimize over S in order to approximate
maximally consolidated clusterings by invoking results of [14]. Over a discrete
space, we may heuristically search for sites that minimize the deviation of cluster
weights that results from rounding a fractional clustering. In the case of
anisotropic diagrams, we will use the centers of the current districts as sites in
order to obtain similar new districts.
For any choice of S, we obtain a solution C and the feasibility parameter
M from (P) and (D), respectively. After successive rounding, we then obtain a clustering together with the feasible generalized Voronoi diagram w.r.t.
F(D, h, S, M).
We will now briefly discuss appropriate choices for D and h and illustrate
them by a simple example. In particular, we show how these choices relate to
clusterings with certain characteristics.
An example. Figure 3 shows an instance of constrained geometric clustering.
Here, the space X (gray filled area) is a subset of R² that is not simply connected.
The set X consists of 500 points (blue dots), each of weight 1. We want to find
a clustering of k = 5 clusters, each of weight 100. Also, a “distance graph” G
(black lines) is given whose edges are weighted by the Euclidean distances of
their nodes. For this example, G was constructed via a Delaunay triangulation of
X, dropping edges outside X. This graph encodes an intrinsic connectivity
structure for X. Finally, the black-framed white squares depict the sites S,
which we assume to be pre-determined in this example. Figures 4 to 8 show the
clusterings and supporting diagrams obtained for the different choices of D and
h via the methodology described above.

Figure 3: Exemplary constrained clustering instance of Section 3.
3.1 Euclidean Space
First, we consider the case that all metrics in D are Euclidean.

Additively Weighted Voronoi Diagrams. An obvious choice is h = id, i.e.,
fi(x) := ‖x − si‖ + µi is the Euclidean distance to the respective cluster's site,
shifted by µi. Solving (P) for a general instance then means searching for a
fractional clustering minimizing the weighted average Euclidean distance to the
assigned sites. All clusterings in the relative interior of the optimal face of (P)
are supported by additively weighted Voronoi diagrams; for results on the latter
see [4], [50, Chapter 3.1.2], [7, Chapter 7.4]. Here, Voronoi regions are separated
by straight lines or hyperbolic curves and are in particular star-shaped w.r.t.
their respective sites; see Figure 4. If X is convex, this yields connected regions.
Of course, as the above example shows, this does not hold for general X.
Power Diagrams. Taking squared Euclidean distances, i.e., fi(x) := ‖x − si‖² + µi, yields power diagrams (see [6], [50, Chapter 3.1.4]). Figure 5 shows this
case for our example. Here, regions are separated by straight lines perpendicular
to the line spanned by the respective sites. In particular, this yields convex
and therefore connected regions whenever X is a convex subset of R^n. Again,
connectivity may be lost when X is non-convex, as is the case in this example.
Solving (P) may be interpreted as minimizing the weighted squared error when
the clusters are represented by their respective sites.
As already pointed out, power diagrams have been thoroughly investigated
for the example of farmland consolidation ([13], [11]), and a comprehensive theory on their algorithmic treatment has been derived ([15], [14], [9]).
Figure 4: Clustering w.r.t. f̂i(x) = ‖si − x‖.    Figure 5: Clustering w.r.t. f̂i(x) = ‖si − x‖².
Let us close this subsection by briefly recapitulating a result from [15] that
deals with optimal choices of the sites in the strongly balanced case. Recall that
a feasible power diagram is called centroidal if

si = c(Ci) := (1/κi) ∑_{j=1}^m ξi,j ωj xj

for i = 1, …, k. The following result characterizes centroidal power diagrams
as local maximizers of the function φ : BC → R defined by

φ(C) := ∑_{i=1}^k κi ‖c(Ci)‖².

Here, some trivial cases of clusters have to be excluded: a clustering C ∈ BC is
called proper if for all i ≠ l it holds that

|supp(Ci)| = |supp(Cl)| = 1 ⇒ supp(Ci) ≠ supp(Cl).
Theorem 9 ([15, Theorem 2.4]). Let C ∈ BC be proper. Then there exists a
centroidal power diagram that supports C if and only if C is the unique optimum
of (P) and a local maximizer of φ.
Furthermore, finding the global maximum of φ over BC is equivalent to
optimizing

min_{C∈BC} ∑_{i=1}^k ∑_{j=1}^m ξi,j ωj ‖xj − c(Ci)‖².    (6)

This may be read as minimizing an average variance of the clusters, also called the
moment of inertia.
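For a given fractional clustering, the centroids c(Ci) and the objective (6) can be evaluated directly; the following numpy sketch assumes the points are given as an m × d array and is an illustration only.

import numpy as np

def moment_of_inertia(xi, omega, points, kappa):
    """Objective (6): sum_i sum_j xi_ij * omega_j * ||x_j - c(C_i)||^2 with
    centroids c(C_i) = (1/kappa_i) * sum_j xi_ij * omega_j * x_j."""
    w = xi * omega                                 # k x m matrix of xi_ij * omega_j
    centroids = (w @ points) / kappa[:, None]      # k x d centers of gravity
    diffs = points[None, :, :] - centroids[:, None, :]
    return float((w * (diffs ** 2).sum(axis=2)).sum())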
Note that φ actually depends only on the centers of gravity rather than
on the clusters themselves. By definition, those centers are given by a linear
transformation of BC into the set of all gravity centers in R^{kd}. Optimizing φ
over BC is then equivalent to maximizing an ellipsoidal norm in R^{kd} over the
set of gravity centers. One can now approximate this norm in the comparably
low-dimensional space R^{kd} by a polyhedral norm. This yields an approximation
algorithm that solves a typically manageable number of linear programs of type
(P); see [14], [15].
Another possibility to derive local optima of φ is a balanced variant of the
k-means algorithm (see [9]).
3.2 Anisotropic Diagrams with Ellipsoidal Norms
Using the Euclidean norm obviously cannot take into account the shape of X or
any other information about the desired extents or orientations of the resulting
clusters. One possibility to include such information is to use anisotropic
diagrams.
While we could, in principle, employ arbitrary norms, in the following we
consider D to be induced by ellipsoidal norms. So, let Mi ∈ R^{n×n} be
symmetric positive definite matrices defining the ellipsoidal norms via

‖x‖Mi := √(xᵀ Mi x),

i = 1, …, k. In our main application, the matrices are chosen so as to obtain
clusters similar to a pre-existing structure (cf. Section 4.4).
As in the Euclidean case, we consider the transformation functions h = id
and h = (·)². For h = id we obtain anisotropic Voronoi diagrams. These have
already been applied in the field of mesh generation ([42], [18]) on a Riemannian
manifold in R^n provided with a metric tensor M : X → M^{n×n}. Hence, the
functions fi can be seen as local approximations of the geodesic distances w.r.t.
that manifold. Even without additive weights, the resulting regions need not
be connected.
We will refer to the case h = (·)² as anisotropic power diagrams. In [1] these
were successfully used for the reconstruction of polycrystalline structures, where
information about volumes as well as moments of crystals is given. Regions
are separated by straight lines or quadratic curves. Figures 6 and 7 show the
case of additively weighted anisotropic Voronoi diagrams and anisotropic power
diagrams, respectively. The dotted ellipses depict the unit balls of the respective
norms.
3.3 Shortest-Path Diagrams
Even in the anisotropic case, the diagrams considered so far might fail to capture
intrinsic relations of the points in X. In our application of electoral district
design, this occurs because points are only representatives of their municipalities'
regions; information about neighborhood relations is thus lost (cf. Section 4).
In such cases, generalized Voronoi diagrams on an edge-weighted graph
G = (X, E, δ) can be preferable.

Figure 6: Clustering w.r.t. f̂i(x) = √((x − si)ᵀ Mi (x − si)).    Figure 7: Clustering w.r.t. f̂i(x) = (x − si)ᵀ Mi (x − si).
Figure 8 shows the result if X is the discrete space of all elements in X together with the metric induced by G (see Section 1). Taking fi(x) := dG(si, x) +
µi, this means that the weighted average distance of elements to their assigned
sites in the graph is minimized. We will refer to this case as shortest-path diagrams. Of course, if G is complete and edge weights are the Euclidean distances
between the vertices, this again results in additively weighted Voronoi diagrams.

Figure 8: Clustering w.r.t. f̂i(x) = dG(si, x).    Figure 9: Clustering w.r.t. f̂i(x) = dG(si, x)².
In general, there are two main motivations to use a discrete space (X, dG).
The first, obvious reason is applications that live on some sort of graph. For
instance, [49] proposes to use Voronoi diagrams on discrete spaces for the representation of service areas in cities, as the Euclidean norm is often unsuitable for
approximating distances in urban traffic networks. Second, there are applications that require clusters to be connected in some sense. Often, of course, this
connectivity is already established when clusters are supported by a diagram
in X with connected regions. However, as has been observed before, this is
not always the case with the diagrams proposed so far, in particular since the
underlying space might not be convex.
We say that a fractional clustering C ∈ BC is S-star-shaped for sites S =
(s1 , . . . , sk ) ∈ X k , if for every i ∈ {1, . . . , k} and x ∈ supp(Ci ) it follows that
v ∈ supp(Ci ) for every v on a shortest si -x-path in G. By the following result,
clusters that are supported by shortest-path diagrams are S-star-shaped. More
precisely, the Voronoi regions are shortest-path trees rooted at their sites.
Theorem 10. Let (X, dG ) be the metric space induced by a connected and
edge-weighted graph G = (X, E, δ). Let C ∈ BC, S ∈ X k , M ∈ Rk , and define
D := (d1 , . . . , dk ) via di = dG for i = 1, . . . , k.
If the generalized Voronoi diagram P w. r. t. F(D, id, S, M) supports C, then
C is S-star-shaped. In particular, si ∈ supp(Ci ) holds for all i ∈ {1, . . . , k}.
Proof. Let F be the optimal face of (P), then C ∈ relint(F ) holds by Corollary 3.
For some i ∈ {1, . . . , k}, let xj ∈ supp(Ci ) and let si = v1 , v2 , . . . , vt := xj be a
shortest si -xj -path in G.
Suppose that there exists xp such that vl = xp 6∈ supp(Ci ) for some l ∈
{1, . . . , t − 1}. By the feasibility of C there exists r ∈ {1, . . . , k} such that
xp ∈ supp(Cr ).
Due to the choice of xp and xj it follows that ξi,p = 0 < 1, ξi,j > 0,
ξr,j ≤ 1 − ξi,j < 1 and ξr,p > 0.
Let Z ∈ Rk×m be the cyclic exchange for the sequence (i, xj , r, xp ) (as defined
in Section 2.3). Then there exists some α > 0 such that C + αZ ∈ BC. If α
is sufficiently small C + αZ has (at least) one non-zero component more than
C. Since C ∈ relint(F ), it follows that C + αZ 6∈ F . Thus, 0 < dG (si , xp ) −
dG (si , xj ) + dG (sr , xj ) − dG (sr , xp ).
Now, by the triangle inequality, dG (sr , xj ) ≤ dG (sr , xp ) + dG (xp , xj ). As xp
lies on a shortest si -xj -path, it furthermore holds that dG (si , xj ) = dG (si , xp ) +
dG (xp , xj ).
Together, this yields dG (sr , xj ) − dG (si , xj ) ≤ dG (sr , xp ) − dG (si , xp ), a
contradiction.
As supp(Ci ) 6= ∅ for i = 1, . . . , k, this in particular implies si ∈ supp(Ci ).
The requirement that C ∈ BC lies in the relative interior of the optimal
face of (P) is crucial for shortest-path distances to preserve star-shapedness. In
[59], a Lagrangian relaxation model of an integer version of (P) for shortest-path
distances was considered and connectivity of the resulting clusters was demanded,
while it was pointed out in [54] that this may not hold whenever non-unique
optima appear. Theorem 10 clarifies the situation: in view of [54], what is needed
is precisely the requirement that the solution lies in the relative interior of the
optimal set.
Figure 10: Intersecting shortest paths: a star graph with center x2, leaves x1 = s1, x3 and x4 = s2, and edge lengths a = δ({x1, x2}), b = δ({x2, x3}), c = δ({x2, x4}).
Let us now consider the example of Figure 10 with a := 1, b := 1, c := 2
and the resulting constrained clustering instance for Ω := 1 (all weights equal
to 1) and K := (2, 2)ᵀ. We obtain the two clusterings C(a), C(b) ∈ BC via

C1(a) := (1, 1/2, 1/2, 0),   C2(a) := (0, 1/2, 1/2, 1)

and

C1(b) := (1, 0, 1, 0),   C2(b) := (0, 1, 0, 1).
The generalized Voronoi diagram P* w.r.t. f1(x) = dG(s1, x) + 1 and f2(x) =
dG(s2, x) (i.e., h = id and M = (1, 0)ᵀ) consists of the cells P1* = {x1, x2, x3}
and P2* = {x2, x3, x4}. Thus, it is feasible for both C(a) and C(b). Hence, both
are minimizers of (P) by Theorem 1. However, only C(a) is supported by
P* and S-star-shaped, while C(b) contains a disconnected cluster.
More generally, this happens whenever shortest paths intersect, and it can
have a dramatic effect on integer assignments. In fact, in order to preserve
connectivity after rounding, whole fractionally assigned branches of the shortest-path trees might have to be assigned to one of their supporting clusters. Of
course, this results in greater deviations of the cluster weights. For our running
example, Figure 11 depicts the points that were originally fractionally
assigned to both the blue and the green cluster and are fully assigned
to the green cluster in the integer solution.

Figure 11: Rounded units in the case f̂i(x) = dG(si, x).
A natural idea to overcome this effect, as well as to obtain more consolidated
clusters, is to imitate the idea of squaring the distances (which led to power
diagrams in the Euclidean space) in the discrete space (X, dG).

Figure 12: Non-connected cluster in the case f̂i(x) = dG(si, x)².
Let us once more consider the previous example. This time, however, we
take the generalized Voronoi diagram P′ w.r.t. f1(x) = dG(s1, x)² + 4 and
f2(x) = dG(s2, x)² (i.e., h = (·)² and M = (4, 0)ᵀ). It follows that P1′ = {x1, x3}
and P2′ = {x2, x4}. Thus, P′ supports C(b) but is not even feasible for C(a). In
fact, C(b) is the unique minimizer of (P) for this choice of h and does not yield
connected clusters. Figure 12 demonstrates the same effect for our running
example: here, the single yellow unit in the center is separated from its
respective cluster.
Unfortunately, the following theorem shows that this is a general effect.
In fact, for any transformation function h that is not affine, clusters can be
disconnected despite being supported by a corresponding diagram. Thus, if
connectivity is to be guaranteed a priori, this dictates the choice of shortest-path diagrams in our approach.
Theorem 11. Let (X, dG) be the metric space induced by a connected and
edge-weighted graph G. Let C ∈ BC, S ∈ X^k, and M ∈ R^k, and define D =
(d1, …, dk) via di = dG for i = 1, …, k. Furthermore, let h : R≥0 → R be
continuous.
If h(x) = α · x + β for some α ∈ R≥0, β ∈ R, and the generalized Voronoi
diagram w.r.t. F(D, h, S, M) supports C, then C is S-star-shaped.
If h is any continuous function not of the above type, this is not true in
general.
Proof. The first claim follows by replacing dG with α · dG + β in the proof of
Theorem 10. Note that in the case α = 0 the whole set BC is optimal, which
causes all components of a solution from the relative interior to be strictly
positive.
For the second claim, let some continuous function h : R≥0 → R be given.
Consider the graph from Figure 10 with X = {x1, x2, x3, x4}, edges {{x1, x2},
{x2, x3}, {x2, x4}} and edge weights δ({x1, x2}) = a, δ({x2, x3}) = b and
δ({x2, x4}) = c for some a, b, c ∈ R>0. Furthermore, let ω1 = ω2 = ω4 = 1,
ω3 = 3 and κ1 = κ2 = 3. By Corollary 3 and since BC ≠ ∅, there exists a clustering C ∈ BC that is supported by the generalized Voronoi diagram P w.r.t.
F(D, h, S, M) for some M ∈ R².
Now suppose that C is S-star-shaped. Together with the choice of the
weights this implies {x2, x3} ⊂ supp(C1) ∩ supp(C2) ⊂ P1 ∩ P2. Hence, one
gets h(dG(s1, xj)) + µ1 = h(dG(s2, xj)) + µ2 for j = 2, 3. Subtracting the
equality for j = 3 from the one for j = 2 and inserting the shortest path
lengths yields

h(a) − h(a + b) = h(c) − h(c + b).

Setting h̃ := h − h(0), taking the limit c → 0 and using the continuity of h, this
implies

h̃(a + b) = h̃(a) + h̃(b).

Since a, b ∈ R>0 are arbitrary and h̃ is continuous, it readily follows that h̃ is
linear on R≥0 and thus h is of the form h(t) = α · t + β.
In order to see that α ≥ 0, it is sufficient to consider the example X =
{x1, x2}, si = xi, ωi = 1, κi = 1 for i = 1, 2, and a single edge {x1, x2} of
arbitrary positive length. If α < 0, then the optimal clustering is supp(C1) =
{x2} and supp(C2) = {x1}, which contradicts the claim of Theorem 10.
4 Application to the Electoral District Design Problem
We will now apply our approach to the electoral district design problem. We
show the effect of using power diagrams, anisotropic power diagrams and shortest-path diagrams for the example of the Federal Republic of Germany.
4.1 Electoral District Design
Despite the differences between voting systems, the issue of designing electoral districts can be stated in a common way suitable for most of them. A state consists
of basic units such as municipalities or smaller juridical entities. Units have
different weights; usually, and in the following, a unit's weight is its number of
eligible voters. Units are then to be partitioned into a given number of districts
of (approximately) equal weight. Hence, we are facing a constrained clustering
problem as defined in Section 2.1.
Usually, districts are required to be “compact” and “contiguous” (cf. [51]).
Both are juridical demands that are not formally defined and hence require
proper mathematical modelling. How to define a measure for the “compactness”
of districts has, in fact, been widely discussed in the literature (e.g., [58], [47],
[35], [2]). One widely accepted measure ([33], [28]) is the moment of inertia as
defined by (6), where each municipality is modelled as a point in the Euclidean
plane.
Contiguity usually requires that the area belonging to a district's municipalities is connected. This can be modelled by the adjacency graph G with
nodes corresponding to the units and edges between two units whenever they
share a common border (which can be crossed). Connectivity of clusters is then
defined by demanding that each induced subgraph G[supp(Ci)] is connected.
The edges of G can be weighted, for example, by driving distances between the
corresponding municipalities' centers.
Due to census development, electoral districts have to be adapted regularly.
Therefore, a further requirement may be to design districts that are similar
to the existing ones. Let C^o be the integer clustering that corresponds to the
original districts and C* a newly created integer clustering. One may measure
the difference between the two district plans by the ratio of voter pairs that
used to share a common district but are now assigned to different ones. With
A(C^o, C*) := {(j, r) : 1 ≤ j < r ≤ m ∧ ∃i : ξ^o_{i,j} = ξ^o_{i,r} = 1 ∧ ∀i : ξ*_{i,j} · ξ*_{i,r} = 0},

this is, more formally, given by

( ∑_{i=1}^k (ω(C_i^o) choose 2) )⁻¹ · ∑_{(j,r) ∈ A(C^o, C*)} ωj · ωr.    (7)

4.2 Dataset Description
According to the German electoral law [25], there is a total of 299 electoral districts,
which are apportioned to the federal states according to their population. As districts
do not cross state borders, each state must be considered independently. A
district's size is measured by its number of eligible voters ([24]). It should
not deviate from the federal average by more than 15%; a deviation exceeding
25% enforces a redesign of the districts. As far as possible, municipal borders must
be conserved for administrative and technical reasons. Furthermore, districts
are supposed to yield connected areas.
For our application, the data from the German federal election held on September 22, 2013 was used. Statistics about the numbers of eligible voters were
taken from [26]. Geographic data for the municipalities and the election districts
of 2013 were taken from [23] and [27], respectively. Greater cities which on
their own constituted one or several election districts in 2013 were neglected
in our computations, as any proper district redesign would imply splitting these
municipalities into smaller units and would thus require data on a more
granular basis. For the same reason, the city states (Berlin, Bremen, Hamburg)
were not taken into consideration.
Figure 1a depicts the deviation of the cluster sizes of the 2013 election from
the federal average. In the 2013 elections, 57 of the 249 districts
that were considered had a deviation of more than 15%. The overall average
deviation is 9.5%. The maximum deviation of 25.15% is attained by a district
in the state of Bavaria.
We have identified each municipality by a point in the plane given by its
geographic coordinates in the EPSG 3044 spatial reference system ([57]). For
the shortest-path clustering approach, the graph G has an edge between
two municipalities whenever their regions share a border. The edge lengths were
taken as driving distances obtained from the MapQuest Open Geocoding API
([43]).
In the remainder of this section, we state the practical results for all of
Germany and for various individual states. The latter are typical examples, i.e.,
the individual results for the other states are very similar; see Table 5 and
http://www-m9.ma.tum.de/material/districting/.
4.3 Power Diagrams
As pointed out in Section 3.1, clusterings with minimal moment of inertia are
supported by centroidal power diagrams. Thus, power diagrams yield highly
consolidated district plans.
Squared Euclidean distances were already used for the problem of electoral
district design; see e.g. [33] and [34]. Centroidal power diagrams have been
used by [28], who presented a gradient descent procedure similar in spirit to the
balanced k-means approach ([9]).
In our approach, first a fractional clustering supported by a centroidal
power diagram was created. Here, the sites were determined as approximations
of the global optimizers of (6), as proposed in [14]. Second, the fractionally
assigned units were rounded optimally with respect to the resulting balancing
error.
As it turns out, non-connected districts do indeed sometimes occur. This is
due to the non-convexity of the states in general and particularly due to “holes”
in the state areas resulting from excluded municipalities or city states. In many
cases, this may not be regarded as a problem. However, since we insisted on connected districts, we applied some post-processing. After running the approach
as described in Section 3.1, the resulting districts were checked for connectivity.
This was done in an automated manner using the adjacency graph of neighboring units and standard graph algorithms. If unconnected districts were detected,
the program (P) was rerun while preventing or forcing municipalities to be assigned to a certain district by constraining the corresponding decision variables
to 0 or 1, respectively. For example, if a district had been split into two parts by
a geographical barrier such as a lake or an indentation in the state border, the
smaller part (which mostly consisted of a single municipality) was excluded
from being assigned to that district. This was done comfortably using a graphical user interface and required only a few, if any, iterations per state. For the
considered data, a total of 51 (0.46%) municipalities were preassigned in order
to establish connectivity.
Figure 13 shows the original districts of the state of Bavaria from the 2013
elections compared to the results of the power diagram approach. Figure 13b
furthermore depicts the resulting polyhedral power diagram regions. Here, three
units had to be fixed in order to establish connectivity.
4.4 Anisotropic Power Diagrams
Next, we show how to use anisotropic power diagrams in order to obtain clusters
that are similar to those of the 2013 election.
As in [1], a principal component analysis was performed in order to determine
a local ellipsoidal norm for each district. Let C^o := (ξ^o_{i,j})_{i=1,…,k; j=1,…,m} be the integer
clustering encoding the original districts of some state. For each i = 1, …, k,
Figure 13: Districts for the state of Bavaria: (a) districts from the 2013 elections; (b) districts via power diagrams. Here and in the following, we use a six-coloring of the districts for easy distinguishability.
the covariance matrix Vi is computed as

Vi := ∑_{j=1}^m (ξ^o_{i,j} ωj / ω(C_i^o)) (xj − c(C_i^o)) (xj − c(C_i^o))ᵀ.

Using a singular value decomposition, we obtain an orthogonal matrix Q ∈ O(2)
and σ1^(i), σ2^(i) > 0 such that

Vi = Q diag(σ1^(i), σ2^(i)) Qᵀ.

We then set

Mi := Q diag((σ1^(i))⁻¹, (σ2^(i))⁻¹) Qᵀ

in order to obtain an ellipsoidal norm as described in Section 3.1. With the
centroids of C^o as starting sites, we performed a balanced k-means algorithm
(see [9]) in order to obtain centroidal anisotropic power diagrams.
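The following numpy sketch illustrates this construction for a single district; it uses an eigendecomposition of the symmetric positive definite matrix Vi, which is equivalent to the singular value decomposition used above, and the array layout is an assumption for illustration.

import numpy as np

def local_norm_matrix(points, weights):
    """PCA-based M_i for one district: points is an m_i x 2 array of its units,
    weights the corresponding voter numbers; ||x||_{M_i} = sqrt(x^T M_i x)."""
    w = weights / weights.sum()            # xi^o_{i,j} * omega_j / omega(C_i^o)
    c = w @ points                         # centroid c(C_i^o)
    d = points - c
    V = (w[:, None] * d).T @ d             # weighted covariance matrix V_i
    sigma, Q = np.linalg.eigh(V)           # V_i = Q diag(sigma) Q^T
    return Q @ np.diag(1.0 / sigma) @ Q.T  # M_i := Q diag(sigma^{-1}) Q^T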
As in the case of power diagrams, and for the same reasons, not all of the
resulting clusters were connected. Applying the same post-processing, this could
again be remedied, affecting a total of 33 (0.30%) municipalities.
Figure 14 shows the 2013 elections' districts of the state of Lower Saxony
and the results of the anisotropic power diagram approach. The blue ellipses in
Figure 14b depict the unit balls of the corresponding local cluster norms. For
this state no post-processing was necessary.
Figure 14: Districts for the state of Lower Saxony: (a) districts from the 2013 election; (b) districts via anisotropic power diagrams, where ellipses depict the unit balls of local norms (determined from the districts of 2013).
4.5 Shortest-Path Diagrams
In order to enforce connectivity directly, we apply the shortest-path diagrams of
Section 3.3 w.r.t. the adjacency graph G of neighboring units. Shortest-path
distances have appeared in the context of electoral district design several times
(e.g., [55], [59], [54], [39], [53]). In [52], Voronoi diagrams on a connectivity graph were also considered, but multiplicative rather than additive weights were
used, which led to substantially bigger balancing errors. In [59] and [54],
Lagrangian relaxations were applied, which are naturally closely related to our
methodology.
In our approach, the effect of non-unique optima and therefore of more fractionality could be observed, as predicted in Section 3.3. This was again handled in
a post-processing phase by implementing a slightly more sophisticated
rounding procedure: fractional components were rounded in an iterative
manner while updating the shortest-path distances to the already integrally assigned clusters in each step. In order to determine suitable sites si, i = 1, …, k,
a simple local search w.r.t. the resulting deviation of cluster sizes was performed.
Here, the units closest to the centroids of C^o served as starting sites.
Figure 15 shows the original districts of the state of North Rhine-Westphalia
from the 2013 elections compared to the results via shortest-path diagrams. The
edges in Figure 15b furthermore depict the shortest-path trees connecting the
resulting clusters.
Figure 15: Districts for the state of North Rhine-Westphalia: (a) districts from the 2013 election; (b) districts via shortest-path diagrams.
4.6 Evaluation
We will now summarize the results of our different approaches. See Table 5
in the appendix for a more detailed overview of the results for the individual
German states.
Figure 16: Deviations after applying our methodology (colors as in Figure 1): (a) power diagram approach; (b) anisotropic power diagram approach; (c) shortest-path diagram approach.
As already pointed out, all approaches led to district plans complying with
the German electoral law, i.e., obeying the deviation limits and the connectivity
of districts.
The largest maximal deviations occur for the states of Mecklenburg-Vorpommern (14.71%) and North Rhine-Westphalia (14.34%), both for the anisotropic power diagram approach. However, Table 5 shows that even those deviations are not far off from optimality.

        Districts 2013   Power Diagrams   Anisotropic Power Diagrams   Shortest-Path Diagrams
Avg.    9.52%            2.69%            2.73%                        2.13%
Max.    25.1%            10.60%           14.71%                       9.73%

Table 1: Deviations of district sizes from the federal average size.
In fact, the average district size in Mecklenburg-Vorpommern itself is already
8.9% below the federal average. In North Rhine-Westphalia, the high population density results in units of greater weight and thus greater rounding errors.
As expected, Table 5 shows that states with a finer division into municipalities
generally yield smaller deviations.
Figure 16 depicts the deviations of district sizes for our approaches, and
Table 1 contains the average and maximal values for all our approaches and for the
elections of 2013. While all approaches perform well, the shortest-path diagram
approach is slightly superior. This is not surprising, as the local search in the
shortest-path approach focuses solely on the deviation error.
        Power Diagrams   Anisotropic Power Diagrams   Shortest-Path Diagrams
∆MoI    −11.3%           1.3%                         −0.5%

Table 2: Relative change of the moment of inertia as defined by (6) compared
to 2013.
Table 2 shows the relative change of the total moment of inertia (as defined in
(6)) compared to 2013. According to this measure, power diagrams lead to by
far the best consolidation. Shortest-path diagrams yield slightly more and the
anisotropic power diagrams slightly less consolidated districts. However, recall
that the moment of inertia is measured in Euclidean space, while the anisotropic
power diagrams minimize local ellipsoidal norms. Hence, a fair comparison
should really involve a measure similar to (6) but based on those local norms.
          Power Diagrams   Anisotropic Power Diagrams   Shortest-Path Diagrams
∆Pairs    40.6%            21.4%                        35.4%

Table 3: Total ratio of changed pairs of voters as defined by (7).
In order to compare the obtained districts to those of 2013, the ratios of
changed pairs according to (7) over the districts of all states are shown in Table 3.
Indeed, the anisotropic power diagram approach separates significantly fewer pairs of voters
that used to vote in the same district in the 2013 election.
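For completeness, the following Python sketch evaluates the ratio (7) as reconstructed above by direct enumeration of unit pairs; the O(m²) loop and the integer rounding of the district weights are simplifications for illustration.

import numpy as np
from math import comb

def changed_pairs_ratio(xi_old, xi_new, omega):
    """Weighted share of voter pairs that shared a district in C^o but are
    separated in C^*, following the reconstructed formula (7)."""
    k, m = xi_old.shape
    old = xi_old.argmax(axis=0)   # original district per unit
    new = xi_new.argmax(axis=0)   # new district per unit
    num = sum(omega[j] * omega[r]
              for j in range(m) for r in range(j + 1, m)
              if old[j] == old[r] and new[j] != new[r])
    den = sum(comb(int(round((xi_old[i] * omega).sum())), 2) for i in range(k))
    return num / den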
Figure 17: Districts for the state of Hesse resulting from the different approaches: (a) original districts of the 2013 election; (b) districts via power diagrams; (c) districts via anisotropic power diagrams; (d) districts via shortest-path diagrams. See also Table 5 for corresponding key figures.
Figure 17 shows the results for the state of Hesse for all approaches in direct
comparison; in particular, it illustrates the numbers listed above. The power
diagram districts appear most consolidated, while elongated districts appear in
the shortest-path result. Also, a higher degree of similarity to the original
districts can be recognized for the districts from anisotropic power diagrams.
Finally, concerning the computational running times of our approaches, note
that once the parameters (D, h, S) are determined, a simple linear program (P)
in dimension k × m with k + m constraints and 2km non-zero entries is to be
solved. This, of course, is unproblematic even for fairly large instances (such as,
for example, 10^5 municipalities and 10^3 districts) using state-of-the-art solvers.
When, however, the structural parameters are part of the optimization process, the computational scalability strongly depends on the chosen approach. In
the case of power diagrams, an approximate optimization of (6) (cf. Section 3.1)
also leads to solving a number of linear programs in dimension k × d with a fairly
small number of constraints. However, this means approximately maximizing an
ellipsoidal norm, which is an NP-hard problem with no constant-factor approximation unless P = NP (cf. [16], [17], [12], [15]). Thus, this will be problematic
for huge k. However, as the number of representatives is limited and, particularly, voting is in effect often restricted to subpopulations (like the states
within the Federal Republic of Germany), this remains tractable in practice (as
demonstrated).
In the case of anisotropic power diagrams, the applied balanced k-means
variant reduces to solving (P) a few times.
As we applied a local search over the sites for shortest-path diagrams, the
running times there are, of course, highly dependent on the size of the
neighborhoods that are evaluated. In our computations, we restricted a site
vector's neighborhood to single site exchanges with the respective 50 closest
units. If the local search is performed separately for each cluster, we can
again expect good scalability in terms of k.
For our data sets, the computations ran on a standard PC within seconds
for anisotropic power diagrams, within a few hours for (approximately) centroidal
power diagrams, and in the range of several hours for the shortest-path approach. In any case, for our application of electoral districting the computation
times were not critical.
5 Conclusion
We presented a unifying approach to constrained geometric clustering in arbitrary metric spaces and applied three specializations to the electoral district
design problem. We used a relation between constrained fractional clusterings
and additively weighted generalized Voronoi diagrams which is based on LP-duality. In particular, we obtained clusterings with prescribed cluster sizes that
are embeddable into additively weighted generalized Voronoi diagrams. A short
discussion of typical classes of diagrams as well as details for the special cases
of power diagrams and shortest-path diagrams on graphs were provided.
The results for the example of electoral district design in Germany prove
very favorable with respect to the deviations of the obtained integer clusterings
from the prescribed cluster sizes. In particular, we pointed out how the choice
of a class of diagrams can be made for the different optimization criteria. Table 4 summarizes our observations by providing an informal “rule of thumb”
for the choice of a diagram type: if particularly consolidated districts are desired, power diagrams yield the best results. As they further produce convex
cells, the resulting districts are likely to be connected whenever the units can
be approximated well by points in the plane and the state area is close to convex. If districts are preferred that are similar to existing structures, anisotropic
power diagrams perform very well; due to their relation to power diagrams,
they also have favorable characteristics w.r.t. consolidation. Connectivity is
guaranteed a priori by shortest-path diagrams. Note that with edge distances
obtained from anisotropic norms, conservation of existing structures may here
be achieved, too. Thus, our methodology is capable of satisfying different requirements that may occur for political reasons.

                                      Power Diagrams   Anisotropic Power Diagrams   Shortest-Path Diagrams
consolidation                         ⊕⊕               ⊕
connectivity                          ⊕                                             ⊕⊕
conservation of existing structure                     ⊕⊕                           ⊕

Table 4: Informal “rule of thumb” for the choice of diagram types.
References
[1] Andreas Alpers et al. “Generalized Balanced Power Diagrams for 3D Representations of Polycrystals”. In: Philosophical Magazine 95.9 (2015), pp. 1016–1028.
[2] Micah Altman. “Modeling the effect of mandatory district compactness on partisan gerrymanders”. In: Political Geography 17.8 (1998), pp. 989–1012.
[3] Boris Aronov, Paz Carmi, and Matthew J. Katz. “Minimum-Cost Load-Balancing Partitions”. In: Algorithmica 54.3 (2009), pp. 318–336.
[4] Peter F. Ash and Ethan D. Bolker. “Generalized Dirichlet tessellations”. In: Geometriae Dedicata 20.2 (1986), pp. 209–243.
[5] F. Aurenhammer, F. Hoffmann, and B. Aronov. “Minkowski-type theorems and least-squares clustering”. In: Algorithmica 20.1 (1998), pp. 61–76.
[6] Franz Aurenhammer. “Power Diagrams: Properties, Algorithms and Applications”. In: SIAM Journal on Computing 16.1 (1987), pp. 78–96.
[7] Franz Aurenhammer, Rolf Klein, and Der-Tsai Lee. Voronoi Diagrams and Delaunay Triangulations. World Scientific, 2013.
[8] E. R. Barnes, A. J. Hoffman, and U. G. Rothblum. “Optimal partitions having disjoint convex and conic hulls”. In: Mathematical Programming 54.1 (1992), pp. 69–86.
[9] Steffen Borgwardt, Andreas Brieden, and Peter Gritzmann. “A balanced k-means algorithm for weighted point sets”. Submitted. 2013. url: http://arxiv.org/abs/1308.4004.
[10] Steffen Borgwardt, Andreas Brieden, and Peter Gritzmann. “Constrained Minimum-k-Star Clustering and its application to the consolidation of farmland”. In: Operational Research 11.1 (2011), pp. 1–17.
[11] Steffen Borgwardt, Andreas Brieden, and Peter Gritzmann. “Geometric Clustering for the Consolidation of Farmland and Woodland”. In: Mathematical Intelligencer 36.2 (2014), pp. 37–44.
[12] A. Brieden. “Geometric optimization problems likely not contained in APX”. In: Discrete & Computational Geometry 28 (2002), pp. 201–209.
[13] Andreas Brieden and Peter Gritzmann. “A Quadratic Optimization Model for the Consolidation of Farmland by Means of Lend-Lease Agreements”. In: Operations Research Proceedings 2003. Springer Berlin Heidelberg, 2004, pp. 324–331.
[14] Andreas Brieden and Peter Gritzmann. “On clustering bodies: Geometry and polyhedral approximation”. In: Discrete & Computational Geometry 44.3 (2010), pp. 508–534.
[15] Andreas Brieden and Peter Gritzmann. “On Optimal Weighted Balanced Clusterings: Gravity Bodies and Power Diagrams”. In: SIAM Journal on Discrete Mathematics 26.2 (2012), pp. 415–434.
[16] A. Brieden et al. “Approximation of diameters: randomization doesn’t help”. In: Proc. 39th Ann. Symp. Found. Comput. Sci. (FOCS). IEEE, 1998, pp. 244–251.
[17] A. Brieden et al. “Deterministic and randomized polynomial-time approximation of radii”. In: Mathematika 48 (2001), pp. 63–105.
[18] Guillermo D. Canas and Steven J. Gortler. “Orphan-Free Anisotropic Voronoi Diagrams”. In: Discrete & Computational Geometry 46.3 (2011), pp. 526–541.
[19] John Gunnar Carlsson, Erik Carlsson, and Raghuveer Devulapalli. “Balancing workloads of service vehicles over a geographic territory”. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013, pp. 209–216.
[20] John Gunnar Carlsson, Erik Carlsson, and Raghuveer Devulapalli. “Shadow Prices in Territory Division”. In: Networks and Spatial Economics 16.3 (2016), pp. 893–931.
[21] J. Cortes. “Coverage Optimization and Spatial Load Balancing by Robotic Sensor Networks”. In: IEEE Transactions on Automatic Control 55.3 (2010), pp. 749–754.
[22] Herbert Edelsbrunner and Raimund Seidel. “Voronoi diagrams and arrangements”. In: Discrete & Computational Geometry 1.1 (1986), pp. 25–44.
[23] Federal Agency for Cartography and Geodesy. Accessed 26 June 2015. 2013. url: http://www.geodatenzentrum.de/geodaten/.
[24] Federal Constitutional Court of Germany. Pressemitteilung Nr. 12/2012 vom 22. Februar 2012, Beschluss vom 31. Januar 2012, 2 BvC 3/11. 2012.
[25] Federal Elections Act (Bundeswahlgesetz). Version as promulgated on 23 July 1993 (Federal Law Gazette I pp. 1288, 1594), last amended by Article 9 of the Ordinance of 31 August 2015 (Federal Law Gazette I p. 1474). 1993.
[26] Federal Returning Officer. Wahl zum 17. Deutschen Bundestag am 22. September 2013, Ergebnisse der Wahlbezirksstatistik. CD-ROM. 2014.
[27] Federal Returning Officer. Wahlkreiskarte für die Wahl zum 18. Deutschen Bundestag, Grundlage der Geoinformationen © Geobasis-DE / BKG (2011). Accessed 9 December 2016. 2012. url: http://www.bundeswahlleiter.de/bundestagswahlen/2013.html.
[28] Roland G. Fryer and Richard Holden. “Measuring the Compactness of Political Districting Plans”. In: Journal of Law and Economics 54.3 (2011), pp. 493–535.
[29] Lauro C. Galvão et al. “A multiplicatively-weighted Voronoi diagram approach to logistics districting”. In: Computers & Operations Research 33.1 (2006), pp. 93–114.
[30] Darius Geiß et al. “Optimally solving a transportation problem using Voronoi diagrams”. In: Computational Geometry: Theory and Applications 47.3 (2014), pp. 499–506.
[31] John A. George, Bruce W. Lamar, and Chris A. Wallace. “Political district determination using large-scale network optimization”. In: Socio-Economic Planning Sciences 31.1 (1997), pp. 11–28.
[32] Sebastian Goderbauer. “Optimierte Einteilung der Wahlkreise für die Deutsche Bundestagswahl”. In: OR News 52 (2014), pp. 19–21.
[33] Sidney Wayne Hess et al. “Nonpartisan Political Redistricting by Computer”. In: Operations Research 13.6 (1965), pp. 998–1006.
[34] Mehran Hojati. “Optimal political districting”. In: Computers & Operations Research 23.12 (1996), pp. 1147–1161.
[35] David L. Horn, Charles R. Hampton, and Anthony J. Vandenberg. “Practical application of district compactness”. In: Political Geography 12.2 (1993), pp. 103–120.
[36] Frank K. Hwang and Uriel G. Rothblum. Partitions: Optimality and Clustering - Vol. 1: Single-Parameter. World Scientific, 2012.
[37] Frank K. Hwang and Uriel G. Rothblum. Partitions: Optimality and Clustering - Vol. 2: Multi-Parameter. World Scientific, 2013.
[38] Jörg Kalcsics. “Districting Problems”. In: Location Science. Springer International Publishing, 2015, pp. 595–622.
[39] Jörg Kalcsics, Stefan Nickel, and Michael Schröder. “Towards a unified territorial design approach — Applications, algorithms and GIS integration”. In: Top 13.1 (2005), pp. 1–56.
[40] Jörg Kalcsics et al. “Planning Sales Territories — A Facility Location Approach”. In: Operations Research Proceedings 2001. Springer Berlin Heidelberg, 2002, pp. 141–148.
[41] Victor Klee and Christoph Witzgall. “Facets and vertices of transportation polytopes”. In: Mathematics of the Decision Sciences, Part 1. American Mathematical Society, 1968, pp. 257–282.
[42] Francois Labelle and Jonathan Richard Shewchuk. “Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh generation”. In: Proceedings of the Nineteenth Annual Symposium on Computational Geometry - SCG ’03. 2003, p. 191.
[43] MapQuest Open Geocoding API. Accessed 1 July 2015. 2015. url: http://developer.mapquest.com.
[44] Paul G. Marlin. “Application of the transportation model to a large-scale ‘Districting’ problem”. In: Computers and Operations Research 8.2 (1981), pp. 83–96.
[45] John M. Mulvey and Michael P. Beck. “Solving capacitated clustering problems”. In: European Journal of Operational Research 18.3 (1984), pp. 339–348.
[46] L. Muyldermans et al. “Districting for salt spreading operations”. In: European Journal of Operational Research 139.3 (2002), pp. 521–532.
[47] Richard G. Niemi et al. “Measuring Compactness and the Role of a Compactness Standard in a Test for Partisan and Racial Gerrymandering”. In: The Journal of Politics 52.4 (1990), pp. 1155–1181.
[48] Atsuyuki Okabe and Atsuo Suzuki. “Locational optimization problems solved through Voronoi diagrams”. In: European Journal of Operational Research 98.3 (1997), pp. 445–456.
[49] Atsuyuki Okabe et al. “Generalized network Voronoi diagrams: Concepts, computational methods, and applications”. In: International Journal of Geographical Information Science 22.9 (2008), pp. 965–994.
[50] Atsuyuki Okabe et al. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams. Wiley Series in Probability and Statistics. John Wiley & Sons, 2009.
[51] Federica Ricca, Andrea Scozzari, and Bruno Simeone. “Political Districting: from classical models to recent approaches”. In: Annals of Operations Research 204.1 (2013), pp. 271–299.
[52] Federica Ricca, Andrea Scozzari, and Bruno Simeone. “Weighted Voronoi region algorithms for political districting”. In: Mathematical and Computer Modelling 48.9–10 (2008), pp. 1468–1477.
[53] Federica Ricca and Bruno Simeone. “Local search algorithms for political districting”. In: European Journal of Operational Research 189.3 (2008), pp. 1409–1426.
[54] Michael Schröder. “Gebiete optimal aufteilen”. PhD thesis. Universität Karlsruhe, 2001.
[55] M. Segal and D. B. Weinberger. “Turfing”. In: Operations Research 25.3 (1977), pp. 367–386.
[56] Attila Tasnádi. “The political districting problem: A survey”. In: Society and Economy 33.3 (2011), pp. 543–554.
[57] Wikipedia. Spatial reference system — Wikipedia, The Free Encyclopedia. Accessed 29 January 2017. 2017. url: https://en.wikipedia.org/wiki/Spatial_reference_system.
[58] H. Peyton Young. “Measuring the Compactness of Legislative Districts”. In: Legislative Studies Quarterly 13.1 (1988), p. 105.
[59] Andris A. Zoltners and Prabhakant Sinha. “Sales Territory Alignment: A Review and Model”. In: Management Science 29.11 (1983), pp. 1237–1256.
A Results for the German federal election

Table 5: Overview of the resulting key figures for the different approaches, by state (Baden-Württemberg, Bavaria, Brandenburg, Hesse, Mecklenburg-Vorpommern, Lower Saxony, North Rhine-Westphalia, Rhineland-Palatinate, Saarland, Saxony, Saxony-Anhalt, Schleswig-Holstein, Thuringia), with column groups 2013, Power Diagrams, Anisotropic Power Diagrams, and Shortest-Path Diagrams.
m, k: number of municipalities / districts, respectively;
State Dev: absolute value of the deviation of the average district size in a state from the federal average;
øDev: average relative deviation of district sizes from the federal average;
∆MoI: relative change of the moment of inertia (see (6)) compared to 2013;
∆Pairs: proportion of changed voter-pair assignments (as defined in (7)).
International Conference on Fascinating Advancement in Mechanical Engineering (FAME2008),
11-13 December 2008
Performance of 1-D and 2-D Lattice
Boltzmann (LB) in Solution of the Shock
Tube Problem
M. Komeili1, M. Mirzaei2, M. Shabouei3
Abstract - In this paper we present a lattice Boltzmann
method with a square grid for compressible flow problems. A triple
velocity level is considered for each cell. The migration step
uses discrete velocities, but continuous parameters are
utilized to calculate density, velocity, and energy; we
therefore call this a semi-discrete method. To evaluate the
performance of the method, the well-known shock tube
problem is solved using the 1-D and 2-D versions of the
lattice Boltzmann method. The results of these versions
are compared with each other and with the results of the
analytical solution.
Keywords - distribution function, lattice Boltzmann, phase velocity, shock tube, sound velocity.
I. INTRODUCTION
It is about 20 years since Frisch et al. (1986) first
succeeded in using the lattice Boltzmann equations to
calculate the viscosity of Lattice Gas Cellular Automata
(LGCA) [1]. Since then, this method has been considered
as a new solver for fluid flow in various regimes, ranging
from low-speed [2-4] to high-speed flows [5-8]. Alexander
introduced a modified LB method to model compressible
flows [9]. The Sun model modifies the LB method in such
a manner that the fluid velocity is added to the particle
velocities [10-17]. This method recovers the results of the
Euler and Navier-Stokes solutions with first-order and
second-order accuracy, respectively.
In the present work, we intend to develop the Sun
method to model 1-D and 2-D compressible flow fields and
to evaluate the performance of the model for the solution
of 1-D problems.
1. M. Komeili is a Master's student of Aerospace
Engineering at K. N. Toosi University, Tehran, Iran
(e-mail: matin_444@yahoo.com).
2. M. Mirzaei is an Associate Professor at K. N. Toosi
University, Tehran, Iran (e-mail: mirzaei@kntu.ac.ir).
3. M. Shabouei is a Master's student of Aerospace
Engineering at K. N. Toosi University, Tehran, Iran
(e-mail: m.shabouei@gmail.com).
II. LATTICE BOLTZMANN EQUATIONS
The first step in lattice Boltzmann models is the
determination of a distribution function. It defines
the probability of finding a particle at a specific
position and a specific time with a definite velocity.
Once the distribution function is identified throughout
the domain, we are able to calculate the macroscopic
quantities (i.e., density, velocity, pressure, temperature,
and energy).
The relations between the macroscopic quantities and the distribution function are
ρ = ∫ f dv,    (1)
ρV = ∫ v f dv,    (2)
ρE = ∫ e f dv,    (3)
where f is the distribution function; lower-case symbols refer to particles and upper-case symbols to the flow.
So, if we can find the distribution function at each time step, our problem is solved. The distribution function at the next time step is calculated from the following equation:
f(x + vΔt, v, t + Δt) = f(x, v, t) + Ω(x, v, t).    (4)
Equation (4) is called the BGK equation, where Ω is the collision operator, which can be calculated by
Ω(x, v, t) = −(1/τ) [ f(x, v, t) − f^eq(x, v, t) ],    (5)
where τ is the relaxation time and f^eq is the equilibrium distribution function. The latter quantity is only a function of the macroscopic quantities. In other words, the distribution function, and consequently the macroscopic values at the subsequent time, are obtained once f^eq is identified.
III. MODIFIED LATTICE BOLTZMANN EQUATION
A. Distribution Function
Conventional lattice Boltzmann models cannot be used for compressible flows due to their limitation in predicting the maximum particle velocities. In our model for compressible flows, we add the flow velocity to the particle velocity at each node, so a particle can be translated from a node to any node in the field.
In this case, the distribution function depends on the following variables:
x: the position of the particle;
r: the migrating velocity, which translates particles from their starting points to their destinations;
ξ: the particle velocity;
ε: the particle energy;
t: the time.
Since r is a discrete variable while ξ and ε are continuous quantities, we call this method "semi-discrete". In fact, each node carries mass (m), momentum (mξ), and energy (mε) to its destination.
The relation between the macroscopic quantities and the distribution function in this case is as follows:
Y = Σ_{r ∈ s^D} ∫∫ Φ f(x, r, ξ, ε, t) dξ dε,    (6)
where
Y = (ρ, ρv, ρE)^T,    (7)
Φ = (m, mξ, mε)^T.    (8)
If we set the relaxation time τ equal to 1, the collision operator becomes
Ω(x, v, t) = −[ f(x, v, t) − f^eq(x, v, t) ],    (9)
so the well-known BGK equation becomes
f(x + rΔt, r, ξ, ε, t + Δt) = f^eq(x, r, ξ, ε, t).    (10)
B. Equilibrium Distribution Function
First, we introduce some quantities in order to define the equilibrium distribution function:
c_jvk = v_k + c_jv,    (11)
ξ_jvk = v + c_jv,    (12)
δv_k = v − v_k,    (13)
ε_jvk = η_jv + ½ ξ_jvk² = η_jv + ½ ( v² + 2 c_jv · v + c_jv² ).    (14)
In the above equations, c_jv (j = 1, 2, ..., b_v) is a set whose elements are vectors; b_v defines the number of directions at each node and depends on the type of the lattice; the index v indicates the velocity level in each direction. The quantity η_jv is any kind of energy except the kinetic one, and it is defined by
η_jv = [ 1 − D(γ − 1)/2 ] e,    (15)
where D is the lattice dimension, 's' is the set which contains all the r vectors, and e is the internal energy, e = E − ½ v².
c_jv has symmetric properties. This means that:
Σ_{j=1}^{b_v} c_jv = 0,    (16)
Σ_{j=1}^{b_v} c_jv c_jv = (b_v c_v² / D) I,    (17)
Σ_{j=1}^{b_v} c_jv c_jv c_jv = 0,    (18)
where I is the unit tensor.
Consider an arbitrary node x; let v be the flow velocity at this node; the v_k are the vectors which join x to its surrounding nodes; the δv_k are the vectors which connect the end point of v to the surrounding nodes (Fig. 1).
Fig. 1. Migrating velocities of particles in a hexagonal lattice.
The equilibrium distribution function is defined by
f^eq(x, r, ξ, ε, t) = d_vk δ(ξ − ξ_jvk) δ(ε − ε_jvk) for r = c_jvk, and f^eq(x, r, ξ, ε, t) = 0 for r ≠ c_jvk,    (19)
where δ(x) has the following properties:
δ(x) = 0 for x ≠ 0,    ∫ g(x) δ(x) dx = g(0).
Thus f^eq(x, r, ξ, ε, t) is nonzero only if (r, ξ, ε) = (c_jvk, ξ_jvk, ε_jvk). Under these circumstances, the quantity f^eq_jvk will be
f^eq_jvk(x, t) = ∫∫ f^eq(x, c_jvk, ξ, ε, t) dξ dε.    (20)
We define Ψ_jvk as
Ψ_jvk = ( m, m(v + c_jv), m(η_jv + ½ (v + c_jv)²) )^T.    (21)
Substituting f^eq into equation (6), the macroscopic quantities are obtained from
Y = Σ_{k,v,j} Ψ_jvk d_vk,    (22)
where d_vk is determined as
d_vk = λ_k d_v.    (23)
We have chosen a triple velocity level for each direction (cell), namely d_0, d_1, and d_2. Here d_0 is an arbitrary portion of the total density; we choose a value between 0.4 and 0.55. The fractions d_1 and d_2 are calculated from the following relations:
d_1 = [ c_2² (1 − b_0 d_0) − D(γ − 1)e ] / [ b_1 (c_2² − c_1²) ],    (24)
d_2 = [ D(γ − 1)e − c_1² (1 − b_0 d_0) ] / [ b_1 (c_2² − c_1²) ].    (25)
Notice that each d_v is a fraction of the total density. Here c_1 and c_2 are the particle velocity moduli, which should be defined such that d_1 and d_2 take non-negative values:
c_1 = int[ √( D(γ − 1)e / (1 − b_0 d_0) ) ],    c_2 = c_1 + 1.    (26)
Pressure is obtained from the following equation:
P = (γ − 1) ρ e.    (27)
IV. TYPES OF LATTICE
A. 1-D Lattice
Fig. 2 shows a 1-D lattice. In this lattice, a particle is able to move only from a node to its neighbors on the right or on the left.
Fig. 2. Migrating velocities in a 1-D lattice.
In this figure the migrating velocities are
c_10 = 0;  c_11 = 1, c_12 = −1;  c_21 = 2, c_22 = −2.
Since we have three levels of velocity and two directions, the subscripts are defined as:
k = 1 to 2,  j = 1 to 2,  v = 0 to 2.
Moreover, b_0 = 1 and b_1 = b_2 = 2. The contribution of each node to the total density is calculated as
λ_1 = δv_2,  λ_2 = δv_1.    (28)
Notice that in this case the velocity has just one direction.
B. Square Lattice
If we employ the square lattice (Fig. 3), the particles may move in a 2-D space.
Fig. 3. Migrating velocities in a square lattice.
In this figure the migrating velocities c_jv are c_11, c_21, c_31, c_41 on the first velocity level and c_12, c_22, c_32, c_42 on the second. Notice that in this case we use a dual velocity level and assume d_0 = 0. So there are two levels of velocity and four directions:
k = 1 to 4,  j = 1 to 4,  v = 1 to 2.
In addition, b_1 = b_2 = 4. The contribution of each node which surrounds the end point of the vector v (Fig. 4) to the total density is defined as
λ_1 = u_3 v_3,  λ_2 = u_4 v_4,  λ_3 = u_1 v_1,  λ_4 = u_2 v_2.    (29)
Fig. 4. Square lattice.
V. NUMERICAL SIMULATIONS
A. Interior Nodes
Setting the relaxation time to unity leads to omission of the collision step, so
f_i^out(x + cΔt, t + Δt) = f_i^in(x, t).    (30)
First, initial values are set for the macroscopic flow parameters, i.e., velocity, density, and energy. Then the translated mass, momentum, and energy from each node (based on velocity level and direction) are calculated by the following equations:
m_jvk = 1,    (31)
p_jvk = v + c_jv,    (32)
ε_jvk = η_jv + ½ ( v² + 2 c_jv · v + c_jv² ).    (33)
By multiplying the above quantities by d_jv and summing over all j, v, and k for each node, the macroscopic quantities at the next step are obtained.
B. Boundary Condition
For the 1-D lattice, there is no boundary condition; we just add two additional points on both the right side and the left side of the domain. For the 2-D lattice, a periodic boundary condition is employed in the lateral direction; in addition, for the longitudinal direction two additional columns are used at the limits of the domain.
C. Shock Tube
To evaluate the validity of our models, we selected the Sod test [18]. The initial conditions are:
ρ_L = 1.0;  ρ_R = 0.125;
p_L = 1.0;  p_R = 0.1;
u_L = 0.0;  u_R = 0.0.
The indices L and R denote the initial values on the left side and the right side, respectively. A 400-node lattice is adopted for the 1-D shock tube problem and a 400×4 node lattice is used for the 2-D shock tube problem. These lattice sizes guarantee grid independence of the results.
D. Results
The results of the shock tube problem [19, 20] are shown in Fig. 5. Fig. 5-a represents the variation of the gas density along the tube. In this figure the results of the 1-D and the 2-D solutions are compared with that of the analytical solution [18]. It can be seen that the 1-D and 2-D solutions lead to the same results, in good agreement with the analytical solution. The same trend can be seen for the variation of internal energy, velocity, and pressure in Figs. 5-b to 5-d. Notice that the CPU time for the 2-D solution is 15 times that of the 1-D solution.
VI. CONCLUSION
We modified the lattice Boltzmann method by adding the flow velocity to the migrating velocities. In this manner the particles can translate to any node in the domain. Accordingly, flows at high Mach numbers can easily be simulated. The 1-D lattice is introduced in this paper for the first time. The shock tube problem and other 1-D flows were previously solved using 2-D lattice methods ([14]), but for the first time we develop the 1-D lattice method for such problems. The results of this method are accurate; moreover, its main advantage over the 2-D lattice method is its computational efficiency. For instance, its CPU time for the shock tube problem was 15 times less than that of the 2-D lattice. The present method may be extended to more complicated flows (e.g., employing different gases on the two sides of the tube, consideration of chemical interactions, and reflection of the shock from walls).
Fig. 5. The shock-tube problem: profiles of density, internal energy, velocity, and pressure at the 75th iteration. The analytical solution is based on [18].
REFERENCES
[1] Frisch, U., Hasslacher, B., and Pomeau, Y., “Lattice Gas Automata for the Navier-Stokes Equation”, Phys. Rev. Lett., 56:1505, 1986.
[2] Buick, M., “Lattice Boltzmann Method in Interfacial Wave Modeling”, PhD thesis, University of Edinburgh, 1990.
[3] Higuera, F., Jimenez, J., and Succi, S., “Lattice Gas Dynamics with Enhanced Collision”, Europhys. Lett., 9:345, 1989.
[4] Alexander, W., “Theory and Applications of the Lattice Boltzmann Method”, PhD thesis, University of Oxford, 1997.
[5] Palmer, B. J., Rector, D. R., “Lattice Boltzmann Algorithm for Simulating Thermal Flow in Compressible Fluids”, Journal of Computational Physics, 161, pp. 1-20, 2000.
[6] Vahala, G., Pavlo, P., Vahala, L., Martys, N. S., “Thermal Lattice Boltzmann Model (TLBM) for compressible flow”, International Journal of Modern Physics C, Vol. 9, No. 8, pp. 1247-1261, 1998.
[7] Hinton, F. L., Rosenbluth, M. N., Wong, S. K., Lin-Liu, Y. R., Miller, R. L., “Modified Lattice Boltzmann method for compressible fluid simulations”, Physical Review E, Vol. 63, 061212, 2001.
[8] Yang, T., “The Development of a Compressible Lattice Boltzmann Method for Turbo Machine Application”, PhD thesis, Purdue University.
[9] Alexander, F. J., Chen, H., Doolen, G. D., “Lattice Boltzmann Model for Compressible Fluids”, Physical Review A, Vol. 46, No. 4, pp. 1967-1970, 1991.
[10] Sun, Ch., “Adaptive Lattice Boltzmann Model for Compressible Flows”, Tsinghua Science and Technology, Vol. 5, No. 1, 2000.
[11] Sun, Ch., Hsu, A., “Three-Dimensional Lattice Boltzmann Model for Compressible Flows”, Phys. Rev. E, Vol. 68, 2003.
[12] Sun, Ch., Munn, L. L., “Influence of erythrocyte aggregation on leukocyte margination in post-capillary expansions: A lattice Boltzmann analysis”, Physica A, 362, pp. 191-196, 2006.
[13] Sun, Ch., Munn, L. L., “Particulate nature of blood determines macroscopic rheology: A 2-D lattice Boltzmann analysis”, Biophysical Journal, Vol. 88, pp. 1635-1645, 2005.
[14] Sun, Ch., “Lattice-Boltzmann models for high speed flows”, Phys. Rev. E, Vol. 58, No. 6, pp. 7283-7287, 1998.
[15] Sun, Ch., “Adaptive lattice Boltzmann model for compressible flows: Viscous and conductive properties”, Phys. Rev. E, Vol. 61, No. 3, pp. 2645-2653, 1999.
[16] Moradi, B., “Simulation of Compressible Fluid Flows with Lattice Boltzmann Method”, M.Sc. thesis, Isfahan University of Technology, Iran, 2004.
[17] Shiravi Kh., V. R., “Parallel Simulation of Compressible Fluid with Lattice Boltzmann Method”, M.Sc. thesis, Isfahan University of Technology, Iran, 2006.
[18] G. A. Sod, J. Comput. Phys. 27, 1 (1978).
[19] Shabouei, M., Ebrahimi, R., Mazaheri Body, K., “Numerical Simulation of Converging Spherical Shock Problem”, in: Ninth International Congress of Fluid Dynamics & Propulsion (ICFDP 9), Alexandria, Egypt, 2008.
[20] Shabouei, M., Ebrahimi, R., Mazaheri Body, K., “Numerical Solution of Cylindrically Converging Shock Waves”, in: International Conference on Fascinating Advancement in Mechanical Engineering (FAME2008), India, 2008, arXiv preprint arXiv:1602.02680.
arXiv:1703.07238v1 [math.RT] 21 Mar 2017
On the Weil representation of
infinite-dimensional symplectic group over a
finite field
Yu.A.Neretin1
We extend the Weil representation of the infinite-dimensional symplectic group to a
representation of a certain category of linear relations.
1 The statement of the paper
1.1. The spaces V2µ. Denote by C× the multiplicative group of complex numbers. Denote by F a finite field of characteristic p > 2, and by F× its
multiplicative group. Fix a nontrivial character Exp(·) of the additive group of
F,
Exp(a + b) = Exp(a) Exp(b),    Exp(·) ∈ C×.
The values of Exp(·) are of the form e^{2πik/p}.
For n = 0, 1, . . . , we denote by Vn^d an n-dimensional linear space over F; to
be definite, we assume that Vn^d is the coordinate space F^n. Denote by Vn^c the
second copy of this space.
Next, we define the space V∞^d as a direct sum of a countable number of copies of F
and the space V∞^c as a direct product of a countable number of copies of F. We equip the
first space with the discrete topology and the second space with the product topology
(’d’ is an abbreviation of ’discrete’ and ’c’ of ’compact’).
Below, the subscripts µ, ν, κ denote 0, 1, . . . , ∞.
A space Vµ^c is the Pontryagin dual of Vµ^d; the pairing is given by
[x, y] := Σ_{j=1}^{µ} x_j y_j
(actually, the sum is finite). For a subspace L ⊂ Vµ^d we denote by L° the
annihilator of L in Vµ^c, and vice versa.
Denote by V2µ the space
V2µ := Vµ^d ⊕ Vµ^c
equipped with the following skew-symmetric bilinear form (symplectic form):
{(x, y), (x′, y′)} = Σ_{j=1}^{µ} (x_j y′_j − y_j x′_j).
1 Supported by the grant FWF, Project P28421.
The space V2∞ is a locally compact Abelian group, and we can apply Pontryagin duality.
For a subspace R ⊂ V2∞ we denote by R♦ the orthocomplement with respect
to {·, ·}. By Pontryagin duality (see [7], Proposition 38), for closed subspaces
X, Y ⊂ V2∞ the conditions X = Y♦ and Y = X♦ are equivalent (in other words,
for a closed subspace X we have X♦♦ = X).
Denote by Sp(V2µ) the group of invertible continuous operators in V2µ preserving the form {·, ·}.
1.2. The Weil representation of Sp(V2∞). Consider the space ℓ2(Vµ^d)
on the set Vµ^d. For any v = (v^d, v^c) ∈ V2µ we define a unitary operator a(v) in
ℓ2(Vµ^d) given by
a(v) f(x) = f(x + v^d) Exp[ Σ_j ( x_j v_j^c + ½ v_j^d v_j^c ) ].
We have
a(v) a(w) = Exp( ½ {v, w} ) a(v + w).
Thus we get a projective unitary representation of the additive group V2µ, or a
linear representation of the Heisenberg group; the latter group is V2µ ⊕ F with
the product
(v; s) ⋆ (w; t) = ( v + w; s + t + ½ {v, w} ).
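For the reader's convenience, the relation a(v)a(w) = Exp(½{v, w}) a(v + w) can be verified directly:
a(v) a(w) f(x) = f(x + v^d + w^d) Exp[ Σ_j ( (x_j + v_j^d) w_j^c + ½ w_j^d w_j^c + x_j v_j^c + ½ v_j^d v_j^c ) ],
a(v + w) f(x) = f(x + v^d + w^d) Exp[ Σ_j ( x_j (v_j^c + w_j^c) + ½ (v_j^d + w_j^d)(v_j^c + w_j^c) ) ],
and the quotient of the two phase factors equals Exp( ½ Σ_j (v_j^d w_j^c − w_j^d v_j^c) ) = Exp( ½ {v, w} ).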
Proposition 1.1 For any g ∈ Sp(V2∞) there exists a unitary operator W(g) in ℓ2(V∞^d), unique up to a scalar factor, such that
a(vg) = W(g)^{−1} a(v) W(g)    (1.1)
and
W(g1) W(g2) = c(g1, g2) W(g1 g2),
where c(g1, g2) ∈ C, |c(g1, g2)| = 1.
This statement is contained in the original construction of A. Weil (the group V∞^d
is locally compact); in Section 2 we present a proof (this proof is used in Section 3).
Remark. The author does not know whether the Weil representation of Sp(V2∞)
is projective or linear; for a finite symplectic group Sp(2n, F) it is linear, see, e.g.,
[4].
⊠
1.3. Linear relations. Let X, Y be linear spaces over F. A linear relation
T : X ⇒ Y is a linear subspace in X ⊕ Y. Let T : X ⇒ Y, S : Y ⇒ Z be linear
relations. Their product ST is defined as the set of all (x, z) ∈ X ⊕ Z such that
there exists y ∈ Y satisfying (x, y) ∈ T, (y, z) ∈ S.
Remark. Let A : X → Y be a linear operator. Then its graph graph(A)
is a linear relation X ⇒ Y. If A : X → Y, B : Y → Z are linear operators, then
the product of their graphs
graph(A) : X ⇒ Y,   graph(B) : Y ⇒ Z
is the linear relation graph(BA) : X ⇒ Z.
⊠
For a linear relation T : X ⇒ Y we define:
• the kernel ker T is the intersection T ∩ (X ⊕ 0);
• the domain dom T is the image of the projection of T to X ⊕ 0;
• the image im T is the image of the projection of T to 0 ⊕ Y;
• the indefiniteness indef T is the intersection T ∩ (0 ⊕ Y).
We define the pseudo-inverse relation T^□ : Y ⇒ X as the image of T under
the transposition of the summands in X ⊕ Y.
For a linear relation T : X ⇒ Y and a subspace U ⊂ X we define the subspace
T U ⊂ Y consisting of all y ∈ Y such that there exists u ∈ U satisfying (u, y) ∈ T.
Remark. For a linear operator A : X → Y we have
ker graph(A) = ker A,   dom graph(A) = X,
im graph(A) = im A,   indef graph(A) = 0.
If A is invertible, then graph(A)^□ = graph(A^{−1}). For a subspace U ⊂ X we
have graph(A)U = AU.
⊠
1.4. Perfect Lagrangian linear relations. We say that a subspace
Z ⊂ V2∞ is codiscrete if it is closed and the quotient V2∞/Z is discrete.
Let µ, ν = 0, 1, . . . , ∞. Define a skew-symmetric bilinear form on V2µ ⊕ V2ν
by
{(v, w), (v′, w′)}_{V2µ⊕V2ν} = {v, v′}_{V2µ} − {w, w′}_{V2ν}.
We say that a linear relation T : V2µ ⇒ V2ν is perfect Lagrangian if it
satisfies the following conditions:
1) the subspace T is maximal isotropic in V2µ ⊕ V2ν (in particular, T is closed);
2) ker T and indef T are compact;
3) im T and dom T are codiscrete.
Remark. Let A ∈ Sp(V2µ). Then graph(A) is a perfect Lagrangian linear
relation V2µ ⇒ V2µ.
⊠
Lemma 1.2 For any perfect Lagrangian linear relation T,
(dom T)♦ = ker T,   (ker T)♦ = dom T,
(im T)♦ = indef T,   (indef T)♦ = im T.
Lemma 1.3 Let T : V2µ ⇒ V2ν be a perfect Lagrangian linear relation with
ker T = 0, indef T = 0. Then µ = ν and T is the graph of an element of Sp(V2µ).
Theorem 1.4 If linear relations T : V2µ ⇒ V2ν, S : V2ν ⇒ V2κ are perfect
Lagrangian, then ST : V2µ ⇒ V2κ is perfect Lagrangian.
We define a category L, whose objects are the spaces V2µ and whose morphisms
are perfect Lagrangian relations. It is more convenient to think that the objects of
the category are linear spaces, equipped with a topology and a skew-symmetric
bilinear form, which are equivalent to the spaces V2µ.
1.5. Extension of the Weil representation.
Theorem 1.5 a) Let T : V2µ ⇒ V2ν be a perfect Lagrangian linear relation.
Then there exists a non-zero bounded operator W(T) : ℓ2(Vµ^d) → ℓ2(Vν^d), unique
up to a scalar factor, such that
a(w) W(T) = W(T) a(v)   for any (v, w) ∈ T.    (1.2)
b) For any perfect Lagrangian linear relations T : V2µ ⇒ V2ν, S : V2ν ⇒ V2κ,
W(S) W(T) = c(T, S) W(ST),
where c(T, S) ∈ C×.
c) For any perfect Lagrangian T : V2µ ⇒ V2ν,
W(T)* = W(T^□).
1.6. Gaussian operators. Let H ⊂ Vµ^d ⊕ Vν^d be a subspace such that the
projection operators H → Vν^d ⊕ 0 and H → 0 ⊕ Vµ^d have finite-dimensional
kernels. Let Q be a quadratic form on H. We define a Gaussian operator
G(H; Q) : ℓ2(Vµ^d) → ℓ2(Vν^d) by the formula
G(H; Q) f(x) = Σ_{y : (y,x) ∈ H} Exp( Q(x, y) ) f(y).    (1.3)
Theorem 1.6 Any operator W(T) : ℓ2(Vµ^d) → ℓ2(Vν^d) is a Gaussian operator
up to a scalar factor. Any Gaussian operator has such a form.
1.7. Bibliographic remarks. 1) In 1951, K. O. Friedrichs [3] noticed
that the Stone–von Neumann theorem implies the existence of a representation
of a real symplectic group Sp(2n, R). The finite-dimensional groups Sp(2n, R)
were not interesting to him, and he formulated a conjecture about an infinite-dimensional symplectic group. The representation of Sp(2n, R) was explicitly
described by I. Segal in 1959, [16]. A. Weil extended this construction to groups
over p-adic and finite fields.
2) For the infinite-dimensional symplectic group, the ’Weil representation’ (which
was conjectured by Friedrichs) was constructed by D. Shale [17] and F. A. Berezin
[1], [2]. The p-adic case was examined by M. L. Nazarov [9], [8] and E. I. Zelenov
[19].
3) The third part of the topic was a categorization of the Weil representation;
it had a long pre-history (see [6]). For real and p-adic infinite-dimensional groups
it was obtained by M. L. Nazarov, G. I. Olshanski, and the author in [9], 1989 (see
more detailed expositions of different topics in [10], [8], [14], [11]).
For Sp(2n, F) the corresponding representation of the semigroup of linear
relations was done in the preprint of R. Howe (1976), which was not published
as a paper. A detailed exposition of this representation is contained in² [12],
Section 9.4.
4) Now there exists a well-developed representation theory of infinite-dimensional
classical groups and infinite symmetric groups. Numerous efforts to develop a
parallel representation theory of infinite-dimensional classical groups over finite
fields met obstacles, which, apparently, were first observed by Olshanski
in [14], 1991. For a collection of references on different approaches to this topic,
see [13]. In [13] there was proposed a version of GL(∞) over finite fields as a
group of continuous operators in V2∞; it seems that this group is an interesting
and handy object from the point of view of representation theory. In particular,
there is a natural GL(V2∞)-invariant measure on an infinite-dimensional Grassmannian (and also on flag spaces) over F, and an explicit decomposition of L2 on
this Grassmannian. Clearly, this topic is related to the group Sp(V2∞), and this
requires an examination of the Weil representation.
1.8. Acknowledgements. This note is a kind of addition to the works by
Nazarov, Neretin, and Olshanski [9], [10], [8], [14]. I am grateful to M. L. Nazarov
and G. I. Olshanski. I also thank R. Howe, who recently sent me the preprint [5].
1.9. Structure of the paper. In Section 2 we prove Proposition 1.1.
The proof follows I. Segal’s arguments [16]; see also [12], Section 1.2. The only
additional difficulty is Proposition 2.2 about generators of the group Sp(V2∞).
Statements on linear relations are established in Section 3. The proof of the
theorem about products of perfect Lagrangian relations is based on all the remaining statements; this makes the structure of the section slightly intricate.
Lemmas 1.2 and 1.3 are proved in Subs. 3.1–3.3, Theorem 1.5.a in Subs. 3.4.
Properties of Gaussian operators (Theorem 1.6) are established in Subs. 3.6–
3.8. Theorem 1.4 is proved in Subs. 3.9. The remaining part of Theorem 1.5 is
proved in Subs. 3.10.
2 The Weil representation of the group Sp(V2∞)
Here we prove Proposition 1.1.
2.1. Preliminary remarks on linear transformations of V2∞. Denote
by GL(V2∞) the group of all continuous invertible linear transformations v ↦ vg
of V2∞. Let
g = ( a  b ; c  d ) ∈ GL(V2∞).    (2.1)
Then the block c contains only a finite number of nonzero elements; each row of
a contains only a finite number of nonzero elements, and each column of d
contains a finite number of nonzero elements.
We need some statements from [10], Sect. 2.
2 I referred to [9], being sure that the case of finite fields was considered in that paper.
A. Let g : V2∞ → V2∞ be a continuous surjective linear transformation of
V2∞. Then the inverse transformation is continuous.
B. We say that a continuous linear transformation P of V∞^d or V∞^c is Fredholm
if dim ker P < ∞, im P is closed (this condition is nontrivial only for V∞^d), and
codim im P < ∞. The index of a Fredholm operator P is
ind P := dim ker P − codim im P.
The following statements hold:
1. An operator P in V∞^c is Fredholm iff the dual operator P^t in V∞^d is
Fredholm.
2. Let P be Fredholm and let H have finite rank. Then P + H is Fredholm
and
ind(P + H) = ind P.
3. If P, Q are Fredholm linear transformations of V∞^c or V∞^d, then
ind PQ = ind P + ind Q.
C. For any matrix g = ( a  b ; c  d ) ∈ GL(V2∞) (see (2.1)), the blocks a, d are
Fredholm and ind d = − ind a.
2.2. Fredholm indices of the blocks of g ∈ Sp(V2µ). Elements g ∈ Sp(V2µ) ⊂
GL(V2µ) satisfy obvious additional conditions; below we refer to
c^t a = a c^t,    (2.2)
a d^t − b c^t = 1.    (2.3)
Lemma 2.1 Let g = ( a  b ; c  d ) ∈ Sp(V2µ). Then ind a = ind d = 0.
Proof. For g ∈ Sp(V2µ), we have
ind(a d^t) = ind(a) + ind(d^t) = ind(a) − ind(d) = 2 ind(a).
On the other hand, b c^t has finite rank. By (2.3),
ind(a d^t) = ind(1 + b c^t) = ind(1) = 0.
Thus, ind(a) = 0.
groups in Sp(V2∞ ):
a
— the subgroup H consists of matrices of the form
0
1
— the subgroup N+ consists of matrices of the form
0
6
the following sub
;
at−1
b
, where b = bt ;
1
0
— the subgroup N− consists of matrices of the form
1
c
0
, where c = ct .
1
Denote by Jk the following block matrix of the size ((k − 1) + 1 + ∞) + ((k −
1) + 1 + ∞):
1 0 0 0 0 0
0 0 0 0 1 0
0 0 1 0 0 0
.
Jk =
0 0 0 1 0 0
0 −1 0 0 0 0
0 0 0 0 0 1
Proposition 2.2 The group Sp(V2∞ ) is generated by the subgroup H, N+ , and
elements Jk .
2.4. Proof of Proposition 2.2. Let s = ( 1  0 ; c  1 ) ∈ N−. Since c contains only a finite number of nonzero elements, we
have
J1 · · · Jm s Jm^{−1} · · · J1^{−1} ∈ N+
for sufficiently large m. Thus, N− is contained in the subgroup generated by
H, N+, and the Jk.
Let
g = ( a  b ; c  d ) ∈ Sp(V2∞).
We will multiply it by elements of the groups H, N+, N− and by elements Jk in an
appropriate way. As a result we will come to the unit matrix.
Recall that a is Fredholm of index 0. Therefore there are invertible operators
K, L in V∞^d such that KaL is a block (l + ∞) × (l + ∞)-matrix of the form
( 0  0 ; 0  1 ). This follows from [13], Lemma 2.7. We pass to a new matrix g′ given
by
g′ := ( K  0 ; 0  K^{t,−1} ) ( a  b ; c  d ) ( L  0 ; 0  L^{t,−1} ) =
( 0    0    b′11  b′12
  0    1    b′21  b′22
  c′11 c′12 d′11  d′12
  c′21 c′22 d′21  d′22 )
(the size of the right-hand side is l + ∞ + l + ∞). The condition (2.2) implies
c′12 = 0, c′21 = 0, (c′22)^t = c′22. In particular, the matrix
s := ( 1  0  0     0
       0  1  0     0
       0  0  1     0
       0  0  −c′22 1 )
is contained in N−. The element sg′ has the form
sg′ = ( 0    0  b′′11  b′′12
        0    1  b′′21  b′′22
        c′′11 0  d′′11  d′′12
        0    0  d′′21  d′′22 ) =: g′′.
The matrix c′′11 is nondegenerate (otherwise the whole matrix sg′ would be degenerate).
We take the element
J1 · · · Jl g′′ =: g′′′
and get a matrix of the form
g′′′ = ( −c′′11 0  b′′′11  b′′′12
         0     1  b′′′21  b′′′22
         0     0  d′′′11  d′′′12
         0     0  d′′′21  d′′′22 ).
Keeping in mind (2.3), we observe that
( d′′′11  d′′′12 ; d′′′21  d′′′22 ) = ( −(c′′11)^{t,−1}  0 ; 0  1 ).
Therefore the element
( −(c′′11)^{−1}  0  0            0
  0             1  0            0
  0             0  −(c′′11)^t   0
  0             0  0            1 ) g′′′
is contained in N+. This completes the proof.
2.5. Construction of the Weil representation of the symplectic
group Sp(V2∞). The remaining part of the proof of Proposition 1.1 is based on
standard arguments (see [12], Sect. 1.2), which were proposed by I. Segal.
Step 1. The representation a(v) of the Heisenberg group in ℓ2(V∞^d) is irreducible. Indeed, let us show that there are no nontrivial intertwining operators.
Any bounded operator Q in ℓ2(V∞^d) commuting with all the operators
a(0, v^c) f(x) = Exp( Σ_j v_j^c x_j ) f(x)
is an operator of multiplication by a function. Since Q also commutes with the
shifts
a(v^d, 0) f(x) = f(x + v^d),
we conclude that Q is multiplication by a constant.
Step 2. The operator W(g) is defined up to a scalar factor (if it exists).
Indeed, the map v ↦ vg is an automorphism of the Heisenberg group. Therefore
the formula v ↦ a(vg) determines a unitary representation of the Heisenberg
group. The operator W(g) intertwines the unitary representations a(v) and a(vg).
By the Schur lemma, W(g) is unique up to a scalar factor.
Step 3. If W(g1), W(g2) exist, then
W(g1) W(g2) = λ · W(g1 g2).
Indeed,
( W(g1) W(g2) )^{−1} a(v) W(g1) W(g2) = W(g2)^{−1} ( W(g1)^{−1} a(v) W(g1) ) W(g2) =
= W(g2)^{−1} a(v g1) W(g2) = a(v g1 g2).
Step 4. It remains to write down the operators corresponding to the generators of the
group Sp(V2µ).
— Operators corresponding to elements of the subgroup H are
W( a  0 ; 0  a^{t,−1} ) f(x) = f(xa);
— operators corresponding to elements of N+ are
W( 1  b ; 0  1 ) f(x) = Exp( ½ Σ_{k,l} b_{kl} x_k x_l ) f(x);    (2.4)
— the operator corresponding to Jk is the Fourier transform with respect to
the coordinate x_k.
3 The Weil representation of the category of Lagrangian relations
3.1. On a canonical form of a compact isotropic submodule.
Lemma 3.1 For any compact isotropic subspace M ⊂ V2∞ there is an element
g ∈ Sp(V2∞) such that M g ⊂ V∞^c. Moreover, we can choose g in such a way
that M g ⊂ V∞^c is a subspace given by a system of equations of the type y_α = 0,
where α ranges over a subset A ⊂ N.
Proof. Consider the projection map π : M → V∞^c. Its fibers are compact and discrete, therefore they are finite. In particular, π^{−1}(0) is a finite-dimensional subspace of V∞^d. We can choose an element h of the subgroup
H ⊂ Sp(V2∞) such that (π^{−1}(0)) h ⊂ V∞^d is the subspace consisting of vectors
(x1, . . . , xk, 0, . . . ). Since M h is isotropic, π(M h) is contained in the subspace
y1 = · · · = yk = 0. Therefore M h J1 · · · Jk ⊂ V∞^c.
Next, we wish to describe closed subspaces of V∞^c modulo linear transformations of V∞^c (they are induced by elements of the subgroup H). Pontryagin duality determines a one-to-one correspondence between the sets of closed
subgroups of the Abelian groups V∞^d and V∞^c (see [7], Theorem 27); both groups
are equipped with an Abelian group of automorphisms x ↦ λx, where λ ∈ F×.
It is easy to see that this correspondence sends invariant subgroups (subspaces)
to invariant subgroups. Therefore the question is reduced to a description of
subspaces of V∞^d modulo linear transformations, and the latter problem is trivial.
Corollary 3.2 Let L be a compact isotropic submodule of V2∞. Then the space
L♦/L is isomorphic, as a symplectic space, to some V2n or to V2∞.
Lemma 3.3 For any codiscrete coisotropic submodule L ⊂ V2∞ there exists
g ∈ Sp(V2∞) such that Lg ⊃ V∞^c.
Proof. We reduce the compact isotropic module L♦ to the canonical form
(i.e., L♦ is a coordinate subspace of V∞^c).
3.2. Proof of Lemma 1.2. Consider a perfect Lagrangian relation T.
Obviously, for v ∈ ker T, z ∈ dom T, we have {v, z} = 0, i.e., (dom T)♦ ⊃
ker T. Moreover, (dom T)♦ = ker T; otherwise, we could take the isotropic linear relation
T + (dom T)♦ ⊋ T.
3.3. Proof of Lemma 1.3. Only the case µ = ν = ∞ requires a proof. Let
T : V2∞ ⇒ V2∞ be a perfect Lagrangian linear relation with ker T = 0, indef T = 0.
According to Lemma 1.2, we have dom T = V2∞, im T = V2∞.
Lemma 3.4 The subspace T is isomorphic to V2∞ as a linear space with
topology.
Proof of Lemma 1.3. We apply statement A of Subsect. 2.1. The
projections
π_T : T → V2∞ ⊕ 0,   π′_T : T → 0 ⊕ V2∞
are continuous, and therefore the inverse map π_T^{−1} is continuous. Hence the map
π′_T ∘ π_T^{−1} is continuous; this is the linear transformation whose graph is T.
Proof of Lemma 3.4. Represent V2∞ as a union of a chain of compact
subspaces
W0 = V∞^c ⊂ W1 ⊂ W2 ⊂ . . .
Consider the projection map π : V2∞ ⊕ V2∞ → V2∞ ⊕ 0, and denote by π_T its
restriction to T. Then
π_T^{−1}(V∞^c) = ∪_{j=1}^{∞} ( T ∩ (V∞^c ⊕ W_j) ).
Therefore
V∞^c = ∪_{j=1}^{∞} π_T ( T ∩ (V∞^c ⊕ W_j) ).
On the right-hand side we have a union of an increasing sequence of compact
sets, and the left-hand side is compact. Therefore, for some k,
V∞^c = π_T ( T ∩ (V∞^c ⊕ W_k) ).
The set T ∩ (V∞^c ⊕ W_k) is compact and π_T is continuous, therefore the inverse
map π_T^{−1} : V∞^c → T ∩ (V∞^c ⊕ W_k) is continuous.
Next, denote by e_l the standard basis of V∞^d. Then
T ≃ ( T ∩ (V∞^c ⊕ W_k) ) ⊕ ⊕_{l=1}^{∞} F · π_T^{−1} e_l.
Thus T is isomorphic to V2∞ ⊕ 0.
3.4. Proof of Theorem 1.5.a: existence of the operators W(T).
Lemma 3.5 Let the claim of Theorem 1.5.a hold for a linear relation T : V2µ ⇒
V2ν. Then the same statement holds for any linear relation gT h, where g ∈
Sp(V2ν), h ∈ Sp(V2µ).
Proof. Obvious.
We start the proof of the theorem. Keeping in mind Lemma 3.1, we can
assume
ker T ⊂ Vµ^c,   indef T ⊂ Vν^c.
Let w ∈ indef T. Then
a(w) W(T) = W(T) a(0) = W(T).
The operator a(w) is an operator of multiplication by a function,
a(w) f(x) = Exp( Σ_j x_j w_j^c ) f(x).
Therefore any function ψ ∈ im W(T) vanishes on the set where Exp( Σ_j x_j w_j^c ) ≠ 1.
In other words, all elements ψ ∈ im W(T) are supported by the subspace
(indef T)° ⊂ Vν^d.
Let v ∈ ker T. Then
W(T) = W(T) a(v).
The operator a(v) is multiplication by a function taking finitely many values
λ_l = exp(2πil/p). Therefore the λ_l are the eigenvalues of a(v), and W(T) vanishes on
all the subspaces ker(a(v) − λ_l) with λ_l ≠ 1. Therefore W(T) is zero on any function
supported by Vµ^d \ (ker T)°.
Consider the decompositions
ℓ2(Vµ^d) = ℓ2( (ker T)° ) ⊕ ℓ2( Vµ^d \ (ker T)° );
ℓ2(Vν^d) = ℓ2( (indef T)° ) ⊕ ℓ2( Vν^d \ (indef T)° ).
The operator W(T) has the following block form with respect to these decompositions:
W(T) = ( W̃(T)  0 ; 0  0 ),
with a non-zero block
W̃(T) : ℓ2( (ker T)° ) → ℓ2( (indef T)° ).
The linear relation T determines a linear relation T′ : dom T ⇒ im T. We
take the projection
dom T ⊕ im T → (dom T / ker T) ⊕ (im T / indef T)
and the image T̃ of T′ under this projection. Thus we get a linear relation
T̃ : dom T / ker T ⇒ im T / indef T. The spaces dom T / ker T, im T / indef T have the
form V2κ. By construction, ker T̃ = 0 and indef T̃ = 0. Therefore T̃ is the graph of
a symplectic operator in V2κ.
It is easy to see that W̃(T) satisfies the same commutation relations with the Heisenberg group as W(T̃). It remains to refer to Proposition 1.1.
3.5. A corollary of the previous subsection. For a linear embedding B : Vµ^d → Vν^d we define two operators
σ_B : ℓ2(Vν^d) → ℓ2(Vµ^d),   σ*_B : ℓ2(Vµ^d) → ℓ2(Vν^d)
in the following way:
σ_B φ(x) = φ(xB);
σ*_B ψ(y) = ψ( B^{−1}(y) ) if y ∈ im B;   σ*_B ψ(y) = 0 if y ∉ im B.
In fact, in the previous subsection we proved the following lemma:
Lemma 3.6 Any operator W(T) can be decomposed as a product of the form
W(T) = W(g1) σ*_B W(g2) σ_C W(g3),
where g1, g2, g3 are elements of symplectic groups and B, C are appropriate
embeddings.
3.6. Gaussian operators are bounded. Consider a Gaussian operator
G(H, Q) given by (1.3). Consider the projections π1 : H → Vµ^d ⊕ 0, π2 : H → 0 ⊕ Vν^d.
We represent H as a finite disjoint union of affine subspaces Z_j such that π1,
π2 are injective on each Z_j. Consider the operators
G_j f(x) = Σ_{y : (y,x) ∈ Z_j} Exp( Q(x, y) ) f(y).
Actually, for each x the sum consists of 0 or 1 summands. Clearly, ||G_j|| = 1, and
G(H, Q) = Σ_j G_j.
3.7. Products of Gaussian operators.
Lemma 3.7 Let X be a finite-dimensional space, and let Y be a sum of a finite or
countable number of copies of F. Let Q be a quadratic form on X × Y. Consider
the sum
F(y) = Σ_{x ∈ X} Exp( Q(x, y) ).    (3.1)
Then there are a subspace Z ⊂ Y of codimension ≤ dim X, a nonzero constant
c, and a quadratic form R on Z such that
F(y) = c · Exp( R(y) ) if y ∈ Z;   F(y) = 0 if y ∉ Z.
Proof. This follows from the following observation (see, e.g., [12], Sect.
9.2). Let X, U be finite-dimensional spaces over F of the same dimension.
Consider a quadratic form P(x) on X. Let
f(u) = Σ_{x ∈ X} Exp( P(x) + Σ_j u_j x_j ).
Then there are a subspace K ⊂ U and a quadratic form S on U such that
f(u) = c · Exp( S(u) ) if u ∈ K;   f(u) = 0 if u ∉ K.    (3.2)
We represent Q(x, y) as
Q(x, y) = Q(x, 0) + Σ_j x_j l_j(y) + Q(0, y),
where the l_j are linear forms on Y, and apply formula (3.2).
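A classical one-variable special case of (3.2), stated here only for illustration: take X = U = F with P(x) = ax². For a ≠ 0, completing the square gives
Σ_{x ∈ F} Exp( a x² + u x ) = Exp( −u²/(4a) ) Σ_{x ∈ F} Exp( a x² ),
where the Gauss sum Σ_x Exp(a x²) is a nonzero constant of modulus √p; here K = U and S(u) = −u²/(4a). If instead a = 0, then Σ_x Exp(ux) equals p for u = 0 and vanishes otherwise, so K is the zero subspace.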
Lemma 3.8 A product of Gaussian operators is a Gaussian operator, up to a
nonzero scalar factor.
Proof. Consider Gaussian operators
G(H, Q) : ℓ2(Vµ^d) → ℓ2(Vν^d),   G(K, R) : ℓ2(Vν^d) → ℓ2(Vκ^d).
We consider the set Z of all triples (u, v, w) ∈ Vµ^d ⊕ Vν^d ⊕ Vκ^d such that (u, v) ∈ H,
(v, w) ∈ K. The kernel of the product is given by
N(u, w) = Σ_{v : (u,v,w) ∈ Z} Exp( Q(u, v) ) Exp( R(v, w) ).
We get a sum of the form (3.1). More precisely, consider the natural projection
Z → Vµ^d ⊕ Vκ^d. Denote by X its kernel (it is finite-dimensional), and let Y be a
subspace complementary to X. We apply Lemma 3.7 and obtain a Gaussian
expression for the kernel N.
Corollary 3.9 For any g ∈ Sp(V2µ), the operators W(g) are Gaussian.
Proof. Indeed, for the generators of Sp(V2µ) the operators W(g) are Gaussian,
see Subs. 2.5, and products of Gaussian operators are Gaussian.
Lemma 3.10 For any perfect Lagrangian linear relation T, the operator W(T)
is Gaussian.
Proof. We refer to Lemma 3.6.
3.8. End of the proof of Theorem 1.6.
Lemma 3.11 Any Gaussian operator has the form W(T).
Proof. Consider a Gaussian operator G(H; Q) : ℓ2(Vµ^d) → ℓ2(Vν^d). Extend
the quadratic form Q to Vµ^d ⊕ Vν^d in an arbitrary way. Represent Q as
Q(x, y) = Q(x, 0) + Q(0, y) + Σ_{k,l} s_{kl} x_k y_l.
Let C(x) be a quadratic form on a space Vκ^d. Denote by G[[C]] the operator
in ℓ2(Vκ^d) given by
G[[C]] φ(x) = Exp( C(x) ) φ(x).
Recall (see Subs. 2.5, formula (2.4)) that such operators have the form W(g) for
certain g ∈ Sp(V2κ). Consider the Gaussian operator
G := G[[−Q(x, 0)]] G[H; Q] G[[−Q(0, y)]].
Clearly, the statements ’the operator G(H, Q) has the form W(·)’ and ’the operator
G has the form W(·)’ are equivalent. The operator G has the form
G ψ(x) = Σ_{y : (y,x) ∈ H} Exp( Σ_{k,l} s_{kl} x_k y_l ) ψ(y) =: Σ_{y : (y,x) ∈ H} Exp( x S y^t ) ψ(y).
Let us describe the set T of all (v, w) ∈ V2µ ⊕ V2ν such that
a(w) G = G a(v).
Consider (p, q) ∈ H. Then
( (p, qS); (q, −pS) ) ∈ T.    (3.3)
Next, let (ξ, η) ∈ Vµ^c ⊕ Vν^c be contained in H°. Then
( (0, ξ); (0, η) ) ∈ T.    (3.4)
It is easy to see that elements of the forms (3.3) and (3.4) generate a perfect Lagrangian relation.
3.9. Products of perfect Lagrangian relations: proof of Theorem 1.4.
Lemma 3.12 Fix a perfect Lagrangian relation T. Let (β, γ) satisfy
a(γ) W(T) = W(T) a(β).
Then (β, γ) ∈ T.
Proof. Let p, q ∈ V2∞. Then
a(p) a(q) a(−p) a(−q) = Exp( {p, q} ).
Suppose (β, γ) is not contained in T. Choose a vector (u, v) ∈ T that is not
orthogonal to (β, γ). This means that {β, u} ≠ {γ, v}. For a field F of prime
order this implies
Exp( {β, u} ) ≠ Exp( {γ, v} ).    (3.5)
For an arbitrary finite field we can multiply (u, v) by a constant factor; in this
way we can achieve (3.5). Next, since (u, v) ∈ T, we have
a(γ) a(v) a(−γ) a(−v) W(T) = W(T) a(β) a(u) a(−β) a(−u),
i.e.,
Exp( {γ, v} ) · W(T) = Exp( {β, u} ) · W(T).
Hence W(T) = 0, and we come to a contradiction.
Lemma 3.13 Let T : V2µ ⇒ V2ν, S : V2ν ⇒ V2κ be perfect Lagrangian linear
relations. Then the linear relation ST satisfies the following properties:
a) The linear relation ST is isotropic, and for any (v, w) ∈ ST we have
a(w) W(S) W(T) = W(S) W(T) a(v).    (3.6)
b) ker ST and indef ST are compact, and dom ST and im ST are codiscrete.
c) ST is contained in a certain perfect Lagrangian relation R such that
W(R) = W(S) W(T).
Proof. a) Let (v, w), (v′, w′) ∈ ST. Choose u, u′ ∈ V2ν such that (v, u),
(v′, u′) ∈ T and (u, w), (u′, w′) ∈ S. Since T, S are isotropic, we have
{v, v′} = {u, u′} = {w, w′}.
Thus ST is isotropic. Next,
W(S) W(T) a(v) = W(S) a(u) W(T) = a(w) W(S) W(T).
b) The subspace ker ST is the set of all v such that there exists u ∈ ker S with
(v, u) ∈ T. Thus we take the preimage Z of ker S under the projection
T → 0 ⊕ V2ν; then ker ST is the projection of Z to V2µ ⊕ 0. The fibers of the projection
T → 0 ⊕ V2ν are compact, therefore Z is compact, and ker ST is compact.
Let us verify the statement about im ST. The subspace im T ∩ dom S is
codiscrete in dom S. Its image H under the projection dom S → dom S / ker S
is also codiscrete. The relation S determines a symplectic isomorphism S̃ :
dom S / ker S → im S / indef S. The subspace S̃ H is codiscrete in im S / indef S.
Therefore its lift to im S (which is im ST) is codiscrete in im S and therefore in V2κ.
c) The operators W(T) and W(S) are Gaussian. Therefore W(S) W(T) is Gaussian, and hence it has the form W(R).
Lemma 3.14 Let X, Y be codiscrete coisotropic subspaces of V2∞. Consider
the symplectic space X/X♦, the image K of X ∩ Y ⊂ X in X/X♦, and the image
L of X ∩ Y♦ ⊂ X in X/X♦. Then L = K♦ in X/X♦. Moreover, L is compact,
and K is codiscrete.
Proof. We have
K = K̃/X♦, where K̃ = (X ∩ Y) + X♦;
L = L̃/X♦, where L̃ = (X ∩ Y♦) + X♦.
The space X is equipped with a skew-symmetric bilinear form whose kernel
is X♦. It is clear that K̃ and L̃ are orthogonal in X, therefore K and L are
orthogonal in X/X♦. Next,
K̃♦ = (X♦ + Y♦) ∩ X.
Let h ∈ K̃♦, and let h̃ be a representative of it, h̃ = a + b, where a ∈ X♦, b ∈ Y♦.
Then b is also a representative of h, and hence h ∈ L.
Lemma 3.15 Let T : V2µ ⇒ V2ν be a perfect Lagrangian linear relation. Let
R ⊂ T be a relation V2µ ⇒ V2ν. Assume that
(ker R)♦ = dom R and (dom R)♦ = ker R,
or
(im R)♦ = indef R and (indef R)♦ = im R.
Then T = R.
Proof. Suppose R ⊊ T. Considering the projections to V2µ ⊕ 0 and to 0 ⊕ V2ν, we get
dom R ⊊ dom T,   or   indef R ⊊ indef T.
Therefore
dom R ⊊ (ker T)♦ ⊂ (ker R)♦,   or   indef R ⊊ indef T = (im T)♦ ⊂ (im R)♦.
This contradicts the conditions of the lemma.
Proof of Theorem 1.4. Let T : V2µ ⇒ V2ν, S : V2ν ⇒ V2κ be perfect
Lagrangian linear relations. We wish to prove that ST is perfect Lagrangian.
Without loss of generality we may assume
ker T = 0,   indef S = 0.    (3.7)
Otherwise, we take the natural projection
dom T ⊕ V2ν → (dom T / ker T) ⊕ V2ν
and the image T̃ of T under this projection. We get a linear relation T̃ :
dom T / ker T ⇒ V2ν. Clearly, the statements ’ST is perfect Lagrangian’ and
’S T̃ is perfect Lagrangian’ are equivalent. In the same way we can achieve
indef S = 0.
Under the condition (3.7),
dom T = V2µ,   im S = V2κ.
Next,
im ST = S im T = S(im T ∩ dom S),
indef ST = S indef T = S(indef T ∩ dom S).
Consider the map dom S ⊕ V2κ → (dom S / ker S) ⊕ V2κ, and let Ŝ be the image of S
under this map. It is the graph of a symplectic bijection σ : dom S / ker S → V2κ.
We have
im ST = σ[ ((im T ∩ dom S) + ker S) / ker S ],
indef ST = σ[ ((indef T ∩ dom S) + ker S) / ker S ].
By Lemma 3.14, the spaces in the square brackets are orthocomplements
of one another. Therefore, the same holds for the left-hand sides. By Lemma
3.15, ST is Lagrangian.
3.10. End of the proof of Theorem 1.5. Once Theorem 1.4 is established, Lemma 3.13.a becomes the statement b) of Theorem 1.5.
To prove the statement c) of Theorem 1.5, we pass to adjoint operators on both
sides of (1.2):
W(T)* a(−w) = a(−v) W(T)*,
or
a(v) W(T)* = W(T)* a(w).
This is the defining relation for the operator W(T^□).
References
[1] Berezin, F. A. Canonical operator transformation in representation of secondary quantization. Soviet Physics Dokl. 6, 1961, 212-215.
[2] Berezin, F. A. The method of second quantization. Nauka, Moscow, 1965.
English transl.: Academic Press, New York-London, 1966.
[3] Friedrichs, K. O. Mathematical Aspects of the Quantum Theory of Fields,
Interscience Publishers, New York, 1953.
[4] Gurevich S., Hadani R. The geometric Weil representation. Sel. math., New
ser. 13 (2008) 465-481.
[5] Howe, R. Invariant theory and duality for classical groups over finite fields,
with application to singular representation theory. Preprint, Yale University,
1976.
[6] Howe, R. (1988), The Oscillator Semigroup, Proceedings of Symposia in
Pure Mathematics, American Mathematical Society, 48, 61-132
[7] Morris, S. A. Pontryagin duality and the structure of locally compact abelian
groups. Cambridge University Press, 1977.
[8] Nazarov, M. Oscillator semigroup over a non-Archimedean field. J. Funct.
Anal. 128 (1995), no. 2, 384-438.
[9] Nazarov, M., Neretin, Yu., Olshanski, G. Semi-groupes engendrés par la
représentation de Weil du groupe symplectique de dimension infinie. C. R.
Acad. Sci. Paris Sér. I Math. 309 (1989), no. 7, 443-446.
[10] Neretin, Yu. A. On a semigroup of operators in the boson Fock space. Funct.
Anal. Appl. 24 (1990), no. 2, 135-144.
[11] Neretin, Yu. A. Categories of symmetries and infinite-dimensional groups.
Oxford University Press, New York, 1996.
[12] Neretin, Yu. A. Lectures on Gaussian integral operators and classical
groups. European Mathematical Society (EMS), Zürich, 2011.
[13] Neretin Yu.A. The space L2 on semi-infinite Grassmannian over finite field.
Adv. Math. 250 (2014), 320-350.
[14] Olshanski, G. I. On semigroups related to infinite-dimensional groups. Topics in representation theory, 67-101, Adv. Soviet Math., 2, Amer. Math.
Soc., Providence, RI, 1991
[15] Olshanski, G. I. Weil representation and norms of Gaussian operators.
Functional Analysis and Its Applications 28 (1994), no. 1, 42-54.
[16] Segal, I. E. Foundations of the theory of dynamical systems of infinitely
many degrees of freedom. I. Mat.-Fys. Medd. Danske Vid. Selsk. 31 (1959),
no. 12.
[17] Shale, D. Linear symmetries of free boson fields. Trans. Amer. Math. Soc.
103, 1962, 149-167.
[18] Weil, A. Sur certains groupes d’opérateurs unitaires. Acta Math. 111, 1964,
143-211.
[19] Zelenov, E. I. A p-adic infinite-dimensional symplectic group. Izvest. Math.
43 (1994), no. 3, 421-441.
Math.Dept., University of Vienna,
Oskar-Morgenstern-Platz 1, 1090 Wien;
& Institute for Theoretical and Experimental Physics (Moscow);
& Mech.Math.Dept., Moscow State University;
& Institute for information transmission problems (Moscow);
e-mail: neretin(at) mccme.ru
URL:www.mat.univie.ac.at/∼neretin